You want to know machine utilisation. Here's every layer that makes that answer possible — starting from the consumer who asks the question, working backwards through every component, all the way to the machine publishing data on the shop floor. Nine steps. Three solutions. One standard.
Everything starts with someone — or something — that needs an answer. A dashboard showing live utilisation. An ERP system pulling production counts. A maintenance lead checking MTBF on their phone. An AI model requesting historical state data.
In the UNS, a consumer is any entity that receives information. It doesn't need to know where the data comes from, which machine produced it, or how it was processed. It just subscribes to a topic or calls an API.
The UNS Framework standard describes consumers in YAML — so your entire data landscape is documented, version-controlled, and machine-readable.
When a consumer asks "what was the utilisation of CNC-01 last shift?", the request hits uns-kpi — a pure read function written in Go. It doesn't store anything. It queries PostgreSQL, computes the answer, and returns JSON.
One API call returns utilisation, availability, throughput, MTBF, MTTR, and a stoppage Pareto — all computed from real data, not pre-aggregated summaries. Filter by machine, area, or any time range.
Because it's a stateless function, it scales horizontally. Because it's code (not a dashboard configuration), it's testable, reviewable, and version-controlled.
```
GET /uns-kpi?hours=8&machine=cnc-01
```
The KPI function needs data to compute from. That data lives in PostgreSQL — four tables, each written by a specific function and read by uns-kpi.
This is where the UNS becomes more than a real-time data stream. Every state change, every stoppage, every production run is recorded with precise timestamps and durations. History is what turns live data into actionable insight.
The UNS Framework standard describes your database configuration in YAML — so it's documented alongside everything else.
| Table | Written by | Stores |
|---|---|---|
| uns_state | uns-state | Machine state durations with timestamps |
| uns_stoppage | uns-stoppage | Classified stoppage reasons and durations |
| uns_productivity | uns-productivity | Production runs, parts count, throughput |
| uns_input | uns-input | Manual operator entries — scrap, quality, notes |
All tables are auto-created on first run. No schema migrations, no DBA required. Each function owns its table — clean separation of concerns.
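As a rough illustration of what that YAML might look like, here is a hypothetical database section. Every key name below is an assumption made for illustration, not the published standard:

```yaml
# Illustrative sketch only. Key names are assumptions, not the real schema.
database:
  type: postgres
  host: db.internal
  tables:
    - uns_state
    - uns_stoppage
    - uns_productivity
    - uns_input
```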
The database doesn't fill itself. Five independent serverless functions — written in Go and Node.js — read from the cache, detect changes, and write structured records to PostgreSQL.
- uns-state tracks machine state transitions and logs precise durations.
- uns-stoppage auto-classifies why machines aren't running.
- uns-productivity tracks production runs and parts counts.
- uns-log snapshots every data change.
- uns-input captures manual operator entries.
Each function is its own container, its own repo, its own deploy lifecycle. They compose through shared infrastructure — not through each other. The UNS Framework describes them in YAML:
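A hypothetical sketch (field names are illustrative assumptions, not the published schema):

```yaml
# Illustrative sketch only. Field names are assumptions, not the standard.
functions:
  - name: uns-state
    runtime: go
    reads: cache
    writes: uns_state
  - name: uns-stoppage
    runtime: go
    reads: cache
    writes: uns_stoppage
```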
Functions don't read directly from MQTT. They read from Valkey — an open-source, Redis-compatible cache that holds the current state of the entire namespace.
For every MQTT topic, the cache stores two values: current (uns:data:*) and previous (uns:prev:*). This is the key pattern: by comparing current to previous, functions know whether something actually changed — and only write to the database when it did.

The cache also maintains a topic registry (uns:topics) — a set of every topic that has ever been seen. This is how functions discover what machines exist without hardcoding anything.

The cache is populated by uns-framework, which subscribes to the MQTT broker using a wildcard (v1.0/#) and caches every message it receives.
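The change-detection pattern is easy to sketch with a plain map standing in for Valkey. The uns:data:* and uns:prev:* key prefixes come from the description above; the topic path and values are made up for illustration.

```go
package main

import "fmt"

// Cache stands in for Valkey; a real function would use a Redis-compatible
// client. The key prefixes match the pattern described in the text.
type Cache map[string]string

// Changed reports whether a topic's current value differs from its previous
// one: the check each function runs before writing to PostgreSQL.
func Changed(c Cache, topic string) bool {
	return c["uns:data:"+topic] != c["uns:prev:"+topic]
}

func main() {
	// Hypothetical topic path and values, for illustration only.
	topic := "v1.0/plant-a/machining/line-1/cell-3/cnc-01/status"
	c := Cache{
		"uns:data:" + topic: "STOPPED",
		"uns:prev:" + topic: "RUNNING",
	}
	fmt.Println(Changed(c, topic)) // state flipped, so a write is due
}
```

Only transitions reach the database, so a machine that publishes the same status every two seconds produces no write traffic at all between state changes.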
The MQTT broker is the central nervous system of the UNS. It's a lightweight message bus that decouples producers from consumers. Machines publish to topics. Applications subscribe to what they need. The broker handles delivery.
Adding a new machine means publishing to a new topic — the entire pipeline picks it up automatically. Adding a new consumer means subscribing to existing topics — no changes to producers required.
Every data point in the UNS has a canonical address following the ISA-95 international standard for manufacturing systems integration. The path is self-describing — any consumer can parse it to understand where data comes from without mapping tables.
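To make "self-describing" concrete, here is a sketch of parsing such a topic path in Go. The segment order (version, site, area, line, cell, equipment, sensor) is an assumption based on the hierarchy described in this article; the real standard may arrange the levels differently.

```go
package main

import (
	"fmt"
	"strings"
)

// Address holds the levels of an ISA-95-style topic path.
// The segment order here is an illustrative assumption.
type Address struct {
	Version, Site, Area, Line, Cell, Equipment, Sensor string
}

// Parse splits a topic path into its hierarchy levels, with no mapping table.
func Parse(topic string) (Address, error) {
	p := strings.Split(topic, "/")
	if len(p) != 7 {
		return Address{}, fmt.Errorf("expected 7 segments, got %d", len(p))
	}
	return Address{p[0], p[1], p[2], p[3], p[4], p[5], p[6]}, nil
}

func main() {
	// Hypothetical topic, for illustration only.
	a, err := Parse("v1.0/plant-a/machining/line-1/cell-3/cnc-01/status")
	if err != nil {
		panic(err)
	}
	fmt.Println(a.Equipment, a.Sensor) // cnc-01 status
}
```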
The UNS Framework standard describes your entire namespace in YAML: the top-level configuration, sites, areas, lines, cells, equipment, and sensors. Templates let you define a sensor type once and reuse it across hundreds of machines.
This is the paradigm shift: instead of configuring your namespace in a proprietary platform, you describe it in version-controlled YAML files that are both human-readable and machine-readable.
The ISA-95 hierarchy tells you where data comes from. But sensors define what data each piece of equipment actually publishes — the information model. This is where you spec the machine data model for the UNS.
A sensor template defines the data points a type of equipment produces: status (string), program (string), tool (float), axis positions (float), temperature (float). You define the template once, then attach it to any equipment using YAML anchors. Every data point in the template becomes an MQTT topic.
This is the contract between producers and consumers. The sensor template says "a CNC machine publishes these five data points with these types." Any consumer can read the YAML and know exactly what data to expect — without inspecting live traffic or reading documentation.
Different equipment types get different templates. A CNC machine publishes axis positions, status, program, and tool data. An environment sensor publishes temperature. The framework supports any information model — user-defined, Sparkplug, OPC UA, or MTConnect.
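A sketch of how such a template might look in YAML, using an anchor to define the sensor set once and an alias to reuse it per machine. The key names are illustrative assumptions, not the published standard:

```yaml
# Illustrative sketch only. Key names are assumptions, not the standard.
templates:
  cnc: &cnc_sensors
    status: { type: string }
    program: { type: string }
    tool: { type: float }
    axis_x: { type: float }
    temperature: { type: float }

equipment:
  - name: cnc-01
    sensors: *cnc_sensors
  - name: cnc-02
    sensors: *cnc_sensors
```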
At the very bottom of the stack: the machines themselves. PLCs, sensors, gateways, and simulators that publish data to MQTT topics every few seconds.
Each machine publishes three topics — /status (machine state), /program (current job, parts count, progress), and /tool (active tool, tool life, offset). The topic path follows the ISA-95 hierarchy defined in your YAML.
A producer is any entity that sends information into the UNS. It doesn't need to know who's listening. It just publishes to its topic, and the broker handles the rest.
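A producer's job can be sketched in a few lines of Go. This only builds the topic string and a JSON payload; a real producer would hand both to an MQTT client library (omitted to keep the sketch self-contained), and the payload field is an assumption, not a documented message format.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// statusMessage builds the /status topic and a JSON payload for a machine.
// The payload shape is illustrative; real machines may publish richer data.
func statusMessage(machineTopic, state string) (topic string, payload []byte) {
	topic = machineTopic + "/status"
	payload, _ = json.Marshal(map[string]string{"state": state})
	return topic, payload
}

func main() {
	// Hypothetical machine path following the ISA-95 hierarchy.
	t, p := statusMessage("v1.0/plant-a/machining/line-1/cell-3/cnc-01", "RUNNING")
	fmt.Println(t, string(p))
}
```

The producer never learns who subscribes; it only knows its own address in the namespace.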
These nine steps are powered by three open-source projects that work together — or independently. Use the standard to describe what you have. Use fn-uns as a reference implementation. Use fnkit to deploy and run everything.
An open YAML standard for describing your Unified Namespace. Sites, areas, equipment, sensors, functions, producers, consumers, MQTT brokers, and databases — all defined in version-controlled files that are both human-readable and machine-readable. Complements Sparkplug, OPC UA, and MTConnect.
A complete reference implementation of the UNS — serverless functions that capture, process, and report on real-time factory data. Nine independent components from simulator to KPI API. Go and Node.js. Open source. Self-hosted.
The serverless platform that deploys and runs fn-uns functions. Git push to deploy. Automatic TLS. API gateway with auth and rate limiting. Shared cache and database. GitOps workflow — the same tools your software team already uses.
Read the open standard, explore the reference implementation, or dive into the documentation. Everything is open source, self-hosted, and ready to deploy.