fn-uns is an open-source Unified Namespace. Serverless functions that turn fragmented factory data into real-time KPIs. Self-hosted. Version controlled. Deployed with git push.
Every factory generates vast amounts of data — machine states, production counts, quality metrics, maintenance logs. But it's fragmented across disconnected systems that don't talk to each other.
Every new connection adds complexity. Every system has its own data model. Getting a simple answer — "what's the utilisation of cnc-01 this shift?" — requires pulling data from multiple systems and hoping the timestamps align.
Any system can publish data. Any system can subscribe. No point-to-point wiring. No vendor lock-in. Adding a new consumer means subscribing to existing topics — not building a new integration.
fn-uns is built on two open foundations — an open standard for structuring manufacturing data, and an open platform for deploying serverless functions.
An open standard that defines how to structure manufacturing data in MQTT. It specifies the ISA-95 topic hierarchy, versioned namespaces, payload conventions, and best practices for topic naming and QoS selection. The framework is vendor-neutral — any system that follows the standard can participate in the namespace.
A lightweight Functions-as-a-Service platform that runs anywhere Containers runs. Write functions in any language — Go, Node.js, Python — scaffold with one command, deploy via git push. fnkit handles the gateway, routing, TLS, caching, and GitOps workflow. No cloud vendor required.
Serverless functions that implement a complete UNS data pipeline on fnkit. From machine simulation to KPI computation — capture, process, and report on manufacturing data. Each function follows the UNS Framework standard and deploys through fnkit's GitOps workflow.
Three ideas come together: an ISA-95 topic hierarchy gives every data point a canonical address, lightweight functions process the data, and MQTT connects everything in real time.
All data is organised using the ISA-95 international standard — a structured path from enterprise down to individual data points. The hierarchy is self-describing: any consumer can parse the topic path to understand where data comes from without mapping tables or documentation.
MQTT decouples producers from consumers. Machines publish to topics, applications subscribe to what they need. The broker handles delivery. Adding a new machine means publishing to a new topic — the entire pipeline picks it up automatically.
Each piece of logic is a small, independent function — Go or Node.js. No monolithic platform, no flow builder. Functions read from a shared cache, write to PostgreSQL, and compose through infrastructure rather than through each other.
Every function is version controlled. Deploy with `git push`. Roll back with `git revert`. Full audit trail, code review, and CI/CD — the same workflow your software team already uses.
The data flow: Machines publish → MQTT broker distributes → Functions cache in Valkey → Functions detect changes and write to PostgreSQL → KPI functions query and compute on demand.
The deploy flow: Write code → git commit → git push → fnkit builds container → running in production. No manual server management, no UI configuration.
Data flows through three stages. Each stage is independent and composable — built from small functions that do one thing well.
Machines publish status, program, and tool data to MQTT every 3 seconds. The framework subscribes to the entire namespace and caches every message in Valkey — current value and previous value, for change detection.
HTTP functions poll the cache, detect changes, and write structured records to PostgreSQL — state durations, stoppage classifications, production runs, and operator input.
KPI functions query all PostgreSQL tables and compute manufacturing metrics on demand — utilisation, availability, throughput, MTBF, MTTR, and stoppage Pareto.
All functions run as Containers on a shared network. External access goes through Caddy (automatic TLS) → fnkit-gateway (auth, routing, rate limiting). Internal communication happens directly between containers. Deployed via git push using fnkit's GitOps workflow.
Each function is a standard Go or Node.js program — its own container, its own repo, its own deploy lifecycle. They compose through shared infrastructure, not through each other.
The UNS isn't just a technical architecture — it's a business tool. Different teams get different value from the same underlying data.
Real-time dashboards showing machine utilisation, production progress, and stoppage reasons across the entire shop floor. No more walking the floor to check machine status.
Automatic MTBF and MTTR calculations per machine. Alarm history with durations. Stoppage Pareto charts showing exactly where to focus improvement efforts.
Actual throughput data — parts per hour, target attainment — compared against planned schedules. Identify bottleneck machines and underperforming programs.
Scrap tracking linked to specific machines, programs, and operators. Quality check pass rates over time. Correlation between machine conditions and quality outcomes.
Data-driven kaizen. Every state change, every stoppage, every production run is recorded with timestamps and durations. The data tells you exactly where time is being lost.
A clean, maintainable architecture. Each function is independent, version-controlled, and deployable via git push. No monolithic platforms. No vendor lock-in. Standard tools.
No drag-and-drop flow builders. No visual wiring. No opaque configuration UIs. Just code that any developer can read, review, and extend.
```javascript
// Read current + previous for every topic
const pipe = rdb.pipeline()
for (const topic of topics) {
  pipe.get(`uns:data:${topic}`)
  pipe.get(`uns:prev:${topic}`)
}
const results = await pipe.exec()

// Did the value change?
const changed = current !== previous
```
The cache stores two versions of every topic — current and previous. A single pipelined read returns both, plus metadata. The `changed` flag tells downstream functions whether to act.
This is the pattern that powers the entire pipeline. State tracking, stoppage classification, production logging — they all read from this cache and only write to PostgreSQL when something actually changes.
Standard Node.js. Standard Redis commands. No proprietary SDK, no vendor abstraction layer.
Utilisation, availability, throughput, reliability, stoppage analysis — all computed on demand from the data flowing through the UNS pipeline. No pre-aggregation, no stale reports.
Traditional manufacturing software relies on monolithic platforms, drag-and-drop flow builders, or heavyweight MES systems. fn-uns takes a fundamentally different approach.
Clone the repo, start the infrastructure, and verify data is flowing. The full UNS pipeline — from simulated machines to KPI APIs — runs locally as a Container.