Limitless functions. One UNS.

fn-uns is a complete Unified Namespace implementation — serverless functions that capture, process, and report on real-time factory data. Built on fnkit. Follows the UNS Framework standard.

[Hero diagram: four CNC machines (cnc-01 ACTIVE, cnc-02 IDLE, cnc-03 ACTIVE, cnc-04 ALARM) publish to an MQTT broker; uns-framework caches uns:data:* and uns:prev:* in Valkey; uns-state, uns-stoppage, uns-productivity, uns-log, and uns-input write to PostgreSQL (4 tables); uns-kpi serves utilisation (72%), availability (94%), throughput (48/hr), MTBF (142m), and MTTR (8.3m) over HTTP. Topics follow the ISA-95 hierarchy: v1.0 / enterprise / site1 / area1 / cnc-01 / status]

Structured data. Serverless functions. Git push to deploy.

Three ideas come together: an ISA-95 topic hierarchy gives every data point a canonical address, lightweight functions process the data, and MQTT connects everything in real time.

ISA-95 Topic Hierarchy

All data is organised using the ISA-95 international standard — a structured path from enterprise down to individual data points. The hierarchy is self-describing: any consumer can parse the topic path to understand where data comes from without mapping tables or documentation.
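Because the path itself carries the structure, a consumer can recover a data point's full context with a plain string split. A minimal sketch (the field names are ours, chosen to match the hierarchy below, not part of the standard):

```javascript
// Split an ISA-95 topic path into its six named levels.
// No mapping tables or external documentation needed.
function parseTopic(topic) {
  const [version, enterprise, site, area, machine, point] = topic.split('/')
  return { version, enterprise, site, area, machine, point }
}

parseTopic('v1.0/enterprise/site1/area1/cnc-01/status')
// → { version: 'v1.0', enterprise: 'enterprise', site: 'site1',
//     area: 'area1', machine: 'cnc-01', point: 'status' }
```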

Publish / Subscribe Messaging

MQTT decouples producers from consumers. Machines publish to topics, applications subscribe to what they need. The broker handles delivery. Adding a new machine means publishing to a new topic — the entire pipeline picks it up automatically.
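This works because MQTT subscription filters support a single-level `+` wildcard: one filter like `v1.0/enterprise/+/+/+/status` covers every machine's status, present and future. The matching rule is simple enough to sketch (real brokers also support the multi-level `#` wildcard, omitted here):

```javascript
// Minimal MQTT topic matcher for the single-level `+` wildcard:
// `+` matches exactly one path segment, everything else matches literally.
function matches(filter, topic) {
  const f = filter.split('/')
  const t = topic.split('/')
  if (f.length !== t.length) return false
  return f.every((seg, i) => seg === '+' || seg === t[i])
}

// A brand-new machine is picked up with no pipeline changes:
matches('v1.0/enterprise/+/+/+/status',
        'v1.0/enterprise/site1/area1/cnc-05/status') // → true
```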

Serverless Functions

Each piece of logic is a small, independent function — Go or Node.js. No monolithic platform, no flow builder. Functions read from a shared cache, write to PostgreSQL, and compose through infrastructure rather than through each other.

GitOps Deployment

Every function is version controlled. Deploy with git push. Roll back with git revert. Full audit trail, code review, and CI/CD — the same workflow your software team already uses.

v1.0 / enterprise / site1 / area1 / cnc-01 / status
└─ namespace version
   └─ company / business unit
      └─ factory / physical site
         └─ production area
            └─ individual machine
               └─ data point

The data flow: Machines publish → MQTT broker distributes → Functions cache in Valkey → Functions detect changes and write to PostgreSQL → KPI functions query and compute on demand.

The deploy flow: Write code → git commit → git push → fnkit builds container → running in production. No manual server management, no UI configuration.

Capture. Process. Report.

Data flows through three stages. Each stage is independent and composable — built from small functions that do one thing well.

STAGE 01

Capture

Machines publish status, program, and tool data to MQTT every 3 seconds. The framework subscribes to the entire namespace and caches every message in Valkey — current value and previous value, for change detection.
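The caching rule is small: on every message, the old current value becomes the previous value. A sketch against a plain Map — the real framework applies the same rule to the `uns:data:*` and `uns:prev:*` keys in Valkey:

```javascript
// On each MQTT message: demote the current value to `prev`,
// then store the new payload as `data`. This is what makes
// downstream change detection a single comparison.
function cacheMessage(cache, topic, payload) {
  const current = cache.get(`uns:data:${topic}`)
  if (current !== undefined) cache.set(`uns:prev:${topic}`, current)
  cache.set(`uns:data:${topic}`, payload)
}

const cache = new Map()
cacheMessage(cache, 'v1.0/enterprise/site1/area1/cnc-01/status', 'ACTIVE')
cacheMessage(cache, 'v1.0/enterprise/site1/area1/cnc-01/status', 'ALARM')
// cache now holds current 'ALARM' and previous 'ACTIVE'
```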

uns-sim · uns-framework · MQTT · Valkey
STAGE 02

Process

HTTP functions poll the cache, detect changes, and write structured records to PostgreSQL — state durations, stoppage classifications, production runs, and operator input.
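The poll-and-diff step can be sketched as a pure function (the row shape and field names are illustrative, not the actual schema): keep only the topics whose value changed, ready to be written to PostgreSQL.

```javascript
// Given cached { current, previous } pairs per topic, emit one
// row per changed topic. Unchanged topics produce no writes.
function changedRows(pairs, now) {
  return Object.entries(pairs)
    .filter(([, v]) => v.current !== v.previous)
    .map(([topic, v]) => ({ topic, value: v.current, was: v.previous, at: now }))
}

changedRows({
  'v1.0/enterprise/site1/area1/cnc-01/status': { current: 'ALARM', previous: 'ACTIVE' },
  'v1.0/enterprise/site1/area1/cnc-02/status': { current: 'IDLE', previous: 'IDLE' },
}, '2024-01-01T00:00:00Z')
// → one row: cnc-01 went ACTIVE → ALARM; cnc-02 is skipped
```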

uns-state · uns-stoppage · uns-productivity · uns-log · uns-input
STAGE 03

Report

KPI functions query all PostgreSQL tables and compute manufacturing metrics on demand — utilisation, availability, throughput, MTBF, MTTR, and stoppage pareto.
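The arithmetic behind these metrics is plain division over accumulated state durations. A sketch with assumed definitions — ACTIVE/IDLE/ALARM states, with ALARM counted as unplanned downtime — which may differ from uns-kpi's exact formulas:

```javascript
// Compute headline KPIs from per-state durations (minutes) and a
// failure count. Definitions here are illustrative assumptions.
function kpis(durations, failures) {
  const total = Object.values(durations).reduce((a, b) => a + b, 0)
  const uptime = durations.ACTIVE + durations.IDLE   // everything except unplanned downtime
  return {
    utilisation:  durations.ACTIVE / total,          // ACTIVE time ÷ total tracked time
    availability: uptime / total,                    // excludes unplanned downtime only
    mtbf: failures ? uptime / failures : Infinity,   // mean time between failures
    mttr: failures ? durations.ALARM / failures : 0, // mean time to repair
  }
}

kpis({ ACTIVE: 360, IDLE: 120, ALARM: 20 }, 2)
// → { utilisation: 0.72, availability: 0.96, mtbf: 240, mttr: 10 }
```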

uns-kpi · uns-cache · PostgreSQL

All functions run as containers on a shared network. External access goes through Caddy (automatic TLS) → fnkit-gateway (auth, routing, rate limiting). Internal communication happens directly between containers. Deployed via git push using fnkit's GitOps workflow.

Limitless functions. Zero coupling.

Each function is a standard Go or Node.js program — its own container, its own repo, its own deploy lifecycle. They compose through shared infrastructure, not through each other.

uns-sim (Node.js · MQTT)
Simulates 4 CNC machines publishing realistic status, program, and tool data every 3 seconds
uns-framework (Go · MQTT)
Subscribes to the entire namespace, caches every message with current/previous value tracking
uns-cache (Node.js · HTTP)
Read API for cached topics — returns JSON with change detection and metadata
uns-log (Go · HTTP)
Logs data snapshots to PostgreSQL whenever a topic value changes
uns-state (Go · HTTP)
Tracks machine state transitions and logs precise durations — the foundation for all time-based KPIs
uns-stoppage (Node.js · HTTP)
Auto-classifies why machines aren't running, with manual operator override for real reasons
uns-productivity (Go · HTTP)
Tracks production runs — parts completed, target attainment, throughput per hour
uns-input (Node.js · HTTP)
Manual operator data entry — scrap counts, quality notes, shift handover information
uns-kpi (Go · HTTP)
Pure read function — queries all tables and computes manufacturing KPIs on demand via a single API call

Readable. Testable. Reviewable.

No drag-and-drop flow builders. No visual wiring. No opaque configuration UIs. Just code that any developer can read, review, and extend.

// Read current + previous for every topic in one pipelined round trip
// (client shown is ioredis; any pipelining Redis client works the same)
const Redis = require('ioredis')
const rdb = new Redis()

const pipe = rdb.pipeline()

for (const topic of topics) {
  pipe.get(`uns:data:${topic}`)   // current value
  pipe.get(`uns:prev:${topic}`)   // previous value
}

// exec() yields one [err, value] pair per queued GET
const results = await pipe.exec()

topics.forEach((topic, i) => {
  const current = results[i * 2][1]
  const previous = results[i * 2 + 1][1]
  const changed = current !== previous   // did the value change?
})

Current value. Previous value. Change detection. One call.

The cache stores two versions of every topic — current and previous. A single pipelined read returns both, plus metadata. The changed flag tells downstream functions whether to act.

This is the pattern that powers the entire pipeline. State tracking, stoppage classification, production logging — they all read from this cache and only write to PostgreSQL when something actually changes.

Standard Node.js. Standard Redis commands. No proprietary SDK, no vendor abstraction layer.

uns-cache/index.js — the complete cache read function

Real-time metrics. One API call.

Utilisation, availability, throughput, reliability, stoppage analysis — all computed on demand from the data flowing through the UNS pipeline. No pre-aggregation, no stale reports.

  • 72% Utilisation: ACTIVE time ÷ total tracked time
  • 94% Availability: excluding unplanned downtime only
  • 48/hr Throughput: parts per hour with target attainment
  • 142m MTBF: mean time between failures
  • 8.3m MTTR: mean time to repair
  • Stoppage Pareto: top reasons ranked by total duration
GET /uns-kpi?hours=8&machine=cnc-01 — filter by machine, area, or time range

Code beats configuration.

Traditional manufacturing software relies on monolithic platforms, drag-and-drop flow builders, or heavyweight MES systems. fn-uns takes a fundamentally different approach.

Traditional Platforms

  • Vendor-locked platforms with proprietary formats
  • Drag-and-drop flow UIs that become unreadable spaghetti
  • No version control — flows live in runtime databases
  • No code review, no pull requests, no diffs
  • Impossible to unit test or run CI/CD
  • Six-figure licence fees or per-device subscriptions
  • Monolithic deployments — one bad change breaks everything
  • Manual rollback — risky and time-consuming

fn-uns + fnkit

  • Open source, self-hosted, your infrastructure
  • Code-first — readable, auditable, maintainable
  • Full git history — every change tracked with author and date
  • Pull requests, diffs, and code review for every change
  • Unit tests, integration tests, CI/CD pipelines
  • Infrastructure cost only — no licence fees
  • Independent microservices — deploy each function separately
  • git revert + push — instant, safe rollback

Running in 5 minutes.

Clone the repo, start the infrastructure, and verify data is flowing. The full UNS pipeline — from simulated machines to KPI APIs — runs locally in containers.

# Clone and set up the shared network
$ git clone https://github.com/functionkit/fnkit.git && cd fnkit
$ docker network create fnkit-network
$ fnkit cache start
 
# Start the UNS capture pipeline
$ cd uns-framework && docker compose up -d && cd ..
$ cd uns-sim && docker compose up -d && cd ..
$ cd uns-cache && docker compose up -d && cd ..
 
# Verify real-time data is flowing
$ curl http://localhost:8080/uns-cache | jq
# → JSON with live data from 4 simulated CNC machines