Digital Twins & GitOps — Building Custom Twin Functions with fnkit
An in-depth look at digital twins in manufacturing — what they really are, why they're notoriously hard to scale, and how a Unified Namespace with fnkit gives you the platform to scaffold, deploy, and manage whatever twin functions your factory needs. Includes a CNC quality twin case study built from scratch with fnkit.
What Is a Digital Twin?
The term "digital twin" is one of the most overloaded phrases in manufacturing technology. It gets applied to everything from a 3D CAD model to a real-time simulation to a simple dashboard. Let's be precise about what it actually means.
A digital twin is a live, software representation of a physical system that evolves in real-time as the physical system changes. It's not a static model. It's not a one-time snapshot. It's a continuously updated mirror of reality — fed by real sensor data, enriched with historical context, and capable of answering questions about the physical system's current state, past behaviour, and future trajectory.
The critical distinction is the live data connection. A CAD model is a digital representation, but it doesn't know whether the physical machine is running, alarming, or producing scrap. A digital twin does. It knows the machine's current state, how long it's been in that state, what program it's running, what tool is loaded, and whether the spindle vibration is trending outside normal parameters.
What a Digital Twin Is Not
| Often Called "Digital Twin" | What It Actually Is | What's Missing |
|---|---|---|
| 3D CAD model | Static geometry | No live data, no state, no behaviour |
| SCADA screen | Real-time visualisation | No history, no derived intelligence, no prediction |
| Simulation model | Offline what-if analysis | Not connected to real-time data from the physical asset |
| Dashboard | Reporting layer | No state model, no causal reasoning, no bidirectional flow |
| IoT data lake | Raw data storage | No structure, no context, no real-time state representation |
A true digital twin combines all of these capabilities — real-time data, historical context, structured state models, derived intelligence, and the ability to influence the physical system — into a single, coherent software entity.
The Three Maturity Levels
The industry has converged on three maturity levels for digital representations of physical systems. Understanding where you are — and where you need to be — determines the architecture you need.
Digital Shadow
One-way data flow from physical to digital. The software representation reflects reality but cannot influence it. Think: a dashboard that shows machine status, or a historian that logs sensor values. Data flows from the machine to the twin — never the other way.
Physical → Digital
Digital Twin
Bidirectional data flow. The digital representation can send commands, setpoints, or recommendations back to the physical system. Think: an AI that detects anomalous vibration and automatically adjusts feed rate, or a scheduling system that pushes optimised programs to machines.
Physical ↔ Digital
Digital Thread
Full lifecycle traceability from design through manufacturing to service. The twin connects CAD/CAM data, process parameters, quality measurements, and field performance into a single traceable thread. Think: tracing a quality defect back to a specific tool, program revision, and material batch.
Design → Manufacturing → Service
Most manufacturing organisations today are at Level 1 — Digital Shadow. They have dashboards, historians, and SCADA systems that reflect machine state. The fn-uns reference pipeline gives you a solid Level 1 starting point, with a clear path to Level 2 through MQTT's bidirectional pub/sub architecture. But the real power is that fnkit lets you scaffold new twin functions — quality prediction, vibration analysis, thermal compensation, energy monitoring — whatever your specific factory needs to move up the maturity curve.
Why Digital Twins Are Hard
If digital twins are so valuable, why do so few manufacturers have them? The answer isn't technology — it's complexity. Building a digital twin for one machine is straightforward. Scaling that to a factory floor with 100 machines across 5 sites is where most approaches break down.
The Seven Scaling Problems
| Problem | What Happens | Why It's Hard |
|---|---|---|
| Protocol fragmentation | Every machine speaks a different language — OPC UA, MTConnect, Modbus, PROFINET, EtherNet/IP, proprietary serial protocols | A single factory floor might have 6+ protocols. Each requires a different driver, different parsing logic, different error handling. Most digital twin platforms punt on this — they assume clean data arrives magically. |
| State explosion | A single CNC machine has 50+ parameters (spindle speed, feed rate, tool ID, coolant pressure, axis positions, alarm codes). 100 machines = 5,000+ live data points, all changing asynchronously. | The twin must track current state, previous state, state duration, and state transitions for every parameter on every machine. Monolithic approaches collapse under this combinatorial explosion. |
| Context assembly | Raw data is meaningless without context. "Spindle speed = 0" means nothing unless you know the machine was supposed to be running, what program was loaded, and whether an alarm is active. | The twin must correlate data across multiple sources — machine status, production schedule, quality specs, maintenance history — in real-time. This is where most implementations stall. |
| Configuration drift | The twin's logic diverges from reality. Someone changes a threshold, adds a machine, or modifies a classification rule — and the twin no longer accurately represents the physical system. | Without version control, there's no way to track what changed, when, or why. The twin silently becomes wrong, and decisions based on it become unreliable. |
| Vendor lock-in | Most digital twin platforms (Azure Digital Twins, AWS IoT TwinMaker, Siemens MindSphere, PTC ThingWorx) lock you into their cloud, their data model, their pricing. | You're paying per-device, per-message, per-query. At scale, costs become unpredictable. Migrating away means rebuilding everything. Your twin's intelligence belongs to the vendor, not to you. |
| Multi-site replication | Deploying identical twin logic across 5 factories means manually replicating configuration, adapting to local differences, and keeping everything in sync. | Each site drifts independently. Bug fixes at one site don't propagate. New capabilities require manual deployment at every location. This becomes a full-time job. |
| Auditability | Regulated industries (aerospace, automotive, pharma) require traceability — who changed the twin's logic, when, and why? | Most twin platforms have no native audit trail for logic changes. You can see what the twin does, but not why it does it or who configured it that way. |
The Cost Curve
Here's what happens to complexity and cost as you scale a digital twin — and why the architecture you choose at the start determines whether you succeed or stall:
At one machine, every approach works. The divergence happens around 10–50 machines, when the monolithic platform's complexity grows exponentially while the composable approach scales linearly. By the time you reach multi-site, the monolithic approach requires dedicated teams just to manage the twin infrastructure — while the composable approach is still just git push.
Manufacturing Engineer
You've seen this pattern before — a vendor demo looks amazing with one machine, but when you try to roll it out across the shop floor, the complexity explodes. Every machine has slightly different signals, different alarm codes, different program naming conventions. The "one-size-fits-all" platform can't handle the variation without extensive customisation.
Developer
Monolithic twin platforms are essentially proprietary application servers with visual configuration tools. When you hit the edge of what the visual tools can do — and you always do — you're writing custom code inside someone else's runtime, with their debugging tools, their deployment model, and their limitations. You'd rather write clean code in a standard language with standard tools.
IT Engineer
Cloud-based twin platforms mean your factory data leaves your network, your costs scale with message volume (unpredictably), and your twin stops working if the internet goes down. For a system that's supposed to represent your physical factory in real-time, that's an unacceptable dependency. You need something that runs on-premise, on your infrastructure, under your control.
The UNS as the Foundation for Digital Twins
Here's the insight that changes everything: a Unified Namespace is already a digital twin's data backbone. The UNS provides exactly the three things a digital twin needs — a structured representation of physical assets, a real-time data connection, and a historical record.
How UNS Concepts Map to Digital Twin Concepts
| Digital Twin Concept | UNS Equivalent | How It Works |
|---|---|---|
| Asset hierarchy | ISA-95 topic structure | site/area/line/cell/equipment/sensor — the twin's address space is the topic tree |
| Live state | MQTT payloads + Valkey cache | Every machine's current state is a JSON payload on an MQTT topic, cached in Valkey with sub-second latency |
| Previous state | Valkey uns:prev:* keys | The framework layer automatically tracks the previous value of every topic — enabling change detection without polling |
| Historical memory | PostgreSQL tables | Functions you deploy log value changes, state transitions, and periodic snapshots — building the twin's long-term memory |
| Derived intelligence | Custom functions | Any function you scaffold with fnkit can read from the cache and database to compute KPIs, classify events, or detect patterns |
| Predictive capability | Custom analysis functions | Scaffold a function that reads the twin's full context — historical trends, current state, operator input — and runs prediction logic |
| Bidirectional control | MQTT publish | Any function can publish back to MQTT — closing the loop from digital insight to physical action |
| Human input | Input functions | Operators contribute context the machine can't provide — scrap reasons, quality notes, manual overrides |
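The change-detection pattern in the table above can be sketched as a pure function. A minimal sketch, assuming an in-memory map standing in for the Valkey cache (in the real pipeline the previous values live under uns:prev:* keys); the topic names are illustrative:

```go
package main

import "fmt"

// detectChanges compares the current topic snapshot against the previous
// one and returns the topics whose payloads changed. In the real pipeline
// the "prev" values come from Valkey's uns:prev:* keys; a plain map stands
// in here so the sketch runs without infrastructure.
func detectChanges(curr, prev map[string]string) []string {
	var changed []string
	for topic, payload := range curr {
		if prev[topic] != payload {
			changed = append(changed, topic)
		}
	}
	return changed
}

func main() {
	prev := map[string]string{
		"site/line1/cnc-01/status": `{"state":"RUNNING"}`,
		"site/line1/cnc-01/tool":   `{"id":"T03"}`,
	}
	curr := map[string]string{
		"site/line1/cnc-01/status": `{"state":"ALARM"}`, // state changed
		"site/line1/cnc-01/tool":   `{"id":"T03"}`,      // unchanged
	}
	fmt.Println(detectChanges(curr, prev))
	// → [site/line1/cnc-01/status]
}
```

Because the framework maintains both values in the cache, any downstream function gets change detection for free — no polling loop, no local state.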
The Architecture
The left side is your physical factory. The right side is your digital twin — built from functions you scaffold with fnkit. The MQTT broker is the bridge between them. The dashed green boxes are your custom twin functions — the ones specific to your factory's needs. Every function adds a new cognitive capability to the twin, and because they're independent containers, you can deploy them incrementally.
Building Twin Functions with fnkit
Here's the key idea: your digital twin isn't a product you buy — it's a set of functions you build. The fn-uns reference pipeline gives you a solid starting point — state tracking, historical logging, KPI calculations, stoppage classification. But your factory has problems that are unique to your factory. A CNC shop needs vibration analysis and tool wear prediction. A food plant needs temperature compliance and batch traceability. An assembly line needs cycle time anomaly detection and takt adherence.
fnkit gives you the platform to scaffold whatever twin functions your specific use case needs — in seconds.
The fnkit Scaffold Workflow
```bash
# Scaffold a new twin function — one command

# Go HTTP function (for request/response twin queries)
fnkit go uns-quality-predict

# Node.js HTTP function (for analysis endpoints)
fnkit node uns-vibration-alert

# Go MQTT function (for real-time stream processing)
fnkit go-mqtt uns-thermal-comp

# Node.js MQTT function (for event-driven twin logic)
fnkit node-mqtt uns-energy-monitor
```
Each command creates a complete, deployable function in seconds:
```text
uns-quality-predict/
├── function.go          # Your handler — write your twin logic here
├── cmd/main.go          # Entry point (auto-generated)
├── Dockerfile           # Container build (auto-generated)
├── docker-compose.yml   # Service definition — already on fnkit-network
├── .env.example         # CACHE_URL, DATABASE_URL, MQTT_BROKER
├── go.mod               # Dependencies
├── .gitignore
└── README.md

# The scaffolded function already has:
# ✓ Connection to the shared Valkey cache (same as all other functions)
# ✓ Connection to PostgreSQL (same database, same tables)
# ✓ Docker networking (fnkit-network — can talk to every other function)
# ✓ Dockerfile for containerised deployment
# ✓ GitOps-ready structure (git push deploy main)
```
Twin Functions You'd Build
The fn-uns reference pipeline handles the fundamentals — state tracking, historical logging, KPIs, stoppage classification. But your twin needs functions specific to your factory. Here are examples of what you'd scaffold:
| Twin Function | Scaffold Command | What It Does |
|---|---|---|
| Quality prediction | fnkit go uns-quality-predict | Correlates spindle load, tool wear, and cycle time drift with historical scrap events. Predicts quality risk before the part is finished. Publishes alerts to MQTT. |
| Vibration analysis | fnkit go uns-vibration | Reads accelerometer data from the UNS, computes FFT spectra, detects bearing wear signatures and spindle imbalance. Trends vibration energy over time. |
| Thermal compensation | fnkit go-mqtt uns-thermal-comp | Models thermal drift after machine startup or long idle periods. Correlates ambient temperature, spindle temperature, and dimensional accuracy. Recommends warm-up cycles. |
| Energy twin | fnkit node uns-energy | Correlates power draw with production state. Identifies energy waste during idle periods. Computes energy-per-part metrics. Benchmarks across machines. |
| Tool wear prediction | fnkit go uns-tool-wear | Tracks cutting force trends per tool, models wear rate acceleration, predicts remaining useful life. Recommends tool changes before quality degrades. |
| Batch traceability | fnkit node uns-batch-trace | Links material batch IDs to machine conditions, operator, and quality outcomes. Enables full traceability for regulated industries (aerospace, pharma, automotive). |
| Maintenance predictor | fnkit go uns-maint-predict | Reads alarm history, stoppage patterns, and sensor trends from PostgreSQL. Identifies machines trending toward failure. Publishes maintenance recommendations. |
Your Twin Is a Mix of Reference + Custom
In practice, your digital twin repository looks like this — a mix of fn-uns reference functions and custom functions you've scaffolded for your specific needs:
```text
# Your digital twin — reference pipeline + custom functions
fn-uns/
├── uns-framework/        # Reference: perception layer (Go MQTT)
├── uns-state/            # Reference: state tracking (Go)
├── uns-historian/        # Reference: value change logging (Go)
├── uns-stoppage/         # Reference: downtime classification (Node.js)
├── uns-kpi/              # Reference: KPI calculations (Go)
│
├── uns-quality-predict/  # YOUR TWIN: quality prediction (Go)
├── uns-vibration/        # YOUR TWIN: vibration analysis (Go)
├── uns-thermal-comp/     # YOUR TWIN: thermal compensation (Go MQTT)
├── uns-energy/           # YOUR TWIN: energy monitoring (Node.js)
├── uns-tool-wear/        # YOUR TWIN: tool wear prediction (Go)
│
└── uns-dashboard/        # Visualisation (Grafana)

# Every function — reference or custom — connects to the same
# Valkey cache, the same PostgreSQL database, the same MQTT broker.
# Every function is independently deployable, testable, replaceable.
```
What Makes This Different from a Platform
Monolithic Twin Platform
One vendor, one runtime, one data model. You get the capabilities the vendor decided to build. Need vibration analysis? Buy the add-on module. Need thermal compensation? That's a different product. Need something the vendor doesn't offer? Write custom code inside their proprietary runtime, with their debugging tools and their limitations.
fnkit + UNS
Open platform, standard tools, your infrastructure. Need vibration analysis? fnkit go uns-vibration — you have a deployable function in seconds. Write the logic in Go or Node.js with standard libraries. It reads from the same cache and database as everything else. Deploy with git push. You own the code, the data, and the roadmap.
Case Study: A CNC Quality Twin
To make this concrete, let's walk through scaffolding and building a quality-focused digital twin function for a CNC machining cell — one of the most complex and valuable twin applications in discrete manufacturing.
The Quality Problem
A CNC machining cell produces precision aerospace components. Quality is measured by dimensional accuracy (±0.01mm tolerances), surface finish (Ra values), and geometric conformance (GD&T). When a part fails inspection, the cost is enormous — not just the scrapped material, but the machine time, the operator time, and the schedule disruption.
The fundamental question is: can we predict quality problems before they produce scrap?
To answer that, you need to correlate data from multiple sources in real-time:
| Data Source | What It Tells You | UNS Topic |
|---|---|---|
| Spindle load | Cutting force — trending up means tool wear or material hardness variation | .../cnc-01/status → spindle_load |
| Tool life remaining | How much cutting the tool has done — correlates with surface finish degradation | .../cnc-01/tool → life_remaining |
| Program & operation | Which part, which operation — different ops have different quality sensitivities | .../cnc-01/program → name, operation |
| Machine state history | Was there an alarm before this part? A tool change? A long idle period (thermal drift)? | PostgreSQL → state history table |
| Stoppage context | Was the last stoppage a TOOL_BREAK? A FAULT? These correlate with quality issues on the next part. | PostgreSQL → stoppage table |
| Operator input | Scrap count, scrap reason, quality measurement values, free-text notes | PostgreSQL → input table |
| Production history | Parts made on this tool — quality degrades as tool wears; cycle time drift indicates process change | PostgreSQL → productivity table |
Scaffolding the Quality Twin Function
All of this data is already flowing through the UNS and being captured by the reference pipeline. The quality twin doesn't require new infrastructure — it requires a new function that reads from the existing data and adds intelligence on top.
Here's how you build it:
```bash
# Step 1: Scaffold the function
fnkit go uns-quality-predict
# → Creates uns-quality-predict/ with:
#   function.go, cmd/main.go, Dockerfile,
#   docker-compose.yml, .env.example, go.mod

# Step 2: Configure environment
cd uns-quality-predict
cp .env.example .env
# .env already has CACHE_URL and DATABASE_URL
# pointing to the shared fnkit infrastructure
```
Now write the twin logic in function.go:
```go
// uns-quality-predict/function.go
// Quality prediction twin function — reads from shared cache + DB.
// (cache, db, and mqtt are package-level clients initialised in cmd/main.go.)

func QualityPredict(w http.ResponseWriter, r *http.Request) {
	machine := r.URL.Query().Get("machine")

	// 1. Read current state from Valkey cache
	//    (same cache that uns-framework writes to)
	spindleLoad := cache.Get("uns:v1.0:...:" + machine + ":status:spindle_load")
	toolLife := cache.Get("uns:v1.0:...:" + machine + ":tool:life_remaining")
	cycleTime := cache.Get("uns:v1.0:...:" + machine + ":program:cycle_time")

	// 2. Query historical context from PostgreSQL
	//    (same database that historian/state/stoppage write to)
	recentScrap := db.Query(`
		SELECT count, reason, timestamp FROM uns_input
		WHERE machine = $1 AND type = 'scrap'
		  AND timestamp > NOW() - INTERVAL '2 hours'`, machine)

	wearTrend := db.Query(`
		SELECT value, timestamp FROM uns_historian
		WHERE topic LIKE $1
		  AND timestamp > NOW() - INTERVAL '4 hours'
		ORDER BY timestamp`, "%"+machine+"%tool%life_remaining")

	lastStoppage := db.Query(`
		SELECT reason, duration_seconds FROM uns_stoppage
		WHERE machine = $1
		ORDER BY start_time DESC LIMIT 1`, machine)

	// 3. Compute quality risk score
	risk := computeQualityRisk(spindleLoad, toolLife, cycleTime,
		recentScrap, wearTrend, lastStoppage)

	// 4. If risk exceeds threshold, publish alert to MQTT
	if risk.Score > 0.7 {
		mqtt.Publish("v1.0/.../"+machine+"/twin/quality-alert", risk)
	}

	// 5. Return prediction as JSON
	json.NewEncoder(w).Encode(risk)
}
```
```bash
# Step 3: Test locally
docker compose up -d
curl http://localhost:8080/uns-quality-predict?machine=cnc-01 | jq

# Step 4: Deploy alongside existing pipeline
git add uns-quality-predict/
git commit -m "add quality prediction twin function

Scaffolded with fnkit go. Reads spindle load, tool life, and
cycle time from Valkey cache. Queries scrap history, wear trends,
and stoppage context from PostgreSQL. Computes quality risk score.
Publishes alert to MQTT when risk > 0.7.

Motivation: 2 scrap events last week correlated with tool wear
on T03 during PART-A-001 finishing ops. This function detects
the pattern automatically and alerts before scrap occurs."
git push deploy main
# → Function is live, reading from the same infrastructure
# → Existing functions are completely untouched
```
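The computeQualityRisk helper is where your domain knowledge lives, and it is deliberately left to you. A minimal sketch, simplified to three normalised input signals with placeholder thresholds and weights (a heuristic for illustration, not a validated model):

```go
package main

import "fmt"

// QualityRisk mirrors the shape the handler publishes to MQTT.
type QualityRisk struct {
	Score   float64  `json:"score"`
	Factors []string `json:"factors"`
}

// computeQualityRisk blends three signals into a 0–1 risk score.
// The thresholds and weights below are placeholder assumptions; a real
// deployment would fit them against historical scrap data.
func computeQualityRisk(spindleLoadPct, toolLifePct, cycleTimeDriftPct float64) QualityRisk {
	var risk QualityRisk
	if spindleLoadPct > 85 { // sustained high load: wear or hard material
		risk.Score += 0.4
		risk.Factors = append(risk.Factors, "high spindle load")
	}
	if toolLifePct < 15 { // tool near end of life
		risk.Score += 0.4
		risk.Factors = append(risk.Factors, "tool life low")
	}
	if cycleTimeDriftPct > 5 { // cycle time drifting from baseline
		risk.Score += 0.2
		risk.Factors = append(risk.Factors, "cycle time drift")
	}
	return risk
}

func main() {
	r := computeQualityRisk(92, 10, 6)
	fmt.Printf("score=%.1f factors=%v\n", r.Score, r.Factors)
	// → score=1.0 factors=[high spindle load tool life low cycle time drift]
}
```

A score above 0.7 would trigger the MQTT alert in the handler; because the scoring lives in one small function, refining the model later is a reviewable one-file change.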
Why This Is Hard Without a UNS
Without a Unified Namespace, building this quality twin requires:
- ✗ Custom integration per data source — one connector for the CNC controller, another for the tool presetter, another for the CMM, another for the MES
- ✗ Manual data correlation — matching a quality measurement to the specific tool, program, and machine conditions at the time of machining
- ✗ No standard data model — every system has its own schema, its own timestamps, its own naming conventions
- ✗ Batch analysis — quality data is typically analysed after the fact, not in real-time. By the time you find the pattern, you've already produced scrap.
- ✗ Siloed knowledge — the operator knows the tool feels "different," the quality engineer sees the measurement trend, the maintenance engineer sees the alarm history — but nobody has the complete picture
With the UNS + fnkit approach, all of this data is already flowing through the same namespace. The quality twin is just a new function on top of existing data — scaffolded in seconds, deployed in minutes, not a new infrastructure project.
Why GitOps Is Essential for Digital Twins
A digital twin is only as trustworthy as the logic that drives it. If you can't answer "who changed the twin's behaviour, when, and why?" — then you can't trust the twin's output. This is where GitOps transforms digital twins from fragile experiments into production infrastructure.
Twin Logic Is Code
Every aspect of your digital twin — reference functions and custom twin functions alike — is code in a git repository:
```text
# Your digital twin is a git repository
fn-uns/
├── uns-framework/           # Perception layer (Go MQTT)
│   ├── function.go          # MQTT subscription + cache logic
│   ├── Dockerfile
│   └── README.md
│
├── uns-state/               # State model (Go)
│   ├── function.go          # State transition detection
│   └── ...
│
├── uns-quality-predict/     # YOUR TWIN: quality prediction (Go)
│   ├── function.go          # Quality risk scoring logic
│   ├── Dockerfile
│   └── ...
│
├── uns-vibration/           # YOUR TWIN: vibration analysis (Go)
│   ├── function.go          # FFT analysis + bearing wear detection
│   └── ...
│
├── uns-thermal-comp/        # YOUR TWIN: thermal compensation (Go MQTT)
│   ├── function.go          # Thermal drift model + warm-up recs
│   └── ...
│
├── uns-energy/              # YOUR TWIN: energy monitoring (Node.js)
│   ├── index.js             # Power draw correlation + waste detection
│   └── ...
│
└── uns-dashboard/           # Visualisation (Grafana)
    └── grafana/             # Dashboard JSON — version-controlled

# Every file is tracked. Every change has an author.
# Every deployment is a git commit.
# Every new twin function is a `fnkit` scaffold + git push.
```
What GitOps Gives Your Digital Twin
| Twin Requirement | Without GitOps | With fnkit + GitOps |
|---|---|---|
| Change tracking | Someone changed the quality prediction thresholds. Nobody knows when or why. The twin's alert frequency changed — was it intentional? | git log uns-quality-predict/ shows every change with author, date, and commit message explaining the reasoning. |
| Safe evolution | You want to add vibration analysis to the twin. If it breaks, the entire twin goes down. You test in production because there's no other option. | fnkit go uns-vibration → develop → test locally with docker compose up → pull request → review → merge. If it breaks: git revert. Existing functions are untouched. |
| Twin consistency | Site A runs version 2.3 of the twin logic. Site B runs 2.1 with a local patch. Site C has an undocumented modification. Nobody knows which site is "correct." | Every site runs the same git commit. git log --oneline at each site shows exactly what version is deployed. Differences are intentional and documented. |
| Regulatory compliance | An auditor asks: "How do you ensure the twin's quality calculations haven't been tampered with?" You have no answer. | Git provides a cryptographically signed, immutable audit trail. Every change to every calculation is traceable to a specific person, date, and approval. |
| Failure isolation | A bug in the quality prediction crashes the entire twin platform. State tracking, KPIs, dashboards — all down. | uns-quality-predict crashes. State tracking, KPIs, dashboards, vibration analysis — all continue running. The twin loses one capability, not everything. |
| Incremental capability | Adding a new twin capability means upgrading the entire platform, risking regression in existing capabilities. | fnkit go uns-new-capability → deploy. It reads from the same Valkey cache and PostgreSQL database. Existing functions are untouched. |
The Twin Drift Problem
One of the most insidious problems with digital twins is drift — the gradual divergence between what the twin thinks is happening and what's actually happening. Drift has two forms:
Data Drift
The physical system changes but the twin's data model doesn't. A new machine is added, a sensor is recalibrated, an alarm code is redefined. The twin's perception becomes stale. In fn-uns, the UNS Framework standard and wildcard MQTT subscriptions handle this automatically — new machines appear in the namespace without configuration changes.
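MQTT's multi-level wildcard is what makes that auto-discovery work. A minimal matcher, handling only the '#' wildcard as a sketch (a full implementation would also cover '+'), shows why a subscription to v1.0/# covers machines that did not exist when the subscription was made:

```go
package main

import (
	"fmt"
	"strings"
)

// matches reports whether an MQTT topic falls under a subscription filter.
// Only the '#' multi-level wildcard is handled here — enough to illustrate
// why a framework subscribed to "v1.0/#" sees newly commissioned machines
// with zero configuration changes.
func matches(filter, topic string) bool {
	if filter == "#" {
		return true
	}
	if strings.HasSuffix(filter, "/#") {
		// "#" matches the parent level itself and everything beneath it
		return topic == strings.TrimSuffix(filter, "/#") ||
			strings.HasPrefix(topic, strings.TrimSuffix(filter, "#"))
	}
	return filter == topic
}

func main() {
	// A machine commissioned today publishes under the existing namespace:
	fmt.Println(matches("v1.0/#", "v1.0/acme/plant1/line2/cell1/cnc-07/status"))
	// → true — no subscription or config update needed
}
```

The broker does this matching for you; the point is that the twin's perception layer never needs a machine list, so data drift from "forgot to register the new machine" simply cannot happen.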
Logic Drift
The twin's processing logic is modified at one site but not others. A threshold is tweaked, a classification rule is changed, a prediction model is adjusted. Without version control, these changes are invisible. With GitOps, every logic change is a git commit — visible, reviewable, and deployable consistently across all sites.
The Deployment Workflow: Adding a New Twin Capability
```bash
# Example: Adding thermal compensation to your digital twin

# 1. Scaffold the new function
fnkit go-mqtt uns-thermal-comp
cd uns-thermal-comp

# 2. Write the twin logic
#    function.go: subscribe to temperature topics via MQTT
#    Read ambient temp, spindle temp, coolant temp from UNS
#    Query uns_historian for dimensional accuracy after idle periods
#    Model thermal drift curve for each machine
#    Publish warm-up recommendations to MQTT

# 3. Test locally
cp .env.example .env
docker compose up -d
# Verify: function subscribes, processes, publishes correctly

# 4. Commit with full context
git add uns-thermal-comp/
git commit -m "add thermal compensation twin function

Scaffolded with fnkit go-mqtt. Subscribes to temperature topics.
Models thermal drift after startup and long idle periods.
Queries historian for dimensional accuracy correlation.
Publishes warm-up recommendations to v1.0/.../twin/thermal-comp.

Motivation: first-part scrap rate is 3x higher on Monday mornings
and after lunch breaks. Thermal drift from cold spindle is the
likely cause. This function detects the condition and recommends
warm-up cycles before production starts.

Tested: local docker compose with simulated temperature data."

# 5. Push, review, merge
git push origin feature/thermal-compensation
# → Create pull request → Team reviews → Merge to main

# 6. Deploy to all sites
git push deploy main
# → Every site gets the new capability, simultaneously
# → Existing functions are completely untouched

# 7. If something goes wrong at any site
git revert HEAD
git push deploy main
# → Previous version running everywhere in seconds
```
Scaling Digital Twins with fn-uns
The real test of any digital twin architecture is what happens when you scale. Here's the progression — and why composable functions with GitOps handle each stage gracefully.
1 Machine → 10 Machines
Monolithic Platform
Configure each machine in the platform's UI. Define data models, create dashboards, set up alerts — per machine. 10× the configuration work. If the platform charges per-device, 10× the cost.
fnkit + UNS
Zero changes. The framework subscribes to v1.0/# — it automatically discovers new machines as they publish to the namespace. Your custom twin functions parse the machine identity from the topic path. Add a machine to MQTT and the twin sees it immediately.
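That topic-parsing step can be sketched like this. The segment positions assume a seven-level v1.0/site/area/line/cell/equipment/sensor layout, matching the hierarchy described earlier; your namespace depth may differ:

```go
package main

import (
	"fmt"
	"strings"
)

// machineFromTopic extracts the equipment segment from an ISA-95 style
// UNS topic of the form version/site/area/line/cell/equipment/sensor.
// The index below is an assumption tied to that layout — adjust it if
// your namespace uses a different depth.
func machineFromTopic(topic string) (string, bool) {
	parts := strings.Split(topic, "/")
	if len(parts) < 6 {
		return "", false // not a full equipment-level topic
	}
	return parts[5], true
}

func main() {
	m, ok := machineFromTopic("v1.0/acme/plant1/line1/cell2/cnc-01/status")
	fmt.Println(m, ok)
	// → cnc-01 true
}
```

Because identity comes from the topic path rather than a registry, the eleventh machine is handled by exactly the same code as the first ten.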
10 Machines → 100 Machines
Monolithic Platform
Performance problems emerge. The platform's single runtime struggles with message volume. Dashboard load times increase. The visual configuration tool becomes unwieldy with 100 assets. You need platform-specific expertise to optimise. Vendor suggests upgrading to the enterprise tier.
fnkit + UNS
Same functions — reference and custom. Valkey handles the cache volume (it's designed for millions of keys). PostgreSQL handles the write volume (with proper indexing). If any single function needs more resources, scale that container independently. The architecture was designed for this — stateless functions reading from shared infrastructure.
1 Site → 5 Sites
Monolithic Platform
Deploy the platform at each site. Replicate configuration manually. Each site drifts independently. Bug fixes require manual deployment at every location. Cross-site analytics require a separate aggregation layer. Licensing costs multiply. This is where most digital twin initiatives stall.
fnkit + UNS
git clone at each site. Site-specific config in .env files (broker address, database credentials). Twin logic is identical everywhere — guaranteed by git. Updates: git pull && docker compose up -d. New twin function at one site? Push to git and every site gets it.
The Cost Comparison
| Cost Factor | Cloud Twin Platform (100 machines) | fnkit + UNS (100 machines) |
|---|---|---|
| Platform license | $50K–$500K/year (varies by vendor and tier) | $0 — open source |
| Per-device fees | $10–$100/device/month = $12K–$120K/year | $0 — no per-device pricing |
| Cloud compute | $500–$5,000/month for message processing and storage | $0 — runs on your existing server or a $50/month VM |
| Data egress | $0.05–$0.12/GB — unpredictable at scale | $0 — data stays on-premise |
| Infrastructure | Included in cloud pricing (but you don't control it) | One server: MQTT + Valkey + PostgreSQL + your functions. A modern NUC or small server handles this easily. |
| Adding new twin capabilities | Buy add-on modules, upgrade tier, or write custom code in vendor's proprietary runtime | fnkit go uns-new-thing — scaffold, develop, deploy. Free. |
| Vendor lock-in risk | High — proprietary data models, APIs, and tooling | Zero — standard MQTT, standard SQL, standard containers |
Manufacturing Engineer
At one site with a few machines, any approach works. But when the VP says "roll this out globally," you need something that doesn't require you to manually configure each site. fnkit gives you identical twin logic everywhere — the same quality predictions, the same vibration analysis, the same thermal compensation — deployed via git, not via manual configuration.
Developer
Need a new twin capability? fnkit go uns-new-thing and you have a deployable function in seconds. Write Go or Node.js with standard libraries. Test with Docker Compose. Deploy with git push. The same workflow you use for every other system — not a vendor's proprietary toolchain.
IT Engineer
Cloud-based twin platforms mean your factory data leaves your network, your costs are unpredictable, and your twin depends on internet connectivity. fnkit functions run entirely on-premise. You control the infrastructure, the data, and the costs. Each site is self-contained — if the WAN goes down, the local twin keeps running.
The Bottom Line
Digital twins don't have to be expensive, vendor-locked, or impossibly complex. The industry has been sold a vision of monolithic platforms that promise everything and deliver pilot projects that never scale. The reality is simpler.
A digital twin needs three things:
- ✓ A structured data backbone — the UNS Framework provides this. ISA-95 topic hierarchy, MQTT pub/sub, real-time data flow. Your twin's nervous system.
- ✓ A platform to build twin functions — fnkit provides this. Scaffold any function in seconds — quality prediction, vibration analysis, thermal compensation, energy monitoring — whatever your factory needs. Each function connects to the same shared infrastructure.
- ✓ A management layer — GitOps provides this. Version control, code review, safe rollbacks, multi-site consistency, regulatory compliance. Your twin's governance.
Together, these three layers give you a digital twin that is:
| Property | What It Means |
|---|---|
| Composable | Add capabilities incrementally — fnkit go uns-new-capability and you have a deployable function. Start with state tracking, add quality prediction, add vibration analysis. Each function is independent. |
| Version-controlled | Every change to twin logic is a git commit with an author, timestamp, and reason. Full audit trail. |
| Reproducible | git clone + docker compose up = identical twin at any site, every time. |
| Failure-isolated | One function crashes, the rest keep running. The twin degrades gracefully, not catastrophically. |
| Vendor-neutral | Open source. Standard protocols. Standard tools. Your data, your infrastructure, your twin. |
| Cost-predictable | No per-device fees, no per-message charges, no cloud egress costs. Runs on hardware you already own. |
Next Steps
| Resource | Description |
|---|---|
| fnkit Basics | Complete guide to scaffolding, developing, and deploying functions with fnkit — the tool that powers your twin |
| Extending the System | How to add new functions, KPIs, and data sources to your digital twin |
| Pipeline Functions Guide | Walkthrough of the fn-uns reference functions — the starting point for your twin |
| Flow vs GitOps Guide | When to graduate from Node-RED experiments to production GitOps — the twin management layer |
| Getting Started | Deploy the reference pipeline in under 5 minutes — the foundation for your digital twin |
| UNS Framework Standard | The open standard that defines the twin's data structure — ISA-95 topic hierarchy |
Guide Version: 2.0 · Applies To: fnkit, fn-uns reference pipeline, digital twin architecture, GitOps deployment
Last updated March 2026.