
Digital Twins & GitOps — Building Custom Twin Functions with fnkit

An in-depth look at digital twins in manufacturing — what they really are, why they're notoriously hard to scale, and how a Unified Namespace with fnkit gives you the platform to scaffold, deploy, and manage whatever twin functions your factory needs. Includes a CNC quality twin case study built from scratch with fnkit.

What Is a Digital Twin?

The term "digital twin" is one of the most overloaded phrases in manufacturing technology. It gets applied to everything from a 3D CAD model to a real-time simulation to a simple dashboard. Let's be precise about what it actually means.

A digital twin is a live, software representation of a physical system that evolves in real-time as the physical system changes. It's not a static model. It's not a one-time snapshot. It's a continuously updated mirror of reality — fed by real sensor data, enriched with historical context, and capable of answering questions about the physical system's current state, past behaviour, and future trajectory.

The concept originated at NASA in the early 2000s, where engineers built software replicas of spacecraft to diagnose problems from Earth. Michael Grieves formalised the term in 2002 at the University of Michigan, defining it as a three-part model: the physical entity, the virtual entity, and the data connection between them. Today, the concept has expanded far beyond aerospace into manufacturing, energy, healthcare, and infrastructure — driven by reduced downtime, improved quality, and optimised throughput.
Source: Grieves, M. (2014) "Digital Twin: Manufacturing Excellence through Virtual Factory Replication"

The critical distinction is the live data connection. A CAD model is a digital representation, but it doesn't know whether the physical machine is running, alarming, or producing scrap. A digital twin does. It knows the machine's current state, how long it's been in that state, what program it's running, what tool is loaded, and whether the spindle vibration is trending outside normal parameters.

What a Digital Twin Is Not

Often Called "Digital Twin" | What It Actually Is | What's Missing
3D CAD model | Static geometry | No live data, no state, no behaviour
SCADA screen | Real-time visualisation | No history, no derived intelligence, no prediction
Simulation model | Offline what-if analysis | Not connected to real-time data from the physical asset
Dashboard | Reporting layer | No state model, no causal reasoning, no bidirectional flow
IoT data lake | Raw data storage | No structure, no context, no real-time state representation

A true digital twin combines all of these capabilities — real-time data, historical context, structured state models, derived intelligence, and the ability to influence the physical system — into a single, coherent software entity.

The Three Maturity Levels

The industry has converged on three maturity levels for digital representations of physical systems. Understanding where you are — and where you need to be — determines the architecture you need.

Level 1

Digital Shadow

One-way data flow from physical to digital. The software representation reflects reality but cannot influence it. Think: a dashboard that shows machine status, or a historian that logs sensor values. Data flows from the machine to the twin — never the other way.

Physical → Digital

Level 2

Digital Twin

Bidirectional data flow. The digital representation can send commands, setpoints, or recommendations back to the physical system. Think: an AI that detects anomalous vibration and automatically adjusts feed rate, or a scheduling system that pushes optimised programs to machines.

Physical ↔ Digital

Level 3

Digital Thread

Full lifecycle traceability from design through manufacturing to service. The twin connects CAD/CAM data, process parameters, quality measurements, and field performance into a single traceable thread. Think: tracing a quality defect back to a specific tool, program revision, and material batch.

Design → Manufacturing → Service

A 2019 Gartner survey found that 75% of organisations implementing IoT already use or plan to use digital twins. Yet the vast majority remain at Level 1 — digital shadows with one-way data flow. The gap between aspiration and reality is enormous, and it's almost entirely an architecture problem, not a technology problem. Most organisations can collect the data — they just can't structure, correlate, and act on it in real-time.
Source: Gartner (2019) "Gartner Survey Reveals Digital Twins Are Entering Mainstream Use"

Most manufacturing organisations today are at Level 1 — Digital Shadow. They have dashboards, historians, and SCADA systems that reflect machine state. The fn-uns reference pipeline gives you a solid Level 1 starting point, with a clear path to Level 2 through MQTT's bidirectional pub/sub architecture. But the real power is that fnkit lets you scaffold new twin functions — quality prediction, vibration analysis, thermal compensation, energy monitoring — whatever your specific factory needs to move up the maturity curve.

Key insight: You don't need to build a Level 3 digital thread on day one. Start with a digital shadow — real-time state, historical context, derived KPIs — and evolve incrementally. Each new function you scaffold with fnkit adds a new capability to your twin.

Why Digital Twins Are Hard

If digital twins are so valuable, why do so few manufacturers have them? The answer isn't technology — it's complexity. Building a digital twin for one machine is straightforward. Scaling that to a factory floor with 100 machines across 5 sites is where every approach breaks down.

The Seven Scaling Problems

Problem | What Happens | Why It's Hard
Protocol fragmentation | Every machine speaks a different language — OPC UA, MTConnect, Modbus, PROFINET, EtherNet/IP, proprietary serial protocols | A single factory floor might have 6+ protocols. Each requires a different driver, different parsing logic, different error handling. Most digital twin platforms punt on this — they assume clean data arrives magically.
State explosion | A single CNC machine has 50+ parameters (spindle speed, feed rate, tool ID, coolant pressure, axis positions, alarm codes). 100 machines = 5,000+ live data points, all changing asynchronously. | The twin must track current state, previous state, state duration, and state transitions for every parameter on every machine. Monolithic approaches collapse under this combinatorial explosion.
Context assembly | Raw data is meaningless without context. "Spindle speed = 0" means nothing unless you know whether the machine was supposed to be running, what program was loaded, and whether an alarm is active. | The twin must correlate data across multiple sources — machine status, production schedule, quality specs, maintenance history — in real-time. This is where most implementations stall.
Configuration drift | The twin's logic diverges from reality. Someone changes a threshold, adds a machine, or modifies a classification rule — and the twin no longer accurately represents the physical system. | Without version control, there's no way to track what changed, when, or why. The twin silently becomes wrong, and decisions based on it become unreliable.
Vendor lock-in | Most digital twin platforms (Azure Digital Twins, AWS IoT TwinMaker, Siemens MindSphere, PTC ThingWorx) lock you into their cloud, their data model, their pricing. | You're paying per-device, per-message, per-query. At scale, costs become unpredictable. Migrating away means rebuilding everything. Your twin's intelligence belongs to the vendor, not to you.
Multi-site replication | Deploying identical twin logic across 5 factories means manually replicating configuration, adapting to local differences, and keeping everything in sync. | Each site drifts independently. Bug fixes at one site don't propagate. New capabilities require manual deployment at every location. This becomes a full-time job.
Auditability | Regulated industries (aerospace, automotive, pharma) require traceability — who changed the twin's logic, when, and why? | Most twin platforms have no native audit trail for logic changes. You can see what the twin does, but not why it does it or who configured it that way.
The "pilot trap" is the most common failure mode for digital twin initiatives. They work for one machine or one line but fail to scale to the factory or enterprise level. The reasons are consistently the same: integration complexity across heterogeneous equipment, lack of standardised data models, and inability to maintain consistency across sites. These are architecture problems, not technology problems — and they're exactly what a composable, UNS-based approach is designed to solve.

The Cost Curve

Here's what happens to complexity and cost as you scale a digital twin — and why the architecture you choose at the start determines whether you succeed or stall:

[Chart: digital twin complexity/cost vs scale (1 machine → 10 machines → 100 machines → 5 sites). The two curves start the same; around 10 machines the monolithic-platform curve climbs steeply while the fnkit composable curve stays near-flat.]

At one machine, every approach works. The divergence happens around 10–50 machines, when the monolithic platform's complexity grows exponentially while the composable approach scales linearly. By the time you reach multi-site, the monolithic approach requires dedicated teams just to manage the twin infrastructure — while the composable approach is still just git push.

🔧

Manufacturing Engineer

You've seen this pattern before — a vendor demo looks amazing with one machine, but when you try to roll it out across the shop floor, the complexity explodes. Every machine has slightly different signals, different alarm codes, different program naming conventions. The "one-size-fits-all" platform can't handle the variation without extensive customisation.

💻

Developer

Monolithic twin platforms are essentially proprietary application servers with visual configuration tools. When you hit the edge of what the visual tools can do — and you always do — you're writing custom code inside someone else's runtime, with their debugging tools, their deployment model, and their limitations. You'd rather write clean code in a standard language with standard tools.

🖥️

IT Engineer

Cloud-based twin platforms mean your factory data leaves your network, your costs scale with message volume (unpredictably), and your twin stops working if the internet goes down. For a system that's supposed to represent your physical factory in real-time, that's an unacceptable dependency. You need something that runs on-premise, on your infrastructure, under your control.

The UNS as the Foundation for Digital Twins

Here's the insight that changes everything: a Unified Namespace is already a digital twin's data backbone. The UNS provides exactly the three things a digital twin needs — a structured representation of physical assets, a real-time data connection, and a historical record.

How UNS Concepts Map to Digital Twin Concepts

Digital Twin Concept | UNS Equivalent | How It Works
Asset hierarchy | ISA-95 topic structure | site/area/line/cell/equipment/sensor — the twin's address space is the topic tree
Live state | MQTT payloads + Valkey cache | Every machine's current state is a JSON payload on an MQTT topic, cached in Valkey with sub-second latency
Previous state | Valkey uns:prev:* keys | The framework layer automatically tracks the previous value of every topic — enabling change detection without polling
Historical memory | PostgreSQL tables | Functions you deploy log value changes, state transitions, and periodic snapshots — building the twin's long-term memory
Derived intelligence | Custom functions | Any function you scaffold with fnkit can read from the cache and database to compute KPIs, classify events, or detect patterns
Predictive capability | Custom analysis functions | Scaffold a function that reads the twin's full context — historical trends, current state, operator input — and runs prediction logic
Bidirectional control | MQTT publish | Any function can publish back to MQTT — closing the loop from digital insight to physical action
Human input | Input functions | Operators contribute context the machine can't provide — scrap reasons, quality notes, manual overrides
The UNS doesn't just support a digital twin — it is the digital twin's nervous system. The topic hierarchy is the twin's structure. The MQTT messages are the twin's senses. The functions you build are the twin's brain. The database is the twin's memory.

The Architecture

[Diagram: Physical factory ↔ digital twin. Left, the physical world: CNC-01 (ACTIVE, 8500 rpm), CNC-02 (IDLE, tool change), CNC-03 (SETUP, PART-B), CNC-04 (ALARM, E-0342), plus sensors, PLCs, stack lights, and operators. Centre, the MQTT broker (v1.0/enterprise/site1/#), also bridging MES/ERP and operator input (scrap, quality notes, overrides). Right, the digital twin built from fnkit functions: perception (framework → Valkey cache), state tracking, history logging, memory (PostgreSQL + Valkey), intelligence (AI, prediction, analysis), human input — alongside your custom quality, vibration, thermal, and energy functions.]

The left side is your physical factory. The right side is your digital twin — built from functions you scaffold with fnkit. The MQTT broker is the bridge between them. The dashed green boxes are your custom twin functions — the ones specific to your factory's needs. Every function adds a new cognitive capability to the twin, and because they're independent containers, you can deploy them incrementally.

Building Twin Functions with fnkit

Here's the key idea: your digital twin isn't a product you buy — it's a set of functions you build. The fn-uns reference pipeline gives you a solid starting point — state tracking, historical logging, KPI calculations, stoppage classification. But your factory has problems that are unique to your factory. A CNC shop needs vibration analysis and tool wear prediction. A food plant needs temperature compliance and batch traceability. An assembly line needs cycle time anomaly detection and takt adherence.

fnkit gives you the platform to scaffold whatever twin functions your specific use case needs — in seconds.

The fnkit Scaffold Workflow

# Scaffold a new twin function — one command

# Go HTTP function (for request/response twin queries)
fnkit go uns-quality-predict

# Node.js HTTP function (for analysis endpoints)
fnkit node uns-vibration-alert

# Go MQTT function (for real-time stream processing)
fnkit go-mqtt uns-thermal-comp

# Node.js MQTT function (for event-driven twin logic)
fnkit node-mqtt uns-energy-monitor

Each command creates a complete, deployable function in seconds:

uns-quality-predict/
├── function.go          # Your handler — write your twin logic here
├── cmd/main.go          # Entry point (auto-generated)
├── Dockerfile           # Container build (auto-generated)
├── docker-compose.yml   # Service definition — already on fnkit-network
├── .env.example         # CACHE_URL, DATABASE_URL, MQTT_BROKER
├── go.mod               # Dependencies
├── .gitignore
└── README.md

# The scaffolded function already has:
# ✓ Connection to the shared Valkey cache (same as all other functions)
# ✓ Connection to PostgreSQL (same database, same tables)
# ✓ Docker networking (fnkit-network — can talk to every other function)
# ✓ Dockerfile for containerised deployment
# ✓ GitOps-ready structure (git push deploy main)
Zero infrastructure work. The scaffolded function connects to the same Valkey cache, the same PostgreSQL database, the same MQTT broker as every other function in your twin. You write the logic — fnkit handles the plumbing.

Twin Functions You'd Build

The fn-uns reference pipeline handles the fundamentals — state tracking, historical logging, KPIs, stoppage classification. But your twin needs functions specific to your factory. Here are examples of what you'd scaffold:

Twin Function | Scaffold Command | What It Does
Quality prediction | fnkit go uns-quality-predict | Correlates spindle load, tool wear, and cycle time drift with historical scrap events. Predicts quality risk before the part is finished. Publishes alerts to MQTT.
Vibration analysis | fnkit go uns-vibration | Reads accelerometer data from the UNS, computes FFT spectra, detects bearing wear signatures and spindle imbalance. Trends vibration energy over time.
Thermal compensation | fnkit go-mqtt uns-thermal-comp | Models thermal drift after machine startup or long idle periods. Correlates ambient temperature, spindle temperature, and dimensional accuracy. Recommends warm-up cycles.
Energy twin | fnkit node uns-energy | Correlates power draw with production state. Identifies energy waste during idle periods. Computes energy-per-part metrics. Benchmarks across machines.
Tool wear prediction | fnkit go uns-tool-wear | Tracks cutting force trends per tool, models wear rate acceleration, predicts remaining useful life. Recommends tool changes before quality degrades.
Batch traceability | fnkit node uns-batch-trace | Links material batch IDs to machine conditions, operator, and quality outcomes. Enables full traceability for regulated industries (aerospace, pharma, automotive).
Maintenance predictor | fnkit go uns-maint-predict | Reads alarm history, stoppage patterns, and sensor trends from PostgreSQL. Identifies machines trending toward failure. Publishes maintenance recommendations.

Your Twin Is a Mix of Reference + Custom

In practice, your digital twin repository looks like this — a mix of fn-uns reference functions and custom functions you've scaffolded for your specific needs:

# Your digital twin — reference pipeline + custom functions

fn-uns/
├── uns-framework/          # Reference: perception layer (Go MQTT)
├── uns-state/              # Reference: state tracking (Go)
├── uns-historian/          # Reference: value change logging (Go)
├── uns-stoppage/           # Reference: downtime classification (Node.js)
├── uns-kpi/                # Reference: KPI calculations (Go)
│
├── uns-quality-predict/    # YOUR TWIN: quality prediction (Go)
├── uns-vibration/          # YOUR TWIN: vibration analysis (Go)
├── uns-thermal-comp/       # YOUR TWIN: thermal compensation (Go MQTT)
├── uns-energy/             # YOUR TWIN: energy monitoring (Node.js)
├── uns-tool-wear/          # YOUR TWIN: tool wear prediction (Go)
│
└── uns-dashboard/          # Visualisation (Grafana)

# Every function — reference or custom — connects to the same
# Valkey cache, the same PostgreSQL database, the same MQTT broker.
# Every function is independently deployable, testable, replaceable.

What Makes This Different from a Platform

Monolithic Twin Platform

One vendor, one runtime, one data model. You get the capabilities the vendor decided to build. Need vibration analysis? Buy the add-on module. Need thermal compensation? That's a different product. Need something the vendor doesn't offer? Write custom code inside their proprietary runtime, with their debugging tools and their limitations.

fnkit + UNS

Open platform, standard tools, your infrastructure. Need vibration analysis? fnkit go uns-vibration — you have a deployable function in seconds. Write the logic in Go or Node.js with standard libraries. It reads from the same cache and database as everything else. Deploy with git push. You own the code, the data, and the roadmap.

The composable architecture advantage is straightforward. When each capability is an independent, deployable unit, you can add new functionality without modifying or risking existing capabilities. New twin functions don't require platform upgrades, vendor negotiations, or regression testing of the entire system. You scaffold, develop, test, and deploy — and the existing functions never know the difference. This is the fundamental advantage of composable systems over monolithic platforms.

Case Study: A CNC Quality Twin

To make this concrete, let's walk through scaffolding and building a quality-focused digital twin function for a CNC machining cell — one of the most complex and valuable twin applications in discrete manufacturing.

Quality is the most compelling use case for CNC digital twins. The principle is well-established in manufacturing research: correlating real-time process parameters (spindle load, vibration, tool wear) with quality outcomes enables early detection of scrap-producing conditions. The key enabler is multi-source data integration — combining machine state, tool condition, and historical quality data in real-time — which is exactly what a UNS provides.

The Quality Problem

A CNC machining cell produces precision aerospace components. Quality is measured by dimensional accuracy (±0.01mm tolerances), surface finish (Ra values), and geometric conformance (GD&T). When a part fails inspection, the cost is enormous — not just the scrapped material, but the machine time, the operator time, and the schedule disruption.

The fundamental question is: can we predict quality problems before they produce scrap?

To answer that, you need to correlate data from multiple sources in real-time:

Data Source | What It Tells You | UNS Topic
Spindle load | Cutting force — trending up means tool wear or material hardness variation | .../cnc-01/status → spindle_load
Tool life remaining | How much cutting the tool has done — correlates with surface finish degradation | .../cnc-01/tool → life_remaining
Program & operation | Which part, which operation — different ops have different quality sensitivities | .../cnc-01/program → name, operation
Machine state history | Was there an alarm before this part? A tool change? A long idle period (thermal drift)? | PostgreSQL → state history table
Stoppage context | Was the last stoppage a TOOL_BREAK? A FAULT? These correlate with quality issues on the next part. | PostgreSQL → stoppage table
Operator input | Scrap count, scrap reason, quality measurement values, free-text notes | PostgreSQL → input table
Production history | Parts made on this tool — quality degrades as tool wears; cycle time drift indicates process change | PostgreSQL → productivity table

Scaffolding the Quality Twin Function

All of this data is already flowing through the UNS and being captured by the reference pipeline. The quality twin doesn't require new infrastructure — it requires a new function that reads from the existing data and adds intelligence on top.

Here's how you build it:

# Step 1: Scaffold the function
fnkit go uns-quality-predict

# → Creates uns-quality-predict/ with:
#   function.go, cmd/main.go, Dockerfile,
#   docker-compose.yml, .env.example, go.mod

# Step 2: Configure environment
cd uns-quality-predict
cp .env.example .env
# .env already has CACHE_URL and DATABASE_URL
# pointing to the shared fnkit infrastructure

Now write the twin logic in function.go:

// uns-quality-predict/function.go
// Quality prediction twin function — reads from shared cache + DB.
// (cache, db, and mqtt are package-level clients initialised in
// cmd/main.go; this snippet is illustrative, not complete.)

func QualityPredict(w http.ResponseWriter, r *http.Request) {
    machine := r.URL.Query().Get("machine")

    // 1. Read current state from Valkey cache
    //    (same cache that uns-framework writes to)
    spindleLoad := cache.Get("uns:v1.0:...:"+machine+":status:spindle_load")
    toolLife    := cache.Get("uns:v1.0:...:"+machine+":tool:life_remaining")
    cycleTime   := cache.Get("uns:v1.0:...:"+machine+":program:cycle_time")

    // 2. Query historical context from PostgreSQL
    //    (same database that historian/state/stoppage write to)
    recentScrap := db.Query(`
        SELECT count, reason, timestamp
        FROM uns_input
        WHERE machine = $1 AND type = 'scrap'
        AND timestamp > NOW() - INTERVAL '2 hours'`, machine)

    wearTrend := db.Query(`
        SELECT value, timestamp
        FROM uns_historian
        WHERE topic LIKE $1
        AND timestamp > NOW() - INTERVAL '4 hours'
        ORDER BY timestamp`, "%"+machine+"%tool%life_remaining")

    lastStoppage := db.Query(`
        SELECT reason, duration_seconds
        FROM uns_stoppage
        WHERE machine = $1
        ORDER BY start_time DESC LIMIT 1`, machine)

    // 3. Compute quality risk score
    risk := computeQualityRisk(spindleLoad, toolLife, cycleTime,
                               recentScrap, wearTrend, lastStoppage)

    // 4. If risk exceeds threshold, publish alert to MQTT
    if risk.Score > 0.7 {
        mqtt.Publish("v1.0/.../"+machine+"/twin/quality-alert", risk)
    }

    // 5. Return prediction as JSON
    json.NewEncoder(w).Encode(risk)
}
# Step 3: Test locally
docker compose up -d
curl "http://localhost:8080/uns-quality-predict?machine=cnc-01" | jq

# Step 4: Deploy alongside existing pipeline
git add uns-quality-predict/
git commit -m "add quality prediction twin function

Scaffolded with fnkit go. Reads spindle load, tool life, and
cycle time from Valkey cache. Queries scrap history, wear trends,
and stoppage context from PostgreSQL. Computes quality risk score.
Publishes alert to MQTT when risk > 0.7.

Motivation: 2 scrap events last week correlated with tool wear
on T03 during PART-A-001 finishing ops. This function detects
the pattern automatically and alerts before scrap occurs."

git push deploy main
# → Function is live, reading from the same infrastructure
# → Existing functions are completely untouched

[Diagram: CNC quality twin scaffolded with fnkit. CNC-01 (spindle 8500 rpm, load 72%, tool T03 at 34% life, program PART-A-001, parts 42/50) feeds MQTT into the reference pipeline (framework → cache values, state → track transitions, stoppage → classify reasons, productivity → track runs, historian → log every change) and the shared infrastructure (Valkey cache, PostgreSQL, MQTT broker), alongside operator input (scrap: 2 parts, reason: burr). uns-quality-predict (scaffolded with fnkit go) reads from the shared infra — spindle load trending +8%, tool T03 at 34% life, 2 scrap parts in the last 30 min, last alarm tool-related, cycle time drifting +3% — and predicts: "Tool T03 wear rate is accelerating. Risk score: 0.82. Recommend tool change within 8 parts. Burr defects correlate with load >70% at this wear." Published to v1.0/.../cnc-01/twin/quality-alert → MES, dashboard, operator alert.]
The quality twin function reads from the same infrastructure the reference pipeline writes to. No new databases, no new brokers, no new networks. You scaffolded it with one command, wrote the logic, and deployed it alongside everything else. The existing functions don't know or care that it exists.
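The computeQualityRisk helper is deliberately left to you — it encodes your factory's knowledge. One plausible sketch, as a weighted sum of normalised risk factors clamped to [0, 1]; the thresholds and weights here are invented for illustration, and in practice you would tune them against your own scrap history:

```go
package main

import "fmt"

// QualityRisk mirrors the JSON shape the handler returns.
type QualityRisk struct {
	Score   float64  `json:"score"`
	Factors []string `json:"factors"`
}

// computeQualityRisk combines simple risk factors into one score.
// All thresholds and weights are illustrative assumptions.
func computeQualityRisk(spindleLoadPct, toolLifePct, cycleDriftPct float64, recentScrap int) QualityRisk {
	var score float64
	var factors []string
	if spindleLoadPct > 70 { // high cutting load
		score += 0.3
		factors = append(factors, "spindle load high")
	}
	if toolLifePct < 40 { // worn tool
		score += 0.3
		factors = append(factors, "tool life low")
	}
	if cycleDriftPct > 2 { // process drifting
		score += 0.2
		factors = append(factors, "cycle time drift")
	}
	if recentScrap > 0 { // scrap already occurring
		score += 0.2
		factors = append(factors, "recent scrap")
	}
	if score > 1 {
		score = 1
	}
	return QualityRisk{Score: score, Factors: factors}
}

func main() {
	risk := computeQualityRisk(72, 34, 3, 2)
	fmt.Printf("risk score: %.2f, factors: %d\n", risk.Score, len(risk.Factors))
}
```

A rule-based score like this is a reasonable first version; once the function has logged enough predictions against actual inspection outcomes, you can replace the rules with a trained model without touching anything else in the pipeline.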

Why This Is Hard Without a UNS

Without a Unified Namespace, building this quality twin is a data integration project before it is a twin project: custom connectors for the machine's protocol, the historian, the stoppage log, and the operator input system — each with its own schema and timestamps — plus a bespoke correlation layer to join them all in real-time.

With the UNS + fnkit approach, all of this data is already flowing through the same namespace. The quality twin is just a new function on top of existing data — scaffolded in seconds, deployed in minutes, not a new infrastructure project.

The "last mile" problem in manufacturing AI is not the AI itself — it's getting clean, contextualised, real-time data to the AI. In practice, the majority of digital twin project effort goes into data integration — connecting disparate systems, normalising schemas, and correlating timestamps — not into the twin logic itself. The UNS eliminates this cost by providing a single, structured, real-time data layer that any function — whether reference or custom — can read from.

Why GitOps Is Essential for Digital Twins

A digital twin is only as trustworthy as the logic that drives it. If you can't answer "who changed the twin's behaviour, when, and why?" — then you can't trust the twin's output. This is where GitOps transforms digital twins from fragile experiments into production infrastructure.

Twin Logic Is Code

Every aspect of your digital twin — reference functions and custom twin functions alike — is code in a git repository:

# Your digital twin is a git repository

fn-uns/
├── uns-framework/          # Perception layer (Go MQTT)
│   ├── function.go         # MQTT subscription + cache logic
│   ├── Dockerfile
│   └── README.md
│
├── uns-state/              # State model (Go)
│   ├── function.go         # State transition detection
│   └── ...
│
├── uns-quality-predict/    # YOUR TWIN: quality prediction (Go)
│   ├── function.go         # Quality risk scoring logic
│   ├── Dockerfile
│   └── ...
│
├── uns-vibration/          # YOUR TWIN: vibration analysis (Go)
│   ├── function.go         # FFT analysis + bearing wear detection
│   └── ...
│
├── uns-thermal-comp/       # YOUR TWIN: thermal compensation (Go MQTT)
│   ├── function.go         # Thermal drift model + warm-up recs
│   └── ...
│
├── uns-energy/             # YOUR TWIN: energy monitoring (Node.js)
│   ├── index.js            # Power draw correlation + waste detection
│   └── ...
│
└── uns-dashboard/          # Visualisation (Grafana)
    └── grafana/            # Dashboard JSON — version-controlled

# Every file is tracked. Every change has an author.
# Every deployment is a git commit.
# Every new twin function is a `fnkit` scaffold + git push.

What GitOps Gives Your Digital Twin

Twin Requirement | Without GitOps | With fnkit + GitOps
Change tracking | Someone changed the quality prediction thresholds. Nobody knows when or why. The twin's alert frequency changed — was it intentional? | git log uns-quality-predict/ shows every change with author, date, and commit message explaining the reasoning.
Safe evolution | You want to add vibration analysis to the twin. If it breaks, the entire twin goes down. You test in production because there's no other option. | fnkit go uns-vibration → develop → test locally with docker compose up → pull request → review → merge. If it breaks: git revert. Existing functions are untouched.
Twin consistency | Site A runs version 2.3 of the twin logic. Site B runs 2.1 with a local patch. Site C has an undocumented modification. Nobody knows which site is "correct." | Every site runs the same git commit. git log --oneline at each site shows exactly what version is deployed. Differences are intentional and documented.
Regulatory compliance | An auditor asks: "How do you ensure the twin's quality calculations haven't been tampered with?" You have no answer. | Git provides a content-addressed, tamper-evident audit trail (optionally GPG-signed). Every change to every calculation is traceable to a specific person, date, and approval.
Failure isolation | A bug in the quality prediction crashes the entire twin platform. State tracking, KPIs, dashboards — all down. | uns-quality-predict crashes alone. State tracking, KPIs, dashboards, vibration analysis — all continue running. The twin loses one capability, not everything.
Incremental capability | Adding a new twin capability means upgrading the entire platform, risking regression in existing capabilities. | fnkit go uns-new-capability → deploy. It reads from the same Valkey cache and PostgreSQL database. Existing functions are untouched.

The Twin Drift Problem

One of the most insidious problems with digital twins is drift — the gradual divergence between what the twin thinks is happening and what's actually happening. Drift has two forms:

Data Drift

The physical system changes but the twin's data model doesn't. A new machine is added, a sensor is recalibrated, an alarm code is redefined. The twin's perception becomes stale. In fn-uns, the UNS Framework standard and wildcard MQTT subscriptions handle this automatically — new machines appear in the namespace without configuration changes.
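The wildcard behaviour is easy to see in isolation. A sketch of MQTT topic-filter matching (the + single-level and # multi-level wildcards defined in the MQTT specification), showing why a subscription like v1.0/enterprise/site1/# picks up a brand-new machine with zero configuration changes:

```go
package main

import (
	"fmt"
	"strings"
)

// matches implements MQTT topic-filter matching: '+' matches exactly
// one topic level, '#' matches all remaining levels.
func matches(filter, topic string) bool {
	f := strings.Split(filter, "/")
	t := strings.Split(topic, "/")
	for i, part := range f {
		if part == "#" {
			return true // multi-level wildcard covers the rest
		}
		if i >= len(t) {
			return false // filter is longer than the topic
		}
		if part != "+" && part != t[i] {
			return false // literal level mismatch
		}
	}
	return len(f) == len(t)
}

func main() {
	filter := "v1.0/enterprise/site1/#"
	// A machine added tomorrow is already covered by today's subscription:
	fmt.Println(matches(filter, "v1.0/enterprise/site1/machining/cnc-99/status"))
}
```

This is why the perception layer never needs a machine list: the moment a new machine publishes under the site's namespace, every subscribing twin function sees it.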

Logic Drift

The twin's processing logic is modified at one site but not others. A threshold is tweaked, a classification rule is changed, a prediction model is adjusted. Without version control, these changes are invisible. With GitOps, every logic change is a git commit — visible, reviewable, and deployable consistently across all sites.

Twin drift is the #1 reason digital twins lose trust. When the twin's output doesn't match reality — even once — users stop trusting it. And once trust is lost, it's nearly impossible to regain. GitOps prevents logic drift by making every change explicit, reviewed, and auditable. Data drift is prevented by the UNS's self-describing topic hierarchy and wildcard subscriptions.

The Deployment Workflow: Adding a New Twin Capability

# Example: Adding thermal compensation to your digital twin

# 1. Scaffold the new function
fnkit go-mqtt uns-thermal-comp
cd uns-thermal-comp

# 2. Write the twin logic
# function.go: subscribe to temperature topics via MQTT
# Read ambient temp, spindle temp, coolant temp from UNS
# Query uns_historian for dimensional accuracy after idle periods
# Model thermal drift curve for each machine
# Publish warm-up recommendations to MQTT

# 3. Test locally
cp .env.example .env
docker compose up -d
# Verify: function subscribes, processes, publishes correctly

# 4. Commit with full context
git add uns-thermal-comp/
git commit -m "add thermal compensation twin function

Scaffolded with fnkit go-mqtt. Subscribes to temperature topics.
Models thermal drift after startup and long idle periods.
Queries historian for dimensional accuracy correlation.
Publishes warm-up recommendations to v1.0/.../twin/thermal-comp.

Motivation: first-part scrap rate is 3x higher on Monday mornings
and after lunch breaks. Thermal drift from cold spindle is the
likely cause. This function detects the condition and recommends
warm-up cycles before production starts.

Tested: local docker compose with simulated temperature data."

# 5. Push, review, merge
git push origin feature/thermal-compensation
# → Create pull request → Team reviews → Merge to main

# 6. Deploy to all sites
git push deploy main
# → Every site gets the new capability, simultaneously
# → Existing functions are completely untouched

# 7. If something goes wrong at any site
git revert HEAD
git push deploy main
# → Previous version running everywhere in seconds
Every new twin capability follows the same workflow: scaffold → develop → test → commit → review → deploy. Six months from now, you can trace exactly when thermal compensation was added, who built it, why, and what it changed. The twin grows organically as your understanding of your factory deepens.

Scaling Digital Twins with fn-uns

The real test of any digital twin architecture is what happens when you scale. Here's the progression — and why composable functions with GitOps handle each stage gracefully.

1 Machine → 10 Machines

Monolithic Platform

Configure each machine in the platform's UI. Define data models, create dashboards, set up alerts — per machine. 10× the configuration work. If the platform charges per-device, 10× the cost.

fnkit + UNS

Zero changes. The framework subscribes to v1.0/# — it automatically discovers new machines as they publish to the namespace. Your custom twin functions parse the machine identity from the topic path. Add a machine to MQTT and the twin sees it immediately.

10 Machines → 100 Machines

Monolithic Platform

Performance problems emerge. The platform's single runtime struggles with message volume. Dashboard load times increase. The visual configuration tool becomes unwieldy with 100 assets. You need platform-specific expertise to optimise. Vendor suggests upgrading to the enterprise tier.

fnkit + UNS

Same functions — reference and custom. Valkey handles the cache volume (it's designed for millions of keys). PostgreSQL handles the write volume (with proper indexing). If any single function needs more resources, scale that container independently. The architecture was designed for this — stateless functions reading from shared infrastructure.
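Because each function is its own stateless container, a hot one can be given more replicas or a resource limit without touching the rest. A sketch of what that might look like in Compose, with hypothetical service names (recent Docker Compose versions honor the `deploy.replicas` and resource-limit keys when running locally):

```yaml
# compose.yaml (fragment) — service names are illustrative
services:
  uns-quality-predict:
    build: ./uns-quality-predict
    env_file: .env
    deploy:
      replicas: 3          # scale just this function
      resources:
        limits:
          memory: 512M
  uns-state:
    build: ./uns-state
    env_file: .env         # unchanged; keeps running as before
```

The same effect is available ad hoc via `docker compose up -d --scale uns-quality-predict=3`, with no edits to the file at all.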

1 Site → 5 Sites

Monolithic Platform

Deploy the platform at each site. Replicate configuration manually. Each site drifts independently. Bug fixes require manual deployment at every location. Cross-site analytics require a separate aggregation layer. Licensing costs multiply. This is where most digital twin initiatives stall.

fnkit + UNS

git clone at each site. Site-specific config in .env files (broker address, database credentials). Twin logic is identical everywhere — guaranteed by git. Updates: git pull && docker compose up -d. New twin function at one site? Push to git and every site gets it.
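The per-site configuration might look like the sketch below; the variable names are hypothetical stand-ins for whatever each function's .env.example actually declares.

```shell
# .env — the only file that differs between sites
MQTT_BROKER=mqtt://broker.munich.example.com:1883
POSTGRES_DSN=postgres://uns:secret@db.munich.local:5432/uns_historian
VALKEY_ADDR=valkey.munich.local:6379
SITE_ID=munich
```

Everything else — function code, topic subscriptions, twin logic — is identical at every site, because it comes from the same git commit.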

[Diagram: Multi-site digital twin deployment. One git repository (reference + custom twin functions, v2.4) is cloned to each site: Site 1 Detroit (ref + 5 custom, 25 machines), Site 2 Munich (ref + 5 custom, 40 machines), Site 3 Nagoya (ref + 3 custom, 30 machines), Site 4 Pune (ref + 4 custom, 18 machines), Site 5 Querétaro (ref + 5 custom, 22 machines), all running v2.4. Same code, same version, same twin logic; different .env files for site-specific config.]

The Cost Comparison

| Cost Factor | Cloud Twin Platform (100 machines) | fnkit + UNS (100 machines) |
|---|---|---|
| Platform license | $50K–$500K/year (varies by vendor and tier) | $0 — open source |
| Per-device fees | $10–$100/device/month = $12K–$120K/year | $0 — no per-device pricing |
| Cloud compute | $500–$5,000/month for message processing and storage | $0 — runs on your existing server or a $50/month VM |
| Data egress | $0.05–$0.12/GB — unpredictable at scale | $0 — data stays on-premise |
| Infrastructure | Included in cloud pricing (but you don't control it) | One server: MQTT + Valkey + PostgreSQL + your functions. A modern NUC or small server handles this easily. |
| Adding new twin capabilities | Buy add-on modules, upgrade tier, or write custom code in vendor's proprietary runtime | fnkit go uns-new-thing — scaffold, develop, deploy. Free. |
| Vendor lock-in risk | High — proprietary data models, APIs, and tooling | Zero — standard MQTT, standard SQL, standard containers |
Cloud-based digital twin platforms carry significant costs at scale — platform licensing, per-device fees, cloud compute, and data egress charges add up quickly when you're running 100+ machines across multiple sites. Self-hosted, open-source approaches eliminate these recurring cost categories entirely. The investment shifts to engineering time, which produces reusable, version-controlled assets that you own — rather than recurring vendor payments for capabilities you can't modify or migrate.
🔧 Manufacturing Engineer

At one site with a few machines, any approach works. But when the VP says "roll this out globally," you need something that doesn't require you to manually configure each site. fnkit gives you identical twin logic everywhere — the same quality predictions, the same vibration analysis, the same thermal compensation — deployed via git, not via manual configuration.

💻 Developer

Need a new twin capability? fnkit go uns-new-thing and you have a deployable function in seconds. Write Go or Node.js with standard libraries. Test with Docker Compose. Deploy with git push. The same workflow you use for every other system — not a vendor's proprietary toolchain.

🖥️ IT Engineer

Cloud-based twin platforms mean your factory data leaves your network, your costs are unpredictable, and your twin depends on internet connectivity. fnkit functions run entirely on-premise. You control the infrastructure, the data, and the costs. Each site is self-contained — if the WAN goes down, the local twin keeps running.

The Bottom Line

Digital twins don't have to be expensive, vendor-locked, or impossibly complex. The industry has been sold a vision of monolithic platforms that promise everything and deliver pilot projects that never scale. The reality is simpler.

A digital twin needs three things:

1. A live data layer: the Unified Namespace, where every machine publishes its state to a self-describing MQTT topic hierarchy.
2. Composable processing logic: small, independent functions, scaffolded with fnkit, that turn raw data into state, history, KPIs, and predictions.
3. A version-controlled deployment workflow: GitOps, so identical twin logic runs at every site and every change is auditable.

Together, these three layers give you a digital twin that is:

| Property | What It Means |
|---|---|
| Composable | Add capabilities incrementally — fnkit go uns-new-capability and you have a deployable function. Start with state tracking, add quality prediction, add vibration analysis. Each function is independent. |
| Version-controlled | Every change to twin logic is a git commit with an author, timestamp, and reason. Full audit trail. |
| Reproducible | git clone + docker compose up = identical twin at any site, every time. |
| Failure-isolated | One function crashes, the rest keep running. The twin degrades gracefully, not catastrophically. |
| Vendor-neutral | Open source. Standard protocols. Standard tools. Your data, your infrastructure, your twin. |
| Cost-predictable | No per-device fees, no per-message charges, no cloud egress costs. Runs on hardware you already own. |
fnkit gives you the platform to build exactly the digital twin your factory needs. The fn-uns reference pipeline provides the foundation — state tracking, historical logging, KPIs, diagnostics. But your twin will be different from every other twin, because your factory is different from every other factory. fnkit makes it trivial to scaffold, develop, and deploy the custom twin functions that solve your specific problems. And because it's all code in a git repository, every step is safe, auditable, and reversible.

Next Steps

| Resource | Description |
|---|---|
| fnkit Basics | Complete guide to scaffolding, developing, and deploying functions with fnkit — the tool that powers your twin |
| Extending the System | How to add new functions, KPIs, and data sources to your digital twin |
| Pipeline Functions Guide | Walkthrough of the fn-uns reference functions — the starting point for your twin |
| Flow vs GitOps Guide | When to graduate from Node-RED experiments to production GitOps — the twin management layer |
| Getting Started | Deploy the reference pipeline in under 5 minutes — the foundation for your digital twin |
| UNS Framework Standard | The open standard that defines the twin's data structure — ISA-95 topic hierarchy |

Guide Version: 2.0 · Applies To: fnkit, fn-uns reference pipeline, digital twin architecture, GitOps deployment

Last updated March 2026.