uns-ai — AI-Powered Analysis
Runs AI analysis on live and historical manufacturing data. Built-in analysis types for anomaly detection, shift summaries, smart alerting, predictive maintenance, production optimisation, and root cause analysis.
| Language | Node.js |
|---|---|
| Type | HTTP function |
| Scaffolded with | fnkit node |
| Depends on | Valkey, PostgreSQL, MQTT Broker, AI Provider (OpenAI/Anthropic) |
What It Does
Reads live machine data from Valkey cache and historical data from PostgreSQL, assembles a rich context for the chosen analysis type, sends it to an AI provider, publishes results back to MQTT, and stores a full audit trail in PostgreSQL.
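That flow can be sketched as an orchestration function with injected clients. This is a minimal illustration, not the function's actual internals: names like `getLive`, `analyse`, and `audit.log` are assumptions standing in for the real Valkey, PostgreSQL, AI-provider, and MQTT integrations.

```javascript
// Sketch of the uns-ai pipeline with injected clients (all client
// method names are illustrative assumptions).
async function runAnalysis({ cache, db, ai, mqtt, audit }, cfg) {
  // 1. Live machine data from the Valkey cache
  const live = await cache.getLive(cfg.machines);
  // 2. Historical rows from PostgreSQL
  const history = await db.getHistory(cfg.machines, cfg.context.history_hours);
  // 3. Assemble the context and call the AI provider
  const result = await ai.analyse(cfg.analysis, { live, history });
  // 4. Publish the result back to MQTT
  await mqtt.publish(cfg.output_topic, JSON.stringify(result));
  // 5. Store the audit trail row
  await audit.log({ analysis: cfg.analysis, result });
  return result;
}
```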
Analysis Types
| Type | Purpose | Data Sources |
|---|---|---|
| `anomaly` | Detect abnormal machine behaviour | Live status, state history, baseline stats, alarms, tool data |
| `shift-summary` | Generate shift handover report | Live status, states, stoppages, production, scrap, operator notes |
| `smart-alert` | Contextual alert triage | Live status, recent alarms, stoppages, tool condition, production |
| `predictive` | Tool wear / maintenance prediction | Tool data, tool history, alarms, tool-related stoppages, baseline |
| `optimisation` | Machine-program assignment | Live status, production aggregates, scrap rates, reliability |
| `root-cause` | Explain why an alarm happened | State timeline, stoppages, tool condition, sensor history, notes |
| `custom` | User-defined prompt | Configurable — select which data sources to include |
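The type-to-sources mapping above can be pictured as a simple lookup table. A sketch (the object and function names are illustrative, not the function's actual internals):

```javascript
// Illustrative mapping of analysis type -> data sources, mirroring the
// table above. Names are assumptions for the sketch.
const DATA_SOURCES = {
  "anomaly": ["live_status", "state_history", "baseline_stats", "alarms", "tool_data"],
  "shift-summary": ["live_status", "states", "stoppages", "production", "scrap", "operator_notes"],
  "smart-alert": ["live_status", "recent_alarms", "stoppages", "tool_condition", "production"],
  "predictive": ["tool_data", "tool_history", "alarms", "tool_stoppages", "baseline_stats"],
  "optimisation": ["live_status", "production_aggregates", "scrap_rates", "reliability"],
  "root-cause": ["state_timeline", "stoppages", "tool_condition", "sensor_history", "notes"],
};

// "custom" falls back to whatever sources the caller configured.
function resolveSources(analysis, configured = []) {
  if (analysis === "custom") return configured;
  const sources = DATA_SOURCES[analysis];
  if (!sources) throw new Error(`unknown analysis type: ${analysis}`);
  return sources;
}
```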
Data Context Assembly
Each analysis type automatically assembles the right data from across the UNS pipeline:
```
1. LIVE LAYER (Valkey cache)
   └─ Status, program, tool payloads per machine
2. HISTORY LAYER (PostgreSQL)
   └─ uns_state, uns_stoppage, uns_productivity, uns_input, uns_historian
3. AGGREGATE LAYER (computed at query time)
   └─ Utilisation %, MTBF, MTTR, alarm frequency, stoppage pareto
4. PROMPT ASSEMBLY
   └─ System prompt (analysis-specific) + data context → AI provider
```
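As an illustrative sketch of the aggregate layer, utilisation %, MTBF, and MTTR can be derived from stoppage records over an observation window. Field and function names here are assumptions, not the function's actual internals:

```javascript
// Compute utilisation %, MTBF and MTTR from a list of stoppage records
// over a window of `windowHours`. Each stoppage carries its duration in
// minutes. Field names are illustrative.
function computeAggregates(stoppages, windowHours) {
  const windowMin = windowHours * 60;
  const downtimeMin = stoppages.reduce((sum, s) => sum + s.durationMin, 0);
  const uptimeMin = windowMin - downtimeMin;
  const failures = stoppages.length;
  return {
    utilisationPct: (uptimeMin / windowMin) * 100,
    mtbfMin: failures ? uptimeMin / failures : null,   // mean time between failures
    mttrMin: failures ? downtimeMin / failures : null, // mean time to repair
  };
}
```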
Config in Valkey
```bash
docker exec fnkit-cache valkey-cli SET fnkit:config:uns-ai \
  '{"provider":"openai","model":"gpt-4o-mini","analysis":"anomaly",
    "machines":["cnc-01","cnc-02","cnc-03","cnc-04"],
    "context":{"history_hours":24,"baseline_days":7},
    "output_topic":"v1.0/enterprise/site1/ai/anomaly"}'
```

Supports `openai` and `anthropic` providers. All responses are structured JSON with full token/latency tracking.
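A minimal sketch of parsing and validating that config once it is read from Valkey. The helper name and the exact defaults applied are assumptions for illustration:

```javascript
// Parse the uns-ai config JSON pulled from Valkey and apply defaults for
// the optional context window settings. A sketch; the function's real
// validation may differ.
const PROVIDERS = ["openai", "anthropic"];

function parseConfig(raw) {
  const cfg = JSON.parse(raw); // throws on malformed JSON
  if (!PROVIDERS.includes(cfg.provider)) {
    throw new Error(`unsupported provider: ${cfg.provider}`);
  }
  if (!Array.isArray(cfg.machines) || cfg.machines.length === 0) {
    throw new Error("machines must be a non-empty array");
  }
  // Defaults assumed from the example config above.
  cfg.context = { history_hours: 24, baseline_days: 7, ...cfg.context };
  return cfg;
}
```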
API Usage
```bash
# Run analysis (from config)
curl https://api.fnkit.dev/uns-ai

# Override analysis type
curl "https://api.fnkit.dev/uns-ai?analysis=shift-summary&hours=8"

# Filter to specific machine
curl "https://api.fnkit.dev/uns-ai?machine=cnc-01"

# Query past AI results
curl "https://api.fnkit.dev/uns-ai?history=true&hours=24"

# List available analysis types
curl "https://api.fnkit.dev/uns-ai?types=true"
```
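The same requests can be built programmatically. A small sketch using Node's WHATWG `URL` API; the helper name is an assumption:

```javascript
// Build a request URL for the uns-ai endpoint from a base URL and a set
// of query parameters (e.g. analysis, hours, machine).
function buildAnalysisUrl(base, params = {}) {
  const url = new URL("/uns-ai", base);
  for (const [key, value] of Object.entries(params)) {
    url.searchParams.set(key, String(value));
  }
  return url.toString();
}

// e.g. buildAnalysisUrl("https://api.fnkit.dev", { analysis: "shift-summary", hours: 8 })
```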
PostgreSQL Table
```sql
CREATE TABLE IF NOT EXISTS uns_ai (
  id                BIGSERIAL PRIMARY KEY,
  logged_at         TIMESTAMPTZ DEFAULT NOW(),
  enterprise        TEXT,
  site              TEXT,
  area              TEXT,
  machine           TEXT,
  provider          TEXT,
  model             TEXT,
  analysis          TEXT,
  prompt_tokens     INT,
  completion_tokens INT,
  latency_ms        INT,
  input_data        JSONB,
  response          JSONB,
  output_topic      TEXT,
  published         BOOLEAN
);
```
Every AI call is logged — full audit trail with token counts, latency, and the complete response.
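Writing that audit row boils down to one parameterised INSERT. A sketch in the node-postgres `{ text, values }` style; the helper name is an assumption, and the column list mirrors the `uns_ai` table:

```javascript
// Build a parameterised INSERT for one audit row. Missing fields are
// stored as NULL. The column list matches the uns_ai table above.
function buildAuditInsert(row) {
  const cols = [
    "enterprise", "site", "area", "machine", "provider", "model", "analysis",
    "prompt_tokens", "completion_tokens", "latency_ms", "input_data",
    "response", "output_topic", "published",
  ];
  const placeholders = cols.map((_, i) => `$${i + 1}`).join(", ");
  return {
    text: `INSERT INTO uns_ai (${cols.join(", ")}) VALUES (${placeholders})`,
    values: cols.map((c) => row[c] ?? null),
  };
}
```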
Quick Start
```bash
# Set config in Valkey
docker exec fnkit-cache valkey-cli SET fnkit:config:uns-ai \
  '{"provider":"openai","model":"gpt-4o-mini","analysis":"anomaly",
    "machines":["cnc-01","cnc-02","cnc-03","cnc-04"]}'

# Set API key and start
cp .env.example .env   # edit OPENAI_API_KEY
cd uns-ai && docker compose up -d

# Run an analysis
curl http://localhost:8080/uns-ai
```