# Optimal Reality Platform
Physical operations — power grids, transport networks, mines, logistics corridors — have always been reactive. OR changes the equation.
OR is a Physical AI platform that continuously models the physical world, runs AI and simulation faster than events unfold, and closes the loop between insight and action at millisecond speed — from the data layer to the control room. At its core is the Autonomic Integration Fabric (AIF): the orchestration layer that connects data, models, applications, and compute without manual integration, enabling a platform that is faster than real-time by design.
## The Autonomic Integration Fabric
OR's core differentiator is the Autonomic Integration Fabric (AIF) — the integration layer that continuously orchestrates workflows, data, AI models, and simulations across operational problems, eliminating the need for manual integration projects.
The AIF addresses what has historically constrained large-scale AI, simulation, and emerging agentic applications: the integration problem. Connecting data sources, models, interfaces, and compute into a coherent operational system has traditionally required expensive, bespoke engineering effort for every deployment. The AIF makes this automatic and composable.
The fabric spans four layers:
| Layer | Description |
|---|---|
| Applications | Where users interact with the system — the interfaces, APIs, and business logic built on top of OR |
| Models | All the computational models, algorithms, and simulations within the system |
| Data | Your information, from raw inputs to processed results — the authoritative world model |
| Compute | The operational layer — servers, containers, and GPUs — at whatever scale the problem demands |
The AIF continuously orchestrates across these layers without manual wiring between components. This is what enables OR to operate faster than real-time: by the time an operational event reaches a decision-maker, the platform has already processed it, run the relevant models, and surfaced recommended actions. In Physical AI, real time is measured in milliseconds; the AIF is designed around this constraint, which distinguishes OR from batch-oriented orchestration tools such as Airflow or Step Functions.
## The World Model
At the foundation of OR is the concept of the world model — the authoritative digital representation of a physical system against which all computation runs. The world model is not a snapshot: it is a continuously synchronised, structured representation of system state, updated in real time as the physical environment changes.
OR approaches this through an ontology-first architecture — viewing the enterprise as a graph of interconnected assets, relationships, and events. This graph is the backbone of the DDK's schema-first data layer, and it is what enables OR to represent any physical system — from a mine fleet to a national rail network — as a queryable, composable System of Systems. The ontology approach is not architecturally limited to industrial Physical AI: any enterprise with interconnected entities, dependencies, and operational workflows can be modelled as a System of Systems using OR's data and execution layer.
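As an illustration of the ontology-first idea, the sketch below models a tiny asset graph in Python with typed nodes, typed relationships, and a traversal query. The class and relation names are hypothetical; OR's actual graph model lives in the DDK's schema layer.

```python
from collections import defaultdict

class AssetGraph:
    """Minimal ontology-style asset graph: typed nodes and typed edges."""
    def __init__(self):
        self.nodes = {}                 # id -> {"type": ..., "attrs": ...}
        self.edges = defaultdict(list)  # id -> [(relation, target_id)]

    def add_asset(self, node_id, node_type, **attrs):
        self.nodes[node_id] = {"type": node_type, "attrs": attrs}

    def relate(self, source, relation, target):
        self.edges[source].append((relation, target))

    def downstream(self, node_id, relation):
        """All assets reachable from node_id via a given relation type."""
        seen, stack = set(), [node_id]
        while stack:
            current = stack.pop()
            for rel, target in self.edges[current]:
                if rel == relation and target not in seen:
                    seen.add(target)
                    stack.append(target)
        return seen

# A toy rail fragment: a substation feeding two track segments in series.
graph = AssetGraph()
graph.add_asset("sub-1", "substation", region="north")
graph.add_asset("seg-a", "track_segment")
graph.add_asset("seg-b", "track_segment")
graph.relate("sub-1", "feeds", "seg-a")
graph.relate("seg-a", "feeds", "seg-b")

# Impact query: everything downstream of the substation.
assert graph.downstream("sub-1", "feeds") == {"seg-a", "seg-b"}
```

The same traversal pattern generalises to any System of Systems question of the form "which entities are affected if this one changes state".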
## AI-Native Architecture
AI is embedded throughout OR, not bolted on. Every component is designed to be both AI-augmented and AI-orchestratable:
- Each DK is exposed as a tool to AI agents — the DDK, MDK, FDK, ORA, and Nexus can all be invoked dynamically by agents, making the platform automatically composable in agentic workflows.
- ORA (OR Assistant) is the AI application built on the ADK — engineers, analysts, and operators describe a problem in natural language and ORA plans and builds OR applications powered by ADK's agent and plugin capabilities.
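The DK-as-tool pattern can be sketched as a registry that an agent runtime resolves tool calls against. Tool names such as `ddk.query` and `mdk.run_model` are illustrative placeholders, not the platform's real identifiers.

```python
from typing import Callable

TOOL_REGISTRY: dict[str, Callable] = {}

def tool(name: str):
    """Register a platform capability so an agent can invoke it by name."""
    def decorator(fn: Callable):
        TOOL_REGISTRY[name] = fn
        return fn
    return decorator

@tool("ddk.query")
def ddk_query(entity: str) -> dict:
    # Placeholder for a DDK data-layer query.
    return {"entity": entity, "status": "ok"}

@tool("mdk.run_model")
def mdk_run_model(model_id: str, params: dict) -> dict:
    # Placeholder for an MDK model execution.
    return {"model": model_id, "result": sum(params.values())}

def invoke(tool_name: str, **kwargs):
    """The agent runtime resolves a tool call to a registered capability."""
    return TOOL_REGISTRY[tool_name](**kwargs)

assert invoke("mdk.run_model", model_id="eta", params={"a": 1, "b": 2})["result"] == 3
```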
## AI Guardrails
OR operates in regulated, safety-critical environments where AI decisions have real physical consequences. Guardrails are not optional — they are built into the platform architecture at two levels.
Development-time guardrails govern how models are introduced and promoted. Every model goes through a registry before it can participate in a workflow. Promotion from development to production requires a structured experiment process: models run in shadow mode alongside the live workflow, with outputs captured but not acted on, until accuracy and stability thresholds are met. This mirrors the controlled testing processes that regulated industries require for any decision system. Experiment configurations are snapshotted and immutable, so a model run months ago can be reproduced exactly — including the parameters, the data, and the operator who approved it. No model reaches production without a traceable trail.
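A minimal sketch of the shadow-mode promotion check, assuming a simple relative-tolerance agreement metric; the platform's actual thresholds and metrics are not specified here.

```python
def shadow_eval(live_outputs, shadow_outputs, tolerance=0.05, threshold=0.95):
    """Fraction of shadow outputs within a relative tolerance of the live
    workflow's outputs; the candidate is promotable once that fraction
    meets the configured threshold."""
    within = sum(
        abs(s - l) <= tolerance * max(abs(l), 1e-9)
        for l, s in zip(live_outputs, shadow_outputs)
    )
    agreement = within / len(live_outputs)
    return {"agreement": agreement, "promote": agreement >= threshold}

# Shadow outputs captured alongside the live workflow, never acted on.
live   = [10.0, 12.5, 11.0, 9.8]
shadow = [10.1, 12.4, 11.2, 9.9]
report = shadow_eval(live, shadow)
assert report["promote"] is True
```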
End-user guardrails govern what operators can trigger and under what conditions automated decisions execute. The platform supports a configurable spectrum from full decision support (the system recommends, a human confirms) through to adaptive automation (the system acts within defined thresholds, escalates outside them). High-impact actions require explicit operator confirmation with full attribution — who confirmed, when, with what reasoning context. Operators can inspect the models and parameters behind any recommendation before approving. Automation thresholds are set per workflow and per organisation, and can be tightened or relaxed as trust develops over time. This is OR's answer to the question every enterprise AI deployment faces: how do you give AI meaningful authority without removing human accountability?
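The decision-support-to-automation spectrum can be illustrated as a gate that compares an action's impact score against a per-workflow threshold. The scoring and policy fields are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AutomationPolicy:
    # Impact scores below this value may execute autonomously;
    # anything at or above it escalates to an operator.
    auto_threshold: float

def gate(action_impact: float, policy: AutomationPolicy,
         operator_approved: bool = False) -> str:
    if action_impact < policy.auto_threshold:
        return "execute"                      # adaptive automation within bounds
    return "execute" if operator_approved else "escalate"

policy = AutomationPolicy(auto_threshold=0.3)
assert gate(0.1, policy) == "execute"                           # routine action
assert gate(0.8, policy) == "escalate"                          # needs a human
assert gate(0.8, policy, operator_approved=True) == "execute"   # human confirmed
```

Tightening trust over time corresponds to raising `auto_threshold` per workflow; attribution (who approved, when, why) would be recorded alongside each confirmed execution.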
**The right model for the right task.** A core architectural guardrail is knowing when not to use AI. In physical operations, physics-based simulation and mathematical optimisation are often the better choice: they are deterministic, fully explainable, and can be validated against known ground truth. OR's traffic simulation, for example, runs ODE-based vehicle dynamics that produce reproducible outputs against which real-world observations can be directly compared — something a neural network cannot offer in the same way. The MDK's model execution layer is deliberately model-agnostic: AI, simulation, statistical models, and optimisation solvers are all first-class execution types, composed within the same workflow. This means the platform never forces a problem into an AI-shaped box — the workflow selects the appropriate model type for each task, based on what the problem actually demands.
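To show why determinism matters, here is a toy ODE integration in the spirit of the vehicle-dynamics example: a first-order relaxation toward a target speed, stepped with explicit Euler. It is not OR's traffic model, but it demonstrates the reproducibility property the text describes: identical inputs give bit-identical traces.

```python
def simulate_vehicle(v0, v_target, tau=2.0, dt=0.01, steps=500):
    """First-order speed relaxation: dv/dt = (v_target - v) / tau.
    Explicit Euler integration; fully deterministic for given inputs."""
    v = v0
    trace = [v]
    for _ in range(steps):
        v += dt * (v_target - v) / tau
        trace.append(v)
    return trace

run_a = simulate_vehicle(0.0, 15.0)
run_b = simulate_vehicle(0.0, 15.0)
assert run_a == run_b                 # bit-identical: same inputs, same outputs
assert abs(run_a[-1] - 15.0) < 2.0    # converging toward the target speed
```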
## The Agentic Development Kit (ADK)
The Agentic Development Kit (ADK) is OR's AI runtime layer. It surfaces every OR DK — DDK, MDK, FDK — as a composable AI tool (plugin), manages agent lifecycle and chat sessions, and orchestrates AI workflows. It is the engine that enables AI agents to programmatically interact with the full OR platform.
ORA is built on the ADK today: when a user sends a message through ORA's chatbot interface, the ADK manages the agent, routes tool calls to the relevant DK plugins, and executes the resulting workflow. The ADK's architecture is designed to extend into broader autonomous orchestration — enabling agents to dynamically discover, select, and chain any OR capability in response to a goal, composing novel execution paths at runtime. This is OR's answer to a core Physical AI question: how do you coordinate and control fleets of AI agents operating across a physical system?
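The routing behaviour can be sketched as a loop that executes a planned sequence of tool calls and threads each result through a shared context. The plugin names and plan format are invented for illustration; the ADK's real interfaces are not shown here.

```python
def run_agent(plan, plugins):
    """Execute a sequence of tool calls, making every prior result
    available to later steps via a shared context."""
    context = {}
    for step in plan:
        fn = plugins[step["tool"]]
        result = fn(**step.get("params", {}), context=context)
        context[step["tool"]] = result
    return context

# Hypothetical plugins: a data fetch feeding a simulation step.
plugins = {
    "ddk.fetch": lambda context, entity: {"rows": 3, "entity": entity},
    "mdk.simulate": lambda context: {
        "input_rows": context["ddk.fetch"]["rows"], "eta": 42},
}
plan = [
    {"tool": "ddk.fetch", "params": {"entity": "fleet"}},
    {"tool": "mdk.simulate"},
]
out = run_agent(plan, plugins)
assert out["mdk.simulate"]["input_rows"] == 3
```

In a genuinely agentic setting the plan itself would be produced and revised by the agent at runtime rather than fixed in advance, which is the extension the ADK is described as targeting.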
## Architecture & Components
OR is structured into four architectural domains, each supported by dedicated SDK components.
### State & World Representation
This domain maintains the world model — the digital representation of physical systems against which all computation runs. It provides asset graph and ontology modelling, time-series ingestion, event stream processing, spatial (GIS) integration, and versioned system state.
**Data Development Kit (DDK)** — generates production-ready GraphQL servers from declarative schema definitions, providing the data layer for any physical system representation.
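The schema-to-server idea can be illustrated generically: a declarative entity description rendered into GraphQL SDL type definitions. The input format below is a stand-in, not the DDK's actual schema syntax.

```python
def generate_sdl(entities: dict) -> str:
    """Render a declarative entity description as GraphQL SDL type definitions."""
    blocks = []
    for name, fields in entities.items():
        lines = [f"type {name} {{"]
        lines += [f"  {field}: {gql_type}" for field, gql_type in fields.items()]
        lines.append("}")
        blocks.append("\n".join(lines))
    return "\n\n".join(blocks)

# A toy mine-fleet fragment: trucks assigned to haul routes.
schema = {
    "Truck": {"id": "ID!", "payloadTonnes": "Float", "route": "Route"},
    "Route": {"id": "ID!", "lengthKm": "Float"},
}
sdl = generate_sdl(schema)
assert "type Truck {" in sdl
assert "payloadTonnes: Float" in sdl
```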
### Model Execution
OR supports the full lifecycle of models — from training and registration through to production execution — within a common orchestration framework. Supported paradigms include: deterministic physics and engineering models, statistical and ML models, discrete event simulations, mathematical optimisation solvers, and AI/agent-based models. Models are containerised, independently versioned, and connected to shared state.
**Modelling Development Kit (MDK)** — the workflow orchestration engine for building, training, and executing AI and simulation workflows across model types, managing the full lifecycle from model development through to result capture.
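A workflow mixing model types might be composed as below, where a physics step and a statistical forecast feed an optimisation step. This is a hand-rolled sketch, not the MDK API.

```python
def run_workflow(steps):
    """Run a linear workflow where each step receives all prior results."""
    results = {}
    for name, fn in steps:
        results[name] = fn(results)
    return results

# Mixed execution types composed in one workflow: deterministic physics,
# a statistical forecast, and an optimiser that consumes both.
steps = [
    ("physics",  lambda r: {"capacity_mw": 120.0}),
    ("forecast", lambda r: {"demand_mw": 110.0}),
    ("optimise", lambda r: {"dispatch_mw": min(r["physics"]["capacity_mw"],
                                               r["forecast"]["demand_mw"])}),
]
out = run_workflow(steps)
assert out["optimise"]["dispatch_mw"] == 110.0
```

In a production orchestrator each step would be a versioned, containerised model rather than a lambda, but the composition principle is the same.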
### Scenario & Decision Framework
OR provides infrastructure for structured scenario analysis. The MDK's Studies and Experiments model organises execution into parameterised runs, batch simulations, sensitivity analyses, and multi-objective optimisation. Decision logic is auditable — results are reproducible and constraint-aware.
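Two mechanics behind Studies and Experiments can be sketched directly: expanding a parameter grid into individual runs, and content-addressing each configuration so it can be reproduced exactly. The parameter names are illustrative.

```python
import itertools, json, hashlib

def expand_grid(parameters: dict) -> list[dict]:
    """Expand a parameter grid into one configuration per run."""
    keys = list(parameters)
    return [dict(zip(keys, values))
            for values in itertools.product(*parameters.values())]

def snapshot(config: dict) -> str:
    """Content-address a run configuration so it can be reproduced exactly:
    the same parameters always yield the same identifier."""
    canonical = json.dumps(config, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()[:12]

grid = expand_grid({"fleet_size": [40, 50], "cycle_time_min": [22, 25, 28]})
assert len(grid) == 6  # 2 x 3 parameterised runs

# Key order does not matter: canonicalisation makes snapshots stable.
assert snapshot(grid[0]) == snapshot({"cycle_time_min": 22, "fleet_size": 40})
```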
### Execution & Control Integration
OR supports the transition from analysis to execution via policy-based decision triggers and integration with OT (operational technology) control systems. The platform supports a spectrum from decision support through to adaptive automation, with operator oversight applied at configurable thresholds.
Nexus manages service deployment and monitoring. ORA extends this with agentic AI workflows for orchestrating platform capabilities from natural language prompts.
### Platform Components
| Component | Role |
|---|---|
| ORA | OR Assistant — AI application for planning and building OR applications, built on the ADK |
| DDK | Data Development Kit — schema-driven data layer and GraphQL server generation |
| MDK | Modelling Development Kit — workflow orchestration and AI/simulation model execution |
| FDK | Frontend Development Kit — UI framework for building operational interfaces |
| Nexus | Service management, deployment, and monitoring |
| SCDK | Source Control Development Kit — version management for platform artefacts |
| ADK | Agentic Development Kit — OR's AI runtime layer; powers ORA today, designed to extend into autonomous agent orchestration across all OR capabilities |
The SDK API and SDK UI form the shared core infrastructure connecting these components.
## Scalability and High-Compute
OR is built for the scale that Physical AI problems demand. The platform is designed to run high-compute simulation and AI workloads across distributed infrastructure including GPU compute for model inference and simulation.
This positions OR for cloud-scale physical operations, including:
- Multi-site, multi-asset systems operating simultaneously
- High-frequency simulation cycles running faster than real-world event cadence
- Parallel scenario evaluation across thousands of parameterised runs
- AI inference integrated into millisecond-range operational feedback loops
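Parallel scenario evaluation reduces, at its simplest, to fanning candidate configurations out across workers and keeping the best result. The sketch below uses Python's thread pool with a stand-in scoring function in place of a full simulation run.

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate_scenario(params: dict) -> float:
    """Stand-in for a full simulation run: score a candidate configuration."""
    return params["throughput"] / params["cost"]

# Candidate configurations; a real study might hold thousands.
scenarios = [{"throughput": t, "cost": c}
             for t in (100, 120, 140) for c in (10, 12)]

# Fan the evaluations out across workers and keep the best scenario.
with ThreadPoolExecutor(max_workers=4) as pool:
    scores = list(pool.map(evaluate_scenario, scenarios))

best = scenarios[scores.index(max(scores))]
assert best == {"throughput": 140, "cost": 10}
```

At platform scale the workers would be containers or GPU nodes rather than threads, but the fan-out/reduce shape is the same.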
## OT Integration
Optimal Reality (OR) connects to the operational technology (OT) layer using a secure, layered architecture that prioritises read-only access, network segmentation, and industrial protocols designed for critical infrastructure. In most deployments, OR integrates with OT systems such as PLCs, SCADA, and plant historians through standard industrial interfaces including OPC UA, MQTT, Modbus TCP, and vendor historians such as the PI System. Data is typically accessed via a demilitarised zone (OT DMZ) or secure data gateway that replicates or streams telemetry out of the control network into the OR platform without exposing control systems directly to external networks. This approach allows OR to ingest high-frequency sensor data, equipment telemetry, and operational events while maintaining strict separation between enterprise analytics platforms and safety-critical control environments.
Several integration patterns can be used depending on security requirements and operational maturity. The most conservative model uses historian-based replication, where OR consumes data from platforms such as PI System or site data lakes that already mirror OT telemetry. A second approach uses secure edge gateways that publish telemetry streams (e.g., via MQTT or OPC UA Pub/Sub) to the OR platform in near real time while maintaining one-way or tightly controlled communication channels. In more advanced implementations, OR can operate as a supervisory optimisation layer, providing recommendations or target setpoints back to operations through controlled interfaces with SCADA or dispatch systems, subject to operator approval and governance controls. Across all approaches, the architecture emphasises read-first integration, strong identity and access management, network zoning, and full auditability to ensure compliance with industrial cybersecurity standards such as IEC 62443 and NIST guidance for critical infrastructure.
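At the platform edge, ingested telemetry must be normalised from transport-level messages into typed readings, regardless of whether it arrived via MQTT, OPC UA, or a historian. The topic layout and payload fields below are assumptions for illustration, not a standard.

```python
import json
from dataclasses import dataclass

@dataclass(frozen=True)
class Reading:
    asset_id: str
    metric: str
    value: float
    timestamp_ms: int

def normalise(topic: str, payload: bytes) -> Reading:
    """Map a gateway telemetry message (topic + JSON payload) to a typed
    reading. The 'site/<asset>/<metric>' topic layout is illustrative."""
    _, asset_id, metric = topic.split("/")
    body = json.loads(payload)
    return Reading(asset_id, metric, float(body["v"]), int(body["ts"]))

reading = normalise("site/pump-07/flow_m3h",
                    b'{"v": 182.4, "ts": 1700000000000}')
assert reading.asset_id == "pump-07"
assert reading.value == 182.4
```

In a deployment, messages like this would arrive through the DMZ gateway described above; keeping the normalisation pure and transport-agnostic is what lets the same platform code sit behind historian replication, MQTT streams, or OPC UA Pub/Sub.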
## Use Cases
OR is designed to meet users where they are — from shift managers and domain experts to data scientists and software engineers, all operating on the same shared system state. See Use Cases for detailed end-to-end examples across port terminal operations, power grid dispatch, and mine fleet management.
## Design Principles
- AI-native — AI is embedded throughout the platform, not added as a layer. Every component is designed to be augmented by and orchestrated by AI agents.
- Faster than real-time — The platform is designed to process, model, and surface decisions faster than physical events unfold. Simulation precedes intervention.
- Model-agnostic — No single modelling paradigm is privileged. Physics models, ML models, and optimisation solvers coexist in the same execution environment.
- State-first — All computation references structured system state. The world model is the foundation, not an afterthought.
- Workflow-driven execution — Models are composed through explicit orchestration. Execution paths are defined, versioned, and auditable.
- Explainability — Decisions are reproducible and constraint-aware. The reasoning chain from state to decision is preserved.
- Incremental autonomy — Automation is layered, not assumed. The degree of autonomous execution is a controllable parameter, increasing over time as trust is established.
## Next Steps
- Understand the graph model — Graph Model & Ontology
- See OR in action — Use Cases
- Work with the data layer — Data Development Kit (DDK)
- Build and run models — Modelling Development Kit (MDK)
- Build operational interfaces — Frontend Development Kit (FDK)
