Use Cases

OR is designed to meet users where they are. A shift manager can open an interface, adjust a parameter, and run a scenario without writing a line of code. A domain expert with no software background can describe a simulation study to ORA and run it themselves. A data scientist can train a new model and promote it into a live workflow through controlled shadow testing. A software engineer can extend the data layer or build new interfaces. The same platform serves all of them, operating on the same shared system state.

The following examples show what these workflows look like in practice across three operational environments.

Container Port Terminal Operations

A delayed vessel doesn't just affect one berth — it ripples through crane allocation, yard density, and truck collections for hours. Terminal controllers need to see the impact of a disruption before they commit to a plan, not after.

OR maintains the terminal's live state — vessel ETAs, berth occupancy, crane availability, yard utilisation — and runs a planning workflow that chains arrival prediction, berth allocation, crane scheduling, and yard positioning models in sequence. The output is a ranked set of plans, each with projected crane utilisation, truck gate throughput, and yard impact.
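The chained workflow above can be pictured as a pipeline that threads the shared terminal state through each model in turn, then ranks the candidate plans. The sketch below is illustrative only: the function bodies, plan fields, vessel names, and scoring weights are stand-ins, not OR's actual models or APIs.

```python
from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    crane_utilisation: float   # projected fraction of crane capacity used
    gate_throughput: int       # projected trucks per hour through the gate
    yard_density: float        # projected yard utilisation fraction

def predict_arrivals(state):
    # Stand-in for the arrival-prediction model: shift scheduled ETAs
    # by the current average delay.
    return {v: eta + state["avg_delay"] for v, eta in state["scheduled_etas"].items()}

def allocate_berths(state, etas):
    # Stand-in for the berth-allocation step: earliest predicted
    # arrival gets the first free berth, and so on.
    order = sorted(etas, key=etas.get)
    return {v: f"berth-{i + 1}" for i, v in enumerate(order)}

def schedule_cranes_and_yard(state, berths):
    # Stand-in for crane scheduling and yard positioning; emits
    # candidate plans with their projected impacts.
    return [
        Plan("baseline", 0.78, 42, 0.71),
        Plan("shift-cranes", 0.85, 40, 0.66),
    ]

def rank_plans(plans):
    # Toy composite score; a real planner would use richer objectives.
    score = lambda p: p.crane_utilisation + p.gate_throughput / 60 - p.yard_density
    return sorted(plans, key=score, reverse=True)

state = {"scheduled_etas": {"MV Aurora": 10.0, "MV Botany": 8.5}, "avg_delay": 1.5}
etas = predict_arrivals(state)
berths = allocate_berths(state, etas)
ranked = rank_plans(schedule_cranes_and_yard(state, berths))
```

The point of the chain structure is that each model only needs to know its immediate inputs; the shared state object is what lets a delayed ETA propagate automatically through berth, crane, and yard decisions.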

The terminal controller sees a dashboard they interact with every shift. When a vessel flags a delay, the system surfaces a notification. The controller triggers a re-plan, reviews the top alternatives, confirms their preferred option, and the decision is logged with its full reasoning trail — the models that ran, the parameters used, and who confirmed it — so the next shift has complete context at handover.

The port operations planner — who understands the terminal deeply but doesn't write code — is asked to model the impact of adding a third berth during peak season. They open ORA and describe the study in plain language. ORA proposes an experiment configuration; the planner reviews it and adjusts one assumption about crane availability before running the study. The simulation results come back as a structured report they can share directly with the operations director, without involving an engineer at any point.

The software engineer integrating live weather data into the berth safety model describes the requirement to ORA, which drafts the DDK schema extension and resolver. The engineer reviews the generated code, requests one naming adjustment, and merges after a teammate approves the PR.

Power Grid Dispatch and Constraint Management

Renewable generation is variable. Demand fluctuates. The physics of power flow mean that generation must balance load in real time within strict thermal and voltage constraints. An error in dispatch scheduling can result in curtailment or, in the worst case, cascading failure.

OR runs a short-cycle dispatch workflow: demand and renewable generation forecasts feed into a unit commitment optimiser, whose output is validated by a physics-based power flow simulation. If the simulation finds a constraint violation, it feeds that information back into the optimiser and iterates until a feasible schedule is found. This cycle — mixing ML forecasting, mathematical optimisation, and physics simulation in a single automated loop — would be intractable to coordinate with standalone tools.
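The loop described above can be sketched as: propose a schedule, check it against the physics, and feed any violation back as a constraint until the schedule passes. Everything in this sketch is illustrative: the unit list, costs, the 90 MW line limit, and the function names are invented stand-ins for the real optimiser and power flow simulation.

```python
def unit_commitment(demand, cuts):
    # Stand-in optimiser: dispatch cheapest units first, honouring any
    # per-unit limits (cuts) learned from earlier physics checks.
    units = [("hydro", 50, 5.0), ("gas", 80, 9.0), ("coal", 120, 7.0)]  # (name, cap MW, $/MWh)
    schedule, remaining = {}, demand
    for name, cap, _cost in sorted(units, key=lambda u: u[2]):
        mw = min(cap, cuts.get(name, cap), remaining)
        schedule[name] = mw
        remaining -= mw
    return schedule

def power_flow_check(schedule):
    # Stand-in physics simulation: the line from the coal plant
    # thermally limits its export to 90 MW.
    if schedule.get("coal", 0) > 90:
        return ("coal", 90)  # violated unit and its feasible limit
    return None

def solve_dispatch(demand):
    cuts = {}
    for _ in range(10):  # bounded iteration
        schedule = unit_commitment(demand, cuts)
        violation = power_flow_check(schedule)
        if violation is None:
            return schedule  # physics-validated schedule
        unit, limit = violation
        cuts[unit] = limit  # feed the constraint back into the optimiser
    raise RuntimeError("no feasible schedule found within iteration limit")

schedule = solve_dispatch(demand=200)
```

In the first pass the optimiser loads the coal unit to 120 MW, the physics check rejects it, and the second pass redistributes the shortfall to the gas unit. That feedback structure, rather than any individual model, is what the automated loop coordinates.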

The control room analyst receives a recommended dispatch schedule that has already been physics-validated. Flagged constraints are highlighted in the interface. The analyst drills into the reasoning for any flagged item, annotates concerns for the shift supervisor if needed, and confirms the schedule — every confirmation timestamped and attributed for regulatory audit.

The grid operations manager — an experienced network engineer who doesn't write software — needs to model the impact of an interconnector derate during summer peak. They describe the scenario to ORA, which shows how it has interpreted the request into experiment parameters before running anything. The manager adjusts one assumption, confirms, and ORA executes the study. The results — at-risk dispatch windows, projected curtailment, operational mitigations — come back as a report the manager can share directly with the planning team and iterate on independently.

The data scientist monitoring model quality notices the renewable forecast error trending upward. They pull recent operational data from DDK, retrain the forecast model, and register the new version in the MDK model registry. Rather than promoting directly, they configure a shadow experiment — the new model runs in parallel with the production model for two weeks, outputs captured but not acted on. Once the accuracy improvement is confirmed, the model is promoted. Operators notice nothing except better schedule quality over the following weeks.
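A shadow experiment of the kind described above has a simple shape: both models see the same inputs, both outputs are logged, but only the production forecast is acted on. This sketch assumes a mean-absolute-error comparison and invents the `ShadowRunner` name and the toy models; it is not the MDK API.

```python
import statistics

class ShadowRunner:
    def __init__(self, production, candidate):
        self.production = production
        self.candidate = candidate
        self.log = []  # (production_error, candidate_error) pairs

    def forecast(self, inputs, actual):
        prod = self.production(inputs)
        shadow = self.candidate(inputs)       # captured, never acted on
        self.log.append((abs(prod - actual), abs(shadow - actual)))
        return prod                           # only the production output is used

    def should_promote(self):
        # Promote only if the candidate's mean absolute error is lower
        # over the whole shadow window.
        prod_mae = statistics.mean(e for e, _ in self.log)
        cand_mae = statistics.mean(e for _, e in self.log)
        return cand_mae < prod_mae

# Toy forecasters: production over-forecasts by 10%, candidate by 2%.
runner = ShadowRunner(production=lambda x: x * 1.10, candidate=lambda x: x * 1.02)
for observed_mw in [100, 120, 90]:
    runner.forecast(observed_mw, actual=observed_mw)
```

Because the candidate's output is logged but never dispatched, a regression in the new model costs nothing operationally; promotion happens only after the comparison window closes.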

Open-Cut Mine Fleet Management

On an active mine site, the optimal truck-to-shovel assignment changes continuously as trucks complete cycles, shovels relocate, and plant constraints shift. A static schedule becomes wrong within hours. The coordination problem runs around the clock.

The shift manager reviews the production forecast and truck utilisation at handover. Shovel 3 is going offline for a tyre change — they update the shovel availability in the interface, which triggers a re-simulation of the remaining shift. They review the projected impact on grade blend, confirm the adjusted plan, and it's locked for the shift with a full record of what changed and why.

The mine geologist — back from mapping a newly identified ore zone — wants to understand how bringing it into production early affects plant grade blend over the next quarter. They describe the scenario to ORA, including grade estimates from their core samples. ORA shows the geologist how it has translated the request into simulation parameters before running. The geologist confirms, the simulation runs, and the results — a production curve and grade blend projection — feed directly into the next planning meeting. The geologist ran the study themselves; the modelling team wasn't involved.

The modelling engineer monitors dispatch model performance through the MDK dashboard. A new ore grade predictor has been trained on drill data from a recently opened pit area. They register it, configure a shadow experiment alongside the live workflow, and let it run for a week. The comparison is clear — improved accuracy in the new zones — and the updated model is promoted without touching the optimiser or route selector, and without disrupting live dispatch.

The software engineer integrating a new autonomous truck fleet describes the telemetry format to ORA, which drafts the DDK schema extension, resolver updates, and the MDK workflow step to normalise the new vehicle type into the dispatch loop. The engineer reviews, corrects a naming inconsistency, and merges. A mine systems engineer on the same team — less experienced with the codebase — uses ORA to build a monitoring panel for the autonomous fleet in the FDK Panel Builder: describing what they want, reviewing what ORA generates, connecting it to live data, and publishing to the control room dashboard. No frontend code written.

User documentation for Optimal Reality