
Remote Environments (EKS via SSM)

[Architecture overview diagram: Platform Users (engineers and low-code ops users) → SDK toolkits (UI/FDK, DDK, MDK — including Nexus deployment control, live monitoring, and registry browser — and SCDK) → SDK API (GraphQL Federation Gateway) → domain microservices → deployed OR applications (Rail Ops, Mine Mgmt, and Port Ops dashboards) → Application Users (operations teams)]

Nexus supports two environment types: local (KinD) and remote (AWS EKS). Remote environments are staging, pre-production, and production installations of the OR platform running on AWS EKS. Nexus connects to them using AWS SSM Session Manager port forwarding — no VPN, no exposed Kubernetes API endpoint, and no kubeconfig file required on the developer's machine.

This page covers how remote environment connectivity works, how SSM port forwarding is established, and what operations are available once connected.


Why SSM Port Forwarding?

AWS EKS clusters in the OR platform are private: the Kubernetes API server is not exposed to the public internet. The standard approach to reach a private EKS cluster involves either a VPN or a bastion host with an SSH jump. Both create operational overhead — VPN clients to manage, SSH keys to distribute, and network-level access controls to configure per developer.

AWS SSM Session Manager solves this differently. SSM uses the AWS IAM identity of the caller to authenticate and authorise the session, and tunnels traffic through the AWS SSM service rather than requiring network-level access. The result is that any developer with the correct IAM role for a project environment can open a port-forward tunnel to the cluster using only their AWS credentials — no VPN, no SSH key, no firewall rule.

The OR platform extends this with licence-based access control: the AWS account ID required for a session is embedded in the project's licence, scoped per environment. A developer does not need to know which AWS account an environment runs in — the platform resolves it automatically.


Connection Architecture

When Nexus activates a remote environment, it opens two simultaneous SSM port-forward tunnels through SubActivateSsm:

Tunnel   Target         Remote port   Local port   Purpose
Kube     TargetMiKube   50000         1180         Kubernetes API (kubectl-equivalent operations)
Helm     TargetMiHelm   50000         9998         Helm chart operations
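The two tunnels both target the standard AWS-StartPortForwardingSession SSM document, which takes the remote port on the managed instance and the local port to bind on the developer's machine. A minimal sketch of building those parameters (the helper name and shape are illustrative, not Nexus source):

```go
package main

import "fmt"

// portForwardParams builds the parameter map for the standard
// AWS-StartPortForwardingSession SSM document: the remote port on the
// managed instance and the local port to bind on the developer machine.
func portForwardParams(remotePort, localPort string) map[string][]string {
	return map[string][]string{
		"portNumber":      {remotePort},
		"localPortNumber": {localPort},
	}
}

func main() {
	// The two tunnels from the table above.
	kube := portForwardParams("50000", "1180")
	helm := portForwardParams("50000", "9998")
	fmt.Println("kube →", kube["localPortNumber"][0], "helm →", helm["localPortNumber"][0])
}
```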

Both tunnels run as streaming subscriptions (<-chan string) that emit status messages to the UI as they are established and while they are active. The connection is maintained for the duration of the Nexus session for that environment; the channels are closed when the context is cancelled (i.e. when the user navigates away or the session ends).

Developer machine
    ↓ AWS SSM port-forward (IAM authenticated)
Bastion / SSM target (port 50000)
    → EKS control plane (port 1180)  ← Kubernetes API operations
    → Helm service (port 9998)       ← Helm install/upgrade operations

AWS Account Resolution

Nexus resolves the AWS account ID for an SSM session at runtime from the project's licence. The licence contains a list of ProjectRole entries, each mapping an environment name to an AWS account ID. When connecting to a remote environment:

  1. SubRemoteEnvironment reads the project licence using the project GUID.
  2. It finds the ProjectRole entry whose EnvironmentName matches the envID.
  3. It uses the AwsAccountID from that entry as the accID parameter for SubActivateSsm.

If no role with an AwsAccountID exists for the environment, the connection fails immediately with a message directing the user to contact their system admin to be added to remote access for that project. AWS credentials never appear in config files or code — they are always sourced from the licence at runtime.
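The three resolution steps above amount to a lookup over the licence's role entries, with the connection failing fast when no role grants remote access. A minimal sketch, assuming field names from the text (the function name and error wording are illustrative):

```go
package main

import (
	"errors"
	"fmt"
)

// ProjectRole maps an environment name to the AWS account that hosts it,
// as carried in the project licence.
type ProjectRole struct {
	EnvironmentName string
	AwsAccountID    string
}

// resolveAccountID finds the AWS account for an environment, failing the
// way the connection flow does when no role grants remote access.
func resolveAccountID(roles []ProjectRole, envID string) (string, error) {
	for _, r := range roles {
		if r.EnvironmentName == envID && r.AwsAccountID != "" {
			return r.AwsAccountID, nil
		}
	}
	return "", errors.New("no remote access for environment " + envID +
		": contact your system admin to be added to remote access for this project")
}

func main() {
	roles := []ProjectRole{
		{EnvironmentName: "staging", AwsAccountID: "111111111111"},
		{EnvironmentName: "production", AwsAccountID: "222222222222"},
	}
	acc, err := resolveAccountID(roles, "staging")
	fmt.Println(acc, err)
}
```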


The Remote Cluster Client

Once the SSM tunnels are active, Nexus uses RemoteClusterClient to issue Kubernetes operations. This client is returned by NewClusterClient when the environment type is remote:

go
// domain/kube/client/interface.go
if env.Type == optimalClient.EnvironmentTypeRemote {
    c := NewRemoteClusterInitClient(details)
    initClient = c
}

RemoteClusterClient wraps the same LocalClusterClient struct but uses inmemory.NewRemoteClient() instead of a kubeconfig-backed in-memory client. The remote client routes kubectl-equivalent commands through the Kubernetes API tunnel on localhost:1180 — the local end of the SSM port-forward. No kubeconfig file is written or read.

This means every operation available on a local KinD cluster is equally available on a remote EKS cluster:

  • List and inspect deployments
  • Fetch live namespace health (CPU, memory, replica status)
  • Read Kubernetes events
  • Fetch snapshot logs
  • Stream live container logs
  • Create namespaces and secrets

The calling code is identical in both cases — the environment type abstraction is fully transparent above the KubeClient interface.
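The transparency claim can be illustrated with a toy version of the environment-type switch: the caller sees only the interface, and the factory decides whether operations go through a kubeconfig-backed client or the SSM tunnel endpoint (the interface shape and type names here are illustrative, not the Nexus KubeClient definition):

```go
package main

import "fmt"

// KubeClient is the abstraction the calling code sees; local and remote
// clients satisfy it identically.
type KubeClient interface {
	APIEndpoint() string
}

type localClient struct{}  // kubeconfig-backed KinD client
type remoteClient struct{} // routes through the SSM tunnel

func (localClient) APIEndpoint() string  { return "kubeconfig (KinD)" }
func (remoteClient) APIEndpoint() string { return "localhost:1180 (SSM tunnel)" }

// newClusterClient selects the implementation by environment type,
// mirroring the switch in the snippet earlier on this page.
func newClusterClient(envType string) KubeClient {
	if envType == "remote" {
		return remoteClient{}
	}
	return localClient{}
}

func main() {
	for _, t := range []string{"local", "remote"} {
		fmt.Println(t, "→", newClusterClient(t).APIEndpoint())
	}
}
```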


Deploying to a Remote Cluster

Deployment to a remote EKS environment follows the same flow as a local deployment. Nexus sends InstallResource commands to the Helm application layer, which communicates with the Helm plugin over localhost:9998 — the local end of the Helm SSM tunnel. The Helm plugin constructs the Kubernetes resources and applies them to the cluster.

The deployment configuration (image tag, replica count, environment variables, ingress settings, persistent volumes) is sourced from the project's environment-specific namespace application values — the same Project → Environment → Namespace → Application hierarchy used for local environments. Operators can configure different image versions and replica counts for staging vs. production through the same Nexus interface.
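The Project → Environment → Namespace → Application hierarchy can be modelled as a nested lookup, which is what lets staging and production pin different image versions and replica counts (field and function names here are illustrative, not the Nexus values schema):

```go
package main

import "fmt"

// deployValues carries the per-application settings resolved from the
// environment-specific namespace values.
type deployValues struct {
	ImageTag string
	Replicas int
}

// valuesFor walks Environment → Namespace → Application within a project,
// reporting whether a value set exists at that path.
func valuesFor(project map[string]map[string]map[string]deployValues,
	env, namespace, app string) (deployValues, bool) {
	ns, ok := project[env]
	if !ok {
		return deployValues{}, false
	}
	apps, ok := ns[namespace]
	if !ok {
		return deployValues{}, false
	}
	v, ok := apps[app]
	return v, ok
}

func main() {
	project := map[string]map[string]map[string]deployValues{
		"staging":    {"rail-ops": {"dashboard": {ImageTag: "1.4.0-rc1", Replicas: 1}}},
		"production": {"rail-ops": {"dashboard": {ImageTag: "1.3.2", Replicas: 3}}},
	}
	v, _ := valuesFor(project, "production", "rail-ops", "dashboard")
	fmt.Println(v.ImageTag, v.Replicas)
}
```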


Troubleshooting Remote Connections

Tunnel fails to activate

If SubActivateSsm errors during connection:

  • Verify you are authenticated to the correct AWS account for the environment
  • Confirm you have been granted the EnvironmentRemoteAccess role for the project (contact your system admin if not)
  • Check that the SSM managed instance for the environment is online in the AWS console

Kubernetes operations fail after connection

If live namespace queries or log streaming fail despite tunnels being established:

  • The EKS cluster control plane may be temporarily unavailable — check cluster health in the AWS console
  • The namespace may not exist yet — trigger a fresh deployment to create it
  • Confirm the correct namespace is selected for the environment in the Nexus UI

Logs show "cluster is not running"

This error appears when GetDeploymentLogs or GetDeploymentEvents cannot reach the Kubernetes API. For remote environments this typically means the SSM tunnels were not activated before navigating to the logs view. Activate the remote environment connection first from the Nexus environment panel.


See also

  • Helm Plugin — The gRPC plugin that wraps the Helm CLI (connected via the SSM Helm tunnel for remote environments)

User documentation for Optimal Reality