
Usage

[Platform architecture diagram: platform users (engineers and low-code ops users) work through ORA and the SDK UI; the SDK API (GraphQL federation gateway) fronts the developer kits — FDK, DDK, MDK, SCDK — and the Nexus deployment layer, which deploys the domain microservices and the OR applications (rail, mine, and port ops dashboards) used by operations teams.]

This guide covers how to use the DDK service to create and manage GraphQL servers.

Prerequisites

Required

  • Access to the SDK API (which includes the DDK service)
  • PostgreSQL database (version 12+)
  • Basic understanding of GraphQL
  • Your schema files (.graphqls)

Optional

  • Redis (for caching)
  • Docker (for containerized development)

Understanding the Architecture

Service Flow

You → SDK API (GraphQL) → DDK Service (gRPC :7788) → Generated Server

The DDK runs as a gRPC service (OR_DDK) that accepts protobuf requests and streams progress messages back. The SDK API exposes a user-facing GraphQL interface that proxies to these gRPC methods.

Internal Configuration

The DDK service is initialized with a Configuration that locates key template directories:

| Config Field | Purpose |
| --- | --- |
| BaseCodeLocation | Path to _templates/base_code/ — the full Go server scaffold |
| WorkspaceCodeLocation | Path to _templates/workspace_code/ — user workspace boilerplate |
| GeneratorLocation | Path to the pre-built generator binary (optimal-reality-ddk-be-generator) |
| DeployFlag | If "true", auto-starts the server after creation via make docker-start |
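
A minimal Go sketch of this configuration (field names come from the table above; the env-var loading, variable names, and struct layout are illustrative assumptions, not the DDK's actual mechanism):

```go
package main

import (
	"fmt"
	"os"
)

// Configuration locates the template directories and the generator binary
// used by the DDK service. Field names follow the table above; populating
// them from environment variables is an illustrative assumption.
type Configuration struct {
	BaseCodeLocation      string // path to _templates/base_code/
	WorkspaceCodeLocation string // path to _templates/workspace_code/
	GeneratorLocation     string // path to optimal-reality-ddk-be-generator
	DeployFlag            string // "true" => auto-start via make docker-start
}

func loadConfiguration() Configuration {
	// Hypothetical env var names — for illustration only.
	return Configuration{
		BaseCodeLocation:      os.Getenv("BASE_CODE_LOCATION"),
		WorkspaceCodeLocation: os.Getenv("WORKSPACE_CODE_LOCATION"),
		GeneratorLocation:     os.Getenv("GENERATOR_LOCATION"),
		DeployFlag:            os.Getenv("DEPLOY_FLAG"),
	}
}

func main() {
	cfg := loadConfiguration()
	if cfg.DeployFlag == "true" {
		fmt.Println("would run: make docker-start")
	} else {
		fmt.Println("deploy skipped")
	}
}
```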

Streaming Progress

All gRPC methods use a streaming response pattern. Operations run in a goroutine that sends messages through a channel:

  • {"DATA": "message"} — Progress update (streamed to client)
  • {"ERROR": "message"} — Error occurred (closes channel, returns gRPC error)
  • {"DONE": "message"} — Operation complete (closes channel, final message)
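
The pattern can be sketched in Go — the message keys (DATA, ERROR, DONE) come from the list above, while the simulated operation and consumer loop are illustrative:

```go
package main

import "fmt"

// runOperation simulates a DDK gRPC method: the work happens in a
// goroutine that streams progress maps through a channel and closes
// it after the final DONE (or ERROR) message.
func runOperation() <-chan map[string]string {
	out := make(chan map[string]string)
	go func() {
		defer close(out)
		out <- map[string]string{"DATA": "creating server directory"}
		out <- map[string]string{"DATA": "moving files to new server"}
		out <- map[string]string{"DONE": "server created successfully"}
	}()
	return out
}

func main() {
	for msg := range runOperation() {
		if v, ok := msg["ERROR"]; ok {
			fmt.Println("error:", v) // would surface as a gRPC error
			return
		}
		if v, ok := msg["DONE"]; ok {
			fmt.Println("done:", v)
			return
		}
		fmt.Println("progress:", msg["DATA"])
	}
}
```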

Creating a Server

The CreateServer method scaffolds a new Go server without schema processing — it copies the base code template and configures the server.

What Happens Internally

  1. Copy workspace boilerplate → Creates {serverPath}/workspace/ from _templates/workspace_code/
  2. Create server directory → {serverPath}/{serverName}/
  3. Copy base code → Copies entire _templates/base_code/ template into server directory
  4. Create config file → Generates .env configuration from parameters
  5. Create Dockerfile → Generates from Dockerfile.tpl template
  6. Create docker-compose → Generates compose file with PostgreSQL & Redis services
  7. Deploy (optional) → If DeployFlag is set, runs make docker-start
  8. Generate JWT → If JWT enabled, generates a signing key and initial token
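
Step 4 (the config file) can be sketched as a small Go helper. The keys mirror the Environment Variables section later in this guide; the helper itself is a hypothetical illustration, not the DDK's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// buildEnvFile renders .env contents for the new server from a map of
// values, in a stable order. Illustrative only — the DDK's real template
// and key set may differ.
func buildEnvFile(vars map[string]string, order []string) string {
	var b strings.Builder
	for _, k := range order {
		fmt.Fprintf(&b, "%s=%s\n", k, vars[k])
	}
	return b.String()
}

func main() {
	order := []string{"DB_NAME", "DB_USER", "DB_PORT", "SERVER_PORT"}
	env := buildEnvFile(map[string]string{
		"DB_NAME":     "myapp_db",
		"DB_USER":     "postgres",
		"DB_PORT":     "5432",
		"SERVER_PORT": "8080",
	}, order)
	fmt.Print(env)
}
```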

Step 1: Prepare Your Schema

Create a .graphqls file with your data model:

user.graphqls:

graphql
type User @required(type: "CREATE,READ,UPDATE,DELETE", table: "true") {
  id: ID! @constraint(type: "primarykey")
  name: String!
  email: String! @constraint(type: "unique")
  age: Int
}

type Post @required(type: "CREATE,READ,UPDATE,DELETE", table: "true") {
  id: ID! @constraint(type: "primarykey")
  title: String!
  content: String!
  authorId: String
}

Step 2: Call CreateServer via SDK API

Using the SDK API GraphQL interface, execute a mutation that calls the DDK's CreateServer gRPC method:

Configuration Parameters:

javascript
{
  serverName: "my-app",
  serverPort: 8080,
  serverPath: "/path/to/generated/server",
  dbSchema: "public",
  dbName: "myapp_db",
  dbUrl: "localhost",
  dbUser: "postgres",
  dbPassword: "password",
  dbPort: 5432,
  redisUrl: "localhost:6379",  // Optional
  jwt: true,                     // Enable JWT auth
  gormAutomigrate: true,         // Auto-migrate database
  doAwsTokenRefresh: false,
  serviceLogLevel: "info",
  serviceName: "myapp-api"
}

Step 3: Monitor Generation

The DDK streams progress messages as it generates your server:

starting server creation
creating server directory
moving files to new server
creating config file in new server
creating docker file for new server
creating jwt token to access server
server created successfully with token: eyJ...

Step 4: Generated Output

The DDK creates a complete Go application following DDD architecture:

my-app/
├── application/
│   ├── repository/          # Repository interfaces (generated)
│   └── service/             # Service interfaces
├── cmd/
│   └── server.go            # Application entry point
├── domain/                  # Domain layer
├── entrypoint/
│   └── controller/          # DDD controllers
├── graph/
│   ├── exceptions/          # DatabaseException type
│   ├── generated/           # gqlgen generated code
│   ├── model/               # GORM model structs
│   ├── resolver/            # Generated + custom resolvers
│   └── schema/              # GraphQL schema files
├── infrastructure/
│   ├── directives/          # GraphQL directive handlers
│   ├── migrations/          # Database migration config
│   ├── registry/            # DI registry (wires everything)
│   └── repository/          # Repository implementations (GORM)
├── internal/
│   └── generator/           # Generator binary (temporary)
├── mock/                    # gomock generated mocks
├── server/
│   ├── graceful/            # Graceful shutdown
│   ├── handlers/            # GraphQL & error handlers
│   ├── middleware/          # CORS, JWT, TX, Zap middleware
│   └── router.go            # Gin router setup
├── service/                 # Service implementations
├── test/                    # Generated test files
├── docker-compose.yml
├── Dockerfile
├── Makefile
├── gqlgen.yml
├── go.mod
└── go.sum

Creating a Server with Schema

The CreateServerWithSchema method combines server creation with schema processing in a single call. This is the more common method for creating a fully functional server.

What Happens Internally

  1. Validate schema → Runs two-phase validation on schema files
  2. Create server directory → {serverPath}/{serverName}/
  3. Copy base code → Copies _templates/base_code/ template
  4. Create resolvers → Reads .graphqls files and generates default CRUD resolvers (default.graphqls)
  5. Place schemas → Copies schema files into server's infrastructure/ directory
  6. Workspace migration → Copies custom resolver files from workspace to server
  7. Run gqlgen → Executes make gqlgen to generate Go code from schemas
  8. Copy models → Moves models_gen.go to workspace
  9. Copy custom resolvers → Moves custom resolver stubs back to workspace
  10. Copy migration config → Copies Atlas migration loader/main.go and atlas.hcl
  11. Create config → Generates .env and Docker files
  12. Deploy (optional) → Starts the server

Default Resolver Generation

The DDK automatically generates CRUD resolvers based on @required directives. For each type, it inspects the type parameter:

graphql
# This generates: createUser, getUser, getUserList, updateUser, deleteUser
type User @required(type: "CREATE,READ,UPDATE,DELETE", table: "true") { ... }

# This generates only: getProduct, getProductList
type Product @required(type: "READ", table: "true") { ... }

The resolver builder:

  • Parses type definitions and field types
  • Maps GraphQL scalars to input types (e.g., Point → InputPoint)
  • Creates a default.graphqls file with standard Query and Mutation entries
  • Marks resolvers with @resolver(type: "DEFAULT", operation: "CREATE|READ|UPDATE|DELETE")
  • Adds QueryParams support on list queries for pagination/sorting
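
The operation-name derivation can be sketched as a small Go helper. It illustrates the naming rules shown above (createX, getX, getXList, updateX, deleteX); it is not the DDK's actual implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// defaultResolvers derives the CRUD operation names generated for a type
// from the comma-separated `type` parameter of the @required directive.
func defaultResolvers(typeName, required string) []string {
	var ops []string
	for _, op := range strings.Split(required, ",") {
		switch strings.TrimSpace(op) {
		case "CREATE":
			ops = append(ops, "create"+typeName)
		case "READ":
			// READ produces both a single-item and a list query.
			ops = append(ops, "get"+typeName, "get"+typeName+"List")
		case "UPDATE":
			ops = append(ops, "update"+typeName)
		case "DELETE":
			ops = append(ops, "delete"+typeName)
		}
	}
	return ops
}

func main() {
	fmt.Println(defaultResolvers("User", "CREATE,READ,UPDATE,DELETE"))
	// [createUser getUser getUserList updateUser deleteUser]
	fmt.Println(defaultResolvers("Product", "READ"))
	// [getProduct getProductList]
}
```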

Regenerating a Server

When you update your schema, use ReGenerateServer instead of creating from scratch. This is the primary method for iterating on an existing server.

When to Regenerate

  • Schema changes: Adding/removing types or fields
  • Relationship updates: Modifying @mapping directives
  • Constraint changes: Updating @constraint directives
  • Custom resolver additions: Adding new custom operations

What Happens Internally

The regeneration pipeline has 10 distinct phases:

  1. Validate schema → Two-phase validation (semantic + pragmatic) on {schemaPath}/graph/schema/
  2. Create resolvers → Reads all .graphqls files (excluding _custom_resolvers.graphqls), generates default.graphqls with CRUD operations
  3. Place schemas → Copies schema files to {serverPath}/{serverName}/graph/schema/
  4. Workspace → Server migration → Copies custom code from workspace to server:
    • *.custom.resolvers.go files from graph/resolver/
    • application/, domain/, entrypoint/ directories
    • infrastructure/registry/, infrastructure/repository/ (only *.repository.go and *.custom.repository.go)
    • infrastructure/directives/, infrastructure/custom/
    • graph/model/ (only *.custom.go files)
    • graph/exceptions/, test/, graph/generated/
    • server/graceful/shutdownfunctions.go
  5. Clean generator → Removes and recreates internal/generator/ directory
  6. Copy generator binary → Places the pre-built optimal-reality-ddk-be-generator binary
  7. Run gqlgen → go generate . — Executes gqlgen to generate GraphQL runtime code
  8. Run generator plugins → go generate ./internal/generator — Runs the 6 DDK plugins (modelgen, Relay, Resolver, CustomResolver, Migration, DDD)
  9. Generate mocks → go generate ./application/... — Creates gomock mocks for repository interfaces
  10. Server → Workspace migration → Copies generated code back to workspace (models, mocks, resolvers, migrations)
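
The selective copies in phase 4 amount to a suffix filter over file names. A hedged Go sketch (the DDK's real matching, e.g. via FindCustomResolverFiles(), may differ):

```go
package main

import (
	"fmt"
	"strings"
)

// isWorkspaceOwned reports whether a file in graph/resolver/, graph/model/,
// or infrastructure/repository/ belongs to the user and must be carried
// into the server before regeneration. Suffixes follow the phase-4 list.
func isWorkspaceOwned(name string) bool {
	for _, suffix := range []string{
		".custom.resolvers.go",
		".custom.repository.go",
		".repository.go",
		".custom.go",
	} {
		if strings.HasSuffix(name, suffix) {
			return true
		}
	}
	return false
}

func main() {
	for _, f := range []string{
		"getuserbyemail.custom.resolvers.go", // user-owned: copied
		"user.resolvers.go",                  // generated: regenerated instead
		"user.repository.go",                 // user-owned: copied
		"models_gen.go",                      // generated: regenerated instead
	} {
		fmt.Println(f, isWorkspaceOwned(f))
	}
}
```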

Regeneration Process

Step 1: Update Your Schema

Edit your .graphqls files in the workspace's graph/schema/ directory:

graphql
type User @required(type: "CREATE,READ,UPDATE,DELETE", table: "true") {
  id: ID! @constraint(type: "primarykey")
  name: String!
  email: String! @constraint(type: "unique")
  age: Int
  bio: String # New field added
}

Step 2: Call ReGenerateServer

Via SDK API, execute regeneration:

Parameters:

javascript
{
  serverName: "my-app",
  serverPath: "/path/to/generated/server",
  schemaPath: "/path/to/schemas"         // Note: workspace is at {schemaPath}/graph/schema/
}

Step 3: What Gets Updated

Regenerated (overwritten each time):

  • Generated resolvers (*.resolvers.go) — existing implementations preserved via gqlgen's rewriter
  • GraphQL model structs (models_gen.go)
  • gqlgen runtime code (graph/generated/)
  • DDD layers: application interfaces, infrastructure repositories, entrypoint controllers, registry
  • Default resolver schema (default.graphqls)
  • Mock files (mock/)

Preserved (never overwritten):

  • Custom resolver implementations (*.custom.resolvers.go)
  • Custom model extensions (*.custom.go in graph/model/)
  • Custom repository code (*.custom.repository.go)
  • Server graceful shutdown functions
  • Infrastructure custom code (infrastructure/custom/)
  • Test files

Important Notes

  • Always use ReGenerateServer for existing projects
  • Custom resolvers are preserved automatically (see How Resolver Preservation Works)
  • The schema path is treated as the workspace location — schemas are read from {schemaPath}/graph/schema/
  • The generator binary is cleaned up after generation (removed from internal/generator/)

Validating Schemas

Before generating or regenerating, validate your schemas to catch errors early.

Using ValidateSchema

Call the ValidateSchema gRPC method via SDK API:

Parameters:

javascript
{
  locationOfFiles: "/path/to/schemas"
}

Two-Phase Validation

The ValidateSchemaFile function performs validation in two phases (see Architecture — Schema Validation):

Phase 1 — Semantic Validation (semanticValidation):

  • Directive syntax correctness (@required, @constraint, @mapping)
  • Type references resolve to defined types
  • Constraint types are valid (from the 18+ supported types)
  • Mapping directive parameters are valid

Phase 2 — Pragmatic Validation (pragmaticValidation):

  • Every table type has a primary key
  • Foreign key references exist and are correct
  • Relationship mappings are valid (one2one, one2many, many2many)
  • Check constraint SQL is syntactically valid
  • No circular back-references in relationships
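
One phase-2 rule — "every table type has a primary key" — sketched in Go over a toy schema model. The real validator operates on parsed GraphQL ASTs; the struct names here are invented for illustration:

```go
package main

import "fmt"

// field is a toy model of a schema field and its @constraint type.
type field struct {
	Name       string
	Constraint string // e.g. "primarykey", "unique", or "" for none
}

// checkPrimaryKeys returns one error message per table type that lacks
// a field with the "primarykey" constraint, in the same format as the
// validation failures shown below.
func checkPrimaryKeys(types map[string][]field) []string {
	var errs []string
	for name, fields := range types {
		found := false
		for _, f := range fields {
			if f.Constraint == "primarykey" {
				found = true
				break
			}
		}
		if !found {
			errs = append(errs, fmt.Sprintf("Type '%s': missing primary key constraint", name))
		}
	}
	return errs
}

func main() {
	types := map[string][]field{
		"User": {{Name: "id", Constraint: "primarykey"}, {Name: "email", Constraint: "unique"}},
		"Post": {{Name: "title"}},
	}
	for _, e := range checkPrimaryKeys(types) {
		fmt.Println(e)
	}
}
```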

Validation Response

Success (returns the validated schema content):

"Schema validation passed"

Failure (returns gRPC FailedPrecondition error with details):

Type 'Post': missing primary key constraint
Type 'User': foreign key 'profileId' references non-existent type 'Profile'
Type 'Product': check constraint has invalid SQL syntax

Best Practice

Always validate before generation:

  1. Update schema files
  2. Run ValidateSchema
  3. Fix any errors
  4. Run ReGenerateServer or CreateServerWithSchema

Deleting Objects

The DeleteObject method removes a type from your schema by placing a migration.json file and running a regeneration.

What Happens Internally

  1. Reads the migration.json from the deletion path
  2. Places it in the server directory
  3. Calls ReGenerateServer internally — the MigrationPlugin detects the migration file and drops the associated tables/columns

Parameters

javascript
{
  serverName: "my-app",
  serverPath: "/path/to/generated/server",
  schemaPath: "/path/to/schemas",
  deletionJson: "/path/to/deletion/config"   // Contains migration.json
}

Migration JSON Format

The migration.json describes what to remove:

json
{
  "drop": ["TypeName"],
  "dropFields": {
    "ExistingType": ["fieldToRemove"]
  }
}

Testing Custom Logic

The TestCustomFunction method creates a temporary test server for validating custom logic before deploying changes.

What Happens Internally

  1. Creates a new server directory
  2. Copies base code template
  3. Reads schema.graphqls and generates default resolvers
  4. Reads optional custom.graphqls for custom operations
  5. Reads optional custom_implementation.json for custom autogeneration
  6. Runs gqlgen code generation
  7. Creates config file and Docker setup
  8. Optionally starts the server for testing

Parameters

javascript
{
  serverName: "test-app",
  serverPath: "/path/to/test/server",
  serverPort: 9090,
  schemaPath: "/path/to/test/schemas",    // Contains schema.graphqls
  dbSchema: "public",
  dbName: "test_db",
  dbUrl: "localhost",
  dbUser: "postgres",
  dbPassword: "password",
  dbPort: 5432
}

Working with Custom Resolvers

Adding Custom Operations

Step 1: Create Custom Schema

Create {name}.custom.graphqls:

graphql
extend type Query {
  getUserByEmail(email: String!): User! @resolver(type: "CUSTOM")
}

extend type Mutation {
  promoteToAdmin(userId: ID!): User! @resolver(type: "CUSTOM")
}

Step 2: Regenerate

Run ReGenerateServer. The DDK's CustomResolverPlugin creates stub files only for new operations (skips existing files):

graph/resolver/
├── getuserbyemail.custom.resolvers.go   # New stub
├── promotetoadmin.custom.resolvers.go   # New stub
test/
├── getuserbyemail.custom.resolvers_test.go   # New test stub
├── promotetoadmin.custom.resolvers_test.go   # New test stub

Step 3: Implement

Edit the generated stub files. The stub template returns a zero value with an error:

go
func (r *queryResolver) resolver_GetUserByEmail(
  ctx context.Context,
  email string,
) (model.User, error) {
  var target model.User
  return target, errors.New("method not implemented")
}

Replace with your implementation:

go
func (r *queryResolver) resolver_GetUserByEmail(
  ctx context.Context,
  email string,
) (model.User, error) {
  tx := ctx.Value("tx").(*gorm.DB)
  var user model.User
  err := tx.Where("email = ?", email).First(&user).Error
  return user, err
}

Step 4: Test

Use GraphQL Playground to test:

graphql
query {
  getUserByEmail(email: "john@example.com") {
    id
    name
    email
  }
}

Step 5: Re-Regenerate (Safe)

When you add more custom operations:

  1. Run ReGenerateServer again
  2. Existing implementations are preserved — the plugin checks os.Stat() and skips existing files
  3. New stubs are created only for new operations

See Custom Resolvers for detailed implementation patterns.


The Workspace Concept

The DDK uses a workspace as a bidirectional staging area between the user's files and the generated server. Understanding this flow is critical for advanced usage.

Workspace Structure

The workspace mirrors the server structure but only contains user-modifiable files:

workspace/
├── application/              # Custom service interfaces
├── domain/                   # Domain layer (user-owned)
├── entrypoint/               # Custom controllers
├── graph/
│   ├── exceptions/           # Error types
│   ├── generated/            # gqlgen output (read-only)
│   ├── model/                # *.custom.go files (user modifications)
│   └── resolver/             # *.custom.resolvers.go (user implementations)
├── infrastructure/
│   ├── custom/               # Custom infrastructure code
│   ├── directives/           # GraphQL directive handlers
│   ├── migrations/           # Atlas migration config
│   ├── registry/             # DI registry (generated + user extensions)
│   └── repository/           # *.custom.repository.go files
├── mock/                     # Generated mocks
├── server/
│   └── graceful/             # Shutdown functions
└── test/                     # Test files

Bidirectional Migration

During ReGenerateServer, the workspace is migrated twice:

1. Workspace → Server (before generation):

  • Custom resolver files (*.custom.resolvers.go) are copied to the server
  • Full directories copied: application/, domain/, entrypoint/, infrastructure/registry/, infrastructure/directives/, infrastructure/custom/, test/, graph/exceptions/, server/graceful/
  • Selective copies: Only *.repository.go and *.custom.repository.go from infrastructure/repository/; only *.custom.go from graph/model/

2. Server → Workspace (after generation):

  • Generated models and mocks are copied back
  • Custom resolvers that were newly generated are copied back
  • Migration config (infrastructure/migrations/) is copied

First-Time Workspace

If the workspace doesn't exist yet, it's created from _templates/workspace_code/ boilerplate.


Testing Your Server

Starting the Server

Using Docker Compose

bash
cd /path/to/generated/server
docker-compose up

This starts:

  • PostgreSQL database
  • Redis (if configured)
  • GraphQL server

Using Make

bash
make run

Manual Start

bash
go run cmd/server.go

Accessing GraphQL Playground

Once the server is running, the GraphQL playground is accessible at the root path:

http://localhost:{serverPort}/

The Gin router maps the root path to the GraphQL playground handler. The GraphQL endpoint itself is at the same path.

Testing CRUD Operations

Create a User:

graphql
mutation {
  createUser(id: "1", name: "John Doe", email: "john@example.com", age: 30) {
    id
    name
    email
    age
  }
}

Get a User (by primary key):

graphql
query {
  getUser(id: "1") {
    id
    name
    email
    age
  }
}

List Users (with QueryParams):

graphql
query {
  getUserList(
    params: { limit: 10, offset: 0, orderBy: "name", orderDirection: "ASC" }
  ) {
    id
    name
    email
  }
}

Update a User (primary key is required, other fields are optional):

graphql
mutation {
  updateUser(id: "1", age: 31) {
    id
    name
    age
  }
}

Delete a User:

graphql
mutation {
  deleteUser(id: "1") {
    id
  }
}

Testing Relationships

graphql
query {
  getUser(id: "1") {
    id
    name
    posts {
      id
      title
      content
    }
  }
}

JWT Authentication

If JWT is enabled, set the Authorization header:

Authorization: Bearer eyJ...

The JwtMiddleware uses ParseUnverified — it validates the token structure and extracts claims but does not verify the signature. This allows integration with any identity provider.


Deployment

Docker Deployment

The generated docker-compose.yml is ready for deployment:

yaml
version: "3.8"

services:
  postgres:
    image: postgres:15
    environment:
      POSTGRES_DB: ${DB_NAME}
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data

  graphql_server:
    build: .
    ports:
      - "${SERVER_PORT}:${SERVER_PORT}"
    depends_on:
      - postgres
    environment:
      - DB_URL=${DB_URL}

Environment Variables

The generated server reads configuration from environment variables set in .env:

bash
DB_NAME=myapp_db
DB_USER=postgres
DB_PASSWORD=securepassword
DB_PORT=5432
DB_URL=postgres
DB_SCHEMA=public
SERVER_PORT=8080
REDIS_URL=localhost:6379
JWT_KEY=<generated-key>
SERVICE_LOG_LEVEL=info
SERVICE_NAME=myapp-api
GORM_AUTOMIGRATE=true

Production Considerations

  1. Set GORM_AUTOMIGRATE=false — Use Atlas or manual SQL migrations in production
  2. Use Managed Databases — AWS RDS, Azure Database for PostgreSQL, Google Cloud SQL
  3. Use Managed Redis — ElastiCache, Azure Cache for Redis
  4. Container Orchestration — Kubernetes, Docker Swarm, Azure Container Apps
  5. Load Balancing — NGINX, AWS ALB, Azure Application Gateway
  6. Secret Management — Azure Key Vault, AWS Secrets Manager
  7. Monitoring — The generated server uses structured Zap logging with trace IDs — feed into ELK, CloudWatch, or Azure Monitor

Database Migrations

The generated server includes Atlas migration configuration:

infrastructure/migrations/
├── atlas.hcl          # Atlas configuration
└── loader/
    └── main.go        # Migration schema loader

For production, use Atlas CLI:

bash
atlas schema apply -c file://infrastructure/migrations/atlas.hcl

Or manual SQL migration tools like golang-migrate or goose.


Common Workflows

Workflow 1: New Project from Scratch

  1. Design your schema (myapp.graphqls)
  2. Validate schema (ValidateSchema)
  3. Create server with schema (CreateServerWithSchema)
  4. Test generated CRUD API in Playground
  5. Deploy using Docker

Workflow 2: Adding Fields to Existing Types

  1. Update schema (add new fields)
  2. Validate schema
  3. Regenerate server (ReGenerateServer)
  4. Test new fields in Playground
  5. If GORM_AUTOMIGRATE=true, columns are added automatically
  6. Deploy update

Workflow 3: Adding Custom Business Logic

  1. Create {name}.custom.graphqls with custom operations marked @resolver(type: "CUSTOM")
  2. Regenerate server (ReGenerateServer)
  3. Implement custom resolver stubs (see Custom Resolvers)
  4. Test custom operations
  5. Deploy

Workflow 4: Adding Relationships

  1. Update schema with @mapping directives
  2. Validate schema
  3. Regenerate server
  4. Test relationship queries (preloaded automatically in generated resolvers)
  5. Deploy

Workflow 5: Removing Types/Fields

  1. Use DeleteObject gRPC method with a migration.json
  2. Or: Remove from schema and regenerate
  3. Create migration to drop columns/tables if GORM_AUTOMIGRATE doesn't handle drops
  4. Test remaining functionality
  5. Deploy

Workflow 6: Iterative Development

The typical development loop:

Edit schema → Validate → Regenerate → Test → Implement custom logic → Regenerate → Test → Deploy

Key: Custom implementations are always preserved across regenerations.


Configuration Reference

CreateServer Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| serverName | string | Yes | Name of the server (becomes directory name) |
| serverPort | int64 | Yes | HTTP port (e.g., 8080) |
| serverPath | string | Yes | Parent directory for generated server |
| dbSchema | string | Yes | PostgreSQL schema name (usually "public") |
| dbName | string | Yes | Database name |
| dbUrl | string | Yes | Database host |
| dbUser | string | Yes | Database username |
| dbPassword | string | Yes | Database password |
| dbPort | int64 | Yes | Database port (typically 5432) |
| redisUrl | string | No | Redis connection string (empty to disable) |
| jwt | bool | No | Enable JWT middleware |
| gormAutomigrate | bool | No | Enable GORM auto-migration |
| doAwsTokenRefresh | bool | No | AWS token refresh for RDS IAM auth |
| serviceLogLevel | string | No | Zap log level (info, debug, warn, error) |
| serviceName | string | No | Service identifier (for logging and tracing) |

ReGenerateServer Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| serverName | string | Yes | Name of the server |
| serverPath | string | Yes | Path to server parent directory |
| schemaPath | string | Yes | Path to workspace (schemas at {schemaPath}/graph/schema/) |

ValidateSchema Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| locationOfFiles | string | Yes | Path to directory containing .graphqls files |

DeleteObject Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| serverName | string | Yes | Name of the server |
| serverPath | string | Yes | Path to server parent directory |
| schemaPath | string | Yes | Path to workspace |
| deletionJson | string | Yes | Path to directory containing migration.json |

Troubleshooting

Generation Failures

Issue: Schema validation errors
Solution: Run ValidateSchema first and fix all reported errors. Check for missing primary keys, invalid constraint types, and unresolved type references.

Issue: Generator binary not found
Solution: The GeneratorLocation config must point to the pre-built optimal-reality-ddk-be-generator binary. A missing binary causes a "cant run without binary generator" error.

Issue: go generate failures
Solution: Ensure the server directory has valid go.mod and all dependencies are available. Check that gqlgen.yml is present and correctly configured.

Issue: File permission errors
Solution: Check write permissions for serverPath. The generator binary needs chmod +x (done automatically).

Runtime Errors

Issue: Database connection failed
Solution: Verify PostgreSQL is running and credentials match .env file values.

Issue: Redis connection failed
Solution: Check Redis URL or set REDIS_URL to empty string to disable. Redis is optional.

Issue: Port already in use
Solution: Change SERVER_PORT in .env or stop the conflicting service.

Issue: Transaction errors in resolvers
Solution: Ensure you're getting the transaction from context: ctx.Value("tx").(*gorm.DB). The TXMiddleware creates a savepoint-based transaction for each request.

Regeneration Issues

Issue: Custom resolvers not preserved
Solution: Ensure custom resolver files are in graph/resolver/ with .custom.resolvers.go suffix. The WorkspaceMigrate function uses FindCustomResolverFiles() to locate them.

Issue: Schema changes not reflected
Solution: Verify schemaPath points to the workspace root (schemas are read from {schemaPath}/graph/schema/).

Issue: Stale generated code
Solution: The regeneration pipeline runs 3 separate go generate commands in sequence. If any fails silently, later stages may use stale code. Check all generation logs.


User documentation for Optimal Reality