Usage
This guide covers how to use the DDK service to create and manage GraphQL servers.
Table of Contents
- Prerequisites
- Understanding the Architecture
- Creating a Server
- Creating a Server with Schema
- Regenerating a Server
- Validating Schemas
- Deleting Objects
- Testing Custom Logic
- Working with Custom Resolvers
- The Workspace Concept
- Testing Your Server
- Deployment
- Common Workflows
Prerequisites
Required
- Access to the SDK API (which includes the DDK service)
- PostgreSQL database (version 12+)
- Basic understanding of GraphQL
- Your schema files (.graphqls)
Optional
- Redis (for caching)
- Docker (for containerized development)
Understanding the Architecture
Service Flow
You → SDK API (GraphQL) → DDK Service (gRPC :7788) → Generated Server

The DDK runs as a gRPC service (OR_DDK) that accepts protobuf requests and streams progress messages back. The SDK API exposes a user-facing GraphQL interface that proxies to these gRPC methods.
Internal Configuration
The DDK service is initialized with a Configuration that locates key template directories:
| Config Field | Purpose |
|---|---|
| BaseCodeLocation | Path to _templates/base_code/ — the full Go server scaffold |
| WorkspaceCodeLocation | Path to _templates/workspace_code/ — user workspace boilerplate |
| GeneratorLocation | Path to the pre-built generator binary (optimal-reality-ddk-be-generator) |
| DeployFlag | If "true", auto-starts the server after creation via make docker-start |
Streaming Progress
All gRPC methods use a streaming response pattern. Operations run in a goroutine that sends messages through a channel:
{"DATA": "message"}— Progress update (streamed to client){"ERROR": "message"}— Error occurred (closes channel, returns gRPC error){"DONE": "message"}— Operation complete (closes channel, final message)
Creating a Server
The CreateServer method scaffolds a new Go server without schema processing — it copies the base code template and configures the server.
What Happens Internally
- Copy workspace boilerplate → Creates {serverPath}/workspace/ from _templates/workspace_code/
- Create server directory → Creates {serverPath}/{serverName}/
- Copy base code → Copies the entire _templates/base_code/ template into the server directory
- Create config file → Generates .env configuration from parameters
- Create Dockerfile → Generates from the Dockerfile.tpl template
- Create docker-compose → Generates a compose file with PostgreSQL & Redis services
- Deploy (optional) → If DeployFlag is set, runs make docker-start
- Generate JWT → If JWT enabled, generates a signing key and initial token
Step 1: Prepare Your Schema
Create a .graphqls file with your data model:
user.graphqls:
```graphql
type User @required(type: "CREATE,READ,UPDATE,DELETE", table: "true") {
  id: ID! @constraint(type: "primarykey")
  name: String!
  email: String! @constraint(type: "unique")
  age: Int
}

type Post @required(type: "CREATE,READ,UPDATE,DELETE", table: "true") {
  id: ID! @constraint(type: "primarykey")
  title: String!
  content: String!
  authorId: String
}
```

Step 2: Call CreateServer via SDK API
Using the SDK API GraphQL interface, execute a mutation that calls the DDK's CreateServer gRPC method:
Configuration Parameters:
```javascript
{
  serverName: "my-app",
  serverPort: 8080,
  serverPath: "/path/to/generated/server",
  dbSchema: "public",
  dbName: "myapp_db",
  dbUrl: "localhost",
  dbUser: "postgres",
  dbPassword: "password",
  dbPort: 5432,
  redisUrl: "localhost:6379", // Optional
  jwt: true, // Enable JWT auth
  gormAutomigrate: true, // Auto-migrate database
  doAwsTokenRefresh: false,
  serviceLogLevel: "info",
  serviceName: "myapp-api"
}
```

Step 3: Monitor Generation
The DDK streams progress messages as it generates your server:
starting server creation
creating server directory
moving files to new server
creating config file in new server
creating docker file for new server
creating jwt token to access server
server created successfully with token: eyJ...

Step 4: Generated Output
The DDK creates a complete Go application following DDD architecture:
my-app/
├── application/
│ ├── repository/ # Repository interfaces (generated)
│ └── service/ # Service interfaces
├── cmd/
│ └── server.go # Application entry point
├── domain/ # Domain layer
├── entrypoint/
│ └── controller/ # DDD controllers
├── graph/
│ ├── exceptions/ # DatabaseException type
│ ├── generated/ # gqlgen generated code
│ ├── model/ # GORM model structs
│ ├── resolver/ # Generated + custom resolvers
│ └── schema/ # GraphQL schema files
├── infrastructure/
│ ├── directives/ # GraphQL directive handlers
│ ├── migrations/ # Database migration config
│ ├── registry/ # DI registry (wires everything)
│ └── repository/ # Repository implementations (GORM)
├── internal/
│ └── generator/ # Generator binary (temporary)
├── mock/ # gomock generated mocks
├── server/
│ ├── graceful/ # Graceful shutdown
│ ├── handlers/ # GraphQL & error handlers
│ ├── middleware/ # CORS, JWT, TX, Zap middleware
│ └── router.go # Gin router setup
├── service/ # Service implementations
├── test/ # Generated test files
├── docker-compose.yml
├── Dockerfile
├── Makefile
├── gqlgen.yml
├── go.mod
└── go.sum

Creating a Server with Schema
The CreateServerWithSchema method combines server creation with schema processing in a single call. This is the more common method for creating a fully functional server.
What Happens Internally
- Validate schema → Runs two-phase validation on schema files
- Create server directory → Creates {serverPath}/{serverName}/
- Copy base code → Copies the _templates/base_code/ template
- Create resolvers → Reads .graphqls files and generates default CRUD resolvers (default.graphqls)
- Place schemas → Copies schema files into the server's infrastructure/ directory
- Workspace migration → Copies custom resolver files from workspace to server
- Run gqlgen → Executes make gqlgen to generate Go code from schemas
- Copy models → Moves models_gen.go to the workspace
- Copy custom resolvers → Moves custom resolver stubs back to the workspace
- Copy migration config → Copies the Atlas migration loader/main.go and atlas.hcl
- Create config → Generates .env and Docker files
- Deploy (optional) → Starts the server
Default Resolver Generation
The DDK automatically generates CRUD resolvers based on @required directives. For each type, it inspects the type parameter:
```graphql
# This generates: createUser, getUser, getUserList, updateUser, deleteUser
type User @required(type: "CREATE,READ,UPDATE,DELETE", table: "true") { ... }

# This generates only: getProduct, getProductList
type Product @required(type: "READ", table: "true") { ... }
```

The resolver builder:
- Parses type definitions and field types
- Maps GraphQL scalars to input types (e.g., Point → InputPoint)
- Creates a default.graphqls file with standard Query and Mutation entries
- Marks resolvers with @resolver(type: "DEFAULT", operation: "CREATE|READ|UPDATE|DELETE")
- Adds QueryParams support on list queries for pagination/sorting
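The mapping from the type parameter to generated operation names can be sketched as follows (behavior inferred from the description above; opsForType is a hypothetical helper, not part of the DDK):

```go
package main

import (
	"fmt"
	"strings"
)

// opsForType maps a @required(type: "...") parameter to the resolver
// names the DDK would generate for a GraphQL type.
// Hypothetical helper for illustration only.
func opsForType(typeName, required string) []string {
	var ops []string
	for _, op := range strings.Split(required, ",") {
		switch strings.TrimSpace(op) {
		case "CREATE":
			ops = append(ops, "create"+typeName)
		case "READ":
			// READ yields both a single-item getter and a list query.
			ops = append(ops, "get"+typeName, "get"+typeName+"List")
		case "UPDATE":
			ops = append(ops, "update"+typeName)
		case "DELETE":
			ops = append(ops, "delete"+typeName)
		}
	}
	return ops
}

func main() {
	fmt.Println(opsForType("Product", "READ")) // [getProduct getProductList]
}
```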
Regenerating a Server
When you update your schema, use ReGenerateServer instead of creating from scratch. This is the primary method for iterating on an existing server.
When to Regenerate
- Schema changes: Adding/removing types or fields
- Relationship updates: Modifying @mapping directives
- Constraint changes: Updating @constraint directives
- Custom resolver additions: Adding new custom operations
What Happens Internally
The regeneration pipeline has 10 distinct phases:
- Validate schema → Two-phase validation (semantic + pragmatic) on {schemaPath}/graph/schema/
- Create resolvers → Reads all .graphqls files (excluding _custom_resolvers.graphqls), generates default.graphqls with CRUD operations
- Place schemas → Copies schema files to {serverPath}/{serverName}/graph/schema/
- Workspace → Server migration → Copies custom code from workspace to server:
  - *.custom.resolvers.go files from graph/resolver/
  - application/, domain/, entrypoint/ directories
  - infrastructure/registry/, infrastructure/repository/ (only *.repository.go and *.custom.repository.go)
  - infrastructure/directives/, infrastructure/custom/
  - graph/model/ (only *.custom.go files)
  - graph/exceptions/, test/, graph/generated/
  - server/graceful/shutdownfunctions.go
- Clean generator → Removes and recreates the internal/generator/ directory
- Copy generator binary → Places the pre-built optimal-reality-ddk-be-generator binary
- Run gqlgen → go generate . — Executes gqlgen to generate GraphQL runtime code
- Run generator plugins → go generate ./internal/generator — Runs the 6 DDK plugins (modelgen, Relay, Resolver, CustomResolver, Migration, DDD)
- Generate mocks → go generate ./application/... — Creates gomock mocks for repository interfaces
- Server → Workspace migration → Copies generated code back to workspace (models, mocks, resolvers, migrations)
Regeneration Process
Step 1: Update Your Schema
Edit your .graphqls files in the workspace's graph/schema/ directory:
```graphql
type User @required(type: "CREATE,READ,UPDATE,DELETE", table: "true") {
  id: ID! @constraint(type: "primarykey")
  name: String!
  email: String! @constraint(type: "unique")
  age: Int
  bio: String # New field added
}
```

Step 2: Call ReGenerateServer
Via SDK API, execute regeneration:
Parameters:
```javascript
{
  serverName: "my-app",
  serverPath: "/path/to/generated/server",
  schemaPath: "/path/to/schemas" // Note: workspace is at {schemaPath}/graph/schema/
}
```

Step 3: What Gets Updated
Regenerated (overwritten each time):
- Generated resolvers (*.resolvers.go) — existing implementations preserved via gqlgen's rewriter
- GraphQL model structs (models_gen.go)
- gqlgen runtime code (graph/generated/)
- DDD layers: application interfaces, infrastructure repositories, entrypoint controllers, registry
- Default resolver schema (default.graphqls)
- Mock files (mock/)

Preserved (never overwritten):
- Custom resolver implementations (*.custom.resolvers.go)
- Custom model extensions (*.custom.go in graph/model/)
- Custom repository code (*.custom.repository.go)
- Server graceful shutdown functions
- Infrastructure custom code (infrastructure/custom/)
- Test files
Important Notes
- Always use ReGenerateServer for existing projects
- Custom resolvers are preserved automatically (see How Resolver Preservation Works)
- The schema path is treated as the workspace location — schemas are read from {schemaPath}/graph/schema/
- The generator binary is cleaned up after generation (removed from internal/generator/)
Validating Schemas
Before generating or regenerating, validate your schemas to catch errors early.
Using ValidateSchema
Call the ValidateSchema gRPC method via SDK API:
Parameters:
```javascript
{
  locationOfFiles: "/path/to/schemas"
}
```

Two-Phase Validation
The ValidateSchemaFile function performs validation in two phases (see Architecture — Schema Validation):
Phase 1 — Semantic Validation (semanticValidation):
- Directive syntax correctness (@required, @constraint, @mapping)
- Type references resolve to defined types
- Constraint types are valid (from the 18+ supported types)
- Mapping directive parameters are valid
Phase 2 — Pragmatic Validation (pragmaticValidation):
- Every table type has a primary key
- Foreign key references exist and are correct
- Relationship mappings are valid (one2one, one2many, many2many)
- Check constraint SQL is syntactically valid
- No circular back-references in relationships
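As an illustration of one Phase 2 rule, here is a minimal sketch of the primary-key check (the real validator works on parsed GraphQL ASTs; the field struct here is a simplification, and the error text follows the examples below):

```go
package main

import "fmt"

// field is a simplified view of a parsed schema field and its
// @constraint directive value, for illustration only.
type field struct {
	Name       string
	Constraint string // e.g. "primarykey", "unique", ""
}

// checkPrimaryKey returns one error string per table type that lacks
// a primarykey constraint, mirroring the Phase 2 rule above.
func checkPrimaryKey(types map[string][]field) []string {
	var errs []string
	for name, fields := range types {
		hasPK := false
		for _, f := range fields {
			if f.Constraint == "primarykey" {
				hasPK = true
			}
		}
		if !hasPK {
			errs = append(errs, fmt.Sprintf("Type '%s': missing primary key constraint", name))
		}
	}
	return errs
}

func main() {
	types := map[string][]field{
		"User": {{Name: "id", Constraint: "primarykey"}},
		"Post": {{Name: "title"}},
	}
	fmt.Println(checkPrimaryKey(types))
}
```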
Validation Response
Success (returns the validated schema content):

"Schema validation passed"

Failure (returns gRPC FailedPrecondition error with details):

Type 'Post': missing primary key constraint
Type 'User': foreign key 'profileId' references non-existent type 'Profile'
Type 'Product': check constraint has invalid SQL syntax

Best Practice
Always validate before generation:
- Update schema files
- Run ValidateSchema
- Fix any errors
- Run ReGenerateServer or CreateServerWithSchema
Deleting Objects
The DeleteObject method removes a type from your schema by placing a migration.json file and running a regeneration.
What Happens Internally
- Reads the migration.json from the deletion path
- Places it in the server directory
- Calls RegenerateServer internally — the MigrationPlugin detects the migration file and drops the associated tables/columns
Parameters
```javascript
{
  serverName: "my-app",
  serverPath: "/path/to/generated/server",
  schemaPath: "/path/to/schemas",
  deletionJson: "/path/to/deletion/config" // Contains migration.json
}
```

Migration JSON Format
The migration.json describes what to remove:
```json
{
  "drop": ["TypeName"],
  "dropFields": {
    "ExistingType": ["fieldToRemove"]
  }
}
```

Testing Custom Logic
The TestCustomFunction method creates a temporary test server for validating custom logic before deploying changes.
What Happens Internally
- Creates a new server directory
- Copies the base code template
- Reads schema.graphqls and generates default resolvers
- Reads optional custom.graphqls for custom operations
- Reads optional custom_implementation.json for custom autogeneration
- Runs gqlgen code generation
- Creates the config file and Docker setup
- Optionally starts the server for testing
Parameters
```javascript
{
  serverName: "test-app",
  serverPath: "/path/to/test/server",
  serverPort: 9090,
  schemaPath: "/path/to/test/schemas", // Contains schema.graphqls
  dbSchema: "public",
  dbName: "test_db",
  dbUrl: "localhost",
  dbUser: "postgres",
  dbPassword: "password",
  dbPort: 5432
}
```

Working with Custom Resolvers
Adding Custom Operations
Step 1: Create Custom Schema
Create {name}.custom.graphqls:
```graphql
extend type Query {
  getUserByEmail(email: String!): User! @resolver(type: "CUSTOM")
}

extend type Mutation {
  promoteToAdmin(userId: ID!): User! @resolver(type: "CUSTOM")
}
```

Step 2: Regenerate
Run ReGenerateServer. The DDK's CustomResolverPlugin creates stub files only for new operations (skips existing files):
graph/resolver/
├── getuserbyemail.custom.resolvers.go # New stub
├── promotetoadmin.custom.resolvers.go # New stub
test/
├── getuserbyemail.custom.resolvers_test.go # New test stub
├── promotetoadmin.custom.resolvers_test.go # New test stub

Step 3: Implement
Edit the generated stub files. The stub template returns a zero value with an error:
```go
func (r *queryResolver) resolver_GetUserByEmail(
	ctx context.Context,
	email string,
) (model.User, error) {
	var target model.User
	return target, errors.New("method not implemented")
}
```

Replace with your implementation:
```go
func (r *queryResolver) resolver_GetUserByEmail(
	ctx context.Context,
	email string,
) (model.User, error) {
	tx := ctx.Value("tx").(*gorm.DB)
	var user model.User
	err := tx.Where("email = ?", email).First(&user).Error
	return user, err
}
```

Step 4: Test
Use GraphQL Playground to test:
```graphql
query {
  getUserByEmail(email: "john@example.com") {
    id
    name
    email
  }
}
```

Step 5: Re-Regenerate (Safe)
When you add more custom operations:
- Run ReGenerateServer again
- Existing implementations are preserved — the plugin checks os.Stat() and skips existing files
- New stubs are created only for new operations
See Custom Resolvers for detailed implementation patterns.
The Workspace Concept
The DDK uses a workspace as a bidirectional staging area between the user's files and the generated server. Understanding this flow is critical for advanced usage.
Workspace Structure
The workspace mirrors the server structure but only contains user-modifiable files:
workspace/
├── application/ # Custom service interfaces
├── domain/ # Domain layer (user-owned)
├── entrypoint/ # Custom controllers
├── graph/
│ ├── exceptions/ # Error types
│ ├── generated/ # gqlgen output (read-only)
│ ├── model/ # *.custom.go files (user modifications)
│ └── resolver/ # *.custom.resolvers.go (user implementations)
├── infrastructure/
│ ├── custom/ # Custom infrastructure code
│ ├── directives/ # GraphQL directive handlers
│ ├── migrations/ # Atlas migration config
│ ├── registry/ # DI registry (generated + user extensions)
│ └── repository/ # *.custom.repository.go files
├── mock/ # Generated mocks
├── server/
│ └── graceful/ # Shutdown functions
└── test/                    # Test files

Bidirectional Migration
During ReGenerateServer, the workspace is migrated twice:
1. Workspace → Server (before generation):
- Custom resolver files (*.custom.resolvers.go) are copied to the server
- Full directories copied: application/, domain/, entrypoint/, infrastructure/registry/, infrastructure/directives/, infrastructure/custom/, test/, graph/exceptions/, server/graceful/
- Selective copies: Only *.repository.go and *.custom.repository.go from infrastructure/repository/; only *.custom.go from graph/model/
2. Server → Workspace (after generation):
- Generated models and mocks are copied back
- Custom resolvers that were newly generated are copied back
- Migration config (infrastructure/migrations/) is copied
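The selective-copy rules can be expressed as a filename filter (a sketch of the rules listed above; keepOnMigrate is an illustrative name, not the DDK's actual function):

```go
package main

import (
	"fmt"
	"strings"
)

// keepOnMigrate reports whether a file in the given workspace
// directory is copied during the workspace → server migration,
// per the selective-copy rules above. Illustrative only.
func keepOnMigrate(dir, name string) bool {
	switch dir {
	case "infrastructure/repository":
		// *.repository.go also matches *.custom.repository.go.
		return strings.HasSuffix(name, ".repository.go")
	case "graph/model":
		return strings.HasSuffix(name, ".custom.go")
	default:
		return true // full directories are copied wholesale
	}
}

func main() {
	fmt.Println(keepOnMigrate("graph/model", "models_gen.go"))                    // false
	fmt.Println(keepOnMigrate("graph/model", "user.custom.go"))                   // true
	fmt.Println(keepOnMigrate("infrastructure/repository", "user.repository.go")) // true
}
```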
First-Time Workspace
If the workspace doesn't exist yet, it's created from _templates/workspace_code/ boilerplate.
Testing Your Server
Starting the Server
Using Docker Compose
```bash
cd /path/to/generated/server
docker-compose up
```

This starts:
- PostgreSQL database
- Redis (if configured)
- GraphQL server
Using Make
```bash
make run
```

Manual Start
```bash
go run cmd/server.go
```

Accessing GraphQL Playground
Once the server is running, the GraphQL playground is accessible at the root path:
http://localhost:{serverPort}/

The Gin router maps the root path to the GraphQL playground handler. The GraphQL endpoint itself is at the same path.
Testing CRUD Operations
Create a User:
```graphql
mutation {
  createUser(id: "1", name: "John Doe", email: "john@example.com", age: 30) {
    id
    name
    email
    age
  }
}
```

Get a User (by primary key):
```graphql
query {
  getUser(id: "1") {
    id
    name
    email
    age
  }
}
```

List Users (with QueryParams):
```graphql
query {
  getUserList(
    params: { limit: 10, offset: 0, orderBy: "name", orderDirection: "ASC" }
  ) {
    id
    name
    email
  }
}
```

Update a User (primary key is required, other fields are optional):
```graphql
mutation {
  updateUser(id: "1", age: 31) {
    id
    name
    age
  }
}
```

Delete a User:
```graphql
mutation {
  deleteUser(id: "1") {
    id
  }
}
```

Testing Relationships
```graphql
query {
  getUser(id: "1") {
    id
    name
    posts {
      id
      title
      content
    }
  }
}
```

JWT Authentication
If JWT is enabled, set the Authorization header:
Authorization: Bearer eyJ...

The JwtMiddleware uses ParseUnverified — it validates the token structure and extracts claims but does not verify the signature. This allows integration with any identity provider.
Deployment
Docker Deployment
The generated docker-compose.yml is ready for deployment:
```yaml
version: "3.8"
services:
  postgres:
    image: postgres:15
    environment:
      POSTGRES_DB: ${DB_NAME}
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
  graphql_server:
    build: .
    ports:
      - "${SERVER_PORT}:${SERVER_PORT}"
    depends_on:
      - postgres
    environment:
      - DB_URL=${DB_URL}
```

Environment Variables
The generated server reads configuration from environment variables set in .env:
```bash
DB_NAME=myapp_db
DB_USER=postgres
DB_PASSWORD=securepassword
DB_PORT=5432
DB_URL=postgres
DB_SCHEMA=public
SERVER_PORT=8080
REDIS_URL=localhost:6379
JWT_KEY=<generated-key>
SERVICE_LOG_LEVEL=info
SERVICE_NAME=myapp-api
GORM_AUTOMIGRATE=true
```

Production Considerations
- Set GORM_AUTOMIGRATE=false — Use Atlas or manual SQL migrations in production
- Use Managed Databases — AWS RDS, Azure Database for PostgreSQL, Google Cloud SQL
- Use Managed Redis — ElastiCache, Azure Cache for Redis
- Container Orchestration — Kubernetes, Docker Swarm, Azure Container Apps
- Load Balancing — NGINX, AWS ALB, Azure Application Gateway
- Secret Management — Azure Key Vault, AWS Secrets Manager
- Monitoring — The generated server uses structured Zap logging with trace IDs — feed into ELK, CloudWatch, or Azure Monitor
Database Migrations
The generated server includes Atlas migration configuration:
infrastructure/migrations/
├── atlas.hcl # Atlas configuration
└── loader/
└── main.go    # Migration schema loader

For production, use the Atlas CLI:
```bash
atlas schema apply -c file://infrastructure/migrations/atlas.hcl
```

Or use manual SQL migration tools such as golang-migrate or goose.
Common Workflows
Workflow 1: New Project from Scratch
- Design your schema (myapp.graphqls)
- Validate schema (ValidateSchema)
- Create server with schema (CreateServerWithSchema)
- Test generated CRUD API in Playground
- Deploy using Docker
Workflow 2: Adding Fields to Existing Types
- Update schema (add new fields)
- Validate schema
- Regenerate server (ReGenerateServer)
- Test new fields in Playground
- If GORM_AUTOMIGRATE=true, columns are added automatically
- Deploy update
Workflow 3: Adding Custom Business Logic
- Create {name}.custom.graphqls with custom operations marked @resolver(type: "CUSTOM")
- Regenerate server (ReGenerateServer)
- Implement custom resolver stubs (see Custom Resolvers)
- Test custom operations
- Deploy
Workflow 4: Adding Relationships
- Update schema with @mapping directives
- Validate schema
- Regenerate server
- Test relationship queries (preloaded automatically in generated resolvers)
- Deploy
Workflow 5: Removing Types/Fields
- Use the DeleteObject gRPC method with a migration.json
- Or: remove from the schema and regenerate
- Create a migration to drop columns/tables if GORM_AUTOMIGRATE doesn't handle drops
- Test remaining functionality
- Deploy
Workflow 6: Iterative Development
The typical development loop:
Edit schema → Validate → Regenerate → Test → Implement custom logic → Regenerate → Test → Deploy

Key: Custom implementations are always preserved across regenerations.
Configuration Reference
CreateServer Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| serverName | string | Yes | Name of the server (becomes directory name) |
| serverPort | int64 | Yes | HTTP port (e.g., 8080) |
| serverPath | string | Yes | Parent directory for generated server |
| dbSchema | string | Yes | PostgreSQL schema name (usually "public") |
| dbName | string | Yes | Database name |
| dbUrl | string | Yes | Database host |
| dbUser | string | Yes | Database username |
| dbPassword | string | Yes | Database password |
| dbPort | int64 | Yes | Database port (typically 5432) |
| redisUrl | string | No | Redis connection string (empty to disable) |
| jwt | bool | No | Enable JWT middleware |
| gormAutomigrate | bool | No | Enable GORM auto-migration |
| doAwsTokenRefresh | bool | No | AWS token refresh for RDS IAM auth |
| serviceLogLevel | string | No | Zap log level (info, debug, warn, error) |
| serviceName | string | No | Service identifier (for logging and tracing) |
ReGenerateServer Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| serverName | string | Yes | Name of the server |
| serverPath | string | Yes | Path to server parent directory |
| schemaPath | string | Yes | Path to workspace (schemas at {schemaPath}/graph/schema/) |
ValidateSchema Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| locationOfFiles | string | Yes | Path to directory containing .graphqls files |
DeleteObject Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| serverName | string | Yes | Name of the server |
| serverPath | string | Yes | Path to server parent directory |
| schemaPath | string | Yes | Path to workspace |
| deletionJson | string | Yes | Path to directory containing migration.json |
Troubleshooting
Generation Failures
Issue: Schema validation errors
Solution: Run ValidateSchema first and fix all reported errors. Check for missing primary keys, invalid constraint types, and unresolved type references.
Issue: Generator binary not found
Solution: The GeneratorLocation config must point to the pre-built optimal-reality-ddk-be-generator binary. A missing binary causes a "cant run without binary generator" error.
Issue: go generate failures
Solution: Ensure the server directory has valid go.mod and all dependencies are available. Check that gqlgen.yml is present and correctly configured.
Issue: File permission errors
Solution: Check write permissions for serverPath. The generator binary needs chmod +x (done automatically).
Runtime Errors
Issue: Database connection failed
Solution: Verify PostgreSQL is running and credentials match .env file values.
Issue: Redis connection failed
Solution: Check Redis URL or set REDIS_URL to empty string to disable. Redis is optional.
Issue: Port already in use
Solution: Change SERVER_PORT in .env or stop the conflicting service.
Issue: Transaction errors in resolvers
Solution: Ensure you're getting the transaction from context: ctx.Value("tx").(*gorm.DB). The TXMiddleware creates a savepoint-based transaction for each request.
Regeneration Issues
Issue: Custom resolvers not preserved
Solution: Ensure custom resolver files are in graph/resolver/ with .custom.resolvers.go suffix. The WorkspaceMigrate function uses FindCustomResolverFiles() to locate them.
Issue: Schema changes not reflected
Solution: Verify schemaPath points to the workspace root (schemas are read from {schemaPath}/graph/schema/).
Issue: Stale generated code
Solution: The regeneration pipeline runs 3 separate go generate commands in sequence. If any fails silently, later stages may use stale code. Check all generation logs.
Related Documentation
- Schema Guide - Writing schemas with directives
- Custom Resolvers - Adding custom business logic
- Architecture - How the DDK works internally
- Examples - Real-world examples
- FAQs - Common questions
