FAQs
Frequently asked questions about the Data Development Kit (DDK).
Table of Contents
- General Questions
- Schema Design
- Generated Code
- Custom Resolvers
- Database Operations
- Performance and Optimization
- Deployment
- Troubleshooting
General Questions
What is the DDK?
The DDK (Data Development Kit) is a code generation service that creates complete GraphQL servers in Go from schema definitions. It generates:
- GraphQL API with queries, mutations, and subscriptions
- Database models and repositories (GORM)
- Docker configuration
- Complete server infrastructure
How does the DDK differ from gqlgen?
- gqlgen: Low-level code generation library, requires manual configuration
- DDK: High-level service that uses gqlgen internally + adds:
- Automatic CRUD operations
- Database integration (GORM + PostgreSQL)
- Custom directive system
- Complete server scaffolding
- Docker setup
- Relationship management
When should I use the DDK?
Use the DDK when you:
- Need rapid GraphQL API development
- Want automatic CRUD operations
- Require PostgreSQL integration
- Need relationship management (one-to-one, one-to-many, many-to-many)
- Want built-in authentication (JWT)
- Need consistent code structure
The DDK is the right choice for teams building data services that will evolve over time — where the schema will change, new types will be added, and the ability to regenerate without losing custom code is essential. If you need a GraphQL API that you write once and never regenerate, you could use the DDK for initial scaffolding and then treat the output as a starting point. But the DDK's real value is in the ongoing development cycle: edit schema, regenerate, continue. Teams that want instant, zero-code database APIs backed by an existing database should look at Hasura or PostGraphile. Teams that need a generated starting point they can build sophisticated domain logic on top of — while retaining full control of the generated code — should use the DDK.
What is generated vs what is manual?
Generated (automatic):
- GraphQL schema
- CRUD resolvers
- Database models
- Repositories
- GraphQL type definitions
- Server configuration
- Docker setup
Manual (your code):
- Custom resolver implementations
- Business logic
- Custom validation
- External integrations
Schema Design
Do I have to use all CRUD operations?
No! Use the @required directive to specify which operations you need:
```graphql
# Only CREATE and READ
type User @required(type: "CREATE,READ", table: "true") {
  id: ID! @constraint(type: "primarykey")
  name: String!
}

# All CRUD operations
type Post @required(type: "CREATE,READ,UPDATE,DELETE", table: "true") {
  id: ID! @constraint(type: "primarykey")
  title: String!
}
```

Can I use custom scalars?
Yes! The DDK supports these custom scalars:
- `Time` (mapped to Go's `time.Time`)
- `JSON` (mapped to `json.RawMessage`)
- `Upload` (for file uploads)
Example:
```graphql
type Event @required(type: "CREATE,READ,UPDATE,DELETE", table: "true") {
  id: ID! @constraint(type: "primarykey")
  startTime: Time!
  metadata: JSON
}
```

How do I model optional vs required fields?
Use GraphQL's type system:
```graphql
type User @required(type: "CREATE,READ,UPDATE,DELETE", table: "true") {
  id: ID! @constraint(type: "primarykey")
  email: String!  # Required (has !)
  name: String    # Optional (no !)
  age: Int        # Optional
  bio: String     # Optional
}
```

In the database:
- `String!` → `NOT NULL` column
- `String` → nullable column
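In the generated Go models, this distinction typically shows up as pointer types for nullable fields. A minimal sketch (field names and tags are illustrative, not the DDK's exact output):

```go
package main

import "fmt"

// Sketch of how a generated GORM model for the User type above might look.
// Optional schema fields become pointers, so nil maps to SQL NULL.
type User struct {
	ID    string  `json:"id" gorm:"primaryKey"`
	Email string  `json:"email" gorm:"not null"` // String! -> NOT NULL column
	Name  *string `json:"name"`                  // String  -> nullable, pointer type
	Age   *int    `json:"age"`
	Bio   *string `json:"bio"`
}

func main() {
	u := User{ID: "1", Email: "a@example.com"} // optional fields default to nil
	fmt.Println(u.Name == nil)                 // nil pointer -> NULL in the database
}
```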
Can I have multiple many-to-many relationships between the same types?
Not directly. Use intermediate types:
Wrong:
```graphql
type User {
  friends: [User] @mapping(...)
  blockedUsers: [User] @mapping(...) # Can't have two!
}
```

Right:
```graphql
type User {
  friends: [Friend] @mapping(id: "userId", type: "one2many")
  blockedUsers: [BlockedUser] @mapping(id: "userId", type: "one2many")
}

type Friend {
  userId: String
  friendId: String
  user: User @mapping(id: "userId", type: "backRef")
}

type BlockedUser {
  userId: String
  blockedUserId: String
  user: User @mapping(id: "userId", type: "backRef")
}
```

How do I create a unique composite constraint?
Use a join table with composite primary key:
```graphql
type UserRole @required(type: "CREATE,READ,DELETE", table: "true") {
  userId: String @constraint(type: "primarykey")
  roleId: String @constraint(type: "primarykey")
}
```

Both fields together form the primary key, ensuring uniqueness.
Can I use check constraints with multiple columns?
Yes:
```graphql
type DateRange @required(type: "CREATE,READ,UPDATE,DELETE", table: "true") {
  id: ID! @constraint(type: "primarykey")
  startDate: String @constraint(type: "check:start_date < end_date")
  endDate: String
}
```

SQL column names are snake_case, so `startDate` becomes `start_date` in constraints.
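If you ever drop down to GORM directly in custom code, a comparable multi-column constraint can be declared with GORM's `check` struct tag, which auto-migration turns into a table-level CHECK. This is a sketch of GORM's own tag syntax, not the DDK's directive system:

```go
package main

import "fmt"

// Sketch: GORM v2 supports a named, multi-column CHECK constraint via a
// struct tag. Field names mirror the DateRange type above.
type DateRange struct {
	ID        string `gorm:"primaryKey"`
	StartDate string `gorm:"check:chk_date_range,start_date < end_date"`
	EndDate   string
}

func main() {
	d := DateRange{ID: "1", StartDate: "2024-01-01", EndDate: "2024-12-31"}
	fmt.Println(d.StartDate < d.EndDate) // the condition the database enforces
}
```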
Generated Code
Can I modify generated code?
DO NOT modify:
- `graph/generated/` - regenerated every time
- `graph/schema/` - regenerated from your `.graphqls` files
- `gqlgen.yml` - regenerated with configuration
SAFE to modify:
- Custom resolver files (`*.custom.resolvers.go`)
- `cmd/server.go` (after initial generation)
- `application/service/` (for custom services)
- Environment configuration
What happens to my custom resolvers when I regenerate?
They are preserved! As long as:
- Files are named `*.custom.resolvers.go`
- Files are in `graph/resolver/`
- You use `ReGenerateServer` (not `CreateServer`)
How do I add custom middleware?
Edit infrastructure/middleware/ after generation:
```go
// infrastructure/middleware/custom.go
package middleware

import "github.com/gin-gonic/gin"

func CustomMiddleware() gin.HandlerFunc {
	return func(c *gin.Context) {
		// Your logic here
		c.Next()
	}
}
```

Then register it in `cmd/server.go`:
```go
router.Use(middleware.CustomMiddleware())
```

Can I change the GraphQL endpoint?
Yes, edit cmd/server.go:
```go
// Default: /graphql
router.POST("/graphql", graphqlHandler())

// Custom: /api/query
router.POST("/api/query", graphqlHandler())
```

How do I add custom types that aren't in the schema?
Create them in graph/model/:
```go
// graph/model/custom_types.go
package model

type CustomResponse struct {
	Success bool   `json:"success"`
	Message string `json:"message"`
}
```

Use in custom resolvers:
```go
func (r *mutationResolver) CustomOperation(...) (*model.CustomResponse, error) {
	return &model.CustomResponse{
		Success: true,
		Message: "Operation completed",
	}, nil
}
```

Custom Resolvers
Do custom resolvers require database transactions?
Not required, but highly recommended:
```go
func (r *queryResolver) resolver_MyQuery(ctx context.Context) (*model.Result, error) {
	// Get the request-scoped transaction from context
	tx := ctx.Value("tx").(*gorm.DB)

	// Use tx instead of direct DB access so this query stays
	// consistent with the other operations in the same request
	var result model.Result
	if err := tx.First(&result).Error; err != nil {
		return nil, err
	}
	return &result, nil
}
```

Benefits:
- Consistency across operations
- Rollback on errors
- Connection pooling
Can I call other generated resolvers from custom resolvers?
Yes! Access through repositories:
```go
func (r *mutationResolver) resolver_CustomCreate(ctx context.Context, input CustomInput) (*model.User, error) {
	tx := ctx.Value("tx").(*gorm.DB)

	// Use repository methods
	user := &model.User{
		Email: input.Email,
		Name:  input.Name,
	}
	if err := r.UserRepo.Create(tx, user); err != nil {
		return nil, err
	}

	// Additional custom logic...
	return user, nil
}
```

How do I return errors from custom resolvers?
Return Go errors directly:
```go
import (
	"errors"
	"fmt"
)

func (r *queryResolver) resolver_MyQuery(...) (*model.Result, error) {
	if invalidInput {
		return nil, errors.New("invalid input provided")
	}

	// Or use fmt.Errorf for formatted errors
	if userNotFound {
		return nil, fmt.Errorf("user with id %s not found", userId)
	}
	return result, nil
}
```

These become GraphQL errors automatically.
Can I use external APIs in custom resolvers?
Yes:
```go
import "net/http"

func (r *queryResolver) resolver_GetWeather(ctx context.Context, city string) (*model.Weather, error) {
	// Call external API
	resp, err := http.Get(fmt.Sprintf("https://api.weather.com/city/%s", city))
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	// Parse and return
	// ...
}
```

How do I implement subscriptions?
Use channels:
```go
func (r *subscriptionResolver) resolver_OnNewMessage(ctx context.Context) (<-chan *model.Message, error) {
	messages := make(chan *model.Message)
	go func() {
		// Subscribe to message source (Redis, pub/sub, etc.)
		for {
			select {
			case <-ctx.Done():
				close(messages)
				return
			case msg := <-yourMessageSource:
				messages <- msg
			}
		}
	}()
	return messages, nil
}
```

Database Operations
Does the DDK support database migrations?
Auto-migration (development):
- Set `gormAutomigrate: true` in `CreateServer`
- GORM automatically creates/updates tables
- Does not drop columns or tables

Manual migrations (production):
- Set `gormAutomigrate: false`
- Use tools like `golang-migrate` or `goose`
- Better control over schema changes
Can I use composite primary keys?
Yes:
```graphql
type UserRole @required(type: "CREATE,READ,DELETE", table: "true") {
  userId: String @constraint(type: "primarykey")
  roleId: String @constraint(type: "primarykey")
}
```

Both fields together form the primary key.
How do I handle soft deletes?
Add a deletedAt field:
```graphql
type User @required(type: "CREATE,READ,UPDATE,DELETE", table: "true") {
  id: ID! @constraint(type: "primarykey")
  name: String!
  deletedAt: String # Soft delete timestamp
}
```

Then implement custom delete:
```go
func (r *mutationResolver) resolver_SoftDeleteUser(ctx context.Context, id string) (bool, error) {
	tx := ctx.Value("tx").(*gorm.DB)
	result := tx.Model(&model.User{}).Where("id = ?", id).Update("deleted_at", time.Now())
	return result.Error == nil, result.Error
}
```

Can I use different PostgreSQL schemas?
Yes, set dbSchema in CreateServer:
```javascript
{
  dbSchema: "tenant_1", // Uses the "tenant_1" schema
  dbName: "myapp_db"
}
```

All tables are created in the specified schema.
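Under the hood, one common way to point GORM at a specific PostgreSQL schema is a table prefix in its naming strategy. A configuration sketch (assumes `gorm.io/gorm`, its `schema` package, and the postgres driver; this may not be the DDK's exact mechanism):

```go
// Configuration sketch, not the DDK's exact generated code.
db, err := gorm.Open(postgres.Open(dsn), &gorm.Config{
	NamingStrategy: schema.NamingStrategy{
		TablePrefix: "tenant_1.", // every table resolves to tenant_1.<table>
	},
})
```

Alternatively, PostgreSQL's `search_path` can be set in the connection DSN to the same effect.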
How do I handle connection pooling?
GORM handles this automatically. Configure in generated infrastructure/database/:
```go
sqlDB, _ := db.DB()
sqlDB.SetMaxIdleConns(10)
sqlDB.SetMaxOpenConns(100)
sqlDB.SetConnMaxLifetime(time.Hour)
```

Performance and Optimization
How do I optimize queries with many relationships?
Use GraphQL field selection to load only needed relationships:
Client query:
```graphql
query {
  getUser(id: "1") {
    name
    # Don't request posts if not needed
  }
}
```

Generated resolvers use GORM's lazy loading, so unrequested relationships aren't fetched.
Can I add database indexes?
Yes, but requires manual migration:
```sql
CREATE INDEX idx_user_email ON users(email);
CREATE INDEX idx_post_author ON posts(author_id);
```

Or use GORM in custom code:
```go
db.Exec("CREATE INDEX idx_user_email ON users(email)")
```

How do I implement pagination?
Generated list operations have built-in pagination:
```graphql
query {
  listUsers(limit: 20, offset: 40, sortBy: "created_at", sortOrder: "DESC") {
    id
    name
  }
}
```

For custom resolvers, implement manually:
```go
func (r *queryResolver) resolver_SearchUsers(ctx context.Context, query string, limit int, offset int) ([]*model.User, error) {
	tx := ctx.Value("tx").(*gorm.DB)
	var users []*model.User
	err := tx.Where("name ILIKE ?", "%"+query+"%").
		Limit(limit).
		Offset(offset).
		Find(&users).Error
	return users, err
}
```

Should I use Redis caching?
Use Redis for:
- Session storage
- JWT token blacklists
- Frequently accessed data
- Rate limiting
- Pub/sub for subscriptions
Configure:
```javascript
{
  redisUrl: "localhost:6379"
}
```

Generated code includes a Redis client in `infrastructure/redis/`.
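With the Redis client available, the usual approach for "frequently accessed data" is cache-aside: check the cache first, fall back to PostgreSQL, and cache the result with a TTL. A dependency-free sketch of the pattern (the in-memory `cache` type stands in for the Redis client; keys and TTLs are illustrative):

```go
package main

import (
	"fmt"
	"time"
)

// entry and cache simulate Redis GET/SET with expiry so the cache-aside
// flow can be shown without a running Redis instance.
type entry struct {
	val     string
	expires time.Time
}

type cache map[string]entry

func (c cache) get(key string) (string, bool) {
	e, ok := c[key]
	if !ok || time.Now().After(e.expires) {
		return "", false
	}
	return e.val, true
}

func (c cache) set(key, val string, ttl time.Duration) {
	c[key] = entry{val: val, expires: time.Now().Add(ttl)}
}

// getUserName checks the cache first and falls back to the "database",
// caching the result for subsequent requests.
func getUserName(c cache, id string) string {
	key := "user:" + id + ":name"
	if v, ok := c.get(key); ok {
		return v // cache hit
	}
	name := "Alice" // pretend this came from PostgreSQL
	c.set(key, name, 5*time.Minute)
	return name
}

func main() {
	c := cache{}
	fmt.Println(getUserName(c, "1")) // miss: loads from the DB and caches
	fmt.Println(getUserName(c, "1")) // hit: served from the cache
}
```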
How do I batch/dataloader queries to avoid N+1?
Implement custom resolvers with batching:
```go
// Use a dataloader library like github.com/graph-gophers/dataloader
func (r *userResolver) Posts(ctx context.Context, obj *model.User) ([]*model.Post, error) {
	loader := ctx.Value("postLoader").(*PostLoader)
	return loader.Load(ctx, obj.ID)
}
```

Deployment
What's the recommended production setup?
- Application: Docker container (generated Dockerfile)
- Database: Managed PostgreSQL (AWS RDS, Azure Database, Google Cloud SQL)
- Redis: Managed Redis (ElastiCache, Azure Cache)
- Orchestration: Kubernetes or Docker Swarm
- Load Balancer: NGINX, AWS ALB, Azure Load Balancer
- Secrets: AWS Secrets Manager, Azure Key Vault
- Monitoring: Prometheus + Grafana
How do I deploy to Kubernetes?
Create Kubernetes manifests:
deployment.yaml:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-graphql
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp-graphql
  template:
    metadata:
      labels:
        app: myapp-graphql
    spec:
      containers:
        - name: graphql-server
          image: myregistry/myapp:latest
          ports:
            - containerPort: 8080
          env:
            - name: DB_URL
              valueFrom:
                secretKeyRef:
                  name: db-secret
                  key: url
```

How do I handle database migrations in production?
Option 1: Init container (Kubernetes)
```yaml
initContainers:
  - name: migrations
    image: migrate/migrate
    command: ["migrate", "-path", "/migrations", "-database", "$(DB_URL)", "up"]
```

Option 2: CI/CD pipeline
```yaml
# GitHub Actions / GitLab CI
- name: Run migrations
  run: migrate -path ./migrations -database $DB_URL up
```

Option 3: Separate migration job
```bash
# Before deploying the new version
./run-migrations.sh
```

How do I configure environment variables?
Use .env file or environment:
```bash
# .env
DB_NAME=myapp_db
DB_USER=postgres
DB_PASSWORD=securepassword
DB_URL=postgres.example.com
DB_PORT=5432
SERVER_PORT=8080
REDIS_URL=redis.example.com:6379
SERVICE_LOG_LEVEL=info
JWT_SECRET=your-secret-key
```

Load in `cmd/server.go`:
```go
import (
	"os"

	"github.com/joho/godotenv"
)

func main() {
	godotenv.Load()
	dbURL := os.Getenv("DB_URL")
	// ...
}
```

How do I enable HTTPS?
Option 1: Use a reverse proxy (recommended)
```nginx
server {
    listen 443 ssl;
    ssl_certificate /path/to/cert.pem;
    ssl_certificate_key /path/to/key.pem;

    location / {
        proxy_pass http://localhost:8080;
    }
}
```

Option 2: Configure in Go server
```go
router.RunTLS(":443", "cert.pem", "key.pem")
```

Troubleshooting
"Invalid schema" error when validating
Common causes:
- Missing primary key: every type needs `@constraint(type: "primarykey")`
- Invalid relationship: foreign key references a non-existent type
- Syntax error in a check constraint
- Missing `@required` directive
Solution: Run ValidateSchema and read error messages carefully.
"Port already in use" error
Solution 1: Change port in configuration
```javascript
{
  serverPort: 8081 // Instead of 8080
}
```

Solution 2: Kill process on port
```bash
lsof -ti:8080 | xargs kill -9
```

Generated resolvers not working after regeneration
Check:
- Did you run `ReGenerateServer` (not `CreateServer`)?
- Are schema files in the correct location (`schemaPath`)?
- Did gqlgen run successfully? (check logs)
- Did the database schema change? (run migration)
Solution: Delete generated files and regenerate:
```bash
rm -rf graph/generated
# Run ReGenerateServer
```

Custom resolvers not called
Check:
- Is the `@resolver(type: "CUSTOM")` directive present?
- Is the file named `*.custom.resolvers.go`?
- Is the file in the `graph/resolver/` directory?
- Did you regenerate after adding the directive?
Database connection failed
Check:
- PostgreSQL is running
- Credentials are correct
- Database exists
- Host/port are correct
- Firewall allows connection
Test connection:
```bash
psql -h localhost -U postgres -d myapp_db
```

Memory leaks in subscriptions
Cause: Not closing channels when client disconnects
Solution: Always check ctx.Done():
```go
func (r *subscriptionResolver) resolver_OnEvent(ctx context.Context) (<-chan *model.Event, error) {
	events := make(chan *model.Event)
	go func() {
		defer close(events) // Important!
		for {
			select {
			case <-ctx.Done(): // Client disconnected
				return
			case event := <-source:
				events <- event
			}
		}
	}()
	return events, nil
}
```

Slow query performance
Solutions:
- Add database indexes on frequently queried columns
- Use eager loading for relationships: `tx.Preload("Posts").Find(&users)`
- Implement pagination for large result sets
- Use Redis caching for frequently accessed data
- Optimize check constraints and complex joins
Type assertion panic in custom resolvers
Wrong:
```go
tx := ctx.Value("tx").(*gorm.DB) // Panics if nil
```

Right:
```go
txValue := ctx.Value("tx")
if txValue == nil {
	return nil, errors.New("no database transaction in context")
}
tx := txValue.(*gorm.DB)
```

Getting Help
Where can I find more examples?
- Examples Documentation - Real-world schemas
- Test suite: `/test/_schema/` in the repository
- Generated code: review what the DDK creates
How do I report bugs?
Contact your internal development team or check your organization's issue tracking system.
Where can I learn more about GraphQL?
The official GraphQL documentation (graphql.org) covers the query language, schema design, and best practices.
Where can I learn more about GORM?
The official GORM documentation (gorm.io) covers models, associations, constraints, and migrations.
Related Documentation
- Schema Guide - Writing schemas
- Custom Resolvers - Adding custom logic
- Architecture - How the DDK works
- Usage - Using the DDK service
- Examples - Real-world examples
