Deploying bunqueue to Production
bunqueue is designed as a single-instance job queue. This simplifies deployment significantly - no clustering, no leader election, no split-brain concerns. Here’s how to deploy it reliably.
Docker Deployment
The recommended approach for most teams:
```dockerfile
FROM oven/bun:1-alpine

WORKDIR /app

# Install bunqueue globally
RUN bun add -g bunqueue

# Create data directory
RUN mkdir -p /data

# Health check
HEALTHCHECK --interval=30s --timeout=5s --start-period=10s --retries=3 \
  CMD wget -qO- http://localhost:6790/health || exit 1

EXPOSE 6789 6790

CMD ["bunqueue", "start", \
  "--tcp-port", "6789", \
  "--http-port", "6790", \
  "--data-path", "/data/bunqueue.db"]
```

Run with a persistent volume:
```bash
docker run -d \
  --name bunqueue \
  -p 6789:6789 \
  -p 6790:6790 \
  -v bunqueue-data:/data \
  --restart unless-stopped \
  bunqueue-server
```

systemd Service
For bare-metal or VPS deployments:
```ini
[Unit]
Description=bunqueue Job Queue Server
After=network.target

[Service]
Type=simple
User=bunqueue
Group=bunqueue
WorkingDirectory=/opt/bunqueue
ExecStart=/usr/local/bin/bun run bunqueue start \
  --tcp-port 6789 \
  --data-path /var/lib/bunqueue/queue.db
Restart=always
RestartSec=5

# Resource limits
LimitNOFILE=65535
MemoryMax=1G

# Security
NoNewPrivileges=true
ProtectSystem=strict
ReadWritePaths=/var/lib/bunqueue

[Install]
WantedBy=multi-user.target
```

Enable and start the service:

```bash
sudo systemctl enable bunqueue
sudo systemctl start bunqueue
sudo systemctl status bunqueue
```

Environment Variables
Configure bunqueue through environment variables in production:
```bash
# Server
TCP_PORT=6789
HTTP_PORT=6790
HOST=0.0.0.0
DATA_PATH=/var/lib/bunqueue/queue.db

# Authentication
AUTH_TOKENS=your-secret-token-here

# Timeouts
SHUTDOWN_TIMEOUT_MS=30000
WORKER_TIMEOUT_MS=30000

# S3 Backup
S3_BACKUP_ENABLED=1
S3_BUCKET=my-bunqueue-backups
S3_ACCESS_KEY_ID=your-key
S3_SECRET_ACCESS_KEY=your-secret
S3_REGION=us-east-1
S3_BACKUP_INTERVAL=21600000  # Every 6 hours
S3_BACKUP_RETENTION=7        # Keep 7 days
```

Health Checks
bunqueue exposes HTTP health endpoints:
```bash
# Basic health check
curl http://localhost:6790/health
# Returns: { "status": "ok", "uptime": 3600, "memory": {...} }
```

```typescript
// TCP ping (from your application)
const queue = new Queue('test', {
  connection: { host: 'localhost', port: 6789 },
});
await queue.waitUntilReady(); // Pings the server
```

Use the HTTP health endpoint for load balancer checks, Docker HEALTHCHECK, and Kubernetes liveness probes.
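A liveness probe can be sketched around the `/health` endpoint. The response shape (`status`, `uptime`, `memory`) follows the example above; treat the exact fields as an assumption and verify them against your bunqueue version. The `probe` helper and its defaults are illustrative:

```typescript
// Minimal shape of the /health response used here (assumed from the example above).
interface HealthResponse {
  status: string;
  uptime: number;
}

// Pure check: a body counts as healthy only when status is "ok".
function isHealthy(body: HealthResponse): boolean {
  return body.status === "ok";
}

// Probe with a timeout so a hung server fails fast instead of blocking
// the health check (hypothetical helper, not part of bunqueue's API).
async function probe(
  url = "http://localhost:6790/health",
  timeoutMs = 5000,
): Promise<boolean> {
  try {
    const res = await fetch(url, { signal: AbortSignal.timeout(timeoutMs) });
    if (!res.ok) return false;
    return isHealthy((await res.json()) as HealthResponse);
  } catch {
    return false; // network error, timeout, or malformed body
  }
}
```

Returning `false` on any error keeps the probe usable as-is in a load balancer or Kubernetes liveness check, where only the boolean outcome matters.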
Graceful Shutdown
bunqueue handles SIGTERM for graceful shutdown:
- Stop accepting new connections
- Wait for active jobs to complete (up to `SHUTDOWN_TIMEOUT_MS`)
- Flush the write buffer to SQLite
- Close the database
- Exit cleanly
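The drain step above is essentially a race between in-flight jobs and the `SHUTDOWN_TIMEOUT_MS` deadline. A standalone sketch of that logic (an illustration, not bunqueue's actual internals; `activeJobs` is a stand-in for whatever tracks running work):

```typescript
// Race the active jobs against the shutdown deadline. Returns how the
// drain ended so the caller can log or adjust the timeout.
async function gracefulShutdown(
  activeJobs: Promise<void>[],
  timeoutMs: number,
): Promise<"drained" | "timed-out"> {
  const drained = Promise.all(activeJobs).then(() => "drained" as const);
  const deadline = new Promise<"timed-out">((resolve) =>
    setTimeout(() => resolve("timed-out"), timeoutMs),
  );
  const result = await Promise.race([drained, deadline]);
  // In the sequence above, the write buffer is flushed to SQLite and the
  // database closed at this point, whether or not the drain timed out.
  return result;
}
```

Jobs that outlive the deadline are not killed by the race itself; the timeout only bounds how long shutdown waits before moving on to the flush-and-close steps.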
```bash
# Docker stop sends SIGTERM, waits 10s, then SIGKILL
docker stop bunqueue

# For longer-running jobs, increase the stop timeout
docker stop -t 60 bunqueue
```

Resource Sizing
bunqueue’s memory usage scales with the number of in-flight jobs:
| In-Flight Jobs | Approximate RAM |
|---|---|
| 1,000 | ~50 MB |
| 10,000 | ~200 MB |
| 100,000 | ~800 MB |
| 1,000,000 | ~3 GB |
The SQLite database size depends on job data size and retention settings. A typical deployment with `removeOnComplete: true` stays compact.
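For capacity planning, the sizing table can be turned into a rough estimator by interpolating between its rows. The figures are approximate, so treat this as a heuristic only:

```typescript
// (in-flight jobs, approximate RAM in MB) pairs from the sizing table above.
const SIZING: [jobs: number, mb: number][] = [
  [1_000, 50],
  [10_000, 200],
  [100_000, 800],
  [1_000_000, 3_000],
];

// Linear interpolation between table rows; clamps at both ends.
function estimateRamMb(inFlightJobs: number): number {
  if (inFlightJobs <= SIZING[0][0]) return SIZING[0][1];
  for (let i = 1; i < SIZING.length; i++) {
    const [jobs, mb] = SIZING[i];
    if (inFlightJobs <= jobs) {
      const [prevJobs, prevMb] = SIZING[i - 1];
      const t = (inFlightJobs - prevJobs) / (jobs - prevJobs);
      return Math.round(prevMb + t * (mb - prevMb));
    }
  }
  return SIZING[SIZING.length - 1][1]; // beyond the table: use the top row
}
```

For example, 55,000 in-flight jobs lands halfway between the 10,000 and 100,000 rows, suggesting roughly 500 MB plus headroom for spikes.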
Monitoring Checklist
Essential metrics to track in production:
```bash
# Queue metrics (via HTTP API)
curl http://localhost:6790/metrics

# Prometheus format
curl http://localhost:6790/prometheus
```

Key metrics to alert on:
- DLQ size growing - indicates systematic failures
- Active jobs count > expected - jobs may be stalled
- Memory usage approaching limit - adjust `maxEntries` settings
- Waiting queue depth - add more workers or increase concurrency
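The four alert rules above can be wired into a simple threshold check. The metric field names (`dlqSize`, `active`, `waiting`, `memoryBytes`) are assumptions for illustration; map them to whatever `/metrics` actually returns in your bunqueue version:

```typescript
// Assumed shape of the relevant /metrics fields (verify against your server).
interface QueueMetrics {
  dlqSize: number;
  active: number;
  waiting: number;
  memoryBytes: number;
}

interface Thresholds {
  maxDlq: number;
  maxActive: number;
  maxWaiting: number;
  maxMemoryBytes: number;
}

// Returns one message per breached threshold; an empty array means quiet.
function alerts(m: QueueMetrics, t: Thresholds): string[] {
  const out: string[] = [];
  if (m.dlqSize > t.maxDlq) out.push("DLQ growing: check for systematic failures");
  if (m.active > t.maxActive) out.push("active jobs above expected: jobs may be stalled");
  if (m.memoryBytes > t.maxMemoryBytes) out.push("memory near limit: adjust maxEntries");
  if (m.waiting > t.maxWaiting) out.push("waiting depth high: add workers or raise concurrency");
  return out;
}
```

Run a check like this on a schedule and forward the messages to your alerting system of choice.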
Backup Strategy
Enable S3 backup for disaster recovery:
```bash
S3_BACKUP_ENABLED=1
S3_BACKUP_INTERVAL=21600000  # Every 6 hours
S3_BACKUP_RETENTION=7        # Keep 7 days of backups
```

For critical deployments, also consider:
- Filesystem-level snapshots (LVM, ZFS, or cloud provider snapshots)
- Replicating the SQLite file to a secondary location
- Monitoring backup success/failure with alerts
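The retention arithmetic behind `S3_BACKUP_RETENTION=7` is simple: keep backups newer than N days, prune the rest. bunqueue applies this server-side; this standalone helper only illustrates the cutoff logic, for instance when replicating the same policy for your own filesystem snapshots:

```typescript
// Given backup timestamps (ms since epoch), return those older than the
// retention window. `nowMs` is injectable so the cutoff is testable.
function backupsToPrune(
  backupTimesMs: number[],
  retentionDays: number,
  nowMs: number = Date.now(),
): number[] {
  const cutoff = nowMs - retentionDays * 24 * 60 * 60 * 1000;
  return backupTimesMs.filter((t) => t < cutoff);
}
```

With `retentionDays = 7` and four-times-daily backups, roughly 28 backups are retained at any time; size your bucket (or snapshot storage) accordingly.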