
Bunqueue Production Deployment Guide: Docker, Systemd & PM2

This guide covers deploying bunqueue in production. bunqueue is designed as a single-instance job queue - it doesn’t support clustering or horizontal scaling.

┌─────────────────────────────────────────────────────────────┐
│ Your Application │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ Web App │ │ API │ │ Workers │ │
│ └──────┬──────┘ └──────┬──────┘ └──────┬──────┘ │
│ │ │ │ │
│ └────────────────┼────────────────┘ │
│ │ │
│ ▼ │
│ ┌───────────────────────┐ │
│ │ bunqueue │ ◄── Single instance │
│ │ (embedded mode) │ │
│ └───────────┬───────────┘ │
│ │ │
│ ▼ │
│ ┌───────────────────────┐ │
│ │ SQLite Database │ ◄── Local file │
│ │ (./data/bunq.db) │ │
│ └───────────────────────┘ │
└─────────────────────────────────────────────────────────────┘
▼ (optional)
┌───────────────────────┐
│ S3 Backup │
│ (disaster recovery) │
└───────────────────────┘

Run bunqueue directly in your application process. No separate server needed.

Terminal window
# Enable SQLite persistence
export DATA_PATH=./data/bunq.db
app.ts
import { Hono } from 'hono';
import { Queue, Worker } from 'bunqueue/client';

// Your web framework (Hono, Elysia, Express...)
const app = new Hono();

// Queue is embedded in the same process (uses DATA_PATH for persistence)
const emailQueue = new Queue('emails', { embedded: true });

app.post('/send-email', async (c) => {
  const { to, subject } = await c.req.json();
  await emailQueue.add('send', { to, subject });
  return c.json({ queued: true });
});

// Worker runs in the same process (sendEmail is your own mail helper)
new Worker('emails', async (job) => {
  await sendEmail(job.data);
  return { sent: true };
}, { embedded: true, concurrency: 5 });

export default app;

Pros:

  • Simplest setup
  • No network latency
  • Single deployment unit

Cons:

  • Queue dies if app dies
  • Harder to scale workers independently

Run your API and workers as separate processes sharing the same SQLite database.

Terminal window
# Both processes MUST use the same DATA_PATH
export DATA_PATH=./data/bunq.db
// api.ts - Your web server
import { Hono } from 'hono';
import { Queue } from 'bunqueue/client';

const app = new Hono();
const queue = new Queue('tasks', { embedded: true });

app.post('/task', async (c) => {
  await queue.add('process', { data: '...' });
  return c.json({ ok: true });
});

export default app;

// worker.ts - Separate process
import { Worker } from 'bunqueue/client';

new Worker('tasks', async (job) => {
  // Heavy processing here
  return { done: true };
}, { embedded: true, concurrency: 10 });

console.log('Worker started');
Terminal window
# Run both (same DATA_PATH)
DATA_PATH=./data/bunq.db bun run api.ts &
DATA_PATH=./data/bunq.db bun run worker.ts &

Pros:

  • Workers can be restarted independently
  • Better resource isolation

Cons:

  • Two processes to manage
  • Still single SQLite file (no true distribution)

Run bunqueue as a standalone server. Interact via CLI or HTTP API.

Terminal window
# Start server
bunqueue start --tcp-port 6789 --http-port 6790
Terminal window
# Add jobs via CLI
bunqueue push emails '{"to": "user@example.com", "subject": "Hello"}'
# Or via HTTP API
curl -X POST http://localhost:6790/queues/emails/jobs \
  -H "Content-Type: application/json" \
  -d '{"data": {"to": "user@example.com"}}'
import { Queue, Worker } from 'bunqueue/client';

// Connects to localhost:6789 by default
const queue = new Queue('emails');
await queue.add('send', { to: 'user@example.com' });

// Worker also connects to the server
const worker = new Worker('emails', async (job) => {
  await sendEmail(job.data);
  return { sent: true };
});
FROM oven/bun:1
WORKDIR /app
# Copy package files
COPY package.json bun.lockb ./
RUN bun install --frozen-lockfile --production
# Copy application
COPY . .
# Create data directory
RUN mkdir -p /app/data
# Environment
ENV DATA_PATH=/app/data/bunq.db
ENV NODE_ENV=production
# Health check
HEALTHCHECK --interval=30s --timeout=3s \
  CMD curl -f http://127.0.0.1:6790/health || exit 1
EXPOSE 6789 6790
CMD ["bun", "run", "start"]
version: '3.8'

services:
  bunqueue:
    build: .
    ports:
      - "6789:6789"  # TCP
      - "6790:6790"  # HTTP
    volumes:
      - bunqueue-data:/app/data
    environment:
      - DATA_PATH=/app/data/bunq.db
      - AUTH_TOKENS=${AUTH_TOKENS}
      - S3_BACKUP_ENABLED=1
      - S3_ACCESS_KEY_ID=${S3_ACCESS_KEY_ID}
      - S3_SECRET_ACCESS_KEY=${S3_SECRET_ACCESS_KEY}
      - S3_BUCKET=${S3_BUCKET}
      - S3_REGION=${S3_REGION}
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 512M
        reservations:
          memory: 256M

volumes:
  bunqueue-data:

For bare-metal or VM deployments:

/etc/systemd/system/bunqueue.service
[Unit]
Description=bunqueue Job Queue
After=network.target
[Service]
Type=simple
User=bunqueue
Group=bunqueue
WorkingDirectory=/var/lib/bunqueue
ExecStart=/usr/local/bin/bunqueue start
Restart=always
RestartSec=5
# Environment (DATA_PATH must sit inside ReadWritePaths, or
# ProtectSystem=strict will block writes to the database)
Environment=NODE_ENV=production
Environment=DATA_PATH=/var/lib/bunqueue/bunq.db
EnvironmentFile=/etc/env
# Security
NoNewPrivileges=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/var/lib/bunqueue
# Resource limits
MemoryMax=512M
CPUQuota=200%
[Install]
WantedBy=multi-user.target
Terminal window
# Install
sudo systemctl daemon-reload
sudo systemctl enable bunqueue
sudo systemctl start bunqueue
# Check status
sudo systemctl status bunqueue
sudo journalctl -u bunqueue -f

Compile bunqueue into a standalone executable for production deployment.

Terminal window
# Clone the repository
git clone https://github.com/egeominotti/bunqueue.git
cd bunqueue
# Install dependencies
bun install
# Build standalone binary
bun run build

This creates dist/bunqueue (~56 MB), a self-contained executable with no runtime dependencies.

Terminal window
# Check version
./dist/bunqueue --version
# Show help
./dist/bunqueue --help
# Start server
./dist/bunqueue start
Terminal window
# Copy to system path
sudo cp dist/bunqueue /usr/local/bin/
# Verify installation
bunqueue --version

For cross-platform process management with PM2:

First, build the standalone executable:

Terminal window
bun run build

Then configure PM2:

ecosystem.config.js
module.exports = {
  apps: [{
    name: 'bunqueue',
    script: '/usr/local/bin/bunqueue', // Compiled binary
    args: 'start',
    instances: 1, // Single instance only - no cluster mode
    exec_mode: 'fork',
    autorestart: true,
    watch: false,
    max_memory_restart: '512M',
    env: {
      NODE_ENV: 'production',
      DATA_PATH: '/var/lib/bunqueue/bunq.db',
      TCP_PORT: 6789,
      HTTP_PORT: 6790,
    },
    error_file: '/var/log/error.log',
    out_file: '/var/log/out.log',
    log_date_format: 'YYYY-MM-DD HH:mm:ss Z',
    merge_logs: true,
  }]
};

For development or when using the source directly:

ecosystem.config.js
module.exports = {
  apps: [{
    name: 'bunqueue',
    script: 'bun',
    args: 'run start',
    cwd: '/opt/bunqueue',
    instances: 1,
    exec_mode: 'fork',
    autorestart: true,
    max_memory_restart: '512M',
    env: {
      NODE_ENV: 'production',
      DATA_PATH: '/var/lib/bunqueue/bunq.db',
      TCP_PORT: 6789,
      HTTP_PORT: 6790,
    },
  }]
};
Terminal window
# Start
pm2 start ecosystem.config.js
# Restart
pm2 restart bunqueue
# Stop
pm2 stop bunqueue
# View logs
pm2 logs bunqueue
# Monitor
pm2 monit
# Save process list for startup
pm2 save
pm2 startup
Variable               Description                    Default
DATA_PATH              SQLite database path           in-memory
TCP_PORT               TCP server port                6789
HTTP_PORT              HTTP server port               6790
AUTH_TOKENS            Comma-separated auth tokens    -
S3_BACKUP_ENABLED      Enable S3 backups              0
S3_ACCESS_KEY_ID       S3 access key                  -
S3_SECRET_ACCESS_KEY   S3 secret key                  -
S3_BUCKET              S3 bucket name                 -
S3_REGION              S3 region                      us-east-1
S3_ENDPOINT            Custom S3 endpoint             -
S3_BACKUP_INTERVAL     Backup interval (ms)           21600000 (6h)
S3_BACKUP_RETENTION    Backups to keep                7
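To catch misconfiguration at startup rather than at runtime, the variables above can be read and defaulted in one place. A sketch (the helper name and validation are my own; the defaults mirror the table):

```typescript
// Centralised read of the environment variables documented above.
// Defaults mirror the table; a missing DATA_PATH means in-memory mode.
function readConfig(env: Record<string, string | undefined>) {
  const int = (v: string | undefined, dflt: number) => {
    const n = v === undefined ? dflt : Number(v);
    if (!Number.isInteger(n) || n < 0) throw new Error(`invalid value: ${v}`);
    return n;
  };
  return {
    dataPath: env.DATA_PATH, // undefined → in-memory
    tcpPort: int(env.TCP_PORT, 6789),
    httpPort: int(env.HTTP_PORT, 6790),
    authTokens: env.AUTH_TOKENS?.split(',').filter(Boolean) ?? [],
    s3BackupEnabled: env.S3_BACKUP_ENABLED === '1',
    s3BackupInterval: int(env.S3_BACKUP_INTERVAL, 21_600_000), // 6h
    s3BackupRetention: int(env.S3_BACKUP_RETENTION, 7),
  };
}

// Usage: const config = readConfig(process.env);
```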
Terminal window
S3_BACKUP_ENABLED=1
S3_ACCESS_KEY_ID=AKIA...
S3_SECRET_ACCESS_KEY=...
S3_BUCKET=my-bunqueue-backups
S3_REGION=us-east-1
S3_BACKUP_INTERVAL=3600000 # Every hour
S3_BACKUP_RETENTION=24 # Keep 24 backups
Terminal window
S3_BACKUP_ENABLED=1
S3_ACCESS_KEY_ID=...
S3_SECRET_ACCESS_KEY=...
S3_BUCKET=bunqueue-backups
S3_ENDPOINT=https://ACCOUNT_ID.r2.cloudflarestorage.com
S3_REGION=auto
Terminal window
S3_BACKUP_ENABLED=1
S3_ACCESS_KEY_ID=minioadmin
S3_SECRET_ACCESS_KEY=minioadmin
S3_BUCKET=bunqueue
S3_ENDPOINT=http://minio:9000
S3_REGION=us-east-1

bunqueue exposes health endpoints:

Terminal window
# HTTP health check (detailed)
curl http://localhost:6790/health
# {"ok":true,"status":"healthy","uptime":3600,"version":"2.5.7",
# "queues":{"waiting":5,"active":2},"connections":{"ws":0,"sse":0},
# "memory":{"heapUsed":45,"heapTotal":64,"rss":128}}
# Simple liveness probe
curl http://localhost:6790/healthz
# OK
# Readiness probe
curl http://localhost:6790/ready
# {"ok":true,"ready":true}
# Queue stats
curl http://localhost:6790/stats
# {"ok":true,"stats":{"waiting":5,"active":2,"completed":1000,"dlq":0}}
# Prometheus metrics (text format)
curl http://localhost:6790/prometheus
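For custom monitoring scripts, the /health JSON shown above is easy to evaluate programmatically. A sketch, with field names taken from the sample response; the waiting-jobs threshold and the `notifyOnCall` hook are illustrative:

```typescript
// Decide whether a /health response looks alarming. Field names come
// from the sample response above; the backlog threshold is arbitrary.
interface Health {
  ok: boolean;
  status: string;
  queues: { waiting: number; active: number };
}

function isHealthy(h: Health, maxWaiting = 1000): boolean {
  return h.ok && h.status === 'healthy' && h.queues.waiting <= maxWaiting;
}

// Usage against a running server:
// const h: Health = await (await fetch('http://localhost:6790/health')).json();
// if (!isHealthy(h)) notifyOnCall();
```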
livenessProbe:
  httpGet:
    path: /healthz
    port: 6790
  initialDelaySeconds: 5
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /ready
    port: 6790
  initialDelaySeconds: 5
  periodSeconds: 5
Workload                       Recommended RAM
Light (<1k jobs/day)           128 MB
Medium (1k-10k jobs/day)       256 MB
Heavy (10k-100k jobs/day)      512 MB
Very Heavy (>100k jobs/day)    1 GB+

SQLite database size depends on:

  • Number of jobs retained
  • Job data size
  • removeOnComplete setting
// Reduce disk usage
import { Queue } from 'bunqueue/client';

new Queue('tasks', {
  defaultJobOptions: {
    removeOnComplete: true, // Don't keep completed jobs
    removeOnFail: false,    // Keep failed for debugging
  }
});
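As rough arithmetic for the factors above (the 1.5× overhead factor is a guess to cover row and index overhead, not a measured value):

```typescript
// Back-of-the-envelope database size: retained jobs × average payload,
// padded for per-row and index overhead. The 1.5× factor is illustrative.
function estimateDbBytes(retainedJobs: number, avgPayloadBytes: number, overhead = 1.5): number {
  return Math.ceil(retainedJobs * avgPayloadBytes * overhead);
}

// e.g. 100k retained jobs at ~1 KB each ≈ 154 MB
const bytes = estimateDbBytes(100_000, 1024);
```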

bunqueue is I/O bound, not CPU bound. A single core handles most workloads.

  1. Enable S3 backups

    Don’t skip this. SQLite corruption = data loss.

  2. Set auth tokens

    Terminal window
    AUTH_TOKENS=token1,token2,token3
  3. Use durable: true for critical jobs

    // Payments, orders, and critical events
    await queue.add('payment', data, { durable: true });
  4. Configure resource limits

    Prevent runaway memory/CPU usage.

  5. Set up monitoring

    Scrape /prometheus with Prometheus or similar.

  6. Configure log aggregation

    Send logs to a central system.

  7. Test backup restoration

    Terminal window
    # List backups first
    bunqueue backup list
    # Then restore by key
    bunqueue backup restore backups/bunq-2026-01-30T12:00:00.db --force
  8. Set up alerts

    • DLQ count > threshold
    • Waiting jobs growing
    • Worker not processing
  • ❌ No multi-node deployment
  • ❌ No automatic failover
  • ❌ No distributed processing across machines
  • ✅ Multiple workers in same process (concurrency)
  • ✅ Multiple worker processes on same machine (shared SQLite)
Scenario      Jobs/day    bunqueue?
Small SaaS    <10k        ✅ Perfect
Medium app    10k-100k    ✅ Fine
Large app     100k-1M     ✅ Tested
Enterprise    >1M         ⚠️ Test first

If you need:

  • High availability → Redis + BullMQ with Sentinel
  • Distributed processing → Kafka, RabbitMQ
  • Multi-region → Managed queues (SQS, Cloud Tasks)
  • Complex workflows → Temporal, Inngest

bunqueue scales vertically well:

  • More RAM = more jobs in memory
  • Faster disk (NVMe) = faster SQLite
  • More CPU cores = more worker concurrency
// Scale worker concurrency with available CPUs
import { cpus } from 'os';
import { Worker } from 'bunqueue/client';

new Worker('tasks', processor, {
  concurrency: cpus().length * 2
});
Every 1 hour → S3 backup
Every 6 hours → Verify backup integrity
Every day → Test restore in staging
  1. Stop bunqueue

    Terminal window
    systemctl stop bunqueue
  2. List available backups

    Terminal window
    bunqueue backup list
  3. Restore from backup

    Terminal window
    bunqueue backup restore backups/bunq-2024-01-30T12:00:00.db --force
  4. Start bunqueue

    Terminal window
    systemctl start bunqueue

SQLite WAL mode allows recovery to recent states:

Terminal window
# Manual backup must include the WAL file (stop bunqueue first so the
# files are consistent, or use sqlite3's ".backup" on a live database)
cp data/bunq.db data/bunq.db-wal /backup/
# Restore
cp /backup/bunq.db* data/
  • Run behind reverse proxy (nginx, Caddy)
  • Use TLS for external connections
  • Firewall TCP/HTTP ports
Terminal window
# Generate strong tokens
AUTH_TOKENS=$(openssl rand -hex 32),$(openssl rand -hex 32)
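If you front the HTTP API with your own reverse proxy, you can pre-validate credentials before forwarding. A sketch of a token check against AUTH_TOKENS; the Bearer scheme and helper names are my own assumptions for such a proxy, not bunqueue's documented auth mechanism:

```typescript
// Validate an incoming Authorization header against AUTH_TOKENS before
// forwarding to bunqueue. The Bearer scheme here is an assumption about
// your own proxy layer, not a documented bunqueue behaviour.
function parseAuthTokens(raw: string | undefined): Set<string> {
  return new Set((raw ?? '').split(',').map((t) => t.trim()).filter(Boolean));
}

function isAuthorized(header: string | undefined, tokens: Set<string>): boolean {
  if (tokens.size === 0) return false; // fail closed when unconfigured
  const token = header?.startsWith('Bearer ') ? header.slice(7) : undefined;
  return token !== undefined && tokens.has(token);
}

// Usage:
// const tokens = parseAuthTokens(process.env.AUTH_TOKENS);
// if (!isAuthorized(req.headers.get('authorization') ?? undefined, tokens)) return 401;
```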
Terminal window
# Restrict database access
chmod 600 /var/lib/bunqueue/bunq.db
chown bunqueue:bunqueue /var/lib/bunqueue/bunq.db
prometheus.yml
scrape_configs:
  - job_name: 'bunqueue'
    static_configs:
      - targets: ['localhost:6790']
    metrics_path: /prometheus
Metric                        Alert Threshold
bunqueue_jobs_waiting         > 1000 for 5 min
bunqueue_jobs_dlq             > 10
bunqueue_jobs_active          0 for 5 min (workers dead?)
bunqueue_jobs_failed_total    increasing rapidly
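The thresholds above translate directly into Prometheus alerting rules. A sketch (rule names and severities are my own; the metric names are assumed to match the table):

```yaml
# alerts.yml - example alerting rules for the thresholds above
groups:
  - name: bunqueue
    rules:
      - alert: BunqueueBacklogGrowing
        expr: bunqueue_jobs_waiting > 1000
        for: 5m
        labels:
          severity: warning
      - alert: BunqueueDlqNonEmpty
        expr: bunqueue_jobs_dlq > 10
        labels:
          severity: critical
      - alert: BunqueueWorkersIdle
        expr: bunqueue_jobs_active == 0
        for: 5m
        labels:
          severity: warning
```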
/etc/env
NODE_ENV=production
DATA_PATH=/var/lib/bunqueue/bunq.db
TCP_PORT=6789
HTTP_PORT=6790
AUTH_TOKENS=prod-token-abc123,deploy-token-xyz789
# S3 Backups
S3_BACKUP_ENABLED=1
S3_ACCESS_KEY_ID=AKIA...
S3_SECRET_ACCESS_KEY=...
S3_BUCKET=company-bunqueue-backups
S3_REGION=eu-west-1
S3_BACKUP_INTERVAL=3600000
S3_BACKUP_RETENTION=48
production.ts
import { Queue, Worker } from 'bunqueue/client';
const queue = new Queue('production-tasks', {
embedded: true,
defaultJobOptions: {
attempts: 5,
backoff: 5000,
removeOnComplete: true,
}
});
// Configure DLQ alerts
queue.setDlqConfig({
maxEntries: 1000,
maxAge: 7 * 24 * 60 * 60 * 1000, // 7 days
});
// Regular jobs (buffered writes - high throughput)
await queue.add('send-email', { to: 'user@example.com' });
// Critical jobs (immediate disk write - no data loss)
await queue.add('process-payment', { orderId: '123' }, { durable: true });
// Worker with production settings
new Worker('production-tasks', async (job) => {
await job.updateProgress(0, 'Starting...');
try {
const result = await processJob(job.data);
await job.log(`Completed: ${JSON.stringify(result)}`);
return result;
} catch (error) {
await job.log(`Error: ${error.message}`);
throw error;
}
}, {
embedded: true,
concurrency: 10,
heartbeatInterval: 5000,
});
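With attempts: 5 and backoff: 5000 as configured above, a failing job is retried four times. A quick sketch of the resulting schedule, assuming a plain numeric backoff means a fixed delay between attempts (verify against your bunqueue version if it supports exponential policies):

```typescript
// Retry offsets (ms from the first failure) for a fixed backoff policy.
// Assumes a numeric `backoff` value means a constant delay between attempts.
function retryOffsets(attempts: number, backoffMs: number): number[] {
  // Attempt 1 is the original run; attempts 2..N are retries.
  return Array.from({ length: attempts - 1 }, (_, i) => (i + 1) * backoffMs);
}

// attempts: 5, backoff: 5000 → retries at 5s, 10s, 15s and 20s after the
// first failure (if every attempt fails immediately).
const offsets = retryOffsets(5, 5000);
```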