# S3 Backup and Disaster Recovery for SQLite Job Queues
bunqueue stores everything in a single SQLite file. This makes backups simple - but “simple” doesn’t mean “optional.” Here’s how to set up automated S3 backups and recover from disasters.
## Why Backup?

SQLite is crash-safe (WAL mode + fsync), but it can’t protect against:
- Disk failure - hardware dies, data is gone
- Accidental deletion - `rm -rf` happens
- Corruption - filesystem bugs, power loss during write
- Migration errors - bad deploy wipes the data directory
S3 backup gives you point-in-time recovery with minimal effort.
## Enabling S3 Backup

Configure via environment variables or a configuration file:

```sh
# Required
S3_BACKUP_ENABLED=1
S3_BUCKET=my-bunqueue-backups
S3_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
S3_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
S3_REGION=us-east-1

# Optional
S3_ENDPOINT=                # Custom endpoint (MinIO, R2, etc.)
S3_BACKUP_INTERVAL=21600000 # Every 6 hours (default)
S3_BACKUP_RETENTION=7       # Keep 7 days of backups
```

Or pass them when starting the server:

```sh
S3_BACKUP_ENABLED=1 \
S3_BUCKET=my-backups \
S3_REGION=us-east-1 \
bunqueue start --data-path ./data/queue.db
```

## Compatible Storage Providers
Any S3-compatible storage works:
| Provider | `S3_ENDPOINT` | Notes |
|---|---|---|
| AWS S3 | (empty) | Default |
| Cloudflare R2 | `https://<account>.r2.cloudflarestorage.com` | No egress fees |
| MinIO | `http://minio:9000` | Self-hosted |
| DigitalOcean Spaces | `https://<region>.digitaloceanspaces.com` | Simple setup |
| Backblaze B2 | `https://s3.<region>.backblazeb2.com` | Cheapest storage |
```sh
# Cloudflare R2 example
S3_BACKUP_ENABLED=1
S3_BUCKET=bunqueue-backups
S3_ENDPOINT=https://abc123.r2.cloudflarestorage.com
S3_ACCESS_KEY_ID=your-r2-key
S3_SECRET_ACCESS_KEY=your-r2-secret
S3_REGION=auto
```

## Backup Process

The backup runs on a timer (default: every 6 hours):
- Checkpoint WAL - forces all pending writes to the main database file
- Create consistent snapshot - SQLite’s backup API ensures a point-in-time consistent copy
- Compress - the backup is compressed before upload
- Upload to S3 - stored with a timestamp-based key
- Cleanup old backups - removes backups older than retention period
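The steps above can be sketched with Python's stdlib `sqlite3` and `gzip`. This is illustrative only, not bunqueue's actual implementation: the function name is hypothetical and the S3 upload is left as a stub.

```python
import gzip
import shutil
import sqlite3
from datetime import datetime, timezone

def backup_once(db_path: str, out_dir: str = ".") -> str:
    """Illustrative sketch of one backup cycle: checkpoint, snapshot, compress."""
    src = sqlite3.connect(db_path)
    # 1. Checkpoint WAL: flush pending writes into the main database file
    src.execute("PRAGMA wal_checkpoint(TRUNCATE)")

    # 2. Consistent point-in-time snapshot via SQLite's online backup API
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H-%M-%S")
    snapshot_path = f"{out_dir}/backup-{stamp}.db"
    dst = sqlite3.connect(snapshot_path)
    src.backup(dst)
    dst.close()
    src.close()

    # 3. Compress before upload
    gz_path = snapshot_path + ".gz"
    with open(snapshot_path, "rb") as f_in, gzip.open(gz_path, "wb") as f_out:
        shutil.copyfileobj(f_in, f_out)

    # 4. Upload under a timestamp-based key (stub: wire up your S3 client here)
    # upload(gz_path, key=f"bunqueue-backups/backup-{stamp}.db.gz")
    return gz_path
```

Because the backup API copies pages through a live connection, the queue keeps serving jobs while the snapshot is taken.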
## Backup File Naming

Backups are stored with this key pattern:

```
bunqueue-backups/
  backup-2024-01-15T00-00-00.db.gz
  backup-2024-01-15T06-00-00.db.gz
  backup-2024-01-15T12-00-00.db.gz
  backup-2024-01-15T18-00-00.db.gz
```

## Disaster Recovery
To restore from a backup:

1. Stop bunqueue

   ```sh
   systemctl stop bunqueue
   # or: docker stop bunqueue
   ```

2. Download the latest backup

   ```sh
   aws s3 cp s3://my-bunqueue-backups/backup-2024-01-15T18-00-00.db.gz ./
   gunzip backup-2024-01-15T18-00-00.db.gz
   ```

3. Replace the database file

   ```sh
   # Back up the current (possibly corrupted) file
   mv /var/lib/bunqueue/queue.db /var/lib/bunqueue/queue.db.corrupted

   # Restore from backup
   mv backup-2024-01-15T18-00-00.db /var/lib/bunqueue/queue.db
   ```

4. Restart bunqueue

   ```sh
   systemctl start bunqueue
   # or: docker start bunqueue
   ```
bunqueue will recover the queue state from the restored database, reloading pending jobs, cron schedules, and DLQ entries.
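Before restarting, it's worth verifying the restored file with SQLite's built-in integrity check. A minimal sketch using Python's stdlib `sqlite3` (the function name is illustrative):

```python
import sqlite3

def verify_restore(db_path: str) -> bool:
    """Return True if SQLite reports the database file as structurally sound."""
    conn = sqlite3.connect(db_path)
    try:
        # "ok" means every page, index, and freelist entry checks out
        result = conn.execute("PRAGMA integrity_check").fetchone()[0]
    finally:
        conn.close()
    return result == "ok"
```

The same check can be run from the shell with `sqlite3 queue.db "PRAGMA integrity_check"`.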
## Backup Monitoring

Monitor backup health in production:

```sh
# Check backup status via health endpoint
curl http://localhost:6790/health
# Response includes last backup time and status
```

Set up alerts for:
- No backup in 2x the interval - backup process may be failing
- Backup size anomalies - sudden size changes may indicate issues
- S3 upload failures - check credentials and bucket permissions
## Supplementary Strategies

S3 backup covers most scenarios, but consider layering additional protection:

### Filesystem Snapshots

```sh
# LVM snapshot (instant, zero downtime)
lvcreate -s -n bunqueue-snap -L 1G /dev/vg0/bunqueue

# ZFS snapshot
zfs snapshot tank/bunqueue@daily
```

### Cron-Based Local Backup

```sh
# Copy SQLite file every hour (WAL must be checkpointed first)
0 * * * * sqlite3 /var/lib/bunqueue/queue.db ".backup /backups/queue-$(date +\%H).db"
```

### Replication to a Secondary Server

```sh
# rsync the database file periodically
*/30 * * * * rsync -az /var/lib/bunqueue/ backup-server:/bunqueue-replica/
```

## Best Practices
- Enable S3 backup from day one - don’t wait for your first data loss
- Test recovery regularly - a backup you can’t restore from is worthless
- Monitor backup health - alert on missed backups
- Use retention policies - keep 7-30 days depending on your needs
- Consider R2 or B2 for cost-effective storage (no egress fees with R2)