# S3 Backup
Automated backups to any S3-compatible storage with gzip compression and SHA256 integrity verification.
## Configuration

```bash
# Environment variables
S3_BACKUP_ENABLED=1
S3_ACCESS_KEY_ID=your-access-key
S3_SECRET_ACCESS_KEY=your-secret-key
S3_BUCKET=my-backups
S3_REGION=us-east-1
S3_BACKUP_INTERVAL=21600000   # 6 hours
S3_BACKUP_RETENTION=7         # Keep 7 backups
S3_BACKUP_PREFIX=backups/     # Default prefix
```

## Supported Providers
| Provider | Endpoint |
|---|---|
| AWS S3 | (default) |
| Cloudflare R2 | `https://<account>.r2.cloudflarestorage.com` |
| MinIO | `http://localhost:9000` |
| DigitalOcean Spaces | `https://<region>.digitaloceanspaces.com` |
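As an illustration of what targeting a non-default provider involves, here is a sketch using Bun's built-in `S3Client` pointed at Cloudflare R2. This is not bunqueue's own wiring; the option names are assumptions about Bun's S3 API.

```ts
import { S3Client } from "bun";

// Sketch only: an S3 client aimed at Cloudflare R2 instead of AWS.
// Endpoint and bucket are placeholders taken from the table above.
const r2 = new S3Client({
  endpoint: "https://<account>.r2.cloudflarestorage.com",
  bucket: "my-backups",
  accessKeyId: process.env.S3_ACCESS_KEY_ID,
  secretAccessKey: process.env.S3_SECRET_ACCESS_KEY,
});
```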
## CLI Commands

```bash
# Create backup now
bunqueue backup now

# List backups
bunqueue backup list

# Restore from backup
bunqueue backup restore <key>
bunqueue backup restore <key> -f   # Force overwrite

# Check status
bunqueue backup status
```

## Backup Contents
Each backup includes:
- SQLite database file (all jobs, cron, DLQ), compressed with gzip
- Metadata file (`.meta.json`) with timestamp, version, original size, compressed size, and SHA256 checksum
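The section doesn't pin down the exact field names, but based on the list above the metadata file plausibly has a shape like this (property names are inferred, not taken from bunqueue's source):

```ts
// Possible shape of the .meta.json sidecar file; field names are
// inferred from the description above, not from bunqueue's code.
interface BackupMeta {
  timestamp: string;       // when the backup was created (e.g. ISO 8601)
  version: string;         // bunqueue version that produced the backup
  originalSize: number;    // database size in bytes, before gzip
  compressedSize: number;  // uploaded size in bytes, after gzip
  sha256: string;          // checksum of the uncompressed database
}
```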
## How It Works
- Compression — The database is compressed with gzip before upload for efficient storage
- Checksum — A SHA256 hash of the original data is computed and stored in the metadata file
- Upload — The compressed backup and metadata are uploaded to S3 as separate files
- Cleanup — Old backups exceeding the retention limit are automatically deleted
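Taken together, these steps form a short pipeline. Below is a minimal sketch in TypeScript using Bun's built-in gzip and SHA256 helpers; the `S3Client` usage, key layout, and metadata fields are illustrative assumptions, not bunqueue's actual code.

```ts
import { S3Client } from "bun";

// Sketch of the backup pipeline, not bunqueue's implementation.
const s3 = new S3Client(); // Bun's client reads S3_* credentials from the environment

async function backupOnce(dbPath: string): Promise<void> {
  const raw = new Uint8Array(await Bun.file(dbPath).arrayBuffer());

  // Compress before upload
  const gz = Bun.gzipSync(raw);

  // Checksum the original bytes so a restore can verify after decompression
  const sha256 = new Bun.CryptoHasher("sha256").update(raw).digest("hex");

  // Upload the backup and its metadata as two separate objects
  const key = `${process.env.S3_BACKUP_PREFIX ?? "backups/"}${Date.now()}.db.gz`;
  await s3.file(key).write(gz);
  await s3.file(`${key}.meta.json`).write(
    JSON.stringify({
      timestamp: new Date().toISOString(),
      originalSize: raw.length,
      compressedSize: gz.length,
      sha256,
    }),
  );

  // Retention cleanup (deleting backups beyond S3_BACKUP_RETENTION) is omitted here
}
```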
## Scheduling
When enabled, backups are automatically scheduled:
- Initial backup: Runs 1 minute after server startup
- Periodic backups: Run every `S3_BACKUP_INTERVAL` milliseconds (default: 6 hours)
- Concurrent protection: Only one backup can run at a time; overlapping requests are rejected
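A rough sketch of that schedule, reusing the hypothetical `backupOnce` helper from the previous example (bunqueue's internal timers may differ):

```ts
// Sketch of the scheduling behavior described above.
let backupRunning = false;

async function runScheduledBackup(): Promise<void> {
  // Concurrent protection: reject overlapping runs
  if (backupRunning) {
    console.warn("backup already in progress; rejecting this run");
    return;
  }
  backupRunning = true;
  try {
    await backupOnce("bunqueue.db"); // database path is a placeholder
  } finally {
    backupRunning = false;
  }
}

const intervalMs = Number(process.env.S3_BACKUP_INTERVAL ?? 21_600_000); // default: 6 hours

setTimeout(runScheduledBackup, 60_000);      // initial backup, 1 minute after startup
setInterval(runScheduledBackup, intervalMs); // periodic backups thereafter
```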
## Restore Verification
When restoring, bunqueue automatically:
- Detects whether the backup is gzip-compressed (via metadata or magic bytes)
- Decompresses the backup if needed
- Verifies the SHA256 checksum against the metadata to ensure data integrity
- Supports older uncompressed backups for backward compatibility
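In code, those restore checks could look like the following sketch; the gzip magic-byte test and checksum comparison mirror the steps above, while the function and field names are assumptions:

```ts
// Gzip streams start with the magic bytes 0x1f 0x8b
function isGzip(data: Uint8Array): boolean {
  return data.length >= 2 && data[0] === 0x1f && data[1] === 0x8b;
}

function verifyAndDecompress(
  backup: Uint8Array,
  meta?: { sha256?: string },
): Uint8Array {
  // Older, uncompressed backups pass through unchanged
  const raw = isGzip(backup) ? Bun.gunzipSync(backup) : backup;

  // Verify against the checksum recorded at backup time, when available
  if (meta?.sha256) {
    const actual = new Bun.CryptoHasher("sha256").update(raw).digest("hex");
    if (actual !== meta.sha256) {
      throw new Error("SHA256 mismatch: backup may be corrupt");
    }
  }
  return raw;
}
```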