bunqueue supports two deployment modes: Embedded (direct SQLite) and TCP (network + SQLite). This page shows real benchmark results comparing both modes at scale.
Embedded Mode
Up to 286K ops/sec
Direct SQLite access, zero network overhead
TCP Mode
Up to 149K ops/sec
Network client/server for distributed systems
Embedded Advantage
2-4x Faster
Depending on operation type
Scale Tested
50,000 Jobs
Verified at multiple scales
Embedded mode uses direct SQLite access with no network overhead. Ideal for single-process applications.
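As a rough sketch of what embedded usage looks like (the `Queue` import path and constructor options are assumptions, not taken from this page; `queue.add(name, data)` matches the benchmark snippets below):

```ts
// Hypothetical embedded-mode setup. The import path and constructor options
// are assumptions; only queue.add(name, data) comes from the benchmark code.
import { Queue } from 'bunqueue';

// Embedded mode opens the SQLite file in-process: no server, no sockets.
const queue = new Queue('emails', { path: './queue.db' });

await queue.add('job', { to: 'user@example.com' });
```

Measured embedded-mode throughput at each scale: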
| Scale | Push | Bulk Push | Process |
|---|---|---|---|
| 1,000 | 86,248 ops/sec | 221,365 ops/sec | 47,538 ops/sec |
| 5,000 | 187,256 ops/sec | 278,587 ops/sec | 64,716 ops/sec |
| 10,000 | 177,098 ops/sec | 279,640 ops/sec | 77,713 ops/sec |
| 50,000 | 204,913 ops/sec | 286,616 ops/sec | 74,772 ops/sec |
TCP mode connects clients to a bunqueue server over the network. Required for distributed systems with multiple client processes.
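A corresponding TCP-mode sketch; only the default port 6789 comes from the benchmark output further down, while the client options are again assumptions:

```ts
// Hypothetical TCP-mode setup. Only port 6789 appears on this page
// (in the benchmark output); the option names are assumptions.
import { Queue } from 'bunqueue';

// TCP mode sends commands to a running bunqueue server, so multiple
// processes or hosts can share one queue.
const queue = new Queue('emails', { host: '127.0.0.1', port: 6789 });

await queue.add('job', { to: 'user@example.com' });
```

Measured TCP-mode throughput at each scale: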
| Scale | Push | Bulk Push | Process |
|---|---|---|---|
| 1,000 | 28,633 ops/sec | 96,635 ops/sec | 20,195 ops/sec |
| 5,000 | 49,284 ops/sec | 142,152 ops/sec | 34,098 ops/sec |
| 10,000 | 54,562 ops/sec | 149,218 ops/sec | 34,679 ops/sec |
| 50,000 | 51,827 ops/sec | 131,897 ops/sec | 32,544 ops/sec |
How much faster is Embedded mode compared to TCP mode?
| Scale | Push | Bulk Push | Process |
|---|---|---|---|
| 1,000 | 3.0x faster | 2.3x faster | 2.4x faster |
| 5,000 | 3.8x faster | 2.0x faster | 1.9x faster |
| 10,000 | 3.2x faster | 1.9x faster | 2.2x faster |
| 50,000 | 4.0x faster | 2.2x faster | 2.3x faster |
At peak, the two modes compare as follows:

| Operation | Embedded Mode | TCP Mode |
|---|---|---|
| Push (peak) | 204,913 ops/sec | 54,562 ops/sec |
| Bulk Push (peak) | 286,616 ops/sec | 149,218 ops/sec |
| Process (peak) | 77,713 ops/sec | 34,679 ops/sec |
The benchmark tests three operations at four different scales (1K, 5K, 10K, 50K jobs):
Push
Sequential push of individual jobs to measure single-job insertion speed.

```ts
for (let i = 0; i < scale; i++) {
  await queue.add('job', PAYLOAD);
}
```

Bulk Push
Push jobs in batches of 100 to measure bulk insertion efficiency.
```ts
const jobs = Array.from({ length: 100 }, (_, i) => ({
  name: 'bulk-job',
  data: { ...PAYLOAD, i },
}));

for (let i = 0; i < scale / 100; i++) {
  await queue.addBulk(jobs);
}
```

Process
Push jobs in parallel batches of 500, then process them with 10 concurrent workers.
```ts
// Push in parallel batches
for (let i = 0; i < scale; i += 500) {
  const promises = [];
  for (let j = 0; j < Math.min(500, scale - i); j++) {
    promises.push(queue.add('job', PAYLOAD));
  }
  await Promise.all(promises);
}

// Wait for all to complete
while (processed < scale) {
  await sleep(5);
}
```

The benchmark source code is available at bench/comprehensive.ts.
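The worker side of the Process phase is not shown in the excerpt above. A rough sketch, assuming a BullMQ-style `Worker(name, handler, { concurrency })` API, which is an assumption rather than something documented on this page:

```ts
// Sketch of the processing phase. The Worker class and its options are
// assumed (BullMQ-style); they are not shown in the benchmark excerpt.
import { Worker } from 'bunqueue';

const scale = 10_000; // e.g. one of the tested scales
const sleep = (ms: number) => new Promise((r) => setTimeout(r, ms));
let processed = 0;

const worker = new Worker('bench', async () => {
  processed++; // minimal handler: just count completions
}, { concurrency: 10 });

// Wait for all jobs to complete, same polling pattern as the benchmark.
while (processed < scale) {
  await sleep(5);
}
await worker.close();
```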
To reproduce the numbers:

```sh
# Clone the repository
git clone https://github.com/egeominotti/bunqueue.git
cd bunqueue
bun install

# Start the bunqueue server (for TCP tests)
bun run start &

# Run the comprehensive benchmark
bun run bench/comprehensive.ts
```

Example output:

```
═══════════════════════════════════════════════════════════════
  bunqueue Comprehensive Benchmark
  Embedded vs TCP Mode
═══════════════════════════════════════════════════════════════

Scales: 1,000, 5,000, 10,000, 50,000 jobs
Bulk size: 100
Concurrency: 10
Payload: 111 bytes

✓ TCP server connected (port 6789)

📦 EMBEDDED MODE (Direct SQLite)
══════════════════════════════════════════════════

🔄 Testing 1,000 jobs...
   Push:      86,248 ops/sec
   Bulk Push: 221,365 ops/sec
   Process:   47,538 ops/sec

...

📊 EMBEDDED MODE RESULTS

┌──────────┬────────────────┬────────────────┬────────────────┐
│ Scale    │ Push (ops/s)   │ Bulk (ops/s)   │ Process (ops/s)│
├──────────┼────────────────┼────────────────┼────────────────┤
│ 1,000    │ 86,248         │ 221,365        │ 47,538         │
│ 5,000    │ 187,256        │ 278,587        │ 64,716         │
│ 10,000   │ 177,098        │ 279,640        │ 77,713         │
│ 50,000   │ 204,913        │ 286,616        │ 74,772         │
└──────────┴────────────────┴────────────────┴────────────────┘

📈 EMBEDDED vs TCP (Embedded is X times faster)

┌──────────┬────────────────┬────────────────┬────────────────┐
│ Scale    │ Push           │ Bulk           │ Process        │
├──────────┼────────────────┼────────────────┼────────────────┤
│ 1,000    │ 3.0x           │ 2.3x           │ 2.4x           │
│ 5,000    │ 3.8x           │ 2.0x           │ 1.9x           │
│ 10,000   │ 3.2x           │ 1.9x           │ 2.2x           │
│ 50,000   │ 4.0x           │ 2.2x           │ 2.3x           │
└──────────┴────────────────┴────────────────┴────────────────┘
```

Embedded Mode
Best for single-process apps
TCP Mode
Best for distributed systems
SQLite WAL Mode
Concurrent reads/writes with memory-mapped I/O via Bun’s native FFI bindings.
Auto Sharding
Auto-detected from CPU cores (power of 2, max 64). Minimizes lock contention. See the sketch below.
TCP Pipelining
Multiple commands in flight per connection with reqId-based response matching.
Efficient Structures
Skip lists for O(log n) priority queues. MinHeap for delayed jobs. LRU caches.
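Two of these points are easy to illustrate. A minimal sketch, not bunqueue's actual code: enabling WAL on a `bun:sqlite` database, and deriving a shard count from the CPU count rounded down to a power of 2 and capped at 64 (the rounding direction and function name are our assumptions):

```ts
import { Database } from 'bun:sqlite';
import os from 'node:os';

// WAL mode lets readers proceed while a writer appends to the log.
// (Illustrative only; bunqueue configures its own databases internally.)
const db = new Database('queue.db');
db.exec('PRAGMA journal_mode = WAL;');

// Shard-count heuristic as described above: the largest power of 2 that is
// <= the CPU count, capped at 64. Name and rounding are our assumptions.
function autoShardCount(cpus: number = os.cpus().length, max = 64): number {
  const capped = Math.max(1, Math.min(cpus, max));
  return 2 ** Math.floor(Math.log2(capped));
}

console.log(autoShardCount(12)); // -> 8
console.log(autoShardCount(96)); // -> 64
```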
Hardware:
Software:
Configuration:
The repository includes additional specialized benchmarks:
| Benchmark | Purpose | File |
|---|---|---|
| Comprehensive | Embedded vs TCP at scale | bench/comprehensive.ts |
| BullMQ Comparison | bunqueue vs BullMQ (Redis) | bench/comparison/run.ts |
| Throughput | Individual operation speeds | bench/throughput.bench.ts |
| Worker | Realistic worker simulation | bench/worker.bench.ts |
| Stress | Production validation | bench/stress.bench.ts |
| Million Jobs | High-volume integrity test | bench/million-jobs.bench.ts |
```sh
# Run BullMQ comparison (requires Redis)
redis-server --daemonize yes
bun run start &
bun run bench/comparison/run.ts
```

Run benchmarks on your hardware and share results via GitHub Discussions.
Include: