Performance claims without numbers are just marketing. Here are real benchmark results comparing bunqueue’s TCP server against BullMQ with Redis on identical workloads. Both systems use network connections for a fair comparison.
All benchmarks run on the same machine with identical job payloads:
```ts
// Identical payload for both systems
const jobData = { userId: 123, action: 'process', timestamp: Date.now() };
```

Both use TCP connections (bunqueue TCP mode, BullMQ via Redis TCP). Embedded mode benchmarks are excluded since BullMQ has no equivalent.
- **1.3x Faster Push**: 54,140 vs 43,261 ops/sec, single job push operations
- **3.2x Faster Bulk**: 139,200 vs 44,000 ops/sec, bulk push (100 jobs per batch)
- **1.7x Faster Processing**: 17,300 vs 10,200 ops/sec, full push-process-complete cycle
- **Zero Dependencies**: no Redis server required
Single job push measures the raw ingestion speed:
```ts
// bunqueue
import { Queue } from 'bunqueue'; // import path assumed from the package name

const queue = new Queue('bench', {
  connection: { host: 'localhost', port: 6789 },
});

for (let i = 0; i < 10000; i++) {
  await queue.add('task', { id: i });
}
```
```ts
// BullMQ
import { Queue as BullQueue } from 'bullmq';

const queue = new BullQueue('bench', {
  connection: { host: 'localhost', port: 6379 },
});

for (let i = 0; i < 10000; i++) {
  await queue.add('task', { id: i });
}
```

| Operation | bunqueue | BullMQ | Ratio |
|---|---|---|---|
| Single push | 54,140 ops/s | 43,261 ops/s | 1.3x |
Bulk operations show the biggest difference because bunqueue’s msgpack protocol batches efficiently:
```ts
// bunqueue - native bulk
const jobs = Array.from({ length: 100 }, (_, i) => ({
  name: 'task',
  data: { id: i },
}));

await queue.addBulk(jobs);
```
```ts
// BullMQ - also supports addBulk
await queue.addBulk(jobs);
```

| Scale | bunqueue | BullMQ | Ratio |
|---|---|---|---|
| 100 jobs/batch | 139,200 ops/s | 44,000 ops/s | 3.2x |
| 1000 jobs/batch | 148,500 ops/s | 42,100 ops/s | 3.5x |
The gap widens with batch size because bunqueue encodes the entire batch in a single msgpack frame, while BullMQ uses multiple Redis commands (even with pipelines).
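To make that concrete, here is an illustrative sketch of single-frame batching. The command name, frame shape, and use of the msgpackr library are assumptions for illustration, not bunqueue's actual wire protocol:

```ts
import { pack } from 'msgpackr';

const jobs = Array.from({ length: 100 }, (_, i) => ({ name: 'task', data: { id: i } }));

// The entire batch becomes one binary frame: a single socket write and one
// round trip, versus one Redis command (or pipeline entry) per job.
const frame = pack({ cmd: 'addBulk', queue: 'bench', jobs });
// socket.write(frame);
```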
This measures the realistic end-to-end throughput including worker processing:
```ts
// Same worker shape for both systems; only the import source differs
const worker = new Worker('bench', async (job) => {
  return { processed: true };
}, { concurrency: 10 });
```

| Metric | bunqueue | BullMQ |
|---|---|---|
| Jobs/sec (10 workers) | 17,300 | 10,200 |
| p50 latency | 0.4ms | 1.2ms |
| p99 latency | 2.1ms | 8.7ms |
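For reference, a full-cycle number like this can be reproduced with a harness along the following lines, reusing the queue and worker from above. This is a minimal sketch, not the repository's benchmark code; the `'completed'` event matches BullMQ's Worker API and is assumed to have a bunqueue equivalent:

```ts
// Push N jobs, wait for every completion, divide by wall-clock time.
const N = 10_000;
let done = 0;

const allDone = new Promise<void>((resolve) => {
  worker.on('completed', () => {
    if (++done === N) resolve();
  });
});

const start = performance.now();
for (let i = 0; i < N; i++) {
  await queue.add('task', { id: i });
}
await allDone;

const secs = (performance.now() - start) / 1000;
console.log(`${Math.round(N / secs)} jobs/sec end-to-end`);
```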
For single-process applications, embedded mode eliminates the network entirely:
| Operation | Embedded | TCP | Redis (BullMQ) |
|---|---|---|---|
| Push | 286,000/s | 54,140/s | 43,261/s |
| Pull | 195,000/s | 38,000/s | 28,000/s |
| Full cycle | 98,000/s | 17,300/s | 10,200/s |
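For context on the embedded column, in-process usage might look like the sketch below. This is hypothetical: the connection-less constructor is an assumption about bunqueue's embedded API, not a confirmed signature:

```ts
import { Queue, Worker } from 'bunqueue';

// No connection config: the queue lives in-process (assumed embedded API),
// so a push is a function call instead of a network round trip.
const queue = new Queue('bench');

const worker = new Worker('bench', async (job) => ({ processed: true }), {
  concurrency: 10,
});

await queue.add('task', { id: 1 });
```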
All benchmarks are included in the repository:
```bash
git clone https://github.com/egeominotti/bunqueue
cd bunqueue
bun install
bun run bench
```

bunqueue is consistently faster than BullMQ for queue operations. The advantage comes from the protocol and the architecture: operations travel as compact msgpack frames, bulk operations are encoded in a single frame instead of multiple Redis commands, and queue logic runs in a purpose-built server rather than being layered over a general-purpose datastore.
The tradeoff is clear: bunqueue is single-instance while BullMQ can leverage Redis clustering. For applications that fit on a single server (which is most of them), bunqueue delivers better performance with less operational complexity.