bunqueue vs BullMQ: Real Benchmark Results

Performance claims without numbers are just marketing. Here are real benchmark results comparing bunqueue’s TCP server against BullMQ with Redis on identical workloads. Both systems use network connections for a fair comparison.

Test Environment

All benchmarks run on the same machine with identical job payloads:

// Identical payload for both systems
const jobData = { userId: 123, action: 'process', timestamp: Date.now() };

Both use TCP connections (bunqueue TCP mode, BullMQ via Redis TCP). Embedded mode benchmarks are excluded since BullMQ has no equivalent.

Results Summary

1.3x Faster Push

54,140 vs 43,261 ops/sec - Single job push operations

3.2x Faster Bulk

139,200 vs 44,000 ops/sec - Bulk push (100 jobs per batch)

1.7x Faster Processing

17,300 vs 10,200 ops/sec - Full push-process-complete cycle

Zero Dependencies

No Redis server required

Push Throughput

Single job push measures the raw ingestion speed:

// bunqueue
const queue = new Queue('bench', {
  connection: { host: 'localhost', port: 6789 },
});
for (let i = 0; i < 10000; i++) {
  await queue.add('task', { id: i });
}

// BullMQ
const bullQueue = new BullQueue('bench', {
  connection: { host: 'localhost', port: 6379 },
});
for (let i = 0; i < 10000; i++) {
  await bullQueue.add('task', { id: i });
}
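Throughput numbers like these can be collected with a small timing harness. The `opsPerSec` helper below is a hypothetical sketch (it is not part of the bunqueue repository) that times `n` sequential awaited operations and converts the elapsed time into operations per second:

```typescript
// Hypothetical benchmark helper, not from the bunqueue repo:
// runs `op` sequentially `n` times and reports operations per second.
async function opsPerSec(op: () => Promise<void>, n: number): Promise<number> {
  const start = performance.now();
  for (let i = 0; i < n; i++) {
    await op();
  }
  // Guard against a zero reading from the timer on very fast runs.
  const elapsedMs = Math.max(performance.now() - start, 1e-6);
  return (n / elapsedMs) * 1000;
}
```

Usage against either queue would look like `await opsPerSec(() => queue.add('task', { id: 0 }), 10000)`. Note that sequential awaits measure per-operation round-trip cost; a harness that pipelines concurrent pushes would report higher absolute numbers for both systems.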
Operation      bunqueue        BullMQ          Ratio
Single push    54,140 ops/s    43,261 ops/s    1.3x

Bulk Push

Bulk operations show the biggest difference because bunqueue’s msgpack protocol batches efficiently:

// bunqueue - native bulk
const jobs = Array.from({ length: 100 }, (_, i) => ({
  name: 'task',
  data: { id: i },
}));
await queue.addBulk(jobs);

// BullMQ - also supports addBulk
await queue.addBulk(jobs);
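One detail worth making explicit when reading the bulk numbers: throughput is counted in jobs per second, not `addBulk` calls per second. A hypothetical measurement helper (again, not from the bunqueue repo) makes the arithmetic clear:

```typescript
// Hypothetical helper: each pushBatch() call submits `batchSize` jobs,
// so throughput is (batches * batchSize) / elapsed, not batches / elapsed.
async function bulkJobsPerSec(
  pushBatch: () => Promise<void>,
  batches: number,
  batchSize: number,
): Promise<number> {
  const start = performance.now();
  for (let i = 0; i < batches; i++) {
    await pushBatch();
  }
  const elapsedMs = Math.max(performance.now() - start, 1e-6);
  return ((batches * batchSize) / elapsedMs) * 1000;
}
```

For example, `await bulkJobsPerSec(() => queue.addBulk(jobs), 100, 100)` would push 10,000 jobs in 100 batches and report the per-job rate.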
Scale              bunqueue         BullMQ          Ratio
100 jobs/batch     139,200 ops/s    44,000 ops/s    3.2x
1000 jobs/batch    148,500 ops/s    42,100 ops/s    3.5x

The gap widens with batch size because bunqueue encodes the entire batch in a single msgpack frame, while BullMQ uses multiple Redis commands (even with pipelines).

Full Cycle: Push, Process, Complete

This measures the realistic end-to-end throughput including worker processing:

const worker = new Worker('bench', async (job) => {
  return { processed: true };
}, { concurrency: 10 });
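Measuring the full cycle requires knowing when the last job has actually completed, not just when the last push returned. One way to do that, sketched here as a hypothetical helper (neither library's API is assumed), is a completion gate that the worker's processor ticks on every finished job:

```typescript
// Hypothetical completion gate: resolves a promise once `expected`
// jobs have been marked done, so the timer can stop at true completion.
function makeCompletionGate(expected: number) {
  let done = 0;
  let resolve!: () => void;
  const allDone = new Promise<void>((r) => (resolve = r));
  return {
    allDone,
    markDone() {
      if (++done === expected) resolve();
    },
  };
}
```

The processor would call `gate.markDone()` before returning, and the benchmark would `await gate.allDone` before reading the clock. (BullMQ also exposes completion events via `QueueEvents`, which can serve the same purpose.)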
Metric                   bunqueue    BullMQ
Jobs/sec (10 workers)    17,300      10,200
p50 latency              0.4ms       1.2ms
p99 latency              2.1ms       8.7ms

Embedded Mode (Bonus)

For single-process applications, embedded mode eliminates the network entirely:

Operation     Embedded      TCP         Redis (BullMQ)
Push          286,000/s     54,140/s    43,261/s
Pull          195,000/s     38,000/s    28,000/s
Full cycle    98,000/s      17,300/s    10,200/s

Run the Benchmarks Yourself

All benchmarks are included in the repository:

git clone https://github.com/egeominotti/bunqueue
cd bunqueue
bun install
bun run bench

What the Numbers Mean

bunqueue is consistently faster than BullMQ for queue operations. The advantage comes from three sources:

  1. Protocol efficiency - msgpack binary encoding vs Redis RESP protocol
  2. Batching - automatic coalescing of operations into bulk commands
  3. Architecture - purpose-built for job queues, not a general-purpose data store
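The batching point can be illustrated with a toy cost model. The numbers and the model below are assumptions for illustration only, not measurements: if per-request overhead dominates, a single frame carrying the whole batch amortizes that overhead across every job, while one command per job pays it every time.

```typescript
// Toy model (illustrative assumptions, not measured data):
// per-request overhead dominates, so throughput scales inversely
// with the number of requests needed to move the same jobs.
function modeledThroughput(
  jobs: number,
  perRequestMs: number,
  requestsNeeded: number,
): number {
  const totalMs = requestsNeeded * perRequestMs;
  return (jobs / totalMs) * 1000; // jobs per second
}

// 100 jobs, assumed 0.2ms overhead per request:
const oneFrame = modeledThroughput(100, 0.2, 1); // one batched frame
const perJob = modeledThroughput(100, 0.2, 100); // one command per job
```

Under these assumed numbers the batched frame comes out two orders of magnitude faster in the model; real pipelined Redis commands narrow the gap considerably, which is why the measured ratio above is 3.2x rather than 100x.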

The tradeoff is clear: bunqueue is single-instance while BullMQ can leverage Redis clustering. For applications that fit on a single server (which is most of them), bunqueue delivers better performance with less operational complexity.