
bunqueue vs BullMQ

Real benchmark results comparing the bunqueue TCP server against BullMQ with Redis on identical workloads.

Summary

1.3x Faster Push

54,140 vs 43,261 ops/sec

Single job push operations

3.2x Faster Bulk

139,200 vs 44,000 ops/sec

Bulk push (100 jobs per batch)

1.7x Faster Processing

33,052 vs 19,225 ops/sec

Job processing throughput

Zero Infrastructure

No Redis Required

Embedded SQLite vs Redis server


Throughput Comparison

Throughput comparison chart
| Operation | bunqueue | BullMQ | Speedup |
|---|---|---|---|
| Push | 54,140 ops/sec | 43,261 ops/sec | 1.3x faster |
| Bulk Push | 139,200 ops/sec | 44,000 ops/sec | 3.2x faster |
| Process | 33,052 ops/sec | 19,225 ops/sec | 1.7x faster |

Latency Comparison

Latency comparison chart
| Operation | bunqueue p99 | BullMQ p99 | Improvement |
|---|---|---|---|
| Bulk Push | 3.26ms | 4.53ms | 1.4x lower |
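The p99 figures above are 99th-percentile latencies across all recorded batch operations. As a minimal sketch of how such a percentile can be derived from raw latency samples (the `percentile` helper is illustrative, not the benchmark's actual code):

```typescript
// Compute the p-th percentile of latency samples (ms) via the nearest-rank method.
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const idx = Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[idx];
}

// With only 10 samples, p99 resolves to the slowest observation.
const latencies = [1.2, 3.4, 2.1, 9.8, 2.5, 2.2, 1.9, 2.8, 3.1, 2.0];
const p99 = percentile(latencies, 99);
```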

Speedup by Operation

Speedup comparison chart

How the Benchmark Works

The benchmark performs three tests for both bunqueue and BullMQ:

1. Push Test

Pushes 10,000 jobs using 32 parallel connections to measure raw push throughput.

```ts
// 32 concurrent clients pushing jobs
await benchmarkParallel('Push', async () => {
  await queue.add('job', PAYLOAD);
}, 10000, 32);
```

2. Bulk Push Test

Pushes 100 jobs per batch, 1,000 times (100,000 total jobs) to measure bulk insertion performance.

```ts
// Batch of 100 jobs per operation
const jobs = Array.from({ length: 100 }, (_, i) => ({
  name: 'bulk-job',
  data: { ...PAYLOAD, i },
}));
await queue.addBulk(jobs);
```

3. Process Test

Measures end-to-end throughput: pushing 10,000 jobs and processing them with 50 concurrent workers.

```ts
// Push jobs in parallel batches of 1,000
const batch = Array.from({ length: 1000 });
for (let i = 0; i < 10000; i += 1000) {
  await Promise.all(batch.map(() => queue.add('job', PAYLOAD)));
}
// Wait until the workers have processed every job
while (processed < 10000) {
  await new Promise(r => setTimeout(r, 10));
}
```
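The 50-worker processing stage follows a standard concurrency-limiting pattern. This is a sketch of that pattern, not the benchmark's actual worker code; `handler` stands in for whatever per-job work the worker performs:

```typescript
// Process `jobs` with at most `limit` handlers running concurrently.
async function processAll<T>(
  jobs: T[],
  limit: number,
  handler: (job: T) => Promise<void>,
): Promise<void> {
  let next = 0;
  // Each "worker" repeatedly claims the next job index until the list is drained.
  const workers = Array.from({ length: limit }, async () => {
    while (next < jobs.length) {
      const job = jobs[next++];
      await handler(job);
    }
  });
  await Promise.all(workers);
}
```

Because claiming an index (`next++`) happens synchronously between awaits, no two workers process the same job.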

Why is bunqueue Faster?

TCP Pipelining

bunqueue’s TCP protocol supports pipelining: multiple commands can be in flight on a single connection, with responses matched back to requests by reqId.
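The reqId-matching idea can be sketched as follows. This is an illustration of the pattern, not bunqueue's wire protocol; the `Response` shape and the loopback `write` stub are assumptions for the example:

```typescript
type Response = { reqId: number; payload: unknown };

class PipelinedClient {
  private nextReqId = 0;
  private pending = new Map<number, (r: Response) => void>();

  // Queue a command without waiting for earlier responses to arrive.
  send(payload: unknown): Promise<Response> {
    const reqId = this.nextReqId++;
    const promise = new Promise<Response>((resolve) => this.pending.set(reqId, resolve));
    this.write({ reqId, payload }); // would serialize onto the TCP socket
    return promise;
  }

  // Called when a response frame arrives (possibly out of order).
  onResponse(r: Response): void {
    const resolve = this.pending.get(r.reqId);
    if (resolve) {
      this.pending.delete(r.reqId);
      resolve(r);
    }
  }

  private write(frame: { reqId: number; payload: unknown }): void {
    // Loopback stub for illustration: echo the frame back asynchronously.
    queueMicrotask(() => this.onResponse({ reqId: frame.reqId, payload: frame.payload }));
  }
}
```

Many requests share one connection, so the client pays one round-trip of latency for a whole burst of commands instead of one per command.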

Optimized Data Structures

Skip lists, MinHeap, and LRU cache provide O(log n) or O(1) operations for common tasks.
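For example, a binary min-heap keyed on a job's due time yields the next due delayed job in O(log n). The sketch below shows that pattern; it is not bunqueue's implementation, and the `Delayed` shape is assumed for illustration:

```typescript
type Delayed = { id: string; runAt: number };

class MinHeap {
  private heap: Delayed[] = [];

  push(job: Delayed): void {
    this.heap.push(job);
    let i = this.heap.length - 1;
    // Sift up until the parent is due no later than this job.
    while (i > 0) {
      const parent = (i - 1) >> 1;
      if (this.heap[parent].runAt <= this.heap[i].runAt) break;
      [this.heap[parent], this.heap[i]] = [this.heap[i], this.heap[parent]];
      i = parent;
    }
  }

  // Remove and return the earliest-due job.
  pop(): Delayed | undefined {
    const top = this.heap[0];
    const last = this.heap.pop();
    if (this.heap.length > 0 && last) {
      this.heap[0] = last;
      // Sift down toward the earlier-due child.
      let i = 0;
      for (;;) {
        const l = 2 * i + 1, r = 2 * i + 2;
        let min = i;
        if (l < this.heap.length && this.heap[l].runAt < this.heap[min].runAt) min = l;
        if (r < this.heap.length && this.heap[r].runAt < this.heap[min].runAt) min = r;
        if (min === i) break;
        [this.heap[min], this.heap[i]] = [this.heap[i], this.heap[min]];
        i = min;
      }
    }
    return top;
  }
}
```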

Batch Transactions

SQLite transactions batch multiple operations into single disk writes with WAL mode.
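As a rough sketch of the batching idea (not bunqueue's actual persistence layer): grouping pending writes into fixed-size batches lets each batch commit as one transaction, so with WAL mode a commit is one sequential append to the log rather than one disk write per job. The `flush`/`commit` names here are illustrative:

```typescript
// Split pending operations into fixed-size batches.
function chunk<T>(items: T[], size: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// Each batch would map to BEGIN; INSERT ...; COMMIT in the real server.
function flush(jobs: string[], commit: (batch: string[]) => void, batchSize = 100): void {
  for (const batch of chunk(jobs, batchSize)) {
    commit(batch);
  }
}
```

250 queued jobs thus cost three commits instead of 250.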

Auto-Scaled Sharding

Lock contention is minimized by distributing work across N independent shards (auto-detected from CPU cores, max 64).
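A common way to implement this routing is to hash the queue name modulo the shard count, so jobs for different queues land on independent shards. This sketch uses an FNV-1a hash for illustration; bunqueue's actual hash and routing are not specified here:

```typescript
// Route a queue to one of `shardCount` independent shards.
function shardFor(queueName: string, shardCount: number): number {
  // FNV-1a string hash, reduced modulo the shard count.
  let hash = 2166136261;
  for (let i = 0; i < queueName.length; i++) {
    hash ^= queueName.charCodeAt(i);
    hash = Math.imul(hash, 16777619);
  }
  return (hash >>> 0) % shardCount;
}

// e.g. shard count auto-detected from CPU cores, capped at 64
const shards = Math.min(64, 8);
```

The same queue always maps to the same shard, so per-queue ordering is preserved while unrelated queues never contend on one lock.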


Feature Comparison

| Feature | bunqueue | BullMQ |
|---|---|---|
| Queue Types | ✅ Standard, Priority, LIFO | ✅ Standard, Priority, LIFO |
| Delayed Jobs | ✅ Yes | ✅ Yes |
| Retries & Backoff | ✅ Exponential | ✅ Exponential |
| Dead Letter Queue | ✅ Built-in | ✅ Built-in |
| Rate Limiting | ✅ Per-queue | ✅ Per-queue |
| Cron Jobs | ✅ Built-in | ✅ Via scheduler |
| Job Dependencies | ✅ Parent-child flows | ✅ Parent-child flows |
| Persistence | ✅ SQLite (embedded) | ✅ Redis |
| Horizontal Scaling | ⚠️ Single process | ✅ Multi-process |
| External Dependencies | ✅ None | ❌ Redis required |
| S3 Backup | ✅ Built-in | ❌ Manual |
| TCP Pipelining | ✅ Built-in | ✅ Via Redis |

Run the Benchmark Yourself

The benchmark source code is available at bench/comparison/run.ts.

```sh
# Clone the repository
git clone https://github.com/egeominotti/bunqueue.git
cd bunqueue
bun install

# Start Redis (required for BullMQ)
redis-server --daemonize yes

# Start bunqueue server
bun run start &

# Run the benchmark
bun run bench/comparison/run.ts
```

Benchmark Output

```
═══════════════════════════════════════════════════════════════
  bunqueue vs BullMQ Comparison Benchmark
═══════════════════════════════════════════════════════════════
  Iterations: 10,000
  Bulk size: 100
  Concurrency: 50
  Payload: 111 bytes
  ✓ Redis connected
  ✓ bunqueue server connected (port 6789)

  📦 bunqueue (TCP mode) benchmarks...
     Push: 54,140 ops/sec
     Bulk Push: 139,200 ops/sec (p99: 3.26ms)
     Process: 33,052 ops/sec

  🐂 BullMQ (Redis) benchmarks...
     Push: 43,261 ops/sec
     Bulk Push: 44,000 ops/sec (p99: 4.53ms)
     Process: 19,225 ops/sec

═══════════════════════════════════════════════════════════════
  RESULTS
═══════════════════════════════════════════════════════════════
┌─────────────┬──────────────────┬──────────────────┬──────────┐
│ Operation   │ bunqueue         │ BullMQ           │ Speedup  │
├─────────────┼──────────────────┼──────────────────┼──────────┤
│ Push        │ 54,140 ops/s     │ 43,261 ops/s     │ 1.3x     │
│ Bulk Push   │ 139,200 ops/s    │ 44,000 ops/s     │ 3.2x     │
│ Process     │ 33,052 ops/s     │ 19,225 ops/s     │ 1.7x     │
└─────────────┴──────────────────┴──────────────────┴──────────┘
```

Benchmark Environment

Hardware:

  • Mac Studio, Apple M1 Max
  • 32GB RAM
  • SSD storage

Software:

  • macOS Tahoe
  • Bun 1.3.8
  • Redis 7.x (localhost)

Configuration:

  • 10,000 iterations per test
  • Bulk size: 100 jobs
  • Concurrency: 50 workers
  • Payload: ~100 bytes per job
  • TCP pipelining enabled
  • Connection pool: 32 connections

When to Use BullMQ Instead

While bunqueue is faster for most use cases, BullMQ may be better when:

  • Horizontal scaling is required across multiple processes/servers
  • Redis is already part of your infrastructure
  • Redis-specific features like pub/sub or Lua scripts are needed
  • Multi-language workers need to share the same queue