
HTTP REST API Reference

The bunqueue HTTP API runs on port 6790 by default (configurable via HTTP_PORT environment variable). All request and response bodies use JSON (Content-Type: application/json) unless otherwise noted.

Response contract: Every response includes an ok boolean field. Successful responses return "ok": true with operation-specific data. Failed responses return "ok": false with an "error" string describing the failure reason.

// Success
{ "ok": true, "id": "019ce9d7-6983-7000-946f-48737be2b0f9" }
// Error
{ "ok": false, "error": "Job not found" }

When AUTH_TOKENS is configured, all endpoints (except health probes and CORS preflight) require a Bearer token in the Authorization header. Multiple tokens are supported, separated by commas.

# Server configuration
AUTH_TOKENS=secret-token-1,secret-token-2
# Client usage
curl -H "Authorization: Bearer secret-token-1" http://localhost:6790/stats

Token comparison uses constant-time equality (a crypto.timingSafeEqual equivalent) to prevent timing attacks. The presented token is compared against every configured token, with no early exit, so response timing leaks nothing about a token's length or matching prefix.

Endpoints that skip authentication:

| Endpoint | Reason |
| --- | --- |
| GET /health | Load balancer health checks must work without credentials |
| GET /healthz, GET /live | Kubernetes liveness probes |
| GET /ready | Kubernetes readiness probes |
| OPTIONS * | CORS preflight must respond before auth headers are available |

The GET /prometheus endpoint optionally requires auth when requireAuthForMetrics: true is set in the server configuration. This allows Prometheus to scrape without credentials in trusted networks, while requiring auth in public-facing deployments.

Unauthorized response (401):

{ "ok": false, "error": "Unauthorized" }

Cross-Origin Resource Sharing is configured via the CORS_ALLOW_ORIGIN environment variable. Defaults to * (allow all origins). Set to specific origins for production (e.g., CORS_ALLOW_ORIGIN=https://dashboard.example.com).

All JSON responses include the Access-Control-Allow-Origin header. Preflight (OPTIONS) requests return:

HTTP/1.1 204 No Content
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET, POST, PUT, DELETE, OPTIONS
Access-Control-Allow-Headers: Content-Type, Authorization
Access-Control-Max-Age: 86400

The Access-Control-Max-Age of 86400 seconds (24 hours) lets browsers cache the preflight response, avoiding repeated OPTIONS requests.


All errors follow a consistent format with appropriate HTTP status codes:

| Code | Meaning | When |
| --- | --- | --- |
| 200 | Success | Operation completed successfully |
| 400 | Bad Request | Invalid JSON, missing required fields, validation failure (e.g., queue name too long, priority out of range) |
| 401 | Unauthorized | Missing or invalid Bearer token |
| 404 | Not Found | Job, queue, cron, or webhook not found |
| 429 | Rate Limited | Client exceeded the configured request rate |
| 500 | Internal Error | Unexpected server error (logged server-side) |

Error response body:

{ "ok": false, "error": "Queue name contains invalid characters" }

Validation rules applied to all endpoints:

  • Queue names: 1-256 characters, alphanumeric + -_.:
  • Numeric fields: Validated for type, range, and finiteness (e.g., delay must be 0 to 365 days, priority must be -1M to +1M)
  • Job data: Max 10MB per job payload
  • Job IDs: UUID v7 format (auto-generated) or custom string (via jobId field)

HTTP requests are rate-limited per client IP using a sliding window algorithm. The client IP is resolved in order: X-Forwarded-For header (first IP) > X-Real-IP header > "unknown".
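The resolution order above can be sketched as a small helper (illustrative, not bunqueue's actual source; header lookup is made case-insensitive here):

```javascript
// First X-Forwarded-For entry, then X-Real-IP, then "unknown".
function resolveClientIp(headers) {
  const get = (name) => {
    for (const [key, value] of Object.entries(headers)) {
      if (key.toLowerCase() === name) return value;
    }
    return undefined;
  };
  const xff = get('x-forwarded-for');
  if (xff) return xff.split(',')[0].trim(); // first (client-most) IP
  return get('x-real-ip') || 'unknown';
}
```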

| Variable | Default | Description |
| --- | --- | --- |
| RATE_LIMIT_WINDOW_MS | 60000 | Sliding window duration in milliseconds |
| RATE_LIMIT_MAX_REQUESTS | Infinity | Maximum requests per window per IP. Set to 0 to disable. |
| RATE_LIMIT_CLEANUP_MS | 60000 | Interval for cleaning up expired rate limit entries |

When rate limited, the server responds with:

{ "ok": false, "error": "Rate limit exceeded" }

Status code: 429. The client should implement exponential backoff before retrying.
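A common way to implement that backoff on the client is capped exponential delay with full jitter; the base and cap below are illustrative choices, not bunqueue defaults:

```javascript
// delay = random value in [0, min(base * 2^attempt, cap))
function retryDelay429(attempt, base = 1000, cap = 60000, random = Math.random) {
  const exp = Math.min(base * 2 ** attempt, cap);
  return Math.floor(random() * exp); // full jitter spreads retries out
}
```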


Understanding the job lifecycle is essential for using the API effectively. A job flows through these states:

             ┌──────────┐
 push ──────►│ waiting  │◄──── promote (from delayed)
             └────┬─────┘
                  │ pull
             ┌────▼─────┐
             │  active  │◄──── retry (from failed, if attempts remain)
             └────┬─────┘
              ┌───┴───┐
          ack │       │ fail
      ┌───────▼─┐   ┌─▼────────┐
      │completed│   │  failed  │
      └─────────┘   └────┬─────┘
                         │ max attempts exceeded
                    ┌────▼─────┐
                    │   DLQ    │
                    └──────────┘

Delayed jobs: When delay > 0 is set at push time, the job enters delayed state and becomes waiting after the delay expires. A delayed job can be promoted to waiting immediately via the Promote endpoint.

Durable mode: When durable: true is set, the job is written to SQLite synchronously before returning. Without it, jobs are buffered in memory (10ms write buffer) for ~10x higher throughput, with a small window of potential data loss on crash.


Add a new job to a queue. The job enters waiting state (or delayed if delay > 0).

POST /queues/:queue/jobs
curl -X POST http://localhost:6790/queues/emails/jobs \
  -H "Content-Type: application/json" \
  -d '{
    "data": {"to": "user@test.com", "subject": "Welcome"},
    "priority": 10,
    "delay": 5000
  }'

Request body — only data is required:

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| data | any | (required) | Job payload. Any JSON-serializable value. Max 10MB. |
| priority | number | 0 | Higher value = processed sooner. Range: -1,000,000 to 1,000,000. |
| delay | number | 0 | Milliseconds before the job becomes available for processing. Max: 1 year. |
| maxAttempts | number | 3 | Maximum retry attempts before the job moves to the DLQ. Range: 1-1000. |
| backoff | number | 1000 | Base retry delay in milliseconds. Increases exponentially: backoff * 2^attempt. Max: 1 day. |
| ttl | number | | Time-to-live from creation in milliseconds. Job is discarded if not processed within this window. Max: 1 year. |
| timeout | number | | Processing timeout in milliseconds. If a worker doesn't ACK within this time, the job is considered stalled. Max: 1 day. |
| uniqueKey | string | | Deduplication key. If a job with the same uniqueKey already exists in the queue, the push is silently ignored. |
| jobId | string | | Custom job ID. If a job with this ID already exists, the push is idempotent (returns the existing ID). |
| tags | string[] | [] | Metadata tags for filtering and querying. |
| groupId | string | | Group identifier for per-group concurrency limiting. Jobs in the same group are processed sequentially. |
| lifo | boolean | false | Last-in-first-out ordering. When true, the job is processed before other jobs at the same priority. |
| removeOnComplete | boolean | false | Automatically remove the job from memory after completion. Saves memory for fire-and-forget jobs. |
| removeOnFail | boolean | false | Automatically remove the job after final failure (after all retries exhausted). |
| durable | boolean | false | Bypass the write buffer and persist to SQLite immediately. Slower (~10k/s vs ~100k/s) but zero data loss risk. |
| dependsOn | string[] | [] | Job IDs that must complete before this job becomes available. The job enters waiting-children state until all dependencies are met. |
| repeat | object | | Repeat configuration: { every: ms, limit: n } for interval-based, or { cron: "expression" } for cron-based. |

Success response (200):

{ "ok": true, "id": "019ce9d7-6983-7000-946f-48737be2b0f9" }

The id is a UUID v7 (time-ordered, sortable). If jobId was provided and a job with that ID already exists, the existing job’s ID is returned (idempotent).
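Because UUID v7 encodes its creation time in the first 48 bits (Unix milliseconds, per the UUID v7 layout), the timestamp can be recovered directly from the ID; a small sketch:

```javascript
// Extract the Unix-millisecond timestamp embedded in a UUID v7.
function uuidV7Time(id) {
  const hex = id.replace(/-/g, '').slice(0, 12); // first 48 bits
  return parseInt(hex, 16); // milliseconds since the Unix epoch
}
```

This is why sorting job IDs lexicographically also sorts them by creation time.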

Error responses:

| Status | Error | Cause |
| --- | --- | --- |
| 400 | Invalid JSON body | Request body is not valid JSON |
| 400 | Queue name is required | Empty queue name |
| 400 | Queue name contains invalid characters | Queue name has chars outside a-zA-Z0-9_-.: |
| 400 | Job data too large (max 10MB) | Serialized data exceeds 10MB |
| 400 | priority must be an integer | Non-integer priority |
| 400 | delay must be at least 0 | Negative delay |

Push multiple jobs to a queue in a single round-trip. More efficient than individual pushes — all jobs are inserted in a single batch operation.

POST /queues/:queue/jobs/bulk
curl -X POST http://localhost:6790/queues/emails/jobs/bulk \
  -H "Content-Type: application/json" \
  -d '{
    "jobs": [
      {"data": {"to": "user1@test.com"}, "priority": 5},
      {"data": {"to": "user2@test.com"}},
      {"data": {"to": "user3@test.com"}, "delay": 60000}
    ]
  }'

Each item in jobs supports all the same fields as a single push. The operation is atomic — either all jobs are pushed or none are (if validation fails for any job).

Response (200):

{ "ok": true, "ids": ["id-1", "id-2", "id-3"] }

IDs are returned in the same order as the input jobs.


Pull the next available job from a queue for processing. The job transitions from waiting to active state. Respects priority ordering (higher priority first) and FIFO within the same priority.

GET /queues/:queue/jobs[?timeout=ms]
# Immediate return (no wait) — returns null if queue is empty
curl http://localhost:6790/queues/emails/jobs

# Long-poll for up to 5 seconds — waits for a job to become available
curl "http://localhost:6790/queues/emails/jobs?timeout=5000"

| Parameter | Type | Default | Max | Description |
| --- | --- | --- | --- | --- |
| timeout | number | 0 | 60000 | Long-poll timeout in ms. 0 = return immediately if no job available. |

Response with job (200):

{
  "ok": true,
  "job": {
    "id": "019ce9d7-6983-7000-946f-48737be2b0f9",
    "queue": "emails",
    "data": {"to": "user@test.com", "subject": "Welcome"},
    "priority": 10,
    "createdAt": 1700000000000,
    "runAt": 1700000000000,
    "attempts": 0,
    "maxAttempts": 3,
    "backoff": 1000,
    "progress": 0,
    "tags": [],
    "lifo": false,
    "removeOnComplete": false,
    "removeOnFail": false
  }
}

No job available (200):

{ "ok": true, "job": null }

Behavior notes:

  • Paused queues return null even if jobs exist
  • The pulled job is tracked for the duration of the HTTP request. If the client disconnects without ACKing, the stall detector will eventually return the job to waiting state
  • Rate-limited queues may return null even if jobs exist (rate limit exceeded)
  • Per-group concurrency: if the job’s groupId has reached its concurrency limit, the next job from a different group is returned
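A typical worker cycle over these endpoints is pull, process, then ack or fail. A sketch (the fetchImpl parameter is injected here so the logic can be exercised without a live server; error handling is deliberately minimal):

```javascript
async function processOne(baseUrl, queue, handler, fetchImpl = fetch) {
  // Long-poll up to 5s; the server replies { ok, job } with job = null when empty.
  const pullRes = await fetchImpl(`${baseUrl}/queues/${queue}/jobs?timeout=5000`);
  const { job } = await pullRes.json();
  if (!job) return null; // empty, paused, or rate-limited queue

  const post = (path, body) =>
    fetchImpl(`${baseUrl}${path}`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(body),
    });

  try {
    const result = await handler(job);
    await post(`/jobs/${job.id}/ack`, { result }); // success → completed
  } catch (err) {
    await post(`/jobs/${job.id}/fail`, { error: String(err) }); // retry or DLQ
  }
  return job.id;
}
```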

Pull multiple jobs at once. More efficient than individual pulls for high-throughput workers.

POST /queues/:queue/jobs/pull-batch
curl -X POST http://localhost:6790/queues/emails/jobs/pull-batch \
  -H "Content-Type: application/json" \
  -d '{"count": 10, "timeout": 5000}'

| Field | Type | Required | Range | Description |
| --- | --- | --- | --- | --- |
| count | number | Yes | 1-1000 | Number of jobs to pull |
| timeout | number | No | 0-60000 | Long-poll timeout (ms) |
| owner | string | No | | Lock owner identifier for lock-based processing |
| lockTtl | number | No | | Lock time-to-live (ms). Job is released if lock expires without ACK. |

Response (200):

{
  "ok": true,
  "jobs": [
    {"id": "id-1", "queue": "emails", "data": {...}, "priority": 5, ...},
    {"id": "id-2", "queue": "emails", "data": {...}, "priority": 3, ...}
  ]
}

Returns fewer jobs than count if the queue doesn’t have enough available jobs.


Retrieve a job by ID. Returns the full job object regardless of state (waiting, active, delayed, completed).

GET /jobs/:id
curl http://localhost:6790/jobs/019ce9d7-6983-7000-946f-48737be2b0f9

Response (200):

{
  "ok": true,
  "job": {
    "id": "019ce9d7-6983-7000-946f-48737be2b0f9",
    "queue": "emails",
    "data": {"to": "user@test.com"},
    "priority": 0,
    "createdAt": 1700000000000,
    "runAt": 1700000000000,
    "startedAt": 1700000001000,
    "completedAt": null,
    "attempts": 1,
    "maxAttempts": 3,
    "backoff": 1000,
    "progress": 50,
    "tags": ["onboarding"],
    "lifo": false,
    "removeOnComplete": false,
    "removeOnFail": false
  }
}

Not found (404): { "ok": false, "error": "Job not found" }


Look up a job using the custom jobId that was set at push time. Useful for idempotent workflows where you generate your own IDs.

GET /jobs/custom/:customId
curl http://localhost:6790/jobs/custom/order-12345

Returns the same response format as GET /jobs/:id.


GET /jobs/:id/state
{ "ok": true, "id": "019ce9d7-...", "state": "active" }

Possible states: waiting, delayed, active, completed, failed, unknown


Retrieve the result stored when a job was acknowledged. Only available for completed jobs.

GET /jobs/:id/result
{ "ok": true, "id": "019ce9d7-...", "result": {"sent": true, "messageId": "abc-123"} }

Results are stored in an LRU cache (max 5,000 entries); the oldest entries are evicted when the cache is full. For permanent result storage, persist results in your own database.
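A minimal sketch of that eviction behavior, assuming a standard LRU policy (a Map keeps insertion order, so the first key is always the least recently used once reads refresh recency; this is illustrative, not bunqueue's internal implementation):

```javascript
class ResultCache {
  constructor(max = 5000) { this.max = max; this.map = new Map(); }

  set(id, result) {
    if (this.map.has(id)) this.map.delete(id); // refresh position
    this.map.set(id, result);
    if (this.map.size > this.max) {
      this.map.delete(this.map.keys().next().value); // evict oldest
    }
  }

  get(id) {
    const value = this.map.get(id);
    if (value !== undefined) { // re-insert so reads count as "recent use"
      this.map.delete(id);
      this.map.set(id, value);
    }
    return value;
  }
}
```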


Remove a job from the queue. Works on waiting, delayed, and active jobs.

DELETE /jobs/:id
curl -X DELETE http://localhost:6790/jobs/019ce9d7-...

Response (200): { "ok": true }

If the job is active, it’s removed from the processing queue and the worker’s next heartbeat or ACK attempt will fail with “job not found”. The job is not re-queued.


Mark a job as successfully completed. The job transitions from active to completed state. Optionally store a result that can be retrieved later via GET /jobs/:id/result.

POST /jobs/:id/ack
curl -X POST http://localhost:6790/jobs/019ce9d7-.../ack \
  -H "Content-Type: application/json" \
  -d '{"result": {"sent": true, "messageId": "abc-123"}}'

Request body (optional):

| Field | Type | Description |
| --- | --- | --- |
| result | any | Completion result. Stored in LRU cache (5,000 max). |
| token | string | Lock token (if using lock-based processing). |

Response (200): { "ok": true }

Error (400): { "ok": false, "error": "Job not found or not active" }

What happens on ACK:

  1. Job is removed from the active processing queue
  2. Result is stored in the LRU cache (if provided)
  3. Completion counter incremented
  4. job:completed event broadcast to all subscribers
  5. queue:counts event broadcast with updated counts
  6. Dependent jobs (via dependsOn) are checked and promoted if all dependencies are met
  7. If removeOnComplete: true, the job is permanently deleted from memory

Acknowledge multiple jobs in a single round-trip.

POST /jobs/ack-batch
curl -X POST http://localhost:6790/jobs/ack-batch \
  -H "Content-Type: application/json" \
  -d '{"ids": ["id-1", "id-2", "id-3"], "results": [{"a": 1}, null, {"c": 3}]}'

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| ids | string[] | Yes | Job IDs to acknowledge |
| results | unknown[] | No | Per-job results (positional, same order as ids) |
| tokens | string[] | No | Lock tokens (positional) |

Mark a job as failed. If retry attempts remain, the job is automatically re-queued with exponential backoff (backoff * 2^attempt). If all attempts are exhausted, the job moves to the Dead Letter Queue (DLQ).

POST /jobs/:id/fail
curl -X POST http://localhost:6790/jobs/019ce9d7-.../fail \
  -H "Content-Type: application/json" \
  -d '{"error": "SMTP connection refused"}'

| Field | Type | Description |
| --- | --- | --- |
| error | string | Error message. Stored with the job for debugging. |
| token | string | Lock token (if using lock-based processing). |

Retry behavior:

Attempt 1 fails → wait 1s (backoff) → retry
Attempt 2 fails → wait 2s (backoff * 2) → retry
Attempt 3 fails → wait 4s (backoff * 4) → move to DLQ

The retry delay is calculated as min(backoff * 2^attempt, 24 hours).
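That formula translates directly to code; the attempt index here is zero-based (the first retry uses 2^0 = 1 times the base backoff, matching the schedule above):

```javascript
const DAY_MS = 86_400_000; // documented cap: 24 hours

// Delay before the next retry, given the base backoff and how many
// retries have already happened.
function retryDelay(backoff, attempt) {
  return Math.min(backoff * 2 ** attempt, DAY_MS);
}
```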


Edit the JSON payload of a job in-place. Works on jobs in waiting, delayed, or active state. Useful for modifying job parameters before processing or while a job is being retried.

PUT /jobs/:id/data
curl -X PUT http://localhost:6790/jobs/019ce9d7-.../data \
  -H "Content-Type: application/json" \
  -d '{"data": {"to": "new@email.com", "subject": "Updated subject"}}'

The entire data field is replaced (not merged). To update a single field, read the current data first, modify it, then PUT the full object.

Broadcasts: job:data-updated event.


Change the priority of a job in waiting or delayed state. Higher priority = processed sooner.

PUT /jobs/:id/priority
{ "priority": 100 }

The job is repositioned in the priority queue immediately. Does not work on active jobs (they’re already being processed).

Broadcasts: job:priority-changed event with { jobId, newPriority }.


Move a job from delayed to waiting state for immediate processing. The job becomes available for the next PULL operation.

POST /jobs/:id/promote
curl -X POST http://localhost:6790/jobs/019ce9d7-.../promote

Error (400): { "ok": false, "error": "Job not found or not delayed" } — returned if the job doesn’t exist, is already in waiting state, or is active.

Broadcasts: job:promoted event.


Alias for Promote. Identical behavior.

POST /jobs/:id/move-to-wait

Move an active job back to delayed state. Useful when a worker determines it can’t process the job right now but doesn’t want to fail it.

POST /jobs/:id/move-to-delayed
{ "delay": 60000 }

The job will become waiting again after delay milliseconds.


Update the delay of a delayed job. The job’s runAt time is recalculated.

PUT /jobs/:id/delay
{ "delay": 30000 }

Broadcasts: job:delay-changed event with { jobId, newDelay }.


Move a job directly to the Dead Letter Queue, bypassing the normal retry mechanism. Works on waiting, delayed, and active jobs.

POST /jobs/:id/discard
curl -X POST http://localhost:6790/jobs/019ce9d7-.../discard

Broadcasts: job:discarded event.


Long-poll until a job completes or the timeout expires. This is event-driven (not polling) — the server subscribes to the job’s completion event internally and resolves immediately when the job finishes.

POST /jobs/:id/wait
curl -X POST http://localhost:6790/jobs/019ce9d7-.../wait \
  -H "Content-Type: application/json" \
  -d '{"timeout": 30000}'

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| timeout | number | 30000 | Maximum wait time in milliseconds |

Completed within timeout:

{ "ok": true, "completed": true, "result": {"sent": true} }

Timed out:

{ "ok": true, "completed": false }

Not found:

{ "ok": false, "error": "Job not found" }

If the job is already completed when the request arrives, the result is returned immediately without waiting.


Workers can report progress (0-100) during long-running jobs. The dashboard can display this as a progress bar.

Get current progress:

GET /jobs/:id/progress
{ "ok": true, "progress": 75, "message": "Processing attachments..." }

Update progress:

POST /jobs/:id/progress
{ "progress": 75, "message": "Processing attachments..." }

Progress is stored on the job object and broadcast as a job:progress event to all WebSocket subscribers.


For jobs that use dependsOn (flow/pipeline), retrieve the results of all completed child jobs.

GET /jobs/:id/children
{ "ok": true, "values": {"child-job-1": {"result": "..."}, "child-job-2": {"result": "..."}} }

Send a heartbeat to prevent the stall detector from marking the job as stalled. Workers should send heartbeats at regular intervals (default: every 10 seconds) for long-running jobs.

POST /jobs/:id/heartbeat
{ "token": "lock-token", "duration": 30000 }

Both fields are optional. If the job doesn’t exist or isn’t active, returns an error.

Batch heartbeat:

POST /jobs/heartbeat-batch
{ "ids": ["id-1", "id-2"], "tokens": ["tok-1", "tok-2"] }

Extend the lock TTL on an active job. Used in lock-based processing where a worker holds a lock on a job and needs more time.

POST /jobs/:id/extend-lock
{ "duration": 30000, "token": "lock-token" }

Batch extend:

POST /jobs/extend-locks
{ "ids": ["id-1", "id-2"], "tokens": ["tok-1", "tok-2"], "durations": [30000, 60000] }

Structured logging attached to individual jobs. Useful for debugging failed jobs — each log entry has a level and message.

Add a log entry:

POST /jobs/:id/logs
curl -X POST http://localhost:6790/jobs/019ce9d7-.../logs \
  -H "Content-Type: application/json" \
  -d '{"message": "Connecting to SMTP server...", "level": "info"}'

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| message | string | Yes | Log message |
| level | string | No | info (default), warn, or error |

Logs are stored in an LRU cache (max 100 entries per job, 10,000 jobs total).

Get all logs:

GET /jobs/:id/logs

Clear logs:

DELETE /jobs/:id/logs

Returns all queue names that have had at least one job pushed to them. Queue names persist until the queue is obliterated.

GET /queues
{ "ok": true, "queues": ["emails", "notifications", "reports"] }

Paginated listing of jobs in a specific queue, filtered by state.

GET /queues/:queue/jobs/list[?state=waiting&limit=10&offset=0]
curl "http://localhost:6790/queues/emails/jobs/list?state=waiting&limit=20&offset=0"

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| state | string | all | Filter: waiting, delayed, active |
| limit | number | unlimited | Max jobs to return |
| offset | number | 0 | Skip first N jobs |

Response (200):

{
  "ok": true,
  "jobs": [
    {"id": "...", "queue": "emails", "data": {...}, "priority": 5, "createdAt": 1700000000000, "runAt": 1700000000000, "attempts": 0, "progress": 0}
  ]
}

Jobs are returned in priority order (highest first).


Returns the number of jobs in each state for a specific queue.

GET /queues/:queue/counts
{
  "ok": true,
  "counts": {"waiting": 150, "active": 12, "delayed": 30, "completed": 5000, "failed": 3}
}

Returns the total number of jobs (all states) in a queue.

GET /queues/:queue/count
{ "ok": true, "count": 192 }

Returns a breakdown of jobs by priority level. Useful for dashboards showing priority distribution.

GET /queues/:queue/priority-counts
{ "ok": true, "queue": "emails", "counts": {"0": 100, "5": 30, "10": 12} }

GET /queues/:queue/paused
{ "ok": true, "paused": false }

Stop processing new jobs from this queue. Active jobs continue to completion — only new pulls are blocked.

POST /queues/:queue/pause

Broadcasts: queue:paused event.


Resume processing after a pause.

POST /queues/:queue/resume

Broadcasts: queue:resumed event.


Remove all waiting and delayed jobs from a queue. Active jobs are not affected — they continue processing normally. This is useful for clearing a backlog without affecting in-progress work.

POST /queues/:queue/drain
{ "ok": true, "count": 150 }

Broadcasts: queue:drained event with { queue, count }.


Completely destroy a queue and all its jobs (waiting, delayed, and metadata). Active jobs continue but their ACK/FAIL will be no-ops.

POST /queues/:queue/obliterate

Broadcasts: queue:obliterated event.


Remove jobs older than a grace period, optionally filtered by state. Useful for maintenance — cleaning up old waiting/delayed jobs that are no longer relevant.

POST /queues/:queue/clean
curl -X POST http://localhost:6790/queues/emails/clean \
  -H "Content-Type: application/json" \
  -d '{"grace": 86400000, "state": "waiting", "limit": 500}'

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| grace | number | 0 | Only remove jobs older than this many milliseconds. 0 = remove all. |
| state | string | all | waiting or delayed. |
| limit | number | 1000 | Max jobs to remove per call. |

Response (200):

{ "ok": true, "count": 42 }

Uses a temporal index for efficient O(log n + k) cleanup instead of full queue scan.

Broadcasts: queue:cleaned event with { queue, state, count }.


Move all (or up to N) delayed jobs in a queue to waiting state immediately.

POST /queues/:queue/promote-jobs
{ "count": 50 }

Omit count to promote all delayed jobs.


Re-queue completed jobs for reprocessing. Useful for replaying jobs after a bug fix.

POST /queues/:queue/retry-completed
{ "id": "specific-job-id" }

Omit id to retry all completed jobs in the queue.


Jobs that exhaust all retry attempts or are explicitly discarded land in the DLQ. Each queue has its own DLQ. DLQ entries include the original job data, failure reason, and timestamp.

GET /queues/:queue/dlq[?count=100]
{
  "ok": true,
  "jobs": [
    {"id": "...", "data": {...}, "attempts": 3, "createdAt": 1700000000000}
  ]
}

Default count: 100.


Re-queue jobs from the DLQ back to the main queue for reprocessing. The job’s attempt counter is reset.

POST /queues/:queue/dlq/retry
{ "jobId": "specific-job-id" }

Omit jobId to retry all DLQ jobs. Returns { "ok": true, "count": 5 }.

Broadcasts: dlq:retried (single) or dlq:retry-all (all) event.


Remove all jobs from the DLQ permanently. This is irreversible.

POST /queues/:queue/dlq/purge
{ "ok": true, "count": 12 }

Broadcasts: dlq:purged event with { queue, count }.


Per-queue controls for throughput and parallelism. These are queue-level settings, independent of HTTP rate limiting.

Limit the number of jobs that can be processed per second from a queue.

PUT /queues/:queue/rate-limit
{ "limit": 100 }

When the rate limit is hit, workers pulling from this queue receive null until the next window opens.

Broadcasts: ratelimit:set event.

DELETE /queues/:queue/rate-limit

Broadcasts: ratelimit:cleared event.

Limit the number of jobs that can be processed simultaneously from a queue.

PUT /queues/:queue/concurrency
{ "limit": 5 }

Broadcasts: concurrency:set event.

DELETE /queues/:queue/concurrency

Broadcasts: concurrency:cleared event.


Stall detection identifies jobs that a worker started processing but never acknowledged. This can happen when a worker crashes, hangs, or loses network connectivity.

Get current config:

GET /queues/:queue/stall-config

Update config:

PUT /queues/:queue/stall-config
{
  "config": {
    "stallInterval": 30000,
    "maxStalls": 3,
    "gracePeriod": 5000
  }
}

| Field | Default | Description |
| --- | --- | --- |
| stallInterval | 30000 | How often to check for stalled jobs (ms) |
| maxStalls | 3 | Max times a job can stall before moving to DLQ |
| gracePeriod | 5000 | Grace period after job starts before stall detection kicks in |

Broadcasts: config:stall-changed event.

Get current config:

GET /queues/:queue/dlq-config

Update config:

PUT /queues/:queue/dlq-config
{
  "config": {
    "autoRetry": true,
    "maxAge": 604800000,
    "maxEntries": 10000
  }
}

| Field | Default | Description |
| --- | --- | --- |
| autoRetry | false | Automatically retry DLQ entries after a delay |
| maxAge | 604800000 | Max age of DLQ entries in ms (default: 7 days). Older entries are removed. |
| maxEntries | 10000 | Max DLQ entries per queue. Oldest are evicted when full. |

Broadcasts: config:dlq-changed event.


Schedule recurring jobs using cron expressions or fixed intervals.

GET /crons
{
  "ok": true,
  "crons": [
    {
      "name": "daily-cleanup",
      "queue": "maintenance",
      "schedule": "0 2 * * *",
      "repeatEvery": null,
      "nextRun": 1700100000000,
      "executions": 42,
      "maxLimit": null,
      "timezone": "UTC"
    }
  ]
}

POST /crons
curl -X POST http://localhost:6790/crons \
  -H "Content-Type: application/json" \
  -d '{
    "name": "daily-cleanup",
    "queue": "maintenance",
    "data": {"task": "cleanup-stale-sessions"},
    "schedule": "0 2 * * *",
    "timezone": "America/New_York"
  }'
| Field | Type | Required | Description |
| --- | --- | --- | --- |
| name | string | Yes | Unique identifier. Re-using a name updates the existing cron. |
| queue | string | Yes | Target queue for the generated jobs. |
| data | any | Yes | Job payload pushed on each execution. |
| schedule | string | * | Cron expression ("*/5 * * * *", "0 2 * * *"). |
| repeatEvery | number | * | Interval in ms (alternative to cron expression). |
| timezone | string | No | IANA timezone (default: UTC). Affects cron scheduling. |
| priority | number | No | Priority for generated jobs. |
| maxLimit | number | No | Max total executions. Cron is removed after reaching this count. |

* Either schedule or repeatEvery is required (not both).

Broadcasts: cron:created event.
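The schedule/repeatEvery exclusivity can be checked client-side before registering a cron; a sketch that mirrors the rule stated above (not the server's actual validation code):

```javascript
// Returns an error string, or null when the body is valid.
function validateCronBody(body) {
  const hasCron = typeof body.schedule === 'string';
  const hasEvery = typeof body.repeatEvery === 'number';
  if (hasCron === hasEvery) {
    // Either both were given or neither was.
    return 'Provide exactly one of schedule or repeatEvery';
  }
  return null;
}
```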


GET /crons/:name

DELETE /crons/:name

Broadcasts: cron:deleted event.


Register HTTP endpoints to be called when specific job events occur. Webhooks are delivered with exponential backoff on failure (3 retries, 1s base delay).

GET /webhooks

POST /webhooks
curl -X POST http://localhost:6790/webhooks \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://example.com/hooks/bunqueue",
    "events": ["completed", "failed"],
    "queue": "emails",
    "secret": "whsec_abc123"
  }'

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| url | string | Yes | HTTPS endpoint URL. Validated against SSRF (localhost, private IPs, cloud metadata blocked). |
| events | string[] | Yes | Event types to subscribe to (completed, failed, pushed, started). |
| queue | string | No | Filter to specific queue. Omit for all queues. |
| secret | string | No | HMAC signing secret for verifying webhook authenticity. |

Response (200):

{ "ok": true, "data": {"webhookId": "wh-abc123", "url": "https://...", "events": ["completed", "failed"], "createdAt": 1700000000000} }

Broadcasts: webhook:added event.


DELETE /webhooks/:id

Broadcasts: webhook:removed event.


PUT /webhooks/:id/enabled
{ "enabled": false }

Disabled webhooks stop receiving deliveries but retain their configuration.


GET /workers
{
  "ok": true,
  "data": {
    "workers": [
      {"id": "w-1", "name": "email-worker", "queues": ["emails"], "lastSeen": 1700000000000, "activeJobs": 3, "processedJobs": 1500, "failedJobs": 12}
    ],
    "stats": {"total": 4, "active": 3}
  }
}

POST /workers
{ "name": "email-worker-1", "queues": ["emails", "notifications"] }

Broadcasts: worker:connected event with { workerId, name, queues }.


DELETE /workers/:id

Broadcasts: worker:disconnected event with { workerId }.


Keep a worker’s registration alive. Workers that stop sending heartbeats are eventually marked as disconnected.

POST /workers/:id/heartbeat

Comprehensive health information for load balancers and monitoring systems. No authentication required.

GET /health
{
  "ok": true,
  "status": "healthy",
  "uptime": 86400,
  "version": "2.6.17",
  "queues": {"waiting": 150, "active": 12, "delayed": 30, "completed": 50000, "dlq": 3},
  "connections": {"tcp": 8, "ws": 4, "sse": 2},
  "memory": {"heapUsed": 45, "heapTotal": 64, "rss": 82}
}

Memory values in MB. Uptime in seconds. Returns "status": "degraded" when disk is full.


GET /healthz # Returns "OK" (text/plain, 200)
GET /live # Returns "OK" (text/plain, 200)
GET /ready # Returns { "ok": true, "ready": true }

No authentication required. Designed for Kubernetes probe configuration.


GET /ping
{ "ok": true, "data": {"pong": true, "time": 1700000000000} }

Server statistics with throughput counters, memory usage, and internal collection sizes.

GET /stats
{
  "ok": true,
  "stats": {
    "waiting": 150, "active": 12, "delayed": 30, "completed": 50000, "dlq": 3,
    "totalPushed": 100000, "totalPulled": 99500, "totalCompleted": 98000, "totalFailed": 200,
    "uptime": 86400
  },
  "memory": {"heapUsed": 45, "heapTotal": 64, "rss": 82, "external": 2, "arrayBuffers": 1},
  "collections": {"jobIndex": 1500, "completedJobs": 5000, "processingTotal": 12, "queuedTotal": 150, "temporalIndexTotal": 30}
}

GET /metrics
{ "ok": true, "metrics": {"totalPushed": 100000, "totalPulled": 99500, "totalCompleted": 98000, "totalFailed": 200} }

GET /prometheus

Returns text/plain; version=0.0.4 format for Prometheus scraping. Includes per-queue gauges, throughput counters, and latency histograms. Optionally requires auth (requireAuthForMetrics).


GET /storage
{ "ok": true, "diskFull": false }

When diskFull: true, the server stops accepting durable writes. In-memory operations continue.


POST /gc

Triggers Bun GC and internal memory compaction (compactMemory()). Returns before/after heap stats in MB.

{
  "ok": true,
  "before": {"heapUsed": 52, "heapTotal": 64, "rss": 90},
  "after": {"heapUsed": 45, "heapTotal": 64, "rss": 85}
}

GET /heapstats

Detailed V8/JSC heap breakdown for debugging memory leaks. Returns top 20 object types by count, internal collection sizes, and heap metrics.


bunqueue provides two real-time event channels: Server-Sent Events (SSE) for simple one-way streaming, and WebSocket with full pub/sub for interactive dashboards.

GET /events
GET /events/queues/:queue

SSE broadcasts all job events in the legacy format ({ eventType, queue, jobId, ... }). For authenticated SSE, use @microsoft/fetch-event-source (native EventSource doesn’t support custom headers).

const events = new EventSource('http://localhost:6790/events');
events.onmessage = (e) => {
  const data = JSON.parse(e.data);
  if (data.connected) return;
  console.log(`[${data.eventType}] ${data.queue} ${data.jobId}`);
};

ws://localhost:6790/ws
ws://localhost:6790/ws/queues/:queue

WebSocket supports pub/sub subscriptions with 50 event types across 9 categories. Clients subscribe to specific events and receive only matching data — zero polling needed.

Every pub/sub event follows this structure:

{
  "event": "job:completed",
  "ts": 1710000000000,
  "data": {
    "queue": "payments",
    "jobId": "abc-123"
  }
}
  • event — event name (category:action)
  • ts — unix timestamp in milliseconds
  • data — event-specific payload

After connecting, send a Subscribe command to start receiving events:

{ "cmd": "Subscribe", "events": ["job:*", "queue:counts", "stats:snapshot", "health:status"], "reqId": "1" }

Response:

{ "ok": true, "subscribed": ["job:*", "queue:counts", "stats:snapshot", "health:status"], "reqId": "1" }

Unsubscribe from specific events:

{ "cmd": "Unsubscribe", "events": ["job:progress"] }

Unsubscribe from everything:

{ "cmd": "Unsubscribe", "events": [] }
| Pattern | Matches |
| --- | --- |
| * | All 50 events |
| job:* | All 14 job events |
| queue:* | All 7 queue events + queue:counts |
| worker:* | All 3 worker events |
| dlq:* | All 4 DLQ events |
| cron:* | All 5 cron events |
| stats:* | stats:snapshot |
| health:* | health:status |
| storage:* | storage:status |
| config:* | Both config events |
| ratelimit:* | All rate limit events |
| concurrency:* | All concurrency events |
| webhook:* | All 4 webhook events |
| server:* | server:started, server:shutdown |
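The wildcard semantics above amount to prefix matching on the `category:action` event name. The server does this filtering for you, but a client that wants to route locally could use the same rule — a sketch (the `matchesPattern` helper is ours, not part of the API):

```javascript
// Does a subscription pattern match an event name?
// '*' matches everything; 'cat:*' matches any event in that category;
// otherwise the names must match exactly.
function matchesPattern(pattern, event) {
  if (pattern === '*') return true;
  if (pattern.endsWith(':*')) return event.startsWith(pattern.slice(0, -1));
  return pattern === event;
}
```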

Clients that never send Subscribe receive all job events in the old format ({ eventType: "completed", queue, jobId, ... }). This maintains backward compatibility with existing integrations.

WebSocket clients can also send any TCP protocol command as JSON. This allows a dashboard to both receive events AND send commands (pause queue, retry job, etc.) over a single connection:

// Send a command
ws.send(JSON.stringify({ cmd: 'Pause', queue: 'emails', reqId: '2' }));
// Response
{ "ok": true, "reqId": "2" }
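Because responses echo reqId, a client can keep several commands in flight on the one socket and await each reply. A hedged sketch of a request/response correlator (the `makeCommandClient` wrapper is ours; only the `{ cmd, reqId }` framing comes from the protocol above):

```javascript
// Correlate WebSocket command responses by reqId so commands can be awaited.
function makeCommandClient(ws) {
  const pending = new Map();
  let nextId = 0;
  ws.addEventListener('message', (e) => {
    const msg = JSON.parse(e.data);
    if (msg.reqId && pending.has(msg.reqId)) {
      pending.get(msg.reqId)(msg);   // resolve the waiting promise
      pending.delete(msg.reqId);
    }
    // messages without a known reqId are pub/sub events; handle them elsewhere
  });
  return function sendCommand(cmd) {
    const reqId = String(++nextId);
    ws.send(JSON.stringify({ ...cmd, reqId }));
    return new Promise((resolve) => pending.set(reqId, resolve));
  };
}

// const send = makeCommandClient(ws);
// const res = await send({ cmd: 'Pause', queue: 'emails' });
```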

WebSocket authentication supports two options:

  1. Header auth: Send Authorization: Bearer <token> during the WebSocket handshake
  2. Command auth: Send { "cmd": "Auth", "token": "my-secret" } after connecting
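For browser clients, where the native WebSocket constructor cannot set headers, option 2 is the practical one. A sketch (the helper names are ours; the token is a placeholder):

```javascript
// Frames for command-based auth (option 2): authenticate, then subscribe.
function authFrames(token, events) {
  return [
    JSON.stringify({ cmd: 'Auth', token }),
    JSON.stringify({ cmd: 'Subscribe', events, reqId: '1' }),
  ];
}

// Browser usage: send the frames right after the socket opens.
function connectAuthenticated(url, token, events) {
  const ws = new WebSocket(url);
  ws.onopen = () => authFrames(token, events).forEach((f) => ws.send(f));
  return ws;
}

// const ws = connectAuthenticated('ws://localhost:6790/ws', 'secret-token-1', ['job:*']);
```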

When a WebSocket disconnects, all jobs owned by that client (pulled but not ACKed) are automatically released back to the queue. This prevents jobs from being stuck when a worker disconnects unexpectedly.

const ws = new WebSocket('ws://localhost:6790/ws');

ws.onopen = () => {
  // Subscribe to everything a dashboard needs
  ws.send(JSON.stringify({
    cmd: 'Subscribe',
    events: [
      'job:*',          // All job lifecycle events
      'queue:counts',   // Real-time count updates (eliminates N+1 polling)
      'stats:snapshot', // Global stats every 5s
      'health:status',  // Health check every 10s
      'worker:*',       // Worker connect/disconnect
      'dlq:*',          // DLQ events
      'cron:*',         // Cron events
      'queue:paused',   // Queue state changes
      'queue:resumed',
    ]
  }));
};

ws.onmessage = (e) => {
  const msg = JSON.parse(e.data);

  // Pub/sub event
  if (msg.event) {
    switch (msg.event) {
      // Periodic snapshots (replace HTTP polling)
      case 'stats:snapshot':
        updateOverviewCards(msg.data);
        updateMetricsCharts(msg.data);
        break;
      case 'health:status':
        updateConnectionBanner(msg.data.ok);
        updateMemoryDisplay(msg.data.memory);
        break;
      // Queue counts (eliminates the N+1 problem)
      case 'queue:counts':
        updateQueueRow(msg.data.queue, msg.data);
        break;
      // Real-time activity feed
      case 'job:completed':
      case 'job:failed':
      case 'job:pushed':
        addToActivityFeed(msg);
        break;
      // Worker status
      case 'worker:connected':
        addWorkerRow(msg.data);
        break;
      case 'worker:disconnected':
        removeWorkerRow(msg.data.workerId);
        break;
      // DLQ alerts
      case 'dlq:added':
        incrementDlqCounter(msg.data.queue);
        showAlert(`Job ${msg.data.jobId} moved to DLQ: ${msg.data.reason}`);
        break;
    }
    return;
  }

  // Command response (for interactive operations)
  if (msg.reqId) {
    handleCommandResponse(msg);
  }
};

// Interactive: pause a queue from the dashboard
function pauseQueue(queue) {
  ws.send(JSON.stringify({ cmd: 'Pause', queue, reqId: `pause-${queue}` }));
}
| Event | Payload | Description |
| --- | --- | --- |
| job:pushed | queue, jobId | Job added to queue |
| job:active | queue, jobId | Worker picked up job |
| job:completed | queue, jobId | Job finished successfully |
| job:failed | queue, jobId, error | Job errored |
| job:removed | queue, jobId | Job cancelled/deleted |
| job:promoted | jobId | Delayed job moved to waiting |
| job:progress | queue, jobId, progress | Worker reported progress (0-100) |
| job:delayed | queue, jobId, delay | Job moved to delayed state |
| job:stalled | queue, jobId | Stall detected (no heartbeat) |
| job:retried | queue, jobId | Failed job retried |
| job:discarded | jobId | Job sent to DLQ via discard |
| job:priority-changed | jobId, newPriority | Priority updated |
| job:data-updated | jobId | Job payload modified |
| job:delay-changed | jobId, newDelay | Delay modified |
| job:expired | queue, jobId, ttl, age | Job TTL expired (distinguished from fail) |
| Event | Payload | Description |
| --- | --- | --- |
| queue:counts | queue, waiting, active, completed, failed, delayed | Fired on every job state change. Eliminates N+1 polling. |
| queue:paused | queue | Queue paused |
| queue:resumed | queue | Queue resumed |
| queue:drained | queue, count | All waiting/delayed jobs removed |
| queue:cleaned | queue, state, count | Jobs cleaned by state |
| queue:obliterated | queue | Queue destroyed |
| queue:created | queue | First job pushed to new queue |
| queue:removed | queue | Queue removed |
| queue:idle | queue, idleSeconds | Queue empty with no active jobs for N seconds. Configure via QUEUE_IDLE_THRESHOLD_MS (default: 30000). |
| queue:threshold | queue, size, threshold | Queue size exceeds threshold. Configure via QUEUE_SIZE_THRESHOLD (default: 0 = disabled). |
| Event | Payload | Description |
| --- | --- | --- |
| flow:completed | parentJobId, queue, childrenCount | All children of a flow completed successfully |
| flow:failed | parentJobId, failedChildId, queue, error | A child in a flow failed permanently (moved to DLQ) |
| Event | Payload | Description |
| --- | --- | --- |
| dlq:added | queue, jobId, reason | Job moved to DLQ |
| dlq:retried | queue, jobId | Single DLQ entry retried |
| dlq:retry-all | queue, count | All DLQ entries retried |
| dlq:purged | queue, count | DLQ emptied |
| Event | Payload | Description |
| --- | --- | --- |
| cron:created | name, queue, pattern?, every?, nextRun | Cron added |
| cron:deleted | name | Cron removed |
| cron:fired | name, queue | Cron triggered, job pushed |
| cron:updated | name, queue, nextRun | Cron modified |
| cron:missed | name, queue, error | Cron missed execution window |
| cron:skipped | name, queue, reason | Cron skipped due to overlap (previous instance still within interval) |
| Event | Payload | Description |
| --- | --- | --- |
| worker:connected | workerId, name, queues | Worker registered |
| worker:disconnected | workerId | Worker gone |
| worker:heartbeat | workerId | Worker alive signal |
| worker:overloaded | workerId, name, activeJobs, concurrency, overloadedSeconds | Worker at max concurrency for N seconds. Configure via WORKER_OVERLOAD_THRESHOLD_MS (default: 30000). |
| worker:error | workerId, name, failedJobs, processedJobs, failureRate | Worker failure rate is high (emitted at thresholds: 5, 10, 25, 50, 100 failures) |
| Event | Payload | Description |
| --- | --- | --- |
| ratelimit:set | queue, max | Rate limit configured |
| ratelimit:cleared | queue | Rate limit removed |
| ratelimit:hit | queue, jobId | Job throttled by rate limit |
| concurrency:set | queue, concurrency | Concurrency limit configured |
| concurrency:cleared | queue | Concurrency limit removed |
| Event | Payload | Description |
| --- | --- | --- |
| webhook:added | id, url, events | Webhook created |
| webhook:removed | id | Webhook deleted |
| webhook:fired | id, event, statusCode | Webhook delivered |
| webhook:failed | id, event, error | Webhook delivery failed |
| Event | Payload | Description |
| --- | --- | --- |
| stats:snapshot | waiting, active, completed, dlq, totalPushed, totalCompleted, totalFailed, pushPerSec, pullPerSec, uptime, queues, workers, cronJobs | Every 5s |
| health:status | ok, uptime, memory: { rss, heapUsed }, connections | Every 10s |
| storage:status | collections, diskFull | Every 30s |
| server:started | version, startedAt | Server boot |
| server:shutdown | reason | Graceful shutdown |
| server:memory-warning | heapUsedMB, thresholdMB, rssMB | Heap exceeds threshold. Configure via MEMORY_WARNING_MB (default: 0 = disabled). |
| storage:size-warning | sizeMB, thresholdMB | SQLite DB exceeds threshold. Configure via STORAGE_WARNING_MB (default: 0 = disabled). |
| Event | Payload | Description |
| --- | --- | --- |
| config:stall-changed | queue, config | Stall detection config updated |
| config:dlq-changed | queue, config | DLQ config updated |

This is the most impactful event for dashboards. It fires automatically on every job state change and provides the current counts for the affected queue:

{
  "event": "queue:counts",
  "ts": 1710000000000,
  "data": {
    "queue": "payments",
    "waiting": 15,
    "active": 2,
    "completed": 100,
    "failed": 0,
    "delayed": 3
  }
}

Without queue:counts: A dashboard with 20 queues needs to poll GET /queues/:q/counts for each queue every few seconds = 200+ HTTP requests per minute.

With queue:counts: Subscribe once, receive real-time updates only when counts change. Zero polling, instant UI updates.
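Since each queue:counts event carries the full current counts for its queue, a dashboard only needs a map keyed by queue name, overwritten on every event. A minimal sketch (the `applyCounts` helper is ours; the event shape is the one shown above):

```javascript
// Maintain an up-to-date per-queue counts table from queue:counts events.
function applyCounts(table, msg) {
  const { queue, ...counts } = msg.data;
  table.set(queue, counts);   // full replacement — no merging or deltas needed
  return table;
}

// const table = new Map();
// ws.onmessage = (e) => {
//   const msg = JSON.parse(e.data);
//   if (msg.event === 'queue:counts') applyCounts(table, msg);
// };
```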


| Method | Path | Description |
| --- | --- | --- |
| POST | /queues/:q/jobs | Push a job |
| POST | /queues/:q/jobs/bulk | Push jobs in bulk |
| GET | /queues/:q/jobs | Pull a job |
| POST | /queues/:q/jobs/pull-batch | Pull jobs in batch |
| GET | /jobs/:id | Get job by ID |
| GET | /jobs/custom/:customId | Get job by custom ID |
| DELETE | /jobs/:id | Cancel a job |
| POST | /jobs/:id/ack | Acknowledge a job |
| POST | /jobs/ack-batch | Acknowledge batch |
| POST | /jobs/:id/fail | Fail a job |
| GET | /jobs/:id/state | Get job state |
| GET | /jobs/:id/result | Get job result |
| GET | /jobs/:id/progress | Get progress |
| POST | /jobs/:id/progress | Update progress |
| PUT | /jobs/:id/data | Update job data |
| PUT | /jobs/:id/priority | Change priority |
| POST | /jobs/:id/promote | Promote delayed job |
| POST | /jobs/:id/move-to-wait | Move to waiting |
| POST | /jobs/:id/move-to-delayed | Move to delayed |
| PUT | /jobs/:id/delay | Change delay |
| POST | /jobs/:id/discard | Discard to DLQ |
| POST | /jobs/:id/wait | Wait for completion |
| GET | /jobs/:id/children | Get children values |
| POST | /jobs/:id/heartbeat | Job heartbeat |
| POST | /jobs/heartbeat-batch | Job heartbeat batch |
| POST | /jobs/:id/extend-lock | Extend lock |
| POST | /jobs/extend-locks | Extend locks batch |
| GET | /jobs/:id/logs | Get logs |
| POST | /jobs/:id/logs | Add log |
| DELETE | /jobs/:id/logs | Clear logs |
| Method | Path | Description |
| --- | --- | --- |
| GET | /queues | List all queues |
| GET | /queues/:q/jobs/list | List jobs by state |
| GET | /queues/:q/counts | Job counts per state |
| GET | /queues/:q/count | Total job count |
| GET | /queues/:q/priority-counts | Counts per priority |
| GET | /queues/:q/paused | Check if paused |
| POST | /queues/:q/pause | Pause queue |
| POST | /queues/:q/resume | Resume queue |
| POST | /queues/:q/drain | Drain queue |
| POST | /queues/:q/obliterate | Obliterate queue |
| POST | /queues/:q/clean | Clean old jobs |
| POST | /queues/:q/promote-jobs | Promote delayed jobs |
| POST | /queues/:q/retry-completed | Retry completed jobs |
| Method | Path | Description |
| --- | --- | --- |
| GET | /queues/:q/dlq | List DLQ jobs |
| POST | /queues/:q/dlq/retry | Retry DLQ jobs |
| POST | /queues/:q/dlq/purge | Purge DLQ |
| Method | Path | Description |
| --- | --- | --- |
| PUT | /queues/:q/rate-limit | Set rate limit |
| DELETE | /queues/:q/rate-limit | Clear rate limit |
| PUT | /queues/:q/concurrency | Set concurrency |
| DELETE | /queues/:q/concurrency | Clear concurrency |
| Method | Path | Description |
| --- | --- | --- |
| GET/PUT | /queues/:q/stall-config | Stall detection config |
| GET/PUT | /queues/:q/dlq-config | DLQ config |
| Method | Path | Description |
| --- | --- | --- |
| GET | /crons | List crons |
| POST | /crons | Add cron |
| GET | /crons/:name | Get cron |
| DELETE | /crons/:name | Delete cron |
| Method | Path | Description |
| --- | --- | --- |
| GET | /webhooks | List webhooks |
| POST | /webhooks | Add webhook |
| DELETE | /webhooks/:id | Remove webhook |
| PUT | /webhooks/:id/enabled | Toggle webhook |
| Method | Path | Description |
| --- | --- | --- |
| GET | /workers | List workers |
| POST | /workers | Register worker |
| DELETE | /workers/:id | Unregister worker |
| POST | /workers/:id/heartbeat | Worker heartbeat |
| Method | Path | Auth | Description |
| --- | --- | --- | --- |
| GET | /health | No | Health check |
| GET | /healthz | No | Liveness probe |
| GET | /live | No | Liveness probe |
| GET | /ready | No | Readiness probe |
| GET | /ping | Yes | Ping/pong |
| GET | /stats | Yes | Server statistics |
| GET | /metrics | Yes | Throughput metrics |
| GET | /prometheus | Optional | Prometheus metrics |
| GET | /storage | Yes | Storage health |
| POST | /gc | Yes | Force GC + compact |
| GET | /heapstats | Yes | Heap statistics |
| Protocol | Path | Description |
| --- | --- | --- |
| SSE | /events | All events (legacy format) |
| SSE | /events/queues/:q | Queue-filtered events |
| WebSocket | /ws | Pub/sub + commands (50 events, wildcards) |
| WebSocket | /ws/queues/:q | Queue-filtered pub/sub |