
MCP Server for AI Agents — Job Queue Control via Model Context Protocol

bunqueue ships with a built-in MCP (Model Context Protocol) server that gives AI agents full programmatic control over job queues, scheduling, monitoring, and execution.

MCP is the open standard that allows AI agents like Claude, Cursor, and Windsurf to interact with external tools. bunqueue implements the MCP server specification using the official @modelcontextprotocol/sdk, exposing every queue feature as a tool that agents can call directly. No REST wrappers, no glue code, no middleware. One command to connect, and your agent has a complete job queue system at its disposal.

73 Tools

Full job lifecycle, queue control, cron scheduling, DLQ management, rate limiting, webhooks, workflows, HTTP handlers, monitoring

3 Prompts

Pre-built diagnostic workflows: health report, queue debug, incident response

5 Resources

Read-only live context: server stats, queue states, cron schedules, workers, webhooks

2 Modes

Embedded (local SQLite, zero config) or TCP (connect to a remote bunqueue server)


  1. Install bunqueue

    bun add bunqueue
  2. Connect your AI client

    claude mcp add bunqueue -- bunx bunqueue-mcp
  3. Start using it

    Ask your agent: “Add a job to the emails queue” — it works immediately. The MCP server starts as a child process, communicates via stdio, and the agent can call any of the 73 tools.


The MCP server runs as a subprocess spawned by your AI client. It communicates via stdio using the JSON-RPC protocol defined by the MCP specification. When the agent calls a tool (e.g. bunqueue_add_job), the MCP server executes the operation against the queue backend and returns the result.

The server supports two backend modes: embedded (direct SQLite access in the same process) and TCP (connects to a remote bunqueue server). In embedded mode, everything runs locally with zero configuration. In TCP mode, the MCP server acts as a client that forwards operations to your production bunqueue instance.

Architecture overview: an AI agent (Claude, Cursor, Windsurf) sends requests such as "Schedule a cleanup job every hour" to the bunqueue MCP server (73 tools · 3 prompts · 5 resources) over stdio (JSON-RPC). The server backs onto either embedded SQLite (local) or a TCP client that connects to a remote bunqueue server on port 6789.

Embedded mode is ideal for local development, single-machine deployments, and CLI tools. The MCP server manages its own SQLite database with WAL mode for concurrent access.

TCP mode connects to a running bunqueue server, allowing your agent to manage production queues shared across multiple workers and services.


claude mcp add bunqueue -- bunx bunqueue-mcp
// ~/Library/Application Support/Claude/claude_desktop_config.json
{
  "mcpServers": {
    "bunqueue": {
      "command": "bunx",
      "args": ["bunqueue-mcp"]
    }
  }
}

Connect to a running bunqueue server instead of local SQLite:

{
  "mcpServers": {
    "bunqueue": {
      "command": "bunx",
      "args": ["bunqueue-mcp"],
      "env": {
        "BUNQUEUE_MODE": "tcp",
        "BUNQUEUE_HOST": "your-server.com",
        "BUNQUEUE_PORT": "6789",
        "BUNQUEUE_TOKEN": "your-auth-token"
      }
    }
  }
}
| | Embedded (default) | TCP (remote) |
| --- | --- | --- |
| How it works | Direct SQLite access | Connects to bunqueue server |
| Setup | Zero config | Set `BUNQUEUE_MODE=tcp` |
| Best for | Local dev, single-machine | Production, shared queues |
| Data | `./data/bunq.db` | Remote server handles storage |

HTTP handlers solve a fundamental problem: an AI agent can schedule jobs and manage queues, but it cannot run a persistent worker process to actually execute those jobs. HTTP handlers bridge this gap.

When the agent calls bunqueue_register_handler, the MCP server spawns an embedded Worker inside its own process. This Worker continuously pulls jobs from the specified queue and, for each job, makes an HTTP request to the registered endpoint. The HTTP response is saved as the job result. If the HTTP call fails (non-2xx status or timeout), the job is marked as failed and follows the standard retry/DLQ flow.

This means the agent can set up a fully autonomous pipeline — schedule recurring jobs with cron, auto-process them via HTTP, and check results later — without writing any code or deploying any external service.

Flow: Cron / Add Job (AI agent triggers) → Queue (jobs waiting) → Embedded Worker (inside the MCP server process) → HTTP API (your endpoint) → Job result (response saved)

The embedded Worker inherits all standard Worker features: heartbeat, stall detection, retry with backoff, and dead letter queue. It processes jobs sequentially (concurrency 1) to avoid overwhelming the target endpoint.

bunqueue_register_handler

Register an HTTP handler on a queue. Specify URL, method, optional headers, body template, and timeout. Spawns a Worker that starts processing immediately.

bunqueue_unregister_handler

Remove a handler from a queue and gracefully stop its Worker. Jobs already in the queue remain untouched.

bunqueue_list_handlers

List all active HTTP handlers with their configuration and Worker status (running or stopped).

| Parameter | Required | Description |
| --- | --- | --- |
| `queue` | Yes | Queue name to attach the handler to |
| `url` | Yes | HTTP endpoint URL to call for each job |
| `method` | Yes | `GET`, `POST`, `PUT`, or `DELETE` |
| `headers` | No | Custom HTTP headers (e.g. `Authorization`, `X-Api-Key`) |
| `body` | No | Fixed request body template for POST/PUT. If omitted, the job’s data payload is sent |
| `timeoutMs` | No | Request timeout in milliseconds (default: 30000, range: 1000–120000) |

For POST and PUT requests, the Worker sends the job’s data as JSON body by default. If a body template is provided, it overrides the job data. GET and DELETE requests do not send a body.

Example: Weather monitoring every 10 seconds

$ claude
> Register a handler on "meteo" that calls the OpenWeather API
bunqueue_register_handler
queue: "meteo"
method: GET
url: "https://api.openweathermap.org/data/2.5/weather?q=Milan&appid=xxx"
Worker started. Jobs in "meteo" will be processed via GET.
> Create a cron that pushes a job every 10 seconds
bunqueue_add_cron
name: "check-meteo"
queue: "meteo"
repeatEvery: 10000
# Every 10s: cron creates a job → Worker pulls it → GET request to API →
# response saved as job result → agent can check with bunqueue_get_job_result

All 73 Tools: Complete Reference

**Job lifecycle**

| Tool | Description |
| --- | --- |
| `bunqueue_add_job` | Add a job to a queue |
| `bunqueue_add_jobs_bulk` | Add multiple jobs in one call |
| `bunqueue_get_job` | Get job details by ID |
| `bunqueue_get_job_state` | Get current state (waiting/delayed/active/completed/failed) |
| `bunqueue_get_job_result` | Get the result of a completed job |
| `bunqueue_cancel_job` | Cancel a waiting or delayed job |
| `bunqueue_promote_job` | Promote delayed job to waiting |
| `bunqueue_update_progress` | Update job progress (0-100) |
| `bunqueue_get_children_values` | Get child job results (FlowProducer) |
| `bunqueue_get_job_by_custom_id` | Look up job by custom ID |
| `bunqueue_wait_for_job` | Wait for a job to complete |

**Job updates**

| Tool | Description |
| --- | --- |
| `bunqueue_update_job_data` | Update job payload data |
| `bunqueue_change_job_priority` | Change job priority |
| `bunqueue_move_to_delayed` | Move job to delayed state |
| `bunqueue_discard_job` | Permanently discard a job |
| `bunqueue_get_progress` | Get progress value and message |
| `bunqueue_change_delay` | Change delay of a delayed job |

**Job processing (pull/ack)**

| Tool | Description |
| --- | --- |
| `bunqueue_pull_job` | Pull a job from a queue for processing |
| `bunqueue_pull_job_batch` | Pull multiple jobs at once |
| `bunqueue_ack_job` | Acknowledge job completion with result |
| `bunqueue_ack_job_batch` | Batch acknowledge multiple jobs |
| `bunqueue_fail_job` | Mark a job as failed |
| `bunqueue_job_heartbeat` | Send heartbeat for active job (supports custom duration for lock extension) |
| `bunqueue_job_heartbeat_batch` | Batch heartbeat for multiple jobs |
| `bunqueue_extend_lock` | Extend lock on an active job with a specific duration |

**Queue control**

| Tool | Description |
| --- | --- |
| `bunqueue_list_queues` | List all queues |
| `bunqueue_count_jobs` | Count total jobs in a queue |
| `bunqueue_get_jobs` | List jobs with state filter and pagination |
| `bunqueue_get_job_counts` | Job counts per state |
| `bunqueue_pause_queue` | Pause job processing |
| `bunqueue_resume_queue` | Resume processing |
| `bunqueue_drain_queue` | Remove all waiting jobs |
| `bunqueue_obliterate_queue` | Remove ALL data from a queue |
| `bunqueue_clean_queue` | Remove old jobs by grace period |
| `bunqueue_is_paused` | Check if queue is paused |
| `bunqueue_get_counts_per_priority` | Job counts by priority level |

**DLQ management**

| Tool | Description |
| --- | --- |
| `bunqueue_get_dlq` | Get failed jobs from DLQ |
| `bunqueue_retry_dlq` | Retry jobs from DLQ |
| `bunqueue_purge_dlq` | Clear all DLQ entries |
| `bunqueue_retry_completed` | Reprocess completed jobs |

**Cron scheduling**

| Tool | Description |
| --- | --- |
| `bunqueue_add_cron` | Add recurring job (cron pattern or interval) |
| `bunqueue_list_crons` | List all scheduled crons |
| `bunqueue_get_cron` | Get cron details by name |
| `bunqueue_delete_cron` | Delete a cron job |

**Rate limiting & concurrency**

| Tool | Description |
| --- | --- |
| `bunqueue_set_rate_limit` | Set max jobs per second |
| `bunqueue_clear_rate_limit` | Remove rate limit |
| `bunqueue_set_concurrency` | Set max concurrent jobs |
| `bunqueue_clear_concurrency` | Remove concurrency limit |

**Webhooks**

| Tool | Description |
| --- | --- |
| `bunqueue_add_webhook` | Add webhook for job events |
| `bunqueue_remove_webhook` | Remove a webhook |
| `bunqueue_list_webhooks` | List all webhooks |
| `bunqueue_set_webhook_enabled` | Enable/disable a webhook |

**Workers**

| Tool | Description |
| --- | --- |
| `bunqueue_register_worker` | Register a new worker |
| `bunqueue_unregister_worker` | Remove a worker |
| `bunqueue_worker_heartbeat` | Send worker heartbeat |

**Monitoring**

| Tool | Description |
| --- | --- |
| `bunqueue_get_stats` | Global server statistics |
| `bunqueue_get_queue_stats` | Per-queue statistics |
| `bunqueue_list_workers` | List active workers |
| `bunqueue_get_job_logs` | Get job log entries |
| `bunqueue_add_job_log` | Add log entry to a job |
| `bunqueue_get_storage_status` | Disk health status |
| `bunqueue_get_per_queue_stats` | Detailed per-queue breakdown |
| `bunqueue_get_memory_stats` | Memory usage stats |
| `bunqueue_get_prometheus_metrics` | Prometheus exposition format |
| `bunqueue_clear_job_logs` | Clear logs for a job |
| `bunqueue_compact_memory` | Force memory compaction |

**Workflows (flows)**

| Tool | Description |
| --- | --- |
| `bunqueue_add_flow` | Create a flow tree (BullMQ v5 compatible). Children processed before parent |
| `bunqueue_add_flow_chain` | Create a sequential pipeline: A → B → C |
| `bunqueue_add_flow_bulk_then` | Fan-out/fan-in: parallel jobs → final merge job |
| `bunqueue_get_flow` | Retrieve a flow tree with full dependency graph |

**HTTP handlers**

| Tool | Description |
| --- | --- |
| `bunqueue_register_handler` | Register HTTP handler on a queue and start auto-processing |
| `bunqueue_unregister_handler` | Remove handler and stop its worker |
| `bunqueue_list_handlers` | List all active HTTP handlers |

Pre-built diagnostic workflows for AI agents.

| Prompt | Description |
| --- | --- |
| `bunqueue_health_report` | Full health report with severity levels (OK/WARNING/CRITICAL). Covers system stats, storage, queues, workers, crons, memory |
| `bunqueue_debug_queue` | Deep diagnostic of a specific queue: state breakdown, pause status, DLQ, active jobs, priority distribution |
| `bunqueue_incident_response` | Step-by-step triage playbook for “jobs not processing” scenarios |

Read-only context available at any time.

| URI | Description |
| --- | --- |
| `bunqueue://stats` | Global server statistics |
| `bunqueue://queues` | All queues with job counts |
| `bunqueue://crons` | Scheduled cron jobs |
| `bunqueue://workers` | Active workers |
| `bunqueue://webhooks` | Registered webhooks |

A complete Claude Code session connected to a remote bunqueue server via TCP:

# Connect MCP to your running server (env flags go before the "--")
$ claude mcp add bunqueue \
    --env BUNQUEUE_MODE=tcp \
    --env BUNQUEUE_HOST=your-server.com \
    --env BUNQUEUE_PORT=6789 \
    -- bunx bunqueue-mcp
$ claude
> Add 3 notification jobs: push, email, sms
bunqueue_add_jobs_bulk
Added 3 jobs to "notifications" queue
> Schedule session cleanup every hour
bunqueue_add_cron
Created "cleanup-sessions" schedule: "0 * * * *"
Next run: 2026-02-24T20:00:00.000Z
> Rate limit notifications to 50/sec and show queue status
bunqueue_set_rate_limit notifications: 50 jobs/sec
bunqueue_get_job_counts
waiting: 3 · active: 0 · completed: 87 · failed: 0
> Create a pipeline: validate payment → send receipt → update inventory
bunqueue_add_flow_chain
Created: validate-payment → send-receipt → update-inventory
> Register an HTTP handler to auto-process the "webhooks" queue
bunqueue_register_handler
queue: "webhooks"
method: POST
url: "https://api.example.com/webhook"
Worker started. Jobs auto-processed via HTTP.

Jobs sit in queues until something processes them. Two approaches:

Option A: HTTP Handlers (no code, agent-managed)


The agent registers an HTTP endpoint on a queue via bunqueue_register_handler. The MCP server spawns an embedded Worker that automatically pulls jobs and makes HTTP requests to that endpoint. The response is saved as the job result.

No external process, no deployment, no code. The agent controls the entire lifecycle: register, schedule with cron, monitor results, unregister.

1. Agent registers handler
2. MCP spawns Worker
3. Worker pulls jobs
4. Calls HTTP endpoint
5. Response saved as job result

When to use: API polling, webhook forwarding, external service calls, health checks, monitoring endpoints. Any use case where the processing logic is just an HTTP call.

# Agent conversation example
> Register a POST handler on "webhooks" that calls https://api.example.com/ingest
Worker started. Every job in "webhooks" will be sent via POST.
> Add a cron to push a job every 30 seconds
Cron "webhook-trigger" created. Jobs auto-processed via HTTP.

Option B: External Worker (custom code, developer-managed)

For processing that requires custom code (database writes, file processing, business logic), deploy a separate Worker process. The agent orchestrates (add jobs, schedule crons, monitor progress); the Worker executes.

import { Worker } from 'bunqueue/client';

new Worker('emails', async (job) => {
  const { to, subject, body } = job.data;
  await job.updateProgress(25, 'Validating recipient...');
  await validateEmail(to);
  await job.updateProgress(50, 'Sending email...');
  await sendEmail(to, subject, body);
  await job.log(`Email sent to ${to}`);
  return { sent: true, timestamp: Date.now() };
}, { embedded: true });
bun run worker.ts

When to use: Email sending, image processing, database operations, ML inference, complex business logic — anything that needs code execution beyond a simple HTTP call.

| | HTTP Handlers | External Worker |
| --- | --- | --- |
| Setup | Zero — agent registers via MCP | Deploy a Worker process |
| Processing | HTTP request to an endpoint | Custom TypeScript function |
| Managed by | AI agent (register/unregister) | Developer (deploy/maintain) |
| Use case | API calls, webhooks, monitoring | Business logic, DB writes, files |
| Code required | None | Yes (Worker definition) |

All 73 tools return structured error responses with isError: true:

{
  "error": "Human-readable error message"
}

Not-found responses for get_job, get_job_by_custom_id, get_progress, and get_cron also return isError: true. No stack traces or internal details are ever exposed.


| Variable | Default | Description |
| --- | --- | --- |
| `BUNQUEUE_MODE` | `embedded` | `embedded` or `tcp` |
| `BUNQUEUE_HOST` | `localhost` | TCP server host |
| `BUNQUEUE_PORT` | `6789` | TCP server port |
| `BUNQUEUE_TOKEN` | (none) | Auth token for TCP |
| `DATA_PATH` | `./data/bunq.db` | SQLite path (embedded) |

# Verify installation
bunx bunqueue-mcp --help
# Check Bun is in PATH
which bun
# Test manually
echo '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"test","version":"1.0"}}}' | bunx bunqueue-mcp

Jobs stuck in “waiting”: no worker is processing the queue. Either register an HTTP handler or deploy a Worker:

# Option A: HTTP handler (via agent)
# Ask your agent: "Register a handler on <queue> that calls <url>"
# Option B: External worker
bun run worker.ts
If the MCP server doesn’t show up in your AI client:

  1. Restart Claude Desktop or Claude Code
  2. Check config file syntax (valid JSON)
  3. Verify bunqueue-mcp path is correct
  4. Run bunx bunqueue-mcp manually to check for errors