bunqueue ships with a built-in MCP (Model Context Protocol) server that gives AI agents full programmatic control over job queues, scheduling, monitoring, and execution.
MCP is the open standard that allows AI agents like Claude, Cursor, and Windsurf to interact with external tools. bunqueue implements the MCP server specification using the official @modelcontextprotocol/sdk, exposing every queue feature as a tool that agents can call directly. No REST wrappers, no glue code, no middleware. One command to connect, and your agent has a complete job queue system at its disposal.
73 Tools
Full job lifecycle, queue control, cron scheduling, DLQ management, rate limiting, webhooks, workflows, HTTP handlers, monitoring
3 Prompts
Pre-built diagnostic workflows: health report, queue debug, incident response
5 Resources
Read-only live context: server stats, queue states, cron schedules, workers, webhooks
2 Modes
Embedded (local SQLite, zero config) or TCP (connect to a remote bunqueue server)
Install bunqueue:

```shell
bun add bunqueue
```

Connect your AI client:

```shell
claude mcp add bunqueue -- bunx bunqueue-mcp
```

Start using it:
Ask your agent: “Add a job to the emails queue” — it works immediately. The MCP server starts as a child process, communicates via stdio, and the agent can call any of the 73 tools.
The MCP server runs as a subprocess spawned by your AI client. It communicates via stdio using the JSON-RPC protocol defined by the MCP specification. When the agent calls a tool (e.g. bunqueue_add_job), the MCP server executes the operation against the queue backend and returns the result.
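For reference, a tool invocation on the wire is a standard MCP `tools/call` request (the JSON-RPC framing follows the MCP specification; the exact argument schema for `bunqueue_add_job` shown here is illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "bunqueue_add_job",
    "arguments": { "queue": "emails", "data": { "to": "user@example.com" } }
  }
}
```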
The server supports two backend modes: embedded (direct SQLite access in the same process) and TCP (connects to a remote bunqueue server). In embedded mode, everything runs locally with zero configuration. In TCP mode, the MCP server acts as a client that forwards operations to your production bunqueue instance.
Embedded mode is ideal for local development, single-machine deployments, and CLI tools. The MCP server manages its own SQLite database with WAL mode for concurrent access.
TCP mode connects to a running bunqueue server, allowing your agent to manage production queues shared across multiple workers and services.
For Claude Code:

```shell
claude mcp add bunqueue -- bunx bunqueue-mcp
```

For clients configured via JSON, add the server to the client's MCP config (the same block works for Claude Desktop, Cursor, Windsurf, and other MCP clients):

```json
// ~/Library/Application Support/Claude/claude_desktop_config.json
{
  "mcpServers": {
    "bunqueue": {
      "command": "bunx",
      "args": ["bunqueue-mcp"]
    }
  }
}
```

Connect to a running bunqueue server instead of local SQLite:
```json
{
  "mcpServers": {
    "bunqueue": {
      "command": "bunx",
      "args": ["bunqueue-mcp"],
      "env": {
        "BUNQUEUE_MODE": "tcp",
        "BUNQUEUE_HOST": "your-server.com",
        "BUNQUEUE_PORT": "6789",
        "BUNQUEUE_TOKEN": "your-auth-token"
      }
    }
  }
}
```

| | Embedded (default) | TCP (remote) |
|---|---|---|
| How it works | Direct SQLite access | Connects to bunqueue server |
| Setup | Zero config | Set `BUNQUEUE_MODE=tcp` |
| Best for | Local dev, single-machine | Production, shared queues |
| Data | `./data/bunq.db` | Remote server handles storage |
HTTP handlers solve a fundamental problem: an AI agent can schedule jobs and manage queues, but it cannot run a persistent worker process to actually execute those jobs. HTTP handlers bridge this gap.
When the agent calls bunqueue_register_handler, the MCP server spawns an embedded Worker inside its own process. This Worker continuously pulls jobs from the specified queue and, for each job, makes an HTTP request to the registered endpoint. The HTTP response is saved as the job result. If the HTTP call fails (non-2xx status or timeout), the job is marked as failed and follows the standard retry/DLQ flow.
This means the agent can set up a fully autonomous pipeline — schedule recurring jobs with cron, auto-process them via HTTP, and check results later — without writing any code or deploying any external service.
The embedded Worker inherits all standard Worker features: heartbeat, stall detection, retry with backoff, and dead letter queue. It processes jobs sequentially (concurrency 1) to avoid overwhelming the target endpoint.
bunqueue_register_handler
Register an HTTP handler on a queue. Specify URL, method, optional headers, body template, and timeout. Spawns a Worker that starts processing immediately.
bunqueue_unregister_handler
Remove a handler from a queue and gracefully stop its Worker. Jobs already in the queue remain untouched.
bunqueue_list_handlers
List all active HTTP handlers with their configuration and Worker status (running or stopped).
| Parameter | Required | Description |
|---|---|---|
| `queue` | Yes | Queue name to attach the handler to |
| `url` | Yes | HTTP endpoint URL to call for each job |
| `method` | Yes | GET, POST, PUT, or DELETE |
| `headers` | No | Custom HTTP headers (e.g. Authorization, X-Api-Key) |
| `body` | No | Fixed request body template for POST/PUT. If omitted, the job’s data payload is sent |
| `timeoutMs` | No | Request timeout in milliseconds (default: 30000, range: 1000-120000) |
For POST and PUT requests, the Worker sends the job’s data as JSON body by default. If a body template is provided, it overrides the job data. GET and DELETE requests do not send a body.
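The per-job HTTP call can be sketched as follows. This is an illustrative reconstruction of the behavior described above, not bunqueue's actual internals: POST/PUT send the job data as JSON unless a fixed `body` template overrides it, and a non-2xx response or timeout becomes a job failure.

```typescript
// Sketch of the embedded Worker's per-job HTTP call (illustrative only).
type HandlerConfig = {
  url: string;
  method: 'GET' | 'POST' | 'PUT' | 'DELETE';
  headers?: Record<string, string>;
  body?: string;       // optional fixed template; falls back to job data
  timeoutMs?: number;  // default 30000, per the parameter table
};

async function callHandler(cfg: HandlerConfig, jobData: unknown): Promise<unknown> {
  // Only POST and PUT carry a request body.
  const hasBody = cfg.method === 'POST' || cfg.method === 'PUT';
  const res = await fetch(cfg.url, {
    method: cfg.method,
    headers: { 'content-type': 'application/json', ...cfg.headers },
    body: hasBody ? cfg.body ?? JSON.stringify(jobData) : undefined,
    // A timeout rejects the fetch, which counts as a failed job.
    signal: AbortSignal.timeout(cfg.timeoutMs ?? 30_000),
  });
  if (!res.ok) throw new Error(`handler returned ${res.status}`); // → retry/DLQ flow
  return res.json(); // saved as the job result
}
```

In the real Worker, a thrown error here would mark the job failed rather than propagating to a caller.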
```shell
$ claude

> Register a handler on "meteo" that calls the OpenWeather API

✓ bunqueue_register_handler
  queue: "meteo"
  method: GET
  url: "https://api.openweathermap.org/data/2.5/weather?q=Milan&appid=xxx"
  Worker started. Jobs in "meteo" will be processed via GET.

> Create a cron that pushes a job every 10 seconds

✓ bunqueue_add_cron
  name: "check-meteo"
  queue: "meteo"
  repeatEvery: 10000

# Every 10s: cron creates a job → Worker pulls it → GET request to API →
# response saved as job result → agent can check with bunqueue_get_job_result
```

| Tool | Description |
|---|---|
| `bunqueue_add_job` | Add a job to a queue |
| `bunqueue_add_jobs_bulk` | Add multiple jobs in one call |
| `bunqueue_get_job` | Get job details by ID |
| `bunqueue_get_job_state` | Get current state (waiting/delayed/active/completed/failed) |
| `bunqueue_get_job_result` | Get the result of a completed job |
| `bunqueue_cancel_job` | Cancel a waiting or delayed job |
| `bunqueue_promote_job` | Promote delayed job to waiting |
| `bunqueue_update_progress` | Update job progress (0-100) |
| `bunqueue_get_children_values` | Get child job results (FlowProducer) |
| `bunqueue_get_job_by_custom_id` | Look up job by custom ID |
| `bunqueue_wait_for_job` | Wait for a job to complete |
| Tool | Description |
|---|---|
| `bunqueue_update_job_data` | Update job payload data |
| `bunqueue_change_job_priority` | Change job priority |
| `bunqueue_move_to_delayed` | Move job to delayed state |
| `bunqueue_discard_job` | Permanently discard a job |
| `bunqueue_get_progress` | Get progress value and message |
| `bunqueue_change_delay` | Change delay of a delayed job |
| Tool | Description |
|---|---|
| `bunqueue_pull_job` | Pull a job from a queue for processing |
| `bunqueue_pull_job_batch` | Pull multiple jobs at once |
| `bunqueue_ack_job` | Acknowledge job completion with result |
| `bunqueue_ack_job_batch` | Batch acknowledge multiple jobs |
| `bunqueue_fail_job` | Mark a job as failed |
| `bunqueue_job_heartbeat` | Send heartbeat for active job (supports custom duration for lock extension) |
| `bunqueue_job_heartbeat_batch` | Batch heartbeat for multiple jobs |
| `bunqueue_extend_lock` | Extend lock on an active job with a specific duration |
| Tool | Description |
|---|---|
| `bunqueue_list_queues` | List all queues |
| `bunqueue_count_jobs` | Count total jobs in a queue |
| `bunqueue_get_jobs` | List jobs with state filter and pagination |
| `bunqueue_get_job_counts` | Job counts per state |
| `bunqueue_pause_queue` | Pause job processing |
| `bunqueue_resume_queue` | Resume processing |
| `bunqueue_drain_queue` | Remove all waiting jobs |
| `bunqueue_obliterate_queue` | Remove ALL data from a queue |
| `bunqueue_clean_queue` | Remove old jobs by grace period |
| `bunqueue_is_paused` | Check if queue is paused |
| `bunqueue_get_counts_per_priority` | Job counts by priority level |
| Tool | Description |
|---|---|
| `bunqueue_get_dlq` | Get failed jobs from DLQ |
| `bunqueue_retry_dlq` | Retry jobs from DLQ |
| `bunqueue_purge_dlq` | Clear all DLQ entries |
| `bunqueue_retry_completed` | Reprocess completed jobs |
| Tool | Description |
|---|---|
| `bunqueue_add_cron` | Add recurring job (cron pattern or interval) |
| `bunqueue_list_crons` | List all scheduled crons |
| `bunqueue_get_cron` | Get cron details by name |
| `bunqueue_delete_cron` | Delete a cron job |
| Tool | Description |
|---|---|
| `bunqueue_set_rate_limit` | Set max jobs per second |
| `bunqueue_clear_rate_limit` | Remove rate limit |
| `bunqueue_set_concurrency` | Set max concurrent jobs |
| `bunqueue_clear_concurrency` | Remove concurrency limit |
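The "max jobs per second" semantics of `bunqueue_set_rate_limit` follow the standard token-bucket technique, sketched below. This is an illustration of the concept, not bunqueue's internal implementation.

```typescript
// Token-bucket limiter: refills `perSecond` tokens per second, capped at
// a burst of `perSecond`; each dispatched job consumes one token.
class RateLimiter {
  private tokens: number;
  private last: number;

  constructor(private perSecond: number, now = Date.now()) {
    this.tokens = perSecond;
    this.last = now;
  }

  tryAcquire(now = Date.now()): boolean {
    // Refill proportionally to elapsed time, capped at the burst size.
    const elapsed = (now - this.last) / 1000;
    this.tokens = Math.min(this.perSecond, this.tokens + elapsed * this.perSecond);
    this.last = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true; // job may be dispatched
    }
    return false; // over the limit; job stays queued
  }
}
```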
| Tool | Description |
|---|---|
| `bunqueue_add_webhook` | Add webhook for job events |
| `bunqueue_remove_webhook` | Remove a webhook |
| `bunqueue_list_webhooks` | List all webhooks |
| `bunqueue_set_webhook_enabled` | Enable/disable a webhook |
| Tool | Description |
|---|---|
| `bunqueue_register_worker` | Register a new worker |
| `bunqueue_unregister_worker` | Remove a worker |
| `bunqueue_worker_heartbeat` | Send worker heartbeat |
| Tool | Description |
|---|---|
| `bunqueue_get_stats` | Global server statistics |
| `bunqueue_get_queue_stats` | Per-queue statistics |
| `bunqueue_list_workers` | List active workers |
| `bunqueue_get_job_logs` | Get job log entries |
| `bunqueue_add_job_log` | Add log entry to a job |
| `bunqueue_get_storage_status` | Disk health status |
| `bunqueue_get_per_queue_stats` | Detailed per-queue breakdown |
| `bunqueue_get_memory_stats` | Memory usage stats |
| `bunqueue_get_prometheus_metrics` | Metrics in Prometheus exposition format |
| `bunqueue_clear_job_logs` | Clear logs for a job |
| `bunqueue_compact_memory` | Force memory compaction |
| Tool | Description |
|---|---|
| `bunqueue_add_flow` | Create a flow tree (BullMQ v5 compatible). Children processed before parent |
| `bunqueue_add_flow_chain` | Create a sequential pipeline: A → B → C |
| `bunqueue_add_flow_bulk_then` | Fan-out/fan-in: parallel jobs → final merge job |
| `bunqueue_get_flow` | Retrieve a flow tree with full dependency graph |
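The fan-out/fan-in shape that `bunqueue_add_flow_bulk_then` creates can be sketched in plain TypeScript. This illustrates only the execution order (all children in parallel, then one merge step fed their results), not bunqueue's job-dependency machinery.

```typescript
// Fan-out/fan-in: run child tasks in parallel, then pass every child's
// result to a final merge task — the shape bunqueue_add_flow_bulk_then builds.
async function fanOutFanIn<C, R>(
  children: Array<() => Promise<C>>,
  then: (childResults: C[]) => Promise<R>,
): Promise<R> {
  const results = await Promise.all(children.map((run) => run()));
  return then(results);
}
```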
| Tool | Description |
|---|---|
| `bunqueue_register_handler` | Register HTTP handler on a queue and start auto-processing |
| `bunqueue_unregister_handler` | Remove handler and stop its worker |
| `bunqueue_list_handlers` | List all active HTTP handlers |
Pre-built diagnostic workflows for AI agents.
| Prompt | Description |
|---|---|
| `bunqueue_health_report` | Full health report with severity levels (OK/WARNING/CRITICAL). Covers system stats, storage, queues, workers, crons, memory |
| `bunqueue_debug_queue` | Deep diagnostic of a specific queue: state breakdown, pause status, DLQ, active jobs, priority distribution |
| `bunqueue_incident_response` | Step-by-step triage playbook for “jobs not processing” scenarios |
Read-only context available at any time.
| URI | Description |
|---|---|
| `bunqueue://stats` | Global server statistics |
| `bunqueue://queues` | All queues with job counts |
| `bunqueue://crons` | Scheduled cron jobs |
| `bunqueue://workers` | Active workers |
| `bunqueue://webhooks` | Registered webhooks |
A complete Claude Code session connected to a remote bunqueue server via TCP:
```shell
# Connect MCP to your running server
$ claude mcp add bunqueue -- bunx bunqueue-mcp \
    --env BUNQUEUE_MODE=tcp \
    --env BUNQUEUE_HOST=your-server.com \
    --env BUNQUEUE_PORT=6789

$ claude

> Add 3 notification jobs: push, email, sms

✓ bunqueue_add_jobs_bulk
  Added 3 jobs to "notifications" queue

> Schedule session cleanup every hour

✓ bunqueue_add_cron
  Created "cleanup-sessions" → schedule: "0 * * * *"
  Next run: 2026-02-24T20:00:00.000Z

> Rate limit notifications to 50/sec and show queue status

✓ bunqueue_set_rate_limit → notifications: 50 jobs/sec
✓ bunqueue_get_job_counts
  waiting: 3 · active: 0 · completed: 87 · failed: 0

> Create a pipeline: validate payment → send receipt → update inventory

✓ bunqueue_add_flow_chain
  Created: validate-payment → send-receipt → update-inventory

> Register an HTTP handler to auto-process the "webhooks" queue

✓ bunqueue_register_handler
  queue: "webhooks"
  method: POST
  url: "https://api.example.com/webhook"
  Worker started. Jobs auto-processed via HTTP.
```

Jobs sit in queues until something processes them. Two approaches:
The agent registers an HTTP endpoint on a queue via bunqueue_register_handler. The MCP server spawns an embedded Worker that automatically pulls jobs and makes HTTP requests to that endpoint. The response is saved as the job result.
No external process, no deployment, no code. The agent controls the entire lifecycle: register, schedule with cron, monitor results, unregister.
When to use: API polling, webhook forwarding, external service calls, health checks, monitoring endpoints. Any use case where the processing logic is just an HTTP call.
```shell
# Agent conversation example
> Register a POST handler on "webhooks" that calls https://api.example.com/ingest

✓ Worker started. Every job in "webhooks" will be sent via POST.

> Add a cron to push a job every 30 seconds

✓ Cron "webhook-trigger" created. Jobs auto-processed via HTTP.
```

For processing that requires custom code (database writes, file processing, business logic), deploy a separate Worker process. The agent orchestrates (add jobs, schedule crons, monitor progress), the Worker executes.
```typescript
import { Worker } from 'bunqueue/client';

new Worker('emails', async (job) => {
  const { to, subject, body } = job.data;

  await job.updateProgress(25, 'Validating recipient...');
  await validateEmail(to);

  await job.updateProgress(50, 'Sending email...');
  await sendEmail(to, subject, body);

  await job.log(`Email sent to ${to}`);
  return { sent: true, timestamp: Date.now() };
}, { embedded: true });
```

```shell
bun run worker.ts
```

When to use: Email sending, image processing, database operations, ML inference, complex business logic — anything that needs code execution beyond a simple HTTP call.
| | HTTP Handlers | External Worker |
|---|---|---|
| Setup | Zero — agent registers via MCP | Deploy a Worker process |
| Processing | HTTP request to an endpoint | Custom TypeScript function |
| Managed by | AI agent (register/unregister) | Developer (deploy/maintain) |
| Use case | API calls, webhooks, monitoring | Business logic, DB writes, files |
| Code required | None | Yes (Worker definition) |
All 73 tools return structured error responses with `isError: true`:

```json
{ "error": "Human-readable error message" }
```

Not-found responses for `get_job`, `get_job_by_custom_id`, `get_progress`, and `get_cron` also return `isError: true`. No stack traces or internal details are ever exposed.
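A caller consuming tool results can branch on that flag. The sketch below assumes only the response shape shown above (`isError` plus a human-readable `error` string); it is an illustration, not part of the bunqueue API.

```typescript
// Turn an { isError, error } tool result into a thrown exception,
// so callers handle success and failure uniformly.
type ToolResult = { isError?: boolean; error?: string };

function unwrap<T extends ToolResult>(res: T): T {
  if (res.isError) throw new Error(res.error ?? 'tool call failed');
  return res;
}
```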
| Variable | Default | Description |
|---|---|---|
| `BUNQUEUE_MODE` | `embedded` | `embedded` or `tcp` |
| `BUNQUEUE_HOST` | `localhost` | TCP server host |
| `BUNQUEUE_PORT` | `6789` | TCP server port |
| `BUNQUEUE_TOKEN` | — | Auth token for TCP |
| `DATA_PATH` | `./data/bunq.db` | SQLite path (embedded) |
```shell
# Verify installation
bunx bunqueue-mcp --help

# Check Bun is in PATH
which bun

# Test manually
echo '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"test","version":"1.0"}}}' | bunx bunqueue-mcp
```

No worker is processing the queue. Either register an HTTP handler or deploy a Worker:

```shell
# Option A: HTTP handler (via agent)
# Ask your agent: "Register a handler on <queue> that calls <url>"

# Option B: External worker
bun run worker.ts
```

If problems persist:

- Check that the `bunqueue-mcp` path is correct
- Run `bunx bunqueue-mcp` manually to check for errors