Introduction

mininq is a SQLite-backed job runner that does for background jobs what SQLite does for databases. No Redis, no RabbitMQ, no external dependencies. Just a single binary and a single file.

Why mininq?

Most job queues require running a separate broker (Redis, RabbitMQ, Postgres). mininq takes the SQLite approach: embed the queue directly alongside your application. It’s ideal for:

  • Small-to-medium services that don’t need a dedicated message broker
  • Self-contained deployments where fewer moving parts matter
  • Projects that already use SQLite and want to keep things simple

Architecture Overview

┌──────────────┐     POST /jobs      ┌────────────────┐
│  Your App    │ ──────────────────▶ │   mininq HTTP  │
│              │                     │   API (Axum)   │
└──────────────┘                     └───────┬────────┘
                                             │
                                             ▼
                                     ┌────────────────┐
                                     │  SQLite (WAL)  │
                                     │   mininq.db    │
                                     └───────┬────────┘
                                             │
                                             ▼
                                     ┌────────────────┐
                                     │ Worker Engine  │──▶ POST callback_url
                                     │ (poll loop)    │    (webhook delivery)
                                     └────────────────┘

Features

  • Webhook-based execution — jobs POST to a callback URL; your service handles the work
  • Priority queues — higher-priority jobs run first within a queue
  • Per-queue rate limiting — token bucket algorithm, configurable per queue
  • Retry with backoff — exponential, linear, or fixed strategies with jitter
  • Delayed jobs — schedule jobs to become visible after a delay
  • Cron schedules — recurring jobs via cron expressions with seconds precision
  • Idempotency keys — deduplicate job creation
  • Stale job recovery — reaper reclaims jobs from crashed workers
  • Automatic cleanup — optionally delete completed/dead jobs after N days
  • Embedded dashboard — htmx-powered web UI for monitoring
  • Graceful shutdown — waits for in-flight jobs on SIGTERM/SIGINT
  • Single binary, single file — no runtime dependencies beyond the OS

Getting Started

Prerequisites

A Rust toolchain with cargo (the steps below use cargo install and cargo build); rustup is the usual way to install it.
Install

# Clone and build
git clone https://github.com/tadeasf/mininq.git
cd mininq
cargo install --path .

Or build without installing:

cargo build --release
# Binary at ./target/release/mininq

Run

mininq

mininq creates mininq.db in the current directory and starts listening on 0.0.0.0:6390.

2024-01-15T10:00:00 INFO mininq: Starting mininq version=0.1.0 host=0.0.0.0 port=6390 db=mininq.db
2024-01-15T10:00:00 INFO mininq::db: Database pools connected writer_max=1 reader_max=4 path=mininq.db
2024-01-15T10:00:00 INFO mininq: HTTP server listening addr=0.0.0.0:6390

Enqueue Your First Job

Use any HTTP endpoint as a callback. Here we use httpbin.org as a test target:

curl -X POST http://localhost:6390/jobs \
  -H "Content-Type: application/json" \
  -d '{
    "callback_url": "https://httpbin.org/post",
    "payload": {"message": "hello from mininq"}
  }'

Response:

{
  "id": "019...",
  "queue_name": "default",
  "status": "pending",
  "priority": 0,
  "payload": "{\"message\":\"hello from mininq\"}",
  "callback_url": "https://httpbin.org/post",
  "attempt": 0,
  "max_retries": 3,
  "timeout_ms": 30000,
  "created_at": "2024-01-15T10:00:01.234",
  "updated_at": "2024-01-15T10:00:01.234"
}

Check Job Status

curl http://localhost:6390/jobs/019...

The job should quickly move from pending → running → completed.

Open the Dashboard

Navigate to http://localhost:6390/dashboard to see stats, queues, jobs, and schedules in a live-updating web UI.

Configuration

mininq is configured via a TOML config file, CLI flags, and environment variables.

Precedence: CLI flags > environment variables > config file > defaults
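
The precedence chain amounts to a layered merge. A sketch (illustrative only; mininq's actual loader is Rust, and resolve_config is a hypothetical name):

```python
# Layered config resolution sketch: defaults < config file < env < CLI flags.
def resolve_config(defaults, file_cfg=None, env_cfg=None, cli_cfg=None):
    """Merge layers; a key set in a later layer overrides earlier layers."""
    merged = dict(defaults)
    for layer in (file_cfg, env_cfg, cli_cfg):
        for key, value in (layer or {}).items():
            if value is not None:       # unset flags/vars do not override
                merged[key] = value
    return merged

cfg = resolve_config(
    defaults={"host": "0.0.0.0", "port": 6390, "db_path": "mininq.db"},
    file_cfg={"port": 7000},                            # mininq.toml
    env_cfg={"db_path": "/var/lib/mininq/mininq.db"},   # MININQ_DB_PATH
    cli_cfg={"port": 8080},                             # --port 8080
)
```

Here the CLI flag wins for port, while db_path comes from the environment because no flag overrode it.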

Config File

By default, mininq looks for mininq.toml in the current directory. Specify a different path with --config:

mininq --config /etc/mininq/config.toml

Full Example

[server]
host = "0.0.0.0"
port = 6390
db_path = "mininq.db"

[worker]
concurrency = 10
poll_interval_ms = 500
queues = []                  # empty = all queues
reaper_interval_secs = 30
scheduler_interval_secs = 15
retention_days = 30          # omit to keep jobs forever
cleanup_interval_secs = 3600

[defaults]
max_retries = 3
retry_backoff = "exponential"  # "exponential" | "linear" | "fixed"
base_delay_ms = 1000
max_delay_ms = 300000          # 5 minutes
timeout_ms = 30000             # 30 seconds

[logging]
level = "info"
format = "json"                # "json" | "pretty"

Reference

[server]

Field   | Type   | Default     | Description
host    | String | "0.0.0.0"   | Bind address
port    | u16    | 6390        | HTTP port
db_path | String | "mininq.db" | Path to the SQLite database file

[worker]

Field                   | Type     | Default | Description
concurrency             | usize    | 10      | Maximum concurrent job executions (semaphore permits)
poll_interval_ms        | u64      | 500     | Milliseconds between poll cycles
queues                  | [String] | []      | Queue names to process; empty means all queues
reaper_interval_secs    | u64      | 30      | Seconds between reaper sweeps for stale jobs
scheduler_interval_secs | u64      | 15      | Seconds between scheduler ticks for cron jobs
retention_days          | u32?     | none    | Days to keep completed/dead jobs; omit to keep forever
cleanup_interval_secs   | u64      | 3600    | Seconds between cleanup sweeps (only active if retention is set)

[defaults]

These defaults apply to jobs that don’t specify their own values.

Field         | Type   | Default       | Description
max_retries   | i32    | 3             | Maximum retry attempts
retry_backoff | String | "exponential" | Backoff strategy: exponential, linear, fixed
base_delay_ms | i32    | 1000          | Base delay for retry calculation (ms)
max_delay_ms  | i32    | 300000        | Maximum retry delay cap (ms)
timeout_ms    | i32    | 30000         | Webhook request timeout (ms)

[logging]

Field  | Type   | Default | Description
level  | String | "info"  | Log level: trace, debug, info, warn, error
format | String | "json"  | Output format: json or pretty

CLI Flags

mininq [OPTIONS]

Options:
  -c, --config <PATH>       Path to config file [default: mininq.toml]
      --host <HOST>         Server host (overrides config)
      --port <PORT>         Server port (overrides config)
      --db-path <PATH>      Database path (overrides config)
      --log-level <LEVEL>   Log level (overrides config)
  -h, --help                Print help

Environment Variables

Variable         | Overrides
MININQ_HOST      | server.host
MININQ_PORT      | server.port
MININQ_DB_PATH   | server.db_path
MININQ_LOG_LEVEL | logging.level

API Overview

mininq exposes a JSON REST API over HTTP.

Base URL

http://localhost:6390

All request and response bodies use Content-Type: application/json.

Error Format

All errors return a JSON object with error and status fields:

{
  "error": "Job 019... not found",
  "status": 404
}

Status Codes

Code | Meaning
200  | Success
201  | Resource created
400  | Bad request (validation error)
404  | Resource not found
409  | Conflict (e.g., duplicate, active jobs)
500  | Internal server error

Route Table

Method | Path                  | Description
GET    | /health               | Health check
POST   | /jobs                 | Create a job
GET    | /jobs                 | List jobs (with filters)
GET    | /jobs/{id}            | Get a single job
DELETE | /jobs/{id}            | Cancel a pending job
POST   | /jobs/{id}/retry      | Retry a dead job
POST   | /queues               | Create a queue
GET    | /queues               | List queues with stats
GET    | /queues/{name}        | Get a single queue with stats
PUT    | /queues/{name}        | Update queue settings
DELETE | /queues/{name}        | Delete a queue
POST   | /queues/{name}/pause  | Pause a queue
POST   | /queues/{name}/resume | Resume a paused queue
POST   | /schedules            | Create a schedule
GET    | /schedules            | List all schedules
GET    | /schedules/{id}       | Get a single schedule
PUT    | /schedules/{id}       | Update a schedule
DELETE | /schedules/{id}       | Delete a schedule
GET    | /metrics              | Get system metrics
GET    | /dashboard            | Web dashboard (HTML)

Jobs API

Job Lifecycle

pending ──▶ running ──▶ completed
               │
               ▼
           (retry) ──▶ pending   (if attempts < max_retries)
               │
               ▼
             dead                (if attempts exhausted or 4xx response)

When a job is executed, mininq sends a POST request to the callback_url with:

  • Body: the job’s payload (JSON)
  • Headers:
    • Content-Type: application/json
    • X-Mininq-Job-Id: <job-id>
    • X-Mininq-Attempt: <attempt-number>
    • X-Mininq-Queue: <queue-name>

Response handling:

  • 2xx → job marked completed, response body saved to result
  • 4xx → permanent failure, job goes directly to dead
  • 5xx / timeout / connection error → transient failure, retried up to max_retries
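
These rules reduce to a small decision function. A sketch (illustrative only; classify_response is a hypothetical name, not part of mininq):

```python
# Map a callback response to a job outcome, per the rules documented above.
def classify_response(status_code=None, error=None):
    """Decide a job's fate from its webhook response."""
    if error is not None:                # timeout or connection error
        return "retry"                   # transient: retried up to max_retries
    if 200 <= status_code < 300:
        return "completed"               # response body saved to `result`
    if 400 <= status_code < 500:
        return "dead"                    # permanent failure, skips retries
    return "retry"                       # 5xx: transient failure
```

A practical consequence: return 4xx from your handler only for requests that can never succeed (bad payload), and 5xx for anything worth retrying.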

POST /jobs

Create a new job.

Request body:

Field           | Type   | Required | Default        | Description
callback_url    | String | yes      | -              | HTTP(S) URL to POST when executing
queue_name      | String | no       | "default"      | Target queue (auto-created if missing)
priority        | i32    | no       | 0              | Higher values execute first
payload         | Object | no       | {}             | JSON payload sent to the callback
max_retries     | i32    | no       | config default | Maximum retry attempts
retry_backoff   | String | no       | config default | "exponential", "linear", or "fixed"
base_delay_ms   | i32    | no       | config default | Base delay for retry calculation
max_delay_ms    | i32    | no       | config default | Maximum retry delay cap
timeout_ms      | i32    | no       | config default | Webhook request timeout (ms)
idempotency_key | String | no       | -              | Dedup key — returns existing job if matched
delay_ms        | i64    | no       | -              | Delay before job becomes visible (ms)

Example:

curl -X POST http://localhost:6390/jobs \
  -H "Content-Type: application/json" \
  -d '{
    "callback_url": "https://example.com/webhook",
    "queue_name": "emails",
    "priority": 10,
    "payload": {"to": "user@example.com", "template": "welcome"},
    "max_retries": 5,
    "idempotency_key": "welcome-user-123"
  }'

Response: 201 Created (or 200 OK if idempotency key matched an existing job)

{
  "id": "019...",
  "queue_name": "emails",
  "status": "pending",
  "priority": 10,
  "payload": "{\"to\":\"user@example.com\",\"template\":\"welcome\"}",
  "callback_url": "https://example.com/webhook",
  "visible_at": "2024-01-15T10:00:01.234",
  "attempt": 0,
  "max_retries": 5,
  "timeout_ms": 30000,
  "idempotency_key": "welcome-user-123",
  "created_at": "2024-01-15T10:00:01.234",
  "updated_at": "2024-01-15T10:00:01.234"
}

Delayed Jobs

Set delay_ms to schedule a job for future execution:

curl -X POST http://localhost:6390/jobs \
  -H "Content-Type: application/json" \
  -d '{
    "callback_url": "https://example.com/webhook",
    "delay_ms": 60000
  }'

The job will remain invisible to the worker until the delay has elapsed.


GET /jobs

List jobs with optional filters.

Query parameters:

Param  | Type   | Default | Description
queue  | String | -       | Filter by queue name
status | String | -       | Filter by status (pending, running, completed, dead)
limit  | i64    | 50      | Max results (capped at 1000)
offset | i64    | 0       | Pagination offset

Example:

curl "http://localhost:6390/jobs?queue=emails&status=dead&limit=10"

Response: 200 OK — array of job objects.


GET /jobs/{id}

Get a single job by ID.

curl http://localhost:6390/jobs/019...

Response: 200 OK — job object. 404 if not found.


DELETE /jobs/{id}

Cancel a pending job. Only jobs with status pending can be cancelled.

curl -X DELETE http://localhost:6390/jobs/019...

Response: 200 OK

{
  "status": "cancelled",
  "id": "019..."
}

Returns 404 if not found, 409 if the job is not in pending status.


POST /jobs/{id}/retry

Retry a dead job. Resets it to pending with attempt count 0.

curl -X POST http://localhost:6390/jobs/019.../retry

Response: 200 OK — the reset job object. 404 if not found or not in dead status.

Queues API

Queues group jobs and control concurrency, rate limiting, and retry behavior. Queues are automatically created when a job references a non-existent queue name, but you can also create them explicitly to configure settings.


POST /queues

Create a queue with custom settings.

Request body:

Field           | Type   | Required | Default       | Description
name            | String | yes      | -             | Queue name (unique)
max_concurrency | i32    | no       | 5             | Max concurrent jobs from this queue
rate_limit_rps  | f64    | no       | none          | Requests per second limit
max_retries     | i32    | no       | 3             | Default max retries for jobs
retry_backoff   | String | no       | "exponential" | Default backoff strategy
base_delay_ms   | i32    | no       | 1000          | Default base delay (ms)
max_delay_ms    | i32    | no       | 300000        | Default max delay (ms)

Example:

curl -X POST http://localhost:6390/queues \
  -H "Content-Type: application/json" \
  -d '{
    "name": "webhooks",
    "max_concurrency": 20,
    "rate_limit_rps": 10.0,
    "retry_backoff": "linear"
  }'

Response: 201 Created

{
  "name": "webhooks",
  "max_concurrency": 20,
  "rate_limit_rps": 10.0,
  "max_retries": 3,
  "retry_backoff": "linear",
  "base_delay_ms": 1000,
  "max_delay_ms": 300000,
  "paused": 0,
  "created_at": "2024-01-15T10:00:00.000",
  "updated_at": "2024-01-15T10:00:00.000"
}

Returns 409 if a queue with that name already exists.


GET /queues

List all queues with job count statistics.

curl http://localhost:6390/queues

Response: 200 OK

[
  {
    "name": "default",
    "max_concurrency": 5,
    "paused": false,
    "pending": 12,
    "running": 3,
    "completed": 150,
    "dead": 2
  }
]

GET /queues/{name}

Get a single queue with stats.

curl http://localhost:6390/queues/webhooks

Response: 200 OK — queue stats object. 404 if not found.


PUT /queues/{name}

Update queue settings. Only provided fields are changed.

Request body:

Field           | Type   | Description
max_concurrency | i32    | Max concurrent jobs
rate_limit_rps  | f64    | Requests per second limit
max_retries     | i32    | Default max retries
retry_backoff   | String | Default backoff strategy
base_delay_ms   | i32    | Default base delay (ms)
max_delay_ms    | i32    | Default max delay (ms)

Example:

curl -X PUT http://localhost:6390/queues/webhooks \
  -H "Content-Type: application/json" \
  -d '{"rate_limit_rps": 5.0}'

Response: 200 OK — updated queue object. 404 if not found. 400 if no fields provided.


DELETE /queues/{name}

Delete a queue. Fails if the queue has pending or running jobs.

curl -X DELETE http://localhost:6390/queues/webhooks

Response: 200 OK

{
  "status": "deleted",
  "name": "webhooks"
}

Returns 404 if not found. Returns 409 if the queue still has active (pending/running) jobs.


POST /queues/{name}/pause

Pause a queue. Workers stop picking up jobs from a paused queue.

curl -X POST http://localhost:6390/queues/webhooks/pause

Response: 200 OK — updated queue object with paused: 1.


POST /queues/{name}/resume

Resume a paused queue.

curl -X POST http://localhost:6390/queues/webhooks/resume

Response: 200 OK — updated queue object with paused: 0.

Schedules API

Schedules create jobs automatically on a cron-based interval.

Cron Format

mininq uses the cron crate, which supports a 7-field format:

sec  min  hour  day-of-month  month  day-of-week  year

The year field is optional. Common examples:

Expression     | Meaning
0 */5 * * * *  | Every 5 minutes
0 0 * * * *    | Every hour
0 0 9 * * *    | Daily at 09:00
0 30 2 * * Mon | Mondays at 02:30
0 0 0 1 * *    | First day of each month

Note: The first field is seconds, not minutes. 0 * * * * * means “every minute at second 0.”


POST /schedules

Create a recurring schedule.

Request body:

Field           | Type   | Required | Default   | Description
cron_expression | String | yes      | -         | Cron expression (see format above)
callback_url    | String | yes      | -         | HTTP(S) URL to POST
queue_name      | String | no       | "default" | Target queue for generated jobs
payload         | Object | no       | {}        | JSON payload for generated jobs
max_retries     | i32    | no       | 3         | Max retries for generated jobs
timeout_ms      | i32    | no       | 30000     | Timeout for generated jobs (ms)
enabled         | bool   | no       | true      | Whether the schedule is active

Example:

curl -X POST http://localhost:6390/schedules \
  -H "Content-Type: application/json" \
  -d '{
    "cron_expression": "0 */5 * * * *",
    "callback_url": "https://example.com/cleanup",
    "queue_name": "maintenance",
    "payload": {"task": "expire_sessions"}
  }'

Response: 201 Created

{
  "id": "019...",
  "queue_name": "maintenance",
  "cron_expression": "0 */5 * * * *",
  "callback_url": "https://example.com/cleanup",
  "payload": "{\"task\":\"expire_sessions\"}",
  "max_retries": 3,
  "timeout_ms": 30000,
  "enabled": 1,
  "last_run_at": null,
  "next_run_at": "2024-01-15T10:05:00.000",
  "created_at": "2024-01-15T10:00:00.000",
  "updated_at": "2024-01-15T10:00:00.000"
}

GET /schedules

List all schedules.

curl http://localhost:6390/schedules

Response: 200 OK — array of schedule objects, ordered by created_at DESC.


GET /schedules/{id}

Get a single schedule.

curl http://localhost:6390/schedules/019...

Response: 200 OK — schedule object. 404 if not found.


PUT /schedules/{id}

Update a schedule. Only provided fields are changed. If cron_expression is updated, next_run_at is automatically recomputed.

Request body:

Field           | Type   | Description
queue_name      | String | Target queue
cron_expression | String | Cron expression (recomputes next run)
callback_url    | String | Callback URL
payload         | Object | JSON payload
max_retries     | i32    | Max retries
timeout_ms      | i32    | Timeout (ms)
enabled         | bool   | Enable/disable the schedule

Example:

curl -X PUT http://localhost:6390/schedules/019... \
  -H "Content-Type: application/json" \
  -d '{"enabled": false}'

Response: 200 OK — updated schedule object. 404 if not found. 400 if no fields provided.


DELETE /schedules/{id}

Delete a schedule.

curl -X DELETE http://localhost:6390/schedules/019...

Response: 200 OK

{
  "status": "deleted",
  "id": "019..."
}

Returns 404 if not found.

Metrics API

GET /metrics

Returns aggregate system metrics.

curl http://localhost:6390/metrics

Response: 200 OK

{
  "uptime_secs": 3600,
  "version": "0.1.0",
  "jobs": {
    "total": 1500,
    "pending": 42,
    "running": 8,
    "completed": 1420,
    "dead": 30
  },
  "queues": [
    {
      "name": "default",
      "paused": false,
      "depth": 30,
      "in_flight": 5
    },
    {
      "name": "emails",
      "paused": false,
      "depth": 12,
      "in_flight": 3
    }
  ],
  "schedules": {
    "total": 5,
    "enabled": 4
  }
}

Field Descriptions

Field              | Description
uptime_secs        | Seconds since the server started
version            | mininq version from Cargo.toml
jobs.total         | Total job count across all statuses
jobs.pending       | Jobs waiting to be picked up
jobs.running       | Jobs currently being executed
jobs.completed     | Successfully completed jobs
jobs.dead          | Jobs that exhausted retries or hit permanent failure
queues[].name      | Queue name
queues[].paused    | Whether the queue is paused
queues[].depth     | Number of pending jobs in this queue
queues[].in_flight | Number of currently running jobs in this queue
schedules.total    | Total number of schedules
schedules.enabled  | Number of enabled (active) schedules

Dashboard

mininq includes an embedded web dashboard powered by htmx. No separate frontend build is required — it’s compiled into the binary via rust-embed.

Accessing the Dashboard

Open your browser to:

http://localhost:6390/dashboard

(Replace localhost:6390 with your configured host and port.)

What It Shows

The dashboard provides a live-updating overview of the system:

  • Stats cards — total jobs, pending, running, completed, dead counts at a glance
  • Queues table — all queues with their job counts, pause/resume controls, and concurrency settings
  • Jobs table — filterable list of jobs with status, queue, priority, timestamps, and error messages
  • Schedules table — all cron schedules with their expression, next run time, and enabled status

Interactivity

  • Pause/Resume — click buttons in the queues table to pause or resume individual queues
  • Auto-refresh — the dashboard polls the API periodically to keep data current
  • Filtering — filter jobs by queue name and status

Static Assets

The dashboard’s HTML and CSS are embedded into the binary at compile time from the static/ directory. The files are served at:

  • GET /dashboard — main HTML page
  • GET /static/{path} — CSS and other static assets

Architecture

This page describes the internal design of mininq.

Database

mininq uses SQLite in WAL (Write-Ahead Logging) mode, enabling concurrent reads while a write is in progress.

Connection pools:

  • Writer pool — exactly 1 connection (serializes all writes)
  • Reader pool — N connections (N = number of CPU cores, minimum 4)

Pragmas applied to both pools:

Pragma       | Value       | Purpose
journal_mode | wal         | Concurrent read/write
synchronous  | normal      | Balance durability with performance
busy_timeout | 5000 ms     | Wait on lock instead of failing
temp_store   | memory      | Store temp tables in RAM
mmap_size    | 30000000000 | Memory-map up to ~30 GB of the DB file
cache_size   | -64000      | 64 MB page cache
foreign_keys | on          | Enforce referential integrity
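
The same pragmas can be applied from any SQLite client. For example, from Python's sqlite3 (a sketch for experimentation, not mininq's code):

```python
# Open a SQLite connection with the pragmas listed above.
import sqlite3

def open_db(path):
    conn = sqlite3.connect(path)
    conn.execute("PRAGMA journal_mode = WAL")   # in-memory DBs report 'memory' instead
    conn.execute("PRAGMA synchronous = NORMAL")
    conn.execute("PRAGMA busy_timeout = 5000")  # wait up to 5 s on a locked DB
    conn.execute("PRAGMA temp_store = MEMORY")
    conn.execute("PRAGMA mmap_size = 30000000000")
    conn.execute("PRAGMA cache_size = -64000")  # negative value: size in KiB (~64 MB)
    conn.execute("PRAGMA foreign_keys = ON")
    return conn

conn = open_db(":memory:")
```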

Schema

3 tables:

  • queues — queue configuration (name, concurrency, rate limit, retry settings, paused flag)
  • jobs — individual jobs (status, payload, callback URL, retry state, timestamps)
  • schedules — cron schedules (expression, callback, next/last run)

4 indexes:

Index                  | Purpose
idx_jobs_poll          | Fast job claiming: (queue_name, status, visible_at, priority DESC, created_at ASC) WHERE status = 'pending'
idx_jobs_reaper        | Stale job detection: (status, visible_at) WHERE status = 'running'
idx_jobs_idempotency   | Idempotency key lookup: unique on (idempotency_key) WHERE idempotency_key IS NOT NULL
idx_schedules_next_run | Due schedule lookup: (enabled, next_run_at) WHERE enabled = 1

Worker Engine

The worker engine is a poll-based loop:

  1. Sleep for poll_interval_ms (default 500ms)
  2. Acquire a semaphore permit (limits concurrency to worker.concurrency)
  3. Claim a job using an atomic UPDATE ... WHERE id = (SELECT ...) RETURNING * query
  4. Spawn a tokio task to execute the webhook
  5. Release the semaphore permit when the task completes

The claim query selects the highest-priority pending job whose visible_at has passed, atomically setting it to running and assigning a worker_id. This prevents double-execution even with multiple workers.

Queue order is randomized on each poll cycle to prevent starvation.
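
The claim step can be illustrated against a toy jobs table. This sketch uses a follow-up SELECT instead of RETURNING so it runs on older SQLite versions; mininq's real schema has more columns:

```python
# Atomic job claim: the UPDATE's subquery picks the winner, so two workers
# can never claim the same row.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE jobs (
    id INTEGER PRIMARY KEY, queue_name TEXT, status TEXT,
    priority INTEGER, visible_at REAL, created_at REAL, worker_id TEXT)""")
conn.executemany(
    "INSERT INTO jobs VALUES (?, 'default', 'pending', ?, 0, ?, NULL)",
    [(1, 0, 1.0), (2, 10, 2.0), (3, 0, 3.0)])

def claim_job(conn, queue, now, worker_id):
    """Claim the highest-priority visible pending job, or return None."""
    cur = conn.execute("""
        UPDATE jobs SET status = 'running', worker_id = ?
        WHERE id = (
            SELECT id FROM jobs
            WHERE queue_name = ? AND status = 'pending' AND visible_at <= ?
            ORDER BY priority DESC, created_at ASC
            LIMIT 1)""", (worker_id, queue, now))
    if cur.rowcount == 0:
        return None                      # nothing claimable this cycle
    return conn.execute(
        "SELECT id FROM jobs WHERE worker_id = ? AND status = 'running'",
        (worker_id,)).fetchone()[0]

first = claim_job(conn, "default", 100.0, "w1")   # priority 10 job wins
```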

Graceful Shutdown

On SIGTERM or SIGINT:

  1. The CancellationToken is triggered
  2. The worker engine stops polling for new jobs
  3. It acquires all semaphore permits (blocking until in-flight jobs complete)
  4. The reaper, scheduler, and cleanup tasks exit their loops
  5. Database connections are closed
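
Step 3, acquiring every semaphore permit to wait out in-flight work, can be sketched with asyncio; this illustrates the pattern only and is not mininq's Tokio-based code:

```python
# Drain pattern: once the poll loop stops claiming jobs, acquiring all
# permits blocks until every in-flight job has released its permit.
import asyncio

async def drain(sem: asyncio.Semaphore, permits: int):
    """Return only once no job holds a permit."""
    for _ in range(permits):
        await sem.acquire()

async def main():
    concurrency = 3
    sem = asyncio.Semaphore(concurrency)
    done = []

    async def job(i):
        async with sem:                 # each in-flight job holds one permit
            await asyncio.sleep(0.01)   # simulated webhook delivery
            done.append(i)

    tasks = [asyncio.create_task(job(i)) for i in range(concurrency)]
    await asyncio.sleep(0)              # let every job claim its permit
    # Shutdown signal received: stop claiming new jobs, then drain.
    await drain(sem, concurrency)       # blocks until all in-flight jobs finish
    return sorted(done)

result = asyncio.run(main())
```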

Rate Limiter

Per-queue rate limiting uses a token bucket algorithm:

  • Each queue with rate_limit_rps set gets its own bucket
  • Bucket capacity = rate_limit_rps (minimum 1.0)
  • Tokens refill continuously at rate_limit_rps per second
  • Each job claim consumes 1 token
  • If the bucket is empty, the queue is skipped for that poll cycle
  • Queues without a rate limit always pass

Rate limits are checked from the database on each poll, so changes via PUT /queues/{name} take effect immediately.
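
The bucket rules above fit in a few lines. A sketch (illustrative; TokenBucket is a hypothetical name, not mininq's type):

```python
# Token bucket: capacity = rate (min 1.0), continuous refill, 1 token per claim.
class TokenBucket:
    def __init__(self, rate_rps):
        self.rate = rate_rps
        self.capacity = max(rate_rps, 1.0)
        self.tokens = self.capacity       # start full
        self.last = 0.0                   # timestamp of the last refill

    def try_claim(self, now):
        """Refill based on elapsed time, then consume one token if available."""
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False    # bucket empty: skip this queue for the poll cycle

bucket = TokenBucket(rate_rps=2.0)        # capacity 2, refills 2 tokens/sec
```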

Retry Strategies

When a job fails with a transient error (5xx, timeout, connection error), it’s retried up to max_retries times. The delay before the next attempt is computed as:

Strategy    | Formula
exponential | base_delay_ms * 2^(attempt - 1)
linear      | base_delay_ms * attempt
fixed       | base_delay_ms

All strategies add ±30% random jitter and are capped at max_delay_ms.

Permanent failures (4xx responses) bypass retries and go directly to dead.
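
The formulas and jitter rule can be sketched as follows (illustrative; whether mininq caps before or after jitter is an assumption here, so this sketch caps both times):

```python
# Retry delay per the table above: strategy formula, +/-30% jitter, capped.
import random

def retry_delay_ms(strategy, attempt, base_delay_ms, max_delay_ms, rng=None):
    """Delay in ms before retry number `attempt` (1-based)."""
    if strategy == "exponential":
        delay = base_delay_ms * 2 ** (attempt - 1)
    elif strategy == "linear":
        delay = base_delay_ms * attempt
    elif strategy == "fixed":
        delay = base_delay_ms
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    delay = min(delay, max_delay_ms)                 # cap the raw delay
    jitter = (rng or random).uniform(-0.3, 0.3)      # +/-30% random jitter
    return min(delay * (1 + jitter), max_delay_ms)   # keep the cap after jitter
```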

Reaper

The reaper runs on a configurable interval (default 30s) and recovers jobs stuck in running status. A job is considered stale when its visible_at has passed — this timestamp is set to now + timeout_ms when the job is claimed.

Stale jobs are reset to pending with worker_id and started_at cleared, making them available for re-execution.
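
The sweep reduces to a single UPDATE. A sketch on a toy table (illustrative only):

```python
# Reaper sweep: reset running jobs whose visible_at deadline has passed.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE jobs (
    id INTEGER PRIMARY KEY, status TEXT, visible_at REAL,
    worker_id TEXT, started_at REAL)""")
# visible_at was set to claim-time + timeout_ms when each job was claimed
conn.executemany("INSERT INTO jobs VALUES (?, 'running', ?, 'w1', 10.0)",
                 [(1, 50.0), (2, 200.0)])

def reap(conn, now):
    """Reset stale running jobs back to pending; return how many."""
    cur = conn.execute("""
        UPDATE jobs
        SET status = 'pending', worker_id = NULL, started_at = NULL
        WHERE status = 'running' AND visible_at <= ?""", (now,))
    return cur.rowcount

reclaimed = reap(conn, now=100.0)   # job 1 is stale; job 2 still has time
```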

Scheduler

The scheduler runs on a configurable interval (default 15s) and:

  1. Queries enabled schedules where next_run_at <= now
  2. For each due schedule, computes the next run time from the cron expression
  3. Uses a CAS (Compare-And-Swap) update: UPDATE ... WHERE id = ? AND next_run_at = ?
    • This prevents duplicate job creation if multiple instances are running
  4. Inserts a new job with the schedule’s callback URL, payload, and retry settings
  5. Auto-creates the target queue if it doesn’t exist
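
The CAS step in point 3 can be sketched with sqlite3 (illustrative; fire_schedule is a hypothetical name):

```python
# Compare-and-swap on next_run_at: only one instance advances the schedule,
# so only one instance inserts the job.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE schedules (id INTEGER PRIMARY KEY, next_run_at REAL)")
conn.execute("INSERT INTO schedules VALUES (1, 100.0)")

def fire_schedule(conn, schedule_id, expected_next, new_next):
    """Advance next_run_at only if no other instance already did.
    Returns True if this instance won and should insert the job."""
    cur = conn.execute(
        "UPDATE schedules SET next_run_at = ? WHERE id = ? AND next_run_at = ?",
        (new_next, schedule_id, expected_next))
    return cur.rowcount == 1

won = fire_schedule(conn, 1, expected_next=100.0, new_next=400.0)
lost = fire_schedule(conn, 1, expected_next=100.0, new_next=400.0)  # second instance
```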

Cleanup

If worker.retention_days is configured, the cleanup task runs periodically (default every 3600s) and deletes completed/dead jobs older than the retention period. Jobs in pending or running status are never deleted.

Deployment

Build a Release Binary

cargo build --release

The binary is at ./target/release/mininq (~5-10 MB, statically includes the dashboard).

Systemd

Create /etc/systemd/system/mininq.service:

[Unit]
Description=mininq job runner
After=network.target

[Service]
Type=simple
User=mininq
Group=mininq
WorkingDirectory=/var/lib/mininq
ExecStart=/usr/local/bin/mininq --config /etc/mininq/config.toml
Restart=on-failure
RestartSec=5

# Graceful shutdown
KillSignal=SIGTERM
TimeoutStopSec=30

# Security hardening
NoNewPrivileges=yes
ProtectSystem=strict
ProtectHome=yes
ReadWritePaths=/var/lib/mininq

[Install]
WantedBy=multi-user.target

Then reload systemd and start the service:

sudo systemctl daemon-reload
sudo systemctl enable --now mininq

Docker

FROM rust:1.85-slim AS builder
WORKDIR /build
COPY . .
RUN cargo build --release

FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y ca-certificates && rm -rf /var/lib/apt/lists/*
COPY --from=builder /build/target/release/mininq /usr/local/bin/mininq

WORKDIR /data
EXPOSE 6390
CMD ["mininq"]

Build and run:

docker build -t mininq .
docker run -d -p 6390:6390 -v mininq-data:/data mininq

The database file (mininq.db) is created in /data — mount a volume to persist it.

Reverse Proxy

Nginx

upstream mininq {
    server 127.0.0.1:6390;
}

server {
    listen 80;
    server_name jobs.example.com;

    location / {
        proxy_pass http://mininq;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

Caddy

jobs.example.com {
    reverse_proxy localhost:6390
}

Graceful Shutdown

mininq handles SIGTERM and SIGINT:

  1. Stops accepting new HTTP connections
  2. Waits for in-flight jobs to complete (up to their individual timeouts)
  3. Stops background tasks (reaper, scheduler, cleanup)
  4. Closes database connections

Set TimeoutStopSec in systemd or stop_grace_period in Docker Compose to match your longest expected job timeout.

Backup

mininq stores everything in a single SQLite file. Back it up safely while the server is running:

sqlite3 /var/lib/mininq/mininq.db ".backup /backups/mininq-$(date +%F).db"

The .backup command uses SQLite’s online backup API, which is safe to run against a live WAL-mode database.