Introduction
mininq is a SQLite-backed job runner: background jobs with the same low-friction feel SQLite brings to databases. No Redis, no RabbitMQ, no external dependencies. Just a single binary and a single file.
Why mininq?
Most job queues require running a separate broker (Redis, RabbitMQ, Postgres). mininq takes the SQLite approach: embed the queue directly alongside your application. It’s ideal for:
- Small-to-medium services that don’t need a dedicated message broker
- Self-contained deployments where fewer moving parts matter
- Projects that already use SQLite and want to keep things simple
Architecture Overview
┌──────────────┐    POST /jobs     ┌────────────────┐
│   Your App   │ ────────────────▶ │  mininq HTTP   │
│              │                   │   API (Axum)   │
└──────────────┘                   └───────┬────────┘
                                           │
                                           ▼
                                   ┌────────────────┐
                                   │  SQLite (WAL)  │
                                   │   mininq.db    │
                                   └───────┬────────┘
                                           │
                                           ▼
                                   ┌────────────────┐
                                   │ Worker Engine  │──▶ POST callback_url
                                   │  (poll loop)   │    (webhook delivery)
                                   └────────────────┘
Features
- Webhook-based execution — jobs POST to a callback URL; your service handles the work
- Priority queues — higher-priority jobs run first within a queue
- Per-queue rate limiting — token bucket algorithm, configurable per queue
- Retry with backoff — exponential, linear, or fixed strategies with jitter
- Delayed jobs — schedule jobs to become visible after a delay
- Cron schedules — recurring jobs via 6-field cron expressions
- Idempotency keys — deduplicate job creation
- Stale job recovery — reaper reclaims jobs from crashed workers
- Automatic cleanup — optionally delete completed/dead jobs after N days
- Embedded dashboard — htmx-powered web UI for monitoring
- Graceful shutdown — waits for in-flight jobs on SIGTERM/SIGINT
- Single binary, single file — no runtime dependencies beyond the OS
Getting Started
Prerequisites
- Rust toolchain (1.85+ for edition 2024)
Install
# Clone and build
git clone https://github.com/tadeasf/mininq.git
cd mininq
cargo install --path .
Or build without installing:
cargo build --release
# Binary at ./target/release/mininq
Run
mininq
mininq creates mininq.db in the current directory and starts listening on 0.0.0.0:6390.
2024-01-15T10:00:00 INFO mininq: Starting mininq version=0.1.0 host=0.0.0.0 port=6390 db=mininq.db
2024-01-15T10:00:00 INFO mininq::db: Database pools connected writer_max=1 reader_max=4 path=mininq.db
2024-01-15T10:00:00 INFO mininq: HTTP server listening addr=0.0.0.0:6390
Enqueue Your First Job
Use any HTTP endpoint as a callback. Here we use httpbin.org as a test target:
curl -X POST http://localhost:6390/jobs \
-H "Content-Type: application/json" \
-d '{
"callback_url": "https://httpbin.org/post",
"payload": {"message": "hello from mininq"}
}'
Response:
{
"id": "019...",
"queue_name": "default",
"status": "pending",
"priority": 0,
"payload": "{\"message\":\"hello from mininq\"}",
"callback_url": "https://httpbin.org/post",
"attempt": 0,
"max_retries": 3,
"timeout_ms": 30000,
"created_at": "2024-01-15T10:00:01.234",
"updated_at": "2024-01-15T10:00:01.234"
}
Check Job Status
curl http://localhost:6390/jobs/019...
The job should quickly move from pending → running → completed.
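If you want to watch the transition, you can poll the status endpoint in a small loop (the job ID below is a placeholder; use the id from your create response):
# Poll once per second; stop with Ctrl-C once status shows "completed"
while true; do
  curl -s http://localhost:6390/jobs/019...
  echo
  sleep 1
done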
Open the Dashboard
Navigate to http://localhost:6390/dashboard to see stats, queues, jobs, and schedules in a live-updating web UI.
Next Steps
- Configuration — customize ports, concurrency, retry behavior
- API Reference — full endpoint documentation
- Architecture — understand the internals
Configuration
mininq is configured via a TOML config file, CLI flags, and environment variables.
Precedence: CLI flags > environment variables > config file > defaults
Config File
By default, mininq looks for mininq.toml in the current directory. Specify a different path with --config:
mininq --config /etc/mininq/config.toml
Full Example
[server]
host = "0.0.0.0"
port = 6390
db_path = "mininq.db"
[worker]
concurrency = 10
poll_interval_ms = 500
queues = [] # empty = all queues
reaper_interval_secs = 30
scheduler_interval_secs = 15
retention_days = 30 # omit to keep jobs forever
cleanup_interval_secs = 3600
[defaults]
max_retries = 3
retry_backoff = "exponential" # "exponential" | "linear" | "fixed"
base_delay_ms = 1000
max_delay_ms = 300000 # 5 minutes
timeout_ms = 30000 # 30 seconds
[logging]
level = "info"
format = "json" # "json" | "pretty"
Reference
[server]
| Field | Type | Default | Description |
|---|---|---|---|
| host | String | "0.0.0.0" | Bind address |
| port | u16 | 6390 | HTTP port |
| db_path | String | "mininq.db" | Path to the SQLite database file |
[worker]
| Field | Type | Default | Description |
|---|---|---|---|
| concurrency | usize | 10 | Maximum concurrent job executions (semaphore permits) |
| poll_interval_ms | u64 | 500 | Milliseconds between poll cycles |
| queues | [String] | [] | Queue names to process; empty means all queues |
| reaper_interval_secs | u64 | 30 | Seconds between reaper sweeps for stale jobs |
| scheduler_interval_secs | u64 | 15 | Seconds between scheduler ticks for cron jobs |
| retention_days | u32? | none | Days to keep completed/dead jobs; omit to keep forever |
| cleanup_interval_secs | u64 | 3600 | Seconds between cleanup sweeps (only active if retention is set) |
[defaults]
These defaults apply to jobs that don’t specify their own values.
| Field | Type | Default | Description |
|---|---|---|---|
| max_retries | i32 | 3 | Maximum retry attempts |
| retry_backoff | String | "exponential" | Backoff strategy: exponential, linear, fixed |
| base_delay_ms | i32 | 1000 | Base delay for retry calculation (ms) |
| max_delay_ms | i32 | 300000 | Maximum retry delay cap (ms) |
| timeout_ms | i32 | 30000 | Webhook request timeout (ms) |
[logging]
| Field | Type | Default | Description |
|---|---|---|---|
| level | String | "info" | Log level: trace, debug, info, warn, error |
| format | String | "json" | Output format: json or pretty |
CLI Flags
mininq [OPTIONS]
Options:
-c, --config <PATH> Path to config file [default: mininq.toml]
--host <HOST> Server host (overrides config)
--port <PORT> Server port (overrides config)
--db-path <PATH> Database path (overrides config)
--log-level <LEVEL> Log level (overrides config)
-h, --help Print help
Environment Variables
| Variable | Overrides |
|---|---|
| MININQ_HOST | server.host |
| MININQ_PORT | server.port |
| MININQ_DB_PATH | server.db_path |
| MININQ_LOG_LEVEL | logging.level |
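For example, to override the port and log level for a single run (a CLI flag would still take precedence over either of these):
MININQ_PORT=7000 MININQ_LOG_LEVEL=debug mininq --config ./mininq.toml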
API Overview
mininq exposes a JSON REST API over HTTP.
Base URL
http://localhost:6390
All request and response bodies use Content-Type: application/json.
Error Format
All errors return a JSON object with error and status fields:
{
"error": "Job 019... not found",
"status": 404
}
Status Codes
| Code | Meaning |
|---|---|
| 200 | Success |
| 201 | Resource created |
| 400 | Bad request (validation error) |
| 404 | Resource not found |
| 409 | Conflict (e.g., duplicate, active jobs) |
| 500 | Internal server error |
Route Table
| Method | Path | Description |
|---|---|---|
| GET | /health | Health check |
| POST | /jobs | Create a job |
| GET | /jobs | List jobs (with filters) |
| GET | /jobs/{id} | Get a single job |
| DELETE | /jobs/{id} | Cancel a pending job |
| POST | /jobs/{id}/retry | Retry a dead job |
| POST | /queues | Create a queue |
| GET | /queues | List queues with stats |
| GET | /queues/{name} | Get a single queue with stats |
| PUT | /queues/{name} | Update queue settings |
| DELETE | /queues/{name} | Delete a queue |
| POST | /queues/{name}/pause | Pause a queue |
| POST | /queues/{name}/resume | Resume a paused queue |
| POST | /schedules | Create a schedule |
| GET | /schedules | List all schedules |
| GET | /schedules/{id} | Get a single schedule |
| PUT | /schedules/{id} | Update a schedule |
| DELETE | /schedules/{id} | Delete a schedule |
| GET | /metrics | Get system metrics |
| GET | /dashboard | Web dashboard (HTML) |
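A quick way to confirm the server is reachable is the health endpoint; only the 200 status matters here (the response body is not documented on this page):
curl -i http://localhost:6390/health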
Jobs API
Job Lifecycle
pending ──▶ running ──▶ completed
               │
               ▼
            (retry) ──▶ pending    (if attempts < max_retries)
               │
               ▼
              dead                 (if attempts exhausted or 4xx response)
When a job is executed, mininq sends a POST request to the callback_url with:
- Body: the job’s payload (JSON)
- Headers: Content-Type: application/json, X-Mininq-Job-Id: <job-id>, X-Mininq-Attempt: <attempt-number>, X-Mininq-Queue: <queue-name>
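As a rough curl equivalent, the request your callback receives looks like this (the job ID, attempt number, and payload are illustrative):
curl -X POST https://example.com/webhook \
  -H "Content-Type: application/json" \
  -H "X-Mininq-Job-Id: 019..." \
  -H "X-Mininq-Attempt: 1" \
  -H "X-Mininq-Queue: default" \
  -d '{"message": "hello from mininq"}'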
Response handling:
- 2xx → job marked completed, response body saved to result
- 4xx → permanent failure, job goes directly to dead
- 5xx / timeout / connection error → transient failure, retried up to max_retries
POST /jobs
Create a new job.
Request body:
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| callback_url | String | yes | | HTTP(S) URL to POST when executing |
| queue_name | String | no | "default" | Target queue (auto-created if missing) |
| priority | i32 | no | 0 | Higher values execute first |
| payload | Object | no | {} | JSON payload sent to the callback |
| max_retries | i32 | no | config default | Maximum retry attempts |
| retry_backoff | String | no | config default | "exponential", "linear", or "fixed" |
| base_delay_ms | i32 | no | config default | Base delay for retry calculation |
| max_delay_ms | i32 | no | config default | Maximum retry delay cap |
| timeout_ms | i32 | no | config default | Webhook request timeout (ms) |
| idempotency_key | String | no | | Dedup key — returns existing job if matched |
| delay_ms | i64 | no | | Delay before job becomes visible (ms) |
Example:
curl -X POST http://localhost:6390/jobs \
-H "Content-Type: application/json" \
-d '{
"callback_url": "https://example.com/webhook",
"queue_name": "emails",
"priority": 10,
"payload": {"to": "user@example.com", "template": "welcome"},
"max_retries": 5,
"idempotency_key": "welcome-user-123"
}'
Response: 201 Created (or 200 OK if idempotency key matched an existing job)
{
"id": "019...",
"queue_name": "emails",
"status": "pending",
"priority": 10,
"payload": "{\"to\":\"user@example.com\",\"template\":\"welcome\"}",
"callback_url": "https://example.com/webhook",
"visible_at": "2024-01-15T10:00:01.234",
"attempt": 0,
"max_retries": 5,
"timeout_ms": 30000,
"idempotency_key": "welcome-user-123",
"created_at": "2024-01-15T10:00:01.234",
"updated_at": "2024-01-15T10:00:01.234"
}
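Because an idempotency key was set, re-running the exact same request is safe: instead of creating a duplicate, mininq returns the existing job.
# Second submission with the same idempotency_key returns the job above (200 OK, not 201)
curl -X POST http://localhost:6390/jobs \
  -H "Content-Type: application/json" \
  -d '{
    "callback_url": "https://example.com/webhook",
    "queue_name": "emails",
    "priority": 10,
    "payload": {"to": "user@example.com", "template": "welcome"},
    "max_retries": 5,
    "idempotency_key": "welcome-user-123"
  }'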
Delayed Jobs
Set delay_ms to schedule a job for future execution:
curl -X POST http://localhost:6390/jobs \
-H "Content-Type: application/json" \
-d '{
"callback_url": "https://example.com/webhook",
"delay_ms": 60000
}'
The job will remain invisible to the worker until the delay has elapsed.
GET /jobs
List jobs with optional filters.
Query parameters:
| Param | Type | Default | Description |
|---|---|---|---|
| queue | String | | Filter by queue name |
| status | String | | Filter by status (pending, running, completed, dead) |
| limit | i64 | 50 | Max results (capped at 1000) |
| offset | i64 | 0 | Pagination offset |
Example:
curl "http://localhost:6390/jobs?queue=emails&status=dead&limit=10"
Response: 200 OK — array of job objects.
GET /jobs/{id}
Get a single job by ID.
curl http://localhost:6390/jobs/019...
Response: 200 OK — job object. 404 if not found.
DELETE /jobs/{id}
Cancel a pending job. Only jobs with status pending can be cancelled.
curl -X DELETE http://localhost:6390/jobs/019...
Response: 200 OK
{
"status": "cancelled",
"id": "019..."
}
Returns 404 if not found, 409 if the job is not in pending status.
POST /jobs/{id}/retry
Retry a dead job. Resets it to pending with attempt count 0.
curl -X POST http://localhost:6390/jobs/019.../retry
Response: 200 OK — the reset job object. 404 if not found or not in dead status.
Queues API
Queues group jobs and control concurrency, rate limiting, and retry behavior. Queues are automatically created when a job references a non-existent queue name, but you can also create them explicitly to configure settings.
POST /queues
Create a queue with custom settings.
Request body:
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| name | String | yes | | Queue name (unique) |
| max_concurrency | i32 | no | 5 | Max concurrent jobs from this queue |
| rate_limit_rps | f64 | no | none | Requests per second limit |
| max_retries | i32 | no | 3 | Default max retries for jobs |
| retry_backoff | String | no | "exponential" | Default backoff strategy |
| base_delay_ms | i32 | no | 1000 | Default base delay (ms) |
| max_delay_ms | i32 | no | 300000 | Default max delay (ms) |
Example:
curl -X POST http://localhost:6390/queues \
-H "Content-Type: application/json" \
-d '{
"name": "webhooks",
"max_concurrency": 20,
"rate_limit_rps": 10.0,
"retry_backoff": "linear"
}'
Response: 201 Created
{
"name": "webhooks",
"max_concurrency": 20,
"rate_limit_rps": 10.0,
"max_retries": 3,
"retry_backoff": "linear",
"base_delay_ms": 1000,
"max_delay_ms": 300000,
"paused": 0,
"created_at": "2024-01-15T10:00:00.000",
"updated_at": "2024-01-15T10:00:00.000"
}
Returns 409 if a queue with that name already exists.
GET /queues
List all queues with job count statistics.
curl http://localhost:6390/queues
Response: 200 OK
[
{
"name": "default",
"max_concurrency": 5,
"paused": false,
"pending": 12,
"running": 3,
"completed": 150,
"dead": 2
}
]
GET /queues/{name}
Get a single queue with stats.
curl http://localhost:6390/queues/webhooks
Response: 200 OK — queue stats object. 404 if not found.
PUT /queues/{name}
Update queue settings. Only provided fields are changed.
Request body:
| Field | Type | Description |
|---|---|---|
| max_concurrency | i32 | Max concurrent jobs |
| rate_limit_rps | f64 | Requests per second limit |
| max_retries | i32 | Default max retries |
| retry_backoff | String | Default backoff strategy |
| base_delay_ms | i32 | Default base delay (ms) |
| max_delay_ms | i32 | Default max delay (ms) |
Example:
curl -X PUT http://localhost:6390/queues/webhooks \
-H "Content-Type: application/json" \
-d '{"rate_limit_rps": 5.0}'
Response: 200 OK — updated queue object. 404 if not found. 400 if no fields provided.
DELETE /queues/{name}
Delete a queue. Fails if the queue has pending or running jobs.
curl -X DELETE http://localhost:6390/queues/webhooks
Response: 200 OK
{
"status": "deleted",
"name": "webhooks"
}
Returns 404 if not found. Returns 409 if the queue still has active (pending/running) jobs.
POST /queues/{name}/pause
Pause a queue. Workers stop picking up jobs from a paused queue until it is resumed.
curl -X POST http://localhost:6390/queues/webhooks/pause
Response: 200 OK — updated queue object with paused: 1.
POST /queues/{name}/resume
Resume a paused queue.
curl -X POST http://localhost:6390/queues/webhooks/resume
Response: 200 OK — updated queue object with paused: 0.
Schedules API
Schedules create jobs automatically on a cron-based interval.
Cron Format
mininq uses the cron crate, which supports a 7-field format:
sec min hour day-of-month month day-of-week year
The year field is optional. Common examples:
| Expression | Meaning |
|---|---|
| 0 */5 * * * * | Every 5 minutes |
| 0 0 * * * * | Every hour |
| 0 0 9 * * * | Daily at 09:00 |
| 0 30 2 * * Mon | Mondays at 02:30 |
| 0 0 0 1 * * | First day of each month |
Note: The first field is seconds, not minutes. 0 * * * * * means “every minute at second 0.”
POST /schedules
Create a recurring schedule.
Request body:
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| cron_expression | String | yes | | Cron expression (see format above) |
| callback_url | String | yes | | HTTP(S) URL to POST |
| queue_name | String | no | "default" | Target queue for generated jobs |
| payload | Object | no | {} | JSON payload for generated jobs |
| max_retries | i32 | no | 3 | Max retries for generated jobs |
| timeout_ms | i32 | no | 30000 | Timeout for generated jobs (ms) |
| enabled | bool | no | true | Whether the schedule is active |
Example:
curl -X POST http://localhost:6390/schedules \
-H "Content-Type: application/json" \
-d '{
"cron_expression": "0 */5 * * * *",
"callback_url": "https://example.com/cleanup",
"queue_name": "maintenance",
"payload": {"task": "expire_sessions"}
}'
Response: 201 Created
{
"id": "019...",
"queue_name": "maintenance",
"cron_expression": "0 */5 * * * *",
"callback_url": "https://example.com/cleanup",
"payload": "{\"task\":\"expire_sessions\"}",
"max_retries": 3,
"timeout_ms": 30000,
"enabled": 1,
"last_run_at": null,
"next_run_at": "2024-01-15T10:05:00.000",
"created_at": "2024-01-15T10:00:00.000",
"updated_at": "2024-01-15T10:00:00.000"
}
GET /schedules
List all schedules.
curl http://localhost:6390/schedules
Response: 200 OK — array of schedule objects, ordered by created_at DESC.
GET /schedules/{id}
Get a single schedule.
curl http://localhost:6390/schedules/019...
Response: 200 OK — schedule object. 404 if not found.
PUT /schedules/{id}
Update a schedule. Only provided fields are changed. If cron_expression is updated, next_run_at is automatically recomputed.
Request body:
| Field | Type | Description |
|---|---|---|
| queue_name | String | Target queue |
| cron_expression | String | Cron expression (recomputes next run) |
| callback_url | String | Callback URL |
| payload | Object | JSON payload |
| max_retries | i32 | Max retries |
| timeout_ms | i32 | Timeout (ms) |
| enabled | bool | Enable/disable the schedule |
Example:
curl -X PUT http://localhost:6390/schedules/019... \
-H "Content-Type: application/json" \
-d '{"enabled": false}'
Response: 200 OK — updated schedule object. 404 if not found. 400 if no fields provided.
DELETE /schedules/{id}
Delete a schedule.
curl -X DELETE http://localhost:6390/schedules/019...
Response: 200 OK
{
"status": "deleted",
"id": "019..."
}
Returns 404 if not found.
Metrics API
GET /metrics
Returns aggregate system metrics.
curl http://localhost:6390/metrics
Response: 200 OK
{
"uptime_secs": 3600,
"version": "0.1.0",
"jobs": {
"total": 1500,
"pending": 42,
"running": 8,
"completed": 1420,
"dead": 30
},
"queues": [
{
"name": "default",
"paused": false,
"depth": 30,
"in_flight": 5
},
{
"name": "emails",
"paused": false,
"depth": 12,
"in_flight": 3
}
],
"schedules": {
"total": 5,
"enabled": 4
}
}
Field Descriptions
| Field | Description |
|---|---|
| uptime_secs | Seconds since the server started |
| version | mininq version from Cargo.toml |
| jobs.total | Total job count across all statuses |
| jobs.pending | Jobs waiting to be picked up |
| jobs.running | Jobs currently being executed |
| jobs.completed | Successfully completed jobs |
| jobs.dead | Jobs that exhausted retries or hit permanent failure |
| queues[].name | Queue name |
| queues[].paused | Whether the queue is paused |
| queues[].depth | Number of pending jobs in this queue |
| queues[].in_flight | Number of currently running jobs in this queue |
| schedules.total | Total number of schedules |
| schedules.enabled | Number of enabled (active) schedules |
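For ad-hoc monitoring, the metrics JSON can be piped through jq (assuming jq is installed); for example, to read the pending backlog and per-queue depths:
# Total pending jobs across all queues
curl -s http://localhost:6390/metrics | jq '.jobs.pending'

# Name and depth of each queue
curl -s http://localhost:6390/metrics | jq '.queues[] | {name, depth}'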
Dashboard
mininq includes an embedded web dashboard powered by htmx. No separate frontend build is required — it’s compiled into the binary via rust-embed.
Accessing the Dashboard
Open your browser to:
http://localhost:6390/dashboard
(Replace localhost:6390 with your configured host and port.)
What It Shows
The dashboard provides a live-updating overview of the system:
- Stats cards — total jobs, pending, running, completed, dead counts at a glance
- Queues table — all queues with their job counts, pause/resume controls, and concurrency settings
- Jobs table — filterable list of jobs with status, queue, priority, timestamps, and error messages
- Schedules table — all cron schedules with their expression, next run time, and enabled status
Interactivity
- Pause/Resume — click buttons in the queues table to pause or resume individual queues
- Auto-refresh — the dashboard polls the API periodically to keep data current
- Filtering — filter jobs by queue name and status
Static Assets
The dashboard’s HTML and CSS are embedded into the binary at compile time from the static/ directory. The files are served at:
- GET /dashboard — main HTML page
- GET /static/{path} — CSS and other static assets
Architecture
This page describes the internal design of mininq.
Database
mininq uses SQLite in WAL (Write-Ahead Logging) mode, enabling concurrent reads while a write is in progress.
Connection pools:
- Writer pool — exactly 1 connection (serializes all writes)
- Reader pool — N connections (N = number of CPU cores, minimum 4)
Pragmas applied to both pools:
| Pragma | Value | Purpose |
|---|---|---|
| journal_mode | wal | Concurrent read/write |
| synchronous | normal | Balance durability with performance |
| busy_timeout | 5000 ms | Wait on lock instead of failing |
| temp_store | memory | Store temp tables in RAM |
| mmap_size | 30000000000 | Memory-map up to ~30 GB of the DB file |
| cache_size | -64000 | 64 MB page cache |
| foreign_keys | on | Enforce referential integrity |
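To double-check the journal mode on an existing database file, the sqlite3 CLI can read the pragma directly (reads against a live WAL database are generally fine):
sqlite3 mininq.db "PRAGMA journal_mode;"
# Expected output: wal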
Schema
3 tables:
- queues — queue configuration (name, concurrency, rate limit, retry settings, paused flag)
- jobs — individual jobs (status, payload, callback URL, retry state, timestamps)
- schedules — cron schedules (expression, callback, next/last run)
4 indexes:
| Index | Purpose |
|---|---|
| idx_jobs_poll | Fast job claiming: (queue_name, status, visible_at, priority DESC, created_at ASC) WHERE status = 'pending' |
| idx_jobs_reaper | Stale job detection: (status, visible_at) WHERE status = 'running' |
| idx_jobs_idempotency | Idempotency key lookup: unique on (idempotency_key) WHERE idempotency_key IS NOT NULL |
| idx_schedules_next_run | Due schedule lookup: (enabled, next_run_at) WHERE enabled = 1 |
Worker Engine
The worker engine is a poll-based loop:
- Sleep for poll_interval_ms (default 500 ms)
- Acquire a semaphore permit (limits concurrency to worker.concurrency)
- Claim a job using an atomic UPDATE ... WHERE id = (SELECT ...) RETURNING * query
- Spawn a tokio task to execute the webhook
- Release the semaphore permit when the task completes
The claim query selects the highest-priority pending job whose visible_at has passed, atomically setting it to running and assigning a worker_id. This prevents double-execution even with multiple workers.
Queue order is randomized on each poll cycle to prevent starvation.
Graceful Shutdown
On SIGTERM or SIGINT:
- The CancellationToken is triggered
- The worker engine stops polling for new jobs
- It acquires all semaphore permits (blocking until in-flight jobs complete)
- The reaper, scheduler, and cleanup tasks exit their loops
- Database connections are closed
Rate Limiter
Per-queue rate limiting uses a token bucket algorithm:
- Each queue with rate_limit_rps set gets its own bucket
- Bucket capacity = rate_limit_rps (minimum 1.0)
- Tokens refill continuously at rate_limit_rps per second
- Each job claim consumes 1 token
- If the bucket is empty, the queue is skipped for that poll cycle
- Queues without a rate limit always pass
Rate limits are checked from the database on each poll, so changes via PUT /queues/{name} take effect immediately.
Retry Strategies
When a job fails with a transient error (5xx, timeout, connection error), it’s retried up to max_retries times. The delay before the next attempt is computed as:
| Strategy | Formula |
|---|---|
| exponential | base_delay_ms * 2^(attempt - 1) |
| linear | base_delay_ms * attempt |
| fixed | base_delay_ms |
All strategies add ±30% random jitter and are capped at max_delay_ms. For example, with base_delay_ms = 1000, the exponential strategy produces delays of roughly 1000, 2000, and 4000 ms for attempts 1, 2, and 3 (before jitter), while linear produces 1000, 2000, and 3000 ms and fixed stays at 1000 ms.
Permanent failures (4xx responses) bypass retries and go directly to dead.
Reaper
The reaper runs on a configurable interval (default 30s) and recovers jobs stuck in running status. A job is considered stale when its visible_at has passed — this timestamp is set to now + timeout_ms when the job is claimed.
Stale jobs are reset to pending with worker_id and started_at cleared, making them available for re-execution.
Scheduler
The scheduler runs on a configurable interval (default 15s) and:
- Queries enabled schedules where next_run_at <= now
- For each due schedule, computes the next run time from the cron expression
- Uses a CAS (Compare-And-Swap) update: UPDATE ... WHERE id = ? AND next_run_at = ?, which prevents duplicate job creation if multiple instances are running
- Inserts a new job with the schedule’s callback URL, payload, and retry settings
- Auto-creates the target queue if it doesn’t exist
Cleanup
If worker.retention_days is configured, the cleanup task runs periodically (default every 3600s) and deletes completed/dead jobs older than the retention period. Jobs in pending or running status are never deleted.
Deployment
Build a Release Binary
cargo build --release
The binary is at ./target/release/mininq (~5-10 MB, statically includes the dashboard).
Systemd
Create /etc/systemd/system/mininq.service:
[Unit]
Description=mininq job runner
After=network.target
[Service]
Type=simple
User=mininq
Group=mininq
WorkingDirectory=/var/lib/mininq
ExecStart=/usr/local/bin/mininq --config /etc/mininq/config.toml
Restart=on-failure
RestartSec=5
# Graceful shutdown
KillSignal=SIGTERM
TimeoutStopSec=30
# Security hardening
NoNewPrivileges=yes
ProtectSystem=strict
ProtectHome=yes
ReadWritePaths=/var/lib/mininq
[Install]
WantedBy=multi-user.target
sudo systemctl daemon-reload
sudo systemctl enable --now mininq
Docker
FROM rust:1.85-slim AS builder
WORKDIR /build
COPY . .
RUN cargo build --release
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y ca-certificates && rm -rf /var/lib/apt/lists/*
COPY --from=builder /build/target/release/mininq /usr/local/bin/mininq
WORKDIR /data
EXPOSE 6390
CMD ["mininq"]
docker build -t mininq .
docker run -d -p 6390:6390 -v mininq-data:/data mininq
The database file (mininq.db) is created in /data — mount a volume to persist it.
Reverse Proxy
Nginx
upstream mininq {
server 127.0.0.1:6390;
}
server {
listen 80;
server_name jobs.example.com;
location / {
proxy_pass http://mininq;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
Caddy
jobs.example.com {
reverse_proxy localhost:6390
}
Graceful Shutdown
mininq handles SIGTERM and SIGINT:
- Stops accepting new HTTP connections
- Waits for in-flight jobs to complete (up to their individual timeouts)
- Stops background tasks (reaper, scheduler, cleanup)
- Closes database connections
Set TimeoutStopSec in systemd or stop_grace_period in Docker Compose to match your longest expected job timeout.
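With plain docker run, the equivalent knob is the --stop-timeout flag; for example, to give in-flight jobs up to two minutes after SIGTERM (image name as built above):
docker run -d -p 6390:6390 -v mininq-data:/data --stop-timeout 120 mininq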
Backup
mininq stores everything in a single SQLite file. Back it up safely while the server is running:
sqlite3 /var/lib/mininq/mininq.db ".backup /backups/mininq-$(date +%F).db"
The .backup command uses SQLite’s online backup API, which is safe to run against a live WAL-mode database.
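To take this backup nightly, a crontab entry along these lines works (paths as in the example above; note that % must be escaped in crontab):
# Nightly backup at 02:15
15 2 * * * sqlite3 /var/lib/mininq/mininq.db ".backup /backups/mininq-$(date +\%F).db"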