Enqueue jobs to a specific function by name with configurable retries, concurrency limits, FIFO ordering, and dead-letter support. All named queues are defined centrally in iii-config.yaml. For help deciding between named and topic-based queues, see When to use which.
Named queues use the Enqueue trigger action. Refer to Trigger Actions to learn more.
FIFO queues enforce ordering within a queue and require a message_group_field to order on. Queues can also set backoff_ms for exponential retry delays; see the steps below. For full configuration options, refer to the Queue worker reference.
From any function, enqueue a job by calling trigger() with TriggerAction.Enqueue and the target queue name. The caller receives an acknowledgement (messageReceiptId) once the engine accepts the job — it does not wait for processing.
When processing order matters — for example, financial transactions for the same account — set type: fifo and specify message_group_field. Jobs sharing the same group value are processed strictly in order.
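For example, a FIFO ledger queue might be declared in iii-config.yaml like this (the field names follow the queue_configs excerpt shown later in this guide; the retry values here are illustrative, not prescribed):

```yaml
queue_configs:
  ledger:
    type: fifo                      # enforce per-group ordering
    message_group_field: account_id # jobs sharing an account_id run in order
    max_retries: 5
    backoff_ms: 1000
```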
Jobs are enqueued and acknowledged immediately — the caller receives a messageReceiptId without waiting for processing. The engine delivers each job to the target function, retries failures with exponential backoff, and routes exhausted jobs to the dead-letter queue. Standard queues process jobs concurrently; FIFO queues guarantee per-group ordering.
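The delivery loop described above can be sketched as a simplified model (plain TypeScript, not the engine's actual implementation; maxRetries and backoffMs mirror the max_retries and backoff_ms configuration fields):

```typescript
type Job = { payload: unknown }

// Simplified model of the engine's per-job delivery loop: try the handler,
// retry with exponential backoff, and dead-letter the job once retries
// are exhausted. The real engine's internals may differ.
async function deliver(
  job: Job,
  handler: (j: Job) => Promise<void>,
  maxRetries: number,
  backoffMs: number,
  deadLetter: Job[],
  sleep: (ms: number) => Promise<void>,
): Promise<boolean> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      await handler(job)
      return true // processed successfully
    } catch {
      if (attempt === maxRetries) break
      // exponential backoff: backoff_ms, then 2x, 4x, ...
      await sleep(backoffMs * 2 ** attempt)
    }
  }
  deadLetter.push(job) // retries exhausted: route to the dead-letter queue
  return false
}
```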
For a detailed comparison of standard and FIFO queue behavior — including processing model, ordering guarantees, and flow diagrams — see the Queue worker reference. For retry and dead-letter flow, see Retry and dead-letter flow.
The most common pattern — an HTTP endpoint accepts a request, responds immediately, and offloads the actual work to a queue. This keeps API response times fast regardless of how long downstream processing takes.
This example uses two trigger actions: Enqueue for payment (reliable, ordered) and email (reliable, parallel), and Void for analytics (best-effort).
Three transactions arrive: two for the same account and one for a different account. The FIFO queue groups them by account_id. The worker processes acct_A jobs strictly in order, while acct_B proceeds independently:
Node / TypeScript
Python
Rust
```typescript
import { registerWorker, TriggerAction } from 'iii-sdk'

const iii = registerWorker(process.env.III_URL ?? 'ws://localhost:49134')

iii.registerFunction('transactions::submit', async (req) => {
  const { account_id, type, amount } = req.body
  const receipt = await iii.trigger({
    function_id: 'ledger::apply',
    payload: { account_id, type, amount },
    action: TriggerAction.Enqueue({ queue: 'ledger' }),
  })
  return { status_code: 202, body: { receiptId: receipt.messageReceiptId } }
})

iii.registerFunction('ledger::apply', async (txn) => {
  const { account_id, type, amount } = txn
  if (type === 'deposit') {
    await db.query('UPDATE accounts SET balance = balance + $1 WHERE id = $2', [amount, account_id])
  } else if (type === 'withdraw') {
    const { rows } = await db.query('SELECT balance FROM accounts WHERE id = $1', [account_id])
    if (rows[0].balance < amount) {
      throw new Error('Insufficient funds')
    }
    await db.query('UPDATE accounts SET balance = balance - $1 WHERE id = $2', [amount, account_id])
  }
  return { applied: true }
})
```
```python
from iii import TriggerAction, register_worker

iii = register_worker("ws://localhost:49134")

def submit_transaction(req):
    account_id = req["body"]["account_id"]
    txn_type = req["body"]["type"]
    amount = req["body"]["amount"]
    receipt = iii.trigger({
        "function_id": "ledger::apply",
        "payload": {"account_id": account_id, "type": txn_type, "amount": amount},
        "action": TriggerAction.Enqueue(queue="ledger"),
    })
    return {"status_code": 202, "body": {"receiptId": receipt["messageReceiptId"]}}

def apply_transaction(txn):
    account_id = txn["account_id"]
    if txn["type"] == "deposit":
        db.execute(
            "UPDATE accounts SET balance = balance + %s WHERE id = %s",
            (txn["amount"], account_id),
        )
    elif txn["type"] == "withdraw":
        balance = db.query("SELECT balance FROM accounts WHERE id = %s", (account_id,))
        if balance < txn["amount"]:
            raise ValueError("Insufficient funds")
        db.execute(
            "UPDATE accounts SET balance = balance - %s WHERE id = %s",
            (txn["amount"], account_id),
        )
    return {"applied": True}

iii.register_function("transactions::submit", submit_transaction)
iii.register_function("ledger::apply", apply_transaction)
```
Because the ledger queue is FIFO with message_group_field: account_id, the deposit for acct_A always completes before the withdrawal. Without FIFO ordering, the withdrawal could execute first and fail with “Insufficient funds” even though the deposit was submitted first.
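The per-group guarantee can be illustrated with a small standalone sketch (plain TypeScript, independent of the SDK): jobs are bucketed by their message_group_field, processing within a bucket is sequential, and buckets are independent of one another.

```typescript
// Standalone illustration of FIFO grouping -- not the engine itself.
type Txn = { account_id: string; type: string; amount: number }

// Bucket jobs by their group field, preserving enqueue order per group.
function groupByField(jobs: Txn[], field: keyof Txn): Map<string, Txn[]> {
  const groups = new Map<string, Txn[]>()
  for (const job of jobs) {
    const key = String(job[field])
    const bucket = groups.get(key) ?? []
    bucket.push(job)
    groups.set(key, bucket)
  }
  return groups
}

const jobs: Txn[] = [
  { account_id: 'acct_A', type: 'deposit', amount: 100 },
  { account_id: 'acct_B', type: 'deposit', amount: 50 },
  { account_id: 'acct_A', type: 'withdraw', amount: 80 },
]

// acct_A's deposit always precedes its withdrawal within its bucket;
// acct_B's single job can be processed in parallel with either.
const groups = groupByField(jobs, 'account_id')
```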
A marketing system sends thousands of emails. The SMTP provider has a rate limit. A standard queue with low concurrency prevents overloading the provider while retrying transient failures.
iii-config.yaml (excerpt)
```yaml
queue_configs:
  bulk-email:
    max_retries: 5
    concurrency: 3
    type: standard
    backoff_ms: 5000
```
Three workers pull from the queue concurrently. When one hits a rate limit, it retries with exponential backoff while the others continue:
```rust
use iii_sdk::{
    register_worker, InitOptions, RegisterFunction, TriggerAction, TriggerRequest,
};
use serde_json::{json, Value};

let iii = register_worker("ws://localhost:49134", InitOptions::default());
let iii_clone = iii.clone();

let reg = RegisterFunction::new_async("campaigns::launch", move |campaign: Value| {
    let iii = iii_clone.clone();
    async move {
        let recipients = campaign["recipients"].as_array().unwrap();
        for recipient in recipients {
            iii.trigger(TriggerRequest {
                function_id: "emails::send".into(),
                payload: json!({
                    "to": recipient["email"],
                    "subject": campaign["subject"],
                    "body": campaign["body"],
                }),
                action: Some(TriggerAction::Enqueue { queue: "bulk-email".into() }),
                timeout_ms: None,
            }).await?;
        }
        Ok(json!({ "enqueued": recipients.len() }))
    }
});

iii.register_function(reg);
```
With concurrency: 3, at most three emails are in-flight at any time. Failed sends retry with exponential backoff (5s, 10s, 20s, 40s, 80s), protecting the SMTP provider from overload.
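The schedule quoted above follows from doubling backoff_ms on each retry; a short sketch makes the arithmetic concrete:

```typescript
// Delay before retry n (0-indexed), doubling from the configured backoff_ms.
const backoffMs = (base: number, attempt: number): number => base * 2 ** attempt

// With backoff_ms: 5000 and max_retries: 5, the retry delays are:
const schedule = Array.from({ length: 5 }, (_, n) => backoffMs(5000, n))
// → [5000, 10000, 20000, 40000, 80000], i.e. 5s, 10s, 20s, 40s, 80s
```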
For adapter options (builtin, RabbitMQ, Redis), scenario-based recommendations, and the full queue configuration reference, see the Queue worker reference.