Async Processing Glossary
Key terms and concepts in task queues, background jobs, and async processing.
A
API Chaining in Async Workflows
API chaining is the practice of calling multiple APIs in sequence, where each response feeds into the next request, forming a dependent workflow.
Async (Asynchronous)
Asynchronous programming allows tasks to run independently without blocking the main execution thread, enabling non-blocking I/O and concurrent task processing.
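A minimal sketch using Python's asyncio (the function names are illustrative): two I/O-bound operations run concurrently instead of one after the other.

```python
import asyncio

async def fetch(name: str, delay: float) -> str:
    # Stand-in for non-blocking I/O (an HTTP call, a database query, ...).
    await asyncio.sleep(delay)
    return f"{name} done"

async def main() -> None:
    # Both coroutines run concurrently; total time is ~1s, not ~2s.
    results = await asyncio.gather(fetch("a", 1.0), fetch("b", 1.0))
    print(results)

asyncio.run(main())
```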
B
Background Job
A background job is a task that runs outside the main request-response cycle, allowing applications to handle time-consuming operations without blocking users.
Backpressure
Backpressure is a flow control mechanism where a system signals upstream producers to slow down when it can't keep up with incoming work.
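An illustrative Python sketch: a bounded queue applies backpressure by blocking the producer whenever the consumer falls behind (the queue size and sleep times are arbitrary).

```python
import queue
import threading
import time

# A bounded queue: put() blocks once 10 items are waiting,
# which slows the producer down to the consumer's pace.
work = queue.Queue(maxsize=10)

def producer() -> None:
    for i in range(100):
        work.put(i)  # blocks here when the queue is full (backpressure)

def consumer() -> None:
    while True:
        item = work.get()
        time.sleep(0.05)  # simulate slow processing
        work.task_done()

threading.Thread(target=consumer, daemon=True).start()
producer()
work.join()  # wait until every queued item has been processed
```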
Bulkhead Pattern for Fault Isolation
The bulkhead pattern isolates components into separate resource pools so a failure in one area cannot exhaust resources needed by others.
C
Callback in Async Processing
A callback is a function or URL that is invoked when an asynchronous operation completes, enabling non-blocking workflows and event-driven architectures.
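A small Python sketch of the function-callback form: the on_done handler is hypothetical and is invoked once the asynchronous operation finishes, while the caller keeps working.

```python
import asyncio

def on_done(task: asyncio.Task) -> None:
    # Invoked by the event loop once the operation completes.
    print("callback received:", task.result())

async def long_operation() -> str:
    await asyncio.sleep(1)
    return "report generated"

async def main() -> None:
    task = asyncio.create_task(long_operation())
    task.add_done_callback(on_done)  # register the callback without blocking
    await asyncio.sleep(2)           # the main flow keeps running meanwhile

asyncio.run(main())
```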
Cascade Failure in Distributed Systems
A cascade failure occurs when one failing component overwhelms shared resources and causes other components to fail in a chain reaction.
Circuit Breaker
A circuit breaker is a resilience pattern that stops calling a failing service after repeated failures, preventing cascading outages across your system.
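A simplified Python sketch of the idea, with the thresholds and half-open behavior pared down: after repeated failures the breaker rejects calls outright until a cooldown has elapsed.

```python
import time

class CircuitBreaker:
    """Simplified: opens after N consecutive failures, then rejects
    calls until a cooldown passes, when one trial call is allowed."""

    def __init__(self, max_failures: int = 3, reset_after: float = 30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = 0.0

    def call(self, func, *args, **kwargs):
        if self.failures >= self.max_failures:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: not calling failing service")
            self.failures = 0  # half-open: allow a single trial call
        try:
            result = func(*args, **kwargs)
            self.failures = 0  # success closes the circuit
            return result
        except Exception:
            self.failures += 1
            self.opened_at = time.monotonic()
            raise
```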
Cold Start
A cold start is the initialization delay when a serverless function runs for the first time or after being idle, adding latency before processing.
Concurrency in Task Processing
Concurrency is the ability to process multiple tasks or requests simultaneously, controlled by limits to prevent resource exhaustion.
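A short Python sketch using an asyncio semaphore to cap how many tasks run at once (the limit of 5 is arbitrary):

```python
import asyncio

async def process(item: int, limit: asyncio.Semaphore) -> None:
    async with limit:
        await asyncio.sleep(0.1)  # stand-in for real work (I/O, an API call, ...)
        print("processed", item)

async def main() -> None:
    # Allow at most 5 tasks to run at the same time; the rest wait their turn.
    limit = asyncio.Semaphore(5)
    await asyncio.gather(*(process(i, limit) for i in range(50)))

asyncio.run(main())
```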
Connection Pool
A connection pool is a cache of reusable database or service connections that eliminates the overhead of establishing a new connection for each request.
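An illustrative Python sketch of the idea; the create_connection factory and the commented usage are hypothetical stand-ins for a real database driver.

```python
import queue

class ConnectionPool:
    """Simplified pool: connections are created once up front and reused,
    instead of opening a new one for every request."""

    def __init__(self, create_connection, size: int = 5):
        self._pool: queue.Queue = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(create_connection())

    def acquire(self):
        return self._pool.get()  # blocks if every connection is in use

    def release(self, conn) -> None:
        self._pool.put(conn)

# Hypothetical usage with a driver's connect() function:
# pool = ConnectionPool(lambda: db_driver.connect("postgres://..."), size=5)
# conn = pool.acquire()
# try:
#     conn.execute("SELECT 1")
# finally:
#     pool.release(conn)
```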
Cron Job in Task Scheduling
A cron job is a time-based scheduler that runs tasks automatically at specified intervals, commonly used for recurring background operations.
G
Gateway Timeout
A gateway timeout (HTTP 504) occurs when a reverse proxy or load balancer does not receive a timely response from an upstream server.
Graceful Degradation in Workflows
Graceful degradation lets a system keep operating with reduced functionality when components fail, protecting core behavior from partial outages.
L
Latency in API Performance
Latency is the time delay between sending a request and receiving its response, a key metric for API performance and user experience.
Load Shedding for System Protection
Load shedding intentionally drops low-priority requests under extreme load to keep a system stable and protect its most important operations.
P
Payload in Task Queues
A payload is the data or parameters sent along with a task or API request, containing the information needed to execute the work.
Polling for Task Status
Polling is the practice of repeatedly checking a resource's status at regular intervals to detect changes or task completion.
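A minimal Python sketch: get_status is assumed to be some function that checks the resource (for example, a job-status endpoint), and the loop gives up after a deadline.

```python
import time

def wait_for_completion(get_status, job_id: str,
                        interval: float = 2.0, timeout: float = 60.0) -> str:
    """Poll get_status(job_id) until the job reaches a terminal state."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status(job_id)  # e.g. a call to a GET /jobs/{id} endpoint
        if status in ("succeeded", "failed"):
            return status
        time.sleep(interval)  # wait before checking again
    raise TimeoutError(f"job {job_id} did not finish within {timeout}s")
```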
R
Rate Limiting
Rate limiting controls the number of requests a client can make to an API within a time window, protecting services from abuse and overload.
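One common implementation is a token bucket; this Python sketch is simplified and not tied to any particular framework.

```python
import time

class TokenBucket:
    """Simplified token bucket: allows roughly `rate` requests per second,
    with short bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens based on elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over the limit: caller would typically return HTTP 429
```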
Retry in Distributed Systems
A retry is an automatic attempt to re-execute a failed task or API call, essential for handling transient failures in distributed systems.
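A Python sketch of retries with exponential backoff and jitter; the attempt count and delays are illustrative.

```python
import random
import time

def retry(func, attempts: int = 5, base_delay: float = 0.5):
    """Retry transient failures with exponential backoff plus jitter."""
    for attempt in range(attempts):
        try:
            return func()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the error
            # Delays of 0.5s, 1s, 2s, 4s ... plus jitter to avoid thundering herds.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```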
S
Saga Pattern for Distributed Transactions
The saga pattern manages data consistency across services using a sequence of local transactions with compensating actions for rollback.
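A schematic Python sketch: each step pairs an action with a compensating action, and the commented order workflow is purely hypothetical.

```python
def run_saga(steps):
    """Each step is a (do, undo) pair of callables. If a later step fails,
    previously completed steps are compensated in reverse order."""
    done = []
    try:
        for do, undo in steps:
            do()
            done.append(undo)
    except Exception:
        for undo in reversed(done):
            undo()  # compensating action, e.g. refund a charged payment
        raise

# Hypothetical order workflow: reserve stock, charge card, schedule shipping.
# run_saga([(reserve_stock, release_stock),
#           (charge_card, refund_card),
#           (create_shipment, cancel_shipment)])
```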
Serverless Function
A serverless function is a cloud-hosted unit of code that runs on demand, scales automatically, and is billed per execution, typically with strict timeout limits.
T
Task Chaining
Task chaining is a pattern where asynchronous tasks are linked sequentially, with each task's output feeding as input to the next task in a workflow.
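An illustrative Python sketch in which each step's output feeds the next; the extract/enrich/notify names are made up.

```python
import asyncio

async def extract(order_id: int) -> dict:
    return {"order_id": order_id, "items": 3}

async def enrich(order: dict) -> dict:
    return {**order, "total": 42.0}

async def notify(order: dict) -> str:
    return f"notified customer for order {order['order_id']}"

async def main() -> None:
    # Each task's output becomes the next task's input.
    order = await extract(1001)
    order = await enrich(order)
    print(await notify(order))

asyncio.run(main())
```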
Task Queue
A task queue distributes work across processes or machines, letting apps offload time-consuming operations to background workers.
Throttling in API Systems
Throttling is a technique that rejects or delays requests exceeding a defined rate to protect services from overload and ensure fair resource usage.
Throughput
Throughput is the number of tasks or requests a system can process per unit of time, a key metric for measuring capacity and scalability.
Timeout in API Requests
A timeout is the maximum duration allowed for an operation to complete before it is automatically cancelled or marked as failed.
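A minimal Python sketch using asyncio.wait_for; the 3-second limit and the slow upstream call are illustrative.

```python
import asyncio

async def call_upstream() -> str:
    await asyncio.sleep(10)  # stand-in for a slow dependency
    return "ok"

async def main() -> None:
    try:
        # Cancel the operation if it takes longer than 3 seconds.
        result = await asyncio.wait_for(call_upstream(), timeout=3.0)
    except asyncio.TimeoutError:
        result = "timed out: fall back, retry, or report failure"
    print(result)

asyncio.run(main())
```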
W
Webhook for Event Delivery
A webhook is an HTTP callback that delivers real-time data to other applications when a specific event occurs, enabling event-driven integrations.
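A minimal Python sketch of delivering an event to a subscriber's callback URL; the URL and payload are hypothetical, and real deliveries usually add signing and retries.

```python
import json
import urllib.request

def deliver_webhook(url: str, event: dict) -> int:
    """POST the event payload to the subscriber's callback URL."""
    body = json.dumps(event).encode("utf-8")
    req = urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status  # a 2xx status means the receiver accepted the event

# deliver_webhook("https://example.com/hooks/order-created",
#                 {"event": "order.created", "order_id": 1001})
```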
Worker in Background Processing
A worker is a process that picks up tasks from a queue and executes them, running independently from the main application to handle background processing.
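A small Python sketch of workers pulling tasks from an in-process queue; production systems typically hand tasks off through a broker such as Redis or RabbitMQ instead, and the example tasks here are placeholders.

```python
import queue
import threading

tasks: queue.Queue = queue.Queue()

def worker() -> None:
    # Runs independently of the main application: pull a task, execute it.
    while True:
        job = tasks.get()
        try:
            job()  # each task is just a callable in this sketch
        finally:
            tasks.task_done()

# Start a small pool of background workers.
for _ in range(4):
    threading.Thread(target=worker, daemon=True).start()

# The main application only enqueues work and returns immediately.
tasks.put(lambda: print("sending welcome email"))
tasks.put(lambda: print("generating PDF report"))
tasks.join()  # block until every queued task has been processed
```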