
Serverless functions are designed to start fast, run briefly, and shut down. Most platforms enforce execution limits between 10 seconds and 15 minutes. When your function calls a slow API, processes a large file, or waits for a third-party service, the platform kills the process mid-execution with no warning and no result.

This is not a bug. It is a fundamental constraint of the serverless model. This guide explains why it happens and how to architect around the limitation.

Step 1: Understand Serverless Execution Limits

Every serverless platform enforces a maximum execution time:

Platform               | Default Timeout  | Maximum Timeout
Vercel (Hobby)         | 10 seconds       | 10 seconds
Vercel (Pro)           | 15 seconds       | 300 seconds
Netlify Functions      | 10 seconds       | 26 seconds
AWS Lambda             | 3 seconds        | 15 minutes
Google Cloud Functions | 60 seconds       | 9 minutes
Cloudflare Workers     | 30 seconds (CPU) | 30 seconds

These limits exist because serverless functions are billed per millisecond. The platform needs to reclaim resources fast. A function that runs for 20 minutes blocks a compute slot and drains the platform’s capacity.

What happens when you hit the limit:

  • The function terminates immediately
  • Any in-progress HTTP request is dropped
  • No response reaches the caller
  • Database transactions may be left in an inconsistent state
  • No error returns to your application; the request simply vanishes
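Conceptually, the platform races your handler against a hard deadline. A minimal Node.js sketch of that race (illustrative only; `withDeadline` is not a real platform API, and the timings are scaled down from seconds to milliseconds):

```javascript
// Simulate a platform-enforced execution limit with Promise.race:
// whichever promise settles first wins, and the loser's result is discarded.
const withDeadline = (work, ms) =>
  Promise.race([
    work,
    new Promise((_, reject) =>
      setTimeout(() => reject(new Error('FUNCTION_TIMED_OUT')), ms)
    ),
  ]);

// A handler that needs 200ms against a 50ms "limit": the caller
// never sees 'done', only the timeout.
const slowHandler = new Promise((resolve) =>
  setTimeout(() => resolve('done'), 200)
);

withDeadline(slowHandler, 50).catch((err) => console.error(err.message));
```

The real platform does the same thing, except the "rejection" is your process being killed, so there is no catch block to land in.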

Step 2: Identify Tasks That Exceed Timeout Limits

These workloads commonly fail in serverless environments:

API calls to slow services:

// This will time out on Vercel (Hobby) if the video service takes > 10s
export async function POST(req) {
  const { url } = await req.json();
  const result = await fetch('https://video-api.example.com/convert', {
    method: 'POST',
    body: JSON.stringify({ videoUrl: url }),
  });
  // Function killed before the response arrives
  return Response.json(await result.json());
}

File processing:

  • PDF generation from complex templates
  • Image resizing of large files
  • CSV parsing and transformation of large datasets

Third-party integrations:

  • Payment processing with slow gateways
  • Shipping rate calculations across multiple carriers
  • AI model inference (image generation, LLM calls)

Chained API calls:

  • Three sequential API calls at 5 seconds each = 15 seconds total
  • Any single slow response pushes the chain past the limit
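The arithmetic is easy to verify. In the sketch below, the 5-second calls are scaled down to 50ms stand-ins; sequential awaits add up, while `Promise.all` (only safe when the calls are independent) is bounded by the slowest call:

```javascript
const delay = (ms, value) =>
  new Promise((resolve) => setTimeout(() => resolve(value), ms));

async function sequentialChain() {
  const start = Date.now();
  await delay(50, 'inventory'); // stands in for a 5s API call
  await delay(50, 'payment');
  await delay(50, 'shipping');
  return Date.now() - start; // roughly 150ms: latencies sum
}

async function parallelCalls() {
  const start = Date.now();
  await Promise.all([delay(50, 'a'), delay(50, 'b'), delay(50, 'c')]);
  return Date.now() - start; // roughly 50ms: bounded by the slowest call
}
```

Parallelizing only helps when the calls do not depend on each other; a true dependency chain (charge the card only after the inventory check) still sums, which is why Step 3 moves the whole chain out of the function.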

Step 3: Offload Long-Running Work to a Task Queue

The solution is to separate the request from the execution. Your serverless function should do one thing: create a task and return at once.

// Serverless function - returns in < 100ms
export async function POST(req) {
  const { url } = await req.json();
  const { task } = await aq.tasks.create({
    targetUrl: 'https://your-worker.com/api/convert-video',
    payload: { videoUrl: url, format: 'mp4' },
    timeout: 300, // 5 minutes - no serverless limit applies
    maxRetries: 3,
    onCompleteUrl: 'https://your-app.com/api/video-done',
  });
  // Return immediately with the task ID
  return Response.json({ taskId: task.id, status: 'processing' });
}

The task queue calls your target URL from its own infrastructure, free from serverless timeout limits. It waits as long as needed, retries on failure, and stores the result.

Before (breaks):

User -> Serverless Function -> Slow API (20s) -> TIMEOUT

After (works):

User -> Serverless Function -> Create Task (50ms) -> Return
                                    |
                                    v
                               Task Queue -> Slow API (20s) -> Store Result

Step 4: Handle Results Asynchronously

Since your serverless function returns before the task finishes, you need a way to retrieve the result. Three options exist:

Option A: onComplete webhook

app.post('/api/video-done', async (req, res) => {
  const { task } = req.body;
  if (task.status === 'completed') {
    await db.videos.update(task.payload.videoUrl, {
      status: 'ready',
      outputUrl: task.result.body,
    });
  }
  res.json({ received: true });
});

Option B: Poll from the client

// Frontend polls for status
const checkStatus = async (taskId) => {
  const res = await fetch(`/api/task-status?id=${taskId}`);
  const { status } = await res.json();
  if (status === 'completed') {
    // Show the result
  } else {
    setTimeout(() => checkStatus(taskId), 3000); // try again in 3s
  }
};
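The `/api/task-status` route the client polls is thin: look the task up and report its state. A sketch with the lookup injected as a function (in practice that would be your queue SDK's task-fetch call, which is an assumption here, as is the response shape):

```javascript
// Express-style handler factory; `getTask` is the assumed lookup.
function makeStatusHandler(getTask) {
  return async (req, res) => {
    const task = await getTask(req.query.id);
    if (!task) {
      return res.status(404).json({ error: 'unknown task' });
    }
    // Expose only what the client needs to render progress
    res.json({ status: task.status, result: task.result ?? null });
  };
}
```

Keeping the handler free of queue-specific imports also makes it trivial to unit-test with a stubbed lookup.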

Option C: Wait-for-signal for external dependencies

const { task, signalToken } = await aq.tasks.create({
  targetUrl: 'https://your-app.com/api/start-payment',
  payload: { orderId: 'order_789' },
  waitForSignal: true,
  maxWaitTime: 3600, // wait up to an hour for the signal
});
// Signal arrives later when the payment processor calls back
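The other half of wait-for-signal is the endpoint the payment processor calls back, which forwards the stored token to the queue. A sketch with the delivery injected (`sendSignal` stands in for whatever signal call your queue exposes; the route shape and field names are assumptions):

```javascript
// Express-style factory: the processor's callback resumes the waiting task.
function makePaymentCallback(sendSignal) {
  return async (req, res) => {
    const { signalToken, status } = req.body;
    // Deliver the signal; the queued task stops waiting and completes
    await sendSignal(signalToken, { paid: status === 'succeeded' });
    res.json({ received: true });
  };
}
```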

Step 5: Avoid Common Mistakes

Mistake: increasing the timeout limit

Bumping your Vercel timeout to 300 seconds is a band-aid. You remain limited to 5 minutes, you pay more per invocation, and slow third-party APIs can still break your flow.

Mistake: running background work after sending the response

// DANGEROUS - the function may be killed before this completes
app.post('/api/upload', async (req, res) => {
  res.json({ status: 'ok' });
  // This runs after the response, but the platform may freeze or kill it
  await processVideo(req.body.url); // unreliable
});

Serverless platforms may freeze or terminate the function right after the response leaves. Do not rely on post-response execution.

Mistake: chaining slow calls in a single function

// BAD - total time = sum of all calls
const inventory = await checkInventory(orderId); // 3s
const payment = await chargeCard(orderId); // 5s
const shipping = await createShipment(orderId); // 4s
// Total: 12s - dangerously close to or past the limit

Instead, create separate tasks for each step and chain them with onCompleteUrl.
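One way to wire that chain: each step's completion webhook enqueues the next step, so no single function ever waits for more than its own slice of work. A sketch (the step names and `createTask` are assumptions, standing in for your queue's create call):

```javascript
const steps = ['check-inventory', 'charge-card', 'create-shipment'];

// Called from the onCompleteUrl webhook of each step.
function nextStepHandler(createTask) {
  return async (completedTask) => {
    const i = steps.indexOf(completedTask.payload.step);
    if (i === -1 || i === steps.length - 1) return null; // chain finished
    return createTask({
      targetUrl: `https://your-worker.com/api/${steps[i + 1]}`,
      payload: { orderId: completedTask.payload.orderId, step: steps[i + 1] },
    });
  };
}
```

Each invocation now does one fast thing (enqueue the next task), so a slow payment gateway only delays its own step instead of blowing the whole function's budget.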

Mistake: ignoring partial failures

If your function times out after writing to the database but before sending the confirmation email, your data lands in an inconsistent state. Use idempotent handlers and status flags to make operations resumable.
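A minimal sketch of the status-flag idea (an in-memory set stands in for a database column; in production the check-and-set must itself be atomic, e.g. a conditional UPDATE):

```javascript
const processed = new Set(); // stands in for a `processed_at` DB column

async function handleCompletion(task) {
  if (processed.has(task.id)) return 'skipped'; // re-delivery: no-op
  // ...do the real work exactly once (update rows, send the email)...
  processed.add(task.id);
  return 'handled';
}
```

Because retries and duplicate webhook deliveries are normal in this architecture, every handler should tolerate being called twice with the same task.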