Mastering the Retry Pattern with Higher-Order Functions in C#, Python, and React
November 16, 2024 · 10 min read
C#, .NET, Python, React, TypeScript, Design Patterns, Resilience
When building resilient applications, you often need to wrap database calls or API requests in retry logic. Instead of copy-pasting a try-catch block around every single call, you can use a higher-order function to write retry logic once and reuse it across your entire application.
This pattern changed the way I think about error handling, and it translates beautifully across languages.
What Is a Higher-Order Function?
A higher-order function is a function that either takes another function as a parameter, or returns a function. This is one of the most powerful concepts in programming, and every modern language supports it.
Think of it this way: instead of passing the result of an operation, you pass the instructions for how to perform it. The receiving function can then decide when and how many times to execute those instructions.
This is exactly what makes the retry pattern work.
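To make this concrete before diving into C#, here is a minimal Python sketch (names are illustrative) of a higher-order function: it receives the instructions, not the result, so it decides when and how often to run them.

```python
def run_twice(operation):
    """A higher-order function: it takes a function as a parameter
    and decides for itself when and how often to call it."""
    return [operation(), operation()]

# We pass the recipe (a function), not the finished meal (its result).
results = run_twice(lambda: "pinged the API")
print(results)  # → ['pinged the API', 'pinged the API']
```

A retry handler is the same shape: it receives an operation and calls it again on failure.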
Understanding Func<Task> in C#
In C#, Func<Task> is a delegate — a type-safe reference to a method. It represents a function that:
- Takes no parameters
- Returns a Task (meaning it is an asynchronous operation)
Here is a quick reference for the built-in delegate types:
| Delegate | Signature | Use Case |
|---|---|---|
| Action | void Method() | Fire-and-forget callbacks |
| Action<T> | void Method(T arg) | Callbacks with parameters |
| Func<TResult> | TResult Method() | Synchronous operations that return a value |
| Func<Task> | Task Method() | Async operations (void return) |
| Func<Task<TResult>> | Task<TResult> Method() | Async operations that return a value |
Think of Func<Task> as passing a recipe instead of the finished meal. You are not passing the result of the database call — you are passing the instructions on how to perform the call so that the retry handler can execute them whenever it needs to.
The Generic Retry Handler
Here is a reusable retry handler with exponential backoff:
/// <summary>
/// Executes any asynchronous operation with exponential backoff retry logic.
/// </summary>
private async Task<bool> ExecuteWithRetry(
Func<Task> action,
string actionName,
string entityId)
{
int attempt = 0;
int delayMs = 1000;
const int maxAttempts = 3;
while (attempt < maxAttempts)
{
try
{
await action();
_logger.LogInformation(
"Successfully completed {Action} for {Id}",
actionName, entityId);
return true;
}
catch (Exception ex)
{
attempt++;
if (attempt >= maxAttempts)
{
_logger.LogError(ex,
"Final failure for {Action} on {Id} after {n} attempts.",
actionName, entityId, maxAttempts);
return false;
}
_logger.LogWarning(
"Attempt {n} failed for {Action}. Retrying in {ms}ms...",
attempt, actionName, delayMs);
await Task.Delay(delayMs);
delayMs *= 2; // Exponential backoff
}
}
return false;
}
How to Use It
Because we use Func<Task>, we can pass any async method into this handler using a lambda expression () => ...:
// Creating a record
await ExecuteWithRetry(
async () => await _repository.CreateAsync(myObject),
"CreateRecord",
myObject.Id
);
// Deleting a record
await ExecuteWithRetry(
() => _repository.DeleteAsync(id),
"DeleteRecord",
id
);
// Calling an external API
await ExecuteWithRetry(
() => _httpClient.PostAsJsonAsync("/api/orders", order),
"SubmitOrder",
order.Id
);
One handler. Any operation. That is the power of higher-order functions.
Why This Is Not Recursion
It is a common misconception that retry logic is recursive.
- Recursion happens when a method calls itself, creating a new stack frame each time.
- This pattern is iterative. It uses a while loop to invoke a passed-in delegate. The ExecuteWithRetry method stays at the same level of the call stack, simply calling the "recipe" again if it fails.
No risk of stack overflow, no growing call chain — just a clean loop.
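The contrast is easiest to see side by side. A Python sketch with illustrative names: the recursive version adds a stack frame per failed attempt, while the iterative version (the shape used throughout this post) stays flat.

```python
def retry_recursive(operation, attempts_left=3):
    """Recursive: each failed attempt adds a new stack frame."""
    try:
        return operation()
    except Exception:
        if attempts_left <= 1:
            raise
        return retry_recursive(operation, attempts_left - 1)

def retry_iterative(operation, max_attempts=3):
    """Iterative: one stack frame; a plain loop re-invokes the recipe."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise
```

Both behave identically for small attempt counts; the iterative form is simply the cleaner fit for a bounded loop with backoff.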
Adding Jitter to Prevent Thundering Herds
The basic version above has a subtle problem. If 1,000 clients all fail at the same instant, they all compute the same backoff delays and retry in lockstep — a thundering herd. Jitter adds randomness to break up that synchronization.
AWS recommends full jitter as the most effective strategy:
// Note: a shared Random instance is not thread-safe under concurrent access.
// On .NET 6+ prefer the thread-safe Random.Shared instead of a static field.
private static readonly Random _jitter = new();
private async Task<bool> ExecuteWithRetry(
Func<Task> action,
string actionName,
string entityId)
{
int attempt = 0;
const int maxAttempts = 3;
const int baseDelayMs = 1000;
const int maxDelayMs = 30000;
while (attempt < maxAttempts)
{
try
{
await action();
_logger.LogInformation(
"Successfully completed {Action} for {Id}",
actionName, entityId);
return true;
}
catch (Exception ex)
{
attempt++;
if (attempt >= maxAttempts)
{
_logger.LogError(ex,
"Final failure for {Action} on {Id} after {n} attempts.",
actionName, entityId, maxAttempts);
return false;
}
// Full jitter: random delay between 0 and the exponential ceiling
int expDelay = Math.Min(baseDelayMs * (int)Math.Pow(2, attempt), maxDelayMs);
int jitteredDelay = _jitter.Next(0, expDelay);
_logger.LogWarning(
"Attempt {n} failed for {Action}. Retrying in {ms}ms...",
attempt, actionName, jitteredDelay);
await Task.Delay(jitteredDelay);
}
}
return false;
}
Production Alternative: Polly
For production .NET applications, consider Polly — the standard resilience library recommended by Microsoft. It handles exponential backoff, jitter, circuit breakers, and cancellation tokens out of the box:
using Polly;
using Polly.Retry;
var pipeline = new ResiliencePipelineBuilder()
.AddRetry(new RetryStrategyOptions
{
BackoffType = DelayBackoffType.Exponential,
UseJitter = true,
MaxRetryAttempts = 3,
Delay = TimeSpan.FromSeconds(1),
MaxDelay = TimeSpan.FromSeconds(30),
})
.Build();
await pipeline.ExecuteAsync(async token =>
{
await _repository.CreateAsync(myObject);
}, cancellationToken);
When to use Polly vs. hand-rolled
A hand-rolled retry is great for learning and simple use cases. For production services — especially those calling external APIs or running in distributed systems — Polly gives you battle-tested jitter, circuit breakers, telemetry, and IHttpClientFactory integration that are easy to get wrong by hand.
The Same Pattern in Python
Python's first-class functions make this pattern even more concise. Here is a hand-rolled version:
import asyncio
import random
async def retry_async(
operation,
max_retries: int = 3,
base_delay: float = 1.0,
max_delay: float = 30.0,
):
"""Retry an async operation with exponential backoff and full jitter."""
for attempt in range(max_retries + 1):
try:
return await operation()
except Exception as e:
if attempt == max_retries:
raise
exp_delay = min(base_delay * (2 ** attempt), max_delay)
jittered_delay = random.uniform(0, exp_delay)
print(f"Attempt {attempt + 1} failed: {e}. "
f"Retrying in {jittered_delay:.2f}s...")
await asyncio.sleep(jittered_delay)
Usage looks almost identical to the C# version:
# Pass any async function as a lambda
result = await retry_async(lambda: db.fetch_user(user_id))
await retry_async(lambda: http_client.post("/api/orders", json=order))
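To see the handler in action end to end, here is a self-contained run against a deliberately flaky operation. The handler from above is repeated so the snippet runs standalone, the failure count is contrived for the demo, and base_delay is shrunk so it finishes quickly.

```python
import asyncio
import random

async def retry_async(operation, max_retries=3, base_delay=1.0, max_delay=30.0):
    # Same handler as above, repeated so this snippet is self-contained.
    for attempt in range(max_retries + 1):
        try:
            return await operation()
        except Exception:
            if attempt == max_retries:
                raise
            exp_delay = min(base_delay * (2 ** attempt), max_delay)
            await asyncio.sleep(random.uniform(0, exp_delay))

calls = 0

async def flaky_fetch():
    """Fails twice, then succeeds — standing in for a blippy network call."""
    global calls
    calls += 1
    if calls < 3:
        raise ConnectionError("transient blip")
    return {"id": 42}

result = asyncio.run(retry_async(lambda: flaky_fetch(), base_delay=0.01))
print(result)  # → {'id': 42}, on the third attempt
```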
Production Alternative: Tenacity
Tenacity is the standard Python retry library. It uses decorators — Python's way of wrapping functions:
from tenacity import (
retry,
stop_after_attempt,
wait_random_exponential,
retry_if_exception_type,
before_sleep_log,
)
import logging
import httpx
logger = logging.getLogger(__name__)
@retry(
stop=stop_after_attempt(5),
wait=wait_random_exponential(multiplier=1, max=60),
retry=retry_if_exception_type((httpx.ConnectError, httpx.ReadTimeout)),
before_sleep=before_sleep_log(logger, logging.WARNING),
)
async def fetch_user(user_id: str):
async with httpx.AsyncClient() as client:
response = await client.get(f"https://api.example.com/users/{user_id}")
response.raise_for_status()
return response.json()
The @retry decorator transforms fetch_user into a self-retrying function. No wrapper needed at the call site.
The Same Pattern in React / TypeScript
In TypeScript, functions are first-class citizens just like in Python. Here is a typed retry utility:
interface RetryOptions {
maxRetries?: number;
baseDelay?: number;
maxDelay?: number;
shouldRetry?: (error: unknown) => boolean;
onRetry?: (error: unknown, attempt: number) => void;
}
async function withRetry<T>(
operation: () => Promise<T>,
options: RetryOptions = {}
): Promise<T> {
const {
maxRetries = 3,
baseDelay = 1000,
maxDelay = 30000,
shouldRetry = () => true,
onRetry,
} = options;
for (let attempt = 0; attempt <= maxRetries; attempt++) {
try {
return await operation();
} catch (error) {
if (attempt === maxRetries || !shouldRetry(error)) {
throw error;
}
const expDelay = Math.min(baseDelay * 2 ** attempt, maxDelay);
const jitteredDelay = Math.random() * expDelay;
onRetry?.(error, attempt + 1);
await new Promise((resolve) => setTimeout(resolve, jitteredDelay));
}
}
throw new Error("Unreachable");
}
Using It in a React Hook
function useUserData(userId: string) {
const [data, setData] = useState<User | null>(null);
const [error, setError] = useState<Error | null>(null);
const [loading, setLoading] = useState(true);
  useEffect(() => {
    let cancelled = false; // guards against state updates after unmount or a userId change
    setLoading(true);
    withRetry(
      async () => {
        const response = await fetch(`/api/users/${userId}`);
        if (!response.ok) throw new Error(`HTTP ${response.status}`);
        return response.json() as Promise<User>;
      },
      {
        maxRetries: 3,
        shouldRetry: (err) => {
          // Don't retry client errors
          if (err instanceof Error && err.message.startsWith("HTTP 4")) {
            return false;
          }
          return true;
        },
        onRetry: (err, attempt) => {
          console.warn(`Attempt ${attempt} failed, retrying...`, err);
        },
      }
    )
      .then((user) => { if (!cancelled) setData(user); })
      .catch((err) => { if (!cancelled) setError(err); })
      .finally(() => { if (!cancelled) setLoading(false); });
    return () => { cancelled = true; };
  }, [userId]);
return { data, error, loading };
}
Library alternative
For JavaScript projects, p-retry gives you exponential backoff with jitter in a lightweight package. Throw an AbortError to signal non-retryable failures.
The Pattern Across Languages
| Concept | C# | Python | TypeScript |
|---|---|---|---|
| Function reference | Func<Task> | Callable / lambda | () => Promise<T> |
| Passing behavior | () => _repo.SaveAsync() | lambda: repo.save() | () => repo.save() |
| Async/await | async/await | async/await | async/await |
| Production library | Polly | Tenacity | p-retry |
The syntax differs, but the idea is identical: pass a function, not a result.
When NOT to Retry
Not every failure should be retried. Getting this wrong can make outages worse:
Don't Retry Client Errors (4xx)
| Status | Meaning | Retry? |
|---|---|---|
| 400 Bad Request | Malformed request | No |
| 401 Unauthorized | Invalid credentials | No |
| 403 Forbidden | Insufficient permissions | No |
| 404 Not Found | Resource does not exist | No |
| 429 Too Many Requests | Rate limited | Yes — honor Retry-After header |
| 500+ Server Errors | Transient failures | Yes |
A 400 Bad Request will fail the same way every time. Retrying it just wastes resources.
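The table above can be encoded directly as a retry predicate. A Python sketch (status codes only; a real client should also honor the Retry-After header on 429):

```python
def is_retryable(status: int) -> bool:
    """Retry rate limits and server errors; never retry other client errors."""
    if status == 429:
        return True   # rate limited: retry, ideally after the Retry-After delay
    if 400 <= status < 500:
        return False  # client errors fail identically on every attempt
    return status >= 500  # transient server-side failures

print([s for s in (400, 401, 404, 429, 500, 503) if is_retryable(s)])
# → [429, 500, 503]
```

A predicate like this plugs straight into the shouldRetry hook from the TypeScript version, or tenacity's retry= option.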
Watch Out for Non-Idempotent Operations
An operation is idempotent if performing it multiple times has the same effect as performing it once. Retries are only safe for idempotent operations:
- Safe to retry: GET requests, PUT with a full resource, DELETE by ID
- Dangerous to retry: POST (creates duplicates), incrementing a counter, processing a payment
If you must retry a non-idempotent operation, use idempotency keys — a unique ID per logical request that the server uses to deduplicate.
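As a sketch: the key is generated once per logical request and reused across every retry of it. The Idempotency-Key header name follows the convention popularized by Stripe; your API's header and endpoint names may differ.

```python
import uuid

def make_order_request(order: dict) -> dict:
    """Build a request whose idempotency key survives retries."""
    key = str(uuid.uuid4())  # generated once per logical order, NOT per attempt
    return {
        "url": "/api/orders",            # illustrative endpoint
        "headers": {"Idempotency-Key": key},
        "json": order,
    }

req = make_order_request({"sku": "ABC", "qty": 1})
# Every retry resends this same req object, including the same
# Idempotency-Key, so the server can deduplicate a doubled POST.
```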
Beware of Retry Storms
If your service is already struggling and every client retries 3 times, you have just tripled the load. Combined with the thundering herd effect, this can turn a minor blip into a cascading failure.
Mitigations:
- Jitter is not optional — it is essential
- Circuit breakers stop retries when a service is clearly down
- Retry budgets cap the percentage of requests that can be retries (e.g., stop retrying if more than 10% of recent requests were retries)
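The retry-budget idea can be sketched as a sliding window: before retrying, check that retries make up less than some fraction of recent traffic. A minimal, illustrative (and deliberately not thread-safe) version:

```python
from collections import deque

class RetryBudget:
    """Allow a retry only while retries stay under `ratio` of recent requests."""
    def __init__(self, ratio: float = 0.1, window: int = 100):
        self.ratio = ratio
        self.recent = deque(maxlen=window)  # True means this request was a retry

    def record(self, was_retry: bool) -> None:
        self.recent.append(was_retry)

    def can_retry(self) -> bool:
        if not self.recent:
            return True
        return sum(self.recent) / len(self.recent) < self.ratio

budget = RetryBudget(ratio=0.1, window=100)
for _ in range(95):
    budget.record(False)   # normal traffic
for _ in range(5):
    budget.record(True)    # a few retries: 5% of the window, under budget
print(budget.can_retry())  # → True
for _ in range(10):
    budget.record(True)    # retries now make up 15% of the window
print(budget.can_retry())  # → False
```

When can_retry() returns False, the handler should fail fast instead of retrying, shedding load from an already struggling dependency.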
Retry amplification
If Service A retries 3 times and calls Service B which also retries 3 times, a single failure can generate up to 9 downstream requests. In deep call chains, this multiplies exponentially. Be mindful of where in the stack your retries live.
Key Takeaways
- DRY: Write the retry logic once, reuse it everywhere.
- Clean separation: Your business logic focuses on what to do. The retry handler manages resilience.
- Consistency: Every call in your application behaves the same way during network blips.
- Always add jitter to exponential backoff. Without it, synchronized retries can cause thundering herds.
- Know when not to retry: client errors, non-idempotent operations, and already-overwhelmed services.
- The pattern is universal: Func<Task> in C#, lambdas in Python, arrow functions in TypeScript — same idea, different syntax.
Higher-order functions are one of those concepts that, once you internalize them, show up everywhere. The retry pattern is just the beginning.