5 Useful DIY Python Functions for Error Handling
Debugging Python doesn’t need to be complicated. These 5 DIY functions simplify error handling and improve code reliability.

# Introduction
Error handling is often the weak point in otherwise solid code. Issues like missing keys, failed requests, and long-running functions show up often in real projects. Python’s built-in try-except blocks are useful, but they don’t cover many practical cases on their own.
You’ll need to wrap common failure scenarios into small, reusable functions that help handle retries with limits, input validation, and safeguards that prevent code from running longer than it should. This article walks through five error-handling functions you can use in tasks like web scraping, building application programming interfaces (APIs), processing user data, and more.
You can find the code on GitHub.
# Retrying Failed Operations with Exponential Backoff
API calls and network requests fail all the time in real projects. A beginner's approach is to try once, catch the exception, log it, and stop. The better approach is to retry.
Here is where exponential backoff comes in. Instead of hammering a failing service with immediate retries — which only makes things worse — you wait a bit longer between each attempt: 1 second, then 2 seconds, then 4 seconds, and so on.
Let's build a decorator that does this:
```python
import time
import functools
from typing import Callable, Type, Tuple

def retry_with_backoff(
    max_attempts: int = 3,
    base_delay: float = 1.0,
    exponential_base: float = 2.0,
    exceptions: Tuple[Type[Exception], ...] = (Exception,)
):
    """
    Retry a function with exponential backoff.

    Args:
        max_attempts: Maximum number of retry attempts
        base_delay: Initial delay in seconds
        exponential_base: Multiplier for delay (2.0 = double each time)
        exceptions: Tuple of exception types to catch and retry
    """
    def decorator(func: Callable):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            last_exception = None
            for attempt in range(max_attempts):
                try:
                    return func(*args, **kwargs)
                except exceptions as e:
                    last_exception = e
                    if attempt < max_attempts - 1:
                        delay = base_delay * (exponential_base ** attempt)
                        print(f"Attempt {attempt + 1} failed: {e}")
                        print(f"Retrying in {delay:.1f} seconds...")
                        time.sleep(delay)
                    else:
                        print(f"All {max_attempts} attempts failed")
                        raise last_exception
        return wrapper
    return decorator
```
The decorator wraps your function and catches specified exceptions. The key calculation is delay = base_delay * (exponential_base ** attempt). With base_delay=1 and exponential_base=2, your delays are 1s, 2s, 4s, 8s. This gives stressed systems time to recover.
The exceptions parameter lets you specify which errors to retry. You might retry ConnectionError but not ValueError, since connection issues are temporary but validation errors aren't.
Now let's see it in action:
```python
import random

@retry_with_backoff(max_attempts=4, base_delay=0.5, exceptions=(ConnectionError,))
def fetch_user_data(user_id):
    """Simulate an unreliable API."""
    if random.random() < 0.6:  # 60% failure rate
        raise ConnectionError("Service temporarily unavailable")
    return {"id": user_id, "name": "Sara", "status": "active"}

# Watch it retry automatically
result = fetch_user_data(12345)
print(f"Success: {result}")
```
Output (a run that hits failures will also print the retry messages first):

```
Success: {'id': 12345, 'name': 'Sara', 'status': 'active'}
```
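One refinement the decorator above doesn't include is random jitter: if many clients retry the same failing service on identical schedules, they all wake up at once. Here's a minimal sketch of the delay schedule with optional "full jitter" (`backoff_delays` is a name introduced here for illustration, not part of the decorator):

```python
import random

def backoff_delays(max_attempts, base_delay=1.0, exponential_base=2.0, jitter=True):
    """Yield one delay per retry attempt, optionally randomized with full jitter."""
    for attempt in range(max_attempts):
        delay = base_delay * (exponential_base ** attempt)
        # Full jitter: pick a random point between 0 and the exponential delay
        yield random.uniform(0, delay) if jitter else delay

# Deterministic schedule: 1s, 2s, 4s
print(list(backoff_delays(3, jitter=False)))  # [1.0, 2.0, 4.0]
```

With jitter enabled, each delay is drawn uniformly from zero up to the exponential value, which spreads retries out across clients.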
# Validating Input with Composable Rules
User input validation is tedious and repetitive. You check if strings are empty, if numbers are in range, and if emails look valid. Before you know it, you've got nested if-statements everywhere and your code looks like a mess.
Let's build a validation system that's simple to use. First, we need a custom exception:
```python
from typing import Any, Callable, Dict, List, Optional

class ValidationError(Exception):
    """Raised when validation fails."""
    def __init__(self, field: str, errors: List[str]):
        self.field = field
        self.errors = errors
        super().__init__(f"{field}: {', '.join(errors)}")
```
This exception holds multiple error messages. When validation fails, we want to show the user everything that's wrong, not just the first error.
Now here's the validator:
```python
def validate_input(
    value: Any,
    field_name: str,
    rules: Dict[str, Callable[[Any], bool]],
    messages: Optional[Dict[str, str]] = None
) -> Any:
    """
    Validate input against multiple rules.
    Returns the value if valid, raises ValidationError otherwise.
    """
    if messages is None:
        messages = {}

    errors = []
    for rule_name, rule_func in rules.items():
        try:
            if not rule_func(value):
                error_msg = messages.get(
                    rule_name,
                    f"Failed validation rule: {rule_name}"
                )
                errors.append(error_msg)
        except Exception as e:
            errors.append(f"Validation error in {rule_name}: {str(e)}")

    if errors:
        raise ValidationError(field_name, errors)
    return value
```
In the rules dictionary, each rule is just a function that returns True or False. This makes rules composable and reusable.
Let's create some common validation rules:
```python
# Reusable validation rules
def not_empty(value: str) -> bool:
    return bool(value and value.strip())

def min_length(min_len: int) -> Callable:
    return lambda value: len(str(value)) >= min_len

def max_length(max_len: int) -> Callable:
    return lambda value: len(str(value)) <= max_len

def in_range(min_val: float, max_val: float) -> Callable:
    return lambda value: min_val <= float(value) <= max_val
```
Notice how min_length, max_length, and in_range are factory functions. They return validation functions configured with specific parameters. This lets you write min_length(3) instead of creating a new function for every length requirement.
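Following the same factory pattern, you could add a pattern-matching rule. The `matches_pattern` helper below is an illustrative addition, not part of the article's rule set:

```python
import re
from typing import Callable

def matches_pattern(pattern: str) -> Callable:
    """Factory for a rule that checks a value against a regular expression."""
    compiled = re.compile(pattern)
    return lambda value: bool(compiled.fullmatch(str(value)))

# A deliberately loose email check, purely for demonstration
is_email = matches_pattern(r"[^@\s]+@[^@\s]+\.[^@\s]+")
print(is_email("sara@example.com"))  # True
print(is_email("not-an-email"))      # False
```

Because it returns a plain `value -> bool` function, it slots into the `rules` dictionary exactly like `min_length(3)` does.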
Let's validate a username:
```python
try:
    username = validate_input(
        "ab",
        "username",
        {
            "not_empty": not_empty,
            "min_length": min_length(3),
            "max_length": max_length(20),
        },
        messages={
            "not_empty": "Username cannot be empty",
            "min_length": "Username must be at least 3 characters",
            "max_length": "Username cannot exceed 20 characters",
        }
    )
    print(f"Valid username: {username}")
except ValidationError as e:
    print(f"Invalid: {e}")
```
Output:

```
Invalid: username: Username must be at least 3 characters
```
This approach scales well. Define your rules once, compose them however you need, and get clear error messages.
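The `in_range` factory defined earlier plugs into the same `rules` dictionary for numeric fields. This sketch repeats slightly condensed versions of `ValidationError`, `validate_input`, and `in_range` so it runs on its own; the `age` field and its bounds are illustrative:

```python
from typing import Callable, List

class ValidationError(Exception):
    def __init__(self, field: str, errors: List[str]):
        self.field = field
        self.errors = errors
        super().__init__(f"{field}: {', '.join(errors)}")

def validate_input(value, field_name, rules, messages=None):
    """Condensed version of the validator above: collect all failures."""
    messages = messages or {}
    errors = [messages.get(name, f"Failed validation rule: {name}")
              for name, rule in rules.items() if not rule(value)]
    if errors:
        raise ValidationError(field_name, errors)
    return value

def in_range(min_val: float, max_val: float) -> Callable:
    return lambda value: min_val <= float(value) <= max_val

try:
    validate_input(150, "age", {"in_range": in_range(13, 120)},
                   messages={"in_range": "Age must be between 13 and 120"})
except ValidationError as e:
    print(f"Invalid: {e}")  # Invalid: age: Age must be between 13 and 120
```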
# Navigating Nested Dictionaries Safely
Accessing nested dictionaries safely is a common pain point. You get a KeyError when a key doesn't exist, a TypeError when you try to subscript a string, and your code fills up with chains of .get() calls or defensive try-except blocks. Deeply nested JavaScript Object Notation (JSON) responses from APIs make this worse.
Let's build a function that safely navigates nested structures:
```python
from typing import Any, Optional, List, Union

def safe_get(
    data: dict,
    path: Union[str, List[str]],
    default: Any = None,
    separator: str = "."
) -> Any:
    """
    Safely get a value from a nested dictionary.

    Args:
        data: The dictionary to access
        path: Dot-separated path (e.g., "user.address.city") or list of keys
        default: Value to return if path doesn't exist
        separator: Character to split path string (default: ".")

    Returns:
        The value at the path, or default if not found
    """
    # Convert string path to list
    if isinstance(path, str):
        keys = path.split(separator)
    else:
        keys = path

    current = data
    for key in keys:
        try:
            # Handle list indices (convert string to int if numeric)
            if isinstance(current, list):
                try:
                    key = int(key)
                except (ValueError, TypeError):
                    return default
            current = current[key]
        except (KeyError, IndexError, TypeError):
            return default
    return current
```
The function splits the path into individual keys and navigates the nested structure step by step. If any key doesn't exist or if you try to subscript something that isn't subscriptable, it returns the default instead of crashing.
It also handles list indices automatically. If the current value is a list and the key is numeric, it converts the key to an integer.
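Because the separator is a parameter, the same lookup also works for slash-delimited paths, which helps when the keys themselves contain dots. This sketch repeats `safe_get` (same logic, condensed) so it runs on its own; the `config` dictionary is illustrative:

```python
def safe_get(data, path, default=None, separator="."):
    """Same logic as above: walk the path, return default on any miss."""
    keys = path.split(separator) if isinstance(path, str) else path
    current = data
    for key in keys:
        try:
            if isinstance(current, list):
                try:
                    key = int(key)
                except (ValueError, TypeError):
                    return default
            current = current[key]
        except (KeyError, IndexError, TypeError):
            return default
    return current

# Keys contain dots, so use "/" as the path separator instead
config = {"app": {"db.host": "localhost", "db.port": 5432}}
print(safe_get(config, "app/db.port", separator="/"))  # 5432
```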
Here's the companion function for setting values:
```python
def safe_set(
    data: dict,
    path: Union[str, List[str]],
    value: Any,
    separator: str = ".",
    create_missing: bool = True
) -> bool:
    """
    Safely set a value in a nested dictionary.

    Args:
        data: The dictionary to modify
        path: Dot-separated path or list of keys
        value: Value to set
        separator: Character to split path string
        create_missing: Whether to create missing intermediate dicts

    Returns:
        True if successful, False otherwise
    """
    if isinstance(path, str):
        keys = path.split(separator)
    else:
        keys = path

    if not keys:
        return False

    current = data
    # Navigate to the parent of the final key
    for key in keys[:-1]:
        if key not in current:
            if create_missing:
                current[key] = {}
            else:
                return False
        current = current[key]
        if not isinstance(current, dict):
            return False

    # Set the final value
    current[keys[-1]] = value
    return True
```
The safe_set function creates the nested structure as needed and sets the value. This is useful for building dictionaries dynamically.
Let's test both:
```python
# Sample nested data
user_data = {
    "user": {
        "name": "Anna",
        "address": {
            "city": "San Francisco",
            "zip": "94105"
        },
        "orders": [
            {"id": 1, "total": 99.99},
            {"id": 2, "total": 149.50}
        ]
    }
}

# Safe get examples
city = safe_get(user_data, "user.address.city")
print(f"City: {city}")

country = safe_get(user_data, "user.address.country", default="Unknown")
print(f"Country: {country}")

first_order = safe_get(user_data, "user.orders.0.total")
print(f"First order: ${first_order}")

# Safe set example
new_data = {}
safe_set(new_data, "user.settings.theme", "dark")
print(f"Created: {new_data}")
```
Output:

```
City: San Francisco
Country: Unknown
First order: $99.99
Created: {'user': {'settings': {'theme': 'dark'}}}
```
This pattern eliminates defensive programming clutter and makes your code cleaner when working with JSON, configuration files, or any deeply nested data.
# Enforcing Timeouts on Long Operations
Some operations take too long. A database query might hang, a web scraping operation might get stuck on a slow server, or a computation might run forever. You need a way to set a time limit and bail out.
Here's a timeout decorator using threading:
```python
import threading
import functools
from typing import Callable, Optional

class TimeoutError(Exception):
    """Raised when an operation exceeds its timeout.

    Note: this intentionally shadows Python's built-in TimeoutError.
    """
    pass

def timeout(seconds: int, error_message: Optional[str] = None):
    """
    Decorator to enforce a timeout on function execution.

    Args:
        seconds: Maximum execution time in seconds
        error_message: Custom error message for timeout
    """
    def decorator(func: Callable) -> Callable:
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            result = [TimeoutError(
                error_message or f"Operation timed out after {seconds} seconds"
            )]

            def target():
                try:
                    result[0] = func(*args, **kwargs)
                except Exception as e:
                    result[0] = e

            thread = threading.Thread(target=target)
            thread.daemon = True
            thread.start()
            thread.join(timeout=seconds)

            if thread.is_alive():
                raise TimeoutError(
                    error_message or f"Operation timed out after {seconds} seconds"
                )
            if isinstance(result[0], Exception):
                raise result[0]
            return result[0]
        return wrapper
    return decorator
```
This decorator runs your function in a separate thread and uses thread.join(timeout=seconds) to wait. If the thread is still alive after the timeout, we know it took too long and raise TimeoutError.
The function result is stored in a list (mutable container) so the inner thread can modify it. If an exception occurred in the thread, we re-raise it in the main thread.
⚠️ One limitation: The thread continues running in the background even after the timeout. For most use cases this is fine, but for operations with side effects, be careful.
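If you'd rather not manage threads by hand, the standard library's concurrent.futures gives you similar behavior with less code. This is an alternative sketch, not the decorator above; `run_with_timeout` is a name introduced here, and it shares the same limitation that a hung worker keeps running in the background:

```python
import time
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import TimeoutError as FutureTimeoutError

def run_with_timeout(func, seconds, *args, **kwargs):
    """Run func in a worker thread; raise FutureTimeoutError if it overruns."""
    executor = ThreadPoolExecutor(max_workers=1)
    try:
        future = executor.submit(func, *args, **kwargs)
        return future.result(timeout=seconds)
    finally:
        # Don't block waiting for a hung worker; it still runs to completion
        executor.shutdown(wait=False)

try:
    run_with_timeout(time.sleep, 0.1, 0.5)  # sleep(0.5) with a 0.1s budget
except FutureTimeoutError:
    print("Timed out")
```

`Future.result(timeout=...)` raises `concurrent.futures.TimeoutError` when the deadline passes, so you get timeout enforcement without writing any thread plumbing yourself.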
Let's test it:
```python
import time

@timeout(2, error_message="Query took too long")
def slow_database_query():
    """Simulate a slow query."""
    time.sleep(5)
    return "Query result"

@timeout(3)
def fetch_data():
    """Simulate a quick operation."""
    time.sleep(1)
    return {"data": "value"}

# Test timeout
try:
    result = slow_database_query()
    print(f"Result: {result}")
except TimeoutError as e:
    print(f"Timeout: {e}")

# Test success
try:
    data = fetch_data()
    print(f"Success: {data}")
except TimeoutError as e:
    print(f"Timeout: {e}")
```
Output:

```
Timeout: Query took too long
Success: {'data': 'value'}
```
This pattern is essential for building responsive applications. When you're scraping websites, calling external APIs, or running user code, timeouts prevent your program from hanging indefinitely.
# Managing Resources with Automatic Cleanup
Opening files, database connections, and network sockets requires careful cleanup. If an exception occurs, you need to ensure resources are released. Context managers using the with statement handle this, but sometimes you need more control.
Let's build a flexible context manager for automatic resource cleanup:
```python
from contextlib import contextmanager
from typing import Callable, Any, Optional
import traceback

@contextmanager
def managed_resource(
    acquire: Callable[[], Any],
    release: Callable[[Any], None],
    on_error: Optional[Callable[[Exception, Any], None]] = None,
    suppress_errors: bool = False
):
    """
    Context manager for automatic resource acquisition and cleanup.

    Args:
        acquire: Function to acquire the resource
        release: Function to release the resource
        on_error: Optional error handler
        suppress_errors: Whether to suppress exceptions after cleanup
    """
    resource = None
    try:
        resource = acquire()
        yield resource
    except Exception as e:
        if on_error and resource is not None:
            try:
                on_error(e, resource)
            except Exception as handler_error:
                print(f"Error in error handler: {handler_error}")
        if not suppress_errors:
            raise
    finally:
        if resource is not None:
            try:
                release(resource)
            except Exception as cleanup_error:
                print(f"Error during cleanup: {cleanup_error}")
                traceback.print_exc()
```
The managed_resource function is a context manager factory. It takes two required functions: one to acquire the resource and one to release it. The release function always runs in the finally block, guaranteeing cleanup even if exceptions occur.
The optional on_error parameter lets you handle errors before they propagate. This is useful for logging, sending alerts, or attempting recovery. The suppress_errors flag controls whether the exception propagates after the handler runs or is swallowed entirely.
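As a concrete use, here is the pattern applied to a real file handle. The sketch repeats a minimal `managed_resource` (acquire, yield, always release) so it runs on its own; the temporary file path is illustrative:

```python
import os
import tempfile
from contextlib import contextmanager

@contextmanager
def managed_resource(acquire, release):
    """Minimal version of the pattern: acquire, yield, always release."""
    resource = acquire()
    try:
        yield resource
    finally:
        release(resource)

# Write through a managed file handle; close() is guaranteed to run
path = os.path.join(tempfile.mkdtemp(), "notes.txt")
with managed_resource(lambda: open(path, "w"), lambda f: f.close()) as f:
    f.write("hello")

with open(path) as f:
    print(f.read())  # hello
```

For files specifically, a plain `with open(...)` already does this; the payoff of `managed_resource` is that the same shape works for resources whose libraries don't provide a context manager.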
Here's a helper class to demonstrate resource tracking:
```python
class ResourceTracker:
    """Helper class to track resource operations."""
    def __init__(self, name: str, verbose: bool = True):
        self.name = name
        self.verbose = verbose
        self.operations = []

    def log(self, operation: str):
        self.operations.append(operation)
        if self.verbose:
            print(f"[{self.name}] {operation}")

    def acquire(self):
        self.log("Acquiring resource")
        return self

    def release(self):
        self.log("Releasing resource")

    def use(self, action: str):
        self.log(f"Using resource: {action}")
```
Let's test the context manager:
```python
# Example: Operation with error handling
tracker = ResourceTracker("Database")

def error_handler(exception, resource):
    resource.log(f"Error occurred: {exception}")
    resource.log("Attempting rollback")

try:
    with managed_resource(
        acquire=lambda: tracker.acquire(),
        release=lambda r: r.release(),
        on_error=error_handler
    ) as db:
        db.use("INSERT INTO users")
        raise ValueError("Duplicate entry")
except ValueError as e:
    print(f"Caught: {e}")
```
Output:

```
[Database] Acquiring resource
[Database] Using resource: INSERT INTO users
[Database] Error occurred: Duplicate entry
[Database] Attempting rollback
[Database] Releasing resource
Caught: Duplicate entry
```
This pattern is useful for managing database connections, file handles, network sockets, locks, and any resource that needs guaranteed cleanup. It prevents resource leaks and makes your code safer.
# Wrapping Up
Each function in this article addresses a specific error handling challenge: retrying transient failures, validating input systematically, accessing nested data safely, preventing hung operations, and managing resource cleanup.
These patterns show up repeatedly in API integrations, data processing pipelines, web scraping, and user-facing applications.
The techniques here use decorators, context managers, and composable functions to make error handling less repetitive and more reliable. You can drop these functions into your projects as-is or adapt them to your specific needs. They're self-contained, easy to understand, and solve problems you'll run into regularly. Happy coding!
Bala Priya C is a developer and technical writer from India. She likes working at the intersection of math, programming, data science, and content creation. Her areas of interest and expertise include DevOps, data science, and natural language processing. She enjoys reading, writing, coding, and coffee! Currently, she's working on learning and sharing her knowledge with the developer community by authoring tutorials, how-to guides, opinion pieces, and more. Bala also creates engaging resource overviews and coding tutorials.