from celery import shared_task
from django.contrib.auth.models import User  # assuming Django's default User model

@shared_task
def send_email(user_id):
    user = User.objects.get(id=user_id)
    if not user.email:
        raise ValueError('No email found')
    # send email logic here
    return 'Email sent'
By default, if a Celery task raises an unhandled exception and no retry logic is implemented, the task fails immediately: the worker marks it FAILURE and stores the exception details. Celery does not retry automatically unless you opt in, either with explicit self.retry() calls or the autoretry_for decorator option.
from celery import shared_task
from celery.exceptions import MaxRetriesExceededError

@shared_task(bind=True, max_retries=3)
def fetch_data(self):
    try:
        # code that might fail
        ...
    except Exception as exc:
        try:
            # exponential backoff: retries starts at 0, so delays are 1s, 10s, 100s
            raise self.retry(exc=exc, countdown=10 ** self.request.retries)
        except MaxRetriesExceededError:
            # all retries used up; let the task fail
            raise
Using countdown=10 ** self.request.retries makes the delay grow exponentially. Note that self.request.retries is 0 on the first execution, so the delays are 1, 10, and 100 seconds. max_retries=3 caps the number of retries at 3.
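As a quick sanity check, the backoff schedule can be computed without Celery. The backoff_delay helper below is a hypothetical stand-in that mirrors the countdown expression:

```python
def backoff_delay(retries: int) -> int:
    """Delay Celery would apply for countdown=10 ** self.request.retries."""
    # self.request.retries is 0 on the first execution,
    # so the first retry waits 1 second, not 10
    return 10 ** retries

delays = [backoff_delay(n) for n in range(3)]
print(delays)  # → [1, 10, 100]
```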
from celery import shared_task

@shared_task(bind=True)
def process_order(self, order_id):
    try:
        order = Order.objects.get(id=order_id)  # assumes an Order model from your app
        # process order
    except Order.DoesNotExist as e:
        self.retry(exc=e, countdown=5, max_retries=2)
        raise  # unreachable: self.retry() itself raises Retry
Calling self.retry() raises celery.exceptions.Retry, which the worker intercepts to reschedule the task. The retry only happens if that exception propagates: if it is swallowed by a broad except block, or the function returns normally instead, no retry is scheduled. In the code above, self.retry() raises before the trailing raise is ever reached; writing raise self.retry(...) makes that control flow explicit.
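The importance of letting that exception propagate can be shown with a plain-Python sketch. The Retry class here is a simplified stand-in for celery.exceptions.Retry, not the real thing:

```python
class Retry(Exception):
    """Stand-in for celery.exceptions.Retry (simplified for illustration)."""

def fake_retry():
    # like self.retry(): signals a retry by raising an exception
    raise Retry()

def swallows_retry():
    try:
        fake_retry()
    except Exception:  # broad except eats Retry: no retry is scheduled
        pass
    return 'done'

def propagates_retry():
    fake_retry()  # Retry escapes: the worker would reschedule the task

print(swallows_retry())  # → done  (task "succeeded", retry silently lost)

try:
    propagates_retry()
    rescheduled = False
except Retry:
    rescheduled = True
print(rescheduled)  # → True  (the worker would see Retry and reschedule)
```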
from celery import shared_task

# autoretry_for makes Celery retry automatically on the listed exceptions
@shared_task(bind=True, autoretry_for=(RuntimeError,), max_retries=2)
def unreliable_task(self):
    raise RuntimeError('Fail')
When max retries are exceeded, Celery marks the task as FAILURE and stores the exception details. It does not retry further.
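The attempt accounting can be sketched without a broker. Note that run_with_retries and this MaxRetriesExceededError stand-in are illustrative only, not Celery APIs:

```python
class MaxRetriesExceededError(Exception):
    """Stand-in for celery.exceptions.MaxRetriesExceededError."""

def run_with_retries(func, max_retries=2):
    """Call func, retrying up to max_retries times on any exception."""
    attempts = 0
    while True:
        try:
            return func()
        except Exception as exc:
            if attempts >= max_retries:
                # Celery would mark the task FAILURE here and store exc
                raise MaxRetriesExceededError() from exc
            attempts += 1

calls = []
def always_fails():
    calls.append(1)
    raise RuntimeError('Fail')

try:
    run_with_retries(always_fails, max_retries=2)
except MaxRetriesExceededError:
    pass
print(len(calls))  # → 3  (1 initial attempt + 2 retries)
```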
from celery import shared_task
import requests

@shared_task(bind=True, max_retries=5)
def fetch_api(self, url):
    try:
        response = requests.get(url, timeout=10)  # timeout chosen for illustration
        response.raise_for_status()
        return response.json()
    except requests.exceptions.RequestException as exc:
        # retry only for network-related errors; anything else propagates
        raise self.retry(exc=exc, countdown=5)
To retry selectively, catch only the exception types you want to retry. Here, requests.exceptions.RequestException (the base class for connection errors, timeouts, and HTTP errors) triggers a retry; any other exception propagates immediately and fails the task.
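The same selection logic can be expressed as a predicate. TRANSIENT here uses built-in exception types purely for illustration; with requests you would list requests.exceptions.RequestException instead:

```python
# Only transient, network-style failures should trigger a retry
TRANSIENT = (ConnectionError, TimeoutError)

def should_retry(exc: BaseException) -> bool:
    """Return True if the exception is worth retrying."""
    return isinstance(exc, TRANSIENT)

print(should_retry(ConnectionError()))       # → True
print(should_retry(ValueError('bad data')))  # → False
```

Keeping the classification in one predicate makes the retry policy easy to audit and reuse across tasks.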