The simplest way to add a Django worker (Using AWS Chalice) 🍷

Updated: Fri 03 May 2024

In Add an async serverless python function 🗡, I spent 14 hours finding the simplest way to develop and deploy a serverless function for Django.

I recommended DigitalOcean functions.

Turns out, AWS Chalice is great as well. It also allows you to use serverless functions (in the form of AWS lambda) with Django in a very simple way.

Here's what I'll show you how to do in this short guide:

  1. Develop your lambda functions locally (including in a way that works with your local Django app)
  2. Async (or sync) invocation of serverless functions.
  3. Easy debugging of your lambda function from the CLI
  4. Automated CI deployment to production with GitHub Actions (when you push to master)

I've made an optional video guide (featuring me 🏇🏿) here that follows the steps in this guide.

More benefits:

  • Near instant deploys of your lambda function
  • No docker
  • No AWS SAM (AWS Chalice is much simpler and faster than AWS SAM for creating serverless functions)
  • How to integrate an async lambda function with Django in a neat way

P.S. Joke: Why do frontend developers normally eat lunch alone? (The answer is below in our lambda function using Chalice)

Let's get started 🚀

Setup

Install the required packages:

pip install chalice boto3

Add your AWS credentials to your computer

If you haven't done this yet, the simplest way is:

  1. Install the AWS CLI

  2. Run the below and enter your credentials when prompted:

aws configure

If you need to create AWS credentials, you can follow the guide here. Be sure to attach sufficient permissions to your AWS user (Chalice needs to create Lambda functions, IAM roles, and API Gateway resources), otherwise deploying lambdas will fail.
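
For reference, aws configure writes your credentials to two small files in your home directory. They look roughly like this (the values below are placeholders):

# ~/.aws/credentials
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY

# ~/.aws/config
[default]
region = us-east-1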

Develop your python lambda locally with Chalice

Create a new chalice project containing a sample lambda function

chalice new-project helloworld

This creates a folder named helloworld which contains your sample lambda function inside app.py.
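
The generated project is deliberately small. It should look roughly like this (Chalice also adds a .gitignore):

helloworld/
├── app.py              # your lambda function code
├── requirements.txt    # python dependencies that get bundled into the lambda
└── .chalice/
    └── config.json     # per-app and per-stage configuration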

  • Replace the contents of app.py with the below:
from chalice import Chalice

app = Chalice(app_name='helloworld')

@app.route('/')
def index():
    sample_text = (
        "The squirrel was in charge of acorn muffins, the rabbit brought a basket of the freshest carrots, and the owl "
        "brewed a pot of the most aromatic tea. As the sun dipped below the horizon, casting a golden glow over the "
        "garden, laughter and chatter filled the air, creating a melody that even the stars above paused to listen to."
    )
    print('Running the index function')
    return {'hello': 'galaxy', 'sample_text': sample_text}
  • Deploy the function
cd helloworld
chalice deploy

This will deploy your function to the cloud and give you a URL to access it.

  • Visit the URL to see your function in action.
  • Copy the ARN of your lambda to use in the next step.

chalice uses the concept of stages to refer to different environments. By default, it uses the dev stage. So the URL you get from the above command is for the dev stage.

When we deploy to production below, we will deploy to a different stage by using the --stage flag. This means that you will have a separate version of your lambda function for each environment.

We'll use the dev stage for local development and then deploy the latest version of the lambda to the prod stage when we merge our code to production.
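
Stage-specific settings live in .chalice/config.json. Here's a minimal sketch of what declaring a dev and a prod stage can look like (the environment variable is just an illustration, not something this guide depends on):

{
  "version": "2.0",
  "app_name": "helloworld",
  "stages": {
    "dev": {
      "api_gateway_stage": "api"
    },
    "prod": {
      "api_gateway_stage": "api",
      "environment_variables": {
        "STAGE": "prod"
      }
    }
  }
}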

Adding a python function to call the lambda function asynchronously

Create a services.py file in the top folder and add the code below. To use this with Django, you would add this file to your Django project and call it from your views.

import boto3
import json


def fetch_magic_spell(spell_name: str) -> dict:
    """
    This simple function would do a large amount of work in a real-world application.
    Your Django app would call this function in a view to fetch the details of a magic spell.
    """
    client = boto3.client(
        'lambda',
    )

    # Invoke the lambda function.
    payload = {'spell_name': spell_name}
    response = client.invoke(
        FunctionName='arn:<your_details_from_your_deployed_dev_lambda_function>:helloworld-dev',
        InvocationType='Event',  # Change to 'RequestResponse' if you want to wait for the lambda to complete (i.e., synchronous execution)
        Payload=json.dumps(payload),
    )
    print(f'{response = }')
    return response


if __name__ == '__main__':
    spell_name = 'Leviosa'
    spell_details = fetch_magic_spell(spell_name)
    print(f"Spell Details: {spell_details}")

Now run the services.py file to see the lambda function in action.
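
From the folder that contains services.py, run:

python services.py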

This will execute the lambda function asynchronously and return a response. You should see something like this:

response = {'ResponseMetadata': {'RequestId': '26c5b61e-a248-420d-876e-49dadf8ef6b9'...

This initiates the lambda function and returns a response immediately, without waiting for the lambda function to complete.
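
If you want to confirm that the asynchronous invocation was accepted, AWS Lambda returns HTTP status 202 for Event invocations (and 200 for RequestResponse). A small check you could add inside fetch_magic_spell, right after the invoke call:

    # 202 = asynchronous (Event) invocation accepted; 200 = synchronous (RequestResponse) call succeeded.
    if response['StatusCode'] not in (200, 202):
        raise RuntimeError(f"Lambda invocation failed with status {response['StatusCode']}")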

In this way, we use the lambda function as a serverless background worker for any app. It will run in the background and not block the main thread of your app. The lambda function can then call an endpoint on your Django app when it's done.
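
For completeness, here is a minimal sketch of what that callback could look like on the lambda side. It assumes you use Chalice's @app.lambda_function() decorator to declare a pure (non-HTTP) lambda, plus a hypothetical Django callback URL; none of this is part of the helloworld app deployed above:

# A sketch of a worker lambda that calls back to your Django app when it finishes.
# The callback URL, payload shape, and function name are assumptions for illustration.
import requests
from chalice import Chalice

app = Chalice(app_name='helloworld')

DJANGO_CALLBACK_URL = 'https://your-django-app.com/spells/{spell_id}/complete/'  # hypothetical endpoint


@app.lambda_function()
def spell_worker(event, context):
    spell_id = event.get('spell_id')  # payload sent from your Django app via boto3
    # ... do the long-running work here ...
    # GET keeps the example simple; a POST would need CSRF exemption or another auth scheme on the Django side.
    requests.get(DJANGO_CALLBACK_URL.format(spell_id=spell_id), timeout=10)
    return {'spell_id': spell_id, 'status': 'Complete'}

Functions declared this way get deployed as their own lambdas on the next chalice deploy (named roughly helloworld-dev-spell_worker), which you can then target from services.py instead of the HTTP handler.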

Add an external library to your lambda function

Using chalice means that we can use any python library in our lambda function simply by listing it in requirements.txt, without having to add lambda layers or any other AWS-specific configuration. This is a big advantage.

We'll also add a deliberate error to demonstrate how to debug your lambda function when something goes wrong.

  • Update the app.py file to the below:

from chalice import Chalice
import time

app = Chalice(app_name='helloworld')


@app.route('/')
def index():
    url = 'https://v2.jokeapi.dev/joke/Any'
    response = requests.get(url)
    data = response.json()
    if data['type'] == 'single':
        joke = data['joke']
    else:
        joke = f"{data['setup']}\n{data['delivery']}"
    time_asleep = 2
    time.sleep(time_asleep)
    return {
        'joke': joke,
        'time_taken': time_asleep
    }
  • Deploy the function
chalice deploy
  • Visit the URL. The request will fail this time, and you should see error messages in the logs.

Debugging Your Lambda function with chalice

Run the below from inside the helloworld folder to see the logs from your deployed lambda:

chalice logs

You can see that the lambda function is throwing an error because we didn't add the requests library.

  • Add the requests library to the requirements.txt file in the helloworld folder.
requests
  • Update the app.py file to the below to fix this error:
from chalice import Chalice
import requests
import time

app = Chalice(app_name='helloworld')


@app.route('/')
def index():
    url = 'https://v2.jokeapi.dev/joke/Any'
    response = requests.get(url)
    data = response.json()
    if data['type'] == 'single':
        joke = data['joke']
    else:
        joke = f"{data['setup']}\n{data['delivery']}"
    time_asleep = 2
    time.sleep(time_asleep)
    return {
        'joke': joke,
        'time_taken': time_asleep
    }
  • Deploy the function
chalice deploy
  • Visit the URL to see your function in action. You should now see a joke in the response.

Deploying to Production with GitHub Actions

As mentioned, we will deploy the lambda to a dev stage for local development and then deploy to a prod stage when we merge our code to production. We'll use GitHub Actions to deploy our python lambda function whenever we push our code to the default branch.

Deploy to production

We'll deploy to production manually the first time. This is necessary because GitHub Actions hides the final production URL in its logs, replacing it with asterisks.

We'll do this once to get the production URL, and then use a GitHub Actions workflow to deploy automatically for future deployments.

chalice deploy --stage prod

If you hit any problems, look at the logs:

chalice logs --stage prod
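
One detail to remember: services.py above points at the helloworld-dev lambda. After deploying the prod stage, your production Django settings should point at the prod lambda instead. A minimal sketch, assuming you pass the ARN in via an environment variable (the variable name is my own, not something Chalice sets for you):

import json
import os

import boto3

# Pick the lambda ARN per environment, e.g. ...:helloworld-dev locally
# and ...:helloworld-prod in production.
LAMBDA_FUNCTION_ARN = os.environ.get(
    'SPELL_LAMBDA_ARN',
    'arn:<your_details_from_your_deployed_dev_lambda_function>:helloworld-dev',
)


def invoke_spell_lambda(payload: dict) -> dict:
    client = boto3.client('lambda')
    return client.invoke(
        FunctionName=LAMBDA_FUNCTION_ARN,
        InvocationType='Event',
        Payload=json.dumps(payload),
    )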

Setting up GitHub Actions for AWS Deployment

  1. Store AWS Credentials: Add AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_DEFAULT_REGION to your GitHub repository's secrets. To do that, go to your GitHub repository, click on Settings > Secrets > New repository secret. Add the below secrets:
    • AWS_ACCESS_KEY_ID
    • AWS_SECRET_ACCESS_KEY
    • AWS_DEFAULT_REGION

  2. Create GitHub Actions Workflow: In your repository, create a .github/workflows/deploy.yml file with the following content (the workflow triggers on pushes to main; change this to master if that is your default branch):
name: Deploy lambda to production using Chalice

on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: ubuntu-latest

    env:
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      AWS_DEFAULT_REGION: ${{ secrets.AWS_DEFAULT_REGION }}

    steps:
      - uses: actions/checkout@v3

      - uses: actions/setup-python@v2
        with:
          python-version: 3.11

      - name: Install required packages
        run: pip install chalice requests

      - name: Deploy lambda to production
        run: |
          cd helloworld
          chalice deploy --stage prod

Finished 🏁

Congrats! You now have a python lambda function that you can develop locally and deploy to production using GitHub Actions.

You can now integrate this with your Django app by adding a function to call the lambda function in your Django views. I use a very similar setup in Photon Designer to export users' projects in the background, without blocking the app server.

You can use the lambda function to call an endpoint on your Django app when it's done, as in the example below.

Example Django usage

Here's an example of how you might use this in a Django app:

# views.py in your Django app
from django.http import JsonResponse, HttpResponse
from .services import fetch_magic_spell
from .models import Spell

def cast_spell(request):
    spell_name = request.GET.get('spell', 'Leviosa')  # Default to 'Leviosa' if no spell name is provided
    spell = Spell.objects.create(name=spell_name, status='Casting')
    # Call the lambda function here. This sends the processing to the lambda function
    # (you could include spell.id in the payload so the lambda can call back when it's done).
    fetch_magic_spell(spell_name)
    return JsonResponse({'status': 'Casting', 'spell_id': spell.id})

def record_spell_complete(request, spell_id: int):
    """
    This view would be called from inside the lambda function after it has finished its
    long-running work, to record that the spell is complete.
    """
    spell = Spell.objects.get(id=spell_id)
    spell.status = 'Complete'
    spell.save()
    return HttpResponse('Recorded spell complete')

def get_spell_details(request, spell_id: int):
    """
    This view would be called by the frontend of your app to check the status of the spell.
    I.e., to check if the spell is still casting or if it's complete.
    """
    spell = Spell.objects.get(id=spell_id)
    return JsonResponse({'name': spell.name, 'status': spell.status})
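
For completeness, here is a minimal sketch of the Spell model and URL configuration that the views above assume (field sizes and route names are my own choices):

# models.py in your Django app (sketch)
from django.db import models


class Spell(models.Model):
    name = models.CharField(max_length=100)
    status = models.CharField(max_length=20, default='Casting')


# urls.py in your Django app (sketch)
from django.urls import path

from . import views

urlpatterns = [
    path('spells/cast/', views.cast_spell, name='cast_spell'),
    path('spells/<int:spell_id>/complete/', views.record_spell_complete, name='record_spell_complete'),
    path('spells/<int:spell_id>/', views.get_spell_details, name='get_spell_details'),
]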

P.S Want to build Django frontend faster?

Probably like you, I want to get my Django frontend out as fast as possible (preferably instantly).

So, I'm building Photon Designer. Photon Designer lets me produce Django frontends visually and extremely quickly - like a painter sweeping his brush across the page 🖌️

If you found this guide helpful, you can see Photon Designer here.

Let's get visual.

Do you want to create beautiful Django frontends effortlessly?
Click below to book your spot on our early access mailing list (and get early adopter prices).