Dockerizing Django + Gulp for Efficient Development: A Step-by-Step Guide

Learn to create a Dockerized development environment for Django and Gulp using Dockerfile.dev and Docker Compose, with Celery thrown in as a bonus. Develop, test, and manage databases easily with this setup. Follow the step-by-step instructions and start building your web application in a Dockerized environment.


In today's fast-paced world, building a fast, efficient, and reliable development environment is crucial. That's why using Docker to containerize your development environment is becoming increasingly popular. In this blog post, we'll go through how to create a Dockerfile to build a Django + Gulp development environment.

The Dockerfile consists of three stages:

  1. Node.js build
  2. Python build
  3. Final stage

The first stage starts from a Node.js base image and runs npm install to fetch all the required Node.js packages. The second stage starts from a Python base image and builds wheels for every package listed in requirements.txt using pip; it also copies the SSH keys needed to access private Git repositories. The final stage copies the artifacts from the previous two stages and sets up the Django + Gulp development environment.

Here's a Dockerfile.dev for creating a dockerized development environment with Gulp. Keep in mind that this may not be suitable for production use. Please refer to my previous post on Gulp if you're unfamiliar with it.
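Since the final stage runs COPY . $APP_HOME, it's also worth adding a .dockerignore so that bulky or generated files stay out of the build context. A minimal sketch (entries are illustrative):

# .dockerignore
.git
node_modules
**/__pycache__
*.pyc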

# Stage 1: Node.js build
FROM node:16-alpine as node_builder

WORKDIR /home/django/app

COPY package*.json ./

RUN npm install

# Stage 2: Python build
FROM python:3.11.2-alpine3.17 as builder

RUN mkdir -p /home/django/app/wheels

WORKDIR /home/django/app

ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1

COPY requirements.txt .

RUN apk add --no-cache --virtual .build-deps \
  build-base \
  openssl-dev \
  libffi-dev \
  postgresql-dev \
  git \
  openssh

RUN pip install --upgrade pip

# authorize SSH Host to download modules from SL git account
RUN mkdir -p -m 0700 /root/.ssh \
  && ssh-keyscan ssh.dev.azure.com > /root/.ssh/known_hosts

# copy keys and set permissions
COPY --chmod=0600 .ssh/* /root/.ssh/

# the ssh-agent started here only lives for the duration of a single RUN
# instruction, so load the key and build the wheels in the same layer
RUN eval "$(ssh-agent -s)" \
  && ssh-add /root/.ssh/id_rsa \
  && pip wheel --no-cache-dir --no-deps --wheel-dir /home/django/app/wheels -r requirements.txt

RUN apk del .build-deps \
  && rm -rf /var/cache/apk/*

# Stage 3: Final stage
FROM python:3.11.2-alpine3.17

RUN mkdir -p /home/django/app

RUN addgroup -g 1000 django && adduser -u 1000 -S django -G django

ARG HOME=/home/django

ARG APP_HOME=$HOME/app

ARG VIRTUAL_ENV=$HOME/env

RUN python -m venv $VIRTUAL_ENV

ENV PATH="$VIRTUAL_ENV/bin:$PATH"

RUN pip install --upgrade pip

WORKDIR $APP_HOME

COPY --from=builder /home/django/app/wheels /wheels

COPY --from=builder /home/django/app/requirements.txt .

# Copy built Node.js assets from the node_builder stage
COPY --from=node_builder /home/django/app/node_modules /home/django/app/node_modules

RUN pip install --no-cache-dir /wheels/*

RUN apk add --no-cache \
  bash \
  libpq \
  gettext \
  nodejs \
  npm \
  sudo \
  && rm -rf /var/cache/apk/*

COPY . $APP_HOME

# allow the django user passwordless sudo (requires the sudo package installed above)
RUN echo "----> fixing permissions" && \
  echo "django ALL = ( ALL ) NOPASSWD: ALL" >> /etc/sudoers && \
  chown -R django:django ${HOME}

USER django

EXPOSE 8000 3000 3001

STOPSIGNAL SIGINT

CMD ["npm", "run", "watch"]

Stage 1: Node.js build

FROM node:16-alpine as node_builder

WORKDIR /home/django/app

COPY package*.json ./

RUN npm install
  • Use an Alpine-based Node.js image as a build environment for our frontend assets
  • Set the working directory to /home/django/app
  • Copy the package*.json files from the host to the container
  • Install the required Node.js packages using npm install
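For reference, the npm run watch command used later assumes a package.json with a watch script wired to Gulp, along these lines (a minimal sketch; script names and versions are illustrative):

{
  "scripts": {
    "watch": "gulp watch"
  },
  "devDependencies": {
    "browser-sync": "^2.27.0",
    "gulp": "^4.0.2"
  }
}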

Stage 2: Python build

FROM python:3.11.2-alpine3.17 as builder

RUN mkdir -p /home/django/app/wheels

WORKDIR /home/django/app

ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1

COPY requirements.txt .

RUN apk add --no-cache --virtual .build-deps \
  build-base \
  openssl-dev \
  libffi-dev \
  postgresql-dev \
  git \
  openssh

RUN pip install --upgrade pip

# authorize SSH Host to download modules from SL git account
RUN mkdir -p -m 0700 /root/.ssh \
  && ssh-keyscan ssh.dev.azure.com > /root/.ssh/known_hosts

# copy keys and set permissions
COPY --chmod=0600 .ssh/* /root/.ssh/

# the ssh-agent started here only lives for the duration of a single RUN
# instruction, so load the key and build the wheels in the same layer
RUN eval "$(ssh-agent -s)" \
  && ssh-add /root/.ssh/id_rsa \
  && pip wheel --no-cache-dir --no-deps --wheel-dir /home/django/app/wheels -r requirements.txt

RUN apk del .build-deps \
  && rm -rf /var/cache/apk/*
  • Use an Alpine-based Python image as a build environment for our backend Python code
  • Create a directory at /home/django/app/wheels for Python package wheels
  • Set the working directory to /home/django/app
  • Set environment variables to prevent Python from writing .pyc files and to keep output unbuffered
  • Copy the requirements.txt file from the host to the container
  • Install the required Alpine packages using apk add
  • Upgrade pip to the latest version using pip install --upgrade pip
  • Set up SSH access to download private dependencies from a remote Git server (see the note after this list)
  • Build Python wheels for all dependencies listed in requirements.txt
  • Clean up build dependencies using apk del and remove cached files using rm
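One thing to watch: the COPY --chmod=0600 .ssh/* instruction expects the key material to be present in the build context. A minimal sketch of preparing the context before building (paths are illustrative, and take care never to commit these keys to version control):

mkdir -p .ssh
cp ~/.ssh/id_rsa .ssh/id_rsa   # deploy key for the private Git dependencies
docker build -f Dockerfile.dev -t myapp:tagname .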

Stage 3: Final stage

FROM python:3.11.2-alpine3.17

RUN mkdir -p /home/django/app

RUN addgroup -g 1000 django && adduser -u 1000 -S django -G django

ARG HOME=/home/django

ARG APP_HOME=$HOME/app

ARG VIRTUAL_ENV=$HOME/env

RUN python -m venv $VIRTUAL_ENV

ENV PATH="$VIRTUAL_ENV/bin:$PATH"

RUN pip install --upgrade pip

WORKDIR $APP_HOME

COPY --from=builder /home/django/app/wheels /wheels

COPY --from=builder /home/django/app/requirements.txt .

# Copy built Node.js assets from the node_builder stage
COPY --from=node_builder /home/django/app/node_modules /home/django/app/node_modules

RUN pip install --no-cache-dir /wheels/*

RUN apk add --no-cache \
  bash \
  libpq \
  gettext \
  nodejs \
  npm \
  sudo \
  && rm -rf /var/cache/apk/*

COPY . $APP_HOME

# allow the django user passwordless sudo (requires the sudo package installed above)
RUN echo "----> fixing permissions" && \
  echo "django ALL = ( ALL ) NOPASSWD: ALL" >> /etc/sudoers && \
  chown -R django:django ${HOME}

USER django

EXPOSE 8000 3000 3001

STOPSIGNAL SIGINT

CMD ["npm", "run", "watch"]
  • Use an Alpine-based Python image as the final runtime environment for our application
  • Create a directory at /home/django/app for our application code
  • Create a non-root django user with UID/GID 1000 to run the application
  • Define build arguments for the home directory, the application directory, and the Python virtual environment path
  • Create a Python virtual environment using python -m venv
  • Add the virtual environment's bin directory to the system path
  • Upgrade pip to the latest version using pip install --upgrade pip
  • Set the working directory to $APP_HOME
  • Copy the Python package wheels and requirements.txt file from the builder stage
  • Copy the Node.js node_modules directory from the Node.js builder stage
  • Install the pre-built Python wheels using pip install --no-cache-dir
  • Install Alpine packages needed for our application using apk add
  • Copy the application code from the host to the container
  • Grant the django user passwordless sudo and hand ownership of the home directory to that user
  • Expose the ports used by our application
  • Set the stop signal for graceful shutdown of the container
  • Define the default command, npm run watch, which starts the Gulp watch task
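Once the image builds, a couple of quick checks confirm that the non-root user and the virtual environment are active (the expected output, shown as comments, is illustrative):

docker run --rm myapp:tagname id
# uid=1000(django) gid=1000(django)
docker run --rm myapp:tagname sh -c 'command -v python && python --version'
# /home/django/env/bin/python
# Python 3.11.2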

To build an image from Dockerfile.dev, you need to use the -f option to specify which Dockerfile to use; by default, Docker looks for a file named Dockerfile in the build context. Here's the build command:

docker build -f Dockerfile.dev -t myapp:tagname .

This will create a new Docker image with the tag you specified.
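You can confirm the image was created with:

docker image ls myapp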

Once the image is built, you can run the following command to start a container:

docker run -it -p 8000:8000 -p 3000:3000 -p 3001:3001 -v $(pwd):/home/django/app myapp:tagname

This will start the Docker container with ports 8000, 3000, and 3001 published for Django, Gulp, and BrowserSync respectively. It also mounts the current directory as a volume inside the container, so edits on the host are picked up immediately.

Finally, open your preferred browser and go to http://localhost:3000 to see your Django application in action. The browser will automatically refresh when changes are made, thanks to BrowserSync.

By containerizing your development environment, you can ensure that it is consistent across all development machines, making it easier to collaborate with your team. Additionally, containerizing your environment makes it easy to move your development environment to other machines or to the cloud.

Overall, using Docker to containerize your development environment is a great way to ensure consistency and ease of use across all development machines. The Dockerfile provided in this post is a great starting point for building your own Django + Gulp development environment. Happy coding!

Bonus

As a companion to the Dockerfile.dev, here's the start.sh shell script used to start the different backend processes in a container. It takes an argument, PROCESS_TYPE, which can be one of the following options: server, beat, worker, or flower.

When running the container, you can pass one of these options to start.sh to start the corresponding process. Here's the start.sh code:

#!/bin/bash

set -e

usage() {
  echo -e "Usage: start.sh [PROCESS_TYPE] (server/beat/worker/flower)\n\n\
PROCESS_TYPE options:\n\
- server: Starts the Django app using the Gunicorn web server.\n\
- beat: Starts the Celery beat scheduler using the Celery command-line tool.\n\
- worker: Starts the Celery worker using the Celery command-line tool.\n\
- flower: Starts the web-based Celery Flower monitoring tool using the Celery command-line tool.\n\n\
Example usage: ./start.sh server"
}

if [[ $# -eq 0 ]]; then
  usage
  exit 1
fi

PROCESS_TYPE=$1

if [[ "$PROCESS_TYPE" == "server" ]]; then
  echo -e "\n>>>>>> Starting Django App <<<<<<\n"
  gunicorn -c ./server/gunicorn.conf.py garupa_portal.wsgi
elif [[ "$PROCESS_TYPE" == "beat" ]]; then
  echo -e "\n>>>>>> Starting Celery Beat <<<<<<\n"
  # Celery 5 expects --app before the subcommand
  celery \
    --app garupa_portal \
    beat \
    --loglevel "$LOG_LEVEL" \
    --scheduler garupa_portal.schedulers:DatabaseScheduler
elif [[ "$PROCESS_TYPE" == "worker" ]]; then
  echo -e "\n>>>>>> Starting Celery Worker <<<<<<\n"
  celery \
    --app garupa_portal \
    worker \
    --loglevel "$LOG_LEVEL"
elif [[ "$PROCESS_TYPE" == "flower" ]]; then
  echo -e "\n>>>>>> Starting Celery Flower <<<<<<\n"
  celery \
    --app garupa_portal \
    flower \
    --basic_auth="${CELERY_FLOWER_USER}:${CELERY_FLOWER_PASSWORD}" \
    --loglevel "$LOG_LEVEL"
else
  usage
  exit 1
fi

For example, if you want to start the Django app with Gunicorn, you would run:

./start.sh server

This will start the Gunicorn server and run the Django app.

Similarly, if you want to start the Celery worker, you would run:

./start.sh worker

And if you want to start the Celery Flower monitoring tool, you would run:

./start.sh flower

This script is copied into the image along with the rest of the application code, making it easy to start up different processes in the container depending on your needs.
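Because the script ships inside the image, you can also launch any of these processes as a one-off container; for example, assuming start.sh sits at the project root (and is executable) and LOG_LEVEL is defined in .env:

docker run --rm --env-file .env myapp:tagname ./start.sh worker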

Bonus: Setting up a Full Dockerized Development Environment with Docker Compose

The provided Docker Compose file sets up a complete development environment for your project with several services. These include the app, queue, database, Redis, and Adminer services. The app service uses the latest Docker image built with the Dockerfile.dev file and exposes ports 8000, 3000, and 3001 to the host machine. Volumes are also mounted to allow for live code reloading. The queue service also uses the same image and links to both the database and Redis services. The database service uses the official PostgreSQL Docker image and has a mounted volume for persistent data storage. Similarly, the Redis service uses the official Redis Docker image and also mounts a volume for persistent data storage. Lastly, the Adminer service uses the official Adminer Docker image and links to the database service for web-based database management.

version: "3"
services:
  app:
    image: myapp:latest
    build:
      context: .
      dockerfile: Dockerfile.dev
    tty: true
    container_name: myapp-app
    restart: unless-stopped
    volumes:
      - .:/home/django/app
      - ./node_modules:/home/django/app/node_modules
    ports:
      - 8000:8000
      - 3000:3000
      - 3001:3001
    env_file:
      - .env
    links:
      - "db:db"
  queue:
    image: myapp:latest
    tty: true
    container_name: myapp-queue
    restart: unless-stopped
    # assumption: run the Celery worker via the start.sh script described above
    command: ["./start.sh", "worker"]
    env_file:
      - .env
    links:
      - "db:db"
      - "redis:redis"
  db:
    image: postgres:14.5-alpine
    restart: always
    container_name: myapp-postgres
    environment:
      POSTGRES_PASSWORD: "password"
      POSTGRES_USER: "username"
      POSTGRES_DB: "database"
    ports:
      - 5432:5432
    volumes:
      - ~/.docker-conf/myapp/postgres:/var/lib/postgresql/data
    healthcheck:
      test: [ "CMD", "pg_isready", "-q", "-U", "username" ]
      interval: 1s
      timeout: 3s
      retries: 30
  redis:
    image: redis:7.0.0-alpine
    container_name: myapp-redis
    restart: always
    command: [ "redis-server", "--appendonly", "yes" ]
    volumes:
      - ~/.docker-conf/myapp/redis:/data
    healthcheck:
      test: [ "CMD", "redis-cli", "ping" ]
      interval: 1m30s
      timeout: 30s
      retries: 5
      start_period: 10s
  adminer:
    image: adminer:latest
    container_name: myapp-adminer
    restart: always
    ports:
      - 8080:8080
    depends_on:
      db:
        condition: service_healthy
    links:
      - "db:db"

Now, let's detail each service and its configuration:

  • app: This service is based on the Docker image built from the Dockerfile.dev file. It mounts the current directory as a volume to allow for live editing of code. It also links to the database and sets environment variables using the .env file. This service exposes ports 8000, 3000, and 3001 for the Django app and the Gulp/BrowserSync development servers.
  • queue: This service is also based on the Docker image built from the Dockerfile.dev file, but overrides the default command to run the Celery worker via start.sh. It links to the database and Redis containers, and sets environment variables using the .env file.
  • db: This service uses the official Postgres Docker image and sets the necessary environment variables for the database. It exposes port 5432 for database access, and mounts a local directory for data persistence. The service also defines a healthcheck to ensure the database is running properly.
  • redis: This service uses the official Redis Docker image, and sets the necessary command to run the Redis server with persistence enabled. It mounts a local directory for data persistence and defines a healthcheck to ensure Redis is running properly.
  • adminer: This service uses the official Adminer Docker image, and exposes port 8080 for web-based database management. It depends on the database container to be healthy before starting up.
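Inside the Compose network, containers reach Postgres and Redis by their service names, so the matching .env entries might look like this (variable names are illustrative and depend on your Django settings):

POSTGRES_HOST=db
POSTGRES_PORT=5432
POSTGRES_USER=username
POSTGRES_PASSWORD=password
POSTGRES_DB=database
CELERY_BROKER_URL=redis://redis:6379/0
CELERY_FLOWER_USER=admin
CELERY_FLOWER_PASSWORD=change-me
LOG_LEVEL=info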

With this Docker Compose file, you can easily spin up a full development environment with all necessary services running and configured.
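The usual Compose workflow then brings everything up:

docker compose up -d --build   # build the image and start all services
docker compose logs -f app     # follow the Django/Gulp container's output
docker compose down            # stop and remove the containers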
