In my last post I went over setting up a self-hosted GitHub Actions runner and Docker. Now it's time to write some Dockerfiles and YAMLs. If you already know how to write Dockerfiles for your app, feel free to skip this section.
A Dockerfile is how you tell the Docker daemon how you want to containerize your application. It's basically a blueprint for the Docker image you're gonna create. This is not a Docker tutorial, so I'm not gonna get into the weeds of what Docker really is, but all you need to remember is the following diagram.
The Dockerfile is what creates a Docker image, and from the Docker image you can create as many containers as you want. It's this "container" that runs your Flask/Django/Rails server.
I'm gonna be writing a Dockerfile for a Flask app, so this may not apply to you 100%, but you'll get the general idea. Here's the Dockerfile I use for this site:
FROM python:3.12-slim
RUN apt update
RUN apt install -y \
    lsb-release \
    traceroute \
    wget \
    curl \
    iputils-ping \
    bridge-utils \
    dnsutils \
    netcat-openbsd \
    jq \
    redis \
    nmap \
    net-tools \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /usr/bin/portfolio
COPY . .
RUN pip install -r requirements.txt
# INSTALL NODE
RUN curl -fsSL https://deb.nodesource.com/setup_22.x -o nodesource_setup.sh
RUN bash nodesource_setup.sh
RUN apt install -y nodejs
# INSTALL NODE DEPENDENCIES
RUN npm i
EXPOSE 9000
ENTRYPOINT ["./entrypoint.sh"]
Think of it as the list of instructions you'd follow if you were setting up a VM from scratch. Let's go over it line by line.
FROM python:3.12-slim
This tells Docker the "base image" you want to use to build your custom Docker image off of. Docker images are made of layers that can be stacked on top of each other. There are a lot of images available on DockerHub that you can use to build your image off of. I'm using python:3.12-slim, which is a Debian image that comes with Python and pip installed. If you're using Rails or Node.js, you'll probably use base images for Ruby or Node.
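If you're not on Python, the FROM line is the only thing that changes at this point. As a rough sketch (these are real image names on DockerHub, but check for the current tags before using them):

# Rails: a Debian image with Ruby and gem preinstalled
FROM ruby:3.3-slim

# Node.js: a Debian image with Node.js and npm preinstalled
FROM node:22-slim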
RUN apt update
RUN apt install -y \
    lsb-release \
    traceroute \
    wget \
    curl \
    iputils-ping \
    bridge-utils \
    dnsutils \
    netcat-openbsd \
    jq \
    redis \
    nmap \
    net-tools \
    && rm -rf /var/lib/apt/lists/*
The first line just refreshes apt's package index so it knows about the latest versions of everything. The second line is what installs some critical system packages. Now, you don't NEED to install all of these. Most of these are network utilities that I find useful when I'm troubleshooting network connectivity issues with Docker. If you're planning on deploying only an app container that talks to external managed services like Postgres, Redis etc., then you probably don't need to do this. However, if you're like me and want to run your own self-hosted versions of Postgres and Redis, you absolutely need to do this in my opinion.
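To give you an idea of why I bake these in, here's the kind of session I end up running when two containers can't talk to each other. This is just a sketch: portfolio and redis are placeholder container names.

# Shell into the running app container
docker exec -it portfolio bash

# From inside the container, check DNS, reachability, and the port itself
nslookup redis          # from dnsutils
ping -c 3 redis         # from iputils-ping
nc -zv redis 6379       # from netcat-openbsd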
WORKDIR /usr/bin/portfolio
COPY . .
The first line is what sets the working directory in your container. It's the default path in your container: if you shell into your container, this is the directory you'll be dropped in.
The second line, the COPY command, copies the app files from your local machine to your container's working directory.
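You can see this for yourself once a container is running: docker exec starts in the WORKDIR you set, so a quick pwd (with a placeholder container name) shows it.

docker exec portfolio pwd
# /usr/bin/portfolio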
RUN pip install -r requirements.txt
This step installs the dependencies needed to run your application. Note how this step comes AFTER the WORKDIR and COPY commands: pip can only read requirements.txt once it's been copied into the image. This is specific to Python, but you can generalize this to Rails or Node.js apps.
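A quick aside: a common optimization (not what I'm doing above) is to copy requirements.txt by itself before copying the rest of the app. Docker caches each layer, so the slow pip install layer only gets rebuilt when your dependencies change, not on every code change. A minimal sketch:

WORKDIR /usr/bin/portfolio

# Copy just the dependency manifest first so the pip layer stays cached
COPY requirements.txt .
RUN pip install -r requirements.txt

# Now copy the rest of the app; code changes won't bust the pip cache
COPY . .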
# INSTALL NODE
RUN curl -fsSL https://deb.nodesource.com/setup_22.x -o nodesource_setup.sh
RUN bash nodesource_setup.sh
RUN apt install -y nodejs
# INSTALL NODE DEPENDENCIES
RUN npm i
This is another optional step, but one I think is worth going over. Since I'm deploying a Python (Flask) app, I'm building my image off of a Python base image. Now, I also need to use tools like npm to run tailwind commands. The base image doesn't come with Node.js installed, but since it's like any other Linux image, I can run regular Linux commands to install whatever package I want. This step installs Node.js in the container so I can run npm commands in my container.
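If you want to double-check that this worked while you're iterating on the Dockerfile, a temporary version check does the trick:

# Temporary sanity check: fails the build if Node.js didn't install
RUN node --version && npm --version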
EXPOSE 9000
ENTRYPOINT ["./entrypoint.sh"]
This is a very important step. The EXPOSE command tells Docker which TCP port in your container you want to expose to outside traffic. This would be the port number you want to run your application on. For example, I typically run my apps on port 9000:
if __name__ == "__main__":
    app.run(host="0.0.0.0", port=9000, debug=True)
Yes, I know, I should not be using the Flask dev server in production blah blah blah (don't worry, I'm not). Anyway, the point is, you need to explicitly tell Docker which ports you plan on exposing to the world. This EXPOSE command will become clearer later, but keep the number 9000 in the back of your head for now.
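Here's a small preview of why it matters: EXPOSE by itself is really just documentation; it doesn't publish anything. The actual mapping happens when you run the container with -p (the image name here is a placeholder):

# Map port 9000 on the host to port 9000 in the container
docker run -d -p 9000:9000 portfolio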
The last step, ENTRYPOINT, is the step that actually "runs" your app. The arguments to this command are read as a JSON array, so make sure you use double quotes. It's not uncommon to see explicit commands passed to ENTRYPOINT in the wild, but I prefer to write a separate bash script and just reference that in the Dockerfile. Here's what my entrypoint.sh looks like:
#!/usr/bin/env bash
npx tailwindcss -i ./src/input.css -o ./static/output.css
CONCURRENCY=$(expr 2 \* $(nproc) + 1)
gunicorn -w $CONCURRENCY \
    --worker-class=gevent \
    --worker-connections=100 \
    --timeout 120 \
    --log-level=debug \
    --threads=$CONCURRENCY \
    --bind 0.0.0.0:9000 \
    app:app
There's a lot going on here. I'll probably do a tutorial on how to productionize a Python app in the future, but for now, just understand that we're using a production-ready app server called gunicorn to run our Flask app on our VPS and creating our final CSS file with Tailwind. The benefit of writing an entrypoint file like this is you get to make your start command as sophisticated as you want and keep your Dockerfile clean. If you're using a database, this is the file where your migration commands would go.

Gunicorn gives you a lot of things to configure, but the important parts here are --bind 0.0.0.0:9000 and app:app. Gunicorn takes the app instance from app.py and runs it on port 9000 of the container (remember the EXPOSE command?).
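To tie it all together, here's roughly what building and running this image by hand looks like, before any CI gets involved. One gotcha: since ENTRYPOINT calls ./entrypoint.sh directly, the script needs to be executable (COPY preserves the executable bit from your repo). Names here are placeholders.

# Make sure the entrypoint script is executable before building
chmod +x entrypoint.sh

# Build the image from the Dockerfile in the current directory
docker build -t portfolio .

# Run it, publishing container port 9000 on the host
docker run -d --name portfolio -p 9000:9000 portfolio

# Sanity check: the app should answer on port 9000
curl http://localhost:9000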