How to Dockerize a React + Flask Project

This is the fourth article in my series about working with a combined Flask and React project. In this part I'm going to cover how to deploy the application in Docker containers.

Have you missed any of the previous parts? Here is the complete list of articles I have published to date:

Why Docker?

Back in part 2 of this series I showed two ways to deploy an application made up of a React front end and a Flask back end: one that only used a Python-based web server (Gunicorn), and a more complex and robust solution that involved the use of a dedicated static file and proxy web server (nginx) in front of the Python API.

While these two solutions work well, they require a few manual steps. Since I started using React with Flask I have built a few projects, and I found it tedious to set up each deployment by hand. So at some point I started looking into ways to script or automate these deployments, and this is how I arrived at Docker as the most convenient tool for the job.

Having the deployment implemented as one or more Docker containers means that I can test locally and once everything is working I can deploy the containers anywhere Docker is supported, regardless of operating system or cloud platform.

In the following sections I'm going to describe two Docker deployment options, which match the two deployment options I showed in part 2 of this series. The code for the project that I'm using is on GitHub.

Dockerizing the API

We are going to start with the simplest part: creating a Docker container that runs the API portion of the project, which as you recall is stored in the api subdirectory.

Here is the Dockerfile for this deployment, which I put in a file called Dockerfile.api:

# Base image with Python 3.9 pre-installed
FROM python:3.9
WORKDIR /app

# Copy the API source files and install the dependencies
COPY api/requirements.txt api/api.py api/.flaskenv ./
RUN pip install -r ./requirements.txt
ENV FLASK_ENV production

# Run the API with Gunicorn on port 5000
EXPOSE 5000
CMD ["gunicorn", "-b", ":5000", "api:app"]

This container uses the official Python 3.9 image as a base. To keep the application nicely installed in its own directory it sets /app as the working directory. This means that all container paths that follow will be relative to this location.

The installation has three steps. First, all the project files are copied over to the container's application directory, including the requirements file, the Python code and the .flaskenv file. In the second step pip is used to install all the requirements. The third step is optional, but given that the .flaskenv file is configured for debug mode, I thought it would be best to override that, since this is supposed to be a production deployment. Note that the FLASK_ENV environment variable is only used when you start the server with the flask run command, and we are not going to do that here, so this is more of a "just in case" type of thing.

To complete the Dockerfile, port 5000 is declared as a public port that needs to be exposed to the outside world, and then a start up command for Gunicorn is provided.

If you want to run this container, you first need to build an image for it:

docker build -f Dockerfile.api -t react-flask-app-api .
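If you want to confirm that the image was created, you can list it with the standard Docker CLI. This is just an optional sanity check:

docker image ls react-flask-app-api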

Once the image is built, you can run an API container with the following command:

docker run --rm -p 5000:5000 react-flask-app-api  

With the container running, you can type http://localhost:5000/api/time in the address bar of your web browser to see how the API responds to your requests.
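If you prefer the command line, you can also exercise the endpoint with curl. The exact JSON payload depends on how api.py is written, but for the project in this series it should include the current time:

curl http://localhost:5000/api/time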

When you are done testing this container, press Ctrl-C to stop it. Running the API container manually is useful to test it, but we are going to use a more convenient method that also involves the front end for the real deployment.

Dockerizing the React client

The next step is to create a second Docker container that runs an nginx web server that serves the React front end to clients, and proxies requests for the API to the container we built in the previous section.

Unfortunately this container is a bit more complex than the previous one. Here is the Dockerfile definition, which I put in a Dockerfile.client file:

# Build step #1: build the React front end
FROM node:16-alpine as build-step
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json yarn.lock ./
COPY ./src ./src
COPY ./public ./public
RUN yarn install
RUN yarn build

# Build step #2: build an nginx container
FROM nginx:stable-alpine
COPY --from=build-step /app/build /usr/share/nginx/html
COPY deployment/nginx.default.conf /etc/nginx/conf.d/default.conf

This container definition file uses a relatively recent feature of Docker called multi-stage builds.

The first step of the build is based on the Node.js 16 container, which comes with node and yarn pre-installed. Once again we use /app as the working directory.

To build the client we first need to copy all the React project files to the container image. This includes the package.json and yarn.lock files, and the src and public directories with the source code and static files respectively. Then we install all the dependencies and build the client.

At this point, the production version of the React application is generated and stored in the /app/build directory. We could just install nginx and point it to this directory, but this is not ideal, because the container image has Node.js and yarn installed and taking up space, yet we do not need these tools anymore.

This is a common problem that multi-stage builds are designed to address. What we do next is start a second build step, which basically allows us to start a clean container image, in this case the official one for nginx.

When you have a multi-stage Dockerfile, the COPY command accepts a --from option that brings in files from a previous build step. This is what we do to copy the /app/build directory to /usr/share/nginx/html, which is the directory where the nginx container expects to find the files it serves.

A second COPY command installs a custom nginx configuration file. If the container only had to serve static files, the default configuration would suffice, but in our case we also need to proxy requests for URLs that start with /api to the API container, so the default configuration is not sufficient. The custom nginx configuration is almost a line by line copy of the nginx configuration we used in part 2 of this series:

server {
    listen       80;
    server_name  localhost;

    root   /usr/share/nginx/html;
    index index.html;
    error_page   500 502 503 504  /50x.html;

    location / {
        try_files $uri $uri/ =404;
        add_header Cache-Control "no-cache";
    }

    location /static {
        expires 1y;
        add_header Cache-Control "public";
    }

    location /api {
        proxy_pass http://api:5000;
    }
}

One interesting change is that in this configuration the proxy_pass statement refers to the API service with the api hostname. This is because once we have the two containers running, they're going to be part of a private network in which each container can refer to the other by name.
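Once the full deployment from the next section is running, you can confirm that the cache headers in this configuration are being applied by inspecting a response with curl; a request for the root URL should come back with the Cache-Control: no-cache header from the location / block above:

curl -I http://localhost:3000/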

You may have noticed that this Dockerfile does not have a CMD statement, which indicates what command the container needs to run when it starts. This is because the base nginx image already defines a start command for the container that launches nginx on port 80. This works well for our purposes, so there is no need to define our own start command.
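If you are curious, you can ask Docker for the start command that the base image defines. This is just an optional inspection with the standard Docker CLI, and it requires the nginx:stable-alpine image to be available locally:

docker image inspect --format '{{.Config.Cmd}}' nginx:stable-alpine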

You can build the container image with the following command:

docker build -f Dockerfile.client -t react-flask-app-client .
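To appreciate the payoff of the multi-stage build, you can compare the size of the final client image against the Node.js base image used in the first build step. The exact numbers will vary with image versions, but the nginx-based result should be considerably smaller:

docker image ls react-flask-app-client
docker image ls node:16-alpine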

We are not going to start this container with docker run as we did with the API one. If you attempt it, you will get a failure from nginx, which is not going to recognize the http://api:5000 proxy URL. In the next section we are going to use Docker Compose to start the two containers together as part of a network.

Using Docker Compose to Orchestrate the Client and API Containers

The last step in standing up our two-container solution is to create a Docker Compose file that orchestrates the launch of the two containers as part of a network. Below you can see the docker-compose.yml file that achieves this:

services:
  api:
    build:
      context: .
      dockerfile: Dockerfile.api
    image: react-flask-app-api
  client:
    build:
      context: .
      dockerfile: Dockerfile.client
    image: react-flask-app-client
    ports:
      - "3000:80"

The keys under services define the containers that are going to be started as part of this deployment, which in our case are the two that we've built above, api and client.

For each container, the Docker Compose file allows us to define the build options, which is nice because we won't need to use the docker build command as we did before while testing. The context sub-key configures the root directory for the build, and the dockerfile sub-key tells Docker Compose which Dockerfile to use for that container.
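If you ever want to (re)build the images without starting the containers, Docker Compose exposes that step directly:

docker-compose build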

Both containers also have an image key, which defines the name of the container image. The client container maps port 80, which is the port exposed by the nginx image, to port 3000 on the host computer, which is where we'll go with our browser to connect to the application. Note that the api container does not need to map any ports, because it is now an internal service that only needs to be reachable by the client container, but not from the outside world.

The networking between the containers is automatically set up by Docker Compose. This means that inside the client container the api hostname is going to be recognized and mapped to the api container.
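Once the deployment shown below is up, you can see this name resolution in action by running a command inside the client container that requests the API through its service name. This quick check relies on the busybox wget utility included in the Alpine-based nginx image:

docker-compose exec client wget -qO- http://api:5000/api/time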

How do we run this? Just with the following command:

docker-compose up

This will build the container images if they do not exist yet (add the --build option to force a rebuild after you make changes), and then start a private network with the client and api containers. Once you start seeing the logs from the containers on your terminal, open up your web browser and go to http://localhost:3000 to see the application in its full glory.
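When you are done, press Ctrl-C to stop the deployment. Two more standard Docker Compose options are worth knowing: the -d option starts the containers in the background, and the down command stops and removes the containers along with the private network:

docker-compose up -d
docker-compose down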

Creating a Single Container Deployment

The solution I described in the sections above is my preferred one, but sometimes you just want something quick and simple, and nothing says simple more than having the complete system, including the React client and the Flask API, all running in a single container. This solution maps to the Python web server option that I described in part 2.

The key to making a single container image is to use the same ideas presented above, including the multi-stage build, but replace nginx with Gunicorn, which doubles as both an API and a static file server. Here is the single-container Dockerfile, which I named Dockerfile.combo:

# Build step #1: build the React front end
FROM node:16-alpine as build-step
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json yarn.lock ./
COPY ./src ./src
COPY ./public ./public
RUN yarn install
RUN yarn build

# Build step #2: build the API with the client as static files
FROM python:3.9
WORKDIR /app
COPY --from=build-step /app/build ./build

RUN mkdir ./api
COPY api/requirements.txt api/api.py api/.flaskenv ./api/
RUN pip install -r ./api/requirements.txt
ENV FLASK_ENV production

EXPOSE 3000
WORKDIR /app/api
CMD ["gunicorn", "-b", ":3000", "api:app"]

The first build step in this Dockerfile is identical to the one we used for the React application above, since we need the same HTML, CSS and JavaScript files.

The second build step uses the Python 3.9 container as a base. It puts the React files from the first build step in the /app/build directory, and the Flask API files in /app/api. The reason we are using these directory names is that the API project has a static file configuration that uses the ../build directory. This was done in part 2 of the series to support the Gunicorn deployment option, and it is controlled by how the Flask instance was initialized:

app = Flask(__name__, static_folder='../build', static_url_path='/')

For this container image we are using port 3000, and we are starting from the /app/api work directory. Other than that the container works in the same way as the API container we used for the two-container solution.

You can build this container as follows:

docker build -f Dockerfile.combo -t react-flask-app .

With the container built, you can start a single-container deployment with this command (make sure you stop the Docker Compose deployment if you are still running it):

docker run --rm -p 3000:3000 react-flask-app

And now you can go to http://localhost:3000 in your browser to access this single-container deployment.
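As with the two-container deployment, you can also exercise both halves from the command line with curl; the first request should return the React application's index.html served by Gunicorn, and the second the JSON response from the API:

curl http://localhost:3000/
curl http://localhost:3000/api/time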

Conclusion

I hope you can appreciate how useful it is to script your deployments with Docker.

Do you have any other questions that are specific to React and Flask projects? Let me know below in the comments, because I'd love to continue this series with more topics!


Comments
  • #26 Maahir Gupta said

    When I run docker-compose up, I get:
    nginx: [emerg] host not found in upstream "api" in /etc/nginx/conf.d/default.conf:20

    Online solutions haven't seemed to help, how can I fix this?

  • #27 Miguel Grinberg said

    @Maahir: I can't really give you advice with just the error message, sorry. My only guess is that you don't have an "api" container running.

  • #28 Tework123 said

    Hello Miguel. Thanks for your lessons, and thank you for your help to the Flask community. I have a question: why are you opening port 3000 outside of the container? As a result, we need to use the URL http://localhost:3000. Maybe it would be better to use
    ports: - "80:80" for the interface. After all, we want to go to our site at http://localhost, without a port.
    I'm posting this question because I had a problem getting an SSL certificate. Certbot sends a challenge to my domain, but my domain only works with this: http://mydomen:3000. As a result, certbot gives an error that it does not see a working domain. When I write ports: 80:80 everything works correctly.

  • #29 Miguel Grinberg said

    @Tework123: The port that you use inside the container does not matter, it can be any number, since network ports are virtualized by Docker. The port that you use from the outside depends on your needs. In my case I was running on my development machine, so for that use case port 3000 makes perfect sense. On a production server you may want to use ports 80 and 443, but in many cases ports such as 3000 are also the best option, for example when you have a reverse proxy in front of your actual services. Using http://localhost as a URL is not really that common, and not at all useful if you plan to get an SSL certificate, where a proper domain is required.
