2021-06-13T19:13:02Z

How to Dockerize a React + Flask Project

This is the fourth article in my series about working with a combined Flask and React project. In this part I'm going to cover how to deploy the application in Docker containers.

Have you missed any of the previous parts? Here is the complete list of articles I have published to date:

Why Docker?

Back in part 2 of this series I showed two ways to deploy an application made up of a React front end and a Flask back end: one that only used a Python based web server (Gunicorn), and a more complex and robust solution that involved the use of a dedicated static file and proxy web server (nginx) in front of the Python API.

While these two solutions work well, they require a few manual steps to implement. Since I started using React with Flask I have built a few projects this way, and I began to find it tedious to set up each deployment by hand. So at some point I started looking into ways to script or automate these deployments, and this is how I arrived at Docker as the most convenient tool for the job.

Having the deployment implemented as one or more Docker containers means that I can test locally and once everything is working I can deploy the containers anywhere Docker is supported, regardless of operating system or cloud platform.

In the following sections I'm going to describe two Docker deployment options, which match the two deployment options I showed in part 2 of this series. The code for the project that I'm using is on GitHub.

Dockerizing the API

We are going to start with the simplest part, which is to create a Docker container that runs the API portion of the project, stored, as you recall, in the api subdirectory.

Here is the Dockerfile for this deployment, which I put in a file called Dockerfile.api:

FROM python:3.9
WORKDIR /app

COPY api/requirements.txt api/api.py api/.flaskenv ./
RUN pip install -r ./requirements.txt
ENV FLASK_ENV production

EXPOSE 5000
CMD ["gunicorn", "-b", ":5000", "api:app"]

This container uses the official Python 3.9 image as a base. To keep the application nicely installed in its own directory it sets /app as the working directory. This means that all container paths that follow will be relative to this location.

The installation has three steps. First all the project files are copied over to the container's application directory, including the requirements file, the Python code and the .flaskenv file. In the second step pip is used to install all the requirements. The third step is really optional, but given that the .flaskenv file is configured for debug mode, I thought it would be best to override that, since this is supposed to be a production deployment. Note that the FLASK_ENV environment variable is only used when you start the server with the flask run command and we are not going to do that here, so this is more of a "just in case" type of thing.
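For reference, the .flaskenv file being copied here comes from part 1 of this series; if memory serves, its contents are along these lines (double-check against your own project):

```text
FLASK_APP=api.py
FLASK_ENV=development
```

The FLASK_ENV=development line is what puts the flask run command in debug mode, and it is the setting the Dockerfile's ENV statement overrides.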

To complete the Dockerfile, port 5000 is declared as a public port that needs to be exposed to the outside world, and then a start up command for Gunicorn is provided.
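One optional addition worth considering (not part of the original project, so adjust the entries to your own tree): a .dockerignore file in the project root keeps the build context small and prevents host artifacts such as node_modules or a virtualenv from being sent to the Docker daemon:

```text
# .dockerignore — hypothetical entries; match them to your project
node_modules
build
venv
__pycache__
*.pyc
.git
```

This matters more for the client builds later in this article, since node_modules directories tend to be large.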

If you want to run this container, you first need to build an image for it:

docker build -f Dockerfile.api -t react-flask-app-api .

Once the image is built, you can run an API container with the following command:

docker run --rm -p 5000:5000 react-flask-app-api  

With the container running, you can type http://localhost:5000/api/time into the address bar of your web browser to see how the API responds to your requests.

When you are done testing this container, press Ctrl-C to stop it. Running the API container manually is useful to test it, but we are going to use a more convenient method that also involves the front end for the real deployment.

Dockerizing the React client

The next step is to create a second Docker container that runs an nginx web server that serves the React front end to clients, and proxies requests for the API to the container we built in the previous section.

Unfortunately this container is a bit more complex than the previous one. Here is the Dockerfile definition, which I put in a Dockerfile.client file:

# Build step #1: build the React front end
FROM node:16-alpine as build-step
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json yarn.lock ./
COPY ./src ./src
COPY ./public ./public
RUN yarn install
RUN yarn build

# Build step #2: build an nginx container
FROM nginx:stable-alpine
COPY --from=build-step /app/build /usr/share/nginx/html
COPY deployment/nginx.default.conf /etc/nginx/conf.d/default.conf

This container definition file uses a relatively recent feature of Docker called multi-stage builds.

The first step of the build is based on the Node.js 16 image, which comes with node and yarn pre-installed. Once again we use /app as the work directory.

To build the client we first need to copy all the React project files to the container image. This includes the package.json and yarn.lock files, and the src and public directories with the source code and static files respectively. Then we install all the dependencies and build the client.

At this point, the production version of the React application is generated and stored in the /app/build directory. We could just install nginx and point it to this directory, but this is not ideal, because the container image has Node.js and yarn installed and taking up space, yet we do not need these tools anymore.

This is a common problem that multi-stage builds are designed to address. What we do next is start a second build step, which basically allows us to start a clean container image, in this case the official one for nginx.

When you have a multi-stage Dockerfile, the COPY command can bring in files from a previous build step. This is what we do to copy the /app/build directory to /usr/share/nginx/html, which is the directory where the nginx container expects the files it serves to be installed.

A second COPY command installs a custom nginx configuration file. If the container had to only serve static files, the default configuration would suffice, but in our case we also need to proxy requests for URLs that start with /api to the API container, so the default configuration is not sufficient. The custom nginx configuration is almost line by line a copy of the nginx configuration we used in part 2 of this series:

server {
    listen       80;
    server_name  localhost;

    root   /usr/share/nginx/html;
    index index.html;
    error_page   500 502 503 504  /50x.html;

    location / {
        try_files $uri $uri/ =404;
        add_header Cache-Control "no-cache";
    }

    location /static {
        expires 1y;
        add_header Cache-Control "public";
    }

    location /api {
        proxy_pass http://api:5000;
    }
}

One interesting change is that in this configuration the proxy_pass statement refers to the API service with the api hostname. This is because once we have the two containers running, they're going to be part of a private network under which each container can refer to the other by name.

You may have noticed that this Dockerfile does not have a CMD statement, which indicates what command the container needs to run when it starts. This is because the base nginx image already defines a start command for the container that launches nginx on port 80. This works well for our purposes, so there is no need to define our own start command.

You can build the container image with the following command:

docker build -f Dockerfile.client -t react-flask-app-client .

We are not going to start this container with docker run as we did with the API one. If you attempt it, you will get a failure from nginx, which is not going to recognize the http://api:5000 proxy URL. In the next section we are going to use Docker Compose to start the two containers together as part of a network.

Using Docker Compose to Orchestrate the Client and API Containers

The last step in standing up our two-container solution is to create a Docker Compose file that orchestrates the launch of the two containers as part of a network. Below you can see the docker-compose.yml file that achieves this:

services:
  api:
    build:
      context: .
      dockerfile: Dockerfile.api
    image: react-flask-app-api
  client:
    build:
      context: .
      dockerfile: Dockerfile.client
    image: react-flask-app-client
    ports:
      - "3000:80"

The keys under services define the containers that are going to be started as part of this deployment, which in our case are the two that we've built above, api and client.

For each container, the Docker Compose file allows us to define the build options, which is nice because we won't need to use the docker build command as we did before while testing. The context sub-key configures the root directory for the build, and the dockerfile sub-key tells Docker Compose which Dockerfile to use for that container.

Both containers also have an image key, which defines the name of the container image. The client container maps port 80, which is the port exposed by the nginx image, to port 3000 on the host computer, which is where we'll go with our browser to connect to the application. Note that the api container does not need to map any ports, because it is now an internal service that only needs to be reachable by the client container, but not from the outside world.

The networking between the containers is automatically set up by Docker Compose. This means that inside the client container the api hostname is going to be recognized and mapped to the api container.
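As an aside, the service definitions are easy to extend later. A hypothetical tweak, not part of this project, that adds a restart policy and an environment variable to the api service could look like this:

```yaml
services:
  api:
    build:
      context: .
      dockerfile: Dockerfile.api
    image: react-flask-app-api
    restart: always            # bring the service back up if it crashes
    environment:
      - API_LOG_LEVEL=info     # hypothetical variable, shown only as an example
```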

How do we run this? Just with the following command:

docker-compose up

This will build any of the containers if they are out of date, and then will start a private network with the client and api containers. Once you start seeing the logs from the containers on your terminal, open up your web browser and go to http://localhost:3000 to see the application in its full glory.

Creating a Single Container Deployment

The solution I described in the sections above is my preferred one, but sometimes you just want something quick and simple, and nothing says simple more than having the complete system, including the React client and the Flask API, all running in a single container. This solution maps to the Python web server option that I described in part 2.

The key to making a single container image is to use the same ideas presented above, including the multi-stage build, but replace nginx with Gunicorn, which doubles as an API and static file server. Here is the single-container Dockerfile, which I named Dockerfile.combo:

# Build step #1: build the React front end
FROM node:16-alpine as build-step
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json yarn.lock ./
COPY ./src ./src
COPY ./public ./public
RUN yarn install
RUN yarn build

# Build step #2: build the API with the client as static files
FROM python:3.9
WORKDIR /app
COPY --from=build-step /app/build ./build

RUN mkdir ./api
COPY api/requirements.txt api/api.py api/.flaskenv ./api/
RUN pip install -r ./api/requirements.txt
ENV FLASK_ENV production

EXPOSE 3000
WORKDIR /app/api
CMD ["gunicorn", "-b", ":3000", "api:app"]

The first build step in this Dockerfile is identical to the one we used for the React application above, since we need the same HTML, CSS and JavaScript files.

The second build step uses the Python 3.9 container as a base. It puts the React files from the first build step in the /app/build directory, and the Flask API files in /app/api. The reason we are using these directory names is that the API project has a static file configuration that uses the ../build directory. This was done in part 2 of the series to support the Gunicorn deployment option, and it is controlled by how the Flask instance was initialized:

app = Flask(__name__, static_folder='../build', static_url_path='/')
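To double-check the directory layout, here is a small stand-alone sketch (plain Python; the container paths are hard-coded only for illustration) showing where a static_folder of ../build resolves to when the application root is /app/api:

```python
import posixpath

# Paths used by the combined container image
app_root = "/app/api"        # working directory where api.py lives
static_folder = "../build"   # relative static_folder passed to Flask()

# Flask resolves the relative static_folder against the application root
resolved = posixpath.normpath(posixpath.join(app_root, static_folder))
print(resolved)  # /app/build
```

The result is /app/build, which is exactly where the first build step's output was copied, so Gunicorn can serve the React files directly.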

For this container image we are using port 3000, and we are starting from the /app/api work directory. Other than that the container works in the same way as the API container we used for the two-container solution.

You can build this container as follows:

docker build -f Dockerfile.combo -t react-flask-app .

With the container built, you can start a single-container deployment with this command (make sure you stop the Docker Compose deployment if you are still running it):

docker run --rm -p 3000:3000 react-flask-app

And now you can go to http://localhost:3000 in your browser to access this single-container deployment.
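If you prefer to manage the single-container deployment with Docker Compose as well, a minimal docker-compose.yml for it (a sketch, not part of the project's repository) might look like this:

```yaml
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile.combo
    image: react-flask-app
    ports:
      - "3000:3000"
```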

Conclusion

I hope you can appreciate how useful it is to script your deployments with Docker.

Do you have any other questions that are specific to React and Flask projects? Let me know below in the comments, because I'd love to continue this series with more topics!

21 comments

  • #1 Samuel said 2021-06-14T15:28:59Z

    Great stuff, Miguel! I was able to follow the same steps with only minor adjustments for a React app with a Django backend. I used to spin up two separate Docker containers and then had to deal with CORS... The process you suggested with docker-compose.yml makes everything a lot easier! Thanks so much!

  • #2 Samuel said 2021-06-15T08:10:47Z

    I actually do have a question: How do you make it work for local development with the Flask dev server (e.g. to enable auto-reload of the API service) with the same code base? Because when I run both the api and the client as separate services (e.g. 'flask run' and 'yarn start'), the fetch('/api/time') will fail, because both processes are available on different ports and it's not served by nginx, so no proxy_pass...

  • #3 Samuel said 2021-06-15T08:26:46Z

    Just realized you've already answered my question in a previous article :) Amazing, many thanks! https://blog.miguelgrinberg.com/post/how-to-create-a-react--flask-project

  • #4 Miguel Grinberg said 2021-06-15T13:31:19Z

    @Samuel: Yeah, this uses the proxy support in the Node web server, and as you noticed is covered in part 1 of this series.

  • #5 Cristian said 2021-07-02T12:35:17Z

    Hello Miguel,

    Great series of posts. They have been very useful to me. However, I am having trouble running this setup with HTTPS. I've tried many methods with certbot and letsencrypt, but they all did not work for me. I tried following this (https://www.digitalocean.com/community/tutorials/how-to-secure-a-containerized-node-js-application-with-nginx-let-s-encrypt-and-docker-compose) tutorial, but every time I would go to the webpage, it wouldn't be able to load the favicon or anything else from the website.

    Do you have any plans on making a post on this subject? Or could you maybe help me in incorporating HTTPS?

    Thank you.

  • #6 Miguel Grinberg said 2021-07-02T13:47:18Z

    @Cristian: I have written about deploying Flask applications on HTTPS here. That was not with Docker though, so you will need to adapt my instructions to your Docker set up.

  • #7 Shmuli said 2021-07-04T03:15:57Z

    Hey Miguel,

    Great article, wonder if you could make an article/tell me about using Docker for React + Flask in development. For example to have a volume to link code directory with the container so that hot-reloading works.

    Thanks

  • #8 Miguel Grinberg said 2021-07-04T11:07:11Z

    @Shmuli: I don't consider that a convenient way to run your project under development.

  • #9 Shmuli said 2021-07-04T22:57:41Z

    @Miguel, how would you recommend running a dev environment?

    I was imagining I would use 3 containers: frontend, backend and db (mongo or postgres). This would be very close to the actual deployment environment.

    How would you recommend I develop such a system?

  • #10 Miguel Grinberg said 2021-07-05T10:05:11Z

    @Shmuli: As I said above, I wouldn't use Docker during development. The only thing that makes sense in my view to Dockerize in development is the database.

  • #11 Cuajov6 said 2021-07-09T12:02:23Z

    Hola Miguel, thanks for the articles, most of the things you put in them are what I use every day. You are always ahead of my sight, congrats. Hmm, what are your reasons for using gunicorn and not, say, uwsgi? Both are great and I'm always looking for pros and cons for both of them.

  • #12 Miguel Grinberg said 2021-07-09T13:42:45Z

    @Cuajov6: uWSGI is not easy to install on some platforms such as Windows, or some of the Linux distros for Docker. Gunicorn runs everywhere (though for Windows you have to use the WSL).

  • #13 Kaha said 2021-07-14T18:22:00Z

    Thanks for the articles Miguel! Super useful! I have a question about dockerizing a react+flask app while using blueprints and an application factory for the flask side. I was able to make it work on gunicorn and nginx separately, but when I follow the instructions above to dockerize it, I'm getting "ImportError: cannot import name 'create_app' from partially initialized module ... (most likely due to a circular import)". I apologize if this is a very simple error, but I can't seem to be able to solve it. Has anyone else had similar trouble?

    For details, this is my entry point:

    from b_end import create_app

    app = create_app()

    and this is the __init__.py in the flask app directory:

    from flask import Flask
    from b_end.config import Config

    def create_app(config_class=Config):
        app = Flask(__name__, static_folder='../build', static_url_path='/')
        app.config.from_object(config_class)

        from b_end.test_bp import bp as test_bp
        app.register_blueprint(test_bp)

        return app
    
  • #14 Miguel Grinberg said 2021-07-15T14:03:35Z

    @Kaha: the problem is not in the code you are showing me. The stack trace of the error should give you a clue regarding where the circular import is in your code. Just follow the stack frames from the bottom to the top.

  • #15 Kaha said 2021-07-16T16:17:16Z

    Thanks Miguel! I was finally able to solve it through trial and error. Removing __init__.py and creating the app at the application entry point helped. When I tried to define create_app() in __init__.py (like in your mega tutorial) and then import and call create_app() from app.py it wouldn't work. It was complaining that the module where __init__.py resided was not yet initialized to import create_app().

    Another one that was tricky for me (and maybe someone will find it helpful) was pointing to the static folder from blueprints. Since I had individual blueprints nested in their respective folders, the only way I could make the app serve both React and Flask files was to point to the static folder from both places: (i) when creating the Flask application instance, and (ii) when creating a Blueprint.

    So the blueprint definition would look something like: bp = Blueprint('api', __name__, static_folder='../../build', static_url_path='/') And the app definition would be: app = Flask(__name__, static_folder='../build', static_url_path='/')

    Not sure if there is a more elegant way of doing this, or if this is very straightforward for more experienced developers, but took me a while to figure this out.

  • #16 Minsoo said 2021-08-19T06:57:29Z

    Thanks Miguel, it was such a good tutorial.

  • #17 Ahmed Wael said 2021-10-16T20:42:33Z

    Hey Miguel, great post as always! How different would this process be if we were using cross_origin in the backend instead of the proxying you did in the first installment?

  • #18 Miguel Grinberg said 2021-10-16T22:51:28Z

    @Ahmed: Not very different, you just need to configure the cross-origin access.

  • #19 Willow said 2021-11-06T21:45:22Z

    Hey Miguel! Yet another great tutorial.

    I've got this up and running on my local, but I'm having trouble finding any resources for how I can actually get this up and running on a cloud platform, like Digital Ocean.

    Do you have any recommendations for docs, or better yet, any chance of a tutorial on this from you?

  • #20 Miguel Grinberg said 2021-11-07T10:14:30Z

    @Willow: I covered how to deploy the React app using nginx in the 2nd part of this tutorial. That is something you can do in a Digital Ocean VM.

  • #21 Sam said 2021-11-19T20:01:57Z

    Thank you! Very Helpful.
