The Flask Mega-Tutorial Part XIX: Deployment on Docker Containers

This is the nineteenth installment of the Flask Mega-Tutorial series, in which I'm going to deploy Microblog to the Docker container platform.

For your reference, below is a list of the articles in this series.

Note 1: If you are looking for the legacy version of this tutorial, it's here.

Note 2: If you would like to support my work on this blog, or just don't have patience to wait for weekly articles, I am offering the complete version of this tutorial packaged as an ebook or a set of videos. For more information, visit courses.miguelgrinberg.com.

In Chapter 17 you learned about traditional deployments, in which you have to take care of every little aspect of the server configuration. Then in Chapter 18 I took you to the other extreme when I introduced you to Heroku, a service that takes complete control of the configuration and deployment tasks, allowing you to fully concentrate on your application. In this chapter you are going to learn about a third application deployment strategy based on containers, more particularly on the Docker container platform. This third option sits somewhere in between the other two in terms of the amount of deployment work needed on your part.

Containers are built on a lightweight virtualization technology that allows an application, along with its dependencies and configuration, to run in complete isolation, but without the need for a full blown virtualization solution such as virtual machines, which need a lot more resources and can sometimes have a significant performance degradation in comparison to the host. A system configured as a container host can execute many containers, all of them sharing the host's kernel and direct access to the host's hardware. This is in contrast to virtual machines, which have to emulate a complete system, including CPU, disk, other hardware, kernel, etc.

In spite of having to share the kernel, the level of isolation in a container is pretty high. A container has its own file system, and can be based on an operating system that is different than the one used by the container host. For example, you can run containers based on Ubuntu Linux on a Fedora host, or vice versa. While containers are a technology that is native to the Linux operating system, thanks to virtualization it is also possible to run Linux containers on Windows and Mac OS X hosts. This allows you to test your deployments on your development system, and also incorporate containers in your development workflow if you wish to do so.

The GitHub links for this chapter are: Browse, Zip, Diff.

Installing Docker CE

While Docker isn't the only container platform, it is by far the most popular, so that's going to be my choice. There are two editions of Docker, a free community edition (CE) and a subscription based enterprise edition (EE). For the purposes of this tutorial Docker CE is perfectly adequate.

To work with Docker CE, you first have to install it on your system. There are installers for Windows, Mac OS X and several Linux distributions available at the Docker website. If you are working on a Microsoft Windows system, it is important to note that Docker CE requires Hyper-V. The installer will enable this for you if necessary, but keep in mind that enabling Hyper-V prevents other virtualization technologies such as VirtualBox from working.

Once Docker CE is installed on your system, you can verify that the install was successful by typing the following command on a terminal window or command prompt:

$ docker version
Client:
 Version:      17.09.0-ce
 API version:  1.32
 Go version:   go1.8.3
 Git commit:   afdb6d4
 Built:        Tue Sep 26 22:40:09 2017
 OS/Arch:      darwin/amd64

Server:
 Version:      17.09.0-ce
 API version:  1.32 (minimum version 1.12)
 Go version:   go1.8.3
 Git commit:   afdb6d4
 Built:        Tue Sep 26 22:45:38 2017
 OS/Arch:      linux/amd64
 Experimental: true

Building a Container Image

The first step in creating a container for Microblog is to build an image for it. A container image is a template that is used to create a container. It contains a complete representation of the container file system, along with various settings pertaining to networking, start up options, etc.

The most basic way to create a container image for your application is to start a container for the base operating system you want to use (Ubuntu, Fedora, etc.), connect to a bash shell process running in it, and then manually install your application, maybe following the guidelines I presented in Chapter 17 for a traditional deployment. After you install everything, you can take a snapshot of the container and that becomes the image. This type of workflow is supported with the docker command, but I'm not going to discuss it because it is not convenient to have to manually install the application every time you need to generate a new image.

A better approach is to generate the container image through a script. The command that creates scripted container images is docker build. This command reads and executes build instructions from a file called Dockerfile, which I will need to create. The Dockerfile is basically an installer script of sorts that executes the installation steps to get the application deployed, plus some container specific settings.

Here is a basic Dockerfile for Microblog:

Dockerfile: Dockerfile for Microblog.

FROM python:3.6-alpine

RUN adduser -D microblog

WORKDIR /home/microblog

COPY requirements.txt requirements.txt
RUN python -m venv venv
RUN venv/bin/pip install -r requirements.txt
RUN venv/bin/pip install gunicorn

COPY app app
COPY migrations migrations
COPY microblog.py config.py boot.sh ./
RUN chmod +x boot.sh

ENV FLASK_APP microblog.py

RUN chown -R microblog:microblog ./
USER microblog

EXPOSE 5000

ENTRYPOINT ["./boot.sh"]

Each line in the Dockerfile is a command. The FROM command specifies the base container image on which the new image will be built. The idea is that you start from an existing image, add or change some things, and you end up with a derived image. Images are referenced by a name and a tag, separated by a colon. The tag is used as a versioning mechanism, allowing a container image to provide more than one variant. The name of my chosen image is python, which is the official Docker image for Python. The tags for this image allow you to specify the interpreter version and base operating system. The 3.6-alpine tag selects a Python 3.6 interpreter installed on Alpine Linux. The Alpine Linux distribution is often used instead of more popular ones such as Ubuntu because of its small size. You can see what tags are available for the Python image in the Python image repository.

The RUN command executes an arbitrary command in the context of the container. This would be similar to you typing the command in a shell prompt. The adduser -D microblog command creates a new user named microblog. Most container images have root as the default user, but it is not a good practice to run an application as root, so I create my own user.

The WORKDIR command sets a default directory where the application is going to be installed. When I created the microblog user above, a home directory was created, so now I'm making that directory the default. The new default directory is going to apply to any remaining commands in the Dockerfile, and also later when the container is executed.

The COPY command transfers files from your machine to the container file system. This command takes two or more arguments, the source and destination files or directories. The source file(s) must be relative to the directory where the Dockerfile is located. The destination can be an absolute path, or a path relative to the directory that was set in a previous WORKDIR command. In this first COPY command, I'm copying the requirements.txt file to the microblog user's home directory in the container file system.

Now that I have the requirements.txt file in the container, I can create a virtual environment, using the RUN command. First I create it, and then I install all the requirements in it. Because the requirements file contains only generic dependencies, I then explicitly install gunicorn, which I'm going to use as a web server. Alternatively, I could have added gunicorn to my requirements.txt file.

The three COPY commands that follow install the application in the container, by copying the app package, the migrations directory with the database migrations, and the microblog.py and config.py scripts from the top-level directory. I'm also copying a new file, boot.sh that I will discuss below.

The RUN chmod command ensures that this new boot.sh file is correctly set as an executable file. If you are on a Unix-based file system and your source file is already marked as executable, then the copied file will also have the executable bit set. I added an explicit chmod because on Windows it is harder to set executable bits. If you are working on Mac OS X or Linux you probably don't need this statement, but it does not hurt to have it anyway.
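If you want to see the executable bit in action outside of Docker, here is a small stand-alone experiment (boot_demo.sh is a throwaway name I made up, not part of the application):

```shell
# Create a small script; a freshly written file does not have the executable bit.
printf '#!/bin/sh\necho hello from boot_demo\n' > boot_demo.sh

# chmod +x turns on the executable bit, just like the RUN chmod in the Dockerfile.
chmod +x boot_demo.sh

# Now the script can be invoked directly.
./boot_demo.sh
```

Without the chmod step, the `./boot_demo.sh` invocation would fail with a "Permission denied" error, which is exactly what would happen to the container's entry point.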

The ENV command sets an environment variable inside the container. I need to set FLASK_APP, which is required to use the flask command.

The RUN chown command that follows sets the owner of all the directories and files that were stored in /home/microblog as the new microblog user. Even though I created this user near the top of the Dockerfile, the default user for all the commands remained root, so all these files need to be switched to the microblog user so that this user can work with them when the container is started.

The USER command in the next line makes this new microblog user the default for any subsequent instructions, and also for when the container is started.

The EXPOSE command configures the port that this container will be using for its server. This is necessary so that Docker can configure the network in the container appropriately. I've chosen the standard Flask port 5000, but this can be any port.

Finally, the ENTRYPOINT command defines the default command that should be executed when the container is started. This is the command that will start the application web server. To keep things well organized, I decided to create a separate script for this, and this is the boot.sh file that I copied to the container earlier. Here are the contents of this script:

boot.sh: Docker container start-up script.

#!/bin/sh
source venv/bin/activate
flask db upgrade
flask translate compile
exec gunicorn -b :5000 --access-logfile - --error-logfile - microblog:app

This is a fairly standard start-up script, similar to how the deployments in Chapter 17 and Chapter 18 were started. I activate the virtual environment, upgrade the database through the migration framework, compile the language translations, and finally run the server with gunicorn.

Note the exec that precedes the gunicorn command. In a shell script, exec triggers the process running the script to be replaced with the command given, instead of starting it as a new process. This is important, because Docker associates the life of the container to the first process that runs on it. In cases like this one, where the start up process is not the main process of the container, you need to make sure that the main process takes the place of that first process to ensure that the container is not terminated early by Docker.
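To see what exec does outside of Docker, here is a small shell experiment, unrelated to the application itself. Each outer shell prints its own PID, then starts an inner shell with and without exec:

```shell
# Without exec, the inner shell is a forked child with its own PID.
# (The trailing "true" keeps the outer shell from exec-ing the last
# command itself as an optimization.)
no_exec=$(sh -c 'echo $$; sh -c "echo \$\$"; true')

# With exec, the inner shell replaces the outer one and keeps the same PID.
with_exec=$(sh -c 'echo $$; exec sh -c "echo \$\$"')

echo "$no_exec"    # two different PIDs
echo "$with_exec"  # the same PID printed twice
```

In the container, the same mechanism ensures that gunicorn ends up as the process Docker is watching, instead of a short-lived shell.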

An interesting aspect of Docker is that anything that the container writes to stdout or stderr will be captured and stored as logs for the container. For that reason, the --access-logfile and --error-logfile options are both configured with a -, which sends the logs to standard output so that they are stored as logs by Docker.

With the Dockerfile created, I can now build a container image:

$ docker build -t microblog:latest .

The -t argument that I'm giving to the docker build command sets the name and tag for the new container image. The . indicates the base directory where the image is to be built, sometimes called the build context. This is the directory where the Dockerfile is located. The build process is going to evaluate all the commands in the Dockerfile and create the image, which will be stored on your own machine.

You can obtain a list of the images that you have locally with the docker images command:

$ docker images
REPOSITORY    TAG          IMAGE ID        CREATED              SIZE
microblog     latest       54a47d0c27cf    About a minute ago   216MB
python        3.6-alpine   a6beab4fa70b    3 months ago         88.7MB

This listing will include your new image, and also the base image on which it was built. Any time you make changes to the application, you can update the container image by running the build command again.

Starting a Container

With an image already created, you can now run the container version of the application. This is done with the docker run command, which usually takes a large number of arguments. I'm going to start by showing you a basic example:

$ docker run --name microblog -d -p 8000:5000 --rm microblog:latest

The --name option provides a name for the new container. The -d option tells Docker to run the container in the background. Without -d the container runs as a foreground application, blocking your command prompt. The -p option maps container ports to host ports. The first port is the port on the host computer, and the one on the right is the port inside the container. The above example exposes port 5000 in the container on port 8000 in the host, so you will access the application on 8000, even though internally the container is using 5000. The --rm option will delete the container once it is terminated. While this isn't required, containers that finish or are interrupted are usually not needed anymore, so they can be automatically deleted. The last argument is the container image name and tag to use for the container. After you run the above command, you can access the application at http://localhost:8000.

The output of docker run is the ID assigned to the new container. This is a long hexadecimal string that you can use whenever you need to refer to the container in subsequent commands. In fact, only the first few characters are necessary, enough to make the ID unique.

If you want to see what containers are running, you can use the docker ps command:

$ docker ps
CONTAINER ID  IMAGE             COMMAND      PORTS                    NAMES
021da2e1e0d3  microblog:latest  "./boot.sh"  0.0.0.0:8000->5000/tcp   microblog

You can see that even the docker ps command shortens container IDs. If you now want to stop the container, you can use docker stop:

$ docker stop 021da2e1e0d3

If you recall, there are a number of options in the application's configuration that are sourced from environment variables. For example, the Flask secret key, database URL and email server options are all imported from environment variables. In the docker run example above I have not worried about those, so all those configuration options are going to use defaults.

In a more realistic example, you will be setting those environment variables inside the container. You saw in the previous section that the ENV command in the Dockerfile sets environment variables, and it is a handy option for variables that are going to be static. For variables that depend on the installation, however, it isn't convenient to have them as part of the build process, because you want to have a container image that is fairly portable. If you want to give your application to another person as a container image, you would want that person to be able to use it as is, and not have to rebuild it with different variables.

So build-time environment variables can be useful, but there is also a need to have run-time environment variables that can be set via the docker run command, and for these variables, the -e option can be used. The following example sets a secret key and sends email through a gmail account:

$ docker run --name microblog -d -p 8000:5000 --rm -e SECRET_KEY=my-secret-key \
    -e MAIL_SERVER=smtp.googlemail.com -e MAIL_PORT=587 -e MAIL_USE_TLS=true \
    -e MAIL_USERNAME=<your-gmail-username> -e MAIL_PASSWORD=<your-gmail-password> \
    microblog:latest

It is not uncommon for docker run command lines to be extremely long due to having many environment variable definitions.
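If these long command lines bother you, docker run also accepts an --env-file option that reads variables from a file, one VAR=value per line. This is an alternative the chapter does not use, and microblog.env is a name I made up for illustration:

```shell
# Write the variables to a file, one VAR=value per line (no quoting needed).
cat > microblog.env <<'EOF'
SECRET_KEY=my-secret-key
MAIL_SERVER=smtp.googlemail.com
MAIL_PORT=587
MAIL_USE_TLS=true
EOF

# The run command then shrinks to something like (shown here, not executed):
# docker run --name microblog -d -p 8000:5000 --rm \
#     --env-file microblog.env microblog:latest
```

Keep in mind that a file like this holds secrets, so it should stay out of version control.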

Using Third-Party "Containerized" Services

The container version of Microblog is looking good, but I haven't really thought much about storage yet. In fact, since I haven't set a DATABASE_URL environment variable, the application is using the default SQLite database, which is supported by a file on disk. What do you think is going to happen to that SQLite file when you stop and delete the container? The file is going to disappear!

The file system in a container is ephemeral, meaning that it goes away when the container goes away. You can write data to the file system, and the data is going to be there if the container needs to read it, but if for any reason you need to recycle your container and replace it with a new one, any data that the application saved to disk is going to be lost forever.

A good design strategy for a containerized application is to make the application containers stateless. If you have a container that has application code and no data, you can throw it away and replace it with a new one without any problems; the container becomes truly disposable, which is great in terms of simplifying the deployment of upgrades.

But of course, this means that the data must be put somewhere outside of the application container. This is where the fantastic Docker ecosystem comes into play. The Docker Container Registry contains a large variety of container images. I have already told you about the Python container image, which I'm using as a base image for my Microblog container. In addition, Docker maintains images for many other languages, databases and other services in the Docker registry. If that isn't enough, the registry also allows companies to publish container images for their products, and regular users like you or me to publish their own images. That means that the effort to install a third-party service is reduced to finding an appropriate image in the registry, and starting it with a docker run command with the proper arguments.

So what I'm going to do now is create two additional containers, one for a MySQL database, and another one for the Elasticsearch service, and then I'm going to make the command line that starts the Microblog container even longer with options that enable it to access these two new containers.

Adding a MySQL Container

Like many other products and services, MySQL has public container images available on the Docker registry. Like my own Microblog container, MySQL relies on environment variables that need to be passed to docker run. These configure passwords, database names etc. While there are many MySQL images in the registry, I decided to use one that is officially maintained by the MySQL team. You can find detailed information about the MySQL container image in its registry page: https://hub.docker.com/r/mysql/mysql-server/.

If you remember the laborious process to set up MySQL in Chapter 17, you are going to appreciate Docker when you see how easy it is to deploy MySQL. Here is the docker run command that starts a MySQL server:

$ docker run --name mysql -d -e MYSQL_RANDOM_ROOT_PASSWORD=yes \
    -e MYSQL_DATABASE=microblog -e MYSQL_USER=microblog \
    -e MYSQL_PASSWORD=<database-password> \
    mysql/mysql-server:5.7

That is it! On any machine that you have Docker installed, you can run the above command and you'll get a fully installed MySQL server with a randomly generated root password, a brand new database called microblog, and a user with the same name that is configured with full permissions to access the database. Note that you will need to enter a proper password as the value for the MYSQL_PASSWORD environment variable.

Now on the application side, I need to add a MySQL client package, like I did for the traditional deployment on Ubuntu. I'm going to use pymysql once again, which I can add to the Dockerfile:

Dockerfile: Add pymysql to Dockerfile.

# ...
RUN venv/bin/pip install gunicorn pymysql
# ...

Any time a change is made to the application or the Dockerfile, the container image needs to be rebuilt:

$ docker build -t microblog:latest .

And now I can start Microblog again, but this time with a link to the database container so that both can communicate through the network:

$ docker run --name microblog -d -p 8000:5000 --rm -e SECRET_KEY=my-secret-key \
    -e MAIL_SERVER=smtp.googlemail.com -e MAIL_PORT=587 -e MAIL_USE_TLS=true \
    -e MAIL_USERNAME=<your-gmail-username> -e MAIL_PASSWORD=<your-gmail-password> \
    --link mysql:dbserver \
    -e DATABASE_URL=mysql+pymysql://microblog:<database-password>@dbserver/microblog \
    microblog:latest

The --link option tells Docker to make another container accessible to this one. The argument contains two names separated by a colon. The first part is the name or ID of the container to link, in this case the one named mysql that I created above. The second part defines a hostname that can be used in this container to refer to the linked one. Here I'm using dbserver as a generic name that represents the database server.

With the link between the two containers established, I can set the DATABASE_URL environment variable so that SQLAlchemy is directed to use the MySQL database in the other container. The database URL is going to use dbserver as the database hostname, microblog as the database name and user, and the password that you selected when you started MySQL.

One thing I noticed when I was experimenting with the MySQL container is that it takes a few seconds for this container to be fully running and ready to accept database connections. If you start the MySQL container and then start the application container immediately after, when the boot.sh script tries to run flask db upgrade it may fail due to the database not being ready to accept connections. To make my solution more robust, I decided to add a retry loop in boot.sh:

boot.sh: Retry database connection.

#!/bin/sh
source venv/bin/activate
while true; do
    flask db upgrade
    if [ "$?" = "0" ]; then
        break
    fi
    echo Upgrade command failed, retrying in 5 secs...
    sleep 5
done
flask translate compile
exec gunicorn -b :5000 --access-logfile - --error-logfile - microblog:app

This loop checks the exit code of the flask db upgrade command, and if it is non-zero it assumes that something went wrong, so it waits five seconds and then retries.
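You can try the retry pattern by itself, with a counter standing in for the flask db upgrade command. This sketch "fails" twice and then succeeds, with the sleep omitted so that it runs instantly:

```shell
attempts=0
while true; do
    attempts=$((attempts + 1))
    # Stand-in for "flask db upgrade": pretend it fails on the
    # first two attempts and succeeds on the third.
    if [ "$attempts" -ge 3 ]; then
        break
    fi
    echo "Upgrade command failed, retrying in 5 secs..."
    # (A real script would "sleep 5" here.)
done
echo "Succeeded after $attempts attempts"
```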

Adding an Elasticsearch Container

The Elasticsearch documentation for Docker shows how to run the service as a single-node for development, and as a two-node production-ready deployment. For now I'm going to go with the single-node option and use the "oss" image, which only has the open source engine. The container is started with the following command:

$ docker run --name elasticsearch -d -p 9200:9200 -p 9300:9300 --rm \
    -e "discovery.type=single-node" \
    docker.elastic.co/elasticsearch/elasticsearch-oss:6.2.4

This docker run command has many similarities with the ones I've used for Microblog and MySQL, but there are a couple of interesting differences. First, there are two -p options, which means that this container is going to listen on two ports instead of just one. Both ports 9200 and 9300 are mapped to the same ports in the host machine.

The other difference is in the syntax used to refer to the container image. For the images that I've been building locally, the syntax was <name>:<tag>. The MySQL container uses a slightly more complete syntax with the format <account>/<name>:<tag>, which is appropriate to reference container images on the Docker registry. The Elasticsearch image that I'm using follows the pattern <registry>/<account>/<name>:<tag>, which includes the address of the registry as the first component. This syntax is used for images that are not hosted in the Docker registry. In this case Elasticsearch runs their own container registry service at docker.elastic.co instead of using the main registry maintained by Docker.

So now that I have the Elasticsearch service up and running, I can modify the start command for my Microblog container to create a link to it and set the Elasticsearch service URL:

$ docker run --name microblog -d -p 8000:5000 --rm -e SECRET_KEY=my-secret-key \
    -e MAIL_SERVER=smtp.googlemail.com -e MAIL_PORT=587 -e MAIL_USE_TLS=true \
    -e MAIL_USERNAME=<your-gmail-username> -e MAIL_PASSWORD=<your-gmail-password> \
    --link mysql:dbserver \
    -e DATABASE_URL=mysql+pymysql://microblog:<database-password>@dbserver/microblog \
    --link elasticsearch:elasticsearch \
    -e ELASTICSEARCH_URL=http://elasticsearch:9200 \
    microblog:latest

Before you run this command, remember to stop your previous Microblog container if you still have it running. Also be careful in setting the correct passwords for the database and the Elasticsearch service in the proper places in the command.

Now you should be able to visit http://localhost:8000 and use the search feature. If you experience any errors, you can troubleshoot them by looking at the container logs. You'll most likely want to see logs for the Microblog container, where any Python stack traces will appear:

$ docker logs microblog

The Docker Container Registry

So now I have the complete application up and running on Docker, using three containers, two of which come from publicly available third-party images. If you would like to make your own container images available to others, then you have to push them to the Docker registry from where anybody can obtain images.

To have access to the Docker registry you need to go to https://hub.docker.com and create an account for yourself. Make sure you pick a username that you like, because that is going to be used in all the images that you publish.

To be able to access your account from the command line, you need to log in with the docker login command:

$ docker login

If you've been following my instructions, you now have an image called microblog:latest stored locally on your computer. To be able to push this image to the Docker registry, it needs to be renamed to include the account, like the image from MySQL. This is done with the docker tag command:

$ docker tag microblog:latest <your-docker-registry-account>/microblog:latest

If you list your images again with docker images you are now going to see two entries for Microblog, the original one with the microblog:latest name, and a new one that also includes your account name. These are really two aliases for the same image.

To publish your image to the Docker registry, use the docker push command:

$ docker push <your-docker-registry-account>/microblog:latest

Now your image is publicly available, and you can document how to install and run it from the Docker registry in the same way MySQL and others do.

Deployment of Containerized Applications

One of the best things about having your application running in Docker containers is that once you have the containers tested locally, you can take them to any platform that offers Docker support. For example, you could use the same servers I recommended in Chapter 17 from Digital Ocean, Linode or Amazon Lightsail. Even the cheapest offering from these providers is sufficient to run Docker with a handful of containers.

The Amazon Container Service (ECS) gives you the ability to create a cluster of container hosts on which to run your containers, in a fully integrated AWS environment, with support for scaling and load balancing, plus the option to use a private container registry for your container images.

Finally, a container orchestration platform such as Kubernetes provides an even greater level of automation and convenience, by allowing you to describe your multi-container deployments in simple text files in YAML format, with load balancing, scaling, secure management of secrets and rolling upgrades and rollbacks.


  • #176 Gitau Harrison said 2020-10-30T10:24:45Z

    Creating a new project, I am not sure where the error Failed to find attribute 'app' in 'gossip_app' comes from. gossip_app is both the new user and my virtual env. Dockerfile and boot.sh file are similar in all aspects with the tutorial.

    [2020-10-30 09:54:41,411] INFO in __init__: Tinker ChatApp
    INFO [alembic.runtime.migration] Context impl SQLiteImpl.
    INFO [alembic.runtime.migration] Will assume non-transactional DDL.
    INFO [alembic.runtime.migration] Running upgrade -> a6664359912e, Add user table
    INFO [alembic.runtime.migration] Running upgrade a6664359912e -> 6b7e8489b0ee, Add post table
    INFO [alembic.runtime.migration] Running upgrade 6b7e8489b0ee -> a8b409f2c772, Add fields to user table
    INFO [alembic.runtime.migration] Running upgrade a8b409f2c772 -> 2a3289732dbc, add followed field in user table
    INFO [alembic.runtime.migration] Running upgrade 2a3289732dbc -> d055880868f7, Add language field to post table
    [2020-10-30 09:54:42,426] INFO in __init__: Tinker ChatApp
    compiling catalog app/translations/sw/LC_MESSAGES/messages.po to app/translations/sw/LC_MESSAGES/messages.mo
    compiling catalog app/translations/zh/LC_MESSAGES/messages.po to app/translations/zh/LC_MESSAGES/messages.mo
    [2020-10-30 09:54:42 +0000] [1] [INFO] Starting gunicorn 20.0.4
    [2020-10-30 09:54:42 +0000] [1] [INFO] Listening at: (1)
    [2020-10-30 09:54:42 +0000] [1] [INFO] Using worker: sync
    [2020-10-30 09:54:42 +0000] [12] [INFO] Booting worker with pid: 12
    **Failed to find attribute 'app' in 'gossip_app'**
    [2020-10-30 09:54:42 +0000] [12] [INFO] Worker exiting (pid: 12)
    [2020-10-30 09:54:42 +0000] [1] [INFO] Shutting down: Master
    [2020-10-30 09:54:42 +0000] [1] [INFO] Reason: App failed to load.
  • #177 Miguel Grinberg said 2020-10-30T10:46:10Z

    @Gitau: I'm sorry but I'm really having a hard time following what you are doing. There are three completely unrelated questions that you posted above, and some details are still unclear to me.

    > localhost didn't send any data is the feedback I get when I try to say log into the app running on localhost:8000

    What do you mean by "feedback" here? Who is giving you this feedback? What do you mean by "when I say log into the app"?

    2nd question: the "chown" command changes the ownership of files. The "a:b" notation indicates that "a" is the new owner, and "b" is the new group of the file. These can be anything you want, they hold no relation to the project name or anything else. As long as you are consistent you can use any username and any group name that you like.

    3rd question: the "failed to find attribute 'app' in 'gossip_app'" is gunicorn telling you that there is no Flask app instance in file gossip_app.py.

  • #178 Gitau Harrison said 2020-10-30T13:43:14Z

    Thank you for your input Miguel.

    With regards to the error Failed to find attribute 'app' in 'gossip_app', I have the application file tinker.py in the root directory. This is how my new app is structured:

    project_name
    |------------ app/
    |------------ boot.sh
    |------------ Dockerfile
    |------------ config.py
    |------------ tinker.py
    |------------ requirements.txt

    My Dockerfile has the user called gossip_app created as follows:

    FROM python:3.8-alpine
    RUN adduser -D gossip_app
    WORKDIR /home/software_development/python/current_projects/1_work_gossip_chat_app
    COPY requirements.txt requirements.txt
    RUN python -m venv gossip_app
    RUN gossip_app/bin/python3 -m pip install --upgrade pip
    RUN \
        # install psycopg2 dependancies
        apk update && \
        apk add postgresql-dev gcc python3-dev musl-dev && \
        # then install your requirements
        gossip_app/bin/pip3 install -r requirements.txt && \
        gossip_app/bin/pip3 install gunicorn pymysql
    # I have psycopg2 installed due to the use of Postgre while deploying the app on Heroku
    COPY app app
    COPY migrations migrations
    COPY tinker.py config.py boot.sh ./
    RUN chmod +x boot.sh
    ENV FLASK_APP tinker.py
    RUN chown -R gossip_app:gossip_app ./
    USER gossip_app
    EXPOSE 5000
    ENTRYPOINT [ "./boot.sh" ]

    app in tinker.py is defined as:

    from app import db, create_app, cli
    from app.models import User, Post

    app = create_app()
    cli.register(app)

    @app.shell_context_processor
    def make_shell_context():
        return {'db': db, 'User': User, 'Post': Post}

    So, app not being found in gossip_app, which to my knowledge is the new user (I have used the same name for my virtual environment), is confusing. I do not have gossip_app.py, but rather tinker.py, in the root directory.

  • #179 Miguel Grinberg said 2020-10-30T14:57:26Z

    @Gitau: None of what you are showing me is related to the problem. As I said above, you are telling Gunicorn to load the app instance from a file called gossip_app. Review your Gunicorn command.

  • #180 Joel said 2020-11-12T07:19:25Z

    Dear Miguel, first of all: thank you for this tutorial! I have two questions. Wouldn't it be good practice to use docker-compose in the case of multiple containers? And what does the migration process look like? The problem I have is that the versions folder with the migration scripts is created inside the Docker container and not on the host machine (which is the one where git is running). So when I take down the containers, the folder is gone.

  • #181 Miguel Grinberg said 2020-11-12T11:20:31Z

    @Joel: Yes, you can use docker-compose or even something more advanced such as Kubernetes to automate and orchestrate a container deployment.

    You seem to be wanting to run everything inside containers, including your development version. That is not what this tutorial recommends. If you plan on running your migrations from inside Docker you will need to mount a directory with the source code that is stored in the host, so that the changes are not lost when the container is gone.
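A sketch of that idea, with hypothetical paths: bind-mounting the migrations folder means files generated inside the container are written to the host and survive container removal.

```shell
# hypothetical paths: files written to /home/microblog/migrations inside
# the container now persist in ./migrations on the host
docker run -v "$(pwd)/migrations":/home/microblog/migrations microblog:latest
```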

  • #182 Luc B. said 2021-01-18T18:35:32Z

    Hi Miguel !

    I implemented my flask app using docker-compose, something similar to your flasky !

    here is the docker-compose.prod.yml file (I'm using armv6 images to put this on a raspberry pi zero wh)

    version: "3"
    services:
        app:
            image: "lucbertin/app:rasp0"  # private repo
            env_file:
                - .env
                - .env-mysql
            ports:
                - 80:5000
            depends_on:
                - db
        db:
            image: "hypriot/rpi-mysql:5.5"
            env_file: .env-mysql
            volumes:
                - db_data:/var/lib/mysql
    volumes:
        db_data:

    The application works perfectly fine, except for the mail sending through Gmail (note that it did work on my computer in the first place, without using Docker).

    here is the traceback:

    app_1 | Exception in thread Thread-2:
    app_1 | Traceback (most recent call last):
    app_1 |   File "/usr/local/lib/python3.8/threading.py", line 932, in _bootstrap_inner
    app_1 |     self.run()
    app_1 |   File "/usr/local/lib/python3.8/threading.py", line 870, in run
    app_1 |     self._target(*self._args, **self._kwargs)
    app_1 |   File "/home/painterly/app/email.py", line 9, in send_async_email
    app_1 |     mail.send(msg)
    app_1 |   File "/usr/local/lib/python3.8/site-packages/flask_mail.py", line 491, in send
    app_1 |     with self.connect() as connection:
    app_1 |   File "/usr/local/lib/python3.8/site-packages/flask_mail.py", line 144, in __enter__
    app_1 |     self.host = self.configure_host()
    app_1 |   File "/usr/local/lib/python3.8/site-packages/flask_mail.py", line 158, in configure_host
    app_1 |     host = smtplib.SMTP(self.mail.server, self.mail.port)
    app_1 |   File "/usr/local/lib/python3.8/smtplib.py", line 253, in __init__
    app_1 |     (code, msg) = self.connect(host, port)
    app_1 |   File "/usr/local/lib/python3.8/smtplib.py", line 339, in connect
    app_1 |     self.sock = self._get_socket(host, port, self.timeout)
    app_1 |   File "/usr/local/lib/python3.8/smtplib.py", line 308, in _get_socket
    app_1 |     return socket.create_connection((host, port), timeout,
    app_1 |   File "/usr/local/lib/python3.8/socket.py", line 808, in create_connection
    app_1 |     raise err
    app_1 |   File "/usr/local/lib/python3.8/socket.py", line 796, in create_connection
    app_1 |     sock.connect(sa)
    app_1 | ConnectionRefusedError: [Errno 111] Connection refused

    I don't think docker is the issue here, as it shouldn't block outgoing connections to port 587 to gmail smtp server, but who knows...

    Note that:

    - My Raspberry Pi is in my LAN.
    - There is port forwarding 80 -> 80 and 443 -> 443 from my router to the Pi, to give access to the app.
    - DynDNS is set up so I can connect to the app from something like "domain.com" rather than the public dynamic IP of my router.
    - I set up the UFW firewall on the Pi the same way as your tutorial on Linux deployment (maybe that might be blocking outgoing connections?).
    - After cross-checking, email.py is the same as yours in the flasky project.

    let me know if you have some insights ! Best, Luc

  • #183 Miguel Grinberg said 2021-01-19T18:38:51Z

    @Luc: might be related to bad Gmail credentials. If you run the Flask app in debug mode the logs from the smtp module may help you understand what's going on better.

  • #184 Przemator said 2021-01-24T19:28:25Z

    What's really confusing to me is the replacement of sqlite with mysql. If the mysql database lives in a separate volume, then where is this volume located, and how do I access it? If I ever move to a different cloud service, how do I take the database with me? Or what if I'd like to develop locally using a copy of the production database? And is it not possible to keep the sqlite database on a separate volume that we would configure within our own container?

  • #185 Miguel Grinberg said 2021-01-24T22:53:51Z

    @Przemator: You can access the MySQL volume using Docker CLI or API. You can export it and migrate it to another computer if necessary. You can alternatively use MySQL's own exporting and importing commands.
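A sketch of the export-with-MySQL's-own-tools option (container, user, and database names here are invented for illustration):

```shell
# hypothetical names: dump the database from the running MySQL container
# to a file on the host, which can then be imported elsewhere
docker exec mysql mysqldump -u microblog -p microblog > microblog-backup.sql
```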

    If you really like sqlite and want to use it for production, you can find a way to make it work, either with or without Docker. I wouldn't personally recommend this over using a proper database server, but if that's what you want to do I don't see why not.
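One way the SQLite-on-a-volume idea could be sketched (volume name, paths, and the DATABASE_URL convention follow the tutorial's config, but are assumptions here):

```shell
# create a named volume and point the app's database at a file inside it
docker volume create microblog-data
docker run --name microblog -d -p 8000:5000 \
    -v microblog-data:/home/microblog/data \
    -e DATABASE_URL=sqlite:////home/microblog/data/app.db \
    microblog:latest
```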

  • #186 Jordan said 2021-04-11T12:56:11Z

    Hi Miguel,

    I've noticed that the chown -R command is extremely slow with large applications. It's also sitting at the end of the file, so the image layer can't be cached.

    Would there be anything wrong with an alternative approach below, that reduces the number of files needed to be moved from root -> nonroot?

    FROM python
    # create the new user and perform root actions
    RUN useradd --create-home --shell /bin/bash microblog
    WORKDIR /home/microblog
    COPY boot.sh boot.sh
    RUN chmod u+x boot.sh
    # chown is now only on a small selection of files
    RUN chown -R microblog:microblog ./
    # switch to the user
    USER microblog
    # rest of dockerfile
    COPY requirements.txt requirements.txt
    RUN python -m venv venv
    RUN venv/bin/pip install --default-timeout=100 -r requirements.txt
    RUN venv/bin/pip install gunicorn
    COPY app app
    COPY migrations migrations
    COPY microblog.py config.py ./
    EXPOSE 5000
    ENTRYPOINT ["./boot.sh"]
  • #187 Miguel Grinberg said 2021-04-11T18:26:31Z

    @Jordan: I think your alternative is fine, can't see anything wrong with it.

  • #188 Tim said 2021-04-17T15:40:00Z

    Hi Miguel May I suggest an improvement to your Dockerfile?

    Specifically, this line makes my container image bigger by 100MB:

    RUN chown -R microblog:microblog ./

    It also massively impacts the speed of image builds.

    I have checked Google for similar symptoms, and apparently when a recursive chown is executed in a Dockerfile, all the files are copied (duplicated) into a new layer with the new ownership. The reason for the slowness and size increase is that your venv is part of this recursive chown process, and A LOT of files are processed in that folder.

    Some people suggest to do COPY --chown=<user>:<group> <srv> <dst>, which does work, but I did it slightly differently:

    I've added this to my Dockerfile, right after where you install gunicorn

    RUN mkdir ./app
    WORKDIR /home/microblog/app

    Everything else is not changed, in boot.sh I have added this

    #!/usr/bin/env bash
    source /home/microblog/venv/bin/activate

    So, my virtual env is one level higher than the application and its modules:

    /home/microblog/venv
    /home/microblog/app

    This massively improved build time and decreased image size by 100MB, yet everything works fine :)

  • #189 Miguel Grinberg said 2021-04-17T18:02:25Z

    @Tim: the problem with your solution is that all the files in the virtual environment remain under the root account. In general this isn't a problem, and as you say, it is likely that everything will work, but it feels wrong to me. An easier solution that I think avoids the chown command is to move the USER command right after the adduser, and then all the file operations will be done under the microblog user and a chown will not be needed.
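A minimal sketch of that reordering, assuming an Alpine base image. One caveat worth noting: COPY does not honor USER (files are created as root unless told otherwise), so the copies still need --chown, while RUN commands do execute as the new user.

```dockerfile
# sketch only: switch users immediately after creating the account
FROM python:3.8-alpine
RUN adduser -D microblog
USER microblog
WORKDIR /home/microblog
# COPY ignores USER, so --chown is still needed on the copies
COPY --chown=microblog:microblog requirements.txt requirements.txt
RUN python -m venv venv
RUN venv/bin/pip install -r requirements.txt gunicorn
COPY --chown=microblog:microblog app app
COPY --chown=microblog:microblog migrations migrations
COPY --chown=microblog:microblog microblog.py config.py boot.sh ./
EXPOSE 5000
ENTRYPOINT ["./boot.sh"]
```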

  • #190 Tim said 2021-04-17T18:34:31Z

    Yes, what you say does make sense. I'll try what you've suggested. Think this will be the best approach, providing it also works :) Thanks a lot

  • #191 Tim said 2021-04-17T19:35:57Z

    Yes, you're right.

    I've just patched my Dockerfile to this

    FROM python:3.7-slim

    RUN adduser --disabled-password --gecos '' byod
    RUN apt-get update
    RUN apt-get install -y gcc

    WORKDIR /home/byod

    COPY --chown=byod:byod requirements.txt requirements.txt
    COPY --chown=byod:byod boot.sh ./

    RUN chmod +x boot.sh

    USER byod

    RUN python -m venv venv
    RUN venv/bin/pip install --no-cache-dir -r requirements.txt
    RUN venv/bin/pip install --no-cache-dir cryptography
    RUN venv/bin/pip install --no-cache-dir gunicorn[gevent]

    COPY app app
    COPY ers ers
    COPY migrations migrations
    COPY wsgi.py config.py ./

    ENV FLASK_APP wsgi.py

    EXPOSE 5000
    ENTRYPOINT ["./boot.sh"]

    The key elements are

    - I had to install dependencies (a C compiler), so this must be done as root, since Docker has no sudo.
    - I then copied requirements.txt and boot.sh under my user (using --chown).
    - Then I set boot.sh as executable while still root (as this won't work under a normal user).

    Then switched the user and performed the rest under my user. It works, builds very fast and is also small in size.

    Thanks for pointing in the right direction

  • #192 Miguel Grinberg said 2021-04-18T11:30:11Z

    @Tim: if you are concerned about image size, then you should look at ways to either not needing to install gcc (which I presume is only needed during image build time, but not while the container runs), or at least removing it after your build is complete. Using a multi-stage build might be a good solution for you if you cannot eliminate the need for gcc.
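A rough multi-stage sketch of that suggestion (names and paths carried over from the comment above; the exact split is an assumption): the first stage has gcc and builds the virtual environment, and the final image copies only the finished venv.

```dockerfile
# build stage: gcc exists only here and never reaches the final image
FROM python:3.7-slim AS builder
RUN apt-get update && apt-get install -y gcc
COPY requirements.txt requirements.txt
RUN python -m venv /home/byod/venv
RUN /home/byod/venv/bin/pip install --no-cache-dir -r requirements.txt

# final stage: no compiler, just the ready-made virtual environment
FROM python:3.7-slim
RUN adduser --disabled-password --gecos '' byod
WORKDIR /home/byod
COPY --from=builder --chown=byod:byod /home/byod/venv venv
USER byod
# ... COPY application files, EXPOSE and ENTRYPOINT as before
```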

  • #193 Jesse said 2021-04-21T00:44:03Z

    Miguel, I've had a lot of success so far. I have set up persistent storage for MySQL on my host with the -v /path:/mount option. After making a couple of posts, then removing and restarting all the services with docker run (1. Elastic, 2. MySQL, 3. Blog), I can still log in and see posts (so MySQL persistence works)... BUT I get an error from Elastic:

    raise HTTP_EXCEPTIONS.get(status_code, TransportError)
    elasticsearch.exceptions.NotFoundError: NotFoundError(404, 'index_not_found_exception', 'no such index [post]', 'post', 'index_or_alias')

    I assume this is due to not running the flask shell then Post.reindex().

    1. Is there a right way to do this? (should I "reindex" as part of the Dockerfile build? Maybe that is a bad practice?)
    2. Should Elastic have a persistent volume like MySQL?

    Thanks for the great course.

  • #194 Jakob Lilliemarck said 2021-04-21T05:01:54Z

    @Miguel - first of all, thank you for this amazing tutorial series (and many others)! It has played a key part in me learning to create backend applications with Python, and in my switching careers to development. It's been invaluable, exciting and fun :) many thanks.

    I have been having some issues with the boot.sh script above, and believe that it has to do with the shebang being "#!/bin/sh" instead of "#!/bin/bash", since the line [[ "$?" == "0" ]] does not run in a regular sh shell (the first shebang). I've modified it like so, and got it to work:

    #!/bin/bash
    source venv/bin/activate
    while true; do
        flask db upgrade
        if [[ "$?" == "0" ]]; then
            break
        fi
        echo "flask db upgrade failed, re-trying in 5 seconds"
        sleep 5
    done
    exec gunicorn -b :5000 --access-logfile - --error-logfile - microblog:app
  • #195 Miguel Grinberg said 2021-04-21T08:57:16Z

    @Jesse: The Elastic data should be persistent (on a production deployment). See the Always bind data volumes section of their documentation.
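The gist of that recommendation, sketched with an invented volume name (the image tag matches the one used later in this thread, but is still an assumption):

```shell
# keep the Elasticsearch index on a named volume so a container restart
# does not lose the [post] index
docker run --name elasticsearch -d -p 9200:9200 \
    -v esdata:/usr/share/elasticsearch/data \
    -e "discovery.type=single-node" \
    docker.elastic.co/elasticsearch/elasticsearch-oss:7.6.2
```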

  • #196 Miguel Grinberg said 2021-04-21T09:03:32Z

    @Jakob: that should be fine. You must be using a different image, one in which /bin/sh is a much more primitive shell. Using bash is perfectly okay.
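For what it's worth, the incompatibility is easy to sidestep: the single-bracket test is POSIX and behaves the same in both shells, while [[ ]] is a bash extension. A tiny sketch:

```shell
# runs under /bin/sh and bash alike; [[ ... ]] would fail in a minimal sh
true
if [ "$?" -eq 0 ]; then
    echo "ok"
fi
```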

  • #197 Jesse said 2021-04-25T14:57:47Z

    @Miguel: thanks for that!!

    I've switched from three docker runs to using a compose file, and had quite the challenge getting it to work (removing links, since it's deprecated). The next step is to translate this compose file into a Kubernetes deployment. I've used a .env file for the environment variables (gitignored!). A particular issue was the DB connection string; I ended up having to pass it via an environment variable instead of concatenating it in the compose file (so it gets DATABASE_URL=mysql+pymysql://<dbuser>:<dbpass>@mysql-db/<dbname>).

    I got it to run after some trouble shooting, anything I'm doing here a bad idea?

    --- Compose file ---

    version: '3.8'
    services:
        elasticsearch:
            image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.6.2
            networks:
                - privnet
            environment:
                discovery.type: single-node
            ports:
                - 9200:9200
                - 9300:9300
            volumes:
                - ${ELASTIC_VOLUME}:/usr/share/elasticsearch/data
            container_name: elasticsearch
        mysql-db:
            image: mysql/mysql-server:5.7
            networks:
                - privnet
            environment:
                MYSQL_USER: ${MYSQL_USER}
            volumes:
                - ${MYSQL_VOLUME}:/var/lib/mysql
            container_name: mysql-db
        miniblog:
            image: miniblog:0.5
            networks:
                - privnet
            environment:
                SECRET_KEY: ${SECRET_KEY}
                MAIL_SERVER: ${MAIL_SERVER}
                MAIL_PORT: ${MAIL_PORT}
                DATABASE_URL: ${DATABASE_URL}
            ports:
                - 8000:5000
            container_name: miniblog
            depends_on:
                - "mysql-db"
                - "elasticsearch"
    networks:
        privnet:
  • #198 Miguel Grinberg said 2021-04-25T15:54:50Z

    @Jesse: I think your solution is okay. The only thing I would improve is instead of relying on all those environment variables I think it is better to use an .env file. That eliminates the need to have all the variables defined in your local environment.

  • #199 Jesse said 2021-04-25T23:11:46Z

    @Miguel, agreed. I use a .env with 14 things defined. Docker Compose after v 1.28 will use a .env file if defined at the root directory or by passing the --env-file flag if not in the working directory. I'm going to add a build step to test that out then move on to Kubernetes integration and oauth 2 proxy integration.

  • #200 Jesse said 2021-05-12T20:58:49Z

    Just as an update to my previous comment: to get the data to persist across docker compose up and docker compose down, I had to switch to an external volume. This is problematic but not unsolvable! One must first define the external volume in the CLI, with options set to use the designated path via the device flag. Then in the compose file you reference the named volume with external: true. This is not well documented, especially the volume device options such as type, o, and device.
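A sketch of that external-volume setup, with an invented host path and volume name. The volume is created once outside of Compose, and the compose file then declares it under the top-level volumes key with external: true so Compose will not create or remove it.

```shell
# create a named volume bound to a chosen host path; options mirror the
# type/o/device flags mentioned above, with hypothetical values
docker volume create --driver local \
    --opt type=none --opt o=bind --opt device=/srv/mysql-data mysql_data
```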

Leave a Comment