2018-04-10T17:58:26Z

The Flask Mega-Tutorial Part XIX: Deployment on Docker Containers

This is the nineteenth installment of the Flask Mega-Tutorial series, in which I'm going to deploy Microblog to the Docker container platform.

For your reference, below is a list of the articles in this series.

Note 1: If you are looking for the legacy version of this tutorial, it's here.

Note 2: If you would like to support my work on this blog, or just don't have patience to wait for weekly articles, I am offering the complete version of this tutorial packaged as an ebook or a set of videos. For more information, visit courses.miguelgrinberg.com.

In Chapter 17 you learned about traditional deployments, in which you have to take care of every little aspect of the server configuration. Then in Chapter 18 I took you to the other extreme when I introduced you to Heroku, a service that takes complete control of the configuration and deployment tasks, allowing you to fully concentrate on your application. In this chapter you are going to learn about a third application deployment strategy based on containers, more specifically on the Docker container platform. This third option sits somewhere in between the other two in terms of the amount of deployment work needed on your part.

Containers are built on a lightweight virtualization technology that allows an application, along with its dependencies and configuration, to run in complete isolation, but without the need for a full blown virtualization solution such as virtual machines, which require far more resources and can sometimes show significant performance degradation in comparison to the host. A system configured as a container host can execute many containers, all of them sharing the host's kernel and having direct access to the host's hardware. This is in contrast to virtual machines, which have to emulate a complete system, including CPU, disk, other hardware, kernel, etc.

In spite of having to share the kernel, the level of isolation in a container is pretty high. A container has its own file system, and can be based on an operating system that is different than the one used by the container host. For example, you can run containers based on Ubuntu Linux on a Fedora host, or vice versa. While containers are a technology that is native to the Linux operating system, thanks to virtualization it is also possible to run Linux containers on Windows and Mac OS X hosts. This allows you to test your deployments on your development system, and also incorporate containers in your development workflow if you wish to do so.

The GitHub links for this chapter are: Browse, Zip, Diff.

Installing Docker CE

While Docker isn't the only container platform, it is by far the most popular, so that's going to be my choice. There are two editions of Docker, a free community edition (CE) and a subscription-based enterprise edition (EE). For the purposes of this tutorial Docker CE is perfectly adequate.

To work with Docker CE, you first have to install it on your system. There are installers for Windows, Mac OS X and several Linux distributions available at the Docker website. If you are working on a Microsoft Windows system, it is important to note that Docker CE requires Hyper-V. The installer will enable this for you if necessary, but keep in mind that enabling Hyper-V prevents other virtualization technologies such as VirtualBox from working.

Once Docker CE is installed on your system, you can verify that the install was successful by typing the following command on a terminal window or command prompt:

$ docker version
Client:
 Version:      17.09.0-ce
 API version:  1.32
 Go version:   go1.8.3
 Git commit:   afdb6d4
 Built:        Tue Sep 26 22:40:09 2017
 OS/Arch:      darwin/amd64

Server:
 Version:      17.09.0-ce
 API version:  1.32 (minimum version 1.12)
 Go version:   go1.8.3
 Git commit:   afdb6d4
 Built:        Tue Sep 26 22:45:38 2017
 OS/Arch:      linux/amd64
 Experimental: true
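
If you want an end-to-end check in addition to the version information, you can also run the hello-world test image, which exercises pulling an image and starting a container (the exact output varies with your Docker installation):

$ docker run hello-world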

Building a Container Image

The first step in creating a container for Microblog is to build an image for it. A container image is a template that is used to create a container. It contains a complete representation of the container file system, along with various settings pertaining to networking, start up options, etc.

The most basic way to create a container image for your application is to start a container for the base operating system you want to use (Ubuntu, Fedora, etc.), connect to a bash shell process running in it, and then manually install your application, maybe following the guidelines I presented in Chapter 17 for a traditional deployment. After you install everything, you can take a snapshot of the container and that becomes the image. This type of workflow is supported with the docker command, but I'm not going to discuss it because it is not convenient to have to manually install the application every time you need to generate a new image.
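
If you are curious, here is a rough sketch of that manual workflow. The image and container names are made up just for this illustration, and this is not what I'm going to use in this chapter:

$ docker run -it --name manual-build ubuntu:16.04 bash
# ... install Python, the application and its dependencies by hand, then exit ...
$ docker commit manual-build microblog-manual:latest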

A better approach is to generate the container image through a script. The command that creates scripted container images is docker build. This command reads and executes build instructions from a file called Dockerfile, which I will need to create. The Dockerfile is basically an installer script of sorts that executes the installation steps to get the application deployed, plus some container specific settings.

Here is a basic Dockerfile for Microblog:

Dockerfile: Dockerfile for Microblog.

FROM python:3.6-alpine

RUN adduser -D microblog

WORKDIR /home/microblog

COPY requirements.txt requirements.txt
RUN python -m venv venv
RUN venv/bin/pip install -r requirements.txt
RUN venv/bin/pip install gunicorn

COPY app app
COPY migrations migrations
COPY microblog.py config.py boot.sh ./
RUN chmod +x boot.sh

ENV FLASK_APP microblog.py

RUN chown -R microblog:microblog ./
USER microblog

EXPOSE 5000
ENTRYPOINT ["./boot.sh"]

Each line in the Dockerfile is a command. The FROM command specifies the base container image on which the new image will be built. The idea is that you start from an existing image, add or change some things, and you end up with a derived image. Images are referenced by a name and a tag, separated by a colon. The tag is used as a versioning mechanism, allowing a container image to provide more than one variant. The name of my chosen image is python, which is the official Docker image for Python. The tags for this image allow you to specify the interpreter version and base operating system. The 3.6-alpine tag selects a Python 3.6 interpreter installed on Alpine Linux. The Alpine Linux distribution is often used instead of more popular ones such as Ubuntu because of its small size. You can see what tags are available for the Python image in the Python image repository.
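
As an example, these two commands download two variants of the same Python release (both tags existed at the time I'm writing this, but the list of available tags changes over time):

$ docker pull python:3.6          # Python 3.6 on the default Debian-based image
$ docker pull python:3.6-alpine   # Python 3.6 on the much smaller Alpine Linux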

The RUN command executes an arbitrary command in the context of the container. This would be similar to you typing the command in a shell prompt. The adduser -D microblog command creates a new user named microblog. Most container images have root as the default user, but it is not a good practice to run an application as root, so I create my own user.

The WORKDIR command sets a default directory where the application is going to be installed. When I created the microblog user above, a home directory was created, so now I'm making that directory the default. The new default directory is going to apply to any remaining commands in the Dockerfile, and also later when the container is executed.

The COPY command transfers files from your machine to the container file system. This command takes two or more arguments, the source and destination files or directories. The source file(s) must be relative to the directory where the Dockerfile is located. The destination can be an absolute path, or a path relative to the directory that was set in a previous WORKDIR command. In this first COPY command, I'm copying the requirements.txt file to the microblog user's home directory in the container file system.

Now that I have the requirements.txt file in the container, I can create a virtual environment, using the RUN command. First I create it, and then I install all the requirements in it. Because the requirements file contains only generic dependencies, I then explicitly install gunicorn, which I'm going to use as a web server. Alternatively, I could have added gunicorn to my requirements.txt file.

The three COPY commands that follow install the application in the container, by copying the app package, the migrations directory with the database migrations, and the microblog.py and config.py scripts from the top-level directory. I'm also copying a new file, boot.sh, which I will discuss below.

The RUN chmod command ensures that this new boot.sh file is correctly set as an executable file. If you are on a Unix-based file system and your source file is already marked as executable, then the copied file will also have the executable bit set. I added an explicit chmod because on Windows it is harder to set executable bits. If you are working on Mac OS X or Linux you probably don't need this statement, but it does not hurt to have it anyway.
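
As a side note, if you are on Windows and keep the project in git, one possible way to record the executable bit in the repository itself is the following git command, so that the bit is set whenever the file is checked out on a Unix system (just a hint; the chmod in the Dockerfile makes this unnecessary):

$ git update-index --chmod=+x boot.sh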

The ENV command sets an environment variable inside the container. I need to set FLASK_APP, which is required to use the flask command.

The RUN chown command that follows sets the owner of all the directories and files that were stored in /home/microblog to the new microblog user. Even though I created this user near the top of the Dockerfile, the default user for all the commands remained root, so all these files need to be switched to the microblog user so that this user can work with them when the container is started.

The USER command in the next line makes this new microblog user the default for any subsequent instructions, and also for when the container is started.

The EXPOSE command configures the port that this container will be using for its server. This is necessary so that Docker can configure the network in the container appropriately. I've chosen the standard Flask port 5000, but this can be any port.

Finally, the ENTRYPOINT command defines the default command that should be executed when the container is started. This is the command that will start the application web server. To keep things well organized, I decided to create a separate script for this, and this is the boot.sh file that I copied to the container earlier. Here are the contents of this script:

boot.sh: Docker container start-up script.

#!/bin/sh
source venv/bin/activate
flask db upgrade
flask translate compile
exec gunicorn -b :5000 --access-logfile - --error-logfile - microblog:app

This is a fairly standard start-up script, similar to how the deployments in Chapter 17 and Chapter 18 were started. I activate the virtual environment, upgrade the database through the migration framework, compile the language translations, and finally run the server with gunicorn.

Note the exec that precedes the gunicorn command. In a shell script, exec causes the process running the script to be replaced with the given command, instead of starting it as a new process. This is important, because Docker associates the life of the container with the first process that runs in it. In cases like this one, where the start-up script is not the main process of the container, you need to make sure that the main process takes the place of that first process to ensure that the container is not terminated early by Docker.
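
To make the difference concrete, here is a small sketch of the two possible endings for a start-up script such as boot.sh (only an illustration; the second form is the one I'm using):

# without exec, the shell stays as the container's main process and
# gunicorn runs as a child of it:
gunicorn -b :5000 microblog:app

# with exec, gunicorn replaces the shell and becomes the main process,
# so the container stays up exactly as long as the server runs:
exec gunicorn -b :5000 microblog:app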

An interesting aspect of Docker is that anything the container writes to stdout or stderr is captured and stored as logs for the container. For that reason, the --access-logfile and --error-logfile options are both configured with a -, which sends the logs to standard output so that Docker stores them.
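
Later, once the container is running, you will be able to watch these captured logs from the host. For example, the -f option of docker logs follows the output as it is generated, similar to tail -f:

$ docker logs -f microblog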

With the Dockerfile created, I can now build a container image:

$ docker build -t microblog:latest .

The -t argument that I'm giving to the docker build command sets the name and tag for the new container image. The . indicates the base directory for the build, which is where the Dockerfile is located. The build process is going to evaluate all the commands in the Dockerfile and create the image, which will be stored on your own machine.

You can obtain a list of the images that you have locally with the docker images command:

$ docker images
REPOSITORY    TAG          IMAGE ID        CREATED              SIZE
microblog     latest       54a47d0c27cf    About a minute ago   216MB
python        3.6-alpine   a6beab4fa70b    3 months ago         88.7MB

This listing will include your new image, and also the base image on which it was built. Any time you make changes to the application, you can update the container image by running the build command again.

Starting a Container

With an image already created, you can now run the container version of the application. This is done with the docker run command, which usually takes a large number of arguments. I'm going to start by showing you a basic example:

$ docker run --name microblog -d -p 8000:5000 --rm microblog:latest
021da2e1e0d390320248abf97dfbbe7b27c70fefed113d5a41bb67a68522e91c

The --name option provides a name for the new container. The -d option tells Docker to run the container in the background. Without -d the container runs as a foreground application, blocking your command prompt. The -p option maps container ports to host ports. The port on the left of the colon is the port on the host computer, and the one on the right is the port inside the container. The above example exposes port 5000 in the container on port 8000 in the host, so you will access the application on port 8000, even though internally the container is using 5000. The --rm option will delete the container once it is terminated. While this isn't required, containers that finish or are interrupted are usually not needed anymore, so they can be automatically deleted. The last argument is the container image name and tag to use for the container. After you run the above command, you can access the application at http://localhost:8000.

The output of docker run is the ID assigned to the new container. This is a long hexadecimal string that you can use whenever you need to refer to the container in subsequent commands. In fact, only the first few characters are necessary, enough to make the ID unique.
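
For example, with the container ID shown above, either of these commands would refer to the same container (assuming no other container ID begins with the same prefix):

$ docker inspect 021da2e1e0d390320248abf97dfbbe7b27c70fefed113d5a41bb67a68522e91c
$ docker inspect 021d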

If you want to see what containers are running, you can use the docker ps command:

$ docker ps
CONTAINER ID  IMAGE             COMMAND      PORTS                   NAMES
021da2e1e0d3  microblog:latest  "./boot.sh"  0.0.0.0:8000->5000/tcp  microblog

You can see that even the docker ps command shortens container IDs. If you now want to stop the container, you can use docker stop:

$ docker stop 021da2e1e0d3
021da2e1e0d3

If you recall, there are a number of options in the application's configuration that are sourced from environment variables. For example, the Flask secret key, database URL and email server options are all imported from environment variables. In the docker run example above I have not worried about those, so all those configuration options are going to use defaults.

In a more realistic example, you will be setting those environment variables inside the container. You saw in the previous section that the ENV command in the Dockerfile sets environment variables, and it is a handy option for variables that are going to be static. For variables that depend on the installation, however, it isn't convenient to have them as part of the build process, because you want to have a container image that is fairly portable. If you want to give your application to another person as a container image, you would want that person to be able to use it as is, and not have to rebuild it with different variables.

So build-time environment variables can be useful, but there is also a need for run-time environment variables that can be set via the docker run command, and for these variables the -e option can be used. The following example sets a secret key and sends email through a Gmail account:

$ docker run --name microblog -d -p 8000:5000 --rm -e SECRET_KEY=my-secret-key \
    -e MAIL_SERVER=smtp.googlemail.com -e MAIL_PORT=587 -e MAIL_USE_TLS=true \
    -e MAIL_USERNAME=<your-gmail-username> -e MAIL_PASSWORD=<your-gmail-password> \
    microblog:latest

It is not uncommon for docker run command lines to be extremely long due to having many environment variable definitions.
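
If you find these long command lines inconvenient to type, one option that I'm not going to use in this chapter is to collect all the variables in a file, one KEY=value per line, and pass the file with the --env-file option (microblog.env is just a name I made up for this example):

$ docker run --name microblog -d -p 8000:5000 --rm \
    --env-file microblog.env microblog:latest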

Using Third-Party "Containerized" Services

The container version of Microblog is looking good, but I haven't really thought much about storage yet. In fact, since I haven't set a DATABASE_URL environment variable, the application is using the default SQLite database, which is backed by a file on disk. What do you think is going to happen to that SQLite file when you stop and delete the container? The file is going to disappear!

The file system in a container is ephemeral, meaning that it goes away when the container goes away. You can write data to the file system, and the data is going to be there if the container needs to read it, but if for any reason you need to recycle your container and replace it with a new one, any data that the application saved to disk is going to be lost forever.

A good design strategy for a containerized application is to make the application containers stateless. If a container has application code and no data, you can throw it away and replace it with a new one without any problems. The container becomes truly disposable, which is great in terms of simplifying the deployment of upgrades.

But of course, this means that the data must be put somewhere outside of the application container. This is where the fantastic Docker ecosystem comes into play. The Docker Container Registry contains a large variety of container images. I have already told you about the Python container image, which I'm using as a base image for my Microblog container. In addition to that, Docker maintains images for many other languages, databases and other services in the Docker registry. If that isn't enough, the registry also allows companies to publish container images for their products, and regular users like you or me to publish their own images. That means that the effort to install a third-party service is reduced to finding an appropriate image in the registry and starting it with a docker run command with proper arguments.

So what I'm going to do now is create two additional containers, one for a MySQL database, and another one for the Elasticsearch service, and then I'm going to make the command line that starts the Microblog container even longer with options that enable it to access these two new containers.

Adding a MySQL Container

Like many other products and services, MySQL has public container images available on the Docker registry. Like my own Microblog container, MySQL relies on environment variables that need to be passed to docker run. These configure passwords, database names, etc. While there are many MySQL images in the registry, I decided to use one that is officially maintained by the MySQL team. You can find detailed information about the MySQL container image on its registry page: https://hub.docker.com/r/mysql/mysql-server/.

If you remember the laborious process to set up MySQL in Chapter 17, you are going to appreciate Docker when you see how easy it is to deploy MySQL. Here is the docker run command that starts a MySQL server:

$ docker run --name mysql -d -e MYSQL_RANDOM_ROOT_PASSWORD=yes \
    -e MYSQL_DATABASE=microblog -e MYSQL_USER=microblog \
    -e MYSQL_PASSWORD=<database-password> \
    mysql/mysql-server:5.7

That is it! On any machine on which you have Docker installed, you can run the above command and you'll get a fully installed MySQL server with a randomly generated root password, a brand new database called microblog, and a user with the same name that is configured with full permissions to access the database. Note that you will need to enter a proper password as the value for the MYSQL_PASSWORD environment variable.
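
As an aside, the data managed by this server is not written to the container's own file system; the MySQL image stores it in a Docker volume that is created automatically. If you prefer a named volume that is easier to locate and manage, you could add a -v option (mysql-data is an arbitrary name I chose for this example, and /var/lib/mysql is the data directory used by this image):

$ docker run --name mysql -d -v mysql-data:/var/lib/mysql \
    -e MYSQL_RANDOM_ROOT_PASSWORD=yes \
    -e MYSQL_DATABASE=microblog -e MYSQL_USER=microblog \
    -e MYSQL_PASSWORD=<database-password> \
    mysql/mysql-server:5.7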

Now on the application side, I need to add a MySQL client package, like I did for the traditional deployment on Ubuntu. I'm going to use pymysql once again, which I can add to the Dockerfile:

Dockerfile: Add pymysql to Dockerfile.

# ...
RUN venv/bin/pip install gunicorn pymysql
# ...

Any time a change is made to the application or the Dockerfile, the container image needs to be rebuilt:

$ docker build -t microblog:latest .

And now I can start Microblog again, but this time with a link to the database container, so that both can communicate through the network:

$ docker run --name microblog -d -p 8000:5000 --rm -e SECRET_KEY=my-secret-key \
    -e MAIL_SERVER=smtp.googlemail.com -e MAIL_PORT=587 -e MAIL_USE_TLS=true \
    -e MAIL_USERNAME=<your-gmail-username> -e MAIL_PASSWORD=<your-gmail-password> \
    --link mysql:dbserver \
    -e DATABASE_URL=mysql+pymysql://microblog:<database-password>@dbserver/microblog \
    microblog:latest

The --link option tells Docker to make another container accessible to this one. The argument contains two names separated by a colon. The first part is the name or ID of the container to link, in this case the one named mysql that I created above. The second part defines a hostname that can be used in this container to refer to the linked one. Here I'm using dbserver as a generic name that represents the database server.
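
As an aside, newer releases of Docker consider --link a legacy feature, with user-defined networks as the recommended replacement. I'm sticking with --link in this chapter, but a rough equivalent would be to create a network and attach both containers to it, after which they can reach each other by container name (the remaining options stay as shown above, and mysql would replace dbserver as the hostname in DATABASE_URL):

$ docker network create microblog-net
$ docker run --name mysql -d --network microblog-net ... mysql/mysql-server:5.7
$ docker run --name microblog -d --network microblog-net ... microblog:latest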

With the link between the two containers established, I can set the DATABASE_URL environment variable so that SQLAlchemy is directed to use the MySQL database in the other container. The database URL is going to use dbserver as the database hostname, microblog as the database name and user, and the password that you selected when you started MySQL.

One thing I noticed when I was experimenting with the MySQL container is that it takes a few seconds for this container to be fully running and ready to accept database connections. If you start the MySQL container and then start the application container immediately after, when the boot.sh script tries to run flask db upgrade it may fail due to the database not being ready to accept connections. To make my solution more robust, I decided to add a retry loop in boot.sh:

boot.sh: Retry database connection.

#!/bin/sh
source venv/bin/activate
while true; do
    flask db upgrade
    if [[ "$?" == "0" ]]; then
        break
    fi
    echo Upgrade command failed, retrying in 5 secs...
    sleep 5
done
flask translate compile
exec gunicorn -b :5000 --access-logfile - --error-logfile - microblog:app

This loop checks the exit code of the flask db upgrade command, and if it is non-zero it assumes that something went wrong, so it waits five seconds and then retries.
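
If you prefer, an equivalent way to write this loop is to test the exit status of the command directly in the while condition, which avoids the explicit check on $? (a sketch using only standard shell features):

while ! flask db upgrade; do
    echo Upgrade command failed, retrying in 5 secs...
    sleep 5
done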

Adding an Elasticsearch Container

The Elasticsearch documentation for Docker shows how to run the service as a single node for development, and as a two-node production-ready deployment. For now I'm going to go with the single-node option and use the "oss" image, which only includes the open-source engine. The container is started with the following command:

$ docker run --name elasticsearch -d -p 9200:9200 -p 9300:9300 --rm \
    -e "discovery.type=single-node" \
    docker.elastic.co/elasticsearch/elasticsearch-oss:7.6.2

This docker run command has many similarities with the ones I've used for Microblog and MySQL, but there are a couple of interesting differences. First, there are two -p options, which means that this container is going to listen on two ports instead of just one. Both ports 9200 and 9300 are mapped to the same ports in the host machine.

The other difference is in the syntax used to refer to the container image. For the images that I've been building locally, the syntax was <name>:<tag>. The MySQL container uses a slightly more complete syntax with the format <account>/<name>:<tag>, which is appropriate to reference container images on the Docker registry. The Elasticsearch image that I'm using follows the pattern <registry>/<account>/<name>:<tag>, which includes the address of the registry as the first component. This syntax is used for images that are not hosted in the Docker registry. In this case Elastic runs its own container registry service at docker.elastic.co instead of using the main registry maintained by Docker.
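
To summarize, these are the three image reference forms that have appeared in this chapter:

microblog:latest                                          # <name>:<tag>
mysql/mysql-server:5.7                                    # <account>/<name>:<tag>
docker.elastic.co/elasticsearch/elasticsearch-oss:7.6.2   # <registry>/<account>/<name>:<tag>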

So now that I have the Elasticsearch service up and running, I can modify the start command for my Microblog container to create a link to it and set the Elasticsearch service URL:

$ docker run --name microblog -d -p 8000:5000 --rm -e SECRET_KEY=my-secret-key \
    -e MAIL_SERVER=smtp.googlemail.com -e MAIL_PORT=587 -e MAIL_USE_TLS=true \
    -e MAIL_USERNAME=<your-gmail-username> -e MAIL_PASSWORD=<your-gmail-password> \
    --link mysql:dbserver \
    -e DATABASE_URL=mysql+pymysql://microblog:<database-password>@dbserver/microblog \
    --link elasticsearch:elasticsearch \
    -e ELASTICSEARCH_URL=http://elasticsearch:9200 \
    microblog:latest

Before you run this command, remember to stop your previous Microblog container if you still have it running. Also be careful in setting the correct passwords for the database and the Elasticsearch service in the proper places in the command.

Now you should be able to visit http://localhost:8000 and use the search feature. If you experience any errors, you can troubleshoot them by looking at the container logs. You'll most likely want to see logs for the Microblog container, where any Python stack traces will appear:

$ docker logs microblog

The Docker Container Registry

So now I have the complete application up and running on Docker, using three containers, two of which come from publicly available third-party images. If you would like to make your own container images available to others, then you have to push them to the Docker registry, from where anybody can obtain them.

To have access to the Docker registry you need to go to https://hub.docker.com and create an account for yourself. Make sure you pick a username that you like, because that is going to be used in all the images that you publish.

To be able to access your account from the command line, you need to log in with the docker login command:

$ docker login

If you've been following my instructions, you now have an image called microblog:latest stored locally on your computer. To be able to push this image to the Docker registry, it needs to be renamed to include the account, like the image from MySQL. This is done with the docker tag command:

$ docker tag microblog:latest <your-docker-registry-account>/microblog:latest

If you list your images again with docker images, you are now going to see two entries for Microblog, the original one with the microblog:latest name, and a new one that also includes your account name. These are really two aliases for the same image.

To publish your image to the Docker registry, use the docker push command:

$ docker push <your-docker-registry-account>/microblog:latest

Now your image is publicly available, and you can document how to install and run it from the Docker registry in the same way MySQL and others do.
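
For example, another person could then start your application with commands similar to the ones you used locally (the explicit pull is optional, since docker run downloads the image automatically if it isn't present):

$ docker pull <your-docker-registry-account>/microblog:latest
$ docker run --name microblog -d -p 8000:5000 --rm \
    <your-docker-registry-account>/microblog:latest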

Deployment of Containerized Applications

One of the best things about having your application running in Docker containers is that once you have the containers tested locally, you can take them to any platform that offers Docker support. For example, you could use the same servers I recommended in Chapter 17 from Digital Ocean, Linode or Amazon Lightsail. Even the cheapest offering from these providers is sufficient to run Docker with a handful of containers.

The Amazon Elastic Container Service (ECS) gives you the ability to create a cluster of container hosts on which to run your containers, in a fully integrated AWS environment, with support for scaling and load balancing, plus the option to use a private container registry for your container images.

Finally, a container orchestration platform such as Kubernetes provides an even greater level of automation and convenience, by allowing you to describe multi-container deployments in simple YAML text files, and providing load balancing, scaling, secure management of secrets, and rolling upgrades and rollbacks.

181 comments

  • #151 Miguel Grinberg said 2020-08-30T15:45:31Z

    @Lauren: What I usually do to load initial data is to define a flask CLI command to do this. It would be the same actions you have been doing manually, but having them in a command is more convenient as they are easy to repeat.

  • #152 Kieron Spearing said 2020-09-10T11:29:08Z

    Hi Miguel

    During the docker build -t microblog:latest .

    Command I came across this error:

    writing manifest file 'pip-egg-info/psycopg2.egg-info/SOURCES.txt'
    Error: pg_config executable not found.
    pg_config is required to build psycopg2 from source. Please add the directory containing pg_config to the $PATH or specify the full executable path with the option:
        python setup.py build_ext --pg-config /path/to/pg_config build ...
    or with the pg_config option in 'setup.cfg'.
    If you prefer to avoid building psycopg2 from source, please install the PyPI 'psycopg2-binary' package instead. For further information please check the 'doc/src/install.rst' file (also at <https://www.psycopg.org/docs/install.html>).
    ----------------------------------------

    Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-install-chd2gfk3/psycopg2/ You are using pip version 18.1, however version 20.2.3 is available. You should consider upgrading via the 'pip install --upgrade pip' command. The command '/bin/sh -c venv/bin/pip install -r requirements.txt' returned a non-zero code: 1

    Obviously the pip warning I am not too worried about, however after looking through previous problems and googling a bit I found that modifying the Dockerfile like this is a solution:

    FROM python:3.6-alpine

    RUN adduser -D microblog

    WORKDIR /home/Mega

    COPY requirements.txt requirements.txt
    RUN python -m venv venv
    RUN apk add --no-cache postgresql-libs
    RUN apk add --no-cache --virtual .build-deps gcc musl-dev postgresql-dev
    RUN apk add --no-cache --virtual .pynacl_deps build-base python3-dev libffi-dev
    RUN venv/bin/pip install -r requirements.txt
    RUN venv/bin/pip install gunicorn
    RUN apk --purge del .build-deps

    COPY app app
    COPY migrations migrations
    COPY microblog.py config.py boot.sh ./
    RUN chmod +x boot.sh

    ENV FLASK_APP microblog.py

    RUN chown -R microblog:microblog ./
    USER microblog

    EXPOSE 5000
    ENTRYPOINT ["./boot.sh"]

    Obviously this works, but I feel there may be a better way so my question is:

    Is this a good solution and is there a better one? How would you go around getting a better solution for this?

  • #153 Miguel Grinberg said 2020-09-10T14:11:24Z

    @Kieron: just to be clear, you are using Postgres, while this tutorial uses MySQL in the Docker deployment. If it is all the same to you, switching to MySQL will address this issue. The Postgres driver is written in C so it needs to be compiled. You can try using the psycopg2-binary package instead, which is a compiled version, but that may not work in the alpine image.

    The solution that you used is not ideal, because you are installing a lot of dependencies that are only needed to compile psycopg2, but they are inflating the size of your image considerably. There is a way to avoid the extra size by using a "multistage" build. I've found a tutorial that shows how to do this with Postgres here: https://www.rockyourcode.com/create-a-multi-stage-docker-build-for-python-flask-and-postgres/.

  • #154 Kieron Spearing said 2020-09-10T18:32:22Z

    I will switch to MySQL, but thank you for the tutorial. I am going to look into it more to learn other ways.

  • #155 Hyun Soo Jeon said 2020-09-11T02:42:13Z

    Hi Miguel! Let me start with a BIG thank you always. You mentioned "A good design strategy for a container application is to make the application containers stateless". I am curious to know whether Microblog is considered a stateful application or a stateless application. I am a beginner in web development and although I have read on the internet the distinctions between them, it's always kind of hard for me to tell. Since Microblog relies on Flask's user session instead of web tokens, is it considered a stateful application? Please kindly correct me if I am wrong. Thank you!

  • #156 Miguel Grinberg said 2020-09-11T13:57:48Z

    @Hyun: All applications need to maintain some state, in one way or another. In the context of this topic, when I say that the container should be "stateless" I mean that no data should be stored inside the container itself. The point of this is that you can destroy a container and stand up a new one without having to worry about saving or restoring data that the application needs.

  • #157 Hyun said 2020-09-12T16:31:01Z

    @Miguel Oh I see, that makes perfect sense! Thank you very much for your explanation. While we are at it, I have one more question on Flask's user session. If I deploy Microblog, for example, in Kubernetes with 3 replicas, a user might connect to Replica 1, where his/her session will be stored. But if somehow Replica 1 goes down and gets replaced with a new one, or the connection simply switches to Replica 2, does that mean the user will be logged out and asked to log in again? I wanted to try it myself and see, but I am a little stuck on some other stuff with Kubernetes.

  • #158 Miguel Grinberg said 2020-09-13T09:14:02Z

    @Hyun: Flask does not store sessions in the server. Sessions are stored in cookies, so they follow the client regardless of the replica that handles the request.

  • #159 Kieron Spearing said 2020-09-14T15:16:08Z

    Hey Miguel

    When following through all of this and getting the containers up and running I am finding myself with the following error:

    [2020-09-14 15:09:46,197] INFO in init: Microblog startup
    INFO [alembic.runtime.migration] Context impl MySQLImpl.
    INFO [alembic.runtime.migration] Will assume non-transactional DDL.
    INFO [alembic.runtime.migration] Running upgrade 2ef5f59404c4 -> e77bb98e1d6c, after tests
    Traceback (most recent call last):
      File "/home/Mega/venv/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1278, in _execute_context
        cursor, statement, parameters, context
      File "/home/Mega/venv/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 593, in do_execute
        cursor.execute(statement, parameters)
      File "/home/Mega/venv/lib/python3.6/site-packages/pymysql/cursors.py", line 163, in execute
        result = self._query(query)
      File "/home/Mega/venv/lib/python3.6/site-packages/pymysql/cursors.py", line 321, in _query
        conn.query(q)
      File "/home/Mega/venv/lib/python3.6/site-packages/pymysql/connections.py", line 505, in query
        self._affected_rows = self._read_query_result(unbuffered=unbuffered)
      File "/home/Mega/venv/lib/python3.6/site-packages/pymysql/connections.py", line 724, in _read_query_result
        result.read()
      File "/home/Mega/venv/lib/python3.6/site-packages/pymysql/connections.py", line 1069, in read
        first_packet = self.connection._read_packet()
      File "/home/Mega/venv/lib/python3.6/site-packages/pymysql/connections.py", line 676, in _read_packet
        packet.raise_for_error()
      File "/home/Mega/venv/lib/python3.6/site-packages/pymysql/protocol.py", line 223, in raise_for_error
        err.raise_mysql_exception(self._data)
      File "/home/Mega/venv/lib/python3.6/site-packages/pymysql/err.py", line 107, in raise_mysql_exception
        raise errorclass(errno, errval)
    pymysql.err.OperationalError: (1050, "Table 'user' already exists")

    This seems to indicate that the database isn't cleared before starting. How would I go about sorting this out so that I could have a clean image? And wouldn't this become a problem for future edits to the docker image when starting up?

    Obviously I don't plan to push this to the Docker container registry, as I have been following this tutorial very closely and it is your site I am working with here. But just out of curiosity for future projects: when you run into this error, how do you go about fixing it?

  • #160 Miguel Grinberg said 2020-09-14T20:09:54Z

    @Kieron: Your database files are not stored in a container, they are stored in a Docker volume. You probably have one from a previous test. You need to delete all the tables in this database, or else delete the volume and restart the mysql container so that it creates a brand new one.

  • #161 Kieron Spearing said 2020-09-15T05:21:58Z

    Hi Miguel

    I followed these steps: I removed all containers, volumes and images and then restarted it all from the beginning, and I found the same problem happening. So then I went again and removed the containers once more, along with the volume.

    When restarting it all I found the following, and I am unsure if you can help me clarify what is happening here. In the docker logs microblog output I had the following at first:

    File "/home/Mega/venv/lib/python3.6/site-packages/pymysql/connections.py", line 327, in init self.connect() File "/home/Mega/venv/lib/python3.6/site-packages/pymysql/connections.py", line 619, in connect raise exc sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'dbserver' ([Errno 111] Connection refused)") (Background on this error at: http://sqlalche.me/e/13/e3q8) sh: missing ]]

    After letting it retry with the loop in boot.sh, it came back with this:

    INFO [alembic.runtime.migration] Context impl MySQLImpl.
    INFO [alembic.runtime.migration] Will assume non-transactional DDL.
    INFO [alembic.runtime.migration] Running upgrade -> 1e9d19f0a0db, users table
    INFO [alembic.runtime.migration] Running upgrade 1e9d19f0a0db -> ebf69dfcd48b, posts table
    INFO [alembic.runtime.migration] Running upgrade ebf69dfcd48b -> a341411ee845, new fields in user model
    INFO [alembic.runtime.migration] Running upgrade a341411ee845 -> 2ef5f59404c4, followers
    INFO [alembic.runtime.migration] Running upgrade 2ef5f59404c4 -> e77bb98e1d6c, after tests
    Traceback (most recent call last):
      File "/home/Mega/venv/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1278, in _execute_context
        cursor, statement, parameters, context
      File "/home/Mega/venv/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 593, in do_execute
        cursor.execute(statement, parameters)
      File "/home/Mega/venv/lib/python3.6/site-packages/pymysql/cursors.py", line 163, in execute
        result = self._query(query)
      File "/home/Mega/venv/lib/python3.6/site-packages/pymysql/cursors.py", line 321, in _query
        conn.query(q)
      File "/home/Mega/venv/lib/python3.6/site-packages/pymysql/connections.py", line 505, in query
        self._affected_rows = self._read_query_result(unbuffered=unbuffered)
      File "/home/Mega/venv/lib/python3.6/site-packages/pymysql/connections.py", line 724, in _read_query_result
        result.read()
      File "/home/Mega/venv/lib/python3.6/site-packages/pymysql/connections.py", line 1069, in read
        first_packet = self.connection._read_packet()
      File "/home/Mega/venv/lib/python3.6/site-packages/pymysql/connections.py", line 676, in _read_packet
        packet.raise_for_error()
      File "/home/Mega/venv/lib/python3.6/site-packages/pymysql/protocol.py", line 223, in raise_for_error
        err.raise_mysql_exception(self._data)
      File "/home/Mega/venv/lib/python3.6/site-packages/pymysql/err.py", line 107, in raise_mysql_exception
        raise errorclass(errno, errval)
    pymysql.err.OperationalError: (1050, "Table 'user' already exists")

    This has got me thinking that maybe the problem lies in how I have set up the migrations. I have run into this create table error twice before, but I am unsure how to fix this issue. Do you have any idea why it would be behaving this way?

  • #162 Miguel Grinberg said 2020-09-15T08:59:38Z

    @Kieron: now that I think about it, yes, it seems your migration e77bb98e1d6c is trying to create a table that was already created in a previous migration, so you seem to have a bad migration in your repository.

  • #163 Kieron Spearing said 2020-09-15T12:55:13Z

    Are you able to clear the bad migration or start a new migration? I won't do it for this project, but I would like to know for future projects working with Flask-Migrate.

  • #164 Miguel Grinberg said 2020-09-15T14:12:18Z

    @Kieron: migration scripts are Python code. You can open them in your editor and fix them.

  • #165 Gitau Harrison said 2020-10-26T15:28:19Z

    I am trying to run my docker container and access it on localhost:8000 without success. I have the image created but the docker run command does not create the container. This is what I have done:

    Dockerfile:

    FROM python:3.8-alpine
    RUN adduser -D practice_blog
    WORKDIR /home/gitau/software_development/python/flask_tutorial/practice_blog
    COPY requirements.txt requirements.txt
    RUN python3 -m venv practice_blog
    # Upgrade pip below
    RUN practice_blog/bin/python3 -m pip install --upgrade pip
    RUN practice_blog/bin/pip3 install -r requirements.txt
    RUN practice_blog/bin/pip3 install gunicorn
    COPY app app
    COPY migrations migrations
    COPY practice_blog.py config.py boot.sh ./
    RUN chmod +x boot.sh
    ENV FLASK_APP practice_blog.py
    RUN chown -R practice_blog:practice_blog ./
    USER practice_blog
    EXPOSE 5000
    ENTRYPOINT ['./boot.sh']

    boot.sh

    !/bin/sh
    source practice_blog/bin/activate
    flask db upgrade
    flask translate compile
    exec gunicorn -b :5000 --access-logfile - --error-logfile - practice_blog:app

    Question 1: What could be my issue?

    Question 2: I have removed the # in the shebang in an attempt to load the boot.sh file as an executable (even though there is the command exec). Is this necessary? Because as you state, exec triggers the process running the script to be replaced with the command given, instead of starting it as a new process in a shell script.

  • #166 Miguel Grinberg said 2020-10-26T15:45:47Z

    @Gitau: if Docker fails to create the container, then there must be logs that tell you what happened. Try docker logs <container-name> after the container failed to create. The "#" in the shebang line of a script is not supposed to be removed, that needs to stay there always.

  • #167 Gitau Harrison said 2020-10-26T16:40:36Z

    Running docker ps shows no container or the container id. docker ps -a in my machine shows only the default hello-world container. If I try to run docker logs <container_name>, I do not have the container name to use.

    The thing is, $ docker run --name practice_blog -d -p 8000:5000 --rm practice_blog:latest successfully creates an id such as this: 021da2e1e0d390320248abf97dfbbe7b27c70fefed113d5a41bb67a68522e91c. But I cannot see the container when docker ps is run, and therefore localhost:8000 refuses to connect.

    This is my issue. I am wondering, if I have successfully created an image and even built it to the point where I get a container id, why can't I see the practice_blog container?

  • #168 Miguel Grinberg said 2020-10-26T19:25:39Z

    @Gitau: The --rm is telling docker to delete the container when it ends, successful or not. Remove that option so that you can then look at the container logs and see what failed.

  • #169 Gitau Harrison said 2020-10-27T05:23:30Z

    Getting rid of --rm solves the container creation problem.

    docker ps -a shows that the process exited as soon as it was created:

    CONTAINER ID   IMAGE                  COMMAND                  CREATED          STATUS                        PORTS   NAMES
    c3cf42ebbc27   practice_blog:latest   "/bin/sh -c ['./boot…"   28 seconds ago   Exited (127) 27 seconds ago           practice_blog

    And the error from docker logs <CONTAINER ID> indicates that /bin/sh: [./boot.sh]: not found

    I think that the reason why docker run --name practice_blog -d -p 8000:5000 --rm practice_blog:latest failed to create a container is because the boot.sh file could not be found.

    I have pruned my system of dangling images and containers and redid the build and run commands successfully.

    CONTAINER ID   IMAGE                  COMMAND       CREATED          STATUS          PORTS                    NAMES
    da8d6da1c4f1   practice_blog:latest   "./boot.sh"   15 seconds ago   Up 14 seconds   0.0.0.0:8000->5000/tcp   practice_blog

    localhost:8000 works!

  • #170 Gitau Harrison said 2020-10-29T12:39:34Z

    The loop inside the boot.sh file exits with a non-zero code and therefore keeps on retrying. This is the error message I am getting:

        spec.loader.exec_module(module)
      File "<frozen importlib._bootstrap_external>", line 783, in exec_module
      File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
      File "migrations/env.py", line 27, in <module>
        str(current_app.extensions['migrate'].db.engine.url).replace('%', '%%'))
    AttributeError: 'NoneType' object has no attribute 'engine'
    Upgrade command failed, retrying in 5 secs...

    I do not understand what I can do at this point. What does it mean that I do not have the attribute engine? Vaguely I understand that the flask db upgrade command failed.

    Notably, I have not started the mysql container and application container at the same or almost the same time. What can I do to fix this? Because at this point, the localhost:8000 site cannot be reached.

  • #171 Miguel Grinberg said 2020-10-29T14:05:32Z

    @Gitau: the error is not about engine, the important part is that the object on which this engine attribute is obtained is None. This is the db attribute of the Flask-Migrate extension. That means that you did not properly initialize the Flask-Migrate extension, so it does not know what the database is.

  • #172 Gitau Harrison said 2020-10-29T16:11:51Z

    After linking the mysql server to the application container, I should be able to send data to the application, like logging in, right? What I am getting with such attempts is "localhost didn't send any data". I have been able to get localhost:8000 to continue working after linking it to mysql.

  • #173 Miguel Grinberg said 2020-10-29T23:31:26Z

    @Gitau: I'm not familiar with the "localhost didn't send any data" error. Where does it appear? Who is producing it? Any stack traces that can provide more context?

  • #174 Gitau Harrison said 2020-10-30T05:45:03Z

    localhost didn't send any data is the feedback I get when I try to, say, log into the app running on localhost:8000. This is after I linked both the app's container and the mysql container. I am not sure what should happen after the link is created. Should the app be able to send data such as log in or registration data?

  • #175 Gitau Harrison said 2020-10-30T09:31:07Z

    Looking at the command RUN chown -R microblog:microblog ./, here the user microblog is set as the owner of all files and directories in the WORKDIR.

    Breaking it down, RUN chown -R microblog:microblog ./:

    the first microblog in the command above (before the colon), is it the project folder name? the second microblog, is it the new owner?

    I am a bit confused: if I create a new project folder with a different name, say microblog_test, but with the user still called microblog, should it be:

    RUN chown -R microblog_test:microblog ./

    or

    RUN chown -R microblog:microblog_test ./
