Running a Flask Application as a Service with Systemd

When you deploy your application on a server, you need to make sure the application runs uninterrupted. If the application crashes, you'd want it to automatically restart, and if the server experiences a power outage, you'd want the application to start immediately once power is restored. Basically what you need is something that keeps an eye on the application and restarts it if it ever finds that it isn't running anymore.

In previous tutorials, I showed you how to implement this using supervisord, which is a third-party utility written in Python. Today I'm going to show you a similar solution based on systemd, a native component of many Linux distributions, including Debian derivatives such as Ubuntu and Red Hat derivatives such as Fedora and CentOS.

Configuring a Service with Systemd

Systemd is configured through entities called units. There are several types of units, including services, sockets, devices, timers and a few more. For services, unit configuration files must have a .service extension. Below you can see a basic structure for a service unit configuration file:

[Unit]
Description=<a description of your application>
After=network.target

[Service]
User=<username>
WorkingDirectory=<path to your app>
ExecStart=<app start command>
Restart=always

[Install]
WantedBy=multi-user.target

The [Unit] section is common to unit configuration files of all types. It is used to configure general information about the unit and any dependencies that help systemd determine the startup order. In my template I'm adding a description for the service, and I'm also specifying that I want my application to start after the networking subsystem is initialized, since it is a web application.

The [Service] section is where the details specific to your application are included. I'm using the most common options to define the user under which to run the service, the starting directory and the execution command. The Restart option tells systemd that in addition to starting the service when the system boots, I want the application to be restarted if it exits. This takes care of crashes or other unexpected problems that may force the process to end.

Finally, the [Install] section configures how and when the unit should be enabled. By adding the WantedBy=multi-user.target line I'm telling systemd to activate this unit whenever the system is running in multi-user mode, the normal operational mode of a Unix server. Note that the settings in this section only take effect once the unit is enabled with the systemctl enable command, which you will see in a moment. See a discussion on Unix runlevels if you want to know more details about multi-user mode.

Unit configuration files must be placed in the /etc/systemd/system directory for systemd to find them. Each time you add or modify a unit file you must tell systemd to refresh its configuration:

$ sudo systemctl daemon-reload

And then you can use the systemctl <action> <service-name> command to start, stop, restart or obtain status for your service:

$ sudo systemctl start <service-name>
$ sudo systemctl stop <service-name>
$ sudo systemctl restart <service-name>
$ sudo systemctl status <service-name>

Note: you may be used to managing your services with the service <service-name> <action> command instead of systemctl. In most distributions the service command maps to systemctl and gives you the same result.
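
For example, on a distribution where this mapping exists, the following two commands should give you the same result:

$ sudo service <service-name> restart
$ sudo systemctl restart <service-name>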

Writing a Systemd Configuration File for a Flask Application

If you want to create a systemd service file for your own application, you simply have to take the above template and fill out the Description, User, WorkingDirectory and ExecStart options as appropriate.

As an example, let's say that I want to deploy the microblog application featured in my Flask Mega-Tutorial on a Linux server as discussed in this article, but instead of using supervisord I want to use systemd to monitor the process.

For your reference, here is the supervisord config file that I used in the tutorial:

[program:microblog]
command=/home/ubuntu/microblog/venv/bin/gunicorn -b localhost:8000 -w 4 microblog:app
directory=/home/ubuntu/microblog
user=ubuntu
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true

The equivalent unit configuration file for systemd would be written in /etc/systemd/system/microblog.service and would have the following contents:

[Unit]
Description=Microblog web application
After=network.target

[Service]
User=ubuntu
WorkingDirectory=/home/ubuntu/microblog
ExecStart=/home/ubuntu/microblog/venv/bin/gunicorn -b localhost:8000 -w 4 microblog:app
Restart=always

[Install]
WantedBy=multi-user.target

Note how the start command reaches inside the virtual environment to get to the gunicorn executable. This is equivalent to activating the virtual environment and then running gunicorn without a path, but has the benefit that it can be done in a single command.
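
If you want to double-check the path before handing it to systemd, you can run the executable directly. Assuming the paths used in this example, asking gunicorn for its version should confirm that the command is correct:

$ /home/ubuntu/microblog/venv/bin/gunicorn --version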

After adding this file to your system, you can start the service with these commands:

$ sudo systemctl daemon-reload
$ sudo systemctl start microblog
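
If you also want the application to start automatically when the system boots, which is what the [Install] section makes possible, you have to enable the service as well:

$ sudo systemctl enable microblog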

Environment Variables

If your Flask application expects one or more environment variables to be set ahead of time, you can add these to the service file. For example, if you need FLASK_CONFIG and DATABASE_URL variables set, you can define them with the Environment option as follows:

[Unit]
Description=Microblog web application
After=network.target

[Service]
User=ubuntu
WorkingDirectory=/home/ubuntu/microblog
Environment=FLASK_CONFIG=production
Environment=DATABASE_URL=sqlite:////path/to/the/database.sqlite
ExecStart=/home/ubuntu/microblog/venv/bin/gunicorn -b localhost:8000 -w 4 microblog:app
Restart=always

[Install]
WantedBy=multi-user.target

Note that if you follow the style of my tutorials and use a .env file for your environment variables, then you do not need to add them through the systemd service file. I actually prefer to handle the environment through a .env file, since that is a uniform method that works in both development and production.
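
As a side note, systemd can also import variables from a file with its EnvironmentFile option, which you could point at your .env file. Be aware that systemd's parsing rules are not identical to those of the dotenv packages, so this is only a safe substitute for simple KEY=value lines. Here is a sketch of the [Service] section using this option:

[Service]
User=ubuntu
WorkingDirectory=/home/ubuntu/microblog
EnvironmentFile=/home/ubuntu/microblog/.env
ExecStart=/home/ubuntu/microblog/venv/bin/gunicorn -b localhost:8000 -w 4 microblog:app
Restart=always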

Accessing Logs

Systemd has a logging subsystem called the journal, implemented by the journald daemon, which collects logs for all the systemd units that are running. The contents of the journal can be viewed using the journalctl utility. Here are a few example commands for common log access:

View the logs for the microblog service:

$ journalctl -u microblog

View the last 25 log entries for the microblog service:

$ journalctl -u microblog -n 25

View the logs for the microblog service from the last five minutes:

$ journalctl -u microblog --since=-5m

Tail the logs for the microblog service:

$ journalctl -u microblog -f

There are many more options available. Run journalctl --help to see a more complete summary of options.

Advanced Usage: Running Worker Pools with Systemd

If you are running your background processes with Celery, then extending the above solution to cover your workers is simple, because Celery allows you to start your pool of worker processes with a single command. This is actually identical to how gunicorn with multiple workers is handled, so all you need to do is create a second .service file to manage your Celery master process, which in turn will manage the workers.
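
To give you an idea of what that might look like, below is a sketch of such a Celery service file. This isn't taken from my tutorials, and the module passed to the -A option is a placeholder that you would replace with your own Celery application:

[Unit]
Description=Microblog Celery worker pool
After=network.target

[Service]
User=ubuntu
WorkingDirectory=/home/ubuntu/microblog
ExecStart=/home/ubuntu/microblog/venv/bin/celery -A microblog.celery worker
Restart=always

[Install]
WantedBy=multi-user.target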

But if you made it to the last chapters of my Flask Mega-Tutorial, you know that I've introduced a task queue based on RQ to perform background tasks. When using RQ, you have to start workers individually; there is no master process that manages the pool of workers for you. Here is how I managed the RQ workers with supervisord in the tutorial:

[program:microblog-tasks]
command=/home/ubuntu/microblog/venv/bin/rq worker microblog-tasks
numprocs=1
directory=/home/ubuntu/microblog
user=ubuntu
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true

Here the numprocs argument makes it trivial to start as many workers as you need. With this argument supervisord will start and monitor the specified number of instances from a single configuration file.
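
One detail to keep in mind if you use this option (check the supervisord documentation to confirm): when numprocs is greater than one, supervisord requires a process_name expression that includes the process number, so a pool of four workers would need something like this:

numprocs=4
process_name=%(program_name)s_%(process_num)02d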

Unfortunately there isn't a numprocs option in systemd, so this type of service requires a different solution. The most naive way to make this work would be to create a separate service file for each worker instance, but that would be tedious. Instead, what I'm going to do is write the service file as a template that can be used to start any number of identical instances:

[Unit]
Description=Microblog task worker %I
After=network.target

[Service]
User=ubuntu
WorkingDirectory=/home/ubuntu/microblog
ExecStart=/home/ubuntu/microblog/venv/bin/rq worker microblog-tasks
Restart=always

[Install]
WantedBy=multi-user.target

The odd thing that you may have noticed in this file is that I've added a %I in the service description. This is the instance argument, a value (a number, in this case) that is passed to each instance started from the template. Having this %I in the description will help me identify the instances, as all the output from systemd commands is going to show it replaced with the instance number. For this specific case I don't really need to use this argument for anything else, but it is common to include the %I in other fields, such as the start command, when necessary.
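
As a hypothetical example of that, if you wanted each worker to register with RQ under a distinct name, you could pass the instance number to the rq command through its --name option:

ExecStart=/home/ubuntu/microblog/venv/bin/rq worker microblog-tasks --name worker%I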

The other difference with regular service files is that I'm going to write this service file with the name /etc/systemd/system/microblog-tasks@.service. The @ in the filename indicates that this is a template, and as such there's going to be an argument following it to identify each instance spawned from it. I'm going to use instance numbers as arguments, so the different instances of this service are then going to be known in systemd as microblog-tasks@1, microblog-tasks@2 and so on.

Now I can start four workers using brace expansion in bash:

$ sudo systemctl daemon-reload
$ sudo systemctl start microblog-tasks@{1..4}
$ sudo systemctl status microblog-tasks@{1..4}

And if you want to address an individual instance you can do that as well:

$ sudo systemctl restart microblog-tasks@3

This is almost as convenient as the single supervisord configuration, but there is a disadvantage in that when you want to perform an action on all the workers you have to include the {1..4} range in the command.

To really treat the entire pool of worker instances as a single entity, I can create a new systemd target, which is another type of unit. Then I can map all the instances to that target, which will allow me to reference this target when I want to perform an operation on all the members of the group. Let's begin with the unit configuration file for the new target, which I'm going to name /etc/systemd/system/microblog-tasks.target:

[Unit]
Description=Microblog RQ worker pool

[Install]
WantedBy=multi-user.target

Besides the description, the only definition that is needed is a dependency on multi-user.target, which, as you recall, is the same target under which all the unit files shown above were defined.
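
Keep in mind that for this dependency to take effect at boot, the target itself needs to be enabled, just like a regular service:

$ sudo systemctl enable microblog-tasks.target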

Now I can update the service file template to reference the new target, which ends up being equivalent because of the transitive reference to the original multi-user.target:

[Unit]
Description=Microblog task worker %I
After=network.target

[Service]
User=ubuntu
WorkingDirectory=/home/ubuntu/microblog
ExecStart=/home/ubuntu/microblog/venv/bin/rq worker microblog-tasks
Restart=always

[Install]
WantedBy=microblog-tasks.target

Now the system can be reconfigured to use the new setup with these commands:

$ sudo systemctl daemon-reload
$ sudo systemctl disable microblog-tasks@{1..4}
$ sudo systemctl enable microblog-tasks@{1..4}

The disable and enable commands are necessary to force systemd to drop the old target for the worker tasks and apply the new one. Now the pool of workers can be handled with the target:

$ sudo systemctl restart microblog-tasks.target
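
And if you ever need to see which worker instances are attached to the target, the list-dependencies command will show them:

$ systemctl list-dependencies microblog-tasks.target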

And if later you decide that you want to add a 5th worker, you can do:

$ sudo systemctl enable microblog-tasks@5
$ sudo systemctl start microblog-tasks.target

Of course, you can also take away workers. Here is how you can remove workers 4 and 5:

$ sudo systemctl stop microblog-tasks@{4..5}
$ sudo systemctl disable microblog-tasks@{4..5}

And at this point, I think this solution surpasses supervisord's numprocs option in terms of convenience and functionality, since not only can I control the entire pool of worker processes as a whole, but I can also add and remove workers without having to edit any configuration files!

31 comments
  • #26 Stan said

    Bug Issue:

    If you deploy multiple workers as systemd services, the user login session is not shared between these workers, so users would have to log in again if the next request is balanced to another worker. How to solve this???

  • #27 Miguel Grinberg said

    @Stan: normally your session is stored in a cookie, so it can be handled just fine by any worker. If you use a different type of authentication that requires clients to be handled by the same worker, then you have to configure session affinity in the load balancer that you are using.

  • #28 Ciro said

    Nice article Miguel.
    I followed the tutorial, but I have a problem. When I start the app as service, I cannot reach the API endpoint.
    I tried with curl from a remote machine (curl -k https://10.10.10.10:443). It works instead when I run the app normally.
    That's my configuration.

    [Unit]
    Description=Flask Web API
    [Install]
    WantedBy=multi-user.target
    [Service]
    Type=simple
    User=root
    PermissionsStartOnly=true
    ExecStart=/usr/bin/python3 /api/app.py
    Restart=on-failure
    TimeoutSec=600

    I am working on RHEL8 and I create the service file under /etc/systemd/system.
    From the systemctl status I can see that the app is running. What can be the issue? Do I have to expose the port somehow?

  • #29 Miguel Grinberg said

    @Ciro: you should check the logs of the running application to see if there are any clues there. I really have no way to know, because you haven't shared what you have in app.py.

  • #30 Do the RQ workers run on a port? said

    I've successfully created the systemd process, and am able to spawn workers as wanted.

    But I'm confused - How are these workers used? Are they accessible on a specific port? Is a config file needed?

    Status from systemctl status name-tasks@n is 'Active' and green. Something's running. I just can't figure out how to access it.

  • #31 Miguel Grinberg said

    This tutorial covers the deployment of the workers; it is not a tutorial on running background jobs. See my Flask Mega-Tutorial to learn how to work with RQ.
