Running a Flask Application as a Service with Systemd

When you deploy your application on a server, you need to make sure the application runs uninterrupted. If the application crashes, you'd want it to automatically restart, and if the server experiences a power outage, you'd want the application to start immediately once power is restored. Basically what you need is something that keeps an eye on the application and restarts it if it ever finds that it isn't running anymore.

In previous tutorials, I showed you how to implement this using supervisord, a third-party utility written in Python. Today I'm going to show you a similar solution based on systemd, which is a native component in many Linux distributions, including Debian derivatives such as Ubuntu and Red Hat derivatives such as Fedora and CentOS.

Configuring a Service with Systemd

Systemd is configured through entities called units. There are several types of units, including services, sockets, devices, timers and a few more. For services, unit configuration files must have a .service extension. Below you can see a basic structure for a service unit configuration file:

[Unit]
Description=<a description of your application>
After=network.target

[Service]
User=<username>
WorkingDirectory=<path to your app>
ExecStart=<app start command>
Restart=always

[Install]
WantedBy=multi-user.target

The [Unit] section is common to unit configuration files of all types. It is used to configure general information about the unit, plus any dependencies that help systemd determine the start-up order. In my template I add a description for the service, and I also specify that I want my application to start after the networking subsystem is initialized, since it is a web application.

The [Service] section is where the details specific to your application are included. I'm using the most common options to define the user under which to run the service, the starting directory and the execution command. The Restart option tells systemd that in addition to starting the service when the system boots, I want the application to be restarted if it exits. This takes care of crashes or other unexpected problems that may force the process to end.
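
If you want systemd to wait a few seconds before each restart attempt, you can also add the RestartSec option to the [Service] section. This is optional and not part of the minimal template above, shown here only as an example:

Restart=always
RestartSec=5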

Finally, the [Install] section configures how and when the unit should be enabled. By adding the WantedBy=multi-user.target line I'm telling systemd to activate this unit whenever the system is running in multi-user mode, the normal mode in which a Unix server runs when it is operational. See a discussion of Unix runlevels if you want to know more about multi-user mode.

Unit configuration files are added to the /etc/systemd/system directory so that systemd can find them. Each time you add or modify a unit file you must tell systemd to refresh its configuration:

$ sudo systemctl daemon-reload

And then you can use the systemctl <action> <service-name> command to start, stop, restart or obtain status for your service:

$ sudo systemctl start <service-name>
$ sudo systemctl stop <service-name>
$ sudo systemctl restart <service-name>
$ sudo systemctl status <service-name>

Note: you may be used to managing your services with the service <service-name> <action> command instead of systemctl. In most distributions the service command maps to systemctl and gives you the same result.
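
If you also want your service to be started automatically when the system boots, the unit needs to be enabled. Depending on your distribution, a newly added unit may not be enabled by default, so it does not hurt to do this explicitly:

$ sudo systemctl enable <service-name>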

Writing a Systemd Configuration File for a Flask Application

If you want to create a systemd service file for your own application, you simply have to take the above template and fill out the Description, User, WorkingDirectory and ExecStart options as appropriate.

As an example, let's say that I want to deploy the microblog application featured in my Flask Mega-Tutorial on a Linux server as discussed in this article, but instead of using supervisord I want to use systemd to monitor the process.

For your reference, here is the supervisord config file that I used in the tutorial:

[program:microblog]
command=/home/ubuntu/microblog/venv/bin/gunicorn -b localhost:8000 -w 4 microblog:app
directory=/home/ubuntu/microblog
user=ubuntu
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true

The equivalent unit configuration file for systemd would be written in /etc/systemd/system/microblog.service and would have the following contents:

[Unit]
Description=Microblog web application
After=network.target

[Service]
User=ubuntu
WorkingDirectory=/home/ubuntu/microblog
ExecStart=/home/ubuntu/microblog/venv/bin/gunicorn -b localhost:8000 -w 4 microblog:app
Restart=always

[Install]
WantedBy=multi-user.target

Note how the start command reaches inside the virtual environment to get to the gunicorn executable. This is equivalent to activating the virtual environment and then running gunicorn without a path, but has the benefit that it can be done in a single command.
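
For comparison, this is the equivalent manual invocation, which you can run on the server to verify that the command works before wiring it into the unit file:

$ source /home/ubuntu/microblog/venv/bin/activate
(venv) $ gunicorn -b localhost:8000 -w 4 microblog:app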

After adding this file to your system, you can start the service with these commands:

$ sudo systemctl daemon-reload
$ sudo systemctl start microblog

Environment Variables

If your Flask application expects one or more environment variables to be set ahead of time, you can add these to the service file. For example, if you need FLASK_CONFIG and DATABASE_URL variables set, you can define them with the Environment option as follows:

[Unit]
Description=Microblog web application
After=network.target

[Service]
User=ubuntu
WorkingDirectory=/home/ubuntu/microblog
Environment=FLASK_CONFIG=production
Environment=DATABASE_URL=sqlite:////path/to/the/database.sqlite
ExecStart=/home/ubuntu/microblog/venv/bin/gunicorn -b localhost:8000 -w 4 microblog:app
Restart=always

[Install]
WantedBy=multi-user.target

Note that if you follow the style of my tutorials and use a .env file for your environment variables, then you do not need to add them through the systemd service file. I actually prefer to handle the environment through a .env file, since that is a uniform method that works in both development and production.
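
As a reference, a .env file for this example would live in the application's root directory and contain plain KEY=value lines, which the application loads at start up (in my tutorials this is done with the python-dotenv package):

FLASK_CONFIG=production
DATABASE_URL=sqlite:////path/to/the/database.sqlite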

Accessing Logs

Systemd has a logging subsystem called the journal, implemented by the journald daemon, which collects logs for all the systemd units that are running. The contents of the journal can be viewed using the journalctl utility. Here are a few example commands for common log access:

View the logs for the microblog service:

$ journalctl -u microblog

View the last 25 log entries for the microblog service:

$ journalctl -u microblog -n 25

View the logs for the microblog service from the last five minutes:

$ journalctl -u microblog --since=-5m

Tail the logs for the microblog service:

$ journalctl -u microblog -f

There are many more options available. Run journalctl --help to see a more complete summary of options.

Advanced Usage: Running Worker Pools with Systemd

If you are running your background processes with Celery, then extending the above solution to cover your workers is simple, because Celery allows you to start your pool of worker processes with a single command. This is actually identical to how gunicorn with multiple workers is handled, so all you need to do is create a second .service file to manage your Celery master process, which in turn will manage the workers.
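
For illustration, a Celery service unit could look like the example below. The application name passed to the -A option and the virtual environment path are assumptions for this sketch and would need to be adapted to your project:

[Unit]
Description=Microblog Celery workers
After=network.target

[Service]
User=ubuntu
WorkingDirectory=/home/ubuntu/microblog
ExecStart=/home/ubuntu/microblog/venv/bin/celery -A microblog.celery worker --loglevel=info
Restart=always

[Install]
WantedBy=multi-user.target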

But if you made it to the last chapters of my Flask Mega-Tutorial, you know that I introduced a task queue based on RQ to perform background tasks. When using RQ you have to start workers individually; there is no master process that manages the pool of workers for you. Here is how I managed the RQ workers with supervisord in the tutorial:

[program:microblog-tasks]
command=/home/ubuntu/microblog/venv/bin/rq worker microblog-tasks
numprocs=1
directory=/home/ubuntu/microblog
user=ubuntu
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true

Here the numprocs option makes it trivial to start as many workers as you need. With this option supervisord will start and monitor the specified number of instances from a single configuration file.
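
As a side note, if you raise numprocs above one, supervisord also expects a process_name expression that gives each instance a unique name, along these lines:

numprocs=4
process_name=%(program_name)s_%(process_num)02d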

Unfortunately there isn't a numprocs option in systemd, so this type of service requires a different solution. The most naive way to make this work would be to create a separate service file for each worker instance, but that would be tedious. Instead, what I'm going to do is create the service file as a template that can be used to start any number of identical instances:

[Unit]
Description=Microblog task worker %I
After=network.target

[Service]
User=ubuntu
WorkingDirectory=/home/ubuntu/microblog
ExecStart=/home/ubuntu/microblog/venv/bin/rq worker microblog-tasks
Restart=always

[Install]
WantedBy=multi-user.target

The odd thing that you may have noticed in this file is that I've added a %I in the service description. This is the instance identifier, a value that systemd passes to each instance started from the template. Having %I in the description helps me tell the instances apart, since the output of systemd commands will show it replaced with the instance number. For this specific case I don't need the argument for anything else, but it is common to include %I in other fields, such as the start command, when necessary.
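
For example, if you wanted each worker to register under its own name, you could pass the instance number to the rq worker command through its --name option. This is an optional variation, not something the setup described here requires:

ExecStart=/home/ubuntu/microblog/venv/bin/rq worker --name microblog-worker-%I microblog-tasks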

The other difference from regular service files is that I'm going to write this service file with the name /etc/systemd/system/microblog-tasks@.service. The @ in the filename indicates that this is a template, and as such there's going to be an argument following it to identify each instance spawned from it. I'm going to use instance numbers as arguments, so the different instances of this service are going to be known to systemd as microblog-tasks@1, microblog-tasks@2 and so on.

Now I can start four workers using brace expansion in bash:

$ sudo systemctl daemon-reload
$ sudo systemctl start microblog-tasks@{1..4}
$ sudo systemctl status microblog-tasks@{1..4}

And if you want to address an individual instance you can do that as well:

$ sudo systemctl restart microblog-tasks@3

This is almost as convenient as the single supervisord configuration, but it has the disadvantage that whenever you want to perform an action on all the workers you have to include the {1..4} range in the command.

To really treat the entire pool of worker instances as a single entity, I can create a new systemd target, which is another type of unit. Then I can map all the instances to that target, which will allow me to reference this target when I want to perform an operation on all the members of the group. Let's begin with the unit configuration file for the new target, which I'm going to name /etc/systemd/system/microblog-tasks.target:

[Unit]
Description=Microblog RQ worker pool

[Install]
WantedBy=multi-user.target

Besides the description, the only definition that is needed is to place a dependency on multi-user.target, which, as you recall, is the same target that all the unit files shown above are installed under.
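
One detail to keep in mind is that the target has an [Install] section of its own, so if you want it to be pulled in automatically at boot it needs to be enabled once, just like a regular service:

$ sudo systemctl enable microblog-tasks.target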

Now I can update the service file template to reference the new target, which ends up being equivalent because of the transitive reference to the original multi-user.target:

[Unit]
Description=Microblog task worker %I
After=network.target

[Service]
User=ubuntu
WorkingDirectory=/home/ubuntu/microblog
ExecStart=/home/ubuntu/microblog/venv/bin/rq worker microblog-tasks
Restart=always

[Install]
WantedBy=microblog-tasks.target

Now the system can be reconfigured to use the new setup with these commands:

$ sudo systemctl daemon-reload
$ sudo systemctl disable microblog-tasks@{1..4}
$ sudo systemctl enable microblog-tasks@{1..4}

The disable and enable commands are necessary to force systemd to drop the old target for the worker tasks and apply the new one. Now the pool of workers can be handled with the target:

$ sudo systemctl restart microblog-tasks.target
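
If you want to confirm which worker instances are attached to the target, you can ask systemd to list its dependencies. This is just a sanity check and not a required step:

$ systemctl list-dependencies microblog-tasks.target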

And if later you decide that you want to add a 5th worker, you can do:

$ sudo systemctl enable microblog-tasks@5
$ sudo systemctl start microblog-tasks.target

Of course, you can also take away workers. Here is how you can remove workers 4 and 5:

$ sudo systemctl stop microblog-tasks@{4..5}
$ sudo systemctl disable microblog-tasks@{4..5}

And at this point, I think this solution surpasses supervisord's numprocs option in terms of convenience and functionality, since not only can I control the entire pool of worker processes as a whole, I can also add and remove workers without having to edit any config files!

31 comments
  • #1 ram0937 said

    Thank you very much, very appreciated. And a question: do I have to run
    $ sudo systemctl enable microblog
    for it to start automatically after a system reboot?

  • #2 Miguel Grinberg said

    @ram0937: only if you disabled it previously. The service will initially be enabled.

  • #3 elgow said

    Nice article. I've been doing a lot of systemd service units lately and I have some suggestions for making things less hard-coded and more configurable. In the line below you hard-code both the user home dir and the virtual environment directory name, along with some gunicorn settings.

    ExecStart=/home/ubuntu/microblog/venv/bin/gunicorn -b localhost:8000 -w 4 microblog:app

    It's true that ExecStart is excessively restrictive about having an explicit absolute path for the command. The wizened user will get around that by using /usr/bin/env as the command and making the actual command the 1st argument. That also enables use of environment variables in the command, so you can do something like this:

    EnvironmentFile=%h/service_env.config
    ExecStart=/usr/bin/env %h/${VENV}/bin/gunicorn -b ${HOST_PORT} -w ${NUM_WORKERS} microblog:app

    This gets the user home dir from the %h selector rather than hard-coding it in the unit file. It also allows configuration of the virtualenv directory in the service_env.config file. Since there's now a config file we might as well make some gunicorn params configurable, e.g. the host/port and the number of workers. Changes to environment variables will take effect w/o a daemon-reload, so this approach is especially useful for one-shot service units.

    One of the things I find most galling about systemd is how much it traps you into hard-coding things. The above is one way I've found to get more configurability.

  • #4 Miguel Grinberg said

    @elgow: I don't think having configurable systemd units (or supervisord config files for that matter) is that useful, since these files are in many deployments auto-generated and not hand-edited. You would have something like Ansible generating them from templates, where the configurable bits such as port, host, etc. are inserted. Also having an environment file for the systemd unit while the application already has an environment file of its own can be confusing.

  • #5 elgow said

    I deploy using git rather than processing templates on a developer desktop and pushing them to the target. I think you'll find that many CI systems work in this way. For git based deployments it's nice to have the file in the git checkout be the linked systemd unit file rather than further processing it as a template. Systemd has its own template processing with environment variables and % specifiers, so I use that.

    In my experience Ansible is not so good for deploying applications, and I also find that the Ansible systemd module is limited and doesn't work properly for --user mode systemd units. Some of your readers may also deploy using git, or a CI that uses git, so the tricks I described may be helpful for them.

  • #6 Bohumir said

    Thanks for a nice tutorial! I was solving this previously for some app and now for another one. Good refresher. Nice that it works fine with virtual envs. And interesting to see how to run a worker pool.

  • #7 Mir Hossein Mousavi said

    Liked the article, thanks
    Just one question. Which one do you use in production, services or supervisor?

  • #8 Miguel Grinberg said

    @Mir: you can use the one that you prefer. Both work really well!

  • #9 helo said

    What about supervisorctl?

  • #10 Miguel Grinberg said

    @helo: this article is about systemd, not supervisor.

  • #11 Sina said

    Thank you. That was a great and helpful article.

  • #12 Chris G said

    Thanks - very helpful.
    Only difference with my system (Centos7) was that the new service wasn't enabled for automatic start on boot until I'd run the command
    systemctl enable myservice

  • #13 Marko said

    Nice and helpful article. Explained in detail. Great work.

  • #14 subhan said

    Thank you this article is very helpful.......................

  • #15 Mantej Singh Dhanjal said

    Thank you!

  • #16 Herve Kabla said

    Thanks a lot, very clear and self-explanatory.
    Maybe you should add a:
    StartLimitBurst=0
    in order to avoid multiple restarts.

  • #17 Miguel Grinberg said

    @Herve: the StartLimitBurst is used to define a maximum number of retries. Can you describe the multiple restart problem you address by setting this value to 0?

  • #18 Sharon Gamboa said

    Hi Miguel, nice article. I was trying to implement it but I ran into an issue starting the new service. Do you know what is causing it and how I can fix it?

    flaskrest.service:

    [Unit]
    Description=Gunicorn instance to serve flask application
    After=network.target

    [Service]
    User=gamboas
    Group=www-data
    WorkingDirectory=/ws/gamboas/flask_rest
    Environment=FLASK_CONFIG=production
    Environment="PATH=/ws/gamboas/flask_rest/flaskvenv/bin"
    ExecStart=/ws/gamboas/flask_rest/flaskvenv/bin/gunicorn -b 127.0.0.1:8080 -w 4 wsgi:app
    Restart=always

    [Install]
    WantedBy=multi-user.target

    sw4563:/ws/gamboas > sudo systemctl daemon-reload
    sw4563:/ws/gamboas > sudo systemctl start flaskrest.service
    sw4563:/ws/gamboas > sudo systemctl enable flaskrest.service
    sw4563:/ws/gamboas > sudo systemctl status flaskrest
    ● flaskrest.service - Gunicorn instance to serve flask application
    Loaded: loaded (/etc/systemd/system/flaskrest.service; enabled; vendor preset: enabled)
    Active: failed (Result: start-limit-hit) since Fri 2020-07-17 17:27:04 PDT; 6s ago
    Main PID: 39384 (code=exited, status=203/EXEC)

    Jul 17 17:27:04 sw4563.rose.rdlabs.net systemd[1]: flaskrest.service: Unit entered failed state.
    Jul 17 17:27:04 sw4563.rose.rdlabs.net systemd[1]: flaskrest.service: Failed with result 'exit-code'.
    Jul 17 17:27:04 sw4563.rose.rdlabs.net systemd[1]: flaskrest.service: Service hold-off time over, scheduling restart.
    Jul 17 17:27:04 sw4563.rose.rdlabs.net systemd[1]: Stopped Gunicorn instance to serve flask application.
    Jul 17 17:27:04 sw4563.rose.rdlabs.net systemd[1]: flaskrest.service: Start request repeated too quickly.
    Jul 17 17:27:04 sw4563.rose.rdlabs.net systemd[1]: Failed to start Gunicorn instance to serve flask application.
    Jul 17 17:27:04 sw4563.rose.rdlabs.net systemd[1]: flaskrest.service: Unit entered failed state.
    Jul 17 17:27:04 sw4563.rose.rdlabs.net systemd[1]: flaskrest.service: Failed with result 'start-limit-hit'.

    sw4563:/ws/gamboas > journalctl -u flaskrest

    Jul 17 17:27:04 sw4563.rose.rdlabs.net systemd[1]: Started Gunicorn instance to serve flask application.
    Jul 17 17:27:04 sw4563.rose.rdlabs.net systemd[1]: flaskrest.service: Main process exited, code=exited, status=203/EXEC
    Jul 17 17:27:04 sw4563.rose.rdlabs.net systemd[1]: flaskrest.service: Unit entered failed state.
    Jul 17 17:27:04 sw4563.rose.rdlabs.net systemd[1]: flaskrest.service: Failed with result 'exit-code'.
    Jul 17 17:27:04 sw4563.rose.rdlabs.net systemd[1]: flaskrest.service: Service hold-off time over, scheduling restart.
    Jul 17 17:27:04 sw4563.rose.rdlabs.net systemd[1]: Stopped Gunicorn instance to serve flask application.
    Jul 17 17:27:04 sw4563.rose.rdlabs.net systemd[1]: flaskrest.service: Start request repeated too quickly.
    Jul 17 17:27:04 sw4563.rose.rdlabs.net systemd[1]: Failed to start Gunicorn instance to serve flask application.
    Jul 17 17:27:04 sw4563.rose.rdlabs.net systemd[1]: flaskrest.service: Unit entered failed state.
    Jul 17 17:27:04 sw4563.rose.rdlabs.net systemd[1]: flaskrest.service: Failed with result 'start-limit-hit'.

  • #19 Miguel Grinberg said

    @Sharon: try running the gunicorn command from the command-line and make sure it works that way before you move to the service file. My guess is that there is a problem during gunicorn start up.

  • #20 ikhsan said

    Why are some dependencies not installed? How can dependencies be installed automatically once the app is in systemd?

  • #21 Miguel Grinberg said

    @ikhsan: which dependencies? Python dependencies? These are installed manually. Why do you want them to be installed by systemd?

  • #22 Rodrigo B said

    Dear Miguel,

    I was wondering if, in the systemd start file for Gunicorn, I could have the Python and Gunicorn executables in a path completely different from the web application path.

    Do you think it would work if I point "WorkingDirectory" to my Flask code, and "ExecStart" to the gunicorn executable in another path?

    Thank you very much for your help.

  • #23 Miguel Grinberg said

    @Rodrigo: Yes, I believe that should work fine.

  • #24 Peter Koech said

    @miguel: I managed to set up an app and its workers. However, I have three separate apps to which I would like to assign workers.
    The workers are configured with different working directories to reflect each app's working directory. I noticed that the apps send jobs to any available worker and not to a specific worker. This messes up the jobs in the queue, since they may be executed by the wrong app.
    How can I ensure that each app sends jobs only to the worker that has its working directory configured?

  • #25 Miguel Grinberg said

    @Peter: each application should use a different queue, not the same queue. You should use a different queue name in each app.
