How to Write Unit Tests in Python, Part 1: Fizz Buzz

Today I'm starting a new series of articles about a topic that does not get a lot of coverage: how to write Python unit tests. Unlike other testing tutorials, I'm going to focus on testing techniques more than on the testing tools themselves. The idea is that in each part of this series I will look at a particular piece of code and discuss the challenges it presents from a testing point of view and how to address them.

In this first part I'm going to introduce the testing setup that I use in my projects, which is based on pytest. Then I will show you how to write unit tests for an implementation of the popular fizz buzz game, which is frequently used as a coding interview exercise.

Why do we test?

A lot of people don't understand the role of automated testing and think it is an unnecessary burden. Their argument is that they would be running and extensively testing the code manually as part of writing it, so by the time the code is complete they would have found and fixed all or at least the most severe bugs.

While this logic is flawed, there is some truth in it. Coding requires alternating between writing code and running it to verify it works as intended, so you could say that when you are done writing some new code the chance of having bugs in it is low, especially if you have some previous coding experience.

The flaw in the argument is that in most cases you will not be coding from scratch, you will be adding to or modifying an existing codebase. Your coding and manual testing efforts are going to be focused on the new work that you are doing, so how can you ensure that the new code isn't indirectly affecting other parts of your project that you aren't paying too much attention to? Bugs that appear in old code that are indirectly caused by newer changes are called regressions. As your project gets larger the effort to test for regressions every time changes are made is going to become more and more time consuming.

When you implement automated tests for the code that you write you are not mainly doing it to ensure this code works as intended. You should think of automated testing as taking out an insurance policy on your newly written code against future regressions. Automated tests are easy to run, so they can be executed every time the project changes, and with this you make sure that your application isn't affected when changes and additions are introduced.

As a secondary goal, automated tests can also help to verify that the new code performs as expected under conditions that are difficult to reproduce during manual testing. For example, if you wrote a piece of code that calls a third-party API to get some data, you are likely going to manually test this very well, but how does this code handle an error due to the third-party service being down, for example? This is harder to simulate when running the application for real, but it is easier to do in automated tests.

Unit Testing vs. Integration Testing vs. Functional Testing

You may have heard references to unit, integration and functional tests, and may not be clear on what the differences are. Let's review the definitions for these three testing strategies:

  • Unit tests evaluate individual modules of your project in isolation to confirm they work as expected.
  • Integration tests evaluate two or more modules of your project to confirm they work correctly as a group.
  • Functional tests evaluate the features or functions of your project end-to-end to make sure they work as expected.

As you can see, these three testing options have different scopes, with unit tests focusing on a specific part of the system, functional tests focusing on the system as a whole, and integration tests somewhere in between the other two.

In general you will want to test as much of your code as possible with unit tests, as these are easier to write and faster to run. Integration and functional tests are progressively harder to write as they require orchestrating the work of multiple components.

In this series of articles I'm only going to discuss unit testing.

Unit Testing in Python

A Python unit test is a function or method that does two things: it first runs a small part of the application and then it verifies or asserts that the result of running that code is correct. For example, imagine that the target application has a function that is called forty_two() that is supposed to return the number 42 when called. A Python unit test for this function could be written as follows:

from app import forty_two

def test_forty_two():
    result = forty_two()
    assert result == 42

This unit test is extremely simple because the function that is the subject of the test is also simple. In a more realistic scenario running a part of your application may require more than one line of code, and likewise, asserting that the code functioned correctly may require multiple assert statements. It is also very common to need multiple unit tests to cover all possible behaviors and outcomes from a piece of application code.
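For example (a hypothetical sketch, assuming the application also had a divide() function that returns a quotient and a remainder), a single test may need several assertions, and related behaviors often get their own separate tests:

from app import divide

def test_divide_exact():
    quotient, remainder = divide(6, 3)
    assert quotient == 2
    assert remainder == 0

def test_divide_with_remainder():
    quotient, remainder = divide(7, 2)
    assert quotient == 3
    assert remainder == 1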

How are these unit tests executed? You save them in a Python file and then start a test runner, which will find and execute all your tests and alert you to any assertions in them that failed.

Two of the most popular frameworks in Python to write and run unit tests are the unittest package from the Python standard library and the pytest package. For this series of articles I'm going to use a hybrid testing solution that incorporates parts of both packages, as follows:

  • The object-oriented approach based on the TestCase class of the unittest package will be used to structure and organize the unit tests.
  • The assert statement from Python will be used to write assertions. The pytest package includes some enhancements to the assert statement to provide more verbose output when there is a failure.
  • The pytest test runner will be used to run the tests, as it is required to use the enhanced assert. This test runner has full support for the TestCase class from the unittest package.

Don't worry if some of these things don't make much sense yet. The examples that are coming will make it more clear.

Testing a Fizz Buzz Application

The "Fizz Buzz" game consists in counting from 1 to 100, but replacing the numbers that are divisible by 3 with the word "Fizz", the ones that are divisible by 5 with "Buzz", and the ones that are divisible by both with "FizzBuzz". This game is intended to help kids learn division, but has been made into a very popular coding interview question.

I googled for implementations of the "Fizz Buzz" problem in Python and this one came up first:

for i in range(1, 101):
    if i % 15 == 0:
        print("FizzBuzz")
    elif i % 3 == 0:
        print("Fizz")
    elif i % 5 == 0:
        print("Buzz")
    else:
        print(i)

After you've seen the forty_two() unit test example above, testing this code seems awfully difficult, right? For starters there is no function to call from a unit test. And nothing is returned, the program just prints results to the screen, so how can you verify what is printed to the terminal?

To test this code in this original form you would need to write a functional test that runs it, captures the output, and then ensures it is correct. Instead of doing that, however, it is possible to refactor the application to make it more unit testing friendly. This is an important point that you should remember: if a piece of code proves difficult to test in an automated way, you should consider refactoring it so that testing becomes easier.

Here is a new version of the "Fizz Buzz" program above that is functionally equivalent, but has a more robust structure that lends itself better to writing tests:

def fizzbuzz(i):
    if i % 15 == 0:
        return "FizzBuzz"
    elif i % 3 == 0:
        return "Fizz"
    elif i % 5 == 0:
        return "Buzz"
    else:
        return i


def main():
    for i in range(1, 101):
        print(fizzbuzz(i))


if __name__ == '__main__':
    main()

What I did here is to encapsulate the main logic of the application in the fizzbuzz() function. This function takes a number as input argument and returns what needs to be printed for that number, which can be Fizz, Buzz, FizzBuzz or the number.

What's left after that is the loop that iterates over the numbers. Instead of leaving that in the global scope I moved it into a main() function, and then I added a standard top-level script check so that this function is automatically executed when the script is run directly, but not when it is imported by another script. This is necessary because the unit test will need to import this code.

I hope you now see that testing the refactored code might be possible after all.

Writing a test case

Since this is going to be a hands-on exercise, copy the refactored code from the previous section and save it to a file named fizzbuzz.py in an empty directory on your computer. Open a terminal or command prompt window and change into this directory. Set up a new Python virtual environment using your favorite tool.
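If you don't have a preferred tool, the venv module from the Python standard library works well. For example, on a Unix-like system:

$ python3 -m venv venv
$ source venv/bin/activate

On Windows, the activation command is venv\Scripts\activate instead.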

Since you will be using pytest in a little bit, install it in your virtual environment:

(venv) $ pip install pytest

The fizzbuzz() function can be tested by feeding it a few different numbers and asserting that the correct response is given for each one. To keep things nicely organized, separate unit tests can be written to test for "Fizz", "Buzz" and "FizzBuzz" numbers.

Here is a TestCase class that includes a method to test for "Fizz":

import unittest
from fizzbuzz import fizzbuzz


class TestFizzBuzz(unittest.TestCase):
    def test_fizz(self):
        for i in [3, 6, 9, 18]:
            print('testing', i)
            assert fizzbuzz(i) == 'Fizz'

This has some similarities with the forty_two() unit test, but now the test is a method within a class, not a standalone function as before. The unittest framework's TestCase class is used as a base class for the TestFizzBuzz class. Organizing tests as methods of a test case class is useful to keep several related tests together. The benefits are not going to be evident with the simple application that is the testing subject in this article, so for now you'll have to bear with me and trust that this makes it easier to write more complex unit tests.

Since testing for "Fizz" numbers can be done really quickly, the implementation of this test runs a few numbers instead of just one, so a loop is used to go through a list of several "Fizz" numbers and asserting that all of them are reported as such.

Save this code in a file named test_fizzbuzz.py in the same directory as the main fizzbuzz.py file, and then type pytest in your terminal:

(venv) $ pytest
========================== test session starts ===========================
platform darwin -- Python 3.8.6, pytest-6.1.2, py-1.9.0, pluggy-0.13.1
rootdir: /Users/miguel/testing
collected 1 item

test_fizzbuzz.py .                                                 [100%]

=========================== 1 passed in 0.03s ============================

The pytest command is smart and automatically detects unit tests. In general it will assume that any Python files named with the test_[something].py or [something]_test.py patterns contain unit tests. It will also look for files with this naming pattern in subdirectories. A common way to keep unit tests nicely organized in a larger project is to put them in a tests package, separately from the application source code.
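As an illustration, a larger project might use a layout like the following hypothetical one (the directory names are just examples), with pytest discovering the test modules inside the tests directory:

myproject/
    myapp/
        __init__.py
        fizzbuzz.py
    tests/
        test_fizzbuzz.py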

If you want to see what a test failure looks like, edit the list of numbers used in this test to include 4 or some other number that is not divisible by 3. Then run pytest again:

(venv) $ pytest
========================== test session starts ===========================
platform darwin -- Python 3.8.6, pytest-6.1.2, py-1.9.0, pluggy-0.13.1
rootdir: /Users/miguel/testing
collected 1 item

test_fizzbuzz.py F                                                 [100%]

================================ FAILURES ================================
_________________________ TestFizzBuzz.test_fizz _________________________

self = <test_fizzbuzz.TestFizzBuzz testMethod=test_fizz>

    def test_fizz(self):
        for i in [3, 4, 6, 9, 18]:
            print('testing', i)
>           assert fizzbuzz(i) == 'Fizz'
E           AssertionError: assert 4 == 'Fizz'
E            +  where 4 = fizzbuzz(4)

test_fizzbuzz.py:9: AssertionError
-------------------------- Captured stdout call --------------------------
testing 3
testing 4
======================== short test summary info =========================
FAILED test_fizzbuzz.py::TestFizzBuzz::test_fizz - AssertionError: asse...
=========================== 1 failed in 0.13s ===========================

Note how the test stopped as soon as one of the numbers failed to test as a "Fizz" number. To help you in figuring out exactly what part of the test failed, pytest shows you the source code lines around the failure and the expected and actual results for the failed assertion. It also captures any output that the test prints and includes it in the report. Above you can see that the test went through numbers 3 and 4 and that's when the assertion for 4 failed, causing the test to end. After you experiment with test failures revert the test to its original passing condition.

Now that you've seen how "Fizz" numbers are tested, it is easy to add two more unit tests for "Buzz" and "FizzBuzz" numbers:

import unittest
from fizzbuzz import fizzbuzz


class TestFizzBuzz(unittest.TestCase):
    def test_fizz(self):
        for i in [3, 6, 9, 18]:
            print('testing', i)
            assert fizzbuzz(i) == 'Fizz'

    def test_buzz(self):
        for i in [5, 10, 50]:
            print('testing', i)
            assert fizzbuzz(i) == 'Buzz'

    def test_fizzbuzz(self):
        for i in [15, 30, 75]:
            print('testing', i)
            assert fizzbuzz(i) == 'FizzBuzz'

Running pytest once again now shows that there are three unit tests and that all are passing:

(venv) $ pytest
========================== test session starts ===========================
platform darwin -- Python 3.8.6, pytest-6.1.2, py-1.9.0, pluggy-0.13.1
rootdir: /Users/miguel/testing
collected 3 items

test_fizzbuzz.py ...                                               [100%]

=========================== 3 passed in 0.04s ============================

Test Coverage

Are the three tests above good enough? What do you think?

You are going to have to use your own judgement to decide how much automated testing gives you enough confidence that your tests protect you against future failures, but there is a technique called code coverage that can help you get a better picture.

Code coverage is a technique that consists of watching the code as it executes in the interpreter and keeping track of which lines run and which do not. When code coverage is combined with unit tests, it can be used to get a report of all the lines of code that your unit tests did not exercise.

There is a plugin for pytest called pytest-cov that adds code coverage support to a test run. Let's install it into the virtual environment:

(venv) $ pip install pytest-cov

The command pytest --cov=fizzbuzz runs the unit tests with code coverage tracking enabled for the fizzbuzz module:

(venv) $ pytest --cov=fizzbuzz
========================== test session starts ===========================
platform darwin -- Python 3.8.6, pytest-6.2.2, py-1.10.0, pluggy-0.13.1
rootdir: /Users/miguel/testing
plugins: cov-2.11.1
collected 3 items

test_fizzbuzz.py ...                                               [100%]

---------- coverage: platform darwin, python 3.8.6-final-0 -----------
Name          Stmts   Miss  Cover
---------------------------------
fizzbuzz.py      13      4    69%
---------------------------------
TOTAL            13      4    69%


=========================== 3 passed in 0.07s ============================

Note that when running tests with code coverage it is useful to always limit coverage to the application module or package, which is passed as an argument to the --cov option as seen above. If the scope is not restricted, then code coverage will apply to the entire Python process, which will include functions from the Python standard library and third-party dependencies, resulting in a very noisy report at the end.
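If you'd rather not type the coverage option on every run, one optional convenience (not something the remaining examples rely on, they keep passing the options explicitly) is to add it to a pytest.ini file in the project directory, so that a plain pytest invocation picks it up automatically:

[pytest]
addopts = --cov=fizzbuzz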

With this report you know that the three unit tests cover 69% of the fizzbuzz.py code. I'm sure you agree that it would be useful to know exactly what parts of the application make up that other 31% of the code that the tests are currently missing, right? This could help you determine what other tests need to be written.

The pytest-cov plugin can generate the final report in several formats. The one you've seen above is the most basic one, called term because it is printed to the terminal. A variant of this report is called term-missing, which adds a list of the lines of code that were not covered:

(venv) $ pytest --cov=fizzbuzz --cov-report=term-missing
========================== test session starts ===========================
platform darwin -- Python 3.8.6, pytest-6.2.2, py-1.10.0, pluggy-0.13.1
rootdir: /Users/miguel/testing
plugins: cov-2.11.1
collected 3 items

test_fizzbuzz.py ...                                               [100%]

---------- coverage: platform darwin, python 3.8.6-final-0 -----------
Name          Stmts   Miss  Cover   Missing
-------------------------------------------
fizzbuzz.py      13      4    69%   9, 13-14, 18
-------------------------------------------
TOTAL            13      4    69%


=========================== 3 passed in 0.07s ============================

The term-missing report shows the list of line numbers that did not execute during the tests. Lines 13 and 14 are the body of the main() function, which were intentionally left out of the tests. Recall that when I refactored the original application I decided to split the logic into the main() and fizzbuzz() functions, with the intention of having the core logic in fizzbuzz() to make it easy to test. There is nothing in the current tests that attempts to run the main() function, so it is expected that those lines will appear as missing in the coverage report.

Likewise, line 18 is the last line of the application, which only runs when the fizzbuzz.py file is invoked as the main script, so it is also expected this line will not run during the tests.

Line 9, however, is inside the fizzbuzz() function. It looks like one aspect of the logic in this function is not currently being tested. Can you see what it is? Line 9 is the last line of the function, which returns the input number after it was determined that the number isn't divisible by 3 or by 5. This is an important case in this application, so a unit test should be added to check for numbers that are not "Fizz", "Buzz" or "FizzBuzz".

One detail that this report still does not capture accurately is lines that contain conditionals. When you have a line with an if statement, such as lines 2, 4, 6 and 17 in fizzbuzz.py, saying that the line is covered does not give you the full picture, because these lines can execute in two very distinct ways depending on the condition evaluating to True or False. The code coverage analysis can also be configured to treat lines with conditionals as needing double coverage to account for the two possible outcomes. This is called branch coverage and is enabled with the --cov-branch option:

(venv) $ pytest --cov=fizzbuzz --cov-report=term-missing --cov-branch
========================== test session starts ===========================
platform darwin -- Python 3.8.6, pytest-6.2.2, py-1.10.0, pluggy-0.13.1
rootdir: /Users/miguel/testing
plugins: cov-2.11.1
collected 3 items

test_fizzbuzz.py ...                                               [100%]

---------- coverage: platform darwin, python 3.8.6-final-0 -----------
Name          Stmts   Miss Branch BrPart  Cover   Missing
---------------------------------------------------------
fizzbuzz.py      13      4     10      2    65%   6->9, 9, 13-14, 17->18, 18
---------------------------------------------------------
TOTAL            13      4     10      2    65%


=========================== 3 passed in 0.07s ============================

Adding branch coverage has lowered the covered percentage to 65%. And the "Missing" column not only shows lines 9, 13, 14 and 18, but also adds those lines with conditionals that have been covered for only one of the two possible outcomes. The if statement in line 17, which was reported as fully covered before, now appears as not being covered for the True case, which would move on to line 18. And the elif in line 6 is not covered for a False condition, where execution would jump to line 9.

As mentioned above, a test is missing to cover numbers that are not divisible by 3 or 5. This is evident not only because line 9 is reported as missing, but also because of the missing 6->9 conditional. Let's add a fourth unit test:

import unittest
from fizzbuzz import fizzbuzz


class TestFizzBuzz(unittest.TestCase):
    def test_fizz(self):
        for i in [3, 6, 9, 18]:
            print('testing', i)
            assert fizzbuzz(i) == 'Fizz'

    def test_buzz(self):
        for i in [5, 10, 50]:
            print('testing', i)
            assert fizzbuzz(i) == 'Buzz'

    def test_fizzbuzz(self):
        for i in [15, 30, 75]:
            print('testing', i)
            assert fizzbuzz(i) == 'FizzBuzz'

    def test_number(self):
        for i in [2, 4, 88]:
            print('testing', i)
            assert fizzbuzz(i) == i

Let's run pytest one more time to see how this new test helped improve coverage:

(venv) $ pytest --cov=fizzbuzz --cov-report=term-missing --cov-branch
========================== test session starts ===========================
platform darwin -- Python 3.8.6, pytest-6.2.2, py-1.10.0, pluggy-0.13.1
rootdir: /Users/miguel/testing
plugins: cov-2.11.1
collected 4 items

test_fizzbuzz.py ....                                              [100%]

---------- coverage: platform darwin, python 3.8.6-final-0 -----------
Name          Stmts   Miss Branch BrPart  Cover   Missing
---------------------------------------------------------
fizzbuzz.py      13      3     10      1    74%   13-14, 17->18, 18
---------------------------------------------------------
TOTAL            13      3     10      1    74%


=========================== 4 passed in 0.08s ============================

This is looking much better. Coverage is now at 74%, and in particular all the lines that belong to the fizzbuzz() function, which are the core logic of the application, are covered.

Code Coverage Exceptions

The four unit tests now do a good job at keeping the main logic tested, but the coverage report still shows lines 13, 14 and 18 as not covered, plus the conditional on line 17 as partially covered.

I'm sure you will agree that lines 17 and 18 are pretty safe, so it is an annoyance to see them listed in every coverage report. For cases where you as a developer make a conscious decision that a piece of code does not need to be tested, it is possible to mark those lines as exceptions, so that they are counted as covered and do not appear in coverage reports as missing. This is done by adding a comment with the text pragma: no cover to the line or lines in question. Here is the updated fizzbuzz.py with an exception made for lines 17 and 18:

def fizzbuzz(i):
    if i % 15 == 0:
        return "FizzBuzz"
    elif i % 3 == 0:
        return "Fizz"
    elif i % 5 == 0:
        return "Buzz"
    else:
        return i


def main():
    for i in range(1, 101):
        print(fizzbuzz(i))


if __name__ == '__main__':  # pragma: no cover
    main()

Note how the comment was only added on line 17. This is because when an exception is added on a line that begins a control structure, it applies to the whole code block.
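The same rule means that a whole function could be excluded with a single comment on its def line. Purely as an illustration (this is not applied in this article), excluding all of main() would look like this:

def main():  # pragma: no cover
    for i in range(1, 101):
        print(fizzbuzz(i))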

Let's run the tests one more time:

(venv) $ pytest --cov=fizzbuzz --cov-report=term-missing --cov-branch
========================== test session starts ===========================
platform darwin -- Python 3.8.6, pytest-6.2.2, py-1.10.0, pluggy-0.13.1
rootdir: /Users/miguel/testing
plugins: cov-2.11.1
collected 4 items

test_fizzbuzz.py ....                                              [100%]

---------- coverage: platform darwin, python 3.8.6-final-0 -----------
Name          Stmts   Miss Branch BrPart  Cover   Missing
---------------------------------------------------------
fizzbuzz.py      11      2      8      0    79%   13-14
---------------------------------------------------------
TOTAL            11      2      8      0    79%


=========================== 4 passed in 0.07s ============================

This final report looks much cleaner. Should lines 13 and 14 also be marked as exempt from coverage? That is really up to you to decide. I'm always willing to exclude lines that I'm 100% sure I'll never need to test, but I'm not really sure the main() function in lines 13 and 14 falls into that category.

Writing a unit test for this function is going to be tricky because of the print() statements, and it is definitely out of scope for this introductory article. It is not impossible to do, though. My preference is to leave those lines alone, as a reminder that at some point I could figure out a good testing strategy for them. The alternative point of view would be to say that this is a piece of code that is stable and unlikely to change, so the return on investment for writing unit tests for it is very low, and in that case it would also be okay to exempt it from code coverage. If you add an exception for lines 13 and 14 then the coverage report will show 100% code coverage.
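If you are curious, here is a very compact sketch of one possible approach, using unittest.mock to temporarily replace the built-in print() function (mocking is a technique I'll cover later in this series), just to show that it can be done:

import unittest
from unittest import mock

from fizzbuzz import main


class TestMain(unittest.TestCase):
    def test_main(self):
        # replace the built-in print() while main() runs
        with mock.patch('builtins.print') as mock_print:
            main()
        # main() should print once for each number from 1 to 100
        assert mock_print.call_count == 100
        # spot-check a few of the printed values
        assert mock_print.call_args_list[0] == mock.call(1)
        assert mock_print.call_args_list[2] == mock.call('Fizz')
        assert mock_print.call_args_list[4] == mock.call('Buzz')
        assert mock_print.call_args_list[14] == mock.call('FizzBuzz')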

Conclusion

I hope this was a good introduction to unit testing in Python. In the following articles in the series I'll be looking at testing more complex code. My intention is to be very thorough and cover many different types of applications and testing techniques. If you have a particular problem related to unit testing feel free to mention it to me in the comments so that I keep it in mind for future articles!

Update: Part 2 of this series is now available!

20 comments
  • #1 havryliuk said

Hi, I am a Java developer and have a lot of experience with unit testing in Java. I have recently started to learn Python and haven't written a single unit test in it. You covered the theory and the practical part very well. I am eager to write my first unit test in Python! However, I have one comment: I think you don't need to cover a number of cases inside one equivalence class (that is, inputs that result in the same output). It's redundant. After all, why did you pick numbers 3, 6, 9 and 18 but not 66 or 33?

  • #2 Miguel Grinberg said

    @havryliuk: Sure. The important thing is that you are confident in your tests. As I explain in the post, since testing for a number is so quick, we can test a few of them at near zero cost. Also, I don't believe testing a few numbers is redundant. The goal of the test is to make sure that it passes for all multiples of 3 all the way to 100, but it is impractical to test them all (although you could). I selected just a few randomly.

Now, if you feel that only testing for one number is enough, then that is fine. The important thing is that you have confidence in your tests. If just testing for 3 is enough to make you feel that the test is doing a good job and that testing other numbers is a waste, then by all means you should do what you feel is better.

  • #3 Duncan Thomas said

    One way of evaluating your test coverage in a much stronger way than coverage is mutation testing. This algorithmically makes changes to your code (e.g. change a constant value, change a + to a -, return from a function early) and then runs the unit tests to see if the change is detected. This ensures that not only is a line of code executed by a test but its semantic correctness is tested.

    Mutmut (https://github.com/boxed/mutmut) is an example of a mutation testing tool for python

  • #4 Miguel Grinberg said

    @Duncan: I wouldn't say that mutation testing is a replacement for code coverage, which is implied when you qualify it as "much stronger". I see mutation testing not as something that you incorporate into your test run (as you would with code coverage), but something that you manually run periodically when you want to evaluate your tests. And I would not stop using code coverage in a project that is set up to also do mutation testing, since it is fairly cheap in terms of time added to tests, compared to mutation testing, at least.

  • #5 Pierre said

    Thanks for this new series! I can't wait to see the other articles.

    I've been trying to write more unit tests in my projects, but I'm often facing a problem when my code interacts with something. It can be accessing/updating a database, calling an external API and doing something based on the result, etc.

    I know about mocking, but then I feel I end up mocking 99% of what I want to test, to the point where it doesn't really make sense... sure, the tests pass, but what am I testing in the end? My mocked code?

    Anyway, I'm curious to see if your next articles will touch this topic.

    Cheers,

  • #6 Miguel Grinberg said

    @Pierre: yes, absolutely. I will show how to mock things, when it does and when it does not make sense to mock, and also I'll try to give a reasonable approach to mocking to prevent it from getting totally out of control.

  • #7 Atuhaire said

Thanks Miguel for this article, I have really learnt a lot from it and am looking forward to the new test articles.
    Jah Bless!

  • #8 Harry said

    My pylint tells me 'elif' is unnecessary after 'return'. This should be done before testing.

    And when I run 'pytest --cov=fizzbuzz' then I get an error because of a missing '.ini'. Any idea what I did wrong?

  • #9 Miguel Grinberg said

    @Harry: elif can be replaced with just if after a return. Neither one is a better option though, you should use the syntax that is more clear to you. Pylint is good to have as a guide, but it is definitely not the bible on how Python code should be written.

    When discussing an error, it is best to show the actual error instead of or in addition to just providing a verbal description of it. For example, it would be useful to know what ini file the error says you are missing, so that I have some context to go on.

  • #10 David a Marshall said

    I would be interested to see some demonstration of integration testing with mocks as @Pierre mentioned. I've seen that Javascript test frameworks work in conjunction with dependency injection so that you can kind of set up a global replacement of a class with a mock class so that running the test works with the mock wherever it is injected within the codebase.

  • #11 Miguel Grinberg said

    @David: I'm not planning to cover integration tests, this series is all about unit testing. I do plan to show how to work with mocks. The next article (due in a few days hopefully) has some of that.

  • #12 maths said

    Looking forward to the next article! I presume you don't plan to cover things like testing that the correct thing is printed?

  • #13 Miguel Grinberg said

@maths: as I said in the article, if you want to check what's being printed, you need to use an end-to-end test that runs the entire application and captures the output. That is not the goal of this series. I am going to discuss mocking, which is a technique that will allow you to swap out a function with an alternate that is easier to control during testing. This can be used to check what a piece of code is printing.

  • #14 Michal said

    Thanks for the article. I'll be looking forward to the whole series. Will you use pytest-mock? By the way, importing unittest and subclassing TestCase is unnecessary here. With pytest you can organize tests in plain classes.

  • #15 Miguel Grinberg said

    @Michal: The TestCase class is not a "plain" class. It's not unnecessary if you intend to use its set up and tear down facilities, which I will in the future. And please don't tell me that pytest makes setup/teardown unnecessary with fixtures. You have to understand that this is not a reference guide to testing with pytest. This is an opinionated series of articles. Take the parts that you like, ignore the parts that you don't.

  • #16 Michal said

    @Miguel I agree, and I did not mean to flame against the stdlib unittest. By "unnecessary here" I meant "unnecessary in the simple example presented in this article".

  • #17 Matthew said

    Oh wow, this is exactly what I've been looking for! I absolutely can't wait for future posts in this series! Thank you

  • #18 Brian Taylor said

    Not sure why, but I can't get pytest-cov to run properly. Here's a listing of what packages are installed in my venv and then what happens when I try to run pytest-cov:

    (venv) briantaylor1@Cs-iMac Grinberg-testing % pip3 list
    Package    Version
    ---------- -------
    attrs      20.3.0
    coverage   5.5
    iniconfig  1.1.1
    packaging  20.9
    pip        21.0.1
    pluggy     0.13.1
    py         1.10.0
    pyparsing  2.4.7
    pytest     6.2.3
    pytest-cov 2.11.1
    setuptools 41.2.0
    toml       0.10.2
    (venv) briantaylor1@Cs-iMac Grinberg-testing % pytest --cov=fizzbuzz
    ERROR: usage: pytest [options] [file_or_dir] [file_or_dir] [...]
    pytest: error: unrecognized arguments: --cov=fizzbuzz
      inifile: None
      rootdir: /Users/briantaylor1/tutorials/Grinberg-testing
    
    (venv) briantaylor1@Cs-iMac Grinberg-testing % 
    
  • #19 Miguel Grinberg said

    @Brian: I don't really know what's wrong, but when weird things like this happen the first thing to try is to recreate the virtual environment from scratch.

  • #20 Brian Taylor said

    Thanks for your quick reply Miguel. The computer crashed a few hours after my question (not sure why) and upon restarting pytest worked fine with the --cov option. Thanks for a very informative tutorial!
