Test Coverage in Python with Pytest (86/100 Days of Python)

Martin Mirakyan
3 min read · Mar 28, 2023


Day 86 of the “100 Days of Python” blog post series covering test coverage

Test coverage is a measure of how much of the code in a program is executed during testing. It is an important metric for determining how thoroughly a program has been tested, and can help identify areas of code that may need additional testing.

What is Test Coverage?

Coverage can be measured in several ways, the most common metrics being statement coverage, branch coverage, and path coverage. Statement coverage measures how many statements in the code were executed during testing, branch coverage measures how many branches (for example, the true and false sides of an if) were executed, and path coverage, the most comprehensive metric, measures how many unique paths through the code were executed.
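As a small illustration (the function and test below are hypothetical), a single test can execute every statement in a function and still leave a branch untested:

def apply_discount(price: float, is_member: bool) -> float:
    total = price
    if is_member:
        total *= 0.9  # executed by the test below
    return total


def test_member_discount():
    # Runs every statement in apply_discount, so statement coverage is 100%,
    # but the path where is_member is False is never taken,
    # so branch coverage is incomplete
    assert apply_discount(100.0, is_member=True) == 90.0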

Why Is Test Coverage Important?

Test coverage provides a measure of how thoroughly a program has been tested. A program with high test coverage has had most or all of its code paths executed during testing, which reduces the likelihood of bugs and improves overall program quality. In addition, test coverage can help identify areas of code that may need additional testing, or that are difficult to test and may need refactoring.

How to Measure Test Coverage in Python

Python has several tools for measuring test coverage, most notably the coverage.py package and plugins built on top of it such as pytest-cov. In this tutorial, we'll focus on pytest-cov, which is a plugin for the pytest testing framework.

To use pytest-cov, you first need to install it using pip:

pip install pytest-cov

Next, you can run your tests with the --cov option followed by the name of the package or module you want to generate coverage for:

pytest --cov=my_package

This will generate a coverage report that shows which lines of code were executed during testing. The report will include a summary of the coverage metrics, as well as detailed information about which lines of code were covered and which were not.
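To make the report below concrete, suppose (purely as an illustration) that my_package contains a single module with one function, and that the test suite only exercises the happy path:

# my_package/module.py (hypothetical contents)
def divide(a, b):
    if b == 0:
        raise ValueError("b must not be zero")
    result = a / b
    return result

# tests/test_module.py (hypothetical contents)
from my_package.module import divide

def test_divide():
    assert divide(10, 2) == 5

With this test, every statement runs except the raise, which is exactly the kind of gap the coverage report reveals.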

An example report might look something like this:

Name                      Stmts   Miss  Cover
---------------------------------------------
my_package/__init__.py        0      0   100%
my_package/module.py          5      1    80%
---------------------------------------------
TOTAL                         5      1    80%

This report shows coverage metrics for a package called my_package. The report includes three columns:

  1. Stmts: The total number of executable statements in each file.
  2. Miss: The number of statements that were not executed during testing.
  3. Cover: The percentage of statements that were executed during testing.

In this example, the my_package/__init__.py module has 100% coverage (it contains no statements, so there is nothing to miss), while my_package/module.py has only 80% coverage: one of its five statements was not executed during testing.
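To see exactly which lines were missed, you can ask pytest-cov for a more detailed terminal report. The output below is illustrative and assumes the hypothetical module.py from above, where the raise statement (line 3 of that sketch) is never executed:

pytest --cov=my_package --cov-report=term-missing

Name                      Stmts   Miss  Cover   Missing
--------------------------------------------------------
my_package/__init__.py        0      0   100%
my_package/module.py          5      1    80%   3
--------------------------------------------------------
TOTAL                         5      1    80%

You can also generate an annotated HTML report with --cov-report=html, which highlights covered and uncovered lines directly in the source files.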

Interpreting Test Coverage Results

Test coverage results can be interpreted in several ways, depending on the specific coverage metric being used. In general, higher coverage percentages are better, but it’s important to keep in mind that coverage metrics are only one aspect of testing and should not be used as the sole measure of program quality.

In addition, it’s important to consider which specific areas of code are not covered by tests, and whether those areas are critical to the program’s functionality. For example, a program can have 100% statement coverage but only 50% branch coverage (only one side of an if/else is ever exercised) and still be missing tests for important code paths.
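By default, pytest-cov measures statement coverage only; branch coverage can be enabled with the --cov-branch flag, after which the report gains columns for branches and partially covered branches (the numbers below are again illustrative, based on the hypothetical module above):

pytest --cov=my_package --cov-branch

Name                      Stmts   Miss Branch BrPart  Cover
-------------------------------------------------------------
my_package/__init__.py        0      0      0      0   100%
my_package/module.py          5      1      2      1    71%
-------------------------------------------------------------
TOTAL                         5      1      2      1    71%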

What’s next?
