Getting Started with PyTest: Effortlessly Write and Run Tests in Python

Exploring the Test-Driven Development Paradigm in Python



 

Image by Author

 

Have you ever encountered software that didn't work as expected? Maybe you clicked a button, and nothing happened, or a feature you were excited about turned out to be buggy or incomplete. These issues can be frustrating for users and can even lead to financial losses for businesses.

To address these challenges, developers follow a programming approach called test-driven development (TDD). TDD is all about minimizing software failures and ensuring that the software meets the intended requirements. In TDD, developers write test cases before writing the actual code, and these test cases describe the expected behavior of the code. By writing the tests upfront, developers get a clear understanding of what they want to achieve. Test pipelines are an essential part of the software development process for any organization: whenever we make changes to our codebase, we need to ensure that they don't introduce new bugs, and this is where test pipelines come in to help us.

Now, let's talk about PyTest. PyTest is a Python package that simplifies the process of writing and running test cases. This full-featured testing tool has matured into the de facto standard for many organizations, as it scales easily to complex codebases and functionalities.

 

Benefits of the PyTest Module

 

  • Improved Logging and Test Reports
    Upon the execution of tests, we receive a complete log of all executed tests and the status of each test case. In the event of a failure, a complete stack trace is provided, along with the exact values that caused an assert statement to fail. This is extremely beneficial for debugging and makes it easier to trace and fix the exact issue in our code.

  • Automatic Discovery of Test Cases
    We do not have to manually configure any test case to be executed. Test files (by default, files named test_*.py or *_test.py) are scanned recursively, and all functions prefixed with test are collected and executed automatically.

  • Fixtures and Parametrization
    The resources a test depends on are not always available. For example, fetching a resource over the network during a test is inefficient, and internet access may not even be available when the test runs. In such scenarios, if we want to test code that makes internet requests, we need stubs that produce a dummy response for that specific part. Moreover, it may be necessary to execute a function multiple times with different arguments to cover all possible edge cases. PyTest makes both easy to implement with fixtures and parametrization decorators; a minimal fixture sketch follows this list, and parametrization is covered later in this article.
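
To make the fixture idea concrete, here is a minimal sketch that supplies a canned response object so a test never touches the network; the fake_response fixture and its payload are illustrative assumptions, not part of this article's project:

import pytest

@pytest.fixture
def fake_response():
    # Stand-in for data we would normally fetch over the network
    return {"status": 200, "payload": [3, 1, 2]}

def test_uses_fake_response(fake_response):
    # PyTest injects the fixture's return value through the matching parameter name
    assert fake_response["status"] == 200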

 

Installation

 

PyTest is available as a PyPI package that can be installed with the Python package manager. To set up PyTest, it is best to start from a fresh environment. To create and activate a new Python virtual environment, use the commands below:

python3 -m venv venv
source venv/bin/activate

 

To set up the PyTest module, you can install the official PyPI package using pip:

pip install pytest
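
To confirm that the installation worked, you can ask PyTest to print its version (the exact version number you see will depend on your environment):

pytest --version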

 

Running your First Test Case

 

Let’s dive into writing and running your very first test case in Python using PyTest. We'll start from scratch and build a simple test to get a feel for how it works.

 

Structuring a Python Project

 

Before we start writing tests, it's essential to organize our project properly. This helps keep things tidy and manageable, especially as our projects grow. We'll follow a common practice of separating our application code from our test code.

Here's how we'll structure our project:

pytest_demo/
│
├── src/
│   ├── __init__.py
│   ├── sorting.py
│
├── tests/
│   ├── __init__.py
│   ├── test_sorting.py
│
├── venv/

 

Our root directory pytest_demo contains separate src and tests directories. Our application code resides in src, while our test code lives in tests.

 

Writing a Simple Program and Its Associated Test Case

 

Now, let’s create a basic sorting program using the bubble sort algorithm. We'll place this in src/sorting.py:

# src/sorting.py

def bubble_sort(arr):
    for n in range(len(arr) - 1, 0, -1):
        for i in range(n):
            if arr[i] > arr[i + 1]:
                arr[i], arr[i + 1] = arr[i + 1], arr[i]
    return arr

 

We've implemented a basic Bubble Sort algorithm, a simple yet effective way to sort elements in a list by repeatedly swapping adjacent elements if they are in the wrong order.

Now, let's ensure our implementation works by writing comprehensive test cases.

# tests/test_sorting.py

import pytest
from src.sorting import bubble_sort


def test_always_passes():
    assert True

def test_always_fails():
    assert False

def test_sorting():
    assert bubble_sort([2, 3, 1, 6, 4, 5, 9, 8, 7]) == [1, 2, 3, 4, 5, 6, 7, 8, 9]

 

In our test file, we've written three different test cases. Note how each function name starts with the test prefix, which is a rule PyTest follows to recognize test functions.

In the test file, we import the bubble sort implementation from the source code so it can be used in our test cases. Each test must contain an assert statement that checks whether the code works as expected. We give the sorting function an unordered list and compare its output with the expected result. If they match, the test passes; otherwise, it fails.

We've also included two simple tests: one that always passes and one that always fails. These placeholder functions are useful for checking that our testing setup works correctly.

 

Executing Tests and Understanding the Output

 

We can now run our tests from the command line. Navigate to your project root directory and run:

pytest tests/

 

This will recursively search the tests directory for test files. All functions prefixed with test (and classes prefixed with Test) are automatically recognized as test cases. In our case, PyTest will find the test_sorting.py file and run all three test functions.
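
You can also narrow the run to a single file, or to a single test function by its node ID (the file path followed by :: and the test name):

pytest tests/test_sorting.py
pytest tests/test_sorting.py::test_sorting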

After running the tests, you’ll see an output similar to this:

=============================== test session starts ===============================
platform darwin -- Python 3.11.4, pytest-8.1.1, pluggy-1.5.0
rootdir: /pytest_demo/
collected 3 items

tests/test_sorting.py .F.                                                     [100%]

==================================== FAILURES =====================================
________________________________ test_always_fails ________________________________

    def test_always_fails():
>       assert False
E       assert False

tests/test_sorting.py:22: AssertionError
============================= short test summary info =============================
FAILED tests/test_sorting.py::test_always_fails - assert False
=========================== 1 failed, 2 passed in 0.02s ===========================

 

When running the PyTest command-line utility, it first displays the platform metadata and the total number of test cases collected; in our example, three test cases were collected from the test_sorting.py file. Test cases are executed sequentially: a dot (".") means the test case passed, whereas an "F" marks a failed test case.

If a test case fails, PyTest provides a traceback, which shows the specific line of code and the arguments that caused the error. Once all the test cases have been executed, PyTest presents a final report. This report includes the total execution time and the number of test cases that passed and failed. This summary gives you a clear overview of the test results.
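
If you prefer to see each test listed by name with an explicit PASSED or FAILED label instead of dots, you can run PyTest in verbose mode:

pytest -v tests/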

 

Function Parametrization for Multiple Test Cases

 

In our example, we test only one scenario for the sorting algorithm. Is that sufficient? Obviously not! We need to test the function with multiple examples and edge cases to ensure there are no bugs in our code.

PyTest makes this process easy for us. We use the parametrization decorator provided by PyTest to add multiple test cases for a single function. The updated code looks as follows:

@pytest.mark.parametrize(
    "input_list, expected_output",
    [
        ([], []),
        ([1], [1]),
        ([53, 351, 23, 12], [12, 23, 53, 351]),
        ([-4, -6, 1, 0, -2], [-6, -4, -2, 0, 1]),
    ],
)
def test_sorting(input_list, expected_output):
    assert bubble_sort(input_list) == expected_output

 

In the updated code, we have modified the test_sorting function using the pytest.mark.parametrize decorator. This decorator allows us to pass multiple sets of input values to the test function. The decorator expects two parameters: a string representing the comma-separated names of the function parameters, and a list of tuples where each tuple contains the input values for a specific test case.

Note that the function parameters have the same names as the string passed to the decorator. This is a strict requirement to ensure the correct mapping of input values. If the names don't match, an error will be raised during test case collection.
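
As an optional refinement not used in this article's run, the parametrize decorator also accepts an ids argument that attaches a readable label to each case in the test report; the labels below are purely illustrative:

@pytest.mark.parametrize(
    "input_list, expected_output",
    [
        ([], []),
        ([1], [1]),
        ([53, 351, 23, 12], [12, 23, 53, 351]),
        ([-4, -6, 1, 0, -2], [-6, -4, -2, 0, 1]),
    ],
    ids=["empty", "single-element", "unsorted", "negatives-and-zero"],
)
def test_sorting(input_list, expected_output):
    assert bubble_sort(input_list) == expected_output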

With this implementation, the test_sorting function will be executed four times, once for each set of input values specified in the decorator. Now, let's take a look at the output of the test cases:

=============================== test session starts ===============================
platform darwin -- Python 3.11.4, pytest-8.1.1, pluggy-1.5.0
rootdir: /pytest_demo
collected 6 items

tests/test_sorting.py .F....                                                  [100%]

==================================== FAILURES =====================================
________________________________ test_always_fails ________________________________

    def test_always_fails():
>       assert False
E       assert False

tests/test_sorting.py:11: AssertionError
============================= short test summary info =============================
FAILED tests/test_sorting.py::test_always_fails - assert False
=========================== 1 failed, 5 passed in 0.03s ===========================

 

In this run, a total of six test cases were executed: four generated from the parametrized test_sorting function and the two dummy tests. As expected, only the always-failing dummy test case failed.

We can now confidently say that our sorting implementation is correct :)

 

Fun Practice Task

 

In this article, we have introduced the PyTest module and demonstrated its usage by testing a bubble sort implementation with multiple test cases. We covered the basic functionality of writing and executing test cases using the command-line utility. This should be enough to get you started with implementing testing for your own code bases. To deepen your understanding of PyTest, here's a fun practice task for you:

Implement a function called validate_password that takes a password as input and checks if it meets the following criteria:

  • Contains at least 8 characters
  • Contains at least one uppercase letter
  • Contains at least one lowercase letter
  • Contains at least one digit
  • Contains at least one special character (e.g., !, @, #, $, %)

Write PyTest test cases to validate the correctness of your implementation, covering various edge cases. Good Luck!
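
If you would like a starting point, here is a minimal test skeleton, assuming a validate_password function importable from a hypothetical src/validation.py; the cases shown are only a hint, not a reference solution:

import pytest
from src.validation import validate_password  # hypothetical module path for your implementation

@pytest.mark.parametrize(
    "password, expected",
    [
        ("Sh0rt!", False),          # fewer than 8 characters
        ("alllowercase1!", False),  # missing an uppercase letter
        ("Password1!", True),       # meets every rule
    ],
)
def test_validate_password(password, expected):
    assert validate_password(password) == expected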
 
 

Kanwal Mehreen Kanwal is a machine learning engineer and a technical writer with a profound passion for data science and the intersection of AI with medicine. She co-authored the ebook "Maximizing Productivity with ChatGPT". As a Google Generation Scholar 2022 for APAC, she champions diversity and academic excellence. She's also recognized as a Teradata Diversity in Tech Scholar, Mitacs Globalink Research Scholar, and Harvard WeCode Scholar. Kanwal is an ardent advocate for change, having founded FEMCodes to empower women in STEM fields.