PyTest

1

PyTest for Flask

To write modern unit test cases for your Flask Inference API, you can use the following libraries:

  1. pytest: A powerful testing framework for Python.

  2. pytest-flask: A Flask-specific extension for pytest to simplify testing Flask applications.

  3. requests-mock: For mocking external API calls.

  4. factory-boy or faker: For generating test data.

  5. coverage: For measuring test coverage.

Here’s a step-by-step guide to write unit test cases:

1. Set Up a Test Environment

Ensure your Flask app is modular and can be initialized easily for testing. For example:


# app.py
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route('/predict', methods=['POST'])
def predict():
    data = request.json
    if not data or 'input' not in data:
        return jsonify({'error': 'Invalid input'}), 400
    # Mock model inference logic
    prediction = len(data['input'])  # Replace with actual model prediction
    return jsonify({'prediction': prediction})

if __name__ == "__main__":
    app.run(debug=True)

2. Install Required Libraries

Install libraries:

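A typical install command for the stack above:

pip install pytest pytest-flask requests-mock factory-boy faker coverage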

3. Create Test Cases

Create a tests/ directory and add test files.

Example: tests/test_app.py

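A minimal sketch of such a test file, written against the app.py shown above (the fixture and test names are illustrative; moving the client fixture into tests/conftest.py lets other test modules reuse it):

# tests/test_app.py
import pytest
from app import app as flask_app

@pytest.fixture
def client():
    # Flask's built-in test client; pytest-flask provides a similar fixture
    flask_app.config["TESTING"] = True
    with flask_app.test_client() as client:
        yield client

def test_predict_success(client):
    response = client.post("/predict", json={"input": "hello"})
    assert response.status_code == 200
    assert response.get_json() == {"prediction": 5}

def test_predict_invalid_input(client):
    response = client.post("/predict", json={})
    assert response.status_code == 400
    assert "error" in response.get_json()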

4. Run the Tests

Run tests using pytest:

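For example, assuming the tests live under tests/:

pytest tests/ -v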

5. Use Mocks for External Calls

If your API calls an external service, mock it using requests-mock.

Example: Mocking an External API

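A hedged sketch of an endpoint that calls an external service (the URL and response shape are assumptions for illustration):

# app.py (additional endpoint)
import requests

@app.route('/external', methods=['GET'])
def external():
    resp = requests.get('https://api.example.com/data')  # hypothetical external service
    return jsonify(resp.json()), resp.status_code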

Test with mock:

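Using the requests_mock fixture from requests-mock, the outbound call is stubbed so no real network traffic occurs (assumes the client fixture above lives in tests/conftest.py):

# tests/test_external.py
def test_external_api(client, requests_mock):
    requests_mock.get('https://api.example.com/data', json={'value': 42}, status_code=200)
    response = client.get('/external')
    assert response.status_code == 200
    assert response.get_json() == {'value': 42}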

6. Measure Coverage

To measure test coverage, use:

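Using the coverage package directly (pytest-cov's --cov flag is an alternative):

coverage run -m pytest
coverage report -m
coverage html  # optional: writes an HTML report to htmlcov/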

Best Practices

  1. Use fixtures for setting up the test environment.

  2. Use mocks for external dependencies.

  3. Keep test cases independent.

  4. Aim for high coverage (80%+).

  5. Validate both positive and negative scenarios.


2

PyTest for FastAPI

To write unit test cases for your FastAPI Inference API using modern testing libraries, you can follow these steps:


1. Recommended Libraries

  • pytest: A robust testing framework.

  • httpx: For making async HTTP requests.

  • pytest-asyncio: To test FastAPI’s async endpoints.

  • fastapi.testclient: For synchronous testing of FastAPI routes.

  • coverage: To measure code coverage.

Install the required libraries:

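For example:

pip install fastapi pytest httpx pytest-asyncio coverage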


2. Example FastAPI Inference API

Here’s a simple FastAPI inference API for demonstration:

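A minimal sketch along the same lines as the Flask example (the model logic is mocked):

# app.py
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class PredictRequest(BaseModel):
    input: str

@app.post("/predict")
def predict(request: PredictRequest):
    if not request.input:
        raise HTTPException(status_code=400, detail="Invalid input")
    # Mock model inference logic
    return {"prediction": len(request.input)}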


3. Writing Unit Tests

Create a tests/ directory and add your test files.

Example: tests/test_app.py

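A sketch using fastapi.testclient against the app above:

# tests/test_app.py
from fastapi.testclient import TestClient
from app import app

client = TestClient(app)

def test_predict_success():
    response = client.post("/predict", json={"input": "hello"})
    assert response.status_code == 200
    assert response.json() == {"prediction": 5}

def test_predict_invalid_input():
    response = client.post("/predict", json={"input": ""})
    assert response.status_code == 400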

Using Async Testing with httpx

For async endpoints, use httpx with pytest-asyncio.

Example: tests/test_app_async.py

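A hedged async variant that drives the app through httpx's ASGI transport (the transport API assumes a recent httpx release):

# tests/test_app_async.py
import pytest
import httpx
from app import app

@pytest.mark.asyncio
async def test_predict_async():
    transport = httpx.ASGITransport(app=app)
    async with httpx.AsyncClient(transport=transport, base_url="http://test") as client:
        response = await client.post("/predict", json={"input": "hello"})
    assert response.status_code == 200
    assert response.json() == {"prediction": 5}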


4. Running Tests

Run the tests using pytest:



5. Mock External Dependencies

If your FastAPI app calls external APIs, you can mock them with libraries like unittest.mock.

Example: Mocking External API

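As a sketch, suppose the app exposes an endpoint backed by a small helper that fetches data from an external service (names and URL are hypothetical):

# app.py (additional endpoint)
import requests

def fetch_external_data():
    resp = requests.get("https://api.example.com/data")  # hypothetical external service
    resp.raise_for_status()
    return resp.json()

@app.get("/external")
def external():
    return fetch_external_data()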

Test with mock:

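The helper can then be patched with unittest.mock so the test never touches the network:

# tests/test_external.py
from unittest.mock import patch
from fastapi.testclient import TestClient
from app import app

client = TestClient(app)

def test_external_mocked():
    with patch("app.fetch_external_data", return_value={"value": 42}):
        response = client.get("/external")
    assert response.status_code == 200
    assert response.json() == {"value": 42}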


6. Coverage

To measure test coverage, run:



Best Practices

  1. Use fixtures: Create reusable setup code for tests.

  2. Test both positive and negative cases: Validate edge cases, errors, and success paths.

  3. Mock dependencies: For external services, databases, or APIs, mock them using patch or unittest.mock.

  4. Keep tests fast and isolated: Ensure tests don’t depend on each other.

  5. Use async testing: Use pytest-asyncio for testing async routes.


3

More Testing Libs

Beyond the conventional pytest stack, you can adopt these additional tools for writing unit tests in FastAPI:


Modern Testing Libraries for FastAPI

  1. pytest-httpx: A plugin to simplify mocking HTTPX requests.

  2. pytest-mock: A thin pytest wrapper around unittest.mock that provides the mocker fixture.

  3. anyio: The async library that FastAPI and Starlette are built on; its pytest plugin lets you run async tests.

  4. pytest-randomly: Ensures test order independence by randomizing test execution order.

  5. snapshottest: For snapshot testing (e.g., ensuring JSON responses don’t change unexpectedly).

  6. schemathesis: Automates testing based on OpenAPI specifications.

  7. pytest-docker: Helps test FastAPI services interacting with Docker containers.

Install them:

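For example:

pip install pytest-httpx pytest-mock anyio pytest-randomly snapshottest schemathesis pytest-docker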


Advanced Test Case with Modern Libraries

1. Mock HTTPX Requests with pytest-httpx

If your FastAPI app makes external API calls using HTTPX, mock them like this:

App Code:

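A sketch of an endpoint that calls out via HTTPX (the URL is a placeholder):

# app.py (additional endpoint)
import httpx

@app.get("/httpx-data")
async def httpx_data():
    async with httpx.AsyncClient() as client:
        resp = await client.get("https://api.example.com/data")  # hypothetical external service
    return resp.json()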

Test Code:

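pytest-httpx exposes an httpx_mock fixture that intercepts calls made through httpx's default transports (a sketch; the TestClient's own ASGI transport is unaffected):

# tests/test_httpx_endpoint.py
from fastapi.testclient import TestClient
from app import app

client = TestClient(app)

def test_httpx_data(httpx_mock):
    httpx_mock.add_response(url="https://api.example.com/data", json={"value": 42})
    response = client.get("/httpx-data")
    assert response.status_code == 200
    assert response.json() == {"value": 42}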


2. Snapshot Testing with snapshottest

Use snapshot testing to ensure API responses don’t change unintentionally.

Test Code:

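snapshottest ships a pytest plugin that provides a snapshot fixture; a sketch against the /predict endpoint above:

# tests/test_snapshot.py
from fastapi.testclient import TestClient
from app import app

client = TestClient(app)

def test_predict_snapshot(snapshot):
    response = client.post("/predict", json={"input": "hello"})
    # First run stores the JSON body; later runs compare against the stored snapshot
    snapshot.assert_match(response.json())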

Run the test. On first run, it will save the snapshot. Subsequent runs will compare the response with the saved snapshot.


3. Testing OpenAPI with schemathesis

Automate testing based on OpenAPI specifications.

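Against a running instance, the CLI generates and executes test cases from the published schema (host and port are assumed):

schemathesis run http://localhost:8000/openapi.json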

You can also integrate it with pytest for more control:

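A sketch of the pytest integration; the API names below follow schemathesis 3.x and may differ in other releases:

# tests/test_schema.py
import schemathesis
from app import app

schema = schemathesis.from_asgi("/openapi.json", app)

@schema.parametrize()
def test_api_conforms_to_schema(case):
    response = case.call_asgi()
    case.validate_response(response)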


4. Async Context Management with anyio

Test async endpoints with seamless async context handling.

Test Code:

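A sketch using the anyio pytest plugin; the anyio_backend fixture pins the backend to asyncio:

# tests/test_app_anyio.py
import pytest
import httpx
from app import app

@pytest.fixture
def anyio_backend():
    return "asyncio"

@pytest.mark.anyio
async def test_predict_anyio():
    transport = httpx.ASGITransport(app=app)
    async with httpx.AsyncClient(transport=transport, base_url="http://test") as client:
        response = await client.post("/predict", json={"input": "abc"})
    assert response.status_code == 200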


5. Randomized Testing with pytest-randomly

Ensure your tests don’t rely on execution order:

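Once installed, pytest-randomly shuffles the test order on every run; a seed can be pinned to reproduce a particular ordering:

pip install pytest-randomly
pytest --randomly-seed=42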

This is particularly useful to catch test order dependencies.


6. Test Containers with pytest-docker

If your FastAPI app interacts with databases or services running in Docker containers, test them in isolated containers.



Running Tests



Best Practices with Modern Testing

  1. Use snapshottest: To avoid accidental response changes.

  2. Adopt pytest-httpx: For reliable external API mocking.

  3. Leverage schemathesis: To ensure your FastAPI conforms to OpenAPI specs.

  4. Parallelize tests: Use pytest-xdist for faster execution.

  5. Randomize order: Avoid hidden dependencies with pytest-randomly.


4

Pytest Asyncio

For advanced testing with pytest-asyncio in a FastAPI application, you can incorporate features such as dependency injection, mocking databases, testing middleware, and testing WebSocket endpoints. Below are some advanced examples and patterns you can follow.


1. Testing Dependency Overrides

If your FastAPI app uses dependencies (e.g., database sessions, authentication), you can override these dependencies in tests.

App Code:

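A sketch of an app with an injectable dependency (a fake lookup stands in for a real database session):

# app.py
from fastapi import FastAPI, Depends

app = FastAPI()

def get_db():
    # A real implementation would yield a database session
    return {"source": "real-db"}

@app.get("/items")
def read_items(db=Depends(get_db)):
    return {"source": db["source"], "items": ["a", "b"]}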

Test Code:

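In tests, the dependency is swapped out via app.dependency_overrides:

# tests/test_overrides.py
from fastapi.testclient import TestClient
from app import app, get_db

def override_get_db():
    return {"source": "fake-db"}

app.dependency_overrides[get_db] = override_get_db
client = TestClient(app)

def test_read_items_uses_override():
    response = client.get("/items")
    assert response.status_code == 200
    assert response.json()["source"] == "fake-db"

def teardown_module():
    # Restore the real dependency for any later test modules
    app.dependency_overrides.clear()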


2. Mocking Databases (e.g., SQLAlchemy)

For testing APIs interacting with a database, you can mock the database or use an in-memory SQLite database.

App Code:


Test Code:

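A compact test-side sketch using an in-memory SQLite database; Base, get_db, and the /items endpoints are assumed to exist in app.py roughly as named here:

# tests/test_db.py
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy.pool import StaticPool
from fastapi.testclient import TestClient
from app import app, get_db, Base  # hypothetical names from the application

engine = create_engine(
    "sqlite://",
    connect_args={"check_same_thread": False},
    poolclass=StaticPool,  # keep one shared in-memory connection across threads
)
TestingSessionLocal = sessionmaker(bind=engine, autocommit=False, autoflush=False)
Base.metadata.create_all(bind=engine)

def override_get_db():
    db = TestingSessionLocal()
    try:
        yield db
    finally:
        db.close()

app.dependency_overrides[get_db] = override_get_db
client = TestClient(app)

def test_create_and_list_items():
    response = client.post("/items", json={"name": "widget"})  # assumed endpoint
    assert response.status_code in (200, 201)
    assert client.get("/items").status_code == 200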


3. Testing Middleware

If your app uses middleware (e.g., for authentication, CORS), you can test the middleware logic directly.

App Code:

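A sketch of a middleware that adds a custom response header:

# app.py (middleware)
from fastapi import FastAPI, Request

app = FastAPI()

@app.middleware("http")
async def add_custom_header(request: Request, call_next):
    response = await call_next(request)
    response.headers["X-Process-Name"] = "inference-api"  # illustrative header
    return response

@app.get("/health")
def health():
    return {"status": "ok"}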

Test Code:

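The middleware behaviour can then be asserted through any request:

# tests/test_middleware.py
from fastapi.testclient import TestClient
from app import app

client = TestClient(app)

def test_custom_header_added():
    response = client.get("/health")
    assert response.status_code == 200
    assert response.headers["X-Process-Name"] == "inference-api"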


4. Testing WebSocket Endpoints

FastAPI supports WebSocket connections. Here’s how to test them.

App Code:

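A minimal echo-style WebSocket endpoint:

# app.py (WebSocket endpoint)
from fastapi import FastAPI, WebSocket

app = FastAPI()

@app.websocket("/ws")
async def websocket_endpoint(websocket: WebSocket):
    await websocket.accept()
    data = await websocket.receive_text()
    await websocket.send_text(f"echo: {data}")
    await websocket.close()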

Test Code:

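TestClient can drive the WebSocket synchronously:

# tests/test_websocket.py
from fastapi.testclient import TestClient
from app import app

client = TestClient(app)

def test_websocket_echo():
    with client.websocket_connect("/ws") as websocket:
        websocket.send_text("hello")
        assert websocket.receive_text() == "echo: hello"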


5. Testing Exception Handlers

FastAPI provides custom exception handlers. You can test them like this:

App Code:

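A sketch with a custom exception and its handler (the exception name is illustrative):

# app.py (custom exception handler)
from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse

app = FastAPI()

class ModelNotReadyError(Exception):
    pass

@app.exception_handler(ModelNotReadyError)
async def model_not_ready_handler(request: Request, exc: ModelNotReadyError):
    return JSONResponse(status_code=503, content={"detail": "Model is not ready"})

@app.get("/infer")
def infer():
    raise ModelNotReadyError()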

Test Code:

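And the corresponding test:

# tests/test_exception_handler.py
from fastapi.testclient import TestClient
from app import app

client = TestClient(app)

def test_custom_exception_handler():
    response = client.get("/infer")
    assert response.status_code == 503
    assert response.json() == {"detail": "Model is not ready"}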


6. Parameterized Tests with pytest.mark.parametrize

Run the same test logic with multiple inputs.

Test Code:

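Against the /predict endpoint sketched earlier, several payloads can share one test body:

# tests/test_parametrized.py
import pytest
from fastapi.testclient import TestClient
from app import app

client = TestClient(app)

@pytest.mark.parametrize(
    "payload, expected_status",
    [
        ({"input": "hello"}, 200),
        ({"input": ""}, 400),
        ({}, 422),  # missing field fails request validation
    ],
)
def test_predict_parametrized(payload, expected_status):
    response = client.post("/predict", json=payload)
    assert response.status_code == expected_status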


7. Coverage with pytest-cov

To measure test coverage:

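For example:

pip install pytest-cov
pytest --cov=app --cov-report=term-missing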


Summary

These advanced testing techniques can help ensure robust testing of your FastAPI application:

  • Use dependency_overrides to mock dependencies.

  • Mock databases or use an in-memory database.

  • Test middleware logic and WebSocket connections.

  • Test custom exception handlers and edge cases.

  • Parameterize tests for multiple input combinations.

  • Use tools like pytest-cov for test coverage metrics.



5

PyTest for vanilla python methods

Writing unit tests for vanilla Python methods involves testing functions or methods that are independent of frameworks like FastAPI or Flask. The process is straightforward and focuses on pure logic, edge cases, and exceptions.

Here’s a guide to writing unit tests for vanilla Python methods using pytest and modern tooling such as pytest-mock for mocking, the built-in @pytest.mark.parametrize decorator for multiple test cases, and faker for generating test data.


1. Install Required Libraries

Install the required testing libraries:



2. Sample Vanilla Python Methods

Suppose you have the following utility functions in a file called utils.py:

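Suppose utils.py contains helpers like these (the functions are illustrative):

# utils.py
def add(a, b):
    return a + b

def divide(a, b):
    if b == 0:
        raise ValueError("Cannot divide by zero")
    return a / b

def normalize_name(name):
    if not isinstance(name, str) or not name.strip():
        raise ValueError("Invalid name")
    return name.strip().title()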


3. Create Unit Tests

Create a tests/ directory and add the test file, e.g., tests/test_utils.py.

Test Cases for utils.py

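Matching tests, including a parametrized case and an exception check:

# tests/test_utils.py
import pytest
from utils import add, divide, normalize_name

@pytest.mark.parametrize("a, b, expected", [(1, 2, 3), (-1, 1, 0), (0, 0, 0)])
def test_add(a, b, expected):
    assert add(a, b) == expected

def test_divide():
    assert divide(10, 2) == 5

def test_divide_by_zero():
    with pytest.raises(ValueError, match="divide by zero"):
        divide(1, 0)

def test_normalize_name():
    assert normalize_name("  ada lovelace ") == "Ada Lovelace"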


4. Using Mocks

If your methods depend on external calls or classes, mock them using pytest-mock.

For example, consider a method using a network call:

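For instance, a helper that fetches a user's name over HTTP (the module name and URL are placeholders):

# services.py
import requests

def get_user_name(user_id):
    resp = requests.get(f"https://api.example.com/users/{user_id}")  # hypothetical endpoint
    resp.raise_for_status()
    return resp.json()["name"]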

Test with Mocking:

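With pytest-mock's mocker fixture, requests.get is patched so the test stays offline:

# tests/test_services.py
from services import get_user_name

def test_get_user_name(mocker):
    fake_response = mocker.Mock()
    fake_response.json.return_value = {"name": "Ada"}
    fake_response.raise_for_status.return_value = None
    mocker.patch("services.requests.get", return_value=fake_response)

    assert get_user_name(1) == "Ada"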


5. Using Faker for Test Data

Generate realistic test data using faker.

Example with Faker:

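faker can generate realistic inputs, for example for normalize_name above:

# tests/test_with_faker.py
from faker import Faker
from utils import normalize_name

fake = Faker()

def test_normalize_name_with_fake_data():
    name = fake.name()
    assert normalize_name(name) == name.strip().title()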


6. Edge Case and Boundary Testing

Test edge cases for robustness.

Example for Edge Cases:

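For example, boundary inputs for normalize_name:

# tests/test_edge_cases.py
import pytest
from utils import normalize_name

@pytest.mark.parametrize("bad_input", ["", "   ", None, 123])
def test_normalize_name_rejects_bad_input(bad_input):
    with pytest.raises(ValueError):
        normalize_name(bad_input)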


7. Run the Tests

Run tests with:



8. Code Coverage

To measure test coverage, install pytest-cov and run:



9. Best Practices for Testing Vanilla Methods

  1. Test Logic Thoroughly:

    • Test positive and negative scenarios.

    • Test edge cases (e.g., empty strings, special characters, large numbers).

  2. Use Mocking:

    • Mock external dependencies like API calls or file I/O.

  3. Parameterize Tests:

    • Use @pytest.mark.parametrize to test multiple input combinations.

  4. Readable Assertions:

    • Use pytest.raises for exceptions.

    • Keep assertions simple and meaningful.

  5. Measure Coverage:

    • Ensure 80%+ coverage for utility methods.


6

Various PyTest Options

pytest offers a wide range of options and configurations to make testing powerful and flexible. Below is a categorized list of various pytest options and their usage:


1. Running Specific Tests

  • Run a specific test file:


  • Run a specific test within a file:


  • Run tests matching a keyword:



2. Markers

Markers are used to label or group tests:

  • Run tests with a specific marker:


  • Define a custom marker in pytest.ini:

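For example, in pytest.ini:

[pytest]
markers =
    slow: marks tests as slow (deselect with -m "not slow")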

  • Apply a marker to a test:

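And in the test module:

import pytest

@pytest.mark.slow
def test_full_inference_pipeline():
    ...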


3. Test Reporting

  • Show verbose output:


  • Show detailed output for passed tests:


  • Generate an HTML report (requires pytest-html):



4. Test Coverage

  • Measure code coverage (requires pytest-cov):


  • Show uncovered lines:


  • Generate an HTML coverage report:



5. Running Tests in Parallel

  • Install pytest-xdist for parallel testing:


  • Run tests using 4 parallel processes:



6. Test Selection

  • Fail on the first error:


  • Stop after N failures:


  • Run only failed tests from the last run:


  • Run a specific subset of tests by their node IDs:



7. Logging and Debugging

  • Enable logging capture:


  • Debug with a specific test:


  • Disable logging capture:



8. Test Failures

  • Show detailed tracebacks on failure:


  • Shorten tracebacks:



9. Randomizing Test Order

  • Install pytest-randomly for random test order:


  • Randomize test execution order:



10. Running Tests with Specific Parameters

  • Pass custom CLI arguments:


  • Access CLI arguments in test code:

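A sketch: the option is registered in conftest.py and read through request.config (the option name is illustrative). Run it with pytest --env=staging.

# conftest.py
def pytest_addoption(parser):
    parser.addoption("--env", action="store", default="dev", help="Target environment")

# tests/test_env.py
def test_environment(request):
    env = request.config.getoption("--env")
    assert env in ("dev", "staging", "prod")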


11. Fixtures

  • Use a specific fixture:

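A small fixture example:

import pytest

@pytest.fixture
def sample_payload():
    return {"input": "hello"}

def test_payload_has_input(sample_payload):
    assert "input" in sample_payload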

  • Show all fixtures:



12. Skipping and XFail

  • Skip specific tests:

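For example:

import pytest

@pytest.mark.skip(reason="Feature not implemented yet")
def test_not_ready():
    ...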

  • Expected failures:

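And a test that is expected to fail:

import pytest

@pytest.mark.xfail(reason="Known rounding issue")
def test_known_bug():
    assert round(2.675, 2) == 2.68  # fails due to floating-point representation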


13. Plugins

Install and use plugins for extended functionality:

  • List all installed plugins:


Popular plugins:

  • pytest-django: For Django testing.

  • pytest-flask: For Flask testing.

  • pytest-httpx: For HTTPX mocking.


14. Output Control

  • Disable warnings:


  • Only show errors:



15. Configuration with pytest.ini

Customize test behavior using a pytest.ini file:

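A typical pytest.ini might look like this (values are illustrative):

[pytest]
testpaths = tests
addopts = -v --maxfail=3 --tb=short
markers =
    slow: marks tests as slow
filterwarnings =
    ignore::DeprecationWarning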


16. Running Specific File Types

  • Run doctests in .py files:


  • Run tests in .ipynb notebooks (requires pytest-ipynb):



Cheat Sheet Summary

Option                  Description
-v                      Verbose output
-q                      Quiet output
--maxfail=N             Stop after N failures
--tb=short              Short traceback
--lf                    Run only failed tests
--cov=MODULE            Measure coverage
-n 4                    Run tests in 4 parallel processes
-k "keyword"            Run tests matching a keyword
-m "marker"             Run tests with a specific marker
--html=report.html      Generate an HTML report
--log-cli-level=INFO    Enable logging
--randomly-seed=42      Randomize test order with a fixed seed



7

How to hide warnings

To hide or suppress warnings during test runs in pytest, you can use the following methods:


1. Use pytest Options to Suppress Warnings

Run pytest with the --disable-warnings option:

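For example:

pytest --disable-warnings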

This suppresses all warnings, including those from external libraries.


2. Filter Specific Warnings

You can disable pytest's warnings plugin entirely with -p no:warnings.

For example:

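pytest -p no:warnings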

Or filter only specific warnings using filterwarnings:

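For example, ignoring a specific warning category from the command line (the -W option feeds the same filterwarnings mechanism):

pytest -W ignore::DeprecationWarning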


3. Configure pytest.ini

Add a pytest.ini file to your project directory and configure it to suppress specific warnings:

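For example:

[pytest]
filterwarnings =
    ignore::DeprecationWarning
    ignore::UserWarning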

This ensures consistent behavior across all test runs.


4. Suppress Warnings in Test Code

Suppress warnings programmatically in your test code using the warnings module:

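For example, using the filterwarnings marker or the warnings module directly:

import warnings
import pytest

@pytest.mark.filterwarnings("ignore::DeprecationWarning")
def test_with_marker():
    ...

def test_with_context_manager():
    with warnings.catch_warnings():
        warnings.simplefilter("ignore", DeprecationWarning)
        ...  # code that triggers the warning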


5. Suppress Warnings Globally in conftest.py

Create a conftest.py file in your test directory and configure global warning filters:

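A sketch using the pytest_configure hook, which appends to the filterwarnings setting for every run (a module-level warnings.filterwarnings call may be reset by pytest's own warning handling):

# conftest.py
def pytest_configure(config):
    config.addinivalue_line("filterwarnings", "ignore::DeprecationWarning")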


6. Upgrade Libraries

Some warnings, such as the ones related to Pydantic or logging, may be resolved by upgrading the libraries:

  • Upgrade Pydantic to the latest version compatible with your project:


  • Update rs_microservice_core:



7. Suppress Pydantic Warnings

If using Pydantic V2, switch from Config to ConfigDict as suggested in the warning message to eliminate the deprecation notice.

Old:

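For example (the model and its settings are illustrative):

from pydantic import BaseModel

class Item(BaseModel):
    name: str

    class Config:
        orm_mode = True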

New:

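The same model using ConfigDict (orm_mode becomes from_attributes in V2):

from pydantic import BaseModel, ConfigDict

class Item(BaseModel):
    model_config = ConfigDict(from_attributes=True)

    name: str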


Summary of Commands

  • Suppress all warnings:


  • Suppress specific warnings:


  • Configure warnings globally in pytest.ini:



8

Hide warnings 2

To hide or suppress the warnings mentioned during your tests, you can use pytest options, configure filters in a pytest.ini file, or handle them programmatically. Here's how:


1. Suppress Warnings via pytest Options

Run pytest with the --disable-warnings option:


This suppresses all warnings during the test run.


2. Suppress Specific Warnings Using pytest.ini

Create or modify a pytest.ini file in the root directory of your project and configure filterwarnings to suppress specific warnings:


This ensures only the specified warnings are suppressed while allowing others to appear.


3. Suppress Specific Warnings Programmatically

You can suppress warnings programmatically in your test file using the warnings module.

Example:



4. Suppress Warnings in a conftest.py File

If you want to suppress warnings globally for all tests, create or modify a conftest.py file in your test directory:

conftest.py:



5. Fix the Code to Avoid Warnings

Instead of hiding warnings, consider updating the code to remove deprecated or incorrect practices:

Warning 1: DataFrame.applymap Deprecated

The warning suggests replacing applymap with map. Update the code in feature_engineering.py:

Copy

Warning 2: MLPClassifier with Feature Names

Ensure the input data for the MLPClassifier matches the feature names used during training. If your input X doesn't have feature names, convert it to a DataFrame with appropriate column names before passing it to the model:

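A hedged sketch; the feature names and model setup are assumptions for illustration:

import numpy as np
import pandas as pd
from sklearn.neural_network import MLPClassifier

feature_names = ["age", "income", "score"]  # assumed training-time column names
X_train = pd.DataFrame(np.random.rand(20, 3), columns=feature_names)
y_train = np.random.randint(0, 2, size=20)
model = MLPClassifier(max_iter=200).fit(X_train, y_train)

# Predict with a DataFrame carrying the same feature names to avoid the warning
X_new = pd.DataFrame(np.random.rand(5, 3), columns=feature_names)
prediction = model.predict(X_new)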


6. Verify Warning Suppression

After applying these fixes or suppressions, run the tests:


Or:


You should no longer see the warnings.


9

Show print on console

By default, print statements don’t appear in pytest output because pytest captures standard output during test execution. You can control this behavior using the -s or --capture options.


How to Enable print Output in pytest

1. Use the -s Option

Run your tests with the -s option to disable output capturing:

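For example:

pytest -s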

This ensures that all print statements are displayed in the terminal.


2. Use the --capture=no Option

Alternatively, you can use the --capture=no option to achieve the same result:

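For example:

pytest --capture=no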


3. Log Instead of Print

If you frequently need to see output, it's better to use Python's logging module instead of print. pytest handles logging well and can show logs even with output capturing.

Example:

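For example:

# tests/test_logging_output.py
import logging

logger = logging.getLogger(__name__)

def test_with_logging():
    logger.info("Starting inference test")
    assert 1 + 1 == 2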

Run the test with:

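For example, with live logging enabled:

pytest --log-cli-level=INFO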


4. Combine Print with Debugging (pdb)

You can also enter the debugger if you need to inspect values:

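For example, pause inside a test with breakpoint() (pytest suspends capturing while the debugger is active), or run pytest --pdb to drop into the debugger on failures:

def test_debug_example():
    value = {"prediction": 5}
    breakpoint()  # inspect `value` here, then type `c` to continue
    assert value["prediction"] == 5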

When pdb is triggered, you can inspect variables and use the print command directly.


5. Display Output After Test Completion

Alternatively, read the captured output explicitly inside the test using the capsys fixture:

Example:

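For example:

def test_print_captured(capsys):
    print("inference complete")
    captured = capsys.readouterr()
    assert "inference complete" in captured.out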


10

Capsys

capsys is a built-in pytest fixture that captures output to sys.stdout and sys.stderr during test execution. It allows you to test and verify printed output or any text written to the standard output or error streams.


How It Works

When you use capsys in a test, it temporarily redirects sys.stdout and sys.stderr to capture anything printed during the test. After capturing, you can access the output and validate it using capsys.readouterr().


Example Usage

1. Capturing print Output

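For example:

def test_hello_output(capsys):
    print("Hello, world!")
    captured = capsys.readouterr()
    assert captured.out == "Hello, world!\n"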

2. Capturing Both stdout and stderr

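For example:

import sys

def test_stdout_and_stderr(capsys):
    print("to stdout")
    print("to stderr", file=sys.stderr)
    captured = capsys.readouterr()
    assert captured.out == "to stdout\n"
    assert captured.err == "to stderr\n"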


Methods Provided by capsys

Method                 Description
capsys.readouterr()    Returns a tuple-like object with out (stdout) and err (stderr) strings.
capsys.disabled()      Temporarily disables capturing (useful for debugging).


3. Disabling capsys Temporarily

If you want to disable output capturing for debugging, you can use capsys.disabled():

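For example:

def test_with_capture_disabled(capsys):
    with capsys.disabled():
        print("this line goes straight to the terminal")
    print("this line is captured")
    assert "captured" in capsys.readouterr().out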


4. Example: Validating Function Output

Here’s how you can use capsys to test a function that prints output:

Function to Test

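For example, a function that prints a greeting:

# greetings.py
def greet(name):
    print(f"Hello, {name}!")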

Test Case

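And its test:

# tests/test_greetings.py
from greetings import greet

def test_greet_output(capsys):
    greet("Ada")
    assert capsys.readouterr().out == "Hello, Ada!\n"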


5. Capturing Subprocess Output

capsys only captures Python-level writes to sys.stdout and sys.stderr. To capture output from subprocesses, use the related capfd fixture, which captures at the file-descriptor level:

Example

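A sketch using capfd; sys.executable keeps the example portable:

import subprocess
import sys

def test_subprocess_output(capfd):
    subprocess.run([sys.executable, "-c", "print('from child process')"], check=True)
    captured = capfd.readouterr()
    assert "from child process" in captured.out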


6. Common Use Cases

  • Validating print Statements: Ensure the printed output is correct.

  • Testing Logging Output: Verify logs written to sys.stdout or sys.stderr.

  • Debugging Subprocess Output: Capture and check output from child processes.


Integration with Pytest

When you use capsys, it seamlessly integrates into your pytest test functions. There's no need for manual setup or teardown—it’s managed automatically.


11

Examples

https://github.com/pluralsight/intro-to-pytest
https://gist.github.com/jakelevi1996/2d249adbbd2e13950852b80cca42ed02
https://github.com/pytest-dev/pytest/issues
https://github.com/armakuni/pytest-examples
https://github.com/hectorcanto/pytest-samples
https://gist.github.com/hectorcanto/40a7ecbd9e02b0550840f850165bc162
https://gist.github.com/jw-ng/cc07a9e323477bd58c49327107bd8c3b
https://github.com/htorrence/pytest_examples
https://docs.pytest.org/en/stable/example/index.html

