
Testing FastAPI Applications: Complete Guide with Pytest, Coverage, and Best Practices

Introduction to FastAPI Testing

Testing FastAPI applications is fundamental to building reliable, production-grade APIs. FastAPI provides powerful built-in testing tools that simulate HTTP requests to your application without running a live server, making tests faster, more reliable, and easier to integrate into CI/CD pipelines.

FastAPI’s TestClient, built on top of the httpx library, calls your application in process through the ASGI interface rather than over the network. This eliminates network overhead and makes tests deterministic and fast; when you need an asynchronous client, httpx’s AsyncClient (covered later in this guide) fills that role.

Why Test FastAPI Applications?

  • Catch bugs early before production deployment
  • Ensure API endpoints return correct responses and status codes
  • Validate data serialization and validation logic
  • Verify authentication and authorization mechanisms
  • Document expected API behavior
  • Enable confident refactoring and feature development
  • Meet production quality standards and compliance requirements

Setup and Configuration

Installing Testing Dependencies

pip install pytest pytest-asyncio httpx pytest-cov
pip install pytest-mock  # For mocking external services
pip install faker  # For generating test data
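
faker is installed above but not used again in this guide; as a quick illustration, a fixture like the following can generate realistic request bodies (the payload field names are an assumption, not part of the example app):

import pytest
from faker import Faker

fake = Faker()

@pytest.fixture
def user_payload():
    """Build a random but realistic user payload (field names are illustrative)"""
    return {"username": fake.user_name(), "email": fake.email()}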

Project Structure

myapp/
├── app/
│   ├── __init__.py
│   ├── main.py
│   ├── models.py
│   └── dependencies.py
├── tests/
│   ├── __init__.py
│   ├── conftest.py
│   ├── test_main.py
│   └── test_auth.py
├── requirements.txt
├── requirements-dev.txt
├── pytest.ini
└── .coveragerc

Pytest Configuration (pytest.ini)

[pytest]
testpaths = tests
python_files = test_*.py
python_classes = Test*
python_functions = test_*
addopts = -v --cov=app --cov-report=term-missing --cov-report=html
asyncio_mode = auto

Coverage Configuration (.coveragerc)

[run]
source = app
branch = True
concurrency = greenlet,thread

[report]
precision = 2
show_missing = True
skip_covered = False

[paths]
source =
    app
    */site-packages/app

TestClient Fundamentals

Basic TestClient Usage

from fastapi.testclient import TestClient
from app.main import app

client = TestClient(app)

def test_read_main():
    """Test root endpoint"""
    response = client.get("/")
    assert response.status_code == 200
    assert response.json() == {"message": "Hello World"}

Testing Different HTTP Methods

GET Request

def test_get_items():
    response = client.get("/items/")
    assert response.status_code == 200
    assert isinstance(response.json(), list)

POST Request

def test_create_item():
    item_data = {"name": "Test Item", "price": 10.5}
    response = client.post("/items/", json=item_data)
    assert response.status_code == 201
    data = response.json()
    assert data["name"] == "Test Item"
    assert data["price"] == 10.5

PUT Request

def test_update_item():
    item_data = {"name": "Updated Item", "price": 15.0}
    response = client.put("/items/1", json=item_data)
    assert response.status_code == 200
    data = response.json()
    assert data["name"] == "Updated Item"

DELETE Request

def test_delete_item():
    response = client.delete("/items/1")
    assert response.status_code == 204
    assert response.content == b""  # No content

Query Parameters

def test_list_items_with_pagination():
    response = client.get("/items/?skip=0&limit=10")
    assert response.status_code == 200
    data = response.json()
    assert len(data) <= 10

Path Parameters

def test_get_item_by_id():
    response = client.get("/items/42")
    assert response.status_code == 200
    data = response.json()
    assert data["id"] == 42

Pytest Integration and Fixtures

Test Client Fixture

import pytest
from fastapi.testclient import TestClient
from app.main import app

@pytest.fixture
def client():
    """Create test client fixture"""
    return TestClient(app)

def test_with_fixture(client):
    """Use test client fixture"""
    response = client.get("/")
    assert response.status_code == 200

Database Fixture with Setup/Teardown

import pytest
from fastapi.testclient import TestClient
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from app.main import app
from app.database import Base, get_db

# Test database
SQLALCHEMY_DATABASE_URL = "sqlite:///./test.db"
engine = create_engine(SQLALCHEMY_DATABASE_URL, connect_args={"check_same_thread": False})
TestingSessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)

@pytest.fixture(scope="function")
def db():
    """Create fresh database for each test"""
    Base.metadata.create_all(bind=engine)
    db = TestingSessionLocal()
    try:
        yield db
    finally:
        db.close()
        Base.metadata.drop_all(bind=engine)

@pytest.fixture(scope="function")
def client(db):
    """Override dependency to use test database"""
    def override_get_db():
        try:
            yield db
        finally:
            db.close()
    
    app.dependency_overrides[get_db] = override_get_db
    return TestClient(app)

Mocking External Services

import pytest
from unittest.mock import Mock, patch

@pytest.fixture
def mock_external_api():
    with patch('app.services.external_api.requests.get') as mock_get:
        mock_get.return_value = Mock(status_code=200, json=lambda: {"data": "test"})
        yield mock_get

def test_with_mock(client, mock_external_api):
    response = client.get("/data-from-external-api")
    assert response.status_code == 200
    mock_external_api.assert_called_once()
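
pytest-mock (installed earlier) exposes the same functionality through the mocker fixture, which removes the nested with block and undoes patches automatically when the test finishes. Using the same patch target as above:

def test_with_mocker(client, mocker):
    """mocker.patch is reverted automatically at the end of the test"""
    mock_get = mocker.patch(
        "app.services.external_api.requests.get",
        return_value=mocker.Mock(status_code=200, json=lambda: {"data": "test"}),
    )
    response = client.get("/data-from-external-api")
    assert response.status_code == 200
    mock_get.assert_called_once()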

Async Testing Patterns

Async TestClient Setup

import pytest
from httpx import ASGITransport, AsyncClient
from app.main import app

@pytest.fixture
async def async_client():
    """Async test client fixture (recent httpx versions require an explicit ASGITransport)"""
    transport = ASGITransport(app=app)
    async with AsyncClient(transport=transport, base_url="http://test") as client:
        yield client

@pytest.mark.asyncio
async def test_async_endpoint(async_client):
    """Test async endpoint"""
    response = await async_client.get("/async-data")
    assert response.status_code == 200
    data = response.json()
    assert "timestamp" in data

@pytest.mark.asyncio
async def test_concurrent_requests(async_client):
    """Test multiple concurrent requests"""
    import asyncio
    
    async def fetch_data(endpoint):
        response = await async_client.get(endpoint)
        return response.json()
    
    # Concurrent requests
    results = await asyncio.gather(
        fetch_data("/items/1"),
        fetch_data("/items/2"),
        fetch_data("/items/3")
    )
    
    assert len(results) == 3
    assert all("id" in result for result in results)

Testing Async Database Operations

import pytest
from sqlalchemy.ext.asyncio import create_async_engine, AsyncSession
from sqlalchemy.orm import sessionmaker
from app.main import app
from app.database import Base, get_async_db

@pytest.fixture
async def async_db():
    """Create async test database"""
    DATABASE_URL = "sqlite+aiosqlite:///./test.db"
    engine = create_async_engine(DATABASE_URL, echo=True)
    async with engine.begin() as conn:
        await conn.run_sync(Base.metadata.create_all)
    
    async_session = sessionmaker(engine, class_=AsyncSession, expire_on_commit=False)
    
    async def override_get_async_db():
        async with async_session() as session:
            yield session
    
    app.dependency_overrides[get_async_db] = override_get_async_db
    yield
    
    async with engine.begin() as conn:
        await conn.run_sync(Base.metadata.drop_all)

@pytest.mark.asyncio
async def test_async_database_operations(async_client, async_db):
    response = await async_client.post("/users/", json={"name": "Test User"})
    assert response.status_code == 201
    data = response.json()
    assert data["name"] == "Test User"

Database Testing Patterns

Testing with SQLAlchemy and pytest

import pytest
from fastapi.testclient import TestClient
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from app.main import app
from app.database import Base, get_db
from app.models import User

# Test database setup
SQLALCHEMY_DATABASE_URL = "sqlite:///./test.db"
engine = create_engine(SQLALCHEMY_DATABASE_URL, connect_args={"check_same_thread": False})
TestingSessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)

@pytest.fixture(scope="function")
def db():
    """Create fresh database for each test"""
    Base.metadata.create_all(bind=engine)
    db = TestingSessionLocal()
    try:
        yield db
    finally:
        db.close()
        Base.metadata.drop_all(bind=engine)

@pytest.fixture(scope="function")
def client(db):
    """Override dependency to use test database"""
    def override_get_db():
        try:
            yield db
        finally:
            db.close()
    
    app.dependency_overrides[get_db] = override_get_db
    return TestClient(app)

def test_create_user(client, db):
    """Test user creation endpoint"""
    user_data = {"username": "testuser", "email": "test@example.com"}
    response = client.post("/users/", json=user_data)
    assert response.status_code == 201
    
    # Verify in database
    user = db.query(User).filter(User.username == "testuser").first()
    assert user is not None
    assert user.email == "test@example.com"

def test_get_user(client, db):
    """Test retrieving user"""
    # Create user directly in database
    user = User(username="testuser", email="test@example.com")
    db.add(user)
    db.commit()
    
    # Test endpoint
    response = client.get(f"/users/{user.id}")
    assert response.status_code == 200
    data = response.json()
    assert data["username"] == "testuser"

Testing Database Transactions

@pytest.fixture(scope="function")
def db_transaction(db):
    """Use transactions for test isolation"""
    transaction = db.begin()
    try:
        yield db
        transaction.rollback()  # Rollback after each test
    finally:
        db.close()

def test_user_creation_rollback(db_transaction):
    """Test that changes are rolled back"""
    user = User(username="tempuser", email="temp@example.com")
    db_transaction.add(user)
    db_transaction.commit()
    # After test, changes are automatically rolled back

JWT Authentication Testing

Testing Endpoints with JWT

import pytest
from app.auth import create_access_token

@pytest.fixture
def auth_token():
    """Create JWT token for testing"""
    token = create_access_token(data={"sub": "testuser"})
    return f"Bearer {token}"

def test_protected_endpoint_with_jwt(client, auth_token):
    """Test protected endpoint with valid JWT"""
    headers = {"Authorization": auth_token}
    response = client.get("/protected/", headers=headers)
    assert response.status_code == 200
    data = response.json()
    assert data["username"] == "testuser"

def test_protected_endpoint_without_jwt(client):
    """Test protected endpoint without JWT"""
    response = client.get("/protected/")
    assert response.status_code == 401
    data = response.json()
    assert "detail" in data
    assert data["detail"] == "Not authenticated"

@pytest.fixture
def expired_token():
    """Create expired token"""
    from datetime import datetime, timedelta
    from app.auth import create_access_token
    
    # Create token that expired 1 hour ago
    token = create_access_token(
        data={"sub": "testuser"},
        expires_delta=timedelta(hours=-1)
    )
    return f"Bearer {token}"

def test_expired_jwt_token(client, expired_token):
    """Test endpoint with expired JWT"""
    headers = {"Authorization": expired_token}
    response = client.get("/protected/", headers=headers)
    assert response.status_code == 401
    data = response.json()
    assert "expired" in data["detail"].lower()

Testing Different User Roles

@pytest.fixture
def admin_token():
    """Create admin JWT token"""
    token = create_access_token(data={"sub": "adminuser", "role": "admin"})
    return f"Bearer {token}"

@pytest.fixture
def regular_user_token():
    """Create regular user JWT token"""
    token = create_access_token(data={"sub": "user", "role": "user"})
    return f"Bearer {token}"

def test_admin_only_endpoint(client, admin_token):
    """Test admin-only endpoint with admin token"""
    headers = {"Authorization": admin_token}
    response = client.get("/admin-only/", headers=headers)
    assert response.status_code == 200

def test_admin_only_endpoint_with_user(client, regular_user_token):
    """Test admin-only endpoint with user token (should fail)"""
    headers = {"Authorization": regular_user_token}
    response = client.get("/admin-only/", headers=headers)
    assert response.status_code == 403
    data = response.json()
    assert "Insufficient permissions" in data["detail"]

Coverage Configuration and Reporting

Running Coverage

# Run tests with coverage
pytest --cov=app --cov-report=term-missing --cov-report=html

# Show coverage in terminal
pytest --cov=app --cov-report=term-missing

# Generate HTML coverage report
pytest --cov=app --cov-report=html

# Generate XML for CI/CD (Codecov, etc.)
pytest --cov=app --cov-report=xml

.coveragerc Configuration

[run]
source = app
branch = True
concurrency = thread, greenlet

[report]
precision = 2
show_missing = True
exclude_lines =
    pragma: no cover
    def __repr__
    raise NotImplementedError
    if __name__ == .__main__.:
    pass
    raise AssertionError

[html]
directory = coverage_html_report

Excluding Code from Coverage

def debug_function():  # pragma: no cover
    """Excluded from coverage reporting (the pragma goes on the def line)"""
    print("Debug info")

def important_function():
    """Measured as usual - no pragma needed"""
    return "important result"

Coverage Goals: Aim for 80% coverage as a minimum and 90%+ for critical business logic. 100% coverage is often impractical and can indicate over-testing.
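
To make the target enforceable rather than aspirational, pytest-cov can fail the run when coverage drops below a threshold (pick a value that fits your project):

# Fail the test run if total coverage falls below 80%
pytest --cov=app --cov-fail-under=80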

Coverage Badge Generation

# Add to CI/CD pipeline
- name: Generate coverage badge
  run: |
    pytest --cov=app --cov-report=xml
    coverage-badge -f -o coverage.svg
    
- name: Upload coverage badge
  uses: actions/upload-artifact@v3
  with:
    name: coverage-badge
    path: coverage.svg
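
Note that coverage-badge is a separate package and is not included in the dependencies installed earlier; add it to requirements-dev.txt or install it in the workflow before the badge step runs:

pip install coverage-badge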

Testing Best Practices

FastAPI Testing Best Practices

  • Use TestClient for most tests: Simulates real HTTP requests without server overhead
  • Override dependencies: Use app.dependency_overrides for test isolation (a cleanup sketch follows this list)
  • Test happy paths and error cases: Verify correct responses and proper error handling
  • Use descriptive test names: test_create_user_with_valid_data is better than test_1
  • Group related tests: Use test classes or separate files for different modules
  • Mock external services: Don't hit real APIs in tests
  • Use fixtures for setup/teardown: Keep tests DRY and maintainable
  • Parametrize tests: Test multiple scenarios with minimal code
  • Test authentication: Always test JWT, OAuth2, API key endpoints
  • Document test purpose: Add docstrings to explain complex test logic
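
Because app.dependency_overrides is global to the application object, an override left in place by one test can silently affect another. A minimal sketch of the override-and-clean-up pattern (get_settings is a hypothetical dependency, used only for illustration):

import pytest
from fastapi.testclient import TestClient
from app.main import app
from app.dependencies import get_settings  # hypothetical dependency, for illustration only

@pytest.fixture
def client_with_override():
    """Install an override for one test and always remove it afterwards"""
    app.dependency_overrides[get_settings] = lambda: {"feature_flag": True}
    try:
        yield TestClient(app)
    finally:
        app.dependency_overrides.clear()  # prevent the override leaking into other tests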

Parametrized Tests Example

import pytest

@pytest.mark.parametrize("item_id,expected_status", [
    (1, 200),
    (2, 200),
    (999, 404),  # Non-existent item
    (-1, 422),   # Invalid ID
])
def test_get_item_various_ids(client, item_id, expected_status):
    response = client.get(f"/items/{item_id}")
    assert response.status_code == expected_status

Test Organization

# tests/conftest.py - Shared fixtures
# tests/test_main.py - Main endpoint tests
# tests/test_auth.py - Authentication tests
# tests/test_models.py - Model tests
# tests/test_integration.py - Integration tests
# tests/test_utils.py - Utility function tests
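
Grouping related tests in a class, as suggested in the best practices above, keeps naming and shared setup tidy. A small sketch, assuming the /items/ routes used earlier in this guide:

from fastapi.testclient import TestClient
from app.main import app

class TestItems:
    """All /items/ endpoint tests live in one class"""
    client = TestClient(app)

    def test_list_items(self):
        assert self.client.get("/items/").status_code == 200

    def test_missing_item_returns_404(self):
        assert self.client.get("/items/999999").status_code == 404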

CI/CD Integration

GitHub Actions Workflow for FastAPI

name: FastAPI CI/CD

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main ]

jobs:
  test:
    runs-on: ubuntu-latest
    
    services:
      postgres:
        image: postgres:15
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
          POSTGRES_DB: test_db
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
        ports:
          - 5432:5432

    steps:
    - uses: actions/checkout@v3
    
    - name: Set up Python
      uses: actions/setup-python@v4
      with:
        python-version: '3.11'
        cache: 'pip'
    
    - name: Install dependencies
      run: |
        python -m pip install --upgrade pip
        pip install -r requirements.txt
        pip install -r requirements-dev.txt
    
    - name: Lint with flake8
      run: |
        flake8 app tests --count --select=E9,F63,F7,F82 --show-source --statistics
        flake8 app tests --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
    
    - name: Format check with black
      run: black --check app tests
    
    - name: Type check with mypy
      run: mypy app tests --ignore-missing-imports
    
    - name: Run tests with coverage
      env:
        DATABASE_URL: postgresql://postgres:postgres@localhost:5432/test_db
        SECRET_KEY: test-secret-key
      run: |
        pytest --cov=app --cov-report=xml --cov-report=html --cov-fail-under=80
    
    - name: Upload coverage to Codecov
      uses: codecov/codecov-action@v3
      with:
        file: ./coverage.xml
        flags: unittests
        name: codecov-umbrella
        fail_ci_if_error: true

  build-and-deploy:
    needs: test
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main'
    
    steps:
    - uses: actions/checkout@v3
    
    - name: Set up Docker Buildx
      uses: docker/setup-buildx-action@v2
    
    - name: Login to Docker Hub
      uses: docker/login-action@v2
      with:
        username: ${{ secrets.DOCKER_USERNAME }}
        password: ${{ secrets.DOCKER_PASSWORD }}
    
    - name: Build and push
      uses: docker/build-push-action@v4
      with:
        context: .
        push: true
        tags: |
          ${{ secrets.DOCKER_USERNAME }}/fastapi-app:${{ github.sha }}
          ${{ secrets.DOCKER_USERNAME }}/fastapi-app:latest
        cache-from: type=registry,ref=${{ secrets.DOCKER_USERNAME }}/fastapi-app:buildcache
        cache-to: type=registry,ref=${{ secrets.DOCKER_USERNAME }}/fastapi-app:buildcache,mode=max

Pre-commit Hooks

# .pre-commit-config.yaml
repos:
  - repo: https://github.com/psf/black
    rev: 23.9.1
    hooks:
      - id: black
        language_version: python3.11

  - repo: https://github.com/pycqa/flake8
    rev: 6.1.0
    hooks:
      - id: flake8
        args: [--max-line-length=88]

  - repo: https://github.com/pycqa/isort
    rev: 5.12.0
    hooks:
      - id: isort
        args: ["--profile", "black"]

Testing FastAPI applications is essential for building production-ready APIs. By combining TestClient, pytest, and coverage tools, you can ensure your application behaves correctly under various conditions.

Start with basic endpoint testing, then add database tests, authentication tests, and async tests as your application grows. Aim for 80%+ coverage and integrate testing into your CI/CD pipeline for continuous quality assurance.

Remember: tests are documentation—they show how your API is intended to be used. Write clear, descriptive tests that explain the "why" behind your code, not just the "what."

Next Steps: Start with simple endpoint tests, gradually add database and authentication tests, integrate with CI/CD, and monitor coverage. The investment in testing pays dividends in code quality and developer confidence.