Testcontainers (Python)
Unlike the Java module, Python does not ship a dedicated MiniStackContainer class — the generic testcontainers-python library does everything you need in a few lines. This page gives you the recommended fixtures to copy into your project.
Install
pip install "testcontainers>=4" boto3 pytest requests
Or in requirements-test.txt / pyproject.toml:
[tool.poetry.group.test.dependencies]
testcontainers = "^4"
boto3 = "*"
pytest = "*"
requests = "*"
pytest fixture
Drop this in conftest.py. A module-scoped fixture is a good default — most suites can share one container across tests.
import time

import boto3
import pytest
import requests
from testcontainers.core.container import DockerContainer


@pytest.fixture(scope="module")
def ministack():
    container = (
        DockerContainer("ministackorg/ministack:1.3.14")
        .with_exposed_ports(4566)
    )
    container.start()
    host = container.get_container_host_ip()
    port = container.get_exposed_port(4566)
    endpoint = f"http://{host}:{port}"
    # Wait for /_ministack/health to return 200
    deadline = time.time() + 30
    while time.time() < deadline:
        try:
            if requests.get(f"{endpoint}/_ministack/health", timeout=2).status_code == 200:
                break
        except requests.RequestException:
            pass  # not up yet; keep polling
        time.sleep(0.5)
    else:
        raise RuntimeError("MiniStack did not become healthy within 30s")
    yield endpoint
    container.stop()
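If several fixtures or suites need the same readiness check, the polling loop can be factored out. A stdlib-only sketch — the `wait_for_health` name is ours, not a MiniStack or testcontainers API:

```python
import time
import urllib.error
import urllib.request


def wait_for_health(url: str, timeout: float = 30.0) -> None:
    """Poll `url` until it returns HTTP 200, or raise after `timeout` seconds."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status == 200:
                    return
        except (urllib.error.URLError, OSError):
            pass  # connection refused / not up yet; keep polling
        time.sleep(0.5)
    raise RuntimeError(f"{url} did not become healthy within {timeout:.0f}s")
```

Calling `wait_for_health(f"{endpoint}/_ministack/health")` then replaces the inline loop in the fixture above.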
Pin an explicit image tag (1.3.14, not latest) so CI is reproducible. The tag matches a published MiniStack release.
Boto3 wiring
@pytest.fixture
def s3(ministack):
    return boto3.client(
        "s3",
        endpoint_url=ministack,
        region_name="us-east-1",
        aws_access_key_id="test",
        aws_secret_access_key="test",
    )


def test_put_get(s3):
    s3.create_bucket(Bucket="demo")
    s3.put_object(Bucket="demo", Key="hi.txt", Body=b"hello")
    body = s3.get_object(Bucket="demo", Key="hi.txt")["Body"].read()
    assert body == b"hello"
Factor the client construction if you use many services:
@pytest.fixture
def aws(ministack):
    def _make(service):
        return boto3.client(
            service,
            endpoint_url=ministack,
            region_name="us-east-1",
            aws_access_key_id="test",
            aws_secret_access_key="test",
        )
    return _make


def test_dynamodb(aws):
    ddb = aws("dynamodb")
    ddb.create_table(...)
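For a concrete `create_table` call, the keyword arguments can be kept in a dict. The table name `items` and the single-key schema below are illustrative choices, not anything MiniStack requires:

```python
# Minimal argument set for ddb.create_table(**TABLE_SPEC).
# "items" and the pk-only key schema are examples, not MiniStack defaults.
TABLE_SPEC = {
    "TableName": "items",
    "KeySchema": [{"AttributeName": "pk", "KeyType": "HASH"}],
    "AttributeDefinitions": [{"AttributeName": "pk", "AttributeType": "S"}],
    "BillingMode": "PAY_PER_REQUEST",  # no provisioned-throughput block needed
}
```

In the test above, `ddb.create_table(**TABLE_SPEC)` would then create the table against the container endpoint.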
Path-style addressing for S3
Boto3 auto-detects path-style when endpoint_url is set, so you usually don't need to configure it. If you hit bucket.localhost resolution errors, force it explicitly:
from botocore.config import Config

s3 = boto3.client(
    "s3",
    endpoint_url=ministack,
    config=Config(s3={"addressing_style": "path"}),
    region_name="us-east-1",
    aws_access_key_id="test",
    aws_secret_access_key="test",
)
Multi-account tests
MiniStack derives the account ID from the access key when it's 12 digits. Use distinct keys to scope state per account:
@pytest.fixture
def aws_account_a(ministack):
    return lambda svc: boto3.client(
        svc,
        endpoint_url=ministack,
        region_name="us-east-1",
        aws_access_key_id="111111111111",
        aws_secret_access_key="test",
    )


@pytest.fixture
def aws_account_b(ministack):
    return lambda svc: boto3.client(
        svc,
        endpoint_url=ministack,
        region_name="us-east-1",
        aws_access_key_id="222222222222",
        aws_secret_access_key="test",
    )


def test_account_isolation(aws_account_a, aws_account_b):
    aws_account_a("s3").create_bucket(Bucket="a-only")
    buckets_b = aws_account_b("s3").list_buckets()["Buckets"]
    assert not any(b["Name"] == "a-only" for b in buckets_b)
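The derivation rule is simple enough to encode if you generate test credentials programmatically. A sketch of the stated rule; the `000000000000` fallback for non-matching keys is our assumption, so verify it against your MiniStack version:

```python
DEFAULT_ACCOUNT = "000000000000"  # assumed fallback account; not confirmed by the docs


def account_for_key(access_key_id: str) -> str:
    """Mirror MiniStack's rule: a 12-digit access key ID is used as the account ID."""
    if len(access_key_id) == 12 and access_key_id.isdigit():
        return access_key_id
    return DEFAULT_ACCOUNT
```

This makes it easy to assert, in a test, which account a given fixture's state should land in.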
See Multi-tenancy for the services that leak across accounts.
Reuse across tests
testcontainers-python supports reuse. Enable it globally, then set .with_kwargs(reuse=True) (or equivalent) on the container. Caveat: reuse is best-effort and Docker-only.
# ~/.testcontainers.properties
testcontainers.reuse.enable=true
Reset between tests
For function-scoped isolation without paying the container-boot cost:
@pytest.fixture(autouse=True)
def _reset(ministack):
    requests.post(f"{ministack}/_ministack/reset")
    yield
Use ?init=1 on the reset URL to also re-run any init.d scripts baked into your test image.
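If some suites need the init scripts re-run and others don't, a tiny helper keeps the URL construction in one place. A sketch; only the `/_ministack/reset` path and the `?init=1` parameter come from this page, the helper itself is ours:

```python
def reset_url(endpoint: str, init: bool = False) -> str:
    """Build the MiniStack reset URL; init=True also re-runs baked-in init.d scripts."""
    return f"{endpoint}/_ministack/reset" + ("?init=1" if init else "")
```

The autouse fixture above would then call `requests.post(reset_url(ministack, init=True))` where a full re-seed is wanted.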
Real infrastructure
To let MiniStack spin up real RDS / ElastiCache / ECS / EKS sidecars, mount the Docker socket into the container and expose the random ports it allocates:
container = (
    DockerContainer("ministackorg/ministack:1.3.14")
    .with_exposed_ports(4566)
    .with_volume_mapping("/var/run/docker.sock", "/var/run/docker.sock", "rw")
    .with_env("DOCKER_NETWORK", "bridge")
)
For RDS: after CreateDBInstance returns, poll the instance status until it's available, then connect via the mapped port the RDS service assigns. See the RDS service page for port allocation rules.
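The "poll until available" step generalizes across RDS, ElastiCache, and other sidecar-backed services. A sketch with an injected status callable, so the same helper works for any of them; the helper name and defaults are ours:

```python
import time
from typing import Callable


def wait_for_status(get_status: Callable[[], str], want: str = "available",
                    timeout: float = 120.0, interval: float = 2.0) -> None:
    """Call get_status() until it returns `want`, or raise after `timeout` seconds."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if get_status() == want:
            return
        time.sleep(interval)
    raise TimeoutError(f"status never became {want!r} within {timeout:.0f}s")
```

For RDS, `get_status` would wrap a `describe_db_instances` call and pull out the instance's `DBInstanceStatus` field.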
Call delete_db_instance / delete_cache_cluster explicitly, or register a request.addfinalizer; otherwise the sidecar containers leak on your host.
The ministack repo includes a runnable example at Testcontainers/python-testcontainers/ (S3 + SQS + DynamoDB). Adapt it to your project.