# Providers
Provider comparison, registration system, and how to add new providers.
## Reproducibility matrix
| Provider | Mode | Temperature | Seed | Reproducibility | Env var |
|---|---|---|---|---|---|
| OpenAI | API | enforced | enforced | high | OPENAI_API_KEY |
| Anthropic | API | enforced | ignored | low | ANTHROPIC_API_KEY |
| Anthropic | subscription | hint only | N/A | low | — |
| Google | API | enforced | enforced | TBD | GEMINI_API_KEY |
## Provider registry

Providers auto-register via the `@register()` decorator in `providers/registry.py`. The package `__init__.py` auto-discovers all modules in the package at import time.
Source: `providers/registry.py`

```python
@register("openai", "api", env_key="OPENAI_API_KEY")
class OpenAIProvider(BaseProvider):
    ...
```
The registration key is `(provider_name, mode)`. Mode is either `"api"` or `"subscription"`.
### Resolution order

- If `--use-subscription` or `--api-key` is passed, use that mode explicitly
- Otherwise auto-detect: try `preferred_mode` (from pramana config) first, then the other
- Check availability: API mode requires the env var or `--api-key`; subscription mode requires the SDK to be installed
## BaseProvider interface

Source: `providers/base.py`

```python
class BaseProvider(ABC):
    @abstractmethod
    async def complete(
        self,
        input_text: str,
        system_prompt: str | None = None,
        temperature: float = 0.0,
        seed: int | None = None,
    ) -> tuple[str, int]:
        """Return (output_text, latency_ms)."""
        ...

    @abstractmethod
    def estimate_tokens(self, text: str) -> int:
        """Estimate token count for text."""
        ...
```
## Implemented providers

### OpenAI

Source: `providers/openai.py`

- Uses the `AsyncOpenAI` client
- Passes `temperature` and `seed` directly to the API
- Handles models that reject these params (e.g., reasoning models) by caching rejections and retrying without them
- Logs a warning when params are dropped (results will not be reproducible)
### Anthropic

Source: `providers/anthropic.py`

- Uses the `AsyncAnthropic` client
- Passes `temperature` to the API
- The `seed` parameter is accepted but silently ignored by the API
- Even at `temperature=0.0`, outputs are non-deterministic per the official docs
### Google Gemini

Source: `providers/google.py`

- Uses the `google.genai` client
- Passes `temperature` and `seed` via `GenerateContentConfig`
- Supports `system_instruction` for system prompts
### Claude Code (subscription)

Source: `providers/claude_code.py`

- Uses `claude_agent_sdk` (optional dependency)
- No parameter control: `temperature=1.0` by default
- Non-deterministic by design
- For exploratory testing, not scientific drift detection
## Adding a new provider

- Create `src/pramana/providers/yourprovider.py`
- Subclass `BaseProvider`, decorate with `@register()`
- Implement `complete()` and `estimate_tokens()`
- Add the model prefix to `FALLBACK_MODELS` in `models.py`
- Add tests in `tests/test_providers_integration.py`
```python
import time

from pramana.providers.base import BaseProvider
from pramana.providers.registry import register

@register("yourprovider", "api", env_key="YOUR_API_KEY")
class YourProvider(BaseProvider):
    def __init__(self, model_id: str, api_key: str | None = None):
        self.model_id = model_id
        # Initialize client...

    async def complete(
        self,
        input_text: str,
        system_prompt: str | None = None,
        temperature: float = 0.0,
        seed: int | None = None,
    ) -> tuple[str, int]:
        start_ms = int(time.time() * 1000)
        # Call API...
        latency_ms = int(time.time() * 1000) - start_ms
        return output, latency_ms

    def estimate_tokens(self, text: str) -> int:
        return len(text) // 4
```
No manual edit to `__init__.py` is needed; the auto-discovery in `providers/__init__.py` will find and import your module automatically.
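The auto-discovery step can be sketched with the standard library — a simplified version, as the actual `providers/__init__.py` may differ:

```python
import importlib
import pkgutil

def autodiscover(package) -> list[str]:
    """Import every module in `package` so its @register decorators run."""
    imported = []
    for info in pkgutil.iter_modules(package.__path__):
        importlib.import_module(f"{package.__name__}.{info.name}")
        imported.append(info.name)
    return imported
```

Because registration happens as a side effect of import, importing every module in the package is all it takes for a new provider to show up in the registry.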