LLM Providers¶
Tenro intercepts HTTP requests to LLM providers and returns simulated responses. Three providers have built-in support.
**Requires httpx-based SDKs**
Tenro supports provider SDKs built on httpx. Official SDKs from OpenAI, Anthropic, Google, and Mistral all work. See Compatibility for details.
- **OpenAI**: Chat completions API. GPT-4, GPT-4o, o1 models.
- **Anthropic**: Messages API. Claude Sonnet, Opus, Haiku.
- **Google Gemini**: Generate content API. Gemini 1.5 Flash, Pro, Ultra.
- **Others**: OpenAI-compatible APIs and custom providers.
Specifying Providers¶
Use the `Provider` enum for built-in providers:

```python
from tenro import Provider
from tenro.simulate import llm

llm.simulate(Provider.OPENAI, response="Hello!")
llm.simulate(Provider.ANTHROPIC, response="Bonjour!")
llm.simulate(Provider.GEMINI, response="Hola!")
```
**Why enums?**
The `Provider` enum prevents typos and enables IDE autocomplete. Passing strings like `"openai"` raises a helpful error suggesting the enum.
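As an illustration of the idea, a sketch of that validation in plain Python (the `resolve_builtin` helper and error message are hypothetical; Tenro's actual internals may differ):

```python
from enum import Enum

class Provider(Enum):
    OPENAI = "openai"
    ANTHROPIC = "anthropic"
    GEMINI = "gemini"

def resolve_builtin(value):
    """Accept only Provider members for built-ins; point string users at the enum.

    Registered custom provider IDs (e.g. "mistral") would be handled separately.
    """
    if isinstance(value, Provider):
        return value
    if isinstance(value, str):
        try:
            member = Provider(value.lower())
        except ValueError:
            raise TypeError(f"Unknown provider: {value!r}") from None
        # The string matches a built-in: suggest the enum member instead
        raise TypeError(f"Got string {value!r}; did you mean Provider.{member.name}?")
    raise TypeError(f"Unknown provider: {value!r}")
```

Because each built-in provider is a distinct enum member, a typo like `Provider.OPENIA` fails at attribute lookup rather than silently matching nothing at request time.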
Built-in Providers¶
These providers have automatic HTTP interception for their primary chat/completion endpoints. Other endpoints (embeddings, audio, images) are not yet supported.
| Provider | Enum | Capabilities |
|---|---|---|
| OpenAI | `Provider.OPENAI` | Text, tool calls, structured output |
| Anthropic | `Provider.ANTHROPIC` | Text, tool use |
| Gemini | `Provider.GEMINI` | Text, function calls |
```python
import openai

from tenro import link_agent, Provider
from tenro.simulate import llm
from tenro.testing import tenro

@link_agent
def my_agent(query: str) -> str:
    response = openai.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": query}],
    )
    return response.choices[0].message.content

@tenro
def test_my_agent():
    llm.simulate(Provider.OPENAI, response="Hello!")
    result = my_agent("Hi")
    assert result == "Hello!"
```
Default Provider (Recommended)¶
The best way to set a default provider is via a pytest fixture in `conftest.py`:

```python
# conftest.py
import pytest

from tenro import Provider

@pytest.fixture(autouse=True)
def _tenro_defaults(construct):
    construct.set_default_provider(Provider.OPENAI)
    return construct
```
Then tests stay clean and focused on behavior:
```python
from tenro.simulate import llm
from tenro.testing import tenro

@tenro
def test_my_agent():
    # No provider= needed; uses the fixture default
    llm.simulate(response="I'll help you with that.")
    llm.simulate(response="Here's the answer: 42")

    my_agent.run("What's the meaning of life?")

    llm.verify_many(count=2)  # Also uses the default
```
Explicit `provider=` always overrides the default when needed:

```python
from tenro import Provider
from tenro.simulate import llm

llm.simulate(response="OpenAI response")  # Uses default
llm.simulate(Provider.ANTHROPIC, response="Anthropic response")  # Overrides
```
When to use `set_default_provider()`¶
Use it when your test suite mostly uses one provider and you want to avoid repeating `provider=` across many tests:
- **Single-provider repo**: everything is OpenAI (or Anthropic, etc.)
- **Helper functions**: you have helpers that call `llm.simulate()`/`llm.verify()` and don't want to thread `provider` through every helper signature
- **Custom provider ID**: you use a registered ID (e.g., `"mistral"`) and want consistent grouping without repeating it everywhere
When not to use¶
Don't use `set_default_provider()` if explicit is clearer for your situation:

- Your suite regularly switches providers per test
- You want every test to self-document which provider it simulates
- You rely heavily on `target=` auto-detection and don't want defaults influencing grouping behavior
Parametrized provider suites¶
Run the same tests across multiple providers:
```python
# conftest.py
import pytest

from tenro import Provider

@pytest.fixture(params=[Provider.OPENAI, Provider.ANTHROPIC])
def multi_provider(request, construct):
    """Parametrized fixture that sets a default provider."""
    construct.set_default_provider(request.param)
    return construct
```
```python
from tenro.simulate import llm

def test_agent_works_with_any_provider(multi_provider):
    """Test runs twice: once with OpenAI, once with Anthropic."""
    llm.simulate(response="Hello!")
    result = my_agent("Hi")
    assert result == "Hello!"
```
Provider Resolution¶
Tenro resolves the provider in this order:
| Priority | Source | Example |
|---|---|---|
| 1 | Explicit `provider=` | `provider=Provider.OPENAI` |
| 2 | Target detection | `target=my_openai_fn` → detects from `@link_llm` |
| 3 | Default provider | `set_default_provider(...)` |
| 4 | Error | No provider determinable |
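The resolution order above can be sketched as plain Python (a hypothetical helper, not Tenro's actual code; in particular, the `_tenro_provider` attribute name is assumed):

```python
def resolve_provider(explicit=None, target=None, default=None):
    """Resolve a provider by priority: explicit > target detection > default > error."""
    if explicit is not None:
        return explicit  # 1. explicit provider=
    if target is not None:
        # 2. provider recorded on the function by @link_llm (attribute name assumed)
        detected = getattr(target, "_tenro_provider", None)
        if detected is not None:
            return detected
    if default is not None:
        return default  # 3. set_default_provider(...)
    raise ValueError("No provider determinable")  # 4. error
```

The key property is that an explicit `provider=` always wins, so a suite-wide default never silently changes a test that names its provider.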
Target Detection¶
When you use `target=` with a `@link_llm`-decorated function, Tenro detects the provider from the decorator or the module path:

```python
import openai

from tenro import link_llm, Provider
from tenro.simulate import llm

# Explicit provider in decorator (recommended)
@link_llm(Provider.OPENAI)
def call_openai(prompt: str) -> str:
    return openai.chat.completions.create(...)

# Provider detected from decorator
llm.simulate(target=call_openai, response="Hello!")
```
Provider detection patterns (used when the decorator doesn't specify a provider):

| If module path contains... | Provider detected |
|---|---|
| `openai` | `Provider.OPENAI` |
| `anthropic` | `Provider.ANTHROPIC` |
| `gemini` or `google.genai` | `Provider.GEMINI` |
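The module-path matching above amounts to a substring check, roughly like this sketch (hypothetical helper; Tenro's real detection logic may differ):

```python
def detect_from_module(module_path: str):
    """Map a function's module path to a provider name via substring match."""
    path = module_path.lower()
    if "openai" in path:
        return "openai"
    if "anthropic" in path:
        return "anthropic"
    if "gemini" in path or "google.genai" in path:
        return "gemini"
    return None  # no pattern matched; fall through to the default provider
```

This is why an explicit provider in the decorator is recommended: a wrapper defined in, say, `my_app.llm_helpers` matches none of the patterns and would otherwise depend on a configured default.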
Verification¶
Verification uses the same provider rules:
```python
from tenro import Provider
from tenro.simulate import llm

# Verify with enum
llm.verify(Provider.OPENAI)

# Verify with registered custom provider
llm.verify("mistral")

# Verify call count
llm.verify_many(count=3)
```
Capability reference¶
| Capability | Description | Providers |
|---|---|---|
| Text | Basic text generation | All |
| Tool calls | LLM requests tool execution | OpenAI, Anthropic, Gemini |
| Structured output | JSON Schema-based output | OpenAI |
| Streaming | Server-sent events | Coming soon |
Use `ToolCall(...)` in `responses=` for all providers. Tenro handles provider-specific formats internally.
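To illustrate what "provider-specific formats" means here, a rough sketch of how one generic tool call maps to two wire shapes. The `ToolCall` dataclass and converter functions below are hypothetical stand-ins, but the output shapes follow the public OpenAI (JSON-encoded `arguments` string) and Anthropic (`tool_use` block with an `input` object) response formats:

```python
import json
from dataclasses import dataclass

@dataclass
class ToolCall:
    name: str
    arguments: dict

def to_openai(tc: ToolCall) -> dict:
    # OpenAI tool calls carry arguments as a JSON-encoded string
    return {
        "type": "function",
        "function": {"name": tc.name, "arguments": json.dumps(tc.arguments)},
    }

def to_anthropic(tc: ToolCall) -> dict:
    # Anthropic tool_use blocks carry arguments as a plain object
    return {"type": "tool_use", "name": tc.name, "input": tc.arguments}
```

A single simulated `ToolCall("get_weather", {"city": "Paris"})` can therefore be rendered correctly for whichever provider the test targets, which is the normalization the note above describes.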
Compatibility¶
Tenro intercepts provider calls made through httpx.
Supported: Provider SDKs built on httpx (OpenAI, Anthropic, Google, Mistral). Agent frameworks like LangChain, CrewAI, and Pydantic AI use these SDKs, so they work too.
Not supported (yet): Clients using requests, aiohttp, urllib3, or custom transports.
Troubleshooting¶
If simulation doesn't trigger:
- Confirm you're using the official provider SDK
- Ensure you didn't configure the SDK to use a custom transport/session
- Use `@link_llm` with `target=` to route simulations directly
```python
from tenro import Provider, link_llm
from tenro.simulate import llm

@link_llm(Provider.OPENAI)
def my_llm_call(prompt: str) -> str:
    return some_sdk.client.Client().generate(prompt)

llm.simulate(
    Provider.OPENAI,
    target=my_llm_call,
    response="Hello!",
)
```
See also¶
- Compatibility - Full support matrix
- How Tenro works - How simulation works