
Other Providers

Beyond the three built-in providers, Tenro supports custom provider registration.

Custom Provider Registration

Register a custom provider with construct.register_provider(), which maps your provider name to a built-in adapter:

from tenro import Provider, construct  # construct assumed importable from the package root
from tenro.simulate import llm
from tenro.testing import tenro

@tenro
def test_custom_provider():
    # Register provider with OpenAI-compatible response format
    construct.register_provider("mistral", adapter=Provider.OPENAI)

    llm.simulate("mistral", response="Hello from Mistral!")

    # Your code that calls the provider...

    llm.verify("mistral")

Adapter Mapping

The adapter parameter specifies which response format to use (see the sketch after the table):

Your Provider          API Format             Adapter
Mistral                OpenAI-compatible      Provider.OPENAI
Together               OpenAI-compatible      Provider.OPENAI
Groq                   OpenAI-compatible      Provider.OPENAI
Azure OpenAI           OpenAI-compatible      Provider.OPENAI
AWS Bedrock (Claude)   Anthropic-compatible   Provider.ANTHROPIC
Vertex AI              Gemini-compatible      Provider.GEMINI
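
As a sketch of how the table maps to code (the provider names here are illustrative, and construct is imported as in the examples above), two providers with different API formats register like this:

from tenro import Provider, construct
from tenro.simulate import llm
from tenro.testing import tenro

@tenro
def test_mixed_adapters():
    # OpenAI-compatible API -> OPENAI adapter
    construct.register_provider("groq", adapter=Provider.OPENAI)
    # Anthropic-compatible API (Bedrock Claude) -> ANTHROPIC adapter
    construct.register_provider("bedrock", adapter=Provider.ANTHROPIC)

    llm.simulate("groq", response="From Groq")
    llm.simulate("bedrock", response="From Bedrock")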

Default with Custom Providers

Custom providers work with construct.set_default_provider():

from tenro import Provider, construct
from tenro.simulate import llm
from tenro.testing import tenro

@tenro
def test_defaults():
    construct.register_provider("mistral", adapter=Provider.OPENAI)
    construct.set_default_provider("mistral")

    # All simulations now use Mistral
    llm.simulate(response="First")
    llm.simulate(response="Second")
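
Verification continues to use the registered name, as in the first example, so the test above can presumably end with:

    llm.verify("mistral")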

Custom Base URLs

When a provider SDK is pointed at a custom base_url (e.g., the OpenAI SDK calling Mistral's API), HTTP interception targets the built-in provider's domain by default, so requests to the custom domain go uncaught.

For custom domains, use @link_llm with target= to route simulations directly:

import openai
from tenro import Provider, link_llm
from tenro.simulate import llm
from tenro.testing import tenro

@link_llm(Provider.OPENAI)
def call_mistral(prompt: str) -> str:
    """Call Mistral via OpenAI SDK."""
    client = openai.OpenAI(
        base_url="https://api.mistral.ai/v1",
        api_key="test-key",
    )
    response = client.chat.completions.create(
        model="mistral-large-latest",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

@tenro
def test_mistral():
    # Route simulation to the specific function
    llm.simulate(Provider.OPENAI, target=call_mistral, response="Hello from Mistral!")

    result = call_mistral("Hi")

    assert result == "Hello from Mistral!"
    llm.verify(Provider.OPENAI)
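
The same routing should extend to async code. A minimal sketch, assuming @link_llm and target= work unchanged with async functions (not shown in the examples above):

import asyncio

import openai
from tenro import Provider, link_llm
from tenro.simulate import llm
from tenro.testing import tenro

@link_llm(Provider.OPENAI)
async def call_mistral_async(prompt: str) -> str:
    """Call Mistral via the async OpenAI client."""
    client = openai.AsyncOpenAI(
        base_url="https://api.mistral.ai/v1",
        api_key="test-key",
    )
    response = await client.chat.completions.create(
        model="mistral-large-latest",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

@tenro
def test_mistral_async():
    # Assumes async targets are routed the same way as sync ones
    llm.simulate(Provider.OPENAI, target=call_mistral_async, response="Hello!")

    result = asyncio.run(call_mistral_async("Hi"))

    assert result == "Hello!"
    llm.verify(Provider.OPENAI)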

Native Provider SDKs

For native provider SDKs (Mistral SDK, Cohere SDK, etc.), use @link_llm with target=:

from mistralai import Mistral
from tenro import Provider, link_llm
from tenro.simulate import llm
from tenro.testing import tenro

@link_llm(Provider.OPENAI)  # Mistral uses OpenAI-compatible format
def call_mistral_native(prompt: str) -> str:
    """Call Mistral using native SDK."""
    client = Mistral(api_key="test-key")
    response = client.chat.complete(
        model="mistral-large-latest",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

@tenro
def test_mistral_native():
    llm.simulate(Provider.OPENAI, target=call_mistral_native, response="Hello!")

    result = call_mistral_native("Hi")

    assert result == "Hello!"
    llm.verify(Provider.OPENAI)

SDKs Not Built on httpx

For SDKs that aren't built on httpx (e.g., those using requests or aiohttp), HTTP-level interception can't see the traffic, so @link_llm with target= is required:

from tenro import Provider, link_llm
from tenro.simulate import llm
from tenro.testing import tenro

@link_llm(Provider.OPENAI)
def my_llm_call(prompt: str) -> str:
    # some_sdk stands in for any SDK that doesn't use httpx
    return some_sdk.client.Client().generate(prompt)

@tenro
def test_other_sdk():
    llm.simulate(Provider.OPENAI, target=my_llm_call, response="Hello!")

    result = my_llm_call("test prompt")

    assert result == "Hello!"
    llm.verify(Provider.OPENAI)
