Compatibility reference

Check here for compatibility status. Provider and framework pages have usage details but may not reflect the latest support status.

Providers

| Provider  | Status    | Docs      |
| --------- | --------- | --------- |
| OpenAI    | Supported | OpenAI    |
| Anthropic | Supported | Anthropic |
| Gemini    | Supported | Gemini    |

HTTP libraries

Tenro intercepts HTTP requests made through httpx only. Most LLM provider SDKs (OpenAI, Anthropic, Google, Mistral) use httpx by default, so they work automatically.

| HTTP library        | Status        | Notes                                            |
| ------------------- | ------------- | ------------------------------------------------ |
| httpx               | Supported     | Used by OpenAI, Anthropic, Google, Mistral SDKs  |
| requests            | Not supported | Use @link_llm with target= as a workaround       |
| aiohttp             | Not supported | Use @link_llm with target= as a workaround       |
| Custom HTTP clients | Not supported | Use @link_llm with target= as a workaround       |
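The httpx-only limitation above comes down to httpx routing every request through a swappable transport layer, which a test harness can replace with one that returns canned responses. The following is a minimal, self-contained sketch of that mechanism — the class names are invented for illustration, and this is neither httpx's nor Tenro's actual code:

```python
class RealTransport:
    """Stand-in for a transport that would perform real network I/O."""
    def handle(self, request: dict) -> dict:
        raise RuntimeError("would perform real network I/O")

class SimulatedTransport:
    """Returns a canned chat-style response for any request."""
    def handle(self, request: dict) -> dict:
        return {"choices": [{"message": {"role": "assistant",
                                         "content": "simulated"}}]}

class Client:
    """Stand-in for an SDK's HTTP client with a swappable transport."""
    def __init__(self, transport=None):
        self.transport = transport or RealTransport()

    def post(self, url: str, payload: dict) -> dict:
        # Every request funnels through the transport, so swapping it
        # intercepts all traffic without touching the SDK's code.
        return self.transport.handle({"url": url, "json": payload})

client = Client(transport=SimulatedTransport())
reply = client.post("/v1/chat/completions", {"messages": []})
print(reply["choices"][0]["message"]["content"])  # prints: simulated
```

Libraries like requests and aiohttp don't expose a single transport seam of this kind, which is why they fall back to the @link_llm workaround.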

Endpoints

Simulation is supported only for endpoint families explicitly listed below. Anything not listed is unsupported.

| Endpoint family                        | Status        |
| -------------------------------------- | ------------- |
| Text generation (chat-style requests)  | Supported     |
| Embeddings                             | Not supported |
| Images                                 | Not supported |
| Audio / TTS / STT                      | Not supported |
| Realtime / WebSockets                  | Not supported |

Covered request shapes include chat-style prompts/messages (e.g., OpenAI chat.completions, Anthropic messages). Other API families (assistants, responses, batch) are unsupported unless listed here.
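For illustration, a covered chat-style request body might look like the following (OpenAI chat.completions shape; the model name is just an example):

```python
# Example of a covered chat-style request body: a model name plus a list
# of role/content messages (OpenAI chat.completions shape).
payload = {
    "model": "gpt-4o-mini",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello"},
    ],
}
```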

Feature coverage (streaming, tool calling, etc.) within supported endpoints is provider-specific. See provider pages for details.

Frameworks

| Framework   | Status       | Notes                                        |
| ----------- | ------------ | -------------------------------------------- |
| LangChain   | Experimental | Works when using OpenAI/Anthropic with httpx |
| CrewAI      | Experimental | Works when using OpenAI/Anthropic with httpx |
| LangGraph   | Experimental | Works when using OpenAI/Anthropic with httpx |
| Pydantic AI | Experimental | Works when using OpenAI/Anthropic with httpx |
| AutoGen     | Experimental | Works when using OpenAI/Anthropic with httpx |
| LlamaIndex  | Experimental | Works when using OpenAI/Anthropic with httpx |

Support definitions

  • Supported: Tested before every release. We fix breakages promptly.
  • Experimental: Tested, but may break when frameworks update.

Version policy

Tenro is pre-1.0. The API may change between releases. Breaking changes are noted in release notes.

When interception doesn't apply

If your simulated response isn't returned:

  1. Check that you're using a supported provider (see the Providers table above)
  2. Check that the SDK uses httpx transport (the default for the OpenAI and Anthropic SDKs)
  3. Or use @link_llm with target= to route simulations directly
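Step 3's target= routing can be pictured as replacing the LLM-calling function itself rather than intercepting HTTP. The sketch below is a hypothetical stand-in for that idea: only the names @link_llm and target= come from this page; fake_link_llm and call_model are invented for illustration and this is not Tenro's implementation:

```python
from functools import wraps

def call_model(prompt: str) -> str:
    """Pretend LLM call — would normally go through a non-httpx client."""
    raise RuntimeError("would hit the network")

def fake_link_llm(target_name: str, response: str):
    """Swap the named function in this module for the test's duration.

    Invented stand-in illustrating "route simulations directly to a target";
    not Tenro's @link_llm.
    """
    def decorator(test_fn):
        @wraps(test_fn)
        def wrapper():
            original = globals()[target_name]
            globals()[target_name] = lambda prompt: response
            try:
                return test_fn()
            finally:
                globals()[target_name] = original  # restore after the test
        return wrapper
    return decorator

@fake_link_llm("call_model", response="simulated reply")
def test_agent():
    return call_model("hello")

print(test_agent())  # prints: simulated reply
```

Because the replacement happens at the function level, this pattern works regardless of which HTTP library sits underneath.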

See Troubleshooting for more help.

See also