Frameworks¶
Choose your framework to see testing examples tailored to your stack.
- LangChain: Test LangChain chains, agents, and tools.
- CrewAI: Test CrewAI agents, crews, and tasks.
- LangGraph: Test LangGraph state graphs and workflows.
- Pydantic AI: Test Pydantic AI agents and structured outputs.
- AutoGen: Test AutoGen multi-agent conversations.
- LlamaIndex: Test LlamaIndex RAG pipelines and agents.
- Custom agents: Test agents built with raw LLM SDK calls (OpenAI, Anthropic, etc.).
How Tenro works with frameworks¶
Tenro intercepts LLM calls at the HTTP level, not at the function level. This means:
- No framework patches: Tenro doesn't modify LangChain, CrewAI, or any other framework.
- Works automatically: `llm.simulate()` intercepts HTTP requests to supported LLM providers.
- No `@link_llm` needed: framework users don't need to decorate their LLM calls.
```python
from tenro import link_agent, Provider
from tenro.simulate import llm
from tenro.testing import tenro

@link_agent
def my_agent(query: str) -> str:
    # Your framework code makes LLM calls normally:
    chain = prompt | ChatOpenAI()  # LangChain
    crew.kickoff()                 # CrewAI
    agent.run_sync("query")        # Pydantic AI
    return result

# This works WITHOUT @link_llm because Tenro intercepts HTTP
@tenro
def test_my_agent():
    llm.simulate(Provider.OPENAI, response="Hello!")
    my_agent("test query")
    # Tenro intercepted the HTTP request and returned your simulated response
    llm.verify(Provider.OPENAI)
```
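Because interception happens at the HTTP boundary, any client that speaks HTTP can be pointed at a stand-in endpoint without code changes. The following is a minimal standard-library sketch of that general idea, not Tenro's actual implementation: a local server plays the role of the provider API and returns a canned chat completion.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# A fake "provider" that answers every chat completion request with a
# canned response, regardless of which client library sent it.
class FakeProvider(BaseHTTPRequestHandler):
    def do_POST(self):
        # Consume the request body, then return a fixed completion.
        self.rfile.read(int(self.headers.get("Content-Length", 0)))
        body = json.dumps(
            {"choices": [{"message": {"role": "assistant", "content": "Hello!"}}]}
        ).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

server = HTTPServer(("127.0.0.1", 0), FakeProvider)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Any HTTP client hitting this endpoint gets the simulated response.
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/v1/chat/completions",
    data=json.dumps({"messages": [{"role": "user", "content": "hi"}]}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.load(resp)["choices"][0]["message"]["content"]

print(reply)  # Hello!
server.shutdown()
```

Because the fake lives at the HTTP layer, the client code above could be swapped for a framework's internal client and the interception would still work, which is the property the bullet points above describe.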
When to use `@link_llm`¶
`@link_llm` is only needed for custom agents that make raw provider SDK calls such as `openai.chat.completions.create()`. Framework users don't need it.
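For such custom agents, the pattern looks roughly like the sketch below. The decorator placement is an assumption inferred from the note above (that `@link_llm` marks functions making raw SDK calls); check the Tenro reference for the actual signature.

```python
from tenro import link_agent, link_llm

# Assumed pattern: decorate the function that makes the raw SDK call
# so Tenro can track it (placement is an assumption, not confirmed API).
@link_llm
def call_model(prompt: str) -> str:
    # e.g. openai.chat.completions.create(...) via the raw SDK
    ...

@link_agent
def my_custom_agent(query: str) -> str:
    return call_model(query)
```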
See How Tenro works for the full explanation, or Compatibility for the support matrix.