
Frameworks

Choose your framework to see testing examples tailored to your stack.


How Tenro works with frameworks

Tenro intercepts LLM calls at the HTTP level, not at the function level. This means:

  • No framework patches: Tenro doesn't modify LangChain, CrewAI, or any framework
  • Works automatically: llm.simulate() intercepts HTTP requests to supported LLM providers
  • No @link_llm needed: Framework users don't need to decorate LLM calls
from tenro import link_agent, Provider
from tenro.simulate import llm
from tenro.testing import tenro

@link_agent
def my_agent(query: str) -> str:
    # Your framework code makes LLM calls normally. Pick your stack:
    result = (prompt | ChatOpenAI()).invoke(query)  # LangChain
    # result = crew.kickoff()                       # CrewAI
    # result = agent.run_sync(query)                # Pydantic AI
    return str(result)

# This works WITHOUT @link_llm because Tenro intercepts HTTP
@tenro
def test_my_agent():
    llm.simulate(Provider.OPENAI, response="Hello!")

    my_agent("test query")

    # Tenro intercepted the HTTP request and returned your simulated response
    llm.verify(Provider.OPENAI)
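The mechanism behind this can be sketched with only the standard library. The endpoint, payload shape, and helper names below are hypothetical, and this is not Tenro's actual implementation: it only illustrates why patching the transport means the calling code never changes.

```python
# Stdlib-only sketch of HTTP-level interception (hypothetical endpoint
# and payload; not Tenro's internals).
import io
import json
import urllib.request
from unittest import mock

def ask_llm(prompt: str) -> str:
    """Stands in for framework code: it simply makes an HTTP request."""
    req = urllib.request.Request(
        "https://api.example.com/v1/chat",  # hypothetical provider URL
        data=json.dumps({"prompt": prompt}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["text"]

def fake_urlopen(req, *args, **kwargs):
    # Patching the transport layer: every outbound request gets a canned
    # response, and ask_llm itself is never modified.
    return io.BytesIO(json.dumps({"text": "Hello!"}).encode())

with mock.patch("urllib.request.urlopen", fake_urlopen):
    result = ask_llm("test query")
# result == "Hello!" and no real network call was made
```

Because the patch sits below every client library, it works the same whether `ask_llm` is hand-written or buried inside a framework.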

When to use @link_llm

@link_llm is only needed for custom agents that call a provider SDK directly, such as openai.chat.completions.create(). Framework users don't need it.

See How Tenro works for the full explanation, or Compatibility for the support matrix.