Quick start¶
Write your first test with Tenro in 5 minutes.
What you'll build¶
A simple agent that searches for documents and summarizes them, tested without any API calls.
Step 1: Create your agent¶
First, install your framework (here, LangChain) alongside Tenro:
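A minimal install command (assuming Tenro is published on PyPI as `tenro`; substitute your framework's package as needed):

```shell
pip install tenro langchain-openai langchain-core
```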
Then create your agent:
```python
# myapp/agent.py
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

from tenro import link_agent, link_tool

# NOTE: No @link_llm needed - Tenro intercepts HTTP automatically

@link_tool("search")
def search(query: str) -> list[str]:
    """Search for documents matching the query."""
    # In production, this calls your vector DB or search API
    return []

@link_agent("Researcher")
def researcher(topic: str) -> str:
    """Research a topic by searching and summarizing."""
    docs = search(topic)
    if not docs:
        return "No documents found."

    # LangChain handles LLM calls - Tenro intercepts at HTTP level
    prompt = ChatPromptTemplate.from_messages([
        ("user", "Summarize these documents: {docs}")
    ])
    chain = prompt | ChatOpenAI(model="gpt-4")
    return chain.invoke({"docs": docs}).content
```
First, install the OpenAI SDK alongside Tenro:
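A minimal install command (assuming Tenro is published on PyPI as `tenro`):

```shell
pip install tenro openai
```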
Then create your agent:
```python
# myapp/agent.py
from tenro import link_agent, link_llm, link_tool, Provider

@link_tool("search")
def search(query: str) -> list[str]:
    """Search for documents matching the query."""
    # In production, this calls your vector DB or search API
    return []

@link_llm(Provider.OPENAI)  # Optional - for targeted verification
def summarize(docs: list[str]) -> str:
    """Summarize the documents using OpenAI."""
    import openai

    response = openai.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": f"Summarize: {docs}"}],
    )
    return response.choices[0].message.content

@link_agent("Researcher")
def researcher(topic: str) -> str:
    """Research a topic by searching and summarizing."""
    docs = search(topic)
    if not docs:
        return "No documents found."
    return summarize(docs)
```
What do the decorators do?
- `@link_tool("search")`: Marks this function as a tool that can be simulated
- `@link_agent("Researcher")`: Marks this as an agent for tracking and verification
- `@link_llm(Provider.OPENAI)`: Optional, only for custom agents using raw OpenAI/Anthropic SDK calls
`Provider` is an enum with values like `Provider.OPENAI`, `Provider.ANTHROPIC`, and `Provider.GEMINI`. Import it with `from tenro import Provider`.
Step 2: Write your test¶
Create tests/test_agent.py:
```python
from tenro import Provider
from tenro.simulate import llm, tool
from tenro.testing import tenro

from myapp.agent import researcher

@tenro
def test_researcher_finds_and_summarizes():
    """Test that the researcher searches and summarizes correctly."""
    from myapp.agent import search  # Import the @link_tool decorated function

    # Arrange: Simulate the dependencies
    tool.simulate(search, result=["Doc 1: AI basics", "Doc 2: ML intro"])
    llm.simulate(Provider.OPENAI, response="AI is a field of computer science...")

    # Act: Run the agent
    result = researcher("artificial intelligence")

    # Assert: Verify behaviour
    tool.verify_many(search, count=1)
    llm.verify(Provider.OPENAI)
    assert "AI" in result

@tenro
def test_researcher_handles_no_results():
    """Test graceful handling when search returns nothing."""
    from myapp.agent import search  # Import the @link_tool decorated function

    # Arrange: Simulate empty search results
    tool.simulate(search, result=[])

    # Act: Run the agent
    result = researcher("obscure topic")

    # Assert: Verify no LLM call was made
    tool.verify_many(search, count=1)
    llm.verify_never(Provider.OPENAI)
    assert result == "No documents found."
```
Step 3: Run your tests¶
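Run the suite with pytest (assuming the `@tenro` decorator works with standard pytest collection):

```shell
pytest tests/test_agent.py
```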
Expected output:
```
tests/test_agent.py::test_researcher_finds_and_summarizes PASSED
tests/test_agent.py::test_researcher_handles_no_results PASSED
```
Tests run in milliseconds. No API calls. No costs.
What just happened?¶
- **Simulation**: `llm.simulate()` and `tool.simulate()` intercepted calls and returned your specified values.
- **Verification**: `tool.verify_many()` confirmed your agent called the right tools. The `assert` checks the agent's output.
- **Isolation**: Your test ran without network access, API keys, or external dependencies.
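Tenro does its interception at the HTTP layer, but the general idea of swapping a dependency for a canned result and then inspecting the call can be illustrated with a stdlib-only sketch using `unittest.mock`. The names here are hypothetical and this is not Tenro's implementation:

```python
from unittest import mock

def search(query: str) -> list[str]:
    """Stand-in for a real tool that would hit a search API."""
    raise RuntimeError("no network access in tests")

def researcher(topic: str) -> str:
    """Toy agent: searches, then 'summarizes' the results."""
    docs = search(topic)
    return f"Summary of {len(docs)} docs"

# Replace the tool with a canned result, run the agent, then check the call.
with mock.patch(f"{__name__}.search", return_value=["doc1", "doc2"]) as fake_search:
    result = researcher("AI")

print(result)  # Summary of 2 docs
```

The `with` block plays the role of `tool.simulate(...)`, and inspecting `fake_search.call_count` afterwards corresponds to `tool.verify_many(...)`.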
Key patterns¶
Simulate responses¶
```python
from tenro import Provider
from tenro.simulate import llm, tool

# Assuming search is a @link_tool decorated function:
# @link_tool("search")
# def search(query: str) -> list[str]: ...

# Single response (same every call)
llm.simulate(Provider.OPENAI, response="Hello")

# Sequential responses (different each call)
llm.simulate(Provider.OPENAI, responses=["First", "Second", "Third"])

# Tool results (use function reference)
tool.simulate(search, result=["doc1", "doc2"])
```
Verify behaviour¶
```python
from tenro import Provider
from tenro.simulate import llm, tool

# Assuming search and dangerous_operation are @link_tool decorated functions

# Verify at least once
llm.verify(Provider.OPENAI)

# Verify exact count (use verify_many even for count=1)
llm.verify_many(Provider.OPENAI, count=2)
tool.verify_many(search, count=1)  # Exactly once

# Verify arguments (use function reference)
tool.verify(search, query="AI trends")

# Verify content
llm.verify(output_contains="summary")

# Verify never called (use function reference)
tool.verify_never(dangerous_operation)
```
Next steps¶
- **Your framework**: See examples tailored to your framework: LangChain, CrewAI, LangGraph, and more.
- **How it works**: Understand HTTP interception and why `@link_llm` is optional.
- **Testing patterns**: Common patterns for simulations, verifications, and error handling.
- **API Reference**: Complete documentation for all Tenro functions.