Guides¶
How-to guides for common testing tasks with Tenro.
These guides help you accomplish specific goals. For conceptual understanding, see Concepts. For complete API details, see API Reference.
Available guides¶
- **Compatibility**: Supported providers, HTTP transports, and framework compatibility.
- **Troubleshooting**: Common issues and how to resolve them.
- **Testing patterns**: Common patterns for simulating responses, verifying calls, and testing error handling.
- **Tracing**: Capture and visualize agent execution with spans for debugging and analysis.
Quick reference¶
Simulate responses¶
from tenro import Provider
from tenro.simulate import llm, tool
# Assuming search is a @link_tool decorated function:
# @link_tool("search")
# def search(query: str) -> list[str]: ...
# Single LLM response (same every call)
llm.simulate(Provider.OPENAI, response="Hello")
# Sequential LLM responses (different each call)
llm.simulate(Provider.OPENAI, responses=["First", "Second"])
# Tool results (use function reference)
tool.simulate(search, result=["doc1", "doc2"])
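The response-queue semantics above (one fixed response repeated forever vs. a sequence consumed in order) can be sketched in plain Python. `FakeLLM` below is a hypothetical stand-in for illustration only, not part of Tenro's API:

```python
from collections import deque


class FakeLLM:
    """Minimal stand-in illustrating response= vs. responses=.

    A single fixed response is returned on every call; a sequence
    is consumed one item per call.
    """

    def __init__(self, response=None, responses=None):
        self._fixed = response
        self._queue = deque(responses or [])

    def complete(self, prompt: str) -> str:
        if self._fixed is not None:
            return self._fixed        # same every call
        return self._queue.popleft()  # next item each call


fake = FakeLLM(responses=["First", "Second"])
assert fake.complete("hi") == "First"
assert fake.complete("hi") == "Second"
```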
Verify behaviour¶
from tenro import Provider
from tenro.simulate import llm, tool
# Verify at least once (default)
llm.verify(Provider.OPENAI)
# Verify exact count (use verify_many even for count=1)
llm.verify_many(Provider.OPENAI, count=2)
tool.verify_many(search, count=1) # Exactly once, not "at least once"
# Verify content
llm.verify(output_contains="expected text")
# Verify arguments (use function reference)
tool.verify(search, query="AI trends")
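Under the hood, this style of verification amounts to recording every call and asserting on count and arguments. A plain-Python sketch of that pattern follows; `CallRecorder` is a hypothetical illustration of the pattern, not Tenro's implementation:

```python
class CallRecorder:
    """Records each call so a test can assert count and arguments."""

    def __init__(self, func):
        self._func = func
        self.calls = []  # one kwargs dict per call

    def __call__(self, **kwargs):
        self.calls.append(kwargs)
        return self._func(**kwargs)

    def verify(self, **expected):
        # "At least once" with matching arguments
        assert any(
            all(call.get(k) == v for k, v in expected.items())
            for call in self.calls
        )

    def verify_many(self, count):
        # Exact call count
        assert len(self.calls) == count


search = CallRecorder(lambda query: ["doc1", "doc2"])
search(query="AI trends")
search.verify(query="AI trends")  # at least once with these args
search.verify_many(count=1)       # exactly once
```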
Test errors¶
from tenro import Provider
from tenro.simulate import llm
# Simulate API errors (agent sees real failure)
llm.simulate(
    provider=Provider.OPENAI,
    responses=[ConnectionError("Rate limited"), "Recovered!"],
)
See Testing error handling for retry patterns.
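The error-simulation semantics above (exception instances in `responses` are raised; later items are returned normally) pair naturally with a retry loop in the agent under test. A plain-Python sketch of that interaction, assuming those queue semantics; `make_flaky_llm` and `call_with_retry` are illustrative names, not Tenro APIs:

```python
from collections import deque


def make_flaky_llm(responses):
    """Queue-backed fake: exception instances are raised, others returned."""
    queue = deque(responses)

    def call(prompt: str) -> str:
        item = queue.popleft()
        if isinstance(item, Exception):
            raise item  # agent sees a real failure
        return item

    return call


def call_with_retry(llm_call, prompt, attempts=3):
    """Retry on ConnectionError, re-raising after the last attempt."""
    for attempt in range(attempts):
        try:
            return llm_call(prompt)
        except ConnectionError:
            if attempt == attempts - 1:
                raise


flaky = make_flaky_llm([ConnectionError("Rate limited"), "Recovered!"])
assert call_with_retry(flaky, "hi") == "Recovered!"
```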