# Tenro
Tenro is a modern simulation framework for testing AI agents. Simulate multi-agent workflows and tool usage without burning tokens.
- No API costs — Tests run offline (no LLM calls)
- Deterministic — Simulate responses, errors, and tool results
- Workflow verification — Check tools, edge cases, and agent behaviours
## How it works
Tenro intercepts LLM calls at the HTTP level. This works with any framework that uses a supported provider SDK. No patches, no special configuration.
```python
from tenro import Provider
from tenro.simulate import llm
from tenro.testing import tenro

@tenro
def test_my_agent():
    # Simulate the LLM response
    llm.simulate(Provider.OPENAI, response="Hello!")

    # Run your agent - LangChain, CrewAI, OpenAI SDK, anything
    my_agent.run("query")

    # Verify behavior
    llm.verify()
```
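To see why HTTP-level interception needs no patches, consider how a test can swap the transport underneath an SDK while the agent code stays untouched. This is a toy stdlib-only sketch of the idea, not Tenro's actual implementation; `FakeTransport` and `ChatClient` are hypothetical names for illustration.

```python
import json

class FakeTransport:
    """Returns canned responses instead of performing network I/O."""

    def __init__(self, responses):
        self._responses = list(responses)

    def post(self, url, body):
        # Pop the next simulated response, ignoring the real endpoint.
        return json.dumps(self._responses.pop(0))

class ChatClient:
    """Stand-in for a provider SDK that speaks HTTP through a transport."""

    def __init__(self, transport):
        self._transport = transport

    def complete(self, prompt):
        raw = self._transport.post("https://api.example/v1/chat", {"prompt": prompt})
        return json.loads(raw)["content"]

# The "agent" only sees ChatClient; the test controls the transport.
client = ChatClient(FakeTransport([{"content": "Hello!"}]))
print(client.complete("Hi"))  # -> Hello!
```

Because the interception happens below the SDK, the same trick works regardless of which agent framework sits on top.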
## Choose your framework
- LangChain
- CrewAI
- LangGraph
- Pydantic AI
- AutoGen
- LlamaIndex
- Custom agents (raw OpenAI/Anthropic SDK calls)
## Write tests that read like specs
No patch decorators. No response builders. Just simulate and verify.
```python
from tenro import Provider, ToolCall
from tenro.simulate import llm, tool
from tenro.testing import tenro

from myapp.agent import get_weather, WeatherAgent

@tenro
def test_agent():
    tool.simulate(get_weather, result={"temp": 72, "condition": "sunny"})
    llm.simulate(
        Provider.OPENAI,
        responses=[
            ToolCall(get_weather, city="Paris"),
            "It's 72°F and sunny in Paris.",
        ],
    )

    result = WeatherAgent().run("Weather in Paris?")

    tool.verify(get_weather)
    llm.verify_many(Provider.OPENAI, count=2)
    assert result == "It's 72°F and sunny in Paris."
```
Without Tenro, the same test takes manual mocks, helper functions, and boilerplate:
```python
# test_helpers.py - you write and maintain this for each LLM provider
import json

from openai.types.chat import (
    ChatCompletion,
    ChatCompletionMessage,
    ChatCompletionMessageToolCall,
)
from openai.types.chat.chat_completion import Choice
from openai.types.chat.chat_completion_message_tool_call import Function

def mock_llm_response(content=None, tool_call=None):
    if tool_call:
        message = ChatCompletionMessage(
            role="assistant", content=None,
            tool_calls=[ChatCompletionMessageToolCall(
                id="call_abc", type="function",
                function=Function(
                    name=tool_call["name"],
                    arguments=json.dumps(tool_call["args"]),
                ),
            )],
        )
    else:
        message = ChatCompletionMessage(role="assistant", content=content, tool_calls=None)
    return ChatCompletion(
        id="chatcmpl-123", created=0, model="gpt-5", object="chat.completion",
        choices=[Choice(index=0, finish_reason="stop", message=message)],
    )
```
```python
# test_agent.py
from unittest.mock import patch

from test_helpers import mock_llm_response

@patch("myapp.tools.get_weather")
@patch("openai.chat.completions.create")
def test_agent(mock_llm, mock_weather):
    mock_weather.return_value = {"temp": 72, "condition": "sunny"}
    mock_llm.side_effect = [
        mock_llm_response(tool_call={"name": "get_weather", "args": {"city": "Paris"}}),
        mock_llm_response(content="It's 72°F and sunny in Paris."),
    ]

    result = my_agent.run("Weather in Paris?")

    assert result == "It's 72°F and sunny in Paris."
    mock_weather.assert_called_once_with(city="Paris")
```
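The manual version leans on `unittest.mock`'s `side_effect`: given a list, the mock returns one element per call, which is how the two `mock_llm_response(...)` values map onto the agent's two LLM turns. A minimal self-contained illustration:

```python
from unittest.mock import Mock

# side_effect given a list makes the mock return each element on
# successive calls - first the tool-call turn, then the final answer.
llm = Mock(side_effect=["tool-call turn", "final answer"])

assert llm() == "tool-call turn"
assert llm() == "final answer"
```

If the agent makes more calls than the list has elements, the mock raises `StopIteration`, so the response order in the test must match the agent's call order exactly.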
## Provider support
| Provider | Status |
|---|---|
| OpenAI | Supported |
| Anthropic | Supported |
| Google Gemini | Supported |