# Troubleshooting

Common issues and how to resolve them.
## Simulation not working

### Simulated response not returned
Symptom: Your test makes a real API call instead of returning the simulated response.
Causes and fixes:

- **Wrong provider specified**

    Ensure the `Provider` passed to `llm.simulate()` matches the SDK your agent actually calls (e.g. `Provider.OPENAI` for the OpenAI SDK).

- **Unsupported HTTP transport**

    Tenro intercepts httpx-based requests. If your SDK uses a different HTTP client (requests, aiohttp), use `@link_llm` with `target=`:

    ```python
    from tenro import Provider, link_llm
    from tenro.simulate import llm

    @link_llm(Provider.OPENAI)
    def my_llm_call(prompt: str) -> str:
        return custom_sdk.Client().generate(prompt)

    # Use target= to route simulation to the @link_llm function
    llm.simulate(
        Provider.OPENAI,
        target=my_llm_call,
        response="Hello!",
    )
    ```

- **Custom HTTP client configuration**

    If you configured the SDK with a custom httpx client or session, ensure it's not bypassing Tenro's interception.

- **Framework internal caching**

    Some frameworks cache LLM responses. Clear caches between tests or disable caching in test configuration.
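The caching fix is plain Python when the framework caches with `functools.lru_cache`. A minimal sketch (the cached function here is a stand-in, not a real framework API):

```python
from functools import lru_cache

# Stand-in for a framework-cached LLM call (hypothetical function)
@lru_cache(maxsize=None)
def cached_llm_call(prompt: str) -> str:
    return f"response to {prompt}"

cached_llm_call("hi")  # populates the cache
print(cached_llm_call.cache_info().currsize)  # 1

# Clear between tests so a stale response can't mask your simulation
cached_llm_call.cache_clear()
print(cached_llm_call.cache_info().currsize)  # 0
```

Frameworks with their own cache layers usually expose an equivalent clear or disable switch in their test configuration.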
### Wrong response format
Symptom: The response structure doesn't match what your code expects.
Fix: Ensure you're using the correct provider. Each provider has its own response format:
```python
from tenro import Provider
from tenro.simulate import llm

# OpenAI format
llm.simulate(Provider.OPENAI, response="Hello!")
# Returns: {"choices": [{"message": {"content": "Hello!"}}], ...}

# Anthropic format
llm.simulate(Provider.ANTHROPIC, response="Hello!")
# Returns: {"content": [{"type": "text", "text": "Hello!"}], ...}
```
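Because the two shapes nest the text differently, extraction code written for one provider silently breaks on the other. A standalone sketch using plain dicts that mirror the formats above:

```python
# Plain-dict mirrors of the two response shapes shown above
openai_resp = {"choices": [{"message": {"content": "Hello!"}}]}
anthropic_resp = {"content": [{"type": "text", "text": "Hello!"}]}

# OpenAI nests the text under choices -> message -> content
openai_text = openai_resp["choices"][0]["message"]["content"]

# Anthropic nests it under content -> text blocks
anthropic_text = anthropic_resp["content"][0]["text"]

print(openai_text)     # Hello!
print(anthropic_text)  # Hello!
```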
## Tool simulation issues

### Tool not being simulated
Symptom: Your tool function runs with real implementation instead of returning simulated values.
Causes and fixes:

- **Missing `@link_tool` decorator**

    A tool can only be simulated if its function is decorated with `@link_tool`.

- **Using function reference vs string**

    ```python
    from tenro.simulate import tool
    from tenro import link_tool

    @link_tool("search")
    def search_documents(query: str) -> list[str]: ...

    # Recommended: use function reference (refactor-safe)
    tool.simulate(search_documents, result=["doc1"])

    # Alternative: use full dotted path as string
    tool.simulate("myapp.tools.search_documents", result=["doc1"])

    # This does NOT work - bare strings require full path
    tool.simulate("search", result=["doc1"])  # Error!
    ```
## Verification failures

### "Expected N calls, got M"

Symptom: `llm.verify_many(count=2)` fails with the wrong count.
Causes:

- **Agent making more or fewer calls than expected**

    Debug by checking the actual call count before asserting an exact number.

- **Framework making internal calls**

    Some frameworks make warm-up or validation calls. Account for these in the expected count.
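A debugging sketch covering both causes. It uses `construct.get_llm_calls()`, the accessor shown elsewhere in this guide; the `construct` object here is a stand-in so the snippet runs on its own:

```python
from types import SimpleNamespace

# Stand-in for the Tenro construct fixture with two recorded calls
construct = SimpleNamespace(get_llm_calls=lambda: [
    SimpleNamespace(request="warm-up"),
    SimpleNamespace(request="real prompt"),
])

# First, check what was actually recorded
calls = construct.get_llm_calls()
print(f"Actual LLM calls: {len(calls)}")  # 2

# If the framework makes one warm-up call, include it in the count
expected_agent_calls = 1
framework_overhead = 1
assert len(calls) == expected_agent_calls + framework_overhead
```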
### "Call not found with expected arguments"
Symptom: Verification fails even though the call was made.
Fix: Use partial matching or check the actual call arguments:
```python
from tenro.simulate import llm

# Partial match (recommended for robustness)
llm.verify(output_contains="expected text")

# Debug: inspect actual calls
calls = construct.get_llm_calls()
for call in calls:
    print(f"Request: {call.request}")
    print(f"Response: {call.response}")
```
## Import errors

### "ModuleNotFoundError: No module named 'tenro'"

Fix: Install Tenro in your environment with `pip install tenro`.
### "ImportError: cannot import name 'Construct'"

Fix: Ensure Tenro is installed (`pip install tenro`). The `Construct` class is exported from the main module.
## IDE issues

### No autocomplete for simulation functions

Symptom: Your IDE doesn't show autocomplete for `llm.simulate()` or `tool.verify()`.
Fix: Import the modules explicitly:
```python
from tenro.simulate import llm, tool
from tenro.testing import tenro

@tenro
def test_my_agent():
    llm.simulate(...)  # IDE shows autocomplete
    tool.verify(...)   # IDE shows autocomplete
```
The module functions (`llm.simulate`, `tool.verify`, etc.) have full type annotations for IDE support.
## Async test issues

### RuntimeWarning: coroutine was never awaited

Symptom: Your test passes but shows a `RuntimeWarning: coroutine ... was never awaited` warning.
Cause: Your agent's method is async, but you're calling it without `await` in a sync test.

Fix: Use pytest-asyncio and mark the test as async:
```python
import pytest

from tenro.testing import tenro
from tenro import Construct, Provider
from tenro.simulate import llm

@pytest.mark.asyncio
async def test_my_agent(construct):
    llm.simulate(Provider.ANTHROPIC, response="Hello!")

    agent = create_my_agent()
    result = await agent.execute("Hello")  # await the async method

    llm.verify(output_contains="Hello!")
```
Ensure pytest-asyncio is installed: `pip install pytest-asyncio`.
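The warning itself is generic asyncio behavior, reproducible without Tenro: calling an async method without awaiting it returns a coroutine object rather than running the method. A minimal illustration (the `Agent` class is a stand-in):

```python
import asyncio

class Agent:
    async def execute(self, prompt: str) -> str:
        return f"echo: {prompt}"

agent = Agent()

# Forgetting await yields a coroutine object, not a result
coro = agent.execute("Hello")
print(type(coro).__name__)  # coroutine
coro.close()  # closed here only to keep this demo warning-free

# Awaiting (here via asyncio.run) actually executes the method
result = asyncio.run(agent.execute("Hello"))
print(result)  # echo: Hello
```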
## Framework-specific issues

### LangChain: "LLM call not intercepted"

LangChain uses the OpenAI/Anthropic SDKs internally. Ensure you're using a supported provider:
```python
from langchain_openai import ChatOpenAI  # Uses OpenAI SDK → works
from langchain_community.llms import CustomLLM  # May not work
```
### CrewAI: "Multiple agents not simulated correctly"
For multi-agent crews, simulate responses for each agent:
```python
from tenro import Provider
from tenro.simulate import llm

# Each agent gets its own simulated responses
llm.simulate(
    provider=Provider.OPENAI,
    responses=[
        "Agent 1 response",  # First agent
        "Agent 2 response",  # Second agent
        "Agent 1 again",     # Back to first
    ],
)
```
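The `responses` list behaves like a queue consumed one entry per LLM call, in call order, whichever agent happens to call. A behavioral sketch of that pattern (not Tenro's implementation):

```python
from collections import deque

# Queue of simulated responses, mirroring the list above
responses = deque([
    "Agent 1 response",
    "Agent 2 response",
    "Agent 1 again",
])

def simulated_llm_call(prompt: str) -> str:
    # Every intercepted call pops the next response, regardless of agent
    return responses.popleft()

print(simulated_llm_call("task for agent 1"))      # Agent 1 response
print(simulated_llm_call("task for agent 2"))      # Agent 2 response
print(simulated_llm_call("follow-up for agent 1")) # Agent 1 again
```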
## Still stuck?
- Check Compatibility for supported providers and transports
- Read How Tenro works to understand HTTP interception
- Review API Reference for parameter details
For bugs and feature requests, visit GitHub Issues.