# Anthropic
Simulate Anthropic messages in your tests.
> **Compatibility**
> See Compatibility for providers, transports, and framework recipes.
## How it works
Tenro intercepts the Anthropic SDK's outbound HTTP request and returns your configured response. No real network call is made.
```python
import anthropic

client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello"}],
)
# In tests, Tenro returns your simulated response
```
## Capabilities
| Capability | Supported | Notes |
|---|---|---|
| TEXT | Yes | Basic text generation |
| TOOL_USE | Yes | Anthropic-style tool use blocks |
| STREAMING | Coming soon | Server-sent events |
## Simulating responses
### Basic text
```python
from tenro import Provider
from tenro.simulate import llm

llm.simulate(Provider.ANTHROPIC, response="Hello!")
```
### With tool use
Use `ToolCall` to simulate LLM responses that include tool use:
```python
from tenro import Provider, ToolCall
from tenro.simulate import llm

# Single response with text + tool call (nested list = one LLM call)
llm.simulate(
    provider=Provider.ANTHROPIC,
    responses=[["I'll search for that.", ToolCall(search, query="AI")]],
)

# Tool call only (no text)
llm.simulate(
    provider=Provider.ANTHROPIC,
    responses=[ToolCall("search", query="AI")],
)
```
> **Provider format handled automatically**
> `ToolCall` uses a unified format. Tenro converts it to Anthropic's native format (with `input` instead of `arguments`) internally.
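To make that mapping concrete, here is a hypothetical sketch of the conversion in plain Python. The helper name `to_anthropic_tool_use` and the unified field names are illustrative assumptions, not Tenro's actual internals:

```python
# Illustrative only: not Tenro's real conversion code.
def to_anthropic_tool_use(name, arguments, block_id="toolu_01"):
    """Map a unified tool call onto an Anthropic-style tool_use block."""
    return {
        "type": "tool_use",
        "id": block_id,
        "name": name,
        "input": arguments,  # Anthropic uses "input" where others use "arguments"
    }

block = to_anthropic_tool_use("search", {"query": "AI"})
```

The key difference is only the field name: the call's arguments land under `input` in the Anthropic-native block.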
### Multi-turn conversations
```python
from tenro import Provider
from tenro.simulate import llm

llm.simulate(
    provider=Provider.ANTHROPIC,
    responses=[
        "Let me think about that...",  # First LLM call
        "Here's my answer: 42",        # Second LLM call
    ],
)
```
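The ordering can be pictured as a queue that is consumed one item per LLM call. This is a toy model for intuition, not Tenro's implementation:

```python
from collections import deque

# Toy model (not Tenro's internals): each simulated response is
# consumed in order, one per LLM call.
responses = deque([
    "Let me think about that...",  # returned on the first call
    "Here's my answer: 42",        # returned on the second call
])

def next_simulated_response():
    return responses.popleft()

first = next_simulated_response()
second = next_simulated_response()
```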
## Response format
Tenro returns a valid Anthropic Message object:
```json
{
  "id": "msg_...",
  "type": "message",
  "role": "assistant",
  "model": "claude-sonnet-4-20250514",
  "content": [
    {"type": "text", "text": "Hello!"}
  ],
  "stop_reason": "end_turn",
  "usage": {"input_tokens": 10, "output_tokens": 5}
}
```
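Because the simulated object mirrors Anthropic's Message shape, test assertions can read it the same way they would a real response. A minimal sketch over the plain-dict form shown above:

```python
# Same shape as the simulated Message above (plain dict for illustration).
message = {
    "id": "msg_test",
    "type": "message",
    "role": "assistant",
    "model": "claude-sonnet-4-20250514",
    "content": [{"type": "text", "text": "Hello!"}],
    "stop_reason": "end_turn",
    "usage": {"input_tokens": 10, "output_tokens": 5},
}

# Concatenate text blocks, as content is a list of typed blocks.
text = "".join(b["text"] for b in message["content"] if b["type"] == "text")
```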
## See also
- LLM Providers - All supported providers
- OpenAI - Compare with OpenAI's format