Tracing¶
Tenro records agent runs and tool calls marked with the `@link_agent` and `@link_tool` decorators. LLM calls are captured automatically via HTTP interception. You can visualize these traces in the terminal or access them programmatically.
What gets recorded¶
| Source | Span type | What it captures |
|---|---|---|
| `@link_agent` | Agent run | Agent name, input, output, child spans |
| `@link_tool` | Tool call | Tool name, arguments, result |
| `@link_llm` | LLM scope | Function name, input args, return value |
| HTTP interception | LLM call | Provider, model, messages, response |
LLM calls are captured automatically via HTTP interception (no decorator needed).
Enabling trace output¶
With pytest¶
Set the `TENRO_TRACE` environment variable to see traces after each test:
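For example (the variable name and accepted values are listed under Configuration options below):

```shell
TENRO_TRACE=true pytest
```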
Or use the --tenro-trace flag:
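For example, assuming Tenro's pytest plugin is installed so that the flag is registered:

```shell
pytest --tenro-trace
```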
Without pytest¶
Use TraceRenderer to print traces in scripts or other test frameworks:
```python
from tenro import Construct, link_agent, link_tool
from tenro.trace import TraceRenderer

@link_tool
def fetch_data(id: int) -> dict:
    return {"id": id, "name": "Item"}

@link_agent
def processor(item_id: int) -> str:
    data = fetch_data(item_id)
    return f"Processed: {data['name']}"

# Run your agent inside a Construct
construct = Construct()
with construct:
    result = processor(42)

# Render the trace (root agents are the agent runs with no parent)
renderer = TraceRenderer()
root_agents = [a for a in construct.agent_runs if a.parent_agent_id is None]
renderer.render(root_agents, "my_script")
print(f"Result: {result}")
```
Reading trace output¶
Here's what a trace looks like for a single agent:
```
Trace: test_weather_agent
────────────────────────────────────────────────────────────────
🤖 weather_agent
├─ → user: "What is the weather in Paris?"
│
├─ 🧠 gpt-4
│ ├─ → prompt: "What is the weather in Paris?"
│ └─ ← "tool_call: get_weather(Paris)"
├─ 🔧 get_weather
│ ├─ → 'Paris'
│ └─ ← "Sunny, 22°C"
├─ 🧠 gpt-4
│ ├─ → prompt: "Sunny, 22°C"
│ └─ ← "The weather in Paris is sunny at 22°C."
│
└─ ← "The weather in Paris is sunny at 22°C."
────────────────────────────────────────────────────────────────
Summary: 1 agent | 2 LLM calls | 1 tool call | Total: 300ms
```
The trace shows the typical agent loop: LLM decides to call a tool, tool executes, LLM uses the result to respond.
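To make the loop concrete, here is a minimal stand-alone sketch of the control flow the trace above records. The model and tool here are plain-Python stubs, not Tenro APIs; in real use the LLM call would go to a provider and be captured by HTTP interception:

```python
def fake_llm(prompt: str) -> str:
    # Stand-in model: requests a tool for the question,
    # then answers once it sees the tool result.
    if "Sunny" in prompt:
        return "The weather in Paris is sunny at 22°C."
    return "tool_call: get_weather(Paris)"

def get_weather(city: str) -> str:
    # Stand-in tool with a canned result.
    return "Sunny, 22°C"

def weather_agent(question: str) -> str:
    reply = fake_llm(question)                 # 1. LLM decides to call a tool
    if reply.startswith("tool_call:"):
        tool_result = get_weather("Paris")     # 2. Tool executes
        reply = fake_llm(tool_result)          # 3. LLM uses the result to respond
    return reply

print(weather_agent("What is the weather in Paris?"))
# → The weather in Paris is sunny at 22°C.
```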
Multi-agent traces¶
When an agent calls another agent as a tool, Tenro links them so you can see delegation:
```
Trace: test_research_team
────────────────────────────────────────────────────────────────
🤖 coordinator
├─ → user: "Write AI report"
│
├─ 🧠 gpt-4
│ ├─ → prompt: "Write AI report"
│ └─ ← "tool_call: run_agent(name=researcher)"
├─ 🔧 run_agent
│ ├─ → 'researcher', 'AI'
│ │
│ ├─ 🤖 researcher
│ │ ├─ → user: "AI"
│ │ │
│ │ ├─ 🔧 search_docs
│ │ │ ├─ → 'AI'
│ │ │ └─ ← "Found: doc1, doc2"
│ │ ├─ 🧠 gpt-4
│ │ │ ├─ → prompt: "Analyze docs"
│ │ │ └─ ← "Key findings: AI is transforming technology."
│ │ │
│ │ └─ ← "Key findings: AI is transforming technology."
│ │
│ └─ ← "Key findings: AI is transforming technology."
├─ 🧠 gpt-4
│ ├─ → prompt: "Key findings: AI is transforming technology."
│ └─ ← "Done! The researcher found key AI findings."
│
└─ ← "Done! The researcher found key AI findings."
────────────────────────────────────────────────────────────────
Summary: 2 agents | 3 LLM calls | 2 tool calls | Total: 500ms
```
The coordinator's LLM calls the run_agent tool, which spawns the researcher agent nested inside.
With @link_llm decorator¶
When you use @link_llm on a function that calls an LLM, the trace shows both layers:
```
Trace: test_entity_extraction
────────────────────────────────────────────────────────────────
🤖 entity_extractor
├─ → user: "The quick brown fox and lazy dog"
│
├─ 🔭 extract_entities
│ ├─ → 'The quick brown fox and lazy dog'
│ │
│ ├─ 🧠 gpt-4
│ │ ├─ → prompt: "Extract entities from: The quick brown fox..."
│ │ └─ ← "entities: fox, dog"
│ │
│ └─ ← ['fox', 'dog']
│
└─ ← ['fox', 'dog']
────────────────────────────────────────────────────────────────
Summary: 1 agent | 1 LLM call | 0 tool calls | Total: 150ms
```
Two layers of LLM instrumentation:
- 🔭 LLMScope: Your function boundary. What arguments did you pass in? What did your function return after post-processing?
- 🧠 LLMCall: The raw HTTP call. What prompt went to the model? What did the model respond?
In this example, the function returns ['fox', 'dog'] (a parsed list), while the raw LLM response was '{"entities": ["fox", "dog"]}' (JSON string).
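To make the two layers concrete, here is a minimal stand-alone sketch (plain Python, no Tenro imports) of the post-processing step that separates the raw LLMCall response from the LLMScope return value:

```python
import json

# What HTTP interception records as the 🧠 LLMCall response: raw model text.
raw_response = '{"entities": ["fox", "dog"]}'

def parse_entities(raw: str) -> list[str]:
    # What an @link_llm-decorated function would return,
    # i.e. the 🔭 LLMScope output after post-processing.
    return json.loads(raw)["entities"]

print(parse_entities(raw_response))  # → ['fox', 'dog']
```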
Note: Most users won't see LLMScope in their traces. It only appears when you explicitly use `@link_llm`. If you're using an LLM framework (LangChain, LlamaIndex, etc.), the LLM calls happen inside framework code you don't control, so you'll only see the 🧠 LLMCall spans from HTTP interception.
Understanding the symbols¶
| Symbol | Meaning |
|---|---|
| 🤖 | Agent run |
| 🔧 | Tool call |
| 🔭 | LLM scope (function decorated with @link_llm) |
| 🧠 | LLM call (HTTP request to provider) |
| → | Input (what was passed in) |
| ← | Output (what was returned) |
| ERR | Span ended with an error |
Error traces¶
When a span fails, the trace shows ERR and the error message:
```
Trace: test_error
────────────────────────────────────────────────────────────────
🤖 my_agent ERR
├─ → user: "test"
│
├─ 🧠 gpt-4 ERR
│ ├─ → prompt: "Hello"
│ └─ ← error: Rate limit exceeded
│
└─ ← error: LLM call failed
────────────────────────────────────────────────────────────────
Summary: 1 agent | 1 LLM call | 0 tool calls | Total: 200ms
```
Programmatic access¶
Access recorded spans directly from the Construct object:
All spans (flat lists)¶
```python
construct = Construct()
with construct:
    result = my_agent("task")  # Call your @link_agent decorated function

# All agent runs (flattened)
for agent in construct.agent_runs:
    print(f"{agent.display_name}: {agent.status} ({agent.latency_ms:.0f}ms)")

# All tool calls
for tool in construct.tool_calls:
    print(f"{tool.display_name}: {tool.args} → {tool.result}")

# All LLM calls
for llm in construct.llm_calls:
    print(f"{llm.provider}/{llm.model}: {llm.response}")
```
Hierarchical access¶
Navigate the trace tree using spans. Each agent run has a spans list containing its child spans:
```python
# Get root agents (top-level, no parent)
roots = [a for a in construct.agent_runs if a.parent_agent_id is None]

for root in roots:
    print(f"Root: {root.display_name}")
    for span in root.spans:
        # Check span type using the class name
        span_type = type(span).__name__
        if span_type == "AgentRun":
            print(f"  Child agent: {span.display_name}")
        elif span_type == "ToolCall":
            print(f"  Tool: {span.display_name}")
        elif span_type == "LLMScope":
            print(f"  LLM function: {span.caller_name}")
        elif span_type == "LLMCall":
            print(f"  LLM call: {span.model}")
```
Helper methods on agent runs¶
Each agent run has methods to filter its spans:
```python
agent = construct.agent_runs[0]

# Get LLM calls made by this agent (and optionally nested agents)
llm_calls = agent.get_llm_calls(recursive=True)

# Get tool calls made by this agent
tool_calls = agent.get_tool_calls(recursive=True)

# Get child agents
children = agent.get_child_agents(recursive=True)
```
Span properties¶
These properties are available on spans accessed via construct.agent_runs, construct.tool_calls, and construct.llm_calls.
Agent run¶
| Property | Type | Description |
|---|---|---|
| `display_name` | `str \| None` | Human-readable agent name (from decorator) |
| `target_path` | `str` | Fully qualified path for verification matching |
| `input_data` | `Any` | Positional arguments passed to the agent |
| `kwargs` | `dict` | Keyword arguments passed to the agent |
| `output_data` | `Any` | Return value from the agent |
| `spans` | `list[BaseSpan]` | Child spans (agents, tools, LLMs) |
| `status` | `str` | `"running"`, `"completed"`, or `"error"` |
| `latency_ms` | `float` | Execution time in milliseconds |
| `error` | `str \| None` | Error message if failed |
Tool call¶
| Property | Type | Description |
|---|---|---|
| `display_name` | `str \| None` | Human-readable tool name (from decorator) |
| `target_path` | `str` | Fully qualified path for verification matching |
| `args` | `tuple` | Positional arguments |
| `kwargs` | `dict` | Keyword arguments |
| `result` | `Any` | Return value |
| `status` | `str` | `"running"`, `"completed"`, or `"error"` |
| `latency_ms` | `float` | Execution time in milliseconds |
| `error` | `str \| None` | Error message if failed |
LLM scope¶
Created by the `@link_llm` decorator. Captures your function's input and output.
| Property | Type | Description |
|---|---|---|
| `caller_name` | `str` | Function name |
| `caller_signature` | `str` | Function signature |
| `input_data` | `tuple` | Positional arguments passed to function |
| `input_kwargs` | `dict` | Keyword arguments passed to function |
| `output_data` | `Any` | Return value from function |
| `status` | `str` | `"running"`, `"completed"`, or `"error"` |
| `latency_ms` | `float` | Execution time in milliseconds |
| `error` | `str \| None` | Error message if failed |
LLM call¶
Captured via HTTP interception when your code calls OpenAI, Anthropic, or other supported providers.
| Property | Type | Description |
|---|---|---|
| `provider` | `str` | Provider name (`"openai"`, `"anthropic"`, etc.) |
| `model` | `str \| None` | Model identifier |
| `messages` | `list[dict]` | Messages sent to the LLM |
| `response` | `str \| None` | Raw response text from model |
| `llm_scope_id` | `str \| None` | ID of parent LLMScope (if inside `@link_llm`) |
| `token_usage` | `dict \| None` | Token counts if available |
| `status` | `str` | `"running"`, `"completed"`, or `"error"` |
| `latency_ms` | `float` | Execution time in milliseconds |
| `error` | `str \| None` | Error message if failed |
Configuration options¶
Control trace output with environment variables:
| Variable | Default | Values | Description |
|---|---|---|---|
| `TENRO_TRACE` | not set | `true`, `1`, `yes` | Enable trace output |
| `TENRO_TRACE_PREVIEW` | `true` | `true`, `1`, `yes`, `false`, `0`, `no` | Show input/output previews |
| `TENRO_TRACE_PREVIEW_LENGTH` | `80` | integer | Max characters for previews |
Examples:
```shell
# Enable traces with shorter previews
TENRO_TRACE=true TENRO_TRACE_PREVIEW_LENGTH=40 pytest

# Enable traces but hide input/output values
TENRO_TRACE=true TENRO_TRACE_PREVIEW=false pytest
```
See also¶
- Linking decorators - How to mark agents, tools, and LLMs
- Construct - The test harness that records traces