tenro.linking

Decorators for linking agents, LLMs, and tools to Tenro's simulation system.

Use these decorators to mark functions that should be intercepted during testing.

Overview

Decorator Purpose Who uses it
@link_tool Mark a tool/function call Everyone
@link_agent Mark an agent entry point Everyone
@link_llm Mark an LLM API call Custom agents only (optional)

Framework users: You don't need @link_llm

If you're using LangChain, CrewAI, LangGraph, or similar frameworks, skip @link_llm. Tenro intercepts LLM calls at the HTTP level, so llm.simulate() works automatically.

See How Tenro works for details.

Quick example

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from tenro import link_agent, link_tool

@link_tool
def search(query: str) -> list[str]:
    return vector_db.search(query)

@link_agent
def researcher(topic: str) -> str:
    docs = search(topic)

    # LangChain handles LLM calls internally. No @link_llm needed.
    prompt = ChatPromptTemplate.from_template("Summarize: {docs}")
    chain = prompt | ChatOpenAI() | StrOutputParser()
    return chain.invoke({"docs": "\n".join(docs)})

For a custom agent built on a raw LLM SDK, add @link_llm at the call boundary:

import openai
from tenro import link_agent, link_llm, link_tool

@link_tool
def search(query: str) -> list[str]:
    """Search for documents."""
    return vector_db.search(query)

@link_llm  # Provider inferred from HTTP call
def summarize(text: str) -> str:
    """Summarize text using OpenAI."""
    response = openai.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": f"Summarize: {text}"}],
    )
    return response.choices[0].message.content

@link_agent  # Uses function name "researcher"
def researcher(topic: str) -> str:
    """Research a topic."""
    docs = search(topic)
    return summarize("\n".join(docs))

@link_tool

Mark functions that interact with external systems (APIs, databases, file systems).

Usage patterns

@link_tool                              # Uses function name: "get_weather"
@link_tool("weather")                   # Custom name: "weather"
@link_tool("multi", entry_points="run") # Explicit entry method for classes
@link_tool("multi", entry_points=["run", "stream"])  # Multiple entry methods

If you don't provide a name, Tenro uses the function's own name by default.

Custom names are optional. They're useful for code readability but don't affect how Tenro locates or patches your function.

Function-level decoration

@link_tool
def get_weather(city: str) -> dict:
    return weather_api.get(city)

@link_tool("db_query")
def query_db(sql: str) -> list:
    return db.execute(sql)

Class-level decoration

When you decorate a class, Tenro automatically wraps entry methods:

  • run, invoke, _run, _arun, arun, ainvoke, __call__

@link_tool("search")
class SearchTool:
    """A tool that searches documents."""

    def run(self, query: str) -> list[str]:  # Automatically wrapped
        return self.db.search(query)

    def _helper(self):  # NOT wrapped (not an entry method)
        ...

Use entry_points to specify which methods should be wrapped when decorating a class (instead of relying on the default list):

@link_tool("multi_search", entry_points=["search", "fetch"])
class MultiSearchTool:
    def search(self, query: str) -> list[str]: ...  # Wrapped
    def fetch(self, url: str) -> str: ...           # Wrapped
    def _internal(self): ...                        # NOT wrapped

This is useful when building tools as classes (common in LangChain and CrewAI):

from langchain.tools import BaseTool

@link_tool("calculator")
class CalculatorTool(BaseTool):
    name = "calculator"
    description = "Adds two numbers"

    def _run(self, a: int, b: int) -> str:  # LangChain calls _run
        return str(a + b)
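Conceptually, class-level decoration iterates over the entry-point names and wraps only those it finds on the class, leaving everything else untouched. A simplified sketch of that behavior (illustrative only, not Tenro's actual implementation — a real wrapper would record a span before delegating):

```python
from functools import wraps

# Default entry-method names, per the list above.
DEFAULT_ENTRY_POINTS = ("run", "invoke", "_run", "_arun", "arun", "ainvoke", "__call__")

def wrap_entry_methods(cls, entry_points=DEFAULT_ENTRY_POINTS):
    """Wrap each matching entry method defined on the class; skip everything else."""
    for attr in entry_points:
        method = cls.__dict__.get(attr)
        if not callable(method):
            continue
        @wraps(method)
        def wrapper(self, *args, _method=method, **kwargs):
            # A real decorator would record a span here before delegating.
            return _method(self, *args, **kwargs)
        setattr(cls, attr, wrapper)
    return cls

@wrap_entry_methods
class SearchTool:
    def run(self, query):   # wrapped
        return [query.upper()]

    def _helper(self):      # not an entry method, untouched
        return "internal"
```

Because the wrapper uses functools.wraps, the wrapped method keeps its original name and docstring, which matters for frameworks that introspect tools.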

Simulation and verification

Once you've decorated your function with @link_tool, you can simulate and verify it in tests:

from tenro.simulate import tool
# Pass the function object (recommended, refactor-safe)
tool.simulate(get_weather, result={"temp": 72})
tool.verify_many(get_weather, count=1)

# Or use the full module path as a string
tool.simulate("myapp.tools.get_weather", result={"temp": 72})

Using the function object is recommended because it stays correct when you rename or move the function.

For inspecting recorded calls, see Tracing.

@link_agent

Mark top-level agent functions or classes.

Usage patterns

@link_agent                                    # Uses function name: "customer_support"
@link_agent("CustomerSupport")                 # Custom name: "CustomerSupport"
@link_agent("Agent", entry_points=["run"])     # Explicit entry method for classes

If you don't provide a name, Tenro uses the function's own name by default.

The entry_points parameter works the same as for @link_tool: use it to specify custom entry methods for classes.

Examples

@link_agent
def customer_support(query: str) -> str:
    """Handle customer support queries."""
    ...

@link_agent("ResearchAssistant")
class ResearchAssistant:
    def run(self, topic: str) -> str:
        ...

Simulation and verification

Once you've decorated your function with @link_agent, you can simulate and verify it in tests:

from tenro.simulate import agent
# Pass the function or class object (recommended, refactor-safe)
agent.simulate(customer_support, result="Issue resolved")
agent.verify_many(customer_support, count=1)

# Or use the full module path as a string
agent.simulate("myapp.agents.customer_support", result="Issue resolved")

Using the function object is recommended because it stays correct when you rename or move the function.

For inspecting recorded runs, see Tracing.

@link_llm

Mark functions that call LLM providers directly.

For custom agent builders only

@link_llm is designed for developers building agents with raw LLM SDKs (OpenAI, Anthropic, etc.).

Framework users cannot use it because frameworks like LangChain, CrewAI, and LangGraph hide LLM calls inside their internal implementation. Tenro handles these via HTTP interception automatically.

Usage patterns

@link_llm                                   # Provider inferred from HTTP call
@link_llm()                                 # Provider inferred from HTTP call
@link_llm(Provider.OPENAI)                  # Explicit provider
@link_llm(Provider.OPENAI, model="gpt-4")   # With model info

When to use explicit provider

Specifying the provider acts as a safeguard:

from tenro import link_llm, Provider

@link_llm(Provider.OPENAI)  # Expect OpenAI calls only
def call_openai(prompt: str) -> str:
    # If this accidentally calls Anthropic, you'll get an error
    return openai.chat.completions.create(...)

If the function makes an HTTP call to a different provider than specified, Tenro will raise an error during validation.

Feature With @link_llm Without @link_llm
caller_signature call_openai(prompt: str) -> str None
caller_location /path/to/file.py:42 None
llm_scope_id Links LLMCall to the decorated function None
Simulation Targeted via target=my_function Provider-level only
Error messages Show which function failed Generic "LLM call failed"

Bottom line: HTTP interception works without @link_llm. Use @link_llm when you need targeted simulation or want traceability.

Examples

from tenro import link_llm, Provider

@link_llm  # Provider auto-detected
def call_gpt(prompt: str) -> str:
    response = openai.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

@link_llm(Provider.ANTHROPIC)  # Explicit provider for safeguard
def call_claude(prompt: str) -> str:
    response = anthropic.messages.create(
        model="claude-3-opus-20240229",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

Supported providers

  • Provider.OPENAI: OpenAI API
  • Provider.ANTHROPIC: Anthropic API
  • Provider.GEMINI: Google Gemini API

Decision guide

Scenario Use @link_llm? Reason
LangChain, CrewAI, LangGraph No Can't decorate framework internals
Pydantic AI, AutoGen, LlamaIndex No Can't decorate framework internals
Single raw SDK function Optional Nice for tracing
Multiple raw SDK functions Recommended Distinguish which function made each call
Want provider safeguard Yes Catches accidental wrong-provider calls

Linking framework agents

For agents built with frameworks like LangChain or CrewAI, link the entry point only:

# LangChain
@link_agent  # Uses function name
def run_langchain_agent(input: str) -> str:
    agent = create_langchain_agent()
    return agent.invoke(input)

# CrewAI
@link_agent("CrewAIAgent")  # Explicit name
def run_crew(task: str) -> str:
    crew = Crew(agents=[...], tasks=[...])
    return crew.kickoff()

register()

Register third-party functions for simulation without decorating them.

When to use register() vs @link_tool

Use @link_tool when you own the function. Use register() when you don't control the source code (third-party libraries, framework internals).

You can also wrap a third-party function with @link_tool instead of using register():

@link_tool
def search(query: str) -> str:
    return third_party_search(query)  # thin wrapper

This is simpler, but only intercepts calls through your wrapper. If the library internally calls third_party_search() from its own code, those calls won't be simulated or traced. Use register() when the callable your agent or framework invokes lives in code you don't control.

Important: When using a @link_tool wrapper, simulate and verify the wrapper function (e.g., search), not the original third-party callable (e.g., third_party_search).

How it works

register() marks a function so Tenro can intercept it during tests. Unlike @link_tool (which wraps the function), register() works at the function object level, so code that already holds a reference to that callable can still be intercepted.

# conftest.py — register in a fixture, available across all tests
import pytest
from tenro.simulate import register
from third_party_lib import search

@pytest.fixture(autouse=True)
def _register_third_party():
    register(search)

# test_my_agent.py
from tenro.simulate import tool
from third_party_lib import search
import tenro

@tenro.simulate
def test_agent_with_third_party_tool():
    tool.simulate(search, result="simulated results")

    result = Agent().run("test query")

    assert result == "simulated results"

Idempotent registration

register() is idempotent — calling it multiple times on the same function is safe. A common pattern is to place it in an autouse=True fixture in conftest.py so the callable is available across all tests.

Requirements

A function can be registered when all of the following are true:

Requirement Why
Pure Python function C extensions can't be intercepted
CPython runtime Uses CPython-specific internals
No closure variables Closures capture state that can't be transferred

If a function can't be registered, register() raises TenroSimulationSetupError with a specific reason.
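These requirements line up with interception at the function-object level. One CPython technique consistent with all three is replacing a function's __code__ in place, which redirects even references captured earlier — and which only works for pure-Python, closure-free functions on CPython. A minimal sketch (illustrative; not necessarily how Tenro implements it):

```python
def third_party_search(query):
    return f"real results for {query}"

# A framework captured a direct reference at import time; ordinary
# module-attribute monkeypatching would miss this.
held_reference = third_party_search

def _simulated(query):
    return "simulated results"

# Swap the code object in place: every existing reference now runs the
# simulated body. CPython requires a matching closure layout, hence the
# "no closure variables" requirement above.
third_party_search.__code__ = _simulated.__code__
```

After the swap, held_reference("x") returns "simulated results" even though the framework never re-imported the function.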

Examples

LangChain third-party tools

# conftest.py
import pytest
from tenro.simulate import register
from langchain_community import tools as lc_tools

@pytest.fixture(autouse=True)
def _register_langchain():
    register(lc_tools.DuckDuckGoSearchRun.invoke)
    register(lc_tools.WikipediaQueryRun.invoke)

# test_agent.py
from langchain_community import tools as lc_tools
from tenro.simulate import tool, llm
from tenro import Provider
import tenro

@tenro.simulate
def test_agent_with_langchain_tools():
    tool.simulate(
        lc_tools.DuckDuckGoSearchRun.invoke,
        result="AI agents are autonomous systems...",
    )
    tool.simulate(
        lc_tools.WikipediaQueryRun.invoke,
        result="Artificial intelligence agents...",
    )
    llm.simulate(Provider.OPENAI, response="Summary of AI agents.")

    result = MyResearchAgent().run("What are AI agents?")

    assert "AI" in result

Async functions

# conftest.py
import pytest
from tenro.simulate import register
from third_party_lib import third_party_fetch

@pytest.fixture(autouse=True)
def _register_third_party():
    register(third_party_fetch)

# test_agents.py
from tenro.simulate import tool
from third_party_lib import third_party_fetch
import tenro

@tenro.simulate
async def test_async_tool():
    tool.simulate(third_party_fetch, result="fetched content")

    result = await third_party_fetch("https://example.com")
    assert result == "fetched content"

Local vs remote third-party tools

A third-party tool may do purely local work, or it may call an external API/database internally. That distinction does not change how @link_tool or register() work — both operate at the Python callable boundary. Choose based on ownership and interception scope:

  • Use @link_tool when your agent calls a wrapper function you control.
  • Use register() when the callable lives in third-party/framework code and you need to intercept it directly.

If you need to test the underlying HTTP/client behavior rather than the tool boundary, use a lower-level network mock instead of tool simulation.

Framework methods

When registering framework tool methods (e.g., LangChain BaseTool.invoke), register the class-level method, not an instance-bound method:

# Correct — class-level method
register(lc_tools.DuckDuckGoSearchRun.invoke)

# Wrong — instance-bound method
tool_instance = lc_tools.DuckDuckGoSearchRun()
register(tool_instance.invoke)  # Raises TenroSimulationSetupError
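The reason is Python's descriptor protocol: looking the method up on the class yields the one stable underlying function, while every lookup on an instance builds a fresh bound-method object, so there is no single object to patch:

```python
class SearchTool:
    def invoke(self, query):
        return query.upper()

tool = SearchTool()

# Class access returns the plain function itself: one stable object to register.
assert SearchTool.invoke is SearchTool.invoke

# Instance access creates a new bound-method wrapper on every lookup.
assert tool.invoke is not tool.invoke

# Both bound methods delegate to the same underlying function.
assert tool.invoke.__func__ is SearchTool.invoke
```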

API

from tenro.simulate import register

register(func) — Register a callable for simulation.

**Parameters:**

- `func` — The callable to register. Must be a pure Python function.

**Returns:** The same callable (unchanged).

**Raises:** `TenroSimulationSetupError` if the function can't be registered.

Reference

Linking utilities for agents, LLMs, and tools.

Provides decorators for registering functions, classes, and framework objects with the Tenro system for testing and observability.

link_agent(name: F) -> F
link_agent(name: str | None = None, *, id: str | None = None, version: str | None = None, entry_points: str | list[str] | None = None) -> Callable[[F], F]
link_agent(name: str | Callable[..., Any] | None = None, *, id: str | None = None, version: str | None = None, entry_points: str | list[str] | None = None) -> Callable[..., Any]

Decorator to register agent functions, classes, or objects with Tenro.

When a Construct is active, the decorator records an agent span. Otherwise, the function/method executes normally. Set TENRO_LINKING_ENABLED=false to disable decorator wrapping and return the original target unchanged.

Supports:

  • Sync and async functions
  • Classes with auto-detected entry methods (or explicit via entry_points=)
  • Framework objects (patches invoke/run methods)

For classes, the decorator wraps ALL matching entry methods with a re-entrancy guard, so method delegation (e.g., invoke → stream) creates only one span.
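The guard can be pictured as a per-thread flag checked by every wrapped entry method; only the outermost call records a span, and delegation passes straight through. A simplified sketch (a counter stands in for span recording; illustrative, not Tenro's implementation):

```python
import threading
from functools import wraps

_in_span = threading.local()  # per-thread re-entrancy flag (illustrative)

def guarded(method):
    """Only the outermost wrapped entry method records a 'span' (here: a counter)."""
    @wraps(method)
    def wrapper(self, *args, **kwargs):
        if getattr(_in_span, "active", False):
            return method(self, *args, **kwargs)        # delegation: no second span
        _in_span.active = True
        try:
            self.spans = getattr(self, "spans", 0) + 1  # stand-in for span recording
            return method(self, *args, **kwargs)
        finally:
            _in_span.active = False
    return wrapper

class Agent:
    @guarded
    def invoke(self, task):
        return self.stream(task)   # delegates to another wrapped entry method

    @guarded
    def stream(self, task):
        return f"ran {task}"
```

Even though invoke delegates to stream and both are wrapped, only one "span" is recorded per outer call.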

Can be used with or without parentheses:

  • @link_agent
  • @link_agent()
  • @link_agent("CustomName")
  • @link_agent(name="router", id="agt_router", version="2026.04.11")

Parameters:

Name Type Description Default
name str | Callable[..., Any] | None

Agent name for the span. If None, uses function/class name. Can also be the target itself when used without parentheses.

None
id str | None

Stable agent identifier that survives renames. Falls back to name, then target_path when not set.

None
version str | None

Agent version string (e.g., "2026.04.11", "1.4.2").

None
entry_points str | list[str] | None

For classes only. Explicit method name(s) to wrap. Can be a single string or list of strings. If None, auto-detects common entry methods (run, invoke, execute, call, stream, etc.).

None

Returns:

Type Description
Callable[..., Any]

Decorated target that registers with active Construct.

Raises:

Type Description
ValueError

If decorating a class and no entry method is found.

Examples:

>>> @link_agent
... def simple_agent(task: str) -> str:
...     return "done"
>>>
>>> @link_agent("PlannerBot")
... def plan_trip(destination: str) -> str:
...     return agent.run(destination)
>>>
>>> @link_agent("WriterAgent")
... class WriterAgent:
...     async def execute(self, prompt: str) -> str:
...         return "result"
>>>
>>> @link_agent(name="router", id="agt_router", version="2026.04.11")
... class SupportRouter:
...     def run(self, task: str) -> str: ...
Source code in tenro/linking/decorators.py
def link_agent(
    name: str | Callable[..., Any] | None = None,
    *,
    id: str | None = None,
    version: str | None = None,
    entry_points: str | list[str] | None = None,
) -> Callable[..., Any]:
    """Decorator to register agent functions, classes, or objects with Tenro.

    When a Construct is active, the decorator records an agent span. Otherwise,
    the function/method executes normally. Set TENRO_LINKING_ENABLED=false to
    disable decorator wrapping and return the original target unchanged.

    Supports:
    - Sync and async functions
    - Classes with auto-detected entry methods (or explicit via entry_points=)
    - Framework objects (patches invoke/run methods)

    For classes, the decorator wraps ALL matching entry methods with a
    re-entrancy guard, so method delegation (e.g., invoke → stream)
    creates only one span.

    Can be used with or without parentheses:
    - @link_agent
    - @link_agent()
    - @link_agent("CustomName")
    - @link_agent(name="router", id="agt_router", version="2026.04.11")

    Args:
        name: Agent name for the span. If None, uses function/class name.
            Can also be the target itself when used without parentheses.
        id: Stable agent identifier that survives renames.
            Falls back to name, then target_path when not set.
        version: Agent version string (e.g., ``"2026.04.11"``, ``"1.4.2"``).
        entry_points: For classes only. Explicit method name(s) to wrap.
            Can be a single string or list of strings. If None, auto-detects
            common entry methods (run, invoke, execute, call, stream, etc.).

    Returns:
        Decorated target that registers with active Construct.

    Raises:
        ValueError: If decorating a class and no entry method is found.

    Examples:
        >>> @link_agent
        ... def simple_agent(task: str) -> str:
        ...     return "done"
        >>>
        >>> @link_agent("PlannerBot")
        ... def plan_trip(destination: str) -> str:
        ...     return agent.run(destination)
        >>>
        >>> @link_agent("WriterAgent")
        ... class WriterAgent:
        ...     async def execute(self, prompt: str) -> str:
        ...         return "result"
        >>>
        >>> @link_agent(name="router", id="agt_router", version="2026.04.11")
        ... class SupportRouter:
        ...     def run(self, task: str) -> str: ...
    """
    resolved_name: str | None = None if callable(name) and not isinstance(name, str) else name

    def decorator(target: Any) -> Any:
        if not _is_linking_enabled():
            return target

        target_type = detect_target_type(target)
        agent_name: str = (
            resolved_name if resolved_name else getattr(target, "__name__", None) or str(target)
        )

        if target_type == TargetType.CLASS:
            return _decorate_agent_class(
                target, agent_name, entry_points, agent_id=id, version=version
            )
        elif target_type == TargetType.FRAMEWORK_OBJECT:
            return _patch_agent_object(
                target, agent_name, entry_points, agent_id=id, version=version
            )
        else:
            if entry_points is not None:
                raise TypeError(
                    f"@link_agent('{agent_name}', entry_points=...): "
                    f"entry_points is only valid for classes, not functions"
                )
            return _wrap_agent_function(target, agent_name, agent_id=id, version=version)

    if callable(name) and not isinstance(name, str):
        # name is the target function when used as @link_agent without parens
        return cast("Callable[..., Any]", decorator(name))

    return decorator
link_llm(provider: F) -> F
link_llm(provider: str | None = None, model: str | None = None) -> Callable[[F], F]
link_llm(provider: str | Callable[..., Any] | None = None, model: str | None = None) -> Callable[..., Any]

Decorator to mark functions as LLM call boundaries.

Creates an LLMScope (transparent annotation span) when a Construct is active. HTTP interception will create LLMCall spans inside this scope. The scope captures caller info for debugging but is transparent for parent attribution.

The provider can be specified explicitly or inferred automatically from HTTP interception (URL pattern matching) or simulation configuration.

Set TENRO_LINKING_ENABLED=false to disable decorator wrapping and return the original function unchanged.

Can be used with or without parentheses:

  • @link_llm
  • @link_llm()
  • @link_llm(Provider.OPENAI)
  • @link_llm(provider=Provider.OPENAI, model="gpt-4")

Parameters:

Name Type Description Default
provider str | Callable[..., Any] | None

LLM provider (e.g., Provider.OPENAI, Provider.ANTHROPIC), or the decorated function when used without parentheses. If None, provider is inferred from HTTP interception.

None
model str | None

Model identifier (e.g., "gpt-4", "claude-3").

None

Returns:

Type Description
Callable[..., Any]

Decorated function that creates LLMScope when Construct is active.

Examples:

>>> @link_llm  # Provider inferred from HTTP call
... def call_llm(prompt: str) -> str:
...     return client.chat.completions.create(...)
>>>
>>> @link_llm(Provider.OPENAI, model="gpt-4")  # Explicit provider
... def call_openai(prompt: str) -> str:
...     return openai_client.chat.completions.create(...)
Source code in tenro/linking/decorators.py
def link_llm(
    provider: str | Callable[..., Any] | None = None,
    model: str | None = None,
) -> Callable[..., Any]:
    """Decorator to mark functions as LLM call boundaries.

    Creates an LLMScope (transparent annotation span) when a Construct is active.
    HTTP interception will create LLMCall spans inside this scope. The scope
    captures caller info for debugging but is transparent for parent attribution.

    The provider can be specified explicitly or inferred automatically from HTTP
    interception (URL pattern matching) or simulation configuration.

    Set TENRO_LINKING_ENABLED=false to disable decorator wrapping and return the
    original function unchanged.

    Can be used with or without parentheses:
    - @link_llm
    - @link_llm()
    - @link_llm(Provider.OPENAI)
    - @link_llm(provider=Provider.OPENAI, model="gpt-4")

    Args:
        provider: LLM provider (e.g., Provider.OPENAI, Provider.ANTHROPIC), or
            the decorated function when used without parentheses. If None,
            provider is inferred from HTTP interception.
        model: Model identifier (e.g., "gpt-4", "claude-3").

    Returns:
        Decorated function that creates LLMScope when Construct is active.

    Examples:
        >>> @link_llm  # Provider inferred from HTTP call
        ... def call_llm(prompt: str) -> str:
        ...     return client.chat.completions.create(...)
        >>>
        >>> @link_llm(Provider.OPENAI, model="gpt-4")  # Explicit provider
        ... def call_openai(prompt: str) -> str:
        ...     return openai_client.chat.completions.create(...)
    """
    resolved_provider: str | None = (
        None if callable(provider) and not isinstance(provider, str) else provider
    )

    def decorator(func: F) -> F:
        if not _is_linking_enabled():
            return func

        if getattr(func, ATTR_WRAPPED, False):
            return func

        # Reject classes - link_llm is for functions only
        if inspect.isclass(func):
            from tenro.errors import TenroConfigError

            raise TenroConfigError(
                f"@link_llm cannot be applied to class '{func.__name__}'. "
                "Use @link_llm on functions only. "
                "For class-based LLM wrappers, use @link_tool or @link_agent instead."
            )

        sig = inspect.signature(func)
        caller_name = func.__name__
        caller_signature = f"{func.__qualname__}{sig}"
        try:
            file = inspect.getsourcefile(func) or inspect.getfile(func)
            line = inspect.getsourcelines(func)[1]
            caller_location: str | None = format_file_location(file, line)
        except (OSError, TypeError):
            caller_location = None

        # Generate target path for dispatch lookup
        target_path = f"{func.__module__}.{func.__qualname__}"

        # Stamp identity on original for bidirectional resolution
        _stamp_identity_on_original(func, target_path)

        if inspect.isasyncgenfunction(func):

            @wraps(func)
            async def asyncgen_wrapper(*args: Any, **kwargs: Any) -> Any:
                construct = get_active_construct()
                if not construct:
                    async for item in func(*args, **kwargs):
                        yield item
                    return

                # Create scope for the streaming LLM call
                scope = create_llm_scope_span(
                    caller_name,
                    provider=resolved_provider,
                    model=model,
                    caller_signature=caller_signature,
                    caller_location=caller_location,
                    input_data=args,
                    input_kwargs=kwargs,
                )
                lifecycle = construct._lifecycle
                parent_span_id = lifecycle.start_span_manual(scope)

                error: Exception | None = None
                collected_chunks: list[Any] = []
                try:
                    gen, simulated = await dispatch_asyncgen(target_path, func, args, kwargs)
                    if simulated:
                        # Create LLMCall span for tracking/verification
                        effective_model = get_simulation_model(target_path, model)
                        span = create_llm_call_span(
                            provider=resolved_provider or "custom",
                            model=effective_model,
                            target_path=target_path,
                            llm_scope_id=scope.span_id,
                            response="",
                        )
                        span.simulated = True
                        span_parent_id = lifecycle.start_span_manual(span)
                        try:
                            async for item in gen:
                                collected_chunks.append(item)
                                yield item
                            # Update response with collected chunks
                            span.response = "".join(str(c) for c in collected_chunks)
                        finally:
                            lifecycle.end_span_manual(span, span_parent_id)
                    else:
                        async for item in gen:
                            collected_chunks.append(item)
                            yield item
                    scope.output_data = collected_chunks
                except Exception as e:
                    error = e
                    raise
                finally:
                    if error is not None:
                        lifecycle.error_span_manual(scope, parent_span_id, error)
                    else:
                        lifecycle.end_span_manual(scope, parent_span_id)

            setattr(asyncgen_wrapper, ATTR_WRAPPED, True)
            setattr(asyncgen_wrapper, ATTR_LINKED_TYPE, "llm")
            setattr(asyncgen_wrapper, ATTR_TARGET_PATHS, (target_path,))
            return asyncgen_wrapper  # type: ignore[return-value]

        if inspect.isgeneratorfunction(func):

            @wraps(func)
            def gen_wrapper(*args: Any, **kwargs: Any) -> Any:
                construct = get_active_construct()
                if not construct:
                    yield from func(*args, **kwargs)
                    return

                # Create scope for the streaming LLM call
                scope = create_llm_scope_span(
                    caller_name,
                    provider=resolved_provider,
                    model=model,
                    caller_signature=caller_signature,
                    caller_location=caller_location,
                    input_data=args,
                    input_kwargs=kwargs,
                )
                lifecycle = construct._lifecycle
                parent_span_id = lifecycle.start_span_manual(scope)

                error: Exception | None = None
                collected_chunks: list[Any] = []
                try:
                    gen, simulated = dispatch_gen(target_path, func, args, kwargs)
                    if simulated:
                        # Create LLMCall span for tracking/verification
                        effective_model = get_simulation_model(target_path, model)
                        span = create_llm_call_span(
                            provider=resolved_provider or "custom",
                            model=effective_model,
                            target_path=target_path,
                            llm_scope_id=scope.span_id,
                            response="",
                        )
                        span.simulated = True
                        span_parent_id = lifecycle.start_span_manual(span)
                        try:
                            for item in gen:
                                collected_chunks.append(item)
                                yield item
                            # Update response with collected chunks
                            span.response = "".join(str(c) for c in collected_chunks)
                        finally:
                            lifecycle.end_span_manual(span, span_parent_id)
                    else:
                        for item in gen:
                            collected_chunks.append(item)
                            yield item
                    scope.output_data = collected_chunks
                except Exception as e:
                    error = e
                    raise
                finally:
                    if error is not None:
                        lifecycle.error_span_manual(scope, parent_span_id, error)
                    else:
                        lifecycle.end_span_manual(scope, parent_span_id)

            setattr(gen_wrapper, ATTR_WRAPPED, True)
            setattr(gen_wrapper, ATTR_LINKED_TYPE, "llm")
            setattr(gen_wrapper, ATTR_TARGET_PATHS, (target_path,))
            return gen_wrapper  # type: ignore[return-value]

        if inspect.iscoroutinefunction(func):

            @wraps(func)
            async def async_wrapper(*args: Any, **kwargs: Any) -> Any:
                construct = get_active_construct()
                if not construct:
                    return await func(*args, **kwargs)

                # Create scope and check for dispatch
                scope = create_llm_scope_span(
                    caller_name,
                    provider=resolved_provider,
                    model=model,
                    caller_signature=caller_signature,
                    caller_location=caller_location,
                    input_data=args,
                    input_kwargs=kwargs,
                )
                with construct._lifecycle.start_span(scope):
                    # Check for simulation via dispatch
                    result, simulated = await dispatch_async(target_path, func, args, kwargs)
                    if simulated:
                        # Create LLMCall span for tracking/verification
                        effective_model = get_simulation_model(target_path, model)
                        span = create_llm_call_span(
                            provider=resolved_provider or "custom",
                            model=effective_model,
                            target_path=target_path,
                            llm_scope_id=scope.span_id,
                            response=result if isinstance(result, str) else str(result),
                        )
                        span.simulated = True
                        with construct._lifecycle.start_span(span):
                            scope.output_data = result
                            return result
                    # Not simulated - result is already computed by dispatch
                    scope.output_data = result
                    return result

            setattr(async_wrapper, ATTR_WRAPPED, True)
            setattr(async_wrapper, ATTR_LINKED_TYPE, "llm")
            setattr(async_wrapper, ATTR_TARGET_PATHS, (target_path,))
            return async_wrapper  # type: ignore[return-value]
        else:

            @wraps(func)
            def sync_wrapper(*args: Any, **kwargs: Any) -> Any:
                construct = get_active_construct()
                if not construct:
                    return func(*args, **kwargs)

                # Create scope and check for dispatch
                scope = create_llm_scope_span(
                    caller_name,
                    provider=resolved_provider,
                    model=model,
                    caller_signature=caller_signature,
                    caller_location=caller_location,
                    input_data=args,
                    input_kwargs=kwargs,
                )
                with construct._lifecycle.start_span(scope):
                    # Check for simulation via dispatch
                    result, simulated = dispatch_sync(target_path, func, args, kwargs)
                    if simulated:
                        # Create LLMCall span for tracking/verification
                        effective_model = get_simulation_model(target_path, model)
                        span = create_llm_call_span(
                            provider=resolved_provider or "custom",
                            model=effective_model,
                            target_path=target_path,
                            llm_scope_id=scope.span_id,
                            response=result if isinstance(result, str) else str(result),
                        )
                        span.simulated = True
                        with construct._lifecycle.start_span(span):
                            scope.output_data = result
                            return result
                    # Not simulated - result is already computed by dispatch
                    scope.output_data = result
                    return result

            setattr(sync_wrapper, ATTR_WRAPPED, True)
            setattr(sync_wrapper, ATTR_LINKED_TYPE, "llm")
            setattr(sync_wrapper, ATTR_TARGET_PATHS, (target_path,))
            return sync_wrapper  # type: ignore[return-value]

    if callable(provider) and not isinstance(provider, str):
        return decorator(provider)

    return decorator
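The core mechanism in the wrappers above is the dispatch call, which returns a `(result, simulated)` pair: a canned response when a simulation is registered for the target path, otherwise the real function's result. A minimal self-contained sketch of that pattern (the names `_simulations`, `simulate`, and `dispatch_sync_sketch` are hypothetical, not Tenro's API):

```python
from typing import Any, Callable

# Registry of simulated responses, keyed by target path.
_simulations: dict[str, Any] = {}

def simulate(target_path: str, response: Any) -> None:
    """Register a canned response for a target path."""
    _simulations[target_path] = response

def dispatch_sync_sketch(
    target_path: str,
    func: Callable[..., Any],
    args: tuple[Any, ...],
    kwargs: dict[str, Any],
) -> tuple[Any, bool]:
    """Return (result, simulated): the canned response if one is
    registered, otherwise the result of calling the real function."""
    if target_path in _simulations:
        return _simulations[target_path], True
    return func(*args, **kwargs), False

simulate("myapp.summarize", "canned summary")
result, simulated = dispatch_sync_sketch(
    "myapp.summarize", lambda text: text.upper(), ("hello",), {}
)
# result == "canned summary", simulated is True
real, was_sim = dispatch_sync_sketch(
    "myapp.other", lambda text: text.upper(), ("hello",), {}
)
# real == "HELLO", was_sim is False
```

This is why the wrappers never compute the result twice: dispatch either short-circuits with the simulation or performs the real call itself.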
link_tool(name: F) -> F
link_tool(name: str | None = None, *, entry_points: str | list[str] | None = None) -> Callable[[F], F]
link_tool(name: str | Callable[..., Any] | None = None, *, entry_points: str | list[str] | None = None) -> Callable[..., Any]

Decorator to register tool functions, classes, or objects with Tenro.

When a Construct is active, the decorator records a tool span. Otherwise, the function executes normally. Set TENRO_LINKING_ENABLED=false to disable decorator wrapping and return the original target unchanged.
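The environment-variable gate can be sketched as follows; this is a hypothetical re-implementation for illustration, not Tenro's actual `_is_linking_enabled`:

```python
import os
from functools import wraps

def _is_linking_enabled_sketch() -> bool:
    # Assumption: any value other than "false"/"0" (case-insensitive)
    # counts as enabled.
    return os.environ.get("TENRO_LINKING_ENABLED", "true").lower() not in ("false", "0")

def link_tool_sketch(func):
    # Checked once, at decoration (import) time.
    if not _is_linking_enabled_sketch():
        return func  # no-op: the original target is returned unchanged
    @wraps(func)
    def wrapper(*args, **kwargs):
        # A real implementation would record a tool span here.
        return func(*args, **kwargs)
    return wrapper

os.environ["TENRO_LINKING_ENABLED"] = "false"

@link_tool_sketch
def add(a: int, b: int) -> int:
    return a + b
```

Because the check runs at decoration time, flipping the variable after import has no effect on already-decorated functions.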

At decoration time (import), registers tool to GlobalDeclaredRegistry for attack surface tracking and coverage calculation.
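A declared-tools registry of this kind can be sketched in a few lines; the names below are hypothetical and stand in for `GlobalDeclaredRegistry`:

```python
# Hypothetical sketch: a module-level registry collects every declared tool
# at import time, so coverage can later compare the declared set against the
# tools actually exercised in a test run.
_declared_tools: set[str] = set()

def register_declared(tool_name: str) -> None:
    _declared_tools.add(tool_name)

def coverage(exercised: set[str]) -> float:
    # Fraction of declared tools exercised at least once.
    if not _declared_tools:
        return 1.0
    return len(_declared_tools & exercised) / len(_declared_tools)

register_declared("search")
register_declared("fetch")
# coverage({"search"}) == 0.5
```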

Supports:

- Sync and async functions
- Classes with auto-detected entry methods (or explicit via `entry_points=`)
- Framework objects (patches invoke/run methods)

Can be used with or without parentheses:

- `@link_tool`
- `@link_tool()`
- `@link_tool("CustomName")`

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `name` | `str \| Callable[..., Any] \| None` | Tool name for the span. If `None`, uses the function/class name. Can also be the target itself when used without parentheses. | `None` |
| `entry_points` | `str \| list[str] \| None` | For classes only. Explicit method name(s) to wrap. Can be a single string or a list of strings. If `None`, auto-detects common entry methods (`run`, `invoke`, `execute`, `call`, etc.). | `None` |

Returns:

| Type | Description |
|------|-------------|
| `Callable[..., Any]` | Decorated target that registers with the active Construct. |

Examples:

>>> @link_tool
... def simple_tool(query: str) -> str:
...     return "result"
>>>
>>> @link_tool("search")
... def search(query: str) -> list[str]:
...     return ["result1", "result2"]
>>>
>>> @link_tool("calculator")
... class Calculator:
...     def invoke(self, a: int, b: int) -> int:
...         return a + b
>>>
>>> @link_tool("multi_tool", entry_points=["search", "fetch"])
... class MultiTool:
...     def search(self, q: str) -> list[str]: ...
...     def fetch(self, url: str) -> str: ...
Source code in tenro/linking/decorators.py
def link_tool(
    name: str | Callable[..., Any] | None = None,
    *,
    entry_points: str | list[str] | None = None,
) -> Callable[..., Any]:
    """Decorator to register tool functions, classes, or objects with Tenro.

    When a Construct is active, the decorator records a tool span. Otherwise,
    the function executes normally. Set TENRO_LINKING_ENABLED=false to disable
    decorator wrapping and return the original target unchanged.

    At decoration time (import), registers tool to GlobalDeclaredRegistry
    for attack surface tracking and coverage calculation.

    Supports:
    - Sync and async functions
    - Classes with auto-detected entry methods (or explicit via entry_points=)
    - Framework objects (patches invoke/run methods)

    Can be used with or without parentheses:
    - @link_tool
    - @link_tool()
    - @link_tool("CustomName")

    Args:
        name: Tool name for the span. If None, uses function/class name.
            Can also be the target itself when used without parentheses.
        entry_points: For classes only. Explicit method name(s) to wrap.
            Can be a single string or list of strings. If None, auto-detects
            common entry methods (run, invoke, execute, call, etc.).

    Returns:
        Decorated target that registers with active Construct.

    Examples:
        >>> @link_tool
        ... def simple_tool(query: str) -> str:
        ...     return "result"
        >>>
        >>> @link_tool("search")
        ... def search(query: str) -> list[str]:
        ...     return ["result1", "result2"]
        >>>
        >>> @link_tool("calculator")
        ... class Calculator:
        ...     def invoke(self, a: int, b: int) -> int:
        ...         return a + b
        >>>
        >>> @link_tool("multi_tool", entry_points=["search", "fetch"])
        ... class MultiTool:
        ...     def search(self, q: str) -> list[str]: ...
        ...     def fetch(self, url: str) -> str: ...
    """
    resolved_name: str | None = None if callable(name) and not isinstance(name, str) else name

    def decorator(target: Any) -> Any:
        if not _is_linking_enabled():
            return target

        target_type = detect_target_type(target)
        tool_name: str = (
            resolved_name if resolved_name else getattr(target, "__name__", None) or str(target)
        )

        if target_type == TargetType.CLASS:
            return _decorate_tool_class(target, tool_name, entry_points)
        elif target_type == TargetType.FRAMEWORK_OBJECT:
            return _patch_tool_object(target, tool_name, entry_points)
        else:
            if entry_points is not None:
                raise TypeError(
                    f"@link_tool('{tool_name}', entry_points=...): "
                    f"entry_points is only valid for classes, not functions"
                )
            return _wrap_tool_function(target, tool_name)

    if callable(name) and not isinstance(name, str):
        return cast("Callable[..., Any]", decorator(name))

    return decorator

See also