tenro.linking

Decorators for linking agents, LLMs, and tools to Tenro's simulation system.

Use these decorators to mark functions that should be intercepted during testing.

Overview

Decorator Purpose Who uses it
@link_tool Mark a tool/function call Everyone
@link_agent Mark an agent entry point Everyone
@link_llm Mark an LLM API call Custom agents only (optional)

Framework users: You don't need @link_llm

If you're using LangChain, CrewAI, LangGraph, or similar frameworks, skip @link_llm. Tenro intercepts LLM calls at the HTTP level, so llm.simulate() works automatically.

See How Tenro works for details.

Quick example

With a framework like LangChain, you only link the agent and tools:

from tenro import link_agent, link_tool

@link_tool
def search(query: str) -> list[str]:
    return vector_db.search(query)

@link_agent
def researcher(topic: str) -> str:
    docs = search(topic)

    # LangChain handles LLM calls internally. No @link_llm needed.
    prompt = ChatPromptTemplate.from_template("Summarize: {docs}")
    chain = prompt | ChatOpenAI() | StrOutputParser()
    return chain.invoke({"docs": "\n".join(docs)})

If you're building a custom agent with raw LLM SDKs instead, decorate your LLM-calling functions with @link_llm as well:

from tenro import link_agent, link_llm, link_tool

@link_tool
def search(query: str) -> list[str]:
    """Search for documents."""
    return vector_db.search(query)

@link_llm  # Provider inferred from HTTP call
def summarize(text: str) -> str:
    """Summarize text using OpenAI."""
    return openai.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": f"Summarize: {text}"}],
    )

@link_agent  # Uses function name "researcher"
def researcher(topic: str) -> str:
    """Research a topic."""
    docs = search(topic)
    return summarize(docs)

@link_tool

Mark functions that interact with external systems (APIs, databases, file systems).

Usage patterns

@link_tool                              # Uses function name: "get_weather"
@link_tool("weather")                   # Custom name: "weather"
@link_tool("multi", entry_points="run") # Explicit entry method for classes
@link_tool("multi", entry_points=["run", "stream"])  # Multiple entry methods

If you don't provide a name, Tenro uses the function's own name by default.

Custom names are optional. They're useful for code readability but don't affect how Tenro locates or patches your function.
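The with-or-without-parentheses behavior follows a standard Python decorator pattern, sketched here with a toy stand-in (illustrative only, not Tenro's actual code; the tool_name attribute is invented for the demo):

```python
from __future__ import annotations

from functools import wraps
from typing import Any, Callable

def link_tool(name: str | Callable[..., Any] | None = None) -> Any:
    """Toy dual-mode decorator: works bare or with an optional name."""
    # Bare usage: @link_tool passes the function itself as `name`.
    if callable(name) and not isinstance(name, str):
        func = name
        return _wrap(func, func.__name__)

    # Parenthesized usage: @link_tool() or @link_tool("custom")
    def decorator(func: Callable[..., Any]) -> Callable[..., Any]:
        return _wrap(func, name or func.__name__)
    return decorator

def _wrap(func: Callable[..., Any], tool_name: str) -> Callable[..., Any]:
    @wraps(func)
    def wrapper(*args: Any, **kwargs: Any) -> Any:
        return func(*args, **kwargs)
    wrapper.tool_name = tool_name  # the name that would be recorded
    return wrapper

@link_tool
def get_weather(city: str) -> str:
    return f"sunny in {city}"

@link_tool("db_query")
def query_db(sql: str) -> str:
    return f"ran {sql}"

print(get_weather.tool_name)  # get_weather
print(query_db.tool_name)     # db_query
```

Either way, the wrapped function still calls straight through to your original code.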

Function-level decoration

@link_tool
def get_weather(city: str) -> dict:
    return weather_api.get(city)

@link_tool("db_query")
def query_db(sql: str) -> list:
    return db.execute(sql)

Class-level decoration

When you decorate a class, Tenro automatically wraps entry methods:

  • run, invoke, _run, _arun, arun, ainvoke, __call__

@link_tool("search")
class SearchTool:
    """A tool that searches documents."""

    def run(self, query: str) -> list[str]:  # Automatically wrapped
        return self.db.search(query)

    def _helper(self):  # NOT wrapped (not an entry method)
        ...
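The mechanism can be pictured with a simplified stand-in: the class decorator looks up each known entry-method name and replaces it with a wrapped version, leaving everything else untouched. This is an illustration only; the real decorator also records spans and supports async methods.

```python
from functools import wraps

# Default entry methods listed in the docs above.
ENTRY_METHODS = ("run", "invoke", "_run", "_arun", "arun", "ainvoke", "__call__")

def wrap_entry_methods(cls):
    """Wrap each entry method found on the class; skip everything else."""
    for attr in ENTRY_METHODS:
        method = cls.__dict__.get(attr)
        if method is None or not callable(method):
            continue

        def make_wrapper(m):
            @wraps(m)
            def wrapper(self, *args, **kwargs):
                wrapper.calls += 1  # stand-in for recording a tool span
                return m(self, *args, **kwargs)
            wrapper.calls = 0
            return wrapper

        setattr(cls, attr, make_wrapper(method))
    return cls

@wrap_entry_methods
class SearchTool:
    def run(self, query):   # wrapped: matches an entry-method name
        return [query.upper()]

    def _helper(self):      # not wrapped: not an entry method
        return "internal"

tool = SearchTool()
tool.run("hits")
print(SearchTool.run.calls)  # 1
```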

Use entry_points to specify which methods should be wrapped when decorating a class (instead of relying on the default list):

@link_tool("multi_search", entry_points=["search", "fetch"])
class MultiSearchTool:
    def search(self, query: str) -> list[str]: ...  # Wrapped
    def fetch(self, url: str) -> str: ...           # Wrapped
    def _internal(self): ...                        # NOT wrapped

This is useful when building tools as classes (common in LangChain and CrewAI):

from langchain.tools import BaseTool

@link_tool("calculator")
class CalculatorTool(BaseTool):
    name = "calculator"
    description = "Adds two numbers"

    def _run(self, a: int, b: int) -> str:  # LangChain calls _run
        return str(a + b)

Simulation and verification

Once you've decorated your function with @link_tool, you can simulate and verify it in tests:

from tenro.simulate import tool
# Pass the function object (recommended, refactor-safe)
tool.simulate(get_weather, result={"temp": 72})
tool.verify_many(get_weather, count=1)

# Or use the full module path as a string
tool.simulate("myapp.tools.get_weather", result={"temp": 72})

Using the function object is recommended because it stays correct when you rename or move the function.

For inspecting recorded calls, see Tracing.

@link_agent

Mark top-level agent functions or classes.

Usage patterns

@link_agent                                    # Uses function name: "customer_support"
@link_agent("CustomerSupport")                 # Custom name: "CustomerSupport"
@link_agent("Agent", entry_points=["run"])     # Explicit entry method for classes

If you don't provide a name, Tenro uses the function's own name by default.

The entry_points parameter works the same as for @link_tool: use it to specify custom entry methods for classes.

Examples

@link_agent
def customer_support(query: str) -> str:
    """Handle customer support queries."""
    ...

@link_agent("ResearchAssistant")
class ResearchAssistant:
    def run(self, topic: str) -> str:
        ...

Simulation and verification

Once you've decorated your function with @link_agent, you can simulate and verify it in tests:

from tenro.simulate import agent
# Pass the function or class object (recommended, refactor-safe)
agent.simulate(customer_support, result="Issue resolved")
agent.verify_many(customer_support, count=1)

# Or use the full module path as a string
agent.simulate("myapp.agents.customer_support", result="Issue resolved")

Using the function object is recommended because it stays correct when you rename or move the function.

For inspecting recorded runs, see Tracing.

@link_llm

Mark functions that call LLM providers directly.

For custom agent builders only

@link_llm is designed for developers building agents with raw LLM SDKs (OpenAI, Anthropic, etc.).

Framework users cannot use it because frameworks like LangChain, CrewAI, and LangGraph hide LLM calls inside their internal implementation. Tenro handles these via HTTP interception automatically.

Usage patterns

@link_llm                                   # Provider inferred from HTTP call
@link_llm()                                 # Provider inferred from HTTP call
@link_llm(Provider.OPENAI)                  # Explicit provider
@link_llm(Provider.OPENAI, model="gpt-4")   # With model info

When to use explicit provider

Specifying the provider acts as a safeguard:

from tenro import link_llm, Provider

@link_llm(Provider.OPENAI)  # Expect OpenAI calls only
def call_openai(prompt: str) -> str:
    # If this accidentally calls Anthropic, you'll get an error
    return openai.chat.completions.create(...)

If the function makes an HTTP call to a different provider than specified, Tenro will raise an error during validation.
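Conceptually, the safeguard can work by mapping the request URL's host to a provider and comparing it against the declared one. A hypothetical sketch follows; the helper names and the exact matching mechanism are assumptions for illustration, not Tenro's API:

```python
from urllib.parse import urlparse

# Real provider API hosts; the mapping itself is illustrative.
PROVIDER_HOSTS = {
    "api.openai.com": "openai",
    "api.anthropic.com": "anthropic",
    "generativelanguage.googleapis.com": "gemini",
}

def infer_provider(url: str) -> "str | None":
    """Infer the provider from the request URL's hostname."""
    return PROVIDER_HOSTS.get(urlparse(url).hostname or "")

def check_provider(url: str, expected: str) -> None:
    """Raise if the call's inferred provider contradicts the declared one."""
    actual = infer_provider(url)
    if actual is not None and actual != expected:
        raise ValueError(
            f"expected provider {expected!r}, "
            f"but the function called {actual!r} at {url}"
        )

check_provider("https://api.openai.com/v1/chat/completions", "openai")  # passes
try:
    check_provider("https://api.anthropic.com/v1/messages", "openai")
except ValueError as e:
    print(e)  # mismatch caught
```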

What @link_llm adds

Capability With @link_llm Without @link_llm
caller_signature call_openai(prompt: str) -> str None
caller_location /path/to/file.py:42 None
llm_scope_id Links LLMCall to the decorated function None
Simulation Targeted with target=my_function Provider-level only
Error messages Show which function failed Generic "LLM call failed"

Bottom line: HTTP interception works without @link_llm. Use @link_llm when you need targeted simulation or want traceability.

Examples

from tenro import link_llm, Provider

@link_llm  # Provider auto-detected
def call_gpt(prompt: str) -> str:
    response = openai.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

@link_llm(Provider.ANTHROPIC)  # Explicit provider for safeguard
def call_claude(prompt: str) -> str:
    response = anthropic.messages.create(
        model="claude-3-opus-20240229",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

Supported providers

  • Provider.OPENAI: OpenAI API
  • Provider.ANTHROPIC: Anthropic API
  • Provider.GEMINI: Google Gemini API

Scenario Use @link_llm? Reason
LangChain, CrewAI, LangGraph No Can't decorate framework internals
Pydantic AI, AutoGen, LlamaIndex No Can't decorate framework internals
Single raw SDK function Optional Nice for tracing
Multiple raw SDK functions Recommended Distinguish which function made each call
Want provider safeguard Yes Catches accidental wrong-provider calls

Linking framework agents

For agents built with frameworks like LangChain or CrewAI, link the entry point only:

# LangChain
@link_agent  # Uses function name
def run_langchain_agent(input: str) -> str:
    agent = create_langchain_agent()
    return agent.invoke(input)

# CrewAI
@link_agent("CrewAIAgent")  # Explicit name
def run_crew(task: str) -> str:
    crew = Crew(agents=[...], tasks=[...])
    return crew.kickoff()

Reference

Linking utilities for agents, LLMs, and tools.

Provides decorators for registering functions, classes, and framework objects with the Tenro system for testing and observability.

link_agent(name: F) -> F
link_agent(name: str | None = None, *, entry_points: str | list[str] | None = None) -> Callable[[F], F]
link_agent(name: str | Callable[..., Any] | None = None, *, entry_points: str | list[str] | None = None) -> Callable[..., Any]

Decorator to register agent functions, classes, or objects with Tenro.

When a Construct is active, the decorator records an agent span. Otherwise, the function/method executes normally. Set TENRO_LINKING_ENABLED=false to disable decorator wrapping and return the original target unchanged.

Supports:

  • Sync and async functions
  • Classes with auto-detected entry methods (or explicit via entry_points=)
  • Framework objects (patches invoke/run methods)

For classes, the decorator wraps ALL matching entry methods with a re-entrancy guard, so method delegation (e.g., invoke → stream) creates only one span.
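The re-entrancy guard can be pictured as a context-local flag: the outer entry method sets it, and any nested entry-method call sees it and skips span creation. A simplified, sync-only sketch with invented names (not Tenro's implementation):

```python
import contextvars
from functools import wraps

_in_span = contextvars.ContextVar("in_span", default=False)
spans = []  # stand-in for the recorded span log

def record_span(name):
    """Record a span for the outermost call only."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            if _in_span.get():                # already inside a span:
                return func(*args, **kwargs)  # delegate without a new span
            token = _in_span.set(True)
            try:
                spans.append(name)
                return func(*args, **kwargs)
            finally:
                _in_span.reset(token)
        return wrapper
    return decorator

class Agent:
    @record_span("Agent.invoke")
    def invoke(self, task):
        return "".join(self.stream(task))  # delegates to stream

    @record_span("Agent.stream")
    def stream(self, task):
        yield task

print(Agent().invoke("hi"), spans)  # hi ['Agent.invoke']
```

Even though invoke delegates to stream (both entry methods), only one span is recorded.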

Can be used with or without parentheses:

  • @link_agent
  • @link_agent()
  • @link_agent("CustomName")

Parameters:

Name Type Description Default
name str | Callable[..., Any] | None

Agent name for the span. If None, uses function/class name. Can also be the target itself when used without parentheses.

None
entry_points str | list[str] | None

For classes only. Explicit method name(s) to wrap. Can be a single string or list of strings. If None, auto-detects common entry methods (run, invoke, execute, call, stream, etc.).

None

Returns:

Type Description
Callable[..., Any]

Decorated target that registers with active Construct.

Raises:

Type Description
ValueError

If decorating a class and no entry method is found.

Examples:

>>> @link_agent
... def simple_agent(task: str) -> str:
...     return "done"
>>>
>>> @link_agent("PlannerBot")
... def plan_trip(destination: str) -> str:
...     return agent.run(destination)
>>>
>>> @link_agent("WriterAgent")
... class WriterAgent:
...     async def execute(self, prompt: str) -> str:
...         return "result"
>>>
>>> @link_agent("MultiEntry", entry_points=["run", "stream"])
... class MultiEntryAgent:
...     def run(self, task: str) -> str: ...
...     def stream(self, task: str) -> Iterator[str]: ...
Source code in tenro/linking/decorators.py
def link_agent(
    name: str | Callable[..., Any] | None = None,
    *,
    entry_points: str | list[str] | None = None,
) -> Callable[..., Any]:
    """Decorator to register agent functions, classes, or objects with Tenro.

    When a Construct is active, the decorator records an agent span. Otherwise,
    the function/method executes normally. Set TENRO_LINKING_ENABLED=false to
    disable decorator wrapping and return the original target unchanged.

    Supports:
    - Sync and async functions
    - Classes with auto-detected entry methods (or explicit via entry_points=)
    - Framework objects (patches invoke/run methods)

    For classes, the decorator wraps ALL matching entry methods with a
    re-entrancy guard, so method delegation (e.g., invoke → stream)
    creates only one span.

    Can be used with or without parentheses:
    - @link_agent
    - @link_agent()
    - @link_agent("CustomName")

    Args:
        name: Agent name for the span. If None, uses function/class name.
            Can also be the target itself when used without parentheses.
        entry_points: For classes only. Explicit method name(s) to wrap.
            Can be a single string or list of strings. If None, auto-detects
            common entry methods (run, invoke, execute, call, stream, etc.).

    Returns:
        Decorated target that registers with active Construct.

    Raises:
        ValueError: If decorating a class and no entry method is found.

    Examples:
        >>> @link_agent
        ... def simple_agent(task: str) -> str:
        ...     return "done"
        >>>
        >>> @link_agent("PlannerBot")
        ... def plan_trip(destination: str) -> str:
        ...     return agent.run(destination)
        >>>
        >>> @link_agent("WriterAgent")
        ... class WriterAgent:
        ...     async def execute(self, prompt: str) -> str:
        ...         return "result"
        >>>
        >>> @link_agent("MultiEntry", entry_points=["run", "stream"])
        ... class MultiEntryAgent:
        ...     def run(self, task: str) -> str: ...
        ...     def stream(self, task: str) -> Iterator[str]: ...
    """
    resolved_name: str | None = None if callable(name) and not isinstance(name, str) else name

    def decorator(target: Any) -> Any:
        if not _is_linking_enabled():
            return target

        target_type = detect_target_type(target)
        agent_name: str = (
            resolved_name if resolved_name else getattr(target, "__name__", None) or str(target)
        )

        if target_type == TargetType.CLASS:
            return _decorate_agent_class(target, agent_name, entry_points)
        elif target_type == TargetType.FRAMEWORK_OBJECT:
            return _patch_agent_object(target, agent_name, entry_points)
        else:
            if entry_points is not None:
                raise TypeError(
                    f"@link_agent('{agent_name}', entry_points=...): "
                    f"entry_points is only valid for classes, not functions"
                )
            return _wrap_agent_function(target, agent_name)

    if callable(name) and not isinstance(name, str):
        # name is the target function when used as @link_agent without parens
        return cast("Callable[..., Any]", decorator(name))

    return decorator

link_llm(provider: F) -> F
link_llm(provider: str | None = None, model: str | None = None) -> Callable[[F], F]
link_llm(provider: str | Callable[..., Any] | None = None, model: str | None = None) -> Callable[..., Any]

Decorator to mark functions as LLM call boundaries.

Creates an LLMScope (transparent annotation span) when a Construct is active. HTTP interception will create LLMCall spans inside this scope. The scope captures caller info for debugging but is transparent for parent attribution.

The provider can be specified explicitly or inferred automatically from HTTP interception (URL pattern matching) or simulation configuration.

Set TENRO_LINKING_ENABLED=false to disable decorator wrapping and return the original function unchanged.

Can be used with or without parentheses:

  • @link_llm
  • @link_llm()
  • @link_llm(Provider.OPENAI)
  • @link_llm(provider=Provider.OPENAI, model="gpt-4")

Parameters:

Name Type Description Default
provider str | Callable[..., Any] | None

LLM provider (e.g., Provider.OPENAI, Provider.ANTHROPIC), or the decorated function when used without parentheses. If None, provider is inferred from HTTP interception.

None
model str | None

Model identifier (e.g., "gpt-4", "claude-3").

None

Returns:

Type Description
Callable[..., Any]

Decorated function that creates LLMScope when Construct is active.

Examples:

>>> @link_llm  # Provider inferred from HTTP call
... def call_llm(prompt: str) -> str:
...     return client.chat.completions.create(...)
>>>
>>> @link_llm(Provider.OPENAI, model="gpt-4")  # Explicit provider
... def call_openai(prompt: str) -> str:
...     return openai_client.chat.completions.create(...)
Source code in tenro/linking/decorators.py
def link_llm(
    provider: str | Callable[..., Any] | None = None,
    model: str | None = None,
) -> Callable[..., Any]:
    """Decorator to mark functions as LLM call boundaries.

    Creates an LLMScope (transparent annotation span) when a Construct is active.
    HTTP interception will create LLMCall spans inside this scope. The scope
    captures caller info for debugging but is transparent for parent attribution.

    The provider can be specified explicitly or inferred automatically from HTTP
    interception (URL pattern matching) or simulation configuration.

    Set TENRO_LINKING_ENABLED=false to disable decorator wrapping and return the
    original function unchanged.

    Can be used with or without parentheses:
    - @link_llm
    - @link_llm()
    - @link_llm(Provider.OPENAI)
    - @link_llm(provider=Provider.OPENAI, model="gpt-4")

    Args:
        provider: LLM provider (e.g., Provider.OPENAI, Provider.ANTHROPIC), or
            the decorated function when used without parentheses. If None,
            provider is inferred from HTTP interception.
        model: Model identifier (e.g., "gpt-4", "claude-3").

    Returns:
        Decorated function that creates LLMScope when Construct is active.

    Examples:
        >>> @link_llm  # Provider inferred from HTTP call
        ... def call_llm(prompt: str) -> str:
        ...     return client.chat.completions.create(...)
        >>>
        >>> @link_llm(Provider.OPENAI, model="gpt-4")  # Explicit provider
        ... def call_openai(prompt: str) -> str:
        ...     return openai_client.chat.completions.create(...)
    """
    resolved_provider: str | None = (
        None if callable(provider) and not isinstance(provider, str) else provider
    )

    def decorator(func: F) -> F:
        if not _is_linking_enabled():
            return func

        if getattr(func, ATTR_WRAPPED, False):
            return func

        # Reject classes - link_llm is for functions only
        if inspect.isclass(func):
            from tenro.errors import TenroConfigError

            raise TenroConfigError(
                f"@link_llm cannot be applied to class '{func.__name__}'. "
                "Use @link_llm on functions only. "
                "For class-based LLM wrappers, use @link_tool or @link_agent instead."
            )

        sig = inspect.signature(func)
        caller_name = func.__name__
        caller_signature = f"{func.__qualname__}{sig}"
        try:
            file = inspect.getsourcefile(func) or inspect.getfile(func)
            line = inspect.getsourcelines(func)[1]
            caller_location: str | None = format_file_location(file, line)
        except (OSError, TypeError):
            caller_location = None

        # Generate target path for dispatch lookup
        target_path = f"{func.__module__}.{func.__qualname__}"

        # Stamp identity on original for bidirectional resolution
        _stamp_identity_on_original(func, target_path)

        if inspect.isasyncgenfunction(func):

            @wraps(func)
            async def asyncgen_wrapper(*args: Any, **kwargs: Any) -> Any:
                construct = get_active_construct()
                if not construct:
                    async for item in func(*args, **kwargs):
                        yield item
                    return

                # Create scope for the streaming LLM call
                scope = LLMScope(
                    id=str(uuid7()),
                    trace_id=str(uuid7()),
                    start_time=time.time(),
                    provider=resolved_provider,
                    model=model,
                    caller_name=caller_name,
                    caller_signature=caller_signature,
                    caller_location=caller_location,
                    input_data=args,
                    input_kwargs=kwargs,
                )
                lifecycle = construct._lifecycle
                parent_span_id = lifecycle.start_span_manual(scope)

                error: Exception | None = None
                collected_chunks: list[Any] = []
                try:
                    gen, simulated = await dispatch_asyncgen(target_path, func, args, kwargs)
                    if simulated:
                        # Create LLMCall span for tracking/verification
                        effective_model = get_simulation_model(target_path, model)
                        span = LLMCall(
                            id=str(uuid7()),
                            trace_id=str(uuid7()),
                            start_time=time.time(),
                            provider=resolved_provider or "custom",
                            messages=[],
                            response="",  # Will be updated after iteration
                            model=effective_model,
                            target_path=target_path,
                            llm_scope_id=scope.id,
                        )
                        span.simulated = True
                        span_parent_id = lifecycle.start_span_manual(span)
                        try:
                            async for item in gen:
                                collected_chunks.append(item)
                                yield item
                            # Update response with collected chunks
                            span.response = "".join(str(c) for c in collected_chunks)
                        finally:
                            lifecycle.end_span_manual(span, span_parent_id)
                    else:
                        async for item in gen:
                            collected_chunks.append(item)
                            yield item
                    scope.output_data = collected_chunks
                except Exception as e:
                    error = e
                    raise
                finally:
                    if error is not None:
                        lifecycle.error_span_manual(scope, parent_span_id, error)
                    else:
                        lifecycle.end_span_manual(scope, parent_span_id)

            setattr(asyncgen_wrapper, ATTR_WRAPPED, True)
            setattr(asyncgen_wrapper, ATTR_LINKED_TYPE, "llm")
            setattr(asyncgen_wrapper, ATTR_TARGET_PATHS, (target_path,))
            return asyncgen_wrapper  # type: ignore[return-value]

        if inspect.isgeneratorfunction(func):

            @wraps(func)
            def gen_wrapper(*args: Any, **kwargs: Any) -> Any:
                construct = get_active_construct()
                if not construct:
                    yield from func(*args, **kwargs)
                    return

                # Create scope for the streaming LLM call
                scope = LLMScope(
                    id=str(uuid7()),
                    trace_id=str(uuid7()),
                    start_time=time.time(),
                    provider=resolved_provider,
                    model=model,
                    caller_name=caller_name,
                    caller_signature=caller_signature,
                    caller_location=caller_location,
                    input_data=args,
                    input_kwargs=kwargs,
                )
                lifecycle = construct._lifecycle
                parent_span_id = lifecycle.start_span_manual(scope)

                error: Exception | None = None
                collected_chunks: list[Any] = []
                try:
                    gen, simulated = dispatch_gen(target_path, func, args, kwargs)
                    if simulated:
                        # Create LLMCall span for tracking/verification
                        effective_model = get_simulation_model(target_path, model)
                        span = LLMCall(
                            id=str(uuid7()),
                            trace_id=str(uuid7()),
                            start_time=time.time(),
                            provider=resolved_provider or "custom",
                            messages=[],
                            response="",  # Will be updated after iteration
                            model=effective_model,
                            target_path=target_path,
                            llm_scope_id=scope.id,
                        )
                        span.simulated = True
                        span_parent_id = lifecycle.start_span_manual(span)
                        try:
                            for item in gen:
                                collected_chunks.append(item)
                                yield item
                            # Update response with collected chunks
                            span.response = "".join(str(c) for c in collected_chunks)
                        finally:
                            lifecycle.end_span_manual(span, span_parent_id)
                    else:
                        for item in gen:
                            collected_chunks.append(item)
                            yield item
                    scope.output_data = collected_chunks
                except Exception as e:
                    error = e
                    raise
                finally:
                    if error is not None:
                        lifecycle.error_span_manual(scope, parent_span_id, error)
                    else:
                        lifecycle.end_span_manual(scope, parent_span_id)

            setattr(gen_wrapper, ATTR_WRAPPED, True)
            setattr(gen_wrapper, ATTR_LINKED_TYPE, "llm")
            setattr(gen_wrapper, ATTR_TARGET_PATHS, (target_path,))
            return gen_wrapper  # type: ignore[return-value]

        if inspect.iscoroutinefunction(func):

            @wraps(func)
            async def async_wrapper(*args: Any, **kwargs: Any) -> Any:
                construct = get_active_construct()
                if not construct:
                    return await func(*args, **kwargs)

                # Create scope and check for dispatch
                scope = LLMScope(
                    id=str(uuid7()),
                    trace_id=str(uuid7()),
                    start_time=time.time(),
                    provider=resolved_provider,
                    model=model,
                    caller_name=caller_name,
                    caller_signature=caller_signature,
                    caller_location=caller_location,
                    input_data=args,
                    input_kwargs=kwargs,
                )
                with construct._lifecycle.start_span(scope):
                    # Check for simulation via dispatch
                    result, simulated = await dispatch_async(target_path, func, args, kwargs)
                    if simulated:
                        # Create LLMCall span for tracking/verification
                        effective_model = get_simulation_model(target_path, model)
                        span = LLMCall(
                            id=str(uuid7()),
                            trace_id=str(uuid7()),
                            start_time=time.time(),
                            provider=resolved_provider or "custom",
                            messages=[],  # Dispatch doesn't capture messages
                            response=result if isinstance(result, str) else str(result),
                            model=effective_model,
                            target_path=target_path,
                            llm_scope_id=scope.id,  # Link to enclosing scope
                        )
                        span.simulated = True
                        with construct._lifecycle.start_span(span):
                            scope.output_data = result
                            return result
                    # Not simulated - result is already computed by dispatch
                    scope.output_data = result
                    return result

            setattr(async_wrapper, ATTR_WRAPPED, True)
            setattr(async_wrapper, ATTR_LINKED_TYPE, "llm")
            setattr(async_wrapper, ATTR_TARGET_PATHS, (target_path,))
            return async_wrapper  # type: ignore[return-value]
        else:

            @wraps(func)
            def sync_wrapper(*args: Any, **kwargs: Any) -> Any:
                construct = get_active_construct()
                if not construct:
                    return func(*args, **kwargs)

                # Create scope and check for dispatch
                scope = LLMScope(
                    id=str(uuid7()),
                    trace_id=str(uuid7()),
                    start_time=time.time(),
                    provider=resolved_provider,
                    model=model,
                    caller_name=caller_name,
                    caller_signature=caller_signature,
                    caller_location=caller_location,
                    input_data=args,
                    input_kwargs=kwargs,
                )
                with construct._lifecycle.start_span(scope):
                    # Check for simulation via dispatch
                    result, simulated = dispatch_sync(target_path, func, args, kwargs)
                    if simulated:
                        # Create LLMCall span for tracking/verification
                        effective_model = get_simulation_model(target_path, model)
                        span = LLMCall(
                            id=str(uuid7()),
                            trace_id=str(uuid7()),
                            start_time=time.time(),
                            provider=resolved_provider or "custom",
                            messages=[],  # Dispatch doesn't capture messages
                            response=result if isinstance(result, str) else str(result),
                            model=effective_model,
                            target_path=target_path,
                            llm_scope_id=scope.id,  # Link to enclosing scope
                        )
                        span.simulated = True
                        with construct._lifecycle.start_span(span):
                            scope.output_data = result
                            return result
                    # Not simulated - result is already computed by dispatch
                    scope.output_data = result
                    return result

            setattr(sync_wrapper, ATTR_WRAPPED, True)
            setattr(sync_wrapper, ATTR_LINKED_TYPE, "llm")
            setattr(sync_wrapper, ATTR_TARGET_PATHS, (target_path,))
            return sync_wrapper  # type: ignore[return-value]

    if callable(provider) and not isinstance(provider, str):
        return decorator(provider)

    return decorator
link_tool(name: F) -> F
link_tool(name: str | None = None, *, entry_points: str | list[str] | None = None) -> Callable[[F], F]
link_tool(name: str | Callable[..., Any] | None = None, *, entry_points: str | list[str] | None = None) -> Callable[..., Any]

Decorator to register tool functions, classes, or objects with Tenro.

When a Construct is active, the decorator records a tool span; otherwise, the function executes normally. Set TENRO_LINKING_ENABLED=false to disable decorator wrapping entirely, in which case the original target is returned unchanged.

At decoration time (import), the tool is registered with the GlobalDeclaredRegistry for attack surface tracking and coverage calculation.

Supports:

- Sync and async functions
- Classes with auto-detected entry methods (or explicit via `entry_points=`)
- Framework objects (patches `invoke`/`run` methods)

Can be used with or without parentheses:

- `@link_tool`
- `@link_tool()`
- `@link_tool("CustomName")`

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `name` | `str \| Callable[..., Any] \| None` | Tool name for the span. If `None`, uses the function/class name. Can also be the target itself when used without parentheses. | `None` |
| `entry_points` | `str \| list[str] \| None` | For classes only. Explicit method name(s) to wrap; a single string or a list of strings. If `None`, auto-detects common entry methods (`run`, `invoke`, `execute`, `call`, etc.). | `None` |

Returns:

| Type | Description |
| --- | --- |
| `Callable[..., Any]` | Decorated target that registers with the active Construct. |

Examples:

>>> @link_tool
... def simple_tool(query: str) -> str:
...     return "result"
>>>
>>> @link_tool("search")
... def search(query: str) -> list[str]:
...     return ["result1", "result2"]
>>>
>>> @link_tool("calculator")
... class Calculator:
...     def invoke(self, a: int, b: int) -> int:
...         return a + b
>>>
>>> @link_tool("multi_tool", entry_points=["search", "fetch"])
... class MultiTool:
...     def search(self, q: str) -> list[str]: ...
...     def fetch(self, url: str) -> str: ...
Source code in tenro/linking/decorators.py
def link_tool(
    name: str | Callable[..., Any] | None = None,
    *,
    entry_points: str | list[str] | None = None,
) -> Callable[..., Any]:
    """Decorator to register tool functions, classes, or objects with Tenro.

    When a Construct is active, the decorator records a tool span. Otherwise,
    the function executes normally. Set TENRO_LINKING_ENABLED=false to disable
    decorator wrapping and return the original target unchanged.

    At decoration time (import), registers tool to GlobalDeclaredRegistry
    for attack surface tracking and coverage calculation.

    Supports:
    - Sync and async functions
    - Classes with auto-detected entry methods (or explicit via entry_points=)
    - Framework objects (patches invoke/run methods)

    Can be used with or without parentheses:
    - @link_tool
    - @link_tool()
    - @link_tool("CustomName")

    Args:
        name: Tool name for the span. If None, uses function/class name.
            Can also be the target itself when used without parentheses.
        entry_points: For classes only. Explicit method name(s) to wrap.
            Can be a single string or list of strings. If None, auto-detects
            common entry methods (run, invoke, execute, call, etc.).

    Returns:
        Decorated target that registers with active Construct.

    Examples:
        >>> @link_tool
        ... def simple_tool(query: str) -> str:
        ...     return "result"
        >>>
        >>> @link_tool("search")
        ... def search(query: str) -> list[str]:
        ...     return ["result1", "result2"]
        >>>
        >>> @link_tool("calculator")
        ... class Calculator:
        ...     def invoke(self, a: int, b: int) -> int:
        ...         return a + b
        >>>
        >>> @link_tool("multi_tool", entry_points=["search", "fetch"])
        ... class MultiTool:
        ...     def search(self, q: str) -> list[str]: ...
        ...     def fetch(self, url: str) -> str: ...
    """
    resolved_name: str | None = None if callable(name) and not isinstance(name, str) else name

    def decorator(target: Any) -> Any:
        if not _is_linking_enabled():
            return target

        target_type = detect_target_type(target)
        tool_name: str = (
            resolved_name if resolved_name else getattr(target, "__name__", None) or str(target)
        )

        if target_type == TargetType.CLASS:
            return _decorate_tool_class(target, tool_name, entry_points)
        elif target_type == TargetType.FRAMEWORK_OBJECT:
            return _patch_tool_object(target, tool_name, entry_points)
        else:
            if entry_points is not None:
                raise TypeError(
                    f"@link_tool('{tool_name}', entry_points=...): "
                    f"entry_points is only valid for classes, not functions"
                )
            return _wrap_tool_function(target, tool_name)

    if callable(name) and not isinstance(name, str):
        return cast("Callable[..., Any]", decorator(name))

    return decorator

See also