OpenAI

Simulate OpenAI chat completions in your tests.

Compatibility

See Compatibility for providers, transports, and framework recipes.

How it works

Tenro intercepts the OpenAI SDK's outbound HTTP request and returns your configured response. No real network call is made.

from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}]
)
# In tests, Tenro returns your simulated response

Capabilities

Capability          Supported     Notes
TEXT                Yes           Basic text generation
TOOL_CALLS          Yes           OpenAI-style tools parameter
STRUCTURED_OUTPUT   Yes           JSON Schema-based output
JSON_MODE           Yes           response_format: {"type": "json_object"}
STREAMING           Coming soon   Server-sent events

Simulating responses

Basic text

from tenro import Provider
from tenro.simulate import llm

llm.simulate(Provider.OPENAI, response="Hello!")

With tool calls

Use ToolCall to simulate LLM responses that include tool calls:

from tenro import Provider, ToolCall
from tenro.simulate import llm

# Single response with text + tool call (nested list = one LLM call)
llm.simulate(
    provider=Provider.OPENAI,
    responses=[["I'll search for that.", ToolCall("search", query="AI")]]
)

# Tool call only (no text)
llm.simulate(
    provider=Provider.OPENAI,
    responses=[ToolCall("search", query="AI")]
)

Multi-turn conversations

from tenro import Provider
from tenro.simulate import llm

llm.simulate(
    provider=Provider.OPENAI,
    responses=[
        "Let me think about that...",  # First LLM call
        "Here's my answer: 42"          # Second LLM call
    ]
)

Response format

Tenro returns a valid ChatCompletion object:

{
    "id": "chatcmpl-...",
    "object": "chat.completion",
    "created": 1234567890,
    "model": "gpt-4o",
    "choices": [{
        "index": 0,
        "message": {
            "role": "assistant",
            "content": "Hello!",
            "tool_calls": None
        },
        "finish_reason": "stop"
    }],
    "usage": {"prompt_tokens": 10, "completion_tokens": 5, "total_tokens": 15}
}
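The OpenAI SDK parses this payload into its `ChatCompletion` model, so test assertions use the same attribute path as production code (`response.choices[0].message.content`). Navigating the raw payload directly mirrors that access:

```python
# The payload above, navigated as the SDK exposes it.
completion = {
    "id": "chatcmpl-123",
    "object": "chat.completion",
    "model": "gpt-4o",
    "choices": [{
        "index": 0,
        "message": {"role": "assistant", "content": "Hello!", "tool_calls": None},
        "finish_reason": "stop",
    }],
    "usage": {"prompt_tokens": 10, "completion_tokens": 5, "total_tokens": 15},
}

message = completion["choices"][0]["message"]
assert message["content"] == "Hello!"
assert completion["usage"]["total_tokens"] == 15
```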

See also

  • LLM Providers - All supported providers
  • Others - OpenAI-compatible APIs (Mistral, Groq, etc.)