Updated on May 1, 2025

Tracing with Agents SDK

AI Engineering

Agents SDK integrates with OpenAI's built-in Traces dashboard, found within the OpenAI Platform. In this example, we'll take a look at both the default tracing that is enabled automatically whenever we use Agents SDK (with an OpenAI API key present), and custom tracing.

Accessing the Traces Dashboard

We should first confirm that we have access to the Traces dashboard in the OpenAI Platform. You should be able to find the dashboard like so:

TK add visual steps

By default, the traces dashboard is not visible to anyone but the organization owner. So, if you are not the organization owner, or you need to provide access to other members of your organization, you can do so via the Data controls settings, which can be found here:

Visual steps showing how to set traces visibility for an organization in the OpenAI Platform

After gaining access to the dashboard, if you have been following along with other chapters in the course, you should already see a list of traces. If you don't, no problem: we'll be creating traces in the following steps of this example.

As usual, we need to set our OPENAI_API_KEY, which we find in the OpenAI Platform.

python
import os
import getpass

os.environ["OPENAI_API_KEY"] = os.getenv("OPENAI_API_KEY") or \
    getpass.getpass("OpenAI API Key: ")

Tracing Basic Agent Prompting

First we need to create an agent. We will use the Agent class to create a basic agent.

In each Agent object we have some fundamental parameters; those are:

  • name: The name of the agent
  • instructions: The system prompt for the agent
  • model: The model to use for the agent

The instructions parameter is where we will pass our system prompt; this will be used to guide the behavior of the agent.

python
from agents import Agent

prompt_agent = Agent(
    name="Tracing Prompt Agent",  # name of the agent
    instructions="Speak like a pirate.",  # system prompt
    model="gpt-4.1-nano"  # model to use
)

When running this agent we will automatically find traces being sent to our traces dashboard.

python
from agents import Runner

query = "Write a one-sentence poem"

prompt_result = await Runner.run(
    starting_agent=prompt_agent,  # agent to start the conversation with
    input=query  # input to pass to the agent
)
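
Note that Runner.run is a coroutine, so the await above assumes we're running in a notebook or another environment with an active event loop. If you're following along in a plain Python script instead, you would wrap the calls in an async function and hand it to asyncio.run; a minimal sketch (the commented-out line stands in for the call above):

```python
import asyncio

async def main():
    # in a script, the awaited calls from this chapter go here, e.g.:
    # prompt_result = await Runner.run(starting_agent=prompt_agent, input=query)
    ...

asyncio.run(main())
```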

We should now see a trace in the dashboard with the default workflow name (Agents SDK uses Agent workflow for agent runs) and the name of our agent, i.e. "Tracing Prompt Agent":

Default trace in Platform

In the trace preview we can see key information such as whether any handoffs occurred or tools were used, and how long the workflow took to complete.

We can also click on the trace to see a breakdown of each step in our trace, i.e. the individual spans, and click on one of those spans to see its details, including LLM information, inputs/outputs, and the number of tokens that we have used.

Default trace span detail in Platform

Building Custom Traces

To customize our tracing, we can use the trace function from the Agents SDK library. With this, we can add and/or modify any defined parameters for anything run within the context of our trace.

python
with trace(...):
    # any agents-sdk code run here will inherit the parameters
    # of our `trace`
    ...

There are various parameters we can set; the primary ones are:

  • workflow_name: Name given to the workflow; this will be displayed as the primary name of the trace.
  • group_id: ID given to a group of traces. Throughout this notebook we set this to Agents SDK Course: Tracing so we can easily identify traces that have been generated from this notebook.
  • trace_id: ID of the individual trace (if setting this, it is recommended to use util.gen_trace_id()).
  • metadata: Dictionary of data attached to the workflow.
  • disabled: If True, the trace will not be recorded.

We will try setting just the workflow_name and group_id.

python
from agents import Runner, trace

group_id = "Agents SDK Course: Tracing"

with trace(workflow_name="Prompt Agent Workflow", group_id=group_id):
    prompt_result = await Runner.run(
        starting_agent=prompt_agent,  # agent to start interaction with
        input=query,  # input to pass to the workflow
    )

From this, we should find a new Prompt Agent Workflow trace in our dashboard. We can also search for it by entering Agents SDK Course: Tracing in the Group search bar.

Filtering by the group field to find our new prompt agent workflow trace

Tracing Agent Tools

Again, we need to create an agent.

In this Agent object we have an additional parameter:

  • tools: The tools provided to the agent for usage.

In order to use this, we need to acquire a tool. We can import WebSearchTool from the agents library; this tool will allow the agent object to search the web for results.

python
from agents import Agent, WebSearchTool

tool_agent = Agent(
    name="Tracing Tool Agent",
    instructions=(
        "You are a web search agent that searches the web for information on the "
        "user's queries."
    ),
    tools=[
        WebSearchTool()
    ],
    model="gpt-4.1-mini"
)

Let's try calling the tool agent and seeing what is traced.

python
query = "What are the current world headlines?"

tool_result = await Runner.run(
    starting_agent=tool_agent,
    input=query
)

Inside our trace we'll see it automatically picked up the agent name of Tracing Tool Agent as the workflow name. Then if we click through to the span details, we'll find a similar structure but this time we also see that the web search tool was called:

Default span details including tool use information

We can also set our trace metadata, which appears in the span details (accessed by clicking the little cog icon at the top-right of the span page). Let's try adding metadata noting that our workflow includes a tool called WebSearchTool.

python
with trace(
    workflow_name="Web Search Agent",
    group_id=group_id,
    metadata={"Tools": "WebSearchTool"}
):
    tool_result = await Runner.run(
        starting_agent=tool_agent,
        input=query
    )

We can find that metadata here:

Metadata field in traces dashboard

That's it for this guide to OpenAI's traces via Agents SDK.