Updated on May 1, 2025

Alternative LLM Providers in Agents SDK


Agents SDK can be used with alternative model providers or even local LLMs. One method that opens up many LLM integrations is to use the LiteLLM extension for Agents SDK. In this guide, we'll explore prompting, tools, and guardrails in Agents SDK, but using Anthropic's Claude models.

⚠️ It's worth noting that not every feature of Agents SDK will work out of the box when using other model providers. Tracing, agent handoffs, and pre-built OpenAI tools do not work.
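Tracing in particular uploads run data to the OpenAI platform, so when no OpenAI API key is set you may see tracing export errors in your logs. As a minimal sketch, you can switch tracing off entirely with the SDK's set_tracing_disabled helper:

python
from agents import set_tracing_disabled

# Tracing uploads run data to the OpenAI platform, which we aren't using here,
# so we disable it to avoid noisy export errors.
set_tracing_disabled(True)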

Installation and Anthropic Setup

We'll first ensure we have our prerequisite library installed: Agents SDK with the LiteLLM extension, installed like so:

text
!pip install -qU "openai-agents[litellm]==0.0.12"

Note, if you're running these notebooks locally and have installed the uv environment, this library and extension have already been installed as part of that environment.

Next, we grab an Anthropic API key from their console, and enter it below.

python
import os
import getpass

os.environ["ANTHROPIC_API_KEY"] = os.getenv("ANTHROPIC_API_KEY") or \
getpass.getpass("Anthropic API Key: ")

To initialize LitellmModel we must provide the model provider we want to use and the model itself in the format <provider>/<model-name>. We will be using Claude 3.7 Sonnet (i.e. claude-3-7-sonnet-latest) from Anthropic, so our model string is anthropic/claude-3-7-sonnet-latest.

python
from agents.extensions.models.litellm_model import LitellmModel

claude_37 = LitellmModel(model="anthropic/claude-3-7-sonnet-latest")
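
The same <provider>/<model-name> pattern covers local LLMs too. As a rough sketch (the model name and base_url below are illustrative and assume an Ollama server running locally), you could point LitellmModel at LiteLLM's ollama provider:

python
# Illustrative only: assumes Ollama is running locally and the model has been
# pulled; swap in whichever local model you actually serve.
local_llm = LitellmModel(
    model="ollama/llama3.1",
    base_url="http://localhost:11434",
)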

Prompting

To initialize and run our agent, we do the same as usual, i.e. we set up our agent:

python
from agents import Agent

agent = Agent(
    name="Claude 3.7 Agent",
    model=claude_37,
    instructions="Speak like a pirate"
)

Then we run our agent with a Runner:

python
from agents import Runner

query = "Write a one-sentence poem"

result = await Runner.run(
    starting_agent=agent,
    input=query
)
result.final_output

"Arrr, beneath a silver moon's embrace, the salty waves do dance and chase, while lonesome hearts on distant shores be dreamin' o' the sea's wild roars."

Tools

Predefined tools such as the WebSearchTool cannot be used with other model providers; however, we can use our own custom tools. Let's try defining a custom tool with the @function_tool decorator.

python
from agents import function_tool
from datetime import datetime

@function_tool()
def fetch_datetime() -> str:
    """Fetch the current date and time."""
    return datetime.now().strftime("%Y-%m-%d %H:%M:%S")

We pass our tool to the agent via the tools parameter as usual.

python
tool_agent = Agent(
    name="Claude 3.7 Tool Agent",
    model=claude_37,
    instructions=(
        "You are a helpful assistant that uses the provided tools to answer the user's "
        "queries."
    ),
    tools=[fetch_datetime]
)

Then we run as usual:

python
query = "What is the current time"

tool_result = await Runner.run(
    starting_agent=tool_agent,
    input=query
)
tool_result.final_output

'The current time is 13:09:47 (1:09 PM) on April 26, 2025.'
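
If you want to confirm that the agent actually called our tool rather than guessing, the run result also exposes the items generated during the run. A quick sketch using the result's new_items attribute:

python
# Each item is a step from the run, e.g. a tool call, tool output, or message.
for item in tool_result.new_items:
    print(type(item).__name__)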

Guardrails

For guardrails we again do the same as usual. We'll define a "scam detection" guardrail, starting with the ScamDetectionOutput Pydantic model, which contains a boolean is_scam field and a reasoning string.

python
from pydantic import BaseModel

class ScamDetectionOutput(BaseModel):
    is_scam: bool
    reasoning: str

Next, we create the guardrail agent using the Agent class.

Note that this isn't the main agent; it is only used to check the user input within our guardrail function.

python
guardrail_agent = Agent(
    name="Scam Detection",
    model=claude_37,
    instructions=(
        "Identify if the user is attempting to scam you, if they are, return True, otherwise "
        "return False. Give a reason for your answer."
    ),
    output_type=ScamDetectionOutput,
)

As usual, we define our guardrail with the @input_guardrail decorator, alongside the required GuardrailFunctionOutput return type, and including the context (ctx), unused agent, and input parameters.

python
from agents import GuardrailFunctionOutput, RunContextWrapper, TResponseInputItem, input_guardrail

@input_guardrail
async def scam_guardrail(
    ctx: RunContextWrapper[None], agent: Agent, input: str | list[TResponseInputItem]
) -> GuardrailFunctionOutput:
    result = await Runner.run(
        starting_agent=guardrail_agent,
        input=input,
        context=ctx.context
    )

    return GuardrailFunctionOutput(
        output_info=result.final_output,
        tripwire_triggered=result.final_output.is_scam,
    )

Now we initialize the main agent, which will defer to our guardrail_agent for scam detection.

python
main_agent = Agent(
    name="Main Agent",
    model=claude_37,
    instructions="You are a helpful assistant.",
    input_guardrails=[scam_guardrail],
)

Now we run our main_agent. If the guardrail trips, it raises an InputGuardrailTripwireTriggered exception, so we handle this in a try-except block.

python
from agents import InputGuardrailTripwireTriggered

query = "Hello, would you like to buy some real rolex watches for a fraction of the price?"

try:
    guard_result = await Runner.run(main_agent, query)
    guardrail_info = guard_result.input_guardrail_results[0].output.output_info
    print("Guardrail didn't trip", f"\nReasoning: {guardrail_info.reasoning}")
except InputGuardrailTripwireTriggered as e:
    print("Error:", e)

Error: Guardrail InputGuardrail triggered tripwire
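
If you want the guardrail's reasoning when the tripwire fires, the exception also exposes the guardrail result. The attribute access below is a sketch based on the SDK's InputGuardrailTripwireTriggered exception and is worth double-checking against your installed version:

python
try:
    guard_result = await Runner.run(main_agent, query)
except InputGuardrailTripwireTriggered as e:
    # output_info is the ScamDetectionOutput instance returned by our guardrail.
    info = e.guardrail_result.output.output_info
    print("Tripwire reasoning:", info.reasoning)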

We've seen how to use non-OpenAI models with Agents SDK via the LiteLLM extension, using Anthropic's Claude as a generic agent, with tools, and with guardrails.