The OpenAI Agents SDK is a powerful framework that simplifies the creation of AI agents capable of reasoning, calling external tools (functions), and managing conversations. But what if you want to use an open-source model instead of GPT-4? In this tutorial, we’ll walk through how to connect the Agents SDK to a custom provider, specifically Regolo.ai, using a tool-enabled LLaMA 3.3 model with function calling.
CurrencyBot with Tool Calling
We’ll build a CurrencyBot, an AI agent that can understand user queries and, when needed, retrieve live exchange rates between two currencies. This is possible thanks to the Agents SDK’s tool calling feature, where the agent can decide when and how to call a registered function (in our case, the get_exchange_rate tool).
We’ll use the Frankfurter API as our tool backend: a free, simple API that returns up-to-date foreign exchange rates. It’s fast, requires no authentication, and supports a wide range of currencies.
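Before wiring it into the agent, it helps to see the shape of what Frankfurter returns. The payload below is illustrative (the numeric values are made up, since rates change daily), but the structure — a base, a date, and a rates map — is exactly what our tool will parse:

```python
import json

# Illustrative payload in the shape returned by
# GET https://api.frankfurter.dev/v1/latest?base=USD
# (the numbers here are invented for the example).
sample = json.loads("""
{
  "amount": 1.0,
  "base": "USD",
  "date": "2025-01-02",
  "rates": {"EUR": 0.91, "GBP": 0.79, "AUD": 1.48}
}
""")

# Look up the target currency inside the "rates" map.
rate = sample["rates"].get("EUR")
print(f"1 {sample['base']} = {rate} EUR (updated at {sample['date']})")
```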

Architecture Overview
Here’s what the system looks like:
- Model: Llama-3.3-70B-Instruct via Regolo.ai (OpenAI-compatible API)
- Agents SDK: Handles the agent logic, tool calling, and conversation management.
- Our Rates Tool: A simple async function that queries the Frankfurter API for currency rates.
- UI/Frontend: A lightweight Streamlit chat interface to interact with the agent.
Step-by-Step: Building the Agent
1. Configure the Regolo Model
The first step is to configure the OpenAI Agents SDK to use Regolo’s API instead of OpenAI’s.
from openai import AsyncOpenAI
BASE_URL = "https://api.regolo.ai/v1"
API_KEY = "YOUR-REGOLO-API-KEY"
MODEL_NAME = "Llama-3.3-70B-Instruct"
client = AsyncOpenAI(base_url=BASE_URL, api_key=API_KEY)
Then, define a custom model provider class that plugs into the Agents SDK:
from agents import (
    Agent,
    Model,
    ModelProvider,
    OpenAIChatCompletionsModel,
    RunConfig,
    Runner,
    function_tool,
    set_tracing_disabled,
)

class CustomModelProvider(ModelProvider):
    def get_model(self, model_name: str | None) -> Model:
        return OpenAIChatCompletionsModel(model=model_name or MODEL_NAME, openai_client=client)

CUSTOM_MODEL_PROVIDER = CustomModelProvider()
This tells the SDK to use Regolo (or any other OpenAI-compatible provider) as the backend for all chat completions.
2. Define the Tool: Currency Exchange Rate
Now, let’s define a tool that our agent can call. It fetches exchange rate data using the Frankfurter API:
import httpx

@function_tool
async def get_exchange_rate(base: str, target: str) -> str:
    """Retrieve the exchange rate between any two currencies, given their currency codes."""
    url = f"https://api.frankfurter.dev/v1/latest?base={base.upper()}"
    async with httpx.AsyncClient() as client:
        response = await client.get(url)
    if response.status_code != 200:
        return f"Error: unable to fetch data (status {response.status_code})"
    data = response.json()
    rate = data.get("rates", {}).get(target.upper())
    if rate is None:
        return f"Currency {target.upper()} not found."
    return f"1 {base.upper()} = {rate} {target.upper()} (updated at {data['date']})"
This tool is registered with the agent and can be invoked automatically whenever the user asks about exchange rates. The query can include currency codes, such as “What’s the exchange rate between USD and EUR?”, or plain natural language, like “Tell me the exchange rate between the Australian dollar and the British pound.” That’s the true strength of large language models combined with tool calling: the model can interpret natural language, extract the relevant information, and validate inputs without requiring strict formatting from the user.
3. Create the Agent
Now we instantiate the agent with a name, some instructions, and the tools it can use:
agent = Agent(
    name="CurrencyBot",
    instructions="You are a smart financial assistant that can, among other things, retrieve the exchange rate between any two currencies.",
    tools=[get_exchange_rate],
)
The agent uses the custom model provider under the hood:
async def run_agent(user_input: str) -> str:
    result = await Runner.run(
        agent,
        user_input,
        run_config=RunConfig(model_provider=CUSTOM_MODEL_PROVIDER),
    )
    return result.final_output
4. Streamlit Chat Interface
For a simple and fast frontend, we use Streamlit to create a chat UI where users can interact with the agent and receive real-time responses.
import asyncio

import streamlit as st

# Display chat history, handle input, and render bot responses
st.title(":material/currency_exchange: Currency Agent")

user_input = st.chat_input("Type your message here...")
if user_input:
    response = asyncio.run(run_agent(user_input))
    st.chat_message("user").markdown(user_input)
    st.chat_message("assistant").markdown(response)
To make the interaction with our CurrencyBot more intuitive, we can iterate on the user interface using Streamlit, which offers everything we need to create a clean, responsive chat experience.
Instead of a basic input/output loop, we enhance the UI with:
- Session handling: Keeps the conversation context alive between messages.
- Chat history: Displays the full exchange between the user and the agent.
- Clear chat button: Lets users reset the session with a single click.
- Loading indicators: Shows the agent is “thinking” while waiting for a response.
These improvements help simulate a more natural and fluid conversation, making it feel like you’re chatting with a real assistant.
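One detail worth noting before the full listing: Streamlit reruns your script on each interaction in a thread that may not have a running asyncio event loop, so the app below creates one explicitly instead of calling asyncio.run(). A minimal standalone sketch of that pattern (fake_agent here is a stand-in for run_agent, so the snippet runs without an API key):

```python
import asyncio

async def fake_agent(prompt: str) -> str:
    # Stand-in for run_agent(); just echoes the prompt.
    await asyncio.sleep(0)
    return f"echo: {prompt}"

# Create and install a dedicated event loop, as the Streamlit app does,
# since the current thread may not have one running.
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
result = loop.run_until_complete(fake_agent("hello"))
loop.close()
print(result)  # echo: hello
```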
I’ve included the full code below so you can run this interface locally and test your own agent integration without any hassle.
from __future__ import annotations

import asyncio
from typing import Optional

import httpx
import streamlit as st
from openai import AsyncOpenAI
from agents import (
    Agent,
    Model,
    ModelProvider,
    OpenAIChatCompletionsModel,
    RunConfig,
    Runner,
    function_tool,
    set_tracing_disabled,
)

# Regolo.ai configuration
BASE_URL = "https://api.regolo.ai/v1"
API_KEY = "YOUR-REGOLO-API-KEY"
MODEL_NAME = "Llama-3.3-70B-Instruct"

client = AsyncOpenAI(base_url=BASE_URL, api_key=API_KEY)
set_tracing_disabled(disabled=True)


class CustomModelProvider(ModelProvider):
    def get_model(self, model_name: str | None) -> Model:
        return OpenAIChatCompletionsModel(model=model_name or MODEL_NAME, openai_client=client)


CUSTOM_MODEL_PROVIDER = CustomModelProvider()


@function_tool
async def get_exchange_rate(base: str, target: str) -> str:
    """Retrieve the exchange rate between any two currencies, given their currency codes."""
    url = f"https://api.frankfurter.dev/v1/latest?base={base.upper()}"
    async with httpx.AsyncClient() as client:
        response = await client.get(url)
    if response.status_code != 200:
        return f"Error: unable to fetch data (status {response.status_code})"
    data = response.json()
    rate: Optional[float] = data.get("rates", {}).get(target.upper())
    if rate is None:
        return f"Currency {target.upper()} not found."
    rates = f"1 {base.upper()} = {rate} {target.upper()} (updated at {data['date']})"
    # Debug-only print so you can observe in the console whether the tool triggers correctly
    print(rates)
    return rates


async def run_agent(user_input: str) -> str:
    agent = Agent(
        name="CurrencyBot",
        instructions="You are a smart financial assistant that can, among other things, retrieve the exchange rate between any two currencies.",
        tools=[get_exchange_rate],
    )
    result = await Runner.run(
        agent,
        user_input,
        run_config=RunConfig(model_provider=CUSTOM_MODEL_PROVIDER),
    )
    return result.final_output


def main():
    # Initialize Streamlit session state
    if "chat_history" not in st.session_state:
        st.session_state.chat_history = []
    if "bot_ready" not in st.session_state:
        st.session_state.bot_ready = False
    if "pending_user_input" not in st.session_state:
        st.session_state.pending_user_input = None
    if "pending_bot_response" not in st.session_state:
        st.session_state.pending_bot_response = None

    st.title(":material/currency_exchange: Currency Agent", anchor=False)
    col1, col2 = st.columns([13, 3])
    with col1:
        st.subheader(":grey[I know updated exchange rates!]")
    with col2:
        if st.button(label="Clear chat", icon=":material/delete_history:", type="secondary"):
            clear_chat_history()

    user_input = st.chat_input("Type your message here...")
    if user_input:
        st.session_state.chat_history.append({"speaker": "user", "message": user_input})
        st.session_state.pending_user_input = user_input
        st.session_state.bot_ready = False
        st.session_state.pending_bot_response = None
        st.rerun()

    for chat in st.session_state.chat_history:
        if chat["speaker"] == "user":
            with st.chat_message("user", avatar=":material/person:", width="content"):
                st.markdown(chat["message"])
        elif chat["speaker"] == "bot":
            with st.chat_message("assistant", avatar=":material/smart_toy:", width="content"):
                st.markdown(chat["message"])

    if st.session_state.pending_user_input and not st.session_state.bot_ready:
        with st.status(":material/smart_toy: CurrencyBot is thinking...", state="running", expanded=False):
            loop = asyncio.new_event_loop()
            asyncio.set_event_loop(loop)
            response = loop.run_until_complete(run_agent(st.session_state.pending_user_input))
            st.session_state.pending_bot_response = response
            st.session_state.bot_ready = True
        st.rerun()

    if st.session_state.bot_ready and st.session_state.pending_bot_response:
        with st.chat_message("assistant", avatar=":material/smart_toy:", width="content"):
            def stream_chars(text):
                for char in text:
                    yield char

            st.write_stream(stream_chars(st.session_state.pending_bot_response))
        # Save bot response to history and clear flags
        st.session_state.chat_history.append({
            "speaker": "bot",
            "message": st.session_state.pending_bot_response,
        })
        st.session_state.bot_ready = False
        st.session_state.pending_user_input = None
        st.session_state.pending_bot_response = None


def clear_chat_history():
    st.session_state.chat_history = []


async def add_response_to_history(user_input):
    response = await run_agent(user_input)
    st.session_state.chat_history.append({"speaker": "bot", "message": response})


if __name__ == "__main__":
    main()
Remember to install all the required libraries with:
pip install streamlit openai openai-agents httpx
Then launch app.py with Streamlit:
streamlit run app.py
Why This Matters
This tutorial shows that you’re not locked into proprietary models when using the OpenAI Agents SDK. Thanks to Regolo.ai (and other OpenAI-compatible APIs), you can:
- Use open-source models like LLaMA 3.3
- Enable tool calling and advanced reasoning
- Build real-world apps with full control of the stack
With just a few lines of code, we created a functional, friendly currency assistant using the OpenAI Agents SDK but powered entirely by an open model.
This opens up opportunities for developers, researchers, and companies who want to retain flexibility, reduce costs, or self-host models for privacy reasons.