
    How to Build An AI Agent with Function Calling and GPT-5

    By ProfitlyAI | October 20, 2025


    AI Agents and Large Language Models (LLMs)

    Large language models (LLMs) are advanced AI systems built on deep neural networks such as transformers and trained on vast amounts of text to generate human-like language. LLMs like ChatGPT, Claude, Gemini, and Grok can tackle many challenging tasks and are used across fields such as science, healthcare, education, and finance.

    An AI agent extends the capabilities of LLMs to solve tasks that are beyond their pre-trained knowledge. An LLM can write a Python tutorial from what it learned during training. But if you ask it to book a flight, the task requires access to your calendar, web search, and the ability to take actions; these fall beyond the LLM's pre-trained knowledge. Some common actions include:

    • Weather forecast: The LLM connects to a web search tool to fetch the latest weather forecast.
    • Booking agent: An AI agent that can check a user's calendar, search the web to visit a booking site like Expedia to find available options for flights and hotels, present them to the user for confirmation, and complete the booking on behalf of the user.

    How an AI Agent Works

    AI agents form a system that uses a Large Language Model to plan, reason, and take steps to interact with its environment using tools suggested by the model's reasoning to solve a specific task.

    Basic Structure of an AI Agent

    Image Generated by Gemini
    • A Large Language Model (LLM): the LLM is the brain of an AI agent. It takes a user's prompt, plans and reasons through the request, and breaks the problem into steps that determine which tools it should use to complete the task.
    • A tool is the framework the agent uses to perform an action based on the plan and reasoning from the Large Language Model. If you ask an LLM to book a table for you at a restaurant, possible tools include a calendar to check your availability and a web search tool to access the restaurant website and make a reservation for you.

    Illustrated Decision Making of a Booking AI Agent

    Image Generated by ChatGPT

    AI agents can access different tools depending on the task. A tool can be a data store, such as a database. For example, a customer-support agent may access a customer's account details and purchase history and decide when to retrieve that information to help resolve an issue.

    AI agents are used to solve a wide range of tasks, and there are many powerful agents available. Coding agents, notably agentic IDEs such as Cursor, Windsurf, and GitHub Copilot, help engineers write and debug code faster and build projects quickly. CLI coding agents like Claude Code and Codex CLI can interact with a user's desktop and terminal to carry out coding tasks. ChatGPT supports agents that can perform actions such as booking reservations on a user's behalf. Agents are also integrated into customer support workflows to communicate with customers and resolve their issues.

    Function Calling

    Function calling is a technique for connecting a large language model (LLM) to external tools such as APIs or databases. It is used when building AI agents to connect LLMs to tools. In function calling, each tool is defined as a code function (for example, a weather API to fetch the latest forecast) together with a JSON schema that specifies the function's parameters and instructs the LLM on when and how to call the function for a given task.

    The type of function defined depends on the task the agent is designed to perform. For example, for a customer support agent we can define a function that extracts information from unstructured data, such as PDFs containing details about a business's products.

    In this post I will demonstrate how to use function calling to build a simple web search agent using GPT-5 as the large language model.

    Basic Structure of a Web Search Agent

    Image Generated by Gemini

    The main logic behind the web search agent:

    • Define a code function to handle the web search.
    • Define custom instructions that guide the large language model in deciding when to call the web search function based on the query. For example, if the query asks about the current weather, the web search agent will recognize the need to search the internet to get the latest weather reports. However, if the query asks it to write a tutorial about a programming language like Python, something it can answer from its pre-trained knowledge, it will not call the web search function and will respond directly instead.
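The routing idea above can be sketched in plain Python. Note this is only an illustration: in the real agent, GPT-5 decides from the tool schema whether to search; the keyword heuristic and hint list below are made-up assumptions, not part of the agent.

```python
# Illustrative sketch only: in the actual agent, the LLM decides when to call
# the web search tool. This keyword heuristic just mimics that routing idea.
TIME_SENSITIVE_HINTS = {"weather", "latest", "today", "current", "news", "price"}

def needs_web_search(query: str) -> bool:
    """Return True if the query likely requires up-to-date information."""
    words = set(query.lower().split())
    return bool(words & TIME_SENSITIVE_HINTS)

print(needs_web_search("What is the weather like in London today?"))  # True
print(needs_web_search("Write a Python tutorial"))                    # False
```

A query mentioning "weather" is routed to the search tool, while a tutorial request is answered directly, mirroring the two cases described above.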

    Prerequisites

    Create an OpenAI account and generate an API key
    1: Create an OpenAI account if you don't have one
    2: Generate an API key

    Set Up and Activate the Environment

    python3 -m venv env
    source env/bin/activate

    Export the OpenAI API Key

    export OPENAI_API_KEY="Your OpenAI API Key"

    Set Up Tavily for Web Search
    Tavily is a specialized web-search tool for AI agents. Create an account on Tavily.com, and once your profile is set up, an API key will be generated that you can copy into your environment. New accounts receive 1,000 free credits that can be used for up to 1,000 web searches.

    Export the Tavily API Key

    export TAVILY_API_KEY="Your Tavily API Key"

    Install Packages

    pip3 install openai
    pip3 install tavily-python

    Building a Web Search Agent with Function Calling, Step by Step

    Step 1: Create the Web Search Function with Tavily

    A web search function is implemented using Tavily, serving as the tool for function calling in the web search agent.

    from tavily import TavilyClient
    import os
    
    tavily = TavilyClient(api_key=os.getenv("TAVILY_API_KEY"))
    
    def web_search(query: str, num_results: int = 10):
        try:
            result = tavily.search(
                query=query,
                search_depth="basic",
                max_results=num_results,
                include_answer=False,       
                include_raw_content=False,
                include_images=False
            )
    
            results = result.get("results", [])
    
            return {
                "query": query,
                "results": results, 
                "sources": [
                    {"title": r.get("title", ""), "url": r.get("url", "")}
                    for r in results
                ]
            }
    
        except Exception as e:
            return {
                "error": f"Search error: {e}",
                "query": query,
                "results": [],
                "sources": [],
            }

    Web search function code breakdown

    Tavily is initialized with its API key. In the web_search function, the following steps are performed:

    • The Tavily search function is called to search the internet and retrieve the top 10 results.
    • The search results and their corresponding sources are returned.

    This returned output will serve as relevant context for the web search agent, which we will define later in this article, to fetch up-to-date information for queries (prompts) that require real-time data such as weather forecasts.
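The shape of that returned dict can be exercised without calling the Tavily API. The sketch below stubs the raw result list (the titles and URLs are made up for illustration) and applies the same source-extraction logic as web_search:

```python
# Sketch of the dict shape web_search returns, using a stubbed Tavily result
# list instead of a live API call. Sample titles/URLs are invented.
def build_search_output(query, raw_results):
    return {
        "query": query,
        "results": raw_results,
        "sources": [
            {"title": r.get("title", ""), "url": r.get("url", "")}
            for r in raw_results
        ],
    }

stub = [{"title": "London weather", "url": "https://example.com/london",
         "content": "Overcast, 18 degrees C"}]
out = build_search_output("weather in London", stub)
print(out["sources"])  # [{'title': 'London weather', 'url': 'https://example.com/london'}]
```

Note that "sources" keeps only the title and URL of each result; the full page content stays in "results" for the model to read.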

    Step 2: Create the Tool Schema

    The tool schema defines custom instructions for an AI model on when it should call a tool, in this case the web_search function. It also specifies the conditions and actions to be taken when the model calls a tool. A JSON tool schema is defined below based on the OpenAI tool schema structure.

    tool_schema = [
        {
            "type": "function",
            "name": "web_search",
    
        "description": """Execute a web search to fetch up-to-date information. Synthesize a concise, 
            self-contained answer from the content of the results of the visited pages.
            Fetch pages, extract text, and provide the best available result while citing 1-3 sources (title + URL). 
            If sources conflict, surface the uncertainty and prefer the most recent evidence.
            """,
    
            "strict": True,
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "Query to be searched on the web.",
                    },
                },
                "required": ["query"],
                "additionalProperties": False
            },
        },
    ]
    

    Tool Schema Properties

    • type: Specifies that the type of tool is a function.
    • name: The name of the function that will be used for the tool call, which is web_search.
    • description: Describes what the AI model should do when calling the web search tool. It instructs the model to search the internet using the web_search function to fetch up-to-date information and extract relevant details to generate the best response.
    • strict: Set to true, this property instructs the LLM to strictly follow the tool schema's instructions.
    • parameters: Defines the parameters that will be passed into the web_search function. In this case, there is only one parameter, query, which represents the search term to look up on the internet.
    • required: Instructs the LLM that query is a mandatory parameter for the web_search function.
    • additionalProperties: Set to false, meaning the tool's arguments object cannot include any parameters other than those defined under parameters.properties.
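When the model decides to call the tool, it emits the arguments as a JSON string. The sketch below parses an example payload (the query text is invented) and manually mirrors the checks that strict mode guarantees for this schema:

```python
import json

# With "strict": True, the model's arguments object must contain exactly the
# properties declared under parameters. Example payload string is invented.
schema_params = {
    "required": ["query"],
    "properties": {"query": {"type": "string"}},
}

raw_arguments = '{"query": "latest iPhone model"}'
args = json.loads(raw_arguments)

# Minimal manual check mirroring the strict-mode guarantees:
assert all(k in args for k in schema_params["required"])      # required present
assert set(args) <= set(schema_params["properties"])          # no extras
print(args["query"])  # latest iPhone model
```

Because strict mode enforces this server-side, the agent code can safely unpack `args` straight into web_search(**args).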

    Step 3: Create the Web Search Agent Using GPT-5 and Function Calling

    Finally, I will build an agent that we can chat with, which can search the web when it needs up-to-date information. I will use GPT-5-mini, a fast and accurate model from OpenAI, together with function calling to invoke the tool schema and the web search function already defined.

    from datetime import datetime, timezone
    import json
    from openai import OpenAI
    import os 
    
    client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
    
    # tracker for the last model response id to maintain conversation state 
    prev_response_id = None
    
    # a list for storing tool results from the function call 
    tool_results = []
    
    while True:
        # if the tool results list is empty, prompt for a message 
        if len(tool_results) == 0:
            user_message = input("User: ")
    
            # commands for exiting the chat
            if isinstance(user_message, str) and user_message.strip().lower() in {"exit", "q"}:
                print("Exiting chat. Goodbye!")
                break
    
        else:
            user_message = tool_results.copy()
        
            # clear the tool results for the next call 
            tool_results = []
    
        # obtain the current date, passed to the model as an instruction to assist decision making
        today_date = datetime.now(timezone.utc).date().isoformat()     
    
        response = client.responses.create(
            model="gpt-5-mini",
            input=user_message,
            instructions=f"Current date is {today_date}.",
            tools=tool_schema,
            previous_response_id=prev_response_id,
            text={"verbosity": "low"},
            reasoning={
                "effort": "low",
            },
            store=True,
        )
        
        prev_response_id = response.id
    
        # handle the model response's output 
        for output in response.output:
            
            if output.type == "reasoning":
                print("Assistant: ", "Reasoning ....")
    
                for reasoning_summary in output.summary:
                    print("Assistant: ", reasoning_summary)
    
            elif output.type == "message":
                for item in output.content:
                    print("Assistant: ", item.text)
    
            elif output.type == "function_call":
                # look up the function by name 
                function = globals().get(output.name)
                # parse function arguments 
                args = json.loads(output.arguments)
                function_response = function(**args)
                tool_results.append(
                    {
                        "type": "function_call_output",
                        "call_id": output.call_id,
                        "output": json.dumps(function_response)
                    }
                )

    Step-by-Step Code Breakdown

    from openai import OpenAI
    import os 
    
    client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
    prev_response_id = None
    tool_results = []
    • Initialize the OpenAI client with an API key.
    • Initialize two variables, prev_response_id and tool_results. prev_response_id keeps track of the model's response to maintain conversation state, and tool_results is a list that stores outputs returned from the web_search function call.

    The chat runs inside the loop. A user enters a message, the model called with the tool schema accepts the message, reasons over it, and decides whether to call the web search tool; the tool's output is then passed back to the model, which generates a context-aware response. This continues until the user exits the chat.

    Code Walkthrough of the Loop

    if len(tool_results) == 0:
        user_message = input("User: ")
        if isinstance(user_message, str) and user_message.strip().lower() in {"exit", "q"}:
            print("Exiting chat. Goodbye!")
            break
    
    else:
        user_message = tool_results.copy()
        tool_results = []
    
    today_date = datetime.now(timezone.utc).date().isoformat()     
    
    response = client.responses.create(
        model="gpt-5-mini",
        input=user_message,
        instructions=f"Current date is {today_date}.",
        tools=tool_schema,
        previous_response_id=prev_response_id,
        text={"verbosity": "low"},
        reasoning={
            "effort": "low",
        },
        store=True,
    )
    
    prev_response_id = response.id
    • Checks if tool_results is empty. If it is, the user is prompted to type in a message, with an option to quit using exit or q.
    • If tool_results is not empty, user_message is set to the collected tool outputs to be sent to the model. tool_results is cleared to avoid resending the same tool outputs on the next loop iteration.
    • The current date (today_date) is obtained so the model can make time-aware decisions.
    • Calls client.responses.create to generate the model's response; it accepts the following parameters:
      • model: set to gpt-5-mini.
      • input: accepts the user's message.
      • instructions: set to the current date (today_date).
      • tools: set to the tool schema that was defined earlier.
      • previous_response_id: set to the previous response's id so the model can maintain conversation state.
      • text: verbosity is set to low to keep the model's response concise.
      • reasoning: GPT-5-mini is a reasoning model; the reasoning effort is set to low for faster responses. For more complex tasks we can set it to high.
      • store: tells the API to store the current response so it can be retrieved later, which helps with conversation continuity.
    • prev_response_id is set to the current response id so the next call can thread onto the same conversation.
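The date instruction is worth isolating, since it is what lets the model interpret words like "latest" or "current". A minimal sketch of how that instruction string is built:

```python
from datetime import datetime, timezone
import re

# The agent passes today's UTC date to the model so it can reason about
# time-sensitive queries. This mirrors the instruction built in the loop.
today_date = datetime.now(timezone.utc).date().isoformat()
instructions = f"Current date is {today_date}."

print(instructions)
# today_date is always an ISO date like 2025-10-20
assert re.fullmatch(r"\d{4}-\d{2}-\d{2}", today_date)
```

Using UTC keeps the date stable regardless of the server's local timezone; isoformat() yields the YYYY-MM-DD form shown in the assertion.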
    for output in response.output:
        if output.type == "reasoning":
            print("Assistant: ", "Reasoning ....")
    
            for reasoning_summary in output.summary:
                print("Assistant: ", reasoning_summary)
    
        elif output.type == "message":
            for item in output.content:
                print("Assistant: ", item.text)
    
        elif output.type == "function_call":
            # look up the function by name 
            function = globals().get(output.name)
            # parse function arguments 
            args = json.loads(output.arguments)
            function_response = function(**args)
            # append the function call's id and the function's response to the tool results list 
            tool_results.append(
                {
                    "type": "function_call_output",
                    "call_id": output.call_id,
                    "output": json.dumps(function_response)
                }
            )

    This processes the model's response output and does the following:

    • If the output type is reasoning, print each item in the reasoning summary.
    • If the output type is message, iterate through the content and print each text item.
    • If the output type is a function call, obtain the function's name, parse its arguments, and pass them to the function (web_search) to generate a response. In this case, the web search response contains up-to-date information relevant to the user's message. Finally, append the function call's response and function call id to tool_results. This lets the next loop iteration send the tool result back to the model.
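The function-call branch can be tested without the API by mocking a response output item. The SimpleNamespace object and the stub tool below are illustrative assumptions standing in for the real Responses API objects and the real web_search:

```python
import json
from types import SimpleNamespace

# Stub tool standing in for the real web_search (no network call).
def web_search(query: str):
    return {"query": query, "results": [], "sources": []}

# Fake function_call output item mimicking the fields the dispatch code reads.
output = SimpleNamespace(
    type="function_call",
    name="web_search",
    call_id="call_123",
    arguments='{"query": "weather in London"}',
)

tool_results = []
if output.type == "function_call":
    function = globals().get(output.name)    # look up the tool by name
    args = json.loads(output.arguments)      # parse the JSON arguments
    tool_results.append({
        "type": "function_call_output",
        "call_id": output.call_id,
        "output": json.dumps(function(**args)),
    })

print(tool_results[0]["call_id"])  # call_123
```

The appended dict carries the same call_id the model emitted, which is how the API matches the tool result back to the pending function call on the next request.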

    Full Code for the Web Search Agent

    from datetime import datetime, timezone
    import json
    from openai import OpenAI
    import os 
    from tavily import TavilyClient
    
    tavily = TavilyClient(api_key=os.getenv("TAVILY_API_KEY"))
    
    def web_search(query: str, num_results: int = 10):
        try:
            result = tavily.search(
                query=query,
                search_depth="basic",
                max_results=num_results,
                include_answer=False,       
                include_raw_content=False,
                include_images=False
            )
    
            results = result.get("results", [])
    
            return {
                "query": query,
                "results": results, 
                "sources": [
                    {"title": r.get("title", ""), "url": r.get("url", "")}
                    for r in results
                ]
            }
    
        except Exception as e:
            return {
                "error": f"Search error: {e}",
                "query": query,
                "results": [],
                "sources": [],
            }
    
    
    tool_schema = [
        {
            "type": "function",
            "name": "web_search",
            "description": """Execute a web search to fetch up-to-date information. Synthesize a concise, 
            self-contained answer from the content of the results of the visited pages.
            Fetch pages, extract text, and provide the best available result while citing 1-3 sources (title + URL).
            If sources conflict, surface the uncertainty and prefer the most recent evidence.
            """,
            "strict": True,
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "Query to be searched on the web.",
                    },
                },
                "required": ["query"],
                "additionalProperties": False
            },
        },
    ]
    
    client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
    
    # tracker for the last model response id to maintain conversation state 
    prev_response_id = None
    
    # a list for storing tool results from the function call 
    tool_results = []
    
    while True:
        # if the tool results list is empty, prompt for a message 
        if len(tool_results) == 0:
            user_message = input("User: ")
    
            # commands for exiting the chat
            if isinstance(user_message, str) and user_message.strip().lower() in {"exit", "q"}:
                print("Exiting chat. Goodbye!")
                break
    
        else:
            # set the user's message to the tool results to be sent to the model 
            user_message = tool_results.copy()
        
            # clear the tool results for the next call 
            tool_results = []
    
        # obtain the current date, passed to the model as an instruction to assist decision making
        today_date = datetime.now(timezone.utc).date().isoformat()     
    
        response = client.responses.create(
            model="gpt-5-mini",
            input=user_message,
            instructions=f"Current date is {today_date}.",
            tools=tool_schema,
            previous_response_id=prev_response_id,
            text={"verbosity": "low"},
            reasoning={
                "effort": "low",
            },
            store=True,
        )
        
        prev_response_id = response.id
    
    
        # handle the model response's output 
        for output in response.output:
            
            if output.type == "reasoning":
                print("Assistant: ", "Reasoning ....")
    
                for reasoning_summary in output.summary:
                    print("Assistant: ", reasoning_summary)
    
            elif output.type == "message":
                for item in output.content:
                    print("Assistant: ", item.text)
    
            # if the output type is a function call, append the function call's results to the tool results list
            elif output.type == "function_call":
                # look up the function by name 
                function = globals().get(output.name)
                # parse function arguments 
                args = json.loads(output.arguments)
                function_response = function(**args)
                # append the function call's id and the function's response to the tool results list 
                tool_results.append(
                    {
                        "type": "function_call_output",
                        "call_id": output.call_id,
                        "output": json.dumps(function_response)
                    }
                )

    When you run the code, you can chat with the agent and ask questions that require the latest information, such as the current weather or the latest product releases. The agent responds with up-to-date information together with the corresponding sources from the web. Below is a sample output from the terminal.

    User: What is the weather like in London today?
    Assistant:  Reasoning ....
    Assistant:  Reasoning ....
    Assistant:  Right now in London: overcast, about 18°C (64°F), humidity ~88%, light SW wind ~16 km/h, no precipitation reported. Source: WeatherAPI (current conditions), https://www.weatherapi.com/
    
    User: What is the latest iPhone model?
    Assistant:  Reasoning ....
    Assistant:  Reasoning ....
    Assistant:  The latest iPhone models are the iPhone 17 lineup (including iPhone 17, iPhone 17 Pro, and iPhone 17 Pro Max) and the new iPhone Air, announced by Apple on Sept 9, 2025. Source: Apple Newsroom, https://www.apple.com/newsroom/2025/09/apple-debuts-iphone-17/
    
    User: Multiply 500 by 12.
    Assistant:  Reasoning ....
    Assistant:  6000
    User: exit
    Exiting chat. Goodbye!

    You can see the results with their corresponding web sources. When you ask the agent to perform a task that doesn't require up-to-date information, such as math calculations or writing code, it responds directly without any web search.

    Note: The web search agent is a simple, single-tool agent. Advanced agentic systems orchestrate multiple specialized tools and use efficient memory to maintain context, plan, and solve more complex tasks.
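Extending the agent toward multiple tools mostly means adding entries to the schema list. The calculator tool below is a hypothetical example (not from this article) showing how a second function definition would sit alongside web_search; the model then chooses between tools by name:

```python
# Hypothetical multi-tool schema: web_search plus an invented calculator tool,
# shown only to illustrate how additional tools are registered.
multi_tool_schema = [
    {
        "type": "function",
        "name": "web_search",
        "description": "Search the web for up-to-date information.",
        "strict": True,
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
            "additionalProperties": False,
        },
    },
    {
        "type": "function",
        "name": "calculator",
        "description": "Evaluate a basic arithmetic expression.",
        "strict": True,
        "parameters": {
            "type": "object",
            "properties": {"expression": {"type": "string"}},
            "required": ["expression"],
            "additionalProperties": False,
        },
    },
]

print([t["name"] for t in multi_tool_schema])  # ['web_search', 'calculator']
```

The dispatch loop shown earlier already supports this pattern, since it looks up whichever function name the model emits.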

    Conclusion

    In this post I explained how an AI agent works and how it extends the capabilities of a large language model to interact with its environment, perform actions, and solve tasks by using tools. I also explained function calling and how it enables LLMs to call tools. I demonstrated how to create a tool schema for function calling that defines when and how an LLM should call a tool to perform an action. I defined a web search function using Tavily to fetch information from the web and then showed step by step how to build a web search agent using function calling and GPT-5-mini as the LLM. In the end, we built a web search agent capable of retrieving up-to-date information from the internet to answer user queries.

    Check out my GitHub repo, GenAI-Courses, where I have published more courses on various Generative AI topics. It also includes a guide on building an Agentic RAG using function calling.

    Reach out to me via:

    Email: [email protected]

    LinkedIn: https://www.linkedin.com/in/ayoola-olafenwa-003b901a9/

    References

    https://platform.openai.com/docs/guides/function-calling?api-mode=responses

    https://docs.tavily.com/documentation/api-reference/endpoint/search


