
    AI Agents Processing Time Series and Large Dataframes

    By ProfitlyAI | April 23, 2025 | 13 Mins Read


    Agents are AI systems, powered by LLMs, that can reason about their objectives and take actions to achieve a final goal. They're designed not just to respond to queries, but to orchestrate a sequence of operations, including processing data (i.e. dataframes and time series). This ability unlocks numerous real-world applications for democratizing access to data analysis, such as automating reporting, no-code queries, and help with data cleaning and manipulation.

    Agents can interact with dataframes in two different ways:

    • with natural language: the LLM reads the table as a string and tries to make sense of it based on its knowledge base
    • by generating and executing code: the Agent activates tools to process the dataset as an object.

    So, by combining the power of NLP with the precision of code execution, AI Agents enable a broader range of users to interact with complex datasets and derive insights.
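    As a minimal sketch of the difference between the two modes (the toy dataframe and the `generated_code` string here are purely illustrative, not part of the tutorial's dataset):

```python
import pandas as pd

df = pd.DataFrame({"month": ["2023-01", "2023-02"], "sales": [10, 12]})

# 1) natural language: the table is serialized to text and read by the LLM
prompt = f"Analyze this dataset:\n{df.to_string(index=False)}"

# 2) code execution: the Agent writes Python that runs on the object itself
generated_code = "print(df['sales'].mean())"
exec(generated_code)
```

    The first mode only ever shows the model characters; the second actually computes on the dataframe.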

    In this tutorial, I'm going to show how to process dataframes and time series with AI Agents. I'll present some useful Python code that can easily be applied to other similar cases (just copy, paste, run) and walk through every line of code with comments so that you can replicate this example (link to the full code at the end of the article).

    Setup

    Let's start by setting up Ollama (pip install ollama==0.4.7), a library that allows users to run open-source LLMs locally, without needing cloud-based services, giving more control over data privacy and performance. Since it runs locally, any conversation data doesn't leave your machine.

    First of all, you need to download Ollama from the website.

    Then, in the prompt shell of your laptop, use the command to download the chosen LLM. I'm going with Alibaba's Qwen, as it's both smart and lightweight.
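    The pull command looks like this (`qwen2.5` matches the model name used in the Python code below; pick the tag that fits your hardware):

```shell
# download the Qwen 2.5 model weights locally (runs once, cached afterwards)
ollama pull qwen2.5
```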

    After the download is completed, you can move on to Python and start writing code.

    import ollama
    llm = "qwen2.5"

    Let's test the LLM:

    stream = ollama.generate(model=llm, prompt='''what time is it?''', stream=True)
    for chunk in stream:
        print(chunk['response'], end='', flush=True)

    Time Series

    A time series is a sequence of data points measured over time, often used for analysis and forecasting. It allows us to see how variables change over time, and it's used to identify trends and seasonal patterns.

    I'm going to generate a fake time series dataset to use as an example.

    import pandas as pd
    import numpy as np
    import matplotlib.pyplot as plt
    
    ## create data
    np.random.seed(1) #<--for reproducibility
    length = 30
    ts = pd.DataFrame(data=np.random.randint(low=0, high=15, size=length),
                      columns=['y'],
                      index=pd.date_range(start='2023-01-01', freq='MS', periods=length).strftime('%Y-%m'))
    
    ## plot
    ts.plot(kind="bar", figsize=(10,3), legend=False, color="black").grid(axis='y')

    Usually, time series datasets have a very simple structure, with the main variable as a column and the time as the index.

    Before transforming it into a string, I want to make sure that everything is placed under a column, so that we don't lose any piece of information.

    dtf = ts.reset_index().rename(columns={"index":"date"})
    dtf.head()

    Then, I shall change the data type from dataframe to dictionary.

    data = dtf.to_dict(orient='records')
    data[0:5]

    Finally, from dictionary to string.

    str_data = "\n".join([str(row) for row in data])
    str_data

    Now that we have a string, it can be included in a prompt that any language model is able to process. When you paste a dataset into a prompt, the LLM reads the data as plain text, but can still understand the structure and meaning based on patterns seen during training.

    prompt = f'''
    Analyze this dataset, it contains monthly sales data of an online retail product:
    {str_data}
    '''

    We can simply start a chat with the LLM. Please note that, right now, this is not an Agent, as it doesn't have any Tool; we're just using the language model. While it doesn't process numbers like a computer, the LLM can recognize column names, time-based patterns, trends, and outliers, especially with smaller datasets. It can simulate analysis and explain findings, but it won't perform precise calculations independently, as it's not executing code like an Agent.

    messages = [{"role":"system", "content":prompt}]
    
    while True:
        ## User
        q = input('🙂 >')
        if q == "quit":
            break
        messages.append( {"role":"user", "content":q} )
       
        ## Model
        agent_res = ollama.chat(model=llm, messages=messages, tools=[])
        res = agent_res["message"]["content"]
       
        ## Response
        print("👽 >", f"\x1b[1;30m{res}\x1b[0m")
        messages.append( {"role":"assistant", "content":res} )

    The LLM recognizes numbers and understands the general context, the same way it might understand a recipe or a line of code. 

    As you can see, using LLMs to analyze time series is great for quick and conversational insights.

    Agent

    LLMs are good for brainstorming and light exploration, while an Agent can run code. Therefore, it can handle more complex tasks like plotting, forecasting, and anomaly detection. So, let's create the Tools.

    Sometimes, it can be more effective to treat the “final answer” as a Tool. For example, if the Agent does multiple actions to generate intermediate results, the final answer can be thought of as the Tool that integrates all of this information into a cohesive response. By designing it this way, you have more customization and control over the results.

    def final_answer(text:str) -> str:
        return text
    
    tool_final_answer = {'type':'function', 'function':{
      'name': 'final_answer',
      'description': 'Returns a natural language response to the user',
      'parameters': {'type': 'object',
                    'required': ['text'],
                    'properties': {'text': {'type':'str', 'description':'natural language response'}}
    }}}
    
    final_answer(text="hi")

    Then, the coding Tool.

    import io
    import contextlib
    
    def code_exec(code:str) -> str:
        output = io.StringIO()
        with contextlib.redirect_stdout(output):
            try:
                exec(code)
            except Exception as e:
                print(f"Error: {e}")
        return output.getvalue()
    
    tool_code_exec = {'type':'function', 'function':{
      'name': 'code_exec',
      'description': 'Execute python code. Always use the function print() to get the output.',
      'parameters': {'type': 'object',
                    'required': ['code'],
                    'properties': {
                        'code': {'type':'str', 'description':'code to execute'},
    }}}}
    
    code_exec("from datetime import datetime; print(datetime.now().strftime('%H:%M'))")

    Furthermore, I shall add a couple of utility functions for Tool usage and to run the Agent.

    dic_tools = {"final_answer":final_answer, "code_exec":code_exec}
    
    # Utils
    def use_tool(agent_res:dict, dic_tools:dict) -> dict:
        ## use tool
        if "tool_calls" in agent_res["message"].keys():
            for tool in agent_res["message"]["tool_calls"]:
                t_name, t_inputs = tool["function"]["name"], tool["function"]["arguments"]
                if f := dic_tools.get(t_name):
                    ### calling tool
                    print('🔧 >', f"\x1b[1;31m{t_name} -> Inputs: {t_inputs}\x1b[0m")
                    ### tool output
                    t_output = f(**tool["function"]["arguments"])
                    print(t_output)
                    ### final res
                    res = t_output
                else:
                    print('🤬 >', f"\x1b[1;31m{t_name} -> NotFound\x1b[0m")
        ## don't use tool
        if agent_res['message']['content'] != '':
            res = agent_res["message"]["content"]
            t_name, t_inputs = '', ''
        return {'res':res, 'tool_used':t_name, 'inputs_used':t_inputs}

    When the Agent is trying to solve a task, I want it to keep track of the Tools that have been used, the inputs that it tried, and the results it gets. The iteration should stop only when the model is ready to give the final answer.

    def run_agent(llm, messages, available_tools):
        tool_used, local_memory = '', ''
        while tool_used != 'final_answer':
            ### use tools
            try:
                agent_res = ollama.chat(model=llm, messages=messages,
                                        tools=[v for v in available_tools.values()])
                dic_res = use_tool(agent_res, dic_tools)
                res, tool_used, inputs_used = dic_res["res"], dic_res["tool_used"], dic_res["inputs_used"]
            ### error
            except Exception as e:
                print("⚠️ >", e)
                res = f"I tried to use {tool_used} but it didn't work. I will try something else."
                print("👽 >", f"\x1b[1;30m{res}\x1b[0m")
                messages.append( {"role":"assistant", "content":res} )
            ### update memory
            if tool_used not in ['','final_answer']:
                local_memory += f"\nTool used: {tool_used}.\nInput used: {inputs_used}.\nOutput: {res}"
                messages.append( {"role":"assistant", "content":local_memory} )
                available_tools.pop(tool_used)
                if len(available_tools) == 1:
                    messages.append( {"role":"user", "content":"now activate the tool final_answer."} )
            ### tools not used
            if tool_used == '':
                break
        return res

    Regarding the coding Tool, I've noticed that Agents tend to recreate the dataframe at every step. So I'll use a memory reinforcement to remind the model that the dataset already exists, a trick commonly used to get the desired behaviour. Ultimately, memory reinforcements help you get more meaningful and effective interactions.

    # Start a chat
    messages = [{"role":"system", "content":prompt}]
    memory = '''
    The dataset already exists and it's called 'dtf', don't create a new one.
    '''
    while True:
        ## User
        q = input('🙂 >')
        if q == "quit":
            break
        messages.append( {"role":"user", "content":q} )
    
        ## Memory
        messages.append( {"role":"user", "content":memory} )
       
        ## Model
        available_tools = {"final_answer":tool_final_answer, "code_exec":tool_code_exec}
        res = run_agent(llm, messages, available_tools)
       
        ## Response
        print("👽 >", f"\x1b[1;30m{res}\x1b[0m")
        messages.append( {"role":"assistant", "content":res} )

    Creating a plot is something that the LLM alone can’t do. But keep in mind that even if Agents can create images, they can’t see them, because after all, the engine is still a language model. So the user is the only one who visualises the plot.

    The Agent is using the library statsmodels to train a model and forecast the time series. 

    Large Dataframes

    LLMs have limited memory, which restricts how much information they can process at once; even the most advanced models have token limits (a few hundred pages of text). Additionally, LLMs don't retain memory across sessions unless a retrieval system is integrated. In practice, to effectively work with large dataframes, developers often use strategies like chunking, RAG, vector databases, and summarizing content before feeding it into the model.
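    As a minimal sketch of the summarize-before-prompting idea (the chunk size and the statistics chosen are arbitrary, for illustration only):

```python
import numpy as np
import pandas as pd

# toy "large" dataframe; in practice this could be millions of rows
np.random.seed(0)
df = pd.DataFrame({"score": np.random.uniform(50, 100, 10_000)})

# summarize fixed-size chunks so the text fits in the context window
chunk_size = 2_000
summaries = []
for start in range(0, len(df), chunk_size):
    chunk = df.iloc[start:start + chunk_size]
    summaries.append(f"rows {start}-{start + len(chunk) - 1}: "
                     f"mean={chunk['score'].mean():.1f}, max={chunk['score'].max():.1f}")

context = "\n".join(summaries)  # 5 short lines instead of 10,000 rows
print(context)
```

    The model then reasons over the compact `context` string rather than the raw rows.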

    Let's create a large dataset to play with.

    import random
    import string
    
    length = 1000
    
    dtf = pd.DataFrame(data={
        'Id': [''.join(random.choices(string.ascii_letters, k=5)) for _ in range(length)],
        'Age': np.random.randint(low=18, high=80, size=length),
        'Score': np.random.uniform(low=50, high=100, size=length).round(1),
        'Status': np.random.choice(['Active','Inactive','Pending'], size=length)
    })
    
    dtf.tail()

    I'll add a web-searching Tool, so that, with the ability to execute Python code and search the internet, a general-purpose AI gains access to all the available knowledge and can make data-driven decisions.

    In Python, the easiest way to create a web-searching Tool is with the well-known private browser DuckDuckGo (pip install duckduckgo-search==6.3.5). You can directly use the original library or import the LangChain wrapper (pip install langchain-community==0.3.17).

    from langchain_community.tools import DuckDuckGoSearchResults
    
    def search_web(query:str) -> str:
      return DuckDuckGoSearchResults(backend="news").run(query)
    
    tool_search_web = {'type':'function', 'function':{
      'name': 'search_web',
      'description': 'Search the web',
      'parameters': {'type': 'object',
                    'required': ['query'],
                    'properties': {
                        'query': {'type':'str', 'description':'the topic or subject to search on the web'},
    }}}}
    
    search_web(query="nvidia")

    In total, the Agent now has 3 tools.

    dic_tools = {'final_answer':final_answer,
                 'search_web':search_web,
                 'code_exec':code_exec}

    Since I can't add the full dataframe in the prompt, I shall feed only the first 10 rows so that the LLM can understand the general context of the dataset. Additionally, I'll specify where to find the full dataset.

    str_data = "\n".join([str(row) for row in dtf.head(10).to_dict(orient='records')])
    
    prompt = f'''
    You are a Data Analyst, you will be given a task to solve as best you can.
    You have access to the following tools:
    - tool 'final_answer' to return a text response.
    - tool 'code_exec' to execute Python code.
    - tool 'search_web' to search for information on the internet.
    
    If you use the 'code_exec' tool, remember to always use the function print() to get the output.
    The dataset already exists and it's called 'dtf', don't create a new one.
    
    This dataset contains credit scores for each customer of the bank. Here are the first rows:
    {str_data}
    '''

    Finally, we can run the Agent.

    messages = [{"role":"system", "content":prompt}]
    memory = '''
    The dataset already exists and it's called 'dtf', don't create a new one.
    '''
    while True:
        ## User
        q = input('🙂 >')
        if q == "quit":
            break
        messages.append( {"role":"user", "content":q} )
    
        ## Memory
        messages.append( {"role":"user", "content":memory} )
       
        ## Model
        available_tools = {"final_answer":tool_final_answer, "code_exec":tool_code_exec, "search_web":tool_search_web}
        res = run_agent(llm, messages, available_tools)
       
        ## Response
        print("👽 >", f"\x1b[1;30m{res}\x1b[0m")
        messages.append( {"role":"assistant", "content":res} )

    In this interaction, the Agent used the coding Tool properly. Now, I want to make it utilize the other tool as well.

    At last, I need the Agent to put together all the pieces of information obtained so far from this chat. 

    Conclusion

    This article has been a tutorial demonstrating how to build, from scratch, Agents that process time series and large dataframes. We covered both ways that models can interact with the data: through natural language, where the LLM interprets the table as a string using its knowledge base, and by generating and executing code, leveraging tools to process the dataset as an object.

    Full code for this article: GitHub

    I hope you enjoyed it! Feel free to contact me for questions and feedback, or just to share your interesting projects.

    👉 Let’s Connect 👈


