    Automating Ticket Creation in Jira With the OpenAI Agents SDK: A Step-by-Step Guide

    By ProfitlyAI · July 24, 2025


    Imagine ending a meeting with a colleague and already having all of the discussed items in your project-management tool. No need to write anything down during the meeting, nor to manually create the corresponding tickets! That was the idea behind this short experimental project.

    In this step-by-step guide we will create the Python application "TaskPilot" using OpenAI's Agents SDK to automatically create Jira issues from a meeting transcript.

    The Problem: From Conversation to Actionable Tasks

    Given the transcript of a meeting, automatically create issues in a Jira project corresponding to what was discussed in the meeting.

    The Solution: Automating with OpenAI Agents

    Using the OpenAI Agents SDK we will implement an agent workflow that:

    1. Receives and reads a meeting transcript.
    2. Uses an AI agent to extract action items from the conversation.
    3. Uses another AI agent to create Jira issues from those action items.
    Agent flow: Image created by the author

    The OpenAI Agents SDK

    The OpenAI Agents SDK is a Python library for creating AI agents programmatically that can interact with tools, use MCP servers, or hand off tasks to other agents.

    Here are some of the key features of the SDK:

    • Agent Loop: A built-in agent loop that handles the back-and-forth communication with the LLM until the agent is done with its task.
    • Function Tools: Turns any Python function into a tool, with automatic schema generation and Pydantic-powered validation.
    • MCP Support: Allows agents to use MCP servers to extend their ability to interact with the outside world.
    • Handoffs: Allows agents to delegate tasks to other agents depending on their expertise/role.
    • Guardrails: Validates the inputs and outputs of the agents. Aborts execution early if the agent receives invalid input.
    • Sessions: Automatically manages the conversation history. Ensures that the agents have the context they need to perform their tasks.
    • Tracing: Provides a tracing context manager that lets you visualize the entire execution flow of the agents, making it easy to debug and understand what is happening under the hood.
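
    To get a feel for the basic Agent/Runner pattern before building TaskPilot, here is a minimal sketch. The agent name, instructions and model are purely illustrative, and it assumes the openai-agents package is installed and OPENAI_API_KEY is set in your environment:

    # minimal sketch: the core Agent/Runner pattern (illustrative, not part of TaskPilot)
    import asyncio
    from agents import Agent, Runner

    haiku_agent = Agent(
        name="Haiku Writer",                                  # illustrative name
        instructions="Write a haiku about the given topic.",  # illustrative prompt
        model="o4-mini",                                      # any supported model name
    )

    async def main():
        # Runner.run drives the agent loop until the agent produces a final output
        result = await Runner.run(haiku_agent, input="autumn in Kyoto")
        print(result.final_output)

    asyncio.run(main())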

    Now, let’s dive into the implementation! 


    Implementation

    We will implement our project in 8 simple steps:

    1. Setting up the project structure
    2. The TaskPilotRunner
    3. Defining our data models
    4. Creating the agents
    5. Providing tools
    6. Configuring the application
    7. Bringing it all together in main.py
    8. Monitoring our runs in the OpenAI Dev Platform

    Let's get hands-on!

    Step 1: Setting Up the Project Structure

    First, let's create the basic structure of our project:

    • The taskpilot directory: will contain our main application logic.
    • The local_agents directory: will contain the definitions of the agents we use in this project ("local_agents" so that there is no interference with the OpenAI library package agents).
    • The utils directory: for helper functions, a config parser and data models.
    taskpilot_repo/
    ├── config.yml
    ├── .env
    ├── README.md
    ├── taskpilot/
    │   ├── main.py
    │   ├── taskpilot_runner.py
    │   ├── local_agents/
    │   │   ├── __init__.py
    │   │   ├── action_items_extractor.py
    │   │   └── tickets_creator.py
    │   └── utils/
    │       ├── __init__.py
    │       ├── agents_tools.py
    │       ├── config_parser.py
    │       ├── jira_interface_functions.py
    │       └── models.py

    Step 2: The TaskPilotRunner

    The TaskPilotRunner class in taskpilot/taskpilot_runner.py will be the heart of our application. It orchestrates the entire workflow, extracting action items from the meeting transcript and then creating the Jira tickets from those action items. At the same time it activates the built-in tracing from the Agents SDK to collect a record of events during the agent run, which helps with debugging and monitoring the agent workflows.

    Let's start with the implementation:

    • In the __init__() method we create the two agents used for this workflow.
    • The run() method is the most important one of the TaskPilotRunner class; it receives the meeting transcript and passes it to the agents to create the Jira issues. The agents are started and run inside a trace context manager, i.e. with trace("TaskPilot run", trace_id): . A trace from the Agents SDK represents a single end-to-end run of a "workflow".
    • The _extract_action_items() and _create_tickets() methods start and run each of the agents respectively. Inside these methods the Runner.run() method from the OpenAI Agents SDK is used to trigger the agents. It takes an agent and an input, and it returns the final output of the agent's execution. Finally, the result of each agent is parsed into its defined output type.
    # taskpilot/taskpilot_runner.py

    from agents import Runner, trace, gen_trace_id
    from local_agents import create_action_items_agent, create_tickets_creator_agent
    from utils.models import ActionItemsList, CreateIssuesResponse

    class TaskPilotRunner:
        def __init__(self):
            self.action_items_extractor = create_action_items_agent()
            self.tickets_creator = create_tickets_creator_agent()

        async def run(self, meeting_transcript: str) -> None:
            trace_id = gen_trace_id()
            print(f"Starting TaskPilot run... (Trace ID: {trace_id})")
            print(
                f"View trace: https://platform.openai.com/traces/trace?trace_id={trace_id}"
            )

            with trace("TaskPilot run", trace_id=trace_id):
                # 1. Extract action items from the meeting transcript
                action_items = await self._extract_action_items(meeting_transcript)

                # 2. Create tickets from the action items
                tickets_creation_response = await self._create_tickets(action_items)

                # 3. Report the results
                print(tickets_creation_response.text)

        async def _extract_action_items(self, meeting_transcript: str) -> ActionItemsList:
            result = await Runner.run(
                self.action_items_extractor, input=meeting_transcript
            )
            final_output = result.final_output_as(ActionItemsList)
            return final_output

        async def _create_tickets(self, action_items: ActionItemsList) -> CreateIssuesResponse:
            result = await Runner.run(
                self.tickets_creator, input=str(action_items)
            )
            final_output = result.final_output_as(CreateIssuesResponse)
            return final_output

    The three methods are defined as asynchronous functions. The reason for this is that the Runner.run() method from the OpenAI Agents SDK is itself defined as an async coroutine. This allows multiple agents, tool calls, or streaming endpoints to run in parallel without blocking.
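
    As a small hedged illustration of that benefit (not part of TaskPilot itself), several transcripts could be processed concurrently with asyncio.gather; the transcript strings in the commented call are invented:

    # sketch: running TaskPilot over several transcripts concurrently (illustrative)
    import asyncio
    from taskpilot_runner import TaskPilotRunner

    async def process_transcripts(transcripts: list[str]) -> None:
        runner = TaskPilotRunner()
        # Each run() call is a coroutine, so they can be awaited concurrently
        await asyncio.gather(*(runner.run(t) for t in transcripts))

    # asyncio.run(process_transcripts(["transcript A ...", "transcript B ..."]))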

    Step 3: Defining Our Data Models

    Without specific configuration, agents return plain text (a str) as output. To make sure that our agents provide structured and predictable responses, the library supports using Pydantic models to define the output_type of the agents (it actually supports any type that can be wrapped in a Pydantic TypeAdapter, such as dataclasses, lists, TypedDict, etc.). The data models we define are the data structures that our agents will work with.

    For our use case we define three models in taskpilot/utils/models.py:

    • ActionItem: This model represents a single action item extracted from the meeting transcript.
    • ActionItemsList: This model is a list of ActionItem objects.
    • CreateIssuesResponse: This model defines the structure of the response from the agent that creates the issues/tickets.
    # taskpilot/utils/models.py

    from typing import Optional
    from pydantic import BaseModel

    class ActionItem(BaseModel):
        title: str
        description: str
        assignee: str
        status: str
        issuetype: str
        project: Optional[str] = None
        due_date: Optional[str] = None
        start_date: Optional[str] = None
        priority: Optional[str] = None
        parent: Optional[str] = None
        children: Optional[list[str]] = None

    class ActionItemsList(BaseModel):
        action_items: list[ActionItem]

    class CreateIssuesResponse(BaseModel):
        action_items: list[ActionItem]
        error_messages: list[str]
        success_messages: list[str]
        text: str
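
    To make the structured output more tangible, here is a small sketch of an ActionItemsList instance such as the extractor agent might return; all of the values are invented:

    # sketch: what a parsed ActionItemsList might look like (values invented)
    from utils.models import ActionItem, ActionItemsList

    example = ActionItemsList(
        action_items=[
            ActionItem(
                title="Fix login timeout",
                description="Investigate and fix the session timeout reported by QA.",
                assignee="Maria",
                status="To Do",
                issuetype="Bug",
                project="WEB",
                due_date="2025-08-01",
                priority="High",
            )
        ]
    )
    print(example.model_dump_json(indent=2))  # Pydantic v2 serialization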

    Step 4: Creating the Agents

    The agents are the core of our application. An agent is basically an LLM configured with instructions (the AGENT_PROMPT) and access to tools, so it can act on its own on defined tasks. An agent from the OpenAI Agents SDK is defined by the following parameters:

    • name: The name of the agent, for identification.
    • instructions: The prompt that tells the agent its role or the task it shall execute (a.k.a. the system prompt).
    • model: Which LLM to use for the agent. The SDK provides out-of-the-box support for OpenAI models, but you can also use non-OpenAI models (see Agents SDK: Models).
    • output_type: The Python object that the agent shall return, as mentioned previously.
    • tools: A list of Python callables that will be the tools the agent can use to perform its tasks.

    Based on this information, let's create our two agents: the ActionItemsExtractor and the TicketsCreator.

    Action Items Extractor

    This agent's job is to read the meeting transcript and extract the action items. We'll create it in taskpilot/local_agents/action_items_extractor.py.

    # taskpilot/local_agents/action_items_extractor.py

    from agents import Agent
    from utils.config_parser import Config
    from utils.models import ActionItemsList

    AGENT_PROMPT = """
    You are an assistant that extracts action items from a meeting transcript.

    You will be given a meeting transcript and you need to extract the action items so that they can be converted into tickets by another assistant.

    The action items shall contain the following information:
        - title: The title of the action item. It shall be a short description of the action item. It shall be short and concise. This is mandatory.
        - description: The description of the action item. It shall be a more extended description of the action item. This is mandatory.
        - assignee: The name of the person who will be responsible for the action item. You shall infer the name of the assignee from the conversation and not use "Speaker 1" or "Speaker 2" or any other speaker identifier. This is mandatory.
        - status: The status of the action item. It can be "To Do", "In Progress", "In Review" or "Done". You shall extract from the transcript which state the action item is in. If it is a new action item, you shall set it to "To Do".
        - due_date: The due date of the action item. It shall be in the format "YYYY-MM-DD". You shall extract this from the transcript, however if it is not explicitly mentioned, you shall set it to None. If relative dates are mentioned (e.g. by tomorrow, in a week, ...), you shall convert them to absolute dates in the format "YYYY-MM-DD".
        - start_date: The start date of the action item. It shall be in the format "YYYY-MM-DD". You shall extract this from the transcript, however if it is not explicitly mentioned, you shall set it to None.
        - priority: The priority of the action item. It can be "Lowest", "Low", "Medium", "High" or "Highest". You shall interpret the priority of the action item from the transcript, however if it is not clear, you shall set it to None.
        - issuetype: The type of the action item. It can be "Epic", "Bug", "Task", "Story", "Subtask". You shall interpret the issuetype of the action item from the transcript; if it is unclear, set it to "Task".
        - project: The project to which the action item belongs. You shall interpret the project of the action item from the transcript, however if it is not clear, you shall set it to None.
        - parent: If the action item is a subtask, you shall set the parent of the action item to the title of the parent action item. If the parent action item is not clear or the action item is not a subtask, you shall set it to None.
        - children: If the action item is a parent task, you shall set the children of the action item to the titles of the child action items. If the child action items are not clear or the action item is not a parent task, you shall set it to None.
    """

    def create_action_items_agent() -> Agent:
        return Agent(
            name="Action Items Extractor",
            instructions=AGENT_PROMPT,
            output_type=ActionItemsList,
            model=Config.get().agents.model,
        )

    As you can see, in the AGENT_PROMPT we tell the agent in great detail that its job is to extract action items, and we provide a thorough description of how we want the action items to be extracted.

    Tickets Creator

    This agent takes the list of action items and creates Jira issues. We'll create it in taskpilot/local_agents/tickets_creator.py.

    # taskpilot/local_agents/tickets_creator.py

    from agents import Agent
    from utils.config_parser import Config
    from utils.agents_tools import create_jira_issue
    from utils.models import CreateIssuesResponse

    AGENT_PROMPT = """
    You are an assistant that creates Jira issues given action items.

    You will be given a list of action items and for each action item you shall create a Jira issue using the `create_jira_issue` tool.

    You shall collect the responses of the `create_jira_issue` tool and return them as the provided type `CreateIssuesResponse` which contains:
        - action_items: list containing the action_items that were provided to you
        - error_messages: list containing the error messages returned by the `create_jira_issue` tool whenever there was an error trying to create the issue.
        - success_messages: list containing the response messages returned by the `create_jira_issue` tool whenever the issue creation was successful.
        - text: A text that summarizes the result of the tickets creation. It shall be a string created as follows:
            f"From the {len(action_items)} action items provided, {len(success_messages)} were successfully created in the Jira project.\n{len(error_messages)} failed to be created in the Jira project.\n\nError messages:\n{error_messages}"
    """

    def create_tickets_creator_agent() -> Agent:
        return Agent(
            name="Tickets Creator",
            instructions=AGENT_PROMPT,
            tools=[create_jira_issue],
            model=Config.get().agents.model,
            output_type=CreateIssuesResponse,
        )

    Here we set the tools parameter and give the agent the create_jira_issue tool, which we'll create in the next step.

    Step 5: Providing Tools

    One of the most powerful features of agents is their ability to use tools to interact with the outside world. One could argue that the use of tools is what turns an interaction with an LLM into an agent. The OpenAI Agents SDK allows agents to use three types of tools:

    • Hosted tools: Provided directly by OpenAI, such as searching the web or files, computer use, and running code, among others.
    • Function calling: Using any Python function as a tool.
    • Agents as tools: Allowing agents to call other agents without handing off.
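
    As a brief hedged aside (TaskPilot itself only uses function calling), a hosted tool is attached in much the same way as a function tool; this sketch assumes WebSearchTool is available in your installed SDK version and uses an invented agent:

    # sketch: an agent with a hosted web-search tool (not used in TaskPilot)
    from agents import Agent, WebSearchTool

    research_agent = Agent(
        name="Researcher",
        instructions="Answer questions using up-to-date information from the web.",
        tools=[WebSearchTool()],  # hosted tool executed on OpenAI's side
        model="o4-mini",          # illustrative model name
    )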

    For our use case, we will use function calling and implement a function that creates the Jira issues using Jira's REST API. As a matter of personal preference, I decided to split it into two files:

    • In taskpilot/utils/jira_interface_functions.py we write the functions that interact with the Jira REST API via HTTP requests.
    • In taskpilot/utils/agents_tools.py we write wrappers around those functions to be provided to the agents. These wrapper functions do extra response parsing to give the agent a processed text response instead of raw JSON. That said, the agent should also be able to handle and understand JSON as a response.

    First we implement the create_issue() function in taskpilot/utils/jira_interface_functions.py:

    # taskpilot/utils/jira_interface_functions.py

    import os
    from typing import Optional
    import json
    from urllib.parse import urljoin
    import requests
    from requests.auth import HTTPBasicAuth
    from utils.config_parser import Config

    JIRA_AUTH = HTTPBasicAuth(Config.get().jira.user, str(os.getenv("ATLASSIAN_API_KEY")))

    def create_issue(
        project_key: str,
        title: str,
        description: str,
        issuetype: str,
        duedate: Optional[str] = None,
        assignee_id: Optional[str] = None,
        labels: Optional[list[str]] = None,
        priority_id: Optional[str] = None,
        reporter_id: Optional[str] = None,
    ) -> requests.Response:

        payload = {
            "fields": {
                "project": {"key": project_key},
                "summary": title,
                "issuetype": {"name": issuetype},
                "description": {
                    "content": [
                        {
                            "content": [
                                {
                                    "text": description,
                                    "type": "text",
                                }
                            ],
                            "type": "paragraph",
                        }
                    ],
                    "type": "doc",
                    "version": 1,
                },
            }
        }

        if duedate:
            payload["fields"].update({"duedate": duedate})
        if assignee_id:
            payload["fields"].update({"assignee": {"id": assignee_id}})
        if labels:
            payload["fields"].update({"labels": labels})
        if priority_id:
            payload["fields"].update({"priority": {"id": priority_id}})
        if reporter_id:
            payload["fields"].update({"reporter": {"id": reporter_id}})

        endpoint_url = urljoin(Config.get().jira.url_rest_api, "issue")

        headers = {"Accept": "application/json", "Content-Type": "application/json"}

        response = requests.post(
            endpoint_url,
            data=json.dumps(payload),
            headers=headers,
            auth=JIRA_AUTH,
            timeout=Config.get().jira.request_timeout,
        )
        return response

    As you can see, we need to authenticate to our Jira account using our Jira user and a corresponding API key, which we can obtain in Atlassian Account Management.
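
    If you want to verify those credentials before running the full workflow, a quick hedged check against the standard /myself endpoint of the Jira Cloud REST API could look like this (it reuses the JIRA_AUTH and Config objects defined above):

    # sketch: quick credential check against the Jira REST API (illustrative)
    from urllib.parse import urljoin
    import requests
    from utils.config_parser import Config
    from utils.jira_interface_functions import JIRA_AUTH

    def check_jira_credentials() -> bool:
        url = urljoin(Config.get().jira.url_rest_api, "myself")
        response = requests.get(
            url, auth=JIRA_AUTH, timeout=Config.get().jira.request_timeout
        )
        # 200 means the user / API key pair is accepted by the Jira instance
        return response.ok

    print("Jira credentials valid:", check_jira_credentials())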

    In taskpilot/utils/agents_tools.py we implement the create_jira_issue() function, which we'll then provide to the TicketsCreator agent:

    # taskpilot/utils/agents_tools.py

    from agents import function_tool
    from utils.models import ActionItem
    from utils.jira_interface_functions import create_issue

    @function_tool
    def create_jira_issue(action_item: ActionItem) -> str:

        response = create_issue(
            project_key=action_item.project,
            title=action_item.title,
            description=action_item.description,
            issuetype=action_item.issuetype,
            duedate=action_item.due_date,
            assignee_id=None,
            labels=None,
            priority_id=None,
            reporter_id=None,
        )

        if response.ok:
            return f"Successfully created the issue. Response message: {response.text}"
        else:
            return f"There was an error trying to create the issue. Error message: {response.text}"

    Important: The @function_tool decorator is what makes this function usable by our agent. The agent can now call this function and pass it an ActionItem object. The function then uses the create_issue function, which calls the Jira API to create a new issue.
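
    Before handing the tool to the agent, it can be worth smoke-testing the underlying create_issue() function directly; in this hedged sketch the project key and texts are invented:

    # sketch: calling create_issue() directly to verify the Jira integration (values invented)
    from utils.jira_interface_functions import create_issue

    response = create_issue(
        project_key="TEST",   # invented project key
        title="TaskPilot smoke test",
        description="Created manually to verify the REST integration.",
        issuetype="Task",
    )
    print(response.status_code, response.text)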

    Step 6: Configuring the Application

    To make our application parametrizable, we'll use a config.yml file for the configuration settings, as well as a .env file for the API keys.

    The configuration of the application is separated into:

    • agents: To configure the agents and the access to the OpenAI API. Here we have two parameters: model, which is the LLM that will be used by the agents, and OPENAI_API_KEY, in the .env file, to authenticate the use of the OpenAI API. You can obtain an OpenAI API key in your OpenAI Dev Platform.
    • jira: To configure the access to the Jira API. Here we need four parameters: url_rest_api, which is the URL of the REST API of our Jira instance; user, which is the user we use to access Jira; request_timeout, which is the timeout in seconds to wait for the server to send data before giving up; and finally ATLASSIAN_API_KEY, in the .env file, to authenticate to your Jira instance.

    Here is our .env file, which in the next step will be loaded into our application in main.py using the python-dotenv library:

    OPENAI_API_KEY=some-api-key
    ATLASSIAN_API_KEY=some-api-key

    And here is our config.yml file:

    # config.yml

    agents:
      model: "o4-mini"
    jira:
      url_rest_api: "https://your-domain.atlassian.net/rest/api/3/"
      user: "[email protected]"
      request_timeout: 5

    We'll also create a config parser at taskpilot/utils/config_parser.py to load this configuration. For this we implement the Config class as a singleton (meaning there can only be one instance of this class throughout the application's lifespan).

    # taskpilot/utils/config_parser.py

    from pathlib import Path
    import yaml
    from pydantic import BaseModel

    class AgentsConfig(BaseModel):

        model: str

    class JiraConfig(BaseModel):

        url_rest_api: str
        user: str
        request_timeout: int

    class ConfigModel(BaseModel):

        agents: AgentsConfig
        jira: JiraConfig

    class Config:

        _instance: ConfigModel | None = None

        @classmethod
        def load(cls, path: str = "config.yml") -> None:
            if cls._instance is None:
                with open(Path(path), "r", encoding="utf-8") as config_file:
                    raw_config = yaml.safe_load(config_file)
                cls._instance = ConfigModel(**raw_config)

        @classmethod
        def get(cls, path: str = "config.yml") -> ConfigModel:
            if cls._instance is None:
                cls.load(path)
            return cls._instance
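
    Anywhere in the code base the configuration can then be read through the singleton, for example:

    # sketch: reading values through the Config singleton
    from utils.config_parser import Config

    config = Config.get()            # loads config.yml on first access, cached afterwards
    print(config.agents.model)       # e.g. "o4-mini"
    print(config.jira.url_rest_api)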

    Step 7: Bringing It All Together in main.py

    Finally, in taskpilot/main.py, we bring everything together. This script loads the meeting transcript, creates an instance of TaskPilotRunner, and then calls the run() method.

    # taskpilot/main.py

    import os
    import asyncio
    from dotenv import load_dotenv

    from taskpilot_runner import TaskPilotRunner

    # Load the variables in the .env file
    load_dotenv()

    def load_meeting_transcript_txt(file_path: str) -> str:
        # ...
        return meeting_transcript

    async def main():
        print("TaskPilot application starting...")

        meeting_transcript = load_meeting_transcript_txt("meeting_transcript.txt")

        await TaskPilotRunner().run(meeting_transcript)

    if __name__ == "__main__":
        asyncio.run(main())
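
    With the .env file, config.yml and a meeting_transcript.txt in place, running something like python taskpilot/main.py from the repository root (so that config.yml is found relative to the working directory) should kick off the whole workflow.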

    Step 8: Monitoring Our Runs in the OpenAI Dev Platform

    As mentioned, one of the advantages of the OpenAI Agents SDK is that, thanks to its tracing feature, it is possible to visualize the entire execution flow of our agents. This makes it easy to debug and understand what is happening under the hood in the OpenAI Dev Platform.

    In the Traces Dashboard one can:

    • Track each run of the agent workflow.
    Screenshot by the author
    • Understand exactly what the agents did during the agent workflow and monitor performance.
    Screenshot by the author
    • Debug every call to the OpenAI API and monitor how many tokens were used in each input and output.
    Screenshot by the author

    So take advantage of this feature to evaluate, debug and monitor your agent runs.

    Conclusion

    And that's it! In these eight simple steps we have implemented an application that can automatically create Jira issues from a meeting transcript. Thanks to the simple interface of the OpenAI Agents SDK, you can easily create agents programmatically to help you automate your tasks!

    Feel free to clone the repository (the project as described in this post is on the branch function_calling), try it out for yourself, and start building your own AI-powered applications!


    💡 Coming Up Next:

    In an upcoming post, we'll dive into how to implement your own MCP server to further extend our agents' capabilities and allow them to interact with external systems beyond your local tools. Stay tuned!

    🙋‍♂️ Let's Connect

    If you have questions, feedback, or just want to follow along with future projects:


    Reference

    This article is inspired by the "OpenAI: Agents SDK" course from LinkedIn Learning.


