    How to Build Effective Agentic Systems with LangGraph

    By ProfitlyAI · September 30, 2025 · 13 min read


    With the release of powerful AI models, such as GPT-5 and Gemini 2.5 Pro, we also see a rise in agentic frameworks for making use of these models. These frameworks make working with AI models simpler by abstracting away many challenges, such as tool calling, agentic state handling, and human-in-the-loop setups.

    Thus, in this article, I'll dive deeper into LangGraph, one of the available agentic AI frameworks. I'll use it to develop a simple agentic application with several steps, highlighting the benefits of agentic AI packages. I'll also cover the pros and cons of using LangGraph and other similar agentic frameworks.

    I'm not sponsored in any way by LangGraph to create this article. I simply chose the framework since it is one of the most prevalent ones out there. There are many other options available, such as:

    • LangChain
    • LlamaIndex
    • CrewAI
    This figure shows an example of an advanced AI workflow you can implement with LangGraph. The workflow consists of several routing steps, each leading to different function handlers to effectively handle the user request. Image by the author.

    Why do you need an agentic framework?

    There are numerous packages out there that are supposed to make programming applications easier. In many cases, these packages have the exact opposite effect, because they obscure the code, don't work well in production, and sometimes make it harder to debug.

    Still, you have to find the packages that simplify your application by abstracting away boilerplate code. This principle is often highlighted in the startup world with a quote like the one below:

    Focus on solving the actual problem you're trying to solve. All other (previously solved) problems should be outsourced to other applications.

    An agentic framework is needed because it abstracts away a lot of complexity you don't want to deal with:

    • Maintaining state. Not just message history, but all other information you gather, for example, when performing RAG
    • Tool usage. You don't want to set up your own logic for executing tools. Rather, you want to simply define them and let the agentic framework handle how to invoke the tools. (This is especially relevant for parallel and async tool calling.)

    Thus, using an agentic framework abstracts away a lot of complexity, so you can focus on the core part of your product.
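    As a concrete illustration of the tool-usage point, below is a minimal sketch of how declaring a tool looks in LangChain/LangGraph: you define the tool once, and the framework handles the call format and invocation. The word_count tool is a made-up example, and llm is assumed to be any LangChain chat model with tool-calling support (such as the ChatBedrockConverse model defined later in this article).

    from langchain_core.tools import tool

    @tool
    def word_count(text: str) -> int:
        """Count the number of words in a piece of text."""
        return len(text.split())

    # `llm` is assumed to be a LangChain chat model with tool-calling support.
    llm_with_tools = llm.bind_tools([word_count])

    # The model decides whether and how to call the tool; you never parse the call yourself.
    response = llm_with_tools.invoke("How many words are in 'LangGraph makes agents simpler'?")
    print(response.tool_calls)  # e.g. [{'name': 'word_count', 'args': {'text': '...'}, ...}]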

    Fundamentals of LangGraph

    To get started implementing LangGraph, I begin by reading the docs, covering:

    • Basic chatbot implementation
    • Tool usage
    • Maintaining and updating the state

    LangGraph is, as its name suggests, based on building graphs and executing a graph per request. In a graph, you can define:

    • The state (the current information stored in memory)
    • Nodes. Typically an LLM or a tool call, for example, classifying user intent or answering the user's question
    • Edges. Conditional logic that determines which node to go to next.

    All of which stems from basic graph theory.
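    To make these three concepts concrete, here is a minimal, self-contained sketch (the state, node, and graph here are made up purely for illustration and are not part of the workflow built below):

    from typing_extensions import TypedDict
    from langgraph.graph import StateGraph, START, END

    # State: the information carried through the graph
    class MiniState(TypedDict):
        text: str

    # Node: a function that receives the state and returns an update to it
    def shout(state: MiniState):
        return {"text": state["text"].upper()}

    builder = StateGraph(MiniState)
    builder.add_node("shout", shout)
    builder.add_edge(START, "shout")  # edge: START -> shout
    builder.add_edge("shout", END)    # edge: shout -> END

    graph = builder.compile()
    print(graph.invoke({"text": "hello langgraph"}))  # {'text': 'HELLO LANGGRAPH'}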

    Implementing a workflow

    LangGraph with Router and Tools
    In this article, you'll create an agentic workflow as shown in this figure, starting from a user query. The query is routed to one of three options: add a new document to the database, delete a document from the database, or ask a question about a document in the database. Image by the author.

    I believe one of the best ways of learning is to simply try things out for yourself. Thus, I'll implement a simple workflow in LangGraph. You can learn about building these workflows in the workflow docs, which are based on Anthropic's Building effective agents blog post (one of my favorite blog posts about agents, which I've covered in several of my earlier articles). I highly recommend reading it.

    I'll make a simple workflow to define an application where a user can:

    • Create documents with text
    • Delete documents
    • Search in documents

    To do this, I'll create the following workflow:

    1. Detect user intent. Do they want to create a document, delete a document, or search in a document?
    2. Given the outcome of step 1, I'll have different flows to handle each of them.

    You could also do this by simply defining all the tools and giving the agent access to create/delete/search a document. However, if you want to perform more actions depending on intent, doing an intent classification routing step first is the way to go.
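    For reference, a rough sketch of that tool-only alternative could use LangGraph's prebuilt ReAct-style agent, as below. The tool bodies are placeholders, llm is assumed to be the model defined in the next section, and this is not the approach used in the rest of the article.

    from langchain_core.tools import tool
    from langgraph.prebuilt import create_react_agent

    @tool
    def create_document(filename: str, content: str) -> str:
        """Create a document with the given filename and content."""
        return f"Created {filename}"  # placeholder body

    @tool
    def delete_document(filename: str) -> str:
        """Delete the document with the given filename."""
        return f"Deleted {filename}"  # placeholder body

    # The agent decides on its own which tool(s) to call, instead of a fixed routing step.
    agent = create_react_agent(llm, [create_document, delete_document])
    agent.invoke({"messages": [("user", "Create notes.txt containing 'hello'")]})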

    Loading imports and LLM

    First, I'll load the required imports and the LLM I'm using. I'll be using AWS Bedrock, though you can use other providers, as you can see from step 3 in this tutorial.

    """
    Make a doc handler workflow the place a consumer can
    create a brand new doc to the database (at the moment only a dictionary)
    delete a doc from the database
    ask a query a few doc
    """
    
    from typing_extensions import TypedDict, Literal
    from langgraph.checkpoint.reminiscence import InMemorySaver
    from langgraph.graph import StateGraph, START, END
    from langgraph.varieties import Command, interrupt
    from langchain_aws import ChatBedrockConverse
    from langchain_core.messages import HumanMessage, SystemMessage
    from pydantic import BaseModel, Area
    from IPython.show import show, Picture
    
    from dotenv import load_dotenv
    import os
    
    load_dotenv()
    
    aws_access_key_id = os.getenv("AWS_ACCESS_KEY_ID") or ""
    aws_secret_access_key = os.getenv("AWS_SECRET_ACCESS_KEY") or ""
    
    os.environ["AWS_ACCESS_KEY_ID"] = aws_access_key_id
    os.environ["AWS_SECRET_ACCESS_KEY"] = aws_secret_access_key
    
    llm = ChatBedrockConverse(
        model_id="us.anthropic.claude-3-5-haiku-20241022-v1:0", # that is the mannequin id (added us. earlier than id in platform)
        region_name="us-east-1",
        aws_access_key_id=aws_access_key_id,
        aws_secret_access_key=aws_secret_access_key,
    
    )
    
    document_database: dict[str, str] = {} # a dictionary with key: filename, worth: textual content in doc
    

    I also defined the database as a dictionary of files. In production, you would naturally use a proper database; however, I simplify it for this tutorial.
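    If you want the dictionary to be easy to swap out later, one option (not part of the code above, just a sketch) is to hide it behind a small wrapper class, so that only this class changes when you move to a real database:

    class InMemoryDocumentStore:
        """Dictionary-backed document store; replace with a real database client in production."""

        def __init__(self) -> None:
            self._docs: dict[str, str] = {}

        def add(self, filename: str, content: str) -> None:
            self._docs[filename] = content

        def delete(self, filename: str) -> bool:
            return self._docs.pop(filename, None) is not None

        def get(self, filename: str) -> str | None:
            return self._docs.get(filename)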

    Defining the graph

    Next, it's time to define the graph. I first create the Router object, which will classify the user's prompt into one of three intents:

    • add_document
    • delete_document
    • ask_document
    # Define state
    class State(TypedDict):
        input: str
        decision: str | None
        output: str | None
    
    # Schema for structured output to use as routing logic
    class Route(BaseModel):
        step: Literal["add_document", "delete_document", "ask_document"] = Field(
            description="The next step in the routing process"
        )
    
    # Augment the LLM with the schema for structured output
    router = llm.with_structured_output(Route)
    
    def llm_call_router(state: State):
        """Route the user input to the appropriate node"""
    
        # Run the augmented LLM with structured output to serve as routing logic
        decision = router.invoke(
            [
                SystemMessage(
                    content="""Route the user input to one of the following 3 intents:
                    - 'add_document'
                    - 'delete_document'
                    - 'ask_document'
                    You only need to return the intent, not any other text.
                    """
                ),
                HumanMessage(content=state["input"]),
            ]
        )
    
        return {"decision": decision.step}
    
    # Conditional edge function to route to the appropriate node
    def route_decision(state: State):
        # Return the name of the node you want to visit next
        if state["decision"] == "add_document":
            return "add_document_to_database_tool"
        elif state["decision"] == "delete_document":
            return "delete_document_from_database_tool"
        elif state["decision"] == "ask_document":
            return "ask_document_tool"
    
    

    I define the state where we store the user input, the router's decision (one of the three intents), and the output, and then ensure structured output from the LLM. The structured output ensures the model responds with one of the three intents.
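    To see what this structured output looks like in isolation, you can call the router directly outside the graph (the prompt below is just an example):

    # The router returns a validated Route object rather than free-form text
    decision = router.invoke("Please remove report.txt from the database")
    print(decision)       # e.g. Route(step='delete_document')
    print(decision.step)  # 'delete_document'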

    Continuing, I'll define the tools we're using in this article, one for each of the intents.

    # Nodes
    def add_document_to_database_tool(state: State):
        """Add a document to the database. Given the user query, extract the filename and content for the document. If not provided, will not add the document to the database."""
    
        user_query = state["input"]
        # extract filename and content from the user query
        filename_prompt = f"Given the following user query, extract the filename for the document: {user_query}. Only return the filename, not any other text."
        output = llm.invoke(filename_prompt)
        filename = output.content
        content_prompt = f"Given the following user query, extract the content for the document: {user_query}. Only return the content, not any other text."
        output = llm.invoke(content_prompt)
        content = output.content
    
        # add document to database
        document_database[filename] = content
        return {"output": f"Document {filename} added to database"}
    
    
    def delete_document_from_database_tool(state: State):
        """Delete a document from the database. Given the user query, extract the filename of the document to delete. If not provided, will not delete the document from the database."""
        user_query = state["input"]
        # extract filename from the user query
        filename_prompt = f"Given the following user query, extract the filename of the document to delete: {user_query}. Only return the filename, not any other text."
        output = llm.invoke(filename_prompt)
        filename = output.content
    
        # delete the document from the database if it exists; if not, return info about the failure
        if filename not in document_database:
            return {"output": f"Document {filename} not found in database"}
        document_database.pop(filename)
        return {"output": f"Document {filename} deleted from database"}
    
    
    def ask_document_tool(state: State):
        """Ask a question about a document. Given the user query, extract the filename and question for the document. If not provided, will not ask the question about the document."""
    
        user_query = state["input"]
        # extract filename and question from the user query
        filename_prompt = f"Given the following user query, extract the filename of the document to ask a question about: {user_query}. Only return the filename, not any other text."
        output = llm.invoke(filename_prompt)
        filename = output.content
        question_prompt = f"Given the following user query, extract the question to ask about the document: {user_query}. Only return the question, not any other text."
        output = llm.invoke(question_prompt)
        question = output.content
    
        # ask the question about the document
        if filename not in document_database:
            return {"output": f"Document {filename} not found in database"}
        result = llm.invoke(f"Document: {document_database[filename]}\n\nQuestion: {question}")
        return {"output": f"Document query result: {result.content}"}

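    As a side note, each node above makes one LLM call per field it extracts. An alternative (a sketch only, not used in the rest of this article) is to extract all fields in a single structured-output call, mirroring the Route pattern:

    class AddDocumentArgs(BaseModel):
        filename: str = Field(description="Filename for the new document")
        content: str = Field(description="Text content of the new document")

    add_args_extractor = llm.with_structured_output(AddDocumentArgs)

    def add_document_to_database_tool_v2(state: State):
        """Variant of add_document_to_database_tool using one structured-output call."""
        args = add_args_extractor.invoke(
            f"Extract the filename and content for the document from this request: {state['input']}"
        )
        document_database[args.filename] = args.content
        return {"output": f"Document {args.filename} added to database"}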
    And finally, we build the graph with nodes and edges:

    # Build workflow
    router_builder = StateGraph(State)
    
    # Add nodes
    router_builder.add_node("add_document_to_database_tool", add_document_to_database_tool)
    router_builder.add_node("delete_document_from_database_tool", delete_document_from_database_tool)
    router_builder.add_node("ask_document_tool", ask_document_tool)
    router_builder.add_node("llm_call_router", llm_call_router)
    
    # Add edges to connect nodes
    router_builder.add_edge(START, "llm_call_router")
    router_builder.add_conditional_edges(
        "llm_call_router",
        route_decision,
        {  # Name returned by route_decision : name of the next node to visit
            "add_document_to_database_tool": "add_document_to_database_tool",
            "delete_document_from_database_tool": "delete_document_from_database_tool",
            "ask_document_tool": "ask_document_tool",
        },
    )
    router_builder.add_edge("add_document_to_database_tool", END)
    router_builder.add_edge("delete_document_from_database_tool", END)
    router_builder.add_edge("ask_document_tool", END)
    
    
    # Compile workflow
    memory = InMemorySaver()
    router_workflow = router_builder.compile(checkpointer=memory)
    
    config = {"configurable": {"thread_id": "1"}}
    
    
    # Show the workflow
    display(Image(router_workflow.get_graph().draw_mermaid_png()))
    

    The last display call should show the graph as you see below:

    LangGraph Router Graph
    This figure shows the graph you just created. Image by the author.

    Now you can try out the workflow by asking a question per intent.

    Add a document:

    user_input = "Add the doc 'check.txt' with content material 'This can be a check doc' to the database"
    state = router_workflow.invoke({"enter": user_input}, config)
    print(state["output"]
    
    # -> Doc check.txt added to database

    Search a document:

    user_input = "Give me a abstract of the doc 'check.txt'"
    state = router_workflow.invoke({"enter": user_input}, config)
    print(state["output"])
    
    # -> A quick, generic check doc with a easy descriptive sentence.

    Delete a document:

    user_input = "Delete the doc 'check.txt' from the database"
    state = router_workflow.invoke({"enter": user_input}, config)
    print(state["output"])
    
    # -> Doc check.txt deleted from database

    Great! You can see the workflow is working with the different routing options. Feel free to add more intents or more nodes per intent to create a more complex workflow.
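    As an example of such an extension, a sketch of a fourth list_documents intent could look like the following (you would also add 'list_documents' to the router's system prompt, re-create router = llm.with_structured_output(Route), extend route_decision, and register the new node and edges when rebuilding the graph):

    # Extend the routing schema with the new intent
    class Route(BaseModel):
        step: Literal["add_document", "delete_document", "ask_document", "list_documents"] = Field(
            description="The next step in the routing process"
        )

    def list_documents_tool(state: State):
        """List the filenames currently stored in the database."""
        filenames = ", ".join(document_database) or "no documents"
        return {"output": f"Documents in database: {filenames}"}

    # When rebuilding the graph:
    # router_builder.add_node("list_documents_tool", list_documents_tool)
    # add "list_documents": "list_documents_tool" to the conditional edges mapping
    # router_builder.add_edge("list_documents_tool", END)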

    Stronger agentic use cases

    The distinction between agentic workflows and fully agentic applications is sometimes confusing. To separate the two terms, I'll use the quote below from Anthropic's Building effective agents:

    Workflows are systems where LLMs and tools are orchestrated through predefined code paths. Agents, on the other hand, are systems where LLMs dynamically direct their own processes and tool usage, maintaining control over how they accomplish tasks.

    Most challenges you solve with LLMs will use the workflow pattern, because most problems (from my experience) are pre-defined and can have a pre-determined set of guardrails to follow. In the example above, when adding/deleting/searching documents, you absolutely want to set up a pre-determined workflow by defining the intent classifier and what to do given each intent.

    However, sometimes you also want more autonomous agentic use cases. Consider, for example, Cursor, where they want a coding agent that can search through your code, check the most recent documentation online, and modify your code. In these instances, it's difficult to create pre-determined workflows because there are so many different scenarios that can occur.

    If you want to create more autonomous agentic systems, you can read more about that here.

    LangGraph pros and cons

    Pros

    My three main positives about LangGraph are:

    • Easy to set up
    • Open-source
    • Simplifies your code

    It was simple to set up LangGraph and quickly get it working, especially when following their documentation, or feeding their documentation to Cursor and prompting it to implement specific workflows.

    Additionally, the code for LangGraph is open-source, meaning you can keep running the code no matter what happens to the company behind it or what changes they decide to make. I think this is important if you want to deploy it to production. Lastly, LangGraph also simplifies a lot of the code and abstracts away a lot of logic you would otherwise have had to write in Python yourself.

    Cons

    However, there are also some downsides to LangGraph that I've noticed during implementation:

    • Still a surprising amount of boilerplate code
    • You'll encounter LangGraph-specific errors

    When implementing my own custom workflow, I felt I still had to add a lot of boilerplate code. Though the amount of code was definitely less than if I'd implemented everything from scratch, I found myself surprised by how much code I had to add to create a relatively simple workflow. However, I think part of this is that LangGraph positions itself as a lower-abstraction tool than, for example, a lot of the functionality you find in LangChain (which I think is good, because LangChain, in my opinion, abstracts away too much, making it harder to debug your code).

    Additionally, as with many externally installed packages, you'll encounter LangGraph-specific issues when implementing the package. For example, when I wanted to preview the graph of the workflow I created, I ran into an issue relating to the draw_mermaid_png function. Encountering such errors is inevitable when using external packages, and it will always be a trade-off between the useful code abstractions a package gives you and the different kinds of bugs you may face when using it.
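    If you run into the same kind of issue, one workaround (a suggestion, not from the original setup above) is to skip the PNG rendering and inspect the Mermaid source of the graph instead:

    # Fallback if draw_mermaid_png fails: print the Mermaid source of the graph
    print(router_workflow.get_graph().draw_mermaid())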

    Summary

    All in all, I find LangGraph a useful package when dealing with agentic systems. Setting up my desired workflow by first doing intent classification and then proceeding with different flows depending on intent was relatively simple. Additionally, I think LangGraph found a good middle ground between not abstracting away all logic (obscuring the code and making it harder to debug) and actually abstracting away challenges I don't want to deal with when developing my agentic system. There are both positives and negatives to adopting such agentic frameworks, and I think the best way to evaluate this trade-off is by implementing simple workflows yourself.

    👉 Find me on socials:

    🧑‍💻 Get in touch

    🔗 LinkedIn

    🐦 X / Twitter

    ✍️ Medium

    If you want to learn more about agentic workflows, you can read my article on Building Effective AI Agents To Process Millions of Documents. You can also learn more about LLMs in my article on LLM Validation.


