Building a Custom MCP Chatbot

By ProfitlyAI | July 10, 2025
MCP (Model Context Protocol) gives us a way to standardise communication between AI applications and external tools or data sources. This standardisation helps to reduce the number of integrations needed (from N*M to N+M): for example, connecting 3 AI applications to 4 data sources directly would require 3 × 4 = 12 bespoke integrations, whereas with MCP you only need 3 clients plus 4 servers.

• You can use community-built MCP servers when you need common functionality, saving time and avoiding the need to reinvent the wheel every time.
• You can also expose your own tools and resources, making them available for others to use.

In my previous article, we built the analytics toolbox (a collection of tools that can automate your day-to-day routine). We built an MCP server and used its capabilities with existing clients like MCP Inspector or Claude Desktop.

Now, we want to use these tools directly in our AI applications. To do that, let's build our own MCP client. We will write fairly low-level code, which will also give you a clearer picture of how tools like Claude Code interact with MCP under the hood.

Additionally, I would like to implement a feature that is currently (July 2025) missing from Claude Desktop: the ability for the LLM to automatically check whether it has a suitable prompt template for the task at hand and use it. Right now, you have to select the template manually, which isn't very convenient.

As a bonus, I will also share a high-level implementation using the smolagents framework, which is ideal for scenarios when you work only with MCP tools and don't need much customisation.

MCP protocol overview

Here's a quick recap of the MCP to make sure we're on the same page. MCP is a protocol developed by Anthropic to standardise the way LLMs interact with the outside world.

It follows a client-server architecture and consists of three main components:

• Host is the user-facing application.
• MCP client is a component within the host that establishes a one-to-one connection with the server and communicates using messages defined by the MCP protocol.
• MCP server exposes capabilities such as prompt templates, resources and tools.

Image by author
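
Under the hood, the client and server exchange JSON-RPC 2.0 messages over a transport such as stdio. As a rough, simplified sketch (the real protocol also includes an initialize handshake and more fields), the messages behind tool discovery are shaped roughly like this:

# Simplified sketch of the JSON-RPC 2.0 messages behind tool discovery.
# Field shapes are illustrative; the actual MCP spec defines more fields.
list_tools_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

list_tools_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [{
            "name": "execute_sql_query",     # example tool name
            "description": "Run a SQL query",
            "inputSchema": {                 # JSON Schema of the arguments
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        }]
    },
}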

Since we've already implemented the MCP server before, this time we will focus on building the MCP client. We will start with a relatively simple implementation and later add the ability to dynamically select prompt templates on the fly.

You can find the full code on GitHub.

Building the MCP chatbot

Let's begin with the initial setup: we'll load the Anthropic API key from a config file (a JSON file of the form {"ANTHROPIC_API_KEY": "..."}) and patch Python's asyncio event loop to support nested event loops. The full script also imports asyncio, traceback, AsyncExitStack from contextlib, Anthropic from the anthropic SDK, and ClientSession, StdioServerParameters and stdio_client from the mcp package.

import json
import os

import nest_asyncio

# Load configuration and environment
with open('../../config.json') as f:
    config = json.load(f)
os.environ["ANTHROPIC_API_KEY"] = config['ANTHROPIC_API_KEY']

nest_asyncio.apply()

Let's start by building a skeleton of our program to get a clear picture of the application's high-level architecture.

async def main():
    """Main entry point for the MCP ChatBot application."""
    chatbot = MCP_ChatBot()
    try:
        await chatbot.connect_to_servers()
        await chatbot.chat_loop()
    finally:
        await chatbot.cleanup()

if __name__ == "__main__":
    asyncio.run(main())

We start by creating an instance of the MCP_ChatBot class. The chatbot begins by discovering the available MCP capabilities (iterating through all configured MCP servers, establishing connections and requesting their lists of capabilities).

Once connections are set up, we initialise an infinite loop where the chatbot listens to user queries, calls tools when needed and continues this cycle until the process is stopped manually.

Finally, we perform a cleanup step to close all open connections.

Let's now walk through each stage in more detail.

Initialising the ChatBot class

Let's start by creating the class and defining the __init__ method. The main fields of the ChatBot class are:

• exit_stack manages the lifecycle of multiple async connections to MCP servers, guaranteeing that all connections will be closed correctly, even if we hit an error during execution. This logic is implemented in the cleanup function.
• anthropic is a client for the Anthropic API, used to send messages to the LLM.
• available_tools and available_prompts are the lists of tools and prompts exposed by all MCP servers we're connected to.
• sessions is a mapping of tools, prompts and resources to their respective MCP sessions. This allows the chatbot to route requests to the right MCP server when the LLM selects a particular tool.

class MCP_ChatBot:
  """
  MCP (Model Context Protocol) ChatBot that connects to multiple MCP servers
  and provides a conversational interface using Anthropic's Claude.

  Supports tools, prompts, and resources from connected MCP servers.
  """

  def __init__(self):
    self.exit_stack = AsyncExitStack()
    self.anthropic = Anthropic()  # Client for the Anthropic API
    self.available_tools = []     # Tools from all connected servers
    self.available_prompts = []   # Prompts from all connected servers
    self.sessions = {}            # Maps tool/prompt/resource names to MCP sessions

  async def cleanup(self):
    """Clean up resources and close all connections."""
    await self.exit_stack.aclose()

Connecting to servers

The first task for our chatbot is to initiate connections with all configured MCP servers and discover what capabilities we can use.

The list of MCP servers our agent can connect to is defined in the server_config.json file. I've set up connections with three MCP servers:

• analyst_toolkit is my implementation of the everyday analytical tools we discussed in the previous article,
• Filesystem allows the agent to work with files,
• Fetch helps LLMs retrieve the content of webpages and convert it from HTML to markdown for better readability.
    {
      "mcpServers": {
        "analyst_toolkit": {
          "command": "uv",
          "args": [
            "--directory",
            "/path/to/github/mcp-analyst-toolkit/src/mcp_server",
            "run",
            "server.py"
          ],
          "env": {
              "GITHUB_TOKEN": "your_github_token"
          }
        },
        "filesystem": {
          "command": "npx",
          "args": [
            "-y",
            "@modelcontextprotocol/server-filesystem",
            "/Users/marie/Desktop",
            "/Users/marie/Documents/github"
          ]
        },
        "fetch": {
            "command": "uvx",
            "args": ["mcp-server-fetch"]
          }
      }
    }

First, we will read the config file, parse it and then connect to each listed server.

async def connect_to_servers(self):
  """Load server configuration and connect to all configured MCP servers."""
  try:
    with open("server_config.json", "r") as file:
      data = json.load(file)

    servers = data.get("mcpServers", {})
    for server_name, server_config in servers.items():
      await self.connect_to_server(server_name, server_config)
  except Exception as e:
    print(f"Error loading server config: {e}")
    traceback.print_exc()
    raise

For each server, we perform several steps to establish the connection:

• At the transport level, we launch the MCP server as a stdio process and get streams for sending and receiving messages.
• At the session level, we create a ClientSession wrapping the streams, and then we perform the MCP handshake by calling the initialize method.
• We register both the session and transport objects in the exit_stack context manager to ensure that all connections will be closed properly in the end.
• The last step is to register server capabilities. We wrapped this functionality into a separate function, and we will discuss it shortly.

async def connect_to_server(self, server_name, server_config):
    """Connect to a single MCP server and register its capabilities."""
    try:
      server_params = StdioServerParameters(**server_config)
      stdio_transport = await self.exit_stack.enter_async_context(
          stdio_client(server_params)
      )
      read, write = stdio_transport
      session = await self.exit_stack.enter_async_context(
          ClientSession(read, write)
      )
      await session.initialize()
      await self._register_server_capabilities(session, server_name)

    except Exception as e:
      print(f"Error connecting to {server_name}: {e}")
      traceback.print_exc()

Registering capabilities involves iterating over all the tools, prompts and resources retrieved from the session. As a result, we update the internal variables sessions (the mapping between each capability and the particular client-server session that serves it), available_prompts and available_tools.

async def _register_server_capabilities(self, session, server_name):
  """Register tools, prompts and resources from a single server."""
  capabilities = [
    ("tools", session.list_tools, self._register_tools),
    ("prompts", session.list_prompts, self._register_prompts),
    ("resources", session.list_resources, self._register_resources)
  ]

  for capability_name, list_method, register_method in capabilities:
    try:
      response = await list_method()
      await register_method(response, session)
    except Exception as e:
      print(f"Server {server_name} doesn't support {capability_name}: {e}")

async def _register_tools(self, response, session):
  """Register tools from server response."""
  for tool in response.tools:
    self.sessions[tool.name] = session
    self.available_tools.append({
        "name": tool.name,
        "description": tool.description,
        "input_schema": tool.inputSchema
    })

async def _register_prompts(self, response, session):
  """Register prompts from server response."""
  if response and response.prompts:
    for prompt in response.prompts:
        self.sessions[prompt.name] = session
        self.available_prompts.append({
            "name": prompt.name,
            "description": prompt.description,
            "arguments": prompt.arguments
        })

async def _register_resources(self, response, session):
  """Register resources from server response."""
  if response and response.resources:
    for resource in response.resources:
        resource_uri = str(resource.uri)
        self.sessions[resource_uri] = session

By the end of this stage, our MCP_ChatBot object has everything it needs to start interacting with users:

• connections to all configured MCP servers are established,
• all prompts, resources and tools are registered, together with the descriptions the LLM needs to understand how to use these capabilities,
• mappings between these capabilities and their respective sessions are saved, so we know exactly where to send each request (see the short sketch below).
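
To make that routing concrete, here is a minimal sketch (a hypothetical helper, not part of the chatbot class) of how the sessions mapping resolves a capability name to the right server connection:

# Minimal routing sketch: every capability name maps to the ClientSession
# of the server that exposed it, so dispatching is a dictionary lookup.
async def route_tool_call(self, tool_name, args):
    session = self.sessions.get(tool_name)  # which server owns this tool?
    if session is None:
        raise ValueError(f"Tool '{tool_name}' is not registered")
    return await session.call_tool(tool_name, arguments=args)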

Chat loop

So, it's time to start our chat with users by creating the chat_loop function.

We will first share all the available commands with the user:

• listing resources, tools and prompts
• executing a tool call
• viewing a resource
• using a prompt template
• quitting the chat (it's important to have a clear way to exit the infinite loop).

After that, we enter an infinite loop where, based on user input, we execute the appropriate action: whether it's one of the commands above or making a request to the LLM.

async def chat_loop(self):
  """Main interactive chat loop with command processing."""
  print("\nMCP Chatbot Started!")
  print("Commands:")
  print("  quit                           - Exit the chatbot")
  print("  @periods                       - Show available changelog periods")
  print("  @<period>                      - View changelog for specific period")
  print("  /tools                         - List available tools")
  print("  /tool <name> <arg1=value1>     - Execute a tool with arguments")
  print("  /prompts                       - List available prompts")
  print("  /prompt <name> <arg1=value1>   - Execute a prompt with arguments")

  while True:
    try:
      query = input("\nQuery: ").strip()
      if not query:
          continue

      if query.lower() == 'quit':
          break

      # Handle resource requests (@command)
      if query.startswith('@'):
        period = query[1:]
        resource_uri = "changelog://periods" if period == "periods" else f"changelog://{period}"
        await self.get_resource(resource_uri)
        continue

      # Handle slash commands
      if query.startswith('/'):
        parts = self._parse_command_arguments(query)
        if not parts:
          continue

        command = parts[0].lower()

        if command == '/tools':
          await self.list_tools()
        elif command == '/tool':
          if len(parts) < 2:
            print("Usage: /tool <name> <arg1=value1> <arg2=value2>")
            continue

          tool_name = parts[1]
          args = self._parse_prompt_arguments(parts[2:])
          await self.execute_tool(tool_name, args)
        elif command == '/prompts':
          await self.list_prompts()
        elif command == '/prompt':
          if len(parts) < 2:
            print("Usage: /prompt <name> <arg1=value1> <arg2=value2>")
            continue

          prompt_name = parts[1]
          args = self._parse_prompt_arguments(parts[2:])
          await self.execute_prompt(prompt_name, args)
        else:
          print(f"Unknown command: {command}")
        continue

      # Process regular queries
      await self.process_query(query)

    except Exception as e:
      print(f"\nError in chat loop: {e}")
      traceback.print_exc()

There are a handful of helper functions to parse arguments and return the lists of available tools and prompts we registered earlier. Since they're fairly straightforward, I won't go into much detail here (you can check the full code if you are interested), but a possible sketch of the two parsing helpers follows below.
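
For reference, here is one way these two helpers could look: a sketch assuming shell-style quoting for command lines and a key=value convention for arguments (the exact implementation lives in the repository).

import shlex

def _parse_command_arguments(self, query):
    """Split a slash command into tokens, respecting quoted strings."""
    try:
        return shlex.split(query)  # e.g. '/prompt name question="..."'
    except ValueError as e:
        print(f"Error parsing command: {e}")
        return None

def _parse_prompt_arguments(self, arg_list):
    """Turn ['key1=value1', 'key2=value2'] into an arguments dict."""
    args = {}
    for arg in arg_list:
        if '=' in arg:
            key, value = arg.split('=', 1)  # split on the first '=' only
            args[key] = value
    return args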

Instead, let's dive deeper into how the interactions between the MCP client and server work in different scenarios.

When working with resources, we use the self.sessions mapping to find the right session (with a fallback option if needed) and then use that session to read the resource.

async def get_resource(self, resource_uri):
  """Retrieve and display content from an MCP resource."""
  session = self.sessions.get(resource_uri)

  # Fallback: find any session that handles this resource type
  if not session and resource_uri.startswith("changelog://"):
    session = next(
        (sess for uri, sess in self.sessions.items()
         if uri.startswith("changelog://")),
        None
    )

  if not session:
    print(f"Resource '{resource_uri}' not found.")
    return

  try:
    result = await session.read_resource(uri=resource_uri)
    if result and result.contents:
        print(f"\nResource: {resource_uri}")
        print("Content:")
        print(result.contents[0].text)
    else:
        print("No content available.")
  except Exception as e:
    print(f"Error reading resource: {e}")
    traceback.print_exc()

To execute a tool, we follow a similar process: start by finding the session and then use it to call the tool, passing its name and arguments.

async def execute_tool(self, tool_name, args):
  """Execute an MCP tool directly with given arguments."""
  session = self.sessions.get(tool_name)
  if not session:
      print(f"Tool '{tool_name}' not found.")
      return

  try:
      result = await session.call_tool(tool_name, arguments=args)
      print(f"\nTool '{tool_name}' result:")
      print(result.content)
  except Exception as e:
      print(f"Error executing tool: {e}")
      traceback.print_exc()

No surprise here. The same approach works for executing a prompt.

async def execute_prompt(self, prompt_name, args):
    """Execute an MCP prompt with given arguments and process the result."""
    session = self.sessions.get(prompt_name)
    if not session:
        print(f"Prompt '{prompt_name}' not found.")
        return

    try:
        result = await session.get_prompt(prompt_name, arguments=args)
        if result and result.messages:
            prompt_content = result.messages[0].content
            text = self._extract_prompt_text(prompt_content)

            print(f"\nExecuting prompt '{prompt_name}'...")
            await self.process_query(text)
    except Exception as e:
        print(f"Error executing prompt: {e}")
        traceback.print_exc()
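
One helper used above, _extract_prompt_text, isn't shown in the article; conceptually it just normalises the prompt message content into plain text. Here is a sketch, assuming the content may arrive as a plain string, a single text block or a list of blocks (as in the MCP Python SDK):

def _extract_prompt_text(self, prompt_content):
    """Normalise prompt message content into a plain string (sketch)."""
    if isinstance(prompt_content, str):
        return prompt_content
    if hasattr(prompt_content, 'text'):   # a single TextContent-like block
        return prompt_content.text
    if isinstance(prompt_content, list):  # a list of content blocks
        return "\n".join(
            item.text if hasattr(item, 'text') else str(item)
            for item in prompt_content
        )
    return str(prompt_content)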

The only major use case we haven't covered yet is handling general, free-form input from a user (not one of the special commands).
In this case, we send the initial request to the LLM first, then we parse the output, determining whether there are any tool calls. If tool calls are present, we execute them and feed the results back. Otherwise, we exit the infinite loop and return the answer to the user.

async def process_query(self, query):
  """Process a user query through Anthropic's Claude, handling tool calls iteratively."""
  messages = [{'role': 'user', 'content': query}]

  while True:
    response = self.anthropic.messages.create(
        max_tokens=2024,
        model='claude-3-7-sonnet-20250219',
        tools=self.available_tools,
        messages=messages
    )

    assistant_content = []
    has_tool_use = False

    for content in response.content:
        if content.type == 'text':
            print(content.text)
            assistant_content.append(content)
        elif content.type == 'tool_use':
            has_tool_use = True
            assistant_content.append(content)
            messages.append({'role': 'assistant', 'content': assistant_content})

            # Execute the tool call
            session = self.sessions.get(content.name)
            if not session:
                print(f"Tool '{content.name}' not found.")
                break

            result = await session.call_tool(content.name, arguments=content.input)
            messages.append({
                "role": "user",
                "content": [{
                    "type": "tool_result",
                    "tool_use_id": content.id,
                    "content": result.content
                }]
            })

    if not has_tool_use:
        break

So, now we have fully covered how the MCP chatbot works under the hood. It's time to test it in action. You can run it from the command line with the following command.

python mcp_client_example_base.py

When you run the chatbot, you'll first see the following introduction message outlining the possible options:

MCP Chatbot Started!
Commands:
  quit                           - Exit the chatbot
  @periods                       - Show available changelog periods
  @<period>                      - View changelog for specific period
  /tools                         - List available tools
  /tool <name> <arg1=value1>     - Execute a tool with arguments
  /prompts                       - List available prompts
  /prompt <name> <arg1=value1>   - Execute a prompt with arguments

From there, you can try out different commands, for example:

• call the tool to list the databases available in the DB,
• list all available prompts,
• use the prompt template, calling it like this: /prompt sql_query_prompt question="How many customers did we have in May 2024?".

Finally, you can finish the chat by typing quit.

Query: /tool list_databases
[07/02/25 18:27:28] INFO     Processing request of type CallToolRequest                server.py:619
Tool 'list_databases' result:
[TextContent(type='text', text='INFORMATION_SCHEMA\ndatasets\ndefault\necommerce\necommerce_db\ninformation_schema\nsystem\n', annotations=None, meta=None)]

Query: /prompts
Available prompts:
- sql_query_prompt: Create a SQL query prompt
  Arguments:
    - question

Query: /prompt sql_query_prompt question="How many customers did we have in May 2024?"
[07/02/25 18:28:21] INFO     Processing request of type GetPromptRequest               server.py:619
Executing prompt 'sql_query_prompt'...
I'll create a SQL query to find the number of customers in May 2024.
[07/02/25 18:28:25] INFO     Processing request of type CallToolRequest                server.py:619
Based on the query results, here's the final SQL query:
```sql
select uniqExact(user_id) as customer_count
from ecommerce.sessions
where toStartOfMonth(action_date) = '2024-05-01'
format TabSeparatedWithNames
```
Query: /tool execute_sql_query query="select uniqExact(user_id) as customer_count from ecommerce.sessions where toStartOfMonth(action_date) = '2024-05-01' format TabSeparatedWithNames"
I'll help you execute this SQL query to get the unique customer count for May 2024. Let me run this for you.
[07/02/25 18:30:09] INFO     Processing request of type CallToolRequest                server.py:619
The query has been executed successfully. The results show that there were 246,852 unique customers (unique user_ids) in May 2024 based on the ecommerce.sessions table.

Query: quit

Looks pretty cool! Our basic version is working well. Now, it's time to take it one step further and make our chatbot smarter by teaching it to suggest relevant prompts on the fly based on user input.

Prompt suggestions

In practice, suggesting prompt templates that best match the user's task can be extremely helpful. Right now, users of our chatbot need to either already know about the available prompts or at least be curious enough to explore them on their own to benefit from what we've built. By adding a prompt suggestions feature, we can do this discovery for our users and make our chatbot significantly more convenient and user-friendly.

Let's brainstorm ways to add this functionality. I would approach this feature in the following way:

Evaluate the relevance of the prompts using the LLM. Iterate through all available prompt templates and, for each one, assess whether the prompt is a good match for the user's query.

Suggest a matching prompt to the user. If we found a relevant prompt template, share it with the user and ask whether they would like to execute it.

Merge the prompt template with the user input. If the user accepts, combine the chosen prompt with the original query. Since prompt templates have placeholders, we may need the LLM to fill them in. Once we've merged the prompt template with the user's query, we'll have an updated message ready to send to the LLM.

We will add this logic to the process_query function. Thanks to our modular design, it's quite easy to add this enhancement without disrupting the rest of the code.

Let's start by implementing a function to find the most relevant prompt template. We will use the LLM to evaluate each prompt and assign it a relevance score from 0 to 5. After that, we'll filter out any prompts with a score of 2 or lower and return only the most relevant one (the one with the highest relevance score among the remaining results).

async def _find_matching_prompt(self, query):
  """Find a matching prompt for the given query using LLM evaluation."""
  if not self.available_prompts:
    return None

  # Use LLM to evaluate prompt relevance
  prompt_scores = []

  for prompt in self.available_prompts:
    # Create evaluation prompt for the LLM
    evaluation_prompt = f"""
You are an expert at evaluating whether a prompt template is relevant for a user query.

User Query: "{query}"

Prompt Template:
- Name: {prompt['name']}
- Description: {prompt['description']}

Rate the relevance of this prompt template for the user query on a scale of 0-5:
- 0: Completely irrelevant
- 1: Slightly relevant
- 2: Somewhat relevant
- 3: Moderately relevant
- 4: Highly relevant
- 5: Perfect match

Consider:
- Does the prompt template address the user's intent?
- Would using this prompt template provide a better response than a generic query?
- Are the topics and context aligned?

Respond with only a single number (0-5) and no other text.
"""

    try:
      response = self.anthropic.messages.create(
          max_tokens=10,
          model='claude-3-7-sonnet-20250219',
          messages=[{'role': 'user', 'content': evaluation_prompt}]
      )

      # Extract the score from the response
      score_text = response.content[0].text.strip()
      score = int(score_text)

      if score >= 3:  # Only consider prompts with score >= 3
          prompt_scores.append((prompt, score))

    except Exception as e:
        print(f"Error evaluating prompt {prompt['name']}: {e}")
        continue

  # Return the prompt with the highest score
  if prompt_scores:
      best_prompt, best_score = max(prompt_scores, key=lambda x: x[1])
      return best_prompt

  return None

The next function we need to implement is one that combines the chosen prompt template with the user input. We will rely on the LLM to merge them intelligently, filling in all placeholders as needed.

async def _combine_prompt_with_query(self, prompt_name, user_query):
  """Use LLM to combine prompt template with user query."""
  # First, get the prompt template content
  session = self.sessions.get(prompt_name)
  if not session:
      print(f"Prompt '{prompt_name}' not found.")
      return None

  try:
      # Find the prompt definition to get its arguments
      prompt_def = None
      for prompt in self.available_prompts:
          if prompt['name'] == prompt_name:
              prompt_def = prompt
              break

      # Prepare arguments for the prompt template
      args = {}
      if prompt_def and prompt_def.get('arguments'):
          for arg in prompt_def['arguments']:
              arg_name = arg.name if hasattr(arg, 'name') else arg.get('name', '')
              if arg_name:
                  # Use placeholder format for arguments
                  args[arg_name] = '<' + str(arg_name) + '>'

      # Get the prompt template with arguments
      result = await session.get_prompt(prompt_name, arguments=args)
      if not result or not result.messages:
          print(f"Could not retrieve prompt template for '{prompt_name}'")
          return None

      prompt_content = result.messages[0].content
      prompt_text = self._extract_prompt_text(prompt_content)

      # Create combination prompt for the LLM
      combination_prompt = f"""
You are an expert at combining prompt templates with user queries to create optimized prompts.

Original User Query: "{user_query}"

Prompt Template:
{prompt_text}

Your task:
1. Analyze the user's query and the prompt template
2. Combine them intelligently to create a single, coherent prompt
3. Ensure the user's specific question/request is addressed within the context of the template
4. Preserve the structure and intent of the template while incorporating the user's query

Respond with only the combined prompt text, no explanations or additional text.
"""

      response = self.anthropic.messages.create(
          max_tokens=2048,
          model='claude-3-7-sonnet-20250219',
          messages=[{'role': 'user', 'content': combination_prompt}]
      )

      return response.content[0].text.strip()

  except Exception as e:
      print(f"Error combining prompt with query: {e}")
      return None

Then, we simply update the process_query logic to check for matching prompts, ask the user for confirmation and decide which message to send to the LLM.

async def process_query(self, query):
  """Process a user query through Anthropic's Claude, handling tool calls iteratively."""
  # Check if there's a matching prompt first
  matching_prompt = await self._find_matching_prompt(query)

  if matching_prompt:
    print(f"Found matching prompt: {matching_prompt['name']}")
    print(f"Description: {matching_prompt['description']}")

    # Ask user if they want to use the prompt template
    use_prompt = input("Would you like to use this prompt template? (y/n): ").strip().lower()

    if use_prompt == 'y' or use_prompt == 'yes':
        print("Combining prompt template with your query...")

        # Use LLM to combine prompt template with user query
        combined_prompt = await self._combine_prompt_with_query(matching_prompt['name'], query)

        if combined_prompt:
            print(f"Combined prompt created. Processing...")
            # Process the combined prompt instead of the original query
            messages = [{'role': 'user', 'content': combined_prompt}]
        else:
            print("Failed to combine prompt template. Using original query.")
            messages = [{'role': 'user', 'content': query}]
    else:
        # Use original query if user doesn't want to use the prompt
        messages = [{'role': 'user', 'content': query}]
  else:
    # Process the original query if no matching prompt found
    messages = [{'role': 'user', 'content': query}]

  # Process the final query (either original or combined)
  while True:
    response = self.anthropic.messages.create(
        max_tokens=2024,
        model='claude-3-7-sonnet-20250219',
        tools=self.available_tools,
        messages=messages
    )

    assistant_content = []
    has_tool_use = False

    for content in response.content:
      if content.type == 'text':
          print(content.text)
          assistant_content.append(content)
      elif content.type == 'tool_use':
          has_tool_use = True
          assistant_content.append(content)
          messages.append({'role': 'assistant', 'content': assistant_content})

          # Log tool call information
          print(f"\n[TOOL CALL] Tool: {content.name}")
          print(f"[TOOL CALL] Arguments: {json.dumps(content.input, indent=2)}")

          # Execute the tool call
          session = self.sessions.get(content.name)
          if not session:
              print(f"Tool '{content.name}' not found.")
              break

          result = await session.call_tool(content.name, arguments=content.input)

          # Log tool result
          print(f"[TOOL RESULT] Tool: {content.name}")
          print(f"[TOOL RESULT] Content: {result.content}")

          messages.append({
              "role": "user",
              "content": [{
                  "type": "tool_result",
                  "tool_use_id": content.id,
                  "content": result.content
              }]
          })

    if not has_tool_use:
        break

Now, let's test our updated version with a question about our data. Excitingly, the chatbot was able to find the right prompt and use it to arrive at the right answer.

Query: How many customers did we have in May 2024?
Found matching prompt: sql_query_prompt
Description: Create a SQL query prompt
Would you like to use this prompt template? (y/n): y
Combining prompt template with your query...
[07/05/25 14:38:58] INFO     Processing request of type GetPromptRequest               server.py:619
Combined prompt created. Processing...
I'll write a query to count unique customers who had sessions in May 2024. Since this is a business metric, I'll exclude fraudulent sessions.

[TOOL CALL] Tool: execute_sql_query
[TOOL CALL] Arguments: {
  "query": "/* Count distinct users with non-fraudulent sessions in May 2024\n   Using uniqExact for precise user count\n   Filtering for May 2024 using toStartOfMonth and adding date range */\nSELECT \n    uniqExactIf(s.user_id, s.is_fraud = 0) AS active_customers_count\nFROM ecommerce.sessions s\nWHERE toStartOfMonth(action_date) = toDate('2024-05-01')\nFORMAT TabSeparatedWithNames"
}
[07/05/25 14:39:17] INFO     Processing request of type CallToolRequest                server.py:619
[TOOL RESULT] Tool: execute_sql_query
[TOOL RESULT] Content: [TextContent(type='text', text='active_customers_count\n245287\n', annotations=None, meta=None)]
The query shows we had 245,287 unique customers with legitimate (non-fraudulent) sessions in May 2024. Here's a breakdown of why I wrote the query this way:

1. Used uniqExactIf() to get a precise count of unique users while excluding fraudulent sessions in a single step
2. Used toStartOfMonth() to make sure we capture all days in May 2024
3. Specified the date format properly with toDate('2024-05-01')
4. Used TabSeparatedWithNames format as required
5. Provided a meaningful column alias

Would you like to see any variations of this analysis, such as including fraudulent sessions or breaking down the numbers by country?

It's always a good idea to test negative examples as well. In this case, the chatbot behaves as expected and doesn't suggest an SQL-related prompt when given an unrelated question.

Query: How are you?
I should note that I'm an AI assistant focused on helping you work with the available tools, which include executing SQL queries, getting database/table information, and accessing GitHub PR data. I don't have a tool specifically for responding to personal questions.

I can help you:
- Query a ClickHouse database
- List databases and describe tables
- Get information about GitHub Pull Requests

What would you like to know about these areas?

Now that our chatbot is up and running, we're ready to wrap things up.

BONUS: quick and easy MCP client with smolagents

We've looked at low-level code that allows building highly customised MCP clients, but many use cases require only basic functionality. So, I decided to share a quick and simple implementation for scenarios when you need just the tools. We will use one of my favourite agent frameworks, smolagents from HuggingFace (I've discussed this framework in detail in my previous article).

# needed imports
from smolagents import CodeAgent, DuckDuckGoSearchTool, LiteLLMModel, VisitWebpageTool, ToolCallingAgent, ToolCollection
from mcp import StdioServerParameters
import json
import os

# setting the OpenAI API key
with open('../../config.json') as f:
    config = json.loads(f.read())

os.environ["OPENAI_API_KEY"] = config['OPENAI_API_KEY']

# defining the LLM
model = LiteLLMModel(
    model_id="openai/gpt-4o-mini",
    max_tokens=2048
)

# configuration for the MCP server
server_parameters = StdioServerParameters(
    command="uv",
    args=[
        "--directory",
        "/path/to/github/mcp-analyst-toolkit/src/mcp_server",
        "run",
        "server.py"
    ],
    env={"GITHUB_TOKEN": "github_<your_token>"},
)
    
# prompt
CLICKHOUSE_PROMPT_TEMPLATE = """
You are a senior data analyst with more than 10 years of experience writing complex SQL queries, specifically optimised for ClickHouse, to answer user questions.

## Database Schema

You are working with an e-commerce analytics database containing the following tables:

### Table: ecommerce.users
**Description:** Customer information for the online shop
**Primary Key:** user_id
**Fields:**
- user_id (Int64) - Unique customer identifier (e.g., 1000004, 3000004)
- country (String) - Customer's country of residence (e.g., "Netherlands", "United Kingdom")
- is_active (Int8) - Customer status: 1 = active, 0 = inactive
- age (Int32) - Customer age in full years (e.g., 31, 72)

### Table: ecommerce.sessions
**Description:** User session data and transaction records
**Primary Key:** session_id
**Foreign Key:** user_id (references ecommerce.users.user_id)
**Fields:**
- user_id (Int64) - Customer identifier linking to users table (e.g., 1000004, 3000004)
- session_id (Int64) - Unique session identifier (e.g., 106, 1023)
- action_date (Date) - Session start date (e.g., "2021-01-03", "2024-12-02")
- session_duration (Int32) - Session duration in seconds (e.g., 125, 49)
- os (String) - Operating system used (e.g., "Windows", "Android", "iOS", "MacOS")
- browser (String) - Browser used (e.g., "Chrome", "Safari", "Firefox", "Edge")
- is_fraud (Int8) - Fraud indicator: 1 = fraudulent session, 0 = legitimate
- revenue (Float64) - Purchase amount in USD (0.0 for non-purchase sessions, >0 for purchases)

## ClickHouse-Specific Guidelines

1. **Use ClickHouse-optimised functions:**
   - uniqExact() for precise unique counts
   - uniqExactIf() for conditional unique counts
   - quantile() functions for percentiles
   - Date functions: toStartOfMonth(), toStartOfYear(), today()

2. **Query formatting requirements:**
   - Always end queries with "format TabSeparatedWithNames"
   - Use meaningful column aliases
   - Use proper JOIN syntax when combining tables
   - Wrap date literals in quotes (e.g., '2024-01-01')

3. **Performance considerations:**
   - Use appropriate WHERE clauses to filter data
   - Consider using HAVING for post-aggregation filtering
   - Use LIMIT when finding top/bottom results

4. **Data interpretation:**
   - revenue > 0 indicates a purchase session
   - revenue = 0 indicates a browsing session without purchase
   - is_fraud = 1 sessions should usually be excluded from business metrics unless specifically analysing fraud

## Response Format
Provide only the SQL query as your answer. Include brief reasoning in comments if the query logic is complex.

## Examples

**Question:** How many customers made a purchase in December 2024?
**Answer:** select uniqExact(user_id) as customers from ecommerce.sessions where toStartOfMonth(action_date) = '2024-12-01' and revenue > 0 format TabSeparatedWithNames

**Question:** What was the fraud rate in 2023, expressed as a percentage?
**Answer:** select 100 * uniqExactIf(user_id, is_fraud = 1) / uniqExact(user_id) as fraud_rate from ecommerce.sessions where toStartOfYear(action_date) = '2023-01-01' format TabSeparatedWithNames

**Question:** What was the share of users using Windows yesterday?
**Answer:** select 100 * uniqExactIf(user_id, os = 'Windows') / uniqExact(user_id) as windows_share from ecommerce.sessions where action_date = today() - 1 format TabSeparatedWithNames

**Question:** What was the revenue from Dutch users aged 55 and older in December 2024?
**Answer:** select sum(s.revenue) as total_revenue from ecommerce.sessions as s inner join ecommerce.users as u on s.user_id = u.user_id where u.country = 'Netherlands' and u.age >= 55 and toStartOfMonth(s.action_date) = '2024-12-01' format TabSeparatedWithNames

**Question:** What are the median and interquartile range (IQR) of purchase revenue for each country?
**Answer:** select country, median(revenue) as median_revenue, quantile(0.25)(revenue) as q25_revenue, quantile(0.75)(revenue) as q75_revenue from ecommerce.sessions as s inner join ecommerce.users as u on u.user_id = s.user_id where revenue > 0 group by country format TabSeparatedWithNames

**Question:** What is the average number of days between the first session and the first purchase for users who made at least one purchase?
**Answer:** select avg(first_purchase - first_action_date) as avg_days_to_purchase from (select user_id, min(action_date) as first_action_date, minIf(action_date, revenue > 0) as first_purchase, max(revenue) as max_revenue from ecommerce.sessions group by user_id) where max_revenue > 0 format TabSeparatedWithNames

**Question:** What is the number of sessions in December 2024, broken down by operating systems, including the totals?
**Answer:** select os, uniqExact(session_id) as session_count from ecommerce.sessions where toStartOfMonth(action_date) = '2024-12-01' group by os with totals format TabSeparatedWithNames

**Question:** Do we have customers who used multiple browsers during 2024? If so, please calculate the number of customers for each combination of browsers.
**Answer:** select browsers, count(*) as customer_count from (select user_id, arrayStringConcat(arraySort(groupArray(distinct browser)), ', ') as browsers from ecommerce.sessions where toStartOfYear(action_date) = '2024-01-01' group by user_id) group by browsers order by customer_count desc format TabSeparatedWithNames

**Question:** Which browser has the highest share of fraud users?
**Answer:** select browser, 100 * uniqExactIf(user_id, is_fraud = 1) / uniqExact(user_id) as fraud_rate from ecommerce.sessions group by browser order by fraud_rate desc limit 1 format TabSeparatedWithNames

**Question:** Which country had the highest number of first-time users in 2024?
**Answer:** select country, count(distinct user_id) as new_users from (select user_id, min(action_date) as first_date from ecommerce.sessions group by user_id having toStartOfYear(first_date) = '2024-01-01') as t inner join ecommerce.users as u on t.user_id = u.user_id group by country order by new_users desc limit 1 format TabSeparatedWithNames

---

**Your Task:** Using all the information provided above, write a ClickHouse SQL query to answer the following customer question:
{question}
"""
    
with ToolCollection.from_mcp(server_parameters, trust_remote_code=True) as tool_collection:
  agent = ToolCallingAgent(tools=[*tool_collection.tools], model=model)
  prompt = CLICKHOUSE_PROMPT_TEMPLATE.format(
      question = 'How many customers did we have in May 2024?'
  )
  response = agent.run(prompt)

As a result, we got the correct answer.

Image by author

If you don't need much customisation or integration with prompts and resources, this implementation is definitely the way to go.

Summary

In this article, we built a chatbot that integrates with MCP servers and leverages all the benefits of standardisation to access tools, prompts, and resources seamlessly.

We started with a basic implementation capable of listing and accessing MCP capabilities. Then, we enhanced our chatbot with a smart feature that suggests relevant prompt templates to users based on their input. This makes our product more intuitive and user-friendly, especially for users unfamiliar with the full library of available prompts.

To implement our chatbot, we used relatively low-level code, giving you a better understanding of how the MCP protocol works under the hood and what happens when you use AI tools like Claude Desktop or Cursor.

As a bonus, we also discussed the smolagents implementation that lets you quickly deploy an MCP client integrated with tools.

Thank you for reading. I hope this article was insightful. Remember Einstein's advice: "The important thing is not to stop questioning. Curiosity has its own reason for existing." May your curiosity lead you to your next great insight.

Reference

This article is inspired by the "MCP: Build Rich-Context AI Apps with Anthropic" short course from DeepLearning.AI.


