    Measuring What Matters with NeMo Agent Toolkit

    By ProfitlyAI | January 6, 2026


    After a decade working in analytics, I firmly believe that observability and evaluation are essential for any LLM application running in production. Monitoring and metrics aren't just nice-to-haves. They ensure your product is functioning as expected and that every new update is actually moving you in the right direction.

    In this article, I want to share my experience with the observability and evaluation features of the NeMo Agent Toolkit (NAT). If you haven't read my previous article on NAT, here's a quick refresher: NAT is Nvidia's framework for building production-ready LLM applications. Think of it as the glue that connects LLMs, tools, and workflows, while also offering deployment and observability options.

    Using NAT, we built a Happiness Agent capable of answering nuanced questions about the World Happiness Report data and performing calculations based on real metrics. Our focus was on building agentic flows, integrating agents from other frameworks as tools (in our example, a LangGraph-based calculator agent), and deploying the application both as a REST API and as a user-friendly interface.

    On this article, I’ll dive into my favorite subjects: observability and evaluations. In any case, because the saying goes, you’ll be able to’t enhance what you don’t measure. So, with out additional ado, let’s soar in.

    Observability

    Let’s begin with observability — the flexibility to trace what’s taking place inside your software, together with all intermediate steps, instruments used, timings, and token utilization. The NeMo Agent Toolkit integrates with quite a lot of observability instruments reminiscent of Phoenix, W&B Weave, and Catalyst. You may all the time verify the most recent checklist of supported frameworks in the documentation.

    For this text, we’ll strive Phoenix. Phoenix is an open-source platform for tracing and evaluating LLMs. Earlier than we are able to begin utilizing it, we first want to put in the plugin.

    uv pip install arize-phoenix
    uv pip install "nvidia-nat[phoenix]"

    Next, we can launch the Phoenix server.

    phoenix serve

    As soon as it’s working, the tracing service shall be obtainable at http://localhost:6006/v1/traces. At this level, you’ll see a default challenge since we haven’t despatched any knowledge but.

    Image by author

    Now that the Phoenix server is running, let's see how we can start using it. Since NAT is driven by YAML configuration, all we need to do is add a telemetry section to our config. You can find the config and the full agent implementation on GitHub. If you want to learn more about the NAT framework, check out my previous article.

    general:
      telemetry:
        tracing:
          phoenix:
            _type: phoenix
            endpoint: http://localhost:6006/v1/traces
            project: happiness_report

    With this in place, we can run our agent.

    export ANTHROPIC_API_KEY=<your_key>
    source .venv_nat_uv/bin/activate
    cd happiness_v3
    uv pip install -e .
    cd ..
    nat run \
      --config_file happiness_v3/src/happiness_v3/configs/config.yml \
      --input "How much happier in percentages are people in Finland compared to the UK?"

    Let’s run a couple of extra queries to see what sort of knowledge Phoenix can monitor.

    nat run \
      --config_file happiness_v3/src/happiness_v3/configs/config.yml \
      --input "Are people overall getting happier over time?"
    
    nat run \
      --config_file happiness_v3/src/happiness_v3/configs/config.yml \
      --input "Is Switzerland in the first place?"
    
    nat run \
      --config_file happiness_v3/src/happiness_v3/configs/config.yml \
      --input "What is the main contributor to the happiness in the UK?"
    
    nat run \
      --config_file happiness_v3/src/happiness_v3/configs/config.yml \
      --input "Are people in France happier than in Germany?"

    After running these queries, you'll find a new project in Phoenix (happiness_report, as we defined in the config) along with all the LLM calls we just made. This gives you a clear view of what's happening under the hood.

    Image by author

    We can zoom in on one of the queries, like "Are people overall getting happier over time?"

    Image by author

    This query takes quite a while (about 25 seconds) because it involves five tool calls, one for each year. If we expect a lot of similar questions about overall trends, it might make sense to give our agent a new tool that can calculate summary statistics in a single call, as sketched below.
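    To make the idea concrete, here is a minimal, framework-agnostic sketch of what the core of such a tool could look like. It assumes the report data is already loaded into a pandas DataFrame with hypothetical year and happiness_score columns; wiring it up as a NAT tool is left out.

    import pandas as pd

    def happiness_trend_summary(df: pd.DataFrame) -> dict:
        """Aggregate happiness scores across all years in one pass.

        Assumes hypothetical columns `year` and `happiness_score`;
        adapt the names to the actual World Happiness Report schema.
        """
        yearly = df.groupby("year")["happiness_score"].mean().sort_index()
        return {
            "mean_score_by_year": yearly.round(4).to_dict(),
            "change_first_to_last_year": round(float(yearly.iloc[-1] - yearly.iloc[0]), 4),
        }

    Registered as a tool, a single call like this would replace the five per-year lookups the agent currently makes for trend questions.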

    This is exactly where observability shines: by revealing bottlenecks and inefficiencies, it helps you reduce costs and deliver a smoother experience for users.

    Evaluations

    Observability is about tracing how your application behaves in production. This information is helpful, but it isn't enough to say whether the quality of the answers is good enough or whether a new version performs better. To answer such questions, we need evaluations. Fortunately, the NeMo Agent Toolkit can help us with evals as well.

    First, let’s put collectively a small set of evaluations. We have to specify simply 3 fields: id, query and reply. 

    [
      {
        "id": "1",
        "question": "In what country was the happiness score highest in 2021?",
        "answer": "Finland"
      }, 
      {
        "id": "2",
        "question": "What contributed most to the happiness score in 2024?",
        "answer": "Social Support"
      }, 
      {
        "id": "3",
        "question": "How UK's rank changed from 2019 to 2024?",
        "answer": "The UK's rank dropped from 13th in 2019 to 23rd in 2024."
      },
      {
        "id": "4",
        "question": "Are people in France happier than in Germany based on the latest report?",
        "answer": "No, Germany is at 22nd place in 2024 while France is at 33rd place."
      },
      {
        "id": "5",
        "question": "How much in percents are people in Poland happier in 2024 compared to 2019?",
        "answer": "Happiness in Poland increased by 7.9% from 2019 to 2024. It was 6.1863 in 2019 and 6.6730 in 2024."
      }
    ]

    Next, we need to update our YAML config to define where to store evaluation results and where to find the evaluation dataset. I set up a dedicated eval_llm for evaluation purposes to keep the solution modular, and I'm using Sonnet 4.5 for it.

    # Evaluation configuration
    eval:
      general:
        output:
          dir: ./tmp/nat/happiness_v3/eval/evals/
          cleanup: false
        dataset:
          _type: json
          file_path: src/happiness_v3/data/evals.json
    
      evaluators:
        answer_accuracy:
          _type: ragas
          metric: AnswerAccuracy
          llm_name: eval_llm
        groundedness:
          _type: ragas
          metric: ResponseGroundedness
          llm_name: eval_llm
        trajectory_accuracy:
          _type: trajectory
          llm_name: eval_llm

    I’ve outlined a number of evaluators right here. We’ll give attention to Reply Accuracy and Response Groundedness from Ragas (an open-source framework for evaluating LLM workflows end-to-end), in addition to trajectory analysis. Let’s break them down.

    Answer Accuracy measures how well a model's response aligns with a reference ground truth. It uses two "LLM-as-a-Judge" prompts, each returning a rating of 0, 2, or 4. These ratings are then converted to a [0,1] scale and averaged. Higher scores indicate that the model's answer closely matches the reference.

    • 0 → Response is inaccurate or off-topic,
    • 2 → Response partially aligns,
    • 4 → Response exactly aligns.

    Response Groundedness evaluates whether a response is supported by the retrieved contexts, that is, whether each claim can be found (fully or partially) in the provided data. It works similarly to Answer Accuracy, using two distinct "LLM-as-a-Judge" prompts with ratings of 0, 1, or 2, which are then normalised to a [0,1] scale (a small sketch of this normalisation follows the list below).

    • 0 → Not grounded at all,
    • 1 → Partially grounded,
    • 2 → Fully grounded.
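    Both metrics share the same arithmetic: two judge ratings on a small discrete scale are scaled to [0,1] and averaged. A purely illustrative sketch of that calculation (not Ragas' actual implementation):

    def normalise_judge_ratings(ratings: list[int], max_rating: int) -> float:
        """Scale each judge rating to [0, 1] and average them."""
        return sum(r / max_rating for r in ratings) / len(ratings)

    # Answer Accuracy: two judges rating on {0, 2, 4}
    print(normalise_judge_ratings([4, 2], max_rating=4))  # 0.75

    # Response Groundedness: two judges rating on {0, 1, 2}
    print(normalise_judge_ratings([2, 2], max_rating=2))  # 1.0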

    Trajectory Evaluation tracks the intermediate steps and tool calls executed by the LLM, helping to monitor the reasoning process. A judge LLM evaluates the trajectory produced by the workflow, taking into account the tools used during execution. It returns a floating-point score between 0 and 1, where 1 represents a perfect trajectory.

    Let’s run evaluations to see the way it works in apply.

    nat eval --config_file src/happiness_v3/configs/config.yml

    Running the evaluations produces several files in the output directory we specified earlier. One of the most useful is workflow_output.json. This file contains the execution results for each sample in our evaluation set, including the original question, the answer generated by the LLM, the expected answer, and a detailed breakdown of all intermediate steps. It can help you trace how the system behaved in each case.

    Right here’s a shortened instance for the primary pattern.

    {
      "id": 1,
      "question": "In what country was the happiness score highest in 2021?",
      "answer": "Finland",
      "generated_answer": "Finland had the highest happiness score in 2021 with a score of 7.821.",
      "intermediate_steps": [...],
      "expected_intermediate_steps": []
    }
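    If you prefer to scan these results programmatically rather than eyeball the JSON, a small helper like the one below prints the expected and generated answers side by side. It is a sketch that assumes the file is a JSON array of per-sample objects with the fields shown above, and that the path matches the output directory from the eval config:

    import json
    from pathlib import Path

    # Hypothetical path; adjust to the `eval.general.output.dir` in your config.
    output_file = Path("./tmp/nat/happiness_v3/eval/evals/workflow_output.json")

    for item in json.loads(output_file.read_text()):
        print(f"[{item['id']}] {item['question']}")
        print(f"  expected : {item['answer']}")
        print(f"  generated: {item['generated_answer']}")
        print(f"  steps    : {len(item['intermediate_steps'])}")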

    For Answer Accuracy and Response Groundedness, we achieved the highest possible scores (1.0 out of 1.0 on average), which is always good to see. Here is the returned file.

    {
      "average_score": 1.0,
      "eval_output_items": [
        {
          "id": 1,
          "score": 1.0,
          "reasoning": {
            "user_input": "In what country was the happiness score highest in 2021?",
            "reference": "Finland",
            "response": "Finland had the highest happiness score in 2021 with a score of 7.821.",
            "retrieved_contexts": [...]
          }
        },
        {
          "id": 2,
          "score": 1.0,
          "reasoning": {
            "user_input": "What contributed most to the happiness score in 2024?",
            "reference": "Social Support",
            "response": "In 2024, **Social Support** contributed most to the happiness score, with an average impact of 1.333 points. This was followed very closely by **GDP per capita** with an impact of 1.319 points. These two factors were significantly more influential than other contributors like life expectancy (0.551), freedom (0.749), generosity (0.113), and perceptions of corruption (0.147). This suggests that having strong social connections and relationships, together with economic prosperity, were the primary drivers of happiness across countries in 2024.",
            "retrieved_contexts": [...]
          }
        },
        {
          "id": 3,
          "score": 1.0,
          "reasoning": {
            "user_input": "How UK's rank changed from 2019 to 2024?",
            "reference": "The UK's rank dropped from 13th in 2019 to 23rd in 2024.",
            "response": "The UK's rank in the World Happiness Report changed from 13th place in 2019 to 23rd place in 2024, representing a decline of 10 positions. The happiness score also decreased from 7.1645 in 2019 to 6.7280 in 2024.",
            "retrieved_contexts": [...]
          }
        },
        {
          "id": 4,
          "score": 1.0,
          "reasoning": {
            "user_input": "Are people in France happier than in Germany based on the latest report?",
            "reference": "No, Germany is at 22nd place in 2024 while France is at 33rd place.",
            "response": "Based on the latest 2024 World Happiness Report, people in Germany are happier than people in France. Germany has a happiness score of 6.753 (ranked 22nd globally), while France has a happiness score of 6.593 (ranked 33rd globally). The difference is 0.16 points in Germany's favor.",
            "retrieved_contexts": [...]
          }
        },
        {
          "id": 5,
          "score": 1.0,
          "reasoning": {
            "user_input": "How much in percents are people in Poland happier in 2024 compared to 2019?",
            "reference": "Happiness in Poland increased by 7.9% from 2019 to 2024. It was 6.1863 in 2019 and 6.6730 in 2024.",
            "response": "People in Poland are approximately 7.87% happier in 2024 compared to 2019. The happiness score increased from 6.1863 in 2019 to 6.6730 in 2024, representing an increase of 0.4867 points or about 7.87%.",
            "retrieved_contexts": [...]
          }
        }
      ]
    }

    For trajectory evaluation, we achieved an average score of 0.95. To understand where the model fell short, let's look at one non-ideal example. For the fifth question, the judge correctly identified that the agent followed a suboptimal path: it took 8 steps to reach the final answer, even though the same result could have been achieved in 4-5 steps. As a result, this trajectory received a score of 0.75 out of 1.0.

    Let me evaluate this AI language model's performance step by step:
    
    ## Evaluation Criteria:
    **i. Is the final answer helpful?**
    Yes, the final answer is clear, accurate, and directly addresses the question.
    It provides both the percentage increase (7.87%) and explains the underlying
    data (happiness scores from 6.1863 to 6.6730). The answer is well-formatted
    and easy to understand.
    
    **ii. Does the AI language model use a logical sequence of tools to answer the question?**
    Yes, the sequence is logical:
    1. Query country statistics for Poland
    2. Retrieve the data showing happiness scores for multiple years, including
    2019 and 2024
    3. Use a calculator to compute the percentage increase
    4. Formulate the final answer
    This is a sensible approach to the problem.
    
    **iii. Does the AI language model use the tools in a helpful way?**
    Yes, the tools are used appropriately:
    - The `country_stats` tool successfully retrieved the relevant happiness data
    - The `calculator_agent` correctly computed the percentage increase using
    the correct formula
    - The Python evaluation tool performed the actual calculation accurately
    
    **iv. Does the AI language model use too many steps to answer the question?**
    This is where there is some inefficiency. The model uses 8 steps in total, which
    includes some redundancy:
    - Steps 4-7 appear to involve multiple calls to calculate the same percentage
    (the calculator_agent is invoked, which then calls Claude Opus, which calls
    evaluate_python, and returns through the chain)
    - Step 7 seems to repeat what was already done in steps 4-6
    While the answer is correct, there is unnecessary duplication. The calculation
    could have been done more efficiently in 4-5 steps instead of 8.
    
    **v. Are the right tools used to answer the question?**
    Yes, the tools chosen are appropriate:
    - `country_stats` was the right tool to get happiness data for Poland
    - `calculator_agent` was appropriate for computing the percentage change
    - The underlying `evaluate_python` tool correctly performed the mathematical
    calculation
    
    ## Summary:
    The model successfully answered the question with accurate data and correct
    calculations. The logical flow was sound, and appropriate tools were chosen.
    However, there was some inefficiency in the execution, with redundant steps
    in the calculation phase.

    Looking at the reasoning, this is a surprisingly comprehensive evaluation of the entire LLM workflow. What's especially valuable is that it works out of the box and doesn't require any ground-truth data. I would definitely recommend using this evaluation in your applications.

    Comparing different versions

    Evaluations become especially powerful when you need to compare different versions of your application. Imagine a team focused on cost optimisation and considering a switch from the more expensive sonnet model to haiku. With NAT, changing the model takes less than a minute, but doing so without validating quality would be risky. This is exactly where evaluations shine.

    For this comparison, we'll also introduce another observability tool: W&B Weave. It provides particularly useful visualisations and side-by-side comparisons across different versions of your workflow.

    To get started, you'll need to sign up on the W&B website and obtain an API key. W&B is free to use for personal projects.

    export WANDB_API_KEY=<your key>

    Next, install the required packages and plugins.

    uv pip install wandb weave
    uv pip install "nvidia-nat[weave]"

    We also need to update our YAML config. This includes adding Weave to the telemetry section and introducing a workflow alias so we can clearly distinguish between different versions of the application.

    general:
      telemetry:
        tracing:
          phoenix:
            _type: phoenix
            endpoint: http://localhost:6006/v1/traces
            project: happiness_report
          weave: # added Weave
            _type: weave
            project: "nat-simple"
    
    eval:
      general:
        workflow_alias: "nat-simple-sonnet-4-5" # added alias
        output:
          dir: ./.tmp/nat/happiness_v3/eval/evals/
          cleanup: false
        dataset:
          _type: json
          file_path: src/happiness_v3/data/evals.json
    
      evaluators:
        answer_accuracy:
          _type: ragas
          metric: AnswerAccuracy
          llm_name: chat_llm
        groundedness:
          _type: ragas
          metric: ResponseGroundedness
          llm_name: chat_llm
        trajectory_accuracy:
          _type: trajectory
          llm_name: chat_llm

    For the haiku version, I created a separate config where both chat_llm and calculator_llm use haiku instead of sonnet.

    Now we can run the evaluations for both versions.

    nat eval --config_file src/happiness_v3/configs/config.yml
    nat eval --config_file src/happiness_v3/configs/config_simple.yml

    Once the evaluations are complete, we can head over to the W&B interface and find a comprehensive comparison report. I really like the radar chart visualisation, as it makes trade-offs immediately apparent.

    Image by author
    Image by author

    With sonnet, we observe higher token usage (and a higher cost per token) as well as slower response times (24.8 seconds compared to 16.9 seconds for haiku). However, despite the clear gains in speed and cost, I wouldn't recommend switching models. The drop in quality is too large: trajectory accuracy falls from 0.85 to 0.55, and answer accuracy drops from 0.95 to 0.45. In this case, evaluations helped us avoid breaking the user experience in the pursuit of cost optimisation.

    You can find the full implementation on GitHub.

    Summary

    In this article, we explored the NeMo Agent Toolkit's observability and evaluation capabilities.

    • We worked with two observability tools (Phoenix and W&B Weave), both of which integrate seamlessly with NAT and let us log what's happening inside our system in production, as well as capture evaluation results.
    • We also walked through how to configure evaluations in NAT and used W&B Weave to compare the performance of two different versions of the same application. This made it easy to reason about trade-offs between cost, latency, and answer quality.

    The NeMo Agent Toolkit delivers solid, production-ready features for observability and evaluations, which are foundational pieces of any serious LLM application. The standout for me, however, was W&B Weave, whose evaluation visualisations make comparing models and trade-offs remarkably easy.

    Thank you for reading. I hope this article was insightful. Remember Einstein's advice: "The important thing is not to stop questioning. Curiosity has its own reason for existing." May your curiosity lead you to your next great insight.

    Reference

    This article is inspired by the "Nvidia's NeMo Agent Toolkit: Making Agents Reliable" short course from DeepLearning.AI.


