, we often need to investigate what's happening with KPIs: whether we're reacting to anomalies on our dashboards or simply doing a routine numbers update. Based on my years of experience as a KPI analyst, I would estimate that more than 80% of these tasks are fairly standard and can be solved simply by following a straightforward checklist.
Here's a high-level plan for investigating a KPI change (you can find more details in the article "Anomaly Root Cause Analysis 101"):
- Estimate the top-line change in the metric to understand the magnitude of the shift.
- Check data quality to make sure the numbers are accurate and reliable.
- Gather context about internal and external events that might have influenced the change.
- Slice and dice the metric to identify which segments are contributing to the shift.
- Consolidate your findings in an executive summary that includes hypotheses and estimates of their impacts on the main KPI.
Since we have a clear plan to execute, such tasks can potentially be automated using AI agents. The code agents we recently discussed could be a good fit here, as their ability to write and execute code helps them analyse data efficiently, with minimal back-and-forth. So, let's try building such an agent using the HuggingFace smolagents framework.
While working on this task, we will discuss more advanced features of the smolagents framework:
- Ways of tweaking all kinds of prompts to ensure the desired behaviour.
- Building a multi-agent system that can explain KPI changes and link them to root causes.
- Adding reflection to the flow with supplementary planning steps.
MVP for explaining KPI changes
As usual, we will take an iterative approach and start with a simple MVP, focusing on the slicing-and-dicing step of the analysis. We will analyse the changes of a simple metric (revenue) split by one dimension (country). We will use the dataset from my previous article, "Making sense of KPI changes".
Let's load the data first.
import pandas as pd

raw_df = pd.read_csv('absolute_metrics_example.csv', sep = '\t')
df = raw_df.groupby('country')[['revenue_before', 'revenue_after_scenario_2']].sum()\
    .sort_values('revenue_before', ascending = False).rename(
        columns = {'revenue_after_scenario_2': 'after', 
            'revenue_before': 'before'})
Next, let's initialise the model. I've chosen OpenAI's GPT-4o-mini as my preferred option for simple tasks. However, the smolagents framework supports all sorts of models, so you can use whichever model you prefer. Then, we just need to create an agent and give it the task and the dataset.
from smolagents import CodeAgent, LiteLLMModel

model = LiteLLMModel(model_id="openai/gpt-4o-mini",
    api_key=config['OPENAI_API_KEY'])

agent = CodeAgent(
    model=model, tools=[], max_steps=10,
    additional_authorized_imports=["pandas", "numpy", "matplotlib.*",
        "plotly.*"], verbosity_level=1
)
process = """
Here's a dataframe exhibiting income by section, evaluating values
earlier than and after.
Might you please assist me perceive the adjustments? Particularly:
1. Estimate how the entire income and the income for every section
have modified, each in absolute phrases and as a share.
2. Calculate the contribution of every section to the entire
change in income.
Please spherical all floating-point numbers within the output
to 2 decimal locations.
"""
agent.run(
process,
additional_args={"knowledge": df},
)
The agent returned quite a plausible result. We got detailed statistics on the metric changes in each segment and their impact on the top-line KPI.
{'total_before': 1731985.21, 'total_after': 1599065.55, 'total_change': -132919.66,
 'segment_changes': {
    'absolute_change': {'other': 4233.09, 'UK': -4376.25, 'France': -132847.57,
                        'Germany': -690.99, 'Italy': 979.15, 'Spain': -217.09},
    'percentage_change': {'other': 0.67, 'UK': -0.91, 'France': -55.19,
                          'Germany': -0.43, 'Italy': 0.81, 'Spain': -0.23},
    'contribution_to_change': {'other': -3.18, 'UK': 3.29, 'France': 99.95,
                               'Germany': 0.52, 'Italy': -0.74, 'Spain': 0.16}}}
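These figures are easy to sanity-check by hand. A quick pure-Python verification of the contribution numbers, using the per-segment absolute changes from the agent's output:

```python
# Absolute changes per segment, taken from the agent's output above
abs_change = {
    'other': 4233.09, 'UK': -4376.25, 'France': -132847.57,
    'Germany': -690.99, 'Italy': 979.15, 'Spain': -217.09,
}

# The total change is the sum of the per-segment changes
total_change = round(sum(abs_change.values()), 2)
print(total_change)  # -132919.66, matching the agent's total_change

# Each segment's contribution to the total change, in percent
contribution = {k: round(v / total_change * 100, 2) for k, v in abs_change.items()}
print(contribution['France'])  # 99.95 - France explains almost all of the drop
```

This confirms the agent's arithmetic: France alone accounts for 99.95% of the decline.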
Let's take a look at the code generated by the agent. It's fine, but there's one potential issue: the LLM recreated the dataframe from the input data instead of referencing the variable directly. This approach is not ideal (especially when working with big datasets), as it can lead to errors and higher token usage.
import pandas as pd

# Creating the DataFrame from the provided data
data = {
    'before': [632767.39, 481409.27, 240704.63, 160469.75,
        120352.31, 96281.86],
    'after': [637000.48, 477033.02, 107857.06, 159778.76,
        121331.46, 96064.77]
}
index = ['other', 'UK', 'France', 'Germany', 'Italy', 'Spain']
df = pd.DataFrame(data, index=index)

# Calculating total revenue before and after
total_before = df['before'].sum()
total_after = df['after'].sum()

# Calculating absolute and percentage change for each segment
df['absolute_change'] = df['after'] - df['before']
df['percentage_change'] = (df['absolute_change'] / df['before']) * 100

# Calculating total revenue change
total_change = total_after - total_before

# Calculating contribution of each segment to the total change
df['contribution_to_change'] = (df['absolute_change'] / total_change) * 100

# Rounding results
df = df.round(2)

# Printing the calculated results
print("Total revenue before:", total_before)
print("Total revenue after:", total_after)
print("Total change in revenue:", total_change)
print(df)
It's worth fixing this problem before moving on to building a more complex system.
Tweaking prompts
Since the LLM is simply following the instructions given to it, we will address this issue by tweaking the prompt.
First, I tried to make the task prompt more explicit, clearly instructing the LLM to use the provided variable.
task = """Here is a dataframe showing revenue by segment, comparing 
values before and after. The data is stored in the df variable. 
Please, use it and don't try to parse the data yourself. 

Could you please help me understand the changes? 
Specifically:
1. Estimate how the total revenue and the revenue for each segment 
have changed, both in absolute terms and as a percentage.
2. Calculate the contribution of each segment to the total change in revenue.

Please round all floating-point numbers in the output to two decimal places.
"""
It didn't work. So, the next step is to examine the system prompt and understand why the agent behaves this way.
print(agent.prompt_templates['system_prompt'])
#...
# Here are the rules you should always follow to solve your task:
# 1. Always provide a 'Thought:' sequence, and a 'Code:\n```py' sequence ending with '```<end_code>' sequence, else you will fail.
# 2. Use only variables that you have defined!
# 3. Always use the right arguments for the tools. DO NOT pass the arguments as a dict as in 'answer = wiki({'query': "What is the place where James Bond lives?"})', but use the arguments directly as in 'answer = wiki(query="What is the place where James Bond lives?")'.
# 4. Take care to not chain too many sequential tool calls in the same code block, especially when the output format is unpredictable. For instance, a call to search has an unpredictable return format, so do not have another tool call that depends on its output in the same block: rather output results with print() to use them in the next block.
# 5. Call a tool only when needed, and never re-do a tool call that you previously did with the exact same parameters.
# 6. Don't name any new variable with the same name as a tool: for instance don't name a variable 'final_answer'.
# 7. Never create any notional variables in our code, as having these in your logs will derail you from the true variables.
# 8. You can use imports in your code, but only from the following list of modules: ['collections', 'datetime', 'itertools', 'math', 'numpy', 'pandas', 'queue', 'random', 're', 'stat', 'statistics', 'time', 'unicodedata']
# 9. The state persists between code executions: so if in one step you've created variables or imported modules, these will all persist.
# 10. Don't give up! You're in charge of solving the task, not providing directions to solve it.

# Now Begin!
At the end of the prompt, we have the instruction "2. Use only variables that you have defined!". This can be interpreted as a strict rule not to use any other variables. So, I changed it to "2. Use only variables that you have defined or ones provided in additional arguments! Never try to copy and parse additional arguments."
modified_system_prompt = agent.prompt_templates['system_prompt']\
    .replace(
        '2. Use only variables that you have defined!',
        '2. Use only variables that you have defined or ones provided in additional arguments! Never try to copy and parse additional arguments.'
    )

agent.prompt_templates['system_prompt'] = modified_system_prompt
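One caveat with this pattern: `str.replace` silently does nothing when the snippet isn't found, and prompt templates do change between smolagents versions. A small guard makes the substitution fail loudly instead; here's a minimal sketch of the pattern on a toy template (the template text is a stand-in):

```python
# Toy stand-in for agent.prompt_templates['system_prompt']
system_prompt = ("Here are the rules:\n"
                 "2. Use only variables that you have defined!\n"
                 "Now Begin!")

old_rule = "2. Use only variables that you have defined!"
new_rule = ("2. Use only variables that you have defined or ones provided "
            "in additional arguments! Never try to copy and parse additional arguments.")

# Fail loudly if the framework changed the wording of the rule
assert old_rule in system_prompt, "rule text not found - check your smolagents version"

modified = system_prompt.replace(old_rule, new_rule)
print(new_rule in modified)  # True
```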
This modification alone didn't help either. Then, I examined the task message.
╭─────────────────────────── New run ────────────────────────────╮
│                                                                │
│ Here is a pandas dataframe showing revenue by segment,         │
│ comparing values before and after.                             │
│ Could you please help me understand the changes?               │
│ Specifically:                                                  │
│ 1. Estimate how the total revenue and the revenue for each     │
│ segment have changed, both in absolute terms and as a          │
│ percentage.                                                    │
│ 2. Calculate the contribution of each segment to the total     │
│ change in revenue.                                             │
│                                                                │
│ Please round all floating-point numbers in the output to two   │
│ decimal places.                                                │
│                                                                │
│ You have been provided with these additional arguments, that   │
│ you can access using the keys as variables in your python      │
│ code:                                                          │
│ {'df':            before      after                            │
│ country                                                        │
│ other     632767.39  637000.48                                 │
│ UK        481409.27  477033.02                                 │
│ France    240704.63  107857.06                                 │
│ Germany   160469.75  159778.76                                 │
│ Italy     120352.31  121331.46                                 │
│ Spain      96281.86   96064.77}.                               │
│                                                                │
╰─ LiteLLMModel - openai/gpt-4o-mini ────────────────────────────╯
It includes an instruction related to the usage of additional arguments: "You have been provided with these additional arguments, that you can access using the keys as variables in your python code". We can try to make it more specific and clear. Unfortunately, this text is not exposed as a parameter, so I had to locate it in the source code. To find the path of a Python package, we can use the following code.
import smolagents
print(smolagents.__path__)
Then, I found the agents.py file and modified this line to include a more specific instruction.
self.task += f"""
You have been provided with these additional arguments available as variables 
with names {",".join(additional_args.keys())}. You can access them directly. 
Here is what they contain (just for informational purposes): 
{str(additional_args)}."""
It was a bit of hacking, but that's sometimes what happens with LLM frameworks. Don't forget to reload the package afterwards, and we're good to go. Let's test whether it works now.
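If you're working in a notebook, the reload can be done with importlib; a sketch of the pattern, shown here on a stdlib module as a stand-in:

```python
import importlib
import json  # stand-in for smolagents; reload works the same way

# importlib.reload re-executes the module's code from the (edited) source file
reloaded = importlib.reload(json)
print(reloaded is json)  # True - the same module object, updated in place
```

Note that `reload` only re-executes the module you pass it, so after editing agents.py you may need to reload the specific submodule (or simply restart the kernel).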
task = """
Here is a pandas dataframe showing revenue by segment, comparing values 
before and after. 

Your task will be to understand the changes to the revenue (after vs before) 
in different segments and provide an executive summary.

Please follow these steps:
1. Estimate how the total revenue and the revenue for each segment 
have changed, both in absolute terms and as a percentage.
2. Calculate the contribution of each segment to the total change 
in revenue.

Round all floating-point numbers in the output to two decimal places.
"""

agent.logger.level = 1 # Lower the verbosity level

agent.run(
    task,
    additional_args={"df": df},
)
Hooray! The problem has been fixed. The agent no longer copies the input data and references the df variable directly instead. Here's the newly generated code.
import pandas as pd

# Calculate total revenue before and after
total_before = df['before'].sum()
total_after = df['after'].sum()
total_change = total_after - total_before
percentage_change_total = (total_change / total_before * 100) if total_before != 0 else 0

# Round values
total_before = round(total_before, 2)
total_after = round(total_after, 2)
total_change = round(total_change, 2)
percentage_change_total = round(percentage_change_total, 2)

# Display results
print(f"Total Revenue Before: {total_before}")
print(f"Total Revenue After: {total_after}")
print(f"Total Change: {total_change}")
print(f"Percentage Change: {percentage_change_total}%")
Now, we're ready to move on to building the actual agent that will solve our task.
AI agent for KPI narratives
Finally, it's time to work on the AI agent that will help us explain KPI changes and create an executive summary.
Our agent will follow this plan for the root cause analysis:
- Estimate the top-line KPI change.
- Slice and dice the metric to understand which segments are driving the shift.
- Look for events in the change log to see whether they can explain the metric changes.
- Consolidate all the findings in a comprehensive executive summary.
After lots of experimentation and several tweaks, I've arrived at a promising result. Here are the key adjustments I made (we will discuss them in detail later):
- I leveraged the multi-agent setup by adding another team member: the change log agent, which can access the change log and help explain KPI changes.
- I experimented with more powerful models, gpt-4o and gpt-4.1-mini, since gpt-4o-mini wasn't good enough. Using stronger models not only improved the results, but also significantly reduced the number of steps: with gpt-4.1-mini, I got the final result after just six steps, compared to 14-16 steps with gpt-4o-mini. This suggests that investing in more expensive models might be worthwhile for agentic workflows.
- I provided the agent with a complex tool to analyse KPI changes for simple metrics. The tool performs all the calculations, while the LLM can simply interpret the results. I discussed the approach to analysing KPI changes in detail in my previous article.
- I reformulated the prompt into a very clear step-by-step guide to help the agent stay on track.
- I added planning steps that encourage the LLM agent to think through its approach first and revisit the plan every three iterations.
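The calculate_simple_growth_metrics tool itself comes from the earlier article and isn't reproduced here. Purely as an illustration of the output fields described later in the task prompt (difference, difference_rate, impact, segment_share_before, impact_norm), here's a minimal pure-Python sketch of what such a tool might compute; the function name and exact formulas are my assumptions:

```python
def growth_metrics(before: dict, after: dict) -> dict:
    """Sketch of a KPI-change tool: per-segment change statistics.

    impact is the share of the total KPI change explained by the segment;
    impact_norm divides it by the segment's share of the 'before' total,
    so values far outside [-1.25, 1.25] flag outsized movers.
    """
    total_before = sum(before.values())
    total_change = sum(after.values()) - total_before
    result = {}
    for seg in before:
        diff = after[seg] - before[seg]
        share_before = before[seg] / total_before * 100
        impact = diff / total_change * 100
        result[seg] = {
            "difference": round(diff, 2),
            "difference_rate": round(diff / before[seg] * 100, 2),
            "impact": round(impact, 2),
            "segment_share_before": round(share_before, 2),
            "impact_norm": round(impact / share_before, 2),
        }
    return result

# Toy two-segment example using the revenue figures from the MVP dataset
metrics = growth_metrics(
    before={"France": 240704.63, "UK": 481409.27},
    after={"France": 107857.06, "UK": 477033.02},
)
print(metrics["France"]["difference_rate"])  # -55.19, matching the MVP output
print(metrics["France"]["impact_norm"])      # well above 1.25: an outsized mover
```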
After all these adjustments, I got the following summary from the agent, which is pretty good.
Executive Summary:

Between April 2025 and May 2025, total revenue declined sharply by 
approximately 36.03%, falling from 1,731,985.21 to 1,107,924.43, a 
drop of -624,060.78 in absolute terms. 
This decline was primarily driven by significant revenue 
reductions in the 'new' customer segments across multiple 
countries, with declines of approximately 70% in these segments. 

The most impacted segments include:
- other_new: before=233,958.42, after=72,666.89, 
abs_change=-161,291.53, rel_change=-68.94%, share_before=13.51%, 
impact=25.85, impact_norm=1.91
- UK_new: before=128,324.22, after=34,838.87, 
abs_change=-93,485.35, rel_change=-72.85%, share_before=7.41%, 
impact=14.98, impact_norm=2.02
- France_new: before=57,901.91, after=17,443.06, 
abs_change=-40,458.85, rel_change=-69.87%, share_before=3.34%, 
impact=6.48, impact_norm=1.94
- Germany_new: before=48,105.83, after=13,678.94, 
abs_change=-34,426.89, rel_change=-71.56%, share_before=2.78%, 
impact=5.52, impact_norm=1.99
- Italy_new: before=36,941.57, after=11,615.29, 
abs_change=-25,326.28, rel_change=-68.56%, share_before=2.13%, 
impact=4.06, impact_norm=1.91
- Spain_new: before=32,394.10, after=7,758.90, 
abs_change=-24,635.20, rel_change=-76.05%, share_before=1.87%, 
impact=3.95, impact_norm=2.11

Based on analysis from the change log, the main causes for this 
trend are:
1. The introduction of new onboarding controls implemented on May 
8, 2025, which reduced new customer acquisition by about 70% to 
prevent fraud.
2. A postal service strike in the UK starting April 5, 2025, 
causing order delivery delays and increased cancellations 
impacting the UK new segment.
3. An increase in VAT by 2% in Spain as of April 22, 2025, 
affecting new customer pricing and causing higher cart 
abandonment.

These factors combined explain the outsized negative impacts 
observed in new customer segments and the overall revenue decline.
The LLM agent also generated a bunch of illustrative charts (they were part of our growth-explaining tool). For example, this one shows the impacts across the combination of country and maturity.

The results look really exciting. Now let's dive deeper into the actual implementation to understand how it works under the hood.
Multi-AI agent setup
We will start with our change log agent. This agent will query the change log and try to identify potential root causes of the metric changes we observe. Since this agent doesn't need to perform complex operations, we can implement it as a ToolCallingAgent. Because this agent will be called by another agent, we need to define its name and description attributes.
@tool 
def get_change_log(month: str) -> str: 
    """
    Returns the change log (list of internal and external events that might have affected our KPIs) for the given month 

    Args:
        month: month in the format %Y-%m-01, for example, 2025-04-01
    """
    return events_df[events_df.month == month].drop('month', axis = 1).to_dict('records')

model = LiteLLMModel(model_id="openai/gpt-4.1-mini", api_key=config['OPENAI_API_KEY'])

change_log_agent = ToolCallingAgent(
    tools=[get_change_log],
    model=model,
    max_steps=10,
    name="change_log_agent",
    description="Helps you find the relevant information in the change log that can explain changes in metrics. Provide the agent with all the context to get the info",
)
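Note that get_change_log relies on an events_df dataframe assumed to be loaded earlier. A pure-Python illustration of the tool's behaviour, with toy events based on the change log entries seen in the summary (the field names are hypothetical):

```python
# Toy change log; the real events_df is a pandas dataframe loaded elsewhere
events = [
    {"month": "2025-05-01", "event": "New onboarding controls rolled out"},
    {"month": "2025-04-01", "event": "Postal service strike in the UK"},
]

def get_change_log(month: str) -> list:
    """Return all events for the given month, dropping the month field."""
    return [
        {k: v for k, v in e.items() if k != "month"}
        for e in events if e["month"] == month
    ]

print(get_change_log("2025-05-01"))  # [{'event': 'New onboarding controls rolled out'}]
```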
Since the manager agent will be calling this agent, we won't have any control over the query it receives. Therefore, I decided to modify the system prompt to include additional context.
change_log_system_prompt = '''
You're a master of the change log and you help others explain 
the changes in metrics. When you receive a request, look up the list of events 
that occurred in the given month, then filter the relevant information based 
on the provided context and return it. Prioritise the most probable factors 
affecting the KPI and limit your answer to them only.
'''

modified_system_prompt = change_log_agent.prompt_templates['system_prompt'] \
    + '\n\n\n' + change_log_system_prompt
change_log_agent.prompt_templates['system_prompt'] = modified_system_prompt
To enable the primary agent to delegate tasks to the change log agent, we simply need to specify it in the managed_agents field.
agent = CodeAgent(
    model=model,
    tools=[calculate_simple_growth_metrics],
    max_steps=20,
    additional_authorized_imports=["pandas", "numpy", "matplotlib.*", "plotly.*"],
    verbosity_level = 2,
    planning_interval = 3,
    managed_agents = [change_log_agent]
)
Let's see how it works. First, we can take a look at the new system prompt for the primary agent. It now includes information about team members and instructions on how to ask them for help.
You can also give tasks to team members.
Calling a team member works the same as calling a tool: simply, 
the only argument you can give in the call is 'task'.
Given that this team member is a real human, you should be very verbose 
in your task, it should be a long string providing informations 
as detailed as necessary.
Here is a list of the team members that you can call:
```python
def change_log_agent("Your query goes here.") -> str:
    """Helps you find the relevant information in the change log that 
    can explain changes in metrics. Provide the agent with all the context 
    to get the info"""
```
The execution log shows that the primary agent successfully delegated the task to the second agent and received the following response.
<-- Primary agent calling the change log agent -->

 ─ Executing parsed code: ─────────────────────────────────────── 
  # Query change_log_agent with the detailed task description prepared
  context_for_change_log = (
      "We analyzed changes in revenue from April 2025 to May 2025. "
      "We found large decreases mainly in the 'new' maturity segments "
      "across countries: Spain_new, UK_new, Germany_new, France_new, "
      "Italy_new, and other_new. "
      "The revenue fell by around 70% in these segments, which have "
      "an outsized negative impact on total revenue change. "
      "We want to know the 1-3 most probable causes for this "
      "significant drop in revenue in the 'new' customer segments "
      "during this period."
  )

  explanation = change_log_agent(task=context_for_change_log)
  print("Change log agent explanation:")
  print(explanation)
 ──────────────────────────────────────────────────────────────── 
<-- Change log agent execution start -->
╭──────────────────── New run - change_log_agent ─────────────────────╮
│                                                                     │
│ You're a helpful agent named 'change_log_agent'.                    │
│ You have been submitted this task by your manager.                  │
│ ---                                                                 │
│ Task:                                                               │
│ We analyzed changes in revenue from April 2025 to May 2025.         │
│ We found large decreases mainly in the 'new' maturity segments      │
│ across countries: Spain_new, UK_new, Germany_new, France_new,       │
│ Italy_new, and other_new. The revenue fell by around 70% in these   │
│ segments, which have an outsized negative impact on total revenue   │
│ change. We want to know the 1-3 most probable causes for this       │
│ significant drop in revenue in the 'new' customer segments during   │
│ this period.                                                        │
│ ---                                                                 │
│ You're helping your manager solve a wider task: so make sure to     │
│ not provide a one-line answer, but give as much information as      │
│ possible to give them a clear understanding of the answer.          │
│                                                                     │
│ Your final_answer WILL HAVE to contain these parts:                 │
│ ### 1. Task outcome (short version):                                │
│ ### 2. Task outcome (extremely detailed version):                   │
│ ### 3. Additional context (if relevant):                            │
│                                                                     │
│ Put all these in your final_answer tool, everything that you do     │
│ not pass as an argument to final_answer will be lost.               │
│ And even if your task resolution is not successful, please return   │
│ as much context as possible, so that your manager can act upon      │
│ this feedback.                                                      │
│                                                                     │
╰─ LiteLLMModel - openai/gpt-4.1-mini ────────────────────────────────╯
Using the smolagents framework, we can easily set up a simple multi-agent system, where a manager agent coordinates and delegates tasks to team members with specific skills.
Iterating on the prompt
We started with a very high-level prompt outlining the goal and a vague direction, but unfortunately, it didn't work consistently. LLMs are not yet smart enough to figure out the approach on their own. So, I created a detailed step-by-step prompt describing the whole plan and including the detailed specification of the growth narrative tool we're using.
task = """
Here is a pandas dataframe showing the revenue by segment, comparing values 
before (April 2025) and after (May 2025). 

You're a senior and experienced data analyst. Your task will be to understand 
the changes to the revenue (after vs before) in different segments 
and provide an executive summary.

## Follow the plan:
1. Start by identifying the list of dimensions (columns in the dataframe that 
are not "before" and "after").
2. There might be multiple dimensions in the dataframe. Start high-level 
by looking at each dimension in isolation, and combine all results 
together into the list of segments analysed (don't forget to save 
the dimension used for each segment). 
Use the provided tools to analyse the changes of metrics: {tools_description}. 
3. Analyse the results from the previous step and keep only segments 
that have an outsized impact on the KPI change (absolute value of impact_norm 
is above 1.25). 
4. Check what dimensions are present in the list of significant segments; 
if there are multiple ones, execute the tool on their combinations 
and add them to the analysed segments. If, after adding an additional dimension, 
all subsegments show close difference_rate and impact_norm values, 
then we can exclude this split (even though impact_norm is above 1.25), 
since it doesn't explain anything. 
5. Summarise the significant changes you identified. 
6. Try to explain what is going on with the metrics by getting info 
from the change_log_agent. Please provide the agent with the full context 
(what segments have an outsized impact, what the relative change is and 
what period we're looking at). 
Summarise the information from the changelog and mention 
only the 1-3 most probable causes of the KPI change 
(starting from the most impactful one).
7. Put together a 3-5 sentence high-level commentary on what happened 
and why (based on the information received from the change log). 
Then follow it up with a more detailed summary: 
- Top-line total value of the metric before and after in human-readable format, 
absolute and relative change 
- List of segments that meaningfully influenced the metric positively 
or negatively, with the following numbers: values before and after, 
absolute and relative change, share of segment before, impact 
and normed impact. Order the segments by the absolute value 
of the absolute change, as it represents the power of impact. 

## Instruction on the calculate_simple_growth_metrics tool:
By default, you should use the tool on the whole dataset, not a segment, 
since it will give you the full information about the changes.

Here is the guidance on how to interpret the output of the tool: 
- difference - the absolute difference between after and before values
- difference_rate - the relative difference (if it's close for 
all segments, then the dimension is not informative)
- impact - the share of the KPI difference explained by this segment 
- segment_share_before - share of the segment before
- impact_norm - impact normed on the share of the segment; we're 
interested in very high or very low numbers, since they show an outsized impact; 
rule of thumb: impact_norm between -1.25 and 1.25 is not informative 

If you're using the tool on a subset of the dataframe, keep in mind 
that the results won't be applicable to the full dataset, so avoid doing it 
unless you want to explicitly look at a subset (i.e. changes in France). 
If you decided to use the tool on a particular segment 
and share these results in the executive summary, explicitly outline 
that we're diving deeper into that particular segment.
""".format(tools_description = tools_description)

agent.run(
    task,
    additional_args={"df": df},
)
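The task prompt above interpolates a tools_description variable that is never defined in the article's snippets. Purely as an assumption about how it might be built, here's a sketch that derives it from each tool's name and description attributes (ToyTool is a stand-in for a real smolagents tool object):

```python
from dataclasses import dataclass

@dataclass
class ToyTool:
    # Stand-in for a smolagents tool, which exposes name and description
    name: str
    description: str

tools = [ToyTool(
    name="calculate_simple_growth_metrics",
    description="Calculates per-segment change statistics for a simple metric",
)]

# One bullet per tool, ready to interpolate into the task prompt
tools_description = "\n".join(f"- {t.name}: {t.description}" for t in tools)
print(tools_description)
```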
Explaining everything in such detail was quite a daunting task, but it's necessary if we want consistent results.
Planning steps
The smolagents framework lets you add planning steps to your agentic flow. This encourages the agent to start with a plan and update it after the specified number of steps. From my experience, this reflection is very helpful for maintaining focus on the problem and adjusting actions to stay aligned with the initial plan and goal. I definitely recommend using it in cases where complex reasoning is required.
Setting it up is as easy as specifying planning_interval = 3 for the code agent.
agent = CodeAgent(
    model=model,
    tools=[calculate_simple_growth_metrics],
    max_steps=20,
    additional_authorized_imports=["pandas", "numpy", "matplotlib.*", "plotly.*"],
    verbosity_level = 2,
    planning_interval = 3,
    managed_agents = [change_log_agent]
)
That's it. The agent then provides reflections, starting by thinking through the initial plan.
────────────────────────── Initial plan ──────────────────────────
Here are the facts I know and the plan of action that I will 
follow to solve the task:
```
## 1. Facts survey

### 1.1. Facts given in the task
- We have a pandas dataframe `df` showing revenue by segment, for 
two time points: before (April 2025) and after (May 2025).
- The dataframe columns include:
  - Dimensions: `country`, `maturity`, `country_maturity`, 
`country_maturity_combined`
  - Metrics: `before` (revenue in April 2025), `after` (revenue in 
May 2025)
- The task is to understand the changes in revenue (after vs 
before) across different segments.
- Key instructions and tools provided:
  - Identify all dimensions except before/after for segmentation.
  - Analyze each dimension independently using 
`calculate_simple_growth_metrics`.
  - Filter segments with outsized impact on KPI change (absolute 
normed impact > 1.25).
  - Examine combinations of dimensions if multiple dimensions have 
significant segments.
  - Summarize significant changes and engage `change_log_agent` 
for contextual causes.
  - Provide a final executive summary including top-line changes 
and segment-level detailed impacts.
- Dataset snippet shows segments combining countries (`France`, 
`UK`, `Germany`, `Italy`, `Spain`, `other`) and maturity status 
(`new`, `existing`).
- The combined segments are uniquely identified in the columns 
`country_maturity` and `country_maturity_combined`.

### 1.2. Facts to look up
- Definitions or descriptions of the segments if unclear (e.g., 
what defines `new` vs `existing` maturity).
  - Probably not necessary to proceed, but could be requested from 
business documentation or the change log.
- Additional details on the change log (accessible via 
`change_log_agent`) that could provide probable causes for revenue 
changes.
- Confirmation on handling combined dimension splits - how exactly 
`country_maturity_combined` is formed and should be interpreted in 
combined dimension analysis.
- Data dictionary or description of metrics if any additional KPI 
besides revenue is relevant (unlikely given the data).
- Dates confirm the period of analysis: April 2025 (before) and 
May 2025 (after). No need to look these up since given.
### 1.3. Facts to derive
- Identify all dimension columns available for segmentation:
- By excluding 'before' and 'after', likely candidates are
`country`, `maturity`, `country_maturity`, and
`country_maturity_combined`.
- For each dimension, calculate change metrics using the given
tool:
- Absolute and relative difference in revenue per segment.
- Impact, segment share before, and normed impact for each
segment.
- Identify which segments have an outsized impact on the KPI change
(|impact_norm| > 1.25).
- If multiple dimensions have significant segments, combine
dimensions (e.g., country + maturity) and reanalyze.
- Determine whether combined dimension splits provide meaningful
differentiation or not, based on delta rate and impact_norm
consistency.
- Summarize the direction and magnitude of KPI changes at the top-line
level (aggregate revenue before and after).
- Identify the top segments driving positive and negative changes,
ranked by the absolute value of absolute_change.
- Gather contextual insights from the change log agent regarding
likely causes tied to significant segments and the May 2025 vs
April 2025 period.
## 2. Plan
1. Identify all dimension columns present in the dataframe by
listing the columns and excluding 'before' and 'after'.
2. For each dimension identified (`country`, `maturity`,
`country_maturity`, `country_maturity_combined`):
- Use `calculate_simple_growth_metrics` on the full dataframe
grouped by that dimension.
- Extract segments with calculated metrics including
impact_norm.
3. Aggregate results from all single-dimension analyses and filter
segments where |impact_norm| > 1.25.
4. Determine which dimensions these significant segments belong
to.
5. If more than one dimension is represented in these significant
segments, analyze the combined dimension formed by these
dimensions (for example, the combination of `country` and `maturity`,
or use the existing combined dimension columns).
6. Repeat the metric calculation using
`calculate_simple_growth_metrics` on the combined dimension.
7. Examine whether the combined dimension splits create meaningful
differentiation: if all subsegments show close difference_rate
and impact_norm, exclude the split.
8. Prepare a summary of significant changes:
- Top-line KPIs before and after (absolute and relative
changes).
- List of impactful segments, sorted by absolute absolute_change,
that influenced overall revenue.
9. Provide the list of segments with details (values before,
after, absolute and relative change, share before, impact,
impact_norm).
10. Using this summarized information, query `change_log_agent`
with full context:
- Include significant segments, their relative changes, and
periods (April to May 2025).
11. Process the agent's response to identify 1-3 main likely
causes of the KPI changes.
12. Draft executive summary commentary:
- High-level overview of what happened and why, based on log
data.
- Detailed summary including top-line changes and
segment-level metrics impact.
13. Deliver the final answer using the `final_answer` tool, containing
the above executive summary and data-driven insights.
Then, after every three steps, the agent revisits and updates the plan.
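In smolagents, this periodic re-planning is controlled by a single constructor argument, `planning_interval`. Here is a minimal configuration sketch; the model id and the empty tool list are placeholders, not the exact setup used in this article:

```python
from smolagents import CodeAgent, LiteLLMModel

model = LiteLLMModel(model_id="openai/gpt-4o-mini")  # placeholder model id

agent = CodeAgent(
    tools=[],             # e.g. the calculate_simple_growth_metrics tool
    model=model,
    planning_interval=3,  # insert a planning/reflection step every 3 action steps
)
```

With `planning_interval=3`, the agent produces the initial facts survey and plan up front, then pauses after every three action steps to revisit both, exactly as the logs above show.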
────────────────────────── Updated plan ──────────────────────────
I still need to solve the task I was given:
```
Here is a pandas dataframe showing the revenue by segment,
comparing values before (April 2025) and after (May 2025).
You are a senior and experienced data analyst. Your task will be
to understand the changes in revenue (after vs before) across
different segments
and provide an executive summary.
<... repeating the full initial task ...>
```
Here are the facts I know and my new/updated plan of action to
solve the task:
```
## 1. Updated facts survey
### 1.1. Facts given in the task
- We have a pandas dataframe with revenue by segment, showing
values "before" (April 2025) and "after" (May 2025).
- Columns in the dataframe include several dimensions and the
"before" and "after" revenue values.
- The goal is to understand revenue changes by segment and provide
an executive summary.
- Guidance and rules on how to analyze and interpret results
from the `calculate_simple_growth_metrics` tool are provided.
- The dataframe contains the columns: country, maturity,
country_maturity, country_maturity_combined, before, after.
### 1.2. Facts that we have learned
- The dimensions to analyze are: country, maturity,
country_maturity, and country_maturity_combined.
- Analyzed revenue changes by dimension.
- Only the "new" maturity segment has a significant impact
(impact_norm=1.96 > 1.25), with a large negative revenue change (~
-70.6%).
- In the combined segment "country_maturity," the "new" segments
across countries (Spain_new, UK_new, Germany_new, France_new,
Italy_new, other_new) all have outsized negative impacts, with
impact_norm values all above 1.9.
- The mature/existing segments in these countries have smaller
normed impacts, below 1.25.
- Country-level and maturity-level segmentation alone is
less revealing than the combined country+maturity
dimension, which highlights the new segments as strongly impactful.
- Total revenue dropped significantly from before to after, mostly
driven by the new segments shrinking drastically.
### 1.3. Facts still to look up
- Whether splitting the data by additional dimensions beyond
country and maturity (e.g., country_maturity_combined) explains
further heterogeneous impacts, or whether the pattern is uniform.
- Explanation/context from the change log about what caused the major
drop, predominantly in new segments across all countries.
- Confirming whether any country within the new segment behaved
differently or mitigated losses.
### 1.4. Facts still to derive
- A concise executive summary describing the top-level revenue
change and identifying which segments explain the declines.
- An explanation involving the change log agent, with a summary of
likely causes for these outsized reductions in revenue in the
new segments across countries for April-May 2025.
## 2. Plan
### 2.1. Verify whether adding the additional dimension
'country_maturity_combined' splits the impactful "new" segments
into subsegments with significantly different impacts, or whether the
change rates and normed impacts are relatively homogeneous. If
homogeneous, we do not gain deeper insight and can disregard
further splitting.
### 2.2. Summarize all significant segments identified with
outsized impact_norm ≥ 1.25, including their before and after
values, absolute and relative changes, segment shares before,
impact, and normalized impact, ordered by the absolute value of the
change.
### 2.3. Query the change_log_agent with the full context:
the significant segments are the new country_maturity segments with
large negative changes (~ -70%), the timeframe is April 2025 to May 2025;
request the top 1-3 most likely causes for the KPI revenue drop
in these segments.
### 2.4. Based on the change log agent's response, synthesize a
3-5 sentence high-level commentary explaining what happened
broadly and why.
### 2.5. Draft a detailed executive summary including:
- Total revenue before and after in human-readable format, with
absolute and relative change.
- A list of significant segments driving these changes, in order
of absolute impact, with detailed numbers (before, after, absolute
and relative change, segment share before, impact, normed impact).
### 2.6. Use the `final_answer` tool to provide the finalized
executive summary report.
```
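Throughout these logs, the agent leans on a handful of segment metrics: difference, segment share before, impact, and normed impact. As a reference point, here is a small pure-Python sketch of how such metrics could be defined. These formulas are my assumption (impact as the segment's share of the total change, normalised by its revenue share before), not necessarily what `calculate_simple_growth_metrics` computes internally:

```python
def segment_growth_metrics(segments):
    """Assumed per-segment change metrics, keyed by segment name.

    segments: dict mapping segment name -> {"before": float, "after": float},
    with non-zero totals and before values.
    """
    total_before = sum(s["before"] for s in segments.values())
    total_change = sum(s["after"] - s["before"] for s in segments.values())
    metrics = {}
    for name, s in segments.items():
        change = s["after"] - s["before"]
        share_before = s["before"] / total_before  # segment's size before
        impact = change / total_change             # segment's share of total change
        metrics[name] = {
            "difference": change,
            "difference_rate": change / s["before"],  # relative change
            "share_before": share_before,
            "impact": impact,
            "impact_norm": impact / share_before,     # outsized if |...| > 1.25
        }
    return metrics

# Toy example: a small "new" segment drives most of the drop
data = {
    "new": {"before": 100.0, "after": 30.0},
    "existing": {"before": 900.0, "after": 870.0},
}
metrics = segment_growth_metrics(data)
# impact_norm for "new" is ~7 here, well above the 1.25 threshold
```

Under these definitions, a segment with |impact_norm| above 1.25 contributes disproportionately to the change relative to its size, which matches how the agent filters segments in the plan above.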
I really like how the agent is encouraged to revisit the initial task and stay focused on the main problem. Regular reflection like this is helpful in real life as well, as teams often get bogged down in the process and lose sight of the why behind what they're doing. It's quite cool to see managerial best practices being integrated into agentic frameworks.
That's it! We've built a code agent capable of analysing KPI changes for simple metrics and explored all the key nuances of the approach.
You can find the complete code and execution logs on GitHub.
Summary
We've experimented a lot with code agents and are now ready to draw conclusions. For our experiments, we used the HuggingFace smolagents framework for code agents, a very handy toolset that provides:
- easy integration with different LLMs (from local models via Ollama to public providers like Anthropic or OpenAI),
- great logging that makes it easy to understand the whole thought process of the agent and debug issues,
- the ability to build complex systems leveraging multi-agent setups or planning features without much effort.
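As an illustration of the first point, switching providers is essentially a one-line change through LiteLLM-style model ids. The ids below are placeholders, and API keys are assumed to be set in the environment:

```python
from smolagents import CodeAgent, LiteLLMModel

# A local model served by Ollama...
local_model = LiteLLMModel(model_id="ollama_chat/llama3.1")

# ...or a hosted provider such as Anthropic (needs ANTHROPIC_API_KEY set)
hosted_model = LiteLLMModel(model_id="anthropic/claude-3-5-sonnet-latest")

# Swapping providers only means passing a different model object
agent = CodeAgent(tools=[], model=local_model)
```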
While smolagents is currently my favourite agentic framework, it has its limitations:
- It can lack flexibility at times. For example, I had to modify the prompt directly in the source code to get the behaviour I wanted.
- It only supports a hierarchical multi-agent setup (where one manager can delegate tasks to other agents), but doesn't cover sequential workflows or consensual decision-making processes.
- There's no support for long-term memory out of the box, meaning you're starting from scratch with every task.
Thank you a lot for reading this article. I hope it was insightful for you.
Reference
This article is inspired by the "Building Code Agents with Hugging Face smolagents" short course by DeepLearning.AI.