Making Sense of KPI Changes | Towards Data Science

By ProfitlyAI · May 6, 2025 · 16 Mins Read
In data analytics, we’re often monitoring metrics. Very often, metrics change. And when they do, it’s our job to figure out what’s going on: why did the conversion rate suddenly drop, or what’s driving steady revenue growth?

I began my journey in data analytics as a KPI analyst. For almost three years, I did root cause analysis and KPI deep dives nearly full-time. Even after moving to product analytics, I still regularly investigate KPI shifts. You could say I’ve become quite the skilled analytics detective.

The cornerstone of root cause analysis is usually slicing and dicing the data. Most often, figuring out which segments are driving the change gives you a clue to the root causes. So, in this article, I want to share a framework for estimating how different segments contribute to changes in your key metric. We’ll put together a set of functions to slice and dice our data and identify the main drivers behind the metric’s changes.

However, in real life, before jumping into data crunching, it’s important to understand the context:

• Is the data complete, and can we compare recent periods to previous ones?
• Are there any long-term trends or known seasonal effects we’ve seen in the past?
• Have we launched anything recently, or are we aware of any external events affecting our metrics, such as a competitor’s marketing campaign or currency fluctuations?

I’ve discussed such nuances in more detail in my earlier article, “Root Cause Analysis 101”.

    KPI change framework

We encounter different metrics, and analysing their changes requires different approaches. Let’s start by defining the two types of metrics we’ll be working with:

• Simple metrics represent a single measure, for example, total revenue or the number of active users. Despite their simplicity, they’re often used in product analytics. One common example is the North Star metric. A good North Star metric estimates the total value received by customers. For example, Airbnb might use nights booked, and WhatsApp might track messages sent. Both are simple metrics.

You can learn more about North Star metrics from the Amplitude playbook.

• However, we can’t avoid using compound or ratio metrics, such as conversion rate or average revenue per user (ARPU). Such metrics help us track our product’s performance more precisely and isolate the impact of specific changes. For example, imagine your team is working on improving the registration page. They could potentially track the number of registered customers as their primary KPI, but it might be heavily affected by external factors (e.g., a marketing campaign driving more traffic). A better metric for this case would be the conversion rate from landing on the registration page to completing it.

We’ll use a fictional example to learn how to approach root cause analysis for different types of metrics. Imagine we’re working on an e-commerce product, and our team is focused on two main KPIs:

• total revenue (a simple metric),
• conversion to purchase — the ratio of users who made a purchase to the total number of users (a ratio metric).

We’ll use synthetic datasets to look at possible scenarios of metric changes. Now it’s time to move on and see what’s going on with the revenue.

Analysis: simple metrics

Let’s start simple and dig into the revenue changes. As usual, the first step is to load a dataset. Our data has two dimensions: country and maturity (whether a customer is new or existing). Additionally, we have three different scenarios to test our framework under various conditions.

import pandas as pd
df = pd.read_csv('absolute_metrics_example.csv', sep='\t')
df.head()
Image by author

The main goal of our analysis is to determine how each segment contributes to the change in our top-line metric. Let’s break it down. We’ll write a bunch of formulas. But don’t worry, it won’t require any knowledge beyond basic arithmetic.

First of all, it’s helpful to see how the metric changed in each segment, both in absolute and relative numbers.

\[
\text{difference}^{i} = \text{metric}^{i}_{\text{after}} - \text{metric}^{i}_{\text{before}}
\]
\[
\text{difference\_rate}^{i} = \frac{\text{difference}^{i}}{\text{metric}^{i}_{\text{before}}}
\]

The next step is to look at it holistically and see how each segment contributed to the overall change in the metric. We’ll calculate the impact as the share of the total difference.

\[
\text{impact}^{i} = \frac{\text{difference}^{i}}{\sum_{i}\text{difference}^{i}}
\]

That already gives us some valuable insights. However, to understand whether any segment is behaving unusually and requires special attention, it’s useful to compare the segment’s contribution to the metric change with its initial share of the metric.

Here’s the reasoning. If a segment makes up 90% of our metric, then it’s expected to contribute 85–95% of the change. But if a segment that accounts for only 10% ends up contributing 90% of the change, that’s definitely an anomaly.

To calculate it, we’ll simply normalise each segment’s contribution to the metric change by the initial segment size.

\[
\text{segment\_share}^{i}_{\text{before}} = \frac{\text{metric}^{i}_{\text{before}}}{\sum_{i}\text{metric}^{i}_{\text{before}}}
\]
\[
\text{impact\_normalised}^{i} = \frac{\text{impact}^{i}}{\text{segment\_share}^{i}_{\text{before}}}
\]
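To make the normalisation concrete, here is a tiny worked example (the segment names and numbers are invented for illustration): segment B makes up only 10% of the metric but accounts for 90% of the drop, so its normalised impact is 9, a clear anomaly.

```python
# Toy example: a small segment drives most of the metric drop.
metric_before = {'A': 900.0, 'B': 100.0}
metric_after = {'A': 890.0, 'B': 10.0}

total_before = sum(metric_before.values())
total_diff = sum(metric_after[s] - metric_before[s] for s in metric_before)

results = {}
for seg in metric_before:
    diff = metric_after[seg] - metric_before[seg]
    impact = diff / total_diff                         # share of the total change
    share_before = metric_before[seg] / total_before   # initial share of the metric
    results[seg] = {'impact': impact, 'share_before': share_before,
                    'impact_norm': impact / share_before}

# Segment B holds 10% of the metric but explains 90% of the change:
print(round(results['B']['impact_norm'], 2))  # → 9.0
```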

That’s it for the formulas. Now, let’s write the code and see this approach in practice. It will be easier to understand how it works through practical examples.

def calculate_simple_growth_metrics(stats_df):
  # Calculating overall stats
  before = stats_df.before.sum()
  after = stats_df.after.sum()
  print('Metric change: %.2f -> %.2f (%.2f%%)' % (before, after, 100*(after - before)/before))

  # Estimating the impact of each segment
  stats_df['difference'] = stats_df.after - stats_df.before
  stats_df['difference_rate'] = (100*stats_df['difference']/stats_df.before)\
    .map(lambda x: round(x, 2))
  stats_df['impact'] = (100*stats_df['difference'] / stats_df['difference'].sum())\
    .map(lambda x: round(x, 2))
  stats_df['segment_share_before'] = (100*stats_df.before / stats_df.before.sum())\
    .map(lambda x: round(x, 2))
  stats_df['impact_norm'] = (stats_df['impact']/stats_df.segment_share_before)\
    .map(lambda x: round(x, 2))

  # Creating visualisations
  create_parallel_coordinates_chart(stats_df.reset_index(), stats_df.index.name)
  create_share_vs_impact_chart(stats_df.reset_index(), stats_df.index.name, 'segment_share_before', 'impact')

  return stats_df.sort_values('impact_norm', ascending=False)

I believe that visualisations are an essential part of any data storytelling, as they help the audience grasp insights more quickly and intuitively. That’s why I’ve included a couple of charts in our function:

• A parallel coordinates chart to show how the metric changed in each slice — this visualisation will help us see the most significant drivers in absolute terms.
• A scatter plot to compare each segment’s impact on the KPI with the segment’s initial size. This chart helps spot anomalies — segments whose impact on the KPI is disproportionately large or small.

You can find the complete code for the visualisations on GitHub.

Now that we have all the tools in place to analyse revenue data, let’s see how our framework performs in different scenarios.

Scenario 1: Revenue dropped equally across all segments

Let’s start with the first scenario. The analysis is very straightforward — we just need to call the function defined above.

calculate_simple_growth_metrics(
  df.groupby('country')[['revenue_before', 'revenue_after_scenario_1']].sum()
    .sort_values('revenue_before', ascending=False).rename(
        columns={'revenue_after_scenario_1': 'after',
          'revenue_before': 'before'}
    )
)

In the output, we’ll get a table with detailed stats.

Image by author

However, in my opinion, visualisations are more informative. It’s obvious that revenue dropped by 30–40% in all countries, and there are no anomalies.

Image by author

Scenario 2: A few segments drove the change

Let’s look at another scenario by calling the same function.

calculate_simple_growth_metrics(
  df.groupby('country')[['revenue_before', 'revenue_after_scenario_2']].sum()
    .sort_values('revenue_before', ascending=False).rename(
        columns={'revenue_after_scenario_2': 'after',
          'revenue_before': 'before'}
    )
)
Image by author

We can see the biggest drop in both absolute and relative numbers in France. It’s definitely an anomaly, as it accounts for 99.9% of the total metric change. We can easily spot this in our visualisations.

Image by author

Also, it’s worth going back to the first example. We looked at the metric split by country and found no specific segments driving the change. But digging a little deeper might help us understand what’s going on. Let’s try adding another layer and look at country and maturity.

df['segment'] = df.country + ' - ' + df.maturity
calculate_simple_growth_metrics(
    df.groupby(['segment'])[['revenue_before', 'revenue_after_scenario_1']].sum()
        .sort_values('revenue_before', ascending=False).rename(
            columns={'revenue_after_scenario_1': 'after', 'revenue_before': 'before'}
        )
)

Now, we can see that the change is mostly driven by new users across all countries. These charts clearly highlight issues with the new customer experience and give you a clear direction for further investigation.

Image by author

Scenario 3: Volume shifting between segments

Finally, let’s explore the last scenario for revenue.

calculate_simple_growth_metrics(
    df.groupby(['segment'])[['revenue_before', 'revenue_after_scenario_3']].sum()
        .sort_values('revenue_before', ascending=False).rename(
            columns={'revenue_after_scenario_3': 'after', 'revenue_before': 'before'}
        )
)
Image by author

We can clearly see that France is the biggest anomaly — revenue in France has dropped, and this change is correlated with the top-line revenue drop. However, there is another notable segment — Spain. In Spain, revenue has increased significantly.

This pattern raises a suspicion that some of the revenue from France might have shifted to Spain. However, we still see a decline in the top-line metric, so it’s worth investigating further. In practice, this situation could be caused by data issues, logging errors, or service unavailability in some regions (so customers have to use VPNs and appear under a different country in our logs).

Image by author

We’ve looked at a bunch of different examples, and our framework helped us find the main drivers of change. I hope it’s now clear how to conduct root cause analysis with simple metrics, and we’re ready to move on to ratio metrics.

Analysis: ratio metrics

Product metrics are often ratios, like average revenue per customer or conversion. Let’s see how we can break down changes in this type of metric. In our case, we’ll look at conversion.

There are two types of effects to consider when analysing ratio metrics:

• Change within a segment: for example, if customer conversion in France drops, the overall conversion will also drop.
• Change in the mix: for example, if the share of new customers increases, and new users typically convert at a lower rate, this shift in the mix will also lead to a drop in the overall conversion rate.
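A quick numeric illustration of the second effect (all rates and shares below are invented): conversion within each maturity segment stays exactly the same, yet the overall rate falls purely because the mix shifts towards lower-converting new users.

```python
# Per-segment conversion rates are held constant; only the mix changes.
rates = {'existing': 0.40, 'new': 0.10}

mix_before = {'existing': 0.8, 'new': 0.2}   # shares of total users
mix_after = {'existing': 0.5, 'new': 0.5}

conv_before = sum(mix_before[s] * rates[s] for s in rates)
conv_after = sum(mix_after[s] * rates[s] for s in rates)

print(round(conv_before, 2), round(conv_after, 2))  # → 0.34 0.25
```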

To understand what’s going on, we need to be able to distinguish these effects. Once again, we’ll write a bunch of formulas to break down and quantify each type of impact.

Let’s start by defining some helpful variables.

\[
c^{i}_{\text{before}},\; c^{i}_{\text{after}} \;\text{– converted users in segment } i
\]
\[
C^{\text{total}}_{\text{before}} = \sum_{i} c^{i}_{\text{before}}, \qquad
C^{\text{total}}_{\text{after}} = \sum_{i} c^{i}_{\text{after}}
\]
\[
t^{i}_{\text{before}},\; t^{i}_{\text{after}} \;\text{– total users in segment } i
\]
\[
T^{\text{total}}_{\text{before}} = \sum_{i} t^{i}_{\text{before}}, \qquad
T^{\text{total}}_{\text{after}} = \sum_{i} t^{i}_{\text{after}}
\]

Next, let’s talk about the impact of the change in mix. To isolate this effect, we’ll estimate how the overall conversion rate would change if conversion rates within all segments remained constant, and the absolute numbers of both converted and total users in all other segments stayed fixed. The only variables we’ll change are the total and converted numbers of users in segment i. We’ll adjust them to reflect its new share in the overall population.

Let’s start by calculating how the total number of users in our segment needs to change to match the target segment share.

\[
\frac{t^{i}_{\text{after}}}{T^{\text{total}}_{\text{after}}} = \frac{t^{i}_{\text{before}} + \delta t^{i}}{T^{\text{total}}_{\text{before}} + \delta t^{i}}
\quad\Longrightarrow\quad
\delta t^{i} = \frac{T^{\text{total}}_{\text{before}} \cdot t^{i}_{\text{after}} - T^{\text{total}}_{\text{after}} \cdot t^{i}_{\text{before}}}{T^{\text{total}}_{\text{after}} - t^{i}_{\text{after}}}
\]
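We can sanity-check the δt formula on made-up user counts: after adding δt users to segment i (holding every other segment fixed), its share of the adjusted total matches the observed share after the change.

```python
# Hypothetical counts: segment i and the overall totals, before and after.
t_before_i, T_before = 200.0, 1000.0
t_after_i, T_after = 400.0, 1100.0

dt = (T_before * t_after_i - T_after * t_before_i) / (T_after - t_after_i)
adjusted_share = (t_before_i + dt) / (T_before + dt)
target_share = t_after_i / T_after

print(round(adjusted_share, 4), round(target_share, 4))  # → 0.3636 0.3636
```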

Now, we can estimate the change-in-mix impact using the following formula.

\[
\text{change in mix impact} = \frac{C^{\text{total}}_{\text{before}} + \delta t^{i} \cdot \frac{c^{i}_{\text{before}}}{t^{i}_{\text{before}}}}{T^{\text{total}}_{\text{before}} + \delta t^{i}} - \frac{C^{\text{total}}_{\text{before}}}{T^{\text{total}}_{\text{before}}}
\]

The next step is to estimate the impact of the conversion rate change within segment i. To isolate this effect, we’ll keep the total number of customers and converted customers in all other segments fixed. We’ll only change the number of converted users in segment i to match the new conversion rate.

\[
\text{change within segment impact} = \frac{C^{\text{total}}_{\text{before}} + t^{i}_{\text{before}} \cdot \frac{c^{i}_{\text{after}}}{t^{i}_{\text{after}}} - c^{i}_{\text{before}}}{T^{\text{total}}_{\text{before}}} - \frac{C^{\text{total}}_{\text{before}}}{T^{\text{total}}_{\text{before}}} = \frac{t^{i}_{\text{before}} \cdot c^{i}_{\text{after}} - t^{i}_{\text{after}} \cdot c^{i}_{\text{before}}}{T^{\text{total}}_{\text{before}} \cdot t^{i}_{\text{after}}}
\]

We can’t simply sum the different types of effects because their relationship isn’t linear. That’s why we also need to estimate the combined impact for each segment. This combines the two formulas above, assuming that we match both the new conversion rate within segment i and its new segment share.

\[
\text{total segment change} = \frac{C^{\text{total}}_{\text{before}} - c^{i}_{\text{before}} + (t^{i}_{\text{before}} + \delta t^{i}) \cdot \frac{c^{i}_{\text{after}}}{t^{i}_{\text{after}}}}{T^{\text{total}}_{\text{before}} + \delta t^{i}} - \frac{C^{\text{total}}_{\text{before}}}{T^{\text{total}}_{\text{before}}}
\]

It’s worth noting that these effect estimations aren’t 100% accurate (i.e. we can’t sum them up directly). However, they’re precise enough to make decisions and identify the main drivers of the change.
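The non-additivity is easy to verify numerically. In the made-up two-segment example below, the mix and within-segment effects for segment B sum to a value close to, but not exactly equal to, the combined segment effect.

```python
# Two hypothetical segments: (converted c, total t) before and after.
c1 = {'A': 300.0, 'B': 50.0}; t1 = {'A': 600.0, 'B': 400.0}
c2 = {'A': 330.0, 'B': 40.0}; t2 = {'A': 700.0, 'B': 500.0}

C1, T1 = sum(c1.values()), sum(t1.values())
C2, T2 = sum(c2.values()), sum(t2.values())

seg = 'B'
dt = (T1 * t2[seg] - T2 * t1[seg]) / (T2 - t2[seg])

mix_effect = (C1 + dt * c1[seg] / t1[seg]) / (T1 + dt) - C1 / T1
within_effect = (t1[seg] * c2[seg] - t2[seg] * c1[seg]) / (T1 * t2[seg])
total_effect = (C1 - c1[seg] + (t1[seg] + dt) * c2[seg] / t2[seg]) / (T1 + dt) - C1 / T1

# The sum of the isolated effects is close to, but not equal to, the total.
print(mix_effect + within_effect, total_effect)
```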

The next step is to put everything into code. We’ll again leverage visualisations: the correlation and parallel coordinates charts that we’ve already used for simple metrics, along with a couple of waterfall charts to break down the impact by segment.

def calculate_conversion_effects(df, dimension, numerator_field1, denominator_field1,
                       numerator_field2, denominator_field2):
  cmp_df = df.groupby(dimension)[[numerator_field1, denominator_field1, numerator_field2, denominator_field2]].sum()
  cmp_df = cmp_df.rename(columns = {
      numerator_field1: 'c1',
      numerator_field2: 'c2',
      denominator_field1: 't1',
      denominator_field2: 't2'
  })

  cmp_df['conversion_before'] = cmp_df['c1']/cmp_df['t1']
  cmp_df['conversion_after'] = cmp_df['c2']/cmp_df['t2']

  C1 = cmp_df['c1'].sum()
  T1 = cmp_df['t1'].sum()
  C2 = cmp_df['c2'].sum()
  T2 = cmp_df['t2'].sum()

  print('conversion before = %.2f' % (100*C1/T1))
  print('conversion after = %.2f' % (100*C2/T2))
  print('total conversion change = %.2f' % (100*(C2/T2 - C1/T1)))

  cmp_df['dt'] = (T1*cmp_df.t2 - T2*cmp_df.t1)/(T2 - cmp_df.t2)
  cmp_df['total_effect'] = (C1 - cmp_df.c1 + (cmp_df.t1 + cmp_df.dt)*cmp_df.conversion_after)/(T1 + cmp_df.dt) - C1/T1
  cmp_df['mix_change_effect'] = (C1 + cmp_df.dt*cmp_df.conversion_before)/(T1 + cmp_df.dt) - C1/T1
  cmp_df['conversion_change_effect'] = (cmp_df.t1*cmp_df.c2 - cmp_df.t2*cmp_df.c1)/(T1 * cmp_df.t2)

  for col in ['total_effect', 'mix_change_effect', 'conversion_change_effect', 'conversion_before', 'conversion_after']:
      cmp_df[col] = 100*cmp_df[col]

  cmp_df['conversion_diff'] = cmp_df.conversion_after - cmp_df.conversion_before
  cmp_df['before_segment_share'] = 100*cmp_df.t1/T1
  cmp_df['after_segment_share'] = 100*cmp_df.t2/T2
  for p in ['before_segment_share', 'after_segment_share', 'conversion_before', 'conversion_after', 'conversion_diff',
                   'total_effect', 'mix_change_effect', 'conversion_change_effect']:
      cmp_df[p] = cmp_df[p].map(lambda x: round(x, 2))
  cmp_df['total_effect_share'] = 100*cmp_df.total_effect/(100*(C2/T2 - C1/T1))
  cmp_df['impact_norm'] = cmp_df.total_effect_share/cmp_df.before_segment_share

  # creating visualisations
  create_share_vs_impact_chart(cmp_df.reset_index(), dimension, 'before_segment_share', 'total_effect_share')
  # keeping impact_norm here, since the parallel coordinates charts below need it
  cmp_df = cmp_df[['t1', 't2', 'before_segment_share', 'after_segment_share', 'conversion_before', 'conversion_after', 'conversion_diff',
                   'total_effect', 'mix_change_effect', 'conversion_change_effect', 'total_effect_share', 'impact_norm']]

  plot_conversion_waterfall(
      100*C1/T1, 100*C2/T2, cmp_df[['total_effect']].rename(columns = {'total_effect': 'impact'})
  )

  # putting together effects split by change of mix and conversion change
  tmp = []
  for rec in cmp_df.reset_index().to_dict('records'):
    tmp.append(
      {
          'segment': rec[dimension] + ' - change of mix',
          'impact': rec['mix_change_effect']
      }
    )
    tmp.append(
      {
        'segment': rec[dimension] + ' - conversion change',
        'impact': rec['conversion_change_effect']
      }
    )
  effects_det_df = pd.DataFrame(tmp)
  effects_det_df['effect_abs'] = effects_det_df.impact.map(lambda x: abs(x))
  effects_det_df = effects_det_df.sort_values('effect_abs', ascending = False)
  top_effects_det_df = effects_det_df.head(5).drop('effect_abs', axis = 1)
  plot_conversion_waterfall(
    100*C1/T1, 100*C2/T2, top_effects_det_df.set_index('segment'),
    add_other = True
  )

  create_parallel_coordinates_chart(cmp_df.reset_index(), dimension, before_field='before_segment_share',
    after_field='after_segment_share', impact_norm_field = 'impact_norm',
    metric_name = 'share of segment', show_mean = False)
  create_parallel_coordinates_chart(cmp_df.reset_index(), dimension, before_field='conversion_before',
    after_field='conversion_after', impact_norm_field = 'impact_norm',
    metric_name = 'conversion', show_mean = False)

  return cmp_df.rename(columns = {'t1': 'total_before', 't2': 'total_after'})

With that, we’re done with the theory and ready to apply this framework in practice. We’ll load another dataset that includes a couple of scenarios.

conv_df = pd.read_csv('conversion_metrics_example.csv', sep='\t')
conv_df.head()
Image by author

Scenario 1: Uniform conversion uplift

We’ll again just call the function above and analyse the results.

calculate_conversion_effects(
    conv_df, 'country', 'converted_users_before', 'users_before',
    'converted_users_after_scenario_1', 'users_after_scenario_1',
)

The first scenario is pretty straightforward: conversion has increased in all countries by 4–7 percentage points, resulting in a top-line conversion increase as well.

Image by author

We can see that there are no anomalies in the segments: the impact is correlated with the segment share, and conversion has increased uniformly across all countries.

Image by author
Image by author

We can look at the waterfall charts to see the change split by country and type of effect. Even though the effect estimations aren’t additive, we can still use them to compare the impacts of different slices.

Image by author

The suggested framework has been quite helpful. We were able to quickly figure out what’s going on with the metrics.

Scenario 2: Simpson’s paradox

Let’s take a look at a slightly trickier case.

calculate_conversion_effects(
    conv_df, 'country', 'converted_users_before', 'users_before',
    'converted_users_after_scenario_2', 'users_after_scenario_2',
)
Image by author

The story is more complicated here:

• The share of UK users has increased while conversion in this segment has dropped significantly, from 74.9% to 34.8%.
• In all other countries, conversion has increased by 8–11 percentage points.
Image by author

Unsurprisingly, the conversion change in the UK is the biggest driver of the top-line metric decline.

Image by author

Here we can see an example of non-linearity: 10% of the effects aren’t explained by the current split. Let’s dig one level deeper and add a maturity dimension. This reveals the true story:

• Conversion has actually increased uniformly by around 10 percentage points in all segments, yet the top-line metric has still dropped.
• The main reason is the increase in the share of new users in the UK, as these customers have a significantly lower conversion rate than average.
Image by author

Here is the split of effects by segment.

Image by author

This counterintuitive effect is called Simpson’s paradox. A classic example of Simpson’s paradox comes from a 1973 study on graduate school admissions at Berkeley. At first, it looked like men had a higher chance of getting in than women. However, when they looked at the departments people were applying to, it turned out women were applying to more competitive departments with lower admission rates, while men tended to apply to less competitive ones. When they added department as a confounder, the data actually showed a small but significant bias in favour of women.
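A small synthetic example (numbers invented for illustration) makes the mechanics explicit: conversion rises within every segment, yet the overall rate falls because the low-converting segment triples in size.

```python
# (converted, total) per segment; all numbers are made up for illustration.
before = {'existing': (80, 100), 'new': (10, 100)}
after = {'existing': (85, 100), 'new': (45, 300)}  # new users tripled

# Conversion improves within every segment...
for seg in before:
    assert after[seg][0] / after[seg][1] > before[seg][0] / before[seg][1]

# ...but the overall conversion rate still drops because of the mix shift.
overall_before = sum(c for c, _ in before.values()) / sum(t for _, t in before.values())
overall_after = sum(c for c, _ in after.values()) / sum(t for _, t in after.values())
print(round(overall_before, 3), round(overall_after, 3))  # → 0.45 0.325
```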

As always, visualisation can give you a bit of intuition on how this paradox works.

    source | licence CC BY-SA 4.0

That’s it. We’ve learned how to break down changes in ratio metrics.

You can find the complete code and data on GitHub.

Summary

It’s been a long journey, so let’s quickly recap what we’ve covered in this article:

• We’ve identified two major types of metrics: simple metrics (like revenue or number of users) and ratio metrics (like conversion rate or ARPU).
• For each metric type, we’ve learned how to break down the changes and identify the main drivers. We’ve put together a set of functions that can help you find the answers with just a couple of function calls.

With this practical framework, you’re now fully equipped to conduct root cause analysis for any metric. However, there is still room for improvement in our solution. In my next article, I’ll explore how to build an LLM agent that will do the whole analysis and summary for us. Stay tuned!

Thank you very much for reading this article. I hope it was insightful for you.


