
    Beyond ROC-AUC and KS: The Gini Coefficient, Explained Simply

    By ProfitlyAI | September 30, 2025


    We discussed classification metrics like ROC-AUC and the Kolmogorov-Smirnov (KS) Statistic in earlier blogs.

    In this blog, we'll explore another important classification metric called the Gini Coefficient.


    Why do we have multiple classification metrics?

    Each classification metric tells us about model performance from a different angle. We know that ROC-AUC gives us the overall ranking ability of a model, while the KS Statistic shows us where the maximum gap between the two groups occurs.

    The Gini Coefficient, in turn, tells us how much better our model is than random guessing at ranking the positives higher than the negatives.


    First, let’s see how the Gini Coefficient is calculated.

    For this, we again use the German Credit dataset.

    Let's use the same sample data that we used to understand the calculation of the Kolmogorov-Smirnov (KS) Statistic.

    Image by Author (sample data table)

    This sample data was obtained by applying logistic regression to the German Credit dataset.

    Since the model outputs probabilities, we selected a sample of 10 points from these probabilities to demonstrate the calculation of the Gini coefficient.

    Calculation

    Step 1: Sort the data by predicted probabilities.

    The sample data is already sorted in descending order by predicted probability.

    Step 2: Compute Cumulative Population and Cumulative Positives.

    Cumulative Population: the cumulative number of records considered up to that row.

    Cumulative Population (%): the proportion of the total population covered so far.

    Cumulative Positives: how many actual positives (class 2) we have seen up to that point.

    Cumulative Positives (%): the proportion of positives captured so far.

    Image by Author (sample data with the cumulative columns added)
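
    As a rough sketch of Step 2 in code, the same columns can be computed with pandas. The sample values below are the ones used in the scikit-learn snippet later in this blog:

    import pandas as pd

    # Sample data: predicted probabilities for class 2 (already sorted descending)
    # alongside the actual classes (2 = positive, 1 = negative)
    df = pd.DataFrame({
        "Actual": [2, 2, 2, 1, 2, 1, 1, 1, 1, 1],
        "Pred_Prob_Class2": [0.92, 0.63, 0.51, 0.39, 0.29, 0.20, 0.13, 0.10, 0.05, 0.01]
    })

    n = len(df)
    n_pos = (df["Actual"] == 2).sum()

    df["Cum_Population"] = list(range(1, n + 1))         # records so far
    df["Cum_Population_%"] = df["Cum_Population"] / n    # share of population covered
    df["Cum_Positives"] = (df["Actual"] == 2).cumsum()   # positives seen so far
    df["Cum_Positives_%"] = df["Cum_Positives"] / n_pos  # share of positives captured

    print(df)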

    Step 3: Plot the X and Y values.

    X = Cumulative Population (%)

    Y = Cumulative Positives (%)

    Here, let's use Python to plot these X and Y values.

    Code:

    import matplotlib.pyplot as plt

    X = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
    Y = [0.0, 0.25, 0.50, 0.75, 0.75, 1.00, 1.00, 1.00, 1.00, 1.00, 1.00]

    # Plot curve
    plt.figure(figsize=(6, 6))
    plt.plot(X, Y, marker='o', color="cornflowerblue", label="Model Lorenz Curve")
    plt.plot([0, 1], [0, 1], linestyle="--", color="grey", label="Random Model (Diagonal)")
    plt.title("Lorenz Curve from Sample Data", fontsize=14)
    plt.xlabel("Cumulative Population % (X)", fontsize=12)
    plt.ylabel("Cumulative Positives % (Y)", fontsize=12)
    plt.legend()
    plt.grid(True)
    plt.show()

    Plot:

    Image by Author (Lorenz curve for the sample data)

    The curve we get when we plot Cumulative Population (%) against Cumulative Positives (%) is called the Lorenz curve.

    Step 4: Calculate the area under the Lorenz curve.

    When we discussed ROC-AUC, we found the area under the curve using the trapezoid formula.

    Each region between two points was treated as a trapezoid, its area was calculated, and then all the areas were added together to get the final value.

    The same method is applied here to calculate the area under the Lorenz curve.

    Area under the Lorenz curve

    Area of a trapezoid:

    $$
    \text{Area} = \frac{1}{2} \times (y_1 + y_2) \times (x_2 - x_1)
    $$

    From (0.0, 0.0) to (0.1, 0.25):
    \[
    A_1 = \frac{1}{2}(0 + 0.25)(0.1 - 0.0) = 0.0125
    \]

    From (0.1, 0.25) to (0.2, 0.50):
    \[
    A_2 = \frac{1}{2}(0.25 + 0.50)(0.2 - 0.1) = 0.0375
    \]

    From (0.2, 0.50) to (0.3, 0.75):
    \[
    A_3 = \frac{1}{2}(0.50 + 0.75)(0.3 - 0.2) = 0.0625
    \]

    From (0.3, 0.75) to (0.4, 0.75):
    \[
    A_4 = \frac{1}{2}(0.75 + 0.75)(0.4 - 0.3) = 0.075
    \]

    From (0.4, 0.75) to (0.5, 1.00):
    \[
    A_5 = \frac{1}{2}(0.75 + 1.00)(0.5 - 0.4) = 0.0875
    \]

    From (0.5, 1.00) to (0.6, 1.00):
    \[
    A_6 = \frac{1}{2}(1.00 + 1.00)(0.6 - 0.5) = 0.100
    \]

    From (0.6, 1.00) to (0.7, 1.00):
    \[
    A_7 = \frac{1}{2}(1.00 + 1.00)(0.7 - 0.6) = 0.100
    \]

    From (0.7, 1.00) to (0.8, 1.00):
    \[
    A_8 = \frac{1}{2}(1.00 + 1.00)(0.8 - 0.7) = 0.100
    \]

    From (0.8, 1.00) to (0.9, 1.00):
    \[
    A_9 = \frac{1}{2}(1.00 + 1.00)(0.9 - 0.8) = 0.100
    \]

    From (0.9, 1.00) to (1.0, 1.00):
    \[
    A_{10} = \frac{1}{2}(1.00 + 1.00)(1.0 - 0.9) = 0.100
    \]

    Total area under the Lorenz curve:
    \[
    A = 0.0125 + 0.0375 + 0.0625 + 0.075 + 0.0875 + 0.100 + 0.100 + 0.100 + 0.100 + 0.100 = 0.775
    \]

    We calculated the area under the Lorenz curve, which is 0.775.
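
    The trapezoid sum is easy to verify in Python. Here is a minimal check using the X and Y lists from the plotting code above; np.trapz applies the same trapezoidal rule:

    import numpy as np

    X = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
    Y = [0.0, 0.25, 0.50, 0.75, 0.75, 1.00, 1.00, 1.00, 1.00, 1.00, 1.00]

    # Sum the area of each trapezoid between consecutive points
    area = sum(0.5 * (Y[i] + Y[i + 1]) * (X[i + 1] - X[i]) for i in range(len(X) - 1))
    print(area)            # 0.775
    print(np.trapz(Y, X))  # same value from NumPy's trapezoidal rule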

    Here, we plotted Cumulative Population (%) against Cumulative Positives (%), and the area under this curve shows how quickly the positives (class 2) are captured as we move down the sorted list.

    In our sample dataset, we have 4 positives (class 2) and 6 negatives (class 1).

    For a perfect model, by the time we reach 40% of the population, it captures 100% of the positives.

    The curve looks like this for a perfect model.

    Image by Author (Lorenz curve for the perfect model)

    Area under the Lorenz curve for the perfect model:

    \[
    \begin{aligned}
    \text{Perfect Area} &= \text{Triangle from } (0,0) \text{ to } (0.4,1) \;+\; \text{Rectangle from } (0.4,1) \text{ to } (1,1) \\[6pt]
    &= \frac{1}{2} \times 0.4 \times 1 \;+\; 0.6 \times 1 \\[6pt]
    &= 0.2 + 0.6 \\[6pt]
    &= 0.8
    \end{aligned}
    \]

    We also have another way to calculate the area under the curve for the perfect model.

    \[
    \text{Let } \pi \text{ be the proportion of positives in the dataset.}
    \]

    \[
    \text{Perfect Area} = \frac{1}{2}\pi \cdot 1 + (1 - \pi) \cdot 1
    \]
    \[
    = \frac{\pi}{2} + (1 - \pi)
    \]
    \[
    = 1 - \frac{\pi}{2}
    \]

    For our dataset:

    Here, we have 4 positives out of 10 records, so π = 4/10 = 0.4.

    \[
    \text{Perfect Area} = 1 - \frac{0.4}{2} = 1 - 0.2 = 0.8
    \]

    We calculated the area under the Lorenz curve for our sample dataset, and also for a perfect model with the same number of positives and negatives.

    Now, if we go through the dataset without sorting, the positives are evenly spread out. This means the rate at which we accumulate positives is the same as the rate at which we move through the population.

    This is the random model, and it always gives an area under the curve of 0.5.
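
    This is easy to sanity-check with a small simulation (a sketch, not part of the original calculation): shuffle the labels many times, rebuild the curve each time, and the average area settles at 0.5:

    import numpy as np

    rng = np.random.default_rng(0)
    labels = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])  # 4 positives, 6 negatives
    x = np.arange(len(labels) + 1) / len(labels)

    areas = []
    for _ in range(10_000):
        rng.shuffle(labels)  # random ordering instead of sorting by probability
        y = np.concatenate(([0.0], np.cumsum(labels) / labels.sum()))
        areas.append(np.trapz(y, x))

    print(round(np.mean(areas), 3))  # ~0.5 on average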

    Image by Author (Lorenz curve for the random model, the diagonal)

    Step 5: Calculate the Gini Coefficient

    \[
    A_{\text{model}} = 0.775
    \]

    \[
    A_{\text{random}} = 0.5
    \]
    \[
    A_{\text{perfect}} = 0.8
    \]
    \[
    \text{Gini} = \frac{A_{\text{model}} - A_{\text{random}}}{A_{\text{perfect}} - A_{\text{random}}}
    \]
    \[
    = \frac{0.775 - 0.5}{0.8 - 0.5}
    \]
    \[
    = \frac{0.275}{0.3}
    \]
    \[
    \approx 0.92
    \]

    We got Gini = 0.92, which means almost all the positives are concentrated at the top of the sorted list. This shows that the model does a very good job of separating positives from negatives, coming close to perfect.
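
    Putting Steps 1 through 5 together, here is a minimal sketch of a helper that computes the Gini coefficient straight from the Lorenz curve. The name lorenz_gini is ours for illustration, not a library function:

    import numpy as np

    def lorenz_gini(y_true, y_score):
        """Gini = (A_model - A_random) / (A_perfect - A_random) via the Lorenz curve."""
        y_true = np.asarray(y_true)
        y_score = np.asarray(y_score)

        # Step 1: sort records by predicted probability, highest first
        y_sorted = y_true[np.argsort(y_score)[::-1]]

        # Step 2: cumulative population % and cumulative positives %, starting at (0, 0)
        n = len(y_sorted)
        x = np.arange(n + 1) / n
        y = np.concatenate(([0.0], np.cumsum(y_sorted) / y_sorted.sum()))

        # Steps 3-4: area under the Lorenz curve via the trapezoidal rule
        a_model = np.trapz(y, x)

        # Step 5: normalize against the random (0.5) and perfect (1 - pi/2) areas
        pi = y_sorted.mean()
        return (a_model - 0.5) / ((1 - pi / 2) - 0.5)

    y_true = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0]  # class 2 mapped to 1
    y_score = [0.92, 0.63, 0.51, 0.39, 0.29, 0.20, 0.13, 0.10, 0.05, 0.01]
    print(round(lorenz_gini(y_true, y_score), 2))  # 0.92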


    Now that we have seen how the Gini Coefficient is calculated, let's look at what we actually did during the calculation.

    We considered a sample of 10 points consisting of output probabilities from logistic regression.

    We sorted the probabilities in descending order.

    Next, we calculated Cumulative Population (%) and Cumulative Positives (%) and then plotted them.

    We got a curve called the Lorenz curve, and we calculated the area under it, which is 0.775.

    Now, what does 0.775 mean?

    Our sample consists of 4 positives (class 2) and 6 negatives (class 1).

    The output probabilities are for class 2, which means the higher the probability, the more likely the customer belongs to class 2.

    In our sample data, all the positives are captured within the first 50% of the population, which means the positives are ranked near the top.

    If the model were perfect, the positives would be captured within the first 4 rows, i.e., within the first 40% of the population, and the area under the curve for the perfect model would be 0.8.

    But we got an area of 0.775, which is nearly perfect.

    Here, we are trying to measure the efficiency of the model. If more positives are concentrated at the top, it means the model is good at separating positives and negatives.

    Next, we calculated the Gini Coefficient, which is 0.92.

    \[
    \text{Gini} = \frac{A_{\text{model}} - A_{\text{random}}}{A_{\text{perfect}} - A_{\text{random}}}
    \]

    The numerator tells us how much better our model is than random guessing.

    The denominator tells us the maximum possible improvement over random.

    The ratio puts these two together, so the Gini coefficient always falls between 0 (random) and 1 (perfect).

    Gini measures how close the model is to perfectly separating the positive and negative classes.

    But why did we calculate Gini at all? Why didn't we stop at 0.775?

    0.775 is the area under the Lorenz curve for our model. On its own, it doesn't tell us how close the model is to being perfect without comparing it to 0.8, the area for the perfect model.

    So, we calculate Gini to standardize the value so that it falls between 0 and 1, which makes it easy to compare models.


    Banks also use the Gini Coefficient to evaluate credit risk models alongside ROC-AUC and the KS Statistic. Together, these measures give a complete picture of model performance.


    Now, let’s calculate ROC-AUC for our pattern knowledge.

    import pandas as pd
    from sklearn.metrics import roc_auc_score

    # Sample data
    data = {
        "Actual": [2, 2, 2, 1, 2, 1, 1, 1, 1, 1],
        "Pred_Prob_Class2": [0.92, 0.63, 0.51, 0.39, 0.29, 0.20, 0.13, 0.10, 0.05, 0.01]
    }

    df = pd.DataFrame(data)

    # Convert Actual: class 2 -> 1 (positive), class 1 -> 0 (negative)
    y_true = (df["Actual"] == 2).astype(int)
    y_score = df["Pred_Prob_Class2"]

    # Calculate ROC-AUC
    roc_auc = roc_auc_score(y_true, y_score)
    roc_auc

    We got AUC = 0.9583.

    Now, Gini = (2 * AUC) - 1 = (2 * 0.9583) - 1 ≈ 0.92.

    This is the relationship between Gini and ROC-AUC.
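
    As a quick consistency check, both routes give the same number. This sketch assumes the roc_auc, y_true, and y_score variables from the snippet above are in scope, along with the lorenz_gini helper sketched earlier:

    # Gini via the AUC identity vs. Gini via the Lorenz curve
    gini_from_auc = 2 * roc_auc - 1
    gini_from_lorenz = lorenz_gini(y_true, y_score)
    print(round(gini_from_auc, 4), round(gini_from_lorenz, 4))  # 0.9167 0.9167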


    Now let’s calculate Gini Coefficient on a full dataset.

    Code:

    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    # Load dataset
    file_path = "C:/german.data"
    data = pd.read_csv(file_path, sep=" ", header=None)

    # Rename columns
    columns = [f"col_{i}" for i in range(1, 21)] + ["target"]
    data.columns = columns

    # Features and target
    X = pd.get_dummies(data.drop(columns=["target"]), drop_first=True)
    y = data["target"]

    # Convert target to binary (1 = class 2, 0 = class 1)
    y = (y == 2).astype(int)

    # Train-test split
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=42, stratify=y
    )

    # Train logistic regression
    model = LogisticRegression(max_iter=10000)
    model.fit(X_train, y_train)

    # Predicted probabilities
    y_pred_proba = model.predict_proba(X_test)[:, 1]

    # Calculate ROC-AUC
    auc = roc_auc_score(y_test, y_pred_proba)

    # Calculate Gini
    gini = 2 * auc - 1

    auc, gini

    We got Gini = 0.60.

    Interpretation:

    Gini > 0.5: acceptable.

    Gini = 0.6–0.7: good model.

    Gini = 0.8+: excellent, rarely achieved.


    Dataset

    The dataset used in this blog is the German Credit dataset, which is publicly available on the UCI Machine Learning Repository. It is provided under the Creative Commons Attribution 4.0 International (CC BY 4.0) License, which means it can be freely used and shared with proper attribution.


    I hope you found this blog helpful.

    If you enjoyed reading, consider sharing it with your network, and feel free to share your thoughts.

    If you haven't read my previous blogs on ROC-AUC and the Kolmogorov-Smirnov Statistic, you can check them out here.

    Thank you for reading!


