
    Think Your Python Code Is Slow? Stop Guessing and Start Measuring

    By ProfitlyAI · December 26, 2025 · 14 Mins Read


    I was working on a script the other day, and it was driving me nuts. It worked, sure, but it was just… slow. Really slow. I had that feeling that it could be so much faster if I could figure out where the hold-up was.

    My first thought was to start tweaking things. I could optimise the data loading. Or rewrite that for loop? But I stopped myself. I've fallen into that trap before, spending hours "optimising" a piece of code only to find it made barely any difference to the overall runtime. Donald Knuth had a point when he said, "Premature optimisation is the root of all evil."

    I decided to take a more methodical approach. Instead of guessing, I was going to find out for sure. I needed to profile the code to obtain hard data on exactly which functions were consuming the majority of the clock cycles.

    In this article, I'll walk you through the exact process I used. We'll take a deliberately slow Python script and use two fantastic tools to pinpoint its bottlenecks with surgical precision.

    The first of these tools is called cProfile, a powerful profiler built into Python. The other is called snakeviz, a neat tool that transforms the profiler's output into an interactive visual map.
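
    For standalone scripts outside a notebook, both tools can also be driven from the command line. Here's a minimal sketch, assuming the slow code lives in a file called run_all_systems.py (the filenames are just examples):

    # Profile the script and save the raw stats to a file
    python -m cProfile -o profile_results.prof run_all_systems.py

    # Open an interactive visualisation of that file in your browser
    snakeviz profile_results.prof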

    Setting up a development environment

    Before we start coding, let's set up our development environment. Best practice is to create a separate Python environment where you can install any necessary software and experiment, knowing that anything you do won't affect the rest of your system. I'll be using conda for this, but you can use any method you're familiar with.

    # create our test environment
    conda create -n profiling_lab python=3.11 -y
    
    # Now activate it
    conda activate profiling_lab

    Now that we have our environment set up, we need to install snakeviz for our visualisations and numpy for the example script. cProfile is already included with Python, so there's nothing more to do there. As we'll be running our scripts in a Jupyter Notebook, we'll also install that.

    # Install our visualisation tool, numpy and Jupyter
    pip install snakeviz numpy jupyter

    Now type jupyter notebook into your command prompt. You should see a Jupyter Notebook open in your browser. If that doesn't happen automatically, you'll likely see a screenful of information after the jupyter notebook command. Near the bottom of that, there will be a URL that you should copy and paste into your browser to launch the Jupyter Notebook.

    Your URL will be different from mine, but it should look something like this:

    http://127.0.0.1:8888/tree?token=3b9f7bd07b6966b41b68e2350721b2d0b6f388d248cc69da

    With our tools ready, it's time to look at the code we're going to fix.

    Our "Problem" Script

    To properly test our profiling tools, we need a script that exhibits clear performance issues. I've written a simple program that simulates problems with memory, iteration and CPU cycles, making it an ideal candidate for our investigation.

    # run_all_systems.py
    import time
    import math
    
    # ===================================================================
    CPU_ITERATIONS = 34552942
    STRING_ITERATIONS = 46658100
    LOOP_ITERATIONS = 171796964
    # ===================================================================
    
    # --- Task 1: A Calibrated CPU-Bound Bottleneck ---
    def cpu_heavy_task(iterations):
        print("  -> Running CPU-bound task...")
        result = 0
        for i in range(iterations):
            result += math.sin(i) * math.cos(i) + math.sqrt(i)
        return result
    
    # --- Task 2: A Calibrated Memory/String Bottleneck ---
    def memory_heavy_string_task(iterations):
        print("  -> Running Memory/String-bound task...")
        report = ""
        chunk = "report_item_abcdefg_123456789_"
        for i in range(iterations):
            report += f"|{chunk}{i}"
        return report
    
    # --- Task 3: A Calibrated "Thousand Cuts" Iteration Bottleneck ---
    def simulate_tiny_op(n):
        pass
    
    def iteration_heavy_task(iterations):
        print("  -> Running Iteration-bound task...")
        for i in range(iterations):
            simulate_tiny_op(i)
        return "OK"
    
    # --- Main Orchestrator ---
    def run_all_systems():
        print("--- Starting FINAL SLOW Balanced Showcase ---")
        
        cpu_result = cpu_heavy_task(iterations=CPU_ITERATIONS)
        string_result = memory_heavy_string_task(iterations=STRING_ITERATIONS)
        iteration_result = iteration_heavy_task(iterations=LOOP_ITERATIONS)
    
        print("--- FINAL SLOW Balanced Showcase Finished ---")

    Step 1: Collecting the Data with cProfile

    Our first tool, cProfile, is a deterministic profiler built into Python. We can run it from code to execute our script and record detailed statistics about every function call.

    import cProfile, pstats, io
    
    pr = cProfile.Profile()
    pr.enable()
    
    # Run the function you want to profile
    run_all_systems()
    
    pr.disable()
    
    # Dump stats to a string and print the top 10 by cumulative time
    s = io.StringIO()
    ps = pstats.Stats(pr, stream=s).sort_stats("cumtime")
    ps.print_stats(10)
    print(s.getvalue())

    Here is the output.

    --- Starting FINAL SLOW Balanced Showcase ---
      -> Running CPU-bound task...
      -> Running Memory/String-bound task...
      -> Running Iteration-bound task...
    --- FINAL SLOW Balanced Showcase Finished ---
             275455984 function calls in 30.497 seconds
    
       Ordered by: cumulative time
       List reduced from 47 to 10 due to restriction <10>
    
       ncalls  tottime  percall  cumtime  percall filename:lineno(function)
            2    0.000    0.000   30.520   15.260 /home/tom/.local/lib/python3.10/site-packages/IPython/core/interactiveshell.py:3541(run_code)
            2    0.000    0.000   30.520   15.260 {built-in method builtins.exec}
            1    0.000    0.000   30.497   30.497 /tmp/ipykernel_173802/1743829582.py:41(run_all_systems)
            1    9.652    9.652   14.394   14.394 /tmp/ipykernel_173802/1743829582.py:34(iteration_heavy_task)
            1    7.232    7.232   12.211   12.211 /tmp/ipykernel_173802/1743829582.py:14(cpu_heavy_task)
    171796964    4.742    0.000    4.742    0.000 /tmp/ipykernel_173802/1743829582.py:31(simulate_tiny_op)
            1    3.891    3.891    3.892    3.892 /tmp/ipykernel_173802/1743829582.py:22(memory_heavy_string_task)
     34552942    1.888    0.000    1.888    0.000 {built-in method math.sin}
     34552942    1.820    0.000    1.820    0.000 {built-in method math.cos}
     34552942    1.271    0.000    1.271    0.000 {built-in method math.sqrt}
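
    As a side note, if you want to keep the raw numbers for later analysis, the same Profile object can write them to disk with dump_stats (the filename here is just an example); snakeviz can open the resulting file directly from the command line, as shown earlier.

    # Optional: save the raw profile data for later inspection
    pr.dump_stats("run_all_systems.prof")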

    We have a bunch of numbers that can be tricky to interpret. This is where snakeviz comes into its own.

    Step 2: Visualising the bottleneck with snakeviz

    This is where the magic happens. Snakeviz takes the output of our profiling run and converts it into an interactive, browser-based chart, making it much easier to spot bottlenecks.

    So let's use that tool to visualise what we have. As I'm using a Jupyter Notebook, we need to load the extension first.

    %load_ext snakeviz

    And we run it like this.

    %%snakeviz
    run_all_systems()

    The output comes in two parts. First is a visualisation like this.

    Image by Author

    What you see is a top-down "icicle" chart. From top to bottom, it represents the call hierarchy.

    At the very top: Python is executing our script (<built-in method builtins.exec>).

    Next: the script's __main__ execution (<string>:1(<module>)). Then the function run_all_systems. Inside that, it calls two key functions: iteration_heavy_task and cpu_heavy_task.

    The memory-intensive processing part isn't labelled on the chart. That's because the proportion of time associated with this task is much smaller than the times apportioned to the other two intensive functions. Consequently, we see a much smaller, unlabelled block to the right of the cpu_heavy_task block.

    Note that, for analysis, there's also a Snakeviz chart style called a Sunburst chart. It looks a bit like a pie chart, except that it contains a set of increasingly large concentric circles and arcs. The idea is that the time a function takes to run is represented by the angular extent of its arc. The root function is a circle in the middle of the viz; it runs by calling the sub-functions below it, and so on. We won't be looking at that display type in this article.

    Visual confirmation like this can be much more impactful than staring at a table of numbers. I no longer had to guess where to look; the data was staring me right in the face.

    The visualisation is quickly followed by a block of text detailing the timings for various parts of your code, much like the output of the cProfile tool. I'm only showing the first dozen or so lines of this, as there were 30+ in total.

    ncalls tottime percall cumtime percall filename:lineno(function)
    ----------------------------------------------------------------
    1 9.581 9.581 14.3 14.3 1062495604.py:34(iteration_heavy_task)
    1 7.868 7.868 12.92 12.92 1062495604.py:14(cpu_heavy_task)
    171796964 4.717 2.745e-08 4.717 2.745e-08 1062495604.py:31(simulate_tiny_op)
    1 3.848 3.848 3.848 3.848 1062495604.py:22(memory_heavy_string_task)
    34552942 1.91 5.527e-08 1.91 5.527e-08 ~:0(<built-in method math.sin>)
    34552942 1.836 5.313e-08 1.836 5.313e-08 ~:0(<built-in method math.cos>)
    34552942 1.305 3.778e-08 1.305 3.778e-08 ~:0(<built-in method math.sqrt>)
    1 0.02127 0.02127 31.09 31.09 <string>:1(<module>)
    4 0.0001764 4.409e-05 0.0001764 4.409e-05 socket.py:626(send)
    10 0.000123 1.23e-05 0.0004568 4.568e-05 iostream.py:655(write)
    4 4.594e-05 1.148e-05 0.0002735 6.838e-05 iostream.py:259(schedule)
    ...
    ...
    ...

    Step 3: The Fix

    Of course, tools like cProfile and snakeviz don't tell you how to sort out your performance issues, but now that I knew exactly where the problems were, I could apply targeted fixes.

    # final_showcase_fixed_v2.py
    import time
    import math
    import numpy as np
    
    # ===================================================================
    CPU_ITERATIONS = 34552942
    STRING_ITERATIONS = 46658100
    LOOP_ITERATIONS = 171796964
    # ===================================================================
    
    # --- Fix 1: Vectorisation for the CPU-Bound Task ---
    def cpu_heavy_task_fixed(iterations):
        """
        Fixed by using NumPy to perform the complex math on a whole array
        at once, in highly optimised C code instead of a Python loop.
        """
        print("  -> Running CPU-bound task...")
        # Create an array of numbers from 0 to iterations-1
        i = np.arange(iterations, dtype=np.float64)
        # The same calculation, but vectorised, is orders of magnitude faster
        result_array = np.sin(i) * np.cos(i) + np.sqrt(i)
        return np.sum(result_array)
    
    # --- Fix 2: Efficient String Joining ---
    def memory_heavy_string_task_fixed(iterations):
        """
        Fixed by using a list comprehension and a single, efficient ''.join() call.
        This avoids creating millions of intermediate string objects.
        """
        print("  -> Running Memory/String-bound task...")
        chunk = "report_item_abcdefg_123456789_"
        # A list comprehension is fast and memory-efficient
        parts = [f"|{chunk}{i}" for i in range(iterations)]
        return "".join(parts)
    
    # --- Fix 3: Eliminating the "Thousand Cuts" Loop ---
    def iteration_heavy_task_fixed(iterations):
        """
        Fixed by recognising the task can be a no-op or a bulk operation.
        In a real-world scenario, you'd find a way to avoid the loop entirely.
        Here, we demonstrate the fix by simply removing the unnecessary loop.
        The goal is to show the cost of the loop itself was the problem.
        """
        print("  -> Running Iteration-bound task...")
        # The fix is to find a bulk operation or eliminate the need for the loop.
        # Since the original function did nothing, the fix is to do nothing, but faster.
        return "OK"
    
    # --- Main Orchestrator ---
    def run_all_systems():
        """
        The main orchestrator now calls the FAST versions of the tasks.
        """
        print("--- Starting FINAL FAST Balanced Showcase ---")
        
        cpu_result = cpu_heavy_task_fixed(iterations=CPU_ITERATIONS)
        string_result = memory_heavy_string_task_fixed(iterations=STRING_ITERATIONS)
        iteration_result = iteration_heavy_task_fixed(iterations=LOOP_ITERATIONS)
    
        print("--- FINAL FAST Balanced Showcase Finished ---")

    Now we can rerun cProfile on our updated code.

    import cProfile, pstats, io
    
    pr = cProfile.Profile()
    pr.enable()
    
    # Run the function you want to profile
    run_all_systems()
    
    pr.disable()
    
    # Dump stats to a string and print the top 10 by cumulative time
    s = io.StringIO()
    ps = pstats.Stats(pr, stream=s).sort_stats("cumtime")
    ps.print_stats(10)
    print(s.getvalue())
    
    #
    # start of output
    #
    
    --- Starting FINAL FAST Balanced Showcase ---
      -> Running CPU-bound task...
      -> Running Memory/String-bound task...
      -> Running Iteration-bound task...
    --- FINAL FAST Balanced Showcase Finished ---
             197 function calls in 6.063 seconds
    
       Ordered by: cumulative time
       List reduced from 52 to 10 due to restriction <10>
    
       ncalls  tottime  percall  cumtime  percall filename:lineno(function)
            2    0.000    0.000    6.063    3.031 /home/tom/.local/lib/python3.10/site-packages/IPython/core/interactiveshell.py:3541(run_code)
            2    0.000    0.000    6.063    3.031 {built-in method builtins.exec}
            1    0.002    0.002    6.063    6.063 /tmp/ipykernel_173802/1803406806.py:1(<module>)
            1    0.402    0.402    6.061    6.061 /tmp/ipykernel_173802/3782967348.py:52(run_all_systems)
            1    0.000    0.000    5.152    5.152 /tmp/ipykernel_173802/3782967348.py:27(memory_heavy_string_task_fixed)
            1    4.135    4.135    4.135    4.135 /tmp/ipykernel_173802/3782967348.py:35(<listcomp>)
            1    1.017    1.017    1.017    1.017 {method 'join' of 'str' objects}
            1    0.446    0.446    0.505    0.505 /tmp/ipykernel_173802/3782967348.py:14(cpu_heavy_task_fixed)
            1    0.045    0.045    0.045    0.045 {built-in method numpy.arange}
            1    0.000    0.000    0.014    0.014 <__array_function__ internals>:177(sum)

    That's a fantastic result, and it demonstrates the power of profiling: we spent our effort on the parts of the code that mattered. To be thorough, I also ran snakeviz on the fixed script.

    %%snakeviz
    run_all_systems()
    Image by Author

    The most notable change is the reduction in total runtime, from roughly 30 seconds to roughly 6 seconds. That's a 5x speedup, achieved by addressing the three main bottlenecks that were visible in the "before" profile.

    Let's look at each one individually.

    1. The iteration_heavy_task

    Before (The Problem)
    In the first image, the large bar on the left, iteration_heavy_task, is the single biggest bottleneck, consuming 14.3 seconds.

    • Why was it slow? This task was a classic "death by a thousand cuts". The function simulate_tiny_op did virtually nothing, but it was called millions of times from inside a pure Python for loop. The immense overhead of the Python interpreter starting and stopping a function call repeatedly was the entire source of the slowness.

    The Fix
    The fixed version, iteration_heavy_task_fixed, recognised that the goal could be achieved without the loop. In our showcase, this meant removing the unnecessary loop entirely. In a real-world application, this would involve finding a single "bulk" operation to replace the iterative one, as in the sketch below.
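
    To make "bulk operation" concrete, here is a small, hypothetical illustration of the same pattern (it's not part of the showcase script): pushing a per-item Python loop down into a single built-in call.

    # Hypothetical illustration: per-item Python loop vs. one bulk built-in call
    def total_slow(n):
        total = 0
        for i in range(n):      # millions of interpreter-level iterations
            total += i
        return total
    
    def total_fast(n):
        return sum(range(n))    # one call; the iteration happens in C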

    After (The Result)
    In the second image, the iteration_heavy_task bar is completely gone. It's now so fast that its runtime is a tiny fraction of a second and is invisible on the chart. We successfully eliminated a 14.3-second problem.

    2. The cpu_heavy_task

    Before (The Problem)
    The second major bottleneck, clearly visible as the large orange bar on the right, is cpu_heavy_task, which took 12.9 seconds.

    • Why was it slow? Like the iteration task, this function was also limited by the speed of the Python for loop. While the maths operations inside were fast, the interpreter had to process each of the millions of calculations individually, which is highly inefficient for numerical tasks.

    The Fix
    The fix was vectorisation using the NumPy library. Instead of using a Python loop, cpu_heavy_task_fixed created a NumPy array and performed all the mathematical operations (np.sqrt, np.sin, and so on) on the entire array at once. These operations are executed in highly optimised, pre-compiled C code, completely bypassing the slow Python interpreter loop, as the benchmark sketch below illustrates.
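
    You can verify the scale of this difference with timeit. This is a sketch, not a rigorous benchmark; the numbers will vary by machine, and N is deliberately smaller than the showcase constant so the loop version finishes quickly.

    import timeit
    import math
    import numpy as np
    
    N = 1_000_000
    
    def loop_version():
        result = 0
        for i in range(N):
            result += math.sin(i) * math.cos(i) + math.sqrt(i)
        return result
    
    def numpy_version():
        i = np.arange(N, dtype=np.float64)
        return np.sum(np.sin(i) * np.cos(i) + np.sqrt(i))
    
    print("loop :", timeit.timeit(loop_version, number=1))
    print("numpy:", timeit.timeit(numpy_version, number=1))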

    After (The Result)
    Just like the first bottleneck, the cpu_heavy_task bar has vanished from the "after" diagram. Its runtime dropped from 12.9 seconds to roughly half a second in the "after" profile.

    3. The memory_heavy_string_task

    Before (The Problem)
    In the first diagram, memory_heavy_string_task was running, but its runtime was small compared to the other two larger issues, so it was relegated to the small, unlabelled sliver of space on the far right. It was a relatively minor issue.

    The Fix
    The fix for this task was to replace the inefficient report += "…" string concatenation with a much more efficient method: building a list of all the string parts and then calling "".join() a single time at the end (see the micro-benchmark sketch below).
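
    Here's a rough, hypothetical micro-benchmark of the two approaches. Note that CPython can sometimes optimise in-place += on strings, so the gap varies by version and workload; as always, measure on your own machine.

    import timeit
    
    chunk = "report_item_abcdefg_123456789_"
    N = 100_000  # deliberately small; += concatenation degrades badly as N grows
    
    def concat_version():
        report = ""
        for i in range(N):
            report += f"|{chunk}{i}"
        return report
    
    def join_version():
        return "".join(f"|{chunk}{i}" for i in range(N))
    
    print("+=  :", timeit.timeit(concat_version, number=1))
    print("join:", timeit.timeit(join_version, number=1))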

    After (The Result)
    In the second diagram, we see the result of our success. Having eliminated the two 10+ second bottlenecks, memory_heavy_string_task_fixed is now the dominant bottleneck, accounting for 4.34 seconds of the total 5.22-second runtime.

    Snakeviz even lets us look inside this fixed function. The new main contributor is the orange bar labelled <listcomp> (list comprehension), which takes 3.52 seconds. This indicates that even in the fixed code, the most time-consuming part is now the process of building the large list of strings in memory before they can be joined.
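
    If you wanted to chase this new hotspot further, the same stats can be re-sorted by tottime to rank functions by self-time rather than cumulative time. A quick sketch, reusing the pr object from the profiling run above:

    # Re-rank by self-time (tottime) rather than cumulative time
    ps = pstats.Stats(pr).sort_stats("tottime")
    ps.print_stats(5)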

    Summary

    This article provides a hands-on guide to identifying and resolving performance issues in Python code, arguing that developers should use profiling tools to measure performance instead of relying on intuition or guesswork to pinpoint the source of slowdowns.

    I demonstrated a methodical workflow using two key tools:

    • cProfile: Python's built-in profiler, used to gather detailed data on function calls and execution times.
    • snakeviz: A visualisation tool that turns cProfile's data into an interactive "icicle" chart, making it easy to visually identify which parts of the code are consuming the most time.

    The article uses a case study of a deliberately slow script engineered with three distinct and significant bottlenecks:

    1. An iteration-bound task: A function called millions of times in a loop, showcasing the performance cost of Python's function call overhead ("death by a thousand cuts").
    2. A CPU-bound task: A for loop performing millions of math calculations, highlighting the inefficiency of pure Python for heavy numerical work.
    3. A memory-bound task: A large string built inefficiently using repeated += concatenation.

    By analysing the snakeviz output, I pinpointed these three problems and applied targeted fixes.

    • The iteration bottleneck was fixed by eliminating the unnecessary loop.
    • The CPU bottleneck was resolved with vectorisation using NumPy, which executes mathematical operations in fast, compiled C code.
    • The memory bottleneck was fixed by appending string parts to a list and using a single, efficient "".join() call.

    These fixes resulted in a dramatic speedup, reducing the script's runtime from over 30 seconds to just over 6 seconds. I concluded by demonstrating that, even after the major issues are resolved, the profiler can be used again to identify new, smaller bottlenecks, illustrating that performance tuning is an iterative process guided by measurement.


