    Using Local LLMs to Discover High-Performance Algorithms

By ProfitlyAI | January 19, 2026 | 10 mins read


Ever since I was a child, I have been fascinated by drawing. What struck me was not only the act of drawing itself, but also the idea that every drawing could be improved more and more. I remember reaching a very high level with my drawing style. Still, once I reached what felt like the peak of perfection, I would try to see how I could improve the drawing even further, alas, with disastrous results.

From then on I have always kept the same mantra in mind: "refine and iterate and you'll reach perfection". At school, my approach was to read books many times, expanding my knowledge by hunting for other sources and for hidden layers of meaning in every concept. Today, I apply this same philosophy to AI/ML and coding.

We know that matrix multiplication (matmul for short) is the core part of any AI workload. A while back I developed LLM.rust, a Rust mirror of Karpathy's LLM.c. The hardest point in the Rust implementation was the matrix multiplication. Since we have to perform thousands of iterations to fine-tune a GPT-based model, we need an efficient matmul operation. For this purpose, I had to call into the BLAS library through an unsafe block to get around the language's safety boundaries. The use of unsafe goes against Rust's philosophy, which is why I am always looking for safer methods to speed up matmul in this context.

So, taking inspiration from Sam Altman's statement, "ask GPT how to create value", I decided to ask local LLMs to generate, benchmark, and iterate on their own algorithms to produce a better, native Rust matmul implementation.

The challenge comes with some constraints:

• We have to use our local environment; in my case, a MacBook Pro, M3, 36GB RAM;
• We must work within the token limits of a local model;
• The code must be timed and benchmarked inside the generation loop itself.
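
The third constraint can be handled with a small timing harness. Here is a minimal sketch in Python; the `benchmark` helper and its wiring to the compiled Rust binary are illustrative, not the repo's actual code:

```python
import statistics
import subprocess
import sys
import time

def benchmark(cmd, runs=3):
    # Time an external command several times and return the median
    # wall-clock duration in seconds. In the real loop this would run
    # the compiled Rust matmul candidate; the helper name is mine.
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        timings.append(time.perf_counter() - start)
    return statistics.median(timings)

# Example: time a no-op Python process instead of the Rust binary.
median_s = benchmark([sys.executable, "-c", "pass"], runs=2)
```

Taking the median over several runs keeps a single noisy measurement from steering the whole generation loop.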

I know that reaching BLAS-level performance with this method is close to impossible, but I want to highlight how we can leverage AI for custom needs, even with our "tiny" laptops, so that we can unblock ideas and push boundaries in any field. This post is meant as an inspiration for practitioners, and for anyone who wants to get more familiar with Microsoft AutoGen and local LLM deployment.

All the code can be found in this GitHub repo. This is an ongoing experiment, and many changes/improvements will be committed.

General idea

The overall idea is to have a roundtable of agents. The starting point is the MrAderMacher Mixtral 8x7B Q4_K_M local model. From this model we create five entities:

• the Proposer comes up with a new Strassen-like algorithm, to find a better and more efficient way to perform matmul;
• the Verifier reviews the matmul formulation through symbolic math;
• the Coder creates the underlying Rust code;
• the Tester executes it and saves all the data to the vector database;
• the Manager acts silently, controlling the overall workflow.
• Proposer: analyses benchmark runs and proposes new tuning parameters and matmul formulations.
• Verifier: (currently disabled in the code) verifies the Proposer's mathematical formulation through symbolic verification.
• Coder: takes the parameters and works them into the Rust template code.
• Tester: runs the Rust code, saves it, and computes the benchmark timing.
• Manager: overall control of the workflow.
Tab. 1: Roles of the agents.

The overall workflow is orchestrated through Microsoft AutoGen, as depicted in Fig. 1.

Fig. 1: Matmul optimisation. The user makes an initial request with a prompt. From there the Manager orchestrates the overall workflow: 1) the Proposer acts as a theorist and generates a Strassen-like algorithm; 2) the Verifier checks the mathematical correctness of the formulation; 3) the Coder generates Rust NEON code; 4) the Tester runs the benchmark. [Image generated with Nano Banana Pro.]

Preparing the input data and the vector database

The input data is collected from academic papers focused on matrix multiplication optimisation. Many of these papers are referenced in, and related to, DeepMind's Strassen paper. I wanted to start simple, so I collected 50 papers, published from 2020 to 2025, that specifically address matrix multiplication.

    Subsequent, I’ve used chroma to create the vector database. The important facet in producing a brand new vector database is how the PDFs are chunked. On this context, I used a semantic chunker. In a different way from cut up textual content strategies, the semantic chunker makes use of the precise that means of the textual content, to find out the place to chop. The objective is to maintain the associated sentences collectively in a single chunk, making the ultimate vector database extra coherent and correct. That is completed utilizing the native mannequin BAAI/bge-base-en-v1.5. The Github gist beneath exhibits the complete implementation.

The core code: autogen-core and GGML models

    I’ve used Microsoft Autogen, specifically the autogen-core variant (model 0.7.5). In a different way from the higher-level chat, in autogen-core we are able to have entry to low-level event-driven constructing blocks, which can be essential to create a state-machine-driven workflow as we want. As a matter of reality, the problem is to keep up a strict workflow. All of the appearing brokers should act in a particular order: Proposer –> Verifier –> Coder –> Tester.

The core part is the BaseMatMulAgent, which inherits from AutoGen's RoutedAgent. This base class lets us standardise how the LLM agents take part in the chat and how they behave.

From the code above, we can see the class is designed to participate in an asynchronous group chat, handling conversation history and calls to external tools, and generating responses through the local LLM.

The core element is @message_handler, a decorator that registers a method as a listener, or subscriber, based on the message type. The decorator automatically detects the type hint of the method's first argument, in our case message: GroupChatMessage. It then subscribes the agent to receive any events of that type sent to the agent's topic. The handle_message async method is then responsible for updating the agent's internal memory, without producing a response.
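
To make the mechanism concrete without pulling in AutoGen, here is a simplified stand-in, not autogen-core's actual implementation, that mimics the pattern: the decorator reads the type hint of the handler's first argument and subscribes the handler to that message type.

```python
import asyncio
import inspect
import typing
from collections import defaultdict

HANDLERS = defaultdict(list)  # message type -> subscribed handlers

def message_handler(func):
    # Detect the type hint of the first argument and register the
    # coroutine as a subscriber for that message type.
    hints = typing.get_type_hints(func)
    first_param = next(iter(inspect.signature(func).parameters))
    HANDLERS[hints[first_param]].append(func)
    return func

class GroupChatMessage:
    def __init__(self, source, content):
        self.source, self.content = source, content

memory = []

@message_handler
async def handle_message(message: GroupChatMessage):
    # Update internal memory without producing a response.
    memory.append((message.source, message.content))

async def publish(message):
    # Deliver the event to every handler subscribed to its type.
    for handler in HANDLERS[type(message)]:
        await handler(message)

asyncio.run(publish(GroupChatMessage("proposer", "new formulation")))
```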

With the listener-subscriber mechanism in place, we can focus on the Manager class. The MatMulManager inherits from RoutedAgent and orchestrates the overall agents' flow.

The code above handles all the agents. We are skipping the Verifier part for the moment. The Coder publishes the final code, and the Tester takes care of saving both the code and the whole context to the vector database. In this way, we avoid consuming all the tokens of our local model. At each new run, the model catches up on the latest generated algorithms from the vector database and proposes a new solution.
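
The catch-up policy can be sketched with an in-memory stand-in for the Chroma collection (the class and the sample data below are illustrative):

```python
class RunStore:
    # Stand-in for the Chroma collection: each run's code and timing is
    # stored, and the next run is seeded with only the latest entries
    # instead of the full chat history, which saves local-model tokens.
    def __init__(self):
        self.runs = []

    def add(self, run_id, code, millis):
        self.runs.append({"id": run_id, "code": code, "millis": millis})

    def latest_context(self, k=2):
        # Only the k most recent algorithms are replayed to the Proposer.
        return [r["id"] for r in self.runs[-k:]]

store = RunStore()
store.add("run0", "fn matmul_v0() {}", 9.0)    # illustrative timings
store.add("run1", "fn matmul_v1() {}", 760.0)
store.add("run3", "fn matmul_v3() {}", 359.0)
```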

An important caveat: to make sure autogen-core can work with llama models on macOS, use the following snippet:

#!/bin/bash

CMAKE_ARGS="-DGGML_METAL=on" FORCE_CMAKE=1 pip install --upgrade --verbose --force-reinstall llama-cpp-python --no-cache-dir

Fig. 2 summarises the entire code. We can roughly subdivide it into 3 main blocks:

• The BaseAgent, which handles messages through the LLM agents, evaluating the mathematical formulation and generating code;
• The MatMulManager, which orchestrates the entire agents' flow;
• autogen_core.SingleThreadedAgentRuntime, which makes the entire workflow a reality.
Fig. 2: Overall workflow in a nutshell. The base agent runs the LLM through the agents, evaluates the mathematical formulation, creates the algorithm in Rust, and saves all the data in the vector database. The MatMulManager is the real core of the overall workflow. Finally, the autogen_core.SingleThreadedAgentRuntime makes all of this work on our MacBook Pro. [Image created with Nano Banana Pro.]

Results and benchmark

All the Rust code has been revised and re-run manually. While the workflow is robust, working with LLMs requires a critical eye. Several times the model confabulated*, producing code that looked optimised but did not perform the actual matmul work.
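
A cheap guard against this kind of confabulation is to check every generated kernel against a naive reference on random inputs before trusting its timing. Here is a pure-Python sketch (function names are mine):

```python
import random

def reference_matmul(a, b):
    # Naive O(n^3) reference used to catch kernels that look optimised
    # but skip work, e.g. computing only diagonal blocks.
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def diagonal_only_matmul(a, b):
    # A deliberately broken kernel of the kind Run 0 produced: it fills
    # only the diagonal and silently ignores the rest of the matrix.
    n = len(a)
    c = [[0.0] * n for _ in range(n)]
    for i in range(n):
        c[i][i] = sum(a[i][k] * b[k][i] for k in range(n))
    return c

def agrees_with_reference(candidate, n=8, tol=1e-9):
    # Compare the candidate against the reference on a random n x n case.
    a = [[random.random() for _ in range(n)] for _ in range(n)]
    b = [[random.random() for _ in range(n)] for _ in range(n)]
    want, got = reference_matmul(a, b), candidate(a, b)
    return all(abs(want[i][j] - got[i][j]) <= tol
               for i in range(n) for j in range(n))
```

Any kernel whose timing looks too good to be true should fail this check, which is exactly what the broken runs below would do.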

The very first iteration generates a kind of Strassen-like algorithm ("Run 0" code in Fig. 3):

The model then thinks of better implementations, more Rust-NEON like, so that after four iterations it gives the following code ("Run 3" in Fig. 3):

We can see the use of functions like vaddq_f32, a CPU instruction specific to ARM processors, coming from std::arch::aarch64. The model manages to use rayon to split the workload across multiple CPU cores, and inside the parallel threads it uses NEON intrinsics. The code itself is not perfectly correct; moreover, I noticed that we run into an out-of-memory error when dealing with 1024x1024 matrices. I had to manually rework the code to make it run.

This brings us back to my mantra, "iterating to perfection", and we can ask ourselves: can a local agent autonomously refine Rust code to the point of mastering complex NEON intrinsics? The findings show that yes, even on consumer hardware, this level of optimisation is achievable.

Fig. 3 shows the final results I obtained after each iteration.

Fig. 3: Logarithmic plot of the Rust-NEON implementation at various iterations. The calculations were carried out on 1024x1024 matrix multiplication benchmarks. [Image generated by the author.]

The 0th and 2nd benchmarks contain errors, as it is physically impossible to achieve such results for a 1024x1024 matmul on a CPU:

• the first code suffers from a diagonal fallacy: it computes only the diagonal blocks of the matrix and ignores the rest;
• the second code has a broken buffer: it repeatedly overwrites a small, cache-hot buffer of 1024 floats, rather than traversing the full 1 million elements.

Nevertheless, the process produced two real implementations, Run 1 and Run 3. The first achieves 760 ms and constitutes a real baseline; it suffers from cache misses and a lack of SIMD vectorisation. Run 3 records 359 ms; the improvement comes from NEON SIMD and Rayon parallelism.
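
The improvement between the two valid runs follows directly from the Fig. 3 timings:

```python
baseline_ms = 760.0  # Run 1: plain Rust baseline
best_ms = 359.0      # Run 3: NEON SIMD + Rayon

speedup = baseline_ms / best_ms               # roughly 2.1x
time_reduction = 1.0 - best_ms / baseline_ms  # roughly 53% less time
```

In other words, Run 3 cuts the wall-clock time by over half relative to the baseline.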

*: I wrote "the model confabulates" on purpose. From a medical point of view, LLMs are not hallucinating but confabulating. Hallucinations are a completely different situation from what LLMs are doing when they babble and produce "wrong" answers.

    Conclusions

This experiment started with a question that seemed an impossible challenge: can we use consumer-grade local LLMs to discover high-performance Rust algorithms that can compete with BLAS implementations?

We can say yes, or at least we now have a sound and solid foundation on which to build better code towards a full BLAS-like implementation in Rust.

The post showed how to work with Microsoft AutoGen and autogen-core, and how to create a roundtable of agents.

The base model in use is in GGUF format, and it can run on a MacBook Pro M3, 36GB.

    After all, we didn’t discover (but) something higher than BLAS in a single easy code. Nonetheless, we proved that native agentic workflow, on a MacBook Professional, can obtain what was beforehand thought to require a large cluster and big fashions. Finally, the mannequin managed to discover a cheap Rust-NEON implementation, “Run 3 above”, that has a pace up of over 50% on normal Rayon implementation. We should spotlight that the spine implementation was AI generated.

The frontier is open. I hope this blog post can inspire you to try and see what limits we can overcome with local LLM deployment.


I am writing this in a personal capacity; these views are my own.


