
    Implementing Vibe Proving with Reinforcement Learning



“The development of mathematics toward greater precision has led, as is well known, to the formalization of large tracts of it, so that one can prove any theorem using nothing but a few mechanical rules.”
— K. Gödel

In Part 1, we built a proof checker and developed a mental model for why we should trust proofs that come out of an LLM: as long as we have formalized reasoning and a sound verifier, a “few mechanical rules” are all we need. So how do we train an LLM to generate valid proofs?

As DeepSeek beautifully showed, the same intuition behind AI learning the game of Go works for AI learning how to reason, as long as reasoning can be checked (and now we know it can). In this second part we put our verifier to good use and build an end-to-end RL training loop to fine-tune an open-source model to produce proofs in the language we introduced in Part 1: at a glance, the following figure shows the essential parts of the flow.

The full implementation: from dataset generation with Sonnet to the RL loop on Tinker. [Image by the author]

TL;DR: after some machine-human collaboration to generate a dataset (leveraging our checker as a sanity check on LLM-generated examples), we run an RL loop on Tinker to do LoRA-style fine-tuning of open-source models. We prompt the model with (1) how our language works, (2) how to apply rules to build proofs, and (3) how to format answers so that they are easy to parse. Each proof is then run through the proof checker, and the reward gets propagated back to improve the model’s abilities: ideally, the model will start with mostly failing proof attempts, and then get progressively better as training progresses.

Note that while the series specifically targets mathematical reasoning, verifiable proofs are fundamental to building confidence in distributed software systems. As some experts have argued, AI may be the missing ingredient for proving software correctness at scale!

Buckle up, clone the repo, and code along. If you skipped the first part, you can read it here!

Dataset generation

“People think mathematics is complicated. Mathematics is the simple bit. It’s the stuff we can understand. It’s cats that are complicated.” — J. Conway

To get a reward to improve our model, we need examples of proofs in the first place: ideally, we want a mix of easy and hard proofs, written in our own reasoning language. We can’t just generate random strings in our alphabet, because we’d like the model to try to prove things that we know are provable in the first place! How do we bootstrap the process?

Our training mixture is a blend of three sources:

• A manual translation of exercises (premises->conclusion) taken from forallx, which we assume are solvable proofs;
• A manual translation of exercises (premises->conclusion) taken from Language, Proof and Logic, which we assume are solvable proofs;
• A corpus of proofs generated by a powerful LLM (Sonnet by Anthropic). Since we cannot assume that LLM-generated premises->conclusion tuples are correct, we prompt the LLM to generate a full proof, which (you guessed it!) gets checked by our proof checker before being added to the training set (see the sketch right after this list).
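A minimal sketch of that third source, under stated assumptions: generate_candidate (one call to Sonnet) and check_proof (the Part 1 checker) are hypothetical placeholder names, not the companion repo’s actual API, and using proof length for num_steps is an assumption discussed just below.

def collect_llm_examples(n_examples, generate_candidate, check_proof):
    """Keep an LLM-generated example only if its full proof passes the proof checker."""
    dataset = []
    while len(dataset) < n_examples:
        premises, conclusion, proof_lines = generate_candidate()  # one call to Sonnet
        if check_proof(premises, conclusion, proof_lines):        # verified, hence provable
            dataset.append({
                "premises": premises,
                "conclusion": conclusion,
                "num_steps": len(proof_lines),  # proof length as a rough difficulty proxy
            })
    return dataset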

A single observation in the dataset looks like the following object:

    {"premises": ["P", "Q"], "conclusion": "P and Q", "num_steps": 1}

i.e., a set of premises, a conclusion, and how many steps Sonnet took to generate a valid proof: premises and conclusion will end up in the prompt during RL (as we’ll ask the model to find a proof of the conclusion from the premises), and num_steps is a convenient value for printing out some statistics on the perceived difficulty of the training set (assuming for simplicity that the length of a proof loosely correlates with its difficulty).
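For instance, a small self-contained snippet along these lines (the train.jsonl path is illustrative, not the repo’s actual layout) can summarize the difficulty distribution:

import json
from collections import Counter

# Load a JSONL training set of {"premises": [...], "conclusion": ..., "num_steps": ...} objects.
with open("train.jsonl") as f:
    examples = [json.loads(line) for line in f if line.strip()]

# Use num_steps (proof length) as a rough difficulty proxy, as discussed above.
difficulty = Counter(ex["num_steps"] for ex in examples)
for steps, count in sorted(difficulty.items()):
    print(f"{steps:>3} steps: {count} examples")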

Reinforcement Learning on Tinker

“The best way to have a good idea is to have lots of ideas.”
— attributed to L. Pauling

We are now ready to get our own, smaller, open-source LLM for Vibe Proving. There are many recipes and services online for performing RL on open-source models, but we picked Tinker because it promises to abstract away the infrastructure and most of the boilerplate required (it is also the new kid on the block, so it’s a chance to try it out!).

The training loop itself doesn’t have many surprises (a minimal sketch of the reward logic follows the list below):

1. Sample: given the prompt and a tuple (premises->conclusion), we ask the model to generate several proof attempts.
2. Verify: we run each attempt through the proof checker.
3. Reward: valid proofs (i.e. proofs that are fully parseable and logically correct) get reward 1; every other outcome gets 0 (‘Do or do not’, indeed). Note that we also check that the generated proof has the same (premises->conclusion) as our request, to avoid having the LLM simply game the system by always producing a trivially correct proof.
4. Update: we adjust the model weights to make successful proofs more likely.
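As a sketch of steps 2 and 3 under stated assumptions: parse_and_check is a stand-in for the Part 1 parser and checker, and every name below is hypothetical rather than the companion repo’s actual code.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ParsedProof:
    premises: list
    conclusion: str
    is_valid: bool

def parse_and_check(attempt: str) -> Optional[ParsedProof]:
    """Stand-in for the Part 1 parser + checker; returns None if the text cannot be parsed."""
    raise NotImplementedError  # wire this up to the checker from the companion repo

def reward(attempt: str, premises: list, conclusion: str) -> float:
    proof = parse_and_check(attempt)
    if proof is None:
        return 0.0                         # unparseable output gets no credit
    if proof.premises != premises or proof.conclusion != conclusion:
        return 0.0                         # proved a different goal: reject to prevent reward hacking
    return 1.0 if proof.is_valid else 0.0  # 1 only for fully valid proofs ("Do or do not")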

Following Tinker’s own guidelines, we chose to experiment with MoE reasoning models in a few sizes: gpt-oss-20b, gpt-oss-120b and Qwen3-30B-A3B-Instruct-2507. During training, logs and proofs are saved in the training_logs folder: at the end, our (vibe coded!) app can be used to visualize the metric trends and inspect the generated proofs.

Displaying training metrics from a 20b model using the vibe coded app. [Screenshot by the author]

If you’re using an AI assistant to monitor the training (which I experimented with for the first time on this project), an interesting data slice to track is the proofs from textbooks, since they’re designed to be tricky. For instance, the following is a status update from Claude Code:

AI-assisted monitoring, with a breakdown of performance over textbook examples. [Screenshot by the author]

    How good is our vibe proving?

Across a few runs and a bit of tinkering with the parameters, we always end up with models that can prove almost all of the generated examples, but struggle on some textbook proofs. It’s instructive and slightly amusing to inspect the generated proofs.

On the success side, this is an attempt at proving De Morgan’s law, i.e. showing how to go from ['not A or not B'] to not (A and B), by first assuming A and B and proving a contradiction:

    1. not A or not B (premise)
    2. | A and B (subproof)
    3. | A (2)
    4. | B (2)
    5. || not A (nested subproof, from 1)
    6. || ~ (3,5)
    7. || not B (nested subproof)
    8. || ~ (4,7)
    9. | (1, 5-6, 7-8)
    10. QED

On the failure side, no model successfully proved from 'A or B', 'not A or C', 'not B or D' that C or D, struggling to properly manage nested subproofs and apply the rule of explosion, as shown in this trace:

    1. A or B (premise)
    2. not A or C (premise)
    3. not B or D (premise)
    4. | A (subproof)
    5. || not A (nested subproof)
    6. || ~ (4,5)
    7. | C (5-6) ← ERROR
    8. ….

How easy was Tinker?

Our small proof of concept is hardly a stress test for a training service at scale, but it was enough to get some grounded impressions of the system.

The combination of good public examples, Claude-friendly documentation and hardware abstraction made for a pleasant, gentle introduction to RL, at a reasonable cost (all the experiments for this blog post cost $60 or so, including preliminary runs that, in hindsight, were clearly a waste of time and money!).

When you get the hang of it and start to run a few jobs in parallel, the lack of monitoring and observability becomes an issue: sometimes my runs slowed down considerably (getting try_again responses for a long time, as if the system was overloaded), and some jobs failed at some point for unclear reasons (but, sure enough, you can restart from a previous checkpoint). Considering the reasonable price and the prototype nature of my workloads, none of these issues outweighed the pros, and I walked away with a positive enough Tinker experience that I would definitely use it again for a future project.

    See you, RL cowboys!

“We do these things not because they are easy, but because we thought they were going to be easy.” — Anonymous

While Tinker certainly makes the training process (mostly) seamless, the devil is still in the (RL) details: we have barely scratched the surface so far, as our goal was to go from zero to a Vibe Proving stack, not to optimize RL per se.

The good news is that the flow is fairly modular, so that all components can be improved and tinkered with (more or less) independently:

• model choice: model type, model size, provider …
• training parameters: pick the learning rate, batch size, LoRA rank …
• code abstractions: re-write the code with RL Envs …
• prompt optimization: better instructions, simpler formatting, useful in-context examples, …
• dataset optimization: more diverse examples, curriculum learning (not just varying the proof difficulty, but for example starting with proofs that are complete except for one missing step, then proofs with two missing steps, and so on, until the model needs to fill in the entire proof) …

In the same vein, our own custom proof language is certainly not enough to get interesting results: we could improve on it, but getting to something truly usable would require an astounding amount of work. For these reasons, you’re better off migrating to a purpose-built language, such as Lean: importantly, now that we think of proofs as formalized reasoning, the same mental model carries over to a language that is (way) more expressive. Moreover, Lean has pretty much the same style for writing down proofs, i.e. rules for introducing and eliminating logical operators.
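As a taste of that style, here is a minimal Lean 4 sketch (not part of the companion repo) of the De Morgan direction proved above, written with the usual introduction and elimination moves:

-- ¬A ∨ ¬B ⊢ ¬(A ∧ B): assume A ∧ B and derive a contradiction in each branch.
example (A B : Prop) (h : ¬A ∨ ¬B) : ¬(A ∧ B) := by
  intro hab                        -- introduce the assumption A ∧ B
  cases h with
  | inl hna => exact hna hab.1     -- ¬A together with A
  | inr hnb => exact hnb hab.2     -- ¬B together with B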

In other words, once we nail the math behind Vibe Proving and build an initial RL harness, what’s left is good ol’ engineering.

    Acknowledgements

Thanks to Patrick John Chia, Federico Bianchi, Ethan Rosenthal, Ryan Vilim, and Davis Treybig for precious feedback on earlier versions of this draft.

If you like the intersection of genAI, reasoning about distributed systems, and verification, you can also check out our research at Bauplan.

AI coding assistants were used to write the companion repository, but no assistant was used to write the text (except for proof-reading and typo correction).


