    Reinforcement Learning from Human Feedback, Explained Simply

By ProfitlyAI | June 24, 2025 | 8 min read


The appearance of ChatGPT in 2022 completely changed how the world perceives artificial intelligence. ChatGPT's remarkable performance led to the rapid development of other powerful LLMs.

We could roughly say that ChatGPT is an upgraded version of GPT-3. But compared to previous GPT versions, this time OpenAI's developers did not just use more data or a more complex model architecture. Instead, they designed a remarkable technique that enabled a breakthrough.

In this article, we will talk about RLHF, a fundamental algorithm implemented at the core of ChatGPT that surpasses the limits of human annotation for LLMs. Although the algorithm is based on proximal policy optimization (PPO), we will keep the explanation simple, without going into the details of reinforcement learning, which is not the focus of this article.

NLP development before ChatGPT

To better understand the context, let us recall how LLMs were developed before ChatGPT. In general, LLM development consisted of two phases:

    Pre-training & fine-tuning framework

Pre-training consists of language modeling: a task in which a model tries to predict a hidden token from its context. The probability distribution the model produces for the hidden token is then compared with the ground-truth distribution to calculate the loss and perform backpropagation. In this way, the model learns the semantic structure of the language and the meaning behind words.
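To make this concrete, here is a minimal sketch of the language-modeling loss, assuming PyTorch (the article does not name a framework): a toy logit vector stands in for the model's prediction for one hidden token, and cross-entropy compares it with the ground-truth token before backpropagation.

```python
import torch
import torch.nn.functional as F

vocab_size = 8
# Toy stand-in for the model's output scores for a single hidden token position
logits = torch.randn(1, vocab_size, requires_grad=True)
target = torch.tensor([3])  # index of the ground-truth (hidden) token

# Cross-entropy compares the predicted distribution with the ground truth;
# backpropagation then adjusts the parameters, exactly as described above
loss = F.cross_entropy(logits, target)
loss.backward()
print(loss.item())
```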

If you want to learn more about the pre-training and fine-tuning framework, check out my article about BERT.

After that, the model is fine-tuned on a downstream task, which might involve different objectives: text summarization, text translation, text generation, question answering, etc. In many situations, fine-tuning requires a human-labeled dataset, which should ideally contain enough text samples to allow the model to generalize well and avoid overfitting.

This is where the limits of fine-tuning appear. Data annotation is usually a time-consuming task performed by humans. Take a question-answering task, for example. To construct training samples, we would need a manually labeled dataset of questions and answers. For every question, we would need a precise answer provided by a human. For instance:

During data annotation, providing full answers to prompts requires a lot of human time.

In reality, training an LLM would require millions or even billions of such (question, answer) pairs. This annotation process is very time-consuming and does not scale well.

    RLHF

Having understood the main drawback, now is the perfect moment to dive into the details of RLHF.

If you have already used ChatGPT, you have probably encountered a situation in which it asks you to choose the answer that better fits your initial prompt:

The ChatGPT interface asks a user to rate two possible answers.

This information is actually used to continuously improve ChatGPT. Let us understand how.

First of all, it is important to notice that choosing the better of two answers is a much simpler task for a human than providing an exact answer to an open question. The idea we are going to look at is based exactly on that: we want the human to simply choose an answer from two possible options to create the annotated dataset.

Choosing between two options is an easier task than asking someone to write the best response.

Response generation

In LLMs, there are several possible ways to generate a response from the distribution of predicted token probabilities:

    • Greedy decoding: given an output distribution p over tokens, the model always deterministically chooses the token with the highest probability, i.e. the highest softmax probability.
    • Sampling: given an output distribution p over tokens, the model randomly samples a token according to its assigned probability. The highest probability does not guarantee that the corresponding token will be chosen, so running the generation process again can produce different results.

This second sampling strategy results in more randomized model behavior, which allows the generation of diverse text sequences. For now, let us suppose that we generate many pairs of such sequences. The resulting dataset of pairs is then labeled by humans: for every pair, a human is asked which of the two output sequences fits the input sequence better. The annotated dataset is used in the next step.
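As a minimal illustration of the two decoding strategies (PyTorch assumed; the vocabulary and scores are made up), greedy decoding always returns the same token, while sampling can return a different token on every run:

```python
import torch

logits = torch.tensor([2.0, 1.0, 0.5, -1.0])  # toy scores over a 4-token vocabulary
probs = torch.softmax(logits, dim=-1)          # output distribution p over tokens

greedy_token = torch.argmax(probs).item()                       # always the most likely token
sampled_token = torch.multinomial(probs, num_samples=1).item()  # may differ on each run

print("greedy:", greedy_token, "sampled:", sampled_token)
```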

In the context of RLHF, the annotated dataset created in this way is called "human feedback".

    Reward Mannequin

After the annotated dataset is created, we use it to train a so-called "reward" model, whose goal is to learn to numerically estimate how good or bad a given answer is for an initial prompt. Ideally, we want the reward model to output positive values for good responses and negative values for bad ones.

Speaking of the reward model, its architecture is exactly the same as the initial LLM, except for the last layer, where instead of outputting a text sequence, the model outputs a float value: an estimate of the quality of the answer.

It is important to pass both the initial prompt and the generated response as input to the reward model.
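A minimal sketch of this idea, under the assumption that the backbone returns per-token hidden states (the class and attribute names here are hypothetical, not from the article):

```python
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    def __init__(self, backbone: nn.Module, hidden_size: int):
        super().__init__()
        self.backbone = backbone                     # same architecture as the original LLM
        self.value_head = nn.Linear(hidden_size, 1)  # scalar reward instead of token logits

    def forward(self, prompt_and_response_ids: torch.Tensor) -> torch.Tensor:
        # prompt and generated response are concatenated into one token sequence
        hidden = self.backbone(prompt_and_response_ids)        # (batch, seq_len, hidden_size)
        return self.value_head(hidden[:, -1, :]).squeeze(-1)   # one float per sequence
```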

Loss function

You might logically ask how the reward model can learn this regression task if there are no numerical labels in the annotated dataset. This is a reasonable question. To address it, we are going to use an interesting trick: we will pass both the better and the worse answer through the reward model, which will output two different estimates (rewards).

Then we will cleverly construct a loss function that compares them relative to each other.

Loss function used in the RLHF algorithm: loss = -log σ(R₊ - R₋), where σ is the sigmoid function, R₊ is the reward assigned to the better response, and R₋ is the reward estimated for the worse response.

Let us plug in some argument values for the loss function and analyze its behavior. Below is a table with the plugged-in values:

A table of loss values depending on the difference between R₊ and R₋.

We can immediately observe two interesting insights:

    • If the difference between R₊ and R₋ is negative, i.e. the better response received a lower reward than the worse one, then the loss value grows proportionally with the reward difference, meaning the model needs to be adjusted significantly.
    • If the difference between R₊ and R₋ is positive, i.e. the better response received a higher reward than the worse one, then the loss is bounded within much lower values, in the interval (0, 0.69), which indicates that the model is doing its job well at distinguishing good and bad responses.

A nice property of such a loss function is that the model learns appropriate rewards for generated texts on its own, and we humans do not have to explicitly evaluate every response numerically; we only provide a binary judgment: whether a given response is better or worse.
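This behavior can be checked numerically, assuming the pairwise formulation loss = -log σ(R₊ - R₋) shown above (PyTorch used here for convenience):

```python
import torch
import torch.nn.functional as F

def reward_loss(r_better: torch.Tensor, r_worse: torch.Tensor) -> torch.Tensor:
    # loss = -log(sigmoid(R_plus - R_minus))
    return -F.logsigmoid(r_better - r_worse)

for delta in [-3.0, -1.0, 0.0, 1.0, 3.0]:
    loss = reward_loss(torch.tensor(delta), torch.tensor(0.0))
    print(f"R+ - R- = {delta:+.1f} -> loss = {loss.item():.3f}")
# Negative differences produce large losses; positive differences stay below ~0.69.
```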

Training the original LLM

The trained reward model is then used to train the original LLM. For that, we can feed a series of new prompts to the LLM, which will generate output sequences. Then the input prompts, together with the output sequences, are fed to the reward model to estimate how good these responses are.

After the numerical estimates are produced, this information is used as feedback to the original LLM, which then performs weight updates. A very simple but elegant approach!

RLHF training diagram

Most of the time, a reinforcement learning algorithm is used in this last step to adjust the model's weights (usually proximal policy optimization, PPO).

Even if it is not technically correct, if you are not familiar with reinforcement learning or PPO, you can roughly think of this step as backpropagation, as in normal machine learning algorithms.
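For intuition only, here is a deliberately simplified, REINFORCE-style sketch of that feedback loop. Real systems use PPO with clipping and a KL penalty against the original model, and the llm, reward_model, and generate_with_log_probs objects below are hypothetical stand-ins, not an actual API:

```python
import torch

def rlhf_step(llm, reward_model, optimizer, prompt_ids):
    # Generate a response and keep the log-probabilities of its tokens (assumed helper)
    response_ids, log_probs = llm.generate_with_log_probs(prompt_ids)

    # Score the (prompt, response) pair with the trained reward model
    with torch.no_grad():
        reward = reward_model(torch.cat([prompt_ids, response_ids], dim=-1))

    # Reward-weighted log-likelihood: push the LLM toward highly rewarded responses
    loss = -(reward * log_probs.sum(dim=-1)).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```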

    Inference

During inference, only the original trained model is used. At the same time, the model can continuously be improved in the background by collecting user prompts and periodically asking users to rate which of two responses is better.

    Conclusion

In this article, we have studied RLHF, a highly efficient and scalable technique for training modern LLMs. An elegant combination of an LLM with a reward model allows us to significantly simplify the annotation task performed by humans, which previously required enormous effort when done through raw fine-tuning procedures.

RLHF is used at the core of many popular models, including ChatGPT, Claude, Gemini, and Mistral.

    Sources

All images, unless otherwise noted, are by the author.


