    From Equal Weights to Smart Weights: OTPO’s Approach to Better LLM Alignment

By ProfitlyAI · July 15, 2025


    Context

Large language models have evolved from basic search tools into AI assistants that code, write, and research. They are now accessible through smartphone apps and web APIs, putting powerful AI at everyone's fingertips. These systems have become an integral part of our daily lives. People use AI assistants for advice on personal relationships, for fact-checking to form opinions (even though the interface clearly states it can make mistakes), for diet plans, and for choosing their next vacation destination.

As more and more powerful models are released, the question of trust arises, and models are scrutinized more closely to make sure the responses they produce are trustworthy and aligned with human values. These are not new questions. Traditionally, models are fine-tuned on human preference data (which usually contains an input, a chosen answer, and a rejected answer) before being launched for public use. Model alignment and safety have been major areas of research, and multiple algorithms have been developed to train models for alignment. Among all the alignment training algorithms, the most popular is Direct Preference Optimization (DPO), thanks to its simplicity and efficiency.

But DPO has a fundamental limitation. When calculating the likelihood of a response, it assigns equal weight to every word or token in the response, even though humans naturally give more importance to meaningful words. For example, consider the following user interaction with an LLM.

User: What is the capital of France?
LLM: The capital of France is Paris, and it is a beautiful city with many attractions.

In this interaction, humans primarily care about the accuracy of "Paris" rather than the stylistic flourishes, yet standard DPO gives equal weight to every token, allowing less relevant content to dilute the learning signal.

There have been several attempts to fix DPO's problems; algorithms like SimPO and SamPO were introduced to address different issues. In this post, we look into another algorithm, published in May 2025: "Optimal Transport-Based Token Weighting scheme for Enhanced Preference Optimization (OTPO)." This post explains the core ideas behind that work and builds a foundation for understanding LLM alignment with human preferences.

    Why Equal Token Weighting Fails

To understand why token weighting matters, we first need to examine how DPO actually processes tokens. Typically, models are pre-trained on trillions of tokens, then fine-tuned, and then trained further with DPO on human preference data to align with human preferences before being released to the public.
DPO operates by computing log-likelihood differences between the chosen and rejected responses at the token level. For each training example with a chosen response y_w and a rejected response y_l, DPO calculates its objective value. The core of DPO lies in its loss function:

L_DPO(π_θ; π_ref) = −E_(x, y_w, y_l)∼D [ log σ( β · log(π_θ(y_w|x) / π_ref(y_w|x)) − β · log(π_θ(y_l|x) / π_ref(y_l|x)) ) ]

(Loss function from the DPO paper.)

Here π_θ is the model being optimized, π_ref is a frozen reference model, β is a scaling hyperparameter, and π∗(y|x) denotes the probability of response y given user input x (with ∗ standing for either model).
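
To make the formula concrete, here is a minimal PyTorch sketch of this loss, assuming the summed sequence log-probabilities under the policy and the frozen reference model have already been computed. The function and variable names are illustrative, not taken from any reference implementation.

    import torch.nn.functional as F

    def dpo_loss(policy_chosen_logp, policy_rejected_logp,
                 ref_chosen_logp, ref_rejected_logp, beta=0.1):
        # Each input is a (batch,) tensor holding log pi(y|x) summed over tokens.
        chosen_logratio = policy_chosen_logp - ref_chosen_logp        # log pi_theta/pi_ref for y_w
        rejected_logratio = policy_rejected_logp - ref_rejected_logp  # log pi_theta/pi_ref for y_l
        # -log sigma(beta * (chosen - rejected)), averaged over the batch
        return -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()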

π∗(y|x) breaks down into token-level computations. For a response with tokens [t₁, t₂, ..., tₙ], the log probability becomes:

    log π∗(y|x) = Σᵢ log π(tᵢ|x, t₁…tᵢ₋₁)

Each token contributes its individual log probability to the overall sequence probability, and there is no mechanism to weight important content more heavily than filler.
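
As a rough illustration (PyTorch, with illustrative names), the sketch below gathers per-token log probabilities from a causal LM's logits and sums them with equal weight, which is exactly the uniform treatment described above.

    import torch

    def sequence_logprob(logits, target_ids):
        # logits: (seq_len, vocab) from a causal LM; target_ids: (seq_len,) next-token labels
        logprobs = torch.log_softmax(logits, dim=-1)
        token_logprobs = logprobs.gather(-1, target_ids.unsqueeze(-1)).squeeze(-1)
        return token_logprobs.sum()   # standard DPO: every token contributes with weight 1

With that in mind, let's look at an example of preference data.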

Input: What is the capital of France?
Chosen: The capital of France is Paris.
Rejected: The capital of France is Italy, which is actually incorrect.

DPO computes log probabilities for every token equally.
    Chosen: log P("The") + log P("capital") + log P("of") + log P("France") + log P("is") + log P("Paris") + log P(".")

    Rejected: log P("The") + log P("capital") + ... + log P("Italy") + ... + log P("incorrect") + log P(".")

The crucial factual difference lies in "Paris" vs. "Italy," but DPO gives equal weight to articles, prepositions, and the factually critical tokens alike. This uniform token treatment creates a mismatch between what the optimization focuses on and what humans actually care about.

The model receives an equal learning signal from semantically critical tokens ("Paris") and inconsequential ones ("which", "actually"). This leads to the verbosity trap: longer sequences accumulate more log-probability mass through sheer token count, so DPO can inadvertently reward verbosity over quality.

When semantically important tokens get averaged together with stylistic ones, the learning signal becomes unreliable, leading to suboptimal preference learning. These problems could be solved if we had a better way to give more weight to the relevant tokens when calculating the probability of a response. That is exactly what OTPO does.

Optimal Transport-Based Token Weighting (OTPO)

Now that we understand DPO's token weighting problem, let's see how OTPO solves it using optimal transport theory. OTPO views preference optimization as a transport problem: how much effort does it take to transform one response into another?

The key insight: what is the minimal effort needed to change "The capital of France is Paris" into "The capital of France is Italy"? Most tokens stay the same, but "Paris" → "Italy" requires a significant semantic transformation, since they are completely different concepts.

OTPO formulates this as an optimal transport problem in which the sources are tokens in the chosen response, the targets are tokens in the rejected response, and transport costs reflect semantic similarity between token pairs. Semantically similar tokens (like "Paris" and "London") have low transport costs, while distant tokens (like "Paris" and "apple") have high costs.

The algorithm computes an optimal transport solution that tells us how to move probability mass between the responses at minimal total cost. Token pairs that participate heavily in this transport, especially those requiring expensive semantic transformations, receive higher weights in the final loss calculation. This means OTPO automatically focuses learning on the tokens that matter most for human preferences, fixing DPO's equal-weighting problem.

    Math behind OTPO

Now let's dive into the mathematical foundation of OTPO. The algorithm has three main components: constructing a cost matrix, solving the optimal transport problem, and computing weighted token losses.

Step 1: Cost Matrix Construction

OTPO starts by building a cost matrix M that measures the semantic distance between every token pair. For the i-th token in the chosen (w) response and the j-th token in the rejected (l) response, the cost is

M[i][j] = ‖ h[w][i] − h[l][j] ‖²

where h[w][i] and h[l][j] are the last-layer hidden representations of the tokens, taken from the model. This squared Euclidean distance captures semantic similarity: similar tokens like "Paris" and "London" have a low cost, while distant tokens like "Paris" and "apple" have a high cost.
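
A minimal PyTorch sketch of this step, assuming h_w and h_l are the (sequence length × hidden size) matrices of last-layer hidden states for the chosen and rejected responses (the names are illustrative):

    import torch

    def cost_matrix(h_w, h_l):
        # M[i][j] = || h_w[i] - h_l[j] ||^2, the squared Euclidean distance
        return torch.cdist(h_w, h_l, p=2).pow(2)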

Step 2: Optimal Transport Problem

OTPO formulates token weighting as an unbalanced optimal transport optimization:

(Image from the OTPO paper: the unbalanced optimal transport objective.)

Here Γ is the transport plan (what we are solving for), which aligns tokens between the chosen and rejected responses; Ω controls the entropy regularization; and the KL terms keep the marginal distributions of Γ close to the uniform weights of naive DPO. The solution Γ* tells us how to optimally transport probability mass between chosen and rejected tokens.
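
The exact unbalanced objective is best read from the paper, but to give a feel for what solving for Γ involves, here is a sketch of a plain entropy-regularized Sinkhorn solver with hard uniform marginals. Treat it as a simplified stand-in: OTPO's unbalanced formulation only keeps the marginals close to uniform via the KL terms rather than enforcing them exactly, and the regularization strength reg below plays the role of the entropy term controlled by Ω. All names and default values are illustrative.

    import torch

    def sinkhorn_plan(M, reg=0.1, n_iters=200):
        # M: (len_w, len_l) cost matrix; returns a transport plan Gamma of the same shape.
        n, m = M.shape
        a = torch.full((n,), 1.0 / n)          # uniform marginal over chosen tokens
        b = torch.full((m,), 1.0 / m)          # uniform marginal over rejected tokens
        K = torch.exp(-M / reg)                # Gibbs kernel
        u = torch.ones(n)
        for _ in range(n_iters):               # Sinkhorn iterations
            v = b / (K.t() @ u)
            u = a / (K @ v)
        return u.unsqueeze(1) * K * v.unsqueeze(0)   # Gamma = diag(u) K diag(v)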

    Step 3: Computing Token Weights

From the optimal transport solution, we derive token-level weights by summing along each dimension:

(Image from the OTPO paper: token weights obtained by summing the transport plan along each dimension.)

Here, Γ(i, j) represents the weight assigned to each token pair (i, j) from the chosen (w) and rejected (l) responses. Finally, these weights replace the uniform weighting in the DPO loss, giving a reward difference with the weighting scheme applied:

(Image from the OTPO paper: the reward difference under the token weighting scheme.)
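
Under the same illustrative assumptions as the earlier sketches, the code below derives token weights by summing the transport plan along each axis and uses them to reweight the per-token log-ratios (log π_θ − log π_ref) in a DPO-style reward difference. Normalization and other details in the actual OTPO implementation may differ.

    import torch
    import torch.nn.functional as F

    def otpo_weighted_loss(gamma, chosen_logratios, rejected_logratios, beta=0.1):
        # gamma: (len_w, len_l) transport plan
        # chosen_logratios / rejected_logratios: per-token log pi_theta - log pi_ref
        w_chosen = gamma.sum(dim=1)       # weight of each chosen-response token
        w_rejected = gamma.sum(dim=0)     # weight of each rejected-response token
        reward_diff = (w_chosen * chosen_logratios).sum() - (w_rejected * rejected_logratios).sum()
        return -F.logsigmoid(beta * reward_diff)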

Experiment Results and Limitations

OTPO was tested on a variety of tasks, but in a controlled environment. When applied to summarization tasks, it showed about an 8.5% improvement over other methods. When it was tested for length bias on the UltraFeedback dataset with smaller models like Llama-3-8B, OTPO produced shorter responses. These preliminary tests provide evidence that OTPO helps reduce verbosity and improve the quality of responses, making them more likely to be chosen by humans.

The testing was not exhaustive enough to establish accuracy numbers across domains, and results were mixed on different datasets. OTPO also requires expensive cost matrix and transport plan calculations. In addition, an LLM-as-judge was used to score response quality, followed by a manual review by a few people; such methods are useful but depend entirely on reviewers, who may be biased toward certain datasets.

    Conclusion

LLM alignment has been a major topic of research, and OTPO presents promising results in a controlled environment. While this approach is not perfect, the introduction of weighted token preferences lays the groundwork for more fine-grained preference modeling in alignment tasks.

    References:

1. Direct Preference Optimization (DPO). https://arxiv.org/pdf/2305.18290
2. Optimal Transport-Based Token Weighting Scheme for Enhanced Preference Optimization (OTPO). https://arxiv.org/pdf/2505.18720
3. Eliminating Biased Length Reliance of Direct Preference Optimization (SamPO). https://arxiv.org/pdf/2406.10957


