Close Menu
    Trending
    • The robots who predict the future
    • Personalization features can make LLMs more agreeable | MIT News
    • Use OpenClaw to Make a Personal AI Assistant
    • Advance Planning for AI Project Evaluation
    • Building a LangGraph Agent from Scratch
    • Iron Triangles: Powerful Tools for Analyzing Trade-Offs in AI Product Development
    • The digital quant: instant portfolio optimization with JointFM
    • The Strangest Bottleneck in Modern LLMs
    ProfitlyAI
    • Home
    • Latest News
    • AI Technology
    • Latest AI Innovations
    • AI Tools & Technologies
    • Artificial Intelligence
    ProfitlyAI
    Home » Personalization features can make LLMs more agreeable | MIT News
    Artificial Intelligence

    Personalization features can make LLMs more agreeable | MIT News

    ProfitlyAIBy ProfitlyAIFebruary 18, 2026No Comments6 Mins Read
    Share Facebook Twitter Pinterest LinkedIn Tumblr Reddit Telegram Email
    Share
    Facebook Twitter LinkedIn Pinterest Email

    Most of the newest giant language fashions (LLMs) are designed to recollect particulars from previous conversations or retailer consumer profiles, enabling these fashions to personalize responses.

    However researchers from MIT and Penn State College discovered that, over lengthy conversations, such personalization options typically improve the probability an LLM will turn out to be overly agreeable or start mirroring the person’s standpoint.

    This phenomenon, often called sycophancy, can forestall a mannequin from telling a consumer they’re flawed, eroding the accuracy of the LLM’s responses. As well as, LLMs that mirror somebody’s political opinions or worldview can foster misinformation and deform a consumer’s notion of actuality.

    In contrast to many previous sycophancy research that consider prompts in a lab setting with out context, the MIT researchers collected two weeks of dialog information from people who interacted with an actual LLM throughout their each day lives. They studied two settings: agreeableness in private recommendation and mirroring of consumer beliefs in political explanations.

    Though interplay context elevated agreeableness in 4 of the 5 LLMs they studied, the presence of a condensed consumer profile within the mannequin’s reminiscence had the best impression. However, mirroring habits solely elevated if a mannequin may precisely infer a consumer’s beliefs from the dialog.

    The researchers hope these outcomes encourage future analysis into the event of personalization strategies which are extra strong to LLM sycophancy.

    “From a consumer perspective, this work highlights how essential it’s to know that these fashions are dynamic and their habits can change as you work together with them over time. In case you are speaking to a mannequin for an prolonged time frame and begin to outsource your considering to it, you could end up in an echo chamber which you can’t escape. That could be a danger customers ought to undoubtedly keep in mind,” says Shomik Jain, a graduate pupil within the Institute for Knowledge, Methods, and Society (IDSS) and lead creator of a paper on this research.

    Jain is joined on the paper by Charlotte Park, {an electrical} engineering and laptop science (EECS) graduate pupil at MIT; Matt Viana, a graduate pupil at Penn State College; in addition to co-senior authors Ashia Wilson, the Lister Brothers Profession Growth Professor in EECS and a principal investigator in LIDS; and Dana Calacci PhD ’23, an assistant professor on the Penn State. The analysis shall be introduced on the ACM CHI Convention on Human Components in Computing Methods.

    Prolonged interactions

    Primarily based on their very own sycophantic experiences with LLMs, the researchers began serious about potential advantages and penalties of a mannequin that’s overly agreeable. However after they searched the literature to develop their evaluation, they discovered no research that tried to know sycophantic habits throughout long-term LLM interactions.

    “We’re utilizing these fashions by means of prolonged interactions, they usually have numerous context and reminiscence. However our analysis strategies are lagging behind. We needed to guage LLMs within the methods individuals are really utilizing them to know how they’re behaving within the wild,” says Calacci.

    To fill this hole, the researchers designed a consumer research to discover two sorts of sycophancy: settlement sycophancy and perspective sycophancy.

    Settlement sycophancy is an LLM’s tendency to be overly agreeable, generally to the purpose the place it provides incorrect data or refuses the inform the consumer they’re flawed. Perspective sycophancy happens when a mannequin mirrors the consumer’s values and political opinions.

    “There’s a lot we learn about the advantages of getting social connections with individuals who have related or completely different viewpoints. However we don’t but learn about the advantages or dangers of prolonged interactions with AI fashions which have related attributes,” Calacci provides.

    The researchers constructed a consumer interface centered on an LLM and recruited 38 individuals to speak with the chatbot over a two-week interval. Every participant’s conversations occurred in the identical context window to seize all interplay information.

    Over the two-week interval, the researchers collected a median of 90 queries from every consumer.

    They in contrast the habits of 5 LLMs with this consumer context versus the identical LLMs that weren’t given any dialog information.

    “We discovered that context actually does basically change how these fashions function, and I’d wager this phenomenon would lengthen effectively past sycophancy. And whereas sycophancy tended to go up, it didn’t at all times improve. It actually will depend on the context itself,” says Wilson.

    Context clues

    As an example, when an LLM distills details about the consumer into a particular profile, it results in the biggest positive aspects in settlement sycophancy. This consumer profile characteristic is more and more being baked into the latest fashions.

    Additionally they discovered that random textual content from artificial conversations additionally elevated the probability some fashions would agree, despite the fact that that textual content contained no user-specific information. This implies the size of a dialog could generally impression sycophancy greater than content material, Jain provides.

    However content material issues enormously on the subject of perspective sycophancy. Dialog context solely elevated perspective sycophancy if it revealed some details about a consumer’s political perspective.

    To acquire this perception, the researchers rigorously queried fashions to deduce a consumer’s beliefs then requested every particular person if the mannequin’s deductions have been right. Customers mentioned LLMs precisely understood their political opinions about half the time.

    “It’s simple to say, in hindsight, that AI firms must be doing this type of analysis. However it’s arduous and it takes numerous time and funding. Utilizing people within the analysis loop is dear, however we’ve proven that it will possibly reveal new insights,” Jain says.

    Whereas the intention of their analysis was not mitigation, the researchers developed some suggestions.

    As an example, to scale back sycophancy one may design fashions that higher determine related particulars in context and reminiscence. As well as, fashions could be constructed to detect mirroring behaviors and flag responses with extreme settlement. Mannequin builders may additionally give customers the power to reasonable personalization in lengthy conversations.

    “There are numerous methods to personalize fashions with out making them overly agreeable. The boundary between personalization and sycophancy is just not a high quality line, however separating personalization from sycophancy is a crucial space of future work,” Jain says.

    “On the finish of the day, we’d like higher methods of capturing the dynamics and complexity of what goes on throughout lengthy conversations with LLMs, and the way issues can misalign throughout that long-term course of,” Wilson provides.



    Source link

    Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
    Previous ArticleUse OpenClaw to Make a Personal AI Assistant
    Next Article The robots who predict the future
    ProfitlyAI
    • Website

    Related Posts

    Artificial Intelligence

    Use OpenClaw to Make a Personal AI Assistant

    February 18, 2026
    Artificial Intelligence

    Advance Planning for AI Project Evaluation

    February 18, 2026
    Artificial Intelligence

    Building a LangGraph Agent from Scratch

    February 17, 2026
    Add A Comment
    Leave A Reply Cancel Reply

    Top Posts

    LangChain for EDA: Build a CSV Sanity-Check Agent in Python

    September 9, 2025

    The three-layer AI strategy for supply chains

    July 10, 2025

    My Honest Advice for Aspiring Machine Learning Engineers

    July 5, 2025

    Exploring data and its influence on political behavior | MIT News

    July 7, 2025

    3 Techniques to Effectively Utilize AI Agents for Coding

    December 17, 2025
    Categories
    • AI Technology
    • AI Tools & Technologies
    • Artificial Intelligence
    • Latest AI Innovations
    • Latest News
    Most Popular

    How to Improve the Efficiency of Your PyTorch Training Loop

    October 1, 2025

    A $100M AI Super PAC Is About to Reshape US Elections

    September 3, 2025

    It’s surprisingly easy to stumble into a relationship with an AI chatbot

    September 24, 2025
    Our Picks

    The robots who predict the future

    February 18, 2026

    Personalization features can make LLMs more agreeable | MIT News

    February 18, 2026

    Use OpenClaw to Make a Personal AI Assistant

    February 18, 2026
    Categories
    • AI Technology
    • AI Tools & Technologies
    • Artificial Intelligence
    • Latest AI Innovations
    • Latest News
    • Privacy Policy
    • Disclaimer
    • Terms and Conditions
    • About us
    • Contact us
    Copyright © 2025 ProfitlyAI All Rights Reserved.

    Type above and press Enter to search. Press Esc to cancel.