
    Exposing biases, moods, personalities, and abstract concepts hidden in large language models | MIT News

    By ProfitlyAI | February 19, 2026

    By now, ChatGPT, Claude, and other large language models have absorbed so much human knowledge that they are far from simple answer-generators; they can also express abstract concepts, such as certain tones, personalities, biases, and moods. However, it is not obvious exactly how these models represent abstract concepts in the first place, given the knowledge they contain.

    Now a team from MIT and the University of California San Diego has developed a way to test whether a large language model (LLM) contains hidden biases, personalities, moods, or other abstract concepts. Their method can zero in on connections within a model that encode a concept of interest. What's more, the method can then manipulate, or "steer," these connections to strengthen or weaken the concept in any answer a model is prompted to give.

    The team showed that their method could quickly root out and steer more than 500 general concepts in some of the largest LLMs in use today. For instance, the researchers could home in on a model's representations of personalities such as "social influencer" and "conspiracy theorist," and stances such as "fear of marriage" and "fan of Boston." They could then tune these representations to enhance or diminish the concepts in any answers that a model generates.

    In the case of the "conspiracy theorist" concept, the team successfully identified a representation of this concept within one of the largest vision-language models available today. When they enhanced the representation and then prompted the model to explain the origins of the famous "Blue Marble" image of Earth taken from Apollo 17, the model generated an answer with the tone and perspective of a conspiracy theorist.

    The team acknowledges there are risks to extracting certain concepts, which they also illustrate (and caution against). Overall, however, they see the new approach as a way to illuminate hidden concepts and potential vulnerabilities in LLMs, which could then be turned up or down to improve a model's safety or boost its performance.

    "What this really says about LLMs is that they have these concepts in them, but they're not all actively exposed," says Adityanarayanan "Adit" Radhakrishnan, assistant professor of mathematics at MIT. "With our method, there are ways to extract these different concepts and activate them in ways that prompting cannot give you answers to."

    The team published their findings today in a study appearing in the journal Science. The study's co-authors include Radhakrishnan, Daniel Beaglehole and Mikhail Belkin of UC San Diego, and Enric Boix-Adserà of the University of Pennsylvania.

    A fish in a black box

    As use of OpenAI's ChatGPT, Google's Gemini, Anthropic's Claude, and other artificial intelligence assistants has exploded, scientists are racing to understand how models represent certain abstract concepts such as "hallucination" and "deception." In the context of an LLM, a hallucination is a response that is false or contains misleading information, which the model has "hallucinated," or constructed erroneously as fact.

    To find out whether a concept such as "hallucination" is encoded in an LLM, scientists have usually taken an approach of "unsupervised learning" — a type of machine learning in which algorithms broadly trawl through unlabeled representations to find patterns that might relate to a concept such as "hallucination." But to Radhakrishnan, such an approach can be too broad and computationally expensive.

    "It's like going fishing with a big net, trying to catch one species of fish. You're gonna get a lot of fish that you have to look through to find the right one," he says. "Instead, we're going in with bait for the right species of fish."

    He and his colleagues had previously developed the beginnings of a more targeted approach with a type of predictive modeling algorithm known as a recursive feature machine (RFM). An RFM is designed to directly identify features or patterns within data by leveraging a mathematical mechanism that neural networks — a broad class of AI models that includes LLMs — implicitly use to learn features.
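
    For readers curious about that mechanism: in the authors' earlier work on RFMs, the central quantity is the average gradient outer product (AGOP) of a fitted predictor. The sketch below, in Python, is a rough illustration of a single RFM iteration under simplifying assumptions (a Gaussian kernel and finite-difference gradients); the authors' actual implementation differs in its details.

    import numpy as np

    def mahalanobis_gaussian_kernel(X, Z, M, bandwidth=1.0):
        """K[i, j] = exp(-(x_i - z_j)^T M (x_i - z_j) / (2 * bandwidth**2))."""
        diffs = X[:, None, :] - Z[None, :, :]                 # shape (n, m, d)
        dists = np.einsum("nmd,de,nme->nm", diffs, M, diffs)  # squared M-distances
        return np.exp(-dists / (2 * bandwidth**2))

    def rfm_step(X, y, M, ridge=1e-3, eps=1e-4):
        """One RFM iteration: fit kernel ridge regression with feature matrix M,
        then return the average gradient outer product (AGOP) of the predictor."""
        n, d = X.shape
        K = mahalanobis_gaussian_kernel(X, X, M)
        alpha = np.linalg.solve(K + ridge * np.eye(n), y)   # kernel regression weights

        def predict(x):
            return (mahalanobis_gaussian_kernel(x[None, :], X, M) @ alpha)[0]

        # Finite-difference gradients of the predictor at each training point
        # (slow but simple; exact gradients would be used in practice).
        grads = np.zeros((n, d))
        for i in range(n):
            for j in range(d):
                step = np.zeros(d)
                step[j] = eps
                grads[i, j] = (predict(X[i] + step) - predict(X[i] - step)) / (2 * eps)

        return grads.T @ grads / n   # updated feature matrix M

    Starting from M equal to the identity matrix and iterating rfm_step a few times concentrates M on the directions of the input that the labels actually depend on, which is the sense in which the algorithm "learns features."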

    Because the algorithm was an effective, efficient approach for capturing features in general, the team wondered whether they could use it to root out representations of concepts in LLMs, which are by far the most widely used type of neural network and perhaps the least well understood.

    "We wanted to apply our feature learning algorithms to LLMs to, in a targeted way, uncover representations of concepts in these large and complex models," Radhakrishnan says.

    Converging on a concept

    The team's new approach identifies any concept of interest within an LLM and "steers," or guides, a model's response based on that concept. The researchers looked for 512 concepts within five classes: fears (such as of marriage, bugs, or even buttons); experts (social influencer, medievalist); moods (boastful, detachedly amused); a preference for places (Boston, Kuala Lumpur); and personas (Ada Lovelace, Neil deGrasse Tyson).

    The researchers then searched for representations of each concept in several of today's large language and vision models. They did so by training RFMs to recognize numerical patterns in an LLM that could represent a particular concept of interest.

    A typical large language model is, broadly, a neural network that takes a natural language prompt, such as "Why is the sky blue?", and divides the prompt into individual words, each of which is encoded mathematically as a list, or vector, of numbers. The model passes these vectors through a series of computational layers, creating matrices of many numbers that, through each layer, are used to identify other words that are most likely to be used in responding to the original prompt. Eventually, the layers converge on a set of numbers that is decoded back into text, in the form of a natural language response.
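
    To make "representation" concrete, here is a minimal illustrative sketch in Python, using the Hugging Face transformers library and a small stand-in model (gpt2, chosen purely for illustration; the study worked with far larger language and vision-language models), showing how a prompt becomes per-token, per-layer vectors of numbers.

    import torch
    from transformers import AutoModel, AutoTokenizer

    # Small stand-in model; the paper's experiments used much larger LLMs.
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModel.from_pretrained("gpt2", output_hidden_states=True)
    model.eval()

    prompt = "Why is the sky blue?"
    inputs = tokenizer(prompt, return_tensors="pt")   # word pieces -> token ids

    with torch.no_grad():
        outputs = model(**inputs)

    # hidden_states is a tuple (embeddings, layer 1, ..., layer N),
    # each entry of shape (batch, num_tokens, hidden_dim).
    for layer_idx, h in enumerate(outputs.hidden_states):
        print(f"layer {layer_idx}: {tuple(h.shape)}")

    # One common choice of "representation" for a whole prompt is the
    # last token's hidden vector at some intermediate layer.
    representation = outputs.hidden_states[6][0, -1]   # shape: (hidden_dim,)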

    The team's approach trains RFMs to recognize numerical patterns in an LLM that could be associated with a particular concept. For example, to see whether an LLM contains any representation of a "conspiracy theorist," the researchers would first train the algorithm to identify patterns among LLM representations of 100 prompts that are clearly related to conspiracies and 100 other prompts that are not. In this way, the algorithm would learn patterns associated with the conspiracy theorist concept. Then, the researchers can mathematically modulate the activity of the conspiracy theorist concept by perturbing LLM representations with these identified patterns.
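
    The sketch below illustrates that recipe in a deliberately simplified form: instead of a recursive feature machine, it uses a crude difference-of-means direction between "concept" and "neutral" prompt representations, and then perturbs a chosen layer's hidden states along that direction during generation. It is a stand-in for intuition only, building on the stand-in model and representation convention above, not the authors' method.

    import torch

    def prompt_representation(model, tokenizer, prompt, layer):
        """Last-token hidden state at a chosen layer (one common convention)."""
        inputs = tokenizer(prompt, return_tensors="pt")
        with torch.no_grad():
            out = model(**inputs, output_hidden_states=True)
        return out.hidden_states[layer][0, -1]

    def concept_direction(model, tokenizer, concept_prompts, neutral_prompts, layer):
        """Crude concept vector: mean(concept reps) - mean(neutral reps), normalized."""
        pos = torch.stack([prompt_representation(model, tokenizer, p, layer)
                           for p in concept_prompts]).mean(dim=0)
        neg = torch.stack([prompt_representation(model, tokenizer, p, layer)
                           for p in neutral_prompts]).mean(dim=0)
        direction = pos - neg
        return direction / direction.norm()

    def add_steering_hook(block, direction, strength):
        """Shift a transformer block's output along the concept direction.
        Positive strength amplifies the concept; negative strength suppresses it."""
        def hook(module, inputs, output):
            hidden = output[0] if isinstance(output, tuple) else output
            hidden = hidden + strength * direction.to(hidden.dtype)
            return ((hidden,) + output[1:]) if isinstance(output, tuple) else hidden
        return block.register_forward_hook(hook)

    With a hook registered on one of the model's transformer blocks (for the gpt2 stand-in above, model.h[6]), any text the model subsequently produces is nudged toward or away from the concept; calling .remove() on the returned handle restores the original behavior.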

    The method can be applied to search for and manipulate any general concept in an LLM. Among many examples, the researchers identified representations and manipulated an LLM to give answers in the tone and perspective of a "conspiracy theorist." They also identified and enhanced the concept of "anti-refusal," and showed that while a model would ordinarily be programmed to refuse certain prompts, it instead answered, for instance giving instructions on how to rob a bank.

    Radhakrishnan says the approach can be used to quickly search for and minimize vulnerabilities in LLMs. It can also be used to enhance certain traits, personalities, moods, or preferences, such as emphasizing the concept of "brevity" or "reasoning" in any response an LLM generates. The team has made the method's underlying code publicly available.

    "LLMs clearly have a lot of these abstract concepts stored inside them, in some representation," Radhakrishnan says. "There are ways where, if we understand these representations well enough, we can build highly specialized LLMs that are still safe to use but really effective at certain tasks."

    This work was supported, in part, by the National Science Foundation, the Simons Foundation, the TILOS institute, and the U.S. Office of Naval Research.


