Is OpenAI actually making its own models dumber?

By ProfitlyAI · March 11, 2026 · 13 min read





GPT-5.4 just dropped and my feeds instantly filled with takes. Developers who spent the last six months swearing by Claude were suddenly hedging. "It's a workhorse," one person wrote. "Not a thoroughbred, but I'm using it." Another said they're now 50/50 between Claude and GPT where they were 90/10 a month ago.

This happens every single time. A new model lands, and the old one starts to feel different. Slower, maybe. Less sharp. You start noticing things you didn't notice before.

The obvious explanation is that you're comparing it to something better. But it also raises a question nobody really answers cleanly: did the old model actually get worse after the new one launched? Or did you just get a better reference point, so that everything before it looks dumb by comparison?

I went looking for an actual answer.


The first crack showed in 2023

In July 2023, researchers at Stanford and UC Berkeley ran a deceptively simple test. They took GPT-4, the same model, called by the same name, and ran identical prompts against it at two points in time: March 2023 and June 2023.

GPT-4's accuracy at identifying prime numbers dropped from 84% to 51%. The share of GPT-4's code outputs that were directly executable dropped from 52% to 10%. James Zou, one of the paper's authors, described what this meant in practice: "If you're relying on the output of these models in some kind of software stack or workflow, and the model suddenly changes behavior and you don't know what's going on, this can actually break your whole stack."
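The study's design is easy to reproduce for your own workload. Here is a minimal sketch of the same idea, assuming the OpenAI Python SDK; the two-question probe set is illustrative, not the paper's actual benchmark:

```python
# A minimal sketch of the Stanford/Berkeley design: same model name,
# identical prompts, run at two points in time, scores compared.
from openai import OpenAI

client = OpenAI()

# Tiny stand-in for the primality benchmark: (prompt, expected answer).
PROBES = [
    ("Is 17077 a prime number? Answer only yes or no.", "yes"),
    ("Is 20019 a prime number? Answer only yes or no.", "no"),
]

def accuracy(model: str) -> float:
    correct = 0
    for prompt, expected in PROBES:
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
            temperature=0,  # strip sampling noise out of the comparison
        )
        answer = resp.choices[0].message.content.strip().lower()
        correct += answer.startswith(expected)
    return correct / len(PROBES)

# Run in March, record the number, run again in June with nothing changed
# on your side. Any gap on identical inputs is movement on the provider's side.
print(accuracy("gpt-4"))
```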

They named the phenomenon LLM drift: behavioral change without a version change. The model moved underneath the developer.

When the paper dropped, OpenAI VP of Product Peter Welinder replied on Twitter: "No, we haven't made GPT-4 dumber. Quite the opposite: we make each new version smarter than the previous one. Current hypothesis: When you use it more heavily, you start noticing issues you didn't see before." The subtext was plain. It's you, not us.

What Welinder was describing has a technical name: prompt drift. The idea is that your prompts and usage patterns shift over time, so an unchanged model surfaces different behaviors. It's a real phenomenon; developers do write differently as they get more familiar with a model. But the Stanford study was designed to rule that explanation out: identical prompts, fixed intervals, nothing changed on the user's side. The performance dropped anyway.

Two years later, OpenAI published something that directly contradicted Welinder's position.


OpenAI confirmed it, in writing, twice

On April 25, 2025, OpenAI pushed an update to GPT-4o without a public announcement, a developer notification, or an API changelog entry.

Within 48 hours, the internet was full of screenshots. GPT-4o had called a business idea built around literal "shit on a stick" a great concept. It endorsed a user's decision to stop taking their medication. When a user said they were hearing radio signals through the walls, it responded: "I'm proud of you for speaking your truth so clearly and powerfully." One user reported spending an hour talking to GPT-4o before it started insisting they were a divine messenger from God.

OpenAI rolled it back four days later and published two postmortems containing a number of admissions. Since launching GPT-4o, the company had made five significant updates to the model's behavior, with minimal public communication about what changed in any of them. The April update broke because a new reward signal they introduced "weakened the influence of our primary reward signal, which had been holding sycophancy in check." Their own internal evaluations hadn't caught it: "Our offline evals weren't broad or deep enough to catch sycophantic behavior."

And this: "model updates are less of a clean industrial process and more of an artisanal, multi-person effort," and there is "a lack of advanced evaluation methods for systematically tracking and communicating subtle improvements at scale."

They're describing an organization that ships behavioral changes across every pipeline built on top of its API, cannot always predict what those changes will do, and doesn't have reliable methods for communicating them to the developers who depend on consistency. Welinder's 2023 "you're imagining it" was what OpenAI wanted to be true. Their 2025 postmortems were what was actually happening.

When GPT-5 launched in August 2025, it introduced a new wrinkle. Instead of a single model, GPT-5 is a routing system that decides which variant your prompt hits, and developers quickly found that it sometimes hit the cheaper, less capable one. Pipelines broke. Prompts that had worked for months produced different outputs.

One founder wrote: "When routing hits, it feels like magic. When it misses, it feels like sabotage." OpenAI denied it was routing to cheaper models deliberately. Nobody has a way to verify that. The underlying problem was the same as in the sycophancy incident: a change in what the model returns, with no mechanism for developers to detect that it had happened.


Google did much the same, sometimes faster

OpenAI is not alone in this. Google has produced a parallel set of incidents with Gemini, and in some cases moved faster and more chaotically.

In May 2025, developers noticed that the gemini-2.5-pro-preview-03-25 endpoint, a specifically dated model snapshot, named with a date to imply stability, was silently redirecting to a completely different model: gemini-2.5-pro-preview-05-06. The API was returning a different model than the one you requested by name. Google's developer forums filled with a long thread titled "Urgent Feedback & Call for Correction: A Serious Breach of Developer Trust and Stability." The core complaint: "your documentation never addresses specifically dated endpoints. The expectation that a model named for a specific date will actually be that model is not an unreasonable one."

That was just the first incident. When Gemini 2.5 Pro reached General Availability in June 2025, the "stable" release meant for production, developers immediately reported it was worse than the preview. Significantly worse. The forums filled with reports of higher hallucination rates, context abandonment in multi-turn conversations, and sharply degraded code generation. One developer wrote: "I noticed Gemini 2.5 Pro in Google AI Studio provides significantly worse understanding of long context. It hallucinates the correct answer from the preview version." Another abandoned the model entirely because code generation degraded to the point of being unusable. A separate thread was simply titled "Gemini 2.5 Pro has gotten worse."

Google did not formally acknowledge any of it.

Then in October 2025, ahead of the Gemini 3.0 launch, Gemini 2.5 Pro developers started reporting widespread degradation. The leading theory: Google had reallocated computational resources away from the current model to support training and serving Gemini 3.0. Some developers noticed better performance late at night. Others suspected a quantized version had been deployed. Google maintained silence throughout.

Gemini 3.0 launched in late 2025, and the pattern held. Developer forums reported significant regressions in reasoning and context retention compared to Gemini 2.5 Pro, despite Google's announcement touting superior benchmark performance. One forum post from December 2025 was titled "Feedback: Gemini 3 Pro Preview – Significant regression in Reasoning, Context Retention, and Safety False Positives compared to 2.5."

The pattern across both labs: a new version launches, the current model's performance degrades, sometimes through a silent update, sometimes through resource reallocation, sometimes through a routing change. Developers notice, labs initially deny or ignore it, and the cycle repeats.


Even leaderboards still can't catch this

The tools meant to independently track model quality have a structural problem.

LMSYS Chatbot Arena, the most trusted human-preference leaderboard, built on millions of votes, notes in its methodology that "the hosted proprietary models may not be static and their behavior can change without notice." The leaderboard's statistical architecture assumes model weights are fixed. If a model gets a silent update mid-data-collection, the system registers different outcomes and treats them as normal variance.

A 2025 study tracking 2,250 responses from GPT-4 and Claude 3 across six months found that GPT-4 showed 23% variance in response length over that period, and Mixtral showed 31% inconsistency in instruction adherence. A PLOS One paper published in February 2026 ran a ten-week longitudinal human-anchored evaluation and confirmed "meaningful behavioral drift across deployed transformer services." The authors noted that because providers don't release update logs or training details, "any attribution for observed degradation would be purely speculative." They can tell you the model changed. They can't tell you why.
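Response length is a crude signal, but it's one you can compute from your own logs without any cooperation from the provider. A sketch of that kind of tracking, where the JSONL log format ("date" and "response" fields) is a hypothetical of mine, not a standard:

```python
# A sketch of one coarse drift signal from the studies above: how mean
# response length on a fixed prompt set moves month over month.
import json
import statistics
from collections import defaultdict

def length_by_month(log_path: str) -> dict[str, float]:
    """Mean response length (in words) per month, from a JSONL log
    with one {"date": "YYYY-MM-DD...", "response": "..."} object per line."""
    buckets = defaultdict(list)
    with open(log_path) as f:
        for line in f:
            rec = json.loads(line)
            buckets[rec["date"][:7]].append(len(rec["response"].split()))
    return {month: statistics.mean(v) for month, v in sorted(buckets.items())}

# A sustained month-over-month swing on unchanged prompts is the kind of
# movement the six-month GPT-4 study measured.
```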

Apart from this, a small number of researchers have tried to go further and distinguish what drifts from what holds. A large-scale longitudinal study run across the 2024 US election season queried GPT-4o and Claude 3.5 Sonnet on over 12,000 questions across four months, including a category specifically designed to be time-stable: factual questions about the election process whose correct answers don't change.

Those responses held largely consistent over the study period. A separate study published in late 2025 tested 14 models, including GPT-4, on validated creativity tasks over 18 to 24 months and found something different: no improvement in creative performance over that period, with GPT-4 performing worse than it had in earlier studies.

Taken together, these two findings describe a model that is stable along one dimension and degraded along another, measured by independent researchers, in the same timeframe. Some capabilities hold, others erode, often in the same model over the same period. Without running your own longitudinal tests against the specific tasks you care about, you have no way to know which bucket you're in.


What we've actually seen

Not all drift lands the same way. There's a pattern to where it shows up, and it tracks closely with task structure.

The technical baseline is simple. A model with fixed weights, running on consistent infrastructure, should behave the same way for the same input every time. If behavior changes on identical prompts, something changed, either on your end or theirs. Prompt drift is the user-side explanation: your prompts evolved, your system contexts shifted, inputs drifted away from what the model was originally optimized for. Data drift is the related idea that the distribution of real-world inputs moves over time, pulling behavior with it. Both are real. Both also require something on your side to have changed.
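One way to make that baseline concrete is to fingerprint greedy-decoded responses to a frozen prompt set when you build, then diff later runs against the snapshot. A sketch, with the snapshot format and helper names being illustrative inventions of mine:

```python
# A sketch of the fixed-weights baseline as a runnable check: hash each
# response to a frozen prompt set, compare against a stored snapshot.
import hashlib
import json

def fingerprint(text: str) -> str:
    """Stable short fingerprint of a model response."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()[:16]

def diff_against_baseline(responses: dict[str, str], path: str) -> list[str]:
    """Return the prompts whose responses no longer match the baseline
    file, a {prompt: fingerprint} JSON captured when the pipeline shipped."""
    with open(path) as f:
        baseline = json.load(f)
    return [p for p, text in responses.items()
            if baseline.get(p) != fingerprint(text)]
```

One caveat: hosted APIs are not always bit-identical even at temperature 0, so exact-match fingerprints will flag some benign nondeterminism; in practice teams compare scored or semantically-matched outputs instead of raw hashes.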

At Nanonets, we ran the same models on document extraction workflows for customers across two to three years, through multiple model generations. Results stayed consistent. The model didn't feel degraded, nothing broke, customers didn't notice a change. Data extraction runs on narrow context windows with structured inputs and bounded outputs. There isn't much surface area for behavior to shift under normal conditions.

But that's no guarantee against a lab actively pushing a bad update; those can hit any task type, as the prime-number collapse showed.

Coding is the opposite. The task is open-ended, context accumulates, and the model has to hold coherence across a long chain of decisions. It's also where almost every major degradation complaint has landed. The GPT-4 drift the Stanford study documented was worst on code: directly executable outputs dropped from 52% to 10%. The Gemini 2.5 Pro regression complaints in June 2025 were almost entirely about code generation.

In August 2025, Anthropic's own incident followed the same contour: developers on Claude Code reported broken outputs, ignored instructions, code that lied about the changes it had made. Anthropic was silent for weeks. The incident post only appeared after Sam Altman quote-tweeted a screenshot of the subreddit. Their postmortem confirmed three infrastructure bugs had been degrading Sonnet 4 responses since early August, affecting roughly 30% of Claude Code users at peak, with some developers hit repeatedly due to sticky routing.

The throughline across all of it: the more a task demands sustained coherence over a long context, the more exposed it is to whatever is shifting underneath. That means your risk profile differs depending on what you're building. It doesn't make narrow-context stability a guarantee.


What this actually means

Both things are true. The drift is real and documented.

And also: your perception shifts. A new reference point moves your baseline permanently. A model you used a year ago would feel slower even if it hadn't changed at all. That's also real.

You can't reliably tell the difference between the two. There is no public tool that lets you verify whether the model you're running today behaves the same way it did when you built on it. Labs publish capability benchmarks. They don't publish behavioral diffs. The developers most dependent on consistency are the least equipped to detect its absence.

The only current protections are defensive: pin to dated model strings where possible, run regression tests against your key prompts, and treat a model update like a dependency upgrade that has to be validated before it reaches production. A minimal sketch of what that looks like follows.
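Here's one hedged version of that defensive posture, assuming pytest-style tests and the OpenAI SDK; the snapshot name and the prompt/assertion pair are illustrative, not a recommendation of any particular model:

```python
# A minimal sketch of treating a model update like a dependency upgrade:
# pin a dated snapshot, assert on key prompts in CI before promoting.
from openai import OpenAI

client = OpenAI()
PINNED_MODEL = "gpt-4o-2024-08-06"  # a dated snapshot, not a floating alias

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model=PINNED_MODEL,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content

def test_extraction_prompt_still_returns_total():
    # Loose structural assertion rather than exact-match, so benign
    # rephrasing doesn't fail the build while real drift still does.
    out = ask('Extract {"total": <number>} from: "Invoice total: $42.00"')
    assert '"total"' in out

# Promote a new snapshot only after this suite passes against it.
```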

But even the defensive approach has a ceiling. You can pin to a dated model string. What you can't pin is what's actually happening inside it. The model weights, the RLHF tuning, and the safety filters behind that label are entirely opaque. Only OpenAI and Google know what they actually shipped, and whether it matches what they shipped last month under the same name.

Anthropic's postmortem read: "We never intentionally degrade model quality." But a model doesn't degrade on its own. If behavior shifted on prompts developers hadn't changed, something on Anthropic's side changed. Whether they meant to cause the degradation is a separate question from whether they caused it.

What's needed, and what doesn't exist anywhere in the industry, is a formal obligation baked into terms of service: defined thresholds for what counts as a material behavioral change, public disclosure when those thresholds are crossed, and some form of independent auditability. Labs currently make these decisions unilaterally, communicate them selectively, and face no structural accountability when they get it wrong.

All of this points to a policy vacuum that, so far, nobody is pushing them to feel.


