    The AI doomers feel undeterred

By ProfitlyAI | December 15, 2025


    It’s a bizarre time to be an AI doomer.

This small but influential community of researchers, scientists, and policy experts believes, in the simplest terms, that AI could get so good it could be bad—very, very bad—for humanity. Though many of these people would be more likely to describe themselves as advocates for AI safety than as literal doomsayers, they warn that AI poses an existential threat to humanity. They argue that absent more regulation, the industry could hurtle toward systems it can’t control. They generally expect such systems to follow the creation of artificial general intelligence (AGI), a slippery concept typically understood as technology that can do whatever humans can do, and better.


This story is part of MIT Technology Review’s Hype Correction package, a series that resets expectations about what AI is, what it makes possible, and where we go next.


Though that is far from a universally shared perspective in the AI field, the doomer crowd has had some notable success over the past several years: helping shape AI policy coming from the Biden administration, organizing prominent calls for international “red lines” to prevent AI risks, and getting a bigger (and more influential) megaphone as some of its adherents win science’s most prestigious awards.

But a number of developments over the past six months have put them on the back foot. Talk of an AI bubble has overwhelmed the discourse as tech companies continue to invest in several Manhattan Projects’ worth of data centers without any certainty that future demand will match what they’re building.

And then there was the August release of OpenAI’s latest foundation model, GPT-5, which proved something of a letdown. Maybe that was inevitable, since it was the most hyped AI launch of all time; OpenAI CEO Sam Altman had boasted that GPT-5 felt “like a PhD-level expert” in every topic and told the podcaster Theo Von that the model was so good, it had made him feel “useless relative to the AI.”

Many expected GPT-5 to be a big step toward AGI, but whatever progress the model may have made was overshadowed by a string of technical bugs and the company’s mystifying, quickly reversed decision to shut off access to every previous OpenAI model without warning. And while the new model achieved state-of-the-art benchmark scores, many people felt, perhaps unfairly, that in day-to-day use GPT-5 was a step backward.

All this would seem to threaten some of the very foundations of the doomers’ case. In turn, a competing camp of AI accelerationists, who fear AI is actually not moving fast enough and that the industry is constantly at risk of being smothered by overregulation, is seeing a fresh chance to change how we approach AI safety (or, perhaps more accurately, how we don’t).

This is particularly true of the industry types who have decamped to Washington: “The Doomer narratives were wrong,” declared David Sacks, the longtime venture capitalist turned Trump administration AI czar. “This notion of imminent AGI has been a distraction and harmful and now effectively proven wrong,” echoed the White House’s senior policy advisor for AI and tech investor Sriram Krishnan. (Sacks and Krishnan did not respond to requests for comment.)

(There is, of course, another camp in the AI safety debate: the group of researchers and advocates generally associated with the label “AI ethics.” Though they also favor regulation, they tend to think the pace of AI progress has been overstated and have often written off AGI as a sci-fi story or a scam that distracts us from the technology’s immediate threats. But any potential doomer demise wouldn’t exactly give them the same opening the accelerationists are seeing.)

So where does this leave the doomers? As part of our Hype Correction package, we decided to ask some of the movement’s biggest names to see if the recent setbacks and general vibe shift had altered their views. Are they frustrated that policymakers no longer seem to heed their warnings? Are they quietly adjusting their timelines for the apocalypse?

Recent interviews with 20 people who study or advocate for AI safety and governance—including Nobel Prize winner Geoffrey Hinton, Turing Award winner Yoshua Bengio, and high-profile experts like former OpenAI board member Helen Toner—reveal that rather than feeling chastened or lost in the wilderness, they are still deeply committed to their cause, believing that AGI remains not just possible but extremely dangerous.

At the same time, they seem to be grappling with a near contradiction. While they are somewhat relieved that recent developments suggest AGI is further out than they previously thought (“Thank God we have more time,” says AI researcher Jeffrey Ladish), they also feel angry that people in power aren’t taking them seriously enough (Daniel Kokotajlo, lead author of a cautionary forecast called “AI 2027,” calls the Sacks and Krishnan tweets “deranged and/or dishonest”).

Broadly speaking, these experts see the talk of an AI bubble as no more than a speed bump, and disappointment in GPT-5 as more distracting than illuminating. They still generally favor more robust regulation and worry that progress on policy—the implementation of the EU AI Act; the passage of the first major American AI safety bill, California’s SB 53; and new interest in AGI risk from some members of Congress—has become vulnerable as Washington overreacts to what doomers see as short-term failures to live up to the hype.

Some were also eager to correct what they see as the most persistent misconceptions about the doomer world. Though their critics routinely mock them for predicting that AGI is right around the corner, they claim that has never been an essential part of their case: It “isn’t about imminence,” says Berkeley professor Stuart Russell, the author of Human Compatible: Artificial Intelligence and the Problem of Control. Most people I spoke with say their timelines to dangerous systems have actually lengthened slightly in the last year—an important change given how quickly the policy and technical landscapes can shift.

“If someone said there’s a four-mile-diameter asteroid that’s going to hit the Earth in 2067, we wouldn’t say, ‘Remind me in 2066 and we’ll think about it.’”

Many of them, in fact, emphasize the importance of changing timelines. And even if they’re only a tad longer now, Toner tells me that one big-picture story of the ChatGPT era is the dramatic compression of those estimates across the AI world. For a long while, she says, AGI was expected in many decades. Now, for the most part, the expected arrival is sometime in the next few years to 20 years. So even if we have a little bit more time, she (and many of her peers) continue to see AI safety as extremely, vitally urgent. She tells me that if AGI were possible anytime in even the next 30 years, “It’s a huge fucking deal. We should have lots of people working on this.”

So despite the precarious moment doomers find themselves in, their bottom line remains that no matter when AGI is coming (and, again, they say it’s very likely coming), the world is far from ready.

Maybe you agree. Or maybe you think this future is far from guaranteed. Or that it’s the stuff of science fiction. You might even think AGI is one big conspiracy theory. You’re not alone, of course—this topic is polarizing. But whatever you believe about the doomer mindset, there’s no getting around the fact that certain people in this world have a lot of influence. So here are some of the most prominent people in the field, reflecting on this moment in their own words.

Interviews have been edited and condensed for length and clarity.


    The Nobel laureate who’s unsure what’s coming

Geoffrey Hinton, winner of the Turing Award and the Nobel Prize in physics for pioneering deep learning

The biggest change in the last couple of years is that there are people who are hard to dismiss who are saying this stuff is dangerous. Like, [former Google CEO] Eric Schmidt, for example, really acknowledged this stuff could be really bad. He and I were in China recently talking to somebody on the Politburo, the party secretary of Shanghai, to make sure he really understood—and he did. I think in China, the leadership understands AI and its dangers much better because a lot of them are engineers.

I’ve been focused on the longer-term threat: When AIs get more intelligent than us, can we really expect that humans will remain in control or even relevant? But I don’t think anything is inevitable. There’s huge uncertainty on everything. We’ve never been here before. Anybody who’s confident they know what’s going to happen seems silly to me. I think it is very unlikely but maybe it’ll turn out that all the people saying AI is way overhyped are right. Maybe it’ll turn out that we can’t get much further than the current chatbots—we hit a wall due to limited data. I don’t believe that. I think that’s unlikely, but it’s possible.

I also don’t believe people like Eliezer Yudkowsky, who say if anybody builds it, we’re all going to die. We don’t know that.

But if you go on the balance of the evidence, I think it’s fair to say that most experts who know a lot about AI believe it’s very likely that we’ll have superintelligence within the next 20 years. [Google DeepMind CEO] Demis Hassabis says maybe 10 years. Even [prominent AI skeptic] Gary Marcus would probably say, “Well, if you guys make a hybrid system with good old-fashioned symbolic logic … maybe that’ll be superintelligent.” [Editor’s note: In September, Marcus predicted AGI would arrive between 2033 and 2040.]

And I don’t think anybody believes progress will stall at AGI. I think roughly everybody believes a few years after AGI, we’ll have superintelligence, because the AGI will be better than us at building AI.

So while I think it’s clear that the wins are getting harder, simultaneously, people are putting in lots more resources [into developing advanced AI]. I think progress will continue just because there are many more resources going in.

The deep learning pioneer who wishes he’d seen the risks sooner

Yoshua Bengio, winner of the Turing Award, chair of the International AI Safety Report, and founder of LawZero

Some people thought that GPT-5 meant we had hit a wall, but that isn’t quite what you see in the scientific data and trends.

There have been people overselling the idea that AGI is tomorrow morning, which commercially might make sense. But if you look at the various benchmarks, GPT-5 is just where you would expect the models at that point in time to be. By the way, it’s not just GPT-5, it’s Claude and Google models, too. In some areas where AI systems weren’t very good, like Humanity’s Last Exam or FrontierMath, they’re getting much better scores now than they were at the beginning of the year.

At the same time, the overall landscape for AI governance and safety is not good. There’s a strong force pushing against regulation. It’s like climate change. We can put our head in the sand and hope it’s going to be fine, but it doesn’t really deal with the issue.

The biggest disconnect with policymakers is a misunderstanding of the scale of change that’s likely to happen if the trend of AI progress continues. A lot of people in business and governments simply think of AI as just another technology that’s going to be economically very powerful. They don’t understand how much it might change the world if trends continue and we approach human-level AI.

Like many people, I had been blinding myself to the potential risks to some extent. I should have seen it coming much earlier. But it’s human. You’re excited about your work and you want to see the good side of it. That makes us a little bit biased in not really paying attention to the bad things that could happen.

Even a small probability—like 1% or 0.1%—of creating an accident where billions of people die is not acceptable.

The AI veteran who believes AI is progressing—but not fast enough to prevent the bubble from bursting

Stuart Russell, distinguished professor of computer science, University of California, Berkeley, and author of Human Compatible

I hope the idea that talking about existential risk makes you a “doomer” or is “science fiction” comes to be seen as fringe, given that most leading AI researchers and most leading AI CEOs take it seriously.

There have been claims that AI could never pass a Turing test, or you could never have a system that uses natural language fluently, or one that could parallel-park a car. All these claims just end up getting disproved by progress.

People are spending trillions of dollars to make superhuman AI happen. I think they need some new ideas, but there’s a big chance they will come up with them, because many important new ideas have happened in the last couple of years.

My pretty consistent estimate for the last year has been that there’s a 75% chance that these breakthroughs aren’t going to happen in time to rescue the industry from the bursting of the bubble. Because the investments are consistent with a prediction that we’re going to have much better AI that can deliver far more value to real customers. But if those predictions don’t come true, then there’ll be a lot of blood on the floor in the stock markets.

Still, the safety case isn’t about imminence. It’s about the fact that we still don’t have a solution to the control problem. If someone said there’s a four-mile-diameter asteroid that’s going to hit the Earth in 2067, we wouldn’t say, “Remind me in 2066 and we’ll think about it.” We don’t know how long it takes to develop the technology needed to control superintelligent AI.

Looking at precedents, the acceptable level of risk for a nuclear plant melting down is about one in a million per year. Extinction is much worse than that. So maybe set the acceptable risk at one in a billion. But the companies are saying it’s something like one in five. They don’t know how to make it acceptable. And that’s a problem.

The professor trying to set the narrative straight on AI safety

David Krueger, assistant professor in machine learning at the University of Montreal and Yoshua Bengio’s Mila Institute, and founder of Evitable

I think people definitely overcorrected in their response to GPT-5. But there was hype. My recollection was that there were multiple statements from CEOs at various levels of explicitness who basically said that by the end of 2025, we’re going to have an automated drop-in replacement remote worker. But it seems like it’s been underwhelming, with agents just not really being there yet.

I’ve been surprised how much these narratives predicting AGI in 2027 capture the public attention. When 2027 comes around, if things still look pretty normal, I think people are going to feel like the whole worldview has been falsified. And it’s really annoying how often when I’m talking to people about AI safety, they assume that I think we have really short timelines to dangerous systems, or that I think LLMs or deep learning are going to give us AGI. They ascribe all these extra assumptions to me that aren’t necessary to make the case.

I’d expect we need decades for the global coordination problem. So even if dangerous AI is decades off, it’s already urgent. That point seems really lost on a lot of people. There’s this idea of “Let’s wait until we have a really dangerous system and then start governing it.” Man, that’s way too late.

I still think people in the safety community tend to work behind the scenes, with people in power, not really with civil society. It gives ammunition to people who say it’s all just a scam or insider lobbying. That’s not to say that there’s no truth to those narratives, but the underlying risk is still real. We need more public awareness and a broad base of support to have an effective response.

If you actually believe there’s a 10% chance of doom in the next 10 years—which I think a reasonable person should, if they take a close look—then the first thing you think is: “Why are we doing this? This is crazy.” That’s just a very reasonable response once you buy the premise.

The governance expert worried about AI safety’s credibility

Helen Toner, acting executive director of Georgetown University’s Center for Security and Emerging Technology and former OpenAI board member

When I got into the field, AI safety was more of a set of philosophical ideas. Today, it’s a thriving set of subfields of machine learning, filling in the gulf between some of the more “out there” concerns about AI scheming, deception, or power-seeking and real concrete systems we can test and play with.

“I worry that some aggressive AGI timeline estimates from some AI safety people are setting them up for a boy-who-cried-wolf moment.”

AI governance is improving slowly. If we have lots of time to adapt and governance can keep improving slowly, I feel not bad. If we don’t have much time, then we’re probably moving too slow.

I think GPT-5 is generally seen as a disappointment in DC. There’s a pretty polarized conversation around: Are we going to have AGI and superintelligence in the next few years? Or is AI actually just totally all hype and useless and a bubble? The pendulum had maybe swung too far toward “We’re going to have super-capable systems very, very soon.” And so now it’s swinging back toward “It’s all hype.”

I worry that some aggressive AGI timeline estimates from some AI safety people are setting them up for a boy-who-cried-wolf moment. When the predictions about AGI coming in 2027 don’t come true, people will say, “Look at all these people who made fools of themselves. You should never listen to them again.” That’s not the intellectually honest response, if maybe they later changed their mind, or their take was that they only thought it was 20 percent likely and they thought that was still worth listening to. I think that shouldn’t be disqualifying for people to listen to you later, but I do worry it will be a big credibility hit. And that’s applying to people who are very concerned about AI safety and never said anything about very short timelines.

The AI safety researcher who now believes AGI is further out—and is grateful

Jeffrey Ladish, executive director at Palisade Research

In the last year, two big things updated my AGI timelines.

First, the shortage of high-quality data turned out to be a bigger problem than I expected.

Second, the first “reasoning” model, OpenAI’s o1 in September 2024, showed reinforcement learning scaling was easier than I thought it would be. And then months later, you see the o1 to o3 scale-up and you see pretty crazy impressive performance in math and coding and science—domains where it’s easier to sort of verify the results. But while we’re seeing continued progress, it could have been much faster.

All of this bumps up my median estimate to the start of fully automated AI research and development from three years to maybe five or six years. But these are kind of made-up numbers. It’s hard. I want to caveat all this with, like, “Man, it’s just really hard to do forecasting here.”

Thank God we have more time. We have a possibly very brief window of opportunity to really try to understand these systems before they’re capable and strategic enough to pose a real threat to our ability to control them.

But it’s scary to see people think that we’re not making progress anymore when that’s clearly not true. I just know it’s not true because I use the models. One of the downsides of the way AI is progressing is that how fast it’s moving is becoming less legible to normal people.

Now, this isn’t true in some domains—like, look at Sora 2. It’s so obvious to anybody who looks at it that Sora 2 is vastly better than what came before. But if you ask GPT-4 and GPT-5 why the sky is blue, they’ll give you basically the same answer. It’s the right answer. It’s already saturated the ability to tell you why the sky is blue. So the people who I expect to most understand AI progress right now are the people who are actually building with AIs or using AIs on very challenging scientific problems.

The AGI forecaster who saw the critics coming

Daniel Kokotajlo, executive director of the AI Futures Project; an OpenAI whistleblower; and lead author of “AI 2027,” a vivid scenario where—starting in 2027—AIs progress from “superhuman coders” to “wildly superintelligent” systems within the span of months

AI policy seems to be getting worse, like the pro-AI super PAC [launched earlier this year by executives from OpenAI and Andreessen Horowitz to lobby for a deregulatory agenda], and the deranged and/or dishonest tweets from Sriram Krishnan and David Sacks. AI safety research is progressing at the usual pace, which is excitingly fast compared to most fields, but slow compared to how fast it needs to be.

We said on the first page of “AI 2027” that our timelines were somewhat longer than 2027. So even when we launched AI 2027, we expected there to be a bunch of critics in 2028 triumphantly saying we’d been discredited, like the tweets from Sacks and Krishnan. But we thought, and continue to think, that the intelligence explosion will probably happen sometime in the next 5 to 10 years, and that when it does, people will remember our scenario and realize it was closer to the truth than anything else available in 2025.

Predicting the future is hard, but it’s valuable to try; people should aim to communicate their uncertainty about the future in a way that’s specific and falsifiable. That is what we’ve done and very few others have done. Our critics mostly haven’t made predictions of their own and frequently exaggerate and mischaracterize our views. They say our timelines are shorter than they are or ever were, or they say we’re more confident than we are or were.

I feel pretty good about having longer timelines to AGI. It feels like I just got a better prognosis from my doctor. The situation is still basically the same, though.

Garrison Lovely is a freelance journalist and the author of Obsolete, an online publication and forthcoming book on the discourse, economics, and geopolitics of the race to build machine superintelligence (out spring 2026). His writing on AI has appeared in the New York Times, Nature, Bloomberg, Time, the Guardian, The Verge, and elsewhere.


