
    AI Godfathers, Steve Bannon, and Prince Harry Agree on One Thing: Stop Superintelligence

By ProfitlyAI | October 28, 2025


A new open letter is urging a global prohibition on the development of superintelligence, and it is signed by one of the most unusual coalitions imaginable.

The statement, organized by the Future of Life Institute (FLI), has gathered support from AI "godfathers" Geoffrey Hinton and Yoshua Bengio, Apple co-founder Steve Wozniak, and Nobel laureates. But it also includes signatures from political figures like Steve Bannon and Susan Rice, and celebrities like Prince Harry and Meghan Markle.

Their message is short and blunt: "We call for a prohibition on the development of superintelligence," not to be lifted until there is "broad scientific consensus that it will be done safely and controllably" and with "strong public buy-in."

The organizers warn that time is running out, claiming that superintelligence, meaning AI capable of outperforming humans on all cognitive tasks, could arrive in as little as one or two years.

But is this call for a ban practical, or even helpful? To unpack the letter and its implications, I spoke with SmarterX and Marketing AI Institute founder and CEO Paul Roetzer on Episode 176 of The Artificial Intelligence Show.

A "Counterproductive and Silly" Proposal?

Roetzer's initial take is that the letter is primarily about building public awareness. But he also highlights a strong counterargument that exposes the fundamental flaws in the proposal, which comes from Dean Ball of the Foundation for American Innovation.

Vague statements like this, which essentially cannot be operationalized in policy but feel good to sign, are counterproductive and silly. Just as they were two or so years ago, when we went through another cycle of nebulous AI-statement-signing.

Let's put aside the general lack… https://t.co/Yk1Oe8qhee pic.twitter.com/FHE1hkF9KF

    — Dean W. Ball (@deanwball) October 22, 2025

Ball's core issue with the proposed superintelligence pause is this: how can you prove superintelligence is safe without first building it?

This logic suggests that the only way to enforce such a ban would be to create a "sanctioned venue and institution" for superintelligence development. In other words, a global governance body tasked with building the very technology the letter seeks to ban.

This scenario would centralize development and give a consortium of governments (the same entities, Ball notes, with "militaries, police, and a monopoly on legitimate violence") unilateral control over the most powerful technology ever conceived.

"This sounds to me like the worst possible way to build superintelligence," Ball concludes.

While he agrees the current high-speed, competitive race between a few private labs is not ideal, a centralized government monopoly could be even more dangerous.

"Do we need regulation? Absolutely. It doesn't feel like the way we're doing this right now is the safest way," says Roetzer.

"But I don't feel like the superpowers of the world are currently in a place where we're going to be able to negotiate that. There's some other stuff we're trying to work out together that isn't going so smoothly."

A New Framework to Define AGI

This public statement coincides with a new academic paper from many of the same figures, including FLI's Max Tegmark and Yoshua Bengio, attempting to create a concrete, quantifiable definition of AGI.

The paper defines AGI as "matching the cognitive versatility and proficiency of a well-educated adult" and breaks intelligence down into ten core cognitive domains, such as reasoning, math, and memory.

The framework reveals a "jagged" profile for current models: high scores in knowledge and math, but significant deficits in areas like memory and speed.

Crucially, the paper scores GPT-4 at just 27% on this AGI scale, while estimating GPT-5 at 57%, a massive leap that highlights the rapid progress toward that goal.
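The shape of such a ten-domain scorecard can be sketched in a few lines. This is only an illustration: the domain names follow the article's examples (knowledge, math, memory, speed), the other names and all the numbers are hypothetical placeholders, and aggregating by unweighted mean is an assumption, not necessarily the paper's exact method.

```python
# Hypothetical ten-domain AGI scorecard, as an illustration only.
# Scores are invented placeholders on a 0-100 scale; the unweighted-mean
# aggregation is an assumption about how per-domain scores might combine.

def agi_score(domains: dict[str, float]) -> float:
    """Aggregate per-domain scores (0-100) into one overall percentage."""
    return sum(domains.values()) / len(domains)

# A "jagged" profile: strong in knowledge and math, weak in memory and speed.
jagged_profile = {
    "knowledge": 90.0,       # high, per the article
    "math": 85.0,            # high, per the article
    "reasoning": 60.0,
    "memory": 10.0,          # significant deficit, per the article
    "speed": 15.0,           # significant deficit, per the article
    "reading/writing": 70.0,
    "vision": 50.0,
    "audition": 40.0,
    "working memory": 30.0,
    "learning": 20.0,
}

print(f"Overall: {agi_score(jagged_profile):.0f}%")  # prints "Overall: 47%"
```

Under this kind of averaging, a jump like the reported 27% to 57% implies large gains in several individual domains rather than uniform improvement across the board.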

The Bottom Line

So why is this all happening now? Roetzer believes it's because the risks are no longer theoretical, and the key players know it.

AI labs at OpenAI and Meta are now openly calling themselves "superintelligence labs." The entire tech economy is being driven by capital spending on AI infrastructure. And regulators are finally starting to act.

"Everyone's kind of simultaneously realizing like, oh my gosh, this is a huge deal and we don't know how to handle any of it in education and business and the economy," says Roetzer.

While the open letter itself may be more symbolic than practical, it is one clear sign that some in society are beginning to grapple, in their own way, with a rapidly accelerating future.




