    Is AI “normal”? | MIT Technology Review

By ProfitlyAI | April 29, 2025


So against this backdrop, a recent essay by two AI researchers at Princeton felt quite provocative. Arvind Narayanan, who directs the university's Center for Information Technology Policy, and doctoral candidate Sayash Kapoor wrote a 40-page plea for everyone to calm down and think of AI as a normal technology. This runs counter to the "common tendency to treat it akin to a separate species, a highly autonomous, potentially superintelligent entity."

Instead, according to the researchers, AI is a general-purpose technology whose adoption is better compared with the drawn-out rollout of electricity or the internet than with nuclear weapons, though they concede the analogy is in some ways flawed.

The core point, Kapoor says, is that we need to start differentiating between the rapid development of AI methods (the flashy and impressive displays of what AI can do in the lab) and what comes from the actual applications of AI, which in historical examples of other technologies have lagged behind by decades.

"Much of the discussion of AI's societal impacts ignores this process of adoption," Kapoor told me, "and expects societal impacts to occur at the speed of technological development." In other words, the adoption of useful artificial intelligence, in his view, will be less of a tsunami and more of a trickle.

In the essay, the pair make some other bracing arguments: terms like "superintelligence" are so incoherent and speculative that we shouldn't use them; AI won't automate everything but will give rise to a category of human labor that monitors, verifies, and supervises AI; and we should focus more on AI's likelihood of worsening existing problems in society than on the possibility of it creating new ones.

"AI supercharges capitalism," Narayanan says. It has the capacity to either help or hurt inequality, labor markets, the free press, and democratic backsliding, depending on how it's deployed, he says.

There's one alarming deployment of AI that the authors leave out, though: the use of AI by militaries. That, of course, is picking up rapidly, raising alarms that life-and-death decisions are increasingly being aided by AI. The authors exclude that use from their essay because it's hard to analyze without access to classified information, but they say their research on the subject is forthcoming.

One of the biggest implications of treating AI as "normal" is that it would upend the position that both the Biden administration and now the Trump White House have taken: building the best AI is a national security priority, and the federal government should take a range of actions (limiting which chips can be exported to China, dedicating more energy to data centers) to make that happen. In their paper, the two authors refer to US-China "AI arms race" rhetoric as "shrill."
