
    How to build a better AI benchmark

    By ProfitlyAI · May 8, 2025


    The limits of traditional testing

    If AI companies have been slow to respond to the growing failure of benchmarks, it’s partially because the test-scoring approach has been so effective for so long.

    One of the biggest early successes of contemporary AI was the ImageNet challenge, a kind of antecedent to today’s benchmarks. Released in 2010 as an open challenge to researchers, the database held more than 3 million images for AI systems to categorize into 1,000 different classes.
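
    To make the scoring concrete, here is a minimal sketch, not ImageNet’s actual evaluation code, of how a classification benchmark of this kind reduces a model’s predictions to a single headline number, typically top-1 and top-5 accuracy. The predictions and labels below are toy placeholders.

        # Minimal sketch of ImageNet-style scoring: compare each image's ranked class
        # predictions against its ground-truth label and report top-k accuracy.
        from typing import Sequence

        def top_k_accuracy(ranked_predictions: Sequence[Sequence[int]],
                           true_labels: Sequence[int],
                           k: int = 5) -> float:
            """Fraction of images whose true class appears in the model's top-k guesses."""
            hits = sum(1 for ranked, truth in zip(ranked_predictions, true_labels)
                       if truth in ranked[:k])
            return hits / len(true_labels)

        # Toy data: three images, class IDs drawn from a 1,000-class label set (0..999).
        preds = [[7, 2, 9, 4, 1], [3, 5, 0, 8, 6], [42, 7, 1, 2, 6]]
        labels = [9, 3, 100]
        print(top_k_accuracy(preds, labels, k=1))  # 0.333... (only the second image is right at top-1)
        print(top_k_accuracy(preds, labels, k=5))  # 0.666... (the first two images are right within top-5)

    The score depends only on the model’s outputs, not on how they were produced.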

    Crucially, the test was completely agnostic to methods, and any successful algorithm quickly gained credibility regardless of how it worked. When an algorithm called AlexNet broke through in 2012, with a then unconventional form of GPU training, it became one of the foundational results of modern AI. Few would have guessed in advance that AlexNet’s convolutional neural nets would be the key to unlocking image recognition, but after it scored well, no one dared dispute it. (One of AlexNet’s developers, Ilya Sutskever, would go on to cofound OpenAI.)

    A large part of what made the challenge so effective was that there was little practical difference between ImageNet’s object classification task and the actual process of asking a computer to recognize an image. Even if there were disputes about methods, no one doubted that the highest-scoring model would have an advantage when deployed in a real image recognition system.

    But in the 12 years since, AI researchers have applied that same method-agnostic approach to increasingly general tasks. SWE-Bench is commonly used as a proxy for broader coding ability, while other exam-style benchmarks often stand in for reasoning ability. That broad scope makes it difficult to be rigorous about what a specific benchmark measures, which in turn makes it hard to use the findings responsibly.
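
    For comparison, a SWE-Bench-style benchmark also collapses to a single number: each task counts as resolved if the candidate patch makes that task’s tests pass, and the headline score is the resolved fraction. The sketch below is an assumption about the general shape of such scoring, not the benchmark’s actual harness, and the task IDs and outcomes are toy placeholders.

        # Hedged sketch of an exam-style coding benchmark's headline metric:
        # the fraction of tasks whose test suite passes after applying the model's patch.
        def resolved_rate(outcomes: dict[str, bool]) -> float:
            """Fraction of benchmark tasks marked as resolved (tests pass after patching)."""
            if not outcomes:
                return 0.0
            return sum(outcomes.values()) / len(outcomes)

        # Toy run over three placeholder tasks (not real benchmark entries).
        outcomes = {"task-001": True, "task-002": False, "task-003": True}
        print(f"resolved: {resolved_rate(outcomes):.1%}")  # resolved: 66.7%

    Nothing in that number says why a given task was resolved, which is what makes the leap from a resolved rate to “broader coding ability” hard to justify.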

    Where things break down

    Anka Reuel, a PhD student who has been focusing on the benchmark problem as part of her research at Stanford, has become convinced that the evaluation problem is the result of this push toward generality. “We’ve moved from task-specific models to general-purpose models,” Reuel says. “It’s not about a single task anymore but a whole bunch of tasks, so evaluation becomes harder.”

    Like the University of Michigan’s Jacobs, Reuel thinks “the main issue with benchmarks is validity, even more than the practical implementation,” noting: “That’s where a lot of things break down.” For a task as complicated as coding, for instance, it’s nearly impossible to incorporate every possible scenario into your problem set. As a result, it’s hard to gauge whether a model is scoring better because it’s more skilled at coding or because it has more effectively manipulated the problem set. And with so much pressure on developers to achieve record scores, shortcuts are hard to resist.

    For developers, the hope is that success on lots of specific benchmarks will add up to a generally capable model. But the techniques of agentic AI mean a single AI system can encompass a complex array of different models, making it hard to evaluate whether improvement on a specific task will lead to generalization. “There are just many more knobs you can turn,” says Sayash Kapoor, a computer scientist at Princeton and a prominent critic of sloppy practices in the AI industry. “When it comes to agents, they’ve kind of given up on the best practices for evaluation.”


