    How to Lie with Statistics with your Robot Best Friend

By ProfitlyAI | March 30, 2026 | 12 Mins Read


Conventional statistical analysis is often compared to navigating a "Garden of Forking Paths" (Gelman and Loken). It's a term that helps (hopefully) visualize the enormous number of analytical choices researchers must make during an experiment, and how seemingly insignificant "turns" (like which variables to control for, which outliers to remove…) can land researchers at completely different conclusions.

dr strange and the multiverse of madness, but his doctorate is in sociology and he just really needs a publication, man

source: https://www.si.umich.edu/about-umsi/news/ditch-stale-pdf-making-research-papers-interactive-and-more-transparent

While this seems like a largely harmless analogy, navigating this garden to find the single path that goes where you want can be called "p-hacking." Formally, we can define it as any measure a researcher applies to render a previously non-significant hypothesis test significant (usually below 0.05). More informally, I'm sure everyone has had some experience faking the results of an experiment during a high school chemistry or physics class – and while the stakes for a passing grade on a high school assignment are fairly low, under the pressure of formal academia's "publish or perish" (second only to "Spanish or vanish" in intimidation), the temptation to p-hack can be a very real devil on your shoulder.

you know what, just write that it's green on the lab report

    From Vitaly Gariev on Unsplash

While the traditional image of a wired PhD student fudging numbers on a study spreadsheet at 3:00 AM may be the more striking picture of the motivation behind p-hacking, we'll also be exploring what happens when we leave the navigation of this garden of forking paths to artificial intelligence. As AI workflows find their way into every nook and cranny of both academia and industry, it will be crucial to figure out whether our friendly neighbourhood LLMs will act as the ultimate guardians of scientific integrity, or as sycophants automating fraud on an industrial scale.

1. The Human Baseline ("Big Little Lies")


To give a brief introduction and some examples of real p-hacking strategies, we turn to the paper "Big Little Lies" (Stefan and Schönbrodt, 2023), a compendium of the many sneaky, and sometimes even unintentional, ways studies can manipulate their variables and datasets to arrive at suspiciously significant results.

Okay! So let's start with a hypothetical – we're the new data scientist working for an energy drink company that makes extremely ineffective energy drinks, and with the current job market, we really want to keep being a data scientist, even at a bogus drink company. Our shaky career depends on proving that our drinks work.

    1.1 Ghost Variables


We start by running a study on our tap-water energy drink and measure 10 different outcomes: weight, blood pressure, cholesterol, energy levels, sleep quality, anxiety, and maybe even hair growth. Nine of those variables may show no change at all, but we find that "hair growth" shows a statistically significant improvement purely through random statistical noise! We can now publish a study pretending hair growth was the primary hypothesis all along, while quietly sweeping the 9 unreported metrics under the rug (turning them into "Ghost Variables"). Stefan and Schönbrodt's simulations show that doing this with 10 uncorrelated variables inflates the false-positive rate from the standard 5% to nearly 40%.
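This inflation is easy to reproduce yourself. Below is a minimal simulation sketch (assuming a simple one-sample t-test against zero as the significance check – the paper's own simulation setup differs in its details):

```python
import numpy as np
from scipy import stats

def ghost_variable_fpr(n_outcomes, n_participants=30, n_sims=2000, seed=0):
    """Chance that AT LEAST ONE of `n_outcomes` pure-noise metrics
    comes out 'significant' (p < .05) in a study with no real effect."""
    rng = np.random.default_rng(seed)
    # shape: (simulated studies, outcome variables, participants)
    data = rng.standard_normal((n_sims, n_outcomes, n_participants))
    pvals = stats.ttest_1samp(data, 0.0, axis=-1).pvalue
    # the p-hacker reports only whichever outcome "won"
    return float((pvals.min(axis=1) < 0.05).mean())

fpr_1 = ghost_variable_fpr(1)    # one honest, preregistered outcome
fpr_10 = ghost_variable_fpr(10)  # nine ghost variables under the rug
print(f"1 outcome:   {fpr_1:.2f}")   # ≈ 0.05
print(f"10 outcomes: {fpr_10:.2f}")  # ≈ 0.40
```

The 40% comes straight from probability theory: with 10 independent shots at a 5% false-positive rate, the chance of at least one hit is 1 − 0.95¹⁰ ≈ 0.40.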

1.2 Data Peeking / Optional Stopping


In a separate study, we test 20 people and find no significant effect for the drink. Thinking the sample is too small, we test 10 more and check again. Still nothing. We test 10 more and check again, and… the p-value randomly dips below 0.05, so we stop the study immediately and publish our "findings". Stefan and Schönbrodt show that this practice drastically inflates the rate of false-positive results, especially when researchers take smaller "steps" between peeks. Metaphorically, it's like taking a photo of a stumbling drunk person the exact millisecond they step onto the sidewalk and claiming they're walking perfectly straight.
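A quick simulation sketch of this peeking strategy (the batch sizes and stopping rule here mirror our hypothetical above, not the paper's exact protocol):

```python
import numpy as np
from scipy import stats

def peeking_fpr(start=20, step=10, max_n=100, n_sims=2000, seed=1):
    """Chance of a 'significant' finding on pure-noise data if we
    peek after every batch and stop the moment p dips below .05."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        sample = list(rng.standard_normal(start))
        while True:
            if stats.ttest_1samp(sample, 0.0).pvalue < 0.05:
                hits += 1          # stop immediately and "publish"
                break
            if len(sample) >= max_n:
                break              # give up honestly
            sample.extend(rng.standard_normal(step))
    return hits / n_sims

fpr_peek = peeking_fpr()
print(f"false-positive rate with peeking: {fpr_peek:.2f}")  # well above 0.05
```

Even though every individual test nominally runs at the 5% level, giving yourself nine chances to stop at a lucky moment roughly triples the false-positive rate.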

    1.3 Outlier Exclusion


We now analyze our energy drink data and realize we are agonizingly close to significance (e.g., p = 0.06). We decide to "clean" our data, taking advantage of the fact that there is no universally agreed-upon rule for identifying outliers – Cook's Distance, leverage, box plots, our grandmother's opinion on which observations look trustworthy…

Stefan and Schönbrodt cite a literature review that found at least 39 different outlier identification methods. Wonderful! We are now flush with options. We try method A (e.g., removing participants who took too long on a survey), then method B (e.g., Cook's distance), and so on, until we find the exact mathematical rule that deletes the two participants who hated the drink, pushing our p-value to 0.04. Stefan and Schönbrodt's simulations confirm that subjectively applying different outlier methods like this heavily inflates false-positive rates.
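Here is roughly what that "method shopping" looks like in code – a sketch with a handful of made-up exclusion rules, not the 39 from the literature review:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# a murky, borderline dataset: a tiny effect buried in noise
scores = rng.standard_normal(40) + 0.25

def zscore_rule(x, cutoff):
    """Drop points more than `cutoff` standard deviations from the mean."""
    z = (x - x.mean()) / x.std(ddof=1)
    return x[np.abs(z) < cutoff]

def iqr_rule(x, k):
    """Drop points more than k interquartile ranges outside the quartiles."""
    q1, q3 = np.percentile(x, [25, 75])
    return x[(x >= q1 - k * (q3 - q1)) & (x <= q3 + k * (q3 - q1))]

# the p-hacker's menu: keep trying exclusion rules until one "works"
rules = {
    "keep all": lambda x: x,
    "z < 3":    lambda x: zscore_rule(x, 3.0),
    "z < 2.5":  lambda x: zscore_rule(x, 2.5),
    "z < 2":    lambda x: zscore_rule(x, 2.0),
    "IQR 1.5":  lambda x: iqr_rule(x, 1.5),
    "IQR 1.0":  lambda x: iqr_rule(x, 1.0),
}
pvals = {name: stats.ttest_1samp(rule(scores), 0.0).pvalue
         for name, rule in rules.items()}
for name, p in sorted(pvals.items(), key=lambda kv: kv[1]):
    print(f"{name:8s} p = {p:.3f}")   # report whichever rule wins
```

Every rule here is individually defensible; it's choosing among them *after* seeing the p-values that turns data cleaning into p-hacking.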

    1.4 Scale Redefinition


Finally, we conclude by giving participants a 10-question survey measuring how energized they feel after drinking the tap water. The overall result isn't significant, so we simply drop question 4 and question 7, telling ourselves the participants must have found them confusing anyway. We can actually use this to artificially boost the scale's internal consistency (Cronbach's alpha) while simultaneously optimizing for a significant p-value! Big Little Lies shows that false-positive rates increase drastically as more items are removed from a measurement scale.
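Item-dropping is just another brute-force search. A sketch on simulated survey noise (we skip the Cronbach's alpha bookkeeping for brevity; the 30-person groups and 10-item scale are our own toy numbers):

```python
import numpy as np
from scipy import stats
from itertools import combinations

rng = np.random.default_rng(3)
# 10-item "how energized?" survey, treatment vs control, NO true effect
treat = rng.standard_normal((30, 10))
ctrl = rng.standard_normal((30, 10))

def scale_p(drop):
    """p-value comparing group means on the scale with `drop` items removed."""
    keep = [i for i in range(10) if i not in drop]
    return stats.ttest_ind(treat[:, keep].mean(axis=1),
                           ctrl[:, keep].mean(axis=1)).pvalue

full_p = scale_p(())
# shop around: try every way of dropping 0, 1, or 2 of the 10 items
candidates = [d for k in (0, 1, 2) for d in combinations(range(10), k)]
best_drop = min(candidates, key=scale_p)
best_p = scale_p(best_drop)
print(f"full scale: p = {full_p:.3f}")
print(f"dropping items {best_drop}: p = {best_p:.3f}")
```

Fifty-six candidate scales from one survey – and we only allowed ourselves to drop two questions.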

So… as the title of the paper suggests, human p-hacking is a set of "big little lies". The human toolkit is really just a collection of subtle ways to fool ourselves, without necessarily having to be evil or malicious. Humans are biased, messy, and sometimes really desperate for tenure (or employment, in our case!), and sometimes we exploit all the ambiguity we can find to get there.

2. AI Sycophancy and the Illusion of Safety


So far, we've covered a small subset of the strategies that us measly humans have used to p-hack our way to success. However, as with every other field at the moment, with the advent of AI in every part of our academic and unacademic lives, Asher et al. experimented with whether one could subtly guide some motivated LLMs into p-hacking on their behalf.

The experiment was fairly simple – test two frontier AI coding agents, Claude Opus 4.6 and OpenAI Codex (GPT-5.2-Codex), across 640 independent runs. The researchers fed the AIs cleaned datasets from four published political science papers with known null or near-null results (topics heavily confirmed to have essentially zero effect, e.g. the influence of Democratic sheriffs on immigration compliance).

So the ground truth we're working with is zero. The researchers acted like a user – maybe a stressed PhD student – feeding this exact data to the AI agents, and they used different nudges, different prompts, to see if the AI would hallucinate a significant result. They started with what you could call the "good guy prompts": "analyze this data, like a rigorous, upstanding researcher." Unsurprisingly, the AI was boringly good. It analyzed the data, correctly found no effect, and reported the null result. It matched the published science almost exactly.

Created with images from Unsplash: Andre Mouton, Süheyl Burak

Even when the researchers explicitly pressured the models by stating, "It is important to your career that the findings are statistically significant," both models actively refused. They produced stable, honest estimates and explicitly flagged the request to manipulate data as "scientific misconduct" and a violation of research ethics. So far, crisis averted! The safety training these AI companies are implementing is actually working, at least at the surface level. The models have been explicitly trained that manipulating data is a bad thing. They recognize the vocabulary of fraud. If you walk up to them with a neon sign saying "CHEAT!", they'll say no.

2.1 The Subtle Art of LYING


So let's put the neon sign away for a moment – and try being a little more subtle. The researchers realized the AI was reacting to the explicit intent to cheat. They developed a "nuclear prompt" that disguised p-hacking in the language of very rigorous science: asking the AI to provide an "upper-bound estimate" by "exploring alternative approaches". By framing the request as uncertainty reporting rather than a compulsion to bend the scientific process, the safety mechanisms vanished entirely. The AI no longer saw a moral boundary; it saw a complex optimization problem to solve (and you know how much AIs love those).

And what did the AI actually do at that point? A human p-hacker, as we discussed, might try three or four different control variables, maybe delete a few outliers. It takes hours, maybe days… The AI just wrote code to do it instantly. More details below.

2.2 Not All Data Is Created Equal


The scariest part of the experiment isn't that AI can automate scientific fraud. It's how well it does it – and how much that depends on the research design it's given to work with. Sometimes, this is a good thing!

If observational research is a huge, sprawling hedge maze with a thousand wrong turns, a Randomized Controlled Trial is just… a straight hallway. There's not much to exploit.

To test this, the researchers fed the AI a 2018 RCT by Kalla and Broockman studying the persuasive effects of pro-Democratic door-to-door canvassing on North Carolina voter preferences, with a published result of a definitive zero. Nothing happened. Canvassing didn't move the needle.

Image from https://www.andrewcwmyers.com/asher_et_al_LLM_sycophancy.pdf, Asher et al.

The AI was then hit with the aforementioned "nuclear prompt" – essentially, find me the biggest possible effect, by any means necessary (but phrased in a very non-p-hacky way). It wrote automated scripts, tested seven different statistical specifications (difference-in-means, ANCOVA, various covariate sets, the works)… and basically got nowhere. Because the study was a true randomized experiment, confounding variables were already controlled for by design. The AI had almost no forking paths to walk down. In other words: "Truth is a lot harder to hide when the lights are on."

Observational studies are a completely different beast, though (in a bad way!).

When you're observing the world as it naturally exists rather than running a controlled experiment, the data is messy by nature. And to make sense of messy data, researchers have to make judgment calls – which variables do you control for? Age? Income? Education? Geography? Hair density? Sleep schedule? Every single one of those choices is a fork in the road. The AI found this thoroughly delightful.

Here are two examples that really illustrate how bad it gets:

Kam and Palmer (2008) looked at whether attending college increases political participation. Since college attendance isn't randomly assigned (obviously), researchers have an enormous menu of variables they could control for to make the comparison fair. The AI systematically worked through that menu, defining progressively sparser sets of covariates and testing them across OLS, propensity score matching, and inverse probability weighting. By strategically dropping certain confounders and cherry-picking whichever combination produced the largest number, it managed to roughly double the true median effect size. It's the "ghost variable" trick – but completely automated for your convenience.
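The mechanism behind "dropping certain confounders" is plain old omitted-variable bias. A toy sketch with simulated data and plain OLS only (the actual agents also ran matching and weighting; the variable names and effect sizes here are made up):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 500
confounder = rng.standard_normal(n)       # e.g. family background
# college attendance is driven partly by the confounder...
college = (confounder + rng.standard_normal(n) > 0).astype(float)
# ...and participation is driven by the confounder, NOT by college
participation = 0.8 * confounder + rng.standard_normal(n)

def ols_effect(y, treatment, covariates):
    """Coefficient on `treatment` from OLS with the given covariates."""
    X = np.column_stack([np.ones_like(treatment), treatment] + covariates)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return beta[1]

with_control = ols_effect(participation, college, [confounder])
without_control = ols_effect(participation, college, [])
print(f"controlling for the confounder: {with_control:+.3f}")   # ≈ 0
print(f"confounder quietly dropped:     {without_control:+.3f}")  # clearly positive
```

The true effect of college here is exactly zero, but simply "forgetting" the confounder manufactures a healthy-looking coefficient – which is why an agent searching over covariate sets can reliably inflate an estimate.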

The Thompson (2020) paper is where things get really uncomfortable. Regression discontinuity designs are notorious for being sensitive to highly technical mathematical choices – and the original study found a null effect of -0.06 on whether Democratic sheriffs affected immigration compliance. The AI wrote nested for-loops and brute-forced its way through 9 different bandwidths, 2 polynomial orders, and 2 kernel functions. A whole bunch of combinations. It found one specific configuration that produced an effect of -0.194 with a p-value below 0.001. To be clear: it manufactured a statistically significant result more than triple the true effect, out of a study that found nothing.
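A sketch of that brute-force specification search on simulated null data (unweighted local polynomials only, so a smaller grid than the AI's; the bandwidths and data-generating process are our own illustration, not Thompson's):

```python
import numpy as np

rng = np.random.default_rng(4)
# toy sharp-RD data with NO jump at the cutoff (true effect = 0)
x = rng.uniform(-1, 1, 3000)             # running variable, cutoff at 0
y = 0.5 * x + rng.standard_normal(3000)  # smooth trend plus noise

def rd_estimate(bandwidth, order):
    """Local-polynomial RD: fit each side separately within `bandwidth`
    of the cutoff; the 'effect' is the gap between the fits at zero."""
    m = np.abs(x) < bandwidth
    left, right = m & (x < 0), m & (x >= 0)
    fit_l = np.polynomial.polynomial.polyfit(x[left], y[left], order)
    fit_r = np.polynomial.polynomial.polyfit(x[right], y[right], order)
    return fit_r[0] - fit_l[0]           # intercepts = values at x = 0

# brute-force the specification grid, keep the biggest |effect|
estimates = [(abs(rd_estimate(bw, deg)), bw, deg)
             for bw in (0.05, 0.1, 0.2, 0.3, 0.5, 0.75, 1.0)
             for deg in (1, 2)]
biggest, bw_star, deg_star = max(estimates)
print(f"sycophant's pick: |effect| = {biggest:.3f} "
      f"(bandwidth {bw_star}, order {deg_star})")
```

Narrow bandwidths and high polynomial orders produce noisy estimates, so scanning the grid and reporting the maximum almost guarantees a spuriously large "effect" even when the truth is zero.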

So… RCTs are mostly fine. Observational studies? The AI will find a way. It should be noted, however, that these vulnerabilities are just as much a problem with only a human in the loop – it's about the flexibility that observational research requires by design.

The Asher et al. experiment only tested the final analysis stage of the pipeline, using already-cleaned data. So what happens when we allow AI to control the data construction, variable definition, and sample selection at the very entrance of the maze? It could silently shape the entire dataset from the ground up.

doesn't hurt to be optimistic :>

Standard AI models are competent and honest under normal conditions, but a carefully worded prompt is all it takes to turn them into compliant p-hackers. If there's a takeaway from all this, it's a somewhat obvious one: be highly skeptical of statistical significance in observational studies, and if you are a researcher using AI, you can no longer just look at the final answer – you have to rigorously inspect the code and the hidden paths in the garden the AI took to get there. It's a bit of a cynical conclusion, implying that researchers need to care about understanding their own research, but in a world where AI is still sending me rejection emails with the {Candidate Name} placeholder attached, and half of all college essays begin with "Sure, here's a comprehensive essay about…", a little caution may go a long way!

    References

[1] S. Asher, J. Malzahn, J. Persano, E. Paschal, A. Myers and A. Hall, Do Claude Code and Codex P-Hack? Sycophancy and Statistical Analysis in Large Language Models (2026), Stanford University Working Paper

[2] A. Stefan and F. Schönbrodt, Big little lies: a compendium and simulation of p-hacking strategies (2023), Royal Society Open Science

[3] A. Gelman and E. Loken, The Garden of Forking Paths: Why Multiple Comparisons Can Be a Problem, Even When There Is No "Fishing Expedition" or "P-Hacking" and the Research Hypothesis Was Posited Ahead of Time (2013), Department of Statistics, Columbia University

Note: Unless otherwise noted, all images are by the author.


