
    Uh-Uh, Not Guilty | Towards Data Science

By ProfitlyAI | May 8, 2025 | 8 min read


When the merry murderesses of the Cook County Jail took the stage in the musical Chicago, they were aligned on the message:

They had it coming, they had it coming all along.
I didn’t do it.
But if I’d done it, how could you tell me that I was wrong?

And the part of the song I found interesting was the reframing of their violent actions through their moral lens: “It was a murder, but not a crime.”

In short, the musical tells a story of greed, murder, injustice, and blame-shifting, with plots that unfold in a world where truth is manipulated by the media, clever lawyers, and the public’s fascination with scandal.

As a mere observer in the audience, it’s easy to fall for their stories, portrayed through the eyes of victims who were merely responding to intolerable situations.

Logically, there’s a scientific explanation for why blame-shifting feels satisfying. Attributing negative events to external causes (other people or situations) activates brain regions associated with reward processing. If it feels good, it reinforces the behaviour and makes it more automatic.

This interesting play of blame-shifting is in the theatre of life now, where people may also start calling out the tools powered by LLMs for poor decisions and life outcomes. Probably pulling out the argument of…

Creative differences

Understanding how differences (creative or not) lead us to justify our bad acts and shift blame onto others, it’s only common sense to assume we’ll do the same to AI and the models behind it.

When searching for a responsible party for AI-related failures, one paper, “It’s the AI’s fault, not mine,” reveals a pattern in how people attribute blame depending on who is involved and how they are involved.

The research explored two key questions:

    • (1) Would we blame AI more if we saw it as having human-like qualities?
    • (2) And would this conveniently reduce blame for human stakeholders (programmers, teams, companies, governments)?

Through three studies conducted in early 2022, before the “official” start of the generative AI era via UI, the research examined how people distribute blame when AI systems commit moral transgressions, such as displaying racial bias, exposing children to inappropriate content, or unfairly distributing medical resources, and found the following:

When AI was portrayed with more human-like mental capacities, participants were more willing to point fingers at the AI system for moral failures.

Other findings were that not all human agents got off the hook equally:

    • Companies benefited more from this blame-shifting game, receiving less blame when AI appeared more human-like.
    • Meanwhile, AI programmers, teams, and government regulators didn’t experience reduced blame regardless of how mind-like the AI appeared.

And probably the most important discovery:

Across all scenarios, AI consistently received a smaller share of blame compared to human agents, and the AI programmer or the AI team shouldered the heaviest blame burden.

How were these findings explained?

The research suggested it’s about perceived roles and structural clarity:

    • Companies, with their “complex and often opaque structures,” benefit from reduced blame when AI appears more human-like. They can more easily distance themselves from AI mishaps and shift blame to the seemingly autonomous AI system.
    • Programmers, with their direct technical involvement in creating the AI solutions, remained firmly accountable regardless of AI anthropomorphisation. Their “fingerprints” on the system’s decision-making architecture make it almost impossible for them to claim “the AI acted independently.”
    • Government entities, with their regulatory oversight roles, maintained steady (though lower overall) blame levels, as their responsibilities for monitoring AI systems remained clear regardless of how human-like the AI appeared.

This “moral scapegoating” suggests corporate accountability might increasingly dissolve as AI systems appear more autonomous and human-like.

Moral scapegoating is defined in the study as the process of blaming an individual or group for a negative outcome to deflect personal or collective responsibility. [Photo by Vaibhav Sanghavi on Unsplash]

You’d now say, this is…

    All that jazz

Scapegoating and blaming others occur when the stakes are high, and the media usually likes to put up a big headline, with the villain:

From all these headlines, you could instantly blame the end-user or developer for a lack of understanding of how the new tools (yes, tools!) are built and how they should be used, implemented or tested, but none of this helps when the damage is already done and someone needs to be held accountable for it.

Speaking of accountability, I can’t skip the EU AI Act now, and its regulatory framework that puts AI providers, deployers and importers on the hook by stating how:

“Users (deployers) of high-risk AI systems have some obligations, though less than providers (developers).”

So, among other things, the Act explains different classes of AI systems and categorises high-risk AI systems as those used in critical areas like hiring, essential services, law enforcement, migration, justice administration, and democratic processes.

For these systems, providers must implement a risk-management system that identifies, analyses, and mitigates risks throughout the AI system’s lifecycle.

This extends into a mandatory quality management system covering regulatory compliance, design processes, development practices, testing procedures, data management, and post-market monitoring. It must include “an accountability framework setting out the responsibilities of management and other staff.”

On the other side, deployers of high-risk AI systems need to implement appropriate technical measures, ensure human oversight, monitor system performance, and, in certain cases, conduct fundamental rights impact assessments.

To sweeten this up, penalties for non-compliance can result in a fine of up to €35 million or 7% of global annual turnover.

Maybe you now think, “I’m off the hook… I’m only the end-user, and all this is none of my concern,” but let me remind you of the headlines above, where no lawyer could razzle-dazzle a judge into believing in innocence for leveraging AI in a work situation that seriously affected other parties.

Now that we’ve clarified this, let’s discuss how everyone can contribute to the AI accountability circle.

“You’re a 10x hacker and it must be someone else’s fault.” [Image source: LINK. The image is sourced from free online distribution.]

    When you’re good to AI, AI’s good to you

True accountability in the AI pipeline requires personal commitment from everyone involved, and with this, the best you can do is:

    • Educate yourself on AI: Instead of blindly relying on AI tools, first learn how they are built and which tasks they can solve. You, too, can classify your tasks by criticality and understand where it’s essential to have humans deliver them, and where AI can step in with a human-in-the-loop, or independently.
    • Build a testing system: Create personal checklists for cross-checking AI outputs against other sources before acting on them. It’s worth mentioning here that good practice is to have more than one testing approach and more than one human tester. (What can I say, blame the good development practices.)
    • Question the outputs (always, even with the testing system): Before accepting AI recommendations, ask “How confident am I in this output?” and “What’s the worst that could happen if this is wrong, and who could be affected?”
    • Document your process: Keep records of how you used AI tools, what inputs you provided, and what decisions you made based on the outputs. If you did everything by the book and followed processes, documentation of the AI-supported decision-making process can be a critical piece of evidence.
    • Speak up about concerns: If you notice problematic patterns in the AI tools you use, report them to the relevant human agents. Staying quiet about AI systems malfunctioning is not a strategy, even if you caused part of the problem. However, reacting on time and taking responsibility is the long-term road to success.
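The documentation habit above can be made concrete with a small audit log. The sketch below is a minimal illustration, not part of any framework cited in this article; the record fields and the `ai_decisions.jsonl` file name are assumptions chosen for the example.

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

# Hypothetical record structure: field names are illustrative choices,
# not mandated by the EU AI Act or the study discussed above.
@dataclass
class AIDecisionRecord:
    tool: str            # which AI tool was used
    prompt: str          # the input you provided
    output_summary: str  # what the tool returned, summarised
    decision: str        # what you decided based on the output
    verified_by: str     # how (or by whom) the output was cross-checked
    timestamp: str = field(default="")

    def __post_init__(self):
        # Stamp each record with a UTC timestamp if none was given.
        if not self.timestamp:
            self.timestamp = datetime.now(timezone.utc).isoformat()

def log_decision(record: AIDecisionRecord,
                 path: str = "ai_decisions.jsonl") -> dict:
    """Append one decision record as a JSON line and return it as a dict."""
    entry = asdict(record)
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

An append-only JSON Lines file keeps each AI-assisted decision as one self-contained, timestamped entry, which is exactly the kind of trail the “document your process” bullet argues for.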

Lastly, I recommend familiarising yourself with the regulations to understand your rights alongside your responsibilities. No framework can change the fact that AI decisions carry human fingerprints and that humans will hold other humans, not the tools, responsible for AI errors.

Unlike the fictional murderesses of the musical Chicago, who danced their way through blame, in real AI failures the evidence trail won’t disappear with a smart lawyer and a superficial story.

Thank You for Reading!

If you found this post valuable, feel free to share it with your network. 👏

Stay connected for more stories on Medium ✍️ and LinkedIn 🖇️.



    Source link
