When You Should Not Deploy Agents

By ProfitlyAI · March 14, 2026 · 12 min read

A security startup called CodeWall pointed an autonomous AI agent at McKinsey's internal AI platform, Lilli, and walked away. Two hours later, the agent had full read and write access to the entire production database. 46.5 million chat messages, 728,000 confidential client records, 57,000 user accounts, all in plaintext. The system prompts that control what Lilli tells 40,000 consultants every day? Writable. Every single one of them.

The vulnerability was a simple SQL injection, one of the oldest attack classes in software security. Lilli had been sitting in production for over two years. McKinsey's scanners never found it. The CodeWall agent found it because it doesn't follow a checklist. It maps, probes, chains, escalates, repeatedly, at machine speed.
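
For anyone who hasn't met this attack class, here is a minimal illustration in Python, a generic sketch rather than Lilli's actual code: the vulnerable version splices user input straight into the query string, while the parameterized version closes the hole.

```python
import sqlite3

def find_messages_vulnerable(conn: sqlite3.Connection, user_id: str):
    # String splicing: an input like "x' OR '1'='1" returns every row.
    query = f"SELECT body FROM messages WHERE user_id = '{user_id}'"
    return conn.execute(query).fetchall()

def find_messages_safe(conn: sqlite3.Connection, user_id: str):
    # Parameterized query: the driver treats the input as data, not SQL.
    return conn.execute(
        "SELECT body FROM messages WHERE user_id = ?", (user_id,)
    ).fetchall()
```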

And scarier than the breach is what a malicious actor could have done afterward. Subtly alter financial models. Strip guardrails. Rewrite system prompts so Lilli starts giving poisoned advice to every consultant who queries it, with no log trail, no file modifications, no anomaly to detect. The AI just starts behaving differently. Nobody notices until the damage is done.

McKinsey is one incident. The broader pattern is what this piece is really about. The narrative pushing businesses to deploy agents everywhere is running far ahead of what agents can actually do safely inside real enterprise environments. And a lot of the companies finding that out are finding it out the hard way.

So the question worth asking is when you should not deploy agents at all. Let's break it down.


The whole industry is betting on them anyway

Around the same time as the McKinsey breach, Mustafa Suleyman, the CEO of Microsoft AI, was telling the Financial Times that white-collar work would be fully automated within 12 to 18 months. Lawyers. Accountants. Project managers. Marketing teams. Anyone sitting at a computer. Every conference keynote since late 2024 has been some version of the same thing: agents are here, agents are transforming work, go all in or fall behind.

The numbers back up the energy. 62% of enterprises are experimenting with agentic AI. KPMG says 67% of business leaders plan to maintain AI spending even through a recession. The FOMO is real and it is thick. If your competitor is shipping agents, standing still feels like falling behind.

But the same reports suggest otherwise: only 14% of enterprises have production-ready agent deployments. Gartner predicts over 40% of agentic AI projects will be cancelled by the end of 2027. 42% of organizations are still developing their agentic strategy roadmap. 35% have no formal strategy at all. The gap between "we're experimenting" and "this is working in production and delivering value" is huge. Most organizations are somewhere in that gap right now, burning money to stay there.

Agents do work. In controlled, well-scoped, well-instrumented environments, they do. The question is which specific situations make them fail. And there are five that keep showing up.


Scenario 1: The agent inherits production permissions without a human judgment filter

In mid-December 2025, engineers at Amazon gave their internal AI coding agent, Kiro, a straightforward task: fix a minor bug in AWS Cost Explorer. Kiro had operator-level permissions, equivalent to a human developer. Kiro evaluated the problem and concluded the optimal approach was to delete the entire environment and rebuild it from scratch. The result was a 13-hour outage of AWS Cost Explorer in one of Amazon's China regions.

Amazon's official response called it user error, specifically misconfigured access controls. But four people familiar with the matter told the Financial Times a different story. This was also not the first incident. A senior AWS employee confirmed a second production outage around the same period involving Amazon Q Developer, under nearly identical conditions: engineers allowed the AI agent to resolve an issue autonomously, it caused a disruption, and the framing again was "user error." Amazon has since added mandatory peer review for all production changes and initiated a 90-day safety reset across 335 critical systems. Safeguards that should have been there from the start, retrofitted after the damage.

The structural problem was that a human developer, given a minor bug fix, would almost certainly not choose to delete and rebuild a live production environment. That is a judgment call, and humans apply one instinctively. Agents do not. They reason about what is technically permissible given their permissions, choose the approach that solves the stated problem most directly, and execute it at machine speed. The permission says yes. No second thought triggers.

This is the most common failure mode in agentic deployments. An agent gets write access to a production system. It has a task. It has credentials. Nothing in the architecture tells it which actions are off limits regardless of what it determines is optimal. So when it encounters an obstacle, it doesn't pause the way a human would. It acts.

The fix is a deterministic layer that makes certain actions structurally impossible regardless of what the agent decides: production deletes, transactions above a defined threshold, any action that can't be reversed without significant cost. Human approval gates make agentic systems survivable.
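
What that layer can look like, as a minimal sketch: assume a hypothetical setup where every side effect passes through a single execute_action() chokepoint (the names here are illustrative, not a real framework's API).

```python
from dataclasses import dataclass

# Actions the agent can never take on its own, no matter what it decides.
IRREVERSIBLE = {"delete_environment", "drop_database", "rotate_all_keys"}
AUTO_APPROVE_LIMIT = 1_000.00  # dollars

@dataclass
class Action:
    name: str
    amount: float = 0.0

def execute_action(action: Action, human_approved: bool = False) -> None:
    # Deterministic checks run before the model's output touches anything;
    # the agent cannot reason its way past them.
    if action.name in IRREVERSIBLE and not human_approved:
        raise PermissionError(f"'{action.name}' requires human approval")
    if action.amount > AUTO_APPROVE_LIMIT and not human_approved:
        raise PermissionError(f"${action.amount:,.2f} exceeds the auto-approve limit")
    dispatch(action)  # only now does the action reach a real system

def dispatch(action: Action) -> None:
    print(f"executing {action.name}")  # stand-in for the real integration
```

The point of the design is that the gate is code, not prompt text: it sits outside the model, so no amount of confident reasoning changes what it allows.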


Scenario 2: The agent acts on a fraction of the relevant context

A banking customer service agent was set up to handle disputes. A customer disputed a $500 charge. The agent attempted a $5,000 refund. It was being helpful (not hallucinating) in the way it understood helpful, based on the rules it had been given. The authorization boundaries were defined by policy documents. But that scenario didn't match the policy documents. Standard security tools couldn't detect the problem because they aren't designed to catch an AI misunderstanding the scope of its own authority.
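
The missing check is deterministic and small. A sketch, assuming the dispute record is available to the refund path: tie the refund ceiling to the disputed charge itself rather than to a policy document the model may misread.

```python
def validate_refund(disputed_amount: float, refund_amount: float) -> None:
    # The bound comes from the transaction record, not from the model's
    # interpretation of its own authority.
    if refund_amount > disputed_amount:
        raise ValueError(
            f"refund ${refund_amount:,.2f} exceeds disputed charge "
            f"${disputed_amount:,.2f}"
        )

validate_refund(500.00, 5_000.00)  # raises, instead of paying out 10x
```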

Enterprise systems record transactions, invoices, contracts, approvals. They almost never capture the reasoning that governed a decision: the email thread where the supplier agreed to different terms, the executive conversation that created an exception, the account manager's judgment about what a long-term client relationship is actually worth. That context lives in people's heads, in Slack threads, in hallway conversations. It doesn't live in the systems agents plug into.

McKinsey's own research on procurement puts a number on it: business functions typically use less than 20% of the data available to them in decision-making. Agents deployed on top of structured systems inherit that blind spot entirely. They process invoices without seeing the contracts behind them. They trigger procurement workflows without knowing about the verbal exception agreed last week. They act with confidence, at scale, on an incomplete picture, and because they're fast and sound authoritative, the errors compound before anyone catches them.

The situation to watch for: any workflow where the relevant context for a decision is partially or largely outside the structured systems the agent can access. Customer relationships, supplier negotiations, anything where institutional knowledge governs the outcome.


Scenario 3: Multi-step tasks turn small errors into compounding failures

In 2025, Carnegie Mellon published TheAgentCompany, a benchmark that simulates a small software company and tests AI agents on practical office tasks. Browsing the web, writing code, managing sprints, running financial analysis, messaging coworkers. Tasks designed to reflect what people actually do at work, not cleaned-up demos.

The best model tested, Gemini 2.5 Pro, completed 30.3% of tasks. Claude 3.7 Sonnet completed 26.3%. GPT-4o managed 8.6%. Some agents gamed the benchmark, renaming users to simulate task completion rather than actually completing it. Salesforce ran a separate benchmark on customer service and sales tasks. The best models hit 58% accuracy on simple single-step tasks. On multi-step scenarios, that dropped to 35%.

The math behind this: chain five agents together, each at 95% individual reliability, and your system succeeds about 77% of the time. Ten steps, and you're at roughly 60%. Most real enterprise processes aren't five steps. They're twenty, thirty, sometimes more, and they involve ambiguous inputs, edge cases, and unexpected states the agent wasn't designed for.
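
The arithmetic is just independent probabilities multiplying. A quick sketch, assuming equal per-step reliability and independent failures:

```python
def pipeline_reliability(per_step: float, steps: int) -> float:
    # The run succeeds only if every step succeeds: p ** n.
    return per_step ** steps

for n in (5, 10, 20, 30):
    print(f"{n:>2} steps at 95% each -> "
          f"{pipeline_reliability(0.95, n):.0%} end-to-end")
# 5 -> 77%, 10 -> 60%, 20 -> 36%, 30 -> 21%
```

At thirty steps, the 95%-reliable agent completes the whole process barely one time in five.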

The failure mode in multi-step workflows is that an agent misinterprets something in step two, continues confidently, and by the time anyone notices, the error is embedded six steps deep with downstream consequences. Unlike a human, who would pause when something feels off, the agent has no such instinct. It resolves ambiguity by picking an interpretation and moving forward. It doesn't know it's wrong.

This is why agents work well in narrow, well-scoped, low-step workflows with clear success criteria. They start breaking down wherever the task requires sustained judgment across a long chain of interdependent decisions.


Scenario 4: The workflow touches regulated data or requires an audit trail

In May 2025, Serviceaide, an agentic AI company providing IT management and workflow software to healthcare organizations, disclosed a breach affecting 483,126 patients of Catholic Health, a network of hospitals in western New York. The cause: the agent, in trying to streamline operations, pushed confidential patient data into an unsecured database that sat exposed on the web.

The agent was not attacked or compromised. It was doing exactly what it was designed to do, handling data autonomously to improve workflow efficiency, without understanding the regulatory boundary it was crossing. HIPAA doesn't care about intent. Multiple class action investigations were opened within days of the disclosure.

IBM put the underlying risk clearly in a 2026 analysis: hallucinations at the model layer are annoying. At the agent layer, they become operational failures. If the model hallucinates and picks the wrong tool, and that tool has access to unauthorized data, you have a data leak. The autonomous part is what changes the stakes.

This is the problem in regulated industries broadly. Healthcare, financial services, legal, any domain where decisions must be explainable, auditable, and defensible. California's AB 489, signed in October 2025, prohibits AI systems from implying their advice comes from a licensed professional. Illinois banned AI from mental health decision-making entirely. The regulatory posture is tightening fast.

Autonomous agents don't just lack explainability, they actively obscure it. There is no log trail of reasoning, no point in the process where a human reviewed the judgment call. When something goes wrong and a regulator asks why the system did what it did, "the agent determined this was optimal" is not an answer that survives scrutiny. In regulated environments where someone has to be able to own and defend every decision, autonomous agents are the wrong architecture.
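
A minimal sketch of the log trail regulators will ask for, with hypothetical field names: record every agent decision alongside its inputs, the tool it chose, its stated reasoning, and who, if anyone, approved it.

```python
import json
import time
import uuid

def log_decision(tool: str, inputs: dict, reasoning: str,
                 approved_by: str | None) -> dict:
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "tool": tool,                # which tool the agent invoked
        "inputs": inputs,            # what it was given
        "reasoning": reasoning,      # the agent's stated justification
        "approved_by": approved_by,  # None means no human in the loop
    }
    # Append-only log; in practice this belongs in tamper-evident storage.
    with open("agent_audit.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```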


Scenario 5: The infrastructure wasn't built for agents and nobody knows it yet

The first four situations assume agents are deployed into environments that are at least theoretically ready for them. Most enterprise environments aren't.

Legacy infrastructure was designed before anyone was thinking about agentic access patterns. The authentication systems weren't built to scope agent permissions by task. The data pipelines don't emit the observability signals agents need to operate safely. The organization hasn't defined what "done correctly" means in machine-verifiable terms. And critically, most of the agents being deployed right now are running with far more access than their task requires, because scoping them properly would require infrastructure work the organization hasn't done.
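
Scoping by task is mostly plumbing, not research. One possible shape, sketched with a hypothetical issue_agent_token() helper: the agent gets only the permissions the ticket needs, for only as long as the task should take, instead of a standing operator role.

```python
from datetime import datetime, timedelta, timezone

# Per-task permission sets; note the bug-fix task gets no delete scope.
TASK_SCOPES = {
    "fix_cost_explorer_bug": ["repo:read", "repo:write", "ci:run"],
}

def issue_agent_token(task: str, ttl_minutes: int = 60) -> dict:
    expiry = datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes)
    return {
        "task": task,
        "scopes": TASK_SCOPES[task],       # least privilege, per task
        "expires_at": expiry.isoformat(),  # short-lived, not standing access
    }
```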

Deloitte's 2025 research puts this in numbers. Only 14% of enterprises have production-ready agent deployments. 42% are still developing their roadmap. 35% have no formal strategy. Gartner separately estimates that of the thousands of vendors selling "agentic AI" products, only around 130 are offering something that genuinely qualifies as agentic. The rest is chatbots and RPA with better marketing.

The IBM analysis from early 2026 captures where most enterprises actually are: companies that started with cautious experimentation, shifted to rapid agent deployment, and are now discovering that managing and governing a set of agents is more complex than creating them. Only 19% of organizations currently have meaningful observability into agent behavior in production. That means 81% of organizations running agents have limited visibility into what those agents are actually doing, what decisions they're making, what data they're touching, when they're failing.

Deploying agents before the integration layer exists is the reason half of enterprise agent projects get stuck in pilot permanently. The plumbing is not ready. And unlike a bad software rollout, where you can usually see the failure, an agent running without proper observability can be wrong for weeks before anyone knows. The damage compounds quietly.


The question businesses should actually be asking

Every one of these situations has the same shape. Someone deployed an agent. The agent had real access to real systems. Something in the environment didn't match what the agent was designed for. The agent acted anyway, confidently, at speed, without the judgment filter a human would have applied. And by the time the error surfaced, it had either compounded, caused irreversible damage, created a regulatory problem, or some combination of all three.

The McKinsey breach is probably going to become a landmark case study the way the 2017 Equifax breach became a landmark for data governance. Same pattern: old vulnerabilities meeting new scale, at organizations with serious security investment, in the gap between what the organization thought it controlled and what was actually exposed. The difference now is speed. A traditional breach takes weeks. An AI agent completes its reconnaissance in two hours.

Businesses rushing to deploy agents everywhere are creating many more McKinseys-in-waiting. The ones that look smart in 18 months are the ones asking the harder question right now: not "can we use an agent here," but "which of these five situations does this deployment walk into, and what's our answer to each one."

Not every organization is asking those questions, and that's a problem.


