
    Failed Automation Projects? It’s Not the Tools

    By ProfitlyAI | July 22, 2025





    How many times have you spent months evaluating automation projects – enduring multiple vendor assessments, navigating lengthy RFPs, and managing complex procurement cycles – only to face underwhelming results or outright failure? You're not alone.

    Many enterprises struggle to scale automation, not due to a lack of tools, but because their data isn't ready. In theory, AI agents and RPA bots could handle countless tasks; in practice, they fail when fed messy or unstructured inputs. Studies show that 80–90% of all enterprise data is unstructured – think of emails, PDFs, invoices, images, audio, and so on. This pervasive unstructured data is the real bottleneck. No matter how advanced your automation platform, it can't reliably process what it can't properly read or understand. In short, low automation levels are usually a data problem, not a tool problem.

    Most Enterprise Data is Unstructured

    Why Agents and RPA Require Structured Data

    Automation tools like Robotic Process Automation (RPA) excel with structured, predictable data – neatly organized in databases, spreadsheets, or standardized forms. They falter with unstructured inputs. A typical RPA bot is essentially a rules-based engine (a "digital worker") that follows explicit instructions. If the input is a scanned document or a free-form text field, the bot doesn't inherently know how to interpret it. RPA cannot directly handle unstructured datasets; the data must first be converted into structured form using additional techniques. In other words, an RPA bot needs a clean table of data, not a pile of documents.
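    To make this concrete, here is a minimal sketch (the record fields and the ERP-posting step are illustrative, not any vendor's API) of why a rules-based bot needs structured input: it can act deterministically on known fields, but has no rule for free-form text.

```python
from dataclasses import dataclass

@dataclass
class InvoiceRecord:
    # A structured record: every field the bot's rules depend on is explicit.
    vendor: str
    invoice_number: str
    total: float

def post_to_erp(record: InvoiceRecord) -> str:
    # Deterministic rule: the bot knows exactly where each value lives.
    return f"POSTED {record.invoice_number} for {record.vendor}: {record.total:.2f}"

structured = InvoiceRecord(vendor="Acme", invoice_number="INV-001", total=129.5)
print(post_to_erp(structured))  # works: input is a clean "table row"

raw_scan = "Acme Corp ... thank you, total due one hundred twenty nine dollars"
# No rule applies to free text like raw_scan; an upstream extraction step
# must turn it into an InvoiceRecord before the bot can do anything with it.
```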

    "RPA is most effective when processes involve structured, predictable data. In practice, many business documents, such as invoices, are unstructured or semi-structured, making automated processing difficult." Unstructured data now accounts for ~80% of enterprise data, underscoring why many RPA initiatives stall.

    The same holds true for AI agents and workflow automation: they only perform as well as the data they receive. If an AI customer service agent is drawing answers from disorganized logs and unlabeled files, it will likely give incorrect answers. The foundation of any successful automation or AI agent is "AI-ready" data that is clean, well-organized, and ideally structured. This is why organizations that invest heavily in tools but neglect data preparation often see disappointing automation ROI.

    Challenges with Traditional Data Structuring Methods

    If unstructured data is the issue, why not simply convert it to structured form? That is easier said than done. Traditional methods for structuring data, such as OCR, ICR, and ETL, face significant challenges:

    • OCR and ICR: OCR and ICR have long been used to digitize documents, but they break down in real-world scenarios. Classic OCR is just pattern matching; it struggles with varied fonts, layouts, tables, images, and signatures. Even top engines hit only 80–90% accuracy on semi-structured documents, creating 1,000–2,000 errors per 10,000 documents and forcing manual review of 60%+ of files. Handwriting makes it worse: ICR barely manages 65–75% accuracy on cursive. Most systems are also template-based, demanding endless rule updates for every new invoice or form format. OCR/ICR can pull text, but they can't understand context or structure at scale, making them unreliable for enterprise automation.
    • Conventional ETL Pipelines: ETL works well for structured databases but falls apart with unstructured data. No fixed schema, high variability, and messy inputs mean traditional ETL tools need heavy custom scripting to parse natural language or images. The result? Errors, duplicates, and inconsistencies pile up, forcing data engineers to spend 80% of their time cleaning and prepping data, leaving only 20% for actual analysis or AI modeling. ETL was built for rows and columns, not for today's messy, unstructured data lakes, which slows automation and AI adoption considerably.
    • Rule-Based Approaches: Older automation solutions often tried to handle unstructured data with brute-force rules, e.g. using regex patterns to find keywords in text, or setting up decision rules for certain document layouts. These approaches are extremely brittle. The moment the input varies from what was expected, the rules fail. As a result, companies end up with fragile pipelines that break every time a vendor changes an invoice format or a new text pattern appears. Maintaining these rule systems becomes a heavy burden.
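    The brittleness of rule-based extraction is easy to demonstrate. In this sketch (the pattern and sample invoices are invented for illustration), a regex rule works perfectly on the one layout it was written for and silently returns nothing the moment a vendor changes the format:

```python
import re

# A typical hand-written rule: extract the total from "Total: $NNN.NN".
TOTAL_RULE = re.compile(r"Total:\s*\$(\d+\.\d{2})")

def extract_total(text: str):
    # Returns the amount if the expected layout matches, otherwise None.
    m = TOTAL_RULE.search(text)
    return float(m.group(1)) if m else None

print(extract_total("Invoice #123  Total: $450.00"))
# The same vendor switches wording and number format: the rule silently fails.
print(extract_total("Invoice #124  Amount due: 450,00 EUR"))
```

Every such failure either drops data or lands in a manual exception queue, which is why rule maintenance grows without bound as document variety grows.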

    All these factors help explain why so many organizations still rely on armies of data entry staff or manual review. McKinsey observes that current document extraction tools are often "cumbersome to set up" and fail to yield high accuracy over time, forcing companies to invest heavily in manual exception handling. In other words, despite using OCR or ETL, you end up with people in the loop fixing everything the automation couldn't figure out. This not only cuts into the efficiency gains but also dampens employee enthusiasm (since staff are stuck correcting machine errors or doing low-value data clean-up). It's a frustrating status quo: automation tech exists, but without clean, structured data, its potential is never realized.

    Foundation LLMs Are Not a Silver Bullet for Unstructured Data

    With the rise of large language models, one might hope that they could simply "read" all the unstructured data and magically output structured data. Indeed, modern foundation models (like GPT-4) are very good at understanding language and even interpreting images. However, general-purpose LLMs are not purpose-built to solve the enterprise unstructured data problem at the required scale, accuracy, and level of integration. There are several reasons for this:

    • Scale Limitations: Out-of-the-box LLMs can't ingest millions of documents or entire data lakes in one go. Enterprise data often spans terabytes, far beyond an LLM's capacity at any given time. Chunking the data into smaller pieces helps, but then the model loses the "big picture" and can easily mix up or miss details. LLMs are also relatively slow and computationally expensive for processing very large volumes of text. Using them naively to parse every document can become cost-prohibitive and latency-prone.
    • Lack of Reliability and Structure: LLMs generate outputs probabilistically, which means they may "hallucinate" information or fill in gaps with plausible-sounding but incorrect data. For critical fields (like an invoice total or a date), you need 100% precision; a made-up value is unacceptable. Foundation LLMs don't guarantee consistent, structured output unless heavily constrained. They don't inherently know which parts of a document are important or which field labels they correspond to (unless trained or prompted in a very specific way). As one research study noted, "sole reliance on LLMs is not viable for many RPA use cases" because they are expensive to train, require vast amounts of data, and are prone to errors and hallucinations without human oversight. In essence, a chatty general-purpose AI might summarize an email for you, but trusting it to extract every invoice line item with perfect accuracy, every time, is risky.
    • Not Trained on Your Data: By default, foundation models learn from internet-scale text (books, web pages, etc.), not from your company's proprietary forms and vocabulary. They may not understand the specific jargon on a form, or the layout conventions of your industry's documents. Fine-tuning them on your data is possible but costly and complex, and even then they remain generalists, not specialists in document processing. As a Forbes Tech Council insight put it, an LLM on its own "doesn't know your company's data" and lacks the context of internal information. You often need additional systems (like retrieval-augmented generation, knowledge graphs, etc.) to ground the LLM in your actual data, effectively adding back a structured layer.
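    One common mitigation for the reliability problem is to never trust raw model output directly: validate it against a schema before it enters a downstream system, and route anything that fails to human review. A minimal sketch (the required fields and the review-routing convention are assumptions, not a specific product's behavior):

```python
import json

# Critical fields and the types they must have before output is accepted.
REQUIRED = {"invoice_number": str, "total": float, "date": str}

def validate_extraction(llm_output: str):
    """Return the parsed record only if it is valid JSON with every
    required field present and correctly typed; otherwise return None
    so the document can be routed to human review."""
    try:
        data = json.loads(llm_output)
    except json.JSONDecodeError:
        return None
    for field, ftype in REQUIRED.items():
        if field not in data or not isinstance(data[field], ftype):
            return None
    return data

good = '{"invoice_number": "INV-9", "total": 99.5, "date": "2025-07-01"}'
bad = '{"invoice_number": "INV-9", "total": "ninety-nine"}'  # hallucinated type, missing date
print(validate_extraction(good))
print(validate_extraction(bad))
```

The point is that the structure guarantee comes from the validation layer, not from the model itself.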

    In summary, foundation models are powerful, but they are not a plug-and-play solution for parsing all enterprise unstructured data into neat rows and columns. They augment, but do not replace, the need for intelligent data pipelines. Gartner analysts have also cautioned that many organizations aren't even ready to leverage GenAI on their unstructured data due to governance and quality issues; using LLMs without fixing the underlying data is putting the cart before the horse.

    Structuring Unstructured Data: Why Purpose-Built Models Are the Answer

    Today, Gartner and other leading analysts point to a clear shift: traditional IDP, OCR, and ICR solutions are becoming obsolete, replaced by advanced large language models (LLMs) that are fine-tuned specifically for data extraction tasks. Unlike their predecessors, these purpose-built LLMs excel at interpreting the context of varied and complex documents without the constraints of static templates or limited pattern matching.

    Fine-tuned, data-extraction-focused LLMs leverage deep learning to understand document context, recognize subtle variations in structure, and consistently output high-quality, structured data. They can classify documents, extract specific fields (such as contract numbers, customer names, policy details, dates, and transaction amounts), and validate extracted data with high accuracy, even from handwriting, low-quality scans, or unfamiliar layouts. Crucially, these models continually learn and improve by processing more examples, significantly reducing the need for ongoing human intervention.
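    In practice, "reducing human intervention" usually means routing each extracted field by the model's confidence score: high-confidence values flow straight through, low-confidence ones go to a reviewer. A sketch of that routing logic (the threshold, field names, and confidence values are illustrative assumptions, not any particular model's output format):

```python
# Fields at or above this confidence pass straight through; the threshold
# is an illustrative choice, tuned in practice per field and per risk.
CONFIDENCE_THRESHOLD = 0.95

def route_fields(extraction):
    """Split {field: (value, confidence)} into auto-accepted fields
    and fields queued for human review."""
    auto, review = {}, {}
    for field, (value, confidence) in extraction.items():
        target = auto if confidence >= CONFIDENCE_THRESHOLD else review
        target[field] = value
    return auto, review

# Hypothetical per-field output from an extraction model:
model_output = {
    "invoice_number": ("INV-42", 0.99),
    "total": ("1280.00", 0.97),
    "handwritten_note": ("rush order?", 0.61),  # low confidence on cursive
}
auto, review = route_fields(model_output)
print(auto)    # accepted automatically
print(review)  # queued for a human
```

As the model improves on more examples, the review queue shrinks, which is where the operational savings come from.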

    McKinsey notes that organizations adopting these LLM-driven solutions see substantial improvements in accuracy, scalability, and operational efficiency compared to traditional OCR/ICR methods. By integrating seamlessly into enterprise workflows, these advanced LLM-based extraction systems allow RPA bots, AI agents, and automation pipelines to function effectively on the previously inaccessible 80% of unstructured enterprise data.

    As a result, industry leaders emphasize that enterprises must pivot toward fine-tuned, extraction-optimized LLMs as a central pillar of their data strategy. Treating unstructured data with the same rigor as structured data through these advanced models unlocks significant value, finally enabling true end-to-end automation and realizing the full potential of GenAI technologies.

    Real-World Examples: Enterprises Tackling Unstructured Data with Nanonets

    How are leading enterprises solving their unstructured data challenges today? A number of forward-thinking companies have deployed AI-driven document processing platforms like Nanonets to great success. These examples illustrate that with the right tools (and data mindset), even legacy, paper-heavy processes can become streamlined and autonomous:

    • Asian Paints (Manufacturing): One of the largest paint companies in the world, Asian Paints dealt with thousands of vendor invoices and purchase orders. They used Nanonets to automate their invoice processing workflow, achieving a 90% reduction in processing time for Accounts Payable. This translated into freeing up about 192 hours of manual work per month for their finance team. The AI model extracts all key fields from invoices and integrates with their ERP, so employees no longer spend time typing in details or correcting errors.
    • JTI (Japan Tobacco International) – Ukraine operations: JTI's regional team faced a very long tax refund claim process that involved shuffling large amounts of paperwork between departments and government portals. After implementing Nanonets, they brought the turnaround time down from 24 weeks to just 1 week, a 96% improvement in efficiency. What used to be a multi-month ordeal of data entry and verification became a largely automated pipeline, dramatically speeding up cash flow from tax refunds.
    • Suzano (Pulp & Paper Industry): Suzano, a global pulp and paper producer, processes purchase orders from numerous international clients. By integrating Nanonets into their order management, they reduced the time taken per purchase order from about 8 minutes to 48 seconds, roughly a 90% time reduction in handling each order. This was achieved by automatically reading incoming purchase documents (which arrive in different formats) and populating their system with the needed data. The result is faster order fulfillment and less manual workload.
    • SaltPay (Fintech): SaltPay needed to manage a vast network of 100,000+ vendors, each submitting invoices in different formats. Nanonets allowed SaltPay to simplify vendor invoice management, reportedly saving 99% of the time previously spent on this process. What was once an overwhelming, error-prone task is now handled by AI with minimal oversight.

    These cases underscore a common theme: organizations that leverage AI-driven data extraction can supercharge their automation efforts. They not only save time and labor costs but also improve accuracy (e.g. one case noted 99% accuracy achieved in data extraction) and scalability. Employees can be redeployed to more strategic work instead of typing or verifying data all day. The technology (the tools) wasn't the differentiator here; the key was getting the data pipeline in order with the help of specialized AI models. Once the data became accessible and clean, the existing automation tools (workflows, RPA bots, analytics, etc.) could finally deliver full value.

    Clean Data Pipelines: The Foundation of the Autonomous Enterprise

    In the pursuit of a "truly autonomous enterprise", where processes run with minimal human intervention, having a clean, well-structured data pipeline is absolutely essential. A truly autonomous enterprise doesn't just need better tools; it needs better data. Automation and AI are only as good as the information they consume, and when that fuel is messy or unstructured, the engine sputters. Garbage in, garbage out is the single biggest reason automation projects underdeliver.

    Forward-thinking leaders now treat data readiness as a prerequisite, not an afterthought. Many enterprises spend 2–3 months upfront cleaning and organizing data before AI projects, because skipping this step leads to poor outcomes. A clean data pipeline, where raw inputs like documents, sensor feeds, and customer queries are systematically collected, cleansed, and transformed into a single source of truth, is the foundation that allows automation to scale seamlessly. Once this is in place, new use cases can plug into existing data streams without reinventing the wheel.
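    The collect, cleanse, and transform stages described above can be sketched as three tiny composable steps (the stage names and the toy record shape are illustrative, not a prescribed architecture):

```python
def collect(sources):
    # Gather raw documents from every input stream into one list.
    return [doc for src in sources for doc in src]

def cleanse(doc: str) -> str:
    # Normalize whitespace and casing so downstream steps see one form.
    return " ".join(doc.split()).lower()

def transform(doc: str) -> dict:
    # Shape each cleansed document into the canonical record used
    # as the single source of truth by every consumer.
    return {"text": doc, "length": len(doc)}

raw_streams = [["  Invoice   #1 \n"], ["PO  #2  "]]  # e.g. email feed, portal feed
records = [transform(cleanse(d)) for d in collect(raw_streams)]
print(records)
```

Because each consumer reads the same canonical records, adding a new automation use case means plugging into `records`, not re-parsing the raw feeds.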

    In contrast, organizations with siloed, inconsistent data remain trapped in partial automation, constantly relying on humans to patch gaps and fix errors. True autonomy requires clean, consistent, and accessible data across the enterprise, much like self-driving cars need proper roads before they can operate at scale.

    The takeaway: the tools for automation are more powerful than ever, but it is the data that determines success. AI and RPA don't fail for lack of capability; they fail for lack of clean, structured data. Solve that, and the path to the autonomous enterprise, and the next wave of productivity, opens up.

