    Why LLMs Aren’t a One-Size-Fits-All Solution for Enterprises

By ProfitlyAI | November 18, 2025 | 11 min read


Enterprises are racing to adopt LLMs, but often for tasks they aren't well-suited to. In fact, according to recent research by MIT, 95% of GenAI pilots fail: they're getting zero return.

One area that has been overlooked in the GenAI storm is structured data, not only from an adoption standpoint but also on the technological front. In reality, there is a goldmine of potential value that can be extracted from structured data, particularly in the form of predictions.

In this piece, I'll go over what LLMs can and can't do, the value you can get from AI operating over your structured data, especially for predictive modeling, and the industry approaches used today, including one that I developed with my team.

Why LLMs aren't optimized for enterprise data and workflows

While large language models have completely transformed text and communication, they fall short at making predictions from the structured, relational data that moves the needle and drives real business outcomes: customer lifecycle management, sales optimization, ads and marketing, recommendations, fraud detection, and supply chain optimization.

Enterprise data, the data enterprises are grounded in, is inherently structured. It typically resides in tables, databases, and workflows, where meaning is derived from relationships across entities such as customers, transactions, and supply chains. In other words, this is all relational data.

LLMs took the world by storm and played a key role in advancing AI. That said, they were designed to work with unstructured data and aren't naturally suited to reasoning over rows, columns, or joins. Consequently, they struggle to capture the depth and complexity within relational data. Another challenge is that relational data changes in real time, while LLMs are typically trained on static snapshots of text. They also treat numbers and quantities as tokens in a sequence rather than "understanding" them mathematically. In practice, this means an LLM is optimized to predict the next likely token, which it does extremely well, but not to verify whether a calculation is correct. So whether the model outputs 3 or 200 when the true answer is 2, the penalty the model receives is the same.
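The "3 versus 200" point can be made concrete with a toy sketch (the vocabulary and probabilities below are invented for illustration). Under the standard next-token cross-entropy loss, only the probability assigned to the correct token matters; the numeric distance of the wrong answer is invisible to the objective:

```python
import math

# Toy vocabulary: to the loss function, numbers are just token IDs,
# with no notion of magnitude or distance between them.
vocab = {"2": 0, "3": 1, "200": 2}

def token_loss(predicted_probs, true_token):
    # Cross-entropy penalizes only the probability given to the true
    # token; it is blind to how numerically far the wrong answers are.
    return -math.log(predicted_probs[vocab[true_token]])

# Model A puts its spare mass on "3"; Model B puts it on "200".
probs_a = [0.1, 0.8, 0.1]   # favors "3"   (off by 1)
probs_b = [0.1, 0.1, 0.8]   # favors "200" (off by 198)

print(token_loss(probs_a, "2") == token_loss(probs_b, "2"))  # True
```

Both models receive exactly the same penalty, which is the sense in which an LLM is not optimized to be numerically right, only token-wise right.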

LLMs are capable of multi-step reasoning via chain-of-thought inference, but they can face reliability challenges in certain cases. Because they can hallucinate, and do so confidently, may I add, even a small probability of error in a multi-step workflow can compound across steps. This lowers the overall probability of a correct outcome, and in enterprise processes such as approving a loan or predicting supply shortages, just one small mistake can be catastrophic.
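The compounding effect is just repeated multiplication of per-step success probabilities. A quick sketch (the 95% figure and 10-step chain are illustrative numbers, not measurements):

```python
# If each step of a workflow is correct with probability p, a chain of
# n independent steps succeeds end-to-end with probability p ** n.
def chain_success(p_step, n_steps):
    return p_step ** n_steps

# A seemingly reliable 95%-per-step model degrades fast over 10 steps:
print(round(chain_success(0.95, 10), 3))  # ~0.599
```

A workflow that is 95% reliable at each step is barely a coin flip after ten steps, which is why small per-step error rates matter so much in production pipelines.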

Because of all this, enterprises today rely on traditional machine learning pipelines that take months to build and maintain, limiting the measurable impact of AI on revenue. When you want to apply AI to this kind of tabular data, you're essentially teleported back thirty years and need humans to painstakingly engineer features and build bespoke models from scratch. For every single task individually! This approach is slow, expensive, doesn't scale, and maintaining such models is a nightmare.

How we built our Relational Foundation Model

My career has revolved around AI and machine learning over graph-structured data. Early on, I recognized that data points don't exist in isolation. Rather, they are part of a graph connected to other pieces of information. I applied this view to my work on online social networks and information virality, working with data from Facebook, Twitter, LinkedIn, Reddit, and others.

This insight led me to help pioneer Graph Neural Networks at Stanford, a framework that allows machines to learn from the relationships between entities rather than just the entities themselves. I applied this while serving as Chief Scientist at Pinterest, where an algorithm known as PinSage transformed how users experience Pinterest. That work later evolved into Graph Transformers, which bring Transformer architecture capabilities to graph-structured data. This allows models to capture both local connections and long-range dependencies within complex networks.

As my research advanced, I watched computer vision transformed by convolutional networks and language reshaped by LLMs. But the predictions businesses depend on from structured relational data were still waiting for their breakthrough, limited by machine learning techniques that hadn't changed in over twenty years! Decades!

The culmination of this research and foresight led my team and me to create the first Relational Foundation Model (RFM) for enterprise data. Its purpose is to enable machines to reason directly over structured data, to understand how entities such as customers, transactions, and products connect. By identifying the relationships between these entities, we then enable users to make accurate predictions from those specific relationships and patterns.

Key capabilities of Relational Foundation Models. Image by author

Unlike LLMs, RFMs were designed for structured relational data. RFMs are pretrained on diverse (synthetic) datasets as well as on diverse tasks over structured enterprise data. Like LLMs, RFMs can simply be prompted to deliver instant responses to a wide variety of predictive tasks over a given database, all without task-specific or database-specific training.

We wanted a system that could learn directly from how real databases are structured, without all the usual manual setup. To make that possible, we treated each database like a graph: tables became node types, rows became nodes, and foreign keys linked everything together. This way, the model could actually "see" how things like customers, transactions, and products connect and change over time.
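The tables-to-graph mapping can be sketched in a few lines. The two-table schema below (customers and orders with a foreign key) is a hypothetical toy example, not our actual data model:

```python
# Toy relational data: a customers table and an orders table, where
# orders.customer_id is a foreign key into customers.customer_id.
customers = [{"customer_id": 1, "name": "Ada"}]
orders = [
    {"order_id": 10, "customer_id": 1, "amount": 99.0},
    {"order_id": 11, "customer_id": 1, "amount": 12.5},
]

# Each table becomes a node type, each row becomes a node...
nodes = ([("customer", c["customer_id"]) for c in customers]
         + [("order", o["order_id"]) for o in orders])

# ...and each foreign-key reference becomes an edge between rows.
edges = [(("order", o["order_id"]), ("customer", o["customer_id"]))
         for o in orders]

print(len(nodes), len(edges))  # 3 nodes, 2 edges
```

Once the database is in this form, a graph model can follow the edges to see which orders belong to which customer, rather than treating each row in isolation.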

At the heart of it, the model combines a column encoder with a relational graph transformer. Each cell in a table is turned into a small numerical embedding based on what kind of data it holds, whether it's a number, a category, or a timestamp. The Transformer then looks across the graph to pull context from related tables, which helps the model adapt to new database schemas and data types.

For users to specify which predictions they'd like to make, we built a simple interface called Predictive Query Language (PQL). It lets users describe what they want to predict, and the model takes care of the rest. The model pulls the right data, learns from past examples, and reasons through an answer. Because it uses in-context learning, it doesn't have to be retrained for every task, either! We do have an option for fine-tuning, but that is for very specialized tasks.
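To give a flavor of the idea, a predictive query might look something like the following (this snippet is my illustration of the style of such a query, not a verbatim excerpt of the PQL specification):

```
PREDICT COUNT(orders.*, 0, 30)
FOR EACH customers.customer_id
```

Read as: for each customer, predict how many orders they will place over the next 30 days. The user states the target and the entity; fetching the relevant rows, assembling historical examples, and producing the prediction are handled by the model.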

Overview of architecture. Image by author

But this is just one approach. Across the industry, several other strategies are being explored:

Industry approaches

1. Internal foundation models

Companies like Netflix are building their own large-scale foundation models for recommendations. As described in their blog, the goal is to move away from dozens of specialized models toward a single centralized model that learns member preferences across the platform. The analogy to LLMs is clear: just as a sentence is represented as a sequence of words, a user is represented as a sequence of movies the user interacted with. This enables innovations that support long-term personalization by processing massive interaction histories.

The benefits of owning such a model include control, differentiation, and the ability to tailor architectures to domain-specific needs (e.g., sparse attention for latency, metadata-driven embeddings for cold start). On the flip side, these models are extremely costly to train and maintain, requiring massive amounts of data, compute, and engineering resources. Moreover, they are trained on a single dataset (e.g., Netflix user behavior) for a single task (e.g., recommendations).

2. Automating model development with AutoML or Data Science agents

Platforms like DataRobot and SageMaker Autopilot have pushed forward the idea of automating parts of the machine learning pipeline. They help teams move faster by handling pieces like feature engineering, model selection, and training. This makes it easier to experiment, reduce repetitive work, and expand access to machine learning beyond just highly specialized teams. In a similar vein, Data Scientist agents are emerging, where the idea is that the agent performs all the classical steps and iterates over them: data cleaning, feature engineering, model building, model evaluation, and finally model deployment. While a truly innovative feat, the jury is still out on whether this approach will be effective in the long run.

3. Using graph databases for connected data

Companies like Neo4j and TigerGraph have advanced the use of graph databases to better capture how data points are connected. This has been especially impactful in areas like fraud detection, cybersecurity, and supply chain management, places where the relationships between entities often matter more than the entities themselves. By modeling data as networks rather than isolated rows in a table, graph systems have opened up new ways of reasoning about complex, real-world problems.

Lessons learned

When we set out to build our technology, our goal was simple: develop neural network architectures that could learn directly from raw data. This approach mirrors the current AI (literal) revolution, which is fueled by neural networks that learn directly from pixels in an image or words in a document.

Practically speaking, our vision for the product also entailed a person simply connecting to the data and making a prediction. That led us to the ambitious objective of creating a pretrained foundation model designed for enterprise data from the ground up (as explained above), removing the need to manually create features, training datasets, and custom task-specific models. An ambitious task indeed.

When building our Relational Foundation Model, we developed new transformer architectures that attend over a set of interconnected tables, a database schema. This required extending the classical LLM attention mechanism, which attends over a linear sequence of tokens, to an attention mechanism that attends over a graph of data. Critically, the attention mechanism had to generalize across different database structures as well as across different kinds of tables, wide or narrow, with varied column types and meanings.
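One common way to express "attention over a graph" is to mask the attention scores so each node only attends to its graph neighbors. The sketch below is a generic illustration of that masking idea (the scores and adjacency are invented), not the architecture described above:

```python
import math

def softmax(xs):
    # Numerically stable softmax; -inf entries get exactly zero weight.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def graph_attention_weights(scores, neighbors, node):
    # Sequence attention would softmax over every position; here we
    # mask non-neighbors to -inf so only graph edges carry attention.
    masked = [scores[j] if j in neighbors[node] else float("-inf")
              for j in range(len(scores))]
    return softmax(masked)

# Node 0 is linked to nodes 1 and 2, but not to node 3.
neighbors = {0: {1, 2}}
weights = graph_attention_weights([0.5, 1.0, 1.0, 5.0], neighbors, 0)
print(weights[3] == 0.0)  # True: the unconnected node is ignored
```

The same scoring machinery as in a sequence transformer applies, but the mask is derived from the schema graph instead of token positions, which is what lets one mechanism run over very differently shaped databases.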

Another challenge was inventing a new training scheme, because predicting the next token is not the right objective. Instead, we generated many synthetic databases and predictive tasks mimicking challenges like fraud detection, time series forecasting, supply chain optimization, risk profiling, credit scoring, personalized recommendations, customer churn prediction, and sales lead scoring.

In the end, this resulted in a pretrained Relational Foundation Model that can be prompted to solve enterprise tasks, whether it's financial versus insurance fraud or medical versus credit risk scoring.

    Conclusion 

Machine learning is here to stay, and as the field evolves, it's our responsibility as data scientists to spark more thoughtful and candid discourse about the true capabilities of our technology: what it's good at, and where it falls short.

We all know how transformative LLMs have been, and continue to be, but too often they're implemented quickly before considering internal goals or needs. As technologists, we should encourage executives to take a closer look at their proprietary data, which anchors their company's uniqueness, and take the time to thoughtfully determine which technologies will best capitalize on that data to advance their business objectives.

In this piece, we went over LLM capabilities, the value that lies within the (often) overlooked side of structured data, and industry solutions for applying AI over structured data, including my own solution and the lessons learned from building it.

Thanks for reading.


References:

[1] R. Ying, R. He, K. Chen, P. Eksombatchai, W. L. Hamilton and J. Leskovec, Graph Convolutional Neural Networks for Web-Scale Recommender Systems (2018), KDD 2018.

Author bio:

Dr. Jure Leskovec is the Chief Scientist and Co-Founder of Kumo, a leading predictive AI company. He is a Computer Science professor at Stanford, where he has been teaching for more than 15 years. Jure co-created Graph Neural Networks and has dedicated his career to advancing how AI learns from connected information. He previously served as Chief Scientist at Pinterest and conducted award-winning research at Yahoo and Microsoft.

Photo by Jeff Cable


