
    The Missing Curriculum: Essential Concepts For Data Scientists in the Age of AI Coding Agents

By ProfitlyAI · February 19, 2026


Why read this article?

This isn't another article about how to structure your prompts to enable your AI agent to perform magic. There is already a sea of articles that go into detail about what structure to use and when, so there's no need for another one.

Instead, this article is one of a series about how to keep yourself, the coder, relevant in the modern AI coding ecosystem.

It's about learning the techniques that let you excel at using coding agents, better than those who blindly hit tab or copy-paste.

We'll go through the concepts from current software engineering practice that you should be aware of, and explain why these concepts are relevant, particularly now.

• By reading this series, you should have a good idea of what common pitfalls to look for in auto-generated code, and know how to guide a coding assistant to create production-grade code that is maintainable and extensible.
• This article is most relevant for budding programmers, graduates, and professionals from other technical industries who want to level up their coding expertise.

What we will cover not only makes you better at using coding assistants, but also a better coder in general.

The Core Concepts

The high-level concepts we'll cover are the following:

    • Code Smells
    • Abstraction
    • Design Patterns

In essence, there's nothing new about them. To seasoned developers, they're second nature, drilled into their brains through years of PR reviews and debugging. You eventually reach a point where you instinctively react to code that "feels" like future pain.

And now, they're perhaps more relevant than ever, since coding assistants have become an essential part of any developer's skill set, from juniors to seniors.

    Why?

Because the manual labour of writing code has been offloaded. The primary responsibility of any developer has now shifted from writing code to reviewing it. Everyone has effectively become a senior developer guiding a junior (the coding assistant).

So, it has become essential for even junior software practitioners to be able to 'review' code. But the ones who will thrive in today's industry are those with the foresight of a senior developer.

This is why we will be covering the above concepts, so that at the very least you can tell your coding assistant to take them into account, even if you yourself don't know exactly what you're looking for.
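For instance, a single extra line in your prompt can already help, something along the lines of: "Before adding this feature, check the affected classes for code smells such as divergent change or speculative generality, and flag them rather than working around them." This is just an illustrative wording, not a magic formula; the point is that naming the smell gives the agent something concrete to check for.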

So, introductions are now done. Let's get straight into our first topic: code smells.

    Code Smells

What is a code smell?

I find it a very aptly named term – it's the equivalent of sour-smelling milk telling you that it's a bad idea to drink it.

For decades, developers have learnt through trial and error what kind of code works long-term. "Smelly" code is brittle, prone to hidden bugs, and makes it difficult for a human or AI agent to understand exactly what's going on.

Thus, it's generally very useful for developers to know about code smells and how to detect them.

Useful links for learning more about code smells:

    https://luzkan.github.io/smells

    https://refactoring.guru/refactoring/smells

Now, having used coding agents to build everything from professional ML pipelines for my 9-to-5 job to entire mobile apps in languages I'd never touched before for my side-projects, I've identified two typical "smells" that emerge when you become over-reliant on your coding assistant:

    • Divergent Change
    • Speculative Generality

Let's go through what they are, the risks involved, and an example of how to fix them.

    Photograph by Greg Jewett on Unsplash

    Divergent Change

Divergent change is when a single module or class is doing too many things at once. The purpose of the code has 'diverged' in many different directions, so rather than focusing on being good at one task (the Single Responsibility Principle), it's trying to do everything.

This results in a painful situation where the code is always breaking and therefore needs fixing for numerous independent reasons.

When does it happen with AI?

When you are not engaged with the codebase and blindly accept the agent's output, you are doubly prone to this.

Yes, you may have done all the right things and written a well-structured prompt that adheres to the latest in prompt engineering.

But often, if you ask it to "add functionality to handle X," the agent will usually do exactly as it's told and cram code into your existing class, especially when the existing codebase is already very complicated.

It's ultimately up to you to pay attention to the role, responsibility, and intended usage of the code to come up with a holistic approach. Otherwise, you're very likely to end up with smelly code.

Example — ML Engineering

Below, we have a ModelPipeline class from which you can get whiffs of future extensibility issues.

    
class ModelPipeline:
    def __init__(self, data_path):
        self.data_path = data_path

    def load_from_s3(self):
        print(f"Connecting to S3 to get {self.data_path}")
        return "raw_data"

    def clean_txn_data(self, data):
        print("Cleaning specific transaction JSON format")
        return "cleaned_data"

    def train_xgboost(self, data):
        print("Running XGBoost trainer")
        return "model"
A quick warning:

We can't talk in absolutes and say this code is bad just for the sake of it.

It always depends on the broader context of how the code is used. For a simple codebase that isn't expected to grow in scope, the code above is perfectly fine.

Also note:

It's a contrived and simple example to illustrate the concept.
Don't bother giving this to an agent to prove it can figure out that it's smelly without being told so. The point is for you to recognise the smell before the agent makes it worse.

So, what are the things that should be going through your head when you look at this code?

• Data retrieval: What happens when we start having more than one data source, like BigQuery tables, local databases, or Azure blobs? How likely is this to happen?
• Data engineering: If the upstream data changes or the downstream modelling changes, this will also need to change.
• Modelling: If we use different models, LightGBM or some neural net, the modelling code needs to change.

You should notice that by coupling platform, data engineering, and ML engineering concerns in a single place, we've tripled the reasons for this code to be modified – i.e. code that's starting to smell like 'divergent change'.
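To make that concrete, here is a hypothetical sketch of where this class tends to drift after a few more "add functionality to handle X" requests (the extra method names are invented for illustration, echoing the data sources and models listed above):

# Hypothetical future state: every new request crammed into the same class
class ModelPipeline:
    def load_from_s3(self): ...
    def load_from_bigquery(self): ...            # new data source (platform concern)
    def clean_txn_data(self, data): ...
    def clean_clickstream_data(self, data): ...  # new upstream format (data engineering concern)
    def train_xgboost(self, data): ...
    def train_lightgbm(self, data): ...          # new model (ML concern)

Three unrelated reasons to touch the same file, and three unrelated ways to break it.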

Why is this a potential problem?

1. Operational risk: Every edit runs the risk of introducing a bug, be it by human or AI. By having this class wear three different hats, you've tripled the risk of it breaking, since there are three times as many reasons for this code to change.
2. AI agent context pollution: The agent sees the cleaning and training code as part of the same problem. For example, it's more likely to change the training and data-loading logic to accommodate a change in the data engineering, even though that was unnecessary. Ultimately, this increases the 'divergent change' code smell.
3. Risk is magnified by AI: An agent can rewrite hundreds of lines of code in a second. If those lines span three different disciplines, the agent has just tripled the chance of introducing a bug that your unit tests might not catch.

How to fix it?

The risks outlined above should give you some ideas about how to refactor this code.

One possible approach is as below:

class S3DataLoader:
    """Handles only infrastructure concerns."""
    def __init__(self, data_path):
        self.data_path = data_path

    def load(self):
        print(f"Connecting to S3 to get {self.data_path}")
        return "raw_data"

class TransactionsCleaner:
    """Handles only data domain/schema concerns."""
    def clean(self, data):
        print("Cleaning specific transaction JSON format")
        return "cleaned_data"

class XGBoostTrainer:
    """Handles only ML/research concerns."""
    def train(self, data):
        print("Running XGBoost trainer")
        return "model"

class ModelPipeline:
    """The orchestrator: it knows 'what' to do, but not 'how' to do it."""
    def __init__(self, loader, cleaner, trainer):
        self.loader = loader
        self.cleaner = cleaner
        self.trainer = trainer

    def run(self):
        data = self.loader.load()
        cleaned = self.cleaner.clean(data)
        return self.trainer.train(cleaned)
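For completeness, here is a minimal sketch of how these pieces might be wired together at the top level (the S3 path is just a placeholder):

# Illustrative wiring of the refactored classes; the path is hypothetical
loader = S3DataLoader(data_path="s3://my-bucket/transactions.json")
pipeline = ModelPipeline(loader, TransactionsCleaner(), XGBoostTrainer())
model = pipeline.run()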

Previously, the model pipeline's responsibility was to handle the entire DS stack.

Now, its responsibility is to orchestrate the different modelling stages, whilst the complexities of each stage are cleanly separated into their own respective classes.

What does this achieve?

1. Minimised operational risk: Now, concerns are decoupled and responsibilities are crystal clear. You can refactor your data-loading logic with confidence that the ML training code stays untouched. As long as the inputs and outputs (the "contracts") stay the same, the risk of impacting anything downstream is reduced.

2. Testable code: It's significantly easier to write unit tests since the scope of testing is smaller and well defined.

3. Lego-brick flexibility: The architecture is now open for extension. Need to migrate from S3 to Azure? Simply drop in an AzureBlobLoader. Want to experiment with LightGBM? Swap the trainer. (See the sketch below.)
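To make points 2 and 3 concrete, here is a rough sketch of what this buys you in practice, using the classes defined above. The AzureBlobLoader and FakeLoader are hypothetical stand-ins; the point is that the orchestrator doesn't care where its pieces come from:

class AzureBlobLoader:
    """Hypothetical drop-in replacement for S3DataLoader honouring the same 'contract'."""
    def __init__(self, container, blob_name):
        self.container = container
        self.blob_name = blob_name

    def load(self):
        print(f"Connecting to Azure to get {self.container}/{self.blob_name}")
        return "raw_data"

class FakeLoader:
    """A stub used only in unit tests - no network access required."""
    def load(self):
        return "raw_data"

# Swap infrastructure without touching the cleaning or training code
pipeline = ModelPipeline(AzureBlobLoader("ml-data", "transactions.json"),
                         TransactionsCleaner(), XGBoostTrainer())

# Unit-test the orchestration logic in isolation
def test_pipeline_runs_end_to_end():
    result = ModelPipeline(FakeLoader(), TransactionsCleaner(), XGBoostTrainer()).run()
    assert result == "model"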

You ultimately end up with code that's more reliable, readable, and maintainable for both you and the AI agent. If you don't intervene, this class will likely become bigger, broader, and flakier, and end up being an operational nightmare.

    Speculative Generality

    Photograph by Greg Jewett on Unsplash

Whilst 'Divergent Change' occurs most often in an already large and complex codebase, 'Speculative Generality' seems to occur when you start out creating a new project.

This code smell is when the developer tries to future-proof a project by guessing how things will pan out, resulting in unnecessary functionality that only increases complexity.

    We’ve all been there:

"I'll make this model training pipeline support every kind of model, cross-validation and hyperparameter tuning method, and make sure there's human-in-the-loop feedback for model selection so that we can use this for all of our training in the future!"

only to find that…

1. It's a monster of a task,
2. the code turns out flaky,
3. you spend far too much time on it,
4. whilst you've not been able to build out the simple LightGBM classification model that you needed in the first place.

When AI agents are prone to this smell

I've found that the latest, high-performing coding agents are the most prone to this smell. Couple a powerful agent with a vague prompt, and you quickly end up with too many modules and hundreds of lines of new code.

Perhaps every line is pure gold and exactly what you need. When I experienced something like this recently, the code genuinely seemed to make sense to me at first.

But I ended up rejecting all of it. Why?

Because the agent was making design decisions for a future I hadn't even mapped out yet. It felt like I was losing control of my own codebase, and that it would become a real pain to undo later if the need arose.

The Key Principle: Grow your codebase organically

The mantra to remember when reviewing AI output is "YAGNI" (You ain't gonna need it). It's a principle in software development that says you should only implement the code you need, not the code you foresee.

Start with the simplest thing that works. Then, iterate on it.

This is a more natural, organic way of growing your codebase that gets things done, whilst also being lean, simple, and less prone to bugs.
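As a rough illustration, the "simplest thing that works" for the LightGBM classifier mentioned earlier might be little more than the sketch below (assuming a plain tabular dataset with a 'target' column; the file name is a placeholder):

# A deliberately minimal sketch: no plugin system, no config framework,
# no speculative abstractions - just the model we actually need right now.
import pandas as pd
from lightgbm import LGBMClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("transactions.csv")  # placeholder dataset
X, y = df.drop(columns=["target"]), df["target"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = LGBMClassifier()
model.fit(X_train, y_train)
print(f"Accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")

When a second data source or model genuinely appears, that is the moment to refactor towards something like the orchestrator we built earlier – not before.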

    Revisiting our examples

We previously looked at refactoring Example 1 (the "do-it-all" class) into Example 2 (the orchestrator) to demonstrate how the original ModelPipeline code was smelly.

It needed to be refactored because it was subject to too many changes for too many independent reasons, and in its current state the code was too brittle to maintain effectively.

Example 1

class ModelPipeline:
    def __init__(self, data_path):
        self.data_path = data_path

    def load_from_s3(self):
        print(f"Connecting to S3 to get {self.data_path}")
        return "raw_data"

    def clean_txn_data(self, data):
        print("Cleaning specific transaction JSON format")
        return "cleaned_data"

    def train_xgboost(self, data):
        print("Running XGBoost trainer")
        return "model"

Example 2

class S3DataLoader:
    """Handles only infrastructure concerns."""
    def __init__(self, data_path):
        self.data_path = data_path

    def load(self):
        print(f"Connecting to S3 to get {self.data_path}")
        return "raw_data"

class TransactionsCleaner:
    """Handles only data domain/schema concerns."""
    def clean(self, data):
        print("Cleaning specific transaction JSON format")
        return "cleaned_data"

class XGBoostTrainer:
    """Handles only ML/research concerns."""
    def train(self, data):
        print("Running XGBoost trainer")
        return "model"

class ModelPipeline:
    """The orchestrator: it knows 'what' to do, but not 'how' to do it."""
    def __init__(self, loader, cleaner, trainer):
        self.loader = loader
        self.cleaner = cleaner
        self.trainer = trainer

    def run(self):
        data = self.loader.load()
        cleaned = self.cleaner.clean(data)
        return self.trainer.train(cleaned)

Previously, we implicitly assumed that this was production-grade code, subject to the various maintenance changes and feature additions that are constantly made to such code. In that context, the 'Divergent Change' code smell was relevant.

But what if this was code for a new product MVP or R&D? Would the same 'Divergent Change' code smell apply in this context?

    Photograph by Kenny Eliason on Unsplash

In such a scenario, opting for Example 2 could well be the smellier choice.

If the scope of the project is to consider one data source, or one model, building three separate classes and an orchestrator may count as 'pre-solving' problems you don't yet have.

Thus, in MVP/R&D situations where detailed deployment concerns are unknown and there are specific input data/output model requirements, Example 1 could be more appropriate.

    The Overarching Lesson

What these two code smells reveal is that software engineering is rarely about "correct" code. It's about context.

A coding agent can write Python that is perfect in both function and syntax, but it doesn't know your full business context. It doesn't know whether the script it's writing is a throwaway experiment or the backbone of a multi-million-dollar production pipeline revamp.

Efficiency tradeoffs

You could argue that we can simply feed the AI every little detail of business context, from the meetings you've had to the tea-break chats with a colleague. But in practice, that isn't scalable.

If you have to spend half an hour writing a "context memo" just to get a clean 50-line function, have you really gained efficiency? Or have you simply transformed the manual labour of writing code into that of writing prompts?

What makes you stand out from the rest

In the age of AI, your value as a data scientist has fundamentally changed. The manual labour of writing code has largely been removed. Agents will handle the boilerplate, the formatting, and the unit testing.

So, to set yourself apart from the other data scientists who are blindly copy-pasting code, you need the structural intuition to guide a coding agent in a direction that's relevant to your unique situation. This results in better reliability, performance, and outcomes that reflect well on you, making you stand out.

But to achieve this, you need to build the intuition that normally comes with years of experience, by getting to know the code smells we've discussed and the other two concepts (design patterns and abstraction) that we will delve into in subsequent articles.

And ultimately, being able to do this effectively gives you more headspace to focus on problem-solving and architecting a solution – i.e. the real 'fun' of data science.

Related Articles

If you liked this article, see my Software Engineering Concepts for Data Scientists series, where we expand on the concepts most relevant for data scientists.



