
    Lessons Learned After 6.5 Years Of Machine Learning

By ProfitlyAI | June 30, 2025


I began studying machine learning more than six years ago, when the field was in the midst of really gaining traction. Around 2018, when I took my first university courses on classical machine learning, key techniques were already being developed behind the scenes that would lead to AI's boom in the early 2020s. The GPT models were being published, and other companies followed suit, pushing the limits of their models in both performance and parameter count. For me, it was a great time to start learning machine learning, because the field was moving so fast that there was always something new.

From time to time, usually every 6 to 12 months, I look back on those years, mentally fast-forwarding from university lectures to doing industrial AI research. Looking back, I often notice principles that have accompanied me throughout learning ML. In this review, I found that working deeply on one narrow topic has been a key principle behind my progress over the past years. Beyond deep work, I have identified three other principles. They are not necessarily technical insights, but rather patterns of mindset and method.

The Importance of Deep Work

Winston Churchill is famous not only for his oratory but also for his incredible quickness of mind. There is a popular story about a verbal sparring match between him and Lady Astor, the first woman in the British Parliament. Trying to end an argument with him, she quipped:

If I were your wife, I'd put poison in your tea.

Churchill, with his trademark sharpness, replied:

And if I were your husband, I'd drink it.

Witty repartee like that is admired because it is a rare skill, and not everyone is born with such reflexive brilliance. Fortunately, in our field, doing ML research and engineering, quick wit is not the superpower that gets you far. What does is the ability to focus deeply.

Machine learning work, especially the research side, is not fast-paced in the conventional sense. It requires long stretches of uninterrupted, intense thought. Coding ML algorithms, debugging obscure data issues, crafting a hypothesis: all of it demands deep work.

By "deep work," I mean both:

• The skill to concentrate deeply for extended periods
• The environment that permits and encourages such focus

Over the past two to three years, I have come to see deep work as essential to making meaningful progress. The hours I have spent in focused immersion, a few times each week, have been far more productive than much larger but fragmented blocks of distracted productivity ever could be. And, fortunately, working deeply can be learned, and your environment can be set up to support it.

For me, the most fulfilling periods are always those leading up to paper submission deadlines. Those are times when you can laser-focus: the world narrows down to your project, and you are in flow. Richard Feynman said it well:

To do real good physics, you need absolute solid lengths of time… It needs a lot of concentration.

Replace "physics" with "machine learning," and the point still holds.

You Should (Mostly) Ignore Trends

Have you heard of large language models? Of course you have: names like LLaMA, Gemini, Claude, or Bard fill the tech news cycle. They are the cool kids of generative AI, or "GenAI," as it is now stylishly called.

But here's the catch: when you are just starting out, chasing trends can make it hard to gain momentum.

I once worked with a researcher when we were both just getting started in "doing ML". We'll call my former colleague John. For his research, he dove head-first into the then-hot new field of retrieval-augmented generation (RAG), hoping to improve language model outputs by integrating external document search. He also wanted to investigate emergent capabilities of LLMs (things these models can do even though they were not explicitly trained for them) and distill those into smaller models.

The problem for John? The models he based his work on evolved too fast. Just getting a new state-of-the-art model running took weeks. By the time he did, a newer, better model had already been published. That pace of change, combined with unclear evaluation criteria for his niche, made it nearly impossible for him to keep his research going. Especially for someone still new to research, like John and me back then.

This is not a criticism of John (I probably would have failed too). Instead, I am telling this story to make you consider: does your progress depend on continually surfing the crest of the latest trend?

Doing Boring Data Analysis (Over and Over)

Every time I get to train a model, I mentally breathe a sigh of relief.

Why? Because it means I am done with the hidden hard part: data analysis.

Here's the usual sequence:

1. You have a project.
2. You acquire some (real-world) dataset.
3. You want to train ML models.
4. But first… you need to prepare the data.

So much can go wrong in that last step.

Let me illustrate this with a mistake I made while working with ERA5 weather data, a huge gridded dataset from the European Centre for Medium-Range Weather Forecasts. I wanted to predict NDVI (Normalized Difference Vegetation Index), which indicates vegetation density, using historical weather patterns from the ERA5 data.

For my project, I had to merge the ERA5 weather data with NDVI satellite data obtained from NOAA, the US weather agency. I translated the NDVI data to ERA5's resolution, added it as another layer, and, seeing no shape mismatch, happily proceeded to train a Vision Transformer.

A few days later, I visualized the model predictions and… surprise! The model thought Earth was upside down. Literally: my input data showed a normally oriented globe, but my vegetation data was flipped at the Equator.

What went wrong? I had overlooked that the resolution translation flipped the orientation of the NDVI data.

Why did I miss that? Simple: I did not want to do the data engineering, but wanted to skip straight ahead to the machine learning. The reality, however, is this: in real-world ML work, getting the data right is the work.
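In hindsight, a simple orientation check before merging would have caught the bug days earlier. Here is a minimal sketch of such a check, assuming both datasets are loaded with xarray; the file names and coordinate names are illustrative, not the ones from my project.

```python
# A minimal sketch of the check I should have run; paths and coordinate
# names are hypothetical and assume both datasets load with xarray.
import xarray as xr

era5 = xr.open_dataset("era5_weather.nc")  # hypothetical file
ndvi = xr.open_dataset("ndvi_noaa.nc")     # hypothetical file

# Regridding can silently reverse the latitude axis: ERA5 conventionally
# stores latitude descending (90 .. -90), while other products ascend.
for name, ds in [("era5", era5), ("ndvi", ndvi)]:
    lat = ds["latitude"].values
    order = "descending" if lat[0] > lat[-1] else "ascending"
    print(f"{name}: latitude runs {order} ({lat[0]:.2f} to {lat[-1]:.2f})")

# Align NDVI onto ERA5's grid and orientation explicitly; a matching
# array shape alone proves nothing about orientation.
ndvi_aligned = ndvi.reindex(latitude=era5["latitude"], method="nearest")
```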

Yes, academic research often lets you work with curated datasets like ImageNet, CIFAR, or SQuAD. But for real projects? You will have to:

1. Clean, align, normalize, and validate
2. Debug weird edge cases
3. Visually inspect intermediate data

And then repeat all of this until the data is really ready; one such check is sketched below.
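To make step 3 concrete, here is a minimal sketch of the kind of routine I now run on every intermediate layer before training. The function, thresholds, and the `ndvi_layer` array are illustrative assumptions, not a fixed recipe.

```python
import numpy as np
import matplotlib.pyplot as plt

def sanity_check(name: str, arr: np.ndarray, lo: float, hi: float) -> None:
    """Print basic statistics, assert the value range, and eyeball the layer."""
    n_nan = int(np.isnan(arr).sum())
    print(f"{name}: shape={arr.shape}, min={np.nanmin(arr):.3f}, "
          f"max={np.nanmax(arr):.3f}, NaNs={n_nan}")
    assert lo <= np.nanmin(arr) and np.nanmax(arr) <= hi, f"{name} out of range"

    # One imshow per layer: a globe flipped at the Equator is obvious at a glance.
    plt.imshow(arr, origin="upper")
    plt.title(name)
    plt.colorbar()
    plt.show()

# NDVI is defined on [-1, 1]; `ndvi_layer` is a hypothetical 2D array.
# sanity_check("ndvi_layer", ndvi_layer, lo=-1.0, hi=1.0)
```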

I learned this the hard way, by skipping steps I assumed were not necessary for my data. Don't do the same.

(Machine Learning) Research Is a Special Kind of Trial and Error

From the outside, scientific progress always looks elegantly smooth:

Problem → Hypothesis → Experiment → Solution

But in practice, it is much messier. You will make mistakes, some small, some facepalm-worthy (e.g., Earth flipped upside down). That's okay. What matters is how you handle those mistakes.

Bad mistakes just happen. But insightful mistakes teach you something.

To help myself learn faster from perceived failures, I now keep a simple lab notebook. Before running an experiment, I write down:

1. My hypothesis
2. What I expect to happen
3. Why I expect it

Then, when the experimental results come back (often as a "nope, didn't work"), I can reflect on why it might have failed and what that says about my assumptions.
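This habit is simple enough to automate. Below is a minimal sketch of such a notebook as an append-only JSON-lines file; the field names just mirror the three questions above, and the example entry is hypothetical.

```python
import datetime
import json

NOTEBOOK = "lab_notebook.jsonl"  # hypothetical file name

def log_experiment(hypothesis: str, expectation: str, rationale: str,
                   result: str | None = None) -> None:
    """Append one entry; written BEFORE the run, with `result` amended after."""
    entry = {
        "timestamp": datetime.datetime.now().isoformat(timespec="seconds"),
        "hypothesis": hypothesis,
        "expectation": expectation,
        "rationale": rationale,
        "result": result,  # filled in once the run comes back
    }
    with open(NOTEBOOK, "a") as f:
        f.write(json.dumps(entry) + "\n")

# A hypothetical entry, mirroring the NDVI project above:
log_experiment(
    hypothesis="Adding NDVI as an input layer improves the forecast",
    expectation="Validation loss drops noticeably",
    rationale="Vegetation density should correlate with surface weather",
)
```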

This turns mistakes into feedback, and feedback into learning. As the saying goes:

An expert is someone who has made all the mistakes that can be made in a very narrow field.

That's research.

Final Thoughts

After 6.5 years, I have come to realize that doing machine learning well has little to do with flashy trends or just tuning (large language) models. In hindsight, I think it is more about:

• Creating time and space for deep work
• Choosing depth over hype
• Taking data analysis seriously
• Embracing the messiness of trial and error

If you are just starting out, or even a few years in, these lessons are worth internalizing. They won't show up in conference keynotes, but they will show up in your actual progress.


• The Feynman quote is from the book Deep Work by Cal Newport
• Several versions of Churchill's quote exist, some with coffee, some with tea, as the poisoned drink


