
    Building Systems That Survive Real Life

By ProfitlyAI | February 2, 2026


In the Author Spotlight series, TDS Editors chat with members of our community about their career path in data science and AI, their writing, and their sources of inspiration. Today, we’re thrilled to share our conversation with Sara Nobrega.

Sara Nobrega is an AI Engineer with a background in Physics and Astrophysics. She writes about LLMs, time series, career transitions, and practical AI workflows.

You hold a Master’s in Physics and Astrophysics. How does your background play into your work in data science and AI engineering?

Physics taught me two things that I lean on all the time: how to stay calm when I don’t know what’s going on, and how to break a scary problem into smaller pieces until it’s not scary. Also… physics really humbles you. You learn fast that being “clever” doesn’t matter if you can’t explain your thinking or reproduce your results. That mindset is probably the most useful thing I carried into data science and engineering.

You recently wrote a deep dive into your transition from data scientist to AI engineer. In your daily work at GLS, what’s the single biggest difference in mindset between these two roles?

For me, the biggest shift was going from “Is this model good?” to “Can this system survive real life?” Being an AI Engineer isn’t so much about the perfect answer, but more about building something dependable. And honestly, that change was uncomfortable at first… but it made my work feel far more useful.

You noted that while a data scientist might spend weeks tuning a model, an AI Engineer may have only three days to deploy it. How do you balance optimization with speed?

If we have three days, I’m not chasing tiny improvements. I’m chasing confidence and reliability. So I’ll focus on a solid baseline that already works, and on a simple way to monitor what happens after launch.

I also like shipping in small steps. Instead of thinking “deploy the final thing,” I think “deploy the smallest version that creates value without causing chaos.”
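As a rough illustration of that “smallest version plus simple monitoring” idea, here is a minimal Python sketch. The service, model, and field names are hypothetical and not details of Sara’s actual stack at GLS: a thin API around a baseline predictor that logs latency and output, so you can see what actually happens after launch.

```python
# Minimal sketch (hypothetical names): a baseline model behind a small API,
# with just enough logging to observe behaviour after launch.
import logging
import time

from fastapi import FastAPI
from pydantic import BaseModel

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("baseline-service")

app = FastAPI()


class PredictRequest(BaseModel):
    features: list[float]


def baseline_predict(features: list[float]) -> float:
    # Placeholder baseline that "already works": a simple average-based rule.
    return sum(features) / max(len(features), 1)


@app.post("/predict")
def predict(req: PredictRequest) -> dict:
    start = time.perf_counter()
    value = baseline_predict(req.features)
    latency_ms = (time.perf_counter() - start) * 1000
    # Post-launch monitoring kept deliberately simple: input size, output,
    # and latency in the logs make drift and slowdowns visible early.
    logger.info("n_features=%d prediction=%.4f latency_ms=%.2f",
                len(req.features), value, latency_ms)
    return {"prediction": value}
```

The specific framework matters less than the shape: the first deployed version is small enough to reason about and already reports on itself.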

How do you think we could use LLMs to bridge the gap between data scientists and DevOps? Can you share an example where this worked well for you?

Data scientists speak in experiments and results, while DevOps folks speak in reliability and repeatability. I think LLMs can help as a translator in a practical way. For instance, to generate tests and documentation so that “it works on my machine” becomes “it works in production.”

A simple example from my own work: when I’m building something like an API endpoint or a processing pipeline, I’ll use an LLM to help draft the boring but important parts, like test cases, edge cases, and clear error messages. This speeds up the process a lot and keeps the motivation going. I think the key is to treat the LLM as a junior who is fast, helpful, and occasionally wrong, so reviewing everything is essential.
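To make that concrete, here is a hedged sketch of the kind of draft such a step might produce: pytest tests for a hypothetical /predict endpoint like the one above (the service module name and the cases are illustrative). In the spirit of treating the LLM as a junior, this is exactly the sort of output you would review line by line rather than merge blindly.

```python
# Illustrative, LLM-draft-style tests for a hypothetical /predict endpoint.
# Every case here should be reviewed by a human before it is trusted.
from fastapi.testclient import TestClient

from service import app  # hypothetical module exposing the FastAPI app

client = TestClient(app)


def test_predict_happy_path():
    resp = client.post("/predict", json={"features": [1.0, 2.0, 3.0]})
    assert resp.status_code == 200
    assert "prediction" in resp.json()


def test_predict_empty_features_does_not_crash():
    # Edge case: an empty feature list should be handled, not raise a 500.
    resp = client.post("/predict", json={"features": []})
    assert resp.status_code in (200, 422)


def test_predict_rejects_malformed_payload():
    # Edge case: wrong types should yield a clear validation error (422).
    resp = client.post("/predict", json={"features": "not-a-list"})
    assert resp.status_code == 422
```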

You’ve cited research suggesting large growth in AI roles by 2027. If a junior data scientist could only learn one engineering skill this year to stay competitive, what should it be?

If I had to pick one, it would be to learn how to ship your work in a repeatable way! Take one project and make it something that can run reliably without you babysitting it. Because in the real world, the best model is useless if nobody can use it. And the people who stand out are the ones who can take an idea from a notebook to something real.
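One hedged reading of “runs reliably without babysitting,” as a minimal Python sketch (the config file, paths, and retry policy are invented for illustration): the notebook step becomes a small entry point that loads configuration, retries transient failures, and exits non-zero on failure so a scheduler can raise the alarm instead of a human.

```python
# Hypothetical sketch: turning a notebook step into a repeatable, unattended job.
import json
import logging
import sys
import time
from pathlib import Path

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("nightly-job")


def run_once(config: dict) -> None:
    # Stand-in for the real work (fetch data, score it, write results).
    out_dir = Path(config["output_dir"])
    out_dir.mkdir(parents=True, exist_ok=True)
    (out_dir / "results.json").write_text(json.dumps({"status": "ok"}))


def main(config_path: str = "config.json", max_retries: int = 3) -> int:
    config = json.loads(Path(config_path).read_text())
    for attempt in range(1, max_retries + 1):
        try:
            run_once(config)
            logger.info("run succeeded on attempt %d", attempt)
            return 0
        except Exception:
            logger.exception("attempt %d failed", attempt)
            time.sleep(2 ** attempt)  # simple backoff before retrying
    return 1  # non-zero exit lets cron, Airflow, or similar flag the failure


if __name__ == "__main__":
    sys.exit(main(*sys.argv[1:2]))
```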

Your recent work has focused heavily on LLMs and time series. Looking ahead into 2026, what’s the one emerging AI topic that you’re most excited to write about next?

I’m leaning more and more toward writing about practical AI workflows (how you go from an idea to something reliable). And if I do write about a “hot” topic, I want it to be useful, not just exciting. I want to write about what works and what breaks… The world of data science and AI is full of tradeoffs and ambiguity, and that has been fascinating me a lot.

I’m also getting more curious about AI as a system: how the different pieces interact together… stay tuned for this year’s articles!

To learn more about Sara’s work and stay up-to-date with her latest articles, you can follow her on TDS or LinkedIn.


