In the Author Spotlight series, TDS Editors chat with members of our community about their career path in data science and AI, their writing, and their sources of inspiration. Today, we're thrilled to share our conversation with Mariya Mansurova.
Mariya's story is one of perpetual learning. Starting with a strong foundation in software engineering, mathematics, and physics, she's spent more than 12 years building expertise in product analytics across industries, from search engines and analytics platforms to fintech. Her distinctive path, including hands-on experience as a product manager, has given her a 360-degree view of how analytical teams can help businesses make the right decisions.
Now serving as a Product Analytics Manager, she draws energy from discovering fresh insights and innovative approaches. Each of her articles on Towards Data Science reflects her latest "aha!" moment: a testament to her belief that curiosity drives real progress.
You've written extensively about agentic AI and frameworks like smolagents and LangGraph. What excites you most about this emerging space?
I first started exploring generative AI largely out of curiosity and, admittedly, a bit of FOMO. Everyone around me seemed to be using LLMs or at least talking about them. So I carved out time to get hands-on, starting with the very basics like prompting techniques and LLM APIs. And the deeper I went, the more excited I became.
What fascinates me the most is how agentic systems are shaping the way we live and work. I believe this influence will only continue to grow over time. That's why I take every chance to use agentic tools like Copilot or Claude Desktop, or to build my own agents using technologies like smolagents, LangGraph, or CrewAI.
The most impactful use case of agentic AI for me has been coding. It's genuinely impressive how tools like GitHub Copilot can improve the speed and the quality of your work. While recent research from METR has questioned whether the efficiency gains are truly that substantial, I definitely notice a difference in my day-to-day work. It's especially helpful with repetitive tasks (like pivoting tables in SQL) or when working with unfamiliar technologies (like building a web app in TypeScript). Overall, I'd estimate about a 20% increase in speed. But this boost isn't just about productivity; it's a paradigm shift that also expands what feels possible. I believe that as agentic tools continue to evolve, we'll see a growing efficiency gap between the people and companies that have learned how to leverage these technologies and those that haven't.
When it comes to analytics, I'm especially excited about automated reporting agents. Imagine an AI that can pull the right data, create visualisations, perform root cause analysis where needed, track open questions, and even create the first draft of a presentation. That would be simply magical. I've built a prototype that generates such KPI narratives. And even though there's a significant gap between a prototype and a production solution that works reliably, I believe we'll get there.
You've written three articles in the "Practical Computer Simulations for Product Analysts" series. What inspired that series, and how do you think simulation can reshape product analytics?
Simulation is a hugely underutilised tool in product analytics. I wrote this series to show people how powerful and accessible simulations can be. In my day-to-day work, I keep encountering what-if questions like "How many operational agents will we need if we add this KYC control?" or "What's the likely impact of launching this feature in a new market?". You can simulate any system, no matter how complex. So simulations give me a way to answer these questions quantitatively and fairly accurately, even when hard data isn't yet available. I'm hoping more analysts will start using this approach.
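To make the first what-if question above concrete, here is a minimal Monte Carlo sketch (not code from the series); the case volume, review times, and agent capacity are all hypothetical placeholders:

```python
import numpy as np

rng = np.random.default_rng(42)

N_RUNS = 10_000          # number of simulated days
HOURS_PER_AGENT = 7.0    # productive hours per agent per day (assumption)

agents_needed = []
for _ in range(N_RUNS):
    # Daily case volume is uncertain: model it as Poisson (hypothetical mean)
    n_cases = rng.poisson(500)
    # Each KYC review takes a variable amount of time (lognormal, in minutes)
    review_minutes = rng.lognormal(mean=np.log(6), sigma=0.5, size=n_cases)
    total_hours = review_minutes.sum() / 60
    agents_needed.append(np.ceil(total_hours / HOURS_PER_AGENT))

# Plan for the 95th percentile of simulated demand, not just the average
print(f"Median agents needed: {np.median(agents_needed):.0f}")
print(f"95th percentile:      {np.percentile(agents_needed, 95):.0f}")
```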
Simulations also shine when working with uncertainty and distributions. Personally, I prefer bootstrap methods to memorising a long list of statistical formulas and significance criteria. Simulating the process often feels more intuitive, and it's less error-prone in practice.
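As a quick illustration of the bootstrap idea, here is a minimal sketch with made-up data: resample each group with replacement many times and read the confidence interval straight off the resulting distribution, no formula lookup required.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-user revenue for a control and a treatment group
control = rng.exponential(scale=10.0, size=1_000)
treatment = rng.exponential(scale=10.8, size=1_000)

observed_diff = treatment.mean() - control.mean()

# Resample each group with replacement and recompute the difference in means
boot_diffs = []
for _ in range(10_000):
    c = rng.choice(control, size=control.size, replace=True)
    t = rng.choice(treatment, size=treatment.size, replace=True)
    boot_diffs.append(t.mean() - c.mean())

low, high = np.percentile(boot_diffs, [2.5, 97.5])
print(f"Observed lift: {observed_diff:.2f}, 95% CI: [{low:.2f}, {high:.2f}]")
```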
Finally, I find it fascinating how technology has changed the way we do things. With today's computing power, where any laptop can run thousands of simulations in minutes or even seconds, we can easily solve problems that would have been challenging just thirty years ago. That's a game-changer for analysts.
Several of your posts focus on transitioning LLM applications from prototype to production. What common pitfalls do you see teams make during that phase?
Through practice, I've found there's a significant gap between LLM prototypes and production solutions that many teams underestimate. The most common pitfall is treating prototypes as if they're already production-ready.
The prototype phase can be deceptively smooth. You can build something functional in an hour or two, test it on a handful of examples, and feel like you've cracked the problem. Prototypes are great tools to prove feasibility and get your team excited about the opportunities. But here's where teams often stumble: these early versions provide no guarantees around consistency, quality, or safety when handling diverse, real-world scenarios.
What I've learned is that successful production deployment starts with rigorous evaluation. Before scaling anything, you need clear definitions of what "good performance" looks like in terms of accuracy, tone of voice, speed, and any other criteria specific to your use case. Then you need to track these metrics continuously as you iterate, ensuring you're actually improving rather than just changing things.
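In its simplest form, that tracking can be a small harness run on every iteration. The sketch below is illustrative only: `classify` stands in for whatever LLM-backed call you're iterating on, and the golden set is a hypothetical toy example.

```python
from typing import Callable

# A small "golden set" of labelled examples (in practice, far larger and
# sourced from real traffic)
GOLDEN_SET = [
    {"text": "I love the new dashboard!", "label": "positive"},
    {"text": "The app crashes on login.", "label": "negative"},
    {"text": "How do I export my data?", "label": "neutral"},
]

def evaluate(classify: Callable[[str], str]) -> float:
    """Run the candidate classifier over the golden set and return accuracy."""
    hits = sum(classify(ex["text"]) == ex["label"] for ex in GOLDEN_SET)
    return hits / len(GOLDEN_SET)

# Stub standing in for an LLM-backed classifier under test
def baseline_classify(text: str) -> str:
    return "negative" if "crash" in text.lower() else "positive"

print(f"Baseline accuracy: {evaluate(baseline_classify):.0%}")
```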
Think of it like software testing: you wouldn't ship code without proper testing, and LLM applications require the same systematic approach. This becomes especially critical in regulated environments like fintech or healthcare, where you need to demonstrate reliability not just to your internal team but to compliance stakeholders as well.
In these regulated areas, you'll need comprehensive monitoring, human-in-the-loop review processes, and audit trails that can withstand scrutiny. The infrastructure required to support all of this often takes far more development time than building the original MVP. That's something that consistently surprises teams who focus primarily on the core functionality.
Your articles often blend engineering ideas with data science/analytics best practices, such as your "Top 10 engineering lessons every data analyst should know." Do you think the line between data and engineering is blurring?
The role of a data analyst or a data scientist today often requires a mix of skills from multiple disciplines.
- We write code, so we share common ground with software engineers.
- We help product teams think through strategy and make decisions, so product management skills are useful.
- We draw on statistics and data science to build rigorous and comprehensive analyses.
- And to make our narratives compelling and actually influence decisions, we need to master the art of communication and visualisation.
Personally, I was lucky to gain a lot of programming experience early on, back at school and university. This background helped me tremendously in analytics: it increased my efficiency, helped me collaborate better with engineers, and taught me how to build scalable and reliable solutions.
I strongly encourage analysts to adopt software engineering best practices. Things like version control systems, testing, and code review help analytical teams develop more reliable processes and deliver higher-quality results. I don't think the line between data and engineering is disappearing entirely, but I do believe that analysts who embrace an engineering mindset will be far more effective in modern data teams.
You've explored both causal inference and cutting-edge LLM tuning techniques. Do you see these as part of a shared toolkit or separate mindsets?
That's actually a great question. I'm a strong believer that all these tools (from statistical methods to modern ML techniques) belong in a single toolkit. As Robert Heinlein famously said, "Specialisation is for insects."
I think of analysts as data wizards who help their product teams solve problems using whatever tools fit best: whether it's building an LLM-powered classifier for NPS comments, using causal inference to make strategic decisions, or building a web app to automate workflows.
Rather than specialising in particular skills, I prefer to focus on the problem we're solving and keep the toolset as broad as possible. This mindset not only leads to better results but also fosters a continuous learning culture, which is essential in today's fast-moving data industry.
You've covered a broad range of topics, from text embeddings and visualizations to simulation and multi-agent AI systems. What writing habit or guiding principle helps you keep your work so cohesive and approachable?
I usually write about topics that excite me at the moment, either because I've just learned something new or had an interesting discussion with colleagues. My inspiration often comes from online courses, books, or my day-to-day tasks.
When I write, I always think about my audience and how the piece can be genuinely useful both for others and for my future self. I try to explain all the concepts clearly and leave breadcrumbs for anyone who wants to dig deeper. Over time, my blog has become a personal knowledge base. I often return to old posts: sometimes just to copy a code snippet, sometimes to share a resource with a colleague who's working on something similar.
As we all know, everything in data is interconnected. Solving a real-world problem often requires a mix of tools and approaches. For example, if you're estimating the impact of launching in a new market, you might use simulation for scenario analysis, LLMs to explore customer expectations, and visualisation to present the final recommendation.
I try to reflect these connections in my writing. Technologies evolve by building on earlier breakthroughs, and understanding the foundations helps you go deeper. That's why many of my posts reference each other, letting readers follow their curiosity and discover how different pieces fit together.
Your articles are impressively structured, often walking readers from foundational concepts to advanced implementations. What's your process for outlining a complex piece before you start writing?
I believe I developed this way of presenting information in school, as these habits have deep roots. As the book The Culture Map explains, different cultures vary in how they structure communication. Some are concept-first (starting from fundamentals and iteratively moving toward conclusions), while others are application-first (starting with results and diving deeper as needed). I've definitely internalised the concept-first approach.
In practice, many of my articles are inspired by online courses. While watching a course, I outline the rough structure in parallel so I don't forget any important nuances. I also note down anything that's unclear and mark it for future study or experimentation.
After the course, I start thinking about how to apply the knowledge to a practical example. I firmly believe you don't really understand something until you try it yourself. Even though most courses include practical examples, they're often too polished. It's only when you apply the same ideas to your own use case that you run into edge cases and friction points. For example, the course might use OpenAI models, but I might want to try a local model, or the default system prompt in the framework doesn't work for my particular case and needs tweaking.
Once I have a working example, I move on to writing. I prefer to separate drafting from editing. First, I focus on getting all my ideas and code down without worrying about grammar or tone. Then I shift into editing mode: refining the structure, choosing the right visuals, putting together the introduction, and highlighting the key takeaways.
Finally, I read the whole thing end-to-end from the beginning to catch anything I've missed. Then I ask my partner to review it. They often bring a fresh perspective and point out things I didn't consider, which helps make the article more comprehensive and accessible.
To learn more about Mariya's work and stay up-to-date with her latest articles, follow her here on TDS and on LinkedIn.