    Finding “Silver Bullet” Agentic AI Flows with syftr

    By ProfitlyAI · August 19, 2025 · 9 min read


    TL;DR

    The quickest way to stall an agentic AI project is to reuse a workflow that no longer fits. Using syftr, we identified “silver bullet” flows for both low-latency and high-accuracy priorities that consistently perform well across multiple datasets. These flows outperform random seeding and transfer learning early in optimization. They recover about 75% of the performance of a full syftr run at a fraction of the cost, which makes them a fast starting point but still leaves room for improvement.

    If you have ever tried to reuse an agentic workflow from one project in another, you know how often it falls flat. The model’s context length might not be sufficient. The new use case may require deeper reasoning. Or latency requirements may have changed.

    Even when the old setup works, it may be overbuilt, and overpriced, for the new problem. In those cases, a simpler, faster setup might be all you need.

    We set out to answer a simple question: Are there agentic flows that perform well across many use cases, so you can choose one based on your priorities and move forward?

    Our research suggests the answer is yes, and we call them “silver bullets.”

    We identified silver bullets for both low-latency and high-accuracy goals. In early optimization, they consistently beat transfer learning and random seeding, while avoiding the full cost of a complete syftr run.

    In the sections that follow, we explain how we found them and how they stack up against other seeding strategies.

    A quick primer on Pareto-frontiers

    You don’t need a math degree to follow along, but understanding the Pareto-frontier will make the rest of this post much easier to follow.

    Figure 1 is an illustrative scatter plot (not from our experiments) showing completed syftr optimization trials. Sub-plot A and Sub-plot B are identical, but B highlights the first three Pareto-frontiers: P1 (red), P2 (green), and P3 (blue).

    • Each trial: A specific flow configuration is evaluated on accuracy and average latency (higher accuracy and lower latency are better).
    • Pareto-frontier (P1): No other flow has both higher accuracy and lower latency. These flows are non-dominated.
    • Non-Pareto flows: At least one Pareto flow beats them on both metrics. These flows are dominated.
    • P2, P3: If you remove P1, P2 becomes the next-best frontier, then P3, and so on.

    You can choose between Pareto flows depending on your priorities (e.g., favoring low latency over maximum accuracy), but there is no reason to choose a dominated flow: there is always a better option on the frontier.
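To make the dominance idea concrete, here is a minimal sketch (illustrative only, not from the syftr codebase) that extracts the non-dominated frontier from a list of (accuracy, latency) trial results, assuming higher accuracy and lower latency are better:

```python
def pareto_frontier(trials):
    """Return the non-dominated trials: no other trial has both
    at-least-as-good accuracy and latency with one strictly better."""
    frontier = []
    for acc, lat in trials:
        dominated = any(
            (a >= acc and l <= lat) and (a > acc or l < lat)
            for a, l in trials
        )
        if not dominated:
            frontier.append((acc, lat))
    # Sort by latency so the frontier reads from fast/low-accuracy
    # to slow/high-accuracy.
    return sorted(frontier, key=lambda t: t[1])

trials = [(0.62, 1.1), (0.70, 2.4), (0.55, 0.9), (0.70, 3.0), (0.81, 3.5)]
print(pareto_frontier(trials))
# → [(0.55, 0.9), (0.62, 1.1), (0.70, 2.4), (0.81, 3.5)]
```

Note that (0.70, 3.0) is dropped: (0.70, 2.4) matches its accuracy at lower latency, so it is dominated.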

    Optimizing agentic AI flows with syftr

    Throughout our experiments, we used syftr to optimize agentic flows for accuracy and latency.

    This approach allows you to:

    • Select datasets containing question–answer (QA) pairs
    • Define a search space for flow parameters
    • Set objectives such as accuracy and cost, or in this case, accuracy and latency

    In short, syftr automates the exploration of flow configurations toward your chosen objectives.

    Figure 2 shows the high-level syftr architecture.

    Figure 2: High-level syftr architecture. For a set of QA pairs, syftr can automatically explore agentic flows using multi-objective Bayesian optimization by comparing flow responses with the expected answers.

    Given the virtually infinite number of possible agentic flow parametrizations, syftr relies on two key techniques:

    • Multi-objective Bayesian optimization to navigate the search space efficiently.
    • ParetoPruner to stop evaluation of likely suboptimal flows early, saving time and compute while still surfacing the best configurations.
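The pruning idea can be illustrated with a small sketch (our simplification, not syftr's actual ParetoPruner logic): stop evaluating a flow once even an optimistic completion of its remaining QA pairs could not reach the current frontier.

```python
def should_prune(partial_acc, avg_lat, frontier, remaining_frac):
    """Decide whether to stop evaluating a flow early.

    partial_acc:    accuracy on the QA pairs evaluated so far.
    avg_lat:        average latency observed so far.
    frontier:       current (accuracy, latency) Pareto-frontier.
    remaining_frac: fraction of QA pairs not yet evaluated.
    """
    # Optimistic completion: assume every remaining QA pair is
    # answered correctly and latency stays at its running average.
    best_possible_acc = (1 - remaining_frac) * partial_acc + remaining_frac
    # Prune if some frontier flow is at least as accurate as that
    # optimistic bound while also being no slower.
    return any(acc >= best_possible_acc and lat <= avg_lat
               for acc, lat in frontier)

frontier = [(0.80, 2.0)]
print(should_prune(0.30, 3.5, frontier, remaining_frac=0.5))  # → True  (hopeless)
print(should_prune(0.70, 1.0, frontier, remaining_frac=0.5))  # → False (still promising)
```

The optimistic bound makes the pruner safe: a flow is only discarded when no possible remaining outcome could place it on the frontier.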

    Silver bullet experiments

    Our experiments followed a four-part process (Figure 3).

    Figure 3: The workflow begins with a two-step data generation phase:
    A: Run syftr using simple random sampling for seeding.
    B: Run all finished flows on all other experiments. The resulting data then feeds into the next steps:
    C: Identifying silver bullets and conducting transfer learning.
    D: Running syftr on four held-out datasets three times, using three different seeding strategies.

    Step 1: Optimize flows per dataset

    We ran several hundred trials on each of the following datasets:

    • CRAG Task 3 Music
    • FinanceBench
    • HotpotQA
    • MultihopRAG

    For each dataset, syftr searched for Pareto-optimal flows, optimizing for accuracy and latency (Figure 4).

    Figure 4: Optimization results for four datasets. Each dot represents a parameter combination evaluated on 50 QA pairs. Red lines mark the Pareto-frontiers with the best accuracy–latency tradeoffs found by the TPE estimator.

    Step 3: Identify silver bullets

    Once we had identical flows across all training datasets, we could pinpoint the silver bullets: the flows that are Pareto-optimal on average across all datasets.

    Figure 5: Silver bullet generation process, detailing the “Identify Silver Bullets” step from Figure 3.

    Process:

    1. Normalize results per dataset. For each dataset, we normalize accuracy and latency scores by the best values in that dataset.
    2. Group identical flows. We then group matching flows across datasets and calculate their average accuracy and latency.
    3. Identify the Pareto-frontier. Using this averaged data (see Figure 6), we select the flows that form the Pareto-frontier.

    These 23 flows are our silver bullets: the flows that perform well across all training datasets.
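The three-step process can be sketched as follows (a minimal illustration; the flow IDs and the exact normalization convention are our assumptions, with latency normalized so that 1.0 is the fastest flow in each dataset):

```python
def pareto_ids(scores):
    """IDs whose (accuracy, normalized latency) pair is non-dominated
    (higher accuracy better, lower normalized latency better)."""
    return sorted(
        fid for fid, (acc, lat) in scores.items()
        if not any((a >= acc and l <= lat) and (a > acc or l < lat)
                   for a, l in scores.values())
    )

def find_silver_bullets(results):
    """results: {dataset: {flow_id: (accuracy, latency)}}, with every
    flow evaluated on every dataset."""
    # 1. Normalize per dataset by the best values observed in it.
    normed = {}
    for ds, flows in results.items():
        best_acc = max(acc for acc, _ in flows.values())
        best_lat = min(lat for _, lat in flows.values())
        normed[ds] = {fid: (acc / best_acc, lat / best_lat)
                      for fid, (acc, lat) in flows.items()}
    # 2. Group identical flows across datasets and average their scores.
    n = len(normed)
    flow_ids = next(iter(normed.values())).keys()
    averaged = {
        fid: (sum(normed[ds][fid][0] for ds in normed) / n,
              sum(normed[ds][fid][1] for ds in normed) / n)
        for fid in flow_ids
    }
    # 3. The silver bullets are the Pareto-frontier of the averages.
    return pareto_ids(averaged)

# Toy example with two datasets and three hypothetical flows.
results = {
    "hotpotqa":     {"f1": (0.80, 2.0), "f2": (0.60, 1.0), "f3": (0.70, 3.0)},
    "financebench": {"f1": (0.90, 4.0), "f2": (0.50, 2.0), "f3": (0.85, 5.0)},
}
print(find_silver_bullets(results))  # → ['f1', 'f2']
```

Here f3 drops out: averaged across datasets, f1 is both more accurate and faster.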

    Figure 6: Normalized and averaged scores across datasets. The 23 flows on the Pareto-frontier perform well across all training datasets.

    Step 4: Seed with transfer learning

    In our original syftr paper, we explored transfer learning as a way to seed optimizations. Here, we compared it directly against silver bullet seeding.

    In this context, transfer learning simply means selecting specific high-performing flows from historical (training) studies and evaluating them on held-out datasets. The data we use here is the same as for the silver bullets (Figure 3).

    Process:

    1. Select candidates. From each training dataset, we took the top-performing flows from the top two Pareto-frontiers (P1 and P2).
    2. Embed and cluster. Using the embedding model BAAI/bge-large-en-v1.5, we converted each flow’s parameters into numerical vectors. We then applied K-means clustering (K = 23) to group similar flows (Figure 7).
    3. Match experiment constraints. We limited each seeding strategy (silver bullets, transfer learning, random sampling) to 23 flows for a fair comparison, since that is how many silver bullets we identified.

    Note: Transfer learning for seeding is not yet fully optimized. We could use more Pareto-frontiers, select more flows, or try different embedding models.

    Figure 7: Clustered trials from Pareto-frontiers P1 and P2 across the training datasets.
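The embed-and-cluster step can be sketched with a tiny Lloyd's-iteration k-means over flow embedding vectors. The real setup embeds each flow's parameters with BAAI/bge-large-en-v1.5 and uses K = 23 (in practice one would reach for scikit-learn's KMeans); here we use toy 2-D vectors and K = 2 purely for illustration:

```python
import numpy as np

def cluster_representatives(vectors, k, iters=50, seed=0):
    """Minimal Lloyd's k-means; returns one representative index per
    cluster (the member closest to its centroid)."""
    rng = np.random.default_rng(seed)
    centroids = vectors[rng.choice(len(vectors), size=k, replace=False)]
    for _ in range(iters):
        # Assign each vector to its nearest centroid, then recompute.
        dists = np.linalg.norm(vectors[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        for c in range(k):
            members = vectors[labels == c]
            if len(members):
                centroids[c] = members.mean(axis=0)
    reps = []
    for c in range(k):
        idx = np.where(labels == c)[0]
        if len(idx):
            d = np.linalg.norm(vectors[idx] - centroids[c], axis=1)
            reps.append(int(idx[d.argmin()]))
    return sorted(reps)

# Toy stand-ins for flow embeddings: two well-separated groups.
vecs = np.array([[0.0, 0.0], [0.1, 0.0], [0.5, 0.0],
                 [5.0, 5.0], [5.1, 5.0], [5.5, 5.0]])
print(cluster_representatives(vecs, k=2))  # → [1, 4]
```

Picking one representative per cluster is how 23 clusters yield 23 diverse seed flows.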

    Step 5: Testing it all

    In the final evaluation phase (Step D in Figure 3), we ran ~1,000 optimization trials on four test datasets (Bright Biology, DRDocs, InfiniteBench, and PhantomWiki), repeating the process three times for each of the following seeding strategies:

    • Silver bullet seeding
    • Transfer learning seeding
    • Random sampling

    For each trial, GPT-4o-mini served as the judge, verifying an agent’s response against the ground-truth answer.

    Results

    We set out to answer:

    Which seeding approach (random sampling, transfer learning, or silver bullets) delivers the best performance on a new dataset in the fewest trials?

    For each of the four held-out test datasets (Bright Biology, DRDocs, InfiniteBench, and PhantomWiki), we plotted:

    • Accuracy
    • Latency
    • Cost
    • Pareto-area: a measure of how close the results are to the optimal outcome

    In each plot, the vertical dotted line marks the point at which all seeding trials have completed. After seeding, silver bullets showed on average:

    • 9% higher maximum accuracy
    • 84% lower minimum latency
    • 28% larger Pareto-area

    compared to the other strategies.
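One simple way to turn a frontier into a scalar Pareto-area score is to measure the fraction of the normalized accuracy-latency plane the frontier dominates (this is our illustrative definition, not necessarily the exact metric used in the experiments; a perfect flow with accuracy 1 at latency 0 would score 1.0):

```python
def pareto_area(frontier):
    """Fraction of the unit square dominated by a frontier of
    (accuracy, latency) points, both normalized to [0, 1]
    (higher accuracy better, lower latency better)."""
    pts = sorted(frontier, key=lambda t: t[1])  # ascending latency
    area, best_acc = 0.0, 0.0
    for i, (acc, lat) in enumerate(pts):
        next_lat = pts[i + 1][1] if i + 1 < len(pts) else 1.0
        best_acc = max(best_acc, acc)
        # Accuracy best_acc is achievable at every latency budget
        # between this point and the next one on the frontier.
        area += best_acc * (next_lat - lat)
    return area

# A faster low-accuracy flow plus a slower high-accuracy flow.
print(round(pareto_area([(0.5, 0.2), (0.9, 0.6)]), 4))  # → 0.56
```

Under this definition, a larger area means better accuracy-latency tradeoffs are available across the whole latency range, which is why it works as a single progress number per optimization run.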

    Bright Biology

    Silver bullets had the highest accuracy, lowest latency, and largest Pareto-area after seeding. Some random seeding trials did not finish. Pareto-areas for all strategies increased over time, but the gaps narrowed as optimization progressed.

    Figure 8: Bright Biology results

    DRDocs

    Similar to Bright Biology, silver bullets reached an 88% Pareto-area after seeding vs. 71% (transfer learning) and 62% (random).

    Figure 9: DRDocs results

    InfiniteBench

    The other methods needed ~100 additional trials to match the silver bullet Pareto-area, and still had not matched the fastest flows found via silver bullets by the end of ~1,000 trials.

    Figure 10: InfiniteBench results

    PhantomWiki

    Silver bullets again performed best after seeding. This dataset showed the widest cost divergence. After ~70 trials, the silver bullet run briefly focused on more expensive flows.

    Figure 11: PhantomWiki results

    Pareto-fraction analysis

    In runs seeded with silver bullets, the 23 silver bullet flows accounted for ~75% of the final Pareto-area after 1,000 trials, on average.

    • Red area: gains from optimization beyond the initial silver bullet performance.
    • Blue area: silver bullet flows still dominating at the end.

    Figure 12: Pareto-fraction for silver bullet seeding across all datasets

    Our takeaway

    Seeding with silver bullets delivers consistently strong results and even outperforms transfer learning, despite that strategy drawing on a diverse set of historical Pareto-frontier flows.

    For our two objectives (accuracy and latency), silver bullets always start with higher accuracy and lower latency than flows from the other strategies.

    In the long run, the TPE sampler erodes this initial advantage. Within a few hundred trials, results from all strategies generally converge, which is expected, since each should eventually find optimal flows.

    So, do agentic flows exist that work well across many use cases? Yes, to an extent:

    • On average, a small set of silver bullets recovers about 75% of the Pareto-area of a full optimization.
    • Performance varies by dataset: 92% recovery for Bright Biology compared to 46% for PhantomWiki.

    Bottom line: silver bullets are a cheap and efficient way to approximate a full syftr run, but they are not a replacement. Their impact may grow with more training datasets or longer training optimizations.

    Silver bullet parametrizations

    We used the following:

    LLMs

    • microsoft/Phi-4-multimodal-instruct
    • deepseek-ai/DeepSeek-R1-Distill-Llama-70B
    • Qwen/Qwen2.5
    • Qwen/Qwen3-32B
    • google/gemma-3-27b-it
    • nvidia/Llama-3_3-Nemotron-Super-49B

    Embedding models

    • BAAI/bge-small-en-v1.5
    • thenlper/gte-large
    • mixedbread-ai/mxbai-embed-large-v1
    • sentence-transformers/all-MiniLM-L12-v2
    • sentence-transformers/paraphrase-multilingual-mpnet-base-v2
    • BAAI/bge-base-en-v1.5
    • BAAI/bge-large-en-v1.5
    • TencentBAC/Conan-embedding-v1
    • Linq-AI-Analysis/Linq-Embed-Mistral
    • Snowflake/snowflake-arctic-embed-l-v2.0
    • BAAI/bge-multilingual-gemma2

    Flow types

    • vanilla RAG
    • ReAct RAG agent
    • Critique RAG agent
    • Subquestion RAG

    Here is the full list of all 23 silver bullets, sorted from low accuracy / low latency to high accuracy / high latency: silver_bullets.json.

    Try it yourself

    Want to experiment with these parametrizations? Use the running_flows.ipynb notebook in our syftr repository; just make sure you have access to the models listed above.

    For a deeper dive into syftr’s architecture and parameters, check out our technical paper or explore the codebase.

    We will also be presenting this work at the International Conference on Automated Machine Learning (AutoML) in September 2025 in New York City.


