    Beyond Prompting: The Power of Context Engineering

By ProfitlyAI | January 8, 2026

Context is everything an LLM can see before it generates an answer. This includes the prompt itself, instructions, examples, retrieved documents, tool outputs, and even the prior conversation history.

Context has a big impact on answer quality. For example, if you ask an LLM to write a SQL query without providing the data schema, the result will almost certainly be suboptimal. Worse, if the model has no access to the database at all, it may simply hallucinate a query that doesn't work. Even when tools are available, the model still needs extra time and effort to infer the schema before it can produce a correct answer.
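As a toy illustration (the schema and question below are invented for this example), compare how much the model has to guess in each case:

question = "Which customers spent the most last month?"

# Without context, the model must invent table and column names
prompt_without_schema = f"Write a SQL query. {question}"

# With the schema in context, the query can reference real tables
schema = (
  "orders(id INT, customer_id INT, amount DECIMAL, created_at DATE)\n"
  "customers(id INT, name TEXT)"
)
prompt_with_schema = f"Write a SQL query. {question}\n\nSchema:\n{schema}"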

Because context plays such a central role in LLM-based applications, context engineering has emerged as a discipline focused on systematically optimising what information goes into a model's prompt. The goal is to build "self-improving" systems that learn from experience without relying on expensive fine-tuning (retraining models and updating millions of parameters).

Context engineering comes with several key advantages:

• it's cheaper and doesn't require specialised fine-tuning expertise;
• context and instructions stay transparent, interpretable, and easy for humans to modify;
• iteration cycles are much faster, since updates can be made instantly without retraining or redeploying models;
• it's more agile, especially when information needs to be forgotten for privacy or legal reasons.

With all these advantages, it's not surprising that context engineering is receiving so much attention. What's fascinating, though, is how quickly the approaches themselves are evolving. In this article, I'll walk through that evolution and then experiment with one of the newer frameworks for prompt optimisation: Agentic Context Engineering (ACE).

    Evolution of context engineering approaches 

Context engineering didn't appear overnight. It has evolved through several distinct phases.

The earliest stage was static prompting. Here, prompts were hand-crafted instructions that never changed. Most of the effort went into classic prompt engineering: carefully choosing wording, structure, and formatting to squeeze better performance out of the model.

The next major step was dynamic retrieval. Instead of relying on a fixed prompt, systems began pulling in relevant information (documents, examples, or facts) at inference time. Retrieval-Augmented Generation (RAG) became one of the most popular approaches in this category. By grounding responses in external data, RAG significantly improved accuracy and reduced hallucinations, especially for knowledge-heavy tasks.

More recently, the focus has shifted toward self-improving contexts. Rather than treating context as something that is merely retrieved or injected, these approaches allow the system to update and refine its own context based on past performance. In other words, the prompt itself becomes adaptive, evolving through reflection and feedback.

A number of frameworks have emerged around this idea. Below are some of the most influential ones.

• One of the earliest and most important works is "Reflexion: Language Agents with Verbal Reinforcement Learning" by Shinn et al. This research introduced the idea that language agents can learn from mistakes through natural language reflection rather than gradient-based updates. Reflexion agents analyse feedback from previous attempts, generate verbal reflections about what went wrong, and store these reflections in an episodic memory buffer. These stored reflections then guide better decision-making in subsequent trials.
• Another important contribution is "TextGrad: Automatic Differentiation via Text" by Yuksekgonul et al. TextGrad borrows concepts from deep learning optimisation (such as gradients, backpropagation, and gradient descent) but replaces numerical derivatives with natural language feedback. In this framework, LLMs generate textual critiques describing how a variable should change to improve the outcome. These "textual gradients" are then propagated backwards through the system using prompting, effectively performing a natural-language version of backpropagation across a compound AI system.
• The paper "GEPA: Reflective Prompt Evolution Can Outperform Reinforcement Learning" by Agrawal et al. takes a different angle by combining evolutionary algorithms with language-based reflection. Prompts are treated like organisms: they mutate, compete, and evolve under selection pressure. Over time, better-performing prompts survive and propagate. This approach is implemented in DSPy, and Hugging Face provides a practical guide for applying it in real-world use cases.
• Finally, "Dynamic Cheatsheet: Test-Time Learning with Adaptive Memory" by Suzgun et al. explores test-time learning through persistent memory. In this setup, a black-box LLM is given a notebook where it can write down useful strategies, patterns, and code snippets during inference. Instead of repeatedly rediscovering the same insights, the model accumulates and reuses knowledge across tasks. This adaptive memory significantly improves performance without requiring explicit labels or human feedback.

    Agentic Context Engineering

Now that we've covered how context engineering has evolved, let's take a closer look at Agentic Context Engineering (ACE), one of the more recent approaches and the main focus of this article. ACE is introduced in the paper "Agentic Context Engineering: Evolving Contexts for Self-Improving Language Models" by Zhang et al., published in 2025.

The paper begins by identifying two key problems with current self-improving context methods.

• Brevity bias is the tendency for systems to oversimplify important details and gradually collapse toward short, generic prompts. While compact prompts are attractive, they often lose the nuances that actually drive good performance.
• Context collapse. When systems repeatedly rewrite the entire prompt, they tend to lose useful knowledge accumulated earlier. Over time, this leads to instability and regressions rather than steady improvement.

To address these issues, the authors propose Agentic Context Engineering (ACE), a framework designed for scalable and efficient context adaptation in both offline settings (such as system prompt optimisation) and online scenarios (like test-time memory adaptation). Instead of compressing knowledge into a single static prompt, ACE allows the model to continuously evolve its context by accumulating successful strategies, reflecting on failures, and organising knowledge in a structured way. Conceptually, it resembles an AI assistant that improves over time by keeping detailed notes and refining its own playbook.

At the core of ACE is an agentic learning loop that mirrors how humans learn through experimentation: try, reflect, and consolidate. The framework consists of three components:

• Generator, which produces reasoning trajectories while solving tasks;
• Reflector, which analyses successes and failures and distils actionable insights;
• Curator, which integrates these insights into the shared context as small, incremental updates.

Rather than maintaining a single monolithic prompt, ACE organises context as a playbook made up of structured bullet points. Each bullet contains metadata (such as a unique identifier and counters tracking how often it has been helpful or harmful) as well as content representing a small, reusable unit of knowledge. This might be a general strategy, a domain-specific concept, or a common failure mode.
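As a rough sketch (the field names here are my own, not taken from the paper's code), a single playbook bullet could be modelled like this:

from dataclasses import dataclass

@dataclass
class PlaybookBullet:
  bullet_id: str   # unique identifier, e.g. "cls-00002"
  helpful: int     # how often the Generator marked this bullet as helpful
  harmful: int     # how often it was marked harmful
  content: str     # the reusable unit of knowledge itself

bullet = PlaybookBullet(
  bullet_id="cls-00002",
  helpful=18,
  harmful=4,
  content="Apply specificity hierarchy: when multiple categories could apply, "
      "choose the most specific one that matches the contextual clues.",
)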

Figure from the paper Zhang et al., 2025 | source

The ACE workflow consists of several phases.

1. Generation phase. The Generator tackles new problems using the current playbook, marking which bullets were helpful or misleading.
2. Reflection phase. The Reflector analyses the full trajectory, extracting lessons from both successes and failures through iterative refinement.
3. Curation phase. The Curator turns these insights into compact "delta" updates: new or modified bullets that are merged into the existing playbook using lightweight, non-LLM logic.
4. Grow-and-refine phase. New bullets are appended, existing ones are updated in place, and periodic deduplication removes redundancy using semantic embeddings.

This design enables parallel processing of multiple updates and supports multi-epoch adaptation, where the same queries can be revisited to progressively strengthen the context over time.
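To make the "delta update" idea concrete, here is a simplified sketch of the non-LLM merge step, assuming bullets are keyed by their identifiers (my own approximation, not the authors' code):

def apply_delta_updates(playbook: dict, deltas: list) -> dict:
  """Merge Curator deltas into the playbook without any LLM calls."""
  for delta in deltas:
    bullet_id = delta["bullet_id"]
    if bullet_id in playbook:
      # Update an existing bullet in place (content and helpful/harmful counters)
      playbook[bullet_id].update(delta)
    else:
      # Append a brand-new bullet
      playbook[bullet_id] = delta
  return playbook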

Empirically, ACE delivers strong results. On benchmark evaluations, it outperforms other self-improving context approaches, achieving a +10.6% improvement on AI agent tasks and a +8.6% gain in specialised domains such as finance.

Figure from the paper Zhang et al., 2025 | source

Beyond accuracy, ACE is also more cost-efficient thanks to its incremental update mechanism, showing 83.6% lower token costs compared to baseline methods.

Together, these results position ACE as a practical and scalable step forward in building self-improving LLM systems.

Using ACE for banking intent data

The ACE framework looks promising on paper, so the next step is to see how it performs in practice. Fortunately, the authors have shared an open-source implementation on GitHub, which gives us a solid starting point.

Loading the data

To keep the experiment focused, I decided to apply ACE to a classification task. I'm using a publicly available dataset of banking intents released by PolyAI. This dataset reflects a very common real-world problem: identifying customer intent when someone contacts customer support. Accurate intent classification is essential for routing requests to the right team, triggering semi-automated responses, or simply monitoring recurring issues.

In this dataset, each customer message (for example, "I'm not sure why my card didn't work") needs to be mapped to a specific banking intent, such as declined_card_payment. In total, there are 77 distinct intent categories.

To keep the experiment manageable, I sampled 500 examples from the dataset and split them into training, test, and validation sets. Below is the code used to load the data and create the splits.

import random

import pandas as pd

full_df = pd.read_csv('./poly_ai_banking_data/train.csv')

# params
total_number_of_samples = 500
train_share = 0.5
test_share = 0.4
val_share = 0.1

sample_df = (full_df.sample(n=total_number_of_samples, random_state=42)
             .reset_index(drop=True))

random.seed(42)
sample_df['group'] = random.choices(['train', 'test', 'val'],
  weights=(train_share, test_share, val_share), k=total_number_of_samples)

train_df = sample_df[sample_df['group'] == 'train'].reset_index(drop=True)
test_df = sample_df[sample_df['group'] == 'test'].reset_index(drop=True)
val_df = sample_df[sample_df['group'] == 'val'].reset_index(drop=True)

Extending ACE to banking intent data

The next step is to extend the ACE framework so it can work with our banking intent dataset. Fortunately, the authors provide a detailed guide that makes this process relatively straightforward.

In addition to plugging in the new dataset, I made a couple of small modifications to the core framework to support Anthropic models and configurable temperature settings. You can find the complete, modified version of the code on GitHub.

Preparing the data

The first thing we need to do is prepare the dataset in a format that ACE expects. I saved the training, validation, and test splits as CSV files under banking/data. Each example contains:

• text: the customer support message,
• category: the target intent label we want to predict,
• group: an auxiliary field indicating whether the example belongs to the train, test, or validation set.

The group field won't be used later by the framework itself, but it's convenient for dataset management and reproducibility.

Here's what the data format looks like.

text,category,group
Is it possible for me to change my PIN number?,change_pin,test
What's the $1 transaction on my account?,extra_charge_on_statement,test
How much does top up fees cost?,top_up_by_card_charge,test
I live in the EU - can I get a card?,country_support,test
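For completeness, here is a minimal sketch of how the splits created earlier could be written into this format (assuming the sampled DataFrame keeps the dataset's original text and category columns):

import os

os.makedirs('./banking/data', exist_ok=True)

# Write each split with exactly the columns ACE expects: text, category, group
for name, df in [('train', train_df), ('val', val_df), ('test', test_df)]:
  df[['text', 'category', 'group']].to_csv(f'./banking/data/{name}.csv', index=False)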

Next, we need to tell ACE where to find each split. This is done by specifying dataset paths in banking/data/task_config.json.

{
  "banking": {
    "train_data": "./banking/data/train.csv",
    "val_data": "./banking/data/val.csv",
    "test_data": "./banking/data/test.csv"
  }
}

    Implementing the DataProcessor

To integrate a new task, the framework requires a custom DataProcessor module. According to the guide, this involves implementing three core methods: process_task_data, answer_is_correct, and evaluate_accuracy.

In addition, we need a helper function to load the raw data from disk. Let's start with that.

Below is the implementation of the data-loading function. It reads a CSV file, validates its existence, and converts each row into a dictionary that the rest of the pipeline can work with.

import csv
import os
from typing import Any, Dict, List

def load_data(data_path: str) -> List[Dict[str, Any]]:
  """
  Load and process data from a CSV file.

  Expected CSV format: text,category,group (with header)

  Args:
    data_path: Path to the CSV file

  Returns:
    List of dictionaries containing the data
  """
  if not os.path.exists(data_path):
    raise FileNotFoundError(f"Data file not found: {data_path}")

  data = []
  with open(data_path, 'r', encoding='utf-8') as f:
    reader = csv.DictReader(f)
    for row in reader:
      data.append({
        'text': row['text'],
        'category': row['category'],
        'group': row.get('group', '')
      })

  print(f"Loaded {len(data)} samples from {data_path}")
  return data

With the data-loading function in place, we can move on to implementing the remaining DataProcessor methods.

The main purpose of process_task_data is to convert the raw dataset into ACE's standardised input format.

ACE expects each example to contain three fields: context, question, and target. In our case, the mapping is fairly straightforward. We map the intent category directly to target, and we leave context empty, since there's no additional background information needed for classification.

The most important part here is the question. We added extra context to make it clear to the LLM that it should classify the query rather than answer it directly, while also providing the list of available topics to guide the LLM's response.

def process_task_data(self, raw_data: List[Dict]) -> List[Dict]:
  """
  Convert raw CSV data into the standardized format for ACE.

  Args:
    raw_data: Raw data loaded from CSV (list of dicts with 'text', 'category')

  Returns:
    List of dicts with keys: 'context', 'question', 'target'
  """
  processed_data = []

  # Gather the list of topics to include in the question
  topics_list = ", ".join(self.allowed_topics)

  for item in raw_data:
    customer_query = item.get('text', '')
    ground_truth_topic = item.get('category', '')

    # The question provides the classification task instruction
    question = (
      f"Classify the following banking customer support query into one of the predefined topics.\n\n"
      f"Customer Query: {customer_query}\n\n"
      f"Available Topics: {topics_list}\n\n"
      f"Answer with ONLY the topic name, nothing else."
    )

    processed_item = {
      "context": "",  # No additional context needed
      "question": question,
      "target": ground_truth_topic,
      "others": {
        "original_text": customer_query,
        "task": self.task_name,
      }
    }

    processed_data.append(processed_item)

  return processed_data

The next method, answer_is_correct, checks whether the model's prediction matches the ground truth label. Since we explicitly instruct the LLM to answer with only the category name, a simple case-insensitive string comparison is sufficient here.

def answer_is_correct(self, predicted: str, ground_truth: str) -> bool:
  """
  Check if the predicted topic matches the ground truth.
  Uses a simple case-insensitive comparison.

  Args:
    predicted: Model's predicted topic
    ground_truth: Ground truth topic

  Returns:
    bool: True if the prediction is correct, False otherwise
  """
  return predicted.lower().strip() == ground_truth.lower().strip()

The final method we need to implement is evaluate_accuracy, which computes overall classification accuracy across multiple predictions. There's nothing fancy going on here. We simply calculate the fraction of cases where answer_is_correct(prediction, ground_truth) returns True.

def evaluate_accuracy(self, predictions: List[str], ground_truths: List[str]) -> float:
  """
  Calculate classification accuracy across multiple predictions.

  Args:
    predictions: List of model predictions
    ground_truths: List of ground truth topics

  Returns:
    Accuracy as a float between 0 and 1
  """
  if len(predictions) != len(ground_truths):
    raise ValueError("Predictions and ground truths must have the same length")

  if not predictions:
    return 0.0

  correct = sum(
    1 for pred, truth in zip(predictions, ground_truths)
    if self.answer_is_correct(pred, truth)
  )

  return correct / len(predictions)

Putting together the workflow script

With the DataProcessor in place, the next step is to assemble a complete run script for ACE. I created a run_ace_workflow script that accepts several key arguments (a minimal argparse sketch follows the list):

• api_provider selects the language model API to use ('anthropic', 'openai', 'together', or 'sambanova'), defaulting to 'anthropic'.
• generator_model specifies the model for the Generator agent (default: 'claude-haiku-4-5').
• reflector_model specifies the model for the Reflector agent (default: 'claude-sonnet-4-5').
• curator_model specifies the model for the Curator agent (default: 'claude-sonnet-4-5').
• max_train and max_test are optional limits on the train and test set sizes, useful for quick experiments or debugging.
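Here is a rough sketch of how these options could be defined with argparse; the flag names mirror the list above, but the actual script in the repo may differ.

import argparse

parser = argparse.ArgumentParser(description="Run the ACE workflow on banking intents")
parser.add_argument('--api_provider', default='anthropic',
          choices=['anthropic', 'openai', 'together', 'sambanova'])
parser.add_argument('--generator_model', default='claude-haiku-4-5')
parser.add_argument('--reflector_model', default='claude-sonnet-4-5')
parser.add_argument('--curator_model', default='claude-sonnet-4-5')
parser.add_argument('--max_train', type=int, default=None,
          help='optional cap on the training set size')
parser.add_argument('--max_test', type=int, default=None,
          help='optional cap on the test set size')
args = parser.parse_args()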

Let's discuss how this script actually works. It starts by loading the banking intent data and initialising the DataProcessor. Here's the helper function I wrote for this.

def load_banking_data(max_train=None, max_test=None):
  """Load and process the banking dataset."""
  from banking.data_processor import DataProcessor, load_data

  base_path = os.path.dirname(__file__)
  data_path = os.path.join(base_path, "data")

  # Load raw data
  train_raw = load_data(os.path.join(data_path, "train.csv"))
  val_raw = load_data(os.path.join(data_path, "val.csv"))
  test_raw = load_data(os.path.join(data_path, "test.csv"))

  # Limit samples if specified
  if max_train:
    train_raw = train_raw[:max_train]
    val_raw = val_raw[:max(max_train // 4, 10)]
  if max_test:
    test_raw = test_raw[:max_test]

  # Process data
  processor = DataProcessor(task_name="banking")
  train_samples = processor.process_task_data(train_raw)
  val_samples = processor.process_task_data(val_raw)
  test_samples = processor.process_task_data(test_raw)

  return train_samples, val_samples, test_samples, processor

train_samples, val_samples, test_samples, processor = load_banking_data(
  max_train=args.max_train,
  max_test=args.max_test
)

The next step is to define a playbook template. This is important because the current ACE implementation can't dynamically create new sections, so we predefine the structure to guide the model. Here's the template I used for the banking domain.

    BANKING_PLAYBOOK_TEMPLATE = """
    ## GENERAL
    ## CLASSIFICATION PRINCIPLES
    ## CATEGORY DISAMBIGUATION
    ## BANKING DOMAIN KNOWLEDGE
    ## COMMON PATTERNS
    ## HANDLING AMBIGUOUS QUERIES
    ## COMMON MISTAKES TO AVOID
    ## OTHERS
    """

With the data and template ready, we can initialise the ACE object with the main parameters.

ace_system = ACE(
  api_provider=args.api_provider,
  generator_model=args.generator_model,
  reflector_model=args.reflector_model,
  curator_model=args.curator_model,
  max_tokens=4096,
  initial_playbook=BANKING_PLAYBOOK_TEMPLATE,
  use_bulletpoint_analyzer=True,  # enables deduplication of bullet points in the playbook
  generator_temperature=0.1,  # prioritising consistency for the generator
  reflector_temperature=0.7,  # prioritising creativity for the reflector and curator
  curator_temperature=0.7,
)

Finally, we define a function to run the ACE training workflow, which includes the initial evaluation, iterative reflection, curation, and the final evaluation.

def run_ace_training(ace_system, train_samples, val_samples, test_samples, processor, results_dir):
  """Train ACE to improve the playbook (includes initial and final evaluations)."""
  config = {
    'num_epochs': 1,
    'max_num_rounds': 3,  # max reflection rounds per sample
    'curator_frequency': 5,  # run the curator every 5 steps
    'eval_steps': max(len(train_samples) // 10, 10),  # evaluate 10 times during training
    'save_steps': max(len(train_samples) // 10, 10),
    'playbook_token_budget': 80000,
    'task_name': 'banking_ace',
    'json_mode': False,
    'no_ground_truth': False,
    'save_dir': os.path.join(results_dir, "training"),
    'test_workers': 10,
  }

  results = ace_system.run(
    mode='offline',
    train_samples=train_samples,
    val_samples=val_samples,
    test_samples=test_samples,
    data_processor=processor,
    config=config
  )

  # Extract results
  initial_acc = results.get('initial_test_results', {}).get('accuracy', 0)
  final_acc = results.get('final_test_results', {}).get('accuracy', 0)
  training_results = results.get('training_results', {})

  return ace_system.best_playbook, results

best_playbook, training_results = run_ace_training(
  ace_system, train_samples, val_samples, test_samples,
  processor, results_dir
)

And that's it! That's all the core logic we need to run ACE. I've added some logging on top of the workflow for convenience, but it's not essential to the main functionality.

Results

Let's take a look at the results and see how everything comes together. First, look at the best playbook, which you can find at results/banking_{dt}/best_playbook.txt. The playbook is organised into itemised bullets, grouped according to the categories we defined in our initial template. Each bullet contains detailed instructions and strategies, along with metadata showing how often it was marked helpful or harmful. This structure makes it easy to see which topics and strategies the system found most useful during training.

    ## GENERAL
    ## CLASSIFICATION PRINCIPLES
[cls-00001] helpful=1 harmful=0 :: Temporal indicators like 'was able to before', 'worked previously', or 'used to work' are strong signals that the issue is specific to the current transaction rather than a general system capability problem. These phrases suggest a change in status for a specific entity (beneficiary, card, account) rather than overall functionality.
[cls-00002] helpful=18 harmful=4 :: Apply specificity hierarchy: when multiple categories could apply, choose the most specific one that matches the contextual clues. For example, beneficiary_not_allowed (specific to recipient) is more specific than declined_transfer (general failure).
[cls-00009] helpful=0 harmful=3 :: Specificity hierarchy works bidirectionally: choose specific categories when contextual clues point to a particular transaction type, but use general categories (like 'extra_charge_on_statement') when the query lacks sufficient context to determine the specific nature of the transaction. Don't force specificity when the customer's query is inherently general.
[cls-00017] helpful=5 harmful=1 :: Process-oriented vs status-tracking distinction: Differentiate between questions about HOW to obtain/acquire something (process-oriented) versus questions about WHEN something will arrive or WHETHER it has arrived (status-tracking). Process questions focus on the steps and components needed, while status questions focus on timing and delivery confirmation. Use this distinction to choose between acquisition categories and tracking/arrival categories.
## CATEGORY DISAMBIGUATION
[dis-00003] helpful=1 harmful=0 :: declined_transfer vs beneficiary_not_allowed: If the customer mentions they could transfer before but suddenly cannot, this strongly indicates beneficiary_not_allowed (recipient is blocked/restricted) rather than declined_transfer (general transfer failure due to funds, limits, or system errors).
[dis-00011] helpful=11 harmful=0 :: pending_* vs failed_* vs declined_*: Transaction state is critical for classification. 'Hasn't gone through yet' or 'taking too long' = pending state. 'Didn't work', 'was declined', or 'was rejected' = failed/declined state. 'Money came back' or 'was returned' = reverted state. Match the category to the actual transaction state described.
[dis-00012] helpful=0 harmful=1 :: country_support vs supported_cards_and_currencies: Queries about geographic availability ('which countries', 'where can I', 'what regions') should be classified as 'country_support'. In contrast, 'supported_cards_and_currencies' is for questions about card types (Visa, Mastercard) and currency options, not geographic availability.
[dis-00014] helpful=2 harmful=0 :: Cash withdrawal issues: Distinguish by transaction state and outcome: 'pending_cash_withdrawal' (not completed yet, still processing), 'declined_cash_withdrawal' (rejected, no cash received), 'cash_withdrawal_not_recognised' (customer doesn't recall the transaction), and 'wrong_amount_of_cash_received' (transaction completed but an incorrect amount dispensed). If cash was received but the amount was wrong, use the most specific category: wrong_amount_of_cash_received.
[dis-00015] helpful=3 harmful=3 :: card_arrival vs get_physical_card: Distinguish between status-tracking questions (card_arrival) and process-acquisition questions (get_physical_card). 'card_arrival' is for tracking existing orders ('Has my card arrived?', 'Where is my card?'). 'get_physical_card' covers the entire process of obtaining a physical card, including all components like the PIN ('Where can I find my PIN?', 'How do I get my card and PIN?'). Questions about missing PINs with 'haven't gotten it yet' indicate the customer is in the acquisition process, not just tracking delivery.
[dis-00021] helpful=1 harmful=0 :: card_payment_not_recognised vs extra_charge_on_statement: When a customer mentions a 'payment' they don't recognise or didn't make ('payment I never submitted', 'payment I didn't authorize'), classify as 'card_payment_not_recognised' because 'payment' is a specific transaction type. Use 'extra_charge_on_statement' only when the customer describes unexpected amounts, fees, or charges WITHOUT specifying the transaction type (e.g., 'I see an extra $5 on my statement', 'there's a strange charge' without mentioning payment/transfer/withdrawal).
[dis-00024] helpful=0 harmful=1 :: Fee/charge category specificity: When customers ask about fees or charges, prioritize transaction-type-specific fee categories over 'extra_charge_on_statement'. If the query mentions a specific transaction type (transfer, payment, withdrawal, top-up), use the corresponding specific fee category: 'transfer_fee_charged' for transfer fees, 'card_payment_fee_charged' for payment fees, 'atm_fee_charged' for withdrawal fees, 'top_up_fee' for top-up fees. Reserve 'extra_charge_on_statement' only for fee queries where no specific transaction type is mentioned (e.g., 'Why is there an extra $5 charge?' without context).
[dis-00026] helpful=0 harmful=0 :: receiving_money vs transfer_into_account: Distinguish between passive receipt and active transfer. 'receiving_money' is for queries about receiving funds FROM another party (passive, initiated by the sender). 'transfer_into_account' is for queries about the customer initiating a transfer TO add funds to their own account (active, self-initiated). Context clues: empty/low balance + asking about transfers = likely transfer_into_account. Questions about 'can I transfer funds' in the context of needing to add money = transfer_into_account, not receiving_money.
[dis-00029] helpful=0 harmful=0 :: beneficiary_not_allowed vs declined_transfer: When a query explicitly mentions 'beneficiary' or 'recipient' combined with restriction language ('not allowed', 'blocked', 'restricted', 'cannot add', 'unable to add'), classify as 'beneficiary_not_allowed' even without temporal indicators. The combination of the specific banking entity term (beneficiary/recipient) with restriction language is a strong direct signal for recipient-level restrictions rather than general transfer failures.
## BANKING DOMAIN KNOWLEDGE
[bank-00006] helpful=0 harmful=0 :: In banking, when a previously successful transfer suddenly fails, common causes include: the beneficiary being flagged/blocked by fraud systems, beneficiary account restrictions, or the beneficiary being removed from the allowed list. These are distinct from general transfer declines due to insufficient funds or system errors.
[bank-00008] helpful=0 harmful=6 :: Small unexpected amounts (like £1, £0.01) appearing on statements often indicate authorization holds, verification charges, or miscellaneous fees. When customers question these without additional context, they should be classified as 'extra_charge_on_statement' rather than more specific transaction types.
[bank-00018] helpful=0 harmful=0 :: 'card_swallowed' is the banking industry term for ATM card retention scenarios where the machine keeps/retains the customer's card. This applies when cards are stuck, won't come out, or are held by the ATM, regardless of the specific phrasing used by the customer.
[bank-00020] helpful=10 harmful=4 :: Banking terminology has a specificity hierarchy for transaction references. Specific transaction type keywords include: 'payment' (card payments), 'transfer' (money transfers), 'withdrawal' (cash withdrawals), 'top-up' (account funding), 'direct debit', 'standing order'. Generic terms include: 'charge', 'amount', 'transaction', 'fee'. When a customer uses a specific transaction type keyword, it provides sufficient context to classify into transaction-type-specific categories rather than general categories.
## COMMON PATTERNS
[pat-00004] helpful=0 harmful=0 :: Pattern: 'It worked before, now it doesn't' + transfer context = likely a beneficiary-level restriction rather than a system-level decline. The previous success indicates the account and transfer mechanism are functional, pointing to a specific restriction on the current recipient.
[pat-00007] helpful=3 harmful=6 :: Pattern: Customer describes a transaction as 'strange', 'unexpected', 'unexplained', or asks 'what is this charge' on their statement without providing specific transaction type context (transfer, payment, withdrawal, etc.) = classify as 'extra_charge_on_statement'. This is the appropriate general category when the nature of the charge is unclear.
[pat-00010] helpful=8 harmful=1 :: Pattern: Phrases like 'hasn't gone through yet', 'still waiting', 'not completed', or 'still pending' indicate a transaction in a PENDING state, not a FAILED state. Choose 'pending_*' categories over 'failed_*' or 'declined_*' categories when these language cues are present.
[pat-00013] helpful=0 harmful=2 :: Pattern: Questions with geographic scope indicators like 'which countries', 'where can I', 'what regions', or 'in what locations' are asking about service availability by geography = classify as 'country_support'. The core intent is understanding the geographic reach of services.
[pat-00016] helpful=2 harmful=9 :: Pattern: 'Where can I find' or 'How do I get' phrasing indicates process-oriented questions seeking information about obtaining or acquiring something, not status-tracking questions. These should usually map to acquisition/setup categories (like 'get_physical_card') rather than delivery/tracking categories (like 'card_arrival' or 'card_delivery_estimate').
[pat-00019] helpful=0 harmful=0 :: Pattern: Phrases indicating a card is physically retained by an ATM ('card stuck in ATM', 'card won't come out', 'ATM kept my card', 'get my card out of the ATM', 'retrieve card from machine') should be classified as 'card_swallowed'. The key indicator is the card being physically held/retained by the machine rather than other card issues like damage, loss, or functionality problems.
[pat-00022] helpful=1 harmful=0 :: Pattern: Specific transaction type keyword + 'not recognised'/'didn't make'/'never submitted' = use the transaction-type-specific 'not_recognised' category. Examples: 'payment I didn't make' → card_payment_not_recognised; 'transfer I don't recognise' → transfer_not_received_by_recipient or a related transfer issue; 'withdrawal I never made' → cash_withdrawal_not_recognised. The presence of a specific transaction type keyword (payment, transfer, withdrawal) is sufficient context to avoid general categories.
[pat-00025] helpful=1 harmful=0 :: Pattern: Transaction type keyword + timing question ('how long', 'when will', 'how much time') + geographic mention = prioritize the transaction-specific timing category (e.g., 'transfer_timing', 'card_delivery_estimate'). Treat geographic mentions as contextual information about the transaction origin/destination unless the query explicitly asks about service availability ('which countries', 'where can I use', 'is it available in'). Example: 'transfer from China, how long?' → 'transfer_timing' (not 'country_support').
[pat-00027] helpful=0 harmful=0 :: Pattern: Account balance context + transfer inquiry = intent to add funds. When a customer mentions their account is empty/has no funds/needs money AND asks about transferring, they are asking about transferring funds INTO their account (transfer_into_account), not about receiving money from others (receiving_money). The account state provides critical context for disambiguating transfer-related intents.
## HANDLING AMBIGUOUS QUERIES
## COMMON MISTAKES TO AVOID
[err-00005] helpful=2 harmful=0 :: Don't default to general categories (like declined_transfer) when temporal context ('was able to before') suggests a more specific issue. The temporal change is a key discriminator that often points to entity-specific restrictions (beneficiary, card, account) rather than general failures.
[err-00023] helpful=2 harmful=0 :: Don't default to 'extra_charge_on_statement' when the customer mentions a specific transaction type (payment, transfer, withdrawal, top-up) they don't recognise. 'extra_charge_on_statement' should be reserved for truly ambiguous cases where no transaction type is specified. When a customer says 'payment I never made', the word 'payment' provides sufficient context to use 'card_payment_not_recognised' instead of the generic 'extra_charge_on_statement'.
[err-00028] helpful=0 harmful=0 :: Don't apply pattern rules or domain knowledge that are irrelevant to the query. If a query has no geographic indicators, don't apply geographic patterns. If there's no mention of fees, don't apply fee-related rules. Focus on rules that directly match the semantic content and context of the customer's query rather than grasping for any applicable rule. Irrelevant rule application leads to misclassification.
## OTHERS

For a deeper look at how each agent operates, you can explore the detailed execution logs at results/banking_{dt}/training/ace_run_{dt}/detailed_llm_logs. I highly recommend browsing these logs. At the very least, skim through the prompts and see how the Generator, Reflector, and Curator interact. It's a great way to understand how ACE evolves the context step by step.

Of course, the most interesting metric is accuracy. You can find the initial and final test results in results/banking_{datetime}/training/initial_test_results.json and results/banking_{datetime}/training/final_test_results.json.

# initial results
{
  "test_results": {
    "accuracy": 0.7512437810945274,
    "correct": 151,
    "total": 201,
        "no_answer": 0
      },
      "error_log": {
        "accuracy": 0.7512437810945274,
        "errors": [
          {
            "index": 2,
            "prediction": "declined_card_payment",
            "ground_truth": "declined_transfer"
          },
          {
            "index": 9,
            "prediction": "top_up_limits",
            "ground_truth": "automatic_top_up"
          },
          {
            "index": 7,
            "prediction": "transfer_not_received_by_recipient",
            "ground_truth": "balance_not_updated_after_cheque_or_cash_deposit"
          },
          ...
        ]
      }
    }
    
# final results
{
  "test_results": {
    "accuracy": 0.736318407960199,
    "correct": 148,
    "total": 201,
        "no_answer": 0
      },
      "error_log": {
        "accuracy": 0.736318407960199,
        "errors": [
          {
            "index": 9,
            "prediction": "top_up_limits",
            "ground_truth": "automatic_top_up"
          },
          {
            "index": 2,
            "prediction": "declined_card_payment",
            "ground_truth": "declined_transfer"
          },
          {
            "index": 7,
            "prediction": "pending_transfer",
            "ground_truth": "balance_not_updated_after_cheque_or_cash_deposit"
          },
          ...
        ]
      }
    }

The results, admittedly, are not very impressive. In fact, accuracy slightly dropped after optimisation, from 75.1% to 73.6%. But even negative results can teach us something valuable.

There are several likely reasons why ACE didn't show much benefit in this case:

• Limited data per category. We only had 248 training examples, 201 test examples, and 51 validation examples. However, our task involved 77 different categories. With so few examples per category, the model simply may not have had enough data to learn meaningful distinctions.
• Small and unrepresentative validation set. With only 51 examples, the validation set might not have captured the full diversity of customer queries, making it difficult for ACE to generate useful reflections and improvements.
• Task complexity. Our use case is relatively simple. As the authors note, ACE tends to shine in scenarios with large amounts of highly specialised domain knowledge or more complex agentic workflows, where reflection and iterative context refinement can significantly improve performance.

Using ACE for code generation

Inspired by the previous experiment, I decided to give ACE another try, this time on the Mostly Basic Python Problems dataset (available under the cc-by-4.0 license). Hopefully, the results would be more promising with a code generation task.

Data overview

Each example in the dataset contains three key components:

• Question, for example, "Write a function to reverse words in a given string."
• Ground truth implementation: Python reference code. For example, for the question above

def reverse_words(s):
  return ' '.join(reversed(s.split()))

• Test cases: assertions to validate the generated code, such as

[
    assert reverse_words("python program")==("program python"),
    assert reverse_words("java language")==("language java"),
    assert reverse_words("indian man")==("man indian")
]

Adding a new task to the ACE framework

We can follow similar steps to extend the ACE framework to handle coding tasks. I won't go into all the implementation details here, since you can find the full code on GitHub. However, it's worth highlighting the key differences compared to the banking intent example.

Coding tasks are inherently more complex. In the banking intent case, the model outputs a single category out of 77, which is easy to compare directly with the ground truth. In code generation, however, the LLM can produce arbitrary code, so we cannot simply check for exact matches. Instead, we need to run tests to determine whether the generated solution is correct.

# banking
def answer_is_correct(self, predicted: str, ground_truth: str) -> bool:
  return predicted.lower() == ground_truth.lower()

# coding
def answer_is_correct(self, predicted: str, ground_truth: str,
                    test_list: List[str], idx: int, save_dir: str) -> bool:
  code = extract_code_from_response(predicted)
  result = execute_code_with_tests(code, test_list, timeout=5)
  return result['success']

Because of this added complexity, I had to implement several enhancements in the DataProcessor for code generation:

• Code extraction. LLMs often include extra context around the code, such as Markdown formatting (```python ...```). We need to clean and extract the code to make sure it can compile correctly.
• Safe execution. Since we run the generated code to verify correctness, it's important to implement basic safety measures, such as timeouts and isolated execution environments (a sketch of both helpers follows the next snippet).
• Providing full context. It's essential to include all the necessary information in the question. If we just ask the LLM to generate code, it's unlikely to pass the tests because it won't be clear what function name or signature is expected. That's why it's important to provide all the necessary details in the question when standardising the data in the process_task_data function.
question = (
  f"Write a Python function to solve the following problem:\n\n"
  f"Problem: {problem_text}\n\n"
  f"Your code must pass the following test cases:\n"
  f"{test_cases_formatted}\n\n"
  f"Important: The test cases will be executed against your code. "
  f"Make sure your function name and signature match what the tests expect.\n\n"
  f"Answer with ONLY the Python code, no explanations."
)
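The real helpers live in the GitHub repo; purely as an illustration of the code-extraction and safe-execution points above, here is a simplified sketch of what extract_code_from_response and execute_code_with_tests might look like (the function names match the calls above, but the bodies are my own approximation):

import re
import subprocess
import sys
from typing import List

def extract_code_from_response(response: str) -> str:
  """Strip Markdown fences like ```python ... ``` if present."""
  match = re.search(r"```(?:python)?\s*(.*?)```", response, re.DOTALL)
  return match.group(1).strip() if match else response.strip()

def execute_code_with_tests(code: str, test_list: List[str], timeout: int = 5) -> dict:
  """Run the generated code plus each assertion in a separate process."""
  passed, errors = 0, []
  for i, test in enumerate(test_list, 1):
    script = f"{code}\n{test}\n"
    try:
      proc = subprocess.run([sys.executable, "-c", script],
                  capture_output=True, text=True, timeout=timeout)
      if proc.returncode == 0:
        passed += 1
      else:
        last_line = (proc.stderr.strip().splitlines() or ['unknown error'])[-1]
        errors.append(f"Test {i}: {last_line}")
    except subprocess.TimeoutExpired:
      errors.append(f"Test {i}: timeout")
      return {'success': False, 'passed': passed, 'total': len(test_list),
          'timeout': True, 'errors': errors}
  return {'success': passed == len(test_list), 'passed': passed,
      'total': len(test_list), 'timeout': False, 'errors': errors}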

In the original ACE implementation, the Reflector compared the generated code directly with the ground truth, which works for classification tasks. For coding, however, this approach doesn't make sense: multiple correct solutions can exist, and optimising for code that "looks similar" to the reference doesn't guarantee it will pass the tests.

To address this, I implemented a new method, get_test_feedback, which provides the Reflector with the actual test execution results and error messages. The test output becomes the primary signal for correctness, giving much more informative feedback than simple code comparison.

def get_test_feedback(self, predicted: str, ground_truth: str, test_list: List[str] = None) -> str:
  """
  Get detailed test execution feedback for the reflector.

  This method provides the reflector with actual test results and error messages,
  which is more informative than just comparing the generated code with the ground truth.
  The test output is the primary signal for correctness in code generation tasks.

  Args:
      predicted: Model's predicted code
      ground_truth: Ground truth code (reference only, not used for evaluation)
      test_list: List of test assertions to run

  Returns:
      str: Detailed feedback string with test execution results
  """
  if test_list is None:
      return "No test cases provided - cannot evaluate code."

  # Extract code from the response if needed
  code = extract_code_from_response(predicted)

  # Execute the code with tests
  result = execute_code_with_tests(code, test_list, timeout=self.timeout)

  # Build detailed feedback
  feedback_parts = []

  if result['success']:
    feedback_parts.append(f"✓ All {result['total']} tests PASSED")
    feedback_parts.append("\nTest cases executed successfully:")
    for i, test in enumerate(test_list, 1):
        feedback_parts.append(f"  {i}. {test} ✓")
  else:
    feedback_parts.append(f"✗ Tests FAILED: {result['passed']}/{result['total']} tests passed")

    if result['timeout']:
      feedback_parts.append("\n⏱ TIMEOUT: Code execution exceeded the time limit")

    if result['errors']:
      feedback_parts.append("\n--- ERROR DETAILS ---")
      for error in result['errors']:
        feedback_parts.append(f"  • {error}")

    # Show which tests passed vs failed
    feedback_parts.append("\n--- TEST RESULTS ---")
    for i, test in enumerate(test_list, 1):
      # Check if this specific test appears in the errors
      test_failed = any(f"Test {i}" in err for err in result.get('errors', []))
      status = "✗ FAILED" if test_failed else "✓ passed"
      feedback_parts.append(f"  {i}. {test} - {status}")

  # Add the extracted code for reference
  feedback_parts.append("\n--- EXTRACTED CODE ---")
  feedback_parts.append(code)

  return "\n".join(feedback_parts)

Alongside this new method, I created a dedicated Reflector prompt tailored for code generation. Its focus is on test results, not line-by-line code comparison.

You are an expert code reviewer and educator. Your task is to analyze why generated code passed or failed test cases, and identify patterns that lead to correct or incorrect solutions.

**IMPORTANT: Test execution results are the PRIMARY signal for correctness.**
- The code is correct if and only if ALL tests pass
- Do NOT compare implementations line-by-line with the reference - different implementations can be equally correct
- Focus on understanding WHY tests passed or failed based on the code's logic

**Instructions:**
- First, examine the Test Execution Results to determine whether the code is correct
- If tests FAILED: Analyze what caused the failure (syntax errors, logic errors, edge cases, incorrect algorithm)
- If tests PASSED: Identify what the model did well that led to success
- The "Potential Implementation" is just ONE way to solve the problem - the model's approach may be different but equally valid
- Provide actionable insights for improving code generation in the future
- Tag bulletpoints as helpful/harmful/neutral based on whether they contributed to passing tests

Your output should be a json object, which contains the following fields:
  - reasoning: analyze the test results and the code's logic, explain why tests passed/failed
  - error_identification: if tests failed, what specific issue caused the failure? If tests passed, state "No errors - all tests passed"
  - root_cause_analysis: what underlying concept or pattern led to success or failure?
  - correct_approach: what coding strategy or pattern should be used for similar problems?
  - key_insight: what principle should be remembered for future code generation tasks?
  - bullet_tags: a list of json objects with bullet_id and tag for each bulletpoint

**Question:**
{}

**Model's Reasoning Trace:**
{}

**Model's Generated Code:**
{}

**Potential Implementation (Reference Only - NOT the only correct solution):**
{}

**Test Execution Results (PRIMARY SIGNAL):**
{}

**Part of the Playbook that is used by the generator to answer the question:**
{}

**Answer in this exact JSON format:**
    {{
      "reasoning": "[Analyze test results and code logic - why did tests pass or fail?]",
      "error_identification": "[What caused test failures? Or 'No errors - all tests passed']",
      "root_cause_analysis": "[What concept/pattern led to success or failure?]",
      "correct_approach": "[What coding strategy works for this type of problem?]",
      "key_insight": "[What principle should be remembered for future code generation?]",
      "bullet_tags": [
        {{"id": "code-00001", "tag": "helpful"}},
        {{"id": "code-00002", "tag": "harmful"}}
      ]
    }}

This coding-specific Reflector is automatically used whenever the task name contains "coding".

Results

Finally, I ran the prompt optimisation process on a dataset of 500 samples, split into train, test, and validation sets. This time, the results are much more promising: accuracy improved significantly, from 71.1% to 87.1%. In this case, ACE clearly helped optimise the prompts and guide the model toward correct solutions.

Looking at the best playbook, it's quite extensive. Many of the most helpful patterns are general principles, such as:

• Write the simplest correct, Pythonic solution first,
• Treat test cases as the true specification,
• Verify correctness before any further optimisation.

At the same time, the playbook also includes very specific guidance, for example, detailed instructions for tasks like GCD calculations.

Overall, this shows that ACE can effectively capture both high-level strategies and task-specific tips.

    ## GENERAL
    ## COMMON MISTAKES TO AVOID
    [err-00003] useful=5 dangerous=0 :: Do not add pointless complexity to recursive algorithms. For instance, in GCD implementations, express min/max logic or particular circumstances for checking if a price equals 1 are redundant when utilizing the usual Euclidean algorithm.
    [err-00007] useful=0 dangerous=0 :: Do not assume downside constraints match your algorithm's mathematical stipulations. For instance, Fermat's Little Theorem for modular inverse requires a PRIME modulus - confirm the issue ensures this earlier than utilizing pow(a, p-2, p). If constraints aren't specified, select extra common algorithms.
    ## OTHERS
    ## CODE GENERATION PRINCIPLES
    [cgp-00002] helpful=41 harmful=2 :: Prefer minimal, mathematically sound implementations over complex ones. Avoid adding unnecessary preprocessing logic (like min/max) or special-case checks when the core algorithm naturally handles all scenarios.
    [cgp-00012] helpful=91 harmful=2 :: Always ensure generated code is syntactically complete before finalizing output. Verify all opened brackets, braces, and parentheses are properly closed, and all statements are fully formed. Incomplete code generation (truncation mid-statement) causes syntax errors that prevent execution regardless of algorithmic correctness.
    [cgp-00020] helpful=6 harmful=0 :: When a problem explicitly requires using lambda functions, integrate them naturally with Python's functional programming tools (map, filter, reduce, sorted with key parameter). Don't force lambda usage where it is awkward - these built-in functions are designed to work seamlessly with lambdas for operations like filtering, transformation, and counting.
    [cgp-00024] helpful=140 harmful=2 :: Prioritize readable, Pythonic solutions using built-in functions over performance-optimized complex algorithms unless the problem explicitly requires optimization or involves large-scale data. A clear solution using bin(), str methods, or list comprehensions is often preferable to bit manipulation or manual loops. Optimize only when necessary.
    [cgp-00047] helpful=56 harmful=2 :: Follow a correctness-first development strategy: (1) implement the straightforward algorithm that correctly solves the problem, even if it isn't optimally efficient, (2) verify correctness with test cases, (3) only then consider optimization if performance is inadequate or the problem explicitly requires it. A correct O(n) solution is infinitely better than a buggy O(log n) attempt. Premature optimization often introduces errors in logic, especially for mathematical or algorithmic problems.
    [cgp-00050] helpful=0 harmful=0 :: When multiple algorithmically correct solutions exist, prefer the one with better time/space complexity. A correct O(1) formula-based solution is superior to a correct O(n) iterative solution. However, only optimize if you can maintain correctness - a working O(n) solution is infinitely better than a buggy O(1) attempt. Verify the more efficient approach passes all tests before committing to it.
    [cgp-00053] helpful=0 harmful=0 :: When implementing mathematical optimizations (especially for pair/combination counting), verify the optimized approach against test cases through manual calculation BEFORE coding. For each test case: (1) apply your mathematical insight to predict the output, (2) confirm it matches the expected output, (3) only then implement. This catches errors in mathematical reasoning early, preventing bugs that are harder to debug in code than in math.
    [cgp-00057] helpful=0 harmful=0 :: Avoid shadowing Python built-in names (dict, list, str, int, set, tuple, etc.) when naming variables or parameters. Use descriptive alternatives instead: 'd' or 'data' instead of 'dict', 'lst' or 'items' instead of 'list', 's' or 'text' instead of 'str'. Shadowing built-ins makes them inaccessible in that scope and reduces code readability, even though it is syntactically valid.
    [cgp-00059] helpful=2 harmful=0 :: Include defensive programming practices (input validation, bounds checking, type checking) even when not explicitly tested by visible test cases. For string indexing, validate index bounds before access. For numeric conversions, verify the input is a valid digit. For list operations, check for empty collections. These safeguards increase code robustness and prevent runtime errors on edge cases that may exist in hidden tests, demonstrating production-quality coding practices.
    [cgp-00074] helpful=0 harmful=0 :: For operations involving powers of two, prefer bitwise shift operators over arithmetic operations for clarity and efficiency: use left shift (1 << k) instead of 2**k or pow(2, k) for computing 2^k, use right shift (n >> k) instead of n // (2**k) for dividing by powers of two. Bitwise operators make the bit-level intent explicit and are the idiomatic approach in bit manipulation contexts. This is especially valuable when working with bit positions and their corresponding values.
    [cgp-00081] helpful=0 harmful=0 :: Before using standard library mathematical constants (math.pi, math.e, etc.), validate that test cases expect full-precision values by calculating one test output and comparing it to the expected value. If expected outputs suggest truncated/simplified constants (pi=3.14, pi=3.1415, e=2.718), use hardcoded values matching test precision instead of library constants. Pattern: (1) identify the mathematical constant needed, (2) calculate a test output with the standard constant, (3) if a mismatch exists, derive the constant value that produces the exact expected outputs, (4) use the hardcoded value. Test case expectations override mathematical purity.
    ## COMMON PYTHON PATTERNS
    [cpp-00010] helpful=23 harmful=0 :: For finding elements with maximum/minimum properties based on a criterion, use the built-in max()/min() functions with the key parameter. Example: max(list_of_lists, key=len) finds the longest list. This is more Pythonic and readable than manual iteration with comparisons.
    [cpp-00013] helpful=17 harmful=0 :: For counting or searching operations in Python collections (tuples, lists, strings), prioritize built-in methods: use .count() for occurrence counting, .index() for finding positions, .find() for strings. These are more reliable, efficient, and Pythonic than manual iteration with counters or loops.
    [cpp-00014] helpful=3 harmful=0 :: When working with mixed-type data structures, use isinstance() for type checking to distinguish between different element types. Combine with len() checks to validate structure. Example: isinstance(item, list) and len(item) == 2 reliably identifies 2-element lists in mixed collections.
    [cpp-00015] helpful=3 harmful=0 :: Use extend() instead of append() when adding multiple elements from a sequence to a list. extend() adds elements individually to the target list, whereas append() would add the entire sequence as a single nested element. Example: result.extend([value] * count) vs result.append([value] * count).
    [cpp-00016] helpful=2 harmful=0 :: Use list multiplication ([value] * count) to efficiently repeat elements. This is more Pythonic and readable than manual loops for creating repeated elements. Combine with extend() for adding repeated elements to existing lists.
    [cpp-00019] helpful=2 harmful=0 :: For counting elements matching a condition with lambda functions, use sum(map(lambda x: 1 if condition else 0, iterable)) as an elegant alternative to len(list(filter(lambda x: condition, iterable))). The sum(map()) approach maps elements to 1/0 and sums them, often more readable and efficient than filtering then counting.
    [cpp-00026] helpful=14 harmful=0 :: For converting sequences (tuples, lists) of characters/strings into a single string, use the str.join() method: ''.join(sequence) for character concatenation, or 'separator'.join(sequence) for joining with delimiters. This is the idiomatic Python approach - more readable and performant than manual loops with += or accumulation patterns.
    [cpp-00030] helpful=1 harmful=0 :: For character classification with regex, use re.findall() with mutually exclusive character class patterns. For 'everything else' categories (like special characters), prefer negation patterns [^...] over enumerating specific characters - e.g., [^A-Za-z0-9] captures all non-alphanumeric characters comprehensively, avoiding the brittleness of lists like [,.!?]. Ensure patterns don't overlap to prevent double-counting.
    [cpp-00031] helpful=2 harmful=0 :: For finding a global maximum/minimum across nested iterables (list of tuples, list of lists, etc.), use nested generator expressions with built-in max()/min(): `max(element for container in containers for element in container)`. This pattern naturally flattens one level of nesting without creating intermediate lists, making it ideal for finding extremes across tuple records or sublists. More efficient and readable than manual iteration.
    [cpp-00033] helpful=2 harmful=0 :: For index-based access to dictionary keys, use the pattern list(dict)[index] or list(dict.keys())[index]. This relies on the Python 3.7+ guarantee that dictionaries maintain insertion order. Converting the dictionary to a list extracts the keys in order, allowing standard list indexing. This is the idiomatic Python solution for mapping numeric indices to dictionary keys.
    [cpp-00036] helpful=27 harmful=2 :: For mathematical operations (GCD, LCM, factorial, prime checking, trigonometry), check Python's math module FIRST before implementing algorithms manually. Built-in functions like math.gcd(), math.factorial(), math.isqrt() are well-tested, optimized, and reduce implementation errors. Pattern: (1) Understand the mathematical definition, (2) Check if the math module provides the operation, (3) Use it directly or wrap it with problem-specific logic (e.g., is_coprime = math.gcd(a,b) == 1).
    [cpp-00038] helpful=0 harmful=0 :: For checking if a number is a perfect square, use math.isqrt() instead of math.sqrt() to avoid floating-point precision errors. Pattern: b = math.isqrt(n); is_perfect_square = (b * b == n). The isqrt() function returns the integer square root, and squaring it back allows exact integer comparison without floating-point rounding issues.
    [cpp-00043] helpful=0 harmful=0 :: For character filtering problems (removing/keeping characters based on membership criteria), use the set+comprehension+join pattern: (1) Convert the filter criteria into a set for O(1) lookup (char_set = set(filter_string)), (2) Use a list comprehension or generator expression to filter (char for char in source if char not in char_set), (3) Use ''.join() to reconstruct the string. This pattern is more Pythonic, readable, and maintainable than manual index manipulation or character counting approaches, while being equally correct and efficient.
    [cpp-00049] helpful=0 harmful=0 :: When returning tuples or lists with mixed numeric types (integers and floats), use the appropriate division operator for each component: integer division (//) for whole-number results, regular division (/) for decimal results. Example: for sum and average, return (n*(n+1)//2, n*(n+1)/2/n) to ensure the sum is an int and the average is a float. This prevents type mismatches in test assertions.
    [cpp-00054] helpful=0 harmful=0 :: For digit-by-digit comparison or manipulation problems (digit distance, digit sum differences, etc.): Use the string conversion pattern: (1) Convert integers to strings with str(), (2) Use zfill(max_length) to pad shorter numbers with leading zeros for equal length, (3) Use zip() to pair corresponding digit positions, (4) Apply operations on paired digits and aggregate the results. Example: str(num1).zfill(length) and str(num2).zfill(length) then zip() for pairing. This handles different-length numbers elegantly and provides clear positional access to digits.
    [cpp-00056] helpful=5 harmful=0 :: For checking if all/any elements in a collection satisfy a condition, use Python's built-in all() or any() functions with generator expressions. Pattern: all(condition for item in iterable) for universal quantification (all must satisfy), any(condition for item in iterable) for existential quantification (at least one satisfies). This is more Pythonic, readable, and efficient than manual loops with flags. Common use cases: all(v == target for v in dict.values()) for value uniformity, any(x > threshold for x in lst) for threshold checking, all(isinstance(x, int) for x in collection) for type validation.
    [cpp-00060] helpful=0 harmful=0 :: For whitespace normalization (collapsing multiple spaces/whitespace into single spaces), use the split-join pattern: ' '.join(s.split()). The key insight: str.split() without arguments has special behavior - it splits on ANY whitespace (spaces, tabs, newlines) AND automatically removes empty strings from the result, naturally collapsing consecutive whitespace. Combined with ' '.join(), this creates a clean solution without regex imports. This pattern is more Pythonic and maintainable than regex alternatives like re.sub(r' +', ' ', s) for simple whitespace normalization tasks.
    [cpp-00062] helpful=0 harmful=0 :: For complex number operations (polar/rectangular conversion, phase calculation, magnitude), use Python's cmath module functions as the first choice: cmath.polar(z) for conversion to polar form (returns magnitude and angle), cmath.rect(r, phi) for polar to rectangular, cmath.phase(z) for angle extraction. These built-in functions handle edge cases correctly (e.g., treating real numbers as complex with imaginary part 0) and are more reliable than manual trigonometric calculations. Pattern: import cmath → use the appropriate function → handle the return type (often tuples).
    [cpp-00064] helpful=0 harmful=0 :: For grouping elements by a key while preserving insertion order (critical for tie-breaking in subsequent sorting), use collections.OrderedDict with the setdefault pattern: from collections import OrderedDict; grouped = OrderedDict(); for item in items: grouped.setdefault(key, []).append(value). While Python 3.7+ dicts maintain insertion order, OrderedDict makes the intent explicit and is safer when order matters for downstream operations like sorting by aggregated properties where equal values should keep their original encounter order.
    [cpp-00065] helpful=0 harmful=0 :: For creating tuples with variable-length unpacked elements, use the * unpacking operator: (first, *middle_elements, last) unpacks a list/tuple into individual tuple positions. Example: (key, *values, count) where values is a list creates a tuple with key, all values unpacked as separate elements, and count at the end. This is essential when the output format requires flattening nested structures into single-level tuples with variable element counts.
    [cpp-00069] helpful=0 harmful=0 :: For regex pattern matching problems requiring full string matches, choose between re.search(), re.match(), and re.fullmatch() based on matching scope: re.match() matches from the start, re.search() finds patterns anywhere, re.fullmatch() requires the entire string to match. When full string matching is required, either use re.fullmatch() with the pattern directly, or use re.search()/re.match() with explicit anchors (^ for start, $ for end). Example: re.fullmatch('a.*b', s) is equivalent to re.search('^a.*b$', s). Both approaches are valid - fullmatch() makes the intent explicit, while search() with anchors provides more flexibility. Always analyze test cases to determine whether partial or full string matching is required.
    [cpp-00072] helpful=1 harmful=0 :: For counting elements in an iterable that match a condition, use the generator expression pattern with sum(): sum(1 for x in iterable if condition). This provides an optimal balance of readability, memory efficiency, and Pythonic style compared to alternatives like len([x for x in iterable if condition]), which creates an intermediate list. For character-level string operations, prefer built-in string methods (isdigit(), isalpha(), isalnum(), isupper(), islower()) over manual ASCII range comparisons - they handle edge cases correctly, improve readability, and are more maintainable.
    [cpp-00073] helpful=0 harmful=0 :: For bit manipulation problems (finding set bits, MSB/LSB positions, bit counting), check Python's integer bit methods FIRST before implementing manual algorithms: bit_length() returns the number of bits needed to represent the integer (useful for the MSB position), bit_count() counts set bits (Python 3.10+), as_integer_ratio() gives a rational representation. These built-in methods are optimized, handle edge cases (including 0), and often eliminate the need for manual bit-by-bit iteration. Pattern: understand what bit property you need, check if a built-in method provides it directly.
    [cpp-00076] helpful=0 harmful=0 :: For grouping consecutive identical elements in a sequence, use itertools.groupby() as the canonical Python solution. Pattern: [list(group) for key, group in itertools.groupby(sequence)]. The groupby function returns (key, group_iterator) tuples where key is the element value and group is an iterator of consecutive occurrences. Convert each group iterator to a list to materialize results. Essential distinction: groupby groups CONSECUTIVE identical elements only - non-consecutive duplicates form separate groups, making it ideal for run-length encoding and consecutive duplicate detection without manual index tracking.
    ## HANDLING EDGE CASES
    [hec-00021] helpful=2 harmful=0 :: When using mathematical operations like modulo (%), division, or exponentiation, verify the solution handles negative numbers correctly. For example, the modulo operator works correctly for both positive and negative integers in Python (e.g., -18 % 2 == 0 for even-number checking), but behavior may differ from expectations in other languages.
    ## ALGORITHM DESIGN
    [ad-00001] helpful=1 harmful=2 :: For recursive GCD problems, use the Euclidean algorithm: the base case is b == 0 (return a), the recursive case is gcd(b, a % b). This handles all edge cases naturally, including argument ordering, equal numbers, and divisibility.
    [ad-00006] helpful=0 harmful=0 :: For bidirectional character swap problems (A↔B) using regex: use re.sub() with a callback function in a single pass. Pattern: (1) Create a character class matching all swap targets (e.g., r'[ _]'), (2) Implement a callback that examines each match and returns its counterpart. This avoids ambiguity from sequential replacements where new characters become indistinguishable from originals.
    [ad-00008] helpful=0 harmful=0 :: For modular arithmetic problems (nCr mod p, etc.), check whether p must be prime. If p can be composite, avoid algorithms requiring a modular inverse (like Fermat's Little Theorem). Instead, use approaches that avoid division entirely, such as Pascal's triangle with DP: C[j] = (C[j] + C[j-1]) % p, which works for ANY modulus.
    [ad-00009] helpful=0 harmful=0 :: When division is required in modular arithmetic: (1) If the modulus is guaranteed prime, use Fermat's Little Theorem: a/b mod p = a * b^(p-2) mod p. (2) If the modulus may be composite, use the Extended Euclidean Algorithm for the modular inverse, or better yet, redesign to avoid division (e.g., use recurrence relations like Pascal's triangle).
    [ad-00017] helpful=1 harmful=0 :: For decoding problems with mixed encoded/non-encoded elements: (1) use type checking to distinguish element types, (2) validate encoded element structure, (3) handle each type appropriately in a single pass. Prioritize simple iterative approaches with explicit conditionals over complex comprehensions for better readability and maintainability.
    [ad-00018] helpful=4 harmful=0 :: For maximum sum problems with non-adjacent element constraints: Use dynamic programming with the recurrence dp[i] = max(arr[i] + dp[i-2], dp[i-1]), representing the choice to include the current element (add to the best from i-2) or exclude it (keep the best from i-1). Handle edge cases: an empty array returns 0, a single element returns that element, initialize dp[0] = arr[0] and dp[1] = max(arr[0], arr[1]). Time: O(n), Space: O(n) or O(1) with optimization.
    [ad-00023] helpful=0 harmful=0 :: For bit counting and parity checking problems: Multiple valid approaches exist with different trade-offs. (1) Pythonic approach: bin(n).count('1') - most readable and maintainable, (2) Bit manipulation: repeatedly use x & (x-1) to clear the lowest set bit - better performance for large inputs, (3) XOR reduction for parity. Choose the Pythonic approach by default unless performance profiling shows it is a bottleneck.
    [ad-00028] helpful=1 harmful=1 :: For bit toggling problems: (1) Create a mask with 1s at the positions to be toggled, (2) Use the XOR operation (n ^ mask) to toggle those bits. For variable-length numbers, use bit_length() to determine how many bits to process. Example: to toggle bits at positions 1,3,5 up to bit_length, generate mask = sum(1 << i for i in range(1, n.bit_length(), 2)).
    [ad-00037] helpful=0 harmful=0 :: For element rearrangement/partitioning problems (move zeros to the end, separate by condition, etc.): Use the filter+concatenate pattern: (1) filter elements into separate groups using list comprehensions [x for x in lst if condition], (2) count or collect each group separately, (3) concatenate the groups in the required order. This Pythonic approach using built-ins (list comprehension, count(), list multiplication) is often clearer and equally correct compared to in-place two-pointer algorithms, especially for small to medium datasets.
    [ad-00039] helpful=0 harmful=0 :: For 'sum of two squares' problems (checking if n = a² + b²): Use single-loop optimization O(√n) instead of nested loops O(n). Iterate one variable from 0 to √n, calculate the remainder (n - a²), and check whether the remainder is a perfect square using math.isqrt(). Return True immediately upon finding a valid pair. This pattern: (1) reduces time complexity, (2) handles edge cases naturally (a=0, a=√n), (3) avoids floating-point errors with isqrt().
    [ad-00041] helpful=4 harmful=1 :: For geometry and formula-based mathematical problems: Follow a structured approach: (1) Identify the correct mathematical formula from problem domain knowledge, (2) Implement the formula as a direct translation into code using math module functions, (3) Avoid reimplementing mathematical functions or constants that exist in standard libraries, (4) Verify the formula with at least one test case before coding. Direct formula translation leads to cleaner, more maintainable code with better numerical precision.
    [ad-00042] helpful=0 harmful=0 :: For problems selecting elements from both ends of a collection (k smallest AND k largest), use approaches that handle overlap: (1) Index-based selection: iterate the sorted collection and include elements where idx < k OR idx >= len-k, ensuring each element is selected once, or (2) Set union: combine subsets with set(min_k + max_k) then sort to eliminate duplicates. Always consider edge cases where k*2 >= collection_size, as this guarantees overlap between the minimum and maximum selections. Avoid simple list concatenation, which creates duplicates when ranges overlap.
    [ad-00045] helpful=0 harmful=0 :: For 'find the n-th number with property X' problems: Use the iterative counting pattern: (1) implement a helper function to check whether a number satisfies the property, (2) iterate through candidate numbers starting from an appropriate initial value, (3) maintain a counter for numbers that satisfy the property, (4) return the candidate when the counter reaches n. This pattern works for prime numbers, perfect squares, numbers with specific factorization properties, etc. It is easy to implement correctly and optimize later if needed.
    [ad-00046] helpful=3 harmful=0 :: For counting distinct prime factors: Use the standard factorization pattern: (1) iterate potential divisors from 2 to sqrt(n), (2) for each divisor that divides n, increment the distinct factor count, then divide n by that divisor repeatedly until it no longer divides (this ensures each prime is counted once regardless of its power), (3) after the loop, if n > 1, it is a remaining prime factor (count it), (4) optimize by checking divisor 2 separately, then only odd numbers. This correctly distinguishes between distinct primes and their multiplicities.
    [ad-00048] helpful=1 harmful=0 :: For mathematical series problems (sum of the first n numbers, arithmetic/geometric series, factorial-related), check whether a closed-form formula exists before implementing iterative solutions. Common formulas: sum(1..n) = n*(n+1)/2, sum of arithmetic series = n*(first+last)/2, sum of geometric series = a*(r^n - 1)/(r-1). Formula-based solutions provide O(1) time complexity vs O(n) for loops, are less error-prone, and demonstrate mathematical insight. Always verify formula correctness with test cases.
    [ad-00051] helpful=1 harmful=0 :: For pair-counting problems (count pairs satisfying a condition), look for mathematical properties that eliminate the need for explicit enumeration. Pattern: (1) Identify what makes a pair valid, (2) Find mathematical properties characterizing valid pairs (e.g., for XOR being odd: one number must be even, the other odd), (3) Transform it into a counting problem (count elements in each category), (4) Use combinatorics to compute the result (e.g., odd_count × even_count). This reduces O(n²) pair enumeration to O(n) categorization + O(1) calculation.
    [ad-00052] helpful=0 harmful=0 :: For problems involving XOR operations, leverage bit-level properties for optimization: (1) The XOR result is odd ⟺ the operands have different parities (one even, one odd), because parity depends on the least significant bit, (2) XOR is commutative and associative, allowing reordering, (3) x ^ x = 0 and x ^ 0 = x, useful for cancellation patterns. Analyze the specific XOR property relevant to your problem to find mathematical shortcuts that avoid brute-force computation.
    [ad-00061] helpful=0 harmful=0 :: For iterative mathematical sequence problems (sum/product of the first n terms with specific properties): Use a structured 3-step approach: (1) Identify the formula for generating the k-th element (e.g., 2k-1 for odd numbers, 2k for even numbers, k² for squares), (2) Determine the operation to apply to each element (exponentiation, multiplication, transformation), (3) Aggregate with the appropriate function (sum, product, max). Implement using generator expressions with built-ins: sum(operation(formula(i)) for i in range(start, n+1)). Ensure range bounds match the sequence indexing (1-indexed sequences need range(1, n+1)). This pattern provides clarity and correctness for problems where closed-form formulas don't exist or aren't obvious.
    [ad-00066] helpful=0 harmful=0 :: For problems requiring grouping, counting, and sorting by aggregated properties: (1) Group elements using dict/OrderedDict with setdefault() or defaultdict, choosing OrderedDict when insertion order affects tie-breaking in sorting, (2) Sort groups using sorted() with a key function based on the aggregated metric (e.g., key=lambda x: len(x[1]) for count), (3) Transform the output to match the required format using appropriate unpacking/restructuring. This pattern handles 'group by X, sort by count of Y' problems systematically.
    [ad-00068] helpful=0 harmful=0 :: For heap-based 'top k' problems, verify OUTPUT ORDERING against test cases, not just which elements to return. Key distinction: (1) heappop() from a min-heap produces ASCENDING order by the heap key, (2) heapq.nlargest(k, items, key=func) produces DESCENDING order by key, (3) heapq.nsmallest(k, items, key=func) produces ASCENDING order by key. When implementing heap solutions, trace through test cases to determine whether results should be ordered ascending or descending by frequency/priority. If the ordering is wrong, either reverse the final list, switch between nlargest/nsmallest, or use the heappop pattern. Test case output ordering is authoritative when the problem description doesn't explicitly specify.
    [ad-00070] helpful=0 harmful=0 :: For 2D grid problems with adjacency or selection constraints (can't select adjacent cells/rows/columns): Look for opportunities to reduce dimensionality before applying DP. If constraints allow selecting at most one element per column (or row), pre-compute the optimal choice for each column/row (e.g., the max of two rows in a column), transforming the problem into a 1D array. Then apply standard 1D DP patterns (like 'house robber' for non-adjacency). This dimensional reduction simplifies the state space and makes complex grid problems tractable using well-known DP templates.
    [ad-00071] helpful=0 harmful=0 :: Recognize the 'house robber' DP pattern as a fundamental template applicable beyond linear arrays: any problem involving selecting non-adjacent elements to maximize/minimize a sum can use the recurrence dp[i] = max(value[i] + dp[i-2], dp[i-1]). This pattern appears in: linear arrays with spacing constraints, grid problems (after dimensional reduction), tree problems (with parent-child constraints), and sequence optimization. When you see 'maximize sum' + 'can't select adjacent', immediately consider this template.
    [ad-00075] helpful=0 harmful=0 :: For finding the most significant bit (MSB) value or position: Use the bit_length() method, which returns the number of bits required to represent an integer. For the MSB value, use the pattern: 1 << (n.bit_length() - 1), which leverages the relationship that the MSB at position k (0-indexed from the right) has value 2^k. The bit_length() approach is cleaner than manual division loops or string conversion methods. Handle the edge case: bit_length() returns 0 for n=0, so verify problem constraints or add explicit zero handling if needed.
    ## TEST CASE INTERPRETATION
    [tci-00004] helpful=0 harmful=0 :: Multiple correct implementations can exist for the same problem. Focus on algorithmic correctness verified by passing tests, not on matching a specific reference implementation's style or structure.
    [tci-00011] helpful=123 harmful=2 :: Extract the expected OUTPUT FORMAT from test cases, not just the logic. Check whether the return should be a single value, tuple, list, or other structure, and ensure your solution matches this exact format.
    [tci-00022] helpful=0 harmful=1 :: When analyzing test cases, check whether ALL inputs map to the SAME output value or structure. If so, the solution may be trivial - simply return that constant output directly. Don't overcomplicate with unnecessary transformations (like list conversions) when a direct return statement satisfies all requirements. Example: if all test cases expect an empty tuple output, return () regardless of input complexity.
    [tci-00025] helpful=5 harmful=0 :: Before choosing an implementation approach, deeply understand the CORE REQUIREMENT from the problem description and test cases. For example, 'even parity' means 'an even count of 1-bits', not a specific algorithm. Don't lock into a particular approach (like bit manipulation) if simpler solutions (like string counting) satisfy the requirement equally well.
    [tci-00027] helpful=17 harmful=0 :: When problem descriptions use ambiguous terminology (especially in bit manipulation: 'even bits', 'odd positions', etc.), work backward from test cases to discover the actual pattern. Manually trace through examples in their relevant representation (binary for bit problems) to determine the ground-truth interpretation. Test cases are authoritative when terminology is unclear.
    [tci-00032] helpful=0 harmful=0 :: When problems ask for the 'maximum/minimum of all records/groups', clarify whether it means: (1) a global extreme across all elements, or (2) per-group extremes returned as a collection. Test cases reveal the distinction: a single-value output indicates a global extreme, a list/tuple output suggests per-group analysis. This interpretation affects whether you flatten the structure or preserve grouping.
    [tci-00034] helpful=0 harmful=0 :: For dictionary-related problems, carefully distinguish from test cases whether the expected output is: (1) a key (string/int), (2) a value, (3) a key-value pair (tuple), or (4) a collection of any of these. The output type determines whether you need dict.keys(), dict.values(), dict.items(), or direct indexing into converted structures. Test case outputs reveal the exact format required.
    [tci-00035] helpful=3 harmful=0 :: When function names or problem descriptions suggest specific behavior (e.g., 'parallelogram_perimeter' implying the geometric formula 2*(a+b)), but test cases produce outputs inconsistent with that expectation, trust the test cases as the authoritative specification. Reverse-engineer the actual formula by calculating what operation on the inputs produces the given outputs, then verify this derived pattern against ALL test cases before implementing. Test case expectations override semantic meanings and domain knowledge.
    [tci-00040] helpful=0 harmful=0 :: Test results are the primary signal of correctness, not line-by-line comparison with reference implementations. If your solution passes all tests with better time complexity (e.g., O(√n) vs O(n)), it isn't just correct but algorithmically superior. Different approaches can be equally or more valid - focus on correctness verification through tests, not on matching specific implementation styles.
    [tci-00044] helpful=2 harmful=0 :: When encountering undefined or domain-specific mathematical terms (like 'good number', 'lucky number', etc.), treat test cases as the authoritative specification. Systematically analyze test case outputs to reverse-engineer the mathematical definition: (1) examine the numerical properties of the output values (factorization, divisors, digits, etc.), (2) look for patterns or common characteristics across all outputs, (3) formulate a hypothesis about the defining property, (4) verify the hypothesis against ALL test cases. The test cases encode the complete definition when the problem statement is ambiguous.
    [tci-00055] helpful=2 harmful=0 :: When problem terminology is completely ambiguous or undefined (like 'digit distance', which could have multiple interpretations), systematically trace through EACH test case manually to identify the actual pattern: (1) Work through inputs and outputs step by step in the relevant representation, (2) Formulate a hypothesis about what operation produces those outputs, (3) Verify the hypothesis against ALL remaining test cases, (4) Implement the pattern that satisfies all tests. The test-derived pattern is the correct specification, regardless of what the terminology might suggest in other contexts.
    [tci-00058] helpful=0 harmful=0 :: Multiple algorithmically different solutions can be equally valid if they satisfy all test cases. When deriving requirements from ambiguous specifications, use systematic hypothesis testing: (1) analyze each test case to understand the input-output relationships, (2) formulate a hypothesis about the underlying rule, (3) validate the hypothesis against ALL test cases, (4) implement the pattern that passes all tests. Your solution is correct by definition if it satisfies all test requirements, even if it differs structurally from reference implementations or uses a different interpretation of ambiguous terms.
    [tci-00063] helpful=0 harmful=0 :: In Python, parentheses alone don't create tuples - distinguish between ('value'), which is just the string 'value' (parentheses are for grouping/precedence), and ('value',), which is a 1-element tuple (trailing comma required). When analyzing test assertions like assert func()==('Matched!'), recognize this expects a plain string, not a tuple. Only ('Matched!',) with a trailing comma or (a, b) with multiple elements create tuples. This syntax nuance is critical for matching expected return types exactly.
    [tci-00067] helpful=0 harmful=0 :: When test cases show complex output structures (tuples with variable-length unpacked elements, nested aggregations), analyze the EXACT structure before coding: (1) Count the elements in output tuples/lists, (2) Identify which elements are aggregated vs individual, (3) Determine whether nested structures are flattened (unpacked) or preserved, (4) Check whether ordering within groups matters. Use this structural analysis to choose appropriate Python constructs (* unpacking, list flattening, tuple construction patterns) that match the expected format precisely.
    [tci-00077] helpful=0 harmful=0 :: For counting/aggregation problems involving nested structures (lists of lists, trees, nested dictionaries), when the problem asks to 'count elements' without specifying the level, use test cases to determine the counting scope: (1) Check whether test outputs suggest counting only immediate/top-level children (e.g., len(outer_list)) vs recursive counting of all nested elements, (2) Trace through at least one test case with nested structures to see which interpretation produces the expected output, (3) The simplest interpretation (top-level counting) is usually correct unless test cases prove otherwise. Example: 'count lists in [[1,2], [3], [[4,5]]]' could mean 3 (top-level) or 4 (recursive) - test outputs reveal which is expected.
    [tci-00078] helpful=0 harmful=0 :: For mathematical problems with infinitely many valid solutions (linear Diophantine equations, modular arithmetic, geometric constructions, etc.), recognize that tests expect ONE PARTICULAR solution, not just any mathematically correct answer. Work through test cases to identify the selection criteria (e.g., smallest non-negative values, specific ordering, canonical form). When choosing algorithms, prefer approaches that naturally produce the expected solution pattern (e.g., iterative search from x=0 upward for the smallest non-negative x) over sophisticated algorithms (e.g., the Extended Euclidean Algorithm) that require extra adjustment logic to match test expectations. The mathematically elegant solution isn't always the right one for passing tests.
    [tci-00079] helpful=0 harmful=0 :: For problems involving mathematical constants (pi, e, sqrt(2), etc.), verify that the test cases' expected outputs match calculations using standard library constants (math.pi, math.e). Calculate at least one test case output manually using the standard constant and compare it to the expected value. If there is a mismatch in precision (e.g., your 942.477 vs the expected 942.45), the test cases likely expect a simplified/truncated constant value (like pi=3.14 or pi=3.1415) rather than full precision. Check reference implementations for hardcoded constant values and use those exact values to match test expectations, even if they are less mathematically accurate.
    ## DEBUGGING STRATEGIES
    [ds-00005] helpful=110 harmful=2 :: Before generating code, mentally trace through the logic against the test cases to verify correctness. This helps catch logical errors early and builds confidence in the solution approach.
    [ds-00029] helpful=0 harmful=0 :: For bit manipulation problems with unclear position indexing, test multiple interpretations systematically: (1) 0-indexed vs 1-indexed, (2) counting from the right vs the left, (3) 'even/odd' referring to position vs bit value. Work through all test cases manually in binary to validate each hypothesis before implementing. The interpretation that satisfies all test cases is correct.
    [ds-00080] helpful=0 harmful=0 :: During the reasoning phase, manually calculate expected outputs for at least one test case using your proposed approach and compare against the actual expected output. For numerical problems, verify precision matches exactly - discrepancies like 942.477 vs 942.45 indicate constant-precision mismatches (e.g., using math.pi instead of a truncated value). This early validation catches precision issues, incorrect formulas, and constant-value problems before code generation.
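
    To make the algorithm-design entries concrete, here is a minimal sketch of the non-adjacent maximum-sum pattern from [ad-00018] and [ad-00071] (the 'house robber' recurrence), using the O(1)-space variant those bullets mention (again my own illustration, not part of the generated playbook):

    ```python
    def max_non_adjacent_sum(arr: list[int]) -> int:
        # Recurrence from the playbook: dp[i] = max(arr[i] + dp[i-2], dp[i-1]).
        # 'include' is the best sum that takes the current element,
        # 'exclude' is the best sum that skips it; O(n) time, O(1) space.
        include, exclude = 0, 0
        for x in arr:
            include, exclude = exclude + x, max(include, exclude)
        return max(include, exclude)

    # Picks 3 + 5 + 7 = 15 from non-adjacent positions.
    assert max_non_adjacent_sum([3, 2, 5, 10, 7]) == 15
    assert max_non_adjacent_sum([]) == 0  # empty array edge case
    ```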
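
    Similarly, the perfect-square check from [cpp-00038] and the O(√n) 'sum of two squares' strategy from [ad-00039] combine into just a few lines:

    ```python
    import math

    def is_sum_of_two_squares(n: int) -> bool:
        # Single O(sqrt(n)) loop: for each a, check whether the remainder
        # n - a*a is a perfect square, using math.isqrt() to stay in
        # integer arithmetic and avoid floating-point rounding errors.
        for a in range(math.isqrt(n) + 1):
            rem = n - a * a
            b = math.isqrt(rem)
            if b * b == rem:
                return True
        return False

    assert is_sum_of_two_squares(25)      # 3² + 4² (also 0² + 5²)
    assert not is_sum_of_two_squares(3)
    ```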

    These results show that ACE can significantly improve performance on complex tasks like code generation.

    Summary

    In this article, we've covered a lot of ground on context engineering and the ACE approach, so let's briefly recap the key takeaways:

    • Context engineering has emerged as a critical discipline because it allows us to improve LLM performance without lengthy and costly fine-tuning.
    • ACE (Agentic Context Engineering) is one of the latest approaches to prompt optimisation, leveraging detailed playbooks with atomised bullet points that include both instructions and metadata.
    • As our examples showed, prompt optimisation is not a silver bullet. It doesn't improve performance in every case. According to the authors, ACE is most effective for agentic workflows or highly specialised domains. In our experiments, it made a clear difference in code generation but had limited impact on banking intent classification.

    The main takeaway for me is that prompt optimisation won't solve your task automatically. You still need a holistic understanding of what information the LLM and agents have during the optimisation process and how best to structure and refine it. Context matters, and thoughtful engineering of that context is what makes approaches like ACE effective.

    Thanks for reading. I hope this article was insightful. Remember Einstein's advice: "The important thing is not to stop questioning. Curiosity has its own reason for existing." May your curiosity lead you to your next great insight.

    Reference

    This article was based on the paper and research by Zhang et al., published in 2025, "Agentic Context Engineering: Evolving Contexts for Self-Improving Language Models".


