    Agentic AI for Modern Deep Learning Experimentation

By ProfitlyAI · February 18, 2026


Imagine an agent that reads your metrics, detects anomalies, applies predefined tuning guidelines, restarts jobs when needed, and logs each resolution, all without you watching loss curves at 2 a.m.

In this article, I'll present a lightweight agent designed for deep learning researchers and ML engineers that can:

• Detect failures automatically
• Visually reason over performance metrics
• Apply your predefined hyperparameter strategies
• Relaunch jobs
• Document each action and outcome

No architecture search. No AutoML. No invasive rewrites of your codebase.

The implementation is deliberately minimal: containerize your training script, add a small LangChain-based agent, define hyperparameters in YAML, and express preferences in Markdown. You're probably doing 50% of this already.

Drop this agent into your manual train.py workflow and go from 0️⃣ to 💯 overnight.

The problem with your current experiments

🤔 You endlessly ponder hyperparameters.

▶️ You run train.py.

🐛 You fix the bug in train.py.

🔁 You rerun train.py.

👀 You stare at TensorBoard.

🫠 You question reality.

🔄 You repeat.

Every practicing deep learning / machine learning engineer in the field does this. Don't be ashamed. Original photo by MART PRODUCTION via Pexels. GIF imagined by Grok

Stop watching your model spit out numbers

You aren't a Jedi. No amount of staring will magically make your [validation loss | classification accuracy | perplexity | any other metric you can name] move in the direction you want.

Babysitting a model late into the night over a vanishing/exploding-gradient NaN in a deep transformer-based network that you can't track down, and which may never even appear? Also a hard no.

How are you supposed to solve real research problems when most of your time is spent on work that technically has to be done, yet contributes little to actual insight?

If 70% of your day is consumed by operational drag, when does the thinking happen?

Shift to agent-driven experiments

Most of the deep learning engineers and researchers I work with still run experiments manually. A good portion of the day goes to scanning Weights & Biases or TensorBoard for last night's run, comparing runs, exporting metrics, adjusting hyperparameters, logging notes, and restarting jobs. Then repeating the cycle.

It's dry, tedious, and repetitive work.

We're going to offload these repetitive tasks so you can shift your focus to high-value work.

The idea of AutoML is, frankly, laughable.

Your [new] agent is not going to make decisions about how to change your network topology or add complex features; that's your job. It will replace the repetitive glue work that eats valuable time with little added value.

Agent-Driven Experiments (ADEs)

Switching from manual experiments to an agent-driven workflow is simpler than it first appears. No rewriting your stack, no heavy systems, no tech debt.

Image by author

At its core, an ADE requires three steps:

1. Containerize your existing training script
  • Wrap your existing train.py in a Docker container. No refactoring of model logic. No architectural changes. Just a reproducible execution boundary.
2. Add a lightweight agent
  • Introduce a small LangChain-based script that reads metrics from your dashboard, applies your preferences, and decides when and where to relaunch, halt, or document. Schedule it with cron or any job scheduler.
3. Define behavior and preferences with natural language
  • Use a YAML file for configuration and hyperparameters
  • Use a Markdown document to communicate with your agent

That's the whole system. Now let's walk through each step.

Containerize your training script

One could argue you should be doing this anyway. It makes restarting and scheduling much easier, and, if you move to a Kubernetes cluster for training, the disruption to your existing process is much lower.

If you're already doing this, skip to the next section. If not, here's some useful code you can use to get started.

First, let's define a project structure that will work with Docker.

your_experiment/
├── scripts/
│   ├── train.py                 # Main training script
│   └── health_server.py         # Health check server
├── requirements.txt             # Python dependencies
├── Dockerfile                   # Container definition
└── run.sh                       # Script to start training + health check
    

We need to make sure your train.py script can load a configuration file from the cloud, allowing the agent to edit it if needed.

I recommend using GitHub for this. Here's an example of how to read a remote config file. The agent will have a corresponding tool to read and modify this config file.

import os
import requests
import yaml
from box import Box

# add this to `train.py`
GITHUB_RAW = (
    "https://raw.githubusercontent.com/"
    "{owner}/{repo}/{ref}/{path}"
)

def load_config_from_github(owner, repo, path, ref="main", token=None):
    url = GITHUB_RAW.format(owner=owner, repo=repo, ref=ref, path=path)

    headers = {}
    if token:
        headers["Authorization"] = f"Bearer {token}"

    r = requests.get(url, headers=headers, timeout=10)
    r.raise_for_status()

    # Box gives attribute-style access to the parsed YAML dict
    return Box(yaml.safe_load(r.text))


config = load_config_from_github(...)

# use params throughout your `train.py` script
optimizer = Adam(lr=config.lr)
    

We also include a health check server that runs alongside the main process. This allows container managers such as Kubernetes, or your agent, to monitor the job's status without inspecting logs.

If the container's state changes unexpectedly, it can be automatically restarted. This also simplifies agent inspection, since reading and summarizing log files can cost far more tokens than simply checking the health of a container.

# health_server.py
import time
from pathlib import Path
from fastapi import FastAPI, Response

app = FastAPI()

HEARTBEAT = Path("/tmp/heartbeat")
STATUS = Path("/tmp/status.json")  # optional richer state
MAX_AGE = 300  # seconds

def last_heartbeat_age():
    if not HEARTBEAT.exists():
        return float("inf")
    return time.time() - float(HEARTBEAT.read_text())

@app.get("/health")
def health():
    age = last_heartbeat_age()

    # stale -> training likely hung
    if age > MAX_AGE:
        return Response("stalled", status_code=500)

    # optional: detect NaNs or failure flags written by the trainer
    if STATUS.exists() and "failed" in STATUS.read_text():
        return Response("failed", status_code=500)

    return {"status": "ok", "heartbeat_age": age}

if __name__ == "__main__":
    # serve the endpoint so run.sh can launch this file directly
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)
    

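The health server only reads /tmp/heartbeat; train.py has to keep it fresh. Here's a minimal sketch of the writer side you could drop into your training loop. The file paths match health_server.py above, but the helper names and update cadence are assumptions, not part of the original script.

```python
# train.py (excerpt) -- heartbeat/status writers that the health server expects
import json
import time
from pathlib import Path

HEARTBEAT = Path("/tmp/heartbeat")
STATUS = Path("/tmp/status.json")

def touch_heartbeat():
    # health_server.py parses this as a float timestamp
    HEARTBEAT.write_text(str(time.time()))

def report_failure(reason: str):
    # health_server.py looks for the string "failed" in this file
    STATUS.write_text(json.dumps({"status": "failed", "reason": reason}))

# inside the training loop, for example:
# for step, batch in enumerate(loader):
#     ...
#     if step % 100 == 0:
#         touch_heartbeat()
```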
A small shell script, run.sh, starts the health_server process alongside train.py:

#!/bin/bash

# Start the health check server in the background
python scripts/health_server.py &
# Capture its PID in case you want to terminate it later
HEALTH_PID=$!
# Start the main training script
python scripts/train.py
    

And of course, our Dockerfile, built on NVIDIA's base image so your container can use the host's accelerator with zero friction. This example is for PyTorch, but you can easily adapt it for JAX or TensorFlow if needed.

FROM nvidia/cuda:12.1.0-cudnn8-devel-ubuntu20.04

RUN apt-get update && apt-get install -y \
    python3 python3-pip git

RUN python3 -m pip install --upgrade pip

# Install PyTorch with CUDA support
RUN pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu121

WORKDIR /app

COPY . /app

# Install the remaining project dependencies (FastAPI, requests, etc.)
RUN pip3 install -r requirements.txt

CMD ["sh", "run.sh"]
    

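To build and launch it, something along these lines should work. The image/container name matches the preferences shown later; the published ports are assumptions (8000 for the health check server, 6006 only if you also run TensorBoard inside the container), so adjust them to your setup.

```bash
# build the image
docker build -t vqvae-train .

# start a detached training container with GPU access
# (ports are assumptions: 8000 = health check server, 6006 = TensorBoard)
docker run -d --gpus all \
  --name vqvae-train \
  -p 8000:8000 \
  -p 6006:6006 \
  vqvae-train
```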
✅ You're containerized. Simple and minimal.

Add a lightweight agent

There are many agent frameworks to choose from. For this agent, I like LangChain.

LangChain is a framework for building LLM-driven systems that combine reasoning and execution. It simplifies chaining model calls, managing memory, and integrating external functions so your LLM can do more than generate text.

In LangChain, Tools are explicitly defined, schema-bound functions the model can call. Each tool is an idempotent skill or task (e.g., reading a file, querying an API, modifying state).

For our agent to work, we first need to define the tools it can use to achieve our objective.

    Tool definitions

1. read_preferences
  • Reads user preferences and experiment notes from a Markdown document
2. check_tensorboard
  • Uses Selenium with a Chrome webdriver to screenshot metrics
3. analyze_metric
  • Uses multimodal LLM reasoning to understand what's happening in the screenshot
4. check_container_health
  • Checks our containerized experiment using its health check
5. restart_container
  • Restarts the experiment if it's unhealthy or a hyperparameter needs to change
6. modify_config
  • Modifies a remote config file and commits it to GitHub
7. write_memory
  • Writes a sequence of actions to persistent memory (Markdown)

This set of tools defines our agent's operational boundaries. All interaction with our experiment happens through these tools, making behavior controllable and, hopefully, predictable.

Instead of providing these tools inline, here's a GitHub gist containing all of the tools described above. You can plug them into your agent or modify them as you see fit.
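To make the shape of these tools concrete, here's a minimal sketch of what check_container_health could look like. The health-check URL, port, and default container name are assumptions made for illustration, and the gist versions also accept the shared memory argument used in the agent code below; treat the gist as the reference implementation.

```python
# tools.py (excerpt) -- a sketch of one tool, not the gist's exact code
import requests
from langchain.tools import tool

HEALTH_URL = "http://localhost:8000/health"  # assumed port of health_server.py

@tool
def check_container_health(container_name: str = "vqvae-train") -> str:
    """Check the training container's health endpoint and report its status."""
    try:
        r = requests.get(HEALTH_URL, timeout=5)
    except requests.RequestException as e:
        return f"Could not reach the health endpoint for {container_name}: {e}"
    if r.status_code == 200:
        return f"{container_name} is healthy: {r.text}"
    return f"{container_name} is unhealthy ({r.status_code}): {r.text}"
```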

    The agent

To be quite honest, the first time I tried to grok the official LangChain documentation, I was immediately turned off from the idea altogether.

It's overly verbose and more complicated than necessary. If you're new to agents, or just don't want to navigate the labyrinth that is the LangChain documentation, keep reading below.

LangSmith? Random asides? Little tooltips everywhere? I'll pass on smiting this worthy foe. Imagined by Grok

In a nutshell, this is how LangChain agents work:

Our agent uses a prompt to decide what to do at each step.

Steps are created dynamically by filling in the prompt with the current context and previous outputs. Each LLM call [+ optional tool invocation] is a step, and its output feeds into the next, forming a chain.

Using this conceptually recursive loop, the agent can reason its way to the intended action over however many steps are required. How many steps that takes depends on the agent's ability to reason and how clearly the termination condition is defined.

    It’s a Lang-chain. Get it? 🤗 

The prompt

As noted, the prompt is the recursive glue that maintains context across LLM and tool invocations. You'll see placeholders (defined below) that are filled in when the agent is initialized.

We use a bit of LangChain's built-in memory abstractions, included with each tool call. Beyond that, the agent fills in the gaps, deciding both the next step and which tool to call.

For clarity, the main prompt is below. You can either plug it directly into the agent script or load it from the filesystem before running.

You are an experiment automation agent responsible for monitoring
and maintaining ML experiments.

Current context:
{chat_history}

Your workflow:
1. First, read preferences from preferences.md to understand thresholds and settings
2. Check TensorBoard at the specified URL and capture a screenshot
3. Analyze key metrics (validation loss, training loss, accuracy) from the screenshot
4. Check Docker container health for the training container
5. Take corrective actions based on the analysis:
   - Restart unhealthy containers
   - Adjust hyperparameters according to user preferences
     and anomalous patterns, restarting the experiment if necessary
6. Log all observations and actions to memory

Important guidelines:
- Always read preferences first to get the current configuration
- Use visual analysis to understand metric trends
- Be conservative with config changes (only adjust if clearly needed)
- Write detailed memory entries for future reference
- Check container health before and after any restart
- When modifying config, use appropriate values from preferences

Available tools: {tool_names}
Tool descriptions: {tools}

Current task: {input}

Think step by step and use tools to complete the workflow.

{agent_scratchpad}

Now, in roughly 100 lines, we have our agent. The agent is initialized, then we define a sequence of steps. For each step, the current task directive is populated in the prompt, and each tool updates a shared memory instance, ConversationSummaryBufferMemory.

We're going to use OpenAI for this agent; however, LangChain supports alternatives, including hosting your own model. If cost is an issue, there are open-source models that can be used here.

import os
import sys
from datetime import datetime
from pathlib import Path
from langchain.agents import AgentExecutor, create_react_agent
from langchain_openai import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.memory import ConversationSummaryBufferMemory

# Import tools from tools.py
from tools import (
    read_preferences,
    check_tensorboard,
    analyze_metric,
    check_container_health,
    restart_container,
    modify_config,
    write_memory
)

PROMPT = open("prompt.txt").read()


class ExperimentAutomation:
    def __init__(self, openai_key=None):
        """Initialize the agent"""
        self.llm = ChatOpenAI(
            temperature=0.8,
            model="gpt-4-turbo-preview",
            api_key=openai_key or os.getenv('OPENAI_API_KEY')
        )

        # Initialize memory for conversation context
        self.memory = ConversationSummaryBufferMemory(
            llm=self.llm,
            max_token_limit=32000,
            memory_key="chat_history",
            return_messages=True
        )

    def create_agent(self):
        """Create LangChain agent with imported tools"""
        tools = [
            lambda **kwargs: read_preferences(memory=self.memory, **kwargs),
            lambda **kwargs: check_tensorboard(memory=self.memory, **kwargs),
            lambda **kwargs: analyze_metric(memory=self.memory, **kwargs),
            lambda **kwargs: check_container_health(memory=self.memory, **kwargs),
            lambda **kwargs: restart_container(memory=self.memory, **kwargs),
            lambda **kwargs: modify_config(memory=self.memory, **kwargs),
            lambda **kwargs: write_memory(memory=self.memory, **kwargs)
        ]

        # Create the prompt template
        prompt = PromptTemplate.from_template(PROMPT)

        agent = create_react_agent(
            llm=self.llm,
            tools=tools,
            prompt=prompt
        )

        # Create agent executor with memory
        return AgentExecutor(
            agent=agent,
            tools=tools,
            memory=self.memory,
            verbose=True,
            max_iterations=15,
            handle_parsing_errors=True,
            return_intermediate_steps=True
        )

    def run_automation_cycle(self):
        """Execute the full automation cycle step by step"""
        write_memory(
            entry="Automation cycle started",
            category="SYSTEM",
            memory=self.memory
        )

        try:
            agent = self.create_agent()

            # Define the workflow as individual steps
            workflow_steps = [
                "Read preferences from preferences.md to capture thresholds and settings",
                "Check TensorBoard at the specified URL and capture a screenshot",
                "Analyze validation loss, training loss, and accuracy from the screenshot",
                "Check Docker container health for the training container",
                "Restart unhealthy containers if needed",
                "Adjust hyperparameters according to preferences and restart container if necessary",
                "Write all observations and actions to memory"
            ]

            # Execute each step individually
            for step in workflow_steps:
                result = agent.invoke({"input": step})

                # Write step output to memory
                if result.get("output"):
                    memory_summary = f"Step: {step}\nOutput: {result['output']}"
                    write_memory(entry=memory_summary, category="STEP", memory=self.memory)

            write_memory(
                entry="Automation cycle completed successfully",
                category="SYSTEM",
                memory=self.memory
            )

            return result

        except Exception as e:
            error_msg = f"Automation cycle failed: {str(e)}"
            write_memory(entry=error_msg, category="ERROR", memory=self.memory)
            raise


def main():
    try:
        automation = ExperimentAutomation(openai_key=os.environ["OPENAI_API_KEY"])
        result = automation.run_automation_cycle()

        if result.get('output'):
            print(f"\nFinal Output:\n{result['output']}")

        if result.get('intermediate_steps'):
            print(f"\nSteps Executed: {len(result['intermediate_steps'])}")

        print("\n✓ Automation cycle completed successfully")

    except Exception as e:
        print(f"\n✗ Automation failed: {e}")
        write_memory(entry=f"Critical failure: {str(e)}", category="ERROR")
        sys.exit(1)


if __name__ == "__main__":
    main()
    

Now that we have our agent and tools, let's discuss how we actually express our intent as researchers, the most important piece.

Define behavior and preferences with natural language

As described, defining what we're looking for when we start an experiment is vital to getting the right behavior from an agent.

Although image-reasoning models have come quite far and carry a good bit of context, they still have a ways to go before they can understand what a good policy loss curve looks like in Hierarchical Policy Optimization, or what the perplexity of the codebook should look like in a Vector Quantized Variational Autoencoder, something I've been optimizing over the past week.

For this, we initialize any automated reasoning with a preferences.md file.

Let's start with some general settings:

# Experiment Preferences

This file defines my preferences for this experiment.
The agent should always read this first before taking any action.

---

## General Settings

- experiment_name: vqvae
- container_name: vqvae-train
- tensorboard_url: http://localhost:6006
- memory_file: memory.md
- maximum_adjustments_per_run: 4
---
## Additional details
You can always add more sections here. The read_preferences task will parse
and reason over each section.

Now, let's define the metrics of interest. This is especially important in the case of visual reasoning.

Within the Markdown document, define YAML blocks that the agent parses using the read_preferences tool. Adding this bit of structure makes it easy to pass preferences as arguments to other tools.

```yaml
metrics:
  - name: perplexity
    pattern: should remain high throughout training
    restart_condition: premature collapse to zero
    hyperparameters: |
        if it collapses, increase `perplexity_weight` from its current value to 0.2
  - name: prediction_loss
    pattern: should decrease over the course of training
    restart_condition: increases or stalls
    hyperparameters: |
        if it increases, raise the `prediction_weight` value from its current value to 0.4
  - name: codebook_usage
    pattern: should remain fixed at > 90%
    restart_condition: drops below 90% for many epochs
    hyperparameters: |
        decrease the `codebook_size` param from 512 to 256.

```

The key idea is that preferences.md should provide enough structured and descriptive detail so the agent can:

• Compare its analysis against your intent, e.g., if the agent sees validation loss = 0.6 but preferences say val_loss_threshold should be 0.5, it knows what the corrective action should be

• Read the thresholds and constraints (YAML or key-value) for metrics, hyperparameters, and container management

• Understand intent or intent patterns described in human-readable sections, like "only adjust the learning rate if validation loss exceeds the threshold and accuracy is stagnating"
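As an illustration of what read_preferences has to do under the hood, here's a minimal sketch of pulling the fenced YAML blocks out of preferences.md. The file name and the merged-dict return shape are assumptions; the gist version may differ.

```python
# a minimal sketch of parsing preferences.md -- assumes every fenced
# ```yaml block in the file contains a mapping to merge
import re
import yaml
from pathlib import Path

YAML_BLOCK = re.compile(r"```yaml\s*\n(.*?)```", re.DOTALL)

def parse_preferences(path="preferences.md"):
    text = Path(path).read_text()

    # free-form Markdown stays available for the LLM to reason over
    prose = YAML_BLOCK.sub("", text)

    # fenced YAML blocks become structured settings the tools can consume
    structured = {}
    for block in YAML_BLOCK.findall(text):
        structured.update(yaml.safe_load(block) or {})

    return {"prose": prose, "structured": structured}

# example: parse_preferences()["structured"]["metrics"][0]["name"] -> "perplexity"
```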

Wiring it all together

Now that we have a containerized experiment and an agent, we need to schedule the agent. This is as simple as running the agent process via a cron task. The entry below runs our agent once every hour, a tradeoff between cost (in tokens) and operational efficiency.

    0 * * * * /usr/bin/python3 /path/to/agent.py >> /var/log/agent.log 2>&1
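If cron isn't convenient, e.g., you want to run the agent itself inside a container, a minimal in-process alternative is a loop like the sketch below. It assumes the agent script above is saved as agent.py, and the one-hour interval simply mirrors the cron entry.

```python
# scheduler.py -- run the automation cycle on a fixed interval instead of cron
import time

from agent import ExperimentAutomation  # assumes the agent script above is agent.py

INTERVAL_SECONDS = 3600  # once an hour, matching the cron entry

if __name__ == "__main__":
    automation = ExperimentAutomation()
    while True:
        try:
            automation.run_automation_cycle()
        except Exception as e:
            # keep the loop alive; the agent already logs failures to memory
            print(f"cycle failed: {e}")
        time.sleep(INTERVAL_SECONDS)
```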

I've found that this agent doesn't need the latest reasoning model and performs fine with previous generations from Anthropic and OpenAI.

    Wrapping up

If research time is finite, it should be spent on research, not babysitting experiments.

Your agent should handle monitoring, restarts, and parameter adjustments without constant supervision. When the drag disappears, what remains is the actual work: forming hypotheses, designing better models, and testing ideas that matter.

Hopefully, this agent will free you up a bit to dream up the next big idea. Enjoy.



