    Exporting MLflow Experiments from Restricted HPC Systems

By ProfitlyAI | April 24, 2025


High-Performance Computing (HPC) environments, particularly in research and academic institutions, often restrict outbound TCP connections. Running a simple command-line ping or curl against the MLflow tracking URL from the HPC bash shell may succeed; however, the same communication fails and times out while jobs are running on compute nodes.

This makes it impossible to track and manage experiments on MLflow. I faced this issue and built a workaround that bypasses direct communication. We'll cover:

• Setting up a local MLflow server on an HPC port with local directory storage.
• Using the local tracking URL while running machine learning experiments.
• Exporting the experiment data to a local temporary folder.
• Transferring the experiment data from the local temp folder on the HPC to the remote MLflow server.
• Importing the experiment data into the remote MLflow server's databases.

I've deployed Charmed MLflow (MLflow server, MySQL, MinIO) using Juju, with the whole stack hosted on MicroK8s localhost. You can find the installation guide from Canonical here.

Prerequisites

Make sure Python is loaded on your HPC and installed on your MLflow server. Throughout this article, I assume you have Python 3.12; adjust the commands accordingly.

    On HPC:

1) Create a virtual environment

python3 -m venv mlflow
source mlflow/bin/activate

2) Install MLflow

pip install mlflow

On both the HPC and the MLflow server:

1) Install mlflow-export-import

pip install git+https://github.com/mlflow/mlflow-export-import/#egg=mlflow-export-import

    On HPC:

1) Decide on a port where you want the local MLflow server to run. You can use the command below to check whether the port is free (it should not return any process IDs):

    lsof -i :<port-number>
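If lsof is not available on your cluster, a quick stdlib-only alternative is to try binding the port from Python. `port_is_free` here is a hypothetical helper for illustration, not part of MLflow:

```python
import socket

def port_is_free(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if nothing is currently bound to host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False
```

A `True` result means the local MLflow server can claim that port.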

2) Set the environment variable for applications that want to use MLflow:

    export MLFLOW_TRACKING_URI=http://localhost:<port-number>
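The MLflow client picks this variable up automatically; if your own scripts also need the address, they can parse it from the environment. This is a minimal sketch — the helper name and the localhost fallback are my own assumptions, not MLflow defaults:

```python
import os
from urllib.parse import urlparse

def tracking_host_port(default: str = "http://localhost:5000"):
    """Parse host and port out of MLFLOW_TRACKING_URI, with a fallback default."""
    parsed = urlparse(os.environ.get("MLFLOW_TRACKING_URI", default))
    return parsed.hostname, parsed.port
```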

3) Start the MLflow server with the command below:

mlflow server \
    --backend-store-uri file:/path/to/local/storage/mlruns \
    --default-artifact-root file:/path/to/local/storage/mlruns \
    --host 0.0.0.0 \
    --port 5000

Here, we point local storage at a folder called mlruns. Metadata such as experiments, runs, parameters, metrics, and tags, and artifacts such as model files, loss curves, and other images, will be saved inside the mlruns directory. We can set the host to 0.0.0.0 or 127.0.0.1 (more secure). Since the whole process is short-lived, I went with 0.0.0.0. Finally, assign a port number that is not used by any other application.
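To sanity-check what the file store has recorded before exporting, you can walk the mlruns directory yourself: the file backend keeps one folder per experiment ID, with one folder per run inside it. `list_runs` is an illustrative helper under that assumption:

```python
from pathlib import Path

def list_runs(mlruns_dir: str) -> dict:
    """Map each experiment ID under mlruns/ to the run IDs stored inside it."""
    runs = {}
    for exp in Path(mlruns_dir).iterdir():
        if not exp.is_dir() or exp.name.startswith("."):
            continue  # skip .trash and stray files
        # a run directory is recognisable by its meta.yaml file
        runs[exp.name] = [r.name for r in exp.iterdir()
                          if r.is_dir() and (r / "meta.yaml").exists()]
    return runs
```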

(Optional) Sometimes your HPC might not detect libpython3.12, which is essentially what makes Python run. You can follow the steps below to find it and add it to your path.

Search for libpython3.12:

find /hpc/packages -name "libpython3.12*.so*" 2>/dev/null

This returns something like: /path/to/python/3.12/lib/libpython3.12.so.1.0

Set the path as an environment variable:

    export LD_LIBRARY_PATH=/path/to/python/3.12/lib:$LD_LIBRARY_PATH

4) Export the experiment data from the mlruns local storage directory to a temp folder:

    python3 -m mlflow_export_import.experiment.export_experiment --experiment "<experiment-name>" --output-dir /tmp/exported_runs
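If you have several experiments to move, a thin wrapper can loop the same module over each of them. `export_experiments` is my own convenience function, not part of mlflow-export-import:

```python
import subprocess
import sys

def export_experiments(names, output_root="/tmp/exported_runs"):
    """Export each named experiment into its own subdirectory of output_root."""
    for name in names:
        subprocess.run(
            [sys.executable, "-m",
             "mlflow_export_import.experiment.export_experiment",
             "--experiment", name,
             "--output-dir", f"{output_root}/{name}"],
            check=True,  # fail fast if any single export errors out
        )
```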

(Optional) Running the export_experiment function on the HPC bash shell may cause thread-utilisation errors like:

OpenBLAS blas_thread_init: pthread_create failed for thread X of 64: Resource temporarily unavailable

This happens because MLflow internally uses SciPy for artifact and metadata handling, which requests threads through OpenBLAS, exceeding the limit allowed by your HPC. If you hit this issue, limit the number of threads by setting the following environment variables:

    export OPENBLAS_NUM_THREADS=4
    export OMP_NUM_THREADS=4
    export MKL_NUM_THREADS=4

If the issue persists, try lowering the thread limit to 2.
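The same caps can be applied from inside a training script, provided they are set before NumPy/SciPy is imported for the first time:

```python
import os

# Must run before the first `import numpy` / `import scipy`, otherwise
# OpenBLAS has already spawned one thread per visible core.
for var in ("OPENBLAS_NUM_THREADS", "OMP_NUM_THREADS", "MKL_NUM_THREADS"):
    os.environ.setdefault(var, "4")  # keep any value the job script already set
```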

5) Transfer experiment runs to the MLflow server:

Move everything from the HPC into the temporary folder on the MLflow server.

    rsync -avz /tmp/exported_runs <mlflow-server-username>@<host-address>:/tmp
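When this transfer becomes part of an automation script, building the rsync invocation as an argument list avoids shell-quoting pitfalls. `build_rsync_cmd` and the host name below are illustrative, not from the original setup:

```python
import shlex

def build_rsync_cmd(src: str, user: str, host: str, dest: str = "/tmp") -> list:
    """Assemble the rsync command used to push exported runs to the server."""
    return ["rsync", "-avz", src, f"{user}@{host}:{dest}"]

cmd = build_rsync_cmd("/tmp/exported_runs", "mluser", "mlflow-server.example")
print(shlex.join(cmd))
# subprocess.run(cmd, check=True)  # uncomment to actually run the transfer
```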

6) Stop the local MLflow server and clean up the port:

    lsof -i :<port-number>
    kill -9 <pid>

On the MLflow server:

Our goal is to move the experiment data from the tmp folder into MySQL and MinIO.

1) Since MinIO is Amazon S3 compatible, it uses boto3 (the AWS Python SDK) for communication. So we will set up proxy AWS-style credentials and use them to talk to MinIO through boto3.

    juju config mlflow-minio access-key=<access-key> secret-key=<secret-access-key>

2) Below are the commands to transfer the data.

Set the MLflow server and MinIO addresses in the environment. To avoid repeating this, we can add these lines to our .bashrc file.

    export MLFLOW_TRACKING_URI="http://<cluster-ip_or_nodeport_or_load-balancer>:port"
    export MLFLOW_S3_ENDPOINT_URL="http://<cluster-ip_or_nodeport_or_load-balancer>:port"

All the experiment files can be found under the exported_runs folder in the tmp directory. The import_experiment function finishes the job.

python3 -m mlflow_export_import.experiment.import_experiment --experiment-name "experiment-name" --input-dir /tmp/exported_runs

    Conclusion

This workaround let me track experiments even when communications and data transfers were restricted on my HPC cluster. Spinning up a local MLflow server instance, exporting experiments, and then importing them into my remote MLflow server gave me flexibility without having to change my workflow.

However, if you are dealing with sensitive data, make sure your transfer method is secure. Cron jobs and automation scripts could remove the manual overhead. Also, keep an eye on your local storage, as it is easy to fill up.

In the end, if you are working in similar environments, this article gives you a solution that requires no admin privileges and little time. Hopefully this helps teams who are stuck with the same issue. Thanks for reading!

You can connect with me on LinkedIn.


