
    Three Essential Hyperparameter Tuning Techniques for Better Machine Learning Models

    By ProfitlyAI · August 22, 2025 · 7 min read


    A machine learning (ML) model should not memorize the training data. Instead, it should learn well from the given training data so that it can generalize to new, unseen data.

    The default settings of an ML model may not work well for every type of problem we try to solve. We need to adjust these settings manually for better results. Here, “settings” refers to hyperparameters.

    What is a hyperparameter in an ML model?

    The user manually defines a hyperparameter value before the training process; the model does not learn this value from the data during training. Once defined, the value stays fixed until the user changes it.

    We have to distinguish between a hyperparameter and a parameter. 

    A parameter, in contrast, learns its value from the given data, and its value depends on the values of the hyperparameters. A parameter’s value is updated during the training process.
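This distinction is easy to see in code. In the following minimal sketch (using a hypothetical synthetic dataset), `kernel` and `C` are hyperparameters set by the user before training, while the weight vector `coef_` and bias `intercept_` are parameters learned from the data during `fit()`:

```python
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# A small synthetic dataset (20 features by default)
X, y = make_classification(n_samples=100, random_state=0)

# Hyperparameters: set by the user before training
clf = SVC(kernel='linear', C=1.0)

# Parameters: the weight vector and bias are learned from the data
clf.fit(X, y)
print(clf.coef_.shape)       # one learned weight per feature
print(clf.intercept_.shape)  # one learned bias term
```

Changing `C` or `kernel` changes *how* the parameters are learned, but the hyperparameter values themselves never change during `fit()`.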

    Here is an example of how different hyperparameter values affect a Support Vector Machine (SVM) model.

    from sklearn.svm import SVC
    
    clf_1 = SVC(kernel='linear')
    clf_2 = SVC(C=1.0, kernel='poly', degree=3)
    clf_3 = SVC(C=1.0, kernel='poly', degree=1)

    Both the clf_1 and clf_3 models perform linear SVM classification (a polynomial kernel of degree 1 is linear), while the clf_2 model performs non-linear classification. In this case, the user can switch between linear and non-linear classification tasks by changing the value of the ‘kernel’ hyperparameter in the SVC() class.

    What is hyperparameter tuning?

    Hyperparameter tuning is an iterative process of optimizing a model’s performance by finding the optimal values for its hyperparameters without causing overfitting.

    Sometimes, as in the SVM example above, the choice of a hyperparameter depends on the type of problem (regression or classification) we want to solve. In that case, the user can simply set ‘linear’ for linear classification and ‘poly’ for non-linear classification. It is a simple decision.

    However, the user needs advanced searching techniques to select a value for, say, the ‘degree’ hyperparameter.

    Before discussing searching techniques, we need to understand two important definitions: the hyperparameter search space and the hyperparameter distribution.

    Hyperparameter search area

    The hyperparameter search space contains the set of possible hyperparameter value combinations defined by the user. The search will be restricted to this space.

    The search space can be n-dimensional, where n is a positive integer.

    The number of dimensions in the search space is the number of hyperparameters (e.g., a three-dimensional space corresponds to 3 hyperparameters).

    The search space is defined as a Python dictionary with hyperparameter names as keys and lists of candidate values as values.

    search_space = {'hyparam_1':[val_1, val_2],
                    'hyparam_2':[val_1, val_2],
                    'hyparam_3':['str_val_1', 'str_val_2']}
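For instance, a concrete (hypothetical) search space for the SVC model above might look like this. With three keys, it is three-dimensional, and the total number of combinations is the product of the list lengths:

```python
from itertools import product

# A hypothetical three-dimensional search space for SVC
search_space = {'C': [0.1, 1, 10],
                'kernel': ['linear', 'poly'],
                'degree': [1, 2, 3]}

# Total number of combinations: 3 x 2 x 3 = 18
n_combinations = len(list(product(*search_space.values())))
print(n_combinations)  # 18
```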

    Hyperparameter distribution

    The underlying distribution of a hyperparameter is also important because it decides how each value will be tested during the tuning process. There are four popular types of distributions.

    • Uniform distribution: All possible values within the search space are equally likely to be chosen.
    • Log-uniform distribution: A logarithmic scale is applied to uniformly distributed values. This is useful when the range of a hyperparameter is large. 
    • Normal distribution: Values are distributed around a mean; the standard normal distribution has zero mean and a standard deviation of 1. 
    • Log-normal distribution: A logarithmic scale is applied to normally distributed values. This is useful when the range of a hyperparameter is large.

    The choice of distribution also depends on the type of values the hyperparameter takes. A hyperparameter can take discrete or continuous values. A discrete value can be an integer or a string, while a continuous value is always a floating-point number.

    from scipy.stats import randint, uniform, loguniform
    
    # Define the parameter distributions
    param_distributions = {
        'hyparam_1': randint(low=50, high=75),
        'hyparam_2': uniform(loc=0.01, scale=0.19),
        'hyparam_3': loguniform(0.1, 1.0)
    }
    • randint(low=50, high=75): Selects random integers between 50 and 74 (the upper bound is exclusive)
    • uniform(loc=0.01, scale=0.19): Selects floating-point numbers evenly between 0.01 and 0.2, i.e., over [loc, loc + scale) (continuous uniform distribution)
    • loguniform(0.1, 1.0): Selects values between 0.1 and 1.0 on a log scale (log-uniform distribution)
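We can check how these distributions behave by drawing a few samples from each with scipy's `.rvs()` method. This is a minimal sketch using the same distributions as above:

```python
from scipy.stats import randint, uniform, loguniform

# Draw 5 samples from each distribution (fixed random_state for reproducibility)
ints = randint(low=50, high=75).rvs(size=5, random_state=0)
floats = uniform(loc=0.01, scale=0.19).rvs(size=5, random_state=0)
logs = loguniform(0.1, 1.0).rvs(size=5, random_state=0)

print(ints)    # integers in [50, 74]
print(floats)  # floats in [0.01, 0.2)
print(logs)    # floats in [0.1, 1.0], denser near 0.1 on a linear scale
```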

    Hyperparameter tuning strategies

    There are many different types of hyperparameter tuning methods. In this article, we will focus on only three methods that fall under the exhaustive search category. In an exhaustive search, the search algorithm searches the entire search space. There are three methods in this category: manual search, grid search and random search.

    Manual search

    There is no search algorithm behind a manual search. The user simply sets some values based on intuition and observes the results. If the result is not good, the user tries another value, and so on. The user learns from previous attempts and sets better values in future attempts. Therefore, manual search falls under the informed search category. 

    There is no clear definition of the hyperparameter search space in manual search. This method can be time-consuming, but it may be useful when combined with other methods such as grid search or random search.

    Manual search becomes difficult when we have to search two or more hyperparameters at once. 

    An example of manual search is that the user can simply set ‘linear’ for linear classification and ‘poly’ for non-linear classification in an SVM model.

    from sklearn.svm import SVC
    
    linear_clf = SVC(kernel='linear')
    non_linear_clf = SVC(C=1.0, kernel='poly')

    Grid search

    In grid search, the search algorithm tests all possible hyperparameter combinations defined in the search space. It is therefore a brute-force method. It is time-consuming and requires more computational power, especially as the number of hyperparameters increases (the curse of dimensionality).

    To use this method effectively, we need a well-defined hyperparameter search space. Otherwise, we will waste a lot of time testing unnecessary combinations.

    However, the user does not need to specify the distributions of the hyperparameters. 

    The search algorithm does not learn from previous attempts (iterations) and therefore does not try better values in future attempts. Therefore, grid search falls under the uninformed search category.
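In scikit-learn, grid search is implemented by `GridSearchCV`. Here is a minimal sketch with a hypothetical two-dimensional search space (3 × 2 = 6 combinations, every one of which is tested with cross-validation):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, random_state=0)

# Hypothetical search space: 3 x 2 = 6 combinations
search_space = {'C': [0.1, 1, 10],
                'kernel': ['linear', 'poly']}

grid = GridSearchCV(SVC(), param_grid=search_space, cv=3)
grid.fit(X, y)

print(grid.best_params_)                # the best combination found
print(len(grid.cv_results_['params']))  # 6 - every combination was tried
```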

    Random search

    In random search, the search algorithm tests randomly chosen hyperparameter values in each iteration. Like grid search, it does not learn from previous attempts and therefore does not try better values in future attempts. Therefore, random search also falls under uninformed search.

    [Figure: Grid search vs random search (Image by author)]

    Random search is much better than grid search when the search space is large and we know little about the hyperparameter space. It is also considered computationally efficient. 

    When we provide the same size of hyperparameter space to grid search and random search, we cannot see much difference between the two. We have to define a bigger search space in order to gain an advantage of random search over grid search. 

    There are two ways to increase the size of the hyperparameter search space. 

    • By increasing the dimensionality (adding new hyperparameters)
    • By widening the range of each hyperparameter

    It is recommended to define the underlying distribution for each hyperparameter. If not defined, the algorithm uses the default, the uniform distribution, in which all combinations have the same probability of being selected. 

    There are two important hyperparameters in the random search method itself!

    • n_iter: The number of iterations, i.e., the size of the random sample of hyperparameter combinations to test. Takes an integer. This trades off runtime against the quality of the output. We need to define this to allow the algorithm to test a random sample of combinations.
    • random_state: We need to define this hyperparameter to get the same output across multiple function calls.
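In scikit-learn, random search is implemented by `RandomizedSearchCV`, which accepts both distributions and plain lists in its search space. A minimal sketch with hypothetical values, showing both `n_iter` and `random_state`:

```python
from scipy.stats import loguniform
from sklearn.datasets import make_classification
from sklearn.model_selection import RandomizedSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, random_state=0)

# Hypothetical distributions: C is drawn from a log-uniform distribution,
# kernel is sampled uniformly from a list
param_distributions = {'C': loguniform(0.01, 100),
                       'kernel': ['linear', 'poly', 'rbf']}

rand = RandomizedSearchCV(SVC(), param_distributions=param_distributions,
                          n_iter=10,        # test 10 random combinations
                          random_state=42,  # reproducible across calls
                          cv=3)
rand.fit(X, y)

print(rand.best_params_)
print(len(rand.cv_results_['params']))  # 10 - only the sampled combinations
```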

    The biggest disadvantage of random search is that it produces high variance across multiple function calls with different random states. 


    This is the end of today’s article.

    Please let me know if you have any questions or feedback.


    See you in the next article. Happy learning to you!

    Designed and written by:
    Rukshan Pramoditha

    2025–08–22


