    How to Generate Synthetic Data: A Comprehensive Guide Using Bayesian Sampling and Univariate Distributions



    It is data that drives organizations these days. But what happens when observations are scarce, expensive, or hard to collect? That is where synthetic data comes into play, because we can generate artificial data that mimics the statistical properties of real-world observations. In this blog, I will provide a background on synthetic data, together with practical hands-on examples. I will discuss two powerful techniques for generating synthetic data: Bayesian sampling and univariate distribution sampling. In addition, I will show how to generate data from expert knowledge alone. All practical examples are created with the help of the bnlearn and distfit libraries. By the end of this blog, you will understand how probability density functions and Bayesian techniques can be leveraged to generate high-quality synthetic data.


    Try the hands-on examples in this blog. It will help you to learn faster, understand better, and remember longer. Grab a coffee and have fun! Disclosure: I am the author of the Python packages bnlearn and distfit.


    An Introduction To Synthetic Data

    In the last decade, the amount of data has grown rapidly and has led to the insight that higher data quality is more important than quantity. Higher data quality helps to draw more accurate conclusions and enables better-informed decisions. In many domains, such as healthcare, finance, cybersecurity, and autonomous systems, real-world data can be sensitive, expensive, imbalanced, or difficult to collect, particularly for rare or edge-case scenarios. This is where Synthetic Data becomes a powerful alternative. In the past few years, we have also seen a huge trend of synthetic data generation for artificially generated images, texts, and audio. Whatever the purpose is, synthetic data is becoming more important, which is also stressed by various companies like Gartner [1], which predicts that real data will be overshadowed very soon. There are, roughly speaking, two main categories of creating synthetic data (Figure 1): Probabilistic and Generative.

    • Probabilistic (distribution-based). Here we estimate statistical distributions from real measurements (or define them theoretically), and then we can sample new synthetic observations from these distributions. Examples include fitting univariate distributions or constructing Bayesian networks for multivariate data.
    • Generative or simulation-based. Learned models are used, such as neural networks, agent-based systems, or rule-based engines, to produce synthetic data without relying strictly on predefined probability distributions. This includes approaches like GANs for image data, discrete-event simulation for process modeling, and large language models (LLMs) for generating realistic synthetic text or structured records based on prompt-driven patterns.
    Figure 1. Schematic overview of synthetic data generation approaches. Image by author.

    On this weblog, I’ll concentrate on Probabilistic strategies (Determine 1, blue/left half), the place the aim is to estimate the underlying distribution in order that we are able to mirror both an current dataset or generate knowledge from an skilled’s information. I’ll make a deep dive into univariate distribution becoming and Bayesian sampling, the place I’ll talk about the next 4 ideas of artificial knowledge era:

    1. Synthetic Data That Mimics Existing Continuous Measurements (expected to be independent variables).
      We start with an existing dataset where the variables have continuous values. The goal is to fit a model per variable that can be used to generate measurements that mirror the original properties. The measurements are assumed to be independent of each other.
    2. Synthetic Data That Mimics Expert Knowledge (expected to be continuous and independent variables).
      We start without a dataset, only with expert knowledge. We will determine the best Probability Density Functions (PDFs) with their parameters that mimic the expert's domain knowledge. The designed model can then be used to generate new measurements.
    3. Synthetic Data That Mimics an Existing Categorical Dataset (expected to be dependent variables).
      We start with an existing categorical dataset. We will learn the structure and parameters from the data, including the feature interdependence. The fitted model can be used to generate measurements that mirror the properties of the original dataset.
    4. Synthetic Data That Mimics Expert Knowledge (expected to be categorical and with dependent variables).
      We start without a dataset, only with expert knowledge. The difference with approach 2 is that this model captures the expert's knowledge to encode dependencies between multiple variables using a directed graph. The fitted model can be used to generate a synthetic dataset based solely on the expert's knowledge.

    In the next section, I will explain the four approaches in more detail, together with hands-on examples. But before we go into the details, I will first provide some background on probability density functions and Bayesian sampling.


    What You Need To Know About Probability Density Functions

    Before we dive into the creation of synthetic data using probability distributions (approaches 1 and 2), I will start with a brief introduction to probability density functions (PDFs). First of all, there are many probability distributions, as depicted in Figure 2. It is important to understand their characteristics, as this helps to build intuition about how they can mimic real-world observations. The basics are as follows: a PDF describes the likelihood of a continuous variable taking on a specific value, and different distributions have characteristic shapes: bell curves, exponential decays, uniform spreads, and so on. These shapes, shown in Figure 2, need to match real-world behavior (e.g., response times, income levels, or temperature readings) with candidate distributions.

    Figure 2. Overview of probability density functions and their parameters. Created by Rasmus Bååth (2012) (MIT license).
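
    To build intuition for how the parameters shape a distribution, here is a minimal sketch (my own addition, not part of the original article) that plots three of the textbook PDFs from Figure 2 with scipy.stats:

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.stats import norm, expon, uniform
    
    # Grid of x-values to evaluate the densities on
    x = np.linspace(-5, 10, 500)
    
    # Three characteristic shapes: bell curve, exponential decay, uniform spread
    plt.plot(x, norm.pdf(x, loc=0, scale=1), label='Normal(0, 1)')
    plt.plot(x, expon.pdf(x, loc=0, scale=2), label='Exponential(scale=2)')
    plt.plot(x, uniform.pdf(x, loc=0, scale=5), label='Uniform(0, 5)')
    
    plt.xlabel('Value')
    plt.ylabel('Density')
    plt.legend()
    plt.show()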

    The better a PDF matches the distribution of the real variables, the better our synthetic data will be. However, the challenge with real-world variables is that they often exhibit skewness, multimodality, heavy tails, and so on, and thus do not always align neatly with well-known distributions. Selecting the wrong distribution can lead to misleading simulations and unreliable results.

    Creating synthetic data is challenging: it requires mimicking real-world events by using theoretical distributions and population parameters.

    Luckily, various packages can help us find the best PDF for the variables, such as distfit [2]. This library is highly useful because it automates the process of scanning through a wide range of theoretical distributions, fitting them to the variables in our dataset, and ranking them based on goodness-of-fit metrics such as the Kolmogorov-Smirnov statistic or log-likelihood. This approach finds the best-fitting theoretical distribution without relying on intuition or trial-and-error. In the use case, I will demonstrate how it works, but first, a brief introduction to Bayesian sampling.
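
    To make the goodness-of-fit idea concrete, here is a minimal sketch (my own illustration, not from the original text) that fits two candidate distributions to a sample with plain scipy and compares them with the Kolmogorov-Smirnov statistic; distfit automates exactly this kind of comparison across dozens of candidates.

    import numpy as np
    from scipy import stats
    
    # Example sample: skewed, positive-valued data (made up for illustration)
    rng = np.random.default_rng(42)
    sample = rng.gamma(shape=2.0, scale=3.0, size=1000)
    
    # Fit two candidate distributions to the same sample
    candidates = {'norm': stats.norm, 'gamma': stats.gamma}
    
    for name, dist in candidates.items():
        params = dist.fit(sample)                       # maximum-likelihood parameter estimates
        ks_stat, p_value = stats.kstest(sample, name, args=params)
        print(f'{name:>6}: KS statistic={ks_stat:.4f}, p-value={p_value:.4f}')
    
    # The candidate with the smallest KS statistic fits the sample best.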


    What You Need To Know About Bayesian Sampling

    Before we dive into the creation of synthetic data using Bayesian sampling (approaches 3 and 4), I will explain the concepts of sampling from multinomial distributions. At its core, Bayesian sampling refers to generating data points from a probabilistic model defined by a Directed Acyclic Graph (DAG) and its associated Conditional Probability Distributions (CPDs). The structure of the DAG encodes the dependencies between variables, while the CPDs define the exact probability of each variable conditioned on its parents. When combined, they form a joint probability distribution over all variables in the network. The two best-known Bayesian sampling techniques are Forward Sampling and Gibbs Sampling, and both are available in the bnlearn for Python package [4].

    Bayesian Forward Sampling is an intuitive technique that samples values by traversing the graph in topological order, starting with root nodes that have no parents. Each variable is then sampled based on its Conditional Probability Distribution (CPD) and the previously sampled values of its parent nodes. This method is ideal when you want to simulate new data that follows the generative assumptions of your Bayesian network. In bnlearn this is the default method. It is particularly powerful for creating synthetic datasets from expert-defined DAGs, where we explicitly encode our domain knowledge without requiring observational data.
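
    To show the idea behind forward sampling, here is a minimal hand-rolled sketch (my own illustration; the variable names and probabilities are made up) for a two-node network Rain → WetGrass. It uses the factorization P(Rain, WetGrass) = P(Rain) · P(WetGrass | Rain) and samples in topological order, which is what bnlearn does for us on larger networks.

    import numpy as np
    
    rng = np.random.default_rng(0)
    
    # Prior of the root node: P(Rain=1) = 0.2
    p_rain = 0.2
    # CPD of the child node: P(WetGrass=1 | Rain)
    p_wet_given_rain = {0: 0.1, 1: 0.9}
    
    def forward_sample(n=5):
        samples = []
        for _ in range(n):
            # 1. Sample the root node first (it has no parents)
            rain = rng.random() < p_rain
            # 2. Sample the child conditioned on the already-sampled parent value
            wet = rng.random() < p_wet_given_rain[int(rain)]
            samples.append({'Rain': int(rain), 'WetGrass': int(wet)})
        return samples
    
    print(forward_sample())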

    Alternatively, when some values are missing or when exact inference is computationally expensive, Gibbs Sampling can be used. This is a Markov Chain Monte Carlo (MCMC) method that iteratively samples from the conditional distribution of each variable given the current values of all others. This produces samples from the joint distribution without needing to compute it explicitly. While Forward Sampling is better suited for full synthetic data generation, Gibbs Sampling excels in scenarios involving partial observations, imputation, or approximate inference. This method can be set in bnlearn as follows: bn.sampling(DAG, methodtype="gibbs").

    Let's go to the next section, where we will experiment with probability distribution parameters to see how they affect the shape and behavior of synthetic data. We will use distfit to find the best PDF that matches real-world variables and evaluate how well they replicate the original data structure.


    The Predictive Maintenance Dataset

    The hands-on examples are based on the predictive maintenance dataset [3] (CC BY 4.0 license), which contains 10,000 sensor data points from machinery over time. The dataset is a so-called mixed-type dataset containing a mix of continuous, categorical, and binary variables. It captures operational data from machines, including both sensor readings and failure events. For instance, it includes physical measurements like rotational speed, torque, and tool wear (all continuous variables reflecting how the machine is behaving over time). Alongside these, we have categorical information such as the machine type and environmental data like air temperature. The dataset also indicates whether specific types of failures occurred, such as tool wear failure or heat dissipation failure (these are represented as binary variables).

    Photo by Krzysztof Kowalik on Unsplash
    Table 1: The table provides an overview of the variables in the predictive maintenance dataset. There are different types of variables: identifiers, sensor readings, and target variables (failure indicators). Each variable is characterized by its role, data type, and a brief description.

    Generate Continuous Synthetic Data

    In the following two sections, we will generate synthetic data where the variables have continuous values, under the assumption that the variables are independent of each other. The two flavors of generating synthetic data with this approach are (1) starting from an existing dataset, and (2) translating expert domain knowledge into a structured, synthetic dataset. If we need multiple continuous variables, we treat each variable separately and independently (1), then we determine the best probability distribution per variable (2), and finally, we generate synthetic values (3). This approach is particularly useful when we need to simulate realistic inputs for testing, for modeling, or when working with small datasets.

    1. Generate Continuous Synthetic Data That Closely Mirrors the Distribution of Real Data

    The goal in this section is to generate synthetic data that closely mirrors the distribution of real data. The predictive maintenance dataset contains five continuous variables, among them the Torque measurements, for which the description is as follows:

    Torque should normally be within the expected operating range: low torque is less critical, but excessively high torque suggests mechanical strain or stress.

    In the code block below, we import the distfit library [2], load the dataset, and visually inspect the Torque measurements to get an intuition of the range and potential outliers.

    # Install library
    pip install distfit
    # Import library
    from distfit import distfit
    
    # Initialize distfit
    dfit = distfit()
    
    # Import dataset
    df = dfit.import_example(data='predictive_maintenance')
    
    # Print dataframe
    print(df)
    +-------+------------+------+------------------+----+-----+-----+-----+-----+
    |  UDI | Product ID  | Type | Air temperature  | .. | HDF | PWF | OSF | RNF |
    +-------+------------+------+------------------+----+-----+-----+-----+-----+
    |    1 | M14860      |   M  | 298.1            | .. |   0 |   0 |   0 |   0 |
    |    2 | L47181      |   L  | 298.2            | .. |   0 |   0 |   0 |   0 |
    |    3 | L47182      |   L  | 298.1            | .. |   0 |   0 |   0 |   0 |
    |    4 | L47183      |   L  | 298.2            | .. |   0 |   0 |   0 |   0 |
    |    5 | L47184      |   L  | 298.2            | .. |   0 |   0 |   0 |   0 |
    | ...  | ...         | ...  | ...              | .. | ... | ... | ... | ... |
    | 9996 | M24855      |   M  | 298.8            | .. |   0 |   0 |   0 |   0 |
    | 9997 | H39410      |   H  | 298.9            | .. |   0 |   0 |   0 |   0 |
    | 9998 | M24857      |   M  | 299.0            | .. |   0 |   0 |   0 |   0 |
    | 9999 | H39412      |   H  | 299.0            | .. |   0 |   0 |   0 |   0 |
    |10000 | M24859      |   M  | 299.0            | .. |   0 |   0 |   0 |   0 |
    +-------+-------------+------+------------------+----+-----+-----+-----+-----+
    [10000 rows x 14 columns]
    
    # Make plot
    dfit.lineplot(df['Torque [Nm]'], xlabel='Time', ylabel='Torque [Nm]', title='Torque Measurements')
    

    We can see from Figure 3 that the range across the 10,000 data points is mainly between 20 and 50 Nm. Values excessively above this range can thus be critical. This information, together with the line plot, helps to build an intuition of the expected distribution.

    Figure 3. Line plot showing the torque measurements across 10,000 measurements.

    With the use of distfit, we can now search over 90 univariate distributions to determine the best fit for the Torque measurements. However, testing every distribution can take some time, especially when we use the bootstrap parameter to more accurately validate the fit of each distribution. In the code block below, you can set the n_boots=100 parameter lower to speed up the computations. It is also possible to test only against the most popular PDFs (with the distr parameter). See the code block below to determine the best PDF with its parameters for the Torque measurements.

    # Import library
    from distfit import distfit
    import matplotlib.pyplot as plt
    
    # Initialize distfit and set the bootstraps to validate the fit.
    dfit = distfit(distr='popular', n_boots=100)
    
    # Fit model
    dfit.fit_transform(df['Torque [Nm]'])
    
    # Plot PDF/CDF
    fig, ax = plt.subplots(1,2, figsize=(25, 10))
    dfit.plot(chart='PDF', n_top=10, ax=ax[0])
    dfit.plot(chart='CDF', n_top=10, ax=ax[1])
    plt.show()
    
    # Create line plot
    dfit.lineplot(df['Torque [Nm]'], xlabel='Time', ylabel='Torque [Nm]', title='Torque Measurements', projection=True)
    
    # Print fitted parameters
    print(dfit.model)
    {'name': 'loggamma',
     'score': 0.00010374408112953594,
     'loc': -1900.0760925689528,
     'scale': 288.3648181697778,
     'arg': (835.7558898693087,),
     'params': (835.7558898693087, -1900.0760925689528, 288.3648181697778),
     'model': <scipy.stats._distn_infrastructure.rv_continuous_frozen at 0x20c2de1c830>,
     'bootstrap_score': 0.12,
     'bootstrap_pass': True,
     'color': '#e41a1c',
     'CII_min_alpha': 23.457570647289003,
     'CII_max_alpha': 56.28002364712847}
    Figure 4. Left: PDF, and right: CDF. The top 5 fitted theoretical distributions are shown in different colors. The best fit is Loggamma and is colored in red. (image by the author)

    After running the code block, we can see that the Loggamma distribution is detected as the best fit (Figure 4, red solid line). The upper bound confidence interval (CII) at alpha=0.05 is 56.28, which seems a reasonable threshold based on a visual inspection (red vertical dashed line). Note that the use of the CII is not needed for the generation of synthetic data. A full projection of the estimated PDF can be seen in Figure 5.

    Figure 5. Line plot showing the torque measurements across 10,000 measurements. The empirical PDF is estimated based on the existing data and plotted on the right side. The estimated theoretical PDF is also plotted on the right side. (image by the author)
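
    Although the CII is not needed for generating synthetic data, it is handy for flagging the "excessively high torque" values from the variable description. A minimal sketch (my own addition), reusing the CII_max_alpha value stored in dfit.model as shown in the output above:

    # Upper-bound confidence interval (alpha=0.05) from the fitted model
    threshold = dfit.model['CII_max_alpha']   # ~56.28 Nm for the Loggamma fit above
    
    # Flag torque measurements that exceed the expected operating range
    is_critical = df['Torque [Nm]'] > threshold
    print(f'{is_critical.sum()} of {len(df)} measurements exceed {threshold:.2f} Nm')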

    With the estimated Loggamma distribution and the fine-tuned population parameters (c=835.7, loc=-1900.07, scale=288.36), we can now generate synthetic data for Torque. The .generate() function automatically uses the model parameters, and we only need to specify the number of samples we want to generate. For example, we can generate 200 samples and plot the data points (Figure 6, code block below).

    # Create synthetic data
    X = dfit.generate(200)
    
    # Plot the synthetic data (X)
    dfit.lineplot(X, xlabel='Time', ylabel='Generated Torque [Nm]', title='Synthetic Data')
    
    Figure 6. Synthetic data is generated for n=200 Torque measurements. We can see a range between the values [20, 60] with some outliers. The red horizontal line is the previously estimated confidence interval for alpha=0.05 (image by the author).

    At this point, we have estimated the PDF that mirrors the measurements of the variable Torque. With the estimated parameters of the PDF, we can sample from the fitted distribution and generate synthetic data. Note that the predictive maintenance dataset contains four more continuous measurements, and if we need to mimic these as well, we must repeat this entire procedure for each variable separately, as sketched below. This model for generating synthetic data provides many opportunities. For instance, it allows testing machine learning pipelines under rare or critical operating conditions that may not be present in the original dataset, thereby improving performance evaluation. Or if your dataset is small, it allows you to generate more data points.
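
    A minimal sketch of that repetition, assuming the remaining continuous column names listed below (they are written out here for illustration and may need adjusting to your dataframe):

    from distfit import distfit
    
    # Continuous columns to mimic (assumed names; adjust to your dataframe)
    continuous_cols = ['Air temperature [K]', 'Process temperature [K]',
                       'Rotational speed [rpm]', 'Torque [Nm]', 'Tool wear [min]']
    
    synthetic = {}
    for col in continuous_cols:
        # Fit the best PDF per variable independently
        dfit = distfit(distr='popular', n_boots=10)
        dfit.fit_transform(df[col])
        # Generate 200 synthetic samples from the fitted PDF
        synthetic[col] = dfit.generate(200)
        print(f"{col}: best fit = {dfit.model['name']}")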


    2. Generate Continuous Synthetic Data Using Expert Knowledge

    On this part, we’ll generate artificial knowledge that intently mirrors skilled information. Or in different phrases, we shouldn’t have any knowledge at first, solely specialists’ information. Nevertheless, we do purpose to create an artificial dataset. To show this strategy, I’ll use a hypothetical use case: Suppose that specialists bodily function the equipment, and we have to perceive the depth of actions to additionally embrace it within the mannequin to find out failures. An skilled offered us with the next details about the operational actions:

    Most people start to work at 8, but the intensity of machinery operations peaks around 10. Some machinery operations can even be seen before 8, but not a lot. In the afternoon, the machinery operations gradually decrease and stop around 6 pm. There is usually also a small peak of intense machinery operations around 1-2 pm.

    Step 1: Translate domain knowledge into a statistical model.

    Given the description, we now need to determine the best-matching theoretical distribution. However, choosing the best theoretical distribution requires investigating the properties of many distributions (see Figure 2). In addition, you may need more than one distribution; specifically, a mixture of probability density functions. In our example, we will create a mixture of two distributions: one PDF for the morning and one PDF for the afternoon activities.

    Model for the morning: Most people start to work at 8, but the intensity of machinery operations peaks around 10. Some machinery operations can even be seen before 8, but not a lot.

    To model the morning machinery operations, we can use the Normal distribution. This distribution is symmetrical without heavy tails. Several normal PDFs with different mu and sigma parameters are shown in Figure 7A. Try to get a feeling for how the slope changes with the sigma parameter. For our machinery operations, we can set the parameters with a mean of 10 am and a relatively narrow spread, such as sigma=1.

    Model for the afternoon: The machinery operations gradually decrease and stop around 6 pm. There is usually also a small peak of intense machinery operations around 1-2 pm.

    A suitable distribution for the afternoon machinery operations could be a skewed distribution with a heavy right tail that captures the gradually decreasing activities. The Weibull distribution can be a candidate, as it is used to model data with a monotonically increasing or decreasing trend. However, if we do not always expect a monotonic decrease in activity (because it is different on Tuesdays or so), it may be better to consider a distribution such as the gamma (Figure 7B). To tune the parameters so that they match the afternoon description, it is practical to use the generalized gamma distribution, as it provides more control over the parameter tuning.

    Figure 7. (A) Normal distribution with various parameters. Source: Wikipedia. (B) Gamma distribution with various parameters. Source: Wikipedia.

    At this point, we have chosen our two candidate distributions to model the machinery operations: the Normal PDF for the morning and the generalized gamma PDF for the afternoon. In the next section, we will fine-tune the PDF parameters to create a mixture of PDFs that matches the machinery operations for the entire day.

    Step 2: Parameter Fine-Tuning To Determine The Best Fit.

    To create a model that closely resembles the machinery operations, we will generate data separately for the morning and the afternoon (see code block below). For the morning machinery operations, we decided to use the normal distribution with a mean of 10 (representing the peak at 10 am) and a standard deviation of 1. We will draw 8,000 samples. For the afternoon machinery operations, we use the generalized gamma distribution. After playing around with the loc parameter, I decided to set the second peak at loc=13. We could also have used loc=14, but this creates a slightly larger gap between the morning and afternoon machinery operations. Furthermore, the peak in the afternoon was described as being smaller, and therefore, we will generate 2,000 samples.

    The next step is to combine the two synthetic measurements and create a mixture of PDFs that matches the machinery operations for the entire day. Note that shuffling the samples is important because, without it, the samples are ordered first by the 8,000 samples from the normal distribution and then by the 2,000 samples from the generalized gamma distribution. This order could introduce bias in any analysis or modeling that is performed on the dataset when splitting it. We can now plot the distribution and see what it looks like (Figure 8). It usually takes a few iterations to fine-tune the parameters.

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.stats import norm, gengamma
    
    # Set seed for reproducibility
    np.random.seed(1)
    
    # Generate data from a normal distribution
    normal_samples = norm.rvs(10, 1, 8000)
    
    # Create a generalized gamma distribution with the specified parameters
    dist = gengamma(a=1.4, c=1, scale=0.8, loc=13)
    # Generate data from the generalized gamma distribution
    gamma_samples = dist.rvs(size=2000)
    
    # Combine the two datasets by concatenation
    X = np.concatenate((normal_samples, gamma_samples))
    
    # Shuffle the dataset
    np.random.shuffle(X)
    
    # Plot the intensity of machinery operations across the day
    bar_properties = {'color': '#607B8B', 'linewidth': 1, 'edgecolor': '#5A5A5A'}
    plt.figure(figsize=(20, 15))
    plt.hist(X, bins=100, **bar_properties)
    plt.grid(True)
    plt.xlabel('Time', fontsize=22)
    plt.ylabel('Intensity of Machinery Operations', fontsize=22)
    Figure 8. A mixture of probability density functions: the Normal and the generalized gamma distribution. Image by Author.

    We were able to convert the expert's knowledge into a mixture of PDFs and created synthetic data that allows us to model the normal/expected behavior of machinery operations (Figure 8). The histogram clearly shows a major peak at 10 am, with machinery operations starting from 6 am up to 1 pm, and a second peak around 1-2 pm with a heavy right tail towards 8 pm.
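
    As a quick sanity check (my own addition, assuming X from the code block above is still in memory), we can run distfit on the generated mixture and inspect how poorly a single well-known PDF summarizes the bimodal data, which is exactly why the mixture of two PDFs was needed:

    from distfit import distfit
    
    # Fit a single theoretical PDF on the combined (bimodal) data
    dfit = distfit(distr='popular')
    dfit.fit_transform(X)
    
    # Compare the single fitted curve against the histogram of the mixture
    dfit.plot(chart='PDF')
    print(dfit.model['name'], dfit.model['score'])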


    Generate Categorical Synthetic Data

    In the following two sections, we will generate synthetic data where the variables are categorical and assumed to be dependent on each other. Here again, we can follow the same two approaches: starting from an existing dataset to learn the distributions and their dependencies, or defining a DAG based on expert domain knowledge and then generating synthetic data.

    1. Generate Categorical Synthetic Data That Mimics an Existing Dataset

    The goal in this section is to generate synthetic data that closely mirrors the distribution of a real categorical, dependent dataset. The difference with section 1 is that we now aim to mimic an existing categorical dataset and take into account the (inter)dependence between the features. The dataset we will use is again the predictive maintenance dataset [3]. In the code block below, we import the bnlearn library and load the dataset.

    # Install bnlearn library
    pip install bnlearn
    # Import library
    import bnlearn as bn
    
    # Load dataset
    df = bn.import_example('predictive_maintenance')
    
    # Print dataframe
    print(df)
    +-------+------------+------+------------------+----+-----+-----+-----+-----+
    |  UDI | Product ID  | Type | Air temperature  | .. | HDF | PWF | OSF | RNF |
    +-------+------------+------+------------------+----+-----+-----+-----+-----+
    |    1 | M14860      |   M  | 298.1            | .. |   0 |   0 |   0 |   0 |
    |    2 | L47181      |   L  | 298.2            | .. |   0 |   0 |   0 |   0 |
    |    3 | L47182      |   L  | 298.1            | .. |   0 |   0 |   0 |   0 |
    |    4 | L47183      |   L  | 298.2            | .. |   0 |   0 |   0 |   0 |
    |    5 | L47184      |   L  | 298.2            | .. |   0 |   0 |   0 |   0 |
    | ...  | ...         | ...  | ...              | .. | ... | ... | ... | ... |
    | 9996 | M24855      |   M  | 298.8            | .. |   0 |   0 |   0 |   0 |
    | 9997 | H39410      |   H  | 298.9            | .. |   0 |   0 |   0 |   0 |
    | 9998 | M24857      |   M  | 299.0            | .. |   0 |   0 |   0 |   0 |
    | 9999 | H39412      |   H  | 299.0            | .. |   0 |   0 |   0 |   0 |
    |10000 | M24859      |   M  | 299.0            | .. |   0 |   0 |   0 |   0 |
    +-------+-------------+------+------------------+----+-----+-----+-----+-----+
    [10000 rows x 14 columns]

    Before we can learn the causal structure and the parameters of the entire system using Bayesian methods, we need to clean the dataset first. In our first step, we keep only the seven relevant categorical variables: [Type, Machine failure, TWF, HDF, PWF, OSF, RNF]. Other variables, such as the unique identifiers (UDI and Product ID), hold no meaningful information for modeling. In addition, modeling mixed datasets (categorical and continuous) at the same time is not supported.

    # Load dataset
    df = bn.import_example('predictive_maintenance')
    
    # Get discrete columns
    cols = ['Type', 'Machine failure', 'TWF', 'HDF', 'PWF', 'OSF', 'RNF']
    df = df[cols]
    
    # Structure learning
    model = bn.structure_learning.fit(df, methodtype='hc', scoretype='bic')
    # [bnlearn] >Computing best DAG using [hc]
    # [bnlearn] >Set scoring type at [bds]
    # [bnlearn] >Compute structure scores for model comparison (higher is better).
    
    # Compute edge weights using the chi-square independence test.
    model = bn.independence_test(model, df, test='chi_square', prune=True)
    
    # Plot the best DAG
    bn.plot(model, edge_labels='pvalue', params_static={'maxscale': 4, 'figsize': (15, 15), 'font_size': 14, 'arrowsize': 10})
    
    dotgraph = bn.plot_graphviz(model, edge_labels='pvalue')
    dotgraph
    
    # Store to pdf
    dotgraph.view(filename='bnlearn_predictive_maintanance')

    In the code block above, we determined the causal relationships. The Bayesian model learned the causal relationships from the data using a search strategy and a scoring function. A scoring function quantifies how well a specific DAG explains the observed data, and the search strategy efficiently walks through the entire search space of DAGs to eventually find the most optimal DAG without testing all of them. We use HillClimbSearch as the search strategy and the Bayesian Information Criterion (BIC) as the scoring function for this use case. The causal DAG is shown in Figure 9, where the detected root variable is PWF (Power Failure), and the target variable is Machine failure. We can see from the figure that the failure modes (TWF, HDF, PWF, OSF, RNF) have a complex dependency on the Machine failure. As expected. The RNF variable (the random failures) is not included as a node, and Type is not a cause of Machine failure. The structure learning process detected these relationships quite well.

    Figure 9. DAG based on HillClimbSearch and the BIC scoring function. All continuous variables are removed. The edges show the -log10(P-values) determined using the chi-square test. The image is created using bnlearn. Image by the author.
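
    Because the search is heuristic, it can be worth checking how stable the learned structure is. A minimal sketch (my own addition, assuming the scoretype='k2' option is available in your bnlearn version) re-runs the search with a different scoring function and compares the adjacency matrices; model['adjmat'] is the same adjacency-matrix field used later in this blog.

    # Re-run structure learning with an alternative scoring function (assumed available)
    model_k2 = bn.structure_learning.fit(df, methodtype='hc', scoretype='k2')
    
    # The adjacency matrix stores the learned edges as a boolean DataFrame
    edges_bic = model['adjmat']
    edges_k2 = model_k2['adjmat']
    
    # Fraction of source->target pairs on which the two structures agree
    agreement = (edges_bic == edges_k2).values.mean()
    print(f'Edge agreement between BIC and K2 structures: {agreement:.0%}')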

    Given the dataset and the DAG, we can estimate the (conditional) probability distributions of the individual variables using parameter learning. The bnlearn library supports parameter learning for discrete and continuous nodes:

    # Parameter learning
    model = bn.parameter_learning.fit(model, df, methodtype='bayes')
    
    # [bnlearn] >Parameter learning> Computing parameters using [bayes]
    # [bnlearn] >Converting [<class 'pgmpy.base.DAG.DAG'>] to BayesianNetwork model.
    # [bnlearn] >Converting adjmat to BayesianNetwork.
    
    # [bnlearn] >CPD of TWF:
    +--------+-----------+
    | TWF(0) | 0.950364  |
    +--------+-----------+
    | TWF(1) | 0.0496364 |
    +--------+-----------+
    
    # [bnlearn] >CPD of Machine failure:
    +--------------------+-----+--------+--------+--------+
    | HDF                | ... | HDF(1) | HDF(1) | HDF(1) |
    +--------------------+-----+--------+--------+--------+
    | OSF                | ... | OSF(1) | OSF(1) | OSF(1) |
    +--------------------+-----+--------+--------+--------+
    | PWF                | ... | PWF(0) | PWF(1) | PWF(1) |
    +--------------------+-----+--------+--------+--------+
    | TWF                | ... | TWF(1) | TWF(0) | TWF(1) |
    +--------------------+-----+--------+--------+--------+
    | Machine failure(0) | ... | 0.5    | 0.5    | 0.5    |
    +--------------------+-----+--------+--------+--------+
    | Machine failure(1) | ... | 0.5    | 0.5    | 0.5    |
    +--------------------+-----+--------+--------+--------+
    
    # [bnlearn] >CPD of HDF:
    +--------+---------------------+--------------------+
    | OSF    | OSF(0)              | OSF(1)             |
    +--------+---------------------+--------------------+
    | HDF(0) | 0.9654874062680254  | 0.5719063545150501 |
    +--------+---------------------+--------------------+
    | HDF(1) | 0.03451259373197462 | 0.4280936454849498 |
    +--------+---------------------+--------------------+
    
    # [bnlearn] >CPD of PWF:
    +--------+-----------+
    | PWF(0) | 0.945909  |
    +--------+-----------+
    | PWF(1) | 0.0540909 |
    +--------+-----------+
    
    # [bnlearn] >CPD of OSF:
    +--------+---------------------+--------------------+
    | PWF    | PWF(0)              | PWF(1)             |
    +--------+---------------------+--------------------+
    | OSF(0) | 0.9677078327727054  | 0.5596638655462185 |
    +--------+---------------------+--------------------+
    | OSF(1) | 0.03229216722729457 | 0.4403361344537815 |
    +--------+---------------------+--------------------+
    
    # [bnlearn] >CPD of Type:
    +---------+---------------------+---------------------+
    | OSF     | OSF(0)              | OSF(1)              |
    +---------+---------------------+---------------------+
    | Type(H) | 0.11225405370762033 | 0.28205128205128205 |
    +---------+---------------------+---------------------+
    | Type(L) | 0.5844709350765879  | 0.42419175027870676 |
    +---------+---------------------+---------------------+
    | Type(M) | 0.3032750112157918  | 0.29375696767001114 |
    +---------+---------------------+---------------------+

    Generate Synthetic Data.

    At this point, we have our learned structure in the form of a DAG, and the estimated parameters in the form of CPTs. This means that we have captured the system in a probabilistic graphical model, which can now be used to generate synthetic data. We can now use the bn.sampling() function (see the code block below) and generate, for example, 100 samples. The output is a full dataset with all dependent variables.

    # Generate synthetic data
    X = bn.sampling(model, n=100, methodtype='bayes')
    
    print(X)
    +-----+------------------+-----+-----+-----+------+
    | TWF | Machine failure  | HDF | PWF | OSF | Type |
    +-----+------------------+-----+-----+-----+------+
    |  0  |        1         |  1  |  1  |  1  |  L   |
    |  0  |        0         |  0  |  0  |  0  |  L   |
    |  0  |        0         |  0  |  0  |  0  |  L   |
    |  0  |        0         |  0  |  0  |  0  |  M   |
    |  0  |        0         |  0  |  0  |  0  |  M   |
    |  .. |        ..        |  .. |  .. |  .. |  ..  |
    |  0  |        0         |  0  |  0  |  0  |  M   |
    |  0  |        1         |  1  |  0  |  0  |  L   |
    |  0  |        0         |  0  |  0  |  0  |  M   |
    |  0  |        0         |  0  |  0  |  0  |  L   |
    +-----+------------------+-----+-----+-----+------+
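
    A quick, informal way to check whether the synthetic samples behave like the original data (my own addition) is to compare the marginal frequencies of a few variables; with only 100 samples the match will be rough, and larger samples will track the learned CPTs more closely.

    # Compare marginal frequencies of the original and synthetic data
    for col in ['Machine failure', 'Type']:
        print(f'--- {col} ---')
        print('original :', df[col].value_counts(normalize=True).round(3).to_dict())
        print('synthetic:', X[col].value_counts(normalize=True).round(3).to_dict())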

    2. Generate Categorical Synthetic Data That Mimics Expert Knowledge

    The goal in this section is to generate synthetic data that closely mirrors expert knowledge. In other words, there is no dataset at the start, only knowledge about the workings of a system. The difference with section 2 is that we now aim to generate an entire categorical dataset with multiple variables that are dependent on each other. The final Bayesian model can then be used to generate data that mimics the expert's knowledge.

    Before we dive into building knowledge-based systems, note that the steps we need to take are similar to those in the previous section. The difference is that we manually define and draw the causal structure (DAG) and define the parameters (CPTs). Alternatively, if a dataset is available, we can use it to learn the parameters. So there are multiple possibilities to generate data based on expert knowledge. For an in-depth overview, I recommend reading this blog.

    For this use case, we will start without a dataset and define the DAG and CPTs ourselves. I will again use predictive maintenance as the use case. Suppose that experts need to know how machine failures occur, but there are no physical sensors that measure data. An expert can provide us with the following information about the operational activities:

    Machine failures are mainly seen when the process temperature is high or the torque is high. A high torque or tool wear causes overstrain failures (OSF). The process temperature is influenced by the air temperature.

    Define simple one-to-one relationships.

    From this point on, we need to convert the expert's knowledge into a Bayesian model. This can be done systematically by first creating the graph and then defining the CPTs that connect the nodes in the graph.

    A complex system is built by combining simpler parts. This means that we do not need to create or design the whole system at once; we can define the simpler parts first. These are the one-to-one relationships. In this step, we convert the expert's view into relationships. We know from the expert that we can make the following directed one-to-one relationships:

    • Process Temperature → Machine Failure
    • Torque → Machine Failure
    • Torque → Overstrain Failure (OSF)
    • Tool Wear → Overstrain Failure (OSF)
    • Air Temperature → Process Temperature
    • Overstrain Failure (OSF) → Machine Failure

    A DAG is based on one-to-one relationships.

    The directed relationships can now be used to build a graph with nodes and edges. Each node corresponds to a variable, and each edge represents a conditional dependency between a pair of variables. In bnlearn, we can assign and graphically represent the relationships between variables.

    import bnlearn as bn
    
    # Define the causal dependencies based on your expert/domain knowledge.
    # Left is the source node, and right is the target node.
    edges = [('Process Temperature', 'Machine Failure'),
             ('Torque', 'Machine Failure'),
             ('Torque', 'Overstrain Failure (OSF)'),
             ('Tool Wear', 'Overstrain Failure (OSF)'),
             ('Air Temperature', 'Process Temperature'),
             ('Overstrain Failure (OSF)', 'Machine Failure'),
             ]
    
    # Create the DAG
    DAG = bn.make_DAG(edges)
    
    # The DAG is stored in an adjacency matrix
    DAG["adjmat"]
    
    # Plot the DAG (static)
    bn.plot(DAG)
    
    # Plot the DAG
    dotgraph = bn.plot_graphviz(DAG, edge_labels='pvalue')
    dotgraph.view(filename='bnlearn_predictive_maintanance_expert.pdf')

    The resulting DAG is shown in Figure 10. We call this a causal DAG because we have assumed that the edges we encoded represent our causal assumptions about the predictive maintenance system.

    Figure 10: DAG for the predictive maintenance system. (image by author).

    At this point, the DAG does not know the underlying dependencies. In other words, there are no differences in the strength of the relationships between the one-to-one parts; these need to be defined using CPTs. We can check the CPTs with bn.print_CPD(DAG), which will result in the message that no CPD can be printed. We need to add knowledge to the DAG with so-called Conditional Probability Tables (CPTs), and we can rely on the expert's knowledge to fill the CPTs.

    Knowledge can be added to the DAG with Conditional Probability Tables (CPTs).

    Setting up the Conditional Probability Tables.

    The predictive maintenance system is a simple Bayesian network where the child nodes are influenced by the parent nodes. We now need to associate each node with a probability function that takes, as input, a specific set of values for the node's parent variables and gives (as output) the probability of the variable represented by the node. Let's do this for the six nodes.

    CPT: Air Temperature

    The Air Temperature node has two states: low and high, and no parent dependencies. This means we can directly define the prior distribution based on expert assumptions or historical distributions. Suppose that 70% of the time, machines operate under low air temperature and 30% under high. The CPT is as follows:

    from pgmpy.factors.discrete import TabularCPD
    
    cpt_air_temp = TabularCPD(variable='Air Temperature', variable_card=2,
                              values=[[0.7],    # P(Air Temperature = Low)
                                      [0.3]])   # P(Air Temperature = High)

    CPT: Tool Wear

    Tool Wear represents whether the tool is still in a low-wear or high-wear state. It also has no parent dependencies, so its distribution is specified directly. Based on domain knowledge, let's assume the tools are in low wear 80% of the time and in high wear 20% of the time:

    cpt_toolwear = TabularCPD(variable='Tool Wear', variable_card=2,
                              values=[[0.8],    # P(Tool Wear = Low)
                                      [0.2]])   # P(Tool Wear = High)

    CPT: Torque

    Torque is a root node as well, with no dependencies. It reflects the rotational force in the process. Let's assume high torque is relatively rare, occurring only 10% of the time, with 90% of processes running at normal torque:

    cpt_torque = TabularCPD(variable='Torque', variable_card=2,
                            values=[[0.9],     # P(Torque = Normal)
                                    [0.1]])    # P(Torque = High)

    CPT: Process Temperature

    Process Temperature depends on Air Temperature. Higher air temperatures generally lead to higher process temperatures, although there is some variability. The probabilities reflect the following assumptions:

    • If Air Temp is low → 70% chance of low Process Temp, 30% high
    • If Air Temp is high → 20% low, 80% high
    cpt_process_temp = TabularCPD(variable='Process Temperature', variable_card=2,
                                  values=[[0.7, 0.2],     # P(ProcTemp = Low | AirTemp = Low/High)
                                          [0.3, 0.8]],    # P(ProcTemp = High | AirTemp = Low/High)
                                  evidence=['Air Temperature'],
                                  evidence_card=[2])

    CPT: Overstrain Failure (OSF)

    Overstrain Failure (OSF) occurs when either Torque or Tool Wear is high. If both are high, the risk increases. The CPT is structured to reflect:

    • Low Torque & Low Tool Wear → 10% OSF
    • High Torque & High Tool Wear → 90% OSF
    • Mixed combinations → 30-50% OSF
    cpt_osf = TabularCPD(variable='Overstrain Failure (OSF)', variable_card=2,
                         values=[[0.9, 0.5, 0.7, 0.1],  # OSF = No  | Torque, Tool Wear
                                 [0.1, 0.5, 0.3, 0.9]], # OSF = Yes | Torque, Tool Wear
                         evidence=['Torque', 'Tool Wear'],
                         evidence_card=[2, 2])

    CPT: Machine Failure

    The Machine Failure node is the most complicated one because it has the most dependencies: Process Temperature, Torque, and Overstrain Failure (OSF). The risk of failure increases if the Process Temperature is high, the Torque is high, and an OSF occurred. The CPT reflects the additive risk, assigning the highest failure probability when all three are problematic:

    cpt_machine_fail = TabularCPD(variable='Machine Failure', variable_card=2,
                                  # Each column is one combination of the evidence states (last evidence variable varies fastest)
                                  values=[[0.9, 0.7, 0.6, 0.3, 0.8, 0.5, 0.4, 0.2],  # Failure = No
                                          [0.1, 0.3, 0.4, 0.7, 0.2, 0.5, 0.6, 0.8]], # Failure = Yes
                                  evidence=['Process Temperature', 'Torque', 'Overstrain Failure (OSF)'],
                                  evidence_card=[2, 2, 2])

    Update the DAG with the CPTs:

    That's it! At this point, we have defined the strength of the relationships in the DAG with the CPTs. Now we need to connect the DAG with the CPTs. As a sanity check, the CPTs can be examined using the bn.print_CPD() functionality.

    # Update the DAG with the CPTs
    model = bn.make_DAG(DAG, CPD=[cpt_process_temp, cpt_machine_fail, cpt_torque, cpt_osf, cpt_toolwear, cpt_air_temp])
    
    # Print the CPDs (Conditional Probability Distributions)
    bn.print_CPD(model)

    Generate Synthetic Data.

    At this point, we have our manually defined DAG, and we have specified the parameters in the CPTs. This means that we have captured the system in a probabilistic graphical model, which can now be used to generate synthetic data. We can now use the bn.sampling() function (see the code block below) and generate, for example, 100 samples. The output is a full dataset with all dependent variables.

    # Generate synthetic data
    X = bn.sampling(model, n=100, methodtype='bayes')
    
    print(X)
    +---------------------+------------------+--------+----------------------------+----------+---------------------+
    | Process Temperature | Machine Failure  | Torque | Overstrain Failure (OSF)   | Tool Wear| Air Temperature     |
    +---------------------+------------------+--------+----------------------------+----------+---------------------+
    |         1           |        0         |   1    |             0              |    0     |         1           |
    |         0           |        0         |   1    |             1              |    1     |         1           |
    |         1           |        0         |   1    |             0              |    0     |         1           |
    |         1           |        1         |   1    |             1              |    1     |         1           |
    |         0           |        0         |   0    |             0              |    0     |         0           |
    |        ...          |       ...        |  ...   |            ...             |   ...    |        ...          |
    |         0           |        0         |   1    |             1              |    1     |         0           |
    |         1           |        1         |   1    |             1              |    1     |         0           |
    |         0           |        0         |   0    |             0              |    1     |         0           |
    |         1           |        1         |   1    |             1              |    1     |         0           |
    |         1           |        0         |   0    |             0              |    1     |         0           |
    +---------------------+------------------+--------+----------------------------+----------+---------------------+
    

    The bnlearn library

    A few words about the bnlearn library that is used for the analyses. The bnlearn library is designed to tackle the following challenges:

    • Structure learning. Given the data, estimate a DAG that captures the dependencies between the variables.
    • Parameter learning. Given the data and a DAG, estimate the (conditional) probability distributions of the individual variables.
    • Inference. Given the learned model, determine the exact probability values for your queries (see the sketch after this list).
    • Sampling. Given the learned model, we can generate synthetic data.
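
    Inference is the one task not demonstrated above, so here is a minimal sketch (my own addition, assuming the expert-defined model from the previous section is in memory and that evidence can be passed as state indices) that queries the probability of a machine failure given high torque:

    # Query the expert-defined model: P(Machine Failure | Torque = High)
    query = bn.inference.fit(model,
                             variables=['Machine Failure'],
                             evidence={'Torque': 1})
    print(query)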

    What benefits does bnlearn offer over other Bayesian analysis implementations?


    Wrapping up

    Synthetic data enables modeling when real data is unavailable, sensitive, or incomplete. I demonstrated a use case in predictive maintenance, but other fields of interest are, for example, the privacy domain or rare-event modeling in the cybersecurity domain.

    I demonstrated how to create synthetic data using probabilistic models via Probability Density Functions (PDFs) and Bayesian sampling. These two approaches differ fundamentally. PDFs are typically used to generate synthetic data from univariate continuous distributions, assuming that variables are independent of one another. In contrast, Bayesian sampling is suited for categorical data, where we sample from multinomial (or categorical) distributions and, crucially, can model and preserve the dependencies between variables using a Bayesian network. We can thus use univariate sampling for independent continuous features, and Bayesian sampling when modeling variable dependencies is essential.

    While synthetic data offers many advantages, it also comes with important limitations. First, it may not fully capture the complexity and variability of real-world phenomena, which can result in models that fail to generalize when trained solely on synthetic samples. Additionally, synthetic data can inadvertently introduce biases due to incorrect assumptions, oversimplified models, or poorly estimated parameters. It is therefore essential to perform thorough sanity checks and validation to ensure that the generated data aligns with domain expectations and does not mislead downstream analysis. Always compare the distribution, dependency structure, and outcome patterns with real data or expert knowledge.

    Be safe. Stay frosty.

    Cheers, E.


    Software program

    Let's connect!

    References

    1. Gartner, Maverick Research: Forget About Your Real Data - Synthetic Data Is the Future of AI, Leinar Ramos, Jitendra Subramanyam, 24 June 2021.
    2. E. Taskesen, distfit Python library, How to Find the Best Theoretical Distribution for Your Data.
    3. AI4I 2020 Predictive Maintenance Dataset. (2020). UCI Machine Learning Repository. Licensed under a Creative Commons Attribution 4.0 International (CC BY 4.0) license.
    4. E. Taskesen, bnlearn for Python library. An Extensive Starter Guide For Causal Discovery Using Bayesian Modeling.


