
    Time Series Forecasting Made Simple (Part 4.1): Understanding Stationarity in a Time Series

By ProfitlyAI · August 27, 2025


So far, we have discussed different decomposition methods and baseline models. Now, we move on to time series forecasting models like ARIMA, SARIMA, etc.

But these forecasting models require the data to be stationary. So first, we will discuss what stationarity in a time series actually is, why it is required, and how it is achieved.


Perhaps most of you have already read a lot about stationarity in a time series through blogs, books, etc., as there are plenty of resources available to learn about it.

At first, I thought I would explain what stationarity in a time series is when I discuss forecasting models like ARIMA.

But when I first learned about this topic, my understanding didn't go much beyond constant mean or variance, strong or weak stationarity, and tests to check for stationarity.

Something always felt missing; I was unable to understand a few things about stationarity.

So, I decided to write a separate article on this topic to explain what I learned in response to my own questions and doubts about stationarity.

I have simply tried to write about stationarity in time series in a more intuitive way, and I hope you will get a fresh perspective on this topic beyond the methods and statistical tests.


We call a time series stationary when it has a constant mean, constant variance, and a constant autocovariance (or constant autocorrelation) structure.

Let's discuss each property.

What do we mean by a Constant Mean?

For example, consider a time series of sales data for 5 years. If we calculate the average sales of each year, the values should be roughly the same; if the averages differ significantly from year to year, then there is no constant mean and the time series is not stationary.

Image by Author
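As a quick illustration (a minimal sketch with made-up data, not the code from this blog's GitHub repo), the yearly averages can be compared in pandas:

```python
import pandas as pd

# Hypothetical monthly sales series over 5 years (illustrative data only).
idx = pd.date_range("2019-01-01", periods=60, freq="MS")
sales = pd.Series(range(100, 160), index=idx, dtype="float64")

# Average sales per year: roughly equal values are consistent with a
# constant mean; large year-to-year differences suggest it is not constant.
print(sales.groupby(sales.index.year).mean())
```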

The next property of a stationary time series is Constant Variance.

If the spread of the data is the same throughout the series, then it is said to have constant variance.

In other words, if the time series goes up and down by similar amounts throughout the series, then it is said to have constant variance.

But if the ups and downs start small and then become larger later, then there is no constant variance.

Image by Author
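Continuing the same sketch, a rolling standard deviation gives a quick read on whether the spread stays stable:

```python
# 12-month rolling standard deviation: a roughly flat line suggests
# constant variance; a rising or falling line suggests it is changing.
rolling_std = sales.rolling(window=12).std()
print(rolling_std.dropna().head())
```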

The third property of a stationary time series is Constant Autocovariance (or Autocorrelation).

If the relationship between values depends only on the gap between them, regardless of when they occur, then there is constant autocovariance.

For example, suppose you have written a blog and tracked its views for 50 days, and each day's views are closely related to the previous day's views (day 6 views are similar to day 5, and day 37 views are similar to day 36, because they are one day apart).

If this relationship stays the same throughout the entire series, then the autocovariance is constant.

In a stationary time series, the autocorrelation usually decreases as the lag (or distance) increases, because only nearby values are strongly related.

If the autocorrelation stays high at larger lags, it may indicate the presence of trend or seasonality, suggesting non-stationarity.
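In pandas, this decay can be checked directly (still the hypothetical series from the sketches above):

```python
# Autocorrelation at increasing lags. In a stationary series these
# values typically shrink toward zero as the lag grows; values that
# stay high hint at trend or seasonality.
for lag in [1, 6, 12, 24]:
    print(f"lag {lag:>2}: {sales.autocorr(lag=lag):.3f}")
```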

When a time series has all three of these properties, we call it a stationary time series; more precisely, this is second-order stationarity, or weak stationarity.

There are mainly two kinds of stationarity:
1) Strong Stationarity
2) Weak Stationarity

Strong stationarity means the entire distribution of the time series stays the same whenever we observe it: not just the mean and variance, but even the skewness, kurtosis, and overall shape of the distribution.

In the real world this is rare for a time series, so classical forecasting models assume weak stationarity, a more practical condition.

Identifying Stationarity in a Time Series

There are different methods for identifying stationarity in a time series.

To understand these methods, let's consider a retail sales dataset, which we used earlier in this series for STL decomposition.

First is Visual Inspection.

Let's plot this series.
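The exact code used for this blog is in the linked GitHub repo; as a sketch, the series could be fetched from FRED and plotted like this (assuming the pandas-datareader package). From here on, `sales` in the sketches refers to this series.

```python
import matplotlib.pyplot as plt
from pandas_datareader import data as pdr

# Fetch Advance Retail Sales: Department Stores (RSDSELD) from FRED.
sales = pdr.DataReader("RSDSELD", "fred")["RSDSELD"]

sales.plot(title="Advance Retail Sales: Department Stores (RSDSELD)")
plt.xlabel("Date")
plt.ylabel("Sales")
plt.show()
```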

Image by Author

From the above plot, we can observe both trend and seasonality in the time series, which indicates that the mean is not constant. Therefore, we can conclude that this series is non-stationary.

Another method to test stationarity is to divide the time series into two halves and calculate the mean and variance of each half.

If the values are roughly the same, then the series is stationary.

For this time series,

Image by Author

The mean is significantly higher, and the variance is also much larger, in the first half. Since the mean and variance are not constant, this confirms that the time series is non-stationary.
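A sketch of this check, assuming the `sales` series loaded above:

```python
# Split the series into two halves and compare summary statistics.
half = len(sales) // 2
first, second = sales.iloc[:half], sales.iloc[half:]
print(f"mean:     {first.mean():.1f} vs {second.mean():.1f}")
print(f"variance: {first.var():.1f} vs {second.var():.1f}")
```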

We can also identify stationarity in a time series by using the autocorrelation function (ACF) plot.

The ACF plot for this time series:

Image by Author

In the above plot, we can observe that each observation in this time series is correlated with its previous values at different lags.

As discussed earlier, autocorrelation gradually decays to zero in a stationary time series.

But that's not the case here: the autocorrelation is high at several lags (i.e., the observations are highly correlated even when they are farther apart), which suggests the presence of trend and seasonality and confirms that the series is non-stationary.
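A plot like the one above can be produced with statsmodels (a sketch, reusing the `sales` series):

```python
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf

# Slow decay and periodic bumps across many lags are the visual
# signatures of trend and seasonality discussed above.
plot_acf(sales.dropna(), lags=48)
plt.show()
```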

We also have statistical tests to identify stationarity in a time series.

One is the Augmented Dickey-Fuller (ADF) test, and the other is the Kwiatkowski-Phillips-Schmidt-Shin (KPSS) test.

Let's see what we get when we apply these tests to the time series.

Image by Author

Both tests confirm that the time series is non-stationary.
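Both tests are available in statsmodels; here is a minimal sketch (the blog's exact code and output are in the GitHub repo):

```python
from statsmodels.tsa.stattools import adfuller, kpss

series = sales.dropna()

# ADF: the null hypothesis is non-stationarity (a unit root), so a
# large p-value means we cannot reject non-stationarity.
adf_p = adfuller(series)[1]

# KPSS: the null hypothesis is stationarity, so a small p-value
# rejects stationarity.
kpss_p = kpss(series, regression="c", nlags="auto")[1]

print(f"ADF p-value:  {adf_p:.4f}")
print(f"KPSS p-value: {kpss_p:.4f}")
```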

These are the methods we use to identify stationarity in a time series.

Transforming a Non-Stationary Time Series into a Stationary Time Series

We have a technique called 'Differencing' to transform a non-stationary series into a stationary one.

In this method, we subtract the previous value from each value. This way we see how much the series changes from one time step to the next.

Let's consider a sample from the retail sales dataset and then proceed with differencing.

Image by Author

Now we perform differencing; this is called first-order differencing.

This is how differencing is applied across the whole time series to see how the values change over time.
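In pandas this is a one-liner (a sketch, reusing the `sales` series):

```python
# First-order differencing: each value minus the one before it.
diff1 = sales.diff()  # y[t] - y[t-1]; the first value becomes NaN
```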

Before first-order differencing:

Image by Author

After first-order differencing:

Image by Author

Before applying first-order differencing, we can observe a rising trend in the original time series, along with occasional spikes at regular intervals, indicating seasonality.

After differencing, the series fluctuates around zero, which means the trend has been removed.

However, since the seasonal spikes are still present, the next step is to apply seasonal differencing.

In seasonal differencing, we subtract the value from the same season in the previous cycle.

In this time series we have yearly seasonality (12 months), which means:

For January 1993, we calculate Jan 1993 − Jan 1992.

This way we apply seasonal differencing to the whole series.
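On top of the first-order differenced series, this is again a one-liner (a sketch continuing from `diff1` above):

```python
# Seasonal differencing with a 12-month period: each value minus the
# value from the same month one year earlier.
seasonal_diff = diff1.diff(12)  # z[t] - z[t-12]; twelve more NaNs appear
```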

After applying seasonal differencing on the first-order differenced series, we get:

Image by Author

We can observe that the seasonal spikes are gone, and for the year 1992 we get null values because there are no earlier values to subtract.

After first-order differencing and seasonal differencing, the trend and seasonality in the time series are removed.

Now we will again test for stationarity using the ADF and KPSS tests.

Image by Author

We can see that the time series is now stationary.
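The same adfuller and kpss calls from the earlier sketch can simply be rerun on the transformed series, for example:

```python
# Drop the NaNs introduced by differencing before re-testing.
transformed = seasonal_diff.dropna()
print("ADF p-value: ", adfuller(transformed)[1])                            # small -> stationary
print("KPSS p-value:", kpss(transformed, regression="c", nlags="auto")[1])  # large -> stationary
```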


Note: In the final seasonally differenced series, we still observe some spikes around 2020-2022 due to the pandemic (one-time events).

These are called Interventions. They may not violate stationarity, but they can affect model accuracy. Techniques like intervention analysis can be used here.

We will discuss this when we explore ARIMA modeling.


We removed the trend and seasonality in the time series to make it stationary using differencing.

Instead of differencing, we can also use STL decomposition.

Earlier in this series, we discussed that when the trend and seasonal patterns in a time series get messy, we use STL to extract those patterns.

So, we can apply STL decomposition to a time series and extract the residual component, which is what remains after removing the trend and seasonality.
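A sketch of that approach with statsmodels' STL (assuming a 12-month seasonal period):

```python
from statsmodels.tsa.seasonal import STL

# Fit STL and keep the residual, i.e. the series with the trend and
# seasonal components removed.
stl_fit = STL(sales.dropna(), period=12).fit()
residual = stl_fit.resid
```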

We will also discuss 'STL + ARIMA' when we explore the ARIMA forecasting model.


So far, we have discussed methods for identifying stationarity and for transforming a non-stationary time series into a stationary one.

But why do time series forecasting models assume stationarity?

We use time series forecasting models to predict the future based on past values.

These models require a stationary time series because its patterns remain consistent over time, which is what makes prediction possible.

In a non-stationary time series, the mean and variance keep changing, making the patterns unstable and the predictions unreliable.

Aren't trend and seasonality also patterns in a time series?

Trend and seasonality are indeed patterns in a time series, but they violate the assumptions of models like ARIMA, which require stationary input.

Trend and seasonality are handled separately before modeling, and we will discuss this in upcoming blogs.

These time series forecasting models are designed to capture short-term dependencies after removing global patterns.

What exactly are these short-term dependencies?

When we have a time series, we try to decompose it using decomposition methods to understand the trend, seasonality, and residuals in it.

We already know that the trend gives us the overall direction of the data over time (up or down), and seasonality shows the patterns that repeat at regular intervals.

We also get the residual, which is what remains after we remove the trend and seasonality from the time series. This residual is unexplained by trend and seasonality.

Trend gives the overall direction, and seasonality shows patterns that repeat throughout the series.

But there may be some patterns in the residual that are temporary, like a sudden spike in sales due to a promotional event, or a sudden drop in sales due to a strike or weather conditions.

What can models like ARIMA do with this data?

Do models predict future promotional events or strikes based on this data? No.

Most time series forecasting models are used in live production systems across many industries, in real time.

In real-time forecasting systems, as new data comes in, the forecasts are continuously updated to reflect the latest trends and patterns.

Let's take a simple example of cold-drink inventory management.

The store owner knows that cold-drink sales are high in summer and low in winter. But that doesn't help him with daily inventory planning. Here, the short-term dependencies are important.

For example:

• There may be a spike in sales during festivals and wedding season for some time.
• A sudden temperature spike (heat wave) may drive up sales.
• A weekend 1+1 (buy one, get one) offer may boost sales.
• Weekend sales may be high compared to weekdays.
• When the store was out of stock for 2-3 days, there may be a sudden burst of sales the moment stock is back.

These patterns don't repeat consistently like seasonality, and they aren't part of a long-term trend. But they occur often enough that forecasting models can learn from them.

Time series forecasting models don't predict these future events, but they learn the patterns or behavior of the data when such a spike appears.

The model then predicts accordingly: after a spike in sales from a promotional event, sales may gradually return to normal rather than drop suddenly. The models capture these patterns and provide reliable results.

After prediction, the trend and seasonality components are added back to obtain the final forecast.

This is why short-term dependencies are important in time series forecasting.


Dataset: This blog uses publicly available data from FRED (Federal Reserve Economic Data). The series Advance Retail Sales: Department Stores (RSDSELD) is published by the U.S. Census Bureau and can be used for analysis and publication with appropriate citation.

Official citation:
U.S. Census Bureau, Advance Retail Sales: Department Stores [RSDSELD], retrieved from FRED, Federal Reserve Bank of St. Louis; https://fred.stlouisfed.org/series/RSDSELD, July 7, 2025.


Note:
All the visualizations and test results shown in this blog were generated using Python code.
You can find the complete code here: GitHub.

In this blog, I used Python to perform statistical tests and, based on the results, determined whether the time series is stationary or non-stationary.

Next up in this series is a detailed discussion of the statistical tests (ADF and KPSS) used to identify stationarity.

I hope you found this blog intuitive and helpful.

I'd love to hear your thoughts or answer any questions.

Thanks for reading!



