Time Series Forecasting: how to choose your algorithm?

Moving Average vs. Exponential Smoothing? What about ARIMA? Is it better to have one algorithm or many at your disposal so that you can switch if need be?

The answer to these questions will vary based on the specificities of the signals you are trying to forecast. There is no universal answer; however, we will try to give you some best practices to help you make this call.

One size does not fit all

That may sound obvious, but if you are trying to improve your Forecast Accuracy you need to understand each signal’s specificities and forecast them accordingly. If you have a Seasonal signal, you will need a Seasonal algorithm; if you have some clear trends, you will need an algorithm with trends… etc.

Therefore good forecasters do not have just one method at their disposal but many, and will choose the best one for each signal based on its specificities. Now the next question is: how to select this algorithm?

“Natural choice” vs. “Black Box”

Most Forecasting software performs an automatic selection of the forecasting algorithm using the “Best in Class” approach. As they well understand that one size does not fit all, they use their computing power to calculate forecasts on each signal with many different algorithms in order to select the one that works best on each signal. The “best” algorithm is usually selected through “Backtesting”, i.e. forecasting the last few data points using the previous ones and selecting the algorithm that minimizes the forecast error on these last few data points.
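To make the idea concrete, here is a minimal sketch in R of the kind of backtest described above, assuming the forecast package and using the built-in AirPassengers series as a stand-in for one of your own signals. The list of candidate algorithms and the MAPE criterion are illustrative assumptions, not what any particular software actually runs.

library(forecast)

y     <- AirPassengers                      # stand-in for one of your monthly signals
train <- window(y, end = c(1958, 12))       # history used to fit each candidate
test  <- window(y, start = c(1959, 1))      # last data points held out for backtesting

candidates <- list(
  naive = naive(train, h = length(test)),
  ses   = ses(train,   h = length(test)),
  holt  = holt(train,  h = length(test)),
  hw    = hw(train,    h = length(test), seasonal = "multiplicative")
)

# "Best in Class": keep the algorithm with the lowest error on the held-out points
errors <- sapply(candidates, function(f) accuracy(f, test)["Test set", "MAPE"])
names(which.min(errors))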

That might sound like a great idea; however, it can be a very risky strategy, because different forecasting algorithms are likely to give very different forecasts. Switching algorithms back and forth is likely to generate high variability in your forecast from release to release, affecting your credibility as a forecaster due to this lack of consistency, if not generating some serious business problems down the line. If you are Producing according to your forecast, for example, you might be familiar with the bullwhip effect, and know that large variations in your forecast will likely generate Capacity and Inventory issues.

Another drawback of this approach is that it tends to treat forecasting as a “Black Box”, which makes it difficult to explain which forecasting algorithm has been used. A Forecaster’s job is less about producing the forecast itself than about convincing peers and Management that this is the one they should base their decisions upon. A forecast not validated by your Management or peers is of little value, and when challenged about how you built it, the last thing you want to have as your only answers is sentences such as: “the computer said…” or “I have no idea how it was calculated, the machine did it for me”.

Finally, do not get fooled by the nice “Best In Class” labelling, and take some time to understand what is really behind it. The so-called “best” algorithm is generally the one that minimizes the lag-1 error, which means that you will be picking the algorithm which, based on N data points, has best forecasted the (N+1)th data point. The thing is, it is not so hard to forecast the next data point accurately. Many algorithms, including “naïve” forecasts (i.e. a simple forecast equal to the last signal value), can be quite accurate for this. The risk of an “unnatural” methodology being selected (such as a non-Seasonal algorithm whereas the signal is clearly Seasonal… etc.) is also high. Such a winner of the “best in class” competition might have been the best at minimizing your past lag-1 error, but is this the algorithm you want to use to forecast 12, 50 or 100 data points ahead and convince your management to use? Probably not.

We strongly believe that it is best to drop a few percentage points of Forecast Accuracy in exchange for higher Forecast consistency and stability, as well as clarity about which forecast methodology has been used. For this reason we recommend a more “natural” approach to selecting the forecasting algorithm: first analyse the signal. Does it exhibit a clear trend? Pick a method with trend. Does it exhibit clear seasonality? Pick a method with seasonality. Do you want the trend to be linear or damped? Pick the right type of trend. Try not to leave these obvious choices to a computer’s “Black Box”, which could generate variability or “strange” forecasts that will later be difficult to explain.

That does not mean that automatic optimization is necessarily bad. It is actually quite useful for figuring out the best value of the parameters (for example if you are using an Exponential Smoothing algorithm), or for picking the best algorithm among a few “natural” ones. However, we recommend that you understand carefully how the automatic optimization works, and that you narrow the “competition” to “natural” algorithms and ranges of parameters that are likely to return a forecast that makes sense to you and to your team.
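As a minimal sketch of this “guided” optimization, assuming the forecast package in R and a signal you have already identified as trended and seasonal, you can let the computer optimize the smoothing parameters while you impose the “natural” model structure yourself. The model code and the AirPassengers series used here are illustrative assumptions.

library(forecast)

y <- AirPassengers                       # stand-in for your own monthly, trended, seasonal signal

# Impose an additive-trend, multiplicative-season exponential smoothing model ("MAM")
# instead of letting ets() search over every possible structure ("ZZZ")
fit <- ets(y, model = "MAM", damped = FALSE)

summary(fit)                             # the optimized smoothing parameters (alpha, beta, gamma)
forecast(fit, h = 12)                    # 12 months ahead, with the structure you chose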

Seasonality on the side please!

Seasonality is one of the most difficult components of your signal to forecast. Get it wrong and your forecast accuracy will be seriously damaged.

There are essentially two types of forecasting algorithms: some that deal with Seasonality automatically (Holt-Winters for example), and some that do not incorporate Seasonality (Moving Average, Exponential Smoothing, ARIMA…). You can still use the latter for seasonal signals; it just means that you will have to deal with Seasonality on the side, by “de-seasonalizing” your signal first (i.e. removing the seasonality inside it), and “re-seasonalizing” your signal after you have forecasted:

  1. For a Signal X(t), calculate the Seasonality coefficient S(t)
  2. Consider the de-seasonalized signal Y(t) = X(t) / S(t), and forecast Y(t+T) using the forecasting methodology of your choice (Exponential Smoothing, Moving Average, ARIMA… you name it)
  3. Re-seasonalize using the same Seasonality coefficient: X(t+T) = S(t+T) * Y(t+T)
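As a minimal sketch of this recipe in R, assuming the forecast package and AirPassengers as a stand-in for X(t). The choice of decompose() to get S(t) and of Holt’s method for the non-seasonal forecasting step are illustrative assumptions.

library(forecast)

x   <- AirPassengers                          # stand-in for X(t)
dec <- decompose(x, type = "multiplicative")
S   <- dec$seasonal                           # S(t): one coefficient per month

y  <- x / S                                   # Y(t) = X(t) / S(t), de-seasonalized
fy <- holt(y, h = 12)                         # forecast Y(t+T) with a non-seasonal method

# Re-seasonalize: X(t+T) = S(t+T) * Y(t+T), reusing the last full year of coefficients
# (the series ends in December, so they align with the 12 forecasted months)
S_future <- as.numeric(tail(S, 12))
x_fc     <- as.numeric(fy$mean) * S_future
x_fc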

We recommend this latter approach. Similarly to “guiding” the computer by pre-selecting some algorithms and ranges of parameters, “guiding” the algorithm on the seasonality side is one of the best ways to avoid large pitfalls and to ensure good forecast accuracy. When selecting the seasonality coefficients to be used on a particular Item, you have essentially two options:

  • Item Level seasonality: in that case you will calculate the seasonality based on the historical signal for this particular item. This approach is to be used for items with a specific and distinctive seasonality
  • Aggregated seasonality: in that case you will use the seasonality of a bigger aggregate to which the item belongs. For example, if I am a retailer trying to forecast the sales of a new red sweater with little historical data, I could use the seasonality of all the other red sweaters and assign it to the new red sweater. This approach is to be used for items with no distinctive seasonality pattern, or with not enough history to derive a specific pattern from it (see the sketch below)
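Here is a hypothetical sketch of the aggregated option in R, assuming a matrix red_sweaters of monthly sales for the existing red sweaters. The simulated data, the winter-peaking pattern and the flat baseline for the new item are illustrative assumptions.

set.seed(1)
monthly_pattern <- c(20, 15, 10, 8, 5, 4, 4, 6, 10, 15, 25, 40)        # assumed winter-peaking sales pattern
red_sweaters <- matrix(rpois(36 * 5, lambda = rep(monthly_pattern, 15)),
                       nrow = 36, ncol = 5)                             # 3 years x 5 existing items

aggregate_signal <- ts(rowSums(red_sweaters), frequency = 12)
S <- decompose(aggregate_signal, type = "multiplicative")$figure        # 12 monthly coefficients

# Apply the aggregate's seasonality to the new item, on top of an assumed flat baseline
baseline_new_item <- 12                                                 # assumed average monthly sales
round(baseline_new_item * S, 1)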

Again, there is nothing wrong with automatic seasonality algorithms such as Holt-Winters. Feel free to use them if they have demonstrated a good and consistent forecast accuracy. However, if you are suffering from low forecast accuracy, a good first step towards improving it is to perform the seasonality calculation on the side.

Is aggregating data good?

There is a general belief that “Forecasts at an Aggregated Level are more accurate than Forecasts performed at a Lower level”. For this reason, some forecasting software will propose algorithms that forecast at an aggregated level, aggregating items with similar properties (same line, same color… you define it).

That might indeed result in a quite accurate forecast on this aggregate, but then what? How do you get back to your Forecast at item level? The answer is generally simple: you will have to split it. For example, if you forecast the sales of some Shoes of different sizes, it could make sense to aggregate the sales of all the sizes together to get a smoother signal, forecast that, and then break your forecast down at size level using some kind of split rule. Despite a good forecast on the aggregate, mess up the split rule and your forecast accuracy at item level will be damaged.
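A minimal sketch of this aggregate-then-split approach in R, assuming the forecast package and a simulated matrix sales_by_size with one column per shoe size. The historical-share split rule is just one possible assumption.

library(forecast)

set.seed(7)
sales_by_size <- sapply(c(5, 15, 30, 25, 10), function(m) rpois(36, m))   # 3 years x 5 sizes
colnames(sales_by_size) <- c("38", "39", "40", "41", "42")

# 1. Forecast the smoother, aggregated signal
total    <- ts(rowSums(sales_by_size), frequency = 12)
fc_total <- forecast(ets(total), h = 12)$mean

# 2. Split it back to size level using each size's historical share of the total
split_rule <- colSums(sales_by_size) / sum(sales_by_size)
fc_by_size <- outer(as.numeric(fc_total), split_rule)                     # 12 months x 5 sizes
round(fc_by_size[1, ], 1)                                                 # first forecasted month, by size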

In our experience, if you select your forecasting algorithm well, forecasting at aggregate level and splitting, or forecasting directly at Item level, makes little difference to your Item level forecast accuracy. Actually, too much aggregation can make you miss some important patterns that you would not have missed by forecasting at item level.

Let’s imagine that you forecast Shoes and aggregate the sales of a model which comes in Black and in White. The two colors have very different seasonality patterns: winter for Black, summer for White. Now let’s assume the Black model sells much more than the White (10 times more on average). The risk here is that your aggregate signal will be very close to the Black color signal. You will forecast it, and then when you split you are likely to apply a Black / Winter seasonality pattern to your White shoes, something you would likely not have done if forecasting the colors separately.

That does not make the aggregating / splitting approach irrelevant. This approach can be quite useful if you have to forecast many items and do not have the resources to look at all of them individually. But be careful to apply this method to aggregates that “make sense” (for example, the sales of all the sizes of the Black shoe sounds fine), and avoid mixing signals with different seasonalities, or cases where the split rule to go back to Item level is not obvious.

Keep it simple

The French have a saying: “Le Mieux est l’ennemi du Bien”, which literally translates into “The Better is the enemy of the Good”, and could be interpreted as: past a certain point, adding complexity will do more Damage than Good.

The complexity you introduce in your algorithm should be just enough to capture the main specificities of the signal you are trying to forecast. You do not want to introduce so much complexity that you lose the big picture or a clear understanding of how your algorithm works.

My teacher once said: “What is easy to conceive should be easy to explain”. Never forget that a Forecaster’s job is less to build the forecast itself than to be able to explain how it was built, in simple terms, to non Forecasting specialists, in order to convince them that it is the correct path towards decision making. Therefore “technical models” which are hard for non-specialists to comprehend, such as ARIMA, are not recommended unless you can demonstrate their added value.

Bottom Line: analyse your signal well and introduce the complexity you need to take its main specificities into consideration, but keep your forecasting methodology as simple and as easy to explain to non specialists as you possibly can.

AnalystMaster

 

6 tips to improve your Forecast Accuracy

Measure your Forecast Accuracy

There is a saying: “You can not improve what you can not measure”. By definition, if you want to improve your Forecast Accuracy, your first step is to measure it, in order to understand what you are doing well and where your areas for improvement are.

There are plenty of formulas to measure Forecast Accuracy and we will not go into the details of all of them here. However, no matter which formula you pick, it is very important that it comes in two versions, in order to measure both:

  • The Bias: this measures whether you consistently Over Forecast or Under Forecast, i.e. whether you are an optimistic forecaster or a pessimistic one. An accurate forecast should have a Bias as close as possible to zero. In an aggregated Bias Forecast Error calculation, over and under forecasts compensate each other, so you will only exhibit a positive Bias Error at aggregated level if you consistently Over Forecast (if you are consistently Optimistic), and a negative Bias Error if you are consistently Pessimistic. Understanding this will be a great step towards improving your Forecast Accuracy
  • The Absolute Error: this is the average error that you make, whether over forecasting or under forecasting. If you measure the aggregated Absolute Forecast Error, over and under forecasts do not compensate each other. This error will quite likely not be zero, and your target will be to keep it as low as possible

Bias Forecast Error

Bias_Error = ∑(Forecast - Sales) / ∑Sales

Absolute Forecast Error

Abs_Error = ∑ABS(Forecast-Sales) / ∑Sales
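As a minimal sketch of these two measures in R, assuming two numeric vectors forecast_qty and sales_qty with one entry per item and period (the numbers below are made up for illustration):

forecast_qty <- c(120, 80, 95, 150)
sales_qty    <- c(100, 90, 100, 140)

bias_error <- sum(forecast_qty - sales_qty) / sum(sales_qty)        # signed: over and under forecasts cancel out
abs_error  <- sum(abs(forecast_qty - sales_qty)) / sum(sales_qty)   # unsigned: they do not cancel out

round(c(bias = bias_error, absolute = abs_error), 3)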

 

Measure and keep track of these errors over time, in order to analyse where you have the highest Bias and Absolute Error and find a way to address it.

Invest your resources wisely

Once you have found out which areas you need to improve most, invest your resources wisely. The Pareto Law states that 80% or so of your Forecast Error will be driven by 20% or so of the Items, so focus most of your resources on that 20%, which will drive the biggest reduction.
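A hypothetical sketch of this Pareto view in R, assuming a data frame with one row per item and columns item, forecast and sales (the numbers are made up for illustration):

df <- data.frame(item     = paste0("SKU", 1:10),
                 forecast = c(500, 80, 60, 900, 40, 30, 25, 20, 700, 15),
                 sales    = c(300, 85, 55, 650, 42, 28, 27, 18, 500, 16))

df$abs_err   <- abs(df$forecast - df$sales)
df           <- df[order(-df$abs_err), ]
df$cum_share <- cumsum(df$abs_err) / sum(df$abs_err)

# The few items at the top of this list drive most of the total absolute error:
# these are the ones to focus your resources on first
head(df[, c("item", "abs_err", "cum_share")])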

Use Statistics

One of the best ways to get an objective and unbiased forecast is to use a Statistical algorithm that forecasts from Historical data. Statistical algorithms are usually less biased than humans and can detect patterns such as trends and seasonality in a relatively objective way.

Collaborate with peers

Using statistics does not mean falling into the pitfall of an “all-statistics” forecasting approach. In practice, Forecast Accuracy is maximized when you combine Statistical Forecasting with some Qualitative and Collaborative inputs:

  • Qualitative Feedback: getting qualitative information about trends, future events already planned… etc. is the best way to understand your historical signal, to clean it from outliers if needed, to anticipate the future, and to check that your Statistical Algorithms are going in the right direction
  • Hierarchical checks: if you forecast at Item level, you might want to check that your forecast makes sense at an aggregated level. In order to do this you might need to convert your forecast in Units into other “languages”, such as EUR or USD Revenue, in order to match a Budget or discuss the potential by Line with your Finance and Marketing teams, who will generally talk in Currency. Converting your Forecast in Units into a Forecast in Weight or in Hours of Production is also a good way to speak the same language as your Warehouse or Production Plant, and to do high-level checks that it makes sense with regard to their capacity / budget.

Look at Sell-Out not Sell-In

If you are selling a product to a Third Party (say a Distributor) who will eventually resell it to a Shop and then to the Final client, you are in a Sell-In / Sell-Out configuration.

You might only be interested in forecasting your sales to this Distributor (your Sell-In); however, it is worth looking a little further down the Selling Chain. Indeed, if there is a big slowdown in the sales from the Shop to the Final clients, the Shop will likely be full of non-rotating inventory, and will likely not order more from the Distributor. If this trend is generalised to all the shops who usually purchase from this Distributor, the Distributor himself will be overstocked and will stop ordering from you. Therefore your Sell-In will soon start to drop. Similarly, if Final Customers go crazy about your product, shops and soon your Distributor will run out of Stock, and your Sell-In is likely to pick up strongly as well.

Understanding the behaviour at the end of the Selling Chain, the Sell-Out, is of utmost importance if you want to better anticipate future selling trends. This can be achieved through strong collaboration with the Final Selling Points, for example by sharing Point of Sale data, or more simply through Qualitative Feedback.

Smooth Demand

Last week I lost a Poker game to one of my friends, after he called my bluff. When I asked him how he knew I did not have the King needed to win that hand, he just told me, a little embarrassed: “I must admit I saw your cards when you looked at them, you should really be more careful next time”.

Apart from the fact that I am a poor Poker player, or that I should really reconsider my friendships, this tells us something extremely important when it comes to Forecast Accuracy. My (so-called) friend’s Forecast was Accurate not because he is a Forecasting genius, but precisely because he was NOT forecasting anything… he already knew the outcome!

Try to apply this to the Business World. By definition, Forecasts are always wrong. However, the more you can get to know your actual Signal upfront, with certainty, the less you have to Forecast it, and the more Accurate you are!

This might be difficult or close to impossible if you are at the end of the Selling Chain and your sales depend on the random behaviour of final customers. However, it is much less impossible if you are at the beginning or at an intermediate level of the Selling Chain, where you could ask, or even agree with, your customer about what he intends to purchase from you in the near future. He might be OK to share some Forecasts with you, to place his Orders upfront, to anticipate or postpone them at your request, or to let you manage his Inventory directly (VMI, or Vendor Managed Inventory). Reaching that level of Collaboration with your Customer will make it much easier for you to foresee what will happen next and to improve your Forecast Accuracy.

And if you are at the end of the Selling Chain, that does not mean that you cannot Smooth Demand either. Of course you will not be able to ask random customers that you have never met to postpone or anticipate their purchases or to give you advance notice about them. However, knowing that Marketing and Advertising techniques work, you might be able to redirect your Promotional activities according to Sales Patterns in order to smooth demand. If your Sales of Item X are so strong that you cannot keep up with Supply, there is probably little purpose in continuing to dedicate heavy Promotional budget and resources to Item X. Use these resources on Item Y, where sales are slower, instead. You will still sell Item X, but you might avoid running out of Stock by no longer pushing it so hard, and your Sales of Item Y, which were slow, might pick up, reducing your risk of Inventory Obsolescence.

Price Changes are also a great option to achieve similar results. By increasing the price of Item X and decreasing the price of Item Y, your Unit Sales of Item X should decrease and your Unit Sales of Item Y should increase.

Using these tools, again through strong Internal and External Collaboration, will help you drastically increase your Forecast Accuracy.

AnalystMaster

Time Series Forecasting: Don’t forget Seasonality!

Why is Seasonality important?

Seasonality is in everything we do; we even take it into account unconsciously, for example when we leave for work early to beat morning traffic, or when we book our Summer holidays early to avoid peak prices.

If you are running a Winter attire business, you might only sell a few pieces during summer, but your sales might boom when the weather gets colder, requiring additional resources such as Inventory, Staff Availability… etc.

Seasonality is therefore a very important component of Planning and especially in Forecasting.

How to calculate Seasonality

Seasonality is usually calculated using the Time Series Decomposition Method.

This method assumes the Signal can be broken down into three components:

  • The Trend: is your signal flat, increasing, or decreasing?
  • The Seasonality: does your signal show peaks and drops at specific time periods (for example a peak of Sales for Christmas in December)?
  • The Noise: this is the part of the signal that cannot be explained. If the Signal is well decomposed, the Noise component should be a process with a mean of zero (in the additive case; of one in the multiplicative case)
  • Sometimes a Cycle component is also added. We will assume there is no Cycle going forward

 Multiplicative Time Series Decomposition

Signal(t) = Trend(t) * Seasonality(t) * Noise(t)

Additive Time Series Decomposition

Signal(t) = Trend(t) + Seasonality(t) + Noise(t)

At AnalystMaster we generally prefer to consider Time Series as Multiplicative (and that is what we will use going forward). In that case, the Seasonality coefficient for each time period can be seen as a weight, and the sum of the Seasonality coefficients over a full cycle is equal to 1, or 100%.
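As a small sketch of this convention in R: the coefficients returned by decompose() average to 1 by construction, so you can rescale them into weights that sum to 100% (AirPassengers is used here as a stand-in for your own monthly signal).

signal <- AirPassengers
coefs  <- decompose(signal, type = "multiplicative")$figure   # 12 coefficients, average of about 1

weights <- coefs / sum(coefs)                                 # rescaled so that they sum to 1 (100%)
round(weights, 3)
sum(weights)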

 Calculate Seasonality in Excel

You can also use the attached Excel Model to calculate your Monthly Seasonality in Excel.

This model works with 24 months of Historical data

  1. It first evaluates the Trend using a Centred Moving Average (only possible from time bucket 6 to 18), and extrapolates this trend linearly for time buckets 1 to 24.
  2. The trend is then removed (we divide the original signal by it, as we consider the Time Series as Multiplicative), leaving Seasonality and Noise as the only components of the Time Series.
  3. Finally, the Seasonality coefficients from the two 12-month periods are averaged.
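For readers who prefer R to Excel, here is a minimal sketch of the same three steps, assuming a plain numeric vector of 24 monthly values (the first 24 months of the built-in AirPassengers data are used as a stand-in for your own history). The decompose() shortcut shown in the next section is usually simpler.

x <- as.numeric(AirPassengers)[1:24]   # 24 months of history
t <- 1:24

# 1. Trend: centred 12-month moving average, then extrapolated linearly to all 24 buckets
cma       <- stats::filter(x, filter = c(0.5, rep(1, 11), 0.5) / 12, sides = 2)
trend_fit <- lm(as.numeric(cma) ~ t)
trend     <- predict(trend_fit, newdata = data.frame(t = t))

# 2. Remove the trend (multiplicative model: divide), leaving Seasonality and Noise
detrended <- x / trend

# 3. Average the two coefficients available for each month
season <- tapply(detrended, rep(1:12, times = 2), mean)
round(season, 3)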

Calculate Seasonality with R

R is a great tool to calculate the Seasonality of a Time Series. You can use the following piece of code to read a monthly time series from a data.csv file and return the Seasonality coefficients.

R will return the Seasonal, Trend and Random components of the Multiplicative Time Series decomposition.

library(forecast)   # not strictly needed for decompose(), but handy for the forecasting steps that follow

# Read the monthly series from data.csv (the values are assumed to be in the first column)
mydata <- read.csv("data.csv")

# Build a monthly time series object (here: January 2014 to June 2017)
signal <- ts(mydata[, 1], start = c(2014, 1), end = c(2017, 6), frequency = 12)

# Multiplicative decomposition: Seasonal, Trend and Random components
seascoef <- decompose(signal, type = "multiplicative")
seascoef$seasonal

[Figure: Seasonal component returned by the R decomposition]

 The next levels

Including Seasonality in your planning will dramatically improve your planning accuracy. However, you might find that considering Seasonality only at Monthly level is not good enough, and that you also need to include seasonality at a more detailed level to maximize your planning accuracy.

Seasonality also exists at a more detailed level: weekly, daily, hourly… For example:

  • Weekly seasonality: if you are a Retailer, although December is a peak month, not all December weeks are equal. The seasonality is much stronger in the last week before Christmas, and failing to anticipate this can result in shortages of capacity
  • Daily seasonality: if you own a shop or a restaurant, Seasonality is usually stronger on some days of the week, Saturday for example (see the sketch below)
  • Hourly seasonality: peak times also vary hour by hour. If you are running a call centre you need to plan your capacity accordingly. Or if you are going shopping at Harrods, you might want to go when the store is less crowded, according to the chart below which is available on Google

 

[Chart: Harrods popular visiting times by hour of the day, via Google]
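As a minimal sketch of estimating a day-of-week seasonality in R, assuming a vector of daily shop sales (simulated here, with an assumed Saturday peak):

set.seed(3)
weekday_effect <- c(0.8, 0.7, 0.8, 0.9, 1.1, 1.6, 1.1)            # Mon..Sun, with a Saturday peak
daily_sales    <- rpois(7 * 20, lambda = 100 * rep(weekday_effect, 20))

signal <- ts(daily_sales, frequency = 7)                           # a weekly cycle instead of a yearly one
round(decompose(signal, type = "multiplicative")$figure, 2)        # one coefficient per day of the week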

Sometimes Seasonality can even be more complex as it does not necessarily follow a regular Month-Week-Day-Hour pattern.

A well-known example for Retailers is Chinese New Year, which follows the Chinese lunar calendar and will therefore fall in a different week and month every year. Easter and Ramadan are also moving holidays whose seasonality can be hard to evaluate.


How far should I go?

Seasonality is important and should be included in your forecasting activities. However, you need to keep it at a level which is both relevant for your activity and simple enough to implement. For example, if you plan Production at a Monthly level, keep your signal at a Monthly level and evaluate Seasonality at this level too. If you are planning Warehouse Capacity at a Weekly level, then get your Signal and Seasonality at Weekly level. And if you are planning how to staff a Call Centre or a Shop on an Hourly basis, then plan and measure Seasonality hour by hour. But do not introduce unnecessary complexity by getting a signal at hourly level if you only need a monthly plan.

AnalystMaster