Time Series Forecasting: how to choose your algorithm?

Moving Average vs. Exponential Smoothing? What about ARIMA? Is it better to have one algorithm or many at your disposal so that you can switch if need be?

The answer to these questions will vary based on the specificities of the signals you are trying to forecast. There is no universal answer; however, we will try to give you some best practices to help you make this call.

One size does not fit all

That may sound obvious, but if you are trying to improve your Forecast Accuracy you need to understand each signal’s specificities and forecast it accordingly. If you have a Seasonal signal, you will need a Seasonal algorithm; if you have some clear trends, you will need an algorithm with trend, etc.

Therefore good forecasters do not have just one method at their disposal but many, and will choose the best one for each signal based on its specificities. The next question, then, is how to select this algorithm.

“Natural choice” vs. “Black Box”

Most forecasting software performs an automatic selection of the forecasting algorithm using the “Best in Class” approach. Understanding well that one size does not fit all, it uses its computing power to calculate forecasts on each signal with many different algorithms in order to select the one that works best on each signal. The “best” algorithm is usually selected through “Backtesting”, i.e. forecasting the last few data points using the previous ones and selecting the algorithm that minimizes the forecast error on these last few data points.
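
To make this concrete, here is a minimal sketch of such a “Best in Class” backtesting loop. The candidate forecasters and the signal are purely illustrative:

```python
# A minimal sketch of "Best in Class" selection via backtesting.
import numpy as np

def naive(history, horizon):
    # Naive forecast: repeat the last observed value.
    return np.repeat(history[-1], horizon)

def moving_average(history, horizon, window=3):
    # Moving average: repeat the mean of the last `window` points.
    return np.repeat(history[-window:].mean(), horizon)

def backtest_select(signal, candidates, holdout=3):
    # Forecast the last `holdout` points from the earlier ones and
    # return the candidate with the lowest mean absolute error.
    train, test = signal[:-holdout], signal[-holdout:]
    errors = {name: np.abs(f(train, holdout) - test).mean()
              for name, f in candidates.items()}
    return min(errors, key=errors.get), errors

signal = np.array([10.0, 12, 11, 13, 15, 14, 16, 18, 17, 19])
best, errors = backtest_select(signal, {"naive": naive, "ma3": moving_average})
print(best, errors)
```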

That might sound like a great idea; however, it can be a very risky strategy, because different forecasting algorithms are likely to give very different forecasts. Switching algorithms back and forth is likely to generate high variability in your forecast from release to release, affecting your credibility as a forecaster due to this lack of consistency, if not generating some serious business problems down the line. If you are producing according to your forecast, for example, you might be familiar with the bullwhip effect, and know that large variations in your forecast will likely generate Capacity and Inventory issues.

Another drawback of this approach is that it tends to treat forecasting as a “Black Box”, which makes it difficult to explain which forecasting algorithm has been used. A Forecaster’s job is less about producing the forecast itself than about convincing peers and Management that this is the one they should base their decisions upon. A forecast not validated by your Management or peers is of little value, and when challenged about how you built it, the last thing you want is to have sentences such as “the computer said…” or “I have no idea how it was calculated, the machine did it for me” as your only answers.

Finally, do not be fooled by the nice “Best In Class” labelling; take some time to understand what is really behind it. The so-called “best” algorithm is generally the one that minimizes the lag-1 error, which means that you will be picking the algorithm which, based on N data points, has best forecasted the N+1 data point. The thing is, it is not so hard to accurately forecast the next data point. Many algorithms, including “naïve” forecasts (i.e. a simple forecast equal to the last signal value), can be quite accurate for this. The risk of an “unnatural” methodology being selected (such as a non-seasonal algorithm when the signal is clearly seasonal) is also high. Such a winner of the “best in class” competition might have been the best at minimizing your past lag-1 error, but is this the algorithm you want to use to forecast 12, 50 or 100 data points ahead and convince your management with? Probably not.

We firmly believe that it is best to drop a few percentage points of Forecast Accuracy in exchange for higher forecast consistency and stability, as well as clarity about which forecast methodology has been used. For this reason we recommend a more “natural” approach to selecting the forecasting algorithm: first analyse the signal. Does it exhibit a clear trend? Pick a method with trend. Does it exhibit clear seasonality? Pick a method with seasonality. Do you want the trend to be linear or damped? Pick the right type of trend. Try not to leave these obvious choices to a computer’s “Black Box”, which could generate variability or “strange” forecasts that will later be difficult to explain.
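
A quick way to perform this first analysis is a classical decomposition, which reveals whether a trend or a seasonal pattern is actually there. A minimal sketch, assuming monthly data (the series below is made up):

```python
# A quick look before choosing an algorithm: classical decomposition to
# check whether the signal shows a clear trend and seasonality.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

idx = pd.date_range("2021-01-01", periods=36, freq="MS")
x = pd.Series(100 + np.arange(36) + 15 * np.sin(2 * np.pi * idx.month / 12),
              index=idx)

parts = seasonal_decompose(x, model="additive", period=12)
print(parts.trend.dropna().head())  # is there a clear trend?
print(parts.seasonal.head(12))      # is the seasonal pattern strong?
```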

That does not mean that automatic optimization is necessarily bad. It is actually quite useful for figuring out the best values of the parameters (for example if you are using an Exponential Smoothing algorithm), or for picking the best algorithm among a few “natural” ones. However, we recommend that you understand carefully how the automatic optimization works, and that you narrow the “competition” to “natural” algorithms and ranges of parameters that are likely to return a forecast that makes sense to you and to your team.
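
For instance, here is a minimal sketch of such a narrowed optimization: one natural algorithm (Simple Exponential Smoothing, hand-rolled here) and a restricted, sensible range of smoothing parameters. The data and the range are illustrative:

```python
# Automatic optimization, but narrowed: one algorithm (SES) and a
# restricted range of smoothing parameters.
import numpy as np

def ses_forecast(history, alpha, horizon):
    # Simple Exponential Smoothing: flat forecast at the final level.
    level = history[0]
    for x in history[1:]:
        level = alpha * x + (1 - alpha) * level
    return np.repeat(level, horizon)

def tune_alpha(signal, alphas, holdout=3):
    # Backtest each alpha on the last `holdout` points; keep the best.
    train, test = signal[:-holdout], signal[-holdout:]
    errors = {round(a, 2): np.abs(ses_forecast(train, a, holdout) - test).mean()
              for a in alphas}
    return min(errors, key=errors.get), errors

signal = np.array([10.0, 12, 11, 13, 15, 14, 16, 18, 17, 19])
best_alpha, errors = tune_alpha(signal, alphas=np.arange(0.1, 0.55, 0.05))
print(best_alpha, errors)
```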

Seasonality on the side please!

Seasonality is one of the most difficult components of your signal to forecast. Get it wrong and your forecast accuracy will be seriously damaged.

There are essentially two types of forecasting algorithms: those that deal with Seasonality automatically (Holt-Winters for example), and those that don’t incorporate Seasonality (Moving Average, Exponential Smoothing, ARIMA…). You can still use the latter for seasonal signals; it just means that you will have to deal with Seasonality on the side, by “de-seasonalizing” your signal first (i.e. removing the seasonality inside it), and “re-seasonalizing” it after you have forecasted:

  1. For a signal X(t), calculate the seasonal coefficient S(t)
  2. Consider the de-seasonalized signal Y(t) = X(t) / S(t), and forecast Y(t+T) using the forecasting methodology of your choice (Exponential Smoothing, Moving Average, ARIMA… you name it)
  3. Re-seasonalize using the same seasonality coefficient S(t+T): X(t+T) = S(t+T) * Y(t+T)
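
Here is a minimal sketch of this recipe in Python, with multiplicative monthly coefficients computed by ratio-to-moving-average, and a naive forecaster standing in for “the methodology of your choice” (the data are made up):

```python
# De-seasonalize / forecast / re-seasonalize, on hypothetical monthly data.
import numpy as np
import pandas as pd

idx = pd.date_range("2021-01-01", periods=36, freq="MS")
x = pd.Series(100 + 30 * np.sin(2 * np.pi * idx.month / 12) + 0.5 * np.arange(36),
              index=idx)

# 1. Seasonal coefficient S(t): average ratio of the signal to its
#    centred 12-month moving average, per calendar month.
trend = x.rolling(12, center=True).mean()
s = (x / trend).groupby(x.index.month).mean()

# 2. De-seasonalize, Y(t) = X(t) / S(t), then forecast Y (naive here).
y = x / x.index.month.map(s).to_numpy()
horizon = pd.date_range(idx[-1] + pd.offsets.MonthBegin(), periods=12, freq="MS")
y_fcst = pd.Series(y.iloc[-1], index=horizon)

# 3. Re-seasonalize: X(t+T) = S(t+T) * Y(t+T).
x_fcst = y_fcst * horizon.month.map(s).to_numpy()
print(x_fcst.round(1))
```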

We recommend this latter approach. Similarly to “guiding” the computer by pre-selecting some algorithms and ranges of parameters, “guiding” the algorithm on the seasonality side is one of the best ways to avoid large pitfalls and ensure good forecast accuracy. When selecting the seasonality coefficients to be used on a particular Item, you have essentially two options:

  • Item Level seasonality: in that case you will be calculating the seasonality based on the historical signal for this particular item. This approach is to be used for items with a specific and distinctive seasonality.
  • Aggregated seasonality: in that case you will be using the seasonality of a bigger aggregate to which the item belongs. For example, if I am a retailer trying to forecast the sales of a new red sweater with little historical data, I could calculate the seasonality of all the other red sweaters and assign it to the new one. This approach is to be used for items with no distinctive seasonality pattern, or without enough history to derive a specific pattern (see the sketch after this list).
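
A minimal sketch of the aggregated option, with hypothetical sales figures: compute monthly coefficients on the aggregate, then apply them to the new item’s flat baseline.

```python
# Aggregated seasonality: borrow the pattern of "all red sweaters" for a
# new red sweater with no usable history. All numbers are hypothetical.
import numpy as np
import pandas as pd

idx = pd.date_range("2022-01-01", periods=24, freq="MS")
aggregate = pd.Series(200 + 80 * np.cos(2 * np.pi * idx.month / 12), index=idx)

# Seasonal coefficients of the aggregate: month average / overall average.
coeffs = aggregate.groupby(aggregate.index.month).mean() / aggregate.mean()

# Apply the aggregate's pattern to the new item's flat monthly baseline.
baseline = 15.0
horizon = pd.date_range("2024-01-01", periods=12, freq="MS")
item_fcst = pd.Series(baseline, index=horizon) * horizon.month.map(coeffs).to_numpy()
print(item_fcst.round(1))
```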

Again, there is nothing wrong with automatic seasonality algorithms such as Holt-Winters. Feel free to use them if they have demonstrated a good and consistent forecast accuracy. However, if you are suffering from low forecast accuracy, a good first step towards improving it is to perform the seasonality calculation on the side.

Is aggregating data good?

There is a general belief that “forecasts at an aggregated level are more accurate than forecasts performed at a lower level”. For this reason some forecasting software will propose algorithms that forecast at an aggregated level, aggregating items with similar properties (same line, same color… you define it).

That might indeed result in a quite accurate forecast on this aggregate, but then what? How do you get back to your forecast at item level? The answer is generally simple: you will have to split it. For example, if you forecast the sales of some shoes of different sizes, it could make sense to aggregate the sales of all the sizes together to get a smoother signal, forecast that, and then break your forecast down at size level using some kind of split rule. Despite a good forecast on the aggregate, mess up the split rule and your forecast accuracy at item level will be damaged.
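
A minimal sketch of this aggregate-then-split pattern, with made-up sales and the simplest possible split rule (the historical size mix):

```python
# Forecast the aggregate, then split it back to item (size) level using
# the historical mix as the split rule. All figures are made up.
import pandas as pd

history = pd.DataFrame(
    {"size_38": [20, 22, 21], "size_39": [50, 55, 52], "size_40": [30, 33, 31]},
    index=pd.period_range("2024-01", periods=3, freq="M"),
)

total_fcst = history.sum(axis=1).mean()      # naive forecast of the aggregate
split = history.sum() / history.sum().sum()  # split rule: historical size mix
print((split * total_fcst).round(1))         # back to size-level forecasts
```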

From our experience, if you select your forecasting algorithm well, forecasting at an aggregated level and splitting makes little difference to your item-level forecast accuracy compared with forecasting directly at item level. Actually, too much aggregation can make you miss some important patterns that you would not have missed by forecasting at item level.

Let’s imagine that you forecast shoes and aggregate the sales of a model which comes in Black and in White. The two colors have very different seasonality patterns: winter for Black, summer for White. Now let’s assume the Black model sells much more than the White (10 times more on average). The risk here is that your aggregate signal will be very close to the Black signal. You will forecast it, and then when you split you are likely to apply a Black/winter seasonality pattern to your White shoes, something you would likely not have done if forecasting the colors separately.

That does not make the aggregating/splitting approach irrelevant. This approach can be quite useful if you have to forecast many items and do not have the resources to look at all of them individually. But be careful to apply this method to aggregates that “make sense” (for example, the sales of all the sizes of the Black shoe sound fine), and avoid mixing signals with different seasonality, or cases where the split rule back to Item level is not obvious.

Keep it simple

The French have a saying: “Le Mieux est l’ennemi du Bien”, which literally translates to “Better is the enemy of Good”, and could be interpreted as: past a certain point, adding complexity will do more damage than good.

The complexity you introduce in your algorithm should be just enough to capture the main specificities of the signal you are trying to forecast. You do not want to introduce so much complexity that you lose the big picture, or a clear understanding of how your algorithm works.

My teacher once said: “What is easy to conceive should be easy to explain”. Never forget that a Forecaster’s job is less to build the forecast itself than to be able to explain how it was built, in simple terms, to non-Forecasting specialists, in order to convince them that it is the correct path towards decision making. Therefore “technical” models which are hard for non-specialists to comprehend, such as ARIMA, are not recommended unless you can demonstrate their added value.

Bottom Line: analyse your signal well and introduce the complexity you need to take its main specificities into consideration, but keep your forecasting methodology as simple and as easy to explain to non-specialists as you possibly can.

AnalystMaster


Forecasting with PowerBI… really?

So PowerBI just added a Forecasting feature? As a Forecasting specialist, I got really interested in it.

What all organizations are looking for is not only a tool to explore the past, but also one that gives insight about the future. No wonder it is a great marketing argument for any BI platform to come with a Forecasting tool that can provide insights from your data about the future.

This feature will allow you to build sleek charts like this one, but how good is it really? Can we trust PowerBI for forecasting future trends?

[Image: example PowerBI forecast chart]

How does it work?

First, let’s take a look at what we have. Follow this link to understand how to activate the PowerBI Forecasting feature. Now you can have PowerBI forecast a Time Series for you. Great, but wouldn’t it be nice to understand HOW PowerBI forecasts it? What is the model used in the background?

The PowerBI team answered this question in this post, where they explained that the algorithm is R’s ETS(A,A,A) or ETS(A,A,N), an Exponential Smoothing algorithm that you can choose to make Seasonal or not (seasonality is calculated in an Additive, not a Multiplicative, way). I will refer you to this paper for more information about R and this ETS algorithm.
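
This is not PowerBI’s actual code path, but to get a feel for the same model family, here is a rough Python analogue using statsmodels’ ETS implementation with additive error, trend and seasonality, i.e. ETS(A,A,A); the data and seasonal period are made up:

```python
# A rough analogue of ETS(A,A,A) via statsmodels, on hypothetical data.
import numpy as np
import pandas as pd
from statsmodels.tsa.exponential_smoothing.ets import ETSModel

idx = pd.date_range("2021-01-01", periods=48, freq="MS")
y = pd.Series(100 + 0.5 * np.arange(48) + 10 * np.sin(2 * np.pi * idx.month / 12),
              index=idx)

# Additive error, additive trend, additive seasonality = ETS(A,A,A).
model = ETSModel(y, error="add", trend="add", seasonal="add", seasonal_periods=12)
fit = model.fit(disp=False)    # smoothing parameters chosen by maximum likelihood
print(fit.forecast(steps=12))  # 12-month-ahead forecast
```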

Back to basics: Forecasting 101

We will dedicate some more posts on Forecasting Time Series in this blog, so stay tuned!

What you should know in the meantime is that if you want to get a good forecast you pretty much need to go through all the following operations:

  1. Clean your data set from outliers: you might want to remove exceptional values (think zero sales because you were out of stock, or an exceptional sale due to a promotion, for example) from the baseline, as these values are not relevant and risk getting your forecasting algorithm off track (see the sketch after this list).
  2. Pick a forecasting algorithm and the values of its parameters: there are a bunch of methods you can use to forecast a Time Series: Time Series Decomposition, Moving Averages, Exponential Smoothing, Holt-Winters, ARIMA, Croston… to name a few. Each method comes with parameter values that have to be defined, and this is equally important. It is much better to use a simple method with relevant parameters than a sophisticated algorithm with an absurd set of parameters.
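
For step 1, here is a minimal sketch of one possible cleaning rule (entirely illustrative: flag values far from a rolling median, then replace them with that median):

```python
# Illustrative outlier cleaning: flag points far from a rolling median
# (0 = stockout, 310 = promotion spike) and replace them with the median.
import pandas as pd

sales = pd.Series([100, 104, 98, 0, 105, 310, 102, 99, 107, 101])

median = sales.rolling(5, center=True, min_periods=1).median()
deviation = (sales - median).abs()
outliers = deviation > 3 * deviation.median()

cleaned = sales.mask(outliers, median)  # swap flagged points for the local median
print(cleaned)
```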

How best to clean your data, pick the best algorithm and set its parameters are important and difficult questions that Forecasters have to answer. There is often no absolute answer, and the algorithm that works best today will not necessarily be the best one tomorrow. So should you change it every time you forecast, or should you give more weight to stability? Should you let the algorithm pick its parameters itself, or should you fix them based on your knowledge of the business? Again, important questions!

Actually these questions are so important that we touch here on the core of what Forecasting really is: STRATEGIC! Forecasts drive strategic decisions and tactics.

As such, Forecasts are usually reviewed at the highest possible level in organizations. And the most important thing when presenting a forecast to a top executive is to be able to JUSTIFY how you built it: does it make sense based on past trends and seasonality? The worst possible answer to justify your forecast is: “the machine did it”…

PowerBI Forecasting

The problem with PowerBI’s Forecasting module (as we are writing these lines) is that it answers all these strategic questions for you and leaves you with very little FLEXIBILITY to choose methods and parameters:

  • Data cleaning: PowerBI won’t really do it. At most it will perform “Linear extrapolation” between missing data points.
  • You can pick whether your algorithm should be Seasonal or not (thank you!), but seasonality will be calculated in an Additive way and nothing else (sorry, but I actually usually prefer to consider Seasonality as Multiplicative…).
  • The algorithm will be the R additive ETS, with linear trends and nothing else. If you want to use a different method or non-linear trends, you can forget it.
  • The parameters of the ETS are automatically picked (this is usually done by backtesting and minimizing the error on the first forecasted time period). This can be quite dangerous, because it is not so hard to be accurate for the first time period; many methods and parameters can provide good forecasts for that. But what Executives and Top Management will likely look at is the medium to long term forecast, and the parameters that minimize the first month’s error are not necessarily the ones that will provide the best medium to long term forecast.

Don’t get me wrong: the R ETS algorithm is quite good and will provide excellent results in many cases. But I also know a lot of cases in which it is not appropriate, will give you absurd forecasts, and will leave you in difficulty when having to explain them to your Management. So for all these reasons it is important to understand how this PowerBI Forecasting “Black Box” works, what it can and cannot do, and most importantly to be careful and not trust it blindly along the lines of “PowerBI is so smart and tells me this, so it must be true…”

Bottom Line: 

  • It’s great that PowerBI is working on Forecasting features (we insist that these are still in beta as we write, and will surely be improved in the near future). We are also happy to see that they use R, which comes with awesome packages for Time Series analysis and forecasting, so this also goes in the right direction.
  • However, keep in mind that Forecasting is a highly strategic activity, and being able to justify how you calculated your forecast is as important as the forecast itself. Therefore a little bit more control over the choice of algorithms and parameters would be highly appreciated instead of a “Black Box”. Why not also provide very simple “What if?” tools, such as boxes allowing Forecasters to build a forecast from simple strategic inputs from their management, like Trend: +10%, Seasonality: 12 months… Come on PowerBI team, you know how much we love you on this blog, so… challenge accepted?

AnalystMaster