Time Series Forecasting: how to choose your algorithm?

Moving Average vs. Exponential Smoothing? What about ARIMA? Is it better to have one algorithm or many at your disposal so that you can switch if need be?

The answer to these questions will vary based on the specificities of the signals you are trying to forecast. There is no universal answer; however, we will try to give you some best practices to help you make this call.

One size does not fit all

That may sound obvious, but if you are trying to improve your Forecast Accuracy you need to understand each signal’s specificities and forecast them accordingly. If you have a Seasonal signal, you will need a Seasonal algorithm; if you have some clear trends, you will need an algorithm that handles trend… etc.

Therefore good forecasters do not have just one method at their disposal but many, and will choose the best one for each signal based on its specificities. Now the next question is: how do you select this algorithm?

“Natural choice” vs. “Black Box”

Most Forecasting software performs an automatic selection of the forecasting algorithm using the “Best in Class” approach. As they well understand that one size does not fit all, they use their computing power to calculate forecasts on each signal with many different algorithms in order to select the one that works best on each signal. The “best” algorithm is usually selected through “Backtesting”, i.e. forecasting the last few data points using the previous ones and selecting the algorithm that minimizes the forecast error on these last few data points.
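To make the idea concrete, here is a minimal sketch of that selection logic, assuming a plain Python list of historical values and two illustrative candidate methods (the function and variable names are ours, not taken from any particular software):

import statistics  # only used implicitly via plain sums below; stdlib only

def naive(history):
    # "Naive" forecast: simply repeat the last observed value
    return history[-1]

def moving_average(history, window=3):
    # Average of the last `window` observations
    return sum(history[-window:]) / window

def best_in_class(history, candidates, test_points=3):
    # Backtest: forecast each of the last `test_points` values from the
    # values before it (lag-1 error), and keep the method with the lowest total error
    errors = {}
    for name, method in candidates.items():
        errors[name] = sum(abs(method(history[:i]) - history[i])
                           for i in range(len(history) - test_points, len(history)))
    return min(errors, key=errors.get)

# best = best_in_class(sales, {'naive': naive, 'moving average': moving_average})

Note how the naive forecast is a perfectly legitimate contestant in this game, which is part of the problem described below.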

That might sound like a great idea; however, it can be a very risky strategy because different forecasting algorithms are likely to give very different forecasts. Switching algorithms back and forth is likely to generate high variability in your forecast from release to release, affecting your credibility as a forecaster due to this lack of consistency, if not generating serious business problems down the line. If you are Producing according to your forecast, for example, you might be familiar with the bullwhip effect, and know that large variations in your forecast will likely generate Capacity and Inventory issues.

Another drawback of this approach is that it tends to treat forecasting as a “Black Box”, which makes it difficult to explain which forecasting algorithm has been used. A Forecaster’s job is less about producing the forecast itself than about convincing peers and Management that this is the one they should base their decisions upon. A forecast not validated by your Management or peers is of little value, and when challenged about how you built it, the last thing you want is to have sentences such as “the computer said…” or “I have no idea how it was calculated, the machine did it for me” as your only answers.

Finally, do not be fooled by the nice “Best In Class” labelling, and take some time to understand what is really behind it. The so-called “best” algorithm is generally the one that minimizes the lag-1 error, which means that you will be picking the algorithm which, based on N data points, has best forecasted the N+1 data point. The thing is that it is not so hard to accurately forecast the next data point. Many algorithms, including “naïve” forecasts (i.e. a simple forecast equal to the last signal value), can be quite accurate for this. The risk of an “unnatural” methodology being selected (such as a non-seasonal algorithm when the signal is clearly seasonal) is also high. Such a winner of the “best in class” competition might have been the best at minimizing your past lag-1 error, but is this the algorithm you want to use to forecast 12, 50 or 100 data points ahead, and to convince your management to use? Probably not.

We firmly believe that it is best to drop a few percentage points of Forecast Accuracy to gain higher forecast consistency and stability, as well as clarity about which forecast methodology has been used. For this reason we recommend a more “natural” approach to selecting the forecasting algorithm: first analyse the signal. Does it exhibit a clear trend? Pick a method with trend. Does it exhibit clear seasonality? Pick a method with seasonality. Do you want the trend to be linear or damped? Pick the right type of trend. Try not to leave these obvious choices to a computer’s “Black Box”, which could generate variability or “strange” forecasts that will later be difficult to explain.

That does not mean that automatic optimization is necessarily bad. It is actually quite useful for figuring out the best values of the parameters (for example if you are using an Exponential Smoothing algorithm), or for picking the best algorithm among a few “natural” ones. However, we recommend that you understand carefully how the automatic optimization works, and that you narrow the “competition” to “natural” algorithms and ranges of parameters that are likely to return a forecast that makes sense to you and to your team.
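As an illustration of this narrowed-down optimization, here is a minimal sketch assuming a recent version of the Python statsmodels library and a pandas Series of monthly history: the analyst fixes the structure (trend, damping, seasonality), and only the smoothing parameters are left to the optimizer.

import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

def fit_guided_model(history: pd.Series):
    # Structure chosen by the analyst after looking at the signal:
    # additive damped trend, multiplicative yearly seasonality on monthly data
    model = ExponentialSmoothing(
        history,
        trend='add',
        damped_trend=True,
        seasonal='mul',
        seasonal_periods=12,
    )
    # Only the smoothing parameters (alpha, beta, gamma, phi) are optimized here
    return model.fit()

# fitted = fit_guided_model(monthly_sales)
# forecast = fitted.forecast(12)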

Seasonality on the side please!

Seasonality is one of the most difficult components of your signal to forecast. Get it wrong and your forecast accuracy will be seriously damaged.

There are essentially two types of forecasting algorithms: those that deal with Seasonality automatically (Holt-Winters for example), and those that do not incorporate Seasonality (Moving Average, Exponential Smoothing, non-seasonal ARIMA…). You can still use the latter for seasonal signals; it just means that you will have to deal with Seasonality on the side, by “de-seasonalizing” your signal first (i.e. removing the seasonality inside it), and “re-seasonalizing” it after you have forecast. The recipe is:

  • For a signal X(t), calculate the seasonal coefficients S(t)
  • Consider the de-seasonalized signal Y(t) = X(t) / S(t), and forecast Y(t+T) using the forecasting methodology of your choice (Exponential Smoothing, Moving Average, ARIMA… you name it)
  • Then re-seasonalize using the same seasonal coefficients S(t+T): X(t+T) = S(t+T) * Y(t+T)
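Here is a minimal sketch of this recipe in Python, assuming monthly data in a pandas Series indexed by month-start dates, simple month-of-year coefficients, and a plain moving average as the non-seasonal method (any other method could be plugged in instead):

import pandas as pd

def seasonal_coefficients(history: pd.Series) -> pd.Series:
    # S(month) = average sales of that month / overall average
    return history.groupby(history.index.month).mean() / history.mean()

def forecast_with_seasonality(history: pd.Series, horizon: int) -> pd.Series:
    s = seasonal_coefficients(history)
    # Y(t) = X(t) / S(t): remove the seasonality
    deseasonalized = history / history.index.month.map(s)
    # Forecast Y(t+T) with a non-seasonal method (here a 12-month moving average)
    level = deseasonalized.tail(12).mean()
    future = pd.date_range(history.index[-1], periods=horizon + 1, freq='MS')[1:]
    # X(t+T) = S(t+T) * Y(t+T): put the seasonality back
    return pd.Series(level, index=future) * future.month.map(s)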

We recommend this latter approach. Similarly to “guiding” the computer by pre-selecting some algorithms and ranges of parameters, “guiding” the algorithm on the seasonality side is one of the best ways to avoid large pitfalls and ensure good forecast accuracy. When selecting the seasonality coefficients to be used on a particular item, you essentially have two options:

  • Item-level seasonality: in that case you will be calculating the seasonality based on the historical signal for this particular item. This approach is to be used for items with a specific and distinctive seasonality
  • Aggregated seasonality: in that case you will be using the seasonality of a bigger aggregate to which the item belongs. For example, if I am a retailer trying to forecast the sales of a new red sweater with little historical data, I could use the seasonality of all the other red sweaters and assign that seasonality to the new red sweater. This approach is to be used for items with no distinctive seasonality pattern, or without enough history to derive a specific pattern

Again, there is nothing wrong with automatic seasonality algorithms such as Holt-Winters. Feel free to use them if they have demonstrated good and consistent forecast accuracy. However, if you are suffering from low forecast accuracy, a good first step towards improving it would be to perform the seasonality calculation on the side.

Is aggregating data good?

There is a general belief that “forecasts at an aggregated level are more accurate than forecasts performed at a lower level”. For this reason some forecasting software will propose algorithms that forecast at an aggregated level, aggregating items with similar properties (same line, same color… you define it).

That might indeed result in a quite accurate forecast on this aggregate, but then what? How do you get back to your forecast at item level? The answer is generally simple: you will have to split it. For example, if you forecast the sales of some shoes of different sizes, it could make sense to aggregate the sales of all the sizes together to get a smoother signal, forecast this, and then break your forecast down at size level using some kind of split rule. Despite a good forecast on the aggregate, mess up the split rule and your forecast accuracy at item level will be damaged.
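To make the split step concrete, here is a minimal sketch assuming a pandas DataFrame of history with columns ['item', 'period', 'sales'] and an aggregate forecast (a pandas Series indexed by future periods) produced by whatever method you prefer; the split rule used here, each item’s historical share of total sales, is just one possible choice:

import pandas as pd

def split_aggregate_forecast(history: pd.DataFrame,
                             aggregate_forecast: pd.Series) -> pd.DataFrame:
    # Split rule: each item's share of total historical sales
    shares = history.groupby('item')['sales'].sum()
    shares = shares / shares.sum()
    # Apply the same share to every forecasted period
    rows = []
    for period, total in aggregate_forecast.items():
        for item, share in shares.items():
            rows.append({'item': item, 'period': period,
                         'forecast': total * share})
    return pd.DataFrame(rows)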

From our experience, if you select your forecasting algorithm well, it makes little difference to your item-level forecast accuracy whether you forecast at aggregate level and split, or forecast directly at item level. Actually, too much aggregation can make you miss some important patterns that you would not have missed by forecasting at item level.

Let’s imagine that you forecast shoes and aggregate the sales of a model which comes in Black and in White. The two colors have very different seasonality patterns: winter for Black, summer for White. Now let’s assume the Black model sells much more than the White (10 times more on average); the risk here is that your aggregate signal will be very close to the Black signal. You will forecast it, and then when you split you are likely to apply a Black / winter seasonality pattern to your White shoes, something you would likely not have done if forecasting the colors separately.

That does not make the aggregating / splitting approach irrelevant. This approach can be quite useful if you have to forecast many items and do not have the resources to look at all of them individually. But be careful to apply this method to aggregates that “make sense” (for example the sales of all the sizes of the Black shoe sound fine), and avoid mixing signals with different seasonalities or where the split rule to go back to the item level is not obvious.

Keep it simple

The French have a saying: “Le Mieux est l’ennemi du Bien”, which literally translates into “The better is the enemy of the good”, and could be interpreted as: past a certain point, adding complexity will do more damage than good.

The level of complexity you introduce in your algorithm should be just enough to capture the main specificities of the signal you are trying to forecast. You do not want to introduce so much complexity that you lose the big picture or a clear understanding of how your algorithm works.

My teacher once said “What is easy to conceive should be easy to explain”. Never forget that a Forecaster’s job is less to build the forecast itself than to be able to explain how it was built, in simple terms, to non-forecasting specialists, in order to convince them that it is the correct basis for decision making. Therefore “technical models” which are hard for non-specialists to comprehend, such as ARIMA, are not recommended unless you can demonstrate their added value.

Bottom Line: analyse your signal well and introduce the complexity you need to take its main specificities into consideration, but keep your forecasting methodology as simple and as easy to explain to non-specialists as you possibly can.

AnalystMaster

 

Decision Making: why you need to know about the Bayes Theorem

In this section of the blog we will take you into the process of decision making, and we will start by introducing the Bayes Theorem and explaining why it is so important in the decision making process.

But let’s start at the beginning: how do you make a decision?

Decision Making and Probabilities

If you decide to go out without an umbrella, it could be because you just forgot it, but a more likely reason is that you think it will not rain. The weather is almost impossible to forecast with 100% certainty (especially in my hometown in Ireland), but somehow you have evaluated that the probability of rain was low (let’s say <10%) and that carrying an umbrella with you was not worth the trouble compared to the benefit of having it in the unlikely event of rain.

We are used to making such decisions unconsciously, and this is the basis of Risk Management and Decision Making.

Let’s take another example: you are a Retailer and need to decide how many pieces of Item X you need to carry in your store to avoid running out of stock and losing sales. If you decide to carry 5 pcs of Item X (for which you receive replenishment every day) in Stock, it is (or at least it should be) because you have evaluated that you almost never sell more than 5 pcs a day, and that when you do, it happens so rarely (let’s say less than 1% of the time) that you are willing to accept the risk of running out of Stock 1% of the time versus the cost of carrying additional Inventory to protect against every possible outcome.
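That evaluation can be made explicit with a small sketch, assuming you have a list of historical daily sales for Item X (the function name and the 99% target are illustrative):

import numpy as np

def stock_level_for_service(daily_sales, service_level=0.99):
    # The 99th percentile of observed daily demand is the smallest quantity
    # that would have covered demand on at least 99% of the historical days
    return int(np.ceil(np.quantile(daily_sales, service_level)))

# Example: with this history, carrying 5 pcs covers roughly 99% of the days
# print(stock_level_for_service([2, 3, 1, 4, 2, 5, 3, 2, 1, 3]))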

This probabilistic decision making process requires a deep understanding of the events of this world (such as rain, or making more than 5 sales per day). Now, how do we evaluate them?

Rethinking Reality: nothing is certain!

“Evaluate” or “Estimate” the probabilities of the events of the world: these are carefully chosen words. This probabilistic approach invites us to rethink Reality and what we hold for certain.

A prediction like “The Sun will rise tomorrow” sounds so obvious that most of us would hold it as a universal truth. The probabilistic decision maker would instead say “The Sun will rise tomorrow with a 99% chance”. Then, every day, as the Sun rises, the probabilistic decision maker refines his estimate, which eventually becomes “The Sun will rise tomorrow with a 99.9% chance”, then 99.99%, then 99.99999%. However, the probabilistic decision maker will never give in to the certainty of holding “The Sun will rise tomorrow” as an absolute truth. For him, nothing is certain in this world and 100% probability does not exist! (As a matter of fact, we now know that in about 5 billion years the Sun will begin to die, so eventually one day the Sun will NOT rise tomorrow!)

Therefore we will never know for sure the real probabilities necessary for evaluating risks and making decisions like carrying an umbrella, selling more than 5 pcs per day, or seeing the Sun rise tomorrow. However what we can do is Estimate them through Observations and Tests.

It is very important to make the distinction between Tests and Absolute Reality as they are not the same thing and Tests incorporate a risk of error:

  • Tests and Reality are not the same thing: for example, being tested positive for Cancer and having Cancer are not the same thing
  • Tests are flawed: tests can be wrong. For example, you can be tested positive for Cancer and not have Cancer at all (this is called a false positive), or be tested negative for Cancer and still have it (this is called a false negative)

The Bayes Theorem and its applications

Instead of holding universal truths, we are now invited to think of the world (even the most certain things, like the Sun rising every day) in terms of probabilities, to evaluate these probabilities through objective tests and observations, and to continuously refine these estimates as new evidence comes up.

In a probabilistic world, this translates into the Bayes Theorem:

Bayes Theorem:

P(A|X) = P(X|A) * P(A) / P(X)

i.e. the probability of A being true knowing that X happened = the probability of X happening knowing that A is true, times the probability of A, divided by the probability of X

or its equivalent form

P(A|X) = P(X|A) * P(A) / ( P(X|A) * P(A) + P(X|not A) * P(not A) )

Now let’s see how the Bayes Theorem works on a practical example. Let’s try to evaluate P(A|X), the probability of having Cancer (A) following a positive test result X.

Prior probability of having Cancer before the test P(A) = 1%

We know that P(not A) = 1 – P(A) = 99%

New Event occurs: tested positive for Cancer

  • P(X|A) is the true-positive probability: the probability of being tested positive knowing that you have Cancer = 80%
  • P(X|not A) is the false-positive probability: the probability of being tested positive when you do not have Cancer = 10%

Posterior probability

P(A|X) = P(X|A) * P(A) / ( P(X|A) * P(A) + P(X|not A) * P(not A) ) = 7.5%

The Bayes Theorem invites us to start with an initial estimate of a 1% chance of having Cancer, which increases to 7.5% after having been tested positive, incorporating the risks of true and false positives.

A second positive test would increase the probability of having cancer further

Prior probability of having Cancer before the test P(A) = 7.5%

We know that P(not A) = 1 – P(A) = 92.5%

New Event occurs: tested positive for Cancer

  • P(X|A) is the true-positive probability: the probability of being tested positive knowing that you have Cancer = 80%
  • P(X|not A) is the false-positive probability: the probability of being tested positive when you do not have Cancer = 10%

Posterior probability

P(A|X) = P(X|A) * P(A) / ( P(X|A) * P(A) + P(X|not A) * P(not A) ) ≈ 39%

After this second positive test we now have roughly a 39% chance of having Cancer.
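Here is a minimal sketch of these two successive updates in Python, using the same test characteristics as above (80% true-positive rate, 10% false-positive rate):

def bayes_update(prior, p_pos_given_a=0.80, p_pos_given_not_a=0.10):
    # P(A|X) = P(X|A)*P(A) / ( P(X|A)*P(A) + P(X|not A)*P(not A) )
    evidence = p_pos_given_a * prior + p_pos_given_not_a * (1 - prior)
    return p_pos_given_a * prior / evidence

p1 = bayes_update(0.01)   # after the first positive test  -> about 0.075
p2 = bayes_update(p1)     # after the second positive test -> about 0.39
print(round(p1, 3), round(p2, 3))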

Bottom Line

The Bayes theorem is all about acknowledging that we do not know the events of the world for sure, that we need to think about them probabilistically, and that we need to refine our estimates of these probabilities as new data becomes available:

Old Forecast + New & Objective data = New Forecast

This sounds obvious, but it is the core of Forecasting, Risk Management and Decision Making.

AnalystMaster

6 tips to improve your Forecast Accuracy

Measure your Forecast Accuracy

There is a saying: “You cannot improve what you cannot measure”. By definition, if you want to improve your Forecast Accuracy, your first step is to measure it, in order to understand where you are doing OK and where your areas for improvement are.

There are a bunch of formulas to measure Forecast Accuracy and we won’t go into the detail of all of them here; however, no matter which formula you pick, it is very important that it comes in two versions, in order to measure both:

  • The Bias: this measures whether you consistently Over Forecast or Under Forecast, i.e. whether you are an optimistic forecaster or a pessimistic one. An accurate forecast should have a Bias as close as possible to zero. In an aggregated Bias Forecast Error calculation, over- and under-forecasts compensate each other, so you will only exhibit a positive Bias Error at aggregated level if you consistently Over Forecast, i.e. if you are consistently Optimistic. Similarly, you will only exhibit a negative Bias Error at aggregated level if you are consistently Pessimistic. Understanding this will be a great step towards improving your Forecast Accuracy
  • The Absolute Error: this is the average error that you make, whether over-forecasting or under-forecasting. Here, if you measure the aggregated Absolute Forecast Error, over- and under-forecasts do not compensate each other. This error will quite likely not be zero, and your target will be to keep it as low as possible

Bias Forecast Error

Bias_Error= ∑(Forecast – Sales) / ∑Sales

Absolute Forecast Error

Abs_Error = ∑ABS(Forecast-Sales) / ∑Sales
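Here is a minimal sketch of both measures in Python, assuming equal-length lists of forecasts and actual sales for the scope you want to measure; it also illustrates why the two must be looked at together:

def bias_error(forecast, sales):
    # Over- and under-forecasts compensate each other
    return sum(f - s for f, s in zip(forecast, sales)) / sum(sales)

def abs_error(forecast, sales):
    # Over- and under-forecasts do NOT compensate each other
    return sum(abs(f - s) for f, s in zip(forecast, sales)) / sum(sales)

# Example: over- and under-forecasting by the same amount gives zero bias
# but a non-zero absolute error
forecast = [110, 90, 105]
sales    = [100, 100, 105]
print(bias_error(forecast, sales))   # 0.0   -> no systematic bias
print(abs_error(forecast, sales))    # ~0.066 -> about 6.6% absolute error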

 

Measure and keep track of these errors over time in order to analyse where you have the highest Bias and Absolute Error, and find a way to address it.

Invest your resources wisely

Once you have found out which areas you need to improve most, invest your resources wisely. The Pareto Law suggests that 80% or so of your Forecast Error will be driven by 20% or so of your Items, so focus most of your resources on those 20% that will drive the biggest reduction.
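As an illustration, here is a minimal sketch of such a Pareto analysis, assuming a pandas DataFrame with columns ['item', 'forecast', 'sales']; it ranks items by their contribution to the total absolute error and returns the ones that together drive roughly 80% of it:

import pandas as pd

def pareto_items(df: pd.DataFrame, share=0.80):
    # Absolute error contributed by each item
    errors = (df['forecast'] - df['sales']).abs().groupby(df['item']).sum()
    errors = errors.sort_values(ascending=False)
    cumulative_share = errors.cumsum() / errors.sum()
    # Keep items until the cumulative share crosses the 80% threshold
    keep = cumulative_share.shift(fill_value=0.0) < share
    return errors[keep].index.tolist()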

Use Statistics

One of the best ways to get an objective and unbiased forecast is to use a Statistical algorithm to forecast from Historical data. Statistical algorithms are usually less biased than humans and can detect patterns such as trends and seasonality in a relatively objective way.

Collaborate with peers

Using statistics does not mean falling into the pitfall of an “all-statistics” forecasting approach. Forecast Accuracy is generally maximized when you combine Statistical Forecasting with some Qualitative and Collaborative inputs:

  • Qualitative Feedback: getting qualitative information about trends, future events already planned, etc. is the best way to understand your historical signal, to clean it from outliers if needed, to anticipate the future, and to check that your Statistical Algorithms are going in the right direction
  • Hierarchical checks: if you forecast at Item level, you might want to check that your forecast makes sense at an aggregated level. In order to do this you might need to convert your forecast in Units into other “languages” such as EUR or USD Revenue, in order to match a Budget or discuss the potential by Line with your Finance and Marketing teams, who will generally talk in Currency. Converting your Forecast in Units into a Forecast in Weight or in Hours of Production is also a good way to speak the same language as your Warehouse or Production Plant, and to do high-level checks that it makes sense with regard to their capacity / budget.

Look at Sell-Out not Sell-In

If you are selling a product to a Third party (say a Distributor) who will eventually resell it to a Shop and then to the Final client you are in a Sell-In / Sell-Out configuration.

You might only be interested in forecasting your sales to this Distributor (your Sell-In); however, it might be worth looking a little further down the Selling Chain. Indeed, if there is a big slowdown in the sales from the Shop to the Final clients, the Shop will likely be full of non-rotating inventory and will likely not order more from the Distributor. If this trend is generalised to all the shops who usually purchase from this Distributor, the Distributor himself will be overstocked and will stop ordering from you. Therefore your Sell-In will soon start to drop. Similarly, if Final Customers go crazy about your product, shops and soon your Distributor will run out of Stock, and your Sell-In is likely to pick up strongly as well.

Understanding the behaviour at the end of the Selling Chain, the Sell-Out, is of utmost importance if you want to better anticipate future selling trends. This can be achieved through a strong collaboration with the Final Selling Points, for example by sharing Point of Sale data, or more simply through Qualitative Feedback.

Smooth Demand

Last week I lost a Poker game to one of my friends, after he called my bluff. When asking him how he knew I did not have the King needed to win that game, he just told me, a little embarrassed: “I must admit I saw your cards when you looked at them, you should really be more careful next time”.

Apart from the fact that I’m a poor Poker player, or that I should really reconsider my friendships, this tells us something extremely important when it comes to Forecast Accuracy. My (so-called) friend’s forecast was accurate not because he is a forecasting genius, but precisely because he was NOT forecasting anything… he actually already knew the outcome!

Try to apply this to the Business World. By definition, Forecasts are always wrong. However, the more of your actual signal you can get to know upfront, with certainty, the less you have to forecast, and the more accurate you are!

This might be difficult or close to impossible if you are at the end of the Selling Chain and your sales depend on the random behaviour of final customers. However, it is much less impossible if you are at the beginning or at an intermediate level of the Selling Chain, where you could ask, or even agree with, your customer about what he intends to purchase from you in the near future. He might be OK with sharing some Forecasts with you, placing his Orders upfront, anticipating or postponing them at your request, or letting you manage his Inventory directly (VMI, or Vendor Managed Inventory). Reaching that level of Collaboration with your Customer will make it much easier for you to foresee what will happen next and to improve your Forecast Accuracy.

And if you are at the end of the Selling Chain, that does not mean that you cannot Smooth Demand either. Of course you will not be able to ask random customers that you have never met to postpone or anticipate their purchases or to give you advance notice about them. However, knowing that Marketing and Advertising techniques work, you might be able to redirect your Promotional activities according to Sales Patterns in order to smooth demand. If your sales of Item X are so strong that Supply cannot keep up, there is probably little purpose in continuing to dedicate a heavy Promotional budget and resources to Item X. Use these resources on Item Y, where sales are slower, instead. You will still sell Item X, but you might avoid running out of Stock by not pushing it so much, and your sales of Item Y, which were slow, might pick up, reducing your risk of Inventory Obsolescence.

Price Changes are also a great option to achieve similar results. By increasing the price of Item X and decreasing the price of Item Y, your Unit Sales of Item X should decrease and your Unit Sales of Item Y should increase.

Using these tools, again through strong Internal and External Collaboration, will help you drastically increase your Forecast Accuracy.

AnalystMaster

Too big for Excel…

We have seen in our PowerBI Best Practice Article that it is best to work with data at the maximum level of detail. Well, the problem is that sometimes it can be a lot of data… like really a lot…

One day I wanted to extract a lot of information from an ERP system. Not only was the data way above Excel’s 1M-row capacity, but I was also limited in how much information I could extract from this ERP at a time.

Crazy isn’t it? You know the information you need is out there but it’s LOCKED… What is the point of owning a Ferrari if you’re only going to drive it in first gear?

So what can you do? If you have read this article, you know that PowerBI is the right tool for you. We will show you how to use it depending on how many resources you may have, or how much support you get from your IT department.

Regarding your data, you have essentially 3 options:

  • Use PowerBI to connect directly to the database
  • Make it fit
  • Work around it

Use PowerBI to connect directly to the database

Good news #1: we know a very cool tool, PowerBI, which is capable of handling large amounts of data; this is the way to go!

Good news #2: it is wired with tons of connectors to big databases (SAP, BW, Hadoop…)

If you get sufficient support from your IT department you should be able to activate these connectors and work with your large dataset without any issue.

However, if you are reading this blog, we do not expect you to have these kinds of resources or too much support from your IT department. You are left with the last 2 options:

Make it fit

At the other end of the spectrum, you can obviously try to make your data fit by reducing the amount of information you bring into PowerBI. This can essentially be done by:

  • filtering: by filtering out some of your data, based on attributes or combinations of attributes that you are less likely to need later, you can make your dataset smaller and eventually make it fit within your capacity requirements. You can also limit your time horizon
  • aggregating: by strategically giving up some levels of detail that are less likely to be useful in your analysis later on. For example, instead of bringing in data at a daily or weekly level of detail you might consider monthly (see the sketch below for both ideas)
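A minimal sketch of both ideas, assuming your raw extract fits in a pandas DataFrame with columns ['date', 'store', 'product', 'quantity'] before you load the reduced result into PowerBI (the column names are illustrative):

import pandas as pd

def make_it_fit(df: pd.DataFrame, keep_stores, start_date) -> pd.DataFrame:
    # Filtering: keep only the stores and the time horizon you will need later
    df = df[df['store'].isin(keep_stores) & (df['date'] >= start_date)].copy()
    # Aggregating: give up the daily detail and keep a monthly level instead
    df['month'] = df['date'].dt.to_period('M')
    return (df.groupby(['month', 'store', 'product'], as_index=False)['quantity']
              .sum())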

In our opinion, the Make It Fit option should be activated only as a last resort. If you go for it, since you will obviously lose some detail in your information, be strategic about it. Think wisely about which levels of detail are less likely to add value to your analysis and decide which data to leave out accordingly.

Thankfully, even with limited IT resources, we do not take NO for an answer, and there is a last option:

Work around it

This sounds absurd: we don’t have the tools to manage (extract or store) our data at the maximum level of detail, but we don’t give up hope of importing it into PowerBI anyway. How? By slicing it and re-aggregating it later!

  • Slicing: first you need to slice your data into slices that fit within your extraction capacities. For this you can refer to the filtering strategy of the “Make it Fit” section.
    Define some filters so that each extraction is manageable and run them one after the other so that you get the full dataset.
    Obviously the fewer slices the better, so don’t necessarily limit yourself to the storage capacities of Excel or Access. Just focus on your extraction capacities, and go with text (.TXT) files for storage if necessary
  • Re-aggregating: this can be managed in PowerBI using the Append Query function in the Query Editor.
    This works seamlessly if you have a few slices to re-aggregate; however, if you have many, you might want to run a script to manage this aggregation outside PowerBI

Back to my ERP extraction problem: unfortunately there was no direct PowerBI connector that I could use, so I got the IT department to extract the past information for me into a giant text file, and to run a script every day to extract the new daily data and drop it into another TXT file. All the TXT files were in a folder that I could access.

I was left with a bunch of TXT files with the same format; the only thing I had to do was aggregate them together. A quick Python script later (which I’m giving to you below) and I was done. I managed to get all my data with maximum detail out of this ERP and play with it in PowerBI!

Never give up!

AnalystMaster

Appendix: how to aggregate text files using Python

Text files are not limited in row capacity and can be an efficient, low-cost way to store large datasets.

One of the best tools to work with text files is Python.

If you do not have Python you can download it for free from the Official Python website

Here we suppose that:

  • you have a file called Startfile.txt at the path Path/Startfile.txt, which contains the headers (and only the headers) of your text files (the headers are all the same and in the same order in every file)
  • all the text files that you want to append together are in a folder called Path/Files, and they are the only text files in this folder
  • you have a file called Finalfile.txt, at the path Path/Finalfile.txt, that will contain the result of appending all the text files containing your data

What you need to do:

  1. open Notepad and copy-paste the script below
  2. replace Path with the actual path where your files will be stored (and only these files), create the Startfile and drop your other text files into the Path/Files folder
  3. save this script with a .py extension
  4. run it, that’s it! You can now open your Finalfile.txt or connect it to PowerBI to read your data

import glob

# Create the final file and write the header line from Startfile.txt
g = open('Path/Finalfile.txt', 'w')   # 'w' truncates any previous content
s = open('Path/Startfile.txt', 'r')
g.write(s.read())
s.close()
g.close()

# Re-open the final file in append mode and add every data file,
# skipping the header row of each one
g = open('Path/Finalfile.txt', 'a')
path = 'Path/Files/*.txt'
files = glob.glob(path)

for fle in files:
    with open(fle) as f:
        text = f.read().splitlines(True)
        g.writelines(text[1:])   # drop the first (header) line of each file

g.close()

Convert Matrix into Flat File format (Reverse Pivot Table)

We have seen in this article the importance of working with Flat Files. This is especially true when working with PowerBI, but not only.

Reminder: what is a Matrix and what is a Flat File format

Matrix Format

            Date 1    Date 2
Product 1     1000      1100
Product 2      500       450

Flat File Format

Products     Dates     Sales
Product 1    Date 1    1000
Product 1    Date 2    1100
Product 2    Date 1    500
Product 2    Date 2    450

Converting from Matrix to Flat File

We have seen that converting from a Flat File to a Matrix is generally seamless. Excel users know how to do it well, using Pivot Tables for example.

However, the opposite is not straightforward (or at least it used not to be…). This is annoying, since the Matrix format is one of the most used by Excel users, and I have no doubt you will receive most of your data in this format.

So how do you make this conversion from Matrix to Flat File? Here are some options:

  • With PowerBI: we told you a long time ago that you need to stop using Excel and switch to PowerBI. PowerBI’s Query Editor handles this conversion just fine: just import your data, select ALL your attribute columns (i.e. the ones you do NOT want to “unflip”), then right-click (your mouse cursor needs to be on the header of one of the attributes) and select Unpivot Other Columns.
    BOOOM! Job done!


  • With Excel 2016 / PowerQuery: similarly, Excel 2016 comes with PowerPivot and PowerQuery built in. You just have to follow these steps to convert your Matrix into a Flat File; it’s very similar to the PowerBI technique
  • With old versions of Excel:  this post will show you how to do it
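And if your matrix has already landed in a plain file and you prefer scripting, the same reverse pivot can be sketched in Python with pandas; this is just an extra illustration of what “Unpivot Other Columns” does, not one of the options above:

import pandas as pd

# The Matrix format from the reminder above
matrix = pd.DataFrame({'Products': ['Product 1', 'Product 2'],
                       'Date 1': [1000, 500],
                       'Date 2': [1100, 450]})

# Keep the attribute column(s); melt every other column into Dates / Sales pairs
flat = matrix.melt(id_vars=['Products'], var_name='Dates', value_name='Sales')
print(flat)   # one row per Products / Dates combination, as in the Flat File table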

Enjoy!

AnalystMaster