What is forecasting error in time series?

If Tableau is unable to provide a forecast for your view, the problem can often be resolved by changing the Date value in the view (see Change Date Levels).

Forecasting errors can result when the aggregation level of the time series (months, weeks, etc.) is either too fine or too coarse for the data to be forecast. This can lead to the "too much data" or "too little data" errors described below. Date aggregation can trigger a "too many Nulls" scenario when forecasting attempts to extract more data from the measure than is possible. For example, if the underlying granularity of the sales data is months but you aggregate by weeks, the result may be a significant number of Null values.

Other problems arise when the view’s aggregation and the aggregation specified for the forecast (using the Aggregate by field in the Forecast Options dialog box) are not compatible. Tableau can create a forecast when the forecast aggregation is a finer level of detail than the view's aggregation, but not when it is at a coarser level of detail; even when it is finer, the two values are only compatible if there is a strict hierarchy that Tableau can use (for example, quarters can be evenly divided into three months, but months can't be evenly divided into weeks). Avoid these scenarios by setting Aggregate by to Automatic.

The following list shows errors that can result from invalid forecasts in Tableau, and provides advice on how to resolve them.

Error message: A continuous date cannot be derived from the date fields in the view.

Suggestion for resolution: Forecasting requires a date field that can be interpreted continuously. If the date field is not explicitly continuous, then one of the included date levels must be Year.

This error is returned if there are fewer than four data points after trimming off unreliable or partial trailing periods which could mislead the forecast.

This article is a chunk from one of my blog posts on ARIMA time series forecasting with Python. It is a fairly extensive tutorial, so unless you are really interested in learning the ins and outs of ARIMA time series forecasting, don't bother to click.

But I did want to share this list of five very useful metrics for evaluating forecasting errors when working with time series data, as a quick read. Here we also learn the situations where one measure fails and another succeeds. I hope you like this chunk; I am copying some of the more interesting points from the original article in the hope of reaching the readers of Data Science Central with new and refreshed information.

An error is the difference between an actual value and its forecast. Residuals differ from forecast errors for two reasons. First, residuals are calculated on the training dataset, whereas forecast errors are calculated on the test or validation dataset. Second, forecasts can involve multiple steps ahead, whereas residuals involve a single step. Some of the metrics we can use to summarize forecasting errors are given below. But before that, let us look at the formula for calculating error, where Y represents the actual values and P the predicted/forecasted values: e = Y − P.


  • Mean Absolute Error (MAE) – The MAE is one of the most popular metrics, and it is easy to understand and compute. The lower the value, the better the forecast. Models that minimize MAE lead to forecasts of the median.
  • Root Mean Square Error (RMSE) – The RMSE is also popular among statisticians for judging how good a forecast is. Its values are harder to interpret than MAE's. Models that minimize RMSE lead to forecasts of the mean.

Both MAE and RMSE are scale-dependent errors: the errors are expressed on the same scale as the data. What does this mean for us? It means we cannot use these measures to compare forecasts of two time series with different units.
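As a rough illustration, both metrics can be computed in a few lines of plain Python (the function names and sample series here are made up for the example):

```python
import math

def mae(actual, predicted):
    # Mean Absolute Error: average absolute difference between actual and forecast.
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    # Root Mean Square Error: squares errors first, so large misses weigh more.
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

actual = [100, 120, 140, 160]    # e.g. monthly sales
forecast = [110, 115, 150, 155]

print(mae(actual, forecast))   # → 7.5, in the same units as the data
print(rmse(actual, forecast))  # ≈ 7.91, also in the data's units
```

Because both results carry the data's units, a MAE of 7.5 for sales in dollars cannot meaningfully be compared with a MAE of 7.5 for sales in truckloads.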

  • Mean Absolute Percentage Error (MAPE) – The MAPE has an advantage over MAE and RMSE in that it is unit-free, so it is safe to use for comparing the performance of time series forecasts with different units. The measure should not be used, however, if you have a mix of fast-moving and slow-moving products, because it makes no distinction between the two; typically, we would want to give higher weight to fast-moving products than to slow-moving ones.

If you look at the formula closely, you will realize that if Y is zero, MAPE becomes infinite or undefined (the typical divide-by-zero problem). What does this mean? It means we should not use MAPE if our time series contains zero values. Another disadvantage of MAPE is that it puts bigger penalties on negative errors than on positive ones.
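A minimal sketch of MAPE (the function name and data are illustrative) makes the divide-by-zero problem concrete:

```python
def mape(actual, predicted):
    # Mean Absolute Percentage Error: unit-free, expressed as a percentage.
    return 100 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

print(mape([100, 200, 400], [110, 180, 440]))  # ≈ 10.0

# Any zero in the actual series breaks the metric:
# mape([0, 200], [10, 180])  # raises ZeroDivisionError
```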

  • Weighted Mean Absolute Percentage Error (WMAPE) – WMAPE is a highly useful and popular method for operational purposes. It gives more importance to fast-moving products and also provides a solution to MAPE's divide-by-zero problem.
  • Symmetric Mean Absolute Percentage Error (SMAPE) – Another method of tackling the divide-by-zero problem of MAPE, but the metric can take negative values, which makes it difficult to interpret.
  • Mean Absolute Scaled Error (MASE) – The scale-dependent errors discussed above pose limitations when it comes to comparing the results of time series with different units, and the percentage errors have the problems just described. Citing these limitations, Hyndman & Koehler (2006) proposed an alternative metric called MASE. The formula for MASE is complicated, and thus we are skipping it for now.
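Under the common operational definitions (WMAPE as total absolute error over total actual demand; the symmetric variant dividing each error by the combined magnitude of actual and forecast), both fixes can be sketched as follows; the function names and sample data are illustrative:

```python
def wmape(actual, predicted):
    # Weighted MAPE: total absolute error divided by total actual demand,
    # so fast-moving (high-volume) items carry more weight, and a single
    # zero actual no longer makes the metric blow up.
    return 100 * sum(abs(a - p) for a, p in zip(actual, predicted)) / sum(abs(a) for a in actual)

def smape(actual, predicted):
    # Symmetric MAPE: divides each error by the average size of actual
    # and forecast, sidestepping division by a lone zero actual.
    return 100 * sum(2 * abs(a - p) / (abs(a) + abs(p))
                     for a, p in zip(actual, predicted)) / len(actual)

actual = [0, 200, 400]       # note the zero that would break plain MAPE
forecast = [10, 180, 440]
print(wmape(actual, forecast))  # ≈ 11.67
print(smape(actual, forecast))  # ≈ 73.35
```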

Now that we have briefly touched upon some of the most popular methods of calculating forecasting errors, let's look at the packages and functions that can be used in Python to generate these statistics.
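For instance, scikit-learn (assuming it is installed; mean_absolute_percentage_error requires version 0.24 or later) covers several of these metrics out of the box:

```python
import numpy as np
from sklearn.metrics import (mean_absolute_error,
                             mean_squared_error,
                             mean_absolute_percentage_error)

actual = np.array([100, 120, 140, 160])
forecast = np.array([110, 115, 150, 155])

print(mean_absolute_error(actual, forecast))             # MAE → 7.5
print(np.sqrt(mean_squared_error(actual, forecast)))     # RMSE ≈ 7.91
print(mean_absolute_percentage_error(actual, forecast))  # MAPE, as a fraction rather than a percentage
```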

What causes forecasting error?

At its most basic, forecast error is the difference between the forecast demand and the actual demand. A lot of calculations go into forecast error, but the bottom line is that the greater the difference between actual demand and forecast demand, the greater the impact on a distributor's bottom line.

What is forecasting error and why is it important?

Forecast error is the deviation of the actual demand from the forecasted demand. If you can calculate the level of error in your previous demand forecasts, you can factor this into future ones and make the relevant adjustments to your planning.

What are the 2 errors of forecasting?

Two of the most common forecast accuracy / error calculations are MAD – the Mean Absolute Deviation and MAPE – the Mean Absolute Percent Error.

What are the types of forecasting errors?

Examples: Mean Absolute Deviation (MAD), Mean Squared Error (MSE), Root Mean Square Error (RMSE), Error Total (ET) or Total Absolute Error (TAE).