Forecasting with VAR and Prophet

Valentina Djordjevic

In my previous post, I presented the ARIMA model for forecasting. It combined two concepts: autoregression, which regresses a variable on its own lagged values, and moving average, which models the error as a linear combination of past error terms. In this post, we're going to talk about VAR and Prophet as alternative models.

The main problem was that the ARIMA model we developed could not be applied to predict the output for all of the different objects we ran the forecasting for, which was the principal idea. As we dug deeper into the problem domain, we discovered that there were several exogenous variables affecting the output we were trying to predict. So, using only its past lagged values and errors couldn't solve the problem. That's when we decided to try another model: one that accepts historical data for several variables and forecasts their future values by using their lagged values, the calculated errors and one additional concept – their linear interdependencies.

VAR stands for vector autoregression. What does that mean? The model accepts a vector of several time series, representing the historical data of some variables. For each of them, it predicts future values by using the lagged values of the variable currently being analyzed, the lagged values of the other variables and an error term.
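To make the idea concrete, here is a minimal sketch of a VAR(1) fit with plain numpy – not the PyFlux implementation we actually used, and the simulated data is purely illustrative:

```python
import numpy as np

# Illustrative sketch: fit a VAR(1) model y_t = c + A @ y_{t-1} + e_t
# for a two-variable system, using ordinary least squares.
rng = np.random.default_rng(0)

# Simulate two interdependent series so the example is self-contained.
A_true = np.array([[0.5, 0.2],
                   [0.1, 0.4]])
y = np.zeros((200, 2))
for t in range(1, 200):
    y[t] = A_true @ y[t - 1] + rng.normal(scale=0.1, size=2)

# Design matrix of lagged values: [1, y1_{t-1}, y2_{t-1}]
X = np.hstack([np.ones((199, 1)), y[:-1]])
Y = y[1:]

# OLS solves for the intercept and the coefficient matrix jointly,
# so each variable's forecast uses the lags of ALL variables.
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
intercept, A_hat = coef[0], coef[1:].T

# One-step-ahead forecast for both variables at once.
forecast = intercept + A_hat @ y[-1]
```

Note how `A_hat` captures exactly the linear interdependencies mentioned above: the off-diagonal entries are the influence of each variable's past on the other variable's future.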

Some of you may wonder: why not use the ARIMAX model, which lets us extend ARIMA with explanatory variables? The problem with ARIMAX is that it requires the future values of the exogenous variables in order to predict the dependent variable. VAR forecasts those values itself, which is pretty convenient for our kind of problem, since we do not have any information about future variable behavior.

In consultation with a domain expert, we chose several input variables that could be intercorrelated in some way and that also affect the output we've been trying to predict.

The first thing we should do is inspect the correlation between the variables, to confirm that they depend on each other.
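With the variables in a pandas dataframe, that inspection is a single call; the column names and simulated values below are illustrative stand-ins for our real data:

```python
import numpy as np
import pandas as pd

# Illustrative data standing in for our real variables.
rng = np.random.default_rng(1)
var_1 = rng.normal(size=300)
df = pd.DataFrame({
    "var_1": var_1,
    "var_2": rng.normal(size=300),                       # roughly independent of var_1
    "var_3": -var_1 + rng.normal(scale=0.3, size=300),   # strong negative link to var_1
})

# Pairwise Pearson correlations between all columns.
corr = df.corr()
print(corr.round(2))
```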

Here’s the correlation matrix.

[Image: correlation matrix of the input variables]

It is noticeable that var_3 correlates only weakly with var_2, but has a strong negative correlation with var_1, for example. That's why we should include it anyway. Once the existence of correlation has been inspected and confirmed, we can continue with the analysis.

Running a VAR model

PyFlux’s VAR model accepts three parameters: the data, the number of lags to use for the autoregression and the order of differencing, if needed (remember, we do this to remove non-stationarity). Just like the ARIMA model, it uses maximum likelihood estimation, with the AIC and BIC criteria for model selection.

Here’s the model summary.

[Image: VAR model summary]

The only tricky part here was determining the optimal value for the lags parameter. We used a function that runs the model with each value in range(1, 30) for the lags parameter and picks the one that yields the minimal RMSE. There may be a more elegant way, and I'm certainly going to investigate it in the future.
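A sketch of that search, shown here with a generic numpy OLS fit rather than the actual PyFlux call, and with simulated stand-in data:

```python
import numpy as np

def fit_var(y, p):
    """OLS fit of a VAR(p): returns the stacked coefficient matrix."""
    n, k = y.shape
    # Design matrix rows: [1, y_{t-1}, ..., y_{t-p}] for each usable t.
    X = np.hstack([np.ones((n - p, 1))] +
                  [y[p - i - 1:n - i - 1] for i in range(p)])
    coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return coef

def one_step_forecast(y, coef, p):
    """Predict the next observation from the last p observations."""
    x = np.concatenate([[1.0]] + [y[-i - 1] for i in range(p)])
    return x @ coef

# Simulated stand-in for our two-variable dataset.
rng = np.random.default_rng(2)
A = np.array([[0.6, 0.2], [0.1, 0.5]])
y = np.zeros((300, 2))
for t in range(1, 300):
    y[t] = A @ y[t - 1] + rng.normal(scale=0.1, size=2)

train, test = y[:250], y[250:]
best_p, best_rmse = None, np.inf
for p in range(1, 30):
    coef = fit_var(train, p)
    # Walk forward through the test set, one step at a time.
    preds = [one_step_forecast(y[:250 + i], coef, p)
             for i in range(len(test))]
    rmse = np.sqrt(np.mean((np.array(preds) - test) ** 2))
    if rmse < best_rmse:
        best_p, best_rmse = p, rmse
print(best_p, round(best_rmse, 3))
```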

You can see forecast results below.

[Image: VAR forecast results (1)]

[Image: VAR forecast results (2)]

And even more…

[Image: VAR forecast results (3)]

[Image: VAR forecast results (4)]

VAR produced more stable and accurate results, and we were pretty satisfied with it. But then some other models appeared, and we couldn't resist trying them. Below you can see Facebook Prophet applied to our data.

Facebook Prophet in action

Prophet is one of the newest buzzwords in the field of time series forecasting. It is concise and easy to use, and robust to seasonal data, trend changes and outliers. You can feed it holidays or other dates you expect to show unusual behaviour, or use it to extract trend and detect seasonality. Finally – it has everything a good forecasting model should have. Well, almost everything. That will be discussed in the conclusion. But enough of the lauding, let's see it in action.

Prophet has implementations in both Python and R. The docs can be found here, so I'm not going to bother you with the code. The model requires a time series dataframe with two columns – ds (datestamp) and y (the output to forecast).
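Preparing that dataframe is usually just a rename; the raw column names and values below are illustrative, and the Prophet calls are left as comments since they require the package (published as fbprophet at the time of writing, prophet in newer releases):

```python
import pandas as pd

# Illustrative raw data; in practice this comes from our historical dataset.
raw = pd.DataFrame({
    "date": pd.date_range("2017-01-01", periods=5, freq="D"),
    "output": [10.2, 11.5, 9.8, 12.1, 11.0],
})

# Prophet expects exactly these two column names: ds and y.
df = raw.rename(columns={"date": "ds", "output": "y"})

# With the package installed, fitting and forecasting would then be:
#   from prophet import Prophet   # or: from fbprophet import Prophet
#   m = Prophet()
#   m.fit(df)
#   future = m.make_future_dataframe(periods=30)
#   forecast = m.predict(future)
```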

Prophet has many parameters that can be set, and we tried some of them, like growth, changepoints, holidays, interval_width and weekly_seasonality. Growth is used to specify the trend, linear or logistic. The other parameters are quite intuitive.

You can see forecast results below.

[Image: Prophet forecast results]

Prophet gives us upper and lower bounds, so we can make plans based on these values as well. It also decomposes the output into trend and seasonal components. You can see the filtered output below; the upper and lower bounds are removed for better readability.

[Image: filtered Prophet output]

Just like with the ARIMA model, the only flaw we noticed here is the lack of support for intercorrelations between multiple variables when forecasting an output. That's why we stuck with the VAR model. We can only imagine how powerful Prophet could be if it were upgraded with this functionality. For now, we can only wait and hope that Facebook will surprise us one more time.

Our client was pretty satisfied with these forecast results: we managed to reduce the RMSE from 3.56 to 1.44, which was beneficial for decision and strategy making. What's left now is model improvement based on the results and data gathered from its utilization, so I will, for sure, continue to share our use cases with you and inform you of any important updates.
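For reference, the RMSE figures we compare are computed in the usual way; a minimal sketch with illustrative numbers, not our client's data:

```python
import numpy as np

def rmse(actual, predicted):
    """Root mean squared error between two equal-length sequences."""
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    return float(np.sqrt(np.mean((actual - predicted) ** 2)))

# Illustrative values only.
error = rmse([3.0, 5.0, 7.0], [2.0, 5.0, 9.0])
```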

Hopefully, this article was interesting and useful. If you have any questions, please do not hesitate to contact me. In my next post, we'll be talking about anomalies, their detection and proper treatment.

Comments 2

  1. Maria Restrepo

    Hi Valentina,

    Thank you for the article, it is very interesting. I would like to ask if you have any idea what the “t” second column in the forecast data frame is.

    Thanks!

    1. Valentina Djordjevic (Post Author)

      Hello, Maria.

      As far as I know, ‘t’ is an auxiliary column derived from the date column ‘ds’. It is created in the process of setting up the dataframe, along with the ‘t_ix’, ‘y_scaled’ and ‘cap_scaled’ columns. These columns are used during both fitting and predicting, when initializing growth and predicting the trend. 🙂
