BVAR Minnesota Prior

The Minnesota BVAR is a Bayesian VAR model with a prior developed by Litterman and Sims at the University of Minnesota. Similar to how a penalized model shrinks parameters towards zero, the Minnesota prior shrinks them towards a random walk. The prior also specifies a larger variance for shorter lags, implying a prior belief that shorter lags have a larger impact than longer ones.

The Bayesian Vector Autoregression with Minnesota prior (BVAR) is a Bayesian version of the Vector Autoregressive (VAR) model (Advanced: VAR), estimated with a Minnesota prior on its coefficients.

Bayesian inference

Classical statistical models adhere to a frequentist approach, where the assumption is that there exists an underlying true model. If the model were estimated on many random samples, the estimate obtained from the data would lie within a fixed distance α of the true model in a certain proportion of the samples.

Bayesian statistics begin with a prior, which describes a prior belief about the underlying process generating the data. After the data is observed, the prior belief and the data are combined through Bayes' theorem into a posterior distribution, which describes the probability of different parameter values given both the prior beliefs and the data.
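This combination of prior and data can be made concrete with a minimal, self-contained example (not part of the BVAR model itself): a conjugate normal-normal update, where the posterior mean and variance have closed forms. All numbers here are illustrative assumptions.

```python
import numpy as np

# Prior belief: the unknown mean mu ~ N(prior_mean, prior_var).
prior_mean, prior_var = 0.0, 4.0

# Observed data, assumed y_i ~ N(mu, data_var) with known variance.
data_var = 1.0
y = np.array([1.2, 0.8, 1.5, 1.1])
n = len(y)

# Bayes' theorem in closed form: posterior precision is the sum of the
# prior precision and the data precision, and the posterior mean is a
# precision-weighted average of the prior mean and the data.
post_var = 1.0 / (1.0 / prior_var + n / data_var)
post_mean = post_var * (prior_mean / prior_var + y.sum() / data_var)
```

With more data the posterior mean moves towards the sample mean and away from the prior, mirroring how the data gradually dominates the prior belief.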

Minnesota (Litterman-Sims) prior

The Minnesota prior developed by Litterman and Sims shrinks the parameter estimates towards a random walk. The prior mean of the first-lag coefficient of a non-stationary variable is set to a value close to 1, and for a stationary variable the prior mean is set to 0. The prior variance is set to a large value for the first lag, and shrinks towards zero for longer lags. This implies a prior belief that shorter lags have a higher impact than longer ones.
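The structure of these prior moments can be sketched as follows. This is a simplified illustration, not Indicio's exact implementation: the tightness parameter `lam`, the polynomial decay rule (one common choice), and the omission of cross-variable scaling are all assumptions.

```python
import numpy as np

def minnesota_prior(K, p, nonstationary, lam=0.2, decay=2.0):
    """Prior mean and standard deviation for a K-variable VAR(p).

    nonstationary: list of K booleans, one per variable.
    lam: overall tightness (assumed value, not Indicio's setting).
    decay: controls how fast the prior variance shrinks with lag length.
    """
    # Prior mean: first own lag is 1 for a non-stationary variable and
    # 0 for a stationary one; all other coefficients have mean 0,
    # which centers the prior on a random walk.
    mean = np.zeros((K, K * p))
    for i in range(K):
        mean[i, i] = 1.0 if nonstationary[i] else 0.0

    # Prior standard deviation: large at lag 1, shrinking for longer
    # lags, so longer lags are pulled more strongly towards zero.
    sd = np.zeros((K, K * p))
    for lag in range(1, p + 1):
        for i in range(K):
            for j in range(K):
                sd[i, (lag - 1) * K + j] = lam / lag**decay
    return mean, sd
```

For example, `minnesota_prior(2, 3, [True, False])` gives the first variable a prior mean of 1 on its own first lag and the second variable a prior mean of 0, with the prior standard deviation at lag 3 a ninth of that at lag 1.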

How does Indicio fit a BVAR model with the Minnesota prior?

Each variable is tested for stationarity: a stationary variable receives a prior mean of 0 on its first lag, while a non-stationary variable receives 0.9.
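This step can be sketched with a crude unit-root check. The check below (an OLS AR(1) coefficient compared against a threshold) and the 0.95 cutoff are assumptions for illustration; Indicio's actual stationarity test is not specified here. Only the resulting prior means, 0.9 and 0, come from the text above.

```python
import numpy as np

def first_lag_prior_mean(y, threshold=0.95):
    """Return the first-lag prior mean for one variable.

    A series whose estimated AR(1) coefficient is near 1 is treated as
    non-stationary (prior mean 0.9); otherwise stationary (prior mean 0).
    """
    y = np.asarray(y, dtype=float)
    # OLS slope of y_t regressed on y_{t-1} (no intercept, for brevity).
    rho = (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])
    return 0.9 if abs(rho) >= threshold else 0.0
```

A random walk would typically be assigned 0.9 and white noise 0, matching the shrinkage targets described above.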

The model is then fitted to the data by drawing samples with a Markov Chain Monte Carlo (MCMC) sampling algorithm. These samples are drawn in proportion to how probable they are given the data and the prior. In this way, a large sample of parameter sets is obtained, representing the posterior density over the parameter space. These samples are then used to produce a sample of forecasts, which represents the density of the forecast given the priors, the data and the model.
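The final step, turning parameter draws into a forecast density, can be illustrated for a univariate AR(1). This is not Indicio's sampler: the posterior draws below are simulated stand-ins for what a real MCMC run would produce, and all numbers are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
y_last = 1.0       # last observed value
horizon = 4        # forecast steps ahead
n_draws = 2000

# Stand-in posterior draws for the AR coefficient and shock s.d.
# (a real MCMC run would supply these).
phi_draws = rng.normal(0.9, 0.05, size=n_draws)
sigma_draws = np.abs(rng.normal(1.0, 0.1, size=n_draws))

# Each parameter draw yields one simulated forecast path; together
# the paths approximate the forecast density.
paths = np.empty((n_draws, horizon))
for k in range(n_draws):
    y = y_last
    for h in range(horizon):
        y = phi_draws[k] * y + rng.normal(0.0, sigma_draws[k])
        paths[k, h] = y

# Point forecasts and uncertainty bands are read off the sample of paths.
median = np.median(paths, axis=0)
lo, hi = np.percentile(paths, [5, 95], axis=0)
```

Because each path carries both parameter uncertainty (from the draws) and shock uncertainty (from the simulated innovations), the spread between `lo` and `hi` reflects the full forecast density rather than a single point forecast.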
