Vector autoregressive (VAR) models are highly parameterized and can easily overfit the data, leading to poor forecast accuracy. Bayesian VAR (BVAR) models address this issue by introducing prior information that regularizes the parameter estimates. Bayesian inference combines data and prior beliefs using Bayes' theorem, resulting in a posterior probability distribution for the model parameters and a predictive distribution for future values of the time series.
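In symbols (with notation chosen here purely for illustration), writing $\theta$ for the VAR parameters and $y_{1:T}$ for the observed sample, these two objects are

$$
p(\theta \mid y_{1:T}) \propto p(y_{1:T} \mid \theta)\, p(\theta),
\qquad
p(y_{T+1:T+H} \mid y_{1:T}) = \int p(y_{T+1:T+H} \mid \theta, y_{1:T})\, p(\theta \mid y_{1:T})\, d\theta .
$$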
The steady-state prior complements the shrinkage properties of the Minnesota prior by adding economically meaningful information about the long-run level of the variables in the model.
The steady-state prior[^1] allows forecasters to incorporate prior knowledge about the mean, or steady state, of each time series. This long-run mean plays a crucial role in forecasting, since forecasts from stationary models converge to the steady state at long horizons.
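Formally, the steady-state prior builds on the mean-adjusted VAR form of Villani[^1]; in the notation used here (chosen for illustration),

$$
\Pi(L)\,(y_t - \Psi d_t) = \varepsilon_t,
\qquad
\Pi(L) = I - \Pi_1 L - \cdots - \Pi_p L^p,
$$

where $d_t$ collects the deterministic terms (typically just a constant) and $\Psi d_t$ is the steady state toward which forecasts converge when the process is stationary. Because the prior is placed directly on $\Psi$, beliefs about long-run levels can be expressed in the units of the data.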
Empirical evidence shows that incorporating prior information about steady states improves forecast accuracy at both short and long horizons.
The first step is to specify steady-state priors for each time series in the model. Each prior is assumed to be normally distributed and is characterized by a prior mean (the forecaster’s best guess of the long-run level) and a prior standard deviation (reflecting uncertainty around that mean).
For example, if the VAR model includes inflation for a country with a 2% inflation target, a natural choice is a prior mean for the steady state of 2% (or 0.02 if inflation is expressed in decimal form). Depending on how certain the forecaster is about this information, the prior standard deviation can be set to either of the following (the resulting intervals are checked in the short sketch after the list):
- a low value, for example 0.1, giving a narrow 95% prior probability interval from approximately 1.8 to 2.2, or
- a high value, for example a standard deviation of 2, giving a wide 95% prior probability interval from approximately -2 to 6.
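As a quick check of the interval arithmetic above, the sketch below computes the 95% prior probability intervals implied by a normal steady-state prior; the variable names and the use of `scipy.stats` are illustrative and not tied to any particular package.

```python
from scipy.stats import norm

# Steady-state prior for inflation: mean 2% with two alternative standard deviations.
prior_mean = 2.0
for prior_sd in (0.1, 2.0):
    lower, upper = norm.interval(0.95, loc=prior_mean, scale=prior_sd)
    print(f"sd = {prior_sd}: 95% prior interval ({lower:.2f}, {upper:.2f})")

# sd = 0.1: 95% prior interval (1.80, 2.20)
# sd = 2.0: 95% prior interval (-1.92, 5.92)
```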
A low prior standard deviation implies that the steady-state prior has a strong influence on the fitted model, while a high standard deviation allows the data to play a larger role. This flexibility makes it possible to use informative steady-state priors for well-understood variables while remaining agnostic about the long-run behavior of others.
The model is then fitted to the data by simulating from the posterior distribution of the VAR parameters using an efficient blocked Gibbs sampling algorithm. The posterior parameter draws are subsequently used to generate simulated forecast paths, which together represent the full predictive distribution across all forecast horizons.
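As a rough illustration of the forecasting step (not the actual implementation behind the blocked Gibbs sampler), the sketch below takes posterior draws of the lag matrices, steady states, and error covariances as given and iterates the mean-adjusted VAR forward with simulated shocks; all function names and array shapes are assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_forecast_paths(y_hist, Pi_draws, mu_draws, Sigma_draws, horizon):
    """Simulate one forecast path per posterior draw from a mean-adjusted VAR(p).

    y_hist      : (T, k) array of observed data; the last p rows start each path.
    Pi_draws    : (n_draws, p, k, k) lag coefficient matrices, one set per draw.
    mu_draws    : (n_draws, k) steady-state (long-run mean) vectors.
    Sigma_draws : (n_draws, k, k) error covariance matrices.
    horizon     : number of steps ahead to simulate.
    Returns an (n_draws, horizon, k) array of simulated future paths.
    """
    n_draws, p, k, _ = Pi_draws.shape
    paths = np.empty((n_draws, horizon, k))
    for d in range(n_draws):
        Pi, mu, Sigma = Pi_draws[d], mu_draws[d], Sigma_draws[d]
        # Work with deviations from the steady state: x_t = y_t - mu.
        lags = list(y_hist[-p:][::-1] - mu)          # most recent lag first
        for h in range(horizon):
            shock = rng.multivariate_normal(np.zeros(k), Sigma)
            x_next = sum(Pi[i] @ lags[i] for i in range(p)) + shock
            paths[d, h] = mu + x_next                # back to the original scale
            lags = [x_next] + lags[:-1]
    return paths
```

Pointwise quantiles of `paths` across the draw dimension then give fan-chart style predictive intervals at each forecast horizon.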
[^1]: Villani, M. (2009). Steady-state priors for vector autoregressions. *Journal of Applied Econometrics*, 24(4), 630-650. [[pdf]](https://doi.org/10.1002/jae.1065)