As humans we are riddled with biases, unconscious and conscious ones. Even the most analytical, data-heavy person can’t escape this mental trap.
Biases lead us to base opinions and decisions on our own preconceptions of what we expect the outcome of research or an analysis to be.
Because of that, the results of a forecast aren’t allowed to speak for themselves; instead, they act as support for whatever idea the forecast analyst is already leaning towards. Biases are not limited to people: they also leak into the models they build.
We’ll cover how to avoid the influence of bias, but first let’s learn about a couple that drive inaccurate forecasts.
Confirmation bias is the tendency to confirm preconceptions by tweaking data and models so that they conform to them. It happens when we focus solely on the information that confirms our beliefs rather than on the information that challenges them. Forecast errors are a typical example: instead of trying to understand why there is an error, it’s easier to look at the results that support the preconception.
The danger of confirmation bias arises when it influences how a forecast is used, for example when adjusting the forecast model. We’re only human, and when a lot is at stake it is easy to fall victim to what we want to see rather than what is actually there.
Overfitting means building an overly complex model that describes the noise (randomness) in the dataset rather than the underlying statistical relationship.
Overfitting occurs often and many people (or their forecasting systems) do it unknowingly every day.
It occurs when a statistical model is allowed to fit as many parameters as needed to explain every deviation in the data. It’s like adding a trend line to a plot in Excel and adding polynomial terms to it until the trend line follows the historical data perfectly.
With enough parameters and enough time, a model can be fitted to almost any dataset. But there is no guarantee that the model will generate good forecasts, or even that it should be used at all.
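The Excel trend-line analogy above can be sketched in a few lines. This is a toy example with made-up data: as the polynomial degree grows, the fit hugs the training points ever more closely, while the error on held-out points tells a different story.

```python
import numpy as np

# Toy data: a sine curve plus noise, split into train and holdout halves.
rng = np.random.default_rng(42)
x = np.linspace(-1.0, 1.0, 60)
y = np.sin(np.pi * x) + rng.normal(0.0, 0.3, size=x.size)
x_train, y_train = x[::2], y[::2]   # even indices
x_test, y_test = x[1::2], y[1::2]   # odd indices

def rmse(pred, actual):
    return float(np.sqrt(np.mean((pred - actual) ** 2)))

# Fit polynomial "trend lines" of increasing degree, like adding
# terms to an Excel trend line one by one.
results = {}
for degree in (1, 3, 10):
    coeffs = np.polyfit(x_train, y_train, degree)
    results[degree] = (
        rmse(np.polyval(coeffs, x_train), y_train),  # training error
        rmse(np.polyval(coeffs, x_test), y_test),    # holdout error
    )

for degree, (train_err, test_err) in results.items():
    print(f"degree {degree:>2}: train RMSE {train_err:.3f}, test RMSE {test_err:.3f}")
```

Training error can only shrink as parameters are added, because each lower-degree polynomial is a special case of the higher-degree one; the holdout error is what reveals whether the extra flexibility describes signal or noise.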
The conjunction fallacy is a common reasoning error in which we believe that two events happening together is more probable than one of those events happening alone. From a forecasting perspective this often appears in scenario analyses involving more than one event, which result in a conditional forecast whose probability is lower than intuition suggests.
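A minimal numeric sketch of the point, with hypothetical probabilities chosen for illustration: the joint scenario can never be more likely than either event alone.

```python
# Hypothetical scenario probabilities (made up for illustration).
p_recession = 0.30       # P(A): a recession hits next year
p_demand_drop = 0.60     # P(B): demand for the product drops

# If the events are independent, the joint probability is the product.
# In general, P(A and B) <= min(P(A), P(B)) always holds, yet intuition
# often rates the combined scenario as the more plausible one.
p_both = p_recession * p_demand_drop

print(f"P(recession)            = {p_recession:.2f}")
print(f"P(demand drop)          = {p_demand_drop:.2f}")
print(f"P(both, if independent) = {p_both:.2f}")
```

A multi-event scenario forecast therefore carries a lower probability than any of its single-event components, which is exactly what the fallacy makes us overlook.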
Identifying the leading indicators specific to the organization is an effective way to approach a forecast free of bias. By using a Lasso model to screen the pool of candidate indicators, shrinking the irrelevant ones to zero, you can generate the optimal group of indicators to use as a basis for your forecasting.
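To make the selection step concrete, here is a minimal coordinate-descent sketch of the Lasso on synthetic data. The indicators, coefficients, and penalty value are all assumptions for illustration; the point is that the L1 penalty drives the coefficient of the irrelevant indicator to exactly zero.

```python
import numpy as np

# Synthetic example: three candidate leading indicators, of which only
# the first and third actually drive the target series (made-up setup).
rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 3))
y = 2.0 * X[:, 0] + 1.5 * X[:, 2] + rng.normal(0.0, 0.1, size=n)

def lasso_cd(X, y, lam, n_iter=200):
    """Plain coordinate-descent Lasso (L1-penalised least squares)."""
    beta = np.zeros(X.shape[1])
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            # Partial residual with feature j's contribution removed.
            r = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r
            # Soft-thresholding: weak correlations are set to exactly zero.
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return beta

beta = lasso_cd(X, y, lam=10.0)
selected = [j for j, b in enumerate(beta) if abs(b) > 1e-8]
print("coefficients:", np.round(beta, 3))
print("selected indicators:", selected)
```

In practice a library implementation (for example `sklearn.linear_model.Lasso`) with cross-validated penalty selection would replace this sketch, but the mechanism of zeroing out uninformative indicators is the same.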
After the relevant leading indicators are identified, multivariate forecast models can be applied to forecast the data moving forward. By applying a large number of econometric forecast models and weighting them according to their accuracy, you ensure sound forecasting results.
All forecast models have their advantages. By weighting a large set of models, we capture the strengths of each individual model, and combining forecasts this way has repeatedly been shown in the statistical literature to improve accuracy.
Whether your goal is to increase market share or safeguard against volatility, the road to making decisions confidently lies in generating accurate forecasts you can trust.