Forecasting used to be a numbers game - pick a method, fit it, ship a point estimate, move on. Serious teams don't work that way anymore. The job now is figuring out which market signals actually matter, which ones move first, and whether they pull their weight when you plug them into the right model.
That's the gap Indicio was built to close. It's a forecasting platform that puts econometric, AI, and machine-learning approaches within reach of teams that don't want to write code or babysit pipelines - built around automated forecasting, backtesting, leading indicator analysis, and explainable outputs. (indicio.com)
One feature quietly does a lot of the work: star influence indicator analysis. Indicators get scored with stars so you can tell, at a glance, which ones carry real predictive weight. More stars, stronger signal. But there's a less obvious lesson in the system:
More stars are better. More indicators are not.
A model leaning on two or three highly starred indicators will sometimes beat one stuffed with a dozen mediocre ones. That isn't a bug - it's the point.
What the star influence analysis does
It ranks leading indicators by how much they actually help your forecast. More stars means a strong signal. Fewer stars, useful but less so. One star, marginal.
The goal isn't to hoard indicators. It's to find the ones that consistently make the forecast better. Most businesses sit on far more potential drivers than they can use - macro data, demand signals, pricing, supply chain, search trends, weather, sentiment, competitor moves. Some lead the target. Some lag. Some are duplicates dressed in different clothes. The star view cuts through the pile and surfaces the few inputs that earn their place.
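The idea behind a star-style ranking can be sketched in a few lines. This is a toy illustration, not Indicio's actual scoring (which isn't public): each candidate gets scored by how much a lagged fit reduces out-of-sample error versus a naive baseline, and the score maps to a made-up star scale.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Synthetic target driven by one true leading indicator plus noise
lead = rng.normal(size=n)
target = np.empty(n)
target[0] = rng.normal()
target[1:] = 0.8 * lead[:-1] + 0.3 * rng.normal(size=n - 1)

candidates = {
    "true_lead": lead,      # hypothetical indicator names
    "noise_a": rng.normal(size=n),
    "noise_b": rng.normal(size=n),
}

def oos_gain(x, y, split=150):
    """Out-of-sample error reduction from a one-lag OLS fit
    versus a mean-only baseline forecast."""
    xl, yt = x[:-1], y[1:]            # x leads y by one period
    beta = np.polyfit(xl[:split], yt[:split], 1)
    pred = np.polyval(beta, xl[split:])
    base = np.mean((yt[split:] - yt[:split].mean()) ** 2)
    mse = np.mean((yt[split:] - pred) ** 2)
    return 1 - mse / base             # fraction of baseline error removed

def stars(gain):
    """Hypothetical mapping from error reduction to a 1-5 star scale."""
    return min(5, max(1, 1 + int(gain * 6)))

for name, x in candidates.items():
    g = oos_gain(x, target)
    print(f"{name:10s} gain={g:+.2f} {'*' * stars(g)}")
```

The true leading indicator removes most of the baseline error and earns a high star count; the noise series hover around zero gain and bottom out at one star.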
Why more stars are better
A higher star rating means an indicator has shown a real, useful relationship to the variable you're forecasting - better timing, a tighter statistical link, stronger out-of-sample performance, more consistent behavior.
None of this is new. The stats literature has been saying it for decades. Tibshirani's original Lasso paper made the case: shrinkage and selection give you a model that's both interpretable and well-behaved. (OUP Academic) In time series the stakes are higher - you're dealing with noise, structural breaks, seasonality, and relationships that drift. A good leading indicator has to do more than line up with history. It has to help predict periods the model hasn't seen.
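Tibshirani's point is easy to see in miniature with scikit-learn's `Lasso` on synthetic data - a sketch, with only the first two of twelve candidate predictors actually driving the target:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, p = 300, 12
X = rng.normal(size=(n, p))
# Only the first two predictors carry signal; the rest are noise
y = 1.5 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * rng.normal(size=n)

model = Lasso(alpha=0.1).fit(X, y)
selected = np.flatnonzero(model.coef_)
print("kept predictors:", selected)
```

The L1 penalty drives the weak coefficients to exactly zero, leaving a model that is both sparse and readable - shrinkage and selection in one step.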
Why two or three indicators can beat ten
It feels backwards. More information should mean better predictions, right? Not really. Forecasting models reward useful volume, not raw volume. Pile on indicators and a few things go sideways:
- Noise creeps in. Weak indicators add variance the model can't separate from signal.
- The model overfits. It memorizes patterns that don't repeat.
- Variables overlap. Half your indicators say the same thing, and the model double-counts.
- You run out of history. Short series can't support a wide variable set.
- You lose the story. Three clear inputs are easy to explain. Twenty fuzzy ones are a black box even when they work.
Research on Lasso and related methods keeps landing on the same conclusion: picking the most informative predictors makes models both more accurate and more stable. (Proceedings of Machine Learning Research)
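The cost of raw volume is easy to reproduce. In this sketch (synthetic data), ordinary least squares is fit once with just the two real drivers and once with 38 extra noise columns on a short training history - the kind most business series offer:

```python
import numpy as np

rng = np.random.default_rng(2)

def make(n, p=40):
    """Target depends on two real signals; the other columns are pure noise."""
    X = rng.normal(size=(n, p))
    y = X[:, 0] + 0.7 * X[:, 1] + 0.5 * rng.normal(size=n)
    return X, y

Xtr, ytr = make(50)    # short history, as in most business forecasting
Xte, yte = make(400)

for k in (2, 40):
    # Fit OLS using only the first k candidate indicators
    beta, *_ = np.linalg.lstsq(Xtr[:, :k], ytr, rcond=None)
    mse = np.mean((yte - Xte[:, :k] @ beta) ** 2)
    print(f"{k:2d} indicators -> test MSE {mse:.3f}")
```

With 40 coefficients estimated from 50 observations, the model memorizes the training noise and its test error blows up; the two-indicator fit stays close to the irreducible error.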
Why models prefer a leaner indicator set
A lot of forecasting models actively avoid unnecessary complexity, especially in multivariate time series. Every new indicator adds another set of relationships to estimate.
Indicio offers a wide library - VAR, Structural VAR, VECM, VARX Lasso, VARMA, ARDL, VARX Lag Group Lasso, VAR Elastic Net, HVAR, BVAR, TVP BVAR with stochastic volatility, Markov Switching VAR, MIDAS, Random Forest VAR, and others. All of them want disciplined input. VAR and BVAR drown in parameters once you push them into high dimensions; the Bayesian VAR literature recommends shrinkage to keep estimates stable. (ScienceDirect) Lasso and Elastic Net solve it from a different angle - Lasso drives weak coefficients to zero, Elastic Net handles correlated predictors. (OUP Academic)
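The parameter explosion in unrestricted VARs is simple arithmetic: each equation regresses one series on p lags of all k variables plus an intercept, so the total coefficient count grows quadratically in the number of series.

```python
def var_params(k, p):
    """Coefficient count for a k-variable VAR(p): k equations,
    each with k*p lag terms plus an intercept."""
    return k * (k * p + 1)

for k in (3, 10, 20):
    print(f"{k:2d} variables, 4 lags -> {var_params(k, 4):5d} parameters")
# 3 variables -> 39, 10 -> 410, 20 -> 1620
```

A 20-variable VAR(4) already needs 1,620 coefficients - far more than a typical business series can pin down - which is exactly why shrinkage priors and penalized variants exist.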
That's why a model might settle on a handful of high-star indicators. It isn't throwing away information. It's keeping the forecast safe from weak, redundant, or unstable inputs.
Stars make the forecast explainable
Accuracy alone doesn't cut it. Forecasting teams have to explain what they're putting in front of leadership. (indicio.com) Instead of waving at a black box, you can point to the indicators doing the work and say why:
"The forecast is improving because the model picked up three strong leading indicators with high star influence."
That lands with finance, sales, supply chain, and the exec team. It sounds like reasoning, not magic.
Backtesting is non-negotiable
A star rating shouldn't come from a hunch. Indicators have to be tested against actual historical performance - backtesting, rolling-origin evaluation, time series cross-validation. Time series evaluation has to respect time: train on the past, test on the future, never just shuffle data the way you might in standard ML. Rolling-origin evaluation is one of the more honest ways to simulate live forecasting. (Springer)
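Rolling-origin evaluation is simple to sketch: train only on data up to each origin, forecast one step ahead, then roll the origin forward, so the model never sees the future. Here a naive last-value forecast stands in for the model:

```python
import numpy as np

rng = np.random.default_rng(3)
y = np.cumsum(rng.normal(size=120))   # synthetic series (random walk)

errors = []
for origin in range(80, 119):
    history = y[: origin + 1]          # everything up to the origin
    forecast = history[-1]             # naive last-value forecast
    errors.append(y[origin + 1] - forecast)

mae = np.mean(np.abs(errors))
print(f"rolling-origin MAE over {len(errors)} origins: {mae:.3f}")
```

Averaging the error over many origins, rather than one lucky train/test split, is what makes the evaluation an honest simulation of live forecasting.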
A starred indicator carries weight because it isn't "we think this matters." It's "the model has seen this signal in history and found it useful."
When more indicators do help
Small isn't always better. Stock and Watson showed that when many predictors share an underlying factor structure, you can compress them into a few factors and forecast from those. (stock.scholars.harvard.edu) The catch: every extra indicator has to add something new. Ten indicators tracking the same movement aren't ten indicators - they're one signal with redundancy. The star system is how you tell the difference between more data and better data.
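The factor idea shows up even in a toy simulation: generate ten indicators that all track one hidden driver, and a principal-component decomposition finds that a single factor carries almost all of the variance - ten series, one signal.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 200, 10
factor = rng.normal(size=n)
# Ten indicators, each a noisy loading on the same underlying factor
X = np.outer(factor, rng.uniform(0.5, 1.5, size=p)) \
    + 0.3 * rng.normal(size=(n, p))

Xc = X - X.mean(axis=0)                        # center before decomposing
_, s, _ = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)                # variance share per component
print(f"first factor explains {explained[0]:.0%} of the variance")
```

Compressing those ten columns into one factor keeps the information while discarding the redundancy - the favorable case for wide indicator sets.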
A workflow that holds together
- Cast a wide net for candidate indicators.
- Surface the high-star ones.
- Backtest models against them.
- Let the model gravitate toward the strongest, most stable signals.
- Use the star view when you have to explain the forecast.
- Refresh as new data comes in.
Why it matters for the business
Business forecasting tends to fail in one of two ways - teams trust their gut too much, or they overcomplicate the model with marginal variables. Indicio is built to cut down on both, pairing serious academic methods with the usability forecasting teams actually need. (indicio.com) For decision-makers, the payoff is concrete: better accuracy, earlier reads on market shifts, forecasts you can defend in a meeting, and less time burning through indicators by hand.
The takeaway
Forecasting rewards quality over quantity. A four-star indicator is valuable because it brings real predictive signal. A forecast built on two or three of them will often beat one built on a long list of weak ones.
More stars are better. But the best forecast isn't the one with the most indicators. It's the one with the right ones.