4 ways to know (2 obvious, 2 less obvious)

1. Accuracy -> Predicted vs. Actual

Compare predicted values against actuals, usually with a metric like MAPE (mean absolute percentage error).
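As a minimal sketch (the function and the sample demand numbers are made up for illustration), MAPE can be computed like this:

```python
def mape(actual, predicted):
    """Mean absolute percentage error, in percent.
    Assumes no actual value is zero (MAPE is undefined there)."""
    errors = [abs(a - p) / abs(a) for a, p in zip(actual, predicted)]
    return 100 * sum(errors) / len(errors)

# Hypothetical monthly demand figures, purely illustrative.
actual = [100, 120, 90, 110]
predicted = [95, 130, 85, 115]
print(f"MAPE: {mape(actual, predicted):.1f}%")  # MAPE: 5.9%
```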

2. Accuracy -> Predicted vs. Alternative

How does your forecast compare with the user’s prior forecast? This is how you tell whether 80% accuracy is good or bad. If the user’s prior forecast was 60% accurate, you’re doing great. If it was 90% accurate, you’re underperforming.
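A sketch of what that comparison might look like, reusing the mape helper and the actual/predicted series from the snippet above (the baseline numbers are hypothetical, standing in for the user’s prior forecast):

```python
# Hypothetical: the user's prior (e.g., spreadsheet-based) forecast
# for the same periods as above.
prior_forecast = [90, 140, 100, 100]

model_error = mape(actual, predicted)
prior_error = mape(actual, prior_forecast)

# Accuracy here is just 100% minus MAPE, so lower error wins.
if model_error < prior_error:
    print(f"Model beats the baseline: {model_error:.1f}% vs {prior_error:.1f}% MAPE")
else:
    print(f"Model underperforms the baseline: {model_error:.1f}% vs {prior_error:.1f}% MAPE")
```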

3. User Perception

User perception of a forecast is incredibly important. If users don’t trust it, they won’t use it. We don’t want to stop at measuring accuracy; we also need to measure users’ perceptions.
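One lightweight way to quantify perception is a recurring trust survey. A minimal sketch, assuming a 1–5 Likert-style question like “How much do you trust this forecast?” (the responses below are made up):

```python
# Hypothetical 1-5 Likert responses to "How much do you trust this forecast?"
responses = [5, 4, 2, 4, 3, 5, 4, 2, 4, 4]

avg_trust = sum(responses) / len(responses)
share_trusting = sum(r >= 4 for r in responses) / len(responses)

print(f"Average trust: {avg_trust:.1f} / 5")              # 3.7 / 5
print(f"Share rating 4 or higher: {share_trusting:.0%}")  # 70%
```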

4. User Adoption

Usage is the truest indicator of a model’s usefulness. Since no forecast is 100% accurate, the more important question is how useful it is to the user. Adoption tells us whether our models are actually useful.
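Adoption can be tracked as the share of eligible users who actually act on the forecast in a given period. A minimal sketch with made-up usage logs:

```python
# Hypothetical event log: which of the eligible users opened or
# acted on the forecast this month.
eligible_users = {"ana", "ben", "cam", "dee", "eli", "fay"}
active_users = {"ana", "cam", "dee", "fay"}

adoption_rate = len(active_users & eligible_users) / len(eligible_users)
print(f"Monthly adoption: {adoption_rate:.0%}")  # 67%
```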

We start with empirical accuracy, but never stop there.

Usefulness is what we’re ultimately after.