If possible, measure the business value with both lagging and leading metrics
## Some thoughts:
- a lagging metric might be measured first, but it should probably be followed by leading metrics
- the metrics might be complementary, so that moving one metric does not disrupt the desired state of another metric
- a metric might bring *clarity* (and thus tackle the value risk of the effort)
- having a metric seems to play well with tackling complexity, for example, with
- [[Big Picture overview]]
- sometimes, system
> I find that it's generally true that, in areas where no one is publishing measurements/benchmarks of products, the products are generally sub-optimal, often in ways that are relatively straightforward to fix once measured.
> —Dan Luu, [Some reasons to measure](https://danluu.com/why-benchmark/)
## Leading vs lagging
- **Lag metrics**
- when one receives them, the performance that drove them is *already in the past*; once we get them, we cannot change them, they are history
- examples: person's weight, number of customers, gross margin
- **Lead metrics**:
- new behaviours driving success on the lag metrics
- they are *predictive* of achieving the goal and can be *influenced* by the team
- examples: calories eaten, hours of exercise
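A minimal sketch of the lead/lag relationship, using a toy weight-loss model; all numbers and field names below are illustrative assumptions, not from any source:

```python
# Toy illustration: lead metrics are observable *now* and can be acted on,
# while the lag metric (weight change) only shows up after the fact.
daily_logs = [
    {"calories_eaten": 2600, "exercise_hours": 0.0},  # lead metrics, day 1
    {"calories_eaten": 2100, "exercise_hours": 1.0},  # lead metrics, day 2
    {"calories_eaten": 1900, "exercise_hours": 0.5},  # lead metrics, day 3
]

MAINTENANCE_CALORIES = 2200   # illustrative assumption
EXERCISE_BURN_PER_HOUR = 400  # illustrative assumption
CALORIES_PER_KG = 7700        # rough rule of thumb

# The lag metric can only be derived once the days have passed,
# but each lead metric was actionable on the day it was measured.
deficit = sum(
    MAINTENANCE_CALORIES
    + day["exercise_hours"] * EXERCISE_BURN_PER_HOUR
    - day["calories_eaten"]
    for day in daily_logs
)
print(f"expected weight change: {-deficit / CALORIES_PER_KG:+.2f} kg")
```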
![[Screenshot 2022-04-11T17.53.png]]
> Your outcomes are a lagging measure of your habits. Your net worth is a lagging measure of your financial habits. Your weight is a lagging measure of your eating habits. Your knowledge is a lagging measure of your learning habits. Your clutter is a lagging measure of your cleaning habits. You get what you repeat.
>
> —James Clear
> The simple answer is that _we are not taught to think like this_. When people say “be more data driven”, we immediately assume, “oh, we have to measure our business outcomes”. And so we measure things like number of deals closed, or cohort retention rates, or revenue growth, or number of blog visitors. These are all output metrics — and while they are important, they are also not particularly actionable.
>
> Amazon argues that it’s not enough to know your output metrics. In fact, they go even further, and say that you _shouldn’t_ pay much attention to your output metrics; you should pay attention to the controllable input metrics that you _know_ will affect those output metrics. It is a very different thing when, say, your customer support team knows that their performance bonuses are tied to a combination of NPS and ‘% of support tickets closed within 3 days’. If you have clearly demonstrated a link between the former and the latter, then everyone on that team would be incentivised to come up with process improvements to bring that % up!
>
> —from [Working Backwards](https://www.goodreads.com/book/show/53138083-working-backwards)
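A sketch of how such a controllable input metric might be computed, assuming a hypothetical list of ticket records (the field names are made up for illustration):

```python
from datetime import datetime, timedelta

# Hypothetical ticket records; field names are illustrative.
tickets = [
    {"opened": datetime(2022, 4, 1), "closed": datetime(2022, 4, 2)},
    {"opened": datetime(2022, 4, 1), "closed": datetime(2022, 4, 6)},
    {"opened": datetime(2022, 4, 3), "closed": datetime(2022, 4, 5)},
]

# Controllable input metric: % of support tickets closed within 3 days.
within_sla = sum(
    1 for t in tickets if t["closed"] - t["opened"] <= timedelta(days=3)
)
pct = 100 * within_sla / len(tickets)
print(f"{pct:.1f}% of tickets closed within 3 days")
```

The point of the quote holds here: the team can move this number day to day, while an output metric like NPS only responds later.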
Further reading:
- the 2nd discipline from the book [[The Four Disciplines of Execution]]
- [Some Notes on Executive Dashboards by Tom Critchlow](https://tomcritchlow.com/2022/05/06/executive-dashboards/)
- book [Atomic Habits by James Clear](https://www.goodreads.com/book/show/40121378-atomic-habits)
## Some examples of (lagging) metrics
- [Startup Metrics for Pirates: AARRR! - Dave McClure](https://www.youtube.com/watch?v=irjgfW0BIrw) might be orthogonal to, or a *consequence* of, tackling these risks
- it seems that technical metrics, such as [Metrics For Your Web Application's Dashboards](https://sirupsen.com/metrics), should be linkable to the key risks/areas.
- also see [Google's 4 metrics as golden signals](https://sre.google/sre-book/monitoring-distributed-systems/#xref_monitoring_golden-signals)
- https://mobile.twitter.com/aakashg0/status/1528165714196152320
- some value metrics
- customer care:
- lagging: customer care cost
- leading: number of incoming customer requests
- leading: number of faulty systems per day
- device monitoring:
- lagging: # of malfunctioning systems, over time
- leading: % of malfunctioning systems within a week of being delivered by an installation partner, over time
- some technical metrics
- Availability:
- up-time and ["nines"](https://en.wikipedia.org/wiki/High_availability#%22Nines%22)
- [MTTD](https://www.bmc.com/blogs/mttd-mean-time-to-detect/) and [MTTR](https://en.wikipedia.org/wiki/Mean_time_to_repair)
- Performance:
- p50 and p99 latency percentiles of requests (see the sketch after this list)
- [[non-vanity metrics seem to often be percentages (or percentiles)]]
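A sketch computing two of the technical metrics above: the downtime budgets follow directly from the "nines" arithmetic, and the latency values are made up for illustration:

```python
import math

# Downtime budget implied by each "nines" availability level.
MINUTES_PER_YEAR = 365 * 24 * 60
for nines in ("99%", "99.9%", "99.99%"):
    availability = float(nines.rstrip("%")) / 100
    budget = (1 - availability) * MINUTES_PER_YEAR
    print(f"{nines} up-time allows ~{budget:.0f} min of downtime per year")

# Percentile metrics over request latencies (values are illustrative).
latencies_ms = sorted([12, 15, 14, 250, 18, 16, 13, 900, 17, 14])

def percentile(sorted_values, p):
    """Nearest-rank percentile: value below which p% of requests fall."""
    k = max(0, math.ceil(p / 100 * len(sorted_values)) - 1)
    return sorted_values[k]

print("p50:", percentile(latencies_ms, 50), "ms")  # typical request
print("p99:", percentile(latencies_ms, 99), "ms")  # tail latency
```

Both end up expressed as percentages or percentiles, which fits the note above about non-vanity metrics.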
## Further reading
- [[The Four Disciplines of Execution]]
- [Some reasons to measure by Dan Luu](https://danluu.com/why-benchmark/)