Agile methods, when done well, will increase the ability of an organization to deliver value to its customers. Teams deliver frequently. Teams move faster.
In Scrum, the total story points delivered every sprint is the team’s velocity. Increasing velocity is good. Decreasing velocity is bad. That’s the conventional wisdom.
Because we want increasing speed, it’s seductive to track the team’s velocity as a trend. Velocity is easy to measure, and because we measure it every sprint, it’s easy to chart over time.
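To make that concrete, here is a minimal sketch of how such a trend gets built. The sprint numbers are invented purely for illustration, and the straight-line fit is just one simple way to produce the “trend” people are tempted to manage against.

```python
import numpy as np

# Story points completed in each of the last eight sprints (hypothetical data).
velocity = np.array([21, 24, 19, 26, 23, 27, 25, 29])
sprints = np.arange(1, len(velocity) + 1)

# A straight-line fit gives the "trend" that ends up on the dashboard.
slope, intercept = np.polyfit(sprints, velocity, 1)

print(f"Average velocity: {velocity.mean():.1f} points/sprint")
print(f"Trend: {slope:+.1f} points per sprint")
```

Nothing in that arithmetic is wrong; the trouble starts when the output becomes a target.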
Predictability is valuable, so organizations start to set boundaries for what “good” and “bad” variation look like. Good variation is modest and generally increasing. Bad variation is erratic and hard to use for predictions.
Because we know what is “good” and what is “bad,” it is easy to set targets for these metrics. But, guess what? When targets get set, teams get measured against those targets. Who wants to look bad? Nobody. Team members are smart enough to make themselves look good. And there is the problem.
A team that is evaluated against a target will do whatever is needed to achieve that metric. The easiest thing to do is modify behavior to artificially make the data look good. The metrics will get “gamed.”
Measuring, in and of itself, is not bad. Measuring teams, setting targets, evaluating teams against them, and comparing teams to one another: that’s bad.
If I had to sum this up in one line, it is this:
If you set a target, the teams might hit the bullseye, but the bullseye might be bullshit.
Do you have a story of metrics that turned into targets that created unintended consequences? Please comment below.
Do you believe you’ve been around targets that didn’t create unintended consequences? I’d be interested in those, too.