As humans, we struggle with uncertainty and complexity.
To give us the illusion of control we crave, we come up with rules of thumb, like:
Spend 10 percent on tech debt
Bucket 20 percent for innovation
Assign 25 percent to fixing bugs
These guidelines are easy to understand and communicate. But that’s not the real reason people fall back on these comforting percentages.
The ultimate reason for these heuristics is that they are easily defensible and help protect us from potential backlash.
“Are we innovating enough?” becomes ➔ “We’re spending 20% on innovation.”
“Are we spending enough time addressing existing issues?” becomes ➔ “We’ve allocated 25% to fixing bugs.”
“Are we working to get our technical debt under control?” becomes ➔ “We’re spending 10% of our time on technical debt.”
Rules of thumb are neat, convenient, and give the impression that we know what we’re doing. But let’s not fool ourselves: easy to understand doesn’t mean important, or that we’re spending our time on the right things.
Usually, when we use simple heuristics like these, we’re trying to abstract our ignorance away by wrapping it in the illusion of clarity and a coating of plausible deniability.
If you see percentages like this as answers to questions, then you should wonder: what are we potentially sweeping under the rug by over-simplifying reality in this way?
Don’t confuse simplicity with truth.
The mind loves things that are easy to understand, but just because something slides smoothly into our brain doesn’t make it true.
Reality is usually much bumpier than we are able to comprehend.
Nicely provocative and succinct take on a very complex subject. I think these percentages can be a positive or a negative signal, depending on how people use the quantitative measure:
Good: If those %s are leading indicators in a continually monitored system (say, OKRs), they are a great way of answering the question. E.g., “Are we working to get our tech debt under control?” can be answered with “We’re spending 10% of our time on tech debt,” and we believe this will lead to faster deployments, more stable velocity, more time to innovate, better developer experience, etc. as our lagging indicators.
Bad: The measure is an example of the substitution principle: if the question is too difficult to answer, we substitute an easier one.
The trick, I think, is interrogating the system to understand how much is good vs. bad in both the intent and the implementation.
I have always been anti-allocation of capacity.
To me it seems like conceding that we are unable to take in all the currently available information and make effective ongoing prioritization decisions about the one question that matters:
What should we work on next?
Allocations simply reduce the scope of the decision-making to a narrower set of options and, often, allow a different person or team to determine the priorities within that subset of work.
The obvious problem is that the #1 item in one of these allocations may actually be the 5th most important thing the business should be doing next. So why are we doing it now? With allocations, we give up trying to find a way to make better holistic decisions.
The second problem is that allocations are often too small for big initiatives. Take the 10% tech-debt example: if you apply the allocation on a per-sprint basis with a small team, you will effectively never tackle the big tech debt, because it is not always easy to split reengineering work into pieces small enough to make progress that way. So either you don’t measure per sprint (or over any other short duration), or you have to get really good at splitting the work and tackling bits of it over long periods of time.
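To make the arithmetic concrete, here’s a minimal back-of-the-envelope sketch. The team size, sprint length, and size of the reengineering effort are assumptions for illustration, not figures from the discussion above:

```python
# Back-of-the-envelope arithmetic: why a per-sprint tech-debt allocation
# starves big initiatives. All numbers below are illustrative assumptions.

TEAM_SIZE = 4        # developers on the team (assumed)
SPRINT_DAYS = 10     # working days in a two-week sprint (assumed)
ALLOCATION = 0.10    # the 10% tech-debt allocation from the example

capacity_per_sprint = TEAM_SIZE * SPRINT_DAYS              # 40 dev-days
debt_budget_per_sprint = capacity_per_sprint * ALLOCATION  # 4 dev-days

# A hypothetical reengineering effort, sized in dev-days (assumed):
big_initiative = 60

sprints_needed = big_initiative / debt_budget_per_sprint
print(f"Tech-debt budget per sprint: {debt_budget_per_sprint:.0f} dev-days")
print(f"Sprints to finish a {big_initiative} dev-day rewrite: {sprints_needed:.0f}")
# => 15 sprints, i.e. roughly 7 months of two-week sprints, and only if the
#    work can actually be sliced into 4 dev-day chunks. Often it can't.
```

Fifteen sprints is about seven months of calendar time for one mid-sized rewrite, and only under the optimistic assumption that the work slices cleanly.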
Of course, in the end, I almost always give in to this model because it helps politically. It also helps me pretend I can ignore other buckets of work when we plan a roadmap.