15 Comments

Certainly compelling, Nick.

In some environments, I've seen an obsession with Story Points grow into a spectacular distraction.

One effective approach is just to count "cards" – it all evens out in the end.

But even that means we've forgotten to measure the most important thing:

Meaningful client behavior change outcomes.

Completely agree!

Love the article! Thanks for posting it!

Thank you, Andreas!

Great article.

You had me until you started talking about Features. What's a Feature? Just a really big requirement that needs to be broken down into smaller items... which can be right-sized and deployed. So the concept of delivering value in a Feature should go away entirely. In my organization there is such a focus on Features (which is all we measure) that the User Stories created are wrong, because they're really just Tasks to implement the Feature, and teams purposely hold off deploying value until everything under a Feature is complete, which simply provides an excuse to work in an environment that's more waterfall. I despise the term "Feature". I hate that our tools (Rally) even include it. Just think of everything as a requirement. Some big. Some small. Right-size them all, implement, and deploy. It's that simple.

Ehh... have to disagree with you. A Feature may or may not be broken down into smaller pieces. Depends on the Feature.

Nowhere does it say that a Feature cannot be small.

It's funny how we have our own pet peeves. I hate the term "requirement". Nothing is a requirement just because we write it down as a requirement. It's something else that makes it a requirement, not the requirement itself.

That's exactly my point. Why introduce yet another term? Feature, Epic, Capability, User Story, etc. I don't understand why the article treats a "Feature" differently. To quote the article: "You will want to know how long that Epic/Feature will take". To me that implies we're treating the Feature as the deliverable, and the article then turns to studying the correct number of children (85th percentile) for the Feature. Based on the article I would simply look at the Feature, ask the team if it is already right-sized (many Features are), and if it is then nothing needs to happen. If not, right-size it into multiple User Stories. At that point the Feature GOES AWAY because it's been replaced by right-sized User Stories that will be implemented and delivered.

Ah OK, now I get your point! Yes, indeed, they may be right-sized or not.

Probably far too long to answer in a concise reply to a blog post, so I’ll keep it brief. It really depends on your organisation, how it defines (potential) value, and at what level that sits. The same goes for the definition of a feature.

So, do I get it right that you change the 'right size' over time? I'm asking because the 85th-percentile cycle time improved in the cases you showed, yet this was initially the bar you used to set the 'right size'. At what point did you discuss recalibration?

And what if an item is too small to be the right size?

Correct - for all levels. Teams then regularly use the data to see what their right-size is and work against that.

I’m not quite clear on your second question though? Surely if it’s less than or equal to the right-size it’s right-sized? So if your right-size was 7 days it wouldn’t matter if it was 7 days or 7 hours, as both are still right-sized… or am I missing something?
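
As a rough sketch of that check (the cycle times below are made up, and Python is used purely as an illustration, not something from the article): the right-size threshold is simply the 85th percentile of the team's historical cycle times, and anything at or under it counts as right-sized.

```python
from statistics import quantiles

# Hypothetical completed-item cycle times in days; a team would use its own data.
historical_cycle_times = [2, 3, 3, 4, 5, 5, 6, 7, 7, 9, 12]

# The 85th percentile of historical cycle times becomes the right-size threshold.
# Recomputing this regularly recalibrates it as cycle times improve.
right_size = quantiles(historical_cycle_times, n=100)[84]

def is_right_sized(expected_days, threshold=right_size):
    """At or under the threshold is right-sized: against a 7-day threshold,
    7 hours (~0.3 days) and 7 days both pass."""
    return expected_days <= threshold
```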

Thanks for the clarification. Sorry, I wasn't clear on my challenge with the second question. If you want to do some sprint planning, how do you know when you have enough items to fill the capacity of the team? If all items are roughly the same size, you can use the number of items as a limiter. But if all items are right-sized then, as you said, they can be anywhere from 7 hours to 7 days. What do you do then? Ask the team if it will fit until it won't? That is an option and might give a wider range of finished items per sprint. Or am I missing something?

Ah if we’re talking capacity planning for a sprint, that’s different :)

I would do either/both:

A) Run a Monte Carlo sim for what we’ll do in the next 2 weeks and choose a percentile somewhere between 70 and 85

B) Look at the last 2 sprints’ throughput

(Hit send too soon)

The Monte Carlo sim would tell us how many right-sized items we have capacity for, to a given percentile…
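
A minimal sketch of what that simulation could look like, assuming throughput is tracked as items finished per working day; the throughput history, sprint length, and trial count below are made up for illustration.

```python
import random
from statistics import quantiles

# Hypothetical recent history: items finished per working day.
daily_throughput = [0, 1, 2, 1, 0, 3, 1, 2, 0, 1]
SPRINT_DAYS = 10      # working days in a two-week sprint
TRIALS = 10_000

# Each trial resamples one historical day per sprint day and sums the items finished.
totals = [
    sum(random.choice(daily_throughput) for _ in range(SPRINT_DAYS))
    for _ in range(TRIALS)
]

# The 15th percentile of the simulated totals is met or beaten in ~85% of trials,
# i.e. an 85%-confidence count of right-sized items to plan for.
plan_count = int(quantiles(totals, n=100)[14])
print(f"~85% confident of finishing at least {plan_count} right-sized items")
```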
