Estimates suck. They’re always wrong, and of course estimates somehow end up getting treated as exactimates, or worse yet, commitments.
I’m not here to tell you how to do estimates, or how to follow the #NoEstimates movement and not do estimates at all.
What I am here to do (in this article, at least) is tell you to pay attention when you have wide estimate ranges within your team. A wide range isn’t just an indicator of a skills gap. It’s a warning flag that maybe you’re missing something. (It’s also a great learning opportunity!) Regardless of whether the range comes from junior versus senior estimators, or between similarly experienced folks, take some time to dive into a conversation about the delta.
What’s causing the big disconnect? Is it indeed a skills/experience gap? Do the junior team members misunderstand the complexities involved? Take a few minutes to have someone more senior walk them through it, and make sure they understand what’s involved in this particular work item.
This quick learning moment doesn’t have to balloon into a huge training/knowledge-sharing session. Keep it tight and focused; after all, the primary purpose is to get the estimating done. If needed, take the conversation offline so it doesn’t derail whatever meeting you’re in at the moment.
An even more interesting situation is when people with similar experience differ widely on the same work. Don’t simply take the average of the estimates and move on. Take time to explore the rationale behind the extreme delta. In this case, you do want to spend as much time as needed to clarify it.
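To make that concrete, here’s a minimal sketch (in Python, with a made-up threshold; the function name is mine, not a standard tool) of why averaging hides the problem: the mean of a wildly split vote looks perfectly reasonable, which is exactly why the spread deserves a flag instead of a shrug.

```python
# Minimal sketch: flag estimates whose spread is wide enough to warrant a
# conversation instead of quietly averaging them away. The 2x ratio threshold
# is a made-up starting point; tune it to your team's scale.

def needs_discussion(estimates: list[float], max_ratio: float = 2.0) -> bool:
    """Return True when the widest estimate is more than max_ratio times the narrowest."""
    return max(estimates) > max_ratio * min(estimates)

votes = [3, 3, 5, 13]              # one estimator sees something the others don't
average = sum(votes) / len(votes)  # 6.0 -- looks reasonable, hides the disagreement
print(average, needs_discussion(votes))  # 6.0 True -> talk it through before moving on
```

The point isn’t the tooling; it’s that a wide spread should route the team into conversation, not arithmetic.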
Is the delta due to a lack of clarity around the work item? Spend more time getting the details around acceptance criteria, documentation, or whatever’s necessary to ensure everyone’s on the same page.
Is the delta due to a disconnect in the complexity of the work? If needed, whiteboard out workflows, data structures, dependencies, etc., until everyone has roughly the same idea of the complexity involved.
Is the delta due to a misunderstanding of the amount of testing involved? If so, move along, BECAUSE THAT NEVER HAPPENS. Hah. Of course it happens! This situation needs time to rectify, because the team has to make sure testing aligns with the information the business/product owner/customer needs to make effective risk/benefit decisions. Are the testers over-focusing on edge cases that don’t map to the highest-priority risks and values? This can be a regular issue with less mature testers. Is the team missing critical integrations or dependencies that might impact how much testing is really needed? What about coverage matrices for things like browsers, devices, operating systems, etc.?
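Since coverage matrices came up: here’s a minimal sketch (the browser/OS lists are placeholders, not a recommendation) showing how fast those combinations multiply, and why two testers picturing different matrices can land on very different estimates.

```python
# Minimal sketch: enumerate a browser/OS coverage matrix to make the testing
# surface visible during estimation. The lists below are placeholders; a real
# team would prune this to the combinations its users actually have.
from itertools import product

browsers = ["Chrome", "Firefox", "Safari", "Edge"]
systems = ["Windows", "macOS", "Android", "iOS"]

matrix = list(product(browsers, systems))
print(len(matrix), "combinations before pruning")  # 16 -- each one is test effort
for browser, os_name in matrix[:3]:
    print(f"{browser} on {os_name}")
```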
Work through the testing requirements, but ensure you’re focusing on what the business needs from you. Remember: the testers don’t assure quality, and frankly, neither does the delivery team. The business/product owner/customer is the only one who can do that. The team is responsible for passing along the right information to help those people make informed decisions.
Got big estimate deltas? Make sure you’re using them as a trigger for further discussion!
How have you approached handling big deltas in your teams’ estimates? Let me know in the comments!