Tuesday, April 15, 2008

Dropping Testing is Not a (Viable) Option

Yesterday at the opening of the MVP summit I spent a bit of time in an open spaces session on testing practices and pitfalls.  I left after the conversation took a turn where a number of folks were seriously talking about dropping testing in order to meet deadlines.

This from supposed industry thought leaders.  Frightening.

The conversation started with someone putting forth their historical data showing that testing adds 10%-15% to schedule time.  The person with these metrics then said it's an acceptable business decision to cut that testing if your deadlines are in jeopardy and you need to hit milestones for time to market, compliance, or other business drivers.  I became more and more dumbfounded as others piped in and agreed.

This is sheer insanity.  I was so appalled that I couldn't bring up a coherent counter-argument to this lunacy.  Dropping testing to meet a deadline is like cutting off your leg to make the weigh-in for your weight-loss contest: sure, it's an option, but it's one that ought to be rejected by any rational person.

We've all worked with deadlines, and anyone who's worked more than .0432 projects has undoubtedly hit scheduling problems.  Doing poor quality work to meet that deadline never, EVER gets you in a place where you want to be the day after that deadline.  Sure, you may have gotten a release out and beat your competitors to market, but you're screwed after the first 100 people download your software and find it's rife with bugs.

(I'm purposely leaving off discussion about the design goodness that comes from Test Driven Development or Behavior Driven Development/Design.  I can't have this post going over 500 pages...)

The right answer when you're having milestone or deadline issues is to cut scope, not cut quality.  Furthermore, scheduling problems should be identified very, very early in the project -- hopefully you're using some form of feature velocity tracking, which will let you discover issues early on.
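To make that concrete, here's a minimal sketch of the kind of velocity check I'm talking about, assuming a project tracked in story points (the numbers are made up purely for illustration):

    # Hypothetical numbers, purely for illustration: a project tracked in story points.
    completed_per_iteration = [18, 22, 17, 20]   # points finished in the last few iterations
    remaining_points = 240                       # points left in the release backlog
    iterations_until_deadline = 8

    # Average velocity over the recent iterations.
    velocity = sum(completed_per_iteration) / float(len(completed_per_iteration))

    # How many iterations the remaining scope will take at that pace.
    iterations_needed = remaining_points / velocity

    if iterations_needed > iterations_until_deadline:
        overrun = iterations_needed - iterations_until_deadline
        print("Projected to overrun the deadline by about %.1f iterations; "
              "time to talk scope with the client now." % overrun)
    else:
        print("On track at the current velocity.")

The tooling doesn't matter; the point is that even this back-of-the-envelope arithmetic surfaces a schedule gap while there's still time to have the scope conversation instead of quietly dropping tests.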

Those scheduling problems/issues/challenges need to get communicated to the client (internal or external) immediately, along with your mitigation plan.  That mitigation plan can never, ever be "we're going to cut quality to meet this deadline."  Instead, that plan should be "we're working to close the schedule gap, but we need to have a hard look at what scope you want delivered for that release."

One of the best discussions I've ever read about this is in Tom and Mary Poppendieck's Lean Software Development.  They talk about a state government project that had a release date mandated by law, along with a set of functionality also mandated by law.  The project was also biting off a bunch of other functionality.

The project was failing and had no chance of hitting the milestone when Mary came on board.  Her first triage was to cut work on everything that wasn't directly tied to the mandated functionality.  The project team worked only on the features required by law, since those had to be part of the release.  Everything else was shoved off and worked on after they hit that deadline.  They met their deadline not by cutting quality, but by making a brutal cut of features that weren't absolutely necessary for the release.

Cutting scope for a release isn't ever an easy conversation to have with a client, but it's a necessary one.  Furthermore, that conversation is easier if you have it very early in the process, not a week before you're due to ship.

Cutting scope is a reasonable way to hit your deadlines.  Cutting testing is not.

6 comments:

Jeff Hunsaker said...

Dropping testing to save schedule is like me saying I don't want to become an MVP because I was appalled that someone at an MVP conference suggested dropping testing to save schedule. ;-) So maybe the Earth is flat.

Steve Horn said...

I like your ideas and would like to subscribe to your newsletter.

Jon Kruger said...

Great post. Apparently some people view success as just getting something into production, regardless of whether or not it falls apart in 2 years and is totally unmaintainable. I've seen (and replaced) lots of those kinds of projects.

I think the most challenging but most important part of what we do is setting realistic expectations and correctly estimating features.

As I move from project to project, I feel like I add more time to my estimates each time (except for my latest project where LINQ to SQL shrunk everything down for me). You're not just estimating the time it takes to write code, you're also estimating the time it takes to test the code, write unit tests, chase down requirements, ask questions, and fix bugs. Adding those things into the estimates has really helped get us into a situation where we actually get things done on time.

Jim Holmes said...

@Jeff -- heh. Indeed, but the Earth's not flat because the globe I got from Conspiracy, Inc. is round.

@Jon -- Realistic expectations are critical, both from the client side and our geek side. Learning to do good estimates is critical, and it's not an easy craft to learn and get right. That said, dropping critical aspects when the schedule's tight isn't OK!

Anonymous said...

Several years ago I was at a client that had a very detailed waterfall model for their software development lifecycle plan. Then I discovered that right there in their SDLC documentation it stated that if time pressure was a factor it was "okay" to cut portions of the process as long as the product manager agreed.

At one point time did become a factor and the conversation was brought up about what in the process could be cut. I eagerly piped up and requested that the development stage be completely removed from the timeline. Strangely, no one found it funny. They chose to cut testing time to a quarter of what it was supposed to be (and I'm not talking unit testing here, I'm talking QA team testing). :|

I find it appalling that some people believe that perceived success is just as good as real success. If we are all just trying to "get by" or "get it out the door" then I think that reflects poorly on us as developers and does very little to truly help our customers.

Jim Holmes said...

Amen, brother MikeWo! Cutting that much out of QA is just insane. My unit tests help me succeed. QA helps the entire project succeed. (My tests do more than that, but you know what I mean...)
