Yesterday at the opening of the MVP summit I spent a bit of time in an open spaces session on testing practices and pitfalls. I left after the conversation took a turn where a number of folks were seriously talking about dropping testing in order to meet deadlines.
This from supposed industry thought leaders. Frightening.
The conversation started out with someone putting forth their historical data showing that testing adds 10-15% to schedule time. The person with these metrics then said it's an acceptable business decision to cut out that testing if your deadlines are in jeopardy and you need to hit milestones for time to market, compliance, or other business drivers. I became more and more dumbfounded as others piped in and agreed.
This is sheer insanity. I was so appalled that I couldn't bring up a coherent counter-argument to this lunacy. Dropping testing to meet a deadline is like cutting off your leg to make a weigh in for your weight loss contest: sure, it's an option, but it's one which ought to be rejected by any rational person.
We've all worked with deadlines, and anyone who's worked more than one project has undoubtedly hit scheduling problems. Doing poor quality work to meet that deadline never, EVER gets you to a place where you want to be the day after that deadline. Sure, you may have gotten a release out and beaten your competitors to market, but you're screwed after the first 100 people download your software and find it's rife with bugs.
(I'm purposely leaving off discussion about the design goodness that comes from Test Driven Development or Behavior Driven Development/Design. I can't have this post going over 500 pages...)
The right answer when you're having milestone or deadline issues is to cut scope, not cut quality. Furthermore, scheduling problems should be identified very, very early in the project -- hopefully you're using some form of feature velocity tracking, which will let you discover issues early on.
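To make that concrete, here's a minimal sketch of what velocity tracking buys you. All numbers and names here are hypothetical, not from any real project: given the points completed in each iteration so far and the points remaining, you can project how many iterations you actually need and compare that against the calendar.

```python
# Hypothetical sketch: projecting schedule risk from feature velocity.
# Every figure below is made up for illustration.

def projected_iterations(remaining_points: float, velocities: list[float]) -> float:
    """Estimate iterations left using the average velocity to date."""
    avg_velocity = sum(velocities) / len(velocities)
    return remaining_points / avg_velocity

# Three iterations in, the team has averaged 8 points per iteration.
velocities = [9, 7, 8]
remaining = 64  # points of scope still in the backlog

print(projected_iterations(remaining, velocities))  # 8.0
```

If only five iterations remain before the deadline, the 8.0 tells you about the gap now, while there's still time to have the scope conversation, rather than a week before you ship.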
Those scheduling problems/issues/challenges need to get communicated to the client (internal or external) immediately, along with your mitigation plan. That mitigation plan can never, ever be "we're going to cut quality to meet this deadline." Instead, that plan should be "we're working to close the schedule gap, but we need to have a hard look at what scope you want delivered for that release."
One of the best discussions I've ever read about this is in Mary and Tom Poppendieck's Lean Software Development. They talk about a state government project that had a release date mandated by law, along with a set of functionality also mandated by law. The project was also biting off a bunch of other functionality.
The project was failing and had no chance of hitting the milestone when Mary came on board. Her first triage step was to cut all work on features that weren't directly tied to the mandated functionality. The project team worked only on what was required by law, since that had to be part of the release. Everything else was shoved off and worked on after they hit that deadline. They met their deadline, not by cutting quality, but by making a brutal cut of features which weren't absolutely necessary for the release.
Cutting scope for a release isn't ever an easy conversation to have with a client, but it's a necessary one. Furthermore, that conversation is easier if you have it very early in the process, not a week before you're due to ship.
Cutting scope is a reasonable way to hit your deadlines. Cutting testing is not.