Monday, December 05, 2011

31 Days of Testing – Day 5: Choosing What To Test

Updated: Index to all posts in this series is here!

You can’t test everything. Get over it.

Even if you’re doing some form of test-first development, you are not going to go as far down the testing rabbit hole as you might like. In my post Focus on the Why, Not the How, I passed on a true story of a pal who, while paired with an industry “expert,” spent five hours ripping apart a view to get 100% test coverage. The exercise turned out to be useless, and they delivered no value whatsoever that day because the remaining three hours were spent rolling back the work they’d done to that point. #FAIL.

What you can do is carefully choose what to test in order to get the most effective testing possible. There are a great many ways to start narrowing down where you should hammer away with your testing. Most of this article is directed at exploratory or functional testing, but I hope the points apply generally to all forms of testing!

In his chapter from Beautiful Testing, Adam Goucher speaks of evolving toward a focus of “uncover[ing] quality-related information as efficiently as possible.” (Emphasis Adam’s.) This is really a great line, because it’s exactly how I view testing. I want to get the best information about the state of the system, and do it as quickly as possible.

Adam has a great acronym he uses to guide his testing: SLIME, which stands for Security, Languages, RequIrements, Measurement, Existing. (He also freely admits he had to stretch the heck out of “requirements” to make it fit his wonderful acronym.) This is one great way to approach what and how to test existing systems, or to do exploratory or additional testing on a system being developed.

I look at a number of similar things when determining what I want to dig into, or where I want to focus my automation efforts. (Again, this is in addition to tests that are hopefully being written by devs and testers pairing together as the system is being built!)

Choosing by Value

Hopefully your team is focusing very tightly on delivering only valuable features to your customer. Look at those features, or the existing functionality, and prioritize the most critical things first. You’ve likely got a mix of features and work items driven by the market and your customers. Prioritize that list, figure out which items are most worth your time, and within that list decide how far down the rabbit hole it makes sense to dive.

Dealing With Regulatory Compliance

Perhaps you’re under constraints mandated by some form of legal or regulatory compliance. In that case, you’ve got to apply some disciplined focus to ensure you’re meeting those legal constraints. HIPAA, Sarbanes-Oxley, privacy-related information—all these domains will absolutely require your focus if you’re working in those environments.

You may be forced to drop all other work and focus on the features under that constraint to ensure you don’t put your entire organization at risk. Tom and Mary Poppendieck speak to just this situation in their book Lean Software Development—they had a state-mandated set of regulations on a project they were brought in to rescue. Everything else had to be put on the back burner in order to ensure they met the legal requirements.

Choosing by Risk

Another approach is to look to your riskiest features/work items/components and focus on those first. Are you working with a complex security model in your system? Do you have an extremely large, complex feature your team’s been under pressure to roll out? What about overall complexity of the component or feature?

Focusing on your highest risk areas first will help you mitigate last-minute panic when that complex system doesn’t work as expected – and you’re only a couple days away from release. Focusing on those risky areas gets those problems identified as early as possible.
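
To make that concrete, here’s a minimal sketch of risk-based prioritization: score each area by how likely it is to break and how much it hurts if it does, then hit the highest scores first. The feature names and numbers below are made up purely for illustration.

    # Minimal sketch of risk-based test prioritization. The feature names,
    # likelihood, and impact numbers are hypothetical -- plug in your own
    # team's judgment calls here.
    features = [
        # (feature, likelihood of failure 1-5, impact of failure 1-5)
        ("security / permissions model", 4, 5),
        ("big new reporting module", 5, 4),
        ("settings screen", 2, 2),
    ]

    # Risk score = likelihood x impact; highest risk gets tested first.
    for name, likelihood, impact in sorted(
            features, key=lambda f: f[1] * f[2], reverse=True):
        print(f"{likelihood * impact:>2}  {name}")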

Choosing by Measurements

Looking to concrete measurements is a great way to figure out where to focus additional testing effort. You don’t want to spend time on areas that are already covered by tests, or which have been stable in the past, so look to code coverage tools and bug reports to help you find areas to possibly cover.
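
For instance, if you happen to be working in Python with coverage.py, you can dump its JSON report and list the least-covered files as candidates for more tests. This is only a sketch; the report file and keys below assume the output of coverage.py’s "coverage json" command.

    import json

    # Assumes the suite has already been run under coverage.py and that
    # `coverage json` has written coverage.json to the working directory.
    with open("coverage.json") as f:
        report = json.load(f)

    # Least-covered files first -- candidates for additional tests.
    # Cross-check them against your bug reports before diving in.
    worst = sorted(report["files"].items(),
                   key=lambda kv: kv[1]["summary"]["percent_covered"])

    for path, data in worst[:10]:
        print(f'{data["summary"]["percent_covered"]:5.1f}%  {path}')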

Code and architectural/design complexity are great places to look for determining where to add more testing. Most every platform has a set of tools that will give you stats on things like cyclomatic complexity, lines of code, coupling, etc. These stats are not only geeky cool, they can point you right at spots rife with bugs.

Finally, have a look at code churn, the amount of change made to classes, files, or components in your system. Lots of churn means a lot of work has gone into that item, and chances are something may be amiss. Churn coupled with code metrics and bug histories makes for an awesome indicator!
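
If you want a quick-and-dirty churn number, your version control history already has it. Here’s a rough sketch that counts how many recent commits touched each file; it assumes you’re inside a git repository with git available on the PATH.

    import subprocess
    from collections import Counter

    # Rough churn measure: how many commits touched each file recently.
    log = subprocess.run(
        ["git", "log", "--since=6 months ago", "--name-only", "--pretty=format:"],
        capture_output=True, text=True, check=True,
    ).stdout

    churn = Counter(line for line in log.splitlines() if line.strip())

    # Highest-churn files first -- cross these against complexity stats
    # and bug history when picking testing targets.
    for path, changes in churn.most_common(15):
        print(f"{changes:>4}  {path}")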

Choosing by Spidey Sense

Finally, there’s your intuition. You know what your team is like. You likely have a feeling for areas of the system that have scary corners. You know which areas of the system have given you grief in the past.

Trust your spidey sense and hit those areas for some rough and tumble exploratory testing. Use those bugs to create new integration/functional/unit tests around what you discover and fix.
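
For example, when an exploratory session turns up a bug, pin it down with a small automated test before (or right alongside) the fix. Here’s a minimal pytest-style sketch; parse_date, the module path, and the bug number are hypothetical placeholders for whatever actually broke.

    import pytest

    from myapp.dates import parse_date  # hypothetical module under test

    def test_bug_1234_rejects_two_digit_years():
        # Exploratory testing found that "05/12/11" was quietly parsed as
        # the year 11 AD; the fix should reject ambiguous two-digit years.
        with pytest.raises(ValueError):
            parse_date("05/12/11")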

You can’t test everything. You’ve got to apply some discipline and planning so that you can do your testing as effectively and efficiently as possible.

NOTE: I encourage you to go read up on works by folks like Lanette Creamer, Elisabeth Hendrickson, and others who are doing amazing things in the exploratory/rapid testing domain. There are a lot of tremendously smart folks out there doing great work!
