During my first-ever webinar with Telerik this week I spoke briefly about learning how to keep your tests maintainable. This is something near and dear to my heart – I’ve been on a number of projects where we’ve had to suffer through test suites which became ever more brittle as time went on.
Brittle tests break frequently due to unrelated changes in the system’s workflow, UI, or business rules. Of course, tests should fail when something directly relating to the test is broken—that’s why we have the tests!—but one change to a web page unrelated to the specific area you’re testing shouldn’t completely derail that test. Brittle tests duplicate workflows and UI locators across many tests, forcing you to change hundreds or even thousands of tests when one webpage changes its layout. Brittle tests are overly complicated and have too many complex setup steps.
Brittle tests are the testing world’s equivalent of technical debt, shortcuts taken during development which end up causing you grief further down the road. Eventually the project team ends up spending a large amount of time simply cleaning up broken tests—and getting less and less work done around delivering value to the team’s customers or stakeholders. This can even lead to the team abandoning test automation completely, or worse.
Roy Osherove opens his tremendous book “The Art of Unit Testing” with a very frank discussion of a project of his that failed because the team’s tests turned into a convoluted mess. Note they didn’t just abandon testing; the entire project failed. The team was spending more time trying to fix broken tests than it was delivering value to the customer, and the project tanked because of it.
Avoid this sort of drama by building maintainability into your testing work from the very beginning. Keep in mind the three major causes of brittleness in larger test suites:
- Large suites taking too long to run
- Changes to common workflow (such as logons) breaking multiple tests
- Changes to UI elements breaking multiple tests
Tests Taking Too Long to Run
UI-driven tests will always be orders of magnitude slower than unit tests. As you reach hundreds or thousands of UI tests you may find your tests taking many hours to complete. You’ll find yourself unable to have regular, frequent execution passes of your functional tests, instead relying on everything to get run once a week over the weekend. Long-running test suites can lead to a mindset of ignoring the tests because feedback doesn’t happen until days later.
Solve this with two different approaches. First, look to scale out your test infrastructure. Get your tests running in parallel on multiple systems using some form of test runner agent. Selenium, in both its v1 and v2 incarnations, supports this through Selenium Grid, or you can look to splitting up tests via your own customized test runner. If you’re using Telerik’s Test Studio you’ll benefit from our combination of Scheduling and Execution Servers to do this for you.
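If you’re going the Selenium route, pointing tests at a Grid is mostly a matter of swapping the local driver for a remote one. Here’s a minimal sketch using the Selenium Python bindings; the hub URL and browser choice are placeholders for whatever your own Grid exposes, and the exact Remote API has shifted a bit between Selenium versions.

```python
from selenium import webdriver

# Point the test at a Grid hub instead of launching a local browser.
# "http://my-grid-hub:4444/wd/hub" is a placeholder for your own hub URL.
options = webdriver.ChromeOptions()
driver = webdriver.Remote(
    command_executor="http://my-grid-hub:4444/wd/hub",
    options=options,
)

try:
    # The test itself is unchanged; the hub farms the session out to
    # whichever node has a matching browser free.
    driver.get("http://example.com/")
    assert "Example" in driver.title
finally:
    driver.quit()
```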
Second, split up your large suites and pull out a minimal set of Basic Validation Tests (BVTs) covering only the most important features. These BVTs should check only fundamental operations like:
- Does the home page load?
- Can a user log on?
- Can a user create/retrieve/update/delete content?
Using BVTs lets your team quickly execute a safety net of functional tests before making major commits, and you can also have these BVTs running on a much more frequent basis if you’re using some form of automated build or continuous integration. (And you should be!)
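As a rough sketch of how that subset might be carved out, here’s what BVTs could look like with pytest as the runner: tag the fundamental checks with a marker so a CI step can run just those, while the full suite runs on its slower schedule. The marker name, the `driver` fixture, and the page details are all illustrative assumptions rather than anything prescribed here.

```python
import pytest
from selenium.webdriver.common.by import By

# Assumes a "driver" fixture defined elsewhere in the suite that yields a
# WebDriver instance; the "bvt" marker name is just a convention.

@pytest.mark.bvt
def test_home_page_loads(driver):
    driver.get("http://example.com/")
    assert "Welcome" in driver.title

@pytest.mark.bvt
def test_user_can_log_on(driver):
    driver.get("http://example.com/login")
    driver.find_element(By.ID, "username").send_keys("demo")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "log-on").click()
    assert "Dashboard" in driver.title

# A CI build can then run only the safety net:
#   pytest -m bvt
# while the nightly or weekend job runs everything.
```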
Use both of these approaches together to help deal with your long-running test suites.
Workflow Changes Breaking Multiple Tests
Your tests should focus on one discrete piece of your system’s functionality; for example, does entering an invalid customer number display an error? However, your test will likely need to take other actions as prerequisites, such as logging on to the system and navigating to the customer record area. These prerequisites are likely common across many of your tests, which makes them a significant source of complexity and brittleness.
These sorts of common actions need to be pulled out into separate areas of your test suite so they’re defined once and only once. That way, if your logon workflow changes (say you’ve added a requirement to input a value from a keyfob), you only need to update the workflow in one spot.
If you’re using Telerik’s Test Studio you can make use of our test-as-step feature to separate these common pieces out into modules. If you’re working in Selenium, Watir, or some other testing framework, you can pull this commonality into separate helper methods.
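In a code-based framework that usually looks something like the sketch below: a single logon helper that every test calls, so the keyfob change above means editing one function rather than a few hundred tests. The URL and element IDs are hypothetical stand-ins.

```python
from selenium.webdriver.common.by import By

def log_on(driver, username, password, keyfob_code=None):
    """Shared logon workflow: defined once, called by every test that
    needs an authenticated session."""
    driver.get("http://example.com/login")
    driver.find_element(By.ID, "username").send_keys(username)
    driver.find_element(By.ID, "password").send_keys(password)
    if keyfob_code is not None:
        # The new keyfob requirement lives here and nowhere else.
        driver.find_element(By.ID, "keyfob-code").send_keys(keyfob_code)
    driver.find_element(By.ID, "log-on").click()

# An individual test just calls the helper, then gets on with the one
# piece of functionality it actually cares about:
#   log_on(driver, "demo", "secret", keyfob_code="123456")
```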
The main point of this is exactly the same as the software engineering concept of the Don’t Repeat Yourself (DRY) principle: Define functionality in one spot and one spot only.
UI Changes Breaking Multiple Tests
A changing UI can also wreak havoc on your tests. Element locator values (element IDs, XPath, CSS locators, etc.) will likely change if the UI is reorganized, and all your tests depend on being able to find elements via those locators. If your locators are scattered through hundreds or even thousands of tests, you’re in for a sad time updating them after the UX team moves a widget from the right side of the page to the footer.
Mitigate this problem with the same concept I wrote about for workflow: Keep critical information in one and only one location in your codebase. Use the same DRY principle for your locators.
If you’re using Selenium, Watir, or some other code-based framework, spend some time getting familiar with the Page Objects pattern. The idea is to create a separate class for each page you interact with in your system. The locators are defined in each class, and the rest of your tests refer to or consume those page object classes. If something changes in the UI, you simply modify that single page object class and all your other tests remain unaffected.
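A stripped-down sketch of the pattern in Python might look like the following. The class name, locators, and page behavior are invented for illustration; the shape is the important part, with the locators living in exactly one place.

```python
from selenium.webdriver.common.by import By

class CustomerPage:
    """Page object for a (hypothetical) customer record screen."""

    # Locators defined once, here, and nowhere else.
    CUSTOMER_NUMBER = (By.ID, "customer-number")
    SEARCH = (By.ID, "search")
    ERROR = (By.CSS_SELECTOR, ".error-message")

    def __init__(self, driver):
        self.driver = driver

    def search_for(self, customer_number):
        self.driver.find_element(*self.CUSTOMER_NUMBER).send_keys(customer_number)
        self.driver.find_element(*self.SEARCH).click()

    def error_text(self):
        return self.driver.find_element(*self.ERROR).text

# A test consumes the page object rather than raw locators:
#   page = CustomerPage(driver)
#   page.search_for("not-a-real-customer")
#   assert "Invalid customer number" in page.error_text()
```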
Telerik’s Test Studio helps with this as well through our element repository. Elements you interact with are stored in one spot within Test Studio. If an element’s locator changes, it’s a simple matter to update that element to the new value, and you do it in one place only.
Wrapping Up
Automated tests are wonderful and I’ve been pitching (and practicing!) what I preach for years. That said, you have to approach your automation efforts with an eye to maintaining your suite over time. A long-term automation strategy isn’t just about writing great tests that help you deliver awesome software, it’s also about keeping your sanity as your software and tests evolve.