Sunday, December 11, 2011

31 Days of Testing—Day 11: Maintainable Functional Automation

Updated: Index to all posts in this series is here!

I wanted to write a blog post titled “Introduction to Functional Automation Testing,” but I firmly believe there are a number of things you need to know before you ever start down the road of functional automation, so that you don’t get bushwhacked a few weeks or months into your effort.

In my view, functional tests driven through a browser or desktop automation tool are by far the slowest, most brittle, most curmudgeonly type of automated test. Maybe that’s why I like them so much…

I’ve been fortunate to spend several trips over the last month working with customers, training them up on Telerik’s Test Studio, the great automation tool I joined Telerik to help promote. I think the customers are a little taken aback when I get very frank and animated about the difficulties of functional test automation. I also make it very clear that everything you do around functional test automation is closely tied to how well your tests will be running in four weeks or four months.

I’ve also been spending a LOT of time wandering around to various conferences and user groups giving my talk on Automation Isn’t Shiny Toys. That talk really drives home the point that you’ve got to carefully focus on your automation strategy if you want to succeed and stay sane.

In this post I want to highlight a few of the things I discuss in that talk. I’ll dive into greater detail on a few of those points in later posts, but the post you’re reading now will give you the broad view.

I’ve seen automation fail, or cause tremendous stress, in a number of situations: projects I’ve been on, projects I’ve seen colleagues on, projects I’ve talked about with friends and other conference attendees, and projects I’ve seen at customer sites. Three common threads run through all of them: long-running tests, brittle tests which break too frequently, and tests which take too much time to maintain over the life of the project. (I’ll admit the second and third likely share the same root cause.)

Long Running Tests

Long-running test suites can suck the trust out of your project. Functional tests at the UI level will always be much slower than other sorts of tests, but you can’t have a productive, trustworthy automation environment if your tests can only be run once a month over a weekend. You need a moderately short execution time so you can have your UI tests running several times a day.

Keep in mind the scale of time in this context: 800 tests at 30 seconds each is 24,000 seconds, or nearly seven hours of execution. Flip that around and small changes across many tests net you huge gains; shave just five seconds off each of those tests and you win back more than an hour per run.

Here are a few causes I’ve seen for overly long execution times:

  • Poor infrastructure. You shouldn’t be running your UI tests in series. Scale out to some form of a grid and parallelize your execution (see the first sketch after this list). Get reasonable hardware to host your app and your execution agents. Get all of this in place early in your project so you don’t have to sweat it when the pressure’s on.
  • Using the browser for setup. Browser actions are SLLLLLOOOOOOOOW. They’re brittle, too. Never use the browser to set up prerequisite data or configuration; always look to build a backing API to handle this sort of stuff (the second sketch after this list shows the idea). You can cut huge amounts of time out of your tests by pushing setup out of the browser and into an API.
  • Testing too much. Just because you can automate something doesn’t mean you should. Focus on high-value tests around your most critical business needs, or the highest-risk areas. Keep other tests for your eyeballs and manual exploratory testing.
  • Navigating where you don’t need to. Navigation is expensive. Wherever possible, bypass logons, navigation, and other unnecessary steps. You can even modify your system to support bypassing them; the second sketch below skips the logon screen this way. Test these steps elsewhere, of course, but not every time!
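
To make the grid idea concrete, here’s a minimal sketch using Selenium WebDriver and pytest. The hub and app URLs are hypothetical placeholders, and Test Studio and other commercial tools have their own execution-server features, so treat this as an illustration of the pattern, not a specific tool’s API. With each test grabbing its own remote browser, a parallel runner such as pytest-xdist (pytest -n 4) can fan the suite out across the grid’s nodes.

```python
# A sketch only: the hub and app URLs are hypothetical placeholders.
import pytest
from selenium import webdriver

GRID_HUB = "http://grid-hub.example.com:4444/wd/hub"  # assumed grid hub address

@pytest.fixture
def driver():
    # Each test gets its own remote browser session; the grid hands the
    # session to whichever node is free, so tests can run side by side.
    drv = webdriver.Remote(command_executor=GRID_HUB,
                           options=webdriver.ChromeOptions())
    yield drv
    drv.quit()

def test_home_page_loads(driver):
    driver.get("http://staging.example.com/")  # hypothetical app under test
    assert "Home" in driver.title
```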

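And here’s a sketch of doing setup through a backing API instead of the browser, assuming hypothetical /api/login and /api/orders endpoints: log in and seed data over plain HTTP, then hand the resulting session cookie to the browser so the test also skips the logon screen entirely.

```python
# A sketch only: every endpoint and field name is a hypothetical stand-in
# for whatever backing API your team builds.
import requests
from selenium import webdriver

BASE = "http://staging.example.com"  # hypothetical app under test

def log_in_via_api(driver, user, password):
    # One fast HTTP call instead of loading the login page, typing,
    # and clicking through the browser.
    resp = requests.post(f"{BASE}/api/login",
                         json={"user": user, "password": password})
    resp.raise_for_status()
    # The browser must be on the app's domain before cookies can be set.
    driver.get(BASE)
    for cookie in resp.cookies:
        driver.add_cookie({"name": cookie.name, "value": cookie.value})

def create_order(auth_token, sku, qty):
    # Seed prerequisite data directly; no browser involved at all.
    resp = requests.post(f"{BASE}/api/orders",
                         json={"sku": sku, "qty": qty},
                         headers={"Authorization": f"Bearer {auth_token}"})
    resp.raise_for_status()
    return resp.json()["orderId"]
```
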
Brittle Tests

Brittle tests break for odd reasons. Brittle tests work in the developers’ environments, but fail in the QA or staging environments. Brittle tests fail when run in your suite, but pass when run individually.

Here are a few causes of brittle tests I’ve run into (or shot myself in the foot with!):

  • Side effects. Every test absolutely must be self-sufficient and granular. A test needs to set up its own prerequisites and, where possible, clean up after itself. (Database transaction rollback, FTW! See the sketch after this list.) Tests must never rely on state set by other tests! This doesn’t mean you can’t use a baseline dataset to configure broad sets of prerequisites, but you darn well better make sure to clean up after yourself.
  • Bad environments. You can’t expect stable tests from a horrible environment. I once worked on a project where our functional tests were expected to run on our CI server. That server was continually slammed by builds from 20 devs, hosted a database used by other servers, and was itself a virtual machine on a poor host. Priceless. Get realistic environments. You’ll be happy you did.
  • Complex, badly designed tests. This dovetails with the side effects mentioned above, and it ties into maintainability, which I’ll address shortly. Mixed concerns, too many responsibilities, tight coupling to the UI or other components—this sounds just like the problems we see when building systems. You know what? Those same problems impact our tests! Pay attention to how you’re building your tests.
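
Here’s a minimal sketch of the transaction-rollback idea using pytest and sqlite3; the table, columns, and data are invented for illustration. The prerequisite data goes in inside an open transaction and is rolled back after the test, so nothing leaks into the next test.

```python
# A sketch of the rollback idea; the schema and data are invented.
import sqlite3
import pytest

@pytest.fixture
def seeded_db():
    conn = sqlite3.connect("app.db")  # hypothetical test database
    try:
        # Seed prerequisites inside an open (uncommitted) transaction...
        conn.execute(
            "INSERT INTO customers (name, tier) VALUES ('Test User', 'gold')")
        yield conn
    finally:
        # ...and roll it back so no state leaks into the next test.
        conn.rollback()
        conn.close()

def test_gold_tier_lookup(seeded_db):
    row = seeded_db.execute(
        "SELECT tier FROM customers WHERE name = 'Test User'").fetchone()
    assert row[0] == "gold"
```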

High-Maintenance Tests

If you’re not careful, you can easily end up spending more time maintaining your existing automated tests than you spend creating new ones or doing your exploratory testing. Roy Osherove, in his amazing book The Art of Unit Testing, talks about a project that failed because his team spent too much time maintaining tests. Note they didn’t just abandon testing on their project—the entire project failed! Ouch.

In my experience, the number one cause of high-maintenance functional/UI tests is poor element locator management. Side effects and poor test design contribute, but locators are the number one PITA around. Locators are how your testing framework or tool finds elements on the page or in the application. (I’ll have much more on locators in a following post.) If you aren’t careful, you’ll end up spending huge amounts of time fixing up your tests’ locators every time the UI changes even a small amount.

Two things can greatly cut down the pain of locator management.

  1. Centralize your locator definitions. This is one place to learn, live, and love the DRY principle. Having locators defined in more than one place in your codebase is a recipe for misery. If you’re writing purely code-based tests, look to something like the Page Object Pattern, or perhaps even a simple dictionary or set of read-only fields (see the sketch after this list). Tools like QTP or Test Studio help you by centralizing locator storage and maintenance in an object repository (QTP) or element repository (Test Studio). With centralized locator storage you only have to go to one place to update your locators; every test refers back to that repository to read the locator’s definition. It’s a thing of beauty.
  2. Prefer good locator types where possible. Locators are generally based on HTML attributes, CSS, or the Document Object Model (DOM) structure. Not every tool or framework will support every locator type. More on a few of the most common locator types:
    • HTML ID values are the number one, mostest bestest way to specify your locators. IDs are unique on the page, so you’re never ambiguous. IDs don’t change based on position, so if a field moves elsewhere on the page your tests should still work fine. IDs are also the fastest to resolve.
    • Text or link text. Some frameworks will let you locate an element by its link text or by the text in the field. This is handy; however, it may not be unique on the page.
    • CSS selectors match elements by class names, attributes, and document structure. They’re fairly fast to resolve, but you’re not guaranteed uniqueness on the page, so they may not work for you.
    • XPath. XPath works against the DOM structure using paths and functions. I have a love/hate relationship with XPath. The geeky part of me that loves regular expressions (I’m getting counseling for it) also loves XPath. I can solve some very tricky problems with XPath, which is neat; however, XPath is hard to grok, and it’s extraordinarily brittle. If anything moves the slightest bit in the page’s DOM, your XPath is totally borked. Moreover, XPath is extremely slow in Internet Explorer. Finally, XPath support in the .NET Framework isn’t fully implemented, so XPaths which work fine in tools like XPather for Firefox may not work in testing tools built on the .NET Framework.
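
Here’s what centralized locators can look like in code-based tests: a minimal Page Object sketch using Python and Selenium, with the page, URL, and element names invented for illustration. Every locator lives in one spot on the class, IDs are preferred, and the commented-out alternatives show the weaker locator types from the list above.

```python
# A minimal Page Object sketch; the page and element names are invented.
from selenium.webdriver.common.by import By

class LoginPage:
    URL = "http://staging.example.com/login"  # hypothetical page

    # Centralized locator definitions: when the UI changes, fix one line here.
    USERNAME = (By.ID, "username")                # best: unique, position-proof, fast
    PASSWORD = (By.ID, "password")
    SUBMIT = (By.CSS_SELECTOR, "button.login")    # ok, but uniqueness isn't guaranteed
    # SUBMIT = (By.LINK_TEXT, "Log in")           # handy, may not be unique
    # SUBMIT = (By.XPATH, "//form/div[2]/button") # brittle: breaks when the DOM shifts

    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get(self.URL)
        return self

    def log_in(self, user, password):
        self.driver.find_element(*self.USERNAME).send_keys(user)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()
```

Tests then read LoginPage(driver).open().log_in("user", "secret") and never mention a locator string; when a developer renames the submit button, the fix is one tuple in one file.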

Wrapping Up

I’ll be doing some follow-on posts introducing you to functional test automation; however, past experience has taught me that you’ll see the most success when you keep your automation suite’s maintainability in mind from the start. Gee, isn’t that sort of the same approach great software developers use? Maybe you should treat your testing codebase like production code—because it is production code!

I hope you’re enjoying this series so far and are finding it useful. I’m enjoying writing it!

2 comments:

Anonymous said...

Great post. Do you use mocking in your functional tests? I'm thinking it might be useful to isolate the UI from the rest of the application by using stubbed test data and behaviour mocks.

Jim Holmes said...

@jontymc: No, I don't use any mocks in my functional tests. The point of a functional test is to validate an entire vertical slice of the system.

Yes, you could break things out to do automation around just the UI and the biz layer underneath it, but I really don't think it's worth the effort to write (and maintain!) that type of test.
