Marc asked a great question on Twitter about browser testing and state. He actually wrote a Gist on it, but I thought the question merited a response here.
State in Tests
Handling "state" of a system for any test automation is an involved process that should evolve as your test suite does. All the same good design/engineering practices that we use for building good software (abstraction, DRY, readability, etc.) should be applied to our test automation suites. Because, you know, test code is production code.
Here are some thoughts on Marc's great set of questions:
- What are patterns for creating expected state prior to running browser tests?
"State" is a broad term, and different folks define it differently. I think of it as data, environment, and configuration. There's a difference between setting up background "state" for a test suite, and having each test set up its own prerequisites.
Setting up an environment for a test suite run is a conscious decision based on what's needed for that particular suite to succeed. I regularly use a combination of loading baseline datasets, altering system configuration (turn off CAPTCHA, select a simple text editor, swap mail providers, etc.), and leveraging test-specific libraries, infrastructure, APIs to get the starting environment set up.
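That suite-level setup step can be sketched like this. This is a minimal illustration, not a real library: `TestEnvironment`, `prepare_suite_environment`, and all the config keys are invented names standing in for whatever infrastructure your suite actually uses.

```python
class TestEnvironment:
    """Collects the environment tweaks a suite needs before it runs."""

    def __init__(self):
        self.config = {}
        self.datasets = []

    def load_baseline_dataset(self, name):
        # In a real suite this would restore a known database snapshot.
        self.datasets.append(name)

    def set_config(self, key, value):
        # e.g. turning off CAPTCHA or swapping in a stub mail provider
        self.config[key] = value


def prepare_suite_environment():
    # One conscious, explicit setup step for the whole suite run.
    env = TestEnvironment()
    env.load_baseline_dataset("baseline_contacts")
    env.set_config("captcha_enabled", False)
    env.set_config("mail_provider", "in_memory_stub")
    return env
```

The point is that the environment a suite runs against is assembled deliberately, in one place, rather than accreting implicitly as tests run.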
Also, it's very, very critical to understand there's a difference between system-wide state (baseline datasets, e.g.) and state needed for individual tests. No test should ever, ever rely on sharing state with another test. The risk of side effects, especially in distributed/parallel testing scenarios, is just too high. You'll regret it if you rely on this. Ask me and Jeremy Miller how we know this... (i.e., the school of painfully learned hard knocks.)
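One common way to keep tests from sharing state is to have each test generate its own unique data. A rough sketch, where `FakeContactStore` is a stand-in for the real system under test:

```python
import uuid


def unique_contact_name():
    # Each test generates its own data; nothing is shared between tests,
    # so parallel or reordered runs can't trip over each other.
    return f"contact-{uuid.uuid4().hex[:8]}"


class FakeContactStore:
    """Stand-in for the system under test."""

    def __init__(self):
        self.contacts = set()

    def create(self, name):
        self.contacts.add(name)


def run_isolated_test(store):
    # A test builds its own prerequisite state and asserts against
    # only that state -- never against another test's leftovers.
    name = unique_contact_name()
    store.create(name)
    return name in store.contacts
```

Two tests built this way can run in any order, or simultaneously, without colliding.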
- Is it appropriate for the browser tests to communicate directly with the database of the system under test, creating state in the same manner that you would in an integration test?
I generally prefer to use the system under test's own APIs to create state for a test. These APIs ensure CRUD operations are obeying the system's own rules for its data. Using APIs means you don't have duplicated logic around CRUD ops for your tests, which is a good thing. This way you're not having to maintain separate helper methods to handle changes when the app's underlying DB structure changes, for example.
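Here's a sketch of that idea. The `App` class and its `api_create_contact` method are invented for illustration; the point is that test setup flows through the app's own API, which enforces rules a raw database write would bypass.

```python
class App:
    """Stand-in for the system under test; names here are invented."""

    def __init__(self):
        self.db = {}  # pretend contacts table

    def api_create_contact(self, email):
        # The app's own API enforces its rules (validation,
        # normalization) that a raw SQL INSERT from a test helper
        # would silently skip.
        if "@" not in email:
            raise ValueError("invalid email")
        key = email.lower()
        self.db[key] = {"email": key}
        return key


def create_contact_for_test(app, email):
    # Test setup goes through the API, so it keeps working even when
    # the underlying DB schema changes.
    return app.api_create_contact(email)
```

If the contacts table gains a column or splits in two tomorrow, this setup code doesn't change; the API absorbs the difference.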
- How important is it that browser tests be decoupled from the application under test, such that the only thing required to run tests are the tests themselves, which can be pointed at any applicable URL (dev, test, staging)?
There are a couple different aspects to this, IMO. You want your test suite coupled to the application in the sense that you need to leverage the system's own APIs for setup, teardown, configuration, and parts of your test oracles. That said, these calls to the system should be abstracted into a set of helper libraries. The tests themselves should never know how to communicate with the system itself.
For example, you don't want individual tests that know how to invoke a web service endpoint to create a new contact in your Customer Relations Management system. If that endpoint changed, you'd have to touch every test that used that endpoint. That's a recipe for a lot of extra scotch consumption.
Instead of telling the system how to do something, tests should tell a helper function/library what they want done. That library in turn knows how to call, say, a stored procedure to create a new contact. With this approach, no test ever has to be updated if, for example, the system changes from a stored procedure to a web service for creating new contacts.
Write Tests Like You Write Code
I've generalized a number of things in the above thoughts, but the bottom line is this: use the same ideas and approaches for your test suites as for your production code. Because tests are production code.
For automated tests, an "oracle" or "heuristic" is the check that tells you whether the system actually did what it should. It's not enough to stop after validating that the UI behaved as expected. You need to check the database to ensure items were properly created, updated, or deleted, for example. You might have an oracle that checks the filesystem to ensure a datafile was properly downloaded.
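Those two kinds of oracle might look like this in a rough sketch (the function names and the dict-as-database are illustrative assumptions, not any particular framework's API):

```python
import os
import tempfile


def db_oracle(db, key, expected):
    # The UI said "saved" -- verify the record really exists with the
    # values we expect, not just that the page looked right.
    return db.get(key) == expected


def download_oracle(path):
    # The UI said "downloaded" -- verify the file actually landed on
    # disk and isn't empty.
    return os.path.exists(path) and os.path.getsize(path) > 0
```

Either oracle runs after the UI assertions, as the final confirmation that the system's side effects really happened.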