Automated tests come in several flavors, and I sometimes hear folks mixing up terms for the type of test they’re discussing. I thought I’d take one short column to lay out a few definitions.
Unit tests focus on one small piece of the system’s functionality, most often a method or procedure. Unit tests never cross any service boundaries, and shouldn’t even rely on other dependencies outside of the specific area being tested. Unit tests work in isolation—any external dependencies need to be faked/mocked/stubbed out. If your test is hitting the file system, database, or a web service, then it’s not a unit test.
Imagine a payroll computation method which takes hours worked, a worker’s hourly rate, and whether that worker is salaried or hourly. The method is a simple algorithm and doesn’t have any outside dependencies. A unit test would call that method with a set of input parameters (42 hours, $20 per hour, salaried worker) and check the output against expected results.
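A minimal sketch of that unit test in Python. The function name, signature, and the overtime rule are illustrative assumptions, not something from the column:

```python
# Hypothetical payroll function under test. The business rules here
# (flat 40 hours for salaried, time-and-a-half overtime for hourly)
# are assumptions for the sake of the example.
def compute_pay(hours_worked, hourly_rate, is_salaried):
    if is_salaried:
        return 40 * hourly_rate
    regular = min(hours_worked, 40) * hourly_rate
    overtime = max(hours_worked - 40, 0) * hourly_rate * 1.5
    return regular + overtime

def test_salaried_worker_pay_ignores_extra_hours():
    # 42 hours, $20/hour, salaried: pay is capped at 40 * $20 = $800.
    assert compute_pay(42, 20, is_salaried=True) == 800

def test_hourly_worker_gets_overtime():
    # 42 hours, $20/hour, hourly: 40 * $20 + 2 * $30 = $860.
    assert compute_pay(42, 20, is_salaried=False) == 860

test_salaried_worker_pay_ignores_extra_hours()
test_hourly_worker_gets_overtime()
```

Note there’s no file system, database, or network anywhere in sight: pure inputs in, a value out, and an assertion against the expected result.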
Keeping true to the above-described isolation requires a well-designed system that enables you to mock out these dependencies. A number of design approaches help with this, but they’re well beyond the scope of this series. (Go look up dependency injection and inversion of control. Also read up on tight coupling and the use of static classes.) Mocking out these dependencies means you’ll be looking to tooling for help, and many frameworks exist to get you through this on whatever platform you’re working on.
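To make the dependency-injection-plus-mocking idea concrete, here’s a small sketch using Python’s standard `unittest.mock`. The `PayrollService` class and its rate repository are hypothetical names invented for illustration:

```python
from unittest.mock import Mock

class PayrollService:
    def __init__(self, rate_repository):
        # The repository is injected through the constructor, so a test
        # can hand in a mock instead of a real database-backed object.
        self.rate_repository = rate_repository

    def pay_for(self, worker_id, hours):
        rate = self.rate_repository.get_rate(worker_id)
        return hours * rate

# The mock stands in for the database dependency.
fake_repo = Mock()
fake_repo.get_rate.return_value = 20

service = PayrollService(fake_repo)
assert service.pay_for("worker-1", 10) == 200

# We can also verify the interaction with the dependency itself.
fake_repo.get_rate.assert_called_once_with("worker-1")
```

Because the service takes its dependency as a constructor argument rather than reaching out to a static class or creating its own database connection, the test stays fully isolated and fast.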
Because of this isolation, unit tests are blisteringly fast. It’s common for thousands of unit tests to run in a few tens of seconds. Unit tests are your first line of defense against regressions. Developers should be running them constantly through their work cycle, and most definitely before committing any work back to the source control repository.
Integration tests specifically target interaction between different components or services. Integration tests are most often below the user interface level, checking things like business logic, persistence, or web services.
An integration test might call a web service to create a user, then call the database directly to ensure the user was indeed created.
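Here’s a scaled-down sketch of that pattern. In this example SQLite stands in for the real service and database so the snippet is self-contained; in a real integration test the first step would be an HTTP call to your user-creation endpoint, and the verification step would query your actual database:

```python
import sqlite3

def create_user(conn, username):
    # Stand-in for the service call. In a real integration test this
    # would be an HTTP request to the web service, not a direct call.
    conn.execute("INSERT INTO users (username) VALUES (?)", (username,))
    conn.commit()

# Set up the "database" (in-memory for this illustration).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")

# Step 1: exercise the service to create the user.
create_user(conn, "alice")

# Step 2: go straight to the database, bypassing the service, to
# confirm the user really landed where it should.
row = conn.execute(
    "SELECT username FROM users WHERE username = ?", ("alice",)
).fetchone()
assert row == ("alice",)
```

The key point is that the test crosses a boundary the unit test deliberately avoided: it checks that two separately built pieces actually work together.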
Integration tests are generally much slower than unit tests because of the overhead involved in standing up web services, hitting databases, and so on. Because of this slower execution speed, integration tests aren’t generally run as often as unit tests. If your integration test suite is small enough, you may be able to have your developers run it right before they commit their final work on the current work item. It’s more common to run only the set of integration tests directly relating to the work item, then run the entire suite of integration tests during regular smoke checks throughout the day. (Or once per night if your suite is really large and you’ve not set up your infrastructure for parallel execution.)
Functional tests are the slowest of all the tests because they generally stand up the real application, either on the desktop or in a browser, and drive that user interface around to perform the same actions a user would.
An example of this would be using something like Selenium or Test Studio to start up a browser, open your web application, and take some actions to place an order in an electronic purchasing system.
A great many tools and frameworks can be used to help you write functional tests, depending on what platform you’re working on. There are many flavors of functional tests, but at their core they’re responsible for helping you verify that one specific slice of your system’s functionality behaves as expected.
Performance testing is a broad umbrella encompassing a number of different types of tests. Performance, load, stress, failover, soak, endurance, and other terms all come into play. I wrote a somewhat longish post on my blog at Telerik around these different types. You can go read that for more detail, but here’s the CliffsNotes version:
- Performance: Measures the performance of one specific functionality slice, such as saving a user to the database. May be from the user interface down, or may be some slice of internal functionality.
- Load: Checks the system’s performance while ramping up the load on the system by hitting it with simultaneous connections from some source. Measures performance of a scenario under real-world loading.
- Stress: Can also be called failover testing. Can also be called “Throw an insane amount of traffic at the system, then throw more traffic at it until something breaks.” Less flippantly, the idea is to ramp up traffic and load on your system until it degrades or outright fails. The goal is to understand what the maximum usable load is, and what happens when that’s exceeded.
- Endurance: Systems behave differently under load over a long period. Things like subtle memory leaks show up which you might not otherwise find.
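The load-testing idea above can be sketched as a toy harness: fire simultaneous requests at an operation and record per-request latency as concurrency ramps up. The target function here is a stand-in I invented for illustration; real tools (JMeter, k6, and the like) drive a deployed system over the network instead:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def target_operation():
    # Stand-in for a request to the system under test.
    time.sleep(0.01)

def measure_latencies(concurrent_users, requests_per_user):
    """Simulate N users each making a series of requests, returning
    the measured latency of every individual request."""
    def one_user(_):
        latencies = []
        for _ in range(requests_per_user):
            start = time.perf_counter()
            target_operation()
            latencies.append(time.perf_counter() - start)
        return latencies

    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        results = pool.map(one_user, range(concurrent_users))
    # Flatten per-user lists into one list of latencies.
    return [lat for user in results for lat in user]

latencies = measure_latencies(concurrent_users=5, requests_per_user=3)
assert len(latencies) == 15
```

From the collected latencies you’d compute things like median and 95th-percentile response times, then repeat the run at increasing user counts to see where the numbers start to degrade.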
Automated security tests look for regressions in protections around cross-site scripting, SQL injection, and any of the other numerous (and really scary) security-related threats.
Tools for automating these sorts of checks are quite specialized, and I don’t pretend to have a great grasp on them. Instead of possibly leading you astray, I’ll just recommend you look elsewhere for this information.
Each Type of Test Has Its Place
You could make the case there are other types of automated testing, but I think these categories above cover the most important types. What’s critical is that you understand the basic fundamentals of each type and its role.
(Note that I didn’t discuss testing methodology such as test-driven development, behavior-driven development, acceptance test driven development, etc. Those are methodologies which use the types of tests I described above.)
I’m a firm believer that you absolutely need all of these types of tests working together. I’ve seen far too many instances where a developer or team focused on getting unit tests rolling, only to find bugs that would have been caught at the integration or functional level. Performance testing may not be required on every project, but I’d make the case it’s valuable to have at least a bit of metrics around performance simply to ensure you’re not losing ground on user experience as you continue to evolve the system.
Know your test types. Use them in the right places.