Updated: Index to all posts in this series is here!
Automated tests come in several flavors, and I sometimes hear folks mixing up terms for the type of test they’re discussing. I thought I’d take one short column to lay out a few definitions.
Unit Tests
Unit tests focus on one small piece of the system's functionality, most often a single method or procedure. Unit tests never cross service boundaries, and they shouldn't rely on anything outside the specific area being tested. Unit tests work in isolation: any external dependencies need to be faked/mocked/stubbed out. If your test hits the file system, a database, or a web service, it's not a unit test.
Imagine a payroll computation method which takes hours worked, a worker's hourly rate, and whether that worker is salaried or hourly. The method is a simple algorithm and doesn't have any outside dependencies. A unit test would be one that calls that method with a set of input parameters (42 hours, $20 per hour, salaried worker) and checks the output of that against expected results.
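Here's a minimal sketch of what such a test might look like, written in Python with pytest-style asserts. The calculate_pay function, its signature, and its overtime rules are invented purely for illustration; your real payroll logic will differ.

```python
# Hypothetical payroll method: no outside dependencies, just an algorithm.
def calculate_pay(hours_worked, hourly_rate, is_salaried):
    if is_salaried:
        return 40 * hourly_rate  # assumption: salaried workers are paid a flat 40 hours
    overtime = max(0, hours_worked - 40)
    return (hours_worked - overtime) * hourly_rate + overtime * hourly_rate * 1.5


def test_salaried_worker_is_paid_flat_rate():
    # 42 hours, $20/hour, salaried: overtime doesn't apply
    assert calculate_pay(42, 20, is_salaried=True) == 800


def test_hourly_worker_gets_overtime():
    # 42 hours, $20/hour, hourly: 40 * 20 + 2 * (20 * 1.5) = 860
    assert calculate_pay(42, 20, is_salaried=False) == 860
```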
Keeping true to the isolation described above requires a well-designed system that lets you mock out these dependencies. There are a number of design approaches that help with this, but they're well beyond the scope of this series. (Go look up dependency injection and inversion of control. Also read up on the problems caused by tight coupling and heavy use of static classes.) Mocking out these dependencies means you'll be looking to tooling for help, and many frameworks exist to get you through this on whatever platform you're working on.
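As a rough illustration of the idea, here's constructor injection plus a mocked-out dependency in Python, using the standard library's unittest.mock. The PayrollService class and the rate-repository names are invented for this sketch, not taken from any particular framework.

```python
from unittest.mock import Mock


# Hypothetical service that depends on a rate repository injected via the constructor.
class PayrollService:
    def __init__(self, rate_repository):
        self._rates = rate_repository

    def pay_for(self, worker_id, hours_worked):
        rate = self._rates.hourly_rate_for(worker_id)  # injected dependency
        return hours_worked * rate


def test_pay_uses_rate_from_repository():
    # The real repository might hit a database; here it's replaced with a mock.
    fake_repo = Mock()
    fake_repo.hourly_rate_for.return_value = 20

    service = PayrollService(fake_repo)

    assert service.pay_for(worker_id=7, hours_worked=10) == 200
    fake_repo.hourly_rate_for.assert_called_once_with(7)
```

Because the database never enters the picture, this test stays squarely in unit-test territory and runs in microseconds.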
Because of this isolation, unit tests are blisteringly fast. It’s common for thousands of unit tests to run in a few tens of seconds. Unit tests are your first line of defense against regressions. Developers should be running them constantly through their work cycle, and most definitely before committing any work back to the source control repository.
Integration Tests
Integration tests specifically target interaction between different components or services. Integration tests are most often below the user interface level, checking things like business logic, persistence, or web services.
An integration test might call a web service to create a user, then query the database directly to ensure the user was indeed created.
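A rough sketch of that example in Python, assuming an HTTP API and a SQLite database purely for illustration; the endpoint, payload, and schema are all invented, and your real system will look different.

```python
import sqlite3  # stand-in for whatever database the system really uses

import requests  # real HTTP call to the running service


def test_create_user_persists_to_database():
    # Call the real web service; URL and payload are invented for this sketch.
    response = requests.post(
        "http://localhost:8080/api/users",
        json={"username": "jdoe", "email": "jdoe@example.com"},
    )
    assert response.status_code == 201

    # Then go straight to the database to confirm the row actually exists.
    conn = sqlite3.connect("app.db")
    row = conn.execute(
        "SELECT email FROM users WHERE username = ?", ("jdoe",)
    ).fetchone()
    conn.close()
    assert row == ("jdoe@example.com",)
```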
Integration tests are generally much slower than unit tests because of the overhead involved in standing up web services, hitting databases, and so on. Because of this slower execution speed, integration tests aren't generally run as often as unit tests. If your integration test suite is small enough, you may be able to have your developers run it right before they commit their final work on the current work item. It's more common to run only the set of integration tests directly related to the work item, then run the entire suite of integration tests during regular smoke checks throughout the day. (Or once per night if your suite is really large and you haven't set up your infrastructure for parallel execution.)
Functional Tests
Functional tests are the slowest of all the tests because they generally stand up the real application, either on the desktop or in a browser, and drive its user interface to perform the same actions a user would.
An example of this would be using something like Selenium or Test Studio to start up a browser, open your web application, and take some actions to place an order in an electronic purchasing system.
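For instance, a Selenium-based functional test in Python might look roughly like this. The URL, element IDs, and confirmation text are invented for the sketch; the point is simply that a real browser gets driven through a real user scenario.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By


def test_place_order_through_the_browser():
    driver = webdriver.Chrome()  # stands up a real browser
    try:
        driver.get("http://localhost:8080/store")  # hypothetical app URL

        # Drive the UI the same way a user would.
        driver.find_element(By.ID, "add-to-cart-widget-42").click()
        driver.find_element(By.ID, "checkout").click()
        driver.find_element(By.ID, "place-order").click()

        confirmation = driver.find_element(By.ID, "order-confirmation").text
        assert "Thank you for your order" in confirmation
    finally:
        driver.quit()
```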
A great many tools and frameworks can help you write functional tests, depending on the platform you're working on. There are many flavors of functional tests, but at their core they're responsible for helping you verify that one specific piece of your system's functionality behaves as expected.
Performance Tests
Performance testing is a broad umbrella encompassing a number of different types of tests. Performance, load, stress, failover, soak, endurance, and other terms all come into play. I wrote a somewhat longish post on my blog at Telerik around these different types. You can go read that for more detail, but here's the CliffsNotes version:
- Performance: Measures the performance of one specific functionality slice, such as saving a user to the database. May be from the user interface down, or may be some slice of internal functionality.
- Load: Checks the system’s performance while ramping up the load on the system by hitting it with simultaneous connections from some source. Measures performance of a scenario under real-world loading. (There's a toy sketch of this idea just after this list.)
- Stress: Can also be called failover testing. Can also be called “Throw an insane amount of traffic at the system, then throw more traffic at it until something breaks.” Less flippantly, the idea is to ramp up traffic and load on your system until it degrades or outright fails. The goal is to understand what the maximum usable load is, and what happens when that’s exceeded.
- Endurance: Systems behave differently under load over a long period. Things like subtle memory leaks show up which you might not otherwise find.
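Dedicated tools such as JMeter or Gatling do far more than this, but as a toy sketch of the load idea in Python: ramp up simultaneous connections against one scenario and record response times. The endpoint, user counts, and percentile choices are all invented for illustration.

```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests  # assumption: the scenario under load is a plain HTTP GET


def timed_request(url):
    start = time.perf_counter()
    requests.get(url)
    return time.perf_counter() - start


def run_load(url, concurrent_users, requests_per_user):
    # Fire simultaneous connections from a thread pool and collect each response time.
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        timings = list(
            pool.map(timed_request, [url] * concurrent_users * requests_per_user)
        )
    timings.sort()
    return {
        "median": timings[len(timings) // 2],
        "p95": timings[int(len(timings) * 0.95)],
    }


if __name__ == "__main__":
    print(run_load("http://localhost:8080/api/users",
                   concurrent_users=25, requests_per_user=4))
```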
Security Tests
Automated security tests look for regressions in protections against cross-site scripting, SQL injection, and the numerous other (and really scary) security-related threats.
Tools for automating these sorts of checks are quite specialized, and I don’t pretend to have a great grasp on them. Instead of possibly leading you astray, I’ll just recommend you look elsewhere for this information.
Each Type of Test Has Its Place
You could make the case there are other types of automated testing, but I think these categories above cover the most important types. What’s critical is that you understand the basic fundamentals of each type and its role.
(Note that I didn’t discuss testing methodology such as test-driven development, behavior-driven development, acceptance test driven development, etc. Those are methodologies which use the types of tests I described above.)
I’m a firm believer that you absolutely need all of these types of tests working together. I’ve seen far too many instances where a developer or team focused on getting unit tests rolling, only to find bugs that would have been caught at the integration or functional level. Performance testing may not be required on every project, but I’d make the case it’s valuable to have at least a few metrics around performance, simply to ensure you’re not losing ground on user experience as you continue to evolve the system.
Know your test types. Use them in the right places.
4 comments:
I would also add a type of test that I call an Acceptance Test, which works under the UI but above the services. This test layer addresses the expected functionality of the system and is still fast (because you stub out the file system, web, database, system clock, etc.), so all you test is the function of your application (which should be defined by acceptance criteria, hence the name).
@Roman: I think that's an interesting approach, but I'd disagree that you're truly checking functionality. You've said you're stubbing out various components, so there's no way to validate they function as expected. Moreover, you're below the UI layer, so you're missing critical aspects of that as well.
Don't get me wrong: I agree there's value with an approach like that, but I wouldn't call it "Acceptance" or say it checks functionality. You're missing too much there, IMO.
@Jim Holmes: I do not disagree. 95% of the application is in that "core" part covered by acceptance tests (as I define them). I agree that end-to-end automated testing is required as well, but such end-to-end testing only needs to test your connection assumptions. End-to-end testing is what I would call an integration test. It verifies the connection of your core logic with external components, where the external components are: disk, web, db, UI, clock. The big point is that the UI is conceptually no different from web access or reading a system clock, from an integration-test standpoint. Hexagonal architecture is a very good demonstration of this concept. By lumping all integration points into an end-to-end test suite you have only one group of slow tests.
But as always in this discipline - "it depends".
I was going to ask if you distinguished between functional tests and UI/GUI tests. Based on the chat with Roman, you apparently keep them in the same family.
On a large client project, we did no GUI testing (didn't have a viable solution to test through the UI), but did have a large functional test suite that hit all the functions just below the UI. With very little logic in the UI, we covered a lot of our functionality this way, and had a very robust set of tests hammering away at the whole app. (Minus the UI, of course.)
In a prior post you mentioned that "you can't test it all" which is exactly what led us to that point. The cost of getting GUI tests in place was going to far outweigh their value in that project.
One other comment on this post: I'm a HUGE fan of isolation frameworks for unit testing. However, I've seen a lot of people start with the isolation framework and not really understand what's going on "under the hood." It's usually well worth the time to step back with the folks new to mocking/stubbing and show them a few hand-rolled mocks/stubs using interfaces in a static language, or some class overriding in a dynamic language. There's a whole heck of a lot of "magic" going on that's pretty easy to explain when you take the time. (Always take the time! :) )
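As a rough illustration of what that hand-rolled version can look like in a dynamic language (Python here, with every name invented purely for the example), a stub is often nothing more than a tiny class with a canned answer:

```python
# Hand-rolled stub: no isolation framework, just a small class standing in
# for the real dependency. PayrollService and StubRateRepository are invented
# for illustration only.
class PayrollService:
    def __init__(self, rate_repository):
        self._rates = rate_repository

    def pay_for(self, worker_id, hours_worked):
        return hours_worked * self._rates.hourly_rate_for(worker_id)


class StubRateRepository:
    """Replaces the real database-backed repository with a fixed rate."""

    def hourly_rate_for(self, worker_id):
        return 20


def test_pay_with_hand_rolled_stub():
    service = PayrollService(StubRateRepository())
    assert service.pay_for(worker_id=7, hours_worked=10) == 200
```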
Great series so far, Jim.