I’ve noticed a disparity in the level of detail and the scope of test cases written by different engineers on the QA teams I have been a part of, and I thought it would be interesting to explore this topic. I would be very interested in hearing from other QA engineers about their approach and personal standards for writing a test case.
Given that this is perhaps the most quintessential of the job duties we share, I find it surprising that we don’t have a set standard, or at least some best practices, for writing a test case. I should mention that in all of the organizations I have worked for, I have never seen a standard or best-practices document for writing a test case. This article outlines my general approach.
I suppose my approach is heavily influenced by my background in both development and automation. I usually keep my test cases isolated to a single feature, or to a flow through a feature that focuses on a single area of concern. I do this for a number of reasons; chief among them is that a failed test case should assist in defect localization. In other words, if a test case fails, I should be able to tie that failure to a single bug. Another reason is to avoid masking deeper or tangential issues behind other failures.
My approach to writing a test case can be summarized in four phases, also known as the Four-Phase Test pattern (described by Gerard Meszaros in xUnit Test Patterns). The four phases are: setting up the fixture (fresh or shared), exercising the system under test, verifying the result, and tearing the fixture down.
- Set up the Fixture: This is the “before” picture required for the system under test to exhibit the expected behavior, as well as anything you need to put in place to be able to observe the actual outcome. The two most common patterns here are the Fresh Fixture and the Shared Fixture.
- Exercise: This is when we interact with the system to generate the output we will verify.
- Verification: In this step we verify whether the expected outcome has been obtained.
- Tear Down: We tear down the test fixture to put the world back into the state in which we found it.
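The four phases above map directly onto the structure of an automated test. Here is a minimal sketch using Python’s `unittest`, with a hypothetical `InMemoryStore` standing in as the system under test:

```python
import unittest


class InMemoryStore:
    """Hypothetical system under test: a tiny key-value store."""

    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)

    def clear(self):
        self._data.clear()


class TestStorePut(unittest.TestCase):
    def setUp(self):
        # Phase 1: set up a fresh fixture before each test
        self.store = InMemoryStore()

    def test_put_then_get_returns_value(self):
        # Phase 2: exercise the system under test
        self.store.put("user", "alice")
        # Phase 3: verify the expected outcome
        self.assertEqual(self.store.get("user"), "alice")

    def tearDown(self):
        # Phase 4: tear down, returning the world to a known state
        self.store.clear()


if __name__ == "__main__":
    unittest.main()
```

Note how each phase occupies its own, clearly delimited spot in the test; a manual test case written to the same structure is just as easy to follow.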
This pattern of test case creation also fits smoothly with automation, which is one of my primary drivers for using it in most of the test cases I create.
Setting up the fixture is usually a common set of steps that can easily be copied to multiple test cases. These steps are important to list for every test, even if it feels redundant, since another goal is to provide enough detail that the test case can be repeated. Once the test case is written, we should be able to hand it off to another engineer for execution (assuming that engineer has sufficient product knowledge).
The exercise and verification steps go hand in hand. This is the real meat of the test case and the area we are most concerned with. These steps should only be as complicated as needed to exercise the focused area under test. Every attempt should be made to keep them atomic so that defect masking doesn’t occur. If we mask issues, we delay defect discovery until subsequent test executions, which causes schedule slippage and induces more project risk.
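Keeping the exercise/verify steps atomic usually means one concern per test case. A small illustration, with two hypothetical functions under test: if validation breaks, the formatting test still runs, so a formatting defect is not masked behind the validation failure.

```python
def validate_username(name):
    """Hypothetical function under test: alphanumeric, 3-12 characters."""
    return name.isalnum() and 3 <= len(name) <= 12


def format_greeting(name):
    """Hypothetical second concern, verified by its own test case."""
    return f"Hello, {name}!"


def test_username_validation():
    # Exercise and verify a single concern: validation rules only
    assert validate_username("alice")
    assert not validate_username("x")


def test_greeting_format():
    # A separate concern gets a separate, atomic test case
    assert format_greeting("alice") == "Hello, alice!"
```

Had both concerns been folded into one test, a failure in the first assertion would stop execution and hide the state of the second.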
The final phase is the tear-down and clean-up. From a repeatability standpoint, every change to a complicated system results in a new system, which can lead to instability. In other words, we must get the system back into a known state after each test to ensure that defects are repeatable and not an effect of previous actions. This step is often overlooked but is very important.
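In automated tests, the clean-up should run even when the verification fails, so one failing test doesn’t leave debris that corrupts the next. A sketch using `unittest`’s `addCleanup`, which registers clean-up actions that run regardless of the test’s outcome:

```python
import os
import tempfile
import unittest


class TestWithGuaranteedCleanup(unittest.TestCase):
    def test_writes_temp_file(self):
        # Set up: create a temporary file the test will dirty
        fd, path = tempfile.mkstemp()
        os.close(fd)
        # Register clean-up immediately; it runs even if the
        # verification below fails, restoring a known state
        self.addCleanup(os.remove, path)

        # Exercise: write to the file
        with open(path, "w") as f:
            f.write("data")

        # Verify: the written content is readable
        with open(path) as f:
            self.assertEqual(f.read(), "data")
```

The same discipline applies to manual test cases: the clean-up steps belong in the written procedure, not left to the executing engineer’s memory.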
To summarize, I feel there are four key aspects of a good test case, as listed below:
- Defect localization: a failed test case should point to a single bug
- Atomicity: keep the steps focused to avoid masking other issues
- Repeatability: provide enough detail that another engineer can execute the test
- Clean-up: restore the system to a known state after each test
What is your general approach to writing an effective test case?