Written by: T.B. Noor and H. Hemmati. IEEE International Symposium on Software Reliability Engineering (ISSRE), 2015.
Test case prioritization is a crucial element of
software quality assurance in practice, especially in the context
of regression testing. Typically, test cases are prioritized so
that they detect potential faults earlier. The effectiveness of
test cases, in terms of fault detection, is estimated using quality
metrics, such as code coverage, size, and historical fault detection.
Prior studies have shown that previously failing test cases are
highly likely to fail again in subsequent releases; therefore, they are
ranked high during prioritization. However, in practice, a failing
test case may not be identical to a previously failed test
case, but quite similar, e.g., when the new failing test is a slightly
modified version of an old failing one, written to catch a previously
undetected fault. In this paper, we define a class of metrics that estimate
test case quality using similarity to previously
failing test cases. We have conducted several experiments on
five real-world open-source software systems, with real faults,
to evaluate the effectiveness of these quality metrics. The results
of our study show that our proposed similarity-based quality
measure is significantly more effective for prioritizing test cases
than existing test case quality measures.
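The core idea of similarity-based prioritization can be sketched in a few lines: rank current test cases by how similar they are to previously failing test cases. The sketch below is a minimal illustration under assumed choices (word-level tokenization and Jaccard similarity over test source code); it is not the paper's exact metric, and all function names are hypothetical.

```python
import re


def tokens(test_source):
    """Split a test case's source into a set of identifier-like tokens
    (an assumed, simple tokenization for illustration)."""
    return set(re.findall(r"[A-Za-z_]\w*", test_source))


def jaccard(a, b):
    """Jaccard similarity between two token sets."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)


def prioritize(current_tests, past_failing_tests):
    """Return current test names sorted by descending maximum
    similarity to any previously failing test case.

    Both arguments map test names to their source text."""
    failing_tokens = [tokens(src) for src in past_failing_tests.values()]

    def score(item):
        _, src = item
        t = tokens(src)
        # A test's quality estimate: its best match among past failures.
        return max((jaccard(t, f) for f in failing_tokens), default=0.0)

    ranked = sorted(current_tests.items(), key=score, reverse=True)
    return [name for name, _ in ranked]
```

For example, a slightly modified copy of an old failing test shares most of its tokens with that failure and is therefore ranked ahead of an unrelated test, which is exactly the scenario the abstract describes.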