Eiffel Mutation Testing Tool

Overview

The overall goals of the Eiffel mutation testing tool are:

  • Defining a set of efficient Eiffel mutation operators
  • Implementing these mutation operators
  • Executing test cases on mutants to evaluate their quality

This tool is mainly used to evaluate the quality of test strategies, in particular their fault detectability and redundancy.
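
As an illustration of what such a mutation operator does, the sketch below shows a relational operator replacement (ROR) mutation; the feature can_withdraw is a hypothetical example, not taken from the tool:

  -- Original feature
  can_withdraw (an_amount: INTEGER): BOOLEAN
      do
          Result := balance >= an_amount
      end

  -- ROR mutant: ">=" replaced by ">"
  can_withdraw (an_amount: INTEGER): BOOLEAN
      do
          Result := balance > an_amount
      end

A test suite kills this mutant only if it contains a test where balance = an_amount, which is exactly the boundary case one wants covered.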

Goals

We aim to implement a completely automatic Eiffel mutation testing tool.

Steps

  • Instrument the Eiffel source code to generate a mutant schema file
  • Create mutant objects from the mutant schema file and execute test cases on the mutants to calculate the mutation score of the test cases
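
To make the second step concrete, here is a minimal hand-written sketch of what an instrumented mutant schema class could look like. The class BANK_ACCOUNT_MUTANT, its features and the two chosen mutations are hypothetical and only serve as an illustration: all mutants of a feature are merged into one class, and the mutant_id attribute selects at run time which mutation is active (0 means original behaviour).

  class BANK_ACCOUNT_MUTANT

  feature -- Access

      balance: INTEGER
              -- Current balance

      mutant_id: INTEGER
              -- Selects the active mutant; 0 means original behaviour

  feature -- Element change

      set_mutant_id (an_id: INTEGER)
              -- Activate mutant `an_id'.
          do
              mutant_id := an_id
          end

      deposit (an_amount: INTEGER)
              -- Add `an_amount' to `balance'.
          do
              inspect mutant_id
              when 1 then
                  -- AOR mutant: "+" replaced by "-"
                  balance := balance - an_amount
              when 2 then
                  -- AOR mutant: "+" replaced by "*"
                  balance := balance * an_amount
              else
                  -- Original statement
                  balance := balance + an_amount
              end
          end

  end

With this encoding the schema class only has to be compiled once; instantiating a particular mutant amounts to creating an object and calling set_mutant_id.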

Instrumentation

Evaluation of different mutation insertion methods

As discussed, there is more than one possible method of inserting mutations into the source code of a class. The first approach we considered is compiling the class under test up to degree 3 and then using a modified ET_AST_PRINTER to change the features and print the new mutation schema class to a file. There are, of course, several advantages and disadvantages to this approach:

Plus:

  • Very generic approach, can be done for structural and behavioral mutations
  • Clean interface between mutant schema generation and mutant instantiation
  • Rather easy to implement, maintain and extend

Minus:

  • Slow, because the class under test first has to be parsed and compiled (up to degree 3) and the mutant schema then has to be written to a file. In the instantiation stage, the mutant schema has to be parsed and compiled again.


The second approach would be to change the Eiffel parser (built with geyacc) that is used to parse the classes under test, and to insert mutations while parsing.

Plus:

  • Rather fast, since only parsing has to be done for both the class under test and the mutation schema (this could be optimised further by using the parsed class as the interface between instrumentation and instantiation, rather than passing a written file)
  • Clean interface (if the optimisation mentioned above is not applied)
  • Implementation not easy, but straightforward

Minus:

  • This method only works for behavioral mutations; structural changes cannot be made while parsing
  • Maintainability and extendibility are rather limited


As a third possibility, Silvio and I discussed changing the interface between instrumentation and instantiation in such a way that (based on the first method) we only have to compile once for every class under test. This would mean that the instantiation part works directly on the AST that is generated and mutated during instrumentation.

Plus:

  • Behavioral and structural mutations can be done, thus quite generic
  • Rather fast
  • Implementation of instrumentation rather easy (same as in method one)
  • Extendible solution

Minus:

  • No clean interface between instrumentation and instantiation
  • Instantiation harder to implement
  • Maintainability depends heavily on changes in the compiler (as long as the AST representation isn't changed it would be rather good)


Since, in our (Silvio's and my) opinion, reliability, extendibility and flexibility are more important than speed (although speed is of course important), we will focus on the first approach, which gives us two almost completely independent components that can easily be exchanged and extended.
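
Under this decision, the interface between the two components can be kept as small as a single feature that takes a class and returns the written schema file. The deferred class below is a hypothetical illustration of that interface, not the actual implementation:

  deferred class MUTANT_SCHEMA_GENERATOR

  feature

      generate (a_class_name: STRING): STRING
              -- Parse and compile class `a_class_name' (up to degree 3),
              -- insert all mutations using the modified ET_AST_PRINTER,
              -- and return the path of the written mutant schema file.
          deferred
          ensure
              file_written: Result /= Void and then not Result.is_empty
          end

  end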

Execution

Overview

The purpose of the "Execution" step of the mutation testing tool is to run auto-test with the mutated objects and to compare the output of these mutants to the output of an "original" object.

Design

The two ways in which auto-test tests classes (manual and random) had to be treated separately.

Manual

These are the steps needed in general to test the manual test cases:

  1. First run the original test case to obtain the results of the original, unmutated class.
  2. Change the attribute identifying the class under test to an instantiated object of the mutated class.
  3. Set the mutant_id value of this class to 1.
  4. Run this version of the test case to obtain the "mutated" results.
  5. Compare these results to the ones obtained from the first run, to check whether they differ and therefore whether the mutant can be detected by the test case.
  6. Repeat steps 3 to 5 with the other possible values of the mutant_id (a sketch of this loop follows below).
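
A minimal sketch of this loop, reusing the hypothetical BANK_ACCOUNT_MUTANT schema from the Steps section; for brevity it drives the class under test directly instead of going through auto-test, and the class MUTANT_DRIVER and its features are illustrative assumptions:

  class MUTANT_DRIVER

  create
      make

  feature

      make
              -- Run the deposit test against the original and all mutants.
          local
              l_original, l_mutated: INTEGER
              i: INTEGER
          do
              -- Step 1: obtain the result of the original (mutant_id = 0).
              l_original := deposit_result (0)
              from
                  i := 1
              until
                  i > mutant_count
              loop
                  -- Steps 3 and 4: select a mutant and rerun the test.
                  l_mutated := deposit_result (i)
                  -- Step 5: a differing result means the mutant is detected.
                  if l_mutated /= l_original then
                      print ("Mutant " + i.out + " detected%N")
                  end
                  -- Step 6: continue with the next mutant_id.
                  i := i + 1
              end
          end

      deposit_result (a_mutant_id: INTEGER): INTEGER
              -- Balance after depositing 10 on a fresh object,
              -- with mutant `a_mutant_id' active.
          local
              l_account: BANK_ACCOUNT_MUTANT
          do
              create l_account
              l_account.set_mutant_id (a_mutant_id)
              l_account.deposit (10)
              Result := l_account.balance
          end

      mutant_count: INTEGER = 2
              -- Number of mutants in the schema

  end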

As it was not possible to manipulate the test cases at run time, we decided to instrument not only the classes under test but also the test cases.

First, the type of the class under test has to be changed in the source code of the test case to the mutated type. Second, the test case has to provide a feature to change the value of the mutant_id attribute. To this end, the mutant of the class under test must also provide a feature to change this value.
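
A sketch of what a test class could look like after these two changes; the test class and its test routine are hypothetical:

  class TEST_BANK_ACCOUNT

  feature

      account: BANK_ACCOUNT_MUTANT
              -- Type changed by the instrumentation;
              -- was declared as BANK_ACCOUNT

      mutant_id: INTEGER
              -- Added by the instrumentation

      set_mutant_id (an_id: INTEGER)
              -- Added by the instrumentation: select the active mutant.
          do
              mutant_id := an_id
          end

      test_deposit
              -- Original test routine; only the call propagating
              -- mutant_id to the freshly created object is new.
          do
              create account
              account.set_mutant_id (mutant_id)
              account.deposit (10)
              check balance_updated: account.balance = 10 end
          end

  end

Running test_deposit once per mutant_id and recording which runs produce differing results (or check violations) then implements the comparison of step 5.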

After locating the test classes and branching off from the normal execution of auto-test, we had to call the mutated test case with all possible values of the mutant_id. The figure below gives an overview of the execution of the mutated manual test cases.

[[Image:Manual.JPG]]

Random

The random case was much easier to implement. Nevertheless, it was harder to understand the functionality of this part of auto-test.

As there are no (manually written) test cases, only the mutated classes under test are needed. As in the manual case, an instance of a class under test can be switched between different mutants by changing its mutant_id.
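
A sketch of the idea, under the assumption that the same pseudo-random call sequence is replayed for every mutant; this is not how auto-test's random strategy is actually implemented, and the tiny linear congruential generator only serves to keep the example self-contained:

  run_random_sequence (a_seed, a_mutant_id: INTEGER): INTEGER
          -- Replay the same pseudo-random call sequence against
          -- mutant `a_mutant_id' and return the final balance,
          -- which can then be compared across mutants.
      require
          seed_not_negative: a_seed >= 0
      local
          l_account: BANK_ACCOUNT_MUTANT
          l_next, i: INTEGER
      do
          create l_account
          l_account.set_mutant_id (a_mutant_id)
          from
              l_next := a_seed \\ 1000
              i := 1
          until
              i > 100
          loop
              -- The same seed produces the same call sequence
              -- for every mutant, so the results are comparable.
              l_next := (l_next * 31 + 7) \\ 1000
              l_account.deposit (l_next)
              i := i + 1
          end
          Result := l_account.balance
      end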

Problems

One problem is that manually written test cases do not have to fulfil any requirements that restrict how a class under test is instantiated: some test cases declare the class under test globally (as an attribute of the test class), others as a local variable in every feature. Therefore, there are many different cases to handle. Another problem, not yet solved, is the comparison of the results the interpreter returns, especially when exceptions occur.

Milestones

Todo

Team

Philippe Masson

Silvio Kohler