Testing Tool (Architecture)
Test system architecture
- separate system for executing tests
- system is only responsible for running tests, without determining whether a test fails or not -> compiling and launching the system is done by the tool, which also acts as the oracle
- tests are compiled up to degree 3 in the development system, so the user can edit tests like normal classes, but they do not affect the resulting binary
- tests referenced by the root class of the development system are compiled anyway; this way we can also run tests in the debugger
- in CDD: the test system is an implicit target (does not have to be in the ecf) which inherits from the development target; the test target simply defines a new root class/feature (see the sketch after this list)
- re-use the development EIFGEN (a copy) so the test system does not need to compile from scratch?
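To make the root class/feature point above concrete, the test target could add a root class as small as the following sketch. The class name TEST_TARGET_ROOT and the feature run_all_tests are invented for this example; they are not names the tool actually generates.

  class
      TEST_TARGET_ROOT

  create
      make

  feature {NONE} -- Initialization

      make
              -- Entry point of the test target: hand control to the
              -- test execution machinery instead of the application itself.
          do
              run_all_tests
          end

  feature {NONE} -- Implementation

      run_all_tests
              -- Placeholder for the generated code that would invoke
              -- the selected test routines.
          do
              print ("Executing tests...%N")
          end

  end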
EiffelStudio <-----> Testing tool <-----> Test executor

EiffelStudio:
- Show tests in system
- Show test results
- Provide test creation wizards
- Interface for CDD, Auto Test, creating manual tests, running tests
- provide ESF service for testing/test results

Testing tool:
- can be part of EiffelStudio or compiled as a separate tool to be used e.g. through the console
- compile test executor
- distribute test executors to different machines
- schedule test execution
- provide test results
- find all tests for a given ecf file
- write root class for test executor

Test executor:
- execute tests in a safe environment (the executor is allowed to crash)

CDD:
- implemented partially in debugger/executable
- should be part of any Eiffel application, so that tests can be created when submitting bugs
- extraction can be initiated through the debugger, breakpoints, the failure window, etc.

Auto Test:
- separate tool, interface in EiffelStudio
Provide testing as a service in EiffelStudio
Note: Interface classes can be found here: https://svn.eiffel.com/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/
Using ESS, we can provide all testing functionality as a service within EiffelStudio. That way, other tools can make use of this testing functionality, and the tool does not have to access the implementation directly. What follows is a short description of the interfaces created so far.
So far the service consists of three major parts: the test suite storing all tests, test execution, and test creation. The service already includes more than 20 interface classes, so it will be important to find a good abstraction. Another aspect is that some parts of the service should be extensible: clients should be able to define new types of tests, executors or factories.
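To make "testing as a service" slightly more concrete, a client would first ask the service mechanism for the test suite and only proceed if it is available. The sketch below assumes a feature test_suite_service returning the service (or Void) and a feature tests on TEST_SUITE_S; both are placeholders for the real ESS lookup and interface.

  report_test_count
          -- Print how many tests the test suite currently knows about.
          -- `test_suite_service' and `tests' are illustrative placeholders.
      do
          if attached test_suite_service as l_suite then
              print (l_suite.tests.count.out + " tests in system%N")
          else
              print ("Testing service not available%N")
          end
      end

  test_suite_service: detachable TEST_SUITE_S
          -- Placeholder for the actual ESS lookup of the testing service.
      do
          -- In a real client this would query the service container.
      end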
EIFFEL_TEST_SUITE_S
The test suite is the first instance of the service. It holds the list of all tests in the system and controls all test execution. Right now the service has the restriction that only one executor can run at a time. Although there may be no reason against having two executors running in parallel, the restriction makes observing the execution of tests much simpler. Factories, on the other hand, can be launched by anyone and therefore run in parallel. In that case clients are usually only interested in when a new test is created, for which events already exist in the test suite (see below).
Changes in the test suite can be observed, so clients can be notified when tests are added, removed or modified. There are also events for activating or deactivating an executor in the test suite.
The test suite also provides two registrars where new executors or factories can be registered. Later, clients can query whether a certain executor/factory is available and use it if so. More on executors and factories later.
- TEST_SUITE_S: https://svn.eiffel.com/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/test_suite_s.e
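The registrar interaction could then look roughly like the following sketch; executor_registrar, is_registered and register are names assumed for the example and may not match the actual TEST_SUITE_S features.

  register_executor (a_suite: TEST_SUITE_S; a_executor: TEST_EXECUTOR_I)
          -- Make `a_executor' available through `a_suite' unless an
          -- equivalent executor is already registered.
          -- Feature names on the registrar are illustrative assumptions.
      do
          if not a_suite.executor_registrar.is_registered (a_executor) then
              a_suite.executor_registrar.register (a_executor)
          end
      end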
EIFFEL_TEST_I
EIFFEL_TEST_I is the common test representation. EIFFEL_TEST_I inherits from a class TAGABLE_I, which means that all tests have a list of tags represented as strings (see the Tags section of the specifications). This allows us to keep commonly used functionality, such as filtering, in the service itself (see TAG_BASED_FILTERED_COLLECTION). It also enables users to introduce their own attributes for tests.
EIFFEL_TEST_I points to the abstract syntax representation of its routine and of the class in which the routine is located. This is useful to the implementation but could also be useful to clients. However, implementation-wise, all relevant information should be accessible (such as the feature name and the tags in the indexing clause).
All tests have a list of outcomes from previous execution sessions. More on that is explained in the next section.
- TEST_I: https://svn.eiffel.com/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/item/test_i.e
- EIFFEL_TEST_I: https://svn.eiffel.com/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/item/eiffel_test_i.e
- TAGABLE_I: https://svn.eiffel.com/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/support/tagable_i.e
- FILTERED_COLLECTION_I: https://svn.eiffel.com/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/support/filtered_collection_i.e
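As an example of where such tags typically come from, an Eiffel test routine can carry them in its class note clause. The sketch below uses the conventions of the current EiffelStudio testing library (EQA_TEST_SET and "covers/..." tags), which may differ from what this design originally targeted; the class and tag values are invented for illustration.

  note
      description: "Example test class whose tags would end up in EIFFEL_TEST_I items."
      testing: "covers/{BANK_ACCOUNT}.deposit", "execution/isolated"

  class
      TEST_BANK_ACCOUNT

  inherit
      EQA_TEST_SET

  feature -- Tests

      test_deposit
              -- Hypothetical test routine; the testing tool reads the tags
              -- above and attaches them to the corresponding test item.
          do
              assert ("example_holds", 1 + 1 = 2)
          end

  end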
TEST_EXECUTOR_I
This is a general interface for executing tests. It takes a list of tests and executes each of them. One restriction it imposes on its implementers is that execution is non-blocking. This means that `run' returns immediately and all tests are executed asynchronously. This again makes it simpler for clients to use (especially graphical UIs).
All state changes of TEST_EXECUTOR_I can be observed by inheriting TEST_EXECUTOR_OBSERVER and connecting to the executor.
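A client interested in executor progress would then look roughly like the sketch below; the event feature name on_test_executed is a placeholder, since the actual deferred features of TEST_EXECUTOR_OBSERVER are not listed here.

  class
      EXECUTION_MONITOR

  inherit
      TEST_EXECUTOR_OBSERVER

  feature -- Events

      on_test_executed (a_test: TEST_I)
              -- Placeholder event: called once `a_test' has been executed.
              -- The real observer interface may use different names and arguments.
          do
              print ("A test finished executing%N")
          end

  end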
As mentioned above, TEST_I keeps a list of outcomes produced by TEST_EXECUTOR_I. In the case of EIFFEL_TEST_I the list contains items of type EIFFEL_TEST_OUTCOME_I. Each outcome points to EIFFEL_TEST_ROUTINE_INVOCATION_RESPONSE_I objects, each of which describes one stage of a test execution. The three stages are setup, test and tear down; "test" simply means calling the actual testing routine. Based on the responses of each stage, EIFFEL_TEST_OUTCOME_I determines whether a test passes or fails. In cases where this cannot be determined because the execution ran unexpectedly, the outcome is flagged as unresolved. In that case the test needs to be inspected, which is expressed as `is_maintenance_required' in EIFFEL_TEST_OUTCOME_I.
- TEST_EXECUTOR_I: https://svn.eiffel.com/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/execution/test_executor_i.e
- TEST_EXECUTOR_OBSERVER: https://svn.eiffel.com/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/execution/test_executor_observer.e
- TEST_OUTCOME_I: https://svn.eiffel.com/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/execution/test_outcome_i.e
- EIFFEL_TEST_OUTCOME_I: https://svn.eiffel.com/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/execution/eiffel_test_outcome_i.e
- EIFFEL_TEST_ROUTINE_INVOCATION_RESPONSE_I: https://svn.eiffel.com/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/execution/eiffel_test_routine_invocation_response_i.e
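The decision logic described above could be summarized roughly as follows; the feature names on the outcome and its stage responses (setup_response, test_response, tear_down_response, is_exceptional) are assumptions made for this sketch.

  outcome_status (a_outcome: EIFFEL_TEST_OUTCOME_I): STRING
          -- Rough classification of `a_outcome' as pass, fail or unresolved.
          -- All feature names used on `a_outcome' are illustrative assumptions.
      do
          if a_outcome.setup_response.is_exceptional or
              a_outcome.tear_down_response.is_exceptional
          then
              -- Execution did not run as expected: the test needs inspection
              -- (`is_maintenance_required' would be True).
              Result := "unresolved"
          elseif a_outcome.test_response.is_exceptional then
              Result := "fail"
          else
              Result := "pass"
          end
      end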
TEST_FACTORY_I
Factories are similar to executors in that they are registered in the test suite and, once triggered, run asynchronously. A test factory takes a TEST_CONFIGURATION_I, which describes the properties of a new test. There is a specialized version, EIFFEL_TEST_CONFIGURATION_I, for Eiffel tests (including class names, location, and the features and classes being tested by the new test). So far the notification is kept simple by providing a callback function to the run routine of the factory. This is because clients will be notified anyway when a new test is added to the system through the test suite.
This pattern should also be valid for test generation and extraction (Auto Test/CDD), where the factory might not create a single test but multiple ones.
- TEST_FACTORY_I: https://svn.eiffel.com/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/factory/test_factory_i.e
- TEST_CONFIGURATION_I: https://svn.eiffel.com/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/factory/test_configuration_i.e
- EIFFEL_TEST_CONFIGURATION_I: https://svn.eiffel.com/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/factory/eiffel_test_configuration_i.e
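Triggering a factory might then look like the following sketch, where run takes the configuration and a callback agent; both the signature and the callback are assumptions made for illustration, not the actual TEST_FACTORY_I interface.

  request_new_test (a_factory: TEST_FACTORY_I; a_config: EIFFEL_TEST_CONFIGURATION_I)
          -- Ask `a_factory' to create the test described by `a_config'.
          -- The `run' signature with a callback agent is an assumed shape
          -- of the interface, used here only as an example.
      do
          a_factory.run (a_config, agent on_test_created)
      end

  on_test_created (a_test: TEST_I)
          -- Callback once the new test exists; most clients would instead
          -- react to the test suite's "test added" event.
      do
          print ("New test created%N")
      end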
Communication between tool and test executor
Protocol
From tool to executor
- name(s) of test to execute
- quit
From executor to tool
- test result
- text output produced by test
- exception details (type, tag, feature, class? occurred during set up, test, tear down?)
- call stack for exception
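Purely as an illustration of the message flow (the wire format is still an open question below), a line-based session between tool and executor might look like this; every keyword and test name shown is invented for the example:

  tool -> executor:  EXECUTE test_deposit
  executor -> tool:  OUTPUT "depositing 100"
  executor -> tool:  RESULT test_deposit PASS
  tool -> executor:  EXECUTE test_withdraw
  executor -> tool:  EXCEPTION test_withdraw precondition_violation {BANK_ACCOUNT}.withdraw (stage: test)
  executor -> tool:  CALL_STACK ...
  executor -> tool:  RESULT test_withdraw FAIL
  tool -> executor:  QUIT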
Open questions
- executor per machine/processor?
- text-based/object-based communication?