<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>https://dev.eiffel.com/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Arnofiva</id>
		<title>EiffelStudio: an EiffelSoftware project - User contributions [en]</title>
		<link rel="self" type="application/atom+xml" href="https://dev.eiffel.com/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Arnofiva"/>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/Special:Contributions/Arnofiva"/>
		<updated>2026-05-02T11:00:37Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.24.1</generator>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=EiffelStudio_6.7_Releases&amp;diff=13962</id>
		<title>EiffelStudio 6.7 Releases</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=EiffelStudio_6.7_Releases&amp;diff=13962"/>
				<updated>2010-08-20T14:21:14Z</updated>
		
		<summary type="html">&lt;p&gt;Arnofiva: /* User changes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Releases]]__NOTOC__{{ReleaseHistoryHeader}}&lt;br /&gt;
&lt;br /&gt;
= EiffelStudio 6.7.x Releases=&lt;br /&gt;
&lt;br /&gt;
==6.7.x==&lt;br /&gt;
Placeholder for new stuff since last intermediate release.&lt;br /&gt;
===New features===&lt;br /&gt;
===Improvements===&lt;br /&gt;
*compiler: Allowed a qualified anchored type that has a stand-alone type qualifier to be used as the type of a once function (bug#17035, test#anchor058).&lt;br /&gt;
*AutoTest: Test classes in a system are automatically detected and compiled. Executing tests therefore no longer requires a separate compilation.&lt;br /&gt;
&lt;br /&gt;
===Feature removed===&lt;br /&gt;
===Bug fixes===&lt;br /&gt;
*compiler: Fixed incorrect output of generic constraints that involve the same formal generic recursively (bug#16948).&lt;br /&gt;
*compiler: Fixed bug#16949 where a particular order of formal generic constraints could cause an incorrect compiler error report (test#multicon054).&lt;br /&gt;
*compiler: Fixed a compiler crash when a qualified anchored type (QAT) refers to a recursively defined formal generic (bug#16950, test#multicon055).&lt;br /&gt;
*compiler: Fixed bug#16743 that caused a compiler crash when a particular set of keywords is allowed to be used as identifiers under some conditions (test#term186).&lt;br /&gt;
&lt;br /&gt;
===User changes===&lt;br /&gt;
{{Red|AutoTest library: changes to {EQA_TEST_SET} make it simpler to define a custom `asserter' or `file_system'. Removed {EQA_TEST_SET}.run_test which has previously been used by the tool to execute a test routine.}}&lt;br /&gt;
&lt;br /&gt;
===Developer changes===&lt;br /&gt;
&lt;br /&gt;
==6.7.8.4178 (August 16th 2010)==&lt;br /&gt;
===New features===&lt;br /&gt;
===Improvements===&lt;br /&gt;
*compiler: Allowed a qualified anchored type that has a stand-alone type qualifier to be used as the type of a once function (bug#16947, test#anchor050).&lt;br /&gt;
&lt;br /&gt;
===Feature removed===&lt;br /&gt;
===Bug fixes===&lt;br /&gt;
* compiler: Fixed eweasel test#incr366 where an expanded generic type is being used and its expanded status is removed. This completes the fix for test#incr315.&lt;br /&gt;
* compiler: Fixed test#attach077 where VEVI was not properly reported in some cases for uninitialized attributes passed as arguments.&lt;br /&gt;
*compiler: Fixed eweasel test#incr378 for a bug introduced in the previous intermediate release which would break execution of code when touching/modifying some generic classes used in the system.&lt;br /&gt;
&lt;br /&gt;
===User changes===&lt;br /&gt;
===Developer changes===&lt;br /&gt;
&lt;br /&gt;
==6.7.8.4135 (August 10th 2010)==&lt;br /&gt;
===New features===&lt;br /&gt;
* {{Red|compiler: New tracing facility in Eiffel code. Until now, tracing was done at the runtime level by writing some text to the standard output. This new tracing facility lets you execute user-defined Eiffel code at the entry and exit of all routine calls. That way you can better track what is going on without endlessly searching the output.}}&lt;br /&gt;
&lt;br /&gt;
===Improvements===&lt;br /&gt;
===Feature removed===&lt;br /&gt;
===Bug fixes===&lt;br /&gt;
* compiler: Fixed eweasel test#ccomp085. Now the compiler ensures that if you have a C external with a specific include order, it is respected at compile time.&lt;br /&gt;
* compiler: Fixed eweasel test#exec326. Now the compiler properly generates the REAL_32 values for {REAL_32}.min_value and {REAL_32}.max_value.&lt;br /&gt;
* compiler: Fixed eweasel test#valid243, test#svalid028, test#svalid029, test#tuple004, test#freez032 and test#multicon058. The issue was that when performing the type checking of inherited routines using prefix/infix, we were using the old name of the prefix/infix operator instead of the new one. Thus, if the operator was renamed, a spurious compilation error was reported instead of the code being accepted.&lt;br /&gt;
* compiler: Fixed eweasel test#multicon056 and test#multicon057 which prevented the use of objectless calls on formal generic parameters with multiple constraints.&lt;br /&gt;
* compiler: Fixed eweasel test#freez032 showing that the inlining of `.hash_code' was incorrect for .NET and C code generation. Melted was OK because no inlining was done and the Eiffel code was executed.&lt;br /&gt;
* compiler: Fixed the following eweasel tests: test#incr295 test#incr302 test#incr307 test#incr309 test#incr324 test#incr331 test#incr332 test#incr346 test#incr372 test#incr373 test#incr374. The main problem was that we did not properly clean the TYPE_LIST and the FILTER_LISTs of the compiler when types no longer satisfy their constraints. The other issue was with `pattern_id': when removing an entry from PATTERN_TABLE, we only removed it from `info_array' but not from Current, causing `insert' to misbehave.&lt;br /&gt;
* compiler: Fixed test#incr315 where we did not rebuild the parent list of a class when one of its inheritance clauses had its type changed from expanded to non-expanded or vice versa.&lt;br /&gt;
* compiler: Fixed test#incr345: when removing an invariant containing an inline agent from a class that has an error, the inline agent was preserved instead of being removed.&lt;br /&gt;
&lt;br /&gt;
===User changes===&lt;br /&gt;
* {{Red|base: Made {ARRAY}.make obsolete. Now one has to use `make_empty' or `make_filled'.}}&lt;br /&gt;
&lt;br /&gt;
===Developer changes===&lt;br /&gt;
&lt;br /&gt;
==6.7.8.3946 (July 20th 2010)==&lt;br /&gt;
===New features===&lt;br /&gt;
*{{Red|compiler: Added support for gcc on Windows 64-bit, thus removing the need for Microsoft Visual Studio for most types of projects.}}&lt;br /&gt;
* compiler: UTF-8 source code parser&lt;br /&gt;
* compiler: Unicode (STRING_32) manifest string&lt;br /&gt;
* compiler: Unicode free operator&lt;br /&gt;
* encoding: BOM encoding detector for UTF-8&lt;br /&gt;
* encoding: New localized printer which prints Unicode according to the console encoding.&lt;br /&gt;
&lt;br /&gt;
===Improvements===&lt;br /&gt;
* {{Red|store: Prevented C storable from blocking all threads while waiting for data to be read in `retrieved'. Now we wait for the storable type first before blocking all the other runtime threads. This fixes bug#16859.}}&lt;br /&gt;
&lt;br /&gt;
===Feature removed===&lt;br /&gt;
===Bug fixes===&lt;br /&gt;
*compiler: Fixed various crashes and incorrect or missing error reports related to qualified anchored types (bug#16791 (test#anchor011), bug#16792 (test#incr352), bug#16793 (test#incr353), bug#16797 (test#anchor012), bug#16798 (test#anchor013), bug#16799 (test#anchor014), bug#16800 (test#anchor015), bug#16803 (test#anchor016), bug#16804 (test#anchor017), test#anchor018, bug#16819 (test#anchor019), bug#16821 (test#incr354), test#anchor020, test#anchor021, bug#16824 (test#anchor022), test#anchor023, bug#16839 (test#anchor024), test#anchor026, bug#16848 (test#anchor027), bug#16849, bug#16850 (test#attach030), bug#16855 (test#final089), bug#16867 (test#anchor028), bug#16868 (test#anchor029), bug#16876 (test#anchor031), bug#16878 (test#incr356), bug#16879 (test#anchor033), bug#16883 (test#anchor034), bug#16884 (test#anchor035), bug#16885 (test#anchor036), bug#16886 (test#anchor037), bug#16887 (test#anchor038), bug#16889 (test#incr358), bug#16890 (test#anchor040), bug#16893 (test#incr359), bug#16897 (test#anchor041), bug#16899 (test#incr362), bug#16900 (test#incr363), bug#16901 (test#incr364), bug#16902 (test#anchor043), bug#16943 (test#anchor047), bug#16944 (test#anchor048), bug#16945 (test#anchor049), bug#16959 (test#anchor052)).&lt;br /&gt;
*compiler: Fixed eweasel test#exec327 where, while evaluating an assertion, executing the new check ... then ... end instruction would reset some internal flags, causing assertions within assertions to be checked when they should not be.&lt;br /&gt;
*compiler: Prevented a failure in the case where a directory containing Eiffel classes is abruptly removed from disk while being processed at degree 6.&lt;br /&gt;
*compiler: Fixed bug#16815: Feature call on void target in {CLASS_C}.inline-agent#1 of has_stable_attribute in EiffelStudio.&lt;br /&gt;
*compiler: Fixed bug#16795: No error or warning for unknown once key.&lt;br /&gt;
*debugger: Fixed bug#16838: User can bring up multiple breakpoint dialogs on same breakpoint.&lt;br /&gt;
*studio: Fixed bug#16831: Picking and dropping class from Features window into Editor tab clears Features window.&lt;br /&gt;
*compiler: Fixed incremental bugs that involve types anchored to expanded ones (bug#16882 (test#incr357), bug#15825 (test#incr329)).&lt;br /&gt;
*compiler: Fixed bug#16921 that resulted in VEVI error for an attribute of a formal generic type, constrained to an expanded type (see test#attach073).&lt;br /&gt;
*compiler: Fixed issues with conformance checks for formal generics constrained to other formal generics, including conformance to expanded types (see test#conform009, test#conform010, test#valid114).&lt;br /&gt;
*compiler: Disallowed incorrectly accepted empty constraint types, which also fixes some related crashes (bug#15197 (test#term171), bug#16133 (test#multicon052), bug#16908 (test#syntax061)).&lt;br /&gt;
*compiler: Fixed bug#16912: Executing many inherited once-per-object routines crashes in a finalized system.&lt;br /&gt;
*compiler: Fixed an issue that may cause a compiler crash when a formal generic constraint lists a generic derivation that is never created during system execution (see test#term185).&lt;br /&gt;
*compiler: Fixed bug#16970 that caused incorrect code to be generated when using target type NONE in object test with the source of an expanded type (see test#attach076).&lt;br /&gt;
*store: Fixed bug#16946 and eweasel test#store032 where `cid_array' was not properly initialized in a different thread than the main thread.&lt;br /&gt;
*encoding: Fixed bug#16820: Endianness issue on big endian machine.&lt;br /&gt;
*encoding: Fixed bug#16836: Unexpected BOM appearance, caused by inconsistent behavior of iconv on Solaris 9 Sparc and Solaris 10 Sparc.&lt;br /&gt;
*Vision2: Fixed bug#16892: Context menu in editor gone since 6.5!!!!&lt;br /&gt;
&lt;br /&gt;
===User changes===&lt;br /&gt;
===Developer changes===&lt;/div&gt;</summary>
		<author><name>Arnofiva</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=EiffelStudio_6.7_Releases&amp;diff=13961</id>
		<title>EiffelStudio 6.7 Releases</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=EiffelStudio_6.7_Releases&amp;diff=13961"/>
				<updated>2010-08-20T14:16:47Z</updated>
		
		<summary type="html">&lt;p&gt;Arnofiva: /* Improvements */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Releases]]__NOTOC__{{ReleaseHistoryHeader}}&lt;br /&gt;
&lt;br /&gt;
= EiffelStudio 6.7.x Releases=&lt;br /&gt;
&lt;br /&gt;
==6.7.x==&lt;br /&gt;
Placeholder for new stuff since last intermediate release.&lt;br /&gt;
===New features===&lt;br /&gt;
===Improvements===&lt;br /&gt;
*compiler: Allowed a qualified anchored type that has a stand-alone type qualifier to be used as the type of a once function (bug#17035, test#anchor058).&lt;br /&gt;
*AutoTest: Test classes in a system are automatically detected and compiled. Executing tests therefore no longer requires a separate compilation.&lt;br /&gt;
&lt;br /&gt;
===Feature removed===&lt;br /&gt;
===Bug fixes===&lt;br /&gt;
*compiler: Fixed incorrect output of generic constraints that involve the same formal generic recursively (bug#16948).&lt;br /&gt;
*compiler: Fixed bug#16949 where a particular order of formal generic constraints could cause an incorrect compiler error report (test#multicon054).&lt;br /&gt;
*compiler: Fixed a compiler crash when a qualified anchored type (QAT) refers to a recursively defined formal generic (bug#16950, test#multicon055).&lt;br /&gt;
*compiler: Fixed bug#16743 that caused a compiler crash when a particular set of keywords is allowed to be used as identifiers under some conditions (test#term186).&lt;br /&gt;
&lt;br /&gt;
===User changes===&lt;br /&gt;
===Developer changes===&lt;br /&gt;
&lt;br /&gt;
==6.7.8.4178 (August 16th 2010)==&lt;br /&gt;
===New features===&lt;br /&gt;
===Improvements===&lt;br /&gt;
*compiler: Allowed a qualified anchored type that has a stand-alone type qualifier to be used as the type of a once function (bug#16947, test#anchor050).&lt;br /&gt;
&lt;br /&gt;
===Feature removed===&lt;br /&gt;
===Bug fixes===&lt;br /&gt;
* compiler: Fixed eweasel test#incr366 where an expanded generic type is being used and its expanded status is removed. This completes the fix for test#incr315.&lt;br /&gt;
* compiler: Fixed test#attach077 where VEVI was not properly reported in some cases for uninitialized attributes passed as arguments.&lt;br /&gt;
*compiler: Fixed eweasel test#incr378 for a bug introduced in the previous intermediate release which would break execution of code when touching/modifying some generic classes used in the system.&lt;br /&gt;
&lt;br /&gt;
===User changes===&lt;br /&gt;
===Developer changes===&lt;br /&gt;
&lt;br /&gt;
==6.7.8.4135 (August 10th 2010)==&lt;br /&gt;
===New features===&lt;br /&gt;
* {{Red|compiler: New tracing facility in Eiffel code. Until now, tracing was done at the runtime level by writing some text to the standard output. This new tracing facility lets you execute user-defined Eiffel code at the entry and exit of all routine calls. That way you can better track what is going on without endlessly searching the output.}}&lt;br /&gt;
&lt;br /&gt;
===Improvements===&lt;br /&gt;
===Feature removed===&lt;br /&gt;
===Bug fixes===&lt;br /&gt;
* compiler: Fixed eweasel test#ccomp085. Now the compiler ensures that if you have a C external with a specific include order, it is respected at compile time.&lt;br /&gt;
* compiler: Fixed eweasel test#exec326. Now the compiler properly generates the REAL_32 values for {REAL_32}.min_value and {REAL_32}.max_value.&lt;br /&gt;
* compiler: Fixed eweasel test#valid243, test#svalid028, test#svalid029, test#tuple004, test#freez032 and test#multicon058. The issue was that when performing the type checking of inherited routines using prefix/infix, we were using the old name of the prefix/infix operator instead of the new one. Thus, if the operator was renamed, a spurious compilation error was reported instead of the code being accepted.&lt;br /&gt;
* compiler: Fixed eweasel test#multicon056 and test#multicon057 which prevented the use of objectless calls on formal generic parameters with multiple constraints.&lt;br /&gt;
* compiler: Fixed eweasel test#freez032 showing that the inlining of `.hash_code' was incorrect for .NET and C code generation. Melted was OK because no inlining was done and the Eiffel code was executed.&lt;br /&gt;
* compiler: Fixed the following eweasel tests: test#incr295 test#incr302 test#incr307 test#incr309 test#incr324 test#incr331 test#incr332 test#incr346 test#incr372 test#incr373 test#incr374. The main problem was that we did not properly clean the TYPE_LIST and the FILTER_LISTs of the compiler when types no longer satisfy their constraints. The other issue was with `pattern_id': when removing an entry from PATTERN_TABLE, we only removed it from `info_array' but not from Current, causing `insert' to misbehave.&lt;br /&gt;
* compiler: Fixed test#incr315 where we did not rebuild the parent list of a class when one of its inheritance clauses had its type changed from expanded to non-expanded or vice versa.&lt;br /&gt;
* compiler: Fixed test#incr345: when removing an invariant containing an inline agent from a class that has an error, the inline agent was preserved instead of being removed.&lt;br /&gt;
&lt;br /&gt;
===User changes===&lt;br /&gt;
* {{Red|base: Made {ARRAY}.make obsolete. Now one has to use `make_empty' or `make_filled'.}}&lt;br /&gt;
&lt;br /&gt;
===Developer changes===&lt;br /&gt;
&lt;br /&gt;
==6.7.8.3946 (July 20th 2010)==&lt;br /&gt;
===New features===&lt;br /&gt;
*{{Red|compiler: Added support for gcc on Windows 64-bit, thus removing the need for Microsoft Visual Studio for most types of projects.}}&lt;br /&gt;
* compiler: UTF-8 source code parser&lt;br /&gt;
* compiler: Unicode (STRING_32) manifest string&lt;br /&gt;
* compiler: Unicode free operator&lt;br /&gt;
* encoding: BOM encoding detector for UTF-8&lt;br /&gt;
* encoding: New localized printer which prints Unicode according to the console encoding.&lt;br /&gt;
&lt;br /&gt;
===Improvements===&lt;br /&gt;
* {{Red|store: Prevented C storable from blocking all threads while waiting for data to be read in `retrieved'. Now we wait for the storable type first before blocking all the other runtime threads. This fixes bug#16859.}}&lt;br /&gt;
&lt;br /&gt;
===Feature removed===&lt;br /&gt;
===Bug fixes===&lt;br /&gt;
*compiler: Fixed various crashes and incorrect or missing error reports related to qualified anchored types (bug#16791 (test#anchor011), bug#16792 (test#incr352), bug#16793 (test#incr353), bug#16797 (test#anchor012), bug#16798 (test#anchor013), bug#16799 (test#anchor014), bug#16800 (test#anchor015), bug#16803 (test#anchor016), bug#16804 (test#anchor017), test#anchor018, bug#16819 (test#anchor019), bug#16821 (test#incr354), test#anchor020, test#anchor021, bug#16824 (test#anchor022), test#anchor023, bug#16839 (test#anchor024), test#anchor026, bug#16848 (test#anchor027), bug#16849, bug#16850 (test#attach030), bug#16855 (test#final089), bug#16867 (test#anchor028), bug#16868 (test#anchor029), bug#16876 (test#anchor031), bug#16878 (test#incr356), bug#16879 (test#anchor033), bug#16883 (test#anchor034), bug#16884 (test#anchor035), bug#16885 (test#anchor036), bug#16886 (test#anchor037), bug#16887 (test#anchor038), bug#16889 (test#incr358), bug#16890 (test#anchor040), bug#16893 (test#incr359), bug#16897 (test#anchor041), bug#16899 (test#incr362), bug#16900 (test#incr363), bug#16901 (test#incr364), bug#16902 (test#anchor043), bug#16943 (test#anchor047), bug#16944 (test#anchor048), bug#16945 (test#anchor049), bug#16959 (test#anchor052)).&lt;br /&gt;
*compiler: Fixed eweasel test#exec327 where, while evaluating an assertion, executing the new check ... then ... end instruction would reset some internal flags, causing assertions within assertions to be checked when they should not be.&lt;br /&gt;
*compiler: Prevented a failure in the case where a directory containing Eiffel classes is abruptly removed from disk while being processed at degree 6.&lt;br /&gt;
*compiler: Fixed bug#16815: Feature call on void target in {CLASS_C}.inline-agent#1 of has_stable_attribute in EiffelStudio.&lt;br /&gt;
*compiler: Fixed bug#16795: No error or warning for unknown once key.&lt;br /&gt;
*debugger: Fixed bug#16838: User can bring up multiple breakpoint dialogs on same breakpoint.&lt;br /&gt;
*studio: Fixed bug#16831: Picking and dropping class from Features window into Editor tab clears Features window.&lt;br /&gt;
*compiler: Fixed incremental bugs that involve types anchored to expanded ones (bug#16882 (test#incr357), bug#15825 (test#incr329)).&lt;br /&gt;
*compiler: Fixed bug#16921 that resulted in VEVI error for an attribute of a formal generic type, constrained to an expanded type (see test#attach073).&lt;br /&gt;
*compiler: Fixed issues with conformance checks for formal generics constrained to other formal generics, including conformance to expanded types (see test#conform009, test#conform010, test#valid114).&lt;br /&gt;
*compiler: Disallowed incorrectly accepted empty constraint types, which also fixes some related crashes (bug#15197 (test#term171), bug#16133 (test#multicon052), bug#16908 (test#syntax061)).&lt;br /&gt;
*compiler: Fixed bug#16912: Executing many inherited once-per-object routines crashes in a finalized system.&lt;br /&gt;
*compiler: Fixed an issue that may cause a compiler crash when a formal generic constraint lists a generic derivation that is never created during system execution (see test#term185).&lt;br /&gt;
*compiler: Fixed bug#16970 that caused incorrect code to be generated when using target type NONE in object test with the source of an expanded type (see test#attach076).&lt;br /&gt;
*store: Fixed bug#16946 and eweasel test#store032 where `cid_array' was not properly initialized in a different thread than the main thread.&lt;br /&gt;
*encoding: Fixed bug#16820: Endianness issue on big endian machine.&lt;br /&gt;
*encoding: Fixed bug#16836: Unexpected BOM appearance, caused by inconsistent behavior of iconv on Solaris 9 Sparc and Solaris 10 Sparc.&lt;br /&gt;
*Vision2: Fixed bug#16892: Context menu in editor gone since 6.5!!!!&lt;br /&gt;
&lt;br /&gt;
===User changes===&lt;br /&gt;
===Developer changes===&lt;/div&gt;</summary>
		<author><name>Arnofiva</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=Main_Page&amp;diff=13782</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=Main_Page&amp;diff=13782"/>
				<updated>2010-04-22T07:59:36Z</updated>
		
		<summary type="html">&lt;p&gt;Arnofiva: Fixed broken link on main page pointing to EiffelStudio installation&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:General]]__NOTOC__&lt;br /&gt;
&amp;lt;h1 class=&amp;quot;firstHeading&amp;quot;&amp;gt;EiffelStudio Integrated Development Environment&amp;lt;/h1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:EiffelStudioScreenshot.png|thumb|250px|right|EiffelStudio IDE ([http://eiffel.com/products/studio/screenshots.html more screenshots]) ]]&lt;br /&gt;
&lt;br /&gt;
Welcome to the central resource for EiffelStudio developers and contributors.&lt;br /&gt;
==News==&lt;br /&gt;
*''8 December 2009'': EiffelStudio 6.5 is available at http://www.eiffel.com/downloads&lt;br /&gt;
*''August 2009'': Check out [http://www.bertrandmeyer.com Bertrand Meyer's new blog]&lt;br /&gt;
*Join us on IRC at chat.freenode.net #eiffelstudio&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
EiffelStudio is an advanced commercial-grade IDE for the [http://en.wikipedia.org/wiki/Eiffel_programming_language Eiffel programming language]. It is maintained and developed mostly by [http://www.eiffel.com  Eiffel Software] and hosted at the [http://se.inf.ethz.ch/ Chair of Software Engineering] at [http://www.ethz.ch/ ETH Zurich].&lt;br /&gt;
&lt;br /&gt;
On April 5th, 2006, Eiffel Software relicensed the EiffelStudio product under the [[Gnu Public License]]. Eiffel Software still offers a commercial variant. Both versions share the same source code.&lt;br /&gt;
&lt;br /&gt;
EiffelStudio is a full-featured IDE offering the following features, many of them unique:&lt;br /&gt;
&lt;br /&gt;
* Complete compiler for the Eiffel programming language, with Design By Contract (DBC) support and both high compile-time speed and high-performance executables, based on the Melting Ice Technology.&lt;br /&gt;
* Full portability (including graphics) across Windows, MacOS X, Linux, *BSD, Solaris and other operating systems&lt;br /&gt;
* Smart code editor&lt;br /&gt;
* Sophisticated multi-view browsing and viewing facilities&lt;br /&gt;
* Interactive debugger&lt;br /&gt;
* Graphical modeling tool for UML and BON with full roundtrip&lt;br /&gt;
* Refactoring support&lt;br /&gt;
* GUI development tool (EiffelBuild) and fully portable GUI library (EiffelVision)&lt;br /&gt;
* Many other libraries of reusable components.&lt;br /&gt;
&lt;br /&gt;
The Eiffel compiler creates C code that is then handed to a standard C compiler. As a result, Eiffel programs have a run-time performance comparable to those directly written in C or C++, but with the benefits of an advanced object-oriented model and strong typing. EiffelStudio uses a highly efficient compacting garbage collector to free the developer from the burden of memory management.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;If you want to know more about the unique features of Eiffel and EiffelStudio, check out our [[Reasons for using Eiffel]] page.&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{|style=&amp;quot;font-size:80%;&amp;quot; bgcolor=white|&lt;br /&gt;
{| cellspacing=8 width=&amp;quot;100%&amp;quot;&lt;br /&gt;
|- valign=&amp;quot;top&amp;quot;&lt;br /&gt;
|width=&amp;quot;50%&amp;quot; bgcolor=&amp;quot;#f6f9fb&amp;quot; style=&amp;quot;border:1px solid #8f8f8f;padding:0 .5em .5em .5em;&amp;quot;|&lt;br /&gt;
&lt;br /&gt;
== Getting Started ==&lt;br /&gt;
&lt;br /&gt;
* [[Downloads]]&lt;br /&gt;
* [[EiffelStudio 6.5 Releases|Changelog of 6.5 (release branch)]]&lt;br /&gt;
* [http://docs.eiffel.com/book/eiffelstudio/software-installation-eiffelstudio Installing EiffelStudio]&lt;br /&gt;
* [[Compiling Hello World]]&lt;br /&gt;
|width=&amp;quot;50%&amp;quot; bgcolor=&amp;quot;#f6f9fb&amp;quot; style=&amp;quot;border:1px solid #8f8f8f;padding:0 .5em .5em .5em;&amp;quot;|&lt;br /&gt;
&lt;br /&gt;
== Working with EiffelStudio ==&lt;br /&gt;
&lt;br /&gt;
* [[Frequently Asked Questions]]&lt;br /&gt;
* [[Eiffel Glossary]]&lt;br /&gt;
* [[Eiffel Compilation Explained]]&lt;br /&gt;
* [[EiffelStudio Wish List]]&lt;br /&gt;
&lt;br /&gt;
|- valign=&amp;quot;top&amp;quot;&lt;br /&gt;
|width=&amp;quot;50%&amp;quot; bgcolor=&amp;quot;#f6f9fb&amp;quot; style=&amp;quot;border:1px solid #8f8f8f;padding:0 .5em .5em .5em;&amp;quot;|&lt;br /&gt;
== Contributing! ==&lt;br /&gt;
&lt;br /&gt;
* [[:Category:Projects|How to contribute: the Projects page]]&lt;br /&gt;
* [[:Category:Testing|EiffelStudio testing process: you can participate!]]&lt;br /&gt;
* [[EiffelStudio 6.6 Releases|Changelog of latest development version, currently 6.6 (development trunk)]]&lt;br /&gt;
* [[Repository|Getting the source: Subversion repository]]&lt;br /&gt;
* [[Compiling EiffelStudio]]&lt;br /&gt;
* [[:Category:Tools|Developer's tools]]&lt;br /&gt;
* [[Language_Roadmap|Language roadmap]]&lt;br /&gt;
* [[Environment_Roadmap|Environment roadmap]]&lt;br /&gt;
* [[Design_and_coding_rules|Design and coding rules]]&lt;br /&gt;
|width=&amp;quot;50%&amp;quot; bgcolor=&amp;quot;#f6f9fb&amp;quot; style=&amp;quot;border:1px solid #8f8f8f;padding:0 .5em .5em .5em;&amp;quot;|&lt;br /&gt;
&lt;br /&gt;
== Community ==&lt;br /&gt;
&lt;br /&gt;
* [http://www.eiffelroom.org EiffelRoom]&lt;br /&gt;
* [[Spread_the_word|Spread the word]]&lt;br /&gt;
* [[Eiffel Sites and Links]]&lt;br /&gt;
* [[Mailing Lists]]&lt;br /&gt;
* [[:Category:News|News]]&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Arnofiva</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=EiffelStudio_6.5_Releases&amp;diff=13163</id>
		<title>EiffelStudio 6.5 Releases</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=EiffelStudio_6.5_Releases&amp;diff=13163"/>
				<updated>2009-08-17T10:18:50Z</updated>
		
		<summary type="html">&lt;p&gt;Arnofiva: /* Improvements */ more AutoTest notes&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Releases]]__NOTOC__{{ReleaseHistoryHeader}}&lt;br /&gt;
&lt;br /&gt;
= EiffelStudio 6.5.x Releases=&lt;br /&gt;
&lt;br /&gt;
==6.5.x==&lt;br /&gt;
Placeholder for new stuff since last intermediate release. &lt;br /&gt;
===New features===&lt;br /&gt;
&lt;br /&gt;
* {{Red|web_browser: Added Web Browser widget {EV_WEB_BROWSER} (see new &amp;quot;web_browser&amp;quot; library) and example project (see example under $ISE_LIBRARY/examples/web_browser)}}&lt;br /&gt;
* {{Red|studio: GCC and MSC external compilation errors are now displayed in the error list, with linking to source Eiffel feature when the information is available.}}&lt;br /&gt;
&lt;br /&gt;
===Improvements===&lt;br /&gt;
*studio: Improved error messages for VUAR/VJAR/VBAR errors to mention compatibility instead of just conformance.&lt;br /&gt;
*studio: Now double clicking on an ecf file will select the previously compiled target by default.&lt;br /&gt;
*ec: Fixed command line compilation to use the current working directory unless -project_path is specified (restoring pre-ecf behavior)&lt;br /&gt;
*ec: When an ecf file has been specified with -config, the previously compiled target is chosen as the first option in the compilable target list.&lt;br /&gt;
*AutoTest: Improved the tag tree for displaying tests by merging the view and filter boxes into one input field.&lt;br /&gt;
*AutoTest: Potential test classes are traversed asynchronously after each compilation, allowing the user to continue working while tests are found. This also removes the need for special test clusters.&lt;br /&gt;
*AutoTest: Improved the way tests are executed. By tagging a test with &amp;quot;execution/isolated&amp;quot; the test process is restarted before and after the test is executed. By tagging a number of tests with a tag &amp;quot;execution/serial&amp;quot;, the tagged tests are not executed in parallel. {{Red|Because of these changes, test execution and generation might not completely work in the next release (missing test results/output).}}&lt;br /&gt;
*{{Red|AutoTest library: {EQA_TEST_SET}.on_prepare is called during `default_create', which makes it simpler to use attached attributes in void-safe tests.}}&lt;br /&gt;
*AutoTest library: Moved actual test routine invocation into {EQA_TEST_SET}.run_test which can be used to nest the test routine call. This is used for example by Vision2 tests to launch the event loop before calling the test routine.&lt;br /&gt;
*{{Red|Docking library: Made Smart Docking library (including docking library examples) void-safe}}&lt;br /&gt;
&lt;br /&gt;
===Feature removed===&lt;br /&gt;
===Bug fixes===&lt;br /&gt;
*base: Fixed bug#4118 where on .NET `put' had no effect on the actual process environment variables because the required API is only available starting with .NET 2.0, which is now the minimum version we support.&lt;br /&gt;
*{{Red|base: Fixed bug in `copy' from HEAP_PRIORITY_QUEUE which would not do anything because it was simply copying itself, not other.}}&lt;br /&gt;
*{{Red|base: Fixed bug in `remove' from HEAP_PRIORITY_QUEUE which caused the internal structure to be referenced beyond its bounds.}}&lt;br /&gt;
*base: Fixed a bug in {MEMORY}.memory_map which would cause a precondition violation in one of its calls.&lt;br /&gt;
*serialization: Fixed a bug in SED_INDEPENDENT_DESERIALIZER in the experimental branch where we could get an out of bound access because we incorrectly resized a SPECIAL.&lt;br /&gt;
*studio: Fixed bug preventing an output from being selected on Windows.&lt;br /&gt;
*studio: Fixed duplicate output issue when a subsystem activates the outputs tool before showing it.&lt;br /&gt;
*studio: Fixed bug preventing the new features dialog from being displayed when using newer syntax.&lt;br /&gt;
*EiffelVision: Fixed potential crash with tab navigation code when a key press was sent to a widget that is in the process of being unparented.&lt;br /&gt;
*debugger: Fixed bug#16013: now READABLE_STRING_8/32 and all their descendants are displayed as string literals in the debugger tools.&lt;br /&gt;
*compiler: Fixed eweasel test#final084 where the compiler would generate an incorrect type at run-time, causing memory corruption or a general failure.&lt;br /&gt;
*compiler: Fixed eweasel test#final083 where the compiler would crash when inlining certain types of code involving generic classes.&lt;br /&gt;
*compiler: Fixed issues in experimental version when introducing `generating_type: TYPE [like Current]' in ANY which was causing a few eweasel tests. Also add new test for bug found with manifest type (eweasel test#melt097 and test#valid257).&lt;br /&gt;
*base: Added missing `own_from_pointer' in the .NET version of MANAGED_POINTER.&lt;br /&gt;
*studio: Fixed an issue where one could not change an integer-based preference entry in EiffelStudio.&lt;br /&gt;
&lt;br /&gt;
==6.5.7.9743 (July 13th 2009)==&lt;br /&gt;
===New features===&lt;br /&gt;
*base: Added `own_from_pointer' in C_STRING.&lt;br /&gt;
&lt;br /&gt;
===Improvements===&lt;br /&gt;
*studio: Made some bug fixes and improvements to the new output tool.&lt;br /&gt;
*studio: Added a tooltip to the precompilation wizard list to show each item's ecf file path.&lt;br /&gt;
&lt;br /&gt;
===Feature removed===&lt;br /&gt;
===Bug fixes===&lt;br /&gt;
*autotest: Fixed bug#15875 where selecting a test class and choosing run had no effect; now all test cases defined in that class are run.&lt;br /&gt;
*base: Added missing `own_from_pointer' routine in the .NET version of MANAGED_POINTER.&lt;br /&gt;
*base: Made the TYPE class similar to the classic version.&lt;br /&gt;
*studio: Fixed bug#13103: Cannot set Shift+Esc as a shortcut in the preferences dialog.&lt;br /&gt;
*studio: Fixed typo reported by bug#13220 in the Metric tool.&lt;br /&gt;
*compiler: Fixed an issue where one could not debug an application if the `executable_name' specified in the ECF contained the .exe suffix on Windows. This fixes bug#11834.&lt;br /&gt;
&lt;br /&gt;
===User changes===&lt;br /&gt;
*compiler: the indexing value '''volatile''' previously introduced has been renamed to '''transient'''.&lt;br /&gt;
&lt;br /&gt;
===Developer changes===&lt;br /&gt;
&lt;br /&gt;
==6.5.7.9500==&lt;br /&gt;
===New features===&lt;br /&gt;
*{{Red|base: Changed {ANY}.generating_type to return an instance of TYPE}}&lt;br /&gt;
*{{Red|runtime: Added support for transient attributes in store/retrieve. A transient attribute is an attribute that is not stored by the runtime and whose absence at retrieval time has no effect.}}&lt;br /&gt;
* compiler: Supported detection and validity error reporting for VSRP(3) (the root procedure is not precondition-free) (see test#vsrp301).&lt;br /&gt;
&lt;br /&gt;
===Improvements===&lt;br /&gt;
===Feature removed===&lt;br /&gt;
===Bug fixes===&lt;br /&gt;
===User changes===&lt;br /&gt;
===Developer changes===&lt;/div&gt;</summary>
		<author><name>Arnofiva</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=EiffelStudio_6.5_Releases&amp;diff=13097</id>
		<title>EiffelStudio 6.5 Releases</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=EiffelStudio_6.5_Releases&amp;diff=13097"/>
				<updated>2009-08-05T09:40:38Z</updated>
		
		<summary type="html">&lt;p&gt;Arnofiva: /* Improvements */ Changed color...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Releases]]__NOTOC__{{ReleaseHistoryHeader}}&lt;br /&gt;
&lt;br /&gt;
= EiffelStudio 6.5.x Releases=&lt;br /&gt;
&lt;br /&gt;
==6.5.x==&lt;br /&gt;
Placeholder for new stuff since last intermediate release. &lt;br /&gt;
===New features===&lt;br /&gt;
&lt;br /&gt;
* {{Red|web_browser: Added Web Browser widget {EV_WEB_BROWSER} (see new &amp;quot;web_browser&amp;quot; library) and example project (see example under $ISE_LIBRARY/examples/web_browser)}}&lt;br /&gt;
* studio: GCC and MSC external compilation errors are now displayed in the error list.&lt;br /&gt;
&lt;br /&gt;
===Improvements===&lt;br /&gt;
*studio: Improved error messages for VUAR/VJAR/VBAR errors to mention compatibility instead of just conformance.&lt;br /&gt;
*studio: Now double clicking on an ecf file will select the previously compiled target by default.&lt;br /&gt;
*ec: Fixed command line compilation to use the current working directory unless -project_path is specified (restoring pre-ecf behavior).&lt;br /&gt;
*ec: When an ecf file has been specified with -config, the previously compiled target is chosen as the first option in the compilable target list.&lt;br /&gt;
*AutoTest: Improved the tag tree for displaying tests by merging the view/filter boxes into one input field.&lt;br /&gt;
*AutoTest: Potential test classes are traversed asynchronously after each compilation, allowing the user to continue working while tests are found. This also removes the need for special test clusters.&lt;br /&gt;
*{{Red|AutoTest library: {EQA_TEST_SET}.on_prepare is called during `default_create', which makes it simpler to use attached attributes in void-safe tests.}}&lt;br /&gt;
*AutoTest library: Moved actual test routine invocation into {EQA_TEST_SET}.run_test which can be used to nest the test routine call. This is used for example by Vision2 tests to launch the event loop before calling the test routine.&lt;br /&gt;
&lt;br /&gt;
===Feature removed===&lt;br /&gt;
===Bug fixes===&lt;br /&gt;
*base: Fixed bug#4118 where, on .NET, `put' had no effect on the actual process environment variables; the required API is only available starting with .NET 2.0, which is now the minimum version we support.&lt;br /&gt;
*{{Red|base: Fixed a bug in `copy' from HEAP_PRIORITY_QUEUE which had no effect because the object was simply copied onto itself instead of copying `other'.}}&lt;br /&gt;
*{{Red|base: Fixed a bug in `remove' from HEAP_PRIORITY_QUEUE which caused the internal structure to be accessed beyond its bounds.}}&lt;br /&gt;
*serialization: Fixed a bug in SED_INDEPENDENT_DESERIALIZER in the experimental branch where an out-of-bounds access could occur because a SPECIAL was incorrectly resized.&lt;br /&gt;
*studio: Fixed a bug preventing an output from being selected on Windows.&lt;br /&gt;
*studio: Fixed duplicate output issue when a subsystem activates the outputs tool before showing it.&lt;br /&gt;
*studio: Fixed bug preventing the new features dialog from being displayed when using newer syntax.&lt;br /&gt;
*EiffelVision: Fixed potential crash with tab navigation code when a key press was sent to a widget that is in the process of being unparented.&lt;br /&gt;
*debugger: Fixed bug#16013; all READABLE_STRING_8/32 descendants are now displayed as string literals in the debugger tools.&lt;br /&gt;
*compiler: Fixed eweasel test#final084 where the compiler would generate an incorrect type at run time, causing memory corruption or a general failure.&lt;br /&gt;
*compiler: Fixed eweasel test#final083 where the compiler would crash when inlining certain types of code involving generic classes.&lt;br /&gt;
*base: Added missing `own_from_pointer' in the .NET version of MANAGED_POINTER.&lt;br /&gt;
*studio: Fixed an issue where one could not change an integer-based preference entry in EiffelStudio.&lt;br /&gt;
&lt;br /&gt;
==6.5.7.9743 (July 13th 2009)==&lt;br /&gt;
===New features===&lt;br /&gt;
*base: Added `own_from_pointer' in C_STRING.&lt;br /&gt;
&lt;br /&gt;
===Improvements===&lt;br /&gt;
*studio: Made some bug fixes and improvements to the new output tool.&lt;br /&gt;
*studio: Added a tooltip to the precompilation wizard list to show each item's ecf file path.&lt;br /&gt;
&lt;br /&gt;
===Feature removed===&lt;br /&gt;
===Bug fixes===&lt;br /&gt;
*autotest: Fixed bug#15875 where selecting a test class and choosing run had no effect; now all test cases defined in that class are run.&lt;br /&gt;
*base: Added missing `own_from_pointer' routine in the .NET version of MANAGED_POINTER.&lt;br /&gt;
*base: Made the TYPE class similar to the classic version.&lt;br /&gt;
*studio: Fixed bug#13103: Cannot set Shift+Esc as a shortcut in the preferences dialog.&lt;br /&gt;
*studio: Fixed typo reported by bug#13220 in the Metric tool.&lt;br /&gt;
*compiler: Fixed an issue where one could not debug an application if the `executable_name' specified in the ECF contained the .exe suffix on Windows. This fixes bug#11834.&lt;br /&gt;
&lt;br /&gt;
===User changes===&lt;br /&gt;
*compiler: the indexing value '''volatile''' previously introduced has been renamed to '''transient'''.&lt;br /&gt;
&lt;br /&gt;
===Developer changes===&lt;br /&gt;
&lt;br /&gt;
==6.5.7.9500==&lt;br /&gt;
===New features===&lt;br /&gt;
*{{Red|base: Changed {ANY}.generating_type to return an instance of TYPE}}&lt;br /&gt;
*{{Red|runtime: Added support for transient attributes in store/retrieve. A transient attribute is an attribute that is not stored by the runtime and whose absence at retrieval time has no effect.}}&lt;br /&gt;
* compiler: Supported detection and validity error reporting for VSRP(3) (the root procedure is not precondition-free) (see test#vsrp301).&lt;br /&gt;
&lt;br /&gt;
===Improvements===&lt;br /&gt;
===Feature removed===&lt;br /&gt;
===Bug fixes===&lt;br /&gt;
===User changes===&lt;br /&gt;
===Developer changes===&lt;/div&gt;</summary>
		<author><name>Arnofiva</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=EiffelStudio_6.5_Releases&amp;diff=13096</id>
		<title>EiffelStudio 6.5 Releases</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=EiffelStudio_6.5_Releases&amp;diff=13096"/>
				<updated>2009-08-05T09:38:45Z</updated>
		
		<summary type="html">&lt;p&gt;Arnofiva: /* Improvements */ AutoTest library changes&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Releases]]__NOTOC__{{ReleaseHistoryHeader}}&lt;br /&gt;
&lt;br /&gt;
= EiffelStudio 6.5.x Releases=&lt;br /&gt;
&lt;br /&gt;
==6.5.x==&lt;br /&gt;
Placeholder for new stuff since last intermediate release. &lt;br /&gt;
===New features===&lt;br /&gt;
&lt;br /&gt;
* {{Red|web_browser: Added Web Browser widget {EV_WEB_BROWSER} (see new &amp;quot;web_browser&amp;quot; library) and example project (see example under $ISE_LIBRARY/examples/web_browser)}}&lt;br /&gt;
* studio: GCC and MSC external compilation errors are now displayed in the error list.&lt;br /&gt;
&lt;br /&gt;
===Improvements===&lt;br /&gt;
*studio: Improved error messages for VUAR/VJAR/VBAR errors to mention compatibility instead of just conformance.&lt;br /&gt;
*studio: Now double clicking on an ecf file will select the previously compiled target by default.&lt;br /&gt;
*ec: Fixed command line compilation to use the current working directory unless -project_path is specified (restoring pre-ecf behavior).&lt;br /&gt;
*ec: When an ecf file has been specified with -config, the previously compiled target is chosen as the first option in the compilable target list.&lt;br /&gt;
*AutoTest: Improved the tag tree for displaying tests by merging the view/filter boxes into one input field.&lt;br /&gt;
*AutoTest: Potential test classes are traversed asynchronously after each compilation, allowing the user to continue working while tests are found. This also removes the need for special test clusters.&lt;br /&gt;
*&amp;lt;span style=&amp;quot;color:#FF0000&amp;quot;&amp;gt;AutoTest library: {EQA_TEST_SET}.on_prepare is called during `default_create', which makes it simpler to use attached attributes in void-safe tests.&amp;lt;/span&amp;gt;&lt;br /&gt;
*AutoTest library: Moved actual test routine invocation into {EQA_TEST_SET}.run_test which can be used to nest the test routine call. This is used for example by Vision2 tests to launch the event loop before calling the test routine.&lt;br /&gt;
&lt;br /&gt;
===Feature removed===&lt;br /&gt;
===Bug fixes===&lt;br /&gt;
*base: Fixed bug#4118 where, on .NET, `put' had no effect on the actual process environment variables; the required API is only available starting with .NET 2.0, which is now the minimum version we support.&lt;br /&gt;
*{{Red|base: Fixed a bug in `copy' from HEAP_PRIORITY_QUEUE which had no effect because the object was simply copied onto itself instead of copying `other'.}}&lt;br /&gt;
*{{Red|base: Fixed a bug in `remove' from HEAP_PRIORITY_QUEUE which caused the internal structure to be accessed beyond its bounds.}}&lt;br /&gt;
*serialization: Fixed a bug in SED_INDEPENDENT_DESERIALIZER in the experimental branch where an out-of-bounds access could occur because a SPECIAL was incorrectly resized.&lt;br /&gt;
*studio: Fixed a bug preventing an output from being selected on Windows.&lt;br /&gt;
*studio: Fixed duplicate output issue when a subsystem activates the outputs tool before showing it.&lt;br /&gt;
*studio: Fixed bug preventing the new features dialog from being displayed when using newer syntax.&lt;br /&gt;
*EiffelVision: Fixed potential crash with tab navigation code when a key press was sent to a widget that is in the process of being unparented.&lt;br /&gt;
*debugger: Fixed bug#16013; all READABLE_STRING_8/32 descendants are now displayed as string literals in the debugger tools.&lt;br /&gt;
*compiler: Fixed eweasel test#final084 where the compiler would generate an incorrect type at run time, causing memory corruption or a general failure.&lt;br /&gt;
*compiler: Fixed eweasel test#final083 where the compiler would crash when inlining certain types of code involving generic classes.&lt;br /&gt;
*base: Added missing `own_from_pointer' in the .NET version of MANAGED_POINTER.&lt;br /&gt;
*studio: Fixed an issue where one could not change an integer-based preference entry in EiffelStudio.&lt;br /&gt;
&lt;br /&gt;
==6.5.7.9743 (July 13th 2009)==&lt;br /&gt;
===New features===&lt;br /&gt;
*base: Added `own_from_pointer' in C_STRING.&lt;br /&gt;
&lt;br /&gt;
===Improvements===&lt;br /&gt;
*studio: Made some bug fixes and improvements to the new output tool.&lt;br /&gt;
*studio: Added a tooltip to the precompilation wizard list to show each item's ecf file path.&lt;br /&gt;
&lt;br /&gt;
===Feature removed===&lt;br /&gt;
===Bug fixes===&lt;br /&gt;
*autotest: Fixed bug#15875 where selecting a test class and choosing run had no effect; now all test cases defined in that class are run.&lt;br /&gt;
*base: Added missing `own_from_pointer' routine in the .NET version of MANAGED_POINTER.&lt;br /&gt;
*base: Made the TYPE class similar to the classic version.&lt;br /&gt;
*studio: Fixed bug#13103: Cannot set Shift+Esc as a shortcut in the preferences dialog.&lt;br /&gt;
*studio: Fixed typo reported by bug#13220 in the Metric tool.&lt;br /&gt;
*compiler: Fixed an issue where one could not debug an application if the `executable_name' specified in the ECF contained the .exe suffix on Windows. This fixes bug#11834.&lt;br /&gt;
&lt;br /&gt;
===User changes===&lt;br /&gt;
*compiler: the indexing value '''volatile''' previously introduced has been renamed to '''transient'''.&lt;br /&gt;
&lt;br /&gt;
===Developer changes===&lt;br /&gt;
&lt;br /&gt;
==6.5.7.9500==&lt;br /&gt;
===New features===&lt;br /&gt;
*{{Red|base: Changed {ANY}.generating_type to return an instance of TYPE}}&lt;br /&gt;
*{{Red|runtime: Added support for transient attributes in store/retrieve. A transient attribute is an attribute that is not stored by the runtime and whose absence at retrieval time has no effect.}}&lt;br /&gt;
* compiler: Supported detection and validity error reporting for VSRP(3) (the root procedure is not precondition-free) (see test#vsrp301).&lt;br /&gt;
&lt;br /&gt;
===Improvements===&lt;br /&gt;
===Feature removed===&lt;br /&gt;
===Bug fixes===&lt;br /&gt;
===User changes===&lt;br /&gt;
===Developer changes===&lt;/div&gt;</summary>
		<author><name>Arnofiva</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=EiffelStudio_6.5_Releases&amp;diff=13014</id>
		<title>EiffelStudio 6.5 Releases</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=EiffelStudio_6.5_Releases&amp;diff=13014"/>
				<updated>2009-07-22T12:50:46Z</updated>
		
		<summary type="html">&lt;p&gt;Arnofiva: /* Improvements */ Added AutoTest changes&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Releases]]__NOTOC__{{ReleaseHistoryHeader}}&lt;br /&gt;
&lt;br /&gt;
= EiffelStudio 6.5.x Releases=&lt;br /&gt;
&lt;br /&gt;
==6.5.x==&lt;br /&gt;
Placeholder for new stuff since last intermediate release. &lt;br /&gt;
===New features===&lt;br /&gt;
===Improvements===&lt;br /&gt;
*studio: Improved error messages for VUAR/VJAR/VBAR errors to mention compatibility instead of just conformance.&lt;br /&gt;
*ec: Fixed command line compilation to use the current working directory unless -project_path is specified (restoring pre-ecf behavior).&lt;br /&gt;
*AutoTest: Improved the tag tree for displaying tests by merging the view/filter boxes into one input field.&lt;br /&gt;
*AutoTest: Potential test classes are traversed asynchronously after each compilation, allowing the user to continue working while tests are found. This also removes the need for special test clusters.&lt;br /&gt;
&lt;br /&gt;
===Feature removed===&lt;br /&gt;
===Bug fixes===&lt;br /&gt;
*base: Fixed bug#4118 where, on .NET, `put' had no effect on the actual process environment variables; the required API is only available starting with .NET 2.0, which is now the minimum version we support.&lt;br /&gt;
*{{Red|base: Fixed a bug in `copy' from HEAP_PRIORITY_QUEUE which had no effect because the object was simply copied onto itself instead of copying `other'.}}&lt;br /&gt;
*{{Red|base: Fixed a bug in `remove' from HEAP_PRIORITY_QUEUE which caused the internal structure to be accessed beyond its bounds.}}&lt;br /&gt;
*serialization: Fixed a bug in SED_INDEPENDENT_DESERIALIZER in the experimental branch where an out-of-bounds access could occur because a SPECIAL was incorrectly resized.&lt;br /&gt;
*studio: Fixed a bug preventing an output from being selected on Windows.&lt;br /&gt;
*studio: Fixed duplicate output issue when a subsystem activates the outputs tool before showing it.&lt;br /&gt;
*studio: Fixed bug preventing the new features dialog from being displayed when using newer syntax.&lt;br /&gt;
&lt;br /&gt;
===User changes===&lt;br /&gt;
===Developer changes===&lt;br /&gt;
&lt;br /&gt;
==6.5.7.9743 (July 13th 2009)==&lt;br /&gt;
===New features===&lt;br /&gt;
*base: Added `own_from_pointer' in C_STRING.&lt;br /&gt;
&lt;br /&gt;
===Improvements===&lt;br /&gt;
*studio: Made some bug fixes and improvements to the new output tool.&lt;br /&gt;
*studio: Added a tooltip to the precompilation wizard list to show each item's ecf file path.&lt;br /&gt;
&lt;br /&gt;
===Feature removed===&lt;br /&gt;
===Bug fixes===&lt;br /&gt;
*autotest: Fixed bug#15875 where selecting a test class and choosing run had no effect; now all test cases defined in that class are run.&lt;br /&gt;
*base: Added missing `own_from_pointer' routine in the .NET version of MANAGED_POINTER.&lt;br /&gt;
*base: Made the TYPE class similar to the classic version.&lt;br /&gt;
*studio: Fixed bug#13103: Cannot set Shift+Esc as a shortcut in the preferences dialog.&lt;br /&gt;
*studio: Fixed typo reported by bug#13220 in the Metric tool.&lt;br /&gt;
*compiler: Fixed an issue where one could not debug an application if the `executable_name' specified in the ECF contained the .exe suffix on Windows. This fixes bug#11834.&lt;br /&gt;
&lt;br /&gt;
===User changes===&lt;br /&gt;
*compiler: the indexing value '''volatile''' previously introduced has been renamed to '''transient'''.&lt;br /&gt;
&lt;br /&gt;
===Developer changes===&lt;br /&gt;
&lt;br /&gt;
==6.5.7.9500==&lt;br /&gt;
===New features===&lt;br /&gt;
*{{Red|base: Changed {ANY}.generating_type to return an instance of TYPE}}&lt;br /&gt;
*{{Red|runtime: Added support for transient attributes in store/retrieve. A transient attribute is an attribute that is not stored by the runtime and whose absence at retrieval time has no effect.}}&lt;br /&gt;
* compiler: Supported detection and validity error reporting for VSRP(3) (the root procedure is not precondition-free) (see test#vsrp301).&lt;br /&gt;
&lt;br /&gt;
===Improvements===&lt;br /&gt;
===Feature removed===&lt;br /&gt;
===Bug fixes===&lt;br /&gt;
===User changes===&lt;br /&gt;
===Developer changes===&lt;/div&gt;</summary>
		<author><name>Arnofiva</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=File:Screenshot.png&amp;diff=12655</id>
		<title>File:Screenshot.png</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=File:Screenshot.png&amp;diff=12655"/>
				<updated>2009-06-24T20:53:54Z</updated>
		
		<summary type="html">&lt;p&gt;Arnofiva: Vision2 test&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
Vision2 test&lt;/div&gt;</summary>
		<author><name>Arnofiva</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=Void-Safe_Library_Status&amp;diff=12140</id>
		<title>Void-Safe Library Status</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=Void-Safe_Library_Status&amp;diff=12140"/>
				<updated>2009-02-28T13:11:47Z</updated>
		
		<summary type="html">&lt;p&gt;Arnofiva: /* Completion Status */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Releases]]&lt;br /&gt;
&lt;br /&gt;
During the [[:Category:EiffelStudio|EiffelStudio]] [[EiffelStudio 6.4 Releases|6.4]] development cycle, Eiffel Software and any willing third-party contributors are updating the stock Eiffel [[:Category:Library|libraries]] to be void-safe. The libraries will still compile in non-void-safe contexts, so your code will not break. The status below reflects completed work, so you may start migrating your own code to ensure void-safety.&lt;br /&gt;
&lt;br /&gt;
Make sure to follow the general rules given below, and ask the community for guidance if you run into any problems or uncertainties.&lt;br /&gt;
&lt;br /&gt;
== Completion Status ==&lt;br /&gt;
&lt;br /&gt;
To better highlight the usefulness of the void-safety mechanism, we have put together a [[Void-Safe_Library_Results|non-exhaustive list]] of bugs found during the conversion process.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! width=&amp;quot;200&amp;quot;|Library Name&lt;br /&gt;
! width=&amp;quot;100&amp;quot;|Status&lt;br /&gt;
! width=&amp;quot;200&amp;quot;|Credits&lt;br /&gt;
|-&lt;br /&gt;
| EiffelBase&lt;br /&gt;
| Done&lt;br /&gt;
| Eiffel Software&lt;br /&gt;
|-&lt;br /&gt;
| EiffelBase extension&lt;br /&gt;
| Done&lt;br /&gt;
| Eiffel Software (Larry, Jocelyn)&lt;br /&gt;
|-&lt;br /&gt;
| EiffelTime&lt;br /&gt;
| Done&lt;br /&gt;
| Eiffel Software (Ted, Ian)&lt;br /&gt;
|-&lt;br /&gt;
| EiffelThread&lt;br /&gt;
| Done (classic)&lt;br /&gt;
| Eiffel Software (Arno)&lt;br /&gt;
|-&lt;br /&gt;
| EiffelUUID&lt;br /&gt;
| Done&lt;br /&gt;
| Eiffel Software ([[User:paulb|Paul]])&lt;br /&gt;
|-&lt;br /&gt;
| Eiffel2Java&lt;br /&gt;
| Done&lt;br /&gt;
| Eiffel Software (Ted, Ian)&lt;br /&gt;
|-&lt;br /&gt;
| WEL&lt;br /&gt;
| Done&lt;br /&gt;
| Eiffel Software (Manu)&lt;br /&gt;
|-&lt;br /&gt;
| EiffelVision2&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| EiffelVision2 extension&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| EiffelProcess&lt;br /&gt;
| Done (classic)&lt;br /&gt;
| Eiffel Software (Arno)&lt;br /&gt;
|-&lt;br /&gt;
| Argument parser&lt;br /&gt;
| Done&lt;br /&gt;
| Eiffel Software ([[User:paulb|Paul]])&lt;br /&gt;
|-&lt;br /&gt;
| EiffelLex&lt;br /&gt;
| Done&lt;br /&gt;
| Eiffel Software ([[User:paulb|Paul]])&lt;br /&gt;
|-&lt;br /&gt;
| EiffelParse&lt;br /&gt;
| In progress&lt;br /&gt;
| Eiffel Software (Ted, Ian)&lt;br /&gt;
|-&lt;br /&gt;
| EiffelNet&lt;br /&gt;
| Done&lt;br /&gt;
| Eiffel Software (Manu)&lt;br /&gt;
|-&lt;br /&gt;
| EiffelNet IPv6&lt;br /&gt;
| Done&lt;br /&gt;
| Eiffel Software (Manu)&lt;br /&gt;
|-&lt;br /&gt;
| EiffelCurl&lt;br /&gt;
| Done&lt;br /&gt;
| Eiffel Software (Manu)&lt;br /&gt;
|-&lt;br /&gt;
| Encoding&lt;br /&gt;
| Done&lt;br /&gt;
| Eiffel Software (Ted, Ian)&lt;br /&gt;
|-&lt;br /&gt;
| EiffelCOM&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| EiffelStore&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| EiffelTesting&lt;br /&gt;
| Done&lt;br /&gt;
| Eiffel Software (Arno)&lt;br /&gt;
|-&lt;br /&gt;
| EiffelWeb&lt;br /&gt;
| Done&lt;br /&gt;
| Eiffel Software (Manu)&lt;br /&gt;
|-&lt;br /&gt;
| Gobo&lt;br /&gt;
| In progress&lt;br /&gt;
| Eiffel Software (Jocelyn,Larry)&lt;br /&gt;
|-&lt;br /&gt;
| Docking&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| Gobo extension&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| EiffelGraph&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| Memory Analyzer&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| EiffelPreferences&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| Diff&lt;br /&gt;
| Done&lt;br /&gt;
| Eiffel Software ([[User:paulb|Paul]])&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Contributing ==&lt;br /&gt;
EiffelStudio is open source and welcomes contributions from the Eiffel community to speed up the adaptation process. If you are interested in participating, please post a comment on the discussion board with your contact details.&lt;br /&gt;
&lt;br /&gt;
==Rules to be applied ==&lt;br /&gt;
&lt;br /&gt;
Please observe the following guidelines carefully to guarantee a quality result.&lt;br /&gt;
&lt;br /&gt;
=== Examples ===&lt;br /&gt;
For examples of libraries already adapted, see UUID (for a small example) and EiffelBase (for a larger one).&lt;br /&gt;
&lt;br /&gt;
=== Overall process === &lt;br /&gt;
&lt;br /&gt;
* First compile with the `full_class_checking' option on. Then enable the void-safe option.&lt;br /&gt;
* Compile libraries on all of Windows/.NET/Unix to ensure they are sound.&lt;br /&gt;
&lt;br /&gt;
* Minimize modifications; a type should be attached by default if it makes sense, otherwise it has to be detachable.&lt;br /&gt;
* Use the convention library-safe.ecf for naming void-safe libraries for now. All library references should use the -safe.ecf variants.&lt;br /&gt;
* Use the same UUIDs for void-safe and non-void-safe libraries.&lt;br /&gt;
* Before making any modifications, add a library.lic and a library-safe.lic (replace library with the name of the ECF minus the .ecf extension) next to the ECFs of the same name, each containing only the single line reference:forum2.&lt;br /&gt;
* Update all samples to use the void-safe ECFs.&lt;br /&gt;
&lt;br /&gt;
=== Rules ===&lt;br /&gt;
* DO NOT USE '''!''' (attached mark).&lt;br /&gt;
* MINIMIZE USE OF OBJECT TESTS; ideally, use an object test only where the original library used an assignment attempt.&lt;br /&gt;
* When a precondition expects a Void argument, use '''?''' if attached by default.&lt;br /&gt;
* When a precondition expects a non-Void argument, use '''!''' if detachable by default.&lt;br /&gt;
* Libraries should compile in both void-safe and non-void-safe mode.&lt;br /&gt;
* Only use the '''attribute''' keyword when it is impossible to initialize an attribute in the creation procedure. Never use it for lazy evaluation.&lt;br /&gt;
* You may include preconditions of the form x /= Void, but they will have to be removed in the end (helped by a compiler warning that flags them as unnecessary when x is attached).&lt;br /&gt;
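For instance, under these rules an attached argument needs no x /= Void precondition, while a detachable argument is handled with a single object test (the class and feature names below are purely illustrative):&lt;br /&gt;
    set_name (a_name: STRING)&lt;br /&gt;
            -- Set `name' to `a_name'.&lt;br /&gt;
            -- `a_name' is attached by default, so no `a_name /= Void' precondition is needed.&lt;br /&gt;
        do&lt;br /&gt;
            name := a_name&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
    print_name (a_person: detachable PERSON)&lt;br /&gt;
            -- Print the name of `a_person', or a default text when none is given.&lt;br /&gt;
        do&lt;br /&gt;
            if attached a_person as l_person then&lt;br /&gt;
                print (l_person.name)&lt;br /&gt;
            else&lt;br /&gt;
                print (&amp;quot;no name&amp;quot;)&lt;br /&gt;
            end&lt;br /&gt;
        end&lt;br /&gt;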
&lt;br /&gt;
=== General cleanup ===&lt;br /&gt;
The void-safe adaptation process should be accompanied by a general upgrade to ISO/ECMA Eiffel:&lt;br /&gt;
&lt;br /&gt;
* Remove uses of is_equal and equal to compare objects. (They can cause catcalls.) Replace them with the tilde operator, i.e. a ~ b instead of equal (a, b) or a.is_equal (b). Be careful to preserve the semantics (~ always returns false in the case of non-identical types).&lt;br /&gt;
* Replace the '''indexing''' keyword with '''note'''.&lt;br /&gt;
* Remove the '''is''' keyword in routines. Use the Replace tool with the regex '''\ is[ \t]*$'''. (Be careful not to use replace all, because comments and multi-line strings may have &amp;quot;is&amp;quot; text!)&lt;br /&gt;
* Replace the '''is''' keyword in constants with '''='''.&lt;br /&gt;
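After these cleanup steps, a class might look as follows (the class itself is an illustrative sketch, not taken from a real library):&lt;br /&gt;
    note&lt;br /&gt;
        description: &amp;quot;Illustrative class after the ISO/ECMA cleanup&amp;quot;&lt;br /&gt;
    class EXAMPLE&lt;br /&gt;
    feature&lt;br /&gt;
        Max_count: INTEGER = 100&lt;br /&gt;
                -- Constant: `= 100' replaces the old `is 100'.&lt;br /&gt;
        same_items (a, b: detachable ANY): BOOLEAN&lt;br /&gt;
                -- Are `a' and `b' equal objects? (`a ~ b' replaces `equal (a, b)'.)&lt;br /&gt;
            do&lt;br /&gt;
                Result := a ~ b&lt;br /&gt;
            end&lt;br /&gt;
    end&lt;br /&gt;
Note also that the routines no longer use the '''is''' keyword.&lt;br /&gt;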
&lt;br /&gt;
=== Test authoring ===&lt;br /&gt;
1. Create a cluster called 'tests' in the library root folder. E.g., for the UUID library the 'tests' folder exists at '$ISE_LIBRARY/uuid/tests'.&lt;br /&gt;
&lt;br /&gt;
2. In the library ECFs, exclude the 'tests' cluster because it contains testing code and not library code.&lt;br /&gt;
&lt;br /&gt;
3. Add a testing 'tests.ecf' in the 'tests' folder. (See the UUID library for an example ECF.) Be sure to create a library ECF and change the UUID. The library should also use the void-safe options found in the associated library's ECF.&lt;br /&gt;
&lt;br /&gt;
4. Create test class names using the library name along with TEST as a prefix:&lt;br /&gt;
    EiffelBase = BASE_TEST_&lt;br /&gt;
    EiffelThread = THREAD_TEST_&lt;br /&gt;
    EiffelVision2 = EV_TEST_ or VISION2_TEST_&lt;br /&gt;
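A minimal test class following this naming scheme could look like this (the test body is a made-up example; it assumes the {EQA_TEST_SET} `assert' feature mentioned above):&lt;br /&gt;
    class BASE_TEST_LIST&lt;br /&gt;
    inherit EQA_TEST_SET&lt;br /&gt;
    feature -- Tests&lt;br /&gt;
        test_extend&lt;br /&gt;
                -- Extending an empty list should yield one item.&lt;br /&gt;
            local&lt;br /&gt;
                l: ARRAYED_LIST [INTEGER]&lt;br /&gt;
            do&lt;br /&gt;
                create l.make (10)&lt;br /&gt;
                l.extend (42)&lt;br /&gt;
                assert (&amp;quot;one_item&amp;quot;, l.count = 1)&lt;br /&gt;
            end&lt;br /&gt;
    end&lt;br /&gt;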
&lt;br /&gt;
=== Improving this page === &lt;br /&gt;
&lt;br /&gt;
As you encounter problems and devise your solutions, please include the results of your experience here.&lt;/div&gt;</summary>
		<author><name>Arnofiva</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=Void-Safe_Library_Status&amp;diff=11917</id>
		<title>Void-Safe Library Status</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=Void-Safe_Library_Status&amp;diff=11917"/>
				<updated>2009-01-22T23:20:53Z</updated>
		
		<summary type="html">&lt;p&gt;Arnofiva: /* Completion Status */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Releases]]&lt;br /&gt;
&lt;br /&gt;
During the [[:Category:EiffelStudio|EiffelStudio]] [[EiffelStudio 6.4 Releases|6.4]] development cycle, Eiffel Software and any willing third-party contributors are updating the stock Eiffel [[:Category:Library|libraries]] to be void-safe. The libraries will still compile in non-void-safe contexts, so your code will not break. The status below reflects completed work, so you may start migrating your own code to ensure void-safety.&lt;br /&gt;
&lt;br /&gt;
Make sure to follow the general rules given below, and ask the community for guidance if you run into any problems or uncertainties.&lt;br /&gt;
&lt;br /&gt;
== Completion Status ==&lt;br /&gt;
&lt;br /&gt;
Here is a [[Void-Safe_Library_Results|non-exhaustive list]] of bugs found during this conversion process.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! width=&amp;quot;200&amp;quot;|Library Name&lt;br /&gt;
! width=&amp;quot;100&amp;quot;|Status&lt;br /&gt;
! width=&amp;quot;200&amp;quot;|Credits&lt;br /&gt;
|-&lt;br /&gt;
| EiffelBase&lt;br /&gt;
| Done&lt;br /&gt;
| Eiffel Software&lt;br /&gt;
|-&lt;br /&gt;
| EiffelBase extension&lt;br /&gt;
| In progress&lt;br /&gt;
| Eiffel Software (Larry, Jocelyn)&lt;br /&gt;
|-&lt;br /&gt;
| EiffelTime&lt;br /&gt;
| Done&lt;br /&gt;
| Eiffel Software (Ted, Ian)&lt;br /&gt;
|-&lt;br /&gt;
| EiffelThread&lt;br /&gt;
| Done (classic)&lt;br /&gt;
| Eiffel Software (Arno)&lt;br /&gt;
|-&lt;br /&gt;
| EiffelUUID&lt;br /&gt;
| Done&lt;br /&gt;
| Eiffel Software&lt;br /&gt;
|-&lt;br /&gt;
| Eiffel2Java&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| WEL&lt;br /&gt;
| In Progress&lt;br /&gt;
| Eiffel Software (Manu)&lt;br /&gt;
|-&lt;br /&gt;
| EiffelVision2&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| EiffelVision2 extension&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| EiffelProcess&lt;br /&gt;
| Done (classic)&lt;br /&gt;
| Eiffel Software (Arno)&lt;br /&gt;
|-&lt;br /&gt;
| Argument parser&lt;br /&gt;
| Done&lt;br /&gt;
| Eiffel Software&lt;br /&gt;
|-&lt;br /&gt;
| EiffelLex&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| EiffelParse&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| EiffelNet&lt;br /&gt;
| In Progress&lt;br /&gt;
| Eiffel Software (Manu)&lt;br /&gt;
|-&lt;br /&gt;
| EiffelNet IPv6&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| EiffelCurl&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| Encoding&lt;br /&gt;
| Done&lt;br /&gt;
| Eiffel Software (Ted, Ian)&lt;br /&gt;
|-&lt;br /&gt;
| EiffelCOM&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| EiffelStore&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| EiffelTesting&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| EiffelWeb&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| Gobo&lt;br /&gt;
| In progress&lt;br /&gt;
| Eiffel Software (Jocelyn,Larry)&lt;br /&gt;
|-&lt;br /&gt;
| Docking&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| Gobo extension&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| EiffelGraph&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| Memory Analyzer&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| EiffelPreferences&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Contributing ==&lt;br /&gt;
EiffelStudio is open source and welcomes contributions from the Eiffel community to speed up the adaptation process. If you are interested in participating, please leave a comment on the discussion board with your contact details.&lt;br /&gt;
&lt;br /&gt;
==Rules to be applied ==&lt;br /&gt;
&lt;br /&gt;
Please observe the following guidelines carefully to guarantee a quality result.&lt;br /&gt;
&lt;br /&gt;
=== Examples ===&lt;br /&gt;
For examples of libraries already adapted, see UUID (for a small example) and EiffelBase (for a larger one).&lt;br /&gt;
&lt;br /&gt;
=== Overall process === &lt;br /&gt;
&lt;br /&gt;
* First compile with the `full_class_checking' option on. Then enable the void-safe option.&lt;br /&gt;
* Compile the libraries on all of Windows/.NET/Unix to ensure they are sound.&lt;br /&gt;
&lt;br /&gt;
* Minimize modifications; types should be attached by default where it makes sense, otherwise they have to be detachable by default.&lt;br /&gt;
* For now, use the convention library-safe.ecf for naming void-safe libraries. All library references should use the -safe.ecf variants.&lt;br /&gt;
* Use the same UUIDs for void-safe and non-void-safe libraries.&lt;br /&gt;
* Before any modifications, add a library.lic and library-safe.lic (replace library with the name of the ECF minus the .ecf extension) next to the ECFs of the same name, each containing only the single line reference:forum2.&lt;br /&gt;
* Update all samples to use the void-safe ECFs.&lt;br /&gt;
&lt;br /&gt;
=== Rules ===&lt;br /&gt;
* DO NOT USE '''!''' (attached mark).&lt;br /&gt;
* MINIMIZE USE OF OBJECT TEST; ideally, don't use object test unless there was an assignment attempt in the original library.&lt;br /&gt;
* When a precondition allows a Void argument, mark the argument with '''?''' if the library is attached by default.&lt;br /&gt;
* When a precondition requires a non-Void argument, mark the argument with '''!''' if the library is detachable by default.&lt;br /&gt;
* Libraries should compile in both void-safe and non-void-safe mode.&lt;br /&gt;
* Only use the '''attribute''' keyword when it is impossible to initialize an attribute in the creation procedure. Never use it for lazy evaluation.&lt;br /&gt;
* You may include preconditions of the form x /= Void, but they will have to be removed in the end (helped by a compiler warning that says the check is not needed for an attached x).&lt;br /&gt;
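&lt;br /&gt;
A hypothetical sketch of the two precondition marks above (the feature names are invented for illustration, not taken from any stock library):&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
    store (an_item: ?ANY)&lt;br /&gt;
            -- In an attached-by-default library, an argument&lt;br /&gt;
            -- that may legitimately be Void carries the '?' mark.&lt;br /&gt;
        do&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
    set_name (a_name: !STRING)&lt;br /&gt;
            -- In a detachable-by-default library, an argument&lt;br /&gt;
            -- that must not be Void carries the '!' mark.&lt;br /&gt;
        do&lt;br /&gt;
        end&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;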
&lt;br /&gt;
=== General cleanup ===&lt;br /&gt;
The void-safe adaptation process should be accompanied by a general upgrade to ISO/ECMA Eiffel:&lt;br /&gt;
&lt;br /&gt;
* Remove uses of is_equal and equal to compare objects. (They can cause catcalls.) Replace them with the tilde operator, i.e. a ~ b instead of equal (a, b) or a.is_equal (b). Be careful to preserve the semantics (~ always returns false in the case of non-identical types).&lt;br /&gt;
* Replace the '''indexing''' keyword with '''note'''.&lt;br /&gt;
* Remove the '''is''' keyword in routines. Use the Replace tool with the regex '''\ is[ \t]*$'''. (Be careful not to use Replace All, because comments and multi-line strings may also contain matching &amp;quot;is&amp;quot; text!)&lt;br /&gt;
* Replace the '''is''' keyword in constants with '''='''.&lt;br /&gt;
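&lt;br /&gt;
A hypothetical before/after sketch of these cleanup steps (the class is invented for illustration):&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
note -- was: indexing&lt;br /&gt;
    description: &amp;quot;Cleanup example&amp;quot;&lt;br /&gt;
&lt;br /&gt;
class EXAMPLE&lt;br /&gt;
&lt;br /&gt;
feature&lt;br /&gt;
&lt;br /&gt;
    Max_count: INTEGER = 100&lt;br /&gt;
            -- Constant: was `Max_count: INTEGER is 100'.&lt;br /&gt;
&lt;br /&gt;
    same_as (other: like Current): BOOLEAN&lt;br /&gt;
            -- Routine: trailing `is' removed.&lt;br /&gt;
        do&lt;br /&gt;
            -- Was: Result := equal (Current, other)&lt;br /&gt;
            Result := Current ~ other&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;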
&lt;br /&gt;
=== Test authoring ===&lt;br /&gt;
1. Create a cluster called 'tests' in the library root folder. E.g., for the UUID library the 'tests' folder exists at '$ISE_LIBRARY/uuid/tests'.&lt;br /&gt;
&lt;br /&gt;
2. In the library ECFs, exclude the 'tests' cluster because it contains testing code and not library code.&lt;br /&gt;
&lt;br /&gt;
3. Add a 'tests.ecf' in the 'tests' folder. (See the UUID library for an example ECF.) Be sure to create it as a library ECF and to change the UUID. It should also use the void-safe options found in the associated library's ECF.&lt;br /&gt;
&lt;br /&gt;
4. Prefix test class names with the library name followed by TEST:&lt;br /&gt;
    EiffelBase = BASE_TEST_&lt;br /&gt;
    EiffelThread = THREAD_TEST_&lt;br /&gt;
    EiffelVision2 = EV_TEST_ or VISION2_TEST_&lt;br /&gt;
&lt;br /&gt;
=== Improving this page === &lt;br /&gt;
&lt;br /&gt;
As you encounter problems and devise your solutions, please include the results of your experience here.&lt;/div&gt;</summary>
		<author><name>Arnofiva</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=Void-Safe_Library_Status&amp;diff=11827</id>
		<title>Void-Safe Library Status</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=Void-Safe_Library_Status&amp;diff=11827"/>
				<updated>2008-12-24T20:49:46Z</updated>
		
		<summary type="html">&lt;p&gt;Arnofiva: update for thread library&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Releases]]&lt;br /&gt;
&lt;br /&gt;
During the [[:Category:EiffelStudio|EiffelStudio]] [[EiffelStudio 6.4 Releases|6.4]] development cycle, Eiffel Software and any willing third-party contributors are updating the stock Eiffel [[:Category:Library|libraries]] to be Void-Safe. The libraries will still compile in non-Void-Safe contexts, so your code will not be broken. The status below reflects the work completed so far, so you may start migrating your own code to ensure Void-safety.&lt;br /&gt;
&lt;br /&gt;
== Completion Status ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! width=&amp;quot;200&amp;quot;|Library Name&lt;br /&gt;
! width=&amp;quot;100&amp;quot;|Status&lt;br /&gt;
! width=&amp;quot;200&amp;quot;|Credits&lt;br /&gt;
|-&lt;br /&gt;
| EiffelBase&lt;br /&gt;
| Done&lt;br /&gt;
| Eiffel Software&lt;br /&gt;
|-&lt;br /&gt;
| EiffelBase extension&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| EiffelTime&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| EiffelThread&lt;br /&gt;
| Done (classic)&lt;br /&gt;
| Eiffel Software&lt;br /&gt;
|-&lt;br /&gt;
| EiffelUUID&lt;br /&gt;
| Done&lt;br /&gt;
| Eiffel Software&lt;br /&gt;
|-&lt;br /&gt;
| Eiffel2Java&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| WEL&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| EiffelVision2&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| EiffelVision2 extension&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| EiffelProcess&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| Argument parser&lt;br /&gt;
| Partial&lt;br /&gt;
| Eiffel Software&lt;br /&gt;
|-&lt;br /&gt;
| EiffelLex&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| EiffelParse&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| EiffelNet&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| EiffelNet IPv6&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| EiffelCurl&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| Encoding&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| EiffelCOM&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| EiffelStore&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| EiffelTesting&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| EiffelWeb&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| Gobo&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| Docking&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| Gobo extension&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| EiffelGraph&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| Memory Analyzer&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| EiffelPreferences&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Contributing ==&lt;br /&gt;
EiffelStudio is open source and welcomes contributions from the Eiffel community to speed up the adaptation process. If you are interested in participating, please leave a comment on the discussion board with your contact details.&lt;/div&gt;</summary>
		<author><name>Arnofiva</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=Testing_Tool_(Specification)&amp;diff=11775</id>
		<title>Testing Tool (Specification)</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=Testing_Tool_(Specification)&amp;diff=11775"/>
				<updated>2008-12-01T23:04:42Z</updated>
		
		<summary type="html">&lt;p&gt;Arnofiva: Modified examples according to lates changes in testing library&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Testing]]&lt;br /&gt;
&lt;br /&gt;
{{UnderConstruction}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Main functionalities ==&lt;br /&gt;
&lt;br /&gt;
=== Add unit/system level tests ===&lt;br /&gt;
&lt;br /&gt;
Semantically there is no difference between unit tests and system level tests; thus all tests can be written in Eiffel in a uniform way.&lt;br /&gt;
&lt;br /&gt;
A test is an arbitrary routine in a class inheriting from '''EQA_TEST_SET''' (the routine must be exported to '''ANY''').&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== System level test specifics (not yet implemented) ====&lt;br /&gt;
&lt;br /&gt;
Since system level testing often relies on external items like files, '''SYSTEM_LEVEL_EQA_TEST_SET''' provides a number of helper routines for accessing them. These classes will probably have to be in a special testing library, since they also make use of other libraries such as the process library.&lt;br /&gt;
&lt;br /&gt;
==== Location of tests (config file) ====&lt;br /&gt;
&lt;br /&gt;
Tests can be located in any cluster of the system. In addition, one can define test-specific clusters in the .ecf file. These clusters do not need to exist for the project to compile. This allows one to have library tests while still being able to deliver the library without including the test suite.&lt;br /&gt;
For all classes in the test clusters, the inheritance structure will be evaluated so that test classes inheriting from EQA_TEST_SET can be found. Any class belonging to a normal cluster will have to be reachable from the root class to be compiled and detected as a test class. This means the recommended practice is to put test classes into a test cluster, but it is not a rule.&lt;br /&gt;
&lt;br /&gt;
{{Note| Not all classes in a test cluster have to be classes containing tests. Helper classes are one example.}}&lt;br /&gt;
&lt;br /&gt;
Test clusters are also needed to provide a location for test generation/extraction.&lt;br /&gt;
&lt;br /&gt;
==== Additional information ====&lt;br /&gt;
&lt;br /&gt;
The indexing clause can be used to specify which classes and routines are tested by the test routine. Any specifications in the class indexing clause will apply to all tests in that class. Note the '''covers/''' tag in the following examples.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Examples ====&lt;br /&gt;
&lt;br /&gt;
Example unit tests '''test_append''' and '''test_boolean'''&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
frozen class STRING_TESTS&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    EQA_TEST_SET&lt;br /&gt;
        redefine&lt;br /&gt;
            set_up&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Initialization&lt;br /&gt;
&lt;br /&gt;
    set_up&lt;br /&gt;
        do&lt;br /&gt;
            create s.make (10)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Access&lt;br /&gt;
&lt;br /&gt;
    s: STRING&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_append&lt;br /&gt;
        indexing&lt;br /&gt;
            testing: &amp;quot;covers/{STRING}.append, platform/os/winxp&amp;quot;&lt;br /&gt;
        require&lt;br /&gt;
            set_up: s /= Void and then s.is_empty&lt;br /&gt;
        do&lt;br /&gt;
            s.append (&amp;quot;12345&amp;quot;)&lt;br /&gt;
            assert (&amp;quot;append&amp;quot;, s.is_equal (&amp;quot;12345&amp;quot;))&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
    test_boolean&lt;br /&gt;
        indexing&lt;br /&gt;
            testing: &amp;quot;covers/{STRING}.is_boolean, covers/{STRING}.to_boolean&amp;quot;&lt;br /&gt;
        require&lt;br /&gt;
            set_up: s /= Void and then s.is_empty&lt;br /&gt;
        do&lt;br /&gt;
            s.append (&amp;quot;True&amp;quot;)&lt;br /&gt;
            assert (&amp;quot;boolean&amp;quot;, s.is_boolean and then s.to_boolean)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example system level test '''test_version''' (Note: '''EQA_SYSTEM_LEVEL_TEST_SET''' inherits from '''EQA_TEST_SET''' and provides basic functionality for executing external commands, including the system currently under development):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
indexing&lt;br /&gt;
    testing_covers: &amp;quot;all&amp;quot;&lt;br /&gt;
&lt;br /&gt;
frozen class TEST_MY_APP&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    EQA_SYSTEM_LEVEL_TEST_SET&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_version&lt;br /&gt;
        indexing&lt;br /&gt;
            testing: &amp;quot;platform/os/linux/x86&amp;quot;&lt;br /&gt;
        do&lt;br /&gt;
            run_system_with_args (&amp;quot;--version&amp;quot;)&lt;br /&gt;
            assert_string_equality (&amp;quot;version&amp;quot;, last_output, &amp;quot;my_app version 0.1 - linux x86&amp;quot;)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Manage and run test suite ===&lt;br /&gt;
&lt;br /&gt;
[[Image:testing_set-view.png|right|400px|thumb|Standard view listing existing test sets and the tests they contain]]&lt;br /&gt;
[[Image:testing_cut-view.png|right|400px|thumb|Predefined ''Class tested'' view listing classes/features of the system together with the associated tests (Note: since test_boolean is tagged to cover multiple features, it also appears multiple times in the view)]]&lt;br /&gt;
[[Image:testing_user-view.png|right|400px|thumb|User defined view (by simply typing part of the tag), where the tool creates a view based on how the tests are tagged (see Examples above)]]&lt;br /&gt;
&lt;br /&gt;
The tool should have its own icon for displaying test cases (test routines); in this example it is a Lego block. Especially for views like ''list all tests for this routine'', it is important to see the difference between the actual routine and its tests. The tool also has more of a vertical layout: since the number of tests is comparable to the number of classes in the system, it makes sense for the tools to have the same layout. It also allows tabs at the bottom for displaying further information, such as execution details (output, call stack, etc.).&lt;br /&gt;
&lt;br /&gt;
The '''menu bar''' includes the following buttons:&lt;br /&gt;
* Create new manual test case (opens wizard)&lt;br /&gt;
** if test class is dropped on button, the wizard will suggest to create new test in that class&lt;br /&gt;
** if normal class (or feature) is dropped on button, wizard will suggest to create test for the class (or feature)&lt;br /&gt;
* Menu for generating new test (defaults to last chosen one?)&lt;br /&gt;
** if normal class/feature is dropped on button, generate tests for that class/feature&lt;br /&gt;
&lt;br /&gt;
* Menu for executing tests in background (defaults to last chosen one?)&lt;br /&gt;
** if any class/feature is dropped on button, run tests associated with class/feature&lt;br /&gt;
* Run test in debugger (must have a test selected or dropped on button to start)&lt;br /&gt;
* Stop any execution (background or debugger)&lt;br /&gt;
&lt;br /&gt;
* Opens settings dialog for testing&lt;br /&gt;
&lt;br /&gt;
* Status indicating how many tests have run so far and&lt;br /&gt;
* how many failing ones there are&lt;br /&gt;
&lt;br /&gt;
'''View''' defines in which way the test cases are listed (see below).&lt;br /&gt;
&lt;br /&gt;
'''Filter''' can be used to type keywords so that only test cases whose tags include those keywords are shown (see below). It is a drop-down, so predefined filter patterns can be used (such as ''outcome.fail'').&lt;br /&gt;
&lt;br /&gt;
The '''grid''' contains a tree view of all test cases (test cases are always leaves). Multiple columns provide more information. Currently there are two indications of whether a test fails or not (column and icons). Obviously only one is needed - both are shown just to see the difference. The advantage of using icons is that less space is needed. Coloring the background of a row containing a failing test case would be an option as well.&lt;br /&gt;
&lt;br /&gt;
==== Tags ====&lt;br /&gt;
&lt;br /&gt;
Each test can have a number of tags. A tag can be a single string or hierarchically structured with slashes ('/'). For example, a test with the tag ''covers/{STRING}.append'' is a regression test for {STRING}.append. There are a number of implicit tags for each test, such as the ''class'' tag ({STRING_TESTS}.test_append has the implicit tag ''class/{STRING_TESTS}.test_append'').&lt;br /&gt;
&lt;br /&gt;
{{Note|Tags are defined as strings, but in the view we sometimes want a tag to represent a class or a feature. The way this is done right now is that the view knows that if a tag starts with &amp;quot;covers/&amp;quot; it is followed by a class and feature name. Another approach would be to define such tags like this: &amp;quot;covers.{CLASS_NAME}.feature_name&amp;quot;. This would allow user-defined tags to have clickable nodes in the view. We could also introduce other special tags, such as dates/times.}}&lt;br /&gt;
&lt;br /&gt;
==== Different views ====&lt;br /&gt;
&lt;br /&gt;
Based on the notion of tags, we are able to define different views. The default view ''Test sets'' simply shows a hierarchical tree for every ''name.X'' tag. This enables us to define more views, such as ''Class tested'', which displays every ''covers.X'' tag. Note that with tags other than ''name.'', some tests might get listed multiple times, while others not containing such a tag must be listed explicitly. The main advantage is that users can define their own views based on any type of tags.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Note|The tool should support multiple selection. This is important for executing a number of selected test routines, showing past execution results, etc. Also, when selecting e.g. a class node, it should execute all leaves below that node.}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Running tests ====&lt;br /&gt;
[[Image:testing_run-menu.jpg]]&lt;br /&gt;
&lt;br /&gt;
The '''run''' menu provides different options for running tests in the background:&lt;br /&gt;
&lt;br /&gt;
* Run all tests in system&lt;br /&gt;
* Run currently failing ones&lt;br /&gt;
* Run tests for classes last modified (better description needed here)&lt;br /&gt;
* Only run tests shown below&lt;br /&gt;
* Only run tests which are selected below&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Note|We should have two different views for displaying testing history: one structured by test sessions (a list of test executions containing all test routines for each session) and one listing recent executions for a single test routine.}}&lt;br /&gt;
&lt;br /&gt;
=== Generate tests automatically ===&lt;br /&gt;
&lt;br /&gt;
[[Image:testing_generate-menu.jpg]]&lt;br /&gt;
&lt;br /&gt;
The '''generate''' menu lets you generate new tests for all classes in the system (randomly picked?) or for classes which were last modified.&lt;br /&gt;
&lt;br /&gt;
=== Extract tests from a running application ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is a simple example of an extracted test case (note that '''EQA_EXTRACTED_TEST_SET''' inherits from '''EQA_TEST_SET''' and implements all functionality for executing an extracted test).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
indexing&lt;br /&gt;
    testing: &amp;quot;type/extracted&amp;quot;&lt;br /&gt;
&lt;br /&gt;
class&lt;br /&gt;
    STRING_TESTS_001&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    EQA_EXTRACTED_TEST_SET&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_append_integer&lt;br /&gt;
            -- Call `routine_under_test' with input provided by `context'.&lt;br /&gt;
        indexing&lt;br /&gt;
            tag: &amp;quot;covers/{STRING}.append_integer&amp;quot;&lt;br /&gt;
        do&lt;br /&gt;
            run_extracted_test (agent {STRING}.append_integer, [&amp;quot;#1&amp;quot;, 100])&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Access&lt;br /&gt;
&lt;br /&gt;
    context: !ARRAY [!TUPLE [type: !TYPE [ANY]; attributes: !TUPLE; inv: BOOLEAN]]&lt;br /&gt;
            -- &amp;lt;Precursor&amp;gt;&lt;br /&gt;
        once&lt;br /&gt;
            Result := &amp;lt;&amp;lt;&lt;br /&gt;
                [{STRING}, [&lt;br /&gt;
                        &amp;quot;this is an integer: &amp;quot;&lt;br /&gt;
                    ], False]&lt;br /&gt;
            &amp;gt;&amp;gt;&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end -- class STRING_TESTS_001&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Each test routine passes the agent of the routine to be tested, along with a tuple containing the arguments (#x referring to objects in `context'). This basically replaces the missing reflection functionality for calling features. '''context''' is also deferred in '''EQA_EXTRACTED_TEST_SET''' and contains all data from the heap and call stack that was reachable by the routine at extraction time. Each TUPLE represents an object, where `inv' defines whether the object should fulfill its invariant or not (if the object was on the stack at extraction time, this does not have to be the case).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This design lets us have the entire test in one file. This is practical especially in the situation where a user should submit such a test as a bug report after experiencing a crash.&lt;br /&gt;
The reason for this design is mainly the set_up procedure: creating all objects in '''context''' must be done during set_up, and if there is a failure there, set_up will be blamed instead of the actual test routine, which makes the test not fail but become invalid. This can happen e.g. if one of the objects in the context does not fulfill its invariant, which in turn could result from simply editing the class being tested. Any suggestions welcome!&lt;br /&gt;
&lt;br /&gt;
=== Background test execution ===&lt;br /&gt;
== Open questions ==&lt;br /&gt;
(This section should disappear as the questions get answered.)&lt;br /&gt;
&lt;br /&gt;
== Wish list ==&lt;br /&gt;
&lt;br /&gt;
If you have any suggestions or ideas which would improve the testing tool, please add them to this section.&lt;br /&gt;
&lt;br /&gt;
=== tests with context ===&lt;br /&gt;
It would be nice to have contextual test cases.&lt;br /&gt;
For instance, you would create a test class&lt;br /&gt;
and an associated &amp;quot;test point&amp;quot; (a kind of breakpoint, or like aspect programming),&lt;br /&gt;
and then, when you run the execution in &amp;quot;testing mode&amp;quot; (under the debugger, or the testing tool),&lt;br /&gt;
whenever you reach this &amp;quot;test point&amp;quot;, it would trigger the associated test case.&lt;br /&gt;
I agree, it is no longer automatic testing, since you need to make sure the execution goes through this &amp;quot;test point&amp;quot;,&lt;br /&gt;
but this would be useful to launch specific tests with a valid context.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
* [[Testing Tool (Architecture)]]&lt;br /&gt;
* [[Eweasel]]&lt;br /&gt;
* [[CddBranch]]&lt;br /&gt;
* [[Eiffel Testing Tool]]&lt;/div&gt;</summary>
		<author><name>Arnofiva</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=Testing_Tool_(Specification)&amp;diff=11687</id>
		<title>Testing Tool (Specification)</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=Testing_Tool_(Specification)&amp;diff=11687"/>
				<updated>2008-10-08T16:23:35Z</updated>
		
		<summary type="html">&lt;p&gt;Arnofiva: Added wish list section&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Testing]]&lt;br /&gt;
&lt;br /&gt;
{{UnderConstruction}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Main functionalities ==&lt;br /&gt;
&lt;br /&gt;
=== Add unit/system level tests ===&lt;br /&gt;
&lt;br /&gt;
Semantically there is no difference between unit tests and system level tests; thus all tests can be written in Eiffel in a uniform way.&lt;br /&gt;
&lt;br /&gt;
A test is a routine having the prefix '''test''' in a class inheriting from '''TEST_SET'''. In general, features in classes used specifically for testing should be exported at most to {TESTING_CLASS}. This is to prevent testing code from remaining in a finalized system. If you write a helper class for your test routines, let it inherit from '''TESTING_CLASS''' (Note: '''TEST_SET''' already inherits from '''TESTING_CLASS'''). Additionally, you should make leaf test sets frozen and make sure you never directly reference testing classes in your project code.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== System level test specifics ====&lt;br /&gt;
&lt;br /&gt;
Since system level testing often relies on external items like files, '''SYSTEM_LEVEL_TEST_SET''' provides a number of helper routines for accessing them. These classes will probably have to be in a special testing library, since they also make use of other libraries such as the process library.&lt;br /&gt;
&lt;br /&gt;
==== Location of tests (config file) ====&lt;br /&gt;
&lt;br /&gt;
Tests can be located in any cluster of the system. In addition, one can define test-specific clusters in the .ecf file. These clusters do not need to exist for the project to compile. This allows one to have library tests while still being able to deliver the library without them.&lt;br /&gt;
For all classes in the test clusters, the inheritance structure will be evaluated so that test classes inheriting from TEST_SET can be found. Any class belonging to a normal cluster will have to be reachable from the root class to be compiled and detected as a test class. This means the recommended practice is to put test classes into a test cluster, but it is not a rule.&lt;br /&gt;
&lt;br /&gt;
{{Note| Not all classes in a test cluster have to be classes containing tests. Helper classes are one example.}}&lt;br /&gt;
&lt;br /&gt;
Test clusters are also needed to provide a location for test generation/extraction.&lt;br /&gt;
&lt;br /&gt;
==== Additional information ====&lt;br /&gt;
&lt;br /&gt;
The indexing clause can be used to specify which classes and routines are tested by the test routine. Any specifications in the class indexing clause will apply to all tests in that class. Note '''testing_covers''' in the following examples.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Examples ====&lt;br /&gt;
&lt;br /&gt;
Example unit tests '''test_append''' and '''test_boolean'''&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
frozen class TEST_STRING&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    TEST_SET&lt;br /&gt;
        redefine&lt;br /&gt;
            set_up&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Initialization&lt;br /&gt;
&lt;br /&gt;
    set_up&lt;br /&gt;
        do&lt;br /&gt;
            create s.make (10)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Access&lt;br /&gt;
&lt;br /&gt;
    s: STRING&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_append&lt;br /&gt;
        indexing&lt;br /&gt;
            testing: &amp;quot;covers.STRING.append, platform.os.winxp&amp;quot;&lt;br /&gt;
        require&lt;br /&gt;
            set_up: s /= Void and then s.is_empty&lt;br /&gt;
        do&lt;br /&gt;
            s.append (&amp;quot;12345&amp;quot;)&lt;br /&gt;
            assert_string_equality (&amp;quot;append&amp;quot;, s, &amp;quot;12345&amp;quot;)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
    test_boolean&lt;br /&gt;
        indexing&lt;br /&gt;
            testing: &amp;quot;covers.STRING.is_boolean, covers.STRING.to_boolean&amp;quot;&lt;br /&gt;
        require&lt;br /&gt;
            set_up: s /= Void and then s.is_empty&lt;br /&gt;
        do&lt;br /&gt;
            s.append (&amp;quot;True&amp;quot;)&lt;br /&gt;
            assert_true (&amp;quot;boolean&amp;quot;, s.is_boolean and then s.to_boolean)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example system level test '''test_version''' (Note: '''SYSTEM_LEVEL_TEST_SET''' inherits from '''TEST_SET''' and provides basic functionality for executing external commands, including the system currently under development):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
indexing&lt;br /&gt;
    testing_covers: &amp;quot;all&amp;quot;&lt;br /&gt;
&lt;br /&gt;
frozen class TEST_MY_APP&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    SYSTEM_LEVEL_TEST_SET&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_version&lt;br /&gt;
        indexing&lt;br /&gt;
            testing: &amp;quot;platform.os.linux.i386&amp;quot;&lt;br /&gt;
        do&lt;br /&gt;
            run_system_with_args (&amp;quot;--version&amp;quot;)&lt;br /&gt;
            assert_string_equality (&amp;quot;version&amp;quot;, last_output, &amp;quot;my_app version 0.1&amp;quot;)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Manage and run test suite ===&lt;br /&gt;
&lt;br /&gt;
[[Image:testing_set-view.png|right|400px|thumb|Standard view listing existing test sets and the tests they contain]]&lt;br /&gt;
[[Image:testing_cut-view.png|right|400px|thumb|Predefined ''Class tested'' view listing classes/features of the system together with the associated tests (Note: since test_boolean is tagged to cover multiple features, it also appears multiple times in the view)]]&lt;br /&gt;
[[Image:testing_user-view.png|right|400px|thumb|User defined view (by simply typing part of the tag), where the tool creates a view based on how the tests are tagged (see Examples above)]]&lt;br /&gt;
&lt;br /&gt;
The tool should have its own icon for displaying test cases (test routines); in this example it is a Lego block. Especially for views like ''list all tests for this routine'', it is important to see the difference between the actual routine and its tests. The tool also has more of a vertical layout: since the number of tests is comparable to the number of classes in the system, it makes sense for the tools to have the same layout. It also allows tabs at the bottom for displaying further information, such as execution details (output, call stack, etc.).&lt;br /&gt;
&lt;br /&gt;
The '''menu bar''' includes the following buttons:&lt;br /&gt;
* Create a new manual test case (opens a wizard)&lt;br /&gt;
** if a test class is dropped on the button, the wizard will suggest creating the new test in that class&lt;br /&gt;
** if a normal class (or feature) is dropped on the button, the wizard will suggest creating a test for that class (or feature)&lt;br /&gt;
* Menu for generating new tests (defaults to the last chosen one?)&lt;br /&gt;
** if a normal class/feature is dropped on the button, tests are generated for that class/feature&lt;br /&gt;
&lt;br /&gt;
* Menu for executing tests in the background (defaults to the last chosen one?)&lt;br /&gt;
** if any class/feature is dropped on the button, the tests associated with that class/feature are run&lt;br /&gt;
* Run a test in the debugger (a test must be selected or dropped on the button to start)&lt;br /&gt;
* Stop any execution (background or debugger)&lt;br /&gt;
&lt;br /&gt;
* Open the settings dialog for testing&lt;br /&gt;
&lt;br /&gt;
* Status indicating how many tests have been run so far and&lt;br /&gt;
* how many of them are failing&lt;br /&gt;
&lt;br /&gt;
'''View''' defines how the test cases are listed (see below).&lt;br /&gt;
&lt;br /&gt;
'''Filter''' can be used to type keywords so that only test cases whose tags include those keywords are shown (see below). It is a drop-down, so predefined filter patterns (such as ''outcome.fail'') can be reused.&lt;br /&gt;
&lt;br /&gt;
The '''grid''' contains a tree view of all test cases (test cases are always leaves). Multiple columns provide further information. Currently there are two indications of whether a test fails (a column and icons); only one is needed, but both are shown here for comparison. The advantage of using icons is that less space is needed. Coloring the background of a row containing a failing test case would be an option as well.&lt;br /&gt;
&lt;br /&gt;
==== Tags ====&lt;br /&gt;
&lt;br /&gt;
Each test can have a number of tags. A tag can be a single string or hierarchically structured with dots ('.'). For example, a test with the tag ''covers.STRING.append'' means that this test is a regression test for {STRING}.append. There are a number of implicit tags for each test, such as the ''name'' tag ({TEST_STRING}.test_append has the implicit tag ''name.TEST_STRING.test_append'').&lt;br /&gt;
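&lt;br /&gt;
As an illustration, a test routine might declare an explicit ''covers'' tag as follows (the class and feature names are hypothetical examples, not part of the actual library):&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    test_append is&lt;br /&gt;
            -- Regression test for {STRING}.append; besides the explicit&lt;br /&gt;
            -- tag below, the implicit tag name.TEST_STRING.test_append&lt;br /&gt;
            -- is derived from the class and feature name.&lt;br /&gt;
        indexing&lt;br /&gt;
            tag: &amp;quot;covers.STRING.append&amp;quot;&lt;br /&gt;
        do&lt;br /&gt;
            -- ...&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;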
&lt;br /&gt;
{{Note|Tags are defined as strings, but in the view we sometimes want a tag to represent a class or a feature. The way this is done right now is that the view simply knows that if a tag starts with &amp;quot;covers.&amp;quot;, it is followed by a class and feature name. Another approach would be to define such tags like this: &amp;quot;covers.{CLASS_NAME}.feature_name&amp;quot;. This would allow user-defined tags to have clickable nodes in the view. We could also introduce other special tags, such as dates/times.}}&lt;br /&gt;
&lt;br /&gt;
==== Different views ====&lt;br /&gt;
&lt;br /&gt;
Based on the notion of tags, we are able to define different views. The default view ''Test sets'' simply shows a hierarchical tree for every ''name.X'' tag. This enables us to define further views, such as ''Class tested'', which displays every ''covers.X'' tag. Note that with tags other than ''name.'', some tests might get listed multiple times, while tests not carrying such a tag must be listed explicitly. The main advantage is that users can define their own views based on any type of tag.&lt;br /&gt;
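&lt;br /&gt;
Building the tree of a view essentially means splitting each matching tag at the dots. A minimal sketch of such a helper (hypothetical, not part of the actual tool):&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    tag_levels (a_tag: STRING): LIST [STRING] is&lt;br /&gt;
            -- Components of `a_tag' separated by '.', used as nested&lt;br /&gt;
            -- node labels when building the tree of a view; e.g.&lt;br /&gt;
            -- &amp;quot;covers.STRING.append&amp;quot; yields &amp;quot;covers&amp;quot;, &amp;quot;STRING&amp;quot;, &amp;quot;append&amp;quot;.&lt;br /&gt;
        local&lt;br /&gt;
            l_start, l_dot: INTEGER&lt;br /&gt;
            l_result: ARRAYED_LIST [STRING]&lt;br /&gt;
        do&lt;br /&gt;
            create l_result.make (3)&lt;br /&gt;
            from&lt;br /&gt;
                l_start := 1&lt;br /&gt;
            until&lt;br /&gt;
                l_start &amp;gt; a_tag.count&lt;br /&gt;
            loop&lt;br /&gt;
                l_dot := a_tag.index_of ('.', l_start)&lt;br /&gt;
                if l_dot = 0 then&lt;br /&gt;
                        -- No further dot: last component.&lt;br /&gt;
                    l_dot := a_tag.count + 1&lt;br /&gt;
                end&lt;br /&gt;
                l_result.extend (a_tag.substring (l_start, l_dot - 1))&lt;br /&gt;
                l_start := l_dot + 1&lt;br /&gt;
            end&lt;br /&gt;
            Result := l_result&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;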
&lt;br /&gt;
&lt;br /&gt;
{{Note|The tools should support multiple selection. This is important for executing a number of selected test routines, showing past execution results, etc. Also, when e.g. a class node is selected, all leaves below that node should be executed.}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Running tests ====&lt;br /&gt;
[[Image:testing_run-menu.jpg]]&lt;br /&gt;
&lt;br /&gt;
The '''run''' menu provides different options for running tests in the background:&lt;br /&gt;
&lt;br /&gt;
* Run all tests in system&lt;br /&gt;
* Run currently failing ones&lt;br /&gt;
* Run tests for recently modified classes (better description needed here)&lt;br /&gt;
* Only run the tests shown below&lt;br /&gt;
* Only run the tests selected below&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Note|We should have two different views for displaying testing history: one structured by test session (a list of test executions containing all test routines for each session), and one listing recent executions of a single test routine.}}&lt;br /&gt;
&lt;br /&gt;
=== Generate tests automatically ===&lt;br /&gt;
&lt;br /&gt;
[[Image:testing_generate-menu.jpg]]&lt;br /&gt;
&lt;br /&gt;
The '''generate''' menu lets you generate new tests for all classes in the system (randomly picked?) or for the classes that were last modified.&lt;br /&gt;
&lt;br /&gt;
=== Extract tests from a running application ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is a simple example of an extracted test case (note that '''EXTRACTED_TEST_SET''' inherits from '''TEST_SET''' and implements all the functionality for executing an extracted test).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
class&lt;br /&gt;
    TEST_STRING_001&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    EXTRACTED_TEST_SET&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Initialization&lt;br /&gt;
&lt;br /&gt;
    set_up_routine is&lt;br /&gt;
            -- &amp;lt;Precursor&amp;gt;&lt;br /&gt;
        do&lt;br /&gt;
            routine_under_test := agent {STRING}.append_integer&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_append_integer is&lt;br /&gt;
            -- Call `routine_under_test' with input provided by `context'.&lt;br /&gt;
        indexing&lt;br /&gt;
            tag: &amp;quot;covers.STRING.append_integer&amp;quot;&lt;br /&gt;
        do&lt;br /&gt;
            call_routine_under_test&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Access&lt;br /&gt;
&lt;br /&gt;
    context: ARRAY [TUPLE [id: STRING; type: STRING; inv: BOOLEAN; attributes: ARRAY [STRING]]] is&lt;br /&gt;
            -- &amp;lt;Precursor&amp;gt;&lt;br /&gt;
        once&lt;br /&gt;
            Result := &amp;lt;&amp;lt;&lt;br /&gt;
                [&amp;quot;#operand&amp;quot;, &amp;quot;TUPLE [STRING, INTEGER]&amp;quot;, True, &amp;lt;&amp;lt; &amp;quot;#2&amp;quot;, &amp;quot;110&amp;quot; &amp;gt;&amp;gt;],&lt;br /&gt;
                [&amp;quot;#2&amp;quot;, &amp;quot;STRING&amp;quot;, True, &amp;lt;&amp;lt; &amp;quot;this is an integer: &amp;quot; &amp;gt;&amp;gt;]&lt;br /&gt;
            &amp;gt;&amp;gt;&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end -- class TEST_STRING_001&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''EXTRACTED_TEST_SET''' implements '''set_up''' (frozen), but has a deferred feature '''set_up_routine''' which assigns the proper agent to '''routine_under_test'''. This basically replaces the missing reflection functionality for calling features. '''context''' is also deferred in '''EXTRACTED_TEST_SET''' and contains all data from the heap and call stack that was reachable by the routine at extraction time. Each TUPLE represents an object, where `inv' defines whether the object has to fulfill its invariant (if the object was on the stack at extraction time, this does not have to be the case).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This design lets us have the entire test in one file, which is practical especially when a user submits such a test as a bug report after experiencing a crash.&lt;br /&gt;
The current drawback is that the design only allows one test per class. The main reason is the set_up procedure: all objects in '''context''' must be created during set_up, and if a failure occurs there, set_up will be blamed instead of the actual test routine, which makes the test not fail but become invalid. This can happen, e.g., if one of the objects in the context does not fulfill its invariant, which in turn could result from simply editing the class being tested. Any suggestions are welcome!&lt;br /&gt;
&lt;br /&gt;
=== Background test execution ===&lt;br /&gt;
== Open questions ==&lt;br /&gt;
(This section should disappear as the questions get answered.)&lt;br /&gt;
&lt;br /&gt;
== Wish list ==&lt;br /&gt;
&lt;br /&gt;
If you have any suggestions or ideas which would improve the testing tool, please add them to this section.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
* [[Testing Tool (Architecture)]]&lt;br /&gt;
* [[Eweasel]]&lt;br /&gt;
* [[CddBranch]]&lt;br /&gt;
* [[Eiffel Testing Tool]]&lt;/div&gt;</summary>
		<author><name>Arnofiva</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=Testing_Tool_(Architecture)&amp;diff=11654</id>
		<title>Testing Tool (Architecture)</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=Testing_Tool_(Architecture)&amp;diff=11654"/>
				<updated>2008-09-26T03:48:20Z</updated>
		
		<summary type="html">&lt;p&gt;Arnofiva: /* EIFFEL_TEST_I */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Testing]]&lt;br /&gt;
&lt;br /&gt;
== Test system architecture ==&lt;br /&gt;
&lt;br /&gt;
* separate system for executing tests&lt;br /&gt;
* system is only responsible for running tests, without determining whether a test fails or not -&amp;gt; compiling and launching the system is done by the tool, which also acts as the oracle&lt;br /&gt;
* tests are compiled up to degree 3 in the development system, so the user can edit tests like normal classes, but they do not affect the resulting binary&lt;br /&gt;
* tests referenced by the root class of the development system will be compiled anyway; this way we can also run tests in the debugger&lt;br /&gt;
* in CDD: test system is implicit target (does not have to be in ecf) which inherits from development target, test target simply defines new root class/feature&lt;br /&gt;
* re-use development EIFGEN (copy) so test system does not need to compile from scratch?&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
EiffelStudio  &amp;lt;-----------------------------&amp;gt;  Testing tool  &amp;lt;-----------------------------&amp;gt;  Test executor&lt;br /&gt;
&lt;br /&gt;
* Show tests in system                         * can be part of EiffelStudio or               * execute test in safe environment&lt;br /&gt;
* Show test result                               compiled as separate tool to be                (executor is allowed to crash)&lt;br /&gt;
* Provide test creation wizards                  used e.g. through console&lt;br /&gt;
* Interface for CDD, Auto tests,               * compile test executor&lt;br /&gt;
  creating manual tests, running               * distribute test executors to&lt;br /&gt;
* provide ESF service for                        different machines&lt;br /&gt;
  testing/test results                         * schedule test execution&lt;br /&gt;
                                               * provide test results&lt;br /&gt;
                                               * find all tests for a given ecf file&lt;br /&gt;
                                               * write root class for test executor&lt;br /&gt;
&lt;br /&gt;
CDD&lt;br /&gt;
* implemented partially in&lt;br /&gt;
  debugger/executable&lt;br /&gt;
* should be part of any Eiffel&lt;br /&gt;
  application, that way test can be&lt;br /&gt;
  created for bug submitting&lt;br /&gt;
* extraction can be initiated&lt;br /&gt;
  through debugger, breakpoints,&lt;br /&gt;
  failure window, etc.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Auto Test&lt;br /&gt;
* separate tool, interface in EiffelStudio&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Provide testing as a service in EiffelStudio ==&lt;br /&gt;
&lt;br /&gt;
{{Note|Interface classes can be found here: https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/}}&lt;br /&gt;
&lt;br /&gt;
Using ESS, we can provide all testing functionality as a service within EiffelStudio. That way other tools can make use of the testing functionality, and the tool does not have to access the implementation directly. What follows is a short description of the interfaces created so far.&lt;br /&gt;
&lt;br /&gt;
So far the service consists of three major parts: the test suite storing all tests, test execution, and test creation. The service already includes more than 20 interface classes, so it will be important to find a good abstraction. Another aspect is that some parts of the service should be extensible: clients should be able to define new types of tests, executors or factories.&lt;br /&gt;
&lt;br /&gt;
=== EIFFEL_TEST_SUITE_S ===&lt;br /&gt;
&lt;br /&gt;
The test suite is the first instance of the service. It holds the list of all tests in the system and controls all execution of tests. Right now the service has the restriction that only one executor can run at a time. Although there might be no reason against having two executors run in parallel, the restriction makes observing the execution of tests much simpler. Factories, on the other hand, can be launched by anyone and thus run in parallel; in that case clients are usually interested in when a new test is created, for which events already exist in the test suite (see below).&lt;br /&gt;
&lt;br /&gt;
Changes in the test suite can be observed, so clients can be notified when tests are added, removed or modified. There are also events for activating or deactivating an executor in the test suite.&lt;br /&gt;
&lt;br /&gt;
The test suite also provides two registrars where new executors or factories can be registered. Later clients can query whether a certain executor/factory is available and use it if so. More on executors and factories later.&lt;br /&gt;
&lt;br /&gt;
{{Block|[https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/test_suite_s.e TEST_SUITE_S]}}&lt;br /&gt;
&lt;br /&gt;
=== EIFFEL_TEST_I ===&lt;br /&gt;
&lt;br /&gt;
EIFFEL_TEST_I is the common test representation. EIFFEL_TEST_I inherits from a class TAGABLE_I, which means that all tests have a list of tags represented as strings ([[Testing_Tool_(Specification)#Tags|Tags section of the specification]]). This allows us to have commonly used functionality in the service itself, like filtering (see TAG_BASED_FILTERED_COLLECTION). It also enables users to introduce their own attributes for tests.&lt;br /&gt;
&lt;br /&gt;
EIFFEL_TEST_I points to the abstract syntax representation of its routine and of the class in which the routine is located. This is useful to the implementation, but could also be to clients. In any case, all relevant information should be accessible through the interface (such as the feature name and the tags in the indexing clause).&lt;br /&gt;
&lt;br /&gt;
All tests have a list of outcomes from previous execution sessions. More on that is explained in the next section.&lt;br /&gt;
&lt;br /&gt;
{{Block|[https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/item/test_i.e TEST_I]&lt;br /&gt;
[https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/item/eiffel_test_i.e EIFFEL_TEST_I]&lt;br /&gt;
[https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/support/tagable_i.e TAGABLE_I]&lt;br /&gt;
[https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/support/filtered_collection_i.e FILTERED_COLLECTION_I]}}&lt;br /&gt;
&lt;br /&gt;
=== TEST_EXECUTOR_I ===&lt;br /&gt;
&lt;br /&gt;
This is a general interface for executing tests. It takes a list of tests and executes each of them. One restriction it imposes on its implementers is that execution is non-blocking: `'''run'''' returns immediately and all tests are executed asynchronously. This in turn makes it simpler for clients to use (especially graphical UIs).&lt;br /&gt;
&lt;br /&gt;
All state changes of TEST_EXECUTOR_I can be observed by inheriting TEST_EXECUTOR_OBSERVER and connecting to the executor.&lt;br /&gt;
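&lt;br /&gt;
A client might use the executor along these lines. This is only a sketch; feature names such as `connect' and `on_test_executed' are assumptions, not the actual interface:&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
class&lt;br /&gt;
    MY_RESULT_VIEW&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    TEST_EXECUTOR_OBSERVER&lt;br /&gt;
&lt;br /&gt;
feature -- Basic operations&lt;br /&gt;
&lt;br /&gt;
    launch (an_executor: TEST_EXECUTOR_I; some_tests: LIST [TEST_I]) is&lt;br /&gt;
            -- Connect to `an_executor' and start running `some_tests'.&lt;br /&gt;
            -- `run' returns immediately; results arrive via callbacks.&lt;br /&gt;
        do&lt;br /&gt;
            an_executor.connect (Current)&lt;br /&gt;
            an_executor.run (some_tests)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
    on_test_executed (a_test: TEST_I) is&lt;br /&gt;
            -- Called asynchronously for each executed test; a real&lt;br /&gt;
            -- implementation would update the UI with the latest&lt;br /&gt;
            -- outcome of `a_test'.&lt;br /&gt;
        do&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;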
&lt;br /&gt;
As mentioned above, TEST_I keeps a list of outcomes produced by TEST_EXECUTOR_I. In the case of EIFFEL_TEST_I, the list contains items of type EIFFEL_TEST_OUTCOME_I. Each outcome points to an EIFFEL_TEST_ROUTINE_INVOCATION_RESPONSE_I, which describes one stage of a test execution. The three stages are set-up, test and tear-down, where ''test'' just means calling the actual testing routine. Based on the responses of each stage, EIFFEL_TEST_OUTCOME_I determines whether a test passes or fails. In cases where this cannot be determined because the execution behaved unexpectedly, the outcome is flagged as unresolved; the test then needs to be inspected, which is expressed as `'''is_maintenance_required'''' in EIFFEL_TEST_OUTCOME_I.&lt;br /&gt;
&lt;br /&gt;
{{Block|[https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/execution/test_executor_i.e TEST_EXECUTOR_I]&lt;br /&gt;
[https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/execution/test_executor_observer.e TEST_EXECUTOR_OBSERVER]&lt;br /&gt;
[https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/execution/test_outcome_i.e TEST_OUTCOME_I]&lt;br /&gt;
[https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/execution/eiffel_test_outcome_i.e EIFFEL_TEST_OUTCOME_I]&lt;br /&gt;
[https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/execution/eiffel_test_routine_invocation_response_i.e EIFFEL_TEST_ROUTINE_INVOCATION_RESPONSE_I]}}&lt;br /&gt;
&lt;br /&gt;
=== TEST_FACTORY_I ===&lt;br /&gt;
&lt;br /&gt;
Factories are similar to executors since they are registered in the test suite and once triggered run asynchronously. A test factory takes a TEST_CONFIGURATION_I, which describes properties of a new test. There is a specialized version EIFFEL_TEST_CONFIGURATION_I for Eiffel tests (including class names, location, features and classes being tested by the new test).&lt;br /&gt;
So far the notification is kept simple by providing a callback function to the '''run''' routine of the factory. This is because clients will be notified anyway when a new test is added to the system through the test suite.&lt;br /&gt;
&lt;br /&gt;
This pattern should also be valid for test generation and extraction (Auto Test/CDD), where a factory might create multiple tests rather than a single one.&lt;br /&gt;
&lt;br /&gt;
{{Block|[https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/factory/test_factory_i.e TEST_FACTORY_I]&lt;br /&gt;
[https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/factory/test_configuration_i.e TEST_CONFIGURATION_I]&lt;br /&gt;
[https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/factory/eiffel_test_configuration_i.e EIFFEL_TEST_CONFIGURATION_I]}}&lt;br /&gt;
&lt;br /&gt;
== Communication between tool and test executor ==&lt;br /&gt;
&lt;br /&gt;
=== Protocol ===&lt;br /&gt;
&lt;br /&gt;
'''From tool to executor'''&lt;br /&gt;
&lt;br /&gt;
* name(s) of test to execute&lt;br /&gt;
* quit&lt;br /&gt;
&lt;br /&gt;
'''From executor to tool'''&lt;br /&gt;
&lt;br /&gt;
* test result&lt;br /&gt;
* text output produced by test&lt;br /&gt;
* exception details (type, tag, feature, class? occurred during set up, test, tear down?)&lt;br /&gt;
* call stack for exception&lt;br /&gt;
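&lt;br /&gt;
A possible text-based exchange could look as follows (purely illustrative; whether communication is text-based at all is one of the open questions):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
tool     -&amp;gt; executor:  run TEST_STRING.test_append&lt;br /&gt;
executor -&amp;gt; tool:      output &amp;quot;appending...&amp;quot;&lt;br /&gt;
executor -&amp;gt; tool:      result TEST_STRING.test_append pass&lt;br /&gt;
tool     -&amp;gt; executor:  quit&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;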
&lt;br /&gt;
=== Open questions ===&lt;br /&gt;
&lt;br /&gt;
* executor per machine/processor?&lt;br /&gt;
* text-based/object-based communication?&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
* [[Testing Tool (Specification)]]&lt;/div&gt;</summary>
		<author><name>Arnofiva</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=Testing_Tool_(Architecture)&amp;diff=11302</id>
		<title>Testing Tool (Architecture)</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=Testing_Tool_(Architecture)&amp;diff=11302"/>
				<updated>2008-07-21T17:37:23Z</updated>
		
		<summary type="html">&lt;p&gt;Arnofiva: /* Provide testing as a service in EiffelStudio */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Testing]]&lt;br /&gt;
&lt;br /&gt;
== Test system architecture ==&lt;br /&gt;
&lt;br /&gt;
* separate system for executing tests&lt;br /&gt;
* system is only responsible for running tests, without determining whether a test fails or not -&amp;gt; compiling and launching the system is done by the tool, which also acts as the oracle&lt;br /&gt;
* tests are compiled up to degree 3 in the development system, so the user can edit tests like normal classes, but they do not affect the resulting binary&lt;br /&gt;
* tests referenced by the root class of the development system will be compiled anyway; this way we can also run tests in the debugger&lt;br /&gt;
* in CDD: test system is implicit target (does not have to be in ecf) which inherits from development target, test target simply defines new root class/feature&lt;br /&gt;
* re-use development EIFGEN (copy) so test system does not need to compile from scratch?&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
EiffelStudio  &amp;lt;-----------------------------&amp;gt;  Testing tool  &amp;lt;-----------------------------&amp;gt;  Test executor&lt;br /&gt;
&lt;br /&gt;
* Show tests in system                         * can be part of EiffelStudio or               * execute test in safe environment&lt;br /&gt;
* Show test result                               compiled as separate tool to be                (executor is allowed to crash)&lt;br /&gt;
* Provide test creation wizards                  used e.g. through console&lt;br /&gt;
* Interface for CDD, Auto tests,               * compile test executor&lt;br /&gt;
  creating manual tests, running               * distribute test executors to&lt;br /&gt;
* provide ESF service for                        different machines&lt;br /&gt;
  testing/test results                         * schedule test execution&lt;br /&gt;
                                               * provide test results&lt;br /&gt;
                                               * find all tests for a given ecf file&lt;br /&gt;
                                               * write root class for test executor&lt;br /&gt;
&lt;br /&gt;
CDD&lt;br /&gt;
* implemented partially in&lt;br /&gt;
  debugger/executable&lt;br /&gt;
* should be part of any Eiffel&lt;br /&gt;
  application, that way test can be&lt;br /&gt;
  created for bug submitting&lt;br /&gt;
* extraction can be initiated&lt;br /&gt;
  through debugger, breakpoints,&lt;br /&gt;
  failure window, etc.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Auto Test&lt;br /&gt;
* separate tool, interface in EiffelStudio&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Provide testing as a service in EiffelStudio ==&lt;br /&gt;
&lt;br /&gt;
{{Note|Interface classes can be found here: https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/}}&lt;br /&gt;
&lt;br /&gt;
Using ESS, we can provide all testing functionality as a service within EiffelStudio. That way other tools can make use of the testing functionality, and the tool does not have to access the implementation directly. What follows is a short description of the interfaces created so far.&lt;br /&gt;
&lt;br /&gt;
So far the service consists of three major parts: the test suite storing all tests, test execution, and test creation. The service already includes more than 20 interface classes, so it will be important to find a good abstraction. Another aspect is that some parts of the service should be extensible: clients should be able to define new types of tests, executors or factories.&lt;br /&gt;
&lt;br /&gt;
=== EIFFEL_TEST_SUITE_S ===&lt;br /&gt;
&lt;br /&gt;
The test suite is the first instance of the service. It holds the list of all tests in the system and controls all execution of tests. Right now the service has the restriction that only one executor can run at a time. Although there might be no reason against having two executors run in parallel, the restriction makes observing the execution of tests much simpler. Factories, on the other hand, can be launched by anyone and thus run in parallel; in that case clients are usually interested in when a new test is created, for which events already exist in the test suite (see below).&lt;br /&gt;
&lt;br /&gt;
Changes in the test suite can be observed, so clients can be notified when tests are added, removed or modified. There are also events for activating or deactivating an executor in the test suite.&lt;br /&gt;
&lt;br /&gt;
The test suite also provides two registrars where new executors or factories can be registered. Later clients can query whether a certain executor/factory is available and use it if so. More on executors and factories later.&lt;br /&gt;
&lt;br /&gt;
{{Block|[https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/test_suite_s.e TEST_SUITE_S]}}&lt;br /&gt;
&lt;br /&gt;
=== EIFFEL_TEST_I ===&lt;br /&gt;
&lt;br /&gt;
EIFFEL_TEST_I will be the common test representation for now. It inherits from a more general interface TEST_I, which enables users to introduce new types of tests (not necessarily written in Eiffel). TEST_I inherits from a class TAGABLE, which means that all tests have a list of tags represented as strings ([[Testing_Tool_(Specification)#Tags|Tags section of the specification]]). This allows us to have commonly used functionality in the service itself, like filtering (see FILTERED_COLLECTION_I). It also enables users to introduce their own attributes for tests.&lt;br /&gt;
&lt;br /&gt;
EIFFEL_TEST_I points to the abstract syntax representation of its routine and of the class in which the routine is located. This is useful to the implementation, but could also be to clients. In any case, all relevant information should be accessible through the interface (such as the feature name and the tags in the indexing clause).&lt;br /&gt;
&lt;br /&gt;
All tests have a list of outcomes from previous execution sessions. More on that is explained in the next section.&lt;br /&gt;
&lt;br /&gt;
{{Block|[https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/item/test_i.e TEST_I]&lt;br /&gt;
[https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/item/eiffel_test_i.e EIFFEL_TEST_I]&lt;br /&gt;
[https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/support/tagable_i.e TAGABLE_I]&lt;br /&gt;
[https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/support/filtered_collection_i.e FILTERED_COLLECTION_I]}}&lt;br /&gt;
&lt;br /&gt;
=== TEST_EXECUTOR_I ===&lt;br /&gt;
&lt;br /&gt;
This is a general interface for executing tests. It takes a list of tests and executes each of them. One restriction it imposes on its implementers is that execution is non-blocking: `'''run'''' returns immediately and all tests are executed asynchronously. This in turn makes it simpler for clients to use (especially graphical UIs).&lt;br /&gt;
&lt;br /&gt;
All state changes of TEST_EXECUTOR_I can be observed by inheriting TEST_EXECUTOR_OBSERVER and connecting to the executor.&lt;br /&gt;
&lt;br /&gt;
As mentioned above, TEST_I keeps a list of outcomes produced by TEST_EXECUTOR_I. In the case of EIFFEL_TEST_I, the list contains items of type EIFFEL_TEST_OUTCOME_I. Each outcome points to an EIFFEL_TEST_ROUTINE_INVOCATION_RESPONSE_I, which describes one stage of a test execution. The three stages are set-up, test and tear-down, where ''test'' just means calling the actual testing routine. Based on the responses of each stage, EIFFEL_TEST_OUTCOME_I determines whether a test passes or fails. In cases where this cannot be determined because the execution behaved unexpectedly, the outcome is flagged as unresolved; the test then needs to be inspected, which is expressed as `'''is_maintenance_required'''' in EIFFEL_TEST_OUTCOME_I.&lt;br /&gt;
&lt;br /&gt;
{{Block|[https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/execution/test_executor_i.e TEST_EXECUTOR_I]&lt;br /&gt;
[https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/execution/test_executor_observer.e TEST_EXECUTOR_OBSERVER]&lt;br /&gt;
[https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/execution/test_outcome_i.e TEST_OUTCOME_I]&lt;br /&gt;
[https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/execution/eiffel_test_outcome_i.e EIFFEL_TEST_OUTCOME_I]&lt;br /&gt;
[https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/execution/eiffel_test_routine_invocation_response_i.e EIFFEL_TEST_ROUTINE_INVOCATION_RESPONSE_I]}}&lt;br /&gt;
&lt;br /&gt;
=== TEST_FACTORY_I ===&lt;br /&gt;
&lt;br /&gt;
Factories are similar to executors since they are registered in the test suite and once triggered run asynchronously. A test factory takes a TEST_CONFIGURATION_I, which describes properties of a new test. There is a specialized version EIFFEL_TEST_CONFIGURATION_I for Eiffel tests (including class names, location, features and classes being tested by the new test).&lt;br /&gt;
So far the notification is kept simple by providing a callback function to the '''run''' routine of the factory. This is because clients will be notified anyway when a new test is added to the system through the test suite.&lt;br /&gt;
&lt;br /&gt;
This pattern should also be valid for test generation and extraction (Auto Test/CDD), where a factory might create multiple tests rather than a single one.&lt;br /&gt;
&lt;br /&gt;
{{Block|[https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/factory/test_factory_i.e TEST_FACTORY_I]&lt;br /&gt;
[https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/factory/test_configuration_i.e TEST_CONFIGURATION_I]&lt;br /&gt;
[https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/factory/eiffel_test_configuration_i.e EIFFEL_TEST_CONFIGURATION_I]}}&lt;br /&gt;
&lt;br /&gt;
== Communication between tool and test executor ==&lt;br /&gt;
&lt;br /&gt;
=== Protocol ===&lt;br /&gt;
&lt;br /&gt;
'''From tool to executor'''&lt;br /&gt;
&lt;br /&gt;
* name(s) of test to execute&lt;br /&gt;
* quit&lt;br /&gt;
&lt;br /&gt;
'''From executor to tool'''&lt;br /&gt;
&lt;br /&gt;
* test result&lt;br /&gt;
* text output produced by test&lt;br /&gt;
* exception details (type, tag, feature, class? occurred during set up, test, tear down?)&lt;br /&gt;
* call stack for exception&lt;br /&gt;
&lt;br /&gt;
=== Open questions ===&lt;br /&gt;
&lt;br /&gt;
* executor per machine/processor?&lt;br /&gt;
* text-based/object-based communication?&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
* [[Testing Tool (Specification)]]&lt;/div&gt;</summary>
		<author><name>Arnofiva</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=Testing_Tool_(Specification)&amp;diff=11301</id>
		<title>Testing Tool (Specification)</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=Testing_Tool_(Specification)&amp;diff=11301"/>
				<updated>2008-07-21T16:04:39Z</updated>
		
		<summary type="html">&lt;p&gt;Arnofiva: /* Location of Tests (config file) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Testing]]&lt;br /&gt;
&lt;br /&gt;
{{UnderConstruction}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Main functionalities ==&lt;br /&gt;
&lt;br /&gt;
=== Add unit/system level tests ===&lt;br /&gt;
&lt;br /&gt;
Semantically there is no difference between unit tests and system-level tests. This way all tests can be written in Eiffel in a uniform way.&lt;br /&gt;
&lt;br /&gt;
A test is a routine whose name has the prefix '''test''', in a class inheriting from '''TEST_SET'''. In general, features in classes used specifically for testing should be exported at most to {TESTING_CLASS}; this prevents testing code from remaining in a finalized system. If you write a helper class for your test routines, let it inherit from '''TESTING_CLASS''' (note: '''TEST_SET''' already inherits from '''TESTING_CLASS'''). Additionally, you should make leaf test sets frozen and make sure you never directly reference testing classes in your project code.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== System level test specifics ====&lt;br /&gt;
&lt;br /&gt;
Since system level testing often relies on external items such as files, '''SYSTEM_LEVEL_TEST_SET''' provides a number of helper routines for accessing them. These classes will probably have to reside in a special testing library, since they also make use of other libraries such as the process library.&lt;br /&gt;
&lt;br /&gt;
==== Location of tests (config file) ====&lt;br /&gt;
&lt;br /&gt;
Tests can be located in any cluster of the system. In addition, one can define test-specific clusters in the .ecf file. These clusters do not need to exist for the project to compile, which makes it possible to keep tests for a library while still being able to deliver the library without them.&lt;br /&gt;
For all classes in the test clusters, the inheritance structure is evaluated so that test classes inheriting from TEST_SET can be found. A class belonging to a normal cluster has to be reachable from the root class to be compiled and detected as a test class. The recommended practice is therefore to put test classes into a test cluster, but it is not a rule.&lt;br /&gt;
&lt;br /&gt;
{{Note| Not all classes in a test cluster have to contain tests. Helper classes are one example.}}&lt;br /&gt;
&lt;br /&gt;
Test clusters are also needed to provide a location for test generation/extraction.&lt;br /&gt;
&lt;br /&gt;
==== Additional information ====&lt;br /&gt;
&lt;br /&gt;
The indexing clause can be used to specify which classes and routines are tested by the test routine. Any specifications in the class indexing clause will apply to all tests in that class. Note '''testing_covers''' in the following examples.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Examples ====&lt;br /&gt;
&lt;br /&gt;
Example unit tests '''test_append''' and '''test_boolean'''&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
frozen class TEST_STRING&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    TEST_SET&lt;br /&gt;
        redefine&lt;br /&gt;
            set_up&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Initialization&lt;br /&gt;
&lt;br /&gt;
    set_up&lt;br /&gt;
        do&lt;br /&gt;
            create s.make (10)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Access&lt;br /&gt;
&lt;br /&gt;
    s: STRING&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_append&lt;br /&gt;
        indexing&lt;br /&gt;
            testing: &amp;quot;covers.STRING.append, platform.os.winxp&amp;quot;&lt;br /&gt;
        require&lt;br /&gt;
            set_up: s /= Void and then s.is_empty&lt;br /&gt;
        do&lt;br /&gt;
            s.append (&amp;quot;12345&amp;quot;)&lt;br /&gt;
            assert_string_equality (&amp;quot;append&amp;quot;, s, &amp;quot;12345&amp;quot;)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
    test_boolean&lt;br /&gt;
        indexing&lt;br /&gt;
            testing: &amp;quot;covers.STRING.is_boolean, covers.STRING.to_boolean&amp;quot;&lt;br /&gt;
        require&lt;br /&gt;
            set_up: s /= Void and then s.is_empty&lt;br /&gt;
        do&lt;br /&gt;
            s.append (&amp;quot;True&amp;quot;)&lt;br /&gt;
            assert_true (&amp;quot;boolean&amp;quot;, s.is_boolean and then s.to_boolean)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example system level test '''test_version''' (Note: '''SYSTEM_LEVEL_TEST_SET''' inherits from '''TEST_SET''' and provides basic functionality for executing external commands, including the system currently under development):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
indexing&lt;br /&gt;
    testing_covers: &amp;quot;all&amp;quot;&lt;br /&gt;
&lt;br /&gt;
frozen class TEST_MY_APP&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    SYSTEM_LEVEL_TEST_SET&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_version&lt;br /&gt;
        indexing&lt;br /&gt;
            testing: &amp;quot;platform.os.linux.i386&amp;quot;&lt;br /&gt;
        do&lt;br /&gt;
            run_system_with_args (&amp;quot;--version&amp;quot;)&lt;br /&gt;
            assert_string_equality (&amp;quot;version&amp;quot;, last_output, &amp;quot;my_app version 0.1&amp;quot;)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Manage and run test suite ===&lt;br /&gt;
&lt;br /&gt;
[[Image:testing_set-view.png|right|400px|thumb|Standard view listing existing test sets and the tests they contain]]&lt;br /&gt;
[[Image:testing_cut-view.png|right|400px|thumb|Predefined ''Class tested'' view listing classes/features of the system together with the associated tests (Note: since test_boolean is tagged to cover multiple features, it also appears multiple times in the view)]]&lt;br /&gt;
[[Image:testing_user-view.png|right|400px|thumb|User defined view (by simply typing part of the tag), where the tool creates a view based on how the tests are tagged (see Examples above)]]&lt;br /&gt;
&lt;br /&gt;
The tool should have its own icon for displaying test cases (test routines). In this example it is a Lego block. Especially for views like ''list all tests for this routine'', it is important to see the difference between the actual routine and its tests. The tool also has a mostly vertical layout. Since the number of tests is comparable to the number of classes in the system, it makes sense for the tools to share the same layout. It also leaves room for tabs at the bottom for displaying further information, such as execution details (output, call stack, etc.).&lt;br /&gt;
&lt;br /&gt;
The '''menu bar''' includes the following buttons:&lt;br /&gt;
* Create new manual test case (opens wizard)&lt;br /&gt;
** if test class is dropped on button, the wizard will suggest to create new test in that class&lt;br /&gt;
** if normal class (or feature) is dropped on button, wizard will suggest to create test for the class (or feature)&lt;br /&gt;
* Menu for generating new test (defaults to last chosen one?)&lt;br /&gt;
** if normal class/feature is dropped on button, generate tests for that class/feature&lt;br /&gt;
&lt;br /&gt;
* Menu for executing tests in background (defaults to last chosen one?)&lt;br /&gt;
** if any class/feature is dropped on button, run tests associated with class/feature&lt;br /&gt;
* Run test in debugger (must have a test selected or dropped on button to start)&lt;br /&gt;
* Stop any execution (background or debugger)&lt;br /&gt;
&lt;br /&gt;
* Opens settings dialog for testing&lt;br /&gt;
&lt;br /&gt;
* Status indicating how many tests have been run so far and&lt;br /&gt;
* how many of them are failing&lt;br /&gt;
&lt;br /&gt;
'''View''' defines in which way the test cases are listed (see below).&lt;br /&gt;
&lt;br /&gt;
'''Filter''' can be used to type keywords so that only test cases whose tags include those keywords are shown (see below). It is a drop-down, so predefined filter patterns can be used as well (such as ''outcome.fail'').&lt;br /&gt;
&lt;br /&gt;
The '''grid''' contains a tree view of all test cases (test cases are always leaves). Multiple columns provide further information. Currently there are two indications of whether a test fails (a column and icons); only one is needed - both are shown just to compare them. The advantage of icons is that they need less space. Coloring the background of a row containing a failing test case would be an option as well.&lt;br /&gt;
&lt;br /&gt;
==== Tags ====&lt;br /&gt;
&lt;br /&gt;
Each test can have a number of tags. A tag can be a single string or hierarchically structured with dots ('.'). For example, a test with the tag ''covers.STRING.append'' is a regression test for {STRING}.append. There are a number of implicit tags for each test, such as the ''name'' tag ({TEST_STRING}.test_append has the implicit tag ''name.TEST_STRING.test_append'').&lt;br /&gt;
&lt;br /&gt;
{{Note|Tags are defined as strings, but in the view we sometimes want a tag to represent a class or a feature. The way this is done right now is that the view knows that if a tag starts with &amp;quot;covers.&amp;quot;, it is followed by a class and feature name. Another approach would be to define such tags like this: &amp;quot;covers.{CLASS_NAME}.feature_name&amp;quot;. This would allow user-defined tags to have clickable nodes in the view. We could also introduce other special tags, such as dates/times.}}&lt;br /&gt;
&lt;br /&gt;
==== Different views ====&lt;br /&gt;
&lt;br /&gt;
Based on the notion of tags, we are able to define different views. The default view ''Test sets'' simply shows a hierarchical tree for every ''name.X'' tag. This enables us to define more views, such as ''Class tested'', which displays every ''covers.X'' tag. Note that with tags other than ''name.'', some tests might be listed multiple times, while tests not carrying such a tag must be listed explicitly. The main advantage is that users can define their own views based on any kind of tag.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Note|The tool should support multiple selection. This is important for executing a number of selected test routines, showing past execution results, etc. Also, when selecting e.g. a class node, the tool should execute all leaves below that node.}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Running tests ====&lt;br /&gt;
[[Image:testing_run-menu.jpg]]&lt;br /&gt;
&lt;br /&gt;
The '''run''' menu provides different options for running tests in the background:&lt;br /&gt;
&lt;br /&gt;
* Run all tests in system&lt;br /&gt;
* Run currently failing ones&lt;br /&gt;
* Run tests for recently modified classes (better description needed here)&lt;br /&gt;
* Only run tests shown below&lt;br /&gt;
* Only run tests which are selected below&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Note|We should have two different views for displaying testing history: one structured by test sessions (a list of test executions containing all test routines for each session) and one listing recent executions of a single test routine.}}&lt;br /&gt;
&lt;br /&gt;
=== Generate tests automatically ===&lt;br /&gt;
&lt;br /&gt;
[[Image:testing_generate-menu.jpg]]&lt;br /&gt;
&lt;br /&gt;
The '''generate''' menu lets you generate new tests for all classes in the system (randomly picked?) or for the classes that were last modified.&lt;br /&gt;
&lt;br /&gt;
=== Extract tests from a running application ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is a simple example of an extracted test case (note that '''EXTRACTED_TEST_SET''' inherits from '''TEST_SET''' and implements all functionality for executing an extracted test).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
class&lt;br /&gt;
    TEST_STRING_001&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    EXTRACTED_TEST_SET&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Initialization&lt;br /&gt;
&lt;br /&gt;
    set_up_routine is&lt;br /&gt;
            -- &amp;lt;Precursor&amp;gt;&lt;br /&gt;
        do&lt;br /&gt;
            routine_under_test := agent {STRING}.append_integer&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_append_integer is&lt;br /&gt;
            -- Call `routine_under_test' with input provided by `context'.&lt;br /&gt;
        indexing&lt;br /&gt;
            tag: &amp;quot;covers.STRING.append_integer&amp;quot;&lt;br /&gt;
        do&lt;br /&gt;
            call_routine_under_test&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Access&lt;br /&gt;
&lt;br /&gt;
    context: ARRAY [TUPLE [id: STRING; type: STRING; inv: BOOLEAN; attributes: ARRAY [STRING]]] is&lt;br /&gt;
            -- &amp;lt;Precursor&amp;gt;&lt;br /&gt;
        once&lt;br /&gt;
            Result := &amp;lt;&amp;lt;&lt;br /&gt;
                [&amp;quot;#operand&amp;quot;, &amp;quot;TUPLE [STRING, INTEGER]&amp;quot;, True, &amp;lt;&amp;lt; &amp;quot;#2&amp;quot;, &amp;quot;110&amp;quot; &amp;gt;&amp;gt;],&lt;br /&gt;
                [&amp;quot;#2&amp;quot;, &amp;quot;STRING&amp;quot;, True, &amp;lt;&amp;lt; &amp;quot;this is an integer: &amp;quot; &amp;gt;&amp;gt;]&lt;br /&gt;
            &amp;gt;&amp;gt;&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end -- class TEST_STRING_001&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''EXTRACTED_TEST_SET''' implements '''set_up''' (frozen), but has a deferred feature '''set_up_routine''' which assigns the proper agent to '''routine_under_test'''. This basically replaces the missing reflection functionality for calling features. '''context''' is also deferred in '''EXTRACTED_TEST_SET''' and contains all data from the heap and call stack that was reachable by the routine at extraction time. Each TUPLE represents an object, where `inv' defines whether the object must fulfill its invariant (for an object that was on the stack at extraction time, this does not have to be the case).&lt;br /&gt;
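&lt;br /&gt;
As a rough sketch, the mechanism described above could be structured as follows. This is not the actual library class; the helpers '''create_objects_from_context''' and '''operands_from_context''' are hypothetical, shown only to illustrate how the frozen '''set_up''' ties the deferred features together:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
deferred class EXTRACTED_TEST_SET&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    TEST_SET&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Initialization&lt;br /&gt;
&lt;br /&gt;
    frozen set_up is&lt;br /&gt;
            -- Recreate extracted objects, then select routine under test.&lt;br /&gt;
        do&lt;br /&gt;
            create_objects_from_context&lt;br /&gt;
            set_up_routine&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
    set_up_routine is&lt;br /&gt;
            -- Assign agent for the extracted routine to `routine_under_test'.&lt;br /&gt;
        deferred&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Access&lt;br /&gt;
&lt;br /&gt;
    routine_under_test: ROUTINE [ANY, TUPLE]&lt;br /&gt;
            -- Routine called by the extracted test&lt;br /&gt;
&lt;br /&gt;
    context: ARRAY [TUPLE [id: STRING; type: STRING; inv: BOOLEAN; attributes: ARRAY [STRING]]] is&lt;br /&gt;
            -- Objects reachable by the routine at extraction time&lt;br /&gt;
        deferred&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
    create_objects_from_context is&lt;br /&gt;
            -- Hypothetical helper: instantiate an object for each TUPLE&lt;br /&gt;
            -- in `context' and check its invariant whenever `inv' is True.&lt;br /&gt;
        do&lt;br /&gt;
            -- Would rely on internal reflection facilities.&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
    operands_from_context: TUPLE is&lt;br /&gt;
            -- Hypothetical helper: operand tuple rebuilt from `context'&lt;br /&gt;
        do&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Basic operations&lt;br /&gt;
&lt;br /&gt;
    call_routine_under_test is&lt;br /&gt;
            -- Call `routine_under_test' with operands from `context'.&lt;br /&gt;
        do&lt;br /&gt;
            routine_under_test.call (operands_from_context)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;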
&lt;br /&gt;
&lt;br /&gt;
This design lets us keep the entire test in one file, which is especially practical when a user submits such a test as a bug report after experiencing a crash.&lt;br /&gt;
The current drawback is that the design only allows one test per class, mainly because of the set_up procedure: all objects in '''context''' must be created during set_up. If that creation fails, set_up is blamed instead of the actual test routine, which makes the test invalid rather than failing. This can happen, e.g., if one of the objects in the context does not fulfill its invariant, which in turn could result from simply editing the class being tested. Any suggestions welcome!&lt;br /&gt;
&lt;br /&gt;
=== Background test execution ===&lt;br /&gt;
== Open questions ==&lt;br /&gt;
(This section should disappear as the questions get answered.)&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
* [[Testing Tool (Architecture)]]&lt;br /&gt;
* [[Eweasel]]&lt;br /&gt;
* [[CddBranch]]&lt;br /&gt;
* [[Eiffel Testing Tool]]&lt;/div&gt;</summary>
		<author><name>Arnofiva</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=Testing_Tool_(Specification)&amp;diff=11300</id>
		<title>Testing Tool (Specification)</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=Testing_Tool_(Specification)&amp;diff=11300"/>
				<updated>2008-07-21T16:04:24Z</updated>
		
		<summary type="html">&lt;p&gt;Arnofiva: /* Location of Tests (config file) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Testing]]&lt;br /&gt;
&lt;br /&gt;
{{UnderConstruction}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Main functionalities ==&lt;br /&gt;
&lt;br /&gt;
=== Add unit/system level tests ===&lt;br /&gt;
&lt;br /&gt;
Semantically there is no difference between unit tests and system level tests. This way all tests can be written in Eiffel in a conforming way.&lt;br /&gt;
&lt;br /&gt;
A test is a routine whose name starts with the prefix '''test''' in a class inheriting from '''TEST_SET'''. In general, features in classes used specifically for testing should be exported at most to {TESTING_CLASS}. This prevents testing code from remaining in a finalized system. If you write a helper class for your test routines, let it inherit from '''TESTING_CLASS''' (Note: '''TEST_SET''' already inherits from '''TESTING_CLASS'''). Additionally, you should make leaf test sets frozen and make sure you never directly reference testing classes in your project code.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== System level test specifics ====&lt;br /&gt;
&lt;br /&gt;
Since system level testing often relies on external items such as files, '''SYSTEM_LEVEL_TEST_SET''' provides a number of helper routines for accessing them. These classes will probably have to reside in a special testing library, since they also make use of other libraries such as the process library.&lt;br /&gt;
&lt;br /&gt;
==== Location of Tests (config file) ====&lt;br /&gt;
&lt;br /&gt;
Tests can be located in any cluster of the system. In addition, one can define test-specific clusters in the .ecf file. These clusters do not need to exist for the project to compile, which makes it possible to keep tests for a library while still being able to deliver the library without them.&lt;br /&gt;
For all classes in the test clusters, the inheritance structure is evaluated so that test classes inheriting from TEST_SET can be found. A class belonging to a normal cluster has to be reachable from the root class to be compiled and detected as a test class. The recommended practice is therefore to put test classes into a test cluster, but it is not a rule.&lt;br /&gt;
&lt;br /&gt;
{{Note| Not all classes in a test cluster have to contain tests. Helper classes are one example.}}&lt;br /&gt;
&lt;br /&gt;
Test clusters are also needed to provide a location for test generation/extraction.&lt;br /&gt;
&lt;br /&gt;
==== Additional information ====&lt;br /&gt;
&lt;br /&gt;
The indexing clause can be used to specify which classes and routines are tested by the test routine. Any specifications in the class indexing clause will apply to all tests in that class. Note '''testing_covers''' in the following examples.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Examples ====&lt;br /&gt;
&lt;br /&gt;
Example unit tests '''test_append''' and '''test_boolean'''&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
frozen class TEST_STRING&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    TEST_SET&lt;br /&gt;
        redefine&lt;br /&gt;
            set_up&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Initialization&lt;br /&gt;
&lt;br /&gt;
    set_up&lt;br /&gt;
        do&lt;br /&gt;
            create s.make (10)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Access&lt;br /&gt;
&lt;br /&gt;
    s: STRING&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_append&lt;br /&gt;
        indexing&lt;br /&gt;
            testing: &amp;quot;covers.STRING.append, platform.os.winxp&amp;quot;&lt;br /&gt;
        require&lt;br /&gt;
            set_up: s /= Void and then s.is_empty&lt;br /&gt;
        do&lt;br /&gt;
            s.append (&amp;quot;12345&amp;quot;)&lt;br /&gt;
            assert_string_equality (&amp;quot;append&amp;quot;, s, &amp;quot;12345&amp;quot;)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
    test_boolean&lt;br /&gt;
        indexing&lt;br /&gt;
            testing: &amp;quot;covers.STRING.is_boolean, covers.STRING.to_boolean&amp;quot;&lt;br /&gt;
        require&lt;br /&gt;
            set_up: s /= Void and then s.is_empty&lt;br /&gt;
        do&lt;br /&gt;
            s.append (&amp;quot;True&amp;quot;)&lt;br /&gt;
            assert_true (&amp;quot;boolean&amp;quot;, s.is_boolean and then s.to_boolean)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example system level test '''test_version''' (Note: '''SYSTEM_LEVEL_TEST_SET''' inherits from '''TEST_SET''' and provides basic functionality for executing external commands, including the system currently under development):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
indexing&lt;br /&gt;
    testing_covers: &amp;quot;all&amp;quot;&lt;br /&gt;
&lt;br /&gt;
frozen class TEST_MY_APP&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    SYSTEM_LEVEL_TEST_SET&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_version&lt;br /&gt;
        indexing&lt;br /&gt;
            testing: &amp;quot;platform.os.linux.i386&amp;quot;&lt;br /&gt;
        do&lt;br /&gt;
            run_system_with_args (&amp;quot;--version&amp;quot;)&lt;br /&gt;
            assert_string_equality (&amp;quot;version&amp;quot;, last_output, &amp;quot;my_app version 0.1&amp;quot;)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Manage and run test suite ===&lt;br /&gt;
&lt;br /&gt;
[[Image:testing_set-view.png|right|400px|thumb|Standard view listing existing test sets and the tests they contain]]&lt;br /&gt;
[[Image:testing_cut-view.png|right|400px|thumb|Predefined ''Class tested'' view listing classes/features of the system together with the associated tests (Note: since test_boolean is tagged to cover multiple features, it also appears multiple times in the view)]]&lt;br /&gt;
[[Image:testing_user-view.png|right|400px|thumb|User defined view (by simply typing part of the tag), where the tool creates a view based on how the tests are tagged (see Examples above)]]&lt;br /&gt;
&lt;br /&gt;
The tool should have its own icon for displaying test cases (test routines). In this example it is a Lego block. Especially for views like ''list all tests for this routine'', it is important to see the difference between the actual routine and its tests. The tool also has a mostly vertical layout. Since the number of tests is comparable to the number of classes in the system, it makes sense for the tools to share the same layout. It also leaves room for tabs at the bottom for displaying further information, such as execution details (output, call stack, etc.).&lt;br /&gt;
&lt;br /&gt;
The '''menu bar''' includes the following buttons:&lt;br /&gt;
* Create new manual test case (opens wizard)&lt;br /&gt;
** if test class is dropped on button, the wizard will suggest to create new test in that class&lt;br /&gt;
** if normal class (or feature) is dropped on button, wizard will suggest to create test for the class (or feature)&lt;br /&gt;
* Menu for generating new test (defaults to last chosen one?)&lt;br /&gt;
** if normal class/feature is dropped on button, generate tests for that class/feature&lt;br /&gt;
&lt;br /&gt;
* Menu for executing tests in background (defaults to last chosen one?)&lt;br /&gt;
** if any class/feature is dropped on button, run tests associated with class/feature&lt;br /&gt;
* Run test in debugger (must have a test selected or dropped on button to start)&lt;br /&gt;
* Stop any execution (background or debugger)&lt;br /&gt;
&lt;br /&gt;
* Opens settings dialog for testing&lt;br /&gt;
&lt;br /&gt;
* Status indicating how many tests have been run so far and&lt;br /&gt;
* how many of them are failing&lt;br /&gt;
&lt;br /&gt;
'''View''' defines in which way the test cases are listed (see below).&lt;br /&gt;
&lt;br /&gt;
'''Filter''' can be used to type keywords so that only test cases whose tags include those keywords are shown (see below). It is a drop-down, so predefined filter patterns can be used as well (such as ''outcome.fail'').&lt;br /&gt;
&lt;br /&gt;
The '''grid''' contains a tree view of all test cases (test cases are always leaves). Multiple columns provide further information. Currently there are two indications of whether a test fails (a column and icons); only one is needed - both are shown just to compare them. The advantage of icons is that they need less space. Coloring the background of a row containing a failing test case would be an option as well.&lt;br /&gt;
&lt;br /&gt;
==== Tags ====&lt;br /&gt;
&lt;br /&gt;
Each test can have a number of tags. A tag can be a single string or hierarchically structured with dots ('.'). For example, a test with the tag ''covers.STRING.append'' is a regression test for {STRING}.append. There are a number of implicit tags for each test, such as the ''name'' tag ({TEST_STRING}.test_append has the implicit tag ''name.TEST_STRING.test_append'').&lt;br /&gt;
&lt;br /&gt;
{{Note|Tags are defined as strings, but in the view we sometimes want a tag to represent a class or a feature. The way this is done right now is that the view knows that if a tag starts with &amp;quot;covers.&amp;quot;, it is followed by a class and feature name. Another approach would be to define such tags like this: &amp;quot;covers.{CLASS_NAME}.feature_name&amp;quot;. This would allow user-defined tags to have clickable nodes in the view. We could also introduce other special tags, such as dates/times.}}&lt;br /&gt;
&lt;br /&gt;
==== Different views ====&lt;br /&gt;
&lt;br /&gt;
Based on the notion of tags, we are able to define different views. The default view ''Test sets'' simply shows a hierarchical tree for every ''name.X'' tag. This enables us to define more views, such as ''Class tested'', which displays every ''covers.X'' tag. Note that with tags other than ''name.'', some tests might be listed multiple times, while tests not carrying such a tag must be listed explicitly. The main advantage is that users can define their own views based on any kind of tag.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Note|The tool should support multiple selection. This is important for executing a number of selected test routines, showing past execution results, etc. Also, when selecting e.g. a class node, the tool should execute all leaves below that node.}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Running tests ====&lt;br /&gt;
[[Image:testing_run-menu.jpg]]&lt;br /&gt;
&lt;br /&gt;
The '''run''' menu provides different options for running tests in the background:&lt;br /&gt;
&lt;br /&gt;
* Run all tests in system&lt;br /&gt;
* Run currently failing ones&lt;br /&gt;
* Run tests for recently modified classes (better description needed here)&lt;br /&gt;
* Only run tests shown below&lt;br /&gt;
* Only run tests which are selected below&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Note|We should have two different views for displaying testing history: one structured by test sessions (a list of test executions containing all test routines for each session) and one listing recent executions of a single test routine.}}&lt;br /&gt;
&lt;br /&gt;
=== Generate tests automatically ===&lt;br /&gt;
&lt;br /&gt;
[[Image:testing_generate-menu.jpg]]&lt;br /&gt;
&lt;br /&gt;
The '''generate''' menu lets you generate new tests for all classes in the system (randomly picked?) or for the classes that were last modified.&lt;br /&gt;
&lt;br /&gt;
=== Extract tests from a running application ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is a simple example of an extracted test case (note that '''EXTRACTED_TEST_SET''' inherits from '''TEST_SET''' and implements all functionality for executing an extracted test).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
class&lt;br /&gt;
    TEST_STRING_001&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    EXTRACTED_TEST_SET&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Initialization&lt;br /&gt;
&lt;br /&gt;
    set_up_routine is&lt;br /&gt;
            -- &amp;lt;Precursor&amp;gt;&lt;br /&gt;
        do&lt;br /&gt;
            routine_under_test := agent {STRING}.append_integer&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_append_integer is&lt;br /&gt;
            -- Call `routine_under_test' with input provided by `context'.&lt;br /&gt;
        indexing&lt;br /&gt;
            tag: &amp;quot;covers.STRING.append_integer&amp;quot;&lt;br /&gt;
        do&lt;br /&gt;
            call_routine_under_test&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Access&lt;br /&gt;
&lt;br /&gt;
    context: ARRAY [TUPLE [id: STRING; type: STRING; inv: BOOLEAN; attributes: ARRAY [STRING]]] is&lt;br /&gt;
            -- &amp;lt;Precursor&amp;gt;&lt;br /&gt;
        once&lt;br /&gt;
            Result := &amp;lt;&amp;lt;&lt;br /&gt;
                [&amp;quot;#operand&amp;quot;, &amp;quot;TUPLE [STRING, INTEGER]&amp;quot;, True, &amp;lt;&amp;lt; &amp;quot;#2&amp;quot;, &amp;quot;110&amp;quot; &amp;gt;&amp;gt;],&lt;br /&gt;
                [&amp;quot;#2&amp;quot;, &amp;quot;STRING&amp;quot;, True, &amp;lt;&amp;lt; &amp;quot;this is an integer: &amp;quot; &amp;gt;&amp;gt;]&lt;br /&gt;
            &amp;gt;&amp;gt;&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end -- class TEST_STRING_001&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''EXTRACTED_TEST_SET''' implements '''set_up''' (frozen), but has a deferred feature '''set_up_routine''' which assigns the proper agent to '''routine_under_test'''. This basically replaces the missing reflection functionality for calling features. '''context''' is also deferred in '''EXTRACTED_TEST_SET''' and contains all data from the heap and call stack that was reachable by the routine at extraction time. Each TUPLE represents an object, where `inv' defines whether the object must fulfill its invariant (for an object that was on the stack at extraction time, this does not have to be the case).&lt;br /&gt;
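&lt;br /&gt;
As a rough sketch, the mechanism described above could be structured as follows. This is not the actual library class; the helpers '''create_objects_from_context''' and '''operands_from_context''' are hypothetical, shown only to illustrate how the frozen '''set_up''' ties the deferred features together:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
deferred class EXTRACTED_TEST_SET&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    TEST_SET&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Initialization&lt;br /&gt;
&lt;br /&gt;
    frozen set_up is&lt;br /&gt;
            -- Recreate extracted objects, then select routine under test.&lt;br /&gt;
        do&lt;br /&gt;
            create_objects_from_context&lt;br /&gt;
            set_up_routine&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
    set_up_routine is&lt;br /&gt;
            -- Assign agent for the extracted routine to `routine_under_test'.&lt;br /&gt;
        deferred&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Access&lt;br /&gt;
&lt;br /&gt;
    routine_under_test: ROUTINE [ANY, TUPLE]&lt;br /&gt;
            -- Routine called by the extracted test&lt;br /&gt;
&lt;br /&gt;
    context: ARRAY [TUPLE [id: STRING; type: STRING; inv: BOOLEAN; attributes: ARRAY [STRING]]] is&lt;br /&gt;
            -- Objects reachable by the routine at extraction time&lt;br /&gt;
        deferred&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
    create_objects_from_context is&lt;br /&gt;
            -- Hypothetical helper: instantiate an object for each TUPLE&lt;br /&gt;
            -- in `context' and check its invariant whenever `inv' is True.&lt;br /&gt;
        do&lt;br /&gt;
            -- Would rely on internal reflection facilities.&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
    operands_from_context: TUPLE is&lt;br /&gt;
            -- Hypothetical helper: operand tuple rebuilt from `context'&lt;br /&gt;
        do&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Basic operations&lt;br /&gt;
&lt;br /&gt;
    call_routine_under_test is&lt;br /&gt;
            -- Call `routine_under_test' with operands from `context'.&lt;br /&gt;
        do&lt;br /&gt;
            routine_under_test.call (operands_from_context)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;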
&lt;br /&gt;
&lt;br /&gt;
This design lets us keep the entire test in one file, which is especially practical when a user submits such a test as a bug report after experiencing a crash.&lt;br /&gt;
The current drawback is that the design only allows one test per class, mainly because of the set_up procedure: all objects in '''context''' must be created during set_up. If that creation fails, set_up is blamed instead of the actual test routine, which makes the test invalid rather than failing. This can happen, e.g., if one of the objects in the context does not fulfill its invariant, which in turn could result from simply editing the class being tested. Any suggestions welcome!&lt;br /&gt;
&lt;br /&gt;
=== Background test execution ===&lt;br /&gt;
== Open questions ==&lt;br /&gt;
(This section should disappear as the questions get answered.)&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
* [[Testing Tool (Architecture)]]&lt;br /&gt;
* [[Eweasel]]&lt;br /&gt;
* [[CddBranch]]&lt;br /&gt;
* [[Eiffel Testing Tool]]&lt;/div&gt;</summary>
		<author><name>Arnofiva</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=Testing_Tool_(Specification)&amp;diff=11299</id>
		<title>Testing Tool (Specification)</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=Testing_Tool_(Specification)&amp;diff=11299"/>
				<updated>2008-07-21T16:04:08Z</updated>
		
		<summary type="html">&lt;p&gt;Arnofiva: Added conclusion from last meeting regarding test cluster&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Testing]]&lt;br /&gt;
&lt;br /&gt;
{{UnderConstruction}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Main functionalities ==&lt;br /&gt;
&lt;br /&gt;
=== Add unit/system level tests ===&lt;br /&gt;
&lt;br /&gt;
Semantically there is no difference between unit tests and system level tests. This way all tests can be written in Eiffel in a conforming way.&lt;br /&gt;
&lt;br /&gt;
A test is a routine whose name starts with the prefix '''test''' in a class inheriting from '''TEST_SET'''. In general, features in classes used specifically for testing should be exported at most to {TESTING_CLASS}. This prevents testing code from remaining in a finalized system. If you write a helper class for your test routines, let it inherit from '''TESTING_CLASS''' (Note: '''TEST_SET''' already inherits from '''TESTING_CLASS'''). Additionally, you should make leaf test sets frozen and make sure you never directly reference testing classes in your project code.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== System level test specifics ====&lt;br /&gt;
&lt;br /&gt;
Since system level testing often relies on external items such as files, '''SYSTEM_LEVEL_TEST_SET''' provides a number of helper routines for accessing them. These classes will probably have to reside in a special testing library, since they also make use of other libraries such as the process library.&lt;br /&gt;
&lt;br /&gt;
==== Location of Tests (Config file) ====&lt;br /&gt;
&lt;br /&gt;
Tests can be located in any cluster of the system. In addition, one can define test-specific clusters in the .ecf file. These clusters do not need to exist for the project to compile. This allows one to keep tests for a library while still being able to deliver the library without them.&lt;br /&gt;
For all classes in the test clusters, the inheritance structure is evaluated so that test classes inheriting from TEST_SET can be found. A class belonging to a normal cluster has to be reachable from the root class to be compiled and detected as a test class. The recommended practice is therefore to put test classes into a test cluster, but it is not a rule.&lt;br /&gt;
&lt;br /&gt;
{{Note| Not all classes in a test cluster have to contain tests; helper classes are one example.}}&lt;br /&gt;
&lt;br /&gt;
Test clusters are also needed to provide a location for test generation/extraction.&lt;br /&gt;
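&lt;br /&gt;
As an illustration only (the attribute values here are invented; how a cluster is marked as a test cluster is not specified here), such a cluster could be declared in the .ecf file like any other cluster:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;cluster name=&amp;quot;tests&amp;quot; location=&amp;quot;.\tests\&amp;quot; recursive=&amp;quot;true&amp;quot;/&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;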
&lt;br /&gt;
==== Additional information ====&lt;br /&gt;
&lt;br /&gt;
The indexing clause can be used to specify which classes and routines are tested by a test routine. Any specification in the class-level indexing clause applies to all tests in that class. Note the '''testing''' and '''testing_covers''' entries in the following examples.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Examples ====&lt;br /&gt;
&lt;br /&gt;
Example unit tests '''test_append''' and '''test_boolean''':&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
frozen class TEST_STRING&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    TEST_SET&lt;br /&gt;
        redefine&lt;br /&gt;
            set_up&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Initialization&lt;br /&gt;
&lt;br /&gt;
    set_up&lt;br /&gt;
        do&lt;br /&gt;
            create s.make (10)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Access&lt;br /&gt;
&lt;br /&gt;
    s: STRING&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_append&lt;br /&gt;
        indexing&lt;br /&gt;
            testing: &amp;quot;covers.STRING.append, platform.os.winxp&amp;quot;&lt;br /&gt;
        require&lt;br /&gt;
            set_up: s /= Void and then s.is_empty&lt;br /&gt;
        do&lt;br /&gt;
            s.append (&amp;quot;12345&amp;quot;)&lt;br /&gt;
            assert_string_equality (&amp;quot;append&amp;quot;, s, &amp;quot;12345&amp;quot;)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
    test_boolean&lt;br /&gt;
        indexing&lt;br /&gt;
            testing: &amp;quot;covers.STRING.is_boolean, covers.STRING.to_boolean&amp;quot;&lt;br /&gt;
        require&lt;br /&gt;
            set_up: s /= Void and then s.is_empty&lt;br /&gt;
        do&lt;br /&gt;
            s.append (&amp;quot;True&amp;quot;)&lt;br /&gt;
            assert_true (&amp;quot;boolean&amp;quot;, s.is_boolean and then s.to_boolean)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example system-level test '''test_version''' (Note: '''SYSTEM_LEVEL_TEST_SET''' inherits from '''TEST_SET''' and provides basic functionality for executing external commands, including the system currently under development):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
indexing&lt;br /&gt;
    testing_covers: &amp;quot;all&amp;quot;&lt;br /&gt;
&lt;br /&gt;
frozen class TEST_MY_APP&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    SYSTEM_LEVEL_TEST_SET&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_version&lt;br /&gt;
        indexing&lt;br /&gt;
            testing: &amp;quot;platform.os.linux.i386&amp;quot;&lt;br /&gt;
        do&lt;br /&gt;
            run_system_with_args (&amp;quot;--version&amp;quot;)&lt;br /&gt;
            assert_string_equality (&amp;quot;version&amp;quot;, last_output, &amp;quot;my_app version 0.1&amp;quot;)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Manage and run test suite ===&lt;br /&gt;
&lt;br /&gt;
[[Image:testing_set-view.png|right|400px|thumb|Standard view listing existing test sets and the tests they contain]]&lt;br /&gt;
[[Image:testing_cut-view.png|right|400px|thumb|Predefined ''Class tested'' view listing classes/features of the system together with the associated tests (Note: since test_boolean is tagged to cover multiple features, it also appears multiple times in the view)]]&lt;br /&gt;
[[Image:testing_user-view.png|right|400px|thumb|User defined view (by simply typing part of the tag), where the tool creates a view based on how the tests are tagged (see Examples above)]]&lt;br /&gt;
&lt;br /&gt;
The tool should have its own icon for displaying test cases (test routines); in this example it is a Lego block. Especially for views like ''list all tests for this routine'', it is important to see the difference between the actual routine and its tests. The tool also has more of a vertical layout: since the number of tests is comparable to the number of classes in the system, it makes sense for the tools to share the same layout. A vertical layout also leaves room for tabs at the bottom displaying further information, such as execution details (output, call stack, etc.).&lt;br /&gt;
&lt;br /&gt;
The '''menu bar''' includes the following buttons:&lt;br /&gt;
* Create new manual test case (opens wizard)&lt;br /&gt;
** if a test class is dropped on the button, the wizard will suggest creating a new test in that class&lt;br /&gt;
** if a normal class (or feature) is dropped on the button, the wizard will suggest creating a test for that class (or feature)&lt;br /&gt;
* Menu for generating new tests (defaults to the last chosen one?)&lt;br /&gt;
** if a normal class/feature is dropped on the button, generate tests for that class/feature&lt;br /&gt;
&lt;br /&gt;
* Menu for executing tests in the background (defaults to the last chosen one?)&lt;br /&gt;
** if any class/feature is dropped on the button, run the tests associated with that class/feature&lt;br /&gt;
* Run test in debugger (a test must be selected or dropped on the button to start)&lt;br /&gt;
* Stop any execution (background or debugger)&lt;br /&gt;
&lt;br /&gt;
* Opens settings dialog for testing&lt;br /&gt;
&lt;br /&gt;
* Status indicating how many tests have been run so far and&lt;br /&gt;
* how many of them are failing&lt;br /&gt;
&lt;br /&gt;
'''View''' defines in which way the test cases are listed (see below).&lt;br /&gt;
&lt;br /&gt;
'''Filter''' can be used to type keywords for showing only test cases whose tags include those keywords (see below). It is a drop-down, so predefined filter patterns can be used (such as ''outcome.fail'').&lt;br /&gt;
&lt;br /&gt;
The '''grid''' contains a tree view of all test cases (test cases are always leaves). Multiple columns provide additional information. Currently there are two indicators of whether a test fails (a column and icons). Obviously only one is needed; both are shown here to compare them. The advantage of using icons is that less space is needed. Coloring the background of a row containing a failing test case would be an option as well.&lt;br /&gt;
&lt;br /&gt;
==== Tags ====&lt;br /&gt;
&lt;br /&gt;
Each test can have a number of tags. A tag can be a single string or hierarchically structured with dots ('.'). For example, a test with the tag ''covers.STRING.append'' is a regression test for {STRING}.append. There are a number of implicit tags for each test, such as the ''name'' tag ({TEST_STRING}.test_append has the implicit tag ''name.TEST_STRING.test_append'').&lt;br /&gt;
&lt;br /&gt;
{{Note|Tags are defined as strings, but in a view we sometimes want a tag to represent a class or a feature. The way this is done right now is that the view knows that a tag starting with &amp;quot;covers.&amp;quot; is followed by a class and feature name. Another approach would be to define such tags like this: &amp;quot;covers.{CLASS_NAME}.feature_name&amp;quot;. This would allow user-defined tags to have clickable nodes in the view. We could also introduce other special tags, such as dates/times.}}&lt;br /&gt;
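&lt;br /&gt;
As a hypothetical sketch (the ''milestone.'' tag is invented for illustration; only ''covers.'' and ''name.'' tags are described above), a test routine could combine a coverage tag with a user-defined tag that a custom view can then group by:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    test_append_empty&lt;br /&gt;
        indexing&lt;br /&gt;
            -- &amp;quot;milestone.&amp;quot; is a made-up user tag, grouped like any other tag.&lt;br /&gt;
            testing: &amp;quot;covers.STRING.append, milestone.release_1&amp;quot;&lt;br /&gt;
        do&lt;br /&gt;
            s.append (&amp;quot;&amp;quot;)&lt;br /&gt;
            assert_true (&amp;quot;still_empty&amp;quot;, s.is_empty)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;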
&lt;br /&gt;
==== Different views ====&lt;br /&gt;
&lt;br /&gt;
Based on the notion of tags, we are able to define different views. The default view ''Test sets'' simply shows a hierarchical tree for every ''name.X'' tag. This enables us to define more views, such as ''Class tested'', which displays every ''covers.X'' tag. Note that with tags other than ''name.'', some tests may be listed multiple times, while tests not carrying such a tag must be listed explicitly. The main advantage is that users can define their own views based on any type of tags.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Note|The tools should support multiple selection. This is important for executing a number of selected test routines, showing past execution results, etc. Also, when selecting e.g. a class node, the tool should execute all leaves below that node.}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Running tests ====&lt;br /&gt;
[[Image:testing_run-menu.jpg]]&lt;br /&gt;
&lt;br /&gt;
The '''run''' menu provides different options for running tests in the background:&lt;br /&gt;
&lt;br /&gt;
* Run all tests in system&lt;br /&gt;
* Run the currently failing tests&lt;br /&gt;
* Run tests for the classes last modified (better description needed here)&lt;br /&gt;
* Only run tests shown below&lt;br /&gt;
* Only run tests which are selected below&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Note|We should have two different views for displaying testing history. One structured by test sessions (list of test execution containing all test routines for each session) and one listing recent executions for a single test routine.}}&lt;br /&gt;
&lt;br /&gt;
=== Generate tests automatically ===&lt;br /&gt;
&lt;br /&gt;
[[Image:testing_generate-menu.jpg]]&lt;br /&gt;
&lt;br /&gt;
The '''generate''' menu lets you generate new tests for all classes in the system (randomly picked?) or for the classes which were last modified.&lt;br /&gt;
&lt;br /&gt;
=== Extract tests from a running application ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is a simple example of an extracted test case (note that '''EXTRACTED_TEST_SET''' inherits from '''TEST_SET''' and implements all functionality for executing an extracted test).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
class&lt;br /&gt;
    TEST_STRING_001&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    EXTRACTED_TEST_SET&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Initialization&lt;br /&gt;
&lt;br /&gt;
    set_up_routine is&lt;br /&gt;
            -- &amp;lt;Precursor&amp;gt;&lt;br /&gt;
        do&lt;br /&gt;
            routine_under_test := agent {STRING}.append_integer&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_append_integer is&lt;br /&gt;
            -- Call `routine_under_test' with input provided by `context'.&lt;br /&gt;
        indexing&lt;br /&gt;
            tag: &amp;quot;covers.STRING.append_integer&amp;quot;&lt;br /&gt;
        do&lt;br /&gt;
            call_routine_under_test&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Access&lt;br /&gt;
&lt;br /&gt;
    context: ARRAY [TUPLE [id: STRING; type: STRING; inv: BOOLEAN; attributes: ARRAY [STRING]]] is&lt;br /&gt;
            -- &amp;lt;Precursor&amp;gt;&lt;br /&gt;
        once&lt;br /&gt;
            Result := &amp;lt;&amp;lt;&lt;br /&gt;
                [&amp;quot;#operand&amp;quot;, &amp;quot;TUPLE [STRING, INTEGER]&amp;quot;, True, &amp;lt;&amp;lt; &amp;quot;#2&amp;quot;, &amp;quot;110&amp;quot; &amp;gt;&amp;gt;],&lt;br /&gt;
                [&amp;quot;#2&amp;quot;, &amp;quot;STRING&amp;quot;, True, &amp;lt;&amp;lt; &amp;quot;this is an integer: &amp;quot; &amp;gt;&amp;gt;]&lt;br /&gt;
            &amp;gt;&amp;gt;&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end -- class TEST_STRING_001&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''EXTRACTED_TEST_SET''' implements '''set_up''' (frozen), but has a deferred feature '''set_up_routine''' which assigns the proper agent to '''routine_under_test'''. This basically replaces the missing reflection functionality for calling features. '''context''' is also deferred in '''EXTRACTED_TEST_SET''' and contains all data from the heap and call stack that was reachable by the routine at extraction time. Each TUPLE represents an object, where `inv' defines whether the object should fulfill its invariant (if the object was on the stack at extraction time, this does not have to be the case).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This design lets us have the entire test in one file. This is practical especially in the situation where a user should submit such a test as a bug report after experiencing a crash.&lt;br /&gt;
The current drawback is that the design only allows one test per class, mainly because of the set_up procedure: creating all objects in '''context''' must be done during set_up, so if a failure occurs there, set_up will be blamed instead of the actual test routine, which makes the test not fail but become invalid. This can happen e.g. if one of the objects in the context does not fulfill its invariant, which in turn could result from simply editing the class being tested. Any suggestions welcome!&lt;br /&gt;
&lt;br /&gt;
=== Background test execution ===&lt;br /&gt;
== Open questions ==&lt;br /&gt;
(This section should disappear as the questions get answered.)&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
* [[Testing Tool (Architecture)]]&lt;br /&gt;
* [[Eweasel]]&lt;br /&gt;
* [[CddBranch]]&lt;br /&gt;
* [[Eiffel Testing Tool]]&lt;/div&gt;</summary>
		<author><name>Arnofiva</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=Testing_Tool_(Architecture)&amp;diff=11233</id>
		<title>Testing Tool (Architecture)</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=Testing_Tool_(Architecture)&amp;diff=11233"/>
				<updated>2008-06-21T00:15:22Z</updated>
		
		<summary type="html">&lt;p&gt;Arnofiva: bad URL&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Testing]]&lt;br /&gt;
&lt;br /&gt;
== Test system architecture ==&lt;br /&gt;
&lt;br /&gt;
* separate system for executing tests&lt;br /&gt;
* system is only responsible for running tests, without determining whether a test fails or not -&amp;gt; compiling and launching the system is done by the tool, which also acts as the oracle&lt;br /&gt;
* tests are compiled up to degree 3 in the development system, so the user can edit tests like normal classes without affecting the resulting binary&lt;br /&gt;
* tests referenced by the root class of the development system will be compiled anyway; this way we can also run tests in the debugger&lt;br /&gt;
* in CDD: the test system is an implicit target (does not have to be in the ecf) which inherits from the development target; the test target simply defines a new root class/feature&lt;br /&gt;
* re-use development EIFGEN (copy) so test system does not need to compile from scratch?&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
EiffelStudio  &amp;lt;-----------------------------&amp;gt;  Testing tool  &amp;lt;-----------------------------&amp;gt;  Test executor&lt;br /&gt;
&lt;br /&gt;
* Show tests in system                         * can be part of EiffelStudio or               * execute test in safe environment&lt;br /&gt;
* Show test result                               compiled as separate tool to be                (executor is allowed to crash)&lt;br /&gt;
* Provide test creation wizards                  used e.g. through console&lt;br /&gt;
* Interface for CDD, Auto tests,               * compile test executor&lt;br /&gt;
  creating manual tests, running               * distribute test executors to&lt;br /&gt;
* provide ESF service for                        different machines&lt;br /&gt;
  testing/test results                         * schedule test execution&lt;br /&gt;
                                               * provide test results&lt;br /&gt;
                                               * find all tests for a given ecf file&lt;br /&gt;
                                               * write root class for test executor&lt;br /&gt;
&lt;br /&gt;
CDD&lt;br /&gt;
* implemented partially in&lt;br /&gt;
  debugger/executable&lt;br /&gt;
* should be part of any Eiffel&lt;br /&gt;
  application, that way test can be&lt;br /&gt;
  created for bug submitting&lt;br /&gt;
* extraction can be initiated&lt;br /&gt;
  through debugger, breakpoints,&lt;br /&gt;
  failure window, etc.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Auto Test&lt;br /&gt;
* separate tool, interface in EiffelStudio&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Provide testing as a service in EiffelStudio ==&lt;br /&gt;
&lt;br /&gt;
{{Note|Interface classes can be found here: https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/}}&lt;br /&gt;
&lt;br /&gt;
Using ESS, we can provide all testing functionality as a service within EiffelStudio. That way, other tools can make use of these testing functionalities, and the tool won't have to access the implementation directly. What follows is a short description of the interfaces created so far.&lt;br /&gt;
&lt;br /&gt;
So far the service consists of three major parts: the test suite storing all tests, test execution, and test creation. The service already includes more than 20 interface classes, so it will be important to find a good abstraction. Another aspect is that some parts of the service should be extensible: clients should be able to define new types of tests, executors or factories.&lt;br /&gt;
&lt;br /&gt;
=== TEST_SUITE_S ===&lt;br /&gt;
&lt;br /&gt;
The test suite is the first instance of the service. It holds the list of all tests in the system and controls all execution of tests. Right now the service has the restriction that only one executor can run at a time. Although there might not be a reason against having two executors running in parallel, allowing only one makes observing the execution of tests much simpler. Factories, on the other hand, can be launched by anyone and so run in parallel; in that case clients are usually interested in knowing when a new test is created, for which events already exist in the test suite (see below).&lt;br /&gt;
&lt;br /&gt;
Changes in the test suite can be observed, so if tests are added, removed or modified clients can be notified. There are also events for activating or deactivating an executor in the test suite.&lt;br /&gt;
&lt;br /&gt;
The test suite also provides two registrars where new executors or factories can be registered. Later clients can query whether a certain executor/factory is available and use it if so. More on executors and factories later.&lt;br /&gt;
&lt;br /&gt;
{{Block|[https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/test_suite_s.e TEST_SUITE_S]}}&lt;br /&gt;
&lt;br /&gt;
=== EIFFEL_TEST_I ===&lt;br /&gt;
&lt;br /&gt;
EIFFEL_TEST_I will be the common test representation for now. It inherits from a more general interface TEST_I, which enables users to introduce new types of tests (not necessarily written in Eiffel). TEST_I inherits from a class TAGABLE_I, which means that all tests have a list of tags represented as strings ([[Testing_Tool_(Specification)#Tags|Tags section of specifications]]). This allows us to have commonly used functionality, like filtering, in the service itself (see FILTERED_COLLECTION_I). It also enables users to introduce their own attributes for tests.&lt;br /&gt;
&lt;br /&gt;
EIFFEL_TEST_I points to the abstract syntax representation of its routine and of the class in which the routine is located. This is useful to the implementation but could also be to clients. However, implementation-wise all relevant information should be accessible (such as the feature name and the tags in the indexing clause).&lt;br /&gt;
&lt;br /&gt;
All tests have a list of outcomes from previous execution sessions. More on that is explained in the next section.&lt;br /&gt;
&lt;br /&gt;
{{Block|[https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/item/test_i.e TEST_I]&lt;br /&gt;
[https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/item/eiffel_test_i.e EIFFEL_TEST_I]&lt;br /&gt;
[https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/support/tagable_i.e TAGABLE_I]&lt;br /&gt;
[https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/support/filtered_collection_i.e FILTERED_COLLECTION_I]}}&lt;br /&gt;
&lt;br /&gt;
=== TEST_EXECUTOR_I ===&lt;br /&gt;
&lt;br /&gt;
This is a general interface for executing tests. It takes a list of tests and executes each of them. One restriction it imposes on its implementers is that execution is non-blocking: `'''run'''' returns immediately and all tests are executed asynchronously. This again makes it simpler for clients to use (especially graphical UIs).&lt;br /&gt;
&lt;br /&gt;
All state changes of TEST_EXECUTOR_I can be observed by inheriting TEST_EXECUTOR_OBSERVER and connecting to the executor.&lt;br /&gt;
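&lt;br /&gt;
As a minimal sketch of this observer mechanism (the event name '''on_test_executed''' and the query '''name''' are hypothetical, for illustration only; the actual events are defined in TEST_EXECUTOR_OBSERVER), a client could look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
class RESULT_LOGGER&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    TEST_EXECUTOR_OBSERVER&lt;br /&gt;
        redefine&lt;br /&gt;
            on_test_executed -- hypothetical event name&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature -- Events&lt;br /&gt;
&lt;br /&gt;
    on_test_executed (a_test: TEST_I)&lt;br /&gt;
            -- Log that `a_test' has been executed.&lt;br /&gt;
        do&lt;br /&gt;
            io.put_string (a_test.name)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;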
&lt;br /&gt;
As mentioned above, TEST_I keeps a list of outcomes produced by TEST_EXECUTOR_I. In the case of EIFFEL_TEST_I, the list contains items of type EIFFEL_TEST_OUTCOME_I. Each outcome points to an EIFFEL_TEST_ROUTINE_INVOCATION_RESPONSE_I, which describes one stage of a test execution. The three stages are set-up, test and tear-down, where ''test'' just means calling the actual testing routine. Based on the responses of each stage, EIFFEL_TEST_OUTCOME_I determines whether a test passes or fails. In cases where this cannot be determined because the execution ran unexpectedly, the outcome is flagged unresolved. In that case the test needs to be inspected, which is expressed by `'''is_maintenance_required'''' in EIFFEL_TEST_OUTCOME_I.&lt;br /&gt;
&lt;br /&gt;
{{Block|[https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/execution/test_executor_i.e TEST_EXECUTOR_I]&lt;br /&gt;
[https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/execution/test_executor_observer.e TEST_EXECUTOR_OBSERVER]&lt;br /&gt;
[https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/execution/test_outcome_i.e TEST_OUTCOME_I]&lt;br /&gt;
[https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/execution/eiffel_test_outcome_i.e EIFFEL_TEST_OUTCOME_I]&lt;br /&gt;
[https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/execution/eiffel_test_routine_invocation_response_i.e EIFFEL_TEST_ROUTINE_INVOCATION_RESPONSE_I]}}&lt;br /&gt;
&lt;br /&gt;
=== TEST_FACTORY_I ===&lt;br /&gt;
&lt;br /&gt;
Factories are similar to executors in that they are registered in the test suite and, once triggered, run asynchronously. A test factory takes a TEST_CONFIGURATION_I, which describes the properties of a new test. There is a specialized version, EIFFEL_TEST_CONFIGURATION_I, for Eiffel tests (including class names, location, and the features and classes being tested by the new test).&lt;br /&gt;
So far the notification is kept simple: a callback function is provided to the '''run''' routine of the factory, since clients will be notified anyway when a new test is added to the system through the test suite.&lt;br /&gt;
&lt;br /&gt;
This pattern should also be valid for test generation and extraction (AutoTest/CDD), where the factory might create not just a single test but multiple ones.&lt;br /&gt;
&lt;br /&gt;
{{Block|[https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/factory/test_factory_i.e TEST_FACTORY_I]&lt;br /&gt;
[https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/factory/test_configuration_i.e TEST_CONFIGURATION_I]&lt;br /&gt;
[https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/factory/eiffel_test_configuration_i.e EIFFEL_TEST_CONFIGURATION_I]}}&lt;br /&gt;
&lt;br /&gt;
== Communication between tool and test executor ==&lt;br /&gt;
&lt;br /&gt;
=== Protocol ===&lt;br /&gt;
&lt;br /&gt;
'''From tool to executor'''&lt;br /&gt;
&lt;br /&gt;
* name(s) of test to execute&lt;br /&gt;
* quit&lt;br /&gt;
&lt;br /&gt;
'''From executor to tool'''&lt;br /&gt;
&lt;br /&gt;
* test result&lt;br /&gt;
* text output produced by test&lt;br /&gt;
* exception details (type, tag, feature, class? occurred during set up, test, tear down?)&lt;br /&gt;
* call stack for exception&lt;br /&gt;
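&lt;br /&gt;
A possible line-based exchange (all message names invented for illustration; the actual encoding is one of the open questions below):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
tool     -&amp;gt; executor:  RUN TEST_STRING.test_append&lt;br /&gt;
executor -&amp;gt; tool:      OUTPUT appending 12345&lt;br /&gt;
executor -&amp;gt; tool:      RESULT pass&lt;br /&gt;
tool     -&amp;gt; executor:  QUIT&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;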
&lt;br /&gt;
=== Open questions ===&lt;br /&gt;
&lt;br /&gt;
* executor per machine/processor?&lt;br /&gt;
* text based/object base communication?&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
* [[Testing Tool (Specification)]]&lt;/div&gt;</summary>
		<author><name>Arnofiva</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=Testing_Tool_(Architecture)&amp;diff=11232</id>
		<title>Testing Tool (Architecture)</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=Testing_Tool_(Architecture)&amp;diff=11232"/>
				<updated>2008-06-21T00:11:36Z</updated>
		
		<summary type="html">&lt;p&gt;Arnofiva: Added links to source in svn repository&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Testing]]&lt;br /&gt;
&lt;br /&gt;
== Test system architecture ==&lt;br /&gt;
&lt;br /&gt;
* separate system for executing tests&lt;br /&gt;
* system is only responsible for running tests, without determining whether a test fails or not -&amp;gt; compiling and launching the system is done by the tool, which also acts as the oracle&lt;br /&gt;
* tests are compiled up to degree 3 in the development system, so the user can edit tests like normal classes without affecting the resulting binary&lt;br /&gt;
* tests referenced by the root class of the development system will be compiled anyway; this way we can also run tests in the debugger&lt;br /&gt;
* in CDD: the test system is an implicit target (does not have to be in the ecf) which inherits from the development target; the test target simply defines a new root class/feature&lt;br /&gt;
* re-use development EIFGEN (copy) so test system does not need to compile from scratch?&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
EiffelStudio  &amp;lt;-----------------------------&amp;gt;  Testing tool  &amp;lt;-----------------------------&amp;gt;  Test executor&lt;br /&gt;
&lt;br /&gt;
* Show tests in system                         * can be part of EiffelStudio or               * execute test in safe environment&lt;br /&gt;
* Show test result                               compiled as separate tool to be                (executor is allowed to crash)&lt;br /&gt;
* Provide test creation wizards                  used e.g. through console&lt;br /&gt;
* Interface for CDD, Auto tests,               * compile test executor&lt;br /&gt;
  creating manual tests, running               * distribute test executors to&lt;br /&gt;
* provide ESF service for                        different machines&lt;br /&gt;
  testing/test results                         * schedule test execution&lt;br /&gt;
                                               * provide test results&lt;br /&gt;
                                               * find all tests for a given ecf file&lt;br /&gt;
                                               * write root class for test executor&lt;br /&gt;
&lt;br /&gt;
CDD&lt;br /&gt;
* implemented partially in&lt;br /&gt;
  debugger/executable&lt;br /&gt;
* should be part of any Eiffel&lt;br /&gt;
  application, that way test can be&lt;br /&gt;
  created for bug submitting&lt;br /&gt;
* extraction can be initiated&lt;br /&gt;
  through debugger, breakpoints,&lt;br /&gt;
  failure window, etc.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Auto Test&lt;br /&gt;
* separate tool, interface in EiffelStudio&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Provide testing as a service in EiffelStudio ==&lt;br /&gt;
&lt;br /&gt;
{{Note|Interface classes can be found here: https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/}}&lt;br /&gt;
&lt;br /&gt;
Using ESS, we can provide all testing functionality as a service within EiffelStudio. That way, other tools can make use of these testing functionalities, and the tool won't have to access the implementation directly. What follows is a short description of the interfaces created so far.&lt;br /&gt;
&lt;br /&gt;
So far the service consists of three major parts: the test suite storing all tests, test execution, and test creation. The service already includes more than 20 interface classes, so it will be important to find a good abstraction. Another aspect is that some parts of the service should be extensible: clients should be able to define new types of tests, executors or factories.&lt;br /&gt;
&lt;br /&gt;
=== TEST_SUITE_S ===&lt;br /&gt;
&lt;br /&gt;
The test suite is the first instance of the service. It holds the list of all tests in the system and controls all execution of tests. Right now the service has the restriction that only one executor can run at a time. Although there might not be a reason against having two executors running in parallel, allowing only one makes observing the execution of tests much simpler. Factories, on the other hand, can be launched by anyone and so run in parallel; in that case clients are usually interested in knowing when a new test is created, for which events already exist in the test suite (see below).&lt;br /&gt;
&lt;br /&gt;
Changes in the test suite can be observed, so if tests are added, removed or modified clients can be notified. There are also events for activating or deactivating an executor in the test suite.&lt;br /&gt;
&lt;br /&gt;
The test suite also provides two registrars where new executors or factories can be registered. Later clients can query whether a certain executor/factory is available and use it if so. More on executors and factories later.&lt;br /&gt;
&lt;br /&gt;
{{Block|[https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/test_suite_s.e TEST_SUITE_S]}}&lt;br /&gt;
&lt;br /&gt;
=== EIFFEL_TEST_I ===&lt;br /&gt;
&lt;br /&gt;
EIFFEL_TEST_I will be the common test representation for now. It inherits from a more general interface TEST_I, which enables users to introduce new types of tests (not necessarily written in Eiffel). TEST_I inherits from a class TAGABLE_I, which means that all tests have a list of tags represented as strings ([[Testing_Tool_(Specification)#Tags|Tags section of specifications]]). This allows us to have commonly used functionality, like filtering, in the service itself (see FILTERED_COLLECTION_I). It also enables users to introduce their own attributes for tests.&lt;br /&gt;
&lt;br /&gt;
EIFFEL_TEST_I points to the abstract syntax representation of its routine and of the class in which the routine is located. This is useful to the implementation but could also be to clients. However, implementation-wise all relevant information should be accessible (such as the feature name and the tags in the indexing clause).&lt;br /&gt;
&lt;br /&gt;
All tests have a list of outcomes from previous execution sessions. More on that is explained in the next section.&lt;br /&gt;
&lt;br /&gt;
{{Block|[https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/item/test_i.e TEST_I]&lt;br /&gt;
[https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/item/eiffel_test_i.e EIFFEL_TEST_I]&lt;br /&gt;
[https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/support/tagable_i.e TAGABLE_I]&lt;br /&gt;
[https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/support/filtered_collection_i.e FILTERED_COLLECTION_I]}}&lt;br /&gt;
&lt;br /&gt;
=== TEST_EXECUTOR_I ===&lt;br /&gt;
&lt;br /&gt;
This is a general interface for executing tests. It takes a list of tests and executes each of them. One restriction it imposes on its implementers is that execution is non-blocking: '''run''' returns immediately and all tests are executed asynchronously. This makes it simpler for clients (especially graphical UIs) to use.&lt;br /&gt;
&lt;br /&gt;
All state changes of TEST_EXECUTOR_I can be observed by inheriting TEST_EXECUTOR_OBSERVER and connecting to the executor.&lt;br /&gt;
&lt;br /&gt;
As mentioned above, TEST_I keeps a list of outcomes produced by TEST_EXECUTOR_I. In the case of EIFFEL_TEST_I the list contains items of type EIFFEL_TEST_OUTCOME_I. Each outcome points to an EIFFEL_TEST_ROUTINE_INVOCATION_RESPONSE_I, which describes one stage of a test execution. The three stages are setup, test and tear down, where ''test'' means calling the actual testing routine. Based on the responses of each stage, EIFFEL_TEST_OUTCOME_I determines whether a test passes or fails. In cases where this cannot be determined because the execution behaved unexpectedly, the outcome is flagged as unresolved. In that case the test needs to be inspected, which is expressed as '''is_maintenance_required''' in EIFFEL_TEST_OUTCOME_I.&lt;br /&gt;
&lt;br /&gt;
{{Block|[https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/execution/test_executor_i.e TEST_EXECUTOR_I]&lt;br /&gt;
[https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/execution/test_executor_observer.e TEST_EXECUTOR_OBSERVER]&lt;br /&gt;
[https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/execution/test_outcome_i.e TEST_OUTCOME_I]&lt;br /&gt;
[https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/execution/eiffel_test_outcome_i.e EIFFEL_TEST_OUTCOME_I]&lt;br /&gt;
[https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/execution/eiffel_test_routine_invocation_response_i.e EIFFEL_TEST_ROUTINE_INVOCATION_RESPONSE_I]}}&lt;br /&gt;
&lt;br /&gt;
=== TEST_FACTORY_I ===&lt;br /&gt;
&lt;br /&gt;
Factories are similar to executors in that they are registered in the test suite and, once triggered, run asynchronously. A test factory takes a TEST_CONFIGURATION_I, which describes the properties of a new test. There is a specialized version, EIFFEL_TEST_CONFIGURATION_I, for Eiffel tests (including class names, location, and the features and classes being tested by the new test).&lt;br /&gt;
So far the notification is kept simple by providing a callback function to the '''run''' routine of the factory. This is sufficient because clients will be notified anyway when a new test is added to the system through the test suite.&lt;br /&gt;
&lt;br /&gt;
This pattern should also be valid for test generation and extraction (Auto Test/CDD), where the factory might create multiple tests rather than a single one.&lt;br /&gt;
&lt;br /&gt;
{{Block|[https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/factory/test_factory_i.e TEST_FACTORY_I]&lt;br /&gt;
[https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/factory/test_configuration_i.e TEST_CONFIGURATION_I]&lt;br /&gt;
[https://svn.origo.ethz.ch/eiffelstudio/trunk/Src/Eiffel/ecosystem/services/testing/factory/eiffel_test_configuratoin_i.e EIFFEL_TEST_CONFIGURATION_I]}}&lt;br /&gt;
&lt;br /&gt;
== Communication between tool and test executor ==&lt;br /&gt;
&lt;br /&gt;
=== Protocol ===&lt;br /&gt;
&lt;br /&gt;
'''From tool to executor'''&lt;br /&gt;
&lt;br /&gt;
* name(s) of test to execute&lt;br /&gt;
* quit&lt;br /&gt;
&lt;br /&gt;
'''From executor to tool'''&lt;br /&gt;
&lt;br /&gt;
* test result&lt;br /&gt;
* text output produced by test&lt;br /&gt;
* exception details (type, tag, feature, class? occurred during set up, test, tear down?)&lt;br /&gt;
* call stack for exception&lt;br /&gt;
&lt;br /&gt;
=== Open questions ===&lt;br /&gt;
&lt;br /&gt;
* executor per machine/processor?&lt;br /&gt;
* text-based/object-based communication?&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
* [[Testing Tool (Specification)]]&lt;/div&gt;</summary>
		<author><name>Arnofiva</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=Testing_Tool_(Architecture)&amp;diff=11231</id>
		<title>Testing Tool (Architecture)</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=Testing_Tool_(Architecture)&amp;diff=11231"/>
				<updated>2008-06-20T22:03:38Z</updated>
		
		<summary type="html">&lt;p&gt;Arnofiva: Description for current service interfaces&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Testing]]&lt;br /&gt;
&lt;br /&gt;
== Test system architecture ==&lt;br /&gt;
&lt;br /&gt;
* separate system for executing tests&lt;br /&gt;
* system is only responsible for running tests, without determining whether a test fails or not -&amp;gt; compiling and launching the system is done by the tool, which also acts as the oracle&lt;br /&gt;
* tests are compiled up to degree 3 in the development system, so the user can edit tests like normal classes, but they do not affect the resulting binary&lt;br /&gt;
* tests referenced by the root class of the development system will be compiled anyway; this way we can also run tests in the debugger&lt;br /&gt;
* in CDD: test system is implicit target (does not have to be in ecf) which inherits from development target, test target simply defines new root class/feature&lt;br /&gt;
* re-use development EIFGEN (copy) so test system does not need to compile from scratch?&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
EiffelStudio  &amp;lt;-----------------------------&amp;gt;  Testing tool  &amp;lt;-----------------------------&amp;gt;  Test executor&lt;br /&gt;
&lt;br /&gt;
* Show tests in system                         * can be part of EiffelStudio or               * execute test in safe environment&lt;br /&gt;
* Show test result                               compiled as separate tool to be                (executor is allowed to crash)&lt;br /&gt;
* Provide test creation wizards                  used e.g. through console&lt;br /&gt;
* Interface for CDD, Auto tests,               * compile test executor&lt;br /&gt;
  creating manual tests, running               * distribute test executors to&lt;br /&gt;
* provide ESF service for                        different machines&lt;br /&gt;
  testing/test results                         * schedule test execution&lt;br /&gt;
                                               * provide test results&lt;br /&gt;
                                               * find all tests for a given ecf file&lt;br /&gt;
                                               * write root class for test executor&lt;br /&gt;
&lt;br /&gt;
CDD&lt;br /&gt;
* implemented partially in&lt;br /&gt;
  debugger/executable&lt;br /&gt;
* should be part of any Eiffel&lt;br /&gt;
  application, that way test can be&lt;br /&gt;
  created for bug submitting&lt;br /&gt;
* extraction can be initiated&lt;br /&gt;
  through debugger, breakpoints,&lt;br /&gt;
  failure window, etc.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Auto Test&lt;br /&gt;
* separate tool, interface in EiffelStudio&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Provide testing as a service in EiffelStudio ==&lt;br /&gt;
&lt;br /&gt;
Using ESS, we can provide all testing functionality as a service within EiffelStudio. That way other tools can make use of the testing functionality, and the testing tool itself does not have to access the implementation directly. What follows is a short description of the interfaces created so far.&lt;br /&gt;
&lt;br /&gt;
So far the service consists of three major parts: the test suite storing all tests, test execution and test creation. The service already includes more than 20 interface classes, so it will be important to find a good abstraction. Another aspect is that some parts of the service should be extensible: clients should be able to define new types of tests, executors or factories.&lt;br /&gt;
&lt;br /&gt;
=== TEST_SUITE_S ===&lt;br /&gt;
&lt;br /&gt;
The test suite is the primary part of the service. It holds the list of all tests in the system and controls all execution of tests. Right now the service has the restriction that only one executor can run at a time. Although there might not be a strong reason against having two executors running in parallel, the restriction makes observing the execution of tests much simpler. Factories, on the other hand, can be launched by anyone and may therefore run in parallel; in their case clients are usually interested in when a new test is created, for which events already exist in the test suite (see below).&lt;br /&gt;
&lt;br /&gt;
Changes in the test suite can be observed, so if tests are added, removed or modified clients can be notified. There are also events for activating or deactivating an executor in the test suite.&lt;br /&gt;
&lt;br /&gt;
The test suite also provides two registrars where new executors or factories can be registered. Clients can later query whether a certain executor or factory is available and use it if so. More on executors and factories later.&lt;br /&gt;
&lt;br /&gt;
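A client wanting to run tests through a registered executor might therefore check availability first. The following is a sketch only: the names of the availability query and the launch routine are assumptions for illustration, not the actual registrar interface.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
launch_tests (a_suite: TEST_SUITE_S; a_executor: TEST_EXECUTOR_I; a_tests: LIST [TEST_I])&lt;br /&gt;
        -- Run `a_tests' using `a_executor' if it is registered in `a_suite'.&lt;br /&gt;
        -- Sketch: `is_executor_available' and `run_executor' are&lt;br /&gt;
        -- hypothetical names for the registrar query and the launch routine.&lt;br /&gt;
    do&lt;br /&gt;
        if a_suite.is_executor_available (a_executor) then&lt;br /&gt;
            a_suite.run_executor (a_executor, a_tests)&lt;br /&gt;
        end&lt;br /&gt;
    end&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;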
=== EIFFEL_TEST_I ===&lt;br /&gt;
&lt;br /&gt;
EIFFEL_TEST_I will be the common test representation for now. It inherits from a more general interface TEST_I, which enables users to introduce new types of tests (not necessarily written in Eiffel). TEST_I inherits from a class TAGABLE_I, which means that all tests have a list of tags represented as strings ([[Testing_Tool_(Specification)#Tags|Tags section of specifications]]). This allows us to have commonly used functionality in the service itself, like filtering (see FILTERED_COLLECTION_I). It also enables users to introduce their own attributes for tests.&lt;br /&gt;
&lt;br /&gt;
EIFFEL_TEST_I points to the abstract syntax representation of its routine and of the class in which the routine is located. This is useful to the implementation, but can be useful to clients as well. In any case, all relevant information (such as the feature name and the tags in the indexing clause) should be accessible.&lt;br /&gt;
&lt;br /&gt;
All tests have a list of outcomes from previous execution sessions. This is explained in more detail in the next section.&lt;br /&gt;
&lt;br /&gt;
=== TEST_EXECUTOR_I ===&lt;br /&gt;
&lt;br /&gt;
This is a general interface for executing tests. It takes a list of tests and executes each of them. One restriction it imposes on its implementers is that execution is non-blocking: '''run''' returns immediately and all tests are executed asynchronously. This makes it simpler for clients (especially graphical UIs) to use.&lt;br /&gt;
&lt;br /&gt;
All state changes of TEST_EXECUTOR_I can be observed by inheriting from TEST_EXECUTOR_OBSERVER and connecting to the executor.&lt;br /&gt;
&lt;br /&gt;
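For illustration, a client observing an executor might look roughly like this. This is a sketch only: the callback name and the connection routine are assumptions, not the actual observer interface.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
class&lt;br /&gt;
    TEST_PROGRESS_MONITOR&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
    TEST_EXECUTOR_OBSERVER&lt;br /&gt;
&lt;br /&gt;
feature -- Events&lt;br /&gt;
&lt;br /&gt;
    on_test_executed (a_test: TEST_I)&lt;br /&gt;
            -- Hypothetical callback invoked when `a_test' has been executed.&lt;br /&gt;
        do&lt;br /&gt;
            -- Update a progress bar, refresh a result grid, etc.&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature -- Connection&lt;br /&gt;
&lt;br /&gt;
    observe (a_executor: TEST_EXECUTOR_I)&lt;br /&gt;
            -- Start receiving state changes of `a_executor'.&lt;br /&gt;
            -- (`connect' is an assumed name for the connection routine.)&lt;br /&gt;
        do&lt;br /&gt;
            a_executor.connect (Current)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;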
As mentioned above, TEST_I keeps a list of outcomes produced by TEST_EXECUTOR_I. In the case of EIFFEL_TEST_I the list contains items of type EIFFEL_TEST_OUTCOME_I. Each outcome points to an EIFFEL_TEST_ROUTINE_INVOCATION_RESPONSE_I, which describes one stage of a test execution. The three stages are setup, test and tear down, where ''test'' means calling the actual testing routine. Based on the responses of each stage, EIFFEL_TEST_OUTCOME_I determines whether a test passes or fails. In cases where this cannot be determined because the execution behaved unexpectedly, the outcome is flagged as unresolved. In that case the test needs to be inspected, which is expressed as '''is_maintenance_required''' in EIFFEL_TEST_OUTCOME_I.&lt;br /&gt;
&lt;br /&gt;
=== TEST_FACTORY_I ===&lt;br /&gt;
&lt;br /&gt;
Factories are similar to executors in that they are registered in the test suite and, once triggered, run asynchronously. A test factory takes a TEST_CONFIGURATION_I, which describes the properties of a new test. There is a specialized version, EIFFEL_TEST_CONFIGURATION_I, for Eiffel tests (including class names, location, and the features and classes being tested by the new test).&lt;br /&gt;
So far the notification is kept simple by providing a callback function to the '''run''' routine of the factory. This is sufficient because clients will be notified anyway when a new test is added to the system through the test suite.&lt;br /&gt;
&lt;br /&gt;
This pattern should also be valid for test generation and extraction (Auto Test/CDD), where the factory might create multiple tests rather than a single one.&lt;br /&gt;
&lt;br /&gt;
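As an illustration, triggering a factory with a completion callback might look roughly like this. This is a sketch only: the signature of '''run''' and the callback are assumptions for illustration.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
create_test (a_factory: TEST_FACTORY_I; a_config: TEST_CONFIGURATION_I)&lt;br /&gt;
        -- Launch `a_factory' asynchronously for `a_config'.&lt;br /&gt;
    do&lt;br /&gt;
        a_factory.run (a_config, agent on_test_created)&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
on_test_created (a_test: TEST_I)&lt;br /&gt;
        -- Called back by the factory once `a_test' has been created.&lt;br /&gt;
        -- Often unnecessary, since the test suite publishes an event&lt;br /&gt;
        -- when the new test is added to the system.&lt;br /&gt;
    do&lt;br /&gt;
    end&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;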
== Communication between tool and test executor ==&lt;br /&gt;
&lt;br /&gt;
=== Protocol ===&lt;br /&gt;
&lt;br /&gt;
'''From tool to executor'''&lt;br /&gt;
&lt;br /&gt;
* name(s) of test to execute&lt;br /&gt;
* quit&lt;br /&gt;
&lt;br /&gt;
'''From executor to tool'''&lt;br /&gt;
&lt;br /&gt;
* test result&lt;br /&gt;
* text output produced by test&lt;br /&gt;
* exception details (type, tag, feature, class? occurred during set up, test, tear down?)&lt;br /&gt;
* call stack for exception&lt;br /&gt;
&lt;br /&gt;
=== Open questions ===&lt;br /&gt;
&lt;br /&gt;
* executor per machine/processor?&lt;br /&gt;
* text-based/object-based communication?&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
* [[Testing Tool (Specification)]]&lt;/div&gt;</summary>
		<author><name>Arnofiva</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=Testing_Tool_(Architecture)&amp;diff=11230</id>
		<title>Testing Tool (Architecture)</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=Testing_Tool_(Architecture)&amp;diff=11230"/>
				<updated>2008-06-20T20:38:16Z</updated>
		
		<summary type="html">&lt;p&gt;Arnofiva: Added service section&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Testing]]&lt;br /&gt;
&lt;br /&gt;
== Test system architecture ==&lt;br /&gt;
&lt;br /&gt;
* separate system for executing tests&lt;br /&gt;
* system is only responsible for running tests, without determining whether a test fails or not -&amp;gt; compiling and launching the system is done by the tool, which also acts as the oracle&lt;br /&gt;
* tests are compiled up to degree 3 in the development system, so the user can edit tests like normal classes, but they do not affect the resulting binary&lt;br /&gt;
* tests referenced by the root class of the development system will be compiled anyway; this way we can also run tests in the debugger&lt;br /&gt;
* in CDD: test system is implicit target (does not have to be in ecf) which inherits from development target, test target simply defines new root class/feature&lt;br /&gt;
* re-use development EIFGEN (copy) so test system does not need to compile from scratch?&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
EiffelStudio  &amp;lt;-----------------------------&amp;gt;  Testing tool  &amp;lt;-----------------------------&amp;gt;  Test executor&lt;br /&gt;
&lt;br /&gt;
* Show tests in system                         * can be part of EiffelStudio or               * execute test in safe environment&lt;br /&gt;
* Show test result                               compiled as separate tool to be                (executor is allowed to crash)&lt;br /&gt;
* Provide test creation wizards                  used e.g. through console&lt;br /&gt;
* Interface for CDD, Auto tests,               * compile test executor&lt;br /&gt;
  creating manual tests, running               * distribute test executors to&lt;br /&gt;
* provide ESF service for                        different machines&lt;br /&gt;
  testing/test results                         * schedule test execution&lt;br /&gt;
                                               * provide test results&lt;br /&gt;
                                               * find all tests for a given ecf file&lt;br /&gt;
                                               * write root class for test executor&lt;br /&gt;
&lt;br /&gt;
CDD&lt;br /&gt;
* implemented partially in&lt;br /&gt;
  debugger/executable&lt;br /&gt;
* should be part of any Eiffel&lt;br /&gt;
  application, that way test can be&lt;br /&gt;
  created for bug submitting&lt;br /&gt;
* extraction can be initiated&lt;br /&gt;
  through debugger, breakpoints,&lt;br /&gt;
  failure window, etc.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Auto Test&lt;br /&gt;
* separate tool, interface in EiffelStudio&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Provide testing as a service in EiffelStudio ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Communication between tool and test executor ==&lt;br /&gt;
&lt;br /&gt;
=== Protocol ===&lt;br /&gt;
&lt;br /&gt;
'''From tool to executor'''&lt;br /&gt;
&lt;br /&gt;
* name(s) of test to execute&lt;br /&gt;
* quit&lt;br /&gt;
&lt;br /&gt;
'''From executor to tool'''&lt;br /&gt;
&lt;br /&gt;
* test result&lt;br /&gt;
* text output produced by test&lt;br /&gt;
* exception details (type, tag, feature, class? occurred during set up, test, tear down?)&lt;br /&gt;
* call stack for exception&lt;br /&gt;
&lt;br /&gt;
=== Open questions ===&lt;br /&gt;
&lt;br /&gt;
* executor per machine/processor?&lt;br /&gt;
* text-based/object-based communication?&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
* [[Testing Tool (Specification)]]&lt;/div&gt;</summary>
		<author><name>Arnofiva</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=Tutorial:_Creating_a_Service&amp;diff=11215</id>
		<title>Tutorial: Creating a Service</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=Tutorial:_Creating_a_Service&amp;diff=11215"/>
				<updated>2008-06-16T23:14:45Z</updated>
		
		<summary type="html">&lt;p&gt;Arnofiva: typo&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Extending EiffelStudio]]&lt;br /&gt;
[[Category:EiffelStudio Services]]&lt;br /&gt;
{{UnderConstruction}}&lt;br /&gt;
In this tutorial I'll demonstrate the process for integrating third-party [[EiffelStudio_Service|services]] inside [[EiffelStudio]] and hooking up internal parts of [[EiffelStudio]] to use the new service.&lt;br /&gt;
&lt;br /&gt;
Before we begin you should have a fundamental understanding of what a service is and a clear understanding of the guidelines for writing services.&lt;br /&gt;
&lt;br /&gt;
{{Note|This tutorial is followed up by another tutorial for creating an EiffelStudio tool for displaying information published by the service.}}&lt;br /&gt;
&lt;br /&gt;
== Getting Started ==&lt;br /&gt;
When extending EiffelStudio, it is a good idea to separate your code from the EiffelStudio code. The [[Customizing the EiffelStudio Project]] page describes the process of doing this.&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
This tutorial will show you how to create a logger service, used to log messages. The logger service will actually be a simplified version of the logger service already available in [[EiffelStudio]], &amp;lt;e&amp;gt;LOGGER_S&amp;lt;/e&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The tutorial will cover creating and implementing a service, registering a service, adding eventing and finally consuming the service and using a service observer.&lt;br /&gt;
&lt;br /&gt;
Although the service has a simple interface, it is actually quite complete in that the service itself makes use of the [[Event List Service]], as a demonstration of how reusing services in [[EiffelStudio]] can make development quicker and easier.&lt;br /&gt;
&lt;br /&gt;
== Creating a Service Interface ==&lt;br /&gt;
The very first step in creating a service is to define a service interface. A service interface should contain only deferred routines, or effective routines that reference only the service interface itself or other interfaces in the [[EiffelStudio]] [[EiffelStudio Ecosystem|ecosystem]].&lt;br /&gt;
&lt;br /&gt;
{{Note|It is important that the interface abstraction allows complete freedom to the implementation of the service. Implementation details are not public and should remain that way. No consumer of the service should ever attempt to reverse-assign a retrieved service to the implementation class; it should only be assigned to the interface class. Consumers of the service should not have to rely on the implementation details of a service; doing so will potentially break code in the future, or whenever querying for a specific service returns a different implementation than expected.}}&lt;br /&gt;
&lt;br /&gt;
This tutorial creates a logger service, so it makes sense to create a service interface class called &amp;lt;eiffel&amp;gt;LOGGER_SERVICE_S&amp;lt;/eiffel&amp;gt;. Create a deferred class &amp;lt;eiffel&amp;gt;LOGGER_SERVICE_S&amp;lt;/eiffel&amp;gt; in your extension project cluster.&lt;br /&gt;
&lt;br /&gt;
{{Note|All service interfaces by convention are suffixed &amp;lt;eiffel&amp;gt;_S&amp;lt;/eiffel&amp;gt;. This makes it clear to a consumer that they are using a service interface. All other related interfaces for the service should be suffixed &amp;lt;eiffel&amp;gt;_I&amp;lt;/eiffel&amp;gt; to indicate an interface.}}&lt;br /&gt;
&lt;br /&gt;
The first step is to define &amp;lt;eiffel&amp;gt;LOGGER_SERVICE_S&amp;lt;/eiffel&amp;gt; as an actual service interface. In order to achieve this, &amp;lt;eiffel&amp;gt;LOGGER_SERVICE_S&amp;lt;/eiffel&amp;gt; must inherit the service base interface &amp;lt;eiffel&amp;gt;SERVICE_I&amp;lt;/eiffel&amp;gt;. As of [[EiffelStudio]] [[EiffelStudio 6.1 Releases|6.1]], &amp;lt;eiffel&amp;gt;SERVICE_I&amp;lt;/eiffel&amp;gt; does not contain any effective or deferred routines; it is merely a placeholder for future additions and a means of classification. It does however inherit another service class, &amp;lt;e&amp;gt;SITE&amp;lt;/e&amp;gt;, which will be discussed later.&lt;br /&gt;
&lt;br /&gt;
So now you should have something looking like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;e&amp;gt;&lt;br /&gt;
deferred class&lt;br /&gt;
  LOGGER_SERVICE_S&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
  SERVICE_I&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/e&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Of course this doesn't do all that much; in fact, it does nothing! We need to add a way to log messages. For this we'll add &amp;lt;e&amp;gt;put&amp;lt;/e&amp;gt; routines: &amp;lt;e&amp;gt;put_message&amp;lt;/e&amp;gt; and &amp;lt;e&amp;gt;put_message_with_severity&amp;lt;/e&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
A logger that can only record a plain message is not powerful enough. The put routines for the service should also permit categorization and even indicate a level of severity, in case a logger service consumer deems that a particular entry deserves more or less attention. Fortunately ESS offers built-in support for categorization and basic priority levels, which will serve quite nicely as a translation for a log item's severity level.&lt;br /&gt;
&lt;br /&gt;
=== Categories and Priorities ===&lt;br /&gt;
&amp;lt;eiffel&amp;gt;ENVIRONMENT_CATEGORIES&amp;lt;/eiffel&amp;gt; is a class consisting of constants defining EiffelStudio environment region categories. There are constants for the compiler, the debugger, the editor and so forth. As an extender you are free to add your own categories and utilize them. Any class can access a single instance of &amp;lt;eiffel&amp;gt;ENVIRONMENT_CATEGORIES&amp;lt;/eiffel&amp;gt; through &amp;lt;eiffel&amp;gt;SHARED_ENVIRONMENT_CATEGORIES.categories&amp;lt;/eiffel&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;PRIORITY_LEVELS&amp;lt;/eiffel&amp;gt; is another class containing constants for basic priority levels: high, normal and low. Any class can access a single instance of &amp;lt;eiffel&amp;gt;PRIORITY_LEVELS&amp;lt;/eiffel&amp;gt; through &amp;lt;eiffel&amp;gt;SHARED_PRIORITY_LEVELS.priorities&amp;lt;/eiffel&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
We want to make use of both categories and priorities in the logger service, so &amp;lt;eiffel&amp;gt;LOGGER_SERVICE_S&amp;lt;/eiffel&amp;gt; should inherit both &amp;lt;eiffel&amp;gt;SHARED_ENVIRONMENT_CATEGORIES&amp;lt;/eiffel&amp;gt; and &amp;lt;eiffel&amp;gt;SHARED_PRIORITY_LEVELS&amp;lt;/eiffel&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
{{Warning|Inheriting the shared classes should not affect the service interface so be sure to set the export status when inheriting those shared classes!}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;ENVIRONMENT_CATEGORIES&amp;lt;/eiffel&amp;gt; and &amp;lt;eiffel&amp;gt;PRIORITY_LEVELS&amp;lt;/eiffel&amp;gt;, in addition to being constant definition classes, also contain validation functions to ensure that a category identifier is a known identifier, and likewise for a priority identifier. In the practice of [[Design by Contract]], our service routines are going to be passed a category and a severity (priority) level, which require validation. Given that &amp;lt;eiffel&amp;gt;SHARED_ENVIRONMENT_CATEGORIES.categories&amp;lt;/eiffel&amp;gt; and &amp;lt;eiffel&amp;gt;SHARED_PRIORITY_LEVELS.priorities&amp;lt;/eiffel&amp;gt; are not exported members of the interface, we'll need to create proxy query functions, which is actually good design. These proxy functions can then be used in service routine preconditions, and can also be used by a consumer client when calling one of the service routines.&lt;br /&gt;
&lt;br /&gt;
Below is the complete code for adding categories and severity levels to the logger service interface. The proxy function &amp;lt;eiffel&amp;gt;is_valid_category&amp;lt;/eiffel&amp;gt; has been added for category validation and &amp;lt;eiffel&amp;gt;is_valid_severity_level&amp;lt;/eiffel&amp;gt; added for severity level validation.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
deferred class&lt;br /&gt;
    LOGGER_SERVICE_S&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
    SERVICE_I&lt;br /&gt;
&lt;br /&gt;
    SHARED_ENVIRONMENT_CATEGORIES&lt;br /&gt;
        export&lt;br /&gt;
            {NONE} all&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
    SHARED_PRIORITY_LEVELS&lt;br /&gt;
        export&lt;br /&gt;
            {NONE} all&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature -- Query&lt;br /&gt;
&lt;br /&gt;
    frozen is_valid_category (a_cat: NATURAL_8): BOOLEAN&lt;br /&gt;
            -- Determines if `a_cat' is a valid logger category&lt;br /&gt;
            --&lt;br /&gt;
            -- `a_cat': A category identifier to validate.&lt;br /&gt;
            -- `Result': True to indicate the category is valid; False otherwise.&lt;br /&gt;
        do&lt;br /&gt;
            Result := categories.is_valid_category (a_cat)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
    frozen is_valid_severity_level (a_level: INTEGER_8): BOOLEAN&lt;br /&gt;
            -- Determines if `a_level' is a valid severity level&lt;br /&gt;
            --&lt;br /&gt;
            -- `a_level': A severity level.&lt;br /&gt;
            -- `Result': True to indicate the level of severity is valid; False otherwise.&lt;br /&gt;
        do&lt;br /&gt;
            Result := priorities.is_valid_priority_level (a_level)&lt;br /&gt;
        end&lt;br /&gt;
		&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Adding Service Functionality ==&lt;br /&gt;
&lt;br /&gt;
Still, the logger service has no functionality. The service interface is now at a stage where the actual service routines can be added. We're going to add three routines: two to log messages and one to clear the log.&lt;br /&gt;
&lt;br /&gt;
{{Note|The logger created here is very simple. You would most likely add routines to flush log entries, and perhaps a mutable status attribute to set an auto-flush mode. It is important to settle your design before releasing a service in a version of EiffelStudio, because once released the service interface may be used by others. In our case there is no flush routine, which means a later implementation of the logger service whose message-flushing operations are expensive will suffer performance penalties: since the interface was already released without a flush routine, a flush has to be performed every time a log message is added, because existing consumer clients are not using the new service version's flush routine.&lt;br /&gt;
&lt;br /&gt;
When designing a service it is necessary to think about how the service might be used by EiffelStudio and the Eiffel compiler, and about what might happen in the future. In the case of the logger, one EiffelStudio SKU may present logged information in an embedded EiffelStudio tool, another may push it to the OS event log, and yet another may write it to a file. Or you might have all three available, with a preference to indicate how added log messages are handled.}}&lt;br /&gt;
&lt;br /&gt;
Here is the basic interface with the previous interface members elided for clarity.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
deferred class&lt;br /&gt;
    LOGGER_SERVICE_S&lt;br /&gt;
&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
feature -- Extension&lt;br /&gt;
&lt;br /&gt;
    put_message (a_msg: STRING_32; a_cat: NATURAL_8)&lt;br /&gt;
            -- Logs a message.&lt;br /&gt;
            --&lt;br /&gt;
            -- `a_msg': Message text to log.&lt;br /&gt;
            -- `a_cat': A message category, see {ENVIRONMENT_CATEGORIES}.&lt;br /&gt;
        require&lt;br /&gt;
            a_msg_attached: a_msg /= Void&lt;br /&gt;
            not_a_msg_is_empty: not a_msg.is_empty&lt;br /&gt;
            a_cat_is_valid_category: is_valid_category (a_cat)&lt;br /&gt;
        do&lt;br /&gt;
            put_message_with_severity (a_msg, a_cat, {PRIORITY_LEVELS}.normal)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
    put_message_with_severity (a_msg: STRING_32; a_cat: NATURAL_8; a_level: INTEGER_8)&lt;br /&gt;
            -- Logs a message specifying a severity level.&lt;br /&gt;
            --&lt;br /&gt;
            -- `a_msg': Message text to log.&lt;br /&gt;
            -- `a_cat': A message category, see {ENVIRONMENT_CATEGORIES}.&lt;br /&gt;
            -- `a_level': A severity level for the message, See {PRIORITY_LEVELS}.&lt;br /&gt;
        require&lt;br /&gt;
            a_msg_attached: a_msg /= Void&lt;br /&gt;
            not_a_msg_is_empty: not a_msg.is_empty&lt;br /&gt;
            a_cat_is_valid_category: is_valid_category (a_cat)&lt;br /&gt;
            a_level_is_valid_severity_level: is_valid_severity_level (a_level)&lt;br /&gt;
        deferred&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature -- Removal&lt;br /&gt;
&lt;br /&gt;
    clear_log&lt;br /&gt;
            -- Clear any cached log data&lt;br /&gt;
        deferred&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
...&lt;br /&gt;
		&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The simpler &amp;lt;eiffel&amp;gt;put_message&amp;lt;/eiffel&amp;gt; service routine has already been implemented by calling the more specific &amp;lt;eiffel&amp;gt;put_message_with_severity&amp;lt;/eiffel&amp;gt; with a default severity level. Our service now has a simple and a more specific version of the logging routine, at zero cost to a service implementer, and with added value for logger service clients, who will not have to use the specific version each time a log message is added.&lt;br /&gt;
&lt;br /&gt;
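For illustration, a consumer holding a reference to the service could then log with either routine. This is a minimal sketch, assuming &amp;lt;e&amp;gt;logger_service&amp;lt;/e&amp;gt; stands for however the client obtained the service (service lookup is covered later) and that the category constant shown exists:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
report_warning (a_message: STRING_32)&lt;br /&gt;
        -- Log `a_message' with an elevated severity.&lt;br /&gt;
    require&lt;br /&gt;
        a_message_attached: a_message /= Void&lt;br /&gt;
        not_a_message_is_empty: not a_message.is_empty&lt;br /&gt;
    do&lt;br /&gt;
            -- `categories.editor' is an illustrative category; use&lt;br /&gt;
            -- whichever {ENVIRONMENT_CATEGORIES} constant applies.&lt;br /&gt;
        logger_service.put_message_with_severity (a_message,&lt;br /&gt;
            categories.editor, {PRIORITY_LEVELS}.high)&lt;br /&gt;
    end&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;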
=== Adding Events ===&lt;br /&gt;
To be a good service citizen of EiffelStudio it is highly desirable to provide events that service consumers can hook up to. Not all services have events, but it so happens that the logger service is interacted with in ways that tools or other services may be interested in: a message is added, and messages are cleared.&lt;br /&gt;
&lt;br /&gt;
Griffin provides its own event mechanism using &amp;lt;eiffel&amp;gt;EVENT_TYPE&amp;lt;/eiffel&amp;gt;, an extremely powerful event abstraction that is simple to use.&lt;br /&gt;
&lt;br /&gt;
To facilitate event hooks we'll add the events &amp;lt;eiffel&amp;gt;message_logged_events&amp;lt;/eiffel&amp;gt;, to notify subscribers when a message is added, and &amp;lt;eiffel&amp;gt;cleared_events&amp;lt;/eiffel&amp;gt;, to notify subscribers when a clear operation has been performed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
deferred class&lt;br /&gt;
    LOGGER_SERVICE_I&lt;br /&gt;
&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
feature -- Events&lt;br /&gt;
&lt;br /&gt;
    message_logged_events: EVENT_TYPE [TUPLE [service: LOGGER_SERVICE_I; message: STRING_32;&lt;br /&gt;
        category: NATURAL_8; level: INTEGER_8]]&lt;br /&gt;
            -- Events called when a message has been logged&lt;br /&gt;
        deferred&lt;br /&gt;
        ensure&lt;br /&gt;
            result_attached: Result /= Void&lt;br /&gt;
            result_consistent: Result = message_logged_events&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
    cleared_events: EVENT_TYPE [TUPLE [service: LOGGER_SERVICE_I]]&lt;br /&gt;
            -- Events called when the messages have been cleared from the log&lt;br /&gt;
        deferred&lt;br /&gt;
        ensure&lt;br /&gt;
            result_attached: Result /= Void&lt;br /&gt;
            result_consistent: Result = cleared_events&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that the events are also deferred. This gives a logger service implementation the option to implement the events as attributes or as deferred-evaluation functions, for performance and memory-footprint optimizations. The postcondition &amp;lt;eiffel&amp;gt;result_consistent&amp;lt;/eiffel&amp;gt; ensures that any deferred-evaluation/once-per-object implementation actually performs the correct per-object caching.&lt;br /&gt;
&lt;br /&gt;
Because the events may be implemented using lazy evaluation, no class invariants have been added to ensure the event attributes are always attached: (A) with deferred evaluation they may not be attached until first called, and (B) evaluating such class invariants would remove any optimization benefit of deferred evaluation, as the events would be evaluated as soon as the service has been created.&lt;br /&gt;
&lt;br /&gt;
{{Note|For events implemented as attributes it is desirable for the implementation to add the appropriate invariants to ensure the events are in an attached state after the logger service has been created.}}&lt;br /&gt;
&lt;br /&gt;
== Creating a Consumer ==&lt;br /&gt;
Consumer helper classes are a nice addition when adding a new service. They make working with the new service much easier, and it takes only a minute to create a consumer.&lt;br /&gt;
&lt;br /&gt;
A consumer is a helper class that provides cached access to a service. Service consumers can then simply inherit one or more consumer helper classes to gain access to desired services.&lt;br /&gt;
&lt;br /&gt;
{{Note|As a convention, all service consumer helper classes are suffixed with &amp;lt;eiffel&amp;gt;_SERVICE_CONSUMER&amp;lt;/eiffel&amp;gt;.}}&lt;br /&gt;
&lt;br /&gt;
Below is literally all the code you need to create a consumer helper class for your service. It simply renames the features from a generic base class so as to provide non-conflicting feature names when multiple service consumer helper classes are used from a single class.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
class&lt;br /&gt;
    LOGGER_SERVICE_CONSUMER&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
    SERVICE_CONSUMER [LOGGER_SERVICE_S]&lt;br /&gt;
        rename&lt;br /&gt;
            service as logger_service,&lt;br /&gt;
            is_service_available as is_logger_service_available,&lt;br /&gt;
            internal_service as internal_logger_service&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Creating the Service Implementation ==&lt;br /&gt;
The groundwork has been laid on which to base the implementation of the logger service. In this tutorial we are simply going to make use of the [[Event List Service]], which does a fantastic job of providing facilities for adding and removing objects (called event items). This means our implementation is basically a proxy to another service. Using another service saves time and effort in going from design to integration. An additional benefit of using the [[Event List Service]] is that an EiffelStudio tool can be created very quickly to display the logged messages, because the EiffelStudio foundations provide base implementations for tools built by consuming the [[Event List Service]] (for those who are interested, see &amp;lt;eiffel&amp;gt;ES_EVENT_LIST_TOOL_BASE&amp;lt;/eiffel&amp;gt; and &amp;lt;eiffel&amp;gt;ES_CLICKABLE_EVENT_LIST_TOOL_BASE&amp;lt;/eiffel&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Below is the stub implementation for the effective logger service. &lt;br /&gt;
&lt;br /&gt;
{{Note|Just like service interfaces and service consumers, effective service classes should always use the name of the service, with the &amp;lt;eiffel&amp;gt;_S&amp;lt;/eiffel&amp;gt; suffix removed.}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
class&lt;br /&gt;
    LOGGER_SERVICE&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
    LOGGER_SERVICE_S&lt;br /&gt;
&lt;br /&gt;
    SAFE_AUTO_DISPOSABLE&lt;br /&gt;
&lt;br /&gt;
    EVENT_LIST_SERVICE_CONSUMER&lt;br /&gt;
        export&lt;br /&gt;
            {NONE} all&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
create&lt;br /&gt;
    make&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Initialization&lt;br /&gt;
&lt;br /&gt;
    make&lt;br /&gt;
            -- Initialize logger service.&lt;br /&gt;
        do&lt;br /&gt;
                -- Initialize events&lt;br /&gt;
            create message_logged_events&lt;br /&gt;
            create cleared_events&lt;br /&gt;
            &lt;br /&gt;
                -- Set up automatic cleaning of event object&lt;br /&gt;
            auto_dispose (message_logged_events)&lt;br /&gt;
            auto_dispose (cleared_events)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature -- Extension&lt;br /&gt;
&lt;br /&gt;
    put_message_with_severity (a_msg: STRING_32; a_cat: NATURAL_8; a_level: INTEGER_8)&lt;br /&gt;
            -- Logs a message specifying a severity level.&lt;br /&gt;
            --&lt;br /&gt;
            -- `a_msg': Message text to log.&lt;br /&gt;
            -- `a_cat': An optional message category.&lt;br /&gt;
            -- `a_level': A severity level for the message.&lt;br /&gt;
        do&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature -- Removal&lt;br /&gt;
&lt;br /&gt;
    clear_log&lt;br /&gt;
            -- Clear any cached log data&lt;br /&gt;
        do&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature -- Events&lt;br /&gt;
&lt;br /&gt;
    message_logged_events: EVENT_TYPE [TUPLE [service: LOGGER_SERVICE_S; message: STRING_32; &lt;br /&gt;
        category: NATURAL_8; level: INTEGER_8]]&lt;br /&gt;
            -- Events called when a message has been logged&lt;br /&gt;
            &lt;br /&gt;
    cleared_events: EVENT_TYPE [TUPLE [service: LOGGER_SERVICE_S]]&lt;br /&gt;
            -- Events called when the messages have been cleared from the log&lt;br /&gt;
&lt;br /&gt;
invariant&lt;br /&gt;
    message_logged_events_attached: not is_zombie implies message_logged_events /= Void&lt;br /&gt;
    cleared_events_attached: not is_zombie implies cleared_events /= Void&lt;br /&gt;
    &lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The service is an implementation of the previously defined logger service interface (&amp;lt;eiffel&amp;gt;LOGGER_SERVICE_S&amp;lt;/eiffel&amp;gt;), so we have to inherit from the interface so that the SOA core can validate the service when it is registered; more on this later.&lt;br /&gt;
&lt;br /&gt;
Also inherited is &amp;lt;eiffel&amp;gt;SAFE_AUTO_DISPOSABLE&amp;lt;/eiffel&amp;gt;, a memory resource management base class that handles automatically disposing of member objects when the class object itself is disposed. As the service hosts two events, after creation those events are added to the auto-dispose pool for automatic disposal. This saves the logger service from having to implement &amp;lt;eiffel&amp;gt;safe_disposable&amp;lt;/eiffel&amp;gt; and perform the resource management manually. For more information on resource management see [[EiffelStudio Memory Management]].&lt;br /&gt;
&lt;br /&gt;
As stated, this implementation actually uses the [[Event List Service]], so access to that service is provided through the service consumer &amp;lt;eiffel&amp;gt;EVENT_LIST_SERVICE_CONSUMER&amp;lt;/eiffel&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
A creation routine &amp;lt;eiffel&amp;gt;make&amp;lt;/eiffel&amp;gt; has been added to the class to create the implemented event attribute objects and register them with the auto-dispose pool from &amp;lt;eiffel&amp;gt;SAFE_AUTO_DISPOSABLE&amp;lt;/eiffel&amp;gt;. As recommended, the events, implemented as attributes, have class invariants to ensure their validity for the lifetime of the object.&lt;br /&gt;
&lt;br /&gt;
All the other routines are empty stubs waiting to be implemented.&lt;br /&gt;
&lt;br /&gt;
=== Supporting the Event List Service: Event Items ===&lt;br /&gt;
The [[Event List Service]] makes use of an entity called an [[Event List Service#Event Items|event item]]. So, for the logger service to effectively use the [[Event List Service]], it must implement an [[Event List Service#Event Items|event item]] for a log message.&lt;br /&gt;
&lt;br /&gt;
Fortunately [[Griffin]] already provides much of the implementation needed for basic [[Event List Service#Event Items|event items]] through &amp;lt;eiffel&amp;gt;EVENT_LIST_ITEM&amp;lt;/eiffel&amp;gt;. However, &amp;lt;eiffel&amp;gt;EVENT_LIST_ITEM&amp;lt;/eiffel&amp;gt; does not provide all the implementation required for a logger-based [[Event List Service#Event Items|event item]]; a few deferred routines still require implementation. One of these, the function &amp;lt;eiffel&amp;gt;type&amp;lt;/eiffel&amp;gt;, is important for implementation-agnostic [[Event List Service#Event Items|event item]] identification, as the implementation of an [[Event List Service#Event Items|event item]] should not be relied upon by any part of [[EiffelStudio]] other than the implementation aspect responsible for creating it, in this case the implementation of the logger service, &amp;lt;eiffel&amp;gt;LOGGER_SERVICE&amp;lt;/eiffel&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==== Event List Item Types ====&lt;br /&gt;
An [[Event List Service#Event Items|event item]] type corresponds to a type identifier found in &amp;lt;eiffel&amp;gt;EVENT_LIST_ITEM_TYPES&amp;lt;/eiffel&amp;gt;. As the logger service introduces a new type of [[Event List Service#Event Items|event item]], it needs to add a new type identifier. Open &amp;lt;eiffel&amp;gt;EVENT_LIST_ITEM_TYPES&amp;lt;/eiffel&amp;gt; and add the following code:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
log: NATURAL_8 = 2&lt;br /&gt;
        -- Logger event list item type&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Even though the type constant identifier is given the value ''2'' here, the only requirement is that the value be unique. If you are using a version of [[EiffelStudio]] where ''2'' is already taken by another type constant identifier, pick the next available value.&lt;br /&gt;
&lt;br /&gt;
==== A Log Event Item Abstraction ====&lt;br /&gt;
&lt;br /&gt;
To support clean abstraction and separation from the underlying implementation, a derived [[Event List Service#Event Items|event item]] interface for a log message should be created, implementing &amp;lt;eiffel&amp;gt;type&amp;lt;/eiffel&amp;gt; from &amp;lt;eiffel&amp;gt;EVENT_LIST_ITEM_I&amp;lt;/eiffel&amp;gt;. &amp;lt;eiffel&amp;gt;type&amp;lt;/eiffel&amp;gt; should return the new type constant identifier - &amp;lt;eiffel&amp;gt;log&amp;lt;/eiffel&amp;gt; - added to &amp;lt;eiffel&amp;gt;EVENT_LIST_ITEM_TYPES&amp;lt;/eiffel&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Create a new deferred class &amp;lt;eiffel&amp;gt;LOGGER_EVENT_LIST_ITEM_I&amp;lt;/eiffel&amp;gt; and copy the following code into it:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
deferred class&lt;br /&gt;
    LOGGER_EVENT_LIST_ITEM_I&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
    EVENT_LIST_ITEM_I&lt;br /&gt;
&lt;br /&gt;
feature -- Access&lt;br /&gt;
&lt;br /&gt;
    frozen type: NATURAL_8&lt;br /&gt;
            -- Event list item type identifier, see {EVENT_LIST_ITEM_TYPES}&lt;br /&gt;
        once&lt;br /&gt;
            Result := {EVENT_LIST_ITEM_TYPES}.log&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== A Log Event Item Implementation ====&lt;br /&gt;
&lt;br /&gt;
Using the newly created log interface &amp;lt;eiffel&amp;gt;LOGGER_EVENT_LIST_ITEM_I&amp;lt;/eiffel&amp;gt;, create a new class &amp;lt;eiffel&amp;gt;LOGGER_EVENT_LIST_ITEM&amp;lt;/eiffel&amp;gt;, the [[Event List Service#Event Items|event list item]] that &amp;lt;eiffel&amp;gt;LOGGER_SERVICE&amp;lt;/eiffel&amp;gt; will use to push log messages to the [[Event List Service]].&lt;br /&gt;
&lt;br /&gt;
Copy and paste the following code into the new class:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
class&lt;br /&gt;
    LOGGER_EVENT_LIST_ITEM&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
    LOGGER_EVENT_LIST_ITEM_I&lt;br /&gt;
&lt;br /&gt;
    EVENT_LIST_ITEM&lt;br /&gt;
        rename&lt;br /&gt;
            make as make_event_list_item&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
create&lt;br /&gt;
    make&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Initialization&lt;br /&gt;
&lt;br /&gt;
    make (a_category: like category; a_description: like description; a_level: like priority)&lt;br /&gt;
            -- Initialize a new event list log item.&lt;br /&gt;
            --&lt;br /&gt;
            -- `a_category': Log category, see {ENVIRONMENT_CATEGORIES}.&lt;br /&gt;
            -- `a_description': Log message.&lt;br /&gt;
            -- `a_level': Severity level of the logged message.&lt;br /&gt;
        require&lt;br /&gt;
            a_category_is_valid_category: is_valid_category (a_category)&lt;br /&gt;
            a_description_attached: a_description /= Void&lt;br /&gt;
            not_a_description_is_empty: not a_description.is_empty&lt;br /&gt;
            a_level_is_valid_priority: is_valid_priority (a_level)&lt;br /&gt;
        do&lt;br /&gt;
            make_event_list_item (a_category, a_level, Void)&lt;br /&gt;
            description := a_description&lt;br /&gt;
        ensure&lt;br /&gt;
            category_set: category = a_category&lt;br /&gt;
            description_set: description = a_description&lt;br /&gt;
            priority_set: priority = a_level&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature -- Access&lt;br /&gt;
&lt;br /&gt;
    description: STRING_32&lt;br /&gt;
            -- Log message description&lt;br /&gt;
&lt;br /&gt;
feature -- Query&lt;br /&gt;
&lt;br /&gt;
    is_valid_data (a_data: like data): BOOLEAN&lt;br /&gt;
            -- Determines if the user data `a_data' is valid for the current event item.&lt;br /&gt;
            --&lt;br /&gt;
            -- `a_data': The user data to validate.&lt;br /&gt;
            -- `Result': True if the user data is valid; False otherwise.&lt;br /&gt;
        do&lt;br /&gt;
            Result := True&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
invariant&lt;br /&gt;
    description_attached: description /= Void&lt;br /&gt;
    not_description_is_empty: not description.is_empty&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The new logger service [[Event List Service#Event Items|event list item]] implements the functions that &amp;lt;eiffel&amp;gt;EVENT_LIST_ITEM&amp;lt;/eiffel&amp;gt; leaves deferred, namely &amp;lt;eiffel&amp;gt;description&amp;lt;/eiffel&amp;gt; and &amp;lt;eiffel&amp;gt;is_valid_data&amp;lt;/eiffel&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;description&amp;lt;/eiffel&amp;gt; is implemented as an attribute, with class invariants taken from &amp;lt;eiffel&amp;gt;EVENT_LIST_ITEM_I.description&amp;lt;/eiffel&amp;gt;'s postconditions. The description serves as the holder of a log message.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;is_valid_data&amp;lt;/eiffel&amp;gt; determines if any custom data is valid for the [[Event List Service#Event Items|event list item]], but since no custom data is used by the logger [[Event List Service#Event Items|event list item]], it is implemented to always return &amp;lt;eiffel&amp;gt;True&amp;lt;/eiffel&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Finally, &amp;lt;eiffel&amp;gt;LOGGER_EVENT_LIST_ITEM&amp;lt;/eiffel&amp;gt;'s creation routine &amp;lt;eiffel&amp;gt;make&amp;lt;/eiffel&amp;gt; is used to set, on the log [[Event List Service#Event Items|event list item]], the information passed through &amp;lt;eiffel&amp;gt;LOGGER_SERVICE_I.put_message_with_severity&amp;lt;/eiffel&amp;gt;. The &amp;lt;eiffel&amp;gt;category&amp;lt;/eiffel&amp;gt; and &amp;lt;eiffel&amp;gt;priority&amp;lt;/eiffel&amp;gt; features are members of &amp;lt;eiffel&amp;gt;EVENT_LIST_ITEM&amp;lt;/eiffel&amp;gt; and are effective implementations of the deferred parent declarations in &amp;lt;eiffel&amp;gt;EVENT_LIST_ITEM_I&amp;lt;/eiffel&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
With the log [[Event List Service#Event Items|event list item]] created it's time to finish up the implementation of the logger service.&lt;br /&gt;
&lt;br /&gt;
=== Implementing the Service Routines ===&lt;br /&gt;
The final stage in completing the implementation of the logger service is to implement the deferred features of &amp;lt;eiffel&amp;gt;LOGGER_SERVICE_S&amp;lt;/eiffel&amp;gt;. The logger service is quite compact and only two features require implementation: &amp;lt;eiffel&amp;gt;put_message_with_severity&amp;lt;/eiffel&amp;gt; and &amp;lt;eiffel&amp;gt;clear_log&amp;lt;/eiffel&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;put_message_with_severity&amp;lt;/eiffel&amp;gt; will basically take the information passed in, create an instance of &amp;lt;eiffel&amp;gt;LOGGER_EVENT_LIST_ITEM&amp;lt;/eiffel&amp;gt;, and push it to the [[Event List Service]], if that service is available.&lt;br /&gt;
&lt;br /&gt;
{{Warning|When working with services it is important to remember that they might not be available for one reason or another, even if they are available when you are debugging. There are multiple reasons for their absence, one being a service did not make the final release because of bugs and time constraints.}}&lt;br /&gt;
&lt;br /&gt;
In order to push and remove [[Event List Service#Event Items|event list items]] to and from the [[Event List Service]], the [[Event List Service]] interface requires a &amp;quot;[[Event List Service#Context Cookie|context cookie]]&amp;quot;. A context cookie allows the [[Event List Service]] to track where [[Event List Service#Event Items|event list items]] were added from. Using a [[Event List Service#Context Cookie|context cookie]] enables tools and services to remove all [[Event List Service#Event Items|event list items]] added by that tool or service in a single step. The benefit of this, apart from simplicity, is that the tool or service does not have to manually track what it pushed to the [[Event List Service]] for later removal.&lt;br /&gt;
&lt;br /&gt;
Adding the following code to &amp;lt;eiffel&amp;gt;LOGGER_SERVICE&amp;lt;/eiffel&amp;gt; will support &amp;lt;eiffel&amp;gt;put_message_with_severity&amp;lt;/eiffel&amp;gt; and &amp;lt;eiffel&amp;gt;clear_log&amp;lt;/eiffel&amp;gt; in their endeavors:&lt;br /&gt;
 &lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
feature {NONE} -- Access&lt;br /&gt;
&lt;br /&gt;
    context_cookie: UUID&lt;br /&gt;
            -- Context cookie for event list service&lt;br /&gt;
        once&lt;br /&gt;
            create Result.make_from_string (&amp;quot;E1FFE100-0106-4145-A53F-ED44CE92714D&amp;quot;)&lt;br /&gt;
        end&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;put_message_with_severity&amp;lt;/eiffel&amp;gt; is not done yet. The logger service exposes events, one of which should be published to any subscribers when a message is logged, so &amp;lt;eiffel&amp;gt;put_message_with_severity&amp;lt;/eiffel&amp;gt; also needs to publish that event.&lt;br /&gt;
&lt;br /&gt;
The full implementation of &amp;lt;eiffel&amp;gt;put_message_with_severity&amp;lt;/eiffel&amp;gt; looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
feature -- Extension&lt;br /&gt;
&lt;br /&gt;
    put_message_with_severity (a_msg: STRING_32; a_cat: NATURAL_8; a_level: INTEGER_8)&lt;br /&gt;
            -- Logs a message specifying a severity level.&lt;br /&gt;
            --&lt;br /&gt;
            -- `a_msg': Message text to log.&lt;br /&gt;
            -- `a_cat': An optional message category.&lt;br /&gt;
            -- `a_level': A severity level for the message.&lt;br /&gt;
        local&lt;br /&gt;
            l_item: like create_event_list_log_item&lt;br /&gt;
        do&lt;br /&gt;
            if is_event_list_service_available then&lt;br /&gt;
                l_item := create_event_list_log_item (a_msg, a_cat, a_level)&lt;br /&gt;
                event_list_service.put_event_item (context_cookie, l_item)&lt;br /&gt;
            end&lt;br /&gt;
&lt;br /&gt;
                -- Publish events&lt;br /&gt;
            message_logged_events.publish ([Current, a_msg, a_cat, a_level])&lt;br /&gt;
        end&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To support better design, &amp;lt;eiffel&amp;gt;put_message_with_severity&amp;lt;/eiffel&amp;gt; should not actually create an instance of &amp;lt;eiffel&amp;gt;LOGGER_EVENT_LIST_ITEM&amp;lt;/eiffel&amp;gt; directly. It is possible for another party to take and extend the logger service, creating even more specialized log [[Event List Service#Event Items|event list items]], in which case the descendant logger service should not have to reimplement &amp;lt;eiffel&amp;gt;put_message_with_severity&amp;lt;/eiffel&amp;gt; just to change the type of log [[Event List Service#Event Items|event list item]] pushed to the [[Event List Service]]. To facilitate this, a factory function is used to create the log [[Event List Service#Event Items|event list items]], allowing service extenders to create specialized ones.&lt;br /&gt;
&lt;br /&gt;
The factory function used to create the log [[Event List Service#Event Items|event list item]]:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
feature {NONE} -- Factory&lt;br /&gt;
&lt;br /&gt;
    create_event_list_log_item (a_msg: STRING_32; a_cat: NATURAL_8;&lt;br /&gt;
        a_level: INTEGER_8): LOGGER_EVENT_LIST_ITEM_I&lt;br /&gt;
            -- Creates a new event list item for a log message.&lt;br /&gt;
            --&lt;br /&gt;
            -- `a_msg': Message text to log.&lt;br /&gt;
            -- `a_cat': A message category, see {ENVIRONMENT_CATEGORIES}.&lt;br /&gt;
            -- `a_level': A severity level for the message, see {PRIORITY_LEVELS}.&lt;br /&gt;
            -- `Result': An event list service item.&lt;br /&gt;
        require&lt;br /&gt;
            a_msg_attached: a_msg /= Void&lt;br /&gt;
            not_a_msg_is_empty: not a_msg.is_empty&lt;br /&gt;
            a_cat_is_empty_is_valid_category: is_valid_category (a_cat)&lt;br /&gt;
            a_level_is_valid_severity_level: is_valid_severity_level (a_level)&lt;br /&gt;
        do&lt;br /&gt;
            create {LOGGER_EVENT_LIST_ITEM} Result.make (a_cat, a_msg, a_level)&lt;br /&gt;
        ensure&lt;br /&gt;
            result_attached: Result /= Void&lt;br /&gt;
            result_is_log_item: Result.type = {EVENT_LIST_ITEM_TYPES}.log&lt;br /&gt;
        end&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The last part needed to complete the logger service is implementing &amp;lt;eiffel&amp;gt;clear_log&amp;lt;/eiffel&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;clear_log&amp;lt;/eiffel&amp;gt; contains fairly rudimentary logic, thanks to the [[Event List Service]]. In a single line of code the logger service can remove all pushed log [[Event List Service#Event Items|event list items]] from the [[Event List Service]]:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
event_list_service.prune_event_items (context_cookie)&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In addition to removing all the pushed log [[Event List Service#Event Items|event list items]], &amp;lt;eiffel&amp;gt;clear_log&amp;lt;/eiffel&amp;gt; must also publish the event &amp;lt;eiffel&amp;gt;cleared_events&amp;lt;/eiffel&amp;gt; to notify all subscribers of the purge.&lt;br /&gt;
&lt;br /&gt;
The full implementation of &amp;lt;eiffel&amp;gt;clear_log&amp;lt;/eiffel&amp;gt; is as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
feature -- Removal&lt;br /&gt;
&lt;br /&gt;
    clear_log&lt;br /&gt;
            -- Clear any cached log data&lt;br /&gt;
        do&lt;br /&gt;
            if is_event_list_service_available then&lt;br /&gt;
                event_list_service.prune_event_items (context_cookie)&lt;br /&gt;
            end&lt;br /&gt;
            &lt;br /&gt;
                -- Publish events&lt;br /&gt;
            cleared_events.publish ([Current])&lt;br /&gt;
        end&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That's it! Having come this far, you have created all of the interfaces and implementation needed to use the service. All that's left to do now is to proffer the service so other tools and services can make use of the logger service.&lt;br /&gt;
&lt;br /&gt;
== Proffering a Service ==&lt;br /&gt;
To permit access to the logger service you need to register the service with a [[Service#Service Containers|service container]]. Most [[Service|services]] in [[EiffelStudio]] are registered in &amp;lt;eiffel&amp;gt;ES_ABSTRACT_GRAPHIC.add_core_services&amp;lt;/eiffel&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In &amp;lt;eiffel&amp;gt;ES_ABSTRACT_GRAPHIC.add_core_services&amp;lt;/eiffel&amp;gt; add the following line of code to register the logger service.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
a_container.add_service_with_activator ({LOGGER_SERVICE_S},&lt;br /&gt;
    agent create_logger_service, False)&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The above line of code registers the logger service with an &amp;quot;[[Services#Delayed Activation|activator]]&amp;quot;, which ensures the service is only created when it is first requested. This avoids degrading start-up performance and does not use up the memory required to instantiate and run the logger service. The saving may seem small in the scope of a single, tiny service, but every little bit counts, and the savings grow with every service registered.&lt;br /&gt;
&lt;br /&gt;
Services are registered using their service interface type. One reason the &amp;lt;eiffel&amp;gt;_S&amp;lt;/eiffel&amp;gt; suffix is used is identification: when querying for a service you know instinctively to look for a class name ending in &amp;lt;eiffel&amp;gt;_S&amp;lt;/eiffel&amp;gt;, which is assignable to an Eiffel variable of the same type.&lt;br /&gt;
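&lt;br /&gt;
As a client-side sketch (assuming, per the consumer helper class created earlier, that &amp;lt;eiffel&amp;gt;logger_service&amp;lt;/eiffel&amp;gt; yields a &amp;lt;eiffel&amp;gt;LOGGER_SERVICE_S&amp;lt;/eiffel&amp;gt; instance directly; the class name &amp;lt;eiffel&amp;gt;MY_TOOL&amp;lt;/eiffel&amp;gt; is hypothetical), a client inherits the consumer helper and guards for service availability before logging:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
class&lt;br /&gt;
    MY_TOOL&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
    LOGGER_SERVICE_CONSUMER&lt;br /&gt;
&lt;br /&gt;
feature -- Basic operations&lt;br /&gt;
&lt;br /&gt;
    report (a_msg: STRING_32; a_cat: NATURAL_8)&lt;br /&gt;
            -- Log `a_msg', if the logger service is available.&lt;br /&gt;
        do&lt;br /&gt;
                -- The service may be absent; never assume availability.&lt;br /&gt;
            if is_logger_service_available then&lt;br /&gt;
                logger_service.put_message (a_msg, a_cat)&lt;br /&gt;
            end&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;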
&lt;br /&gt;
To actually create the logger service, a factory function is used, which has the added advantage of allowing descendants to redefine the default service returned (or to return no service at all).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
feature {NONE} -- Service factories&lt;br /&gt;
&lt;br /&gt;
    create_logger_service: LOGGER_SERVICE_S&lt;br /&gt;
            -- Creates the logger service&lt;br /&gt;
        do&lt;br /&gt;
            create {LOGGER_SERVICE} Result.make&lt;br /&gt;
        end&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Next Steps ==&lt;br /&gt;
With all the code in place, the logger service is ready to be interacted with. In the [[Tutorial: Consuming a Service|next tutorial]] on [[Services|services]] we will put the logger service to use in existing EiffelStudio functionality.&lt;br /&gt;
&lt;br /&gt;
In the [[Tutorial: Creating a Service-Based Tool|final tutorial]] we'll explore using [[EiffelStudio Foundations]] to create a dockable Eiffel tool to display the logged messages.&lt;br /&gt;
&lt;br /&gt;
== SVN Patch ==&lt;br /&gt;
For the full tutorial code, please grab an SVN patch from [[here]].&lt;/div&gt;</summary>
		<author><name>Arnofiva</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=Testing_Tool_(Specification)&amp;diff=11212</id>
		<title>Testing Tool (Specification)</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=Testing_Tool_(Specification)&amp;diff=11212"/>
				<updated>2008-06-13T20:44:58Z</updated>
		
		<summary type="html">&lt;p&gt;Arnofiva: Comment on tag syntax&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Testing]]&lt;br /&gt;
&lt;br /&gt;
{{UnderConstruction}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Main functionalities ==&lt;br /&gt;
&lt;br /&gt;
=== Add unit/system level tests ===&lt;br /&gt;
&lt;br /&gt;
Semantically there is no difference between unit tests and system level tests. This way all tests can be written in Eiffel in a conforming way.&lt;br /&gt;
&lt;br /&gt;
A test is a routine whose name has the prefix '''test''', in a class inheriting from '''TEST_SET'''. In general, features in classes specifically used for testing should be exported at most to {TESTING_CLASS}. This is to prevent testing code from remaining in a finalized system. If you write a helper class for your test routines, let it inherit from '''TESTING_CLASS''' (Note: '''TEST_SET''' already inherits from '''TESTING_CLASS'''). Additionally, you should make leaf test sets frozen and make sure you never directly reference testing classes in your project code.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== System level test specifics ====&lt;br /&gt;
&lt;br /&gt;
Since system-level testing often relies on external items like files, '''SYSTEM_LEVEL_TEST_SET''' provides a number of helper routines for accessing them. These classes will probably have to be in a special testing library, since they also make use of other libraries such as the process library.&lt;br /&gt;
&lt;br /&gt;
==== Config file ====&lt;br /&gt;
&lt;br /&gt;
For each target in a configuration file you may define a testing folder in which test classes will be created. Other files needed for testing can be put there as well. It could be included as a special type of cluster, so classes in that folder will be compiled.&lt;br /&gt;
&lt;br /&gt;
Proposal: '''/&amp;quot;location_of_ecf_file&amp;quot;/testing/&amp;quot;target_name&amp;quot;'''&lt;br /&gt;
&lt;br /&gt;
{{Note| Some cases require a special testing folder when automatically creating new test cases (e.g. in a writable library, since the tests might use classes which are not visible from the library). The test executor could use that folder to place log files. Also, system-level tests rely on a location for files (such as text files containing the expected output).}}&lt;br /&gt;
&lt;br /&gt;
==== Additional information ====&lt;br /&gt;
&lt;br /&gt;
The indexing clause can be used to specify which classes and routines are tested by the test routine. Any specifications in the class indexing clause will apply to all tests in that class. Note '''testing_covers''' in the following examples.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Examples ====&lt;br /&gt;
&lt;br /&gt;
Example unit tests '''test_append''' and '''test_boolean'''&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
frozen class TEST_STRING&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    TEST_SET&lt;br /&gt;
        redefine&lt;br /&gt;
            set_up&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Initialization&lt;br /&gt;
&lt;br /&gt;
    set_up&lt;br /&gt;
        do&lt;br /&gt;
            create s.make (10)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Access&lt;br /&gt;
&lt;br /&gt;
    s: STRING&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_append&lt;br /&gt;
        indexing&lt;br /&gt;
            testing: &amp;quot;covers.STRING.append, platform.os.winxp&amp;quot;&lt;br /&gt;
        require&lt;br /&gt;
            set_up: s /= Void and then s.is_empty&lt;br /&gt;
        do&lt;br /&gt;
            s.append (&amp;quot;12345&amp;quot;)&lt;br /&gt;
            assert_string_equality (&amp;quot;append&amp;quot;, s, &amp;quot;12345&amp;quot;)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
    test_boolean&lt;br /&gt;
        indexing&lt;br /&gt;
            testing: &amp;quot;covers.STRING.is_boolean, covers.STRING.to_boolean&amp;quot;&lt;br /&gt;
        require&lt;br /&gt;
            set_up: s /= Void and then s.is_empty&lt;br /&gt;
        do&lt;br /&gt;
            s.append (&amp;quot;True&amp;quot;)&lt;br /&gt;
            assert_true (&amp;quot;boolean&amp;quot;, s.is_boolean and then s.to_boolean)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example system level test '''test_version''' (Note: '''SYSTEM_LEVEL_TEST_SET''' inherits from '''TEST_SET''' and provides basic functionality for executing external commands, including the system currently under development):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
indexing&lt;br /&gt;
    testing_covers: &amp;quot;all&amp;quot;&lt;br /&gt;
&lt;br /&gt;
frozen class TEST_MY_APP&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    SYSTEM_LEVEL_TEST_SET&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_version&lt;br /&gt;
        indexing&lt;br /&gt;
            testing: &amp;quot;platform.os.linux.i386&amp;quot;&lt;br /&gt;
        do&lt;br /&gt;
            run_system_with_args (&amp;quot;--version&amp;quot;)&lt;br /&gt;
            assert_string_equality (&amp;quot;version&amp;quot;, last_output, &amp;quot;my_app version 0.1&amp;quot;)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Manage and run test suite ===&lt;br /&gt;
&lt;br /&gt;
[[Image:testing_set-view.png|right|400px|thumb|Standard view listing existing test sets and the tests they contain]]&lt;br /&gt;
[[Image:testing_cut-view.png|right|400px|thumb|Predefined ''Class tested'' view listing classes/features of the system together with the associated tests (Note: since test_boolean is tagged to cover multiple features, it also appears multiple times in the view)]]&lt;br /&gt;
[[Image:testing_user-view.png|right|400px|thumb|User defined view (by simply typing part of the tag), where the tool creates a view based on how the tests are tagged (see Examples above)]]&lt;br /&gt;
&lt;br /&gt;
The tool should have its own icon for displaying test cases (test routines); in this example it is a Lego block. Especially for views like ''list all tests for this routine'', it is important to distinguish the actual routine from its tests. The tool also has a more vertical layout: since the number of tests is comparable to the number of classes in the system, it makes sense for the tools to share the same layout. It also leaves room for tabs at the bottom displaying further information, such as execution details (output, call stack, etc.).&lt;br /&gt;
&lt;br /&gt;
The '''menu bar''' includes the following buttons:&lt;br /&gt;
* Create a new manual test case (opens a wizard)&lt;br /&gt;
** if a test class is dropped on the button, the wizard will suggest creating a new test in that class&lt;br /&gt;
** if a normal class (or feature) is dropped on the button, the wizard will suggest creating a test for that class (or feature)&lt;br /&gt;
* Menu for generating a new test (defaults to the last chosen one?)&lt;br /&gt;
** if a normal class/feature is dropped on the button, generate tests for that class/feature&lt;br /&gt;
&lt;br /&gt;
* Menu for executing tests in the background (defaults to the last chosen one?)&lt;br /&gt;
** if any class/feature is dropped on the button, run the tests associated with that class/feature&lt;br /&gt;
* Run a test in the debugger (a test must be selected or dropped on the button to start)&lt;br /&gt;
* Stop any execution (background or debugger)&lt;br /&gt;
&lt;br /&gt;
* Open the settings dialog for testing&lt;br /&gt;
&lt;br /&gt;
* Status indicating how many tests have been run so far and how many of them are failing&lt;br /&gt;
&lt;br /&gt;
'''View''' defines how the test cases are listed (see below).&lt;br /&gt;
&lt;br /&gt;
'''Filter''' can be used to type keywords so that only test cases whose tags include those keywords are shown (see below). It is a drop-down, so predefined filter patterns (such as ''outcome.fail'') can be selected.&lt;br /&gt;
&lt;br /&gt;
The '''grid''' contains a tree view of all test cases (test cases are always leaves). Multiple columns provide further information. Currently there are two indications of whether a test fails (a column and icons); only one is needed, both are shown here just to compare them. The advantage of using icons is that less space is needed. Coloring the background of a row containing a failing test case would be an option as well.&lt;br /&gt;
&lt;br /&gt;
==== Tags ====&lt;br /&gt;
&lt;br /&gt;
Each test can have a number of tags. A tag can be a single string or hierarchically structured with dots ('.'). For example, a test with the tag ''covers.STRING.append'' is a regression test for {STRING}.append. There are a number of implicit tags for each test, such as the ''name'' tag ({TEST_STRING}.test_append has the implicit tag ''name.TEST_STRING.test_append'').&lt;br /&gt;
&lt;br /&gt;
{{Note|Tags are defined as strings, but in the view we sometimes want a tag to represent a class or a feature. The way this is done right now is that the view simply knows that a tag starting with &amp;quot;covers.&amp;quot; is followed by a class and a feature name. Another approach would be to define such tags like this: &amp;quot;covers.{CLASS_NAME}.feature_name&amp;quot;. This would allow user-defined tags to have clickable nodes in the view. We could also introduce other special tags, such as dates/times.}}&lt;br /&gt;
&lt;br /&gt;
==== Different views ====&lt;br /&gt;
&lt;br /&gt;
Based on the notion of tags, we are able to define different views. The default view ''Test sets'' simply shows a hierarchical tree for every ''name.X'' tag. This enables us to define further views, such as ''Class tested'', which displays every ''covers.X'' tag. Note that with tags other than ''name.'', some tests might get listed multiple times, while tests not carrying such a tag must be listed explicitly. The main advantage is that users can define their own views based on any type of tag.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Note|The tools should support multiple selection. This is important for executing a number of selected test routines, showing past execution results, etc. Also, when selecting e.g. a class node, the tool should execute all leaves below that node.}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Running tests ====&lt;br /&gt;
[[Image:testing_run-menu.jpg]]&lt;br /&gt;
&lt;br /&gt;
The '''run''' menu provides different options for running tests in the background:&lt;br /&gt;
&lt;br /&gt;
* Run all tests in system&lt;br /&gt;
* Run currently failing ones&lt;br /&gt;
* Run tests for the classes last modified (better description needed here)&lt;br /&gt;
* Only run tests shown below&lt;br /&gt;
* Only run tests which are selected below&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Note|We should have two different views for displaying testing history. One structured by test sessions (list of test execution containing all test routines for each session) and one listing recent executions for a single test routine.}}&lt;br /&gt;
&lt;br /&gt;
=== Generate tests automatically ===&lt;br /&gt;
&lt;br /&gt;
[[Image:testing_generate-menu.jpg]]&lt;br /&gt;
&lt;br /&gt;
The '''generate''' menu lets you generate new tests for all classes in the system (randomly picked?) or for the classes which were last modified.&lt;br /&gt;
&lt;br /&gt;
=== Extract tests from a running application ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is a simple example of an extracted test case (note that '''EXTRACTED_TEST_SET''' inherits from '''TEST_SET''' and implements all functionality for executing an extracted test).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
class&lt;br /&gt;
    TEST_STRING_001&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    EXTRACTED_TEST_SET&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Initialization&lt;br /&gt;
&lt;br /&gt;
    set_up_routine is&lt;br /&gt;
            -- &amp;lt;Precursor&amp;gt;&lt;br /&gt;
        do&lt;br /&gt;
            routine_under_test := agent {STRING}.append_integer&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_append_integer is&lt;br /&gt;
            -- Call `routine_under_test' with input provided by `context'.&lt;br /&gt;
        indexing&lt;br /&gt;
            tag: &amp;quot;covers.STRING.append_integer&amp;quot;&lt;br /&gt;
        do&lt;br /&gt;
            call_routine_under_test&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Access&lt;br /&gt;
&lt;br /&gt;
    context: ARRAY [TUPLE [id: STRING; type: STRING; inv: BOOLEAN; attributes: ARRAY [STRING]]] is&lt;br /&gt;
            -- &amp;lt;Precursor&amp;gt;&lt;br /&gt;
        once&lt;br /&gt;
            Result := &amp;lt;&amp;lt;&lt;br /&gt;
                [&amp;quot;#operand&amp;quot;, &amp;quot;TUPLE [STRING, INTEGER]&amp;quot;, True, &amp;lt;&amp;lt; &amp;quot;#2&amp;quot;, &amp;quot;110&amp;quot; &amp;gt;&amp;gt;],&lt;br /&gt;
                [&amp;quot;#2&amp;quot;, &amp;quot;STRING&amp;quot;, True, &amp;lt;&amp;lt; &amp;quot;this is an integer: &amp;quot; &amp;gt;&amp;gt;]&lt;br /&gt;
            &amp;gt;&amp;gt;&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end -- class TEST_STRING_001&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''EXTRACTED_TEST_SET''' implements '''set_up''' (frozen), but has a deferred feature '''set_up_routine''' which assigns the proper agent to '''routine_under_test'''. This basically replaces the missing reflection functionality for calling features. '''context''' is also deferred in '''EXTRACTED_TEST_SET''' and contains all data from the heap and call stack which was reachable by the routine at extraction time. Each TUPLE represents an object, where `inv' defines whether the object has to fulfill its invariant (if the object was on the stack at extraction time, this does not have to be the case).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This design lets us have the entire test in one file. This is practical especially in the situation where a user should submit such a test as a bug report after experiencing a crash.&lt;br /&gt;
The current drawback is that the design only allows one test per class. The reason is mainly the set_up procedure: creating all objects in '''context''' must be done during set_up. If that fails, set_up will be blamed instead of the actual test routine, which makes the test invalid rather than failed. This can happen e.g. if one of the objects in the context does not fulfill its invariant, which in turn could result from simply editing the class under test. Any suggestions welcome!&lt;br /&gt;
&lt;br /&gt;
=== Background test execution ===&lt;br /&gt;
== Open questions ==&lt;br /&gt;
(This section should disappear as the questions get answered.)&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
* [[Testing Tool (Architecture)]]&lt;br /&gt;
* [[Eweasel]]&lt;br /&gt;
* [[CddBranch]]&lt;br /&gt;
* [[Eiffel Testing Tool]]&lt;/div&gt;</summary>
		<author><name>Arnofiva</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=Testing_Tool_(Architecture)&amp;diff=11211</id>
		<title>Testing Tool (Architecture)</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=Testing_Tool_(Architecture)&amp;diff=11211"/>
				<updated>2008-06-12T23:57:59Z</updated>
		
		<summary type="html">&lt;p&gt;Arnofiva: Moved missplaced title&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Testing]]&lt;br /&gt;
&lt;br /&gt;
== Test system architecture ==&lt;br /&gt;
&lt;br /&gt;
* separate system for executing tests&lt;br /&gt;
* the system is only responsible for running tests, without determining whether a test fails or not -&amp;gt; compiling and launching the system is done by the tool, which also acts as the oracle&lt;br /&gt;
* tests are compiled up to degree 3 in the development system, so the user can edit tests like normal classes, but they do not affect the resulting binary&lt;br /&gt;
* tests referenced by the root class of the development system will be compiled anyway; this way we can also run tests in the debugger&lt;br /&gt;
* in CDD: the test system is an implicit target (it does not have to be in the ecf) which inherits from the development target; the test target simply defines a new root class/feature&lt;br /&gt;
* re-use the development EIFGEN (a copy) so the test system does not need to compile from scratch?&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
EiffelStudio  &amp;lt;-----------------------------&amp;gt;  Testing tool  &amp;lt;-----------------------------&amp;gt;  Test executor&lt;br /&gt;
&lt;br /&gt;
* Show tests in system                         * can be part of EiffelStudio or               * execute test in safe environment&lt;br /&gt;
* Show test result                               compiled as separate tool to be                (executor is allowed to crash)&lt;br /&gt;
* Provide test creation wizards                  used e.g. through console&lt;br /&gt;
* Interface for CDD, Auto tests,               * compile test executor&lt;br /&gt;
  creating manual tests, running               * distribute test executors to&lt;br /&gt;
* provide ESF service for                        different machines&lt;br /&gt;
  testing/test results                         * schedule test execution&lt;br /&gt;
                                               * provide test results&lt;br /&gt;
                                               * find all tests for a given ecf file&lt;br /&gt;
                                               * write root class for test executor&lt;br /&gt;
&lt;br /&gt;
CDD&lt;br /&gt;
* implemented partially in&lt;br /&gt;
  debugger/executable&lt;br /&gt;
* should be part of any Eiffel&lt;br /&gt;
  application, that way test can be&lt;br /&gt;
  created for bug submitting&lt;br /&gt;
* extraction can be initiated&lt;br /&gt;
  through debugger, breakpoints,&lt;br /&gt;
  failure window, etc.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Auto Test&lt;br /&gt;
* separate tool, interface in EiffelStudio&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Communication between tool and test executor ==&lt;br /&gt;
&lt;br /&gt;
=== Protocol ===&lt;br /&gt;
&lt;br /&gt;
'''From tool to executor'''&lt;br /&gt;
&lt;br /&gt;
* name(s) of test to execute&lt;br /&gt;
* quit&lt;br /&gt;
&lt;br /&gt;
'''From executor to tool'''&lt;br /&gt;
&lt;br /&gt;
* test result&lt;br /&gt;
* text output produced by test&lt;br /&gt;
* exception details (type, tag, feature, class? occurred during set up, test, tear down?)&lt;br /&gt;
* call stack for exception&lt;br /&gt;
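&lt;br /&gt;
Assuming a text-based protocol (purely illustrative, since text- versus object-based communication is listed as an open question below), an exchange could look like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
tool     -&amp;gt; executor:  TEST name.TEST_STRING.test_append&lt;br /&gt;
executor -&amp;gt; tool:      RESULT pass&lt;br /&gt;
executor -&amp;gt; tool:      OUTPUT (text produced by the test)&lt;br /&gt;
tool     -&amp;gt; executor:  QUIT&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;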
&lt;br /&gt;
=== Open questions ===&lt;br /&gt;
&lt;br /&gt;
* executor per machine/processor?&lt;br /&gt;
* text-based or object-based communication?&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
* [[Testing Tool (Specification)]]&lt;/div&gt;</summary>
		<author><name>Arnofiva</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=Testing_Tool_(Specification)&amp;diff=11210</id>
		<title>Testing Tool (Specification)</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=Testing_Tool_(Specification)&amp;diff=11210"/>
				<updated>2008-06-12T23:13:10Z</updated>
		
		<summary type="html">&lt;p&gt;Arnofiva: Comment on covers view&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Testing]]&lt;br /&gt;
&lt;br /&gt;
{{UnderConstruction}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Main functionalities ==&lt;br /&gt;
&lt;br /&gt;
=== Add unit/system level tests ===&lt;br /&gt;
&lt;br /&gt;
Semantically there is no difference between unit tests and system level tests. This way all tests can be written in Eiffel in a conforming way.&lt;br /&gt;
&lt;br /&gt;
A test is a routine whose name has the prefix '''test''', in a class inheriting from '''TEST_SET'''. In general, features in classes used specifically for testing should be exported at most to {TESTING_CLASS}; this prevents testing code from remaining in a finalized system. If you write a helper class for your test routines, let it inherit from '''TESTING_CLASS''' (Note: '''TEST_SET''' already inherits from '''TESTING_CLASS'''). Additionally, you should make leaf test sets frozen and make sure you never directly reference testing classes in your project code.&lt;br /&gt;
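&lt;br /&gt;
A minimal sketch of such a helper class (class and feature names are invented for illustration):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
class TEST_DATA_HELPER&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    TESTING_CLASS&lt;br /&gt;
&lt;br /&gt;
feature {TESTING_CLASS} -- Access&lt;br /&gt;
&lt;br /&gt;
    sample_item: STRING&lt;br /&gt;
            -- Sample input shared by several test routines.&lt;br /&gt;
        do&lt;br /&gt;
            Result := &amp;quot;12345&amp;quot;&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;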
&lt;br /&gt;
&lt;br /&gt;
==== System level test specifics ====&lt;br /&gt;
&lt;br /&gt;
Since system level testing often relies on external items like files, '''SYSTEM_LEVEL_TEST_SET''' provides a number of helper routines for accessing them. These classes will probably have to be in a special testing library, since they also make use of other libraries such as the process library.&lt;br /&gt;
&lt;br /&gt;
==== Config file ====&lt;br /&gt;
&lt;br /&gt;
For each target in a configuration file you may define a testing folder in which test classes will be created. Other files needed for testing can be put there as well. The folder could be included as a special type of cluster, so that the classes in it are compiled.&lt;br /&gt;
&lt;br /&gt;
Proposal: '''/&amp;quot;location_of_ecf_file&amp;quot;/testing/&amp;quot;target_name&amp;quot;'''&lt;br /&gt;
&lt;br /&gt;
{{Note| Some cases require a special testing folder when new test cases are created automatically (e.g. in a writable library, since the tests might use classes which are not visible from the library). The test executor could use that folder to place log files. System level tests also rely on a fixed location for files (such as text files containing the expected output).}}&lt;br /&gt;
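&lt;br /&gt;
As a sketch, the proposed testing folder could be attached to a target as an ordinary cluster along these lines (the names are made up for illustration, and whether a dedicated cluster type is introduced is still open):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;cluster name=&amp;quot;testing&amp;quot; location=&amp;quot;.\testing\my_target\&amp;quot; recursive=&amp;quot;true&amp;quot;/&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;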
&lt;br /&gt;
==== Additional information ====&lt;br /&gt;
&lt;br /&gt;
The indexing clause can be used to specify which classes and routines are tested by the test routine. Any specifications in the class indexing clause will apply to all tests in that class. Note '''testing_covers''' in the following examples.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Examples ====&lt;br /&gt;
&lt;br /&gt;
Example unit tests '''test_append''' and '''test_boolean'''&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
frozen class TEST_STRING&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    TEST_SET&lt;br /&gt;
        redefine&lt;br /&gt;
            set_up&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Initialization&lt;br /&gt;
&lt;br /&gt;
    set_up&lt;br /&gt;
        do&lt;br /&gt;
            create s.make (10)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Access&lt;br /&gt;
&lt;br /&gt;
    s: STRING&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_append&lt;br /&gt;
        indexing&lt;br /&gt;
            testing: &amp;quot;covers.STRING.append, platform.os.winxp&amp;quot;&lt;br /&gt;
        require&lt;br /&gt;
            set_up: s /= Void and then s.is_empty&lt;br /&gt;
        do&lt;br /&gt;
            s.append (&amp;quot;12345&amp;quot;)&lt;br /&gt;
            assert_string_equality (&amp;quot;append&amp;quot;, s, &amp;quot;12345&amp;quot;)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
    test_boolean&lt;br /&gt;
        indexing&lt;br /&gt;
            testing: &amp;quot;covers.STRING.is_boolean, covers.STRING.to_boolean&amp;quot;&lt;br /&gt;
        require&lt;br /&gt;
            set_up: s /= Void and then s.is_empty&lt;br /&gt;
        do&lt;br /&gt;
            s.append (&amp;quot;True&amp;quot;)&lt;br /&gt;
            assert_true (&amp;quot;boolean&amp;quot;, s.is_boolean and then s.to_boolean)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example system level test '''test_version''' (Note: '''SYSTEM_LEVEL_TEST_SET''' inherits from '''TEST_SET''' and provides basic functionality for executing external commands, including the system currently under development):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
indexing&lt;br /&gt;
    testing_covers: &amp;quot;all&amp;quot;&lt;br /&gt;
&lt;br /&gt;
frozen class TEST_MY_APP&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    SYSTEM_LEVEL_TEST_SET&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_version&lt;br /&gt;
        indexing&lt;br /&gt;
            testing: &amp;quot;platform.os.linux.i386&amp;quot;&lt;br /&gt;
        do&lt;br /&gt;
            run_system_with_args (&amp;quot;--version&amp;quot;)&lt;br /&gt;
            assert_string_equality (&amp;quot;version&amp;quot;, last_output, &amp;quot;my_app version 0.1&amp;quot;)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Manage and run test suite ===&lt;br /&gt;
&lt;br /&gt;
[[Image:testing_set-view.png|right|400px|thumb|Standard view listing existing test sets and the tests they contain]]&lt;br /&gt;
[[Image:testing_cut-view.png|right|400px|thumb|Predefined ''Class tested'' view listing classes/features of the system together with the associated tests (Note: since test_boolean is tagged to cover multiple features, it also appears multiple times in the view)]]&lt;br /&gt;
[[Image:testing_user-view.png|right|400px|thumb|User defined view (by simply typing part of the tag), where the tool creates a view based on how the tests are tagged (see Examples above)]]&lt;br /&gt;
&lt;br /&gt;
The tool should have its own icon for displaying test cases (test routines); in this example it is a Lego block. Especially for views like ''list all tests for this routine'', it is important to distinguish the actual routine from its tests. The tool also has a more vertical layout: since the number of tests is comparable to the number of classes in the system, it makes sense for the tools to share the same layout. It also leaves room for tabs at the bottom displaying further information, such as execution details (output, call stack, etc.).&lt;br /&gt;
&lt;br /&gt;
The '''menu bar''' includes the following buttons:&lt;br /&gt;
* Create a new manual test case (opens a wizard)&lt;br /&gt;
** if a test class is dropped on the button, the wizard will suggest creating a new test in that class&lt;br /&gt;
** if a normal class (or feature) is dropped on the button, the wizard will suggest creating a test for that class (or feature)&lt;br /&gt;
* Menu for generating a new test (defaults to the last chosen one?)&lt;br /&gt;
** if a normal class/feature is dropped on the button, generate tests for that class/feature&lt;br /&gt;
&lt;br /&gt;
* Menu for executing tests in the background (defaults to the last chosen one?)&lt;br /&gt;
** if any class/feature is dropped on the button, run the tests associated with that class/feature&lt;br /&gt;
* Run a test in the debugger (a test must be selected or dropped on the button to start)&lt;br /&gt;
* Stop any execution (background or debugger)&lt;br /&gt;
&lt;br /&gt;
* Open the settings dialog for testing&lt;br /&gt;
&lt;br /&gt;
* Status indicating how many tests have been run so far and how many of them are failing&lt;br /&gt;
&lt;br /&gt;
'''View''' defines how the test cases are listed (see below).&lt;br /&gt;
&lt;br /&gt;
'''Filter''' can be used to type keywords so that only test cases whose tags include those keywords are shown (see below). It is a drop-down, so predefined filter patterns (such as ''outcome.fail'') can be selected.&lt;br /&gt;
&lt;br /&gt;
The '''grid''' contains a tree view of all test cases (test cases are always leaves). Multiple columns provide further information. Currently there are two indications of whether a test fails (a column and icons); only one is needed, both are shown here just to compare them. The advantage of using icons is that less space is needed. Coloring the background of a row containing a failing test case would be an option as well.&lt;br /&gt;
&lt;br /&gt;
==== Tags ====&lt;br /&gt;
&lt;br /&gt;
Each test can have a number of tags. A tag can be a single string or hierarchically structured with dots ('.'). For example, a test with the tag ''covers.STRING.append'' is a regression test for {STRING}.append. There are a number of implicit tags for each test, such as the ''name'' tag ({TEST_STRING}.test_append has the implicit tag ''name.TEST_STRING.test_append'').&lt;br /&gt;
&lt;br /&gt;
==== Different views ====&lt;br /&gt;
&lt;br /&gt;
Based on the notion of tags, we are able to define different views. The default view ''Test sets'' simply shows a hierarchical tree for every ''name.X'' tag. This enables us to define further views, such as ''Class tested'', which displays every ''covers.X'' tag. Note that with tags other than ''name.'', some tests might get listed multiple times, while tests not carrying such a tag must be listed explicitly. The main advantage is that users can define their own views based on any type of tag.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Note|The tools should support multiple selection. This is important for executing a number of selected test routines, showing past execution results, etc. Also, when selecting e.g. a class node, the tool should execute all leaves below that node.}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Running tests ====&lt;br /&gt;
[[Image:testing_run-menu.jpg]]&lt;br /&gt;
&lt;br /&gt;
The '''run''' menu provides different options for running tests in the background:&lt;br /&gt;
&lt;br /&gt;
* Run all tests in system&lt;br /&gt;
* Run currently failing ones&lt;br /&gt;
* Run tests for the classes last modified (better description needed here)&lt;br /&gt;
* Only run tests shown below&lt;br /&gt;
* Only run tests which are selected below&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Note|We should have two different views for displaying testing history. One structured by test sessions (list of test execution containing all test routines for each session) and one listing recent executions for a single test routine.}}&lt;br /&gt;
&lt;br /&gt;
=== Generate tests automatically ===&lt;br /&gt;
&lt;br /&gt;
[[Image:testing_generate-menu.jpg]]&lt;br /&gt;
&lt;br /&gt;
The '''generate''' menu lets you generate new tests for all classes in the system (randomly picked?) or for the classes which were last modified.&lt;br /&gt;
&lt;br /&gt;
=== Extract tests from a running application ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is a simple example of an extracted test case (note that '''EXTRACTED_TEST_SET''' inherits from '''TEST_SET''' and implements all functionality for executing an extracted test).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
class&lt;br /&gt;
    TEST_STRING_001&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    EXTRACTED_TEST_SET&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Initialization&lt;br /&gt;
&lt;br /&gt;
    set_up_routine is&lt;br /&gt;
            -- &amp;lt;Precursor&amp;gt;&lt;br /&gt;
        do&lt;br /&gt;
            routine_under_test := agent {STRING}.append_integer&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_append_integer is&lt;br /&gt;
            -- Call `routine_under_test' with input provided by `context'.&lt;br /&gt;
        indexing&lt;br /&gt;
            tag: &amp;quot;covers.STRING.append_integer&amp;quot;&lt;br /&gt;
        do&lt;br /&gt;
            call_routine_under_test&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Access&lt;br /&gt;
&lt;br /&gt;
    context: ARRAY [TUPLE [id: STRING; type: STRING; inv: BOOLEAN; attributes: ARRAY [STRING]]] is&lt;br /&gt;
            -- &amp;lt;Precursor&amp;gt;&lt;br /&gt;
        once&lt;br /&gt;
            Result := &amp;lt;&amp;lt;&lt;br /&gt;
                [&amp;quot;#operand&amp;quot;, &amp;quot;TUPLE [STRING, INTEGER]&amp;quot;, True, &amp;lt;&amp;lt; &amp;quot;#2&amp;quot;, &amp;quot;110&amp;quot; &amp;gt;&amp;gt;],&lt;br /&gt;
                [&amp;quot;#2&amp;quot;, &amp;quot;STRING&amp;quot;, True, &amp;lt;&amp;lt; &amp;quot;this is an integer: &amp;quot; &amp;gt;&amp;gt;]&lt;br /&gt;
            &amp;gt;&amp;gt;&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end -- class TEST_STRING_001&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''EXTRACTED_TEST_SET''' implements '''set_up''' (frozen), but has a deferred feature '''set_up_routine''' which assigns the proper agent to '''routine_under_test'''. This basically replaces the missing reflection functionality for calling features. '''context''' is also deferred in '''EXTRACTED_TEST_SET''' and contains all data from the heap and call stack which was reachable by the routine at extraction time. Each TUPLE represents an object, where `inv' defines whether the object has to fulfill its invariant (if the object was on the stack at extraction time, this does not have to be the case).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This design lets us have the entire test in one file. This is practical especially in the situation where a user should submit such a test as a bug report after experiencing a crash.&lt;br /&gt;
The current drawback is that the design only allows one test per class. The reason is mainly the set_up procedure: creating all objects in '''context''' must be done during set_up. If that fails, set_up will be blamed instead of the actual test routine, which makes the test invalid rather than failed. This can happen e.g. if one of the objects in the context does not fulfill its invariant, which in turn could result from simply editing the class under test. Any suggestions welcome!&lt;br /&gt;
&lt;br /&gt;
=== Background test execution ===&lt;br /&gt;
== Open questions ==&lt;br /&gt;
(This section should disappear as the questions get answered.)&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
* [[Testing Tool (Architecture)]]&lt;br /&gt;
* [[Eweasel]]&lt;br /&gt;
* [[CddBranch]]&lt;br /&gt;
* [[Eiffel Testing Tool]]&lt;/div&gt;</summary>
		<author><name>Arnofiva</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=Testing_Tool_(Architecture)&amp;diff=11209</id>
		<title>Testing Tool (Architecture)</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=Testing_Tool_(Architecture)&amp;diff=11209"/>
				<updated>2008-06-12T23:08:01Z</updated>
		
		<summary type="html">&lt;p&gt;Arnofiva: Scheme of different components needed for test creation/execution&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Testing]]&lt;br /&gt;
&lt;br /&gt;
== Test system architecture ==&lt;br /&gt;
&lt;br /&gt;
* separate system for executing tests&lt;br /&gt;
* the system is only responsible for running tests, without determining whether a test fails or not -&amp;gt; compiling and launching the system is done by the tool, which also acts as the oracle&lt;br /&gt;
* tests are compiled up to degree 3 in the development system, so the user can edit tests like normal classes, but they do not affect the resulting binary&lt;br /&gt;
* tests referenced by the root class of the development system will be compiled anyway; this way we can also run tests in the debugger&lt;br /&gt;
* in CDD: the test system is an implicit target (it does not have to be in the ecf) which inherits from the development target; the test target simply defines a new root class/feature&lt;br /&gt;
* re-use the development EIFGEN (a copy) so the test system does not need to compile from scratch?&lt;br /&gt;
&lt;br /&gt;
== Communication between tool and test executor ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
EiffelStudio  &amp;lt;-----------------------------&amp;gt;  Testing tool  &amp;lt;-----------------------------&amp;gt;  Test executor&lt;br /&gt;
&lt;br /&gt;
* Show tests in system                         * can be part of EiffelStudio or               * execute test in safe environment&lt;br /&gt;
* Show test result                               compiled as separate tool to be                (executor is allowed to crash)&lt;br /&gt;
* Provide test creation wizards                  used e.g. through console&lt;br /&gt;
* Interface for CDD, Auto tests,               * compile test executor&lt;br /&gt;
  creating manual tests, running               * distribute test executors to&lt;br /&gt;
* provide ESF service for                        different machines&lt;br /&gt;
  testing/test results                         * schedule test execution&lt;br /&gt;
                                               * provide test results&lt;br /&gt;
                                               * find all tests for a given ecf file&lt;br /&gt;
                                               * write root class for test executor&lt;br /&gt;
&lt;br /&gt;
CDD&lt;br /&gt;
* implemented partially in&lt;br /&gt;
  debugger/executable&lt;br /&gt;
* should be part of any Eiffel&lt;br /&gt;
  application, that way test can be&lt;br /&gt;
  created for bug submitting&lt;br /&gt;
* extraction can be initiated&lt;br /&gt;
  through debugger, breakpoints,&lt;br /&gt;
  failure window, etc.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Auto Test&lt;br /&gt;
* separate tool, interface in EiffelStudio&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Protocol ===&lt;br /&gt;
&lt;br /&gt;
'''From tool to executor'''&lt;br /&gt;
&lt;br /&gt;
* name(s) of test to execute&lt;br /&gt;
* quit&lt;br /&gt;
&lt;br /&gt;
'''From executor to tool'''&lt;br /&gt;
&lt;br /&gt;
* test result&lt;br /&gt;
* text output produced by test&lt;br /&gt;
* exception details (type, tag, feature, class? occurred during set up, test, tear down?)&lt;br /&gt;
* call stack for exception&lt;br /&gt;
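&lt;br /&gt;
As an illustrative sketch only, these messages could be serialized as simple tagged text lines; the class and feature names below ('''EXECUTOR_PROTOCOL''', '''run_message''') are hypothetical and not part of any existing library:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
class EXECUTOR_PROTOCOL&lt;br /&gt;
&lt;br /&gt;
feature -- Encoding (tool to executor)&lt;br /&gt;
&lt;br /&gt;
    run_message (a_test_name: STRING): STRING is&lt;br /&gt;
            -- Request asking the executor to run test `a_test_name'.&lt;br /&gt;
        do&lt;br /&gt;
            Result := &amp;quot;RUN &amp;quot; + a_test_name&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
    Quit_message: STRING is &amp;quot;QUIT&amp;quot;&lt;br /&gt;
            -- Request asking the executor to terminate.&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;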
&lt;br /&gt;
=== Open questions ===&lt;br /&gt;
&lt;br /&gt;
* executor per machine/processor?&lt;br /&gt;
* text-based/object-based communication?&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
* [[Testing Tool (Specification)]]&lt;/div&gt;</summary>
		<author><name>Arnofiva</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=Testing_Tool_(Architecture)&amp;diff=11208</id>
		<title>Testing Tool (Architecture)</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=Testing_Tool_(Architecture)&amp;diff=11208"/>
				<updated>2008-06-12T22:20:11Z</updated>
		
		<summary type="html">&lt;p&gt;Arnofiva: Added &amp;quot;See also&amp;quot; section&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Testing]]&lt;br /&gt;
&lt;br /&gt;
== Test system architecture ==&lt;br /&gt;
&lt;br /&gt;
* separate system for executing tests&lt;br /&gt;
* system is only responsible for running tests, without determining whether a test fails or not -&amp;gt; compiling and launching the system is done by the tool, which is also the oracle&lt;br /&gt;
* tests are compiled up to degree 3 in the development system, so the user can edit tests like normal classes, but they do not affect the resulting binary&lt;br /&gt;
* tests referenced by the root class of the development system will be compiled anyway; this way we can also run tests in the debugger&lt;br /&gt;
* in CDD: test system is implicit target (does not have to be in ecf) which inherits from development target, test target simply defines new root class/feature&lt;br /&gt;
* re-use development EIFGEN (copy) so test system does not need to compile from scratch?&lt;br /&gt;
&lt;br /&gt;
== Communication between tool and test executor ==&lt;br /&gt;
&lt;br /&gt;
=== Protocol ===&lt;br /&gt;
&lt;br /&gt;
'''From tool to executor'''&lt;br /&gt;
&lt;br /&gt;
* name(s) of test to execute&lt;br /&gt;
* quit&lt;br /&gt;
&lt;br /&gt;
'''From executor to tool'''&lt;br /&gt;
&lt;br /&gt;
* test result&lt;br /&gt;
* text output produced by test&lt;br /&gt;
* exception details (type, tag, feature, class? occurred during set up, test, tear down?)&lt;br /&gt;
* call stack for exception&lt;br /&gt;
&lt;br /&gt;
=== Open questions ===&lt;br /&gt;
&lt;br /&gt;
* executor per machine/processor?&lt;br /&gt;
* text-based/object-based communication?&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
* [[Testing Tool (Specification)]]&lt;/div&gt;</summary>
		<author><name>Arnofiva</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=Testing_Tool_(Architecture)&amp;diff=11207</id>
		<title>Testing Tool (Architecture)</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=Testing_Tool_(Architecture)&amp;diff=11207"/>
				<updated>2008-06-12T22:18:28Z</updated>
		
		<summary type="html">&lt;p&gt;Arnofiva: Initial description of tester system and communication&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Testing]]&lt;br /&gt;
&lt;br /&gt;
== Test system architecture ==&lt;br /&gt;
&lt;br /&gt;
* separate system for executing tests&lt;br /&gt;
* system is only responsible for running tests, without determining whether a test fails or not -&amp;gt; compiling and launching the system is done by the tool, which is also the oracle&lt;br /&gt;
* tests are compiled up to degree 3 in the development system, so the user can edit tests like normal classes, but they do not affect the resulting binary&lt;br /&gt;
* tests referenced by the root class of the development system will be compiled anyway; this way we can also run tests in the debugger&lt;br /&gt;
* in CDD: test system is implicit target (does not have to be in ecf) which inherits from development target, test target simply defines new root class/feature&lt;br /&gt;
* re-use development EIFGEN (copy) so test system does not need to compile from scratch?&lt;br /&gt;
&lt;br /&gt;
== Communication between tool and test executor ==&lt;br /&gt;
&lt;br /&gt;
=== Protocol ===&lt;br /&gt;
&lt;br /&gt;
'''From tool to executor'''&lt;br /&gt;
&lt;br /&gt;
* name(s) of test to execute&lt;br /&gt;
* quit&lt;br /&gt;
&lt;br /&gt;
'''From executor to tool'''&lt;br /&gt;
&lt;br /&gt;
* test result&lt;br /&gt;
* text output produced by test&lt;br /&gt;
* exception details (type, tag, feature, class? occurred during set up, test, tear down?)&lt;br /&gt;
* call stack for exception&lt;br /&gt;
&lt;br /&gt;
=== Open questions ===&lt;br /&gt;
&lt;br /&gt;
* executor per machine/processor?&lt;br /&gt;
* text-based/object-based communication?&lt;/div&gt;</summary>
		<author><name>Arnofiva</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=Testing_Tool_(Specification)&amp;diff=11206</id>
		<title>Testing Tool (Specification)</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=Testing_Tool_(Specification)&amp;diff=11206"/>
				<updated>2008-06-12T21:53:41Z</updated>
		
		<summary type="html">&lt;p&gt;Arnofiva: typo...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Testing]]&lt;br /&gt;
&lt;br /&gt;
{{UnderConstruction}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Main functionalities ==&lt;br /&gt;
&lt;br /&gt;
=== Add unit/system level tests ===&lt;br /&gt;
&lt;br /&gt;
Semantically there is no difference between unit tests and system level tests. This way all tests can be written in Eiffel in a conforming way.&lt;br /&gt;
&lt;br /&gt;
A test is a routine with the prefix '''test''' in a class inheriting from '''TEST_SET'''. In general, features in classes used specifically for testing should be exported at most to {TESTING_CLASS}. This prevents testing code from remaining in a finalized system. If you write a helper class for your test routines, let it inherit from '''TESTING_CLASS''' (Note: '''TEST_SET''' already inherits from '''TESTING_CLASS'''). Additionally, you should make leaf test sets frozen and make sure you never directly reference testing classes in your project code.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== System level test specifics ====&lt;br /&gt;
&lt;br /&gt;
Since system level testing often relies on external items like files, '''SYSTEM_LEVEL_TEST_SET''' provides a number of helper routines for accessing them. These classes will probably have to be in a special testing library, since they also make use of other libraries such as the process library.&lt;br /&gt;
&lt;br /&gt;
==== Config file ====&lt;br /&gt;
&lt;br /&gt;
For each target in a configuration file you may define a testing folder in which test classes will be created. Other files needed for testing can be put there as well. It could be included as a special type of cluster, so classes in that folder will be compiled.&lt;br /&gt;
&lt;br /&gt;
Proposal: '''/&amp;quot;location_of_ecf_file&amp;quot;/testing/&amp;quot;target_name&amp;quot;'''&lt;br /&gt;
&lt;br /&gt;
{{Note| Some cases require a special testing folder when automatically creating new test cases (e.g. in a writable library, since they might use classes which are not visible from the library). The test executor could use that folder to place log files. System level tests also rely on a location for files (such as text files containing the expected output).}}&lt;br /&gt;
&lt;br /&gt;
==== Additional information ====&lt;br /&gt;
&lt;br /&gt;
The indexing clause can be used to specify which classes and routines are tested by the test routine. Any specifications in the class indexing clause will apply to all tests in that class. Note '''testing_covers''' in the following examples.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Examples ====&lt;br /&gt;
&lt;br /&gt;
Example unit tests '''test_append''' and '''test_boolean'''&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
frozen class TEST_STRING&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    TEST_SET&lt;br /&gt;
        redefine&lt;br /&gt;
            set_up&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Initialization&lt;br /&gt;
&lt;br /&gt;
    set_up&lt;br /&gt;
        do&lt;br /&gt;
            create s.make (10)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Access&lt;br /&gt;
&lt;br /&gt;
    s: STRING&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_append&lt;br /&gt;
        indexing&lt;br /&gt;
            testing: &amp;quot;covers.STRING.append, platform.os.winxp&amp;quot;&lt;br /&gt;
        require&lt;br /&gt;
            set_up: s /= Void and then s.is_empty&lt;br /&gt;
        do&lt;br /&gt;
            s.append (&amp;quot;12345&amp;quot;)&lt;br /&gt;
            assert_string_equality (&amp;quot;append&amp;quot;, s, &amp;quot;12345&amp;quot;)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
    test_boolean&lt;br /&gt;
        indexing&lt;br /&gt;
            testing: &amp;quot;covers.STRING.is_boolean, covers.STRING.to_boolean&amp;quot;&lt;br /&gt;
        require&lt;br /&gt;
            set_up: s /= Void and then s.is_empty&lt;br /&gt;
        do&lt;br /&gt;
            s.append (&amp;quot;True&amp;quot;)&lt;br /&gt;
            assert_true (&amp;quot;boolean&amp;quot;, s.is_boolean and then s.to_boolean)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example system level test '''test_version''' (Note: '''SYSTEM_LEVEL_TEST_SET''' inherits from '''TEST_SET''' and provides basic functionality for executing external commands, including the system currently under development):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
indexing&lt;br /&gt;
    testing_covers: &amp;quot;all&amp;quot;&lt;br /&gt;
&lt;br /&gt;
frozen class TEST_MY_APP&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    SYSTEM_LEVEL_TEST_SET&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_version&lt;br /&gt;
        indexing&lt;br /&gt;
            testing: &amp;quot;platform.os.linux.i386&amp;quot;&lt;br /&gt;
        do&lt;br /&gt;
            run_system_with_args (&amp;quot;--version&amp;quot;)&lt;br /&gt;
            assert_string_equality (&amp;quot;version&amp;quot;, last_output, &amp;quot;my_app version 0.1&amp;quot;)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Manage and run test suite ===&lt;br /&gt;
&lt;br /&gt;
[[Image:testing_set-view.png|right|400px|thumb|Standard view listing existing test sets and the tests they contain]]&lt;br /&gt;
[[Image:testing_cut-view.png|right|400px|thumb|Predefined ''Class tested'' view listing classes/features of the system together with the associated tests]]&lt;br /&gt;
[[Image:testing_user-view.png|right|400px|thumb|User defined view (by simply typing part of the tag), where the tool creates a view based on how the tests are tagged (see Examples above)]]&lt;br /&gt;
&lt;br /&gt;
The tool should have its own icon for displaying test cases (test routines); in this example it is a Lego block. Especially for views like ''list all tests for this routine'', it is important to see the difference between the actual routine and its tests. The tool also has more of a vertical layout: since the number of tests is comparable to the number of classes in the system, it makes sense for the tools to have the same layout. It also allows tabs at the bottom for displaying further information, such as execution details (output, call stack, etc.).&lt;br /&gt;
&lt;br /&gt;
The '''menu bar''' includes the following buttons:&lt;br /&gt;
* Create new manual test case (opens wizard)&lt;br /&gt;
** if test class is dropped on button, the wizard will suggest creating a new test in that class&lt;br /&gt;
** if normal class (or feature) is dropped on button, the wizard will suggest creating a test for the class (or feature)&lt;br /&gt;
* Menu for generating new test (defaults to last chosen one?)&lt;br /&gt;
** if normal class/feature is dropped on button, generate tests for that class/feature&lt;br /&gt;
&lt;br /&gt;
* Menu for executing tests in background (defaults to last chosen one?)&lt;br /&gt;
** if any class/feature is dropped on button, run tests associated with class/feature&lt;br /&gt;
* Run test in debugger (must have a test selected or dropped on button to start)&lt;br /&gt;
* Stop any execution (background or debugger)&lt;br /&gt;
&lt;br /&gt;
* Opens settings dialog for testing&lt;br /&gt;
&lt;br /&gt;
* Status indicating how many tests we have run so far and&lt;br /&gt;
* how many failing ones there are&lt;br /&gt;
&lt;br /&gt;
'''View''' defines in which way the test cases are listed (see below).&lt;br /&gt;
&lt;br /&gt;
'''Filter''' can be used to type keywords so that only test cases whose tags include those keywords are shown (see below). It is a drop-down, so predefined filter patterns can be used (such as ''outcome.fail'').&lt;br /&gt;
&lt;br /&gt;
The '''grid''' contains a tree view of all test cases (test cases are always leaves). Multiple columns provide more information. Currently there are two indications of whether a test fails or not (column and icons); obviously only one is needed - both are shown just to see the difference. The advantage of using icons is that less space is needed. Coloring the background of a row containing a failing test case would be an option as well.&lt;br /&gt;
&lt;br /&gt;
==== Tags ====&lt;br /&gt;
&lt;br /&gt;
Each test can have a number of tags. Tags can be a single string or hierarchically structured with dots ('.'). For example, a test with the tag ''covers.STRING.append'' means that this test is a regression test for {STRING}.append. There are a number of implicit tags for each test, such as the ''name'' tag ({TEST_STRING}.test_append has the implicit tag ''name.TEST_STRING.test_append'').&lt;br /&gt;
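&lt;br /&gt;
As a sketch (assuming only standard STRING features; the feature name below is hypothetical), filtering by such hierarchical tags amounts to a prefix test on the dot-separated string:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    matches_filter (a_tag, a_filter: STRING): BOOLEAN is&lt;br /&gt;
            -- Does `a_tag' start with the prefix `a_filter'?&lt;br /&gt;
            -- E.g. &amp;quot;covers.STRING.append&amp;quot; matches the filter &amp;quot;covers.STRING&amp;quot;.&lt;br /&gt;
        do&lt;br /&gt;
            Result := a_tag.substring_index (a_filter, 1) = 1&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;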
&lt;br /&gt;
==== Different views ====&lt;br /&gt;
&lt;br /&gt;
Based on the notion of tags, we are able to define different views. The default view ''Test sets'' simply shows a hierarchical tree for every ''name.X'' tag. This enables us to define more views, such as ''Class tested'', which displays every ''covers.X'' tag. Note that with tags other than ''name.'', some tests might get listed multiple times, while others not containing such a tag must be listed explicitly. The main advantage is that users can define their own views based on any type of tag.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Note|The tool should support multiple selection. This is important for executing a number of selected test routines, showing past execution results, etc. Also, when selecting e.g. a class node, it should execute all leaves below that node.}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Running tests ====&lt;br /&gt;
[[Image:testing_run-menu.jpg]]&lt;br /&gt;
&lt;br /&gt;
The '''run''' menu provides different options for running tests in the background:&lt;br /&gt;
&lt;br /&gt;
* Run all tests in system&lt;br /&gt;
* Run currently failing ones&lt;br /&gt;
* Run test for classes last modified (better description needed here)&lt;br /&gt;
* Only run tests shown below&lt;br /&gt;
* Only run tests which are selected below&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Note|We should have two different views for displaying testing history. One structured by test sessions (list of test execution containing all test routines for each session) and one listing recent executions for a single test routine.}}&lt;br /&gt;
&lt;br /&gt;
=== Generate tests automatically ===&lt;br /&gt;
&lt;br /&gt;
[[Image:testing_generate-menu.jpg]]&lt;br /&gt;
&lt;br /&gt;
The '''generate''' menu lets you generate new tests for all classes in the system (randomly picked?) or for classes which were last modified.&lt;br /&gt;
&lt;br /&gt;
=== Extract tests from a running application ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is a simple example of an extracted test case (note that '''EXTRACTED_TEST_SET''' inherits from '''TEST_SET''' and implements all functionality for executing an extracted test).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
class&lt;br /&gt;
    TEST_STRING_001&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    EXTRACTED_TEST_SET&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Initialization&lt;br /&gt;
&lt;br /&gt;
    set_up_routine is&lt;br /&gt;
            -- &amp;lt;Precursor&amp;gt;&lt;br /&gt;
        do&lt;br /&gt;
            routine_under_test := agent {STRING}.append_integer&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_append_integer is&lt;br /&gt;
            -- Call `routine_under_test' with input provided by `context'.&lt;br /&gt;
        indexing&lt;br /&gt;
            tag: &amp;quot;covers.STRING.append_integer&amp;quot;&lt;br /&gt;
        do&lt;br /&gt;
            call_routine_under_test&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Access&lt;br /&gt;
&lt;br /&gt;
    context: ARRAY [TUPLE [id: STRING; type: STRING; inv: BOOLEAN; attributes: ARRAY [STRING]]] is&lt;br /&gt;
            -- &amp;lt;Precursor&amp;gt;&lt;br /&gt;
        once&lt;br /&gt;
            Result := &amp;lt;&amp;lt;&lt;br /&gt;
                [&amp;quot;#operand&amp;quot;, &amp;quot;TUPLE [STRING, INTEGER]&amp;quot;, True, &amp;lt;&amp;lt; &amp;quot;#2&amp;quot;, &amp;quot;110&amp;quot; &amp;gt;&amp;gt;],&lt;br /&gt;
                [&amp;quot;#2&amp;quot;, &amp;quot;STRING&amp;quot;, True, &amp;lt;&amp;lt; &amp;quot;this is an integer: &amp;quot; &amp;gt;&amp;gt;]&lt;br /&gt;
            &amp;gt;&amp;gt;&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end -- class TEST_STRING_001&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''EXTRACTED_TEST_SET''' implements '''set_up''' (frozen), but has a deferred feature '''set_up_routine''' which assigns the proper agent to '''routine_under_test'''. This basically replaces the missing reflection functionality for calling features. '''context''' is also deferred in '''EXTRACTED_TEST_SET''' and contains all data from the heap and call stack which was reachable by the routine at extraction time. Each TUPLE represents an object, where `inv' defines whether the object should fulfill its invariant or not (if the object was on the stack at extraction time, this does not have to be the case).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This design lets us have the entire test in one file, which is especially practical when a user submits such a test as a bug report after experiencing a crash.&lt;br /&gt;
The current drawback is that the design only allows one test per class. The reason for that is mainly the set_up procedure: creating all objects in '''context''' must be done during set_up. If there is a failure, set_up will be blamed instead of the actual test routine, which makes the test invalid rather than failing. This can happen e.g. if one of the objects in the context does not fulfill its invariant, which in turn could result from simply editing the class being tested. Any suggestions welcome!&lt;br /&gt;
&lt;br /&gt;
=== Background test execution ===&lt;br /&gt;
== Open questions ==&lt;br /&gt;
(This section should disappear as the questions get answered.)&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
* [[Testing Tool (Architecture)]]&lt;br /&gt;
* [[Eweasel]]&lt;br /&gt;
* [[CddBranch]]&lt;br /&gt;
* [[Eiffel Testing Tool]]&lt;/div&gt;</summary>
		<author><name>Arnofiva</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=Testing_Tool_(Specification)&amp;diff=11205</id>
		<title>Testing Tool (Specification)</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=Testing_Tool_(Specification)&amp;diff=11205"/>
				<updated>2008-06-12T21:51:40Z</updated>
		
		<summary type="html">&lt;p&gt;Arnofiva: Added screen shot of user defined view/tags&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Testing]]&lt;br /&gt;
&lt;br /&gt;
{{UnderConstruction}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Main functionalities ==&lt;br /&gt;
&lt;br /&gt;
=== Add unit/system level tests ===&lt;br /&gt;
&lt;br /&gt;
Semantically there is no difference between unit tests and system level tests. This way all tests can be written in Eiffel in a conforming way.&lt;br /&gt;
&lt;br /&gt;
A test is a routine with the prefix '''test''' in a class inheriting from '''TEST_SET'''. In general, features in classes used specifically for testing should be exported at most to {TESTING_CLASS}. This prevents testing code from remaining in a finalized system. If you write a helper class for your test routines, let it inherit from '''TESTING_CLASS''' (Note: '''TEST_SET''' already inherits from '''TESTING_CLASS'''). Additionally, you should make leaf test sets frozen and make sure you never directly reference testing classes in your project code.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== System level test specifics ====&lt;br /&gt;
&lt;br /&gt;
Since system level testing often relies on external items like files, '''SYSTEM_LEVEL_TEST_SET''' provides a number of helper routines for accessing them. These classes will probably have to be in a special testing library, since they also make use of other libraries such as the process library.&lt;br /&gt;
&lt;br /&gt;
==== Config file ====&lt;br /&gt;
&lt;br /&gt;
For each target in a configuration file you may define a testing folder in which test classes will be created. Other files needed for testing can be put there as well. It could be included as a special type of cluster, so classes in that folder will be compiled.&lt;br /&gt;
&lt;br /&gt;
Proposal: '''/&amp;quot;location_of_ecf_file&amp;quot;/testing/&amp;quot;target_name&amp;quot;'''&lt;br /&gt;
&lt;br /&gt;
{{Note| Some cases require a special testing folder when automatically creating new test cases (e.g. in a writable library, since they might use classes which are not visible from the library). The test executor could use that folder to place log files. System level tests also rely on a location for files (such as text files containing the expected output).}}&lt;br /&gt;
&lt;br /&gt;
==== Additional information ====&lt;br /&gt;
&lt;br /&gt;
The indexing clause can be used to specify which classes and routines are tested by the test routine. Any specifications in the class indexing clause will apply to all tests in that class. Note '''testing_covers''' in the following examples.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Examples ====&lt;br /&gt;
&lt;br /&gt;
Example unit tests '''test_append''' and '''test_boolean'''&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
frozen class TEST_STRING&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    TEST_SET&lt;br /&gt;
        redefine&lt;br /&gt;
            set_up&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Initialization&lt;br /&gt;
&lt;br /&gt;
    set_up&lt;br /&gt;
        do&lt;br /&gt;
            create s.make (10)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Access&lt;br /&gt;
&lt;br /&gt;
    s: STRING&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_append&lt;br /&gt;
        indexing&lt;br /&gt;
            testing: &amp;quot;covers.STRING.append, platform.os.winxp&amp;quot;&lt;br /&gt;
        require&lt;br /&gt;
            set_up: s /= Void and then s.is_empty&lt;br /&gt;
        do&lt;br /&gt;
            s.append (&amp;quot;12345&amp;quot;)&lt;br /&gt;
            assert_string_equality (&amp;quot;append&amp;quot;, s, &amp;quot;12345&amp;quot;)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
    test_boolean&lt;br /&gt;
        indexing&lt;br /&gt;
            testing: &amp;quot;covers.STRING.is_boolean, covers.STRING.to_boolean&amp;quot;&lt;br /&gt;
        require&lt;br /&gt;
            set_up: s /= Void and then s.is_empty&lt;br /&gt;
        do&lt;br /&gt;
            s.append (&amp;quot;True&amp;quot;)&lt;br /&gt;
            assert_true (&amp;quot;boolean&amp;quot;, s.is_boolean and then s.to_boolean)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example system level test '''test_version''' (Note: '''SYSTEM_LEVEL_TEST_SET''' inherits from '''TEST_SET''' and provides basic functionality for executing external commands, including the system currently under development):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
indexing&lt;br /&gt;
    testing_covers: &amp;quot;all&amp;quot;&lt;br /&gt;
&lt;br /&gt;
frozen class TEST_MY_APP&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    SYSTEM_LEVEL_TEST_SET&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_version&lt;br /&gt;
        indexing&lt;br /&gt;
            testing: &amp;quot;platform.os.linux.x86_64&amp;quot;&lt;br /&gt;
        do&lt;br /&gt;
            run_system_with_args (&amp;quot;--version&amp;quot;)&lt;br /&gt;
            assert_string_equality (&amp;quot;version&amp;quot;, last_output, &amp;quot;my_app version 0.1&amp;quot;)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Manage and run test suite ===&lt;br /&gt;
&lt;br /&gt;
[[Image:testing_set-view.png|right|400px|thumb|Standard view listing existing test sets and the tests they contain]]&lt;br /&gt;
[[Image:testing_cut-view.png|right|400px|thumb|Predefined ''Class tested'' view listing classes/features of the system together with the associated tests]]&lt;br /&gt;
[[Image:testing_user-view.png|right|400px|thumb|User defined view (by simply typing part of the tag), where the tool creates a view based on how the tests are tagged (see Examples above)]]&lt;br /&gt;
&lt;br /&gt;
The tool should have its own icon for displaying test cases (test routines); in this example it is a Lego block. Especially for views like ''list all tests for this routine'', it is important to see the difference between the actual routine and its tests. The tool also has more of a vertical layout: since the number of tests is comparable to the number of classes in the system, it makes sense for the tools to have the same layout. It also allows tabs at the bottom for displaying further information, such as execution details (output, call stack, etc.).&lt;br /&gt;
&lt;br /&gt;
The '''menu bar''' includes the following buttons:&lt;br /&gt;
* Create new manual test case (opens wizard)&lt;br /&gt;
** if test class is dropped on button, the wizard will suggest creating a new test in that class&lt;br /&gt;
** if normal class (or feature) is dropped on button, the wizard will suggest creating a test for the class (or feature)&lt;br /&gt;
* Menu for generating new test (defaults to last chosen one?)&lt;br /&gt;
** if normal class/feature is dropped on button, generate tests for that class/feature&lt;br /&gt;
&lt;br /&gt;
* Menu for executing tests in background (defaults to last chosen one?)&lt;br /&gt;
** if any class/feature is dropped on button, run tests associated with class/feature&lt;br /&gt;
* Run test in debugger (must have a test selected or dropped on button to start)&lt;br /&gt;
* Stop any execution (background or debugger)&lt;br /&gt;
&lt;br /&gt;
* Opens settings dialog for testing&lt;br /&gt;
&lt;br /&gt;
* Status indicating how many tests we have run so far and&lt;br /&gt;
* how many failing ones there are&lt;br /&gt;
&lt;br /&gt;
'''View''' defines in which way the test cases are listed (see below).&lt;br /&gt;
&lt;br /&gt;
'''Filter''' can be used to type keywords so that only test cases whose tags include those keywords are shown (see below). It is a drop-down, so predefined filter patterns can be used (such as ''outcome.fail'').&lt;br /&gt;
&lt;br /&gt;
The '''grid''' contains a tree view of all test cases (test cases are always leaves). Multiple columns provide more information. Currently there are two indications of whether a test fails or not (column and icons); obviously only one is needed - both are shown just to see the difference. The advantage of using icons is that less space is needed. Coloring the background of a row containing a failing test case would be an option as well.&lt;br /&gt;
&lt;br /&gt;
==== Tags ====&lt;br /&gt;
&lt;br /&gt;
Each test can have a number of tags. Tags can be a single string or hierarchically structured with dots ('.'). For example, a test with the tag ''covers.STRING.append'' means that this test is a regression test for {STRING}.append. There are a number of implicit tags for each test, such as the ''name'' tag ({TEST_STRING}.test_append has the implicit tag ''name.TEST_STRING.test_append'').&lt;br /&gt;
&lt;br /&gt;
==== Different views ====&lt;br /&gt;
&lt;br /&gt;
Based on the notion of tags, we are able to define different views. The default view ''Test sets'' simply shows a hierarchical tree for every ''name.X'' tag. This enables us to define more views, such as ''Class tested'', which displays every ''covers.X'' tag. Note that with tags other than ''name.'', some tests might get listed multiple times, while others not containing such a tag must be listed explicitly. The main advantage is that users can define their own views based on any type of tag.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Note|The tool should support multiple selection. This is important for executing a number of selected test routines, showing past execution results, etc. Also, when selecting e.g. a class node, it should execute all leaves below that node.}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Running tests ====&lt;br /&gt;
[[Image:testing_run-menu.jpg]]&lt;br /&gt;
&lt;br /&gt;
The '''run''' menu provides different options for running tests in the background:&lt;br /&gt;
&lt;br /&gt;
* Run all tests in system&lt;br /&gt;
* Run currently failing ones&lt;br /&gt;
* Run test for classes last modified (better description needed here)&lt;br /&gt;
* Only run tests shown below&lt;br /&gt;
* Only run tests which are selected below&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Note|We should have two different views for displaying testing history. One structured by test sessions (list of test execution containing all test routines for each session) and one listing recent executions for a single test routine.}}&lt;br /&gt;
&lt;br /&gt;
=== Generate tests automatically ===&lt;br /&gt;
&lt;br /&gt;
[[Image:testing_generate-menu.jpg]]&lt;br /&gt;
&lt;br /&gt;
The '''generate''' menu lets you generate new tests for all classes in the system (randomly picked?) or for classes which were last modified.&lt;br /&gt;
&lt;br /&gt;
=== Extract tests from a running application ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is a simple example of an extracted test case (note that '''EXTRACTED_TEST_SET''' inherits from '''TEST_SET''' and implements all functionality for executing an extracted test).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
class&lt;br /&gt;
    TEST_STRING_001&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    EXTRACTED_TEST_SET&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Initialization&lt;br /&gt;
&lt;br /&gt;
    set_up_routine is&lt;br /&gt;
            -- &amp;lt;Precursor&amp;gt;&lt;br /&gt;
        do&lt;br /&gt;
            routine_under_test := agent {STRING}.append_integer&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_append_integer is&lt;br /&gt;
            -- Call `routine_under_test' with input provided by `context'.&lt;br /&gt;
        indexing&lt;br /&gt;
            tag: &amp;quot;covers.STRING.append_integer&amp;quot;&lt;br /&gt;
        do&lt;br /&gt;
            call_routine_under_test&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Access&lt;br /&gt;
&lt;br /&gt;
    context: ARRAY [TUPLE [id: STRING; type: STRING; inv: BOOLEAN; attributes: ARRAY [STRING]]] is&lt;br /&gt;
            -- &amp;lt;Precursor&amp;gt;&lt;br /&gt;
        once&lt;br /&gt;
            Result := &amp;lt;&amp;lt;&lt;br /&gt;
                [&amp;quot;#operand&amp;quot;, &amp;quot;TUPLE [STRING, INTEGER]&amp;quot;, True, &amp;lt;&amp;lt; &amp;quot;#2&amp;quot;, &amp;quot;110&amp;quot; &amp;gt;&amp;gt;],&lt;br /&gt;
                [&amp;quot;#2&amp;quot;, &amp;quot;STRING&amp;quot;, True, &amp;lt;&amp;lt; &amp;quot;this is an integer: &amp;quot; &amp;gt;&amp;gt;]&lt;br /&gt;
            &amp;gt;&amp;gt;&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end -- class TEST_STRING_001&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''EXTRACTED_TEST_SET''' implements '''set_up''' (frozen), but has a deferred feature '''set_up_routine''' which assigns the proper agent to '''routine_under_test'''. This essentially replaces the missing reflection functionality for calling features. '''context''' is also deferred in '''EXTRACTED_TEST_SET''' and contains all data from the heap and call stack that was reachable by the routine at extraction time. Each TUPLE represents an object, where `inv' defines whether the object must fulfill its invariant (if the object was on the stack at extraction time, this does not have to be the case).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This design lets us keep the entire test in one file. This is especially practical when a user submits such a test as a bug report after experiencing a crash.&lt;br /&gt;
The drawback currently is that the design only allows one test per class. The reason for that is mainly the set_up procedure: creating all objects in '''context''' must be done during set_up. If there is a failure, set_up is blamed instead of the actual test routine, which makes the test invalid rather than failed. This can happen e.g. if one of the objects in the context does not fulfill its invariant, which in turn can result from simply editing the class being tested. Any suggestions welcome!&lt;br /&gt;
&lt;br /&gt;
=== Background test execution ===&lt;br /&gt;
== Open questions ==&lt;br /&gt;
(This section should disappear as the questions get answered.)&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
* [[Testing Tool (Architecture)]]&lt;br /&gt;
* [[Eweasel]]&lt;br /&gt;
* [[CddBranch]]&lt;br /&gt;
* [[Eiffel Testing Tool]]&lt;/div&gt;</summary>
		<author><name>Arnofiva</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=File:Testing_user-view.png&amp;diff=11204</id>
		<title>File:Testing user-view.png</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=File:Testing_user-view.png&amp;diff=11204"/>
				<updated>2008-06-12T21:39:23Z</updated>
		
		<summary type="html">&lt;p&gt;Arnofiva: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Arnofiva</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=File:Testing_user-view.jpg&amp;diff=11203</id>
		<title>File:Testing user-view.jpg</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=File:Testing_user-view.jpg&amp;diff=11203"/>
				<updated>2008-06-12T21:36:42Z</updated>
		
		<summary type="html">&lt;p&gt;Arnofiva: Example of what a user defined view based on user defined tags could look like&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Example of what a user defined view based on user defined tags could look like&lt;/div&gt;</summary>
		<author><name>Arnofiva</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=Testing_Tool_(Specification)&amp;diff=11202</id>
		<title>Testing Tool (Specification)</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=Testing_Tool_(Specification)&amp;diff=11202"/>
				<updated>2008-06-12T21:16:17Z</updated>
		
		<summary type="html">&lt;p&gt;Arnofiva: Added user defined tags to examples&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Testing]]&lt;br /&gt;
&lt;br /&gt;
{{UnderConstruction}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Main functionalities ==&lt;br /&gt;
&lt;br /&gt;
=== Add unit/system level tests ===&lt;br /&gt;
&lt;br /&gt;
Semantically there is no difference between unit tests and system level tests. This way all tests can be written in Eiffel in a conforming way.&lt;br /&gt;
&lt;br /&gt;
A test is a routine whose name has the prefix '''test''' in a class inheriting from '''TEST_SET'''. In general, features in classes used specifically for testing should be exported at most to {TESTING_CLASS}. This prevents testing code from remaining in a finalized system. If you write a helper class for your test routines, let it inherit from '''TESTING_CLASS''' (Note: '''TEST_SET''' already inherits from '''TESTING_CLASS'''). Additionally, you should make leaf test sets frozen and make sure you never directly reference testing classes in your project code.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== System level test specifics ====&lt;br /&gt;
&lt;br /&gt;
Since system level testing often relies on external items like files, '''SYSTEM_LEVEL_TEST_SET''' provides a number of helper routines for accessing them. These classes will probably have to be in a special testing library, since they also make use of other libraries such as the process library.&lt;br /&gt;
&lt;br /&gt;
==== Config file ====&lt;br /&gt;
&lt;br /&gt;
For each target in a configuration file you may define a testing folder in which test classes will be created. Other files needed for testing can be put there as well. It could be included as a special type of cluster, so classes in that folder will be compiled.&lt;br /&gt;
&lt;br /&gt;
Proposal: '''/&amp;quot;location_of_ecf_file&amp;quot;/testing/&amp;quot;target_name&amp;quot;'''&lt;br /&gt;
&lt;br /&gt;
{{Note| Some cases require a special testing folder when automatically creating new test cases (e.g. in a writable library, since the tests might use classes which are not visible from the library). The test executor could use that folder to place log files. System level tests also rely on a location for files (such as text files containing the expected output).}}&lt;br /&gt;
&lt;br /&gt;
==== Additional information ====&lt;br /&gt;
&lt;br /&gt;
The indexing clause can be used to specify which classes and routines are tested by the test routine. Any specifications in the class indexing clause will apply to all tests in that class. Note '''testing_covers''' in the following examples.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Examples ====&lt;br /&gt;
&lt;br /&gt;
Example unit tests '''test_append''' and '''test_boolean'''&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
frozen class TEST_STRING&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    TEST_SET&lt;br /&gt;
        redefine&lt;br /&gt;
            set_up&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Initialization&lt;br /&gt;
&lt;br /&gt;
    set_up&lt;br /&gt;
        do&lt;br /&gt;
            create s.make (10)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Access&lt;br /&gt;
&lt;br /&gt;
    s: STRING&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_append&lt;br /&gt;
        indexing&lt;br /&gt;
            testing: &amp;quot;covers.STRING.append, platform.os.winxp&amp;quot;&lt;br /&gt;
        require&lt;br /&gt;
            set_up: s /= Void and then s.is_empty&lt;br /&gt;
        do&lt;br /&gt;
            s.append (&amp;quot;12345&amp;quot;)&lt;br /&gt;
            assert_string_equality (&amp;quot;append&amp;quot;, s, &amp;quot;12345&amp;quot;)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
    test_boolean&lt;br /&gt;
        indexing&lt;br /&gt;
            testing: &amp;quot;covers.STRING.is_boolean, covers.STRING.to_boolean&amp;quot;&lt;br /&gt;
        require&lt;br /&gt;
            set_up: s /= Void and then s.is_empty&lt;br /&gt;
        do&lt;br /&gt;
            s.append (&amp;quot;True&amp;quot;)&lt;br /&gt;
            assert_true (&amp;quot;boolean&amp;quot;, s.is_boolean and then s.to_boolean)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example system level test '''test_version''' (Note: '''SYSTEM_LEVEL_TEST_SET''' inherits from '''TEST_SET''' and provides basic functionality for executing external commands, including the system currently under development):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
indexing&lt;br /&gt;
    testing_covers: &amp;quot;all&amp;quot;&lt;br /&gt;
&lt;br /&gt;
frozen class TEST_MY_APP&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    SYSTEM_LEVEL_TEST_SET&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_version&lt;br /&gt;
        indexing&lt;br /&gt;
            testing: &amp;quot;platform.os.linux.x86_64&amp;quot;&lt;br /&gt;
        do&lt;br /&gt;
            run_system_with_args (&amp;quot;--version&amp;quot;)&lt;br /&gt;
            assert_string_equality (&amp;quot;version&amp;quot;, last_output, &amp;quot;my_app version 0.1&amp;quot;)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Manage and run test suite ===&lt;br /&gt;
&lt;br /&gt;
[[Image:testing_set-view.png|right|400px]]&lt;br /&gt;
[[Image:testing_cut-view.png|right|400px]]&lt;br /&gt;
&lt;br /&gt;
The tool should have its own icon for displaying test cases (test routines). In this example it is a Lego block. Especially for views like ''list all tests for this routine'', it is important to see the difference between the actual routine and its tests. The tool also has more of a vertical layout: since the number of tests is comparable to the number of classes in the system, it makes sense for the tools to have the same layout. It also allows tabs at the bottom for displaying further information, such as execution details (output, call stack, etc.).&lt;br /&gt;
&lt;br /&gt;
The '''menu bar''' includes the following buttons:&lt;br /&gt;
* Create new manual test case (opens wizard)&lt;br /&gt;
** if test class is dropped on button, the wizard will suggest to create new test in that class&lt;br /&gt;
** if normal class (or feature) is dropped on button, wizard will suggest to create test for the class (or feature)&lt;br /&gt;
* Menu for generating new test (defaults to last chosen one?)&lt;br /&gt;
** if normal class/feature is dropped on button, generate tests for that class/feature&lt;br /&gt;
&lt;br /&gt;
* Menu for executing tests in background (defaults to last chosen one?)&lt;br /&gt;
** if any class/feature is dropped on button, run tests associated with class/feature&lt;br /&gt;
* Run test in debugger (must have a test selected or dropped on button to start)&lt;br /&gt;
* Stop any execution (background or debugger)&lt;br /&gt;
&lt;br /&gt;
* Opens settings dialog for testing&lt;br /&gt;
&lt;br /&gt;
* Status indicating how many tests have been run so far and&lt;br /&gt;
* how many failing ones there are&lt;br /&gt;
&lt;br /&gt;
'''View''' defines in which way the test cases are listed (see below).&lt;br /&gt;
&lt;br /&gt;
'''Filter''' lets you type keywords to show only those test cases whose tags include the keywords (see below). It is a drop-down, so predefined filter patterns can be used (such as ''outcome.fail'').&lt;br /&gt;
&lt;br /&gt;
The '''grid''' contains a tree view of all test cases (test cases are always leaves). Multiple columns provide further information. Currently there are two indications of whether a test fails (a column and icons); only one is needed - both are shown just to compare them. The advantage of using icons is that they need less space. Coloring the background of a row containing a failing test case would be an option as well.&lt;br /&gt;
&lt;br /&gt;
==== Tags ====&lt;br /&gt;
&lt;br /&gt;
Each test can have a number of tags. A tag can be a single string or hierarchically structured with dots ('.'). For example, a test with the tag ''covers.STRING.append'' is a regression test for {STRING}.append. There are a number of implicit tags for each test, such as the ''name'' tag ({TEST_STRING}.test_append has the implicit tag ''name.TEST_STRING.test_append'').&lt;br /&gt;
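&lt;br /&gt;
A hand-written test routine can combine an implicit-style tag with a purely user-defined one in its indexing clause. A minimal sketch only (''speed.slow'' is a hypothetical user-defined tag, not an implicit one; the ''covers'' tag follows the convention used in the examples elsewhere on this page):&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
    test_append&lt;br /&gt;
        indexing&lt;br /&gt;
            -- comma-separated, dot-structured tags&lt;br /&gt;
            testing: &amp;quot;covers.STRING.append, speed.slow&amp;quot;&lt;br /&gt;
        do&lt;br /&gt;
            -- test body&lt;br /&gt;
        end&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
A view or filter could then select this test either through ''covers.STRING'' or through ''speed''.&lt;br /&gt;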
&lt;br /&gt;
==== Different views ====&lt;br /&gt;
&lt;br /&gt;
Based on the notion of tags, we are able to define different views. The default view ''Test sets'' simply shows a hierarchical tree for every ''name.X'' tag. This enables us to define more views, such as ''Class tested'', which displays every ''covers.X'' tag. Note that with tags other than ''name.'' some tests might get listed multiple times, while tests not carrying such a tag must be listed explicitly. The main advantage is that users can define their own views based on any type of tag.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Note|The tools should support multiple selection. This is important for executing a number of selected test routines, showing passed execution results, etc. Also, when selecting e.g. a class node, all tests below that node should be executed.}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Running tests ====&lt;br /&gt;
[[Image:testing_run-menu.jpg]]&lt;br /&gt;
&lt;br /&gt;
The '''run''' menu provides different options for running tests in the background:&lt;br /&gt;
&lt;br /&gt;
* Run all tests in system&lt;br /&gt;
* Run currently failing ones&lt;br /&gt;
* Run test for classes last modified (better description needed here)&lt;br /&gt;
* Only run tests shown below&lt;br /&gt;
* Only run tests which are selected below&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Note|We should have two different views for displaying testing history. One structured by test sessions (list of test execution containing all test routines for each session) and one listing recent executions for a single test routine.}}&lt;br /&gt;
&lt;br /&gt;
=== Generate tests automatically ===&lt;br /&gt;
&lt;br /&gt;
[[Image:testing_generate-menu.jpg]]&lt;br /&gt;
&lt;br /&gt;
The '''generate''' menu lets you generate new tests for all classes in the system (randomly picked?) or for classes which were last modified.&lt;br /&gt;
&lt;br /&gt;
=== Extract tests from a running application ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is a simple example of an extracted test case (note that '''EXTRACTED_TEST_SET''' inherits from '''TEST_SET''' and implements all functionality for executing an extracted test).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
class&lt;br /&gt;
    TEST_STRING_001&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    EXTRACTED_TEST_SET&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Initialization&lt;br /&gt;
&lt;br /&gt;
    set_up_routine is&lt;br /&gt;
            -- &amp;lt;Precursor&amp;gt;&lt;br /&gt;
        do&lt;br /&gt;
            routine_under_test := agent {STRING}.append_integer&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_append_integer is&lt;br /&gt;
            -- Call `routine_under_test' with input provided by `context'.&lt;br /&gt;
        indexing&lt;br /&gt;
            tag: &amp;quot;covers.STRING.append_integer&amp;quot;&lt;br /&gt;
        do&lt;br /&gt;
            call_routine_under_test&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Access&lt;br /&gt;
&lt;br /&gt;
    context: ARRAY [TUPLE [id: STRING; type: STRING; inv: BOOLEAN; attributes: ARRAY [STRING]]] is&lt;br /&gt;
            -- &amp;lt;Precursor&amp;gt;&lt;br /&gt;
        once&lt;br /&gt;
            Result := &amp;lt;&amp;lt;&lt;br /&gt;
                [&amp;quot;#operand&amp;quot;, &amp;quot;TUPLE [STRING, INTEGER]&amp;quot;, True, &amp;lt;&amp;lt; &amp;quot;#2&amp;quot;, &amp;quot;110&amp;quot; &amp;gt;&amp;gt;],&lt;br /&gt;
                [&amp;quot;#2&amp;quot;, &amp;quot;STRING&amp;quot;, True, &amp;lt;&amp;lt; &amp;quot;this is an integer: &amp;quot; &amp;gt;&amp;gt;]&lt;br /&gt;
            &amp;gt;&amp;gt;&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end -- class TEST_STRING_001&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''EXTRACTED_TEST_SET''' implements '''set_up''' (frozen), but has a deferred feature '''set_up_routine''' which assigns the proper agent to '''routine_under_test'''. This essentially replaces the missing reflection functionality for calling features. '''context''' is also deferred in '''EXTRACTED_TEST_SET''' and contains all data from the heap and call stack that was reachable by the routine at extraction time. Each TUPLE represents an object, where `inv' defines whether the object must fulfill its invariant (if the object was on the stack at extraction time, this does not have to be the case).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This design lets us keep the entire test in one file. This is especially practical when a user submits such a test as a bug report after experiencing a crash.&lt;br /&gt;
The drawback currently is that the design only allows one test per class. The reason for that is mainly the set_up procedure: creating all objects in '''context''' must be done during set_up. If there is a failure, set_up is blamed instead of the actual test routine, which makes the test invalid rather than failed. This can happen e.g. if one of the objects in the context does not fulfill its invariant, which in turn can result from simply editing the class being tested. Any suggestions welcome!&lt;br /&gt;
&lt;br /&gt;
=== Background test execution ===&lt;br /&gt;
== Open questions ==&lt;br /&gt;
(This section should disappear as the questions get answered.)&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
* [[Testing Tool (Architecture)]]&lt;br /&gt;
* [[Eweasel]]&lt;br /&gt;
* [[CddBranch]]&lt;br /&gt;
* [[Eiffel Testing Tool]]&lt;/div&gt;</summary>
		<author><name>Arnofiva</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=Testing_Tool_(Specification)&amp;diff=11200</id>
		<title>Testing Tool (Specification)</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=Testing_Tool_(Specification)&amp;diff=11200"/>
				<updated>2008-06-11T23:42:37Z</updated>
		
		<summary type="html">&lt;p&gt;Arnofiva: /* Manage and run test suite */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Testing]]&lt;br /&gt;
&lt;br /&gt;
{{UnderConstruction}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Main functionalities ==&lt;br /&gt;
&lt;br /&gt;
=== Add unit/system level tests ===&lt;br /&gt;
&lt;br /&gt;
Semantically there is no difference between unit tests and system level tests. This way all tests can be written in Eiffel in a conforming way.&lt;br /&gt;
&lt;br /&gt;
A test is a routine whose name has the prefix '''test''' in a class inheriting from '''TEST_SET'''. In general, features in classes used specifically for testing should be exported at most to {TESTING_CLASS}. This prevents testing code from remaining in a finalized system. If you write a helper class for your test routines, let it inherit from '''TESTING_CLASS''' (Note: '''TEST_SET''' already inherits from '''TESTING_CLASS'''). Additionally, you should make leaf test sets frozen and make sure you never directly reference testing classes in your project code.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== System level test specifics ====&lt;br /&gt;
&lt;br /&gt;
Since system level testing often relies on external items like files, '''SYSTEM_LEVEL_TEST_SET''' provides a number of helper routines for accessing them. These classes will probably have to be in a special testing library, since they also make use of other libraries such as the process library.&lt;br /&gt;
&lt;br /&gt;
==== Config file ====&lt;br /&gt;
&lt;br /&gt;
For each target in a configuration file you may define a testing folder in which test classes will be created. Other files needed for testing can be put there as well. It could be included as a special type of cluster, so classes in that folder will be compiled.&lt;br /&gt;
&lt;br /&gt;
Proposal: '''/&amp;quot;location_of_ecf_file&amp;quot;/testing/&amp;quot;target_name&amp;quot;'''&lt;br /&gt;
&lt;br /&gt;
{{Note| Some cases require a special testing folder when automatically creating new test cases (e.g. in a writable library, since the tests might use classes which are not visible from the library). The test executor could use that folder to place log files. System level tests also rely on a location for files (such as text files containing the expected output).}}&lt;br /&gt;
&lt;br /&gt;
==== Additional information ====&lt;br /&gt;
&lt;br /&gt;
The indexing clause can be used to specify which classes and routines are tested by the test routine. Any specifications in the class indexing clause will apply to all tests in that class. Note '''testing_covers''' in the following examples.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Examples ====&lt;br /&gt;
&lt;br /&gt;
Example unit tests '''test_append''' and '''test_boolean'''&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
frozen class TEST_STRING&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    TEST_SET&lt;br /&gt;
        redefine&lt;br /&gt;
            set_up&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Initialization&lt;br /&gt;
&lt;br /&gt;
    set_up&lt;br /&gt;
        do&lt;br /&gt;
            create s.make (10)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Access&lt;br /&gt;
&lt;br /&gt;
    s: STRING&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_append&lt;br /&gt;
        indexing&lt;br /&gt;
            testing: &amp;quot;covers.STRING.append&amp;quot;&lt;br /&gt;
        require&lt;br /&gt;
            set_up: s /= Void and then s.is_empty&lt;br /&gt;
        do&lt;br /&gt;
            s.append (&amp;quot;12345&amp;quot;)&lt;br /&gt;
            assert_string_equality (&amp;quot;append&amp;quot;, s, &amp;quot;12345&amp;quot;)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
    test_boolean&lt;br /&gt;
        indexing&lt;br /&gt;
            testing_covers: &amp;quot;covers.STRING.is_boolean, covers.STRING.to_boolean&amp;quot;&lt;br /&gt;
        require&lt;br /&gt;
            set_up: s /= Void and then s.is_empty&lt;br /&gt;
        do&lt;br /&gt;
            s.append (&amp;quot;True&amp;quot;)&lt;br /&gt;
            assert_true (&amp;quot;boolean&amp;quot;, s.is_boolean and then s.to_boolean)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example system level test '''test_version''' (Note: '''SYSTEM_LEVEL_TEST_SET''' inherits from '''TEST_SET''' and provides basic functionality for executing external commands, including the system currently under development):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
indexing&lt;br /&gt;
    testing_covers: &amp;quot;all&amp;quot;&lt;br /&gt;
&lt;br /&gt;
frozen class TEST_MY_APP&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    SYSTEM_LEVEL_TEST_SET&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_version&lt;br /&gt;
        do&lt;br /&gt;
            run_system_with_args (&amp;quot;--version&amp;quot;)&lt;br /&gt;
            assert_string_equality (&amp;quot;version&amp;quot;, last_output, &amp;quot;my_app version 0.1&amp;quot;)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Manage and run test suite ===&lt;br /&gt;
&lt;br /&gt;
[[Image:testing_set-view.png|right|400px]]&lt;br /&gt;
[[Image:testing_cut-view.png|right|400px]]&lt;br /&gt;
&lt;br /&gt;
The tool should have its own icon for displaying test cases (test routines). In this example it is a Lego block. Especially for views like ''list all tests for this routine'', it is important to see the difference between the actual routine and its tests. The tool also has more of a vertical layout: since the number of tests is comparable to the number of classes in the system, it makes sense for the tools to have the same layout. It also allows tabs at the bottom for displaying further information, such as execution details (output, call stack, etc.).&lt;br /&gt;
&lt;br /&gt;
The '''menu bar''' includes the following buttons:&lt;br /&gt;
* Create new manual test case (opens wizard)&lt;br /&gt;
** if test class is dropped on button, the wizard will suggest to create new test in that class&lt;br /&gt;
** if normal class (or feature) is dropped on button, wizard will suggest to create test for the class (or feature)&lt;br /&gt;
* Menu for generating new test (defaults to last chosen one?)&lt;br /&gt;
** if normal class/feature is dropped on button, generate tests for that class/feature&lt;br /&gt;
&lt;br /&gt;
* Menu for executing tests in background (defaults to last chosen one?)&lt;br /&gt;
** if any class/feature is dropped on button, run tests associated with class/feature&lt;br /&gt;
* Run test in debugger (must have a test selected or dropped on button to start)&lt;br /&gt;
* Stop any execution (background or debugger)&lt;br /&gt;
&lt;br /&gt;
* Opens settings dialog for testing&lt;br /&gt;
&lt;br /&gt;
* Status indicating how many tests have been run so far and&lt;br /&gt;
* how many failing ones there are&lt;br /&gt;
&lt;br /&gt;
'''View''' defines in which way the test cases are listed (see below).&lt;br /&gt;
&lt;br /&gt;
'''Filter''' lets you type keywords to show only those test cases whose tags include the keywords (see below). It is a drop-down, so predefined filter patterns can be used (such as ''outcome.fail'').&lt;br /&gt;
&lt;br /&gt;
The '''grid''' contains a tree view of all test cases (test cases are always leaves). Multiple columns provide further information. Currently there are two indications of whether a test fails (a column and icons); only one is needed - both are shown just to compare them. The advantage of using icons is that they need less space. Coloring the background of a row containing a failing test case would be an option as well.&lt;br /&gt;
&lt;br /&gt;
==== Tags ====&lt;br /&gt;
&lt;br /&gt;
Each test can have a number of tags. A tag can be a single string or hierarchically structured with dots ('.'). For example, a test with the tag ''covers.STRING.append'' is a regression test for {STRING}.append. There are a number of implicit tags for each test, such as the ''name'' tag ({TEST_STRING}.test_append has the implicit tag ''name.TEST_STRING.test_append'').&lt;br /&gt;
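&lt;br /&gt;
A hand-written test routine can combine an implicit-style tag with a purely user-defined one in its indexing clause. A minimal sketch only (''speed.slow'' is a hypothetical user-defined tag, not an implicit one; the ''covers'' tag follows the convention used in the examples elsewhere on this page):&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
    test_append&lt;br /&gt;
        indexing&lt;br /&gt;
            -- comma-separated, dot-structured tags&lt;br /&gt;
            testing: &amp;quot;covers.STRING.append, speed.slow&amp;quot;&lt;br /&gt;
        do&lt;br /&gt;
            -- test body&lt;br /&gt;
        end&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
A view or filter could then select this test either through ''covers.STRING'' or through ''speed''.&lt;br /&gt;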
&lt;br /&gt;
==== Different views ====&lt;br /&gt;
&lt;br /&gt;
Based on the notion of tags, we are able to define different views. The default view ''Test sets'' simply shows a hierarchical tree for every ''name.X'' tag. This enables us to define more views, such as ''Class tested'', which displays every ''covers.X'' tag. Note that with tags other than ''name.'' some tests might get listed multiple times, while tests not carrying such a tag must be listed explicitly. The main advantage is that users can define their own views based on any type of tag.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Note|The tools should support multiple selection. This is important for executing a number of selected test routines, showing passed execution results, etc. Also, when selecting e.g. a class node, all tests below that node should be executed.}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Running tests ====&lt;br /&gt;
[[Image:testing_run-menu.jpg]]&lt;br /&gt;
&lt;br /&gt;
The '''run''' menu provides different options for running tests in the background:&lt;br /&gt;
&lt;br /&gt;
* Run all tests in system&lt;br /&gt;
* Run currently failing ones&lt;br /&gt;
* Run test for classes last modified (better description needed here)&lt;br /&gt;
* Only run tests shown below&lt;br /&gt;
* Only run tests which are selected below&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Note|We should have two different views for displaying testing history. One structured by test sessions (list of test execution containing all test routines for each session) and one listing recent executions for a single test routine.}}&lt;br /&gt;
&lt;br /&gt;
=== Generate tests automatically ===&lt;br /&gt;
&lt;br /&gt;
[[Image:testing_generate-menu.jpg]]&lt;br /&gt;
&lt;br /&gt;
The '''generate''' menu lets you generate new tests for all classes in the system (randomly picked?) or for classes which were last modified.&lt;br /&gt;
&lt;br /&gt;
=== Extract tests from a running application ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is a simple example of an extracted test case (note that '''EXTRACTED_TEST_SET''' inherits from '''TEST_SET''' and implements all functionality for executing an extracted test).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
class&lt;br /&gt;
    TEST_STRING_001&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    EXTRACTED_TEST_SET&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Initialization&lt;br /&gt;
&lt;br /&gt;
    set_up_routine is&lt;br /&gt;
            -- &amp;lt;Precursor&amp;gt;&lt;br /&gt;
        do&lt;br /&gt;
            routine_under_test := agent {STRING}.append_integer&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_append_integer is&lt;br /&gt;
            -- Call `routine_under_test' with input provided by `context'.&lt;br /&gt;
        indexing&lt;br /&gt;
            tag: &amp;quot;covers.STRING.append_integer&amp;quot;&lt;br /&gt;
        do&lt;br /&gt;
            call_routine_under_test&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Access&lt;br /&gt;
&lt;br /&gt;
    context: ARRAY [TUPLE [id: STRING; type: STRING; inv: BOOLEAN; attributes: ARRAY [STRING]]] is&lt;br /&gt;
            -- &amp;lt;Precursor&amp;gt;&lt;br /&gt;
        once&lt;br /&gt;
            Result := &amp;lt;&amp;lt;&lt;br /&gt;
                [&amp;quot;#operand&amp;quot;, &amp;quot;TUPLE [STRING, INTEGER]&amp;quot;, True, &amp;lt;&amp;lt; &amp;quot;#2&amp;quot;, &amp;quot;110&amp;quot; &amp;gt;&amp;gt;],&lt;br /&gt;
                [&amp;quot;#2&amp;quot;, &amp;quot;STRING&amp;quot;, True, &amp;lt;&amp;lt; &amp;quot;this is an integer: &amp;quot; &amp;gt;&amp;gt;]&lt;br /&gt;
            &amp;gt;&amp;gt;&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end -- class TEST_STRING_001&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''EXTRACTED_TEST_SET''' implements '''set_up''' (frozen), but has a deferred feature '''set_up_routine''' which assigns the proper agent to '''routine_under_test'''. This basically replaces the missing reflection functionality for calling features. '''context''' is also deferred in '''EXTRACTED_TEST_SET''' and contains all data from the heap and call stack which was reachable by the routine at extraction time. Each TUPLE represents an object, where `inv' defines whether the object should fulfill its invariant or not (if the object was on the stack at extraction time, this does not have to be the case).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This design lets us keep the entire test in one file. This is especially practical when a user submits such a test as a bug report after experiencing a crash.&lt;br /&gt;
The current drawback is that the design only allows one test per class, mainly because of the set_up procedure: all objects in '''context''' must be created during set_up. If that fails, set_up is blamed instead of the actual test routine, which makes the test invalid rather than failing. This can happen e.g. if one of the objects in the context does not fulfill its invariant, which in turn could result from simply editing the class being tested. Any suggestions welcome!&lt;br /&gt;
&lt;br /&gt;
=== Background test execution ===&lt;br /&gt;
== Open questions ==&lt;br /&gt;
(This section should disappear as the questions get answered.)&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
* [[Testing Tool (Architecture)]]&lt;br /&gt;
* [[Eweasel]]&lt;br /&gt;
* [[CddBranch]]&lt;br /&gt;
* [[Eiffel Testing Tool]]&lt;/div&gt;</summary>
		<author><name>Arnofiva</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=Testing_Tool_(Specification)&amp;diff=11199</id>
		<title>Testing Tool (Specification)</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=Testing_Tool_(Specification)&amp;diff=11199"/>
				<updated>2008-06-11T23:41:47Z</updated>
		
		<summary type="html">&lt;p&gt;Arnofiva: /* Manage and run test suite */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Testing]]&lt;br /&gt;
&lt;br /&gt;
{{UnderConstruction}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Main functionalities ==&lt;br /&gt;
&lt;br /&gt;
=== Add unit/system level tests ===&lt;br /&gt;
&lt;br /&gt;
Semantically there is no difference between unit tests and system level tests; all tests can therefore be written in Eiffel in a uniform way.&lt;br /&gt;
&lt;br /&gt;
A test is a routine whose name has the prefix '''test''' in a class inheriting from '''TEST_SET'''. In general, features in classes specifically used for testing should be exported at most to {TESTING_CLASS}. This prevents testing code from remaining in a finalized system. If you write a helper class for your test routines, let it inherit from '''TESTING_CLASS''' (Note: '''TEST_SET''' already inherits from '''TESTING_CLASS'''). Additionally, you should make leaf test sets frozen and make sure you never directly reference testing classes in your project code.&lt;br /&gt;
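For instance, a minimal helper class might look as follows (a hedged sketch: the name '''TEST_STRING_HELPER''' and its feature are invented for illustration):&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
class&lt;br /&gt;
    TEST_STRING_HELPER&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    TESTING_CLASS&lt;br /&gt;
&lt;br /&gt;
feature {TESTING_CLASS} -- Access&lt;br /&gt;
&lt;br /&gt;
    sample_string: STRING&lt;br /&gt;
            -- New sample string used as common test input.&lt;br /&gt;
        do&lt;br /&gt;
            Result := &amp;quot;sample&amp;quot;&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;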
&lt;br /&gt;
&lt;br /&gt;
==== System level test specifics ====&lt;br /&gt;
&lt;br /&gt;
Since system level testing often relies on external items like files, '''SYSTEM_LEVEL_TEST_SET''' provides a number of helper routines for accessing them. These classes will probably have to be in a special testing library, since they also make use of other libraries such as the process library.&lt;br /&gt;
&lt;br /&gt;
==== Config file ====&lt;br /&gt;
&lt;br /&gt;
For each target in a configuration file you may define a testing folder in which test classes will be created. Other files needed for testing can be put there as well. It could be included as a special type of cluster, so classes in that folder will be compiled.&lt;br /&gt;
&lt;br /&gt;
Proposal: '''/&amp;quot;location_of_ecf_file&amp;quot;/testing/&amp;quot;target_name&amp;quot;'''&lt;br /&gt;
&lt;br /&gt;
{{Note| Some cases require a special testing folder when automatically creating new test cases (e.g. in a writable library, since the tests might use classes which are not visible from the library). The test executor could use that folder to place log files. System level tests also rely on a location for files (such as text files containing the expected output).}}&lt;br /&gt;
&lt;br /&gt;
==== Additional information ====&lt;br /&gt;
&lt;br /&gt;
The indexing clause can be used to specify which classes and routines are tested by the test routine. Any specifications in the class indexing clause will apply to all tests in that class. Note '''testing_covers''' in the following examples.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Examples ====&lt;br /&gt;
&lt;br /&gt;
Example unit tests '''test_append''' and '''test_boolean'''&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
frozen class TEST_STRING&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    TEST_SET&lt;br /&gt;
        redefine&lt;br /&gt;
            set_up&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Initialization&lt;br /&gt;
&lt;br /&gt;
    set_up&lt;br /&gt;
        do&lt;br /&gt;
            create s.make (10)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Access&lt;br /&gt;
&lt;br /&gt;
    s: STRING&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_append&lt;br /&gt;
        indexing&lt;br /&gt;
            testing: &amp;quot;covers.STRING.append&amp;quot;&lt;br /&gt;
        require&lt;br /&gt;
            set_up: s /= Void and then s.is_empty&lt;br /&gt;
        do&lt;br /&gt;
            s.append (&amp;quot;12345&amp;quot;)&lt;br /&gt;
            assert_string_equality (&amp;quot;append&amp;quot;, s, &amp;quot;12345&amp;quot;)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
    test_boolean&lt;br /&gt;
        indexing&lt;br /&gt;
            testing_covers: &amp;quot;covers.STRING.is_boolean, covers.STRING.to_boolean&amp;quot;&lt;br /&gt;
        require&lt;br /&gt;
            set_up: s /= Void and then s.is_empty&lt;br /&gt;
        do&lt;br /&gt;
            s.append (&amp;quot;True&amp;quot;)&lt;br /&gt;
            assert_true (&amp;quot;boolean&amp;quot;, s.is_boolean and then s.to_boolean)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example system level test '''test_version''' (Note: '''SYSTEM_LEVEL_TEST_SET''' inherits from '''TEST_SET''' and provides basic functionality for executing external commands, including the system currently under development):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
indexing&lt;br /&gt;
    testing_covers: &amp;quot;all&amp;quot;&lt;br /&gt;
&lt;br /&gt;
frozen class TEST_MY_APP&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    SYSTEM_LEVEL_TEST_SET&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_version&lt;br /&gt;
        do&lt;br /&gt;
            run_system_with_args (&amp;quot;--version&amp;quot;)&lt;br /&gt;
            assert_string_equality (&amp;quot;version&amp;quot;, last_output, &amp;quot;my_app version 0.1&amp;quot;)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Manage and run test suite ===&lt;br /&gt;
&lt;br /&gt;
[[Image:testing_set-view.png|right|400px]]&lt;br /&gt;
[[Image:testing_cut-view.png|right|400px]]&lt;br /&gt;
&lt;br /&gt;
The tool should have its own icon for displaying test cases (test routines). In this example it is a Lego block. Especially for views like ''list all tests for this routine'', it is important to see the difference between the actual routine and its tests. The tool also has more of a vertical layout: since the number of tests is comparable to the number of classes in the system, it makes sense for the tools to have the same layout. It also allows tabs at the bottom for displaying further information, such as execution details (output, call stack, etc.).&lt;br /&gt;
&lt;br /&gt;
The '''menu bar''' includes the following buttons:&lt;br /&gt;
* Create new manual test case (opens wizard)&lt;br /&gt;
** if a test class is dropped on the button, the wizard will suggest creating a new test in that class&lt;br /&gt;
** if a normal class (or feature) is dropped on the button, the wizard will suggest creating a test for that class (or feature)&lt;br /&gt;
* Menu for generating new tests (defaults to the last chosen one?)&lt;br /&gt;
** if a normal class/feature is dropped on the button, generate tests for that class/feature&lt;br /&gt;
&lt;br /&gt;
* Menu for executing tests in the background (defaults to the last chosen one?)&lt;br /&gt;
** if any class/feature is dropped on the button, run the tests associated with that class/feature&lt;br /&gt;
* Run test in debugger (a test must be selected or dropped on the button to start)&lt;br /&gt;
* Stop any execution (background or debugger)&lt;br /&gt;
&lt;br /&gt;
* Opens settings dialog for testing&lt;br /&gt;
&lt;br /&gt;
* Status indicating how many tests we have run so far and&lt;br /&gt;
* how many failing ones there are&lt;br /&gt;
&lt;br /&gt;
'''View''' defines in which way the test cases are listed (see below).&lt;br /&gt;
&lt;br /&gt;
'''Filter''' lets you type keywords so that only test cases whose tags contain those keywords are shown (see below). It is a drop-down, so predefined filter patterns can be used (such as ''outcome.fail'').&lt;br /&gt;
&lt;br /&gt;
The '''grid''' contains a tree view of all test cases (test cases are always leaves). Multiple columns provide further information. Currently there are two indications of whether a test fails or not (column and icons). Obviously only one is needed - both are shown just to see the difference. The advantage of using icons is that less space is needed. Coloring the background of a row containing a failing test case would be an option as well.&lt;br /&gt;
&lt;br /&gt;
{{Note|The tools should support multiple selection. This is important for executing a number of selected test routines, showing past execution results, etc. Also, when selecting e.g. a class node, it should execute all leaves below that node.}}&lt;br /&gt;
&lt;br /&gt;
==== Tags ====&lt;br /&gt;
&lt;br /&gt;
Each test can have a number of tags. A tag can be a single string or hierarchically structured with dots ('.'). For example, a test with the tag ''covers.STRING.append'' is a regression test for {STRING}.append. There are a number of implicit tags for each test, such as the ''name'' tag ({TEST_STRING}.test_append has the implicit tag ''name.TEST_STRING.test_append'').&lt;br /&gt;
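As a sketch, a test routine can carry several tags at once in its indexing clause (hypothetical: the tag ''manual.smoke'' is invented for illustration, and the routine assumes the '''TEST_STRING''' context from the example above):&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    test_append_empty&lt;br /&gt;
        indexing&lt;br /&gt;
            testing: &amp;quot;covers.STRING.append, manual.smoke&amp;quot;&lt;br /&gt;
        do&lt;br /&gt;
            s.append (&amp;quot;&amp;quot;)&lt;br /&gt;
            assert_true (&amp;quot;still_empty&amp;quot;, s.is_empty)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
Such a test would additionally get the implicit tag ''name.TEST_STRING.test_append_empty''.&lt;br /&gt;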
&lt;br /&gt;
==== Different views ====&lt;br /&gt;
&lt;br /&gt;
Based on the notion of tags, we are able to define different views. The default view ''Test sets'' simply shows a hierarchical tree for every ''name.X'' tag. This enables us to define more views, such as ''Class tested'', which displays every ''covers.X'' tag. Note that with tags other than ''name.'', some tests might get listed multiple times, while others not carrying such a tag must be listed explicitly. The main advantage is that users can define their own views based on any type of tag.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Running tests ====&lt;br /&gt;
[[Image:testing_run-menu.jpg]]&lt;br /&gt;
&lt;br /&gt;
The '''run''' menu provides different options for running tests in the background:&lt;br /&gt;
&lt;br /&gt;
* Run all tests in system&lt;br /&gt;
* Run currently failing ones&lt;br /&gt;
* Run tests for classes last modified (better description needed here)&lt;br /&gt;
* Only run tests shown below&lt;br /&gt;
* Only run tests which are selected below&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Note|We should have two different views for displaying testing history. One structured by test sessions (list of test execution containing all test routines for each session) and one listing recent executions for a single test routine.}}&lt;br /&gt;
&lt;br /&gt;
=== Generate tests automatically ===&lt;br /&gt;
&lt;br /&gt;
[[Image:testing_generate-menu.jpg]]&lt;br /&gt;
&lt;br /&gt;
The '''generate''' menu lets you generate new tests for all classes in the system (randomly picked?) or for classes which were last modified.&lt;br /&gt;
&lt;br /&gt;
=== Extract tests from a running application ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is a simple example of an extracted test case (note that '''EXTRACTED_TEST_SET''' inherits from '''TEST_SET''' and implements all functionality for executing an extracted test).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
class&lt;br /&gt;
    TEST_STRING_001&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    EXTRACTED_TEST_SET&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Initialization&lt;br /&gt;
&lt;br /&gt;
    set_up_routine is&lt;br /&gt;
            -- &amp;lt;Precursor&amp;gt;&lt;br /&gt;
        do&lt;br /&gt;
            routine_under_test := agent {STRING}.append_integer&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_append_integer is&lt;br /&gt;
            -- Call `routine_under_test' with input provided by `context'.&lt;br /&gt;
        indexing&lt;br /&gt;
            tag: &amp;quot;covers.STRING.append_integer&amp;quot;&lt;br /&gt;
        do&lt;br /&gt;
            call_routine_under_test&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Access&lt;br /&gt;
&lt;br /&gt;
    context: ARRAY [TUPLE [id: STRING; type: STRING; inv: BOOLEAN; attributes: ARRAY [STRING]]] is&lt;br /&gt;
            -- &amp;lt;Precursor&amp;gt;&lt;br /&gt;
        once&lt;br /&gt;
            Result := &amp;lt;&amp;lt;&lt;br /&gt;
                [&amp;quot;#operand&amp;quot;, &amp;quot;TUPLE [STRING, INTEGER]&amp;quot;, True, &amp;lt;&amp;lt; &amp;quot;#2&amp;quot;, &amp;quot;110&amp;quot; &amp;gt;&amp;gt;],&lt;br /&gt;
                [&amp;quot;#2&amp;quot;, &amp;quot;STRING&amp;quot;, True, &amp;lt;&amp;lt; &amp;quot;this is an integer: &amp;quot; &amp;gt;&amp;gt;]&lt;br /&gt;
            &amp;gt;&amp;gt;&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end -- class TEST_STRING_001&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''EXTRACTED_TEST_SET''' implements '''set_up''' (frozen), but has a deferred feature '''set_up_routine''' which assigns the proper agent to '''routine_under_test'''. This basically replaces the missing reflection functionality for calling features. '''context''' is also deferred in '''EXTRACTED_TEST_SET''' and contains all data from the heap and call stack which was reachable by the routine at extraction time. Each TUPLE represents an object, where `inv' defines whether the object should fulfill its invariant or not (if the object was on the stack at extraction time, this does not have to be the case).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This design lets us keep the entire test in one file. This is especially practical when a user submits such a test as a bug report after experiencing a crash.&lt;br /&gt;
The current drawback is that the design only allows one test per class, mainly because of the set_up procedure: all objects in '''context''' must be created during set_up. If that fails, set_up is blamed instead of the actual test routine, which makes the test invalid rather than failing. This can happen e.g. if one of the objects in the context does not fulfill its invariant, which in turn could result from simply editing the class being tested. Any suggestions welcome!&lt;br /&gt;
&lt;br /&gt;
=== Background test execution ===&lt;br /&gt;
== Open questions ==&lt;br /&gt;
(This section should disappear as the questions get answered.)&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
* [[Testing Tool (Architecture)]]&lt;br /&gt;
* [[Eweasel]]&lt;br /&gt;
* [[CddBranch]]&lt;br /&gt;
* [[Eiffel Testing Tool]]&lt;/div&gt;</summary>
		<author><name>Arnofiva</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=Testing_Tool_(Specification)&amp;diff=11198</id>
		<title>Testing Tool (Specification)</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=Testing_Tool_(Specification)&amp;diff=11198"/>
				<updated>2008-06-11T23:38:09Z</updated>
		
		<summary type="html">&lt;p&gt;Arnofiva: Added note for displaying test history&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Testing]]&lt;br /&gt;
&lt;br /&gt;
{{UnderConstruction}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Main functionalities ==&lt;br /&gt;
&lt;br /&gt;
=== Add unit/system level tests ===&lt;br /&gt;
&lt;br /&gt;
Semantically there is no difference between unit tests and system level tests; all tests can therefore be written in Eiffel in a uniform way.&lt;br /&gt;
&lt;br /&gt;
A test is a routine whose name has the prefix '''test''' in a class inheriting from '''TEST_SET'''. In general, features in classes specifically used for testing should be exported at most to {TESTING_CLASS}. This prevents testing code from remaining in a finalized system. If you write a helper class for your test routines, let it inherit from '''TESTING_CLASS''' (Note: '''TEST_SET''' already inherits from '''TESTING_CLASS'''). Additionally, you should make leaf test sets frozen and make sure you never directly reference testing classes in your project code.&lt;br /&gt;
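For instance, a minimal helper class might look as follows (a hedged sketch: the name '''TEST_STRING_HELPER''' and its feature are invented for illustration):&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
class&lt;br /&gt;
    TEST_STRING_HELPER&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    TESTING_CLASS&lt;br /&gt;
&lt;br /&gt;
feature {TESTING_CLASS} -- Access&lt;br /&gt;
&lt;br /&gt;
    sample_string: STRING&lt;br /&gt;
            -- New sample string used as common test input.&lt;br /&gt;
        do&lt;br /&gt;
            Result := &amp;quot;sample&amp;quot;&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;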
&lt;br /&gt;
&lt;br /&gt;
==== System level test specifics ====&lt;br /&gt;
&lt;br /&gt;
Since system level testing often relies on external items like files, '''SYSTEM_LEVEL_TEST_SET''' provides a number of helper routines for accessing them. These classes will probably have to be in a special testing library, since they also make use of other libraries such as the process library.&lt;br /&gt;
&lt;br /&gt;
==== Config file ====&lt;br /&gt;
&lt;br /&gt;
For each target in a configuration file you may define a testing folder in which test classes will be created. Other files needed for testing can be put there as well. It could be included as a special type of cluster, so classes in that folder will be compiled.&lt;br /&gt;
&lt;br /&gt;
Proposal: '''/&amp;quot;location_of_ecf_file&amp;quot;/testing/&amp;quot;target_name&amp;quot;'''&lt;br /&gt;
&lt;br /&gt;
{{Note| Some cases require a special testing folder when automatically creating new test cases (e.g. in a writable library, since the tests might use classes which are not visible from the library). The test executor could use that folder to place log files. System level tests also rely on a location for files (such as text files containing the expected output).}}&lt;br /&gt;
&lt;br /&gt;
==== Additional information ====&lt;br /&gt;
&lt;br /&gt;
The indexing clause can be used to specify which classes and routines are tested by the test routine. Any specifications in the class indexing clause will apply to all tests in that class. Note '''testing_covers''' in the following examples.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Examples ====&lt;br /&gt;
&lt;br /&gt;
Example unit tests '''test_append''' and '''test_boolean'''&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
frozen class TEST_STRING&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    TEST_SET&lt;br /&gt;
        redefine&lt;br /&gt;
            set_up&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Initialization&lt;br /&gt;
&lt;br /&gt;
    set_up&lt;br /&gt;
        do&lt;br /&gt;
            create s.make (10)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Access&lt;br /&gt;
&lt;br /&gt;
    s: STRING&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_append&lt;br /&gt;
        indexing&lt;br /&gt;
            testing: &amp;quot;covers.STRING.append&amp;quot;&lt;br /&gt;
        require&lt;br /&gt;
            set_up: s /= Void and then s.is_empty&lt;br /&gt;
        do&lt;br /&gt;
            s.append (&amp;quot;12345&amp;quot;)&lt;br /&gt;
            assert_string_equality (&amp;quot;append&amp;quot;, s, &amp;quot;12345&amp;quot;)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
    test_boolean&lt;br /&gt;
        indexing&lt;br /&gt;
            testing_covers: &amp;quot;covers.STRING.is_boolean, covers.STRING.to_boolean&amp;quot;&lt;br /&gt;
        require&lt;br /&gt;
            set_up: s /= Void and then s.is_empty&lt;br /&gt;
        do&lt;br /&gt;
            s.append (&amp;quot;True&amp;quot;)&lt;br /&gt;
            assert_true (&amp;quot;boolean&amp;quot;, s.is_boolean and then s.to_boolean)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example system level test '''test_version''' (Note: '''SYSTEM_LEVEL_TEST_SET''' inherits from '''TEST_SET''' and provides basic functionality for executing external commands, including the system currently under development):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
indexing&lt;br /&gt;
    testing_covers: &amp;quot;all&amp;quot;&lt;br /&gt;
&lt;br /&gt;
frozen class TEST_MY_APP&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    SYSTEM_LEVEL_TEST_SET&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_version&lt;br /&gt;
        do&lt;br /&gt;
            run_system_with_args (&amp;quot;--version&amp;quot;)&lt;br /&gt;
            assert_string_equality (&amp;quot;version&amp;quot;, last_output, &amp;quot;my_app version 0.1&amp;quot;)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Manage and run test suite ===&lt;br /&gt;
&lt;br /&gt;
[[Image:testing_set-view.png|right|400px]]&lt;br /&gt;
[[Image:testing_cut-view.png|right|400px]]&lt;br /&gt;
&lt;br /&gt;
The tool should have its own icon for displaying test cases (test routines). In this example it is a Lego block. Especially for views like ''list all tests for this routine'', it is important to see the difference between the actual routine and its tests. The tool also has more of a vertical layout: since the number of tests is comparable to the number of classes in the system, it makes sense for the tools to have the same layout. It also allows tabs at the bottom for displaying further information, such as execution details (output, call stack, etc.).&lt;br /&gt;
&lt;br /&gt;
The '''menu bar''' includes the following buttons:&lt;br /&gt;
* Create new manual test case (opens wizard)&lt;br /&gt;
** if a test class is dropped on the button, the wizard will suggest creating a new test in that class&lt;br /&gt;
** if a normal class (or feature) is dropped on the button, the wizard will suggest creating a test for that class (or feature)&lt;br /&gt;
* Menu for generating new tests (defaults to the last chosen one?)&lt;br /&gt;
** if a normal class/feature is dropped on the button, generate tests for that class/feature&lt;br /&gt;
&lt;br /&gt;
* Menu for executing tests in the background (defaults to the last chosen one?)&lt;br /&gt;
** if any class/feature is dropped on the button, run the tests associated with that class/feature&lt;br /&gt;
* Run test in debugger (a test must be selected or dropped on the button to start)&lt;br /&gt;
* Stop any execution (background or debugger)&lt;br /&gt;
&lt;br /&gt;
* Opens settings dialog for testing&lt;br /&gt;
&lt;br /&gt;
* Status indicating how many tests we have run so far and&lt;br /&gt;
* how many failing ones there are&lt;br /&gt;
&lt;br /&gt;
'''View''' defines in which way the test cases are listed (see below).&lt;br /&gt;
&lt;br /&gt;
'''Filter''' lets you type keywords so that only test cases whose tags contain those keywords are shown (see below). It is a drop-down, so predefined filter patterns can be used (such as ''outcome.fail'').&lt;br /&gt;
&lt;br /&gt;
The '''grid''' contains a tree view of all test cases (test cases are always leaves). Multiple columns provide further information. Currently there are two indications of whether a test fails or not (column and icons). Obviously only one is needed - both are shown just to see the difference. The advantage of using icons is that less space is needed. Coloring the background of a row containing a failing test case would be an option as well.&lt;br /&gt;
&lt;br /&gt;
==== Tags ====&lt;br /&gt;
&lt;br /&gt;
Each test can have a number of tags. A tag can be a single string or hierarchically structured with dots ('.'). For example, a test with the tag ''covers.STRING.append'' is a regression test for {STRING}.append. There are a number of implicit tags for each test, such as the ''name'' tag ({TEST_STRING}.test_append has the implicit tag ''name.TEST_STRING.test_append'').&lt;br /&gt;
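As a sketch, a test routine can carry several tags at once in its indexing clause (hypothetical: the tag ''manual.smoke'' is invented for illustration, and the routine assumes the '''TEST_STRING''' context from the example above):&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    test_append_empty&lt;br /&gt;
        indexing&lt;br /&gt;
            testing: &amp;quot;covers.STRING.append, manual.smoke&amp;quot;&lt;br /&gt;
        do&lt;br /&gt;
            s.append (&amp;quot;&amp;quot;)&lt;br /&gt;
            assert_true (&amp;quot;still_empty&amp;quot;, s.is_empty)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
Such a test would additionally get the implicit tag ''name.TEST_STRING.test_append_empty''.&lt;br /&gt;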
&lt;br /&gt;
==== Different views ====&lt;br /&gt;
&lt;br /&gt;
Based on the notion of tags, we are able to define different views. The default view ''Test sets'' simply shows a hierarchical tree for every ''name.X'' tag. This enables us to define more views, such as ''Class tested'', which displays every ''covers.X'' tag. Note that with tags other than ''name.'', some tests might get listed multiple times, while others not carrying such a tag must be listed explicitly. The main advantage is that users can define their own views based on any type of tag.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Running tests ====&lt;br /&gt;
[[Image:testing_run-menu.jpg]]&lt;br /&gt;
&lt;br /&gt;
The '''run''' menu provides different options for running tests in the background:&lt;br /&gt;
&lt;br /&gt;
* Run all tests in system&lt;br /&gt;
* Run currently failing ones&lt;br /&gt;
* Run tests for classes last modified (better description needed here)&lt;br /&gt;
* Only run tests shown below&lt;br /&gt;
* Only run tests which are selected below&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Note|We should have two different views for displaying testing history. One structured by test sessions (list of test execution containing all test routines for each session) and one listing recent executions for a single test routine.}}&lt;br /&gt;
&lt;br /&gt;
=== Generate tests automatically ===&lt;br /&gt;
&lt;br /&gt;
[[Image:testing_generate-menu.jpg]]&lt;br /&gt;
&lt;br /&gt;
The '''generate''' menu lets you generate new tests for all classes in the system (randomly picked?) or for classes which were last modified.&lt;br /&gt;
&lt;br /&gt;
=== Extract tests from a running application ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is a simple example of an extracted test case (note that '''EXTRACTED_TEST_SET''' inherits from '''TEST_SET''' and implements all functionality for executing an extracted test).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
class&lt;br /&gt;
    TEST_STRING_001&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    EXTRACTED_TEST_SET&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Initialization&lt;br /&gt;
&lt;br /&gt;
    set_up_routine is&lt;br /&gt;
            -- &amp;lt;Precursor&amp;gt;&lt;br /&gt;
        do&lt;br /&gt;
            routine_under_test := agent {STRING}.append_integer&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_append_integer is&lt;br /&gt;
            -- Call `routine_under_test' with input provided by `context'.&lt;br /&gt;
        indexing&lt;br /&gt;
            tag: &amp;quot;covers.STRING.append_integer&amp;quot;&lt;br /&gt;
        do&lt;br /&gt;
            call_routine_under_test&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Access&lt;br /&gt;
&lt;br /&gt;
    context: ARRAY [TUPLE [id: STRING; type: STRING; inv: BOOLEAN; attributes: ARRAY [STRING]]] is&lt;br /&gt;
            -- &amp;lt;Precursor&amp;gt;&lt;br /&gt;
        once&lt;br /&gt;
            Result := &amp;lt;&amp;lt;&lt;br /&gt;
                [&amp;quot;#operand&amp;quot;, &amp;quot;TUPLE [STRING, INTEGER]&amp;quot;, True, &amp;lt;&amp;lt; &amp;quot;#2&amp;quot;, &amp;quot;110&amp;quot; &amp;gt;&amp;gt;],&lt;br /&gt;
                [&amp;quot;#2&amp;quot;, &amp;quot;STRING&amp;quot;, True, &amp;lt;&amp;lt; &amp;quot;this is an integer: &amp;quot; &amp;gt;&amp;gt;]&lt;br /&gt;
            &amp;gt;&amp;gt;&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end -- class TEST_STRING_001&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''EXTRACTED_TEST_SET''' implements '''set_up''' (frozen), but has a deferred feature '''set_up_routine''' which assigns the proper agent to '''routine_under_test'''. This basically replaces the missing reflection functionality for calling features. '''context''' is also deferred in '''EXTRACTED_TEST_SET''' and contains all data from the heap and call stack which was reachable by the routine at extraction time. Each TUPLE represents an object, where `inv' defines whether the object should fulfill its invariant or not (if the object was on the stack at extraction time, this does not have to be the case).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This design lets us keep the entire test in one file. This is especially practical when a user submits such a test as a bug report after experiencing a crash.&lt;br /&gt;
The current drawback is that the design only allows one test per class, mainly because of the set_up procedure: all objects in '''context''' must be created during set_up. If that fails, set_up is blamed instead of the actual test routine, which makes the test invalid rather than failing. This can happen e.g. if one of the objects in the context does not fulfill its invariant, which in turn could result from simply editing the class being tested. Any suggestions welcome!&lt;br /&gt;
&lt;br /&gt;
=== Background test execution ===&lt;br /&gt;
== Open questions ==&lt;br /&gt;
(This section should disappear as the questions get answered.)&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
* [[Testing Tool (Architecture)]]&lt;br /&gt;
* [[Eweasel]]&lt;br /&gt;
* [[CddBranch]]&lt;br /&gt;
* [[Eiffel Testing Tool]]&lt;/div&gt;</summary>
		<author><name>Arnofiva</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=Testing_Tool_(Specification)&amp;diff=11197</id>
		<title>Testing Tool (Specification)</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=Testing_Tool_(Specification)&amp;diff=11197"/>
				<updated>2008-06-11T23:31:26Z</updated>
		
		<summary type="html">&lt;p&gt;Arnofiva: /* System level test specifics */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Testing]]&lt;br /&gt;
&lt;br /&gt;
{{UnderConstruction}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Main functionalities ==&lt;br /&gt;
&lt;br /&gt;
=== Add unit/system level tests ===&lt;br /&gt;
&lt;br /&gt;
Semantically there is no difference between unit tests and system level tests. This way all tests can be written in Eiffel in a conforming way.&lt;br /&gt;
&lt;br /&gt;
A test is a routine whose name has the prefix '''test''' in a class inheriting from '''TEST_SET'''. In general, features in classes used specifically for testing should be exported at most to {TESTING_CLASS}. This prevents testing code from remaining in a finalized system. If you write a helper class for your test routines, let it inherit from '''TESTING_CLASS''' (Note: '''TEST_SET''' already inherits from '''TESTING_CLASS'''). Additionally, you should make leaf test sets frozen and make sure you never directly reference testing classes in your project code.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== System level test specifics ====&lt;br /&gt;
&lt;br /&gt;
Since system level testing often relies on external items like files, '''SYSTEM_LEVEL_TEST_SET''' provides a number of helper routines accessing them. These classes will probably have to be in a special testing library, since they also make use of other libraries such as the process library.&lt;br /&gt;
&lt;br /&gt;
==== Config file ====&lt;br /&gt;
&lt;br /&gt;
For each target in a configuration file you may define a testing folder in which test classes will be created. Other files needed for testing can be put there as well. The folder could be included as a special type of cluster, so that classes in it are compiled.&lt;br /&gt;
&lt;br /&gt;
Proposal: '''/&amp;quot;location_of_ecf_file&amp;quot;/testing/&amp;quot;target_name&amp;quot;'''&lt;br /&gt;
&lt;br /&gt;
{{Note| Some cases require a special testing folder when automatically creating new test cases (e.g. in a writable library, since a test might use classes which are not visible from the library). The test executor could use that folder to place log files. System level tests also rely on a location for files (such as text files containing the expected output).}}&lt;br /&gt;
&lt;br /&gt;
==== Additional information ====&lt;br /&gt;
&lt;br /&gt;
The indexing clause can be used to specify which classes and routines are tested by the test routine. Any specifications in the class indexing clause will apply to all tests in that class. Note '''testing_covers''' in the following examples.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Examples ====&lt;br /&gt;
&lt;br /&gt;
Example unit tests '''test_append''' and '''test_boolean'''&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
frozen class TEST_STRING&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    TEST_SET&lt;br /&gt;
        redefine&lt;br /&gt;
            set_up&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Initialization&lt;br /&gt;
&lt;br /&gt;
    set_up&lt;br /&gt;
        do&lt;br /&gt;
            create s.make (10)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Access&lt;br /&gt;
&lt;br /&gt;
    s: STRING&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_append&lt;br /&gt;
        indexing&lt;br /&gt;
            testing: &amp;quot;covers.STRING.append&amp;quot;&lt;br /&gt;
        require&lt;br /&gt;
            set_up: s /= Void and then s.is_empty&lt;br /&gt;
        do&lt;br /&gt;
            s.append (&amp;quot;12345&amp;quot;)&lt;br /&gt;
            assert_string_equality (&amp;quot;append&amp;quot;, s, &amp;quot;12345&amp;quot;)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
    test_boolean&lt;br /&gt;
        indexing&lt;br /&gt;
            testing_covers: &amp;quot;covers.STRING.is_boolean, covers.STRING.to_boolean&amp;quot;&lt;br /&gt;
        require&lt;br /&gt;
            set_up: s /= Void and then s.is_empty&lt;br /&gt;
        do&lt;br /&gt;
            s.append (&amp;quot;True&amp;quot;)&lt;br /&gt;
            assert_true (&amp;quot;boolean&amp;quot;, s.is_boolean and then s.to_boolean)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example system level test '''test_version''' (Note: '''SYSTEM_LEVEL_TEST_SET''' inherits from '''TEST_SET''' and provides basic functionality for executing external commands, including the system currently under development):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
indexing&lt;br /&gt;
    testing_covers: &amp;quot;all&amp;quot;&lt;br /&gt;
&lt;br /&gt;
frozen class TEST_MY_APP&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    SYSTEM_LEVEL_TEST_SET&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_version&lt;br /&gt;
        do&lt;br /&gt;
            run_system_with_args (&amp;quot;--version&amp;quot;)&lt;br /&gt;
            assert_string_equality (&amp;quot;version&amp;quot;, last_output, &amp;quot;my_app version 0.1&amp;quot;)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Manage and run test suite ===&lt;br /&gt;
&lt;br /&gt;
[[Image:testing_set-view.png|right|400px]]&lt;br /&gt;
[[Image:testing_cut-view.png|right|400px]]&lt;br /&gt;
&lt;br /&gt;
The tool should have its own icon for displaying test cases (test routines); in this example it is a Lego block. Especially for views like ''list all tests for this routine'', it is important to see the difference between the actual routine and its tests. The tool also has more of a vertical layout: since the number of tests is comparable to the number of classes in the system, it makes sense for the tools to share the same layout. It also allows tabs at the bottom for displaying further information, such as execution details (output, call stack, etc.).&lt;br /&gt;
&lt;br /&gt;
The '''menu bar''' includes the following buttons:&lt;br /&gt;
* Create new manual test case (opens wizard)&lt;br /&gt;
** if test class is dropped on button, the wizard will suggest to create new test in that class&lt;br /&gt;
** if normal class (or feature) is dropped on button, wizard will suggest to create test for the class (or feature)&lt;br /&gt;
* Menu for generating new test (defaults to last chosen one?)&lt;br /&gt;
** if normal class/feature is dropped on button, generate tests for that class/feature&lt;br /&gt;
&lt;br /&gt;
* Menu for executing tests in background (defaults to last chosen one?)&lt;br /&gt;
** if any class/feature is dropped on button, run tests associated with class/feature&lt;br /&gt;
* Run test in debugger (must have a test selected or dropped on button to start)&lt;br /&gt;
* Stop any execution (background or debugger)&lt;br /&gt;
&lt;br /&gt;
* Opens settings dialog for testing&lt;br /&gt;
&lt;br /&gt;
* Status indicating how many tests we have run so far and&lt;br /&gt;
* how many failing ones there are&lt;br /&gt;
&lt;br /&gt;
'''View''' defines in which way the test cases are listed (see below).&lt;br /&gt;
&lt;br /&gt;
'''Filter''' lets you type keywords so that only test cases whose tags include those keywords are shown (see below). It is a drop-down, so predefined filter patterns (such as ''outcome.fail'') can be used as well.&lt;br /&gt;
&lt;br /&gt;
The '''grid''' contains a tree view of all test cases (test cases are always in leaves), with multiple columns for more information. Currently there are two indications of whether a test fails (a column and icons); only one is needed - both are shown here just to compare them. The advantage of using icons is that less space is needed. Coloring the background of a row containing a failing test case would be an option as well.&lt;br /&gt;
&lt;br /&gt;
==== Tags ====&lt;br /&gt;
&lt;br /&gt;
Each test can have a number of tags. A tag can be a single string or hierarchically structured with dots ('.'). For example, a test with the tag ''covers.STRING.append'' is a regression test for {STRING}.append. There are a number of implicit tags for each test, such as the ''name'' tag ({TEST_STRING}.test_append has the implicit tag ''name.TEST_STRING.test_append'').&lt;br /&gt;
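&lt;br /&gt;
Revisiting '''test_append''' from the examples above, additional custom tags could be attached alongside the ''covers'' tag. This is only an illustrative sketch; the ''speed.fast'' tag is hypothetical, not a tag predefined by the tool:&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    test_append&lt;br /&gt;
        indexing&lt;br /&gt;
            -- Implicit tag name.TEST_STRING.test_append is added automatically.&lt;br /&gt;
            testing: &amp;quot;covers.STRING.append, speed.fast&amp;quot;&lt;br /&gt;
        require&lt;br /&gt;
            set_up: s /= Void and then s.is_empty&lt;br /&gt;
        do&lt;br /&gt;
            s.append (&amp;quot;12345&amp;quot;)&lt;br /&gt;
            assert_string_equality (&amp;quot;append&amp;quot;, s, &amp;quot;12345&amp;quot;)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;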
&lt;br /&gt;
==== Different views ====&lt;br /&gt;
&lt;br /&gt;
Based on the notion of tags, we are able to define different views. The default view ''Test sets'' simply shows a hierarchical tree of every ''name.X'' tag. This enables us to define more views, such as ''Class tested'', which displays every ''covers.X'' tag. Note that with tags other than ''name.'', some tests might be listed multiple times, while others not carrying such a tag must be listed explicitly. The main advantage is that users can define their own views based on any type of tag.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Running tests ====&lt;br /&gt;
[[Image:testing_run-menu.jpg]]&lt;br /&gt;
&lt;br /&gt;
The '''run''' menu provides different options for running tests in the background:&lt;br /&gt;
&lt;br /&gt;
* Run all tests in system&lt;br /&gt;
* Run currently failing ones&lt;br /&gt;
* Run tests for classes last modified (better description needed here)&lt;br /&gt;
* Only run tests shown below&lt;br /&gt;
* Only run tests which are selected below&lt;br /&gt;
&lt;br /&gt;
=== Generate tests automatically ===&lt;br /&gt;
&lt;br /&gt;
[[Image:testing_generate-menu.jpg]]&lt;br /&gt;
&lt;br /&gt;
The '''generate''' menu lets you generate new tests for all classes in the system (randomly picked?) or for classes which were last modified.&lt;br /&gt;
&lt;br /&gt;
=== Extract tests from a running application ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is a simple example of an extracted test case (note that '''EXTRACTED_TEST_SET''' inherits from '''TEST_SET''' and implements all the functionality for executing an extracted test).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
class&lt;br /&gt;
    TEST_STRING_001&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    EXTRACTED_TEST_SET&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Initialization&lt;br /&gt;
&lt;br /&gt;
    set_up_routine is&lt;br /&gt;
            -- &amp;lt;Precursor&amp;gt;&lt;br /&gt;
        do&lt;br /&gt;
            routine_under_test := agent {STRING}.append_integer&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_append_integer is&lt;br /&gt;
            -- Call `routine_under_test' with input provided by `context'.&lt;br /&gt;
        indexing&lt;br /&gt;
            tag: &amp;quot;covers.STRING.append_integer&amp;quot;&lt;br /&gt;
        do&lt;br /&gt;
            call_routine_under_test&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Access&lt;br /&gt;
&lt;br /&gt;
    context: ARRAY [TUPLE [id: STRING; type: STRING; inv: BOOLEAN; attributes: ARRAY [STRING]]] is&lt;br /&gt;
            -- &amp;lt;Precursor&amp;gt;&lt;br /&gt;
        once&lt;br /&gt;
            Result := &amp;lt;&amp;lt;&lt;br /&gt;
                [&amp;quot;#operand&amp;quot;, &amp;quot;TUPLE [STRING, INTEGER]&amp;quot;, True, &amp;lt;&amp;lt; &amp;quot;#2&amp;quot;, &amp;quot;110&amp;quot; &amp;gt;&amp;gt;],&lt;br /&gt;
                [&amp;quot;#2&amp;quot;, &amp;quot;STRING&amp;quot;, True, &amp;lt;&amp;lt; &amp;quot;this is an integer: &amp;quot; &amp;gt;&amp;gt;]&lt;br /&gt;
            &amp;gt;&amp;gt;&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end -- class TEST_STRING_001&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''EXTRACTED_TEST_SET''' implements '''set_up''' (frozen), but has a deferred feature '''set_up_routine''' which assigns the proper agent to '''routine_under_test'''. This essentially substitutes for the missing reflection functionality for calling features. '''context''' is also deferred in '''EXTRACTED_TEST_SET''' and contains all data from the heap and call stack that was reachable by the routine at extraction time. Each TUPLE represents an object, where `inv' defines whether the object has to fulfill its invariant (if the object was on the stack at extraction time, this need not be the case).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This design lets us keep the entire test in one file, which is especially practical when a user submits such a test as a bug report after experiencing a crash.&lt;br /&gt;
The current drawback is that the design only allows one test per class, mainly because of the set_up procedure: all objects in '''context''' must be created during set_up. If creation fails, set_up is blamed instead of the actual test routine, which makes the test invalid rather than failing. This can happen, e.g., if one of the objects in the context does not fulfill its invariant, which in turn could result from simply editing the class being tested. Any suggestions welcome!&lt;br /&gt;
&lt;br /&gt;
=== Background test execution ===&lt;br /&gt;
== Open questions ==&lt;br /&gt;
(This section should disappear as the questions get answered.)&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
* [[Testing Tool (Architecture)]]&lt;br /&gt;
* [[Eweasel]]&lt;br /&gt;
* [[CddBranch]]&lt;br /&gt;
* [[Eiffel Testing Tool]]&lt;/div&gt;</summary>
		<author><name>Arnofiva</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=Testing_Tool_(Specification)&amp;diff=11196</id>
		<title>Testing Tool (Specification)</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=Testing_Tool_(Specification)&amp;diff=11196"/>
				<updated>2008-06-11T23:28:37Z</updated>
		
		<summary type="html">&lt;p&gt;Arnofiva: /* Config file */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Testing]]&lt;br /&gt;
&lt;br /&gt;
{{UnderConstruction}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Main functionalities ==&lt;br /&gt;
&lt;br /&gt;
=== Add unit/system level tests ===&lt;br /&gt;
&lt;br /&gt;
Semantically there is no difference between unit tests and system level tests. This way all tests can be written in Eiffel in a conforming way.&lt;br /&gt;
&lt;br /&gt;
A test is a routine whose name has the prefix '''test''' in a class inheriting from '''TEST_SET'''. In general, features in classes used specifically for testing should be exported at most to {TESTING_CLASS}. This prevents testing code from remaining in a finalized system. If you write a helper class for your test routines, let it inherit from '''TESTING_CLASS''' (Note: '''TEST_SET''' already inherits from '''TESTING_CLASS'''). Additionally, you should make leaf test sets frozen and make sure you never directly reference testing classes in your project code.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== System level test specifics ====&lt;br /&gt;
&lt;br /&gt;
Since system level testing often relies on external items like files, '''SYSTEM_LEVEL_TEST_SET''' provides a number of helper routines accessing them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Config file ====&lt;br /&gt;
&lt;br /&gt;
For each target in a configuration file you may define a testing folder in which test classes will be created. Other files needed for testing can be put there as well. The folder could be included as a special type of cluster, so that classes in it are compiled.&lt;br /&gt;
&lt;br /&gt;
Proposal: '''/&amp;quot;location_of_ecf_file&amp;quot;/testing/&amp;quot;target_name&amp;quot;'''&lt;br /&gt;
&lt;br /&gt;
{{Note| Some cases require a special testing folder when automatically creating new test cases (e.g. in a writable library, since a test might use classes which are not visible from the library). The test executor could use that folder to place log files. System level tests also rely on a location for files (such as text files containing the expected output).}}&lt;br /&gt;
&lt;br /&gt;
==== Additional information ====&lt;br /&gt;
&lt;br /&gt;
The indexing clause can be used to specify which classes and routines are tested by the test routine. Any specifications in the class indexing clause will apply to all tests in that class. Note '''testing_covers''' in the following examples.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Examples ====&lt;br /&gt;
&lt;br /&gt;
Example unit tests '''test_append''' and '''test_boolean'''&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
frozen class TEST_STRING&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    TEST_SET&lt;br /&gt;
        redefine&lt;br /&gt;
            set_up&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Initialization&lt;br /&gt;
&lt;br /&gt;
    set_up&lt;br /&gt;
        do&lt;br /&gt;
            create s.make (10)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Access&lt;br /&gt;
&lt;br /&gt;
    s: STRING&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_append&lt;br /&gt;
        indexing&lt;br /&gt;
            testing: &amp;quot;covers.STRING.append&amp;quot;&lt;br /&gt;
        require&lt;br /&gt;
            set_up: s /= Void and then s.is_empty&lt;br /&gt;
        do&lt;br /&gt;
            s.append (&amp;quot;12345&amp;quot;)&lt;br /&gt;
            assert_string_equality (&amp;quot;append&amp;quot;, s, &amp;quot;12345&amp;quot;)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
    test_boolean&lt;br /&gt;
        indexing&lt;br /&gt;
            testing_covers: &amp;quot;covers.STRING.is_boolean, covers.STRING.to_boolean&amp;quot;&lt;br /&gt;
        require&lt;br /&gt;
            set_up: s /= Void and then s.is_empty&lt;br /&gt;
        do&lt;br /&gt;
            s.append (&amp;quot;True&amp;quot;)&lt;br /&gt;
            assert_true (&amp;quot;boolean&amp;quot;, s.is_boolean and then s.to_boolean)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example system level test '''test_version''' (Note: '''SYSTEM_LEVEL_TEST_SET''' inherits from '''TEST_SET''' and provides basic functionality for executing external commands, including the system currently under development):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
indexing&lt;br /&gt;
    testing_covers: &amp;quot;all&amp;quot;&lt;br /&gt;
&lt;br /&gt;
frozen class TEST_MY_APP&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    SYSTEM_LEVEL_TEST_SET&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_version&lt;br /&gt;
        do&lt;br /&gt;
            run_system_with_args (&amp;quot;--version&amp;quot;)&lt;br /&gt;
            assert_string_equality (&amp;quot;version&amp;quot;, last_output, &amp;quot;my_app version 0.1&amp;quot;)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Manage and run test suite ===&lt;br /&gt;
&lt;br /&gt;
[[Image:testing_set-view.png|right|400px]]&lt;br /&gt;
[[Image:testing_cut-view.png|right|400px]]&lt;br /&gt;
&lt;br /&gt;
The tool should have its own icon for displaying test cases (test routines); in this example it is a Lego block. Especially for views like ''list all tests for this routine'', it is important to see the difference between the actual routine and its tests. The tool also has more of a vertical layout: since the number of tests is comparable to the number of classes in the system, it makes sense for the tools to share the same layout. It also allows tabs at the bottom for displaying further information, such as execution details (output, call stack, etc.).&lt;br /&gt;
&lt;br /&gt;
The '''menu bar''' includes the following buttons:&lt;br /&gt;
* Create new manual test case (opens wizard)&lt;br /&gt;
** if test class is dropped on button, the wizard will suggest to create new test in that class&lt;br /&gt;
** if normal class (or feature) is dropped on button, wizard will suggest to create test for the class (or feature)&lt;br /&gt;
* Menu for generating new test (defaults to last chosen one?)&lt;br /&gt;
** if normal class/feature is dropped on button, generate tests for that class/feature&lt;br /&gt;
&lt;br /&gt;
* Menu for executing tests in background (defaults to last chosen one?)&lt;br /&gt;
** if any class/feature is dropped on button, run tests associated with class/feature&lt;br /&gt;
* Run test in debugger (must have a test selected or dropped on button to start)&lt;br /&gt;
* Stop any execution (background or debugger)&lt;br /&gt;
&lt;br /&gt;
* Opens settings dialog for testing&lt;br /&gt;
&lt;br /&gt;
* Status indicating how many tests we have run so far and&lt;br /&gt;
* how many failing ones there are&lt;br /&gt;
&lt;br /&gt;
'''View''' defines in which way the test cases are listed (see below).&lt;br /&gt;
&lt;br /&gt;
'''Filter''' lets you type keywords so that only test cases whose tags include those keywords are shown (see below). It is a drop-down, so predefined filter patterns (such as ''outcome.fail'') can be used as well.&lt;br /&gt;
&lt;br /&gt;
The '''grid''' contains a tree view of all test cases (test cases are always in leaves), with multiple columns for more information. Currently there are two indications of whether a test fails (a column and icons); only one is needed - both are shown here just to compare them. The advantage of using icons is that less space is needed. Coloring the background of a row containing a failing test case would be an option as well.&lt;br /&gt;
&lt;br /&gt;
==== Tags ====&lt;br /&gt;
&lt;br /&gt;
Each test can have a number of tags. A tag can be a single string or hierarchically structured with dots ('.'). For example, a test with the tag ''covers.STRING.append'' is a regression test for {STRING}.append. There are a number of implicit tags for each test, such as the ''name'' tag ({TEST_STRING}.test_append has the implicit tag ''name.TEST_STRING.test_append'').&lt;br /&gt;
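&lt;br /&gt;
Revisiting '''test_append''' from the examples above, additional custom tags could be attached alongside the ''covers'' tag. This is only an illustrative sketch; the ''speed.fast'' tag is hypothetical, not a tag predefined by the tool:&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    test_append&lt;br /&gt;
        indexing&lt;br /&gt;
            -- Implicit tag name.TEST_STRING.test_append is added automatically.&lt;br /&gt;
            testing: &amp;quot;covers.STRING.append, speed.fast&amp;quot;&lt;br /&gt;
        require&lt;br /&gt;
            set_up: s /= Void and then s.is_empty&lt;br /&gt;
        do&lt;br /&gt;
            s.append (&amp;quot;12345&amp;quot;)&lt;br /&gt;
            assert_string_equality (&amp;quot;append&amp;quot;, s, &amp;quot;12345&amp;quot;)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;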
&lt;br /&gt;
==== Different views ====&lt;br /&gt;
&lt;br /&gt;
Based on the notion of tags, we are able to define different views. The default view ''Test sets'' simply shows a hierarchical tree of every ''name.X'' tag. This enables us to define more views, such as ''Class tested'', which displays every ''covers.X'' tag. Note that with tags other than ''name.'', some tests might be listed multiple times, while others not carrying such a tag must be listed explicitly. The main advantage is that users can define their own views based on any type of tag.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Running tests ====&lt;br /&gt;
[[Image:testing_run-menu.jpg]]&lt;br /&gt;
&lt;br /&gt;
The '''run''' menu provides different options for running tests in the background:&lt;br /&gt;
&lt;br /&gt;
* Run all tests in system&lt;br /&gt;
* Run currently failing ones&lt;br /&gt;
* Run tests for classes last modified (better description needed here)&lt;br /&gt;
* Only run tests shown below&lt;br /&gt;
* Only run tests which are selected below&lt;br /&gt;
&lt;br /&gt;
=== Generate tests automatically ===&lt;br /&gt;
&lt;br /&gt;
[[Image:testing_generate-menu.jpg]]&lt;br /&gt;
&lt;br /&gt;
The '''generate''' menu lets you generate new tests for all classes in the system (randomly picked?) or for classes which were last modified.&lt;br /&gt;
&lt;br /&gt;
=== Extract tests from a running application ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is a simple example of an extracted test case (note that '''EXTRACTED_TEST_SET''' inherits from '''TEST_SET''' and implements all the functionality for executing an extracted test).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
class&lt;br /&gt;
    TEST_STRING_001&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    EXTRACTED_TEST_SET&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Initialization&lt;br /&gt;
&lt;br /&gt;
    set_up_routine is&lt;br /&gt;
            -- &amp;lt;Precursor&amp;gt;&lt;br /&gt;
        do&lt;br /&gt;
            routine_under_test := agent {STRING}.append_integer&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_append_integer is&lt;br /&gt;
            -- Call `routine_under_test' with input provided by `context'.&lt;br /&gt;
        indexing&lt;br /&gt;
            tag: &amp;quot;covers.STRING.append_integer&amp;quot;&lt;br /&gt;
        do&lt;br /&gt;
            call_routine_under_test&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Access&lt;br /&gt;
&lt;br /&gt;
    context: ARRAY [TUPLE [id: STRING; type: STRING; inv: BOOLEAN; attributes: ARRAY [STRING]]] is&lt;br /&gt;
            -- &amp;lt;Precursor&amp;gt;&lt;br /&gt;
        once&lt;br /&gt;
            Result := &amp;lt;&amp;lt;&lt;br /&gt;
                [&amp;quot;#operand&amp;quot;, &amp;quot;TUPLE [STRING, INTEGER]&amp;quot;, True, &amp;lt;&amp;lt; &amp;quot;#2&amp;quot;, &amp;quot;110&amp;quot; &amp;gt;&amp;gt;],&lt;br /&gt;
                [&amp;quot;#2&amp;quot;, &amp;quot;STRING&amp;quot;, True, &amp;lt;&amp;lt; &amp;quot;this is an integer: &amp;quot; &amp;gt;&amp;gt;]&lt;br /&gt;
            &amp;gt;&amp;gt;&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end -- class TEST_STRING_001&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''EXTRACTED_TEST_SET''' implements '''set_up''' (frozen), but has a deferred feature '''set_up_routine''' which assigns the proper agent to '''routine_under_test'''. This essentially substitutes for the missing reflection functionality for calling features. '''context''' is also deferred in '''EXTRACTED_TEST_SET''' and contains all data from the heap and call stack that was reachable by the routine at extraction time. Each TUPLE represents an object, where `inv' defines whether the object has to fulfill its invariant (if the object was on the stack at extraction time, this need not be the case).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This design lets us keep the entire test in one file, which is especially practical when a user submits such a test as a bug report after experiencing a crash.&lt;br /&gt;
The current drawback is that the design only allows one test per class, mainly because of the set_up procedure: all objects in '''context''' must be created during set_up. If creation fails, set_up is blamed instead of the actual test routine, which makes the test invalid rather than failing. This can happen, e.g., if one of the objects in the context does not fulfill its invariant, which in turn could result from simply editing the class being tested. Any suggestions welcome!&lt;br /&gt;
&lt;br /&gt;
=== Background test execution ===&lt;br /&gt;
== Open questions ==&lt;br /&gt;
(This section should disappear as the questions get answered.)&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
* [[Testing Tool (Architecture)]]&lt;br /&gt;
* [[Eweasel]]&lt;br /&gt;
* [[CddBranch]]&lt;br /&gt;
* [[Eiffel Testing Tool]]&lt;/div&gt;</summary>
		<author><name>Arnofiva</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=Testing_Tool_(Specification)&amp;diff=11184</id>
		<title>Testing Tool (Specification)</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=Testing_Tool_(Specification)&amp;diff=11184"/>
				<updated>2008-06-11T00:09:47Z</updated>
		
		<summary type="html">&lt;p&gt;Arnofiva: /* Examples */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Testing]]&lt;br /&gt;
&lt;br /&gt;
{{UnderConstruction}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Main functionalities ==&lt;br /&gt;
&lt;br /&gt;
=== Add unit/system level tests ===&lt;br /&gt;
&lt;br /&gt;
Semantically there is no difference between unit tests and system level tests. This way all tests can be written in Eiffel in a conforming way.&lt;br /&gt;
&lt;br /&gt;
A test is a routine whose name has the prefix '''test''' in a class inheriting from '''TEST_SET'''. In general, features in classes used specifically for testing should be exported at most to {TESTING_CLASS}. This prevents testing code from remaining in a finalized system. If you write a helper class for your test routines, let it inherit from '''TESTING_CLASS''' (Note: '''TEST_SET''' already inherits from '''TESTING_CLASS'''). Additionally, you should make leaf test sets frozen and make sure you never directly reference testing classes in your project code.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== System level test specifics ====&lt;br /&gt;
&lt;br /&gt;
Since system level testing often relies on external items like files, '''SYSTEM_LEVEL_TEST_SET''' provides a number of helper routines accessing them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Config file ====&lt;br /&gt;
&lt;br /&gt;
For each target in a configuration file you may define a testing folder in which test classes, as well as other files needed for testing, can be put.&lt;br /&gt;
&lt;br /&gt;
{{Note| A special testing folder is needed when automatically creating new test cases. System level tests also rely on a location for files, etc.}}&lt;br /&gt;
&lt;br /&gt;
==== Additional information ====&lt;br /&gt;
&lt;br /&gt;
The indexing clause can be used to specify which classes and routines are tested by the test routine. Any specifications in the class indexing clause will apply to all tests in that class. Note '''testing_covers''' in the following examples.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Examples ====&lt;br /&gt;
&lt;br /&gt;
Example unit tests '''test_append''' and '''test_boolean'''&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
frozen class TEST_STRING&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    TEST_SET&lt;br /&gt;
        redefine&lt;br /&gt;
            set_up&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Initialization&lt;br /&gt;
&lt;br /&gt;
    set_up&lt;br /&gt;
        do&lt;br /&gt;
            create s.make (10)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Access&lt;br /&gt;
&lt;br /&gt;
    s: STRING&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_append&lt;br /&gt;
        indexing&lt;br /&gt;
            testing: &amp;quot;covers.STRING.append&amp;quot;&lt;br /&gt;
        require&lt;br /&gt;
            set_up: s /= Void and then s.is_empty&lt;br /&gt;
        do&lt;br /&gt;
            s.append (&amp;quot;12345&amp;quot;)&lt;br /&gt;
            assert_string_equality (&amp;quot;append&amp;quot;, s, &amp;quot;12345&amp;quot;)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
    test_boolean&lt;br /&gt;
        indexing&lt;br /&gt;
            testing_covers: &amp;quot;covers.STRING.is_boolean, covers.STRING.to_boolean&amp;quot;&lt;br /&gt;
        require&lt;br /&gt;
            set_up: s /= Void and then s.is_empty&lt;br /&gt;
        do&lt;br /&gt;
            s.append (&amp;quot;True&amp;quot;)&lt;br /&gt;
            assert_true (&amp;quot;boolean&amp;quot;, s.is_boolean and then s.to_boolean)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example system level test '''test_version''' (Note: '''SYSTEM_LEVEL_TEST_SET''' inherits from '''TEST_SET''' and provides basic functionality for executing external commands, including the system currently under development):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
indexing&lt;br /&gt;
    testing_covers: &amp;quot;all&amp;quot;&lt;br /&gt;
&lt;br /&gt;
frozen class TEST_MY_APP&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    SYSTEM_LEVEL_TEST_SET&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_version&lt;br /&gt;
        do&lt;br /&gt;
            run_system_with_args (&amp;quot;--version&amp;quot;)&lt;br /&gt;
            assert_string_equality (&amp;quot;version&amp;quot;, last_output, &amp;quot;my_app version 0.1&amp;quot;)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Manage and run test suite ===&lt;br /&gt;
&lt;br /&gt;
[[Image:testing_set-view.png|right|400px]]&lt;br /&gt;
[[Image:testing_cut-view.png|right|400px]]&lt;br /&gt;
&lt;br /&gt;
The tool should have its own icon for displaying test cases (test routines); in this example it is a Lego block. Especially for views like ''list all tests for this routine'', it is important to see the difference between the actual routine and its tests. The tool also has a more vertical layout: since the number of tests is comparable to the number of classes in the system, it makes sense for both tools to share the same layout. It also allows tabs at the bottom for displaying further information, such as execution details (output, call stack, etc.).&lt;br /&gt;
&lt;br /&gt;
The '''menu bar''' includes the following buttons:&lt;br /&gt;
* Create a new manual test case (opens a wizard)&lt;br /&gt;
** if a test class is dropped on the button, the wizard will suggest creating a new test in that class&lt;br /&gt;
** if a normal class (or feature) is dropped on the button, the wizard will suggest creating a test for that class (or feature)&lt;br /&gt;
* Menu for generating a new test (defaults to the last chosen one?)&lt;br /&gt;
** if a normal class/feature is dropped on the button, generate tests for that class/feature&lt;br /&gt;
&lt;br /&gt;
* Menu for executing tests in the background (defaults to the last chosen one?)&lt;br /&gt;
** if any class/feature is dropped on the button, run the tests associated with that class/feature&lt;br /&gt;
* Run a test in the debugger (a test must be selected or dropped on the button to start)&lt;br /&gt;
* Stop any execution (background or debugger)&lt;br /&gt;
&lt;br /&gt;
* Open the settings dialog for testing&lt;br /&gt;
&lt;br /&gt;
* Status indicating how many tests have been run so far and how many of them are failing&lt;br /&gt;
&lt;br /&gt;
'''View''' defines in which way the test cases are listed (see below).&lt;br /&gt;
&lt;br /&gt;
'''Filter''' lets you type keywords so that only test cases whose tags contain those keywords are shown (see below). It is a drop-down, so predefined filter patterns (such as ''outcome.fail'') can be selected as well.&lt;br /&gt;
&lt;br /&gt;
The '''grid''' contains a tree view of all test cases (test cases always appear in leaves). Multiple columns provide additional information. Currently there are two indications of whether a test fails (a column and icons); only one is needed, and both are shown here just to compare them. The advantage of using icons is that they need less space. Coloring the background of a row containing a failing test case would be an option as well.&lt;br /&gt;
&lt;br /&gt;
==== Tags ====&lt;br /&gt;
&lt;br /&gt;
Each test can have a number of tags. A tag can be a single string or hierarchically structured with dots ('.'). For example, a test with the tag ''covers.STRING.append'' means that this test is a regression test for {STRING}.append. Each test also has a number of implicit tags, such as the ''name'' tag ({TEST_STRING}.test_append has the implicit tag ''name.TEST_STRING.test_append'').&lt;br /&gt;
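&lt;br /&gt;
As an illustration, several tags can be attached to one test routine through its indexing clause; in the following sketch the second tag, ''speed.fast'', is a made-up custom tag:&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
    test_append&lt;br /&gt;
        indexing&lt;br /&gt;
            testing_covers: &amp;quot;covers.STRING.append, speed.fast&amp;quot;&lt;br /&gt;
        do&lt;br /&gt;
            ...&lt;br /&gt;
        end&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;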
&lt;br /&gt;
==== Different views ====&lt;br /&gt;
&lt;br /&gt;
Based on the notion of tags, we can define different views. The default view ''Test sets'' simply shows a hierarchical tree for every ''name.X'' tag. This enables further views, such as ''Class tested'', which displays every ''covers.X'' tag. Note that with tags other than ''name.'', some tests might get listed multiple times, while tests not carrying such a tag must be listed separately. The main advantage is that users can define their own views based on any kind of tag.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Running tests ====&lt;br /&gt;
[[Image:testing_run-menu.jpg]]&lt;br /&gt;
&lt;br /&gt;
The '''run''' menu provides different options for running tests in the background:&lt;br /&gt;
&lt;br /&gt;
* Run all tests in system&lt;br /&gt;
* Run currently failing ones&lt;br /&gt;
* Run tests for the classes that were last modified (better description needed here)&lt;br /&gt;
* Only run tests shown below&lt;br /&gt;
* Only run tests which are selected below&lt;br /&gt;
&lt;br /&gt;
=== Generate tests automatically ===&lt;br /&gt;
&lt;br /&gt;
[[Image:testing_generate-menu.jpg]]&lt;br /&gt;
&lt;br /&gt;
The '''generate''' menu lets you generate new tests for all classes in the system (randomly picked?) or for the classes which were last modified.&lt;br /&gt;
&lt;br /&gt;
=== Extract tests from a running application ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is a simple example of an extracted test case (note that '''EXTRACTED_TEST_SET''' inherits from '''TEST_SET''' and implements all the functionality for executing an extracted test).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
class&lt;br /&gt;
    TEST_STRING_001&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    EXTRACTED_TEST_SET&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Initialization&lt;br /&gt;
&lt;br /&gt;
    set_up_routine is&lt;br /&gt;
            -- &amp;lt;Precursor&amp;gt;&lt;br /&gt;
        do&lt;br /&gt;
            routine_under_test := agent {STRING}.append_integer&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_append_integer is&lt;br /&gt;
            -- Call `routine_under_test' with input provided by `context'.&lt;br /&gt;
        indexing&lt;br /&gt;
            tag: &amp;quot;covers.STRING.append_integer&amp;quot;&lt;br /&gt;
        do&lt;br /&gt;
            call_routine_under_test&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Access&lt;br /&gt;
&lt;br /&gt;
    context: ARRAY [TUPLE [id: STRING; type: STRING; inv: BOOLEAN; attributes: ARRAY [STRING]]] is&lt;br /&gt;
            -- &amp;lt;Precursor&amp;gt;&lt;br /&gt;
        once&lt;br /&gt;
            Result := &amp;lt;&amp;lt;&lt;br /&gt;
                [&amp;quot;#operand&amp;quot;, &amp;quot;TUPLE [STRING, INTEGER]&amp;quot;, True, &amp;lt;&amp;lt; &amp;quot;#2&amp;quot;, &amp;quot;110&amp;quot; &amp;gt;&amp;gt;],&lt;br /&gt;
                [&amp;quot;#2&amp;quot;, &amp;quot;STRING&amp;quot;, True, &amp;lt;&amp;lt; &amp;quot;this is an integer: &amp;quot; &amp;gt;&amp;gt;]&lt;br /&gt;
            &amp;gt;&amp;gt;&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end -- class TEST_STRING_001&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''EXTRACTED_TEST_SET''' implements '''set_up''' (frozen), but has a deferred feature '''set_up_routine''' which assigns the proper agent to '''routine_under_test'''. This essentially replaces the missing reflection functionality for calling features. '''context''' is also deferred in '''EXTRACTED_TEST_SET''' and contains all data from the heap and call stack that was reachable by the routine at extraction time. Each TUPLE represents an object, where `inv' defines whether the object must fulfill its invariant (if the object was on the stack at extraction time, this does not have to be the case).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This design lets us keep the entire test in one file, which is especially practical when a user submits such a test as a bug report after experiencing a crash.&lt;br /&gt;
The current drawback is that the design only allows one test per class, mainly because of the set_up procedure: all objects in '''context''' must be created during set_up. If this fails, set_up is blamed instead of the actual test routine, which makes the test invalid rather than failing. This can happen, e.g., if one of the objects in the context does not fulfill its invariant, which in turn could result from simply editing the class being tested. Any suggestions welcome!&lt;br /&gt;
&lt;br /&gt;
=== Background test execution ===&lt;br /&gt;
== Open questions ==&lt;br /&gt;
(This section should disappear as the questions get answered.)&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
* [[Testing Tool (Architecture)]]&lt;br /&gt;
* [[Eweasel]]&lt;br /&gt;
* [[CddBranch]]&lt;br /&gt;
* [[Eiffel Testing Tool]]&lt;/div&gt;</summary>
		<author><name>Arnofiva</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=Testing_Tool_(Specification)&amp;diff=11183</id>
		<title>Testing Tool (Specification)</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=Testing_Tool_(Specification)&amp;diff=11183"/>
				<updated>2008-06-10T23:34:57Z</updated>
		
		<summary type="html">&lt;p&gt;Arnofiva: Comments and example to extracting tests&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Testing]]&lt;br /&gt;
&lt;br /&gt;
{{UnderConstruction}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Main functionalities ==&lt;br /&gt;
&lt;br /&gt;
=== Add unit/system level tests ===&lt;br /&gt;
&lt;br /&gt;
Semantically there is no difference between unit tests and system level tests. This way all tests can be written in Eiffel in a conforming way.&lt;br /&gt;
&lt;br /&gt;
A test is a routine having the prefix '''test''' in a class inheriting from '''TEST_SET'''. In general, features in classes used specifically for testing should be exported at most to {TESTING_CLASS}; this prevents testing code from remaining in a finalized system. If you write a helper class for your test routines, let it inherit from '''TESTING_CLASS''' (note: '''TEST_SET''' already inherits from '''TESTING_CLASS'''). Additionally, you should make leaf test sets frozen and make sure you never directly reference testing classes in your project code.&lt;br /&gt;
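&lt;br /&gt;
As an illustration of these rules, a minimal helper class might look as follows (the class and feature names here are made up for the sketch; only the inheritance from '''TESTING_CLASS''' and the {TESTING_CLASS} export are prescribed above):&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
class TEST_STRING_HELPER&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    TESTING_CLASS&lt;br /&gt;
&lt;br /&gt;
feature {TESTING_CLASS} -- Access&lt;br /&gt;
&lt;br /&gt;
    sample_string (n: INTEGER): STRING&lt;br /&gt;
            -- New string consisting of `n' copies of 'x'.&lt;br /&gt;
        do&lt;br /&gt;
            create Result.make_filled ('x', n)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;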
&lt;br /&gt;
&lt;br /&gt;
==== System level test specifics ====&lt;br /&gt;
&lt;br /&gt;
Since system level testing often relies on external items like files, '''SYSTEM_LEVEL_TEST_SET''' provides a number of helper routines accessing them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Config file ====&lt;br /&gt;
&lt;br /&gt;
For each target in a configuration file you may define a testing folder in which test classes, as well as other files needed for testing, can be put.&lt;br /&gt;
&lt;br /&gt;
{{Note|A special testing folder is needed when automatically creating new test cases. System level tests also rely on a known location for files, etc.}}&lt;br /&gt;
&lt;br /&gt;
==== Additional information ====&lt;br /&gt;
&lt;br /&gt;
The indexing clause can be used to specify which classes and routines are tested by the test routine. Any specifications in the class indexing clause will apply to all tests in that class. Note '''testing_covers''' in the following examples.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Examples ====&lt;br /&gt;
&lt;br /&gt;
Example unit tests '''test_append''' and '''test_boolean''':&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
frozen class TEST_STRING&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    TEST_SET&lt;br /&gt;
        redefine&lt;br /&gt;
            set_up&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Initialization&lt;br /&gt;
&lt;br /&gt;
    set_up&lt;br /&gt;
        do&lt;br /&gt;
            create s.make (10)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {TESTING_CLASS} -- Access&lt;br /&gt;
&lt;br /&gt;
    s: STRING&lt;br /&gt;
&lt;br /&gt;
feature {TESTING_CLASS} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_append&lt;br /&gt;
        indexing&lt;br /&gt;
            testing_covers: &amp;quot;covers.STRING.append&amp;quot;&lt;br /&gt;
        require&lt;br /&gt;
            set_up: s /= Void and then s.is_empty&lt;br /&gt;
        do&lt;br /&gt;
            s.append (&amp;quot;12345&amp;quot;)&lt;br /&gt;
            assert_string_equality (&amp;quot;append&amp;quot;, s, &amp;quot;12345&amp;quot;)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
    test_boolean&lt;br /&gt;
        indexing&lt;br /&gt;
            testing_covers: &amp;quot;covers.STRING.is_boolean, covers.STRING.to_boolean&amp;quot;&lt;br /&gt;
        require&lt;br /&gt;
            set_up: s /= Void and then s.is_empty&lt;br /&gt;
        do&lt;br /&gt;
            s.append (&amp;quot;True&amp;quot;)&lt;br /&gt;
            assert_true (&amp;quot;boolean&amp;quot;, s.is_boolean and then s.to_boolean)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example system level test '''test_version''' (Note: '''SYSTEM_LEVEL_TEST_SET''' inherits from '''TEST_SET''' and provides basic functionality for executing external commands, including the system currently under development):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
indexing&lt;br /&gt;
    testing_covers: &amp;quot;all&amp;quot;&lt;br /&gt;
&lt;br /&gt;
frozen class TEST_MY_APP&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    SYSTEM_LEVEL_TEST_SET&lt;br /&gt;
&lt;br /&gt;
feature {TESTING_CLASS} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_version&lt;br /&gt;
        do&lt;br /&gt;
            run_system_with_args (&amp;quot;--version&amp;quot;)&lt;br /&gt;
            assert_string_equality (&amp;quot;version&amp;quot;, last_output, &amp;quot;my_app version 0.1&amp;quot;)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Manage and run test suite ===&lt;br /&gt;
&lt;br /&gt;
[[Image:testing_set-view.png|right|400px]]&lt;br /&gt;
[[Image:testing_cut-view.png|right|400px]]&lt;br /&gt;
&lt;br /&gt;
The tool should have its own icon for displaying test cases (test routines); in this example it is a Lego block. Especially for views like ''list all tests for this routine'', it is important to see the difference between the actual routine and its tests. The tool also has a more vertical layout: since the number of tests is comparable to the number of classes in the system, it makes sense for both tools to share the same layout. It also allows tabs at the bottom for displaying further information, such as execution details (output, call stack, etc.).&lt;br /&gt;
&lt;br /&gt;
The '''menu bar''' includes the following buttons:&lt;br /&gt;
* Create a new manual test case (opens a wizard)&lt;br /&gt;
** if a test class is dropped on the button, the wizard will suggest creating a new test in that class&lt;br /&gt;
** if a normal class (or feature) is dropped on the button, the wizard will suggest creating a test for that class (or feature)&lt;br /&gt;
* Menu for generating a new test (defaults to the last chosen one?)&lt;br /&gt;
** if a normal class/feature is dropped on the button, generate tests for that class/feature&lt;br /&gt;
&lt;br /&gt;
* Menu for executing tests in the background (defaults to the last chosen one?)&lt;br /&gt;
** if any class/feature is dropped on the button, run the tests associated with that class/feature&lt;br /&gt;
* Run a test in the debugger (a test must be selected or dropped on the button to start)&lt;br /&gt;
* Stop any execution (background or debugger)&lt;br /&gt;
&lt;br /&gt;
* Open the settings dialog for testing&lt;br /&gt;
&lt;br /&gt;
* Status indicating how many tests have been run so far and how many of them are failing&lt;br /&gt;
&lt;br /&gt;
'''View''' defines in which way the test cases are listed (see below).&lt;br /&gt;
&lt;br /&gt;
'''Filter''' lets you type keywords so that only test cases whose tags contain those keywords are shown (see below). It is a drop-down, so predefined filter patterns (such as ''outcome.fail'') can be selected as well.&lt;br /&gt;
&lt;br /&gt;
The '''grid''' contains a tree view of all test cases (test cases always appear in leaves). Multiple columns provide additional information. Currently there are two indications of whether a test fails (a column and icons); only one is needed, and both are shown here just to compare them. The advantage of using icons is that they need less space. Coloring the background of a row containing a failing test case would be an option as well.&lt;br /&gt;
&lt;br /&gt;
==== Tags ====&lt;br /&gt;
&lt;br /&gt;
Each test can have a number of tags. A tag can be a single string or hierarchically structured with dots ('.'). For example, a test with the tag ''covers.STRING.append'' means that this test is a regression test for {STRING}.append. Each test also has a number of implicit tags, such as the ''name'' tag ({TEST_STRING}.test_append has the implicit tag ''name.TEST_STRING.test_append'').&lt;br /&gt;
&lt;br /&gt;
==== Different views ====&lt;br /&gt;
&lt;br /&gt;
Based on the notion of tags, we can define different views. The default view ''Test sets'' simply shows a hierarchical tree for every ''name.X'' tag. This enables further views, such as ''Class tested'', which displays every ''covers.X'' tag. Note that with tags other than ''name.'', some tests might get listed multiple times, while tests not carrying such a tag must be listed separately. The main advantage is that users can define their own views based on any kind of tag.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Running tests ====&lt;br /&gt;
[[Image:testing_run-menu.jpg]]&lt;br /&gt;
&lt;br /&gt;
The '''run''' menu provides different options for running tests in the background:&lt;br /&gt;
&lt;br /&gt;
* Run all tests in system&lt;br /&gt;
* Run currently failing ones&lt;br /&gt;
* Run tests for the classes that were last modified (better description needed here)&lt;br /&gt;
* Only run tests shown below&lt;br /&gt;
* Only run tests which are selected below&lt;br /&gt;
&lt;br /&gt;
=== Generate tests automatically ===&lt;br /&gt;
&lt;br /&gt;
[[Image:testing_generate-menu.jpg]]&lt;br /&gt;
&lt;br /&gt;
The '''generate''' menu lets you generate new tests for all classes in the system (randomly picked?) or for the classes which were last modified.&lt;br /&gt;
&lt;br /&gt;
=== Extract tests from a running application ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is a simple example of an extracted test case (note that '''EXTRACTED_TEST_SET''' inherits from '''TEST_SET''' and implements all the functionality for executing an extracted test).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
class&lt;br /&gt;
    TEST_STRING_001&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    EXTRACTED_TEST_SET&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Initialization&lt;br /&gt;
&lt;br /&gt;
    set_up_routine is&lt;br /&gt;
            -- &amp;lt;Precursor&amp;gt;&lt;br /&gt;
        do&lt;br /&gt;
            routine_under_test := agent {STRING}.append_integer&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {TESTER} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_append_integer is&lt;br /&gt;
            -- Call `routine_under_test' with input provided by `context'.&lt;br /&gt;
        indexing&lt;br /&gt;
            tag: &amp;quot;covers.STRING.append_integer&amp;quot;&lt;br /&gt;
        do&lt;br /&gt;
            call_routine_under_test&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Access&lt;br /&gt;
&lt;br /&gt;
    context: ARRAY [TUPLE [id: STRING; type: STRING; inv: BOOLEAN; attributes: ARRAY [STRING]]] is&lt;br /&gt;
            -- &amp;lt;Precursor&amp;gt;&lt;br /&gt;
        once&lt;br /&gt;
            Result := &amp;lt;&amp;lt;&lt;br /&gt;
                [&amp;quot;#operand&amp;quot;, &amp;quot;TUPLE [STRING, INTEGER]&amp;quot;, True, &amp;lt;&amp;lt; &amp;quot;#2&amp;quot;, &amp;quot;110&amp;quot; &amp;gt;&amp;gt;],&lt;br /&gt;
                [&amp;quot;#2&amp;quot;, &amp;quot;STRING&amp;quot;, True, &amp;lt;&amp;lt; &amp;quot;this is an integer: &amp;quot; &amp;gt;&amp;gt;]&lt;br /&gt;
            &amp;gt;&amp;gt;&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end -- class TEST_STRING_001&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''EXTRACTED_TEST_SET''' implements '''set_up''' (frozen), but has a deferred feature '''set_up_routine''' which assigns the proper agent to '''routine_under_test'''. This essentially replaces the missing reflection functionality for calling features. '''context''' is also deferred in '''EXTRACTED_TEST_SET''' and contains all data from the heap and call stack that was reachable by the routine at extraction time. Each TUPLE represents an object, where `inv' defines whether the object must fulfill its invariant (if the object was on the stack at extraction time, this does not have to be the case).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This design lets us keep the entire test in one file, which is especially practical when a user submits such a test as a bug report after experiencing a crash.&lt;br /&gt;
The current drawback is that the design only allows one test per class, mainly because of the set_up procedure: all objects in '''context''' must be created during set_up. If this fails, set_up is blamed instead of the actual test routine, which makes the test invalid rather than failing. This can happen, e.g., if one of the objects in the context does not fulfill its invariant, which in turn could result from simply editing the class being tested. Any suggestions welcome!&lt;br /&gt;
&lt;br /&gt;
=== Background test execution ===&lt;br /&gt;
== Open questions ==&lt;br /&gt;
(This section should disappear as the questions get answered.)&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
* [[Testing Tool (Architecture)]]&lt;br /&gt;
* [[Eweasel]]&lt;br /&gt;
* [[CddBranch]]&lt;br /&gt;
* [[Eiffel Testing Tool]]&lt;/div&gt;</summary>
		<author><name>Arnofiva</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=Testing_Tool_(Specification)&amp;diff=11182</id>
		<title>Testing Tool (Specification)</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=Testing_Tool_(Specification)&amp;diff=11182"/>
				<updated>2008-06-10T22:45:21Z</updated>
		
		<summary type="html">&lt;p&gt;Arnofiva: Description for &amp;quot;generate&amp;quot; section&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Testing]]&lt;br /&gt;
&lt;br /&gt;
{{UnderConstruction}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Main functionalities ==&lt;br /&gt;
&lt;br /&gt;
=== Add unit/system level tests ===&lt;br /&gt;
&lt;br /&gt;
Semantically there is no difference between unit tests and system level tests. This way all tests can be written in Eiffel in a conforming way.&lt;br /&gt;
&lt;br /&gt;
A test is a routine having the prefix '''test''' in a class inheriting from '''TEST_SET'''. In general, features in classes used specifically for testing should be exported at most to {TESTING_CLASS}; this prevents testing code from remaining in a finalized system. If you write a helper class for your test routines, let it inherit from '''TESTING_CLASS''' (note: '''TEST_SET''' already inherits from '''TESTING_CLASS'''). Additionally, you should make leaf test sets frozen and make sure you never directly reference testing classes in your project code.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== System level test specifics ====&lt;br /&gt;
&lt;br /&gt;
Since system level testing often relies on external items like files, '''SYSTEM_LEVEL_TEST_SET''' provides a number of helper routines accessing them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Config file ====&lt;br /&gt;
&lt;br /&gt;
For each target in a configuration file you may define a testing folder in which test classes, as well as other files needed for testing, can be put.&lt;br /&gt;
&lt;br /&gt;
{{Note|A special testing folder is needed when automatically creating new test cases. System level tests also rely on a known location for files, etc.}}&lt;br /&gt;
&lt;br /&gt;
==== Additional information ====&lt;br /&gt;
&lt;br /&gt;
The indexing clause can be used to specify which classes and routines are tested by the test routine. Any specifications in the class indexing clause will apply to all tests in that class. Note '''testing_covers''' in the following examples.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Examples ====&lt;br /&gt;
&lt;br /&gt;
Example unit tests '''test_append''' and '''test_boolean''':&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
frozen class TEST_STRING&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    TEST_SET&lt;br /&gt;
        redefine&lt;br /&gt;
            set_up&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Initialization&lt;br /&gt;
&lt;br /&gt;
    set_up&lt;br /&gt;
        do&lt;br /&gt;
            create s.make (10)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {TESTING_CLASS} -- Access&lt;br /&gt;
&lt;br /&gt;
    s: STRING&lt;br /&gt;
&lt;br /&gt;
feature {TESTING_CLASS} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_append&lt;br /&gt;
        indexing&lt;br /&gt;
            testing_covers: &amp;quot;covers.STRING.append&amp;quot;&lt;br /&gt;
        require&lt;br /&gt;
            set_up: s /= Void and then s.is_empty&lt;br /&gt;
        do&lt;br /&gt;
            s.append (&amp;quot;12345&amp;quot;)&lt;br /&gt;
            assert_string_equality (&amp;quot;append&amp;quot;, s, &amp;quot;12345&amp;quot;)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
    test_boolean&lt;br /&gt;
        indexing&lt;br /&gt;
            testing_covers: &amp;quot;covers.STRING.is_boolean, covers.STRING.to_boolean&amp;quot;&lt;br /&gt;
        require&lt;br /&gt;
            set_up: s /= Void and then s.is_empty&lt;br /&gt;
        do&lt;br /&gt;
            s.append (&amp;quot;True&amp;quot;)&lt;br /&gt;
            assert_true (&amp;quot;boolean&amp;quot;, s.is_boolean and then s.to_boolean)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example system level test '''test_version''' (Note: '''SYSTEM_LEVEL_TEST_SET''' inherits from '''TEST_SET''' and provides basic functionality for executing external commands, including the system currently under development):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
indexing&lt;br /&gt;
    testing_covers: &amp;quot;all&amp;quot;&lt;br /&gt;
&lt;br /&gt;
frozen class TEST_MY_APP&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    SYSTEM_LEVEL_TEST_SET&lt;br /&gt;
&lt;br /&gt;
feature {TESTING_CLASS} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_version&lt;br /&gt;
        do&lt;br /&gt;
            run_system_with_args (&amp;quot;--version&amp;quot;)&lt;br /&gt;
            assert_string_equality (&amp;quot;version&amp;quot;, last_output, &amp;quot;my_app version 0.1&amp;quot;)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Manage and run test suite ===&lt;br /&gt;
&lt;br /&gt;
[[Image:testing_set-view.png|right|400px]]&lt;br /&gt;
[[Image:testing_cut-view.png|right|400px]]&lt;br /&gt;
&lt;br /&gt;
The tool should have its own icon for displaying test cases (test routines); in this example it is a Lego block. Especially for views like ''list all tests for this routine'', it is important to see the difference between the actual routine and its tests. The tool also has a more vertical layout: since the number of tests is comparable to the number of classes in the system, it makes sense for both tools to share the same layout. It also allows tabs at the bottom for displaying further information, such as execution details (output, call stack, etc.).&lt;br /&gt;
&lt;br /&gt;
The '''menu bar''' includes the following buttons:&lt;br /&gt;
* Create a new manual test case (opens a wizard)&lt;br /&gt;
** if a test class is dropped on the button, the wizard will suggest creating a new test in that class&lt;br /&gt;
** if a normal class (or feature) is dropped on the button, the wizard will suggest creating a test for that class (or feature)&lt;br /&gt;
* Menu for generating a new test (defaults to the last chosen one?)&lt;br /&gt;
** if a normal class/feature is dropped on the button, generate tests for that class/feature&lt;br /&gt;
&lt;br /&gt;
* Menu for executing tests in the background (defaults to the last chosen one?)&lt;br /&gt;
** if any class/feature is dropped on the button, run the tests associated with that class/feature&lt;br /&gt;
* Run a test in the debugger (a test must be selected or dropped on the button to start)&lt;br /&gt;
* Stop any execution (background or debugger)&lt;br /&gt;
&lt;br /&gt;
* Open the settings dialog for testing&lt;br /&gt;
&lt;br /&gt;
* Status indicating how many tests have been run so far and how many of them are failing&lt;br /&gt;
&lt;br /&gt;
'''View''' defines in which way the test cases are listed (see below).&lt;br /&gt;
&lt;br /&gt;
'''Filter''' lets you type keywords so that only test cases whose tags contain those keywords are shown (see below). It is a drop-down, so predefined filter patterns (such as ''outcome.fail'') can be selected as well.&lt;br /&gt;
&lt;br /&gt;
The '''grid''' contains a tree view of all test cases (test cases always appear in leaves). Multiple columns provide additional information. Currently there are two indications of whether a test fails (a column and icons); only one is needed, and both are shown here just to compare them. The advantage of using icons is that they need less space. Coloring the background of a row containing a failing test case would be an option as well.&lt;br /&gt;
&lt;br /&gt;
==== Tags ====&lt;br /&gt;
&lt;br /&gt;
Each test can have a number of tags. A tag can be a single string or hierarchically structured with dots ('.'). For example, a test with the tag ''covers.STRING.append'' means that this test is a regression test for {STRING}.append. Each test also has a number of implicit tags, such as the ''name'' tag ({TEST_STRING}.test_append has the implicit tag ''name.TEST_STRING.test_append'').&lt;br /&gt;
&lt;br /&gt;
==== Different views ====&lt;br /&gt;
&lt;br /&gt;
Based on the notion of tags, we can define different views. The default view ''Test sets'' simply shows a hierarchical tree for every ''name.X'' tag. This enables further views, such as ''Class tested'', which displays every ''covers.X'' tag. Note that with tags other than ''name.'', some tests might get listed multiple times, while tests not carrying such a tag must be listed separately. The main advantage is that users can define their own views based on any kind of tag.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Running tests ====&lt;br /&gt;
[[Image:testing_run-menu.jpg]]&lt;br /&gt;
&lt;br /&gt;
The '''run''' menu provides different options for running tests in the background:&lt;br /&gt;
&lt;br /&gt;
* Run all tests in system&lt;br /&gt;
* Run currently failing ones&lt;br /&gt;
* Run tests for the classes that were last modified (better description needed here)&lt;br /&gt;
* Only run tests shown below&lt;br /&gt;
* Only run tests which are selected below&lt;br /&gt;
&lt;br /&gt;
=== Generate tests automatically ===&lt;br /&gt;
&lt;br /&gt;
[[Image:testing_generate-menu.jpg]]&lt;br /&gt;
&lt;br /&gt;
The '''generate''' menu lets you generate new tests for all classes in the system (randomly picked?) or for the classes which were last modified.&lt;br /&gt;
&lt;br /&gt;
=== Turn any failed execution into a test ===&lt;br /&gt;
&lt;br /&gt;
=== Background test execution ===&lt;br /&gt;
== Open questions ==&lt;br /&gt;
(This section should disappear as the questions get answered.)&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
* [[Testing Tool (Architecture)]]&lt;br /&gt;
* [[Eweasel]]&lt;br /&gt;
* [[CddBranch]]&lt;br /&gt;
* [[Eiffel Testing Tool]]&lt;/div&gt;</summary>
		<author><name>Arnofiva</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=Testing_Tool_(Specification)&amp;diff=11181</id>
		<title>Testing Tool (Specification)</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=Testing_Tool_(Specification)&amp;diff=11181"/>
				<updated>2008-06-10T22:42:16Z</updated>
		
		<summary type="html">&lt;p&gt;Arnofiva: Description for &amp;quot;run&amp;quot; section&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Testing]]&lt;br /&gt;
&lt;br /&gt;
{{UnderConstruction}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Main functionalities ==&lt;br /&gt;
&lt;br /&gt;
=== Add unit/system level tests ===&lt;br /&gt;
&lt;br /&gt;
Semantically, there is no difference between unit tests and system level tests, so all tests can be written in Eiffel in a uniform way.&lt;br /&gt;
&lt;br /&gt;
A test is a routine whose name has the prefix '''test''' in a class inheriting from '''TEST_SET'''. In general, features in classes used specifically for testing should be exported at most to {TESTING_CLASS}; this prevents testing code from remaining in a finalized system. If you write a helper class for your test routines, let it inherit from '''TESTING_CLASS''' (note that '''TEST_SET''' already inherits from '''TESTING_CLASS'''). Additionally, you should make leaf test sets frozen and make sure you never directly reference testing classes in your project code.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== System level test specifics ====&lt;br /&gt;
&lt;br /&gt;
Since system level testing often relies on external items such as files, '''SYSTEM_LEVEL_TEST_SET''' provides a number of helper routines for accessing them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Config file ====&lt;br /&gt;
&lt;br /&gt;
For each target in a configuration file you may define a testing folder in which test classes, as well as other files needed for testing, can be put.&lt;br /&gt;
&lt;br /&gt;
{{Note|A special testing folder is needed when automatically creating new test cases. System level tests also rely on a known location for files and other resources.}}&lt;br /&gt;
&lt;br /&gt;
==== Additional information ====&lt;br /&gt;
&lt;br /&gt;
The indexing clause can be used to specify which classes and routines are tested by the test routine. Any specifications in the class indexing clause will apply to all tests in that class. Note '''testing_covers''' in the following examples.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Examples ====&lt;br /&gt;
&lt;br /&gt;
Example unit tests '''test_append''' and '''test_boolean'''&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
frozen class TEST_STRING&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    TEST_SET&lt;br /&gt;
        redefine&lt;br /&gt;
            set_up&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Initialization&lt;br /&gt;
&lt;br /&gt;
    set_up&lt;br /&gt;
        do&lt;br /&gt;
            create s.make (10)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {TESTING_CLASS} -- Access&lt;br /&gt;
&lt;br /&gt;
    s: STRING&lt;br /&gt;
&lt;br /&gt;
feature {TESTING_CLASS} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_append&lt;br /&gt;
        indexing&lt;br /&gt;
            testing: &amp;quot;covers.STRING.append&amp;quot;&lt;br /&gt;
        require&lt;br /&gt;
            set_up: s /= Void and then s.is_empty&lt;br /&gt;
        do&lt;br /&gt;
            s.append (&amp;quot;12345&amp;quot;)&lt;br /&gt;
            assert_string_equality (&amp;quot;append&amp;quot;, s, &amp;quot;12345&amp;quot;)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
    test_boolean&lt;br /&gt;
        indexing&lt;br /&gt;
            testing_covers: &amp;quot;covers.STRING.is_boolean, covers.STRING.to_boolean&amp;quot;&lt;br /&gt;
        require&lt;br /&gt;
            set_up: s /= Void and then s.is_empty&lt;br /&gt;
        do&lt;br /&gt;
            s.append (&amp;quot;True&amp;quot;)&lt;br /&gt;
            assert_true (&amp;quot;boolean&amp;quot;, s.is_boolean and then s.to_boolean)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example system level test '''test_version''' (Note: '''SYSTEM_LEVEL_TEST_SET''' inherits from '''TEST_SET''' and provides basic functionality for executing external commands, including the system currently under development):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
indexing&lt;br /&gt;
    testing_covers: &amp;quot;all&amp;quot;&lt;br /&gt;
&lt;br /&gt;
frozen class TEST_MY_APP&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    SYSTEM_LEVEL_TEST_SET&lt;br /&gt;
&lt;br /&gt;
feature {TESTING_CLASS} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_version&lt;br /&gt;
        do&lt;br /&gt;
            run_system_with_args (&amp;quot;--version&amp;quot;)&lt;br /&gt;
            assert_string_equality (&amp;quot;version&amp;quot;, last_output, &amp;quot;my_app version 0.1&amp;quot;)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Manage and run test suite ===&lt;br /&gt;
&lt;br /&gt;
[[Image:testing_set-view.png|right|400px]]&lt;br /&gt;
[[Image:testing_cut-view.png|right|400px]]&lt;br /&gt;
&lt;br /&gt;
The tool should have its own icon for displaying test cases (test routines); in this example it is a Lego block. Especially for views like ''list all tests for this routine'', it is important to see the difference between the actual routine and its tests. The tool also has more of a vertical layout: since the number of tests is comparable to the number of classes in the system, it makes sense for the tools to share the same layout. This also leaves room for tabs at the bottom that display further information, such as execution details (output, call stack, etc.).&lt;br /&gt;
&lt;br /&gt;
The '''menu bar''' includes the following buttons:&lt;br /&gt;
* Create a new manual test case (opens a wizard)&lt;br /&gt;
** if a test class is dropped on the button, the wizard will suggest creating a new test in that class&lt;br /&gt;
** if a normal class (or feature) is dropped on the button, the wizard will suggest creating a test for that class (or feature)&lt;br /&gt;
* Menu for generating new tests (defaults to the last chosen one?)&lt;br /&gt;
** if a normal class/feature is dropped on the button, generate tests for that class/feature&lt;br /&gt;
&lt;br /&gt;
* Menu for executing tests in the background (defaults to the last chosen one?)&lt;br /&gt;
** if any class/feature is dropped on the button, run the tests associated with that class/feature&lt;br /&gt;
* Run a test in the debugger (a test must be selected or dropped on the button to start)&lt;br /&gt;
* Stop any execution (background or debugger)&lt;br /&gt;
&lt;br /&gt;
* Open the settings dialog for testing&lt;br /&gt;
&lt;br /&gt;
* Status indicating how many tests have been run so far and how many are currently failing&lt;br /&gt;
&lt;br /&gt;
'''View''' defines how the test cases are listed (see below).&lt;br /&gt;
&lt;br /&gt;
'''Filter''' can be used to type keywords so that only test cases whose tags include those keywords are shown (see below). It is a drop-down, so predefined filter patterns (such as ''outcome.fail'') can be used.&lt;br /&gt;
&lt;br /&gt;
The '''grid''' contains a tree view of all test cases (test cases are always leaves). Multiple columns provide further information. Currently there are two indications of whether a test fails (a column and icons); only one is needed - both are shown here just for comparison. The advantage of icons is that they need less space. Coloring the background of a row containing a failing test case would be an option as well.&lt;br /&gt;
&lt;br /&gt;
==== Tags ====&lt;br /&gt;
&lt;br /&gt;
Each test can have a number of tags. A tag can be a single string or hierarchically structured with dots ('.'). For example, a test with the tag ''covers.STRING.append'' is a regression test for {STRING}.append. There are a number of implicit tags for each test, such as the ''name'' tag ({TEST_STRING}.test_append has the implicit tag ''name.TEST_STRING.test_append'').&lt;br /&gt;
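&lt;br /&gt;
For illustration only (the routine below is hypothetical, not part of the tool's API), matching a tag against a hierarchical prefix could be sketched in Eiffel as follows:&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    has_tag_prefix (a_tag, a_prefix: STRING): BOOLEAN&lt;br /&gt;
            -- Does `a_tag' belong to the hierarchy denoted by `a_prefix'?&lt;br /&gt;
            -- E.g. &amp;quot;covers.STRING.append&amp;quot; matches the prefix &amp;quot;covers.STRING&amp;quot;.&lt;br /&gt;
        do&lt;br /&gt;
            Result := a_tag.same_string (a_prefix)&lt;br /&gt;
                or else a_tag.substring_index (a_prefix + &amp;quot;.&amp;quot;, 1) = 1&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
A view such as ''Class tested'' could then group under ''covers.STRING'' every test for which ''has_tag_prefix'' holds.&lt;br /&gt;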
&lt;br /&gt;
==== Different views ====&lt;br /&gt;
&lt;br /&gt;
Based on the notion of tags, we can define different views. The default view ''Test sets'' simply shows a hierarchical tree for every ''name.X'' tag. The same mechanism enables further views, such as ''Class tested'', which displays every ''covers.X'' tag. Note that for tags other than ''name.'', some tests may be listed multiple times, while tests lacking such a tag must be listed explicitly. The main advantage is that users can define their own views based on any type of tag.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Running tests ====&lt;br /&gt;
[[Image:testing_run-menu.jpg]]&lt;br /&gt;
&lt;br /&gt;
The '''run''' menu provides different options for running tests in the background:&lt;br /&gt;
&lt;br /&gt;
* Run all tests in the system&lt;br /&gt;
* Run currently failing tests&lt;br /&gt;
* Run tests for recently modified classes (better description needed here)&lt;br /&gt;
* Only run the tests shown below&lt;br /&gt;
* Only run the tests selected below&lt;br /&gt;
&lt;br /&gt;
=== Generate tests automatically ===&lt;br /&gt;
&lt;br /&gt;
[[Image:testing_generate-menu.jpg]]&lt;br /&gt;
&lt;br /&gt;
=== Turn any failed execution into a test ===&lt;br /&gt;
&lt;br /&gt;
=== Background test execution ===&lt;br /&gt;
== Open questions ==&lt;br /&gt;
(This section should disappear as the questions get answered.)&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
* [[Testing Tool (Architecture)]]&lt;br /&gt;
* [[Eweasel]]&lt;br /&gt;
* [[CddBranch]]&lt;br /&gt;
* [[Eiffel Testing Tool]]&lt;/div&gt;</summary>
		<author><name>Arnofiva</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=Testing_Tool_(Specification)&amp;diff=11180</id>
		<title>Testing Tool (Specification)</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=Testing_Tool_(Specification)&amp;diff=11180"/>
				<updated>2008-06-10T22:38:57Z</updated>
		
		<summary type="html">&lt;p&gt;Arnofiva: Moved image further up&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Testing]]&lt;br /&gt;
&lt;br /&gt;
{{UnderConstruction}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Main functionalities ==&lt;br /&gt;
&lt;br /&gt;
=== Add unit/system level tests ===&lt;br /&gt;
&lt;br /&gt;
Semantically, there is no difference between unit tests and system level tests, so all tests can be written in Eiffel in a uniform way.&lt;br /&gt;
&lt;br /&gt;
A test is a routine whose name has the prefix '''test''' in a class inheriting from '''TEST_SET'''. In general, features in classes used specifically for testing should be exported at most to {TESTING_CLASS}; this prevents testing code from remaining in a finalized system. If you write a helper class for your test routines, let it inherit from '''TESTING_CLASS''' (note that '''TEST_SET''' already inherits from '''TESTING_CLASS'''). Additionally, you should make leaf test sets frozen and make sure you never directly reference testing classes in your project code.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== System level test specifics ====&lt;br /&gt;
&lt;br /&gt;
Since system level testing often relies on external items such as files, '''SYSTEM_LEVEL_TEST_SET''' provides a number of helper routines for accessing them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Config file ====&lt;br /&gt;
&lt;br /&gt;
For each target in a configuration file you may define a testing folder in which test classes, as well as other files needed for testing, can be put.&lt;br /&gt;
&lt;br /&gt;
{{Note|A special testing folder is needed when automatically creating new test cases. System level tests also rely on a known location for files and other resources.}}&lt;br /&gt;
&lt;br /&gt;
==== Additional information ====&lt;br /&gt;
&lt;br /&gt;
The indexing clause can be used to specify which classes and routines are tested by the test routine. Any specifications in the class indexing clause will apply to all tests in that class. Note '''testing_covers''' in the following examples.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Examples ====&lt;br /&gt;
&lt;br /&gt;
Example unit tests '''test_append''' and '''test_boolean'''&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
frozen class TEST_STRING&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    TEST_SET&lt;br /&gt;
        redefine&lt;br /&gt;
            set_up&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Initialization&lt;br /&gt;
&lt;br /&gt;
    set_up&lt;br /&gt;
        do&lt;br /&gt;
            create s.make (10)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {TESTING_CLASS} -- Access&lt;br /&gt;
&lt;br /&gt;
    s: STRING&lt;br /&gt;
&lt;br /&gt;
feature {TESTING_CLASS} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_append&lt;br /&gt;
        indexing&lt;br /&gt;
            testing: &amp;quot;covers.STRING.append&amp;quot;&lt;br /&gt;
        require&lt;br /&gt;
            set_up: s /= Void and then s.is_empty&lt;br /&gt;
        do&lt;br /&gt;
            s.append (&amp;quot;12345&amp;quot;)&lt;br /&gt;
            assert_string_equality (&amp;quot;append&amp;quot;, s, &amp;quot;12345&amp;quot;)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
    test_boolean&lt;br /&gt;
        indexing&lt;br /&gt;
            testing_covers: &amp;quot;covers.STRING.is_boolean, covers.STRING.to_boolean&amp;quot;&lt;br /&gt;
        require&lt;br /&gt;
            set_up: s /= Void and then s.is_empty&lt;br /&gt;
        do&lt;br /&gt;
            s.append (&amp;quot;True&amp;quot;)&lt;br /&gt;
            assert_true (&amp;quot;boolean&amp;quot;, s.is_boolean and then s.to_boolean)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example system level test '''test_version''' (Note: '''SYSTEM_LEVEL_TEST_SET''' inherits from '''TEST_SET''' and provides basic functionality for executing external commands, including the system currently under development):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
indexing&lt;br /&gt;
    testing_covers: &amp;quot;all&amp;quot;&lt;br /&gt;
&lt;br /&gt;
frozen class TEST_MY_APP&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    SYSTEM_LEVEL_TEST_SET&lt;br /&gt;
&lt;br /&gt;
feature {TESTING_CLASS} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_version&lt;br /&gt;
        do&lt;br /&gt;
            run_system_with_args (&amp;quot;--version&amp;quot;)&lt;br /&gt;
            assert_string_equality (&amp;quot;version&amp;quot;, last_output, &amp;quot;my_app version 0.1&amp;quot;)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Manage and run test suite ===&lt;br /&gt;
&lt;br /&gt;
[[Image:testing_set-view.png|right|400px]]&lt;br /&gt;
[[Image:testing_cut-view.png|right|400px]]&lt;br /&gt;
&lt;br /&gt;
The tool should have its own icon for displaying test cases (test routines); in this example it is a Lego block. Especially for views like ''list all tests for this routine'', it is important to see the difference between the actual routine and its tests. The tool also has more of a vertical layout: since the number of tests is comparable to the number of classes in the system, it makes sense for the tools to share the same layout. This also leaves room for tabs at the bottom that display further information, such as execution details (output, call stack, etc.).&lt;br /&gt;
&lt;br /&gt;
The '''menu bar''' includes the following buttons:&lt;br /&gt;
* Create a new manual test case (opens a wizard)&lt;br /&gt;
** if a test class is dropped on the button, the wizard will suggest creating a new test in that class&lt;br /&gt;
** if a normal class (or feature) is dropped on the button, the wizard will suggest creating a test for that class (or feature)&lt;br /&gt;
* Menu for generating new tests (defaults to the last chosen one?)&lt;br /&gt;
** if a normal class/feature is dropped on the button, generate tests for that class/feature&lt;br /&gt;
&lt;br /&gt;
* Menu for executing tests in the background (defaults to the last chosen one?)&lt;br /&gt;
** if any class/feature is dropped on the button, run the tests associated with that class/feature&lt;br /&gt;
* Run a test in the debugger (a test must be selected or dropped on the button to start)&lt;br /&gt;
* Stop any execution (background or debugger)&lt;br /&gt;
&lt;br /&gt;
* Open the settings dialog for testing&lt;br /&gt;
&lt;br /&gt;
* Status indicating how many tests have been run so far and how many are currently failing&lt;br /&gt;
&lt;br /&gt;
'''View''' defines how the test cases are listed (see below).&lt;br /&gt;
&lt;br /&gt;
'''Filter''' can be used to type keywords so that only test cases whose tags include those keywords are shown (see below). It is a drop-down, so predefined filter patterns (such as ''outcome.fail'') can be used.&lt;br /&gt;
&lt;br /&gt;
The '''grid''' contains a tree view of all test cases (test cases are always leaves). Multiple columns provide further information. Currently there are two indications of whether a test fails (a column and icons); only one is needed - both are shown here just for comparison. The advantage of icons is that they need less space. Coloring the background of a row containing a failing test case would be an option as well.&lt;br /&gt;
&lt;br /&gt;
==== Tags ====&lt;br /&gt;
&lt;br /&gt;
Each test can have a number of tags. A tag can be a single string or hierarchically structured with dots ('.'). For example, a test with the tag ''covers.STRING.append'' is a regression test for {STRING}.append. There are a number of implicit tags for each test, such as the ''name'' tag ({TEST_STRING}.test_append has the implicit tag ''name.TEST_STRING.test_append'').&lt;br /&gt;
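&lt;br /&gt;
For illustration only (the routine below is hypothetical, not part of the tool's API), matching a tag against a hierarchical prefix could be sketched in Eiffel as follows:&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    has_tag_prefix (a_tag, a_prefix: STRING): BOOLEAN&lt;br /&gt;
            -- Does `a_tag' belong to the hierarchy denoted by `a_prefix'?&lt;br /&gt;
            -- E.g. &amp;quot;covers.STRING.append&amp;quot; matches the prefix &amp;quot;covers.STRING&amp;quot;.&lt;br /&gt;
        do&lt;br /&gt;
            Result := a_tag.same_string (a_prefix)&lt;br /&gt;
                or else a_tag.substring_index (a_prefix + &amp;quot;.&amp;quot;, 1) = 1&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
A view such as ''Class tested'' could then group under ''covers.STRING'' every test for which ''has_tag_prefix'' holds.&lt;br /&gt;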
&lt;br /&gt;
==== Different views ====&lt;br /&gt;
&lt;br /&gt;
Based on the notion of tags, we can define different views. The default view ''Test sets'' simply shows a hierarchical tree for every ''name.X'' tag. The same mechanism enables further views, such as ''Class tested'', which displays every ''covers.X'' tag. Note that for tags other than ''name.'', some tests may be listed multiple times, while tests lacking such a tag must be listed explicitly. The main advantage is that users can define their own views based on any type of tag.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Running tests ====&lt;br /&gt;
[[Image:testing_run-menu.jpg]]&lt;br /&gt;
&lt;br /&gt;
=== Generate tests automatically ===&lt;br /&gt;
&lt;br /&gt;
[[Image:testing_generate-menu.jpg]]&lt;br /&gt;
&lt;br /&gt;
=== Turn any failed execution into a test ===&lt;br /&gt;
&lt;br /&gt;
=== Background test execution ===&lt;br /&gt;
== Open questions ==&lt;br /&gt;
(This section should disappear as the questions get answered.)&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
* [[Testing Tool (Architecture)]]&lt;br /&gt;
* [[Eweasel]]&lt;br /&gt;
* [[CddBranch]]&lt;br /&gt;
* [[Eiffel Testing Tool]]&lt;/div&gt;</summary>
		<author><name>Arnofiva</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=Testing_Tool_(Specification)&amp;diff=11179</id>
		<title>Testing Tool (Specification)</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=Testing_Tool_(Specification)&amp;diff=11179"/>
				<updated>2008-06-10T22:37:40Z</updated>
		
		<summary type="html">&lt;p&gt;Arnofiva: Adopted examples to explenation below&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Testing]]&lt;br /&gt;
&lt;br /&gt;
{{UnderConstruction}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Main functionalities ==&lt;br /&gt;
&lt;br /&gt;
=== Add unit/system level tests ===&lt;br /&gt;
&lt;br /&gt;
Semantically, there is no difference between unit tests and system level tests, so all tests can be written in Eiffel in a uniform way.&lt;br /&gt;
&lt;br /&gt;
A test is a routine whose name has the prefix '''test''' in a class inheriting from '''TEST_SET'''. In general, features in classes used specifically for testing should be exported at most to {TESTING_CLASS}; this prevents testing code from remaining in a finalized system. If you write a helper class for your test routines, let it inherit from '''TESTING_CLASS''' (note that '''TEST_SET''' already inherits from '''TESTING_CLASS'''). Additionally, you should make leaf test sets frozen and make sure you never directly reference testing classes in your project code.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== System level test specifics ====&lt;br /&gt;
&lt;br /&gt;
Since system level testing often relies on external items such as files, '''SYSTEM_LEVEL_TEST_SET''' provides a number of helper routines for accessing them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Config file ====&lt;br /&gt;
&lt;br /&gt;
For each target in a configuration file you may define a testing folder in which test classes, as well as other files needed for testing, can be put.&lt;br /&gt;
&lt;br /&gt;
{{Note|A special testing folder is needed when automatically creating new test cases. System level tests also rely on a known location for files and other resources.}}&lt;br /&gt;
&lt;br /&gt;
==== Additional information ====&lt;br /&gt;
&lt;br /&gt;
The indexing clause can be used to specify which classes and routines are tested by the test routine. Any specifications in the class indexing clause will apply to all tests in that class. Note '''testing_covers''' in the following examples.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Examples ====&lt;br /&gt;
&lt;br /&gt;
Example unit tests '''test_append''' and '''test_boolean'''&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
frozen class TEST_STRING&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    TEST_SET&lt;br /&gt;
        redefine&lt;br /&gt;
            set_up&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Initialization&lt;br /&gt;
&lt;br /&gt;
    set_up&lt;br /&gt;
        do&lt;br /&gt;
            create s.make (10)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {TESTING_CLASS} -- Access&lt;br /&gt;
&lt;br /&gt;
    s: STRING&lt;br /&gt;
&lt;br /&gt;
feature {TESTING_CLASS} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_append&lt;br /&gt;
        indexing&lt;br /&gt;
            testing: &amp;quot;covers.STRING.append&amp;quot;&lt;br /&gt;
        require&lt;br /&gt;
            set_up: s /= Void and then s.is_empty&lt;br /&gt;
        do&lt;br /&gt;
            s.append (&amp;quot;12345&amp;quot;)&lt;br /&gt;
            assert_string_equality (&amp;quot;append&amp;quot;, s, &amp;quot;12345&amp;quot;)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
    test_boolean&lt;br /&gt;
        indexing&lt;br /&gt;
            testing_covers: &amp;quot;covers.STRING.is_boolean, covers.STRING.to_boolean&amp;quot;&lt;br /&gt;
        require&lt;br /&gt;
            set_up: s /= Void and then s.is_empty&lt;br /&gt;
        do&lt;br /&gt;
            s.append (&amp;quot;True&amp;quot;)&lt;br /&gt;
            assert_true (&amp;quot;boolean&amp;quot;, s.is_boolean and then s.to_boolean)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example system level test '''test_version''' (Note: '''SYSTEM_LEVEL_TEST_SET''' inherits from '''TEST_SET''' and provides basic functionality for executing external commands, including the system currently under development):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
indexing&lt;br /&gt;
    testing_covers: &amp;quot;all&amp;quot;&lt;br /&gt;
&lt;br /&gt;
frozen class TEST_MY_APP&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    SYSTEM_LEVEL_TEST_SET&lt;br /&gt;
&lt;br /&gt;
feature {TESTING_CLASS} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_version&lt;br /&gt;
        do&lt;br /&gt;
            run_system_with_args (&amp;quot;--version&amp;quot;)&lt;br /&gt;
            assert_string_equality (&amp;quot;version&amp;quot;, last_output, &amp;quot;my_app version 0.1&amp;quot;)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Manage and run test suite ===&lt;br /&gt;
&lt;br /&gt;
[[Image:testing_set-view.png|right|400px]]&lt;br /&gt;
&lt;br /&gt;
The tool should have its own icon for displaying test cases (test routines); in this example it is a Lego block. Especially for views like ''list all tests for this routine'', it is important to see the difference between the actual routine and its tests. The tool also has more of a vertical layout: since the number of tests is comparable to the number of classes in the system, it makes sense for the tools to share the same layout. This also leaves room for tabs at the bottom that display further information, such as execution details (output, call stack, etc.).&lt;br /&gt;
&lt;br /&gt;
The '''menu bar''' includes the following buttons:&lt;br /&gt;
* Create a new manual test case (opens a wizard)&lt;br /&gt;
** if a test class is dropped on the button, the wizard will suggest creating a new test in that class&lt;br /&gt;
** if a normal class (or feature) is dropped on the button, the wizard will suggest creating a test for that class (or feature)&lt;br /&gt;
* Menu for generating new tests (defaults to the last chosen one?)&lt;br /&gt;
** if a normal class/feature is dropped on the button, generate tests for that class/feature&lt;br /&gt;
&lt;br /&gt;
* Menu for executing tests in the background (defaults to the last chosen one?)&lt;br /&gt;
** if any class/feature is dropped on the button, run the tests associated with that class/feature&lt;br /&gt;
* Run a test in the debugger (a test must be selected or dropped on the button to start)&lt;br /&gt;
* Stop any execution (background or debugger)&lt;br /&gt;
&lt;br /&gt;
* Open the settings dialog for testing&lt;br /&gt;
&lt;br /&gt;
* Status indicating how many tests have been run so far and how many are currently failing&lt;br /&gt;
&lt;br /&gt;
'''View''' defines how the test cases are listed (see below).&lt;br /&gt;
&lt;br /&gt;
'''Filter''' can be used to type keywords so that only test cases whose tags include those keywords are shown (see below). It is a drop-down, so predefined filter patterns (such as ''outcome.fail'') can be used.&lt;br /&gt;
&lt;br /&gt;
The '''grid''' contains a tree view of all test cases (test cases are always leaves). Multiple columns provide further information. Currently there are two indications of whether a test fails (a column and icons); only one is needed - both are shown here just for comparison. The advantage of icons is that they need less space. Coloring the background of a row containing a failing test case would be an option as well.&lt;br /&gt;
&lt;br /&gt;
==== Tags ====&lt;br /&gt;
&lt;br /&gt;
Each test can have a number of tags. A tag can be a single string or hierarchically structured with dots ('.'). For example, a test with the tag ''covers.STRING.append'' is a regression test for {STRING}.append. There are a number of implicit tags for each test, such as the ''name'' tag ({TEST_STRING}.test_append has the implicit tag ''name.TEST_STRING.test_append'').&lt;br /&gt;
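&lt;br /&gt;
For illustration only (the routine below is hypothetical, not part of the tool's API), matching a tag against a hierarchical prefix could be sketched in Eiffel as follows:&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    has_tag_prefix (a_tag, a_prefix: STRING): BOOLEAN&lt;br /&gt;
            -- Does `a_tag' belong to the hierarchy denoted by `a_prefix'?&lt;br /&gt;
            -- E.g. &amp;quot;covers.STRING.append&amp;quot; matches the prefix &amp;quot;covers.STRING&amp;quot;.&lt;br /&gt;
        do&lt;br /&gt;
            Result := a_tag.same_string (a_prefix)&lt;br /&gt;
                or else a_tag.substring_index (a_prefix + &amp;quot;.&amp;quot;, 1) = 1&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
A view such as ''Class tested'' could then group under ''covers.STRING'' every test for which ''has_tag_prefix'' holds.&lt;br /&gt;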
&lt;br /&gt;
==== Different views ====&lt;br /&gt;
&lt;br /&gt;
[[Image:testing_cut-view.png|right|400px]]&lt;br /&gt;
Based on the notion of tags, we can define different views. The default view ''Test sets'' simply shows a hierarchical tree for every ''name.X'' tag. The same mechanism enables further views, such as ''Class tested'', which displays every ''covers.X'' tag. Note that for tags other than ''name.'', some tests may be listed multiple times, while tests lacking such a tag must be listed explicitly. The main advantage is that users can define their own views based on any type of tag.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Running tests ====&lt;br /&gt;
[[Image:testing_run-menu.jpg]]&lt;br /&gt;
&lt;br /&gt;
=== Generate tests automatically ===&lt;br /&gt;
&lt;br /&gt;
[[Image:testing_generate-menu.jpg]]&lt;br /&gt;
&lt;br /&gt;
=== Turn any failed execution into a test ===&lt;br /&gt;
&lt;br /&gt;
=== Background test execution ===&lt;br /&gt;
== Open questions ==&lt;br /&gt;
(This section should disappear as the questions get answered.)&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
* [[Testing Tool (Architecture)]]&lt;br /&gt;
* [[Eweasel]]&lt;br /&gt;
* [[CddBranch]]&lt;br /&gt;
* [[Eiffel Testing Tool]]&lt;/div&gt;</summary>
		<author><name>Arnofiva</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=Testing_Tool_(Specification)&amp;diff=11178</id>
		<title>Testing Tool (Specification)</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=Testing_Tool_(Specification)&amp;diff=11178"/>
				<updated>2008-06-10T22:36:05Z</updated>
		
		<summary type="html">&lt;p&gt;Arnofiva: More comments on the screen shots&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Testing]]&lt;br /&gt;
&lt;br /&gt;
{{UnderConstruction}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Main functionalities ==&lt;br /&gt;
&lt;br /&gt;
=== Add unit/system level tests ===&lt;br /&gt;
&lt;br /&gt;
Semantically, there is no difference between unit tests and system level tests, so all tests can be written in Eiffel in a uniform way.&lt;br /&gt;
&lt;br /&gt;
A test is a routine whose name has the prefix '''test''' in a class inheriting from '''TEST_SET'''. In general, features in classes used specifically for testing should be exported at most to {TESTING_CLASS}; this prevents testing code from remaining in a finalized system. If you write a helper class for your test routines, let it inherit from '''TESTING_CLASS''' (note that '''TEST_SET''' already inherits from '''TESTING_CLASS'''). Additionally, you should make leaf test sets frozen and make sure you never directly reference testing classes in your project code.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== System level test specifics ====&lt;br /&gt;
&lt;br /&gt;
Since system level testing often relies on external items such as files, '''SYSTEM_LEVEL_TEST_SET''' provides a number of helper routines for accessing them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Config file ====&lt;br /&gt;
&lt;br /&gt;
For each target in a configuration file you may define a testing folder in which test classes, as well as other files needed for testing, can be put.&lt;br /&gt;
&lt;br /&gt;
{{Note|A special testing folder is needed when automatically creating new test cases. System level tests also rely on a known location for files and other resources.}}&lt;br /&gt;
&lt;br /&gt;
==== Additional information ====&lt;br /&gt;
&lt;br /&gt;
The indexing clause can be used to specify which classes and routines are tested by the test routine. Any specifications in the class indexing clause will apply to all tests in that class. Note '''testing_covers''' in the following examples.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Examples ====&lt;br /&gt;
&lt;br /&gt;
Example unit tests '''test_append''' and '''test_boolean'''&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
frozen class TEST_STRING&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    TEST_SET&lt;br /&gt;
        redefine&lt;br /&gt;
            set_up&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Initialization&lt;br /&gt;
&lt;br /&gt;
    set_up&lt;br /&gt;
        do&lt;br /&gt;
            create s.make (10)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {TESTING_CLASS} -- Access&lt;br /&gt;
&lt;br /&gt;
    s: STRING&lt;br /&gt;
&lt;br /&gt;
feature {TESTING_CLASS} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_append&lt;br /&gt;
        indexing&lt;br /&gt;
            testing_covers: &amp;quot;{STRING}.append&amp;quot;&lt;br /&gt;
        require&lt;br /&gt;
            set_up: s /= Void and then s.is_empty&lt;br /&gt;
        do&lt;br /&gt;
            s.append (&amp;quot;12345&amp;quot;)&lt;br /&gt;
            assert_string_equality (&amp;quot;append&amp;quot;, s, &amp;quot;12345&amp;quot;)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
    test_boolean&lt;br /&gt;
        indexing&lt;br /&gt;
            testing_covers: &amp;quot;{STRING}.is_boolean, {STRING}.to_boolean&amp;quot;&lt;br /&gt;
        require&lt;br /&gt;
            set_up: s /= Void and then s.is_empty&lt;br /&gt;
        do&lt;br /&gt;
            s.append (&amp;quot;True&amp;quot;)&lt;br /&gt;
            assert_true (&amp;quot;boolean&amp;quot;, s.is_boolean and then s.to_boolean)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example system level test '''test_version''' (Note: '''SYSTEM_LEVEL_TEST_SET''' inherits from '''TEST_SET''' and provides basic functionality for executing external commands, including the system currently under development):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
indexing&lt;br /&gt;
    testing_covers: &amp;quot;all&amp;quot;&lt;br /&gt;
&lt;br /&gt;
frozen class TEST_MY_APP&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    SYSTEM_LEVEL_TEST_SET&lt;br /&gt;
&lt;br /&gt;
feature {TESTING_CLASS} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_version&lt;br /&gt;
        do&lt;br /&gt;
            run_system_with_args (&amp;quot;--version&amp;quot;)&lt;br /&gt;
            assert_string_equality (&amp;quot;version&amp;quot;, last_output, &amp;quot;my_app version 0.1&amp;quot;)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Manage and run test suite ===&lt;br /&gt;
&lt;br /&gt;
[[Image:testing_set-view.png|right|400px]]&lt;br /&gt;
&lt;br /&gt;
The tool should have its own icon for displaying test cases (test routines). In this example it is a Lego block. Especially for views like ''list all tests for this routine'', it is important to see the difference between the actual routine and its tests. The tool also has a more vertical layout: since the number of tests is comparable to the number of classes in the system, it makes sense for both tools to share the same layout. It also leaves room for tabs at the bottom that display further information, such as execution details (output, call stack, etc.).&lt;br /&gt;
&lt;br /&gt;
The '''menu bar''' includes the following buttons:&lt;br /&gt;
* Create new manual test case (opens wizard)&lt;br /&gt;
** if a test class is dropped on the button, the wizard will suggest creating the new test in that class&lt;br /&gt;
** if a normal class (or feature) is dropped on the button, the wizard will suggest creating a test for that class (or feature)&lt;br /&gt;
* Menu for generating new test (defaults to last chosen one?)&lt;br /&gt;
** if a normal class/feature is dropped on the button, generate tests for that class/feature&lt;br /&gt;
&lt;br /&gt;
* Menu for executing tests in background (defaults to last chosen one?)&lt;br /&gt;
** if any class/feature is dropped on the button, run the tests associated with that class/feature&lt;br /&gt;
* Run test in debugger (must have a test selected or dropped on button to start)&lt;br /&gt;
* Stop any execution (background or debugger)&lt;br /&gt;
&lt;br /&gt;
* Opens settings dialog for testing&lt;br /&gt;
&lt;br /&gt;
* Status indicating how many tests have been run so far and&lt;br /&gt;
* how many of them are failing&lt;br /&gt;
&lt;br /&gt;
'''View''' defines in which way the test cases are listed (see below).&lt;br /&gt;
&lt;br /&gt;
'''Filter''' can be used to type keywords so that only test cases whose tags include the keywords are shown (see below). It is a drop-down, so predefined filter patterns can be used (such as ''outcome.fail'').&lt;br /&gt;
&lt;br /&gt;
The '''grid''' contains a tree view of all test cases (test cases are always leaves). Multiple columns provide further information. Currently there are two indications of whether a test fails (column and icons). Only one is needed; both are shown here for comparison. The advantage of icons is that they need less space. Coloring the background of a row containing a failing test case would be an option as well.&lt;br /&gt;
&lt;br /&gt;
==== Tags ====&lt;br /&gt;
&lt;br /&gt;
Each test can have a number of tags. A tag can be a single string or hierarchically structured with dots ('.'). For example, a test with the tag ''covers.STRING.append'' is a regression test for {STRING}.append. Each test also carries a number of implicit tags, such as the ''name'' tag ({TEST_STRING}.test_append has the implicit tag ''name.TEST_STRING.test_append'').&lt;br /&gt;
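&lt;br /&gt;
As a sketch of how the tag scheme relates to the indexing clause shown earlier (the comments below describe the tags this routine would carry; the mapping from '''testing_covers''' to a ''covers.X'' tag is an assumption based on the description above):&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
    test_append&lt;br /&gt;
            -- Implicit tag: name.TEST_STRING.test_append&lt;br /&gt;
            -- Tag derived from the indexing clause: covers.STRING.append&lt;br /&gt;
        indexing&lt;br /&gt;
            testing_covers: &amp;quot;{STRING}.append&amp;quot;&lt;br /&gt;
        do&lt;br /&gt;
            -- Test body as in the example above.&lt;br /&gt;
        end&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;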
&lt;br /&gt;
==== Different views ====&lt;br /&gt;
&lt;br /&gt;
[[Image:testing_cut-view.png|right|400px]]&lt;br /&gt;
Based on the notion of tags, we are able to define different views. The default view ''Test sets'' simply shows a hierarchical tree for every ''name.X'' tag. This scheme lets us define further views, such as ''Class tested'', which displays every ''covers.X'' tag. Note that with tags other than ''name.'', some tests may be listed multiple times, while tests not carrying such a tag must be listed explicitly. The main advantage is that users can define their own views based on any kind of tag.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Running tests ====&lt;br /&gt;
[[Image:testing_run-menu.jpg]]&lt;br /&gt;
&lt;br /&gt;
=== Generate tests automatically ===&lt;br /&gt;
&lt;br /&gt;
[[Image:testing_generate-menu.jpg]]&lt;br /&gt;
&lt;br /&gt;
=== Turn any failed execution into a test ===&lt;br /&gt;
&lt;br /&gt;
=== Background test execution ===&lt;br /&gt;
== Open questions ==&lt;br /&gt;
(This section should disappear as the questions get answered.)&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
* [[Testing Tool (Architecture)]]&lt;br /&gt;
* [[Eweasel]]&lt;br /&gt;
* [[CddBranch]]&lt;br /&gt;
* [[Eiffel Testing Tool]]&lt;/div&gt;</summary>
		<author><name>Arnofiva</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=Testing_Tool_(Specification)&amp;diff=11177</id>
		<title>Testing Tool (Specification)</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=Testing_Tool_(Specification)&amp;diff=11177"/>
				<updated>2008-06-10T21:32:25Z</updated>
		
		<summary type="html">&lt;p&gt;Arnofiva: Screenshots for testing tool with first comments&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Testing]]&lt;br /&gt;
&lt;br /&gt;
{{UnderConstruction}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Main functionalities ==&lt;br /&gt;
&lt;br /&gt;
=== Add unit/system level tests ===&lt;br /&gt;
&lt;br /&gt;
Semantically there is no difference between unit tests and system level tests. This way all tests can be written in Eiffel in a uniform way.&lt;br /&gt;
&lt;br /&gt;
A test is a routine whose name has the prefix '''test''' in a class inheriting from '''TEST_SET'''. In general, features in classes used specifically for testing should be exported at most to {TESTING_CLASS}. This prevents testing code from remaining in a finalized system. If you write a helper class for your test routines, let it inherit from '''TESTING_CLASS''' (Note: '''TEST_SET''' already inherits from '''TESTING_CLASS'''). Additionally, you should make leaf test sets frozen and make sure you never directly reference testing classes in your project code.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== System level test specifics ====&lt;br /&gt;
&lt;br /&gt;
Since system level testing often relies on external items like files, '''SYSTEM_LEVEL_TEST_SET''' provides a number of helper routines accessing them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Config file ====&lt;br /&gt;
&lt;br /&gt;
For each target in a configuration file you may define a testing folder, which holds test classes as well as any other files needed for testing.&lt;br /&gt;
&lt;br /&gt;
{{Note|A special testing folder is needed when new test cases are created automatically. System level tests also rely on a known location for files.}}&lt;br /&gt;
&lt;br /&gt;
==== Additional information ====&lt;br /&gt;
&lt;br /&gt;
The indexing clause can be used to specify which classes and routines are tested by the test routine. Any specifications in the class indexing clause will apply to all tests in that class. Note '''testing_covers''' in the following examples.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Examples ====&lt;br /&gt;
&lt;br /&gt;
Example unit tests '''test_append''' and '''test_boolean'''&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
frozen class TEST_STRING&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    TEST_SET&lt;br /&gt;
        redefine&lt;br /&gt;
            set_up&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Initialization&lt;br /&gt;
&lt;br /&gt;
    set_up&lt;br /&gt;
        do&lt;br /&gt;
            create s.make (10)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {TESTING_CLASS} -- Access&lt;br /&gt;
&lt;br /&gt;
    s: STRING&lt;br /&gt;
&lt;br /&gt;
feature {TESTING_CLASS} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_append&lt;br /&gt;
        indexing&lt;br /&gt;
            testing_covers: &amp;quot;{STRING}.append&amp;quot;&lt;br /&gt;
        require&lt;br /&gt;
            set_up: s /= Void and then s.is_empty&lt;br /&gt;
        do&lt;br /&gt;
            s.append (&amp;quot;12345&amp;quot;)&lt;br /&gt;
            assert_string_equality (&amp;quot;append&amp;quot;, s, &amp;quot;12345&amp;quot;)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
    test_boolean&lt;br /&gt;
        indexing&lt;br /&gt;
            testing_covers: &amp;quot;{STRING}.is_boolean, {STRING}.to_boolean&amp;quot;&lt;br /&gt;
        require&lt;br /&gt;
            set_up: s /= Void and then s.is_empty&lt;br /&gt;
        do&lt;br /&gt;
            s.append (&amp;quot;True&amp;quot;)&lt;br /&gt;
            assert_true (&amp;quot;boolean&amp;quot;, s.is_boolean and then s.to_boolean)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example system level test '''test_version''' (Note: '''SYSTEM_LEVEL_TEST_SET''' inherits from '''TEST_SET''' and provides basic functionality for executing external commands, including the system currently under development):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
indexing&lt;br /&gt;
    testing_covers: &amp;quot;all&amp;quot;&lt;br /&gt;
&lt;br /&gt;
frozen class TEST_MY_APP&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    SYSTEM_LEVEL_TEST_SET&lt;br /&gt;
&lt;br /&gt;
feature {TESTING_CLASS} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_version&lt;br /&gt;
        do&lt;br /&gt;
            run_system_with_args (&amp;quot;--version&amp;quot;)&lt;br /&gt;
            assert_string_equality (&amp;quot;version&amp;quot;, last_output, &amp;quot;my_app version 0.1&amp;quot;)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Manage and run test suite ===&lt;br /&gt;
&lt;br /&gt;
This is what a screen shot of the above example could look like:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:testing_set-view.png|right|400px]]&lt;br /&gt;
&lt;br /&gt;
The tool should have its own icon for displaying test cases (test routines). In this example it is a Lego block. Especially for views like ''list all tests for this routine'', it is important to see the difference between the actual routine and its tests.&lt;br /&gt;
&lt;br /&gt;
The '''menu bar''' includes the following buttons:&lt;br /&gt;
* Create new manual test case (opens wizard)&lt;br /&gt;
** if a test class is dropped on the button, the wizard will suggest creating the new test in that class&lt;br /&gt;
** if a normal class (or feature) is dropped on the button, the wizard will suggest creating a test for that class (or feature)&lt;br /&gt;
* Menu for generating new test (defaults to last chosen one?)&lt;br /&gt;
** if a normal class/feature is dropped on the button, generate tests for that class/feature&lt;br /&gt;
&lt;br /&gt;
* Menu for executing tests in background (defaults to last chosen one?)&lt;br /&gt;
** if any class/feature is dropped on the button, run the tests associated with that class/feature&lt;br /&gt;
* Run test in debugger (must have a test selected or dropped on button to start)&lt;br /&gt;
* Stop any execution (background or debugger)&lt;br /&gt;
&lt;br /&gt;
* Opens settings dialog for testing&lt;br /&gt;
&lt;br /&gt;
* Status indicating how many tests have been run so far and&lt;br /&gt;
* how many of them are failing&lt;br /&gt;
&lt;br /&gt;
'''View''' defines in which way the test cases are listed (see below).&lt;br /&gt;
&lt;br /&gt;
'''Filter''' can be used to type keywords so that only test cases whose tags include the keywords are shown. It is a drop-down, so predefined filter patterns can be used (such as ''outcome.fail'').&lt;br /&gt;
&lt;br /&gt;
The '''grid''' contains a tree view of all test cases (test cases are always leaves). Multiple columns provide further information. Currently there are two indications of whether a test fails (column and icons). Only one is needed; both are shown here for comparison. The advantage of icons is that they need less space. Coloring the background of a row containing a failing test case would be an option as well.&lt;br /&gt;
&lt;br /&gt;
==== Running tests ====&lt;br /&gt;
[[Image:testing_run-menu.jpg]]&lt;br /&gt;
&lt;br /&gt;
==== Different views ====&lt;br /&gt;
&lt;br /&gt;
[[Image:testing_cut-view.png]]&lt;br /&gt;
&lt;br /&gt;
=== Generate tests automatically ===&lt;br /&gt;
&lt;br /&gt;
[[Image:testing_generate-menu.jpg]]&lt;br /&gt;
&lt;br /&gt;
=== Turn any failed execution into a test ===&lt;br /&gt;
&lt;br /&gt;
=== Background test execution ===&lt;br /&gt;
== Open questions ==&lt;br /&gt;
(This section should disappear as the questions get answered.)&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
* [[Testing Tool (Architecture)]]&lt;br /&gt;
* [[Eweasel]]&lt;br /&gt;
* [[CddBranch]]&lt;br /&gt;
* [[Eiffel Testing Tool]]&lt;/div&gt;</summary>
		<author><name>Arnofiva</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=Testing_Tool_(Specification)&amp;diff=11176</id>
		<title>Testing Tool (Specification)</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=Testing_Tool_(Specification)&amp;diff=11176"/>
				<updated>2008-06-10T21:01:22Z</updated>
		
		<summary type="html">&lt;p&gt;Arnofiva: Added screen shot for generating tests&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Testing]]&lt;br /&gt;
&lt;br /&gt;
{{UnderConstruction}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Main functionalities ==&lt;br /&gt;
&lt;br /&gt;
=== Add unit/system level tests ===&lt;br /&gt;
&lt;br /&gt;
Semantically there is no difference between unit tests and system level tests. This way all tests can be written in Eiffel in a uniform way.&lt;br /&gt;
&lt;br /&gt;
A test is a routine whose name has the prefix '''test''' in a class inheriting from '''TEST_SET'''. In general, features in classes used specifically for testing should be exported at most to {TESTING_CLASS}. This prevents testing code from remaining in a finalized system. If you write a helper class for your test routines, let it inherit from '''TESTING_CLASS''' (Note: '''TEST_SET''' already inherits from '''TESTING_CLASS'''). Additionally, you should make leaf test sets frozen and make sure you never directly reference testing classes in your project code.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== System level test specifics ====&lt;br /&gt;
&lt;br /&gt;
Since system level testing often relies on external items like files, '''SYSTEM_LEVEL_TEST_SET''' provides a number of helper routines accessing them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Config file ====&lt;br /&gt;
&lt;br /&gt;
For each target in a configuration file you may define a testing folder, which holds test classes as well as any other files needed for testing.&lt;br /&gt;
&lt;br /&gt;
{{Note|A special testing folder is needed when new test cases are created automatically. System level tests also rely on a known location for files.}}&lt;br /&gt;
&lt;br /&gt;
==== Additional information ====&lt;br /&gt;
&lt;br /&gt;
The indexing clause can be used to specify which classes and routines are tested by the test routine. Any specifications in the class indexing clause will apply to all tests in that class. Note '''testing_covers''' in the following examples.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Examples ====&lt;br /&gt;
&lt;br /&gt;
Example unit tests '''test_append''' and '''test_boolean'''&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
frozen class TEST_STRING&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    TEST_SET&lt;br /&gt;
        redefine&lt;br /&gt;
            set_up&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {NONE} -- Initialization&lt;br /&gt;
&lt;br /&gt;
    set_up&lt;br /&gt;
        do&lt;br /&gt;
            create s.make (10)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
feature {TESTING_CLASS} -- Access&lt;br /&gt;
&lt;br /&gt;
    s: STRING&lt;br /&gt;
&lt;br /&gt;
feature {TESTING_CLASS} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_append&lt;br /&gt;
        indexing&lt;br /&gt;
            testing_covers: &amp;quot;{STRING}.append&amp;quot;&lt;br /&gt;
        require&lt;br /&gt;
            set_up: s /= Void and then s.is_empty&lt;br /&gt;
        do&lt;br /&gt;
            s.append (&amp;quot;12345&amp;quot;)&lt;br /&gt;
            assert_string_equality (&amp;quot;append&amp;quot;, s, &amp;quot;12345&amp;quot;)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
    test_boolean&lt;br /&gt;
        indexing&lt;br /&gt;
            testing_covers: &amp;quot;{STRING}.is_boolean, {STRING}.to_boolean&amp;quot;&lt;br /&gt;
        require&lt;br /&gt;
            set_up: s /= Void and then s.is_empty&lt;br /&gt;
        do&lt;br /&gt;
            s.append (&amp;quot;True&amp;quot;)&lt;br /&gt;
            assert_true (&amp;quot;boolean&amp;quot;, s.is_boolean and then s.to_boolean)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example system level test '''test_version''' (Note: '''SYSTEM_LEVEL_TEST_SET''' inherits from '''TEST_SET''' and provides basic functionality for executing external commands, including the system currently under development):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
indexing&lt;br /&gt;
    testing_covers: &amp;quot;all&amp;quot;&lt;br /&gt;
&lt;br /&gt;
frozen class TEST_MY_APP&lt;br /&gt;
&lt;br /&gt;
inherit&lt;br /&gt;
&lt;br /&gt;
    SYSTEM_LEVEL_TEST_SET&lt;br /&gt;
&lt;br /&gt;
feature {TESTING_CLASS} -- Test routines&lt;br /&gt;
&lt;br /&gt;
    test_version&lt;br /&gt;
        do&lt;br /&gt;
            run_system_with_args (&amp;quot;--version&amp;quot;)&lt;br /&gt;
            assert_string_equality (&amp;quot;version&amp;quot;, last_output, &amp;quot;my_app version 0.1&amp;quot;)&lt;br /&gt;
        end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Manage and run test suite ===&lt;br /&gt;
&lt;br /&gt;
This is what a screen shot of the above example could look like:&lt;br /&gt;
&lt;br /&gt;
[[Image:Testing_tool.jpg]]&lt;br /&gt;
&lt;br /&gt;
=== Generate tests automatically ===&lt;br /&gt;
&lt;br /&gt;
[[Image:testing_generate-menu.jpg]]&lt;br /&gt;
&lt;br /&gt;
=== Turn any failed execution into a test ===&lt;br /&gt;
&lt;br /&gt;
=== Background test execution ===&lt;br /&gt;
== Open questions ==&lt;br /&gt;
(This section should disappear as the questions get answered.)&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
* [[Testing Tool (Architecture)]]&lt;br /&gt;
* [[Eweasel]]&lt;br /&gt;
* [[CddBranch]]&lt;br /&gt;
* [[Eiffel Testing Tool]]&lt;/div&gt;</summary>
		<author><name>Arnofiva</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=File:Testing_run-menu.jpg&amp;diff=11175</id>
		<title>File:Testing run-menu.jpg</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=File:Testing_run-menu.jpg&amp;diff=11175"/>
				<updated>2008-06-10T20:57:07Z</updated>
		
		<summary type="html">&lt;p&gt;Arnofiva: Testing tool menu for running test cases&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Testing tool menu for running test cases&lt;/div&gt;</summary>
		<author><name>Arnofiva</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=File:Testing_generate-menu.jpg&amp;diff=11174</id>
		<title>File:Testing generate-menu.jpg</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=File:Testing_generate-menu.jpg&amp;diff=11174"/>
				<updated>2008-06-10T20:56:35Z</updated>
		
		<summary type="html">&lt;p&gt;Arnofiva: Testing tool menu for generating new tests&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Testing tool menu for generating new tests&lt;/div&gt;</summary>
		<author><name>Arnofiva</name></author>	</entry>

	</feed>