<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>https://dev.eiffel.com/api.php?action=feedcontributions&amp;user=Mogh&amp;feedformat=atom</id>
		<title>EiffelStudio: an EiffelSoftware project - User contributions [en]</title>
		<link rel="self" type="application/atom+xml" href="https://dev.eiffel.com/api.php?action=feedcontributions&amp;user=Mogh&amp;feedformat=atom"/>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/Special:Contributions/Mogh"/>
		<updated>2026-04-13T07:52:49Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.24.1</generator>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=CddBranch&amp;diff=11059</id>
		<title>CddBranch</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=CddBranch&amp;diff=11059"/>
				<updated>2008-05-14T13:45:22Z</updated>
		
		<summary type="html">&lt;p&gt;Mogh: /* News */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Testing]]&lt;br /&gt;
[[Category:EiffelDebugger]]&lt;br /&gt;
[[Category:CDD]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== What is the CDD EiffelStudio? ==&lt;br /&gt;
&lt;br /&gt;
[[Image:cdd_logo.png|center]]&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[http://se.ethz.ch/people/leitner/cdd/video/ '''Play CDD Video''', click here]&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
CDD (short for Contract Driven Development) is a project developed at [http://se.ethz.ch ETH Zurich]. It adds advanced support for unit testing to EiffelStudio. With CDD EiffelStudio you can&lt;br /&gt;
&lt;br /&gt;
* Write test cases&lt;br /&gt;
* Manage test cases (using tags)&lt;br /&gt;
* Run test cases&lt;br /&gt;
* View test outcomes&lt;br /&gt;
* '''Automatically extract test cases'''&lt;br /&gt;
&lt;br /&gt;
If you have questions, feedback, or would like to report a bug, please visit the [http://eiffelstudio.origo.ethz.ch/forum/20 CDD forum]. &lt;br /&gt;
&lt;br /&gt;
CDD EiffelStudio adds the following panel to regular EiffelStudio:&lt;br /&gt;
&lt;br /&gt;
[[Image:cdd_panel.png|center]]&lt;br /&gt;
&lt;br /&gt;
== News ==&lt;br /&gt;
14.May 2008: Added patch for Windows and Linux that fixes a bug that can cause a crash during extraction.&lt;br /&gt;
&lt;br /&gt;
10.April 2008: If installation exits with cryptic error codes, please have a look at the [[CDD Common Problems|Common Problems]] page.&lt;br /&gt;
&lt;br /&gt;
08.April 2008: CDD EiffelStudio final 7 available&lt;br /&gt;
* Important bug fix: EiffelStudio no longer freezes when opening projects with many test cases&lt;br /&gt;
&lt;br /&gt;
02.April 2008: CDD EiffelStudio Final 6 available&lt;br /&gt;
* Less mess: Redundant or duplicate test cases are no longer extracted&lt;br /&gt;
* Smoother upgrade: Cleaning a project will automatically clean the test suite&lt;br /&gt;
* Fixed bug in invariant checking&lt;br /&gt;
* Fixed bug that caused EiffelStudio to freeze in some situations&lt;br /&gt;
* Improved logging&lt;br /&gt;
* Installer ''might'' work with mingw on Windows now. (Not yet tested, reports welcome)&lt;br /&gt;
&lt;br /&gt;
== Download CDD ==&lt;br /&gt;
&lt;br /&gt;
The following packages contain the full EiffelStudio 6.1 plus the CDD extension. You do not need to have EiffelStudio installed already in order to install the packages below. On Windows you must have either the Platform SDK or Visual C++ installed. Do not use EiffelStudio with the gcc/mingw or the .NET backend.&lt;br /&gt;
&lt;br /&gt;
=== Linux ===&lt;br /&gt;
* Full Linux version: http://se.ethz.ch/people/leitner/cdd/Eiffel61_cdd_final_7.tar.bz2&lt;br /&gt;
* Full Linux version (for Debian stable): http://se.ethz.ch/people/leitner/cdd/Eiffel61_cdd_final_7_debian_stable.tar.bz2&lt;br /&gt;
* Installation instructions: http://docs.eiffel.com/eiffelstudio/installation/studio/060_linux.html&lt;br /&gt;
Notes:&lt;br /&gt;
* If you are '''upgrading''' from a previous version, make sure you delete the old version first (rm -rf)&lt;br /&gt;
&lt;br /&gt;
Download the above file and install it just like you would install an EiffelStudio tarball. Afterwards proceed to the section &amp;quot;Using CDD&amp;quot;. Make sure you set/update the environment variables PATH, ISE_EIFFEL, and ISE_PLATFORM according to the installation instructions.&lt;br /&gt;
&lt;br /&gt;
=== Linux Patch ===&lt;br /&gt;
* Patch for CDD EiffelStudio Final 7: Download http://se.ethz.ch/people/leitner/cdd/ec and use it to replace the '$ISE_EIFFEL/studio/spec/linux-x86/bin/ec' from Final 7. This patch fixes a crash that sometimes occurs during the extraction of test cases. You may need to delete the precompiles and the EIFGENs of your project after applying the patch.&lt;br /&gt;
&lt;br /&gt;
=== Windows ===&lt;br /&gt;
&lt;br /&gt;
* Full Windows version (with installer): http://n.ethz.ch/~moris/download/Eiffel61_cdd_final_7-windows.msi&lt;br /&gt;
Notes:&lt;br /&gt;
* Installation is independent of installations of official EiffelStudio 6.1 (neither overwrites nor invalidates nor is influenced by those) &lt;br /&gt;
* If you are '''upgrading''' from a previous version, you first have to uninstall the old version, then delete the existing precompilations (= delete the EIFGENs in the subdirectories of &amp;lt;INSTALL_DIRECTORY&amp;gt;/precomp/spec/windows/), and then install the new version.&lt;br /&gt;
* Which C compiler to install?&lt;br /&gt;
** Use the Microsoft C compiler either from Visual Studio or the Windows SDK:&lt;br /&gt;
*** Visual Studio (up to Visual Studio 2005, but no later, and only the non-express version)&lt;br /&gt;
*** Windows SDK (up to version 6.0, but no later)&lt;br /&gt;
** Do not use the gcc/mingw.&lt;br /&gt;
** Do not use the .NET compiler backend. &lt;br /&gt;
&lt;br /&gt;
Have a look at http://dev.eiffel.com/Installing_Microsoft_C_compiler_6.1_and_older to learn how to install either Visual C++ or the Windows SDK for use with EiffelStudio. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Windows Patch ====&lt;br /&gt;
&lt;br /&gt;
* Patch version for an existing EiffelStudio CDD Edition Final 5 or Final 6 installation (or Final 7, where it fixes a bug that causes a crash under uncommon circumstances): http://n.ethz.ch/~moris/download/Eiffel61_cdd_final_7-windows_patch_01.zip&lt;br /&gt;
* Does NOT work with an official EiffelStudio 6.1 installation (or pre-Final 5 versions of the CDD Edition)! &lt;br /&gt;
* Replace &amp;lt;INSTALL_DIRECTORY&amp;gt;/studio/spec/windows/bin/ec.exe with the ec.exe contained in the archive&lt;br /&gt;
* Delete existing precompilations (= delete EIFGENs in subdirectories of &amp;lt;INSTALL_DIRECTORY&amp;gt;/precomp/spec/windows/)&lt;br /&gt;
* Recompile existing projects from scratch (use the &amp;quot;clean&amp;quot; option in the project load screen)&lt;br /&gt;
&lt;br /&gt;
== Documentation ==&lt;br /&gt;
&lt;br /&gt;
=== Manually Written Test Cases ===&lt;br /&gt;
CDD EiffelStudio allows you to create a new “empty” test case. These test cases are similar to JUnit test cases. A manually written test class name must start with the word “TEST”, and all test routine names have to start with the word “test”. The class also has to inherit from CDD_TEST_CASE.&lt;br /&gt;
&lt;br /&gt;
=== Extracted Test Cases ===&lt;br /&gt;
CDD EiffelStudio automatically extracts test cases whenever you run your program and an exception is triggered. This feature is novel and not yet part of any other testing environment. You will be the first to try it out.&lt;br /&gt;
&lt;br /&gt;
=== Test outcomes ===&lt;br /&gt;
A test case checks whether your program contains a particular bug. A test case can fail, indicating that the bug is present in your program, or pass, indicating that your program does not contain this bug. Sometimes test cases will be unresolved, meaning the testing framework was unable to determine whether the test case passed or failed. A test case can be unresolved for several reasons.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:cdd_buttons.png|center]]&lt;br /&gt;
&lt;br /&gt;
=== Debug Test Case === &lt;br /&gt;
Select a test case and press this button to run it in the debugger.&lt;br /&gt;
&lt;br /&gt;
=== Enable/Disable Execution of Test Cases in Background === &lt;br /&gt;
If enabled, all test cases are retested every time you compile. If disabled, no test cases are executed.&lt;br /&gt;
&lt;br /&gt;
=== Enable/Disable automatic extraction of Test Cases === &lt;br /&gt;
If enabled, every time an exception is triggered, a set of test cases that try to reproduce this exception is extracted. If disabled, no test cases are extracted.&lt;br /&gt;
&lt;br /&gt;
=== Clean up/Delete === &lt;br /&gt;
You can use the “Clean Up/Delete” button in two different ways. By simply pressing it you delete all unresolved test cases. By pick-and-dropping a test case onto the “Clean Up/Delete” button (right-click the test case, move the mouse to the button, and right-click again) you can delete that test case. By the way, test cases are just regular classes, so you can also use all existing EiffelStudio tools that apply to classes.&lt;br /&gt;
&lt;br /&gt;
Update: To remove duplicate test cases (until the next CDD update), please use the command-line tool from the [http://clean-cdd.origo.ethz.ch/ clean-cdd project].&lt;br /&gt;
&lt;br /&gt;
=== Create new manual test class ===&lt;br /&gt;
Press this button to create an empty test class. You can then edit the class to add manually written test cases. This is what a manually written test case can look like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
class TEST_BANK_ACCOUNT&lt;br /&gt;
inherit CDD_TEST_CASE&lt;br /&gt;
feature&lt;br /&gt;
  test_deposit&lt;br /&gt;
    local&lt;br /&gt;
      ba: BANK_ACCOUNT&lt;br /&gt;
    do&lt;br /&gt;
      create ba.make_with_balance (0)&lt;br /&gt;
      ba.deposit (100)&lt;br /&gt;
      check&lt;br /&gt;
         money_deposited: ba.balance = 100&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Redefine routine `set_up' or `tear_down' if you want something to be executed before or after, respectively, every test routine of a class.&lt;br /&gt;
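A minimal sketch of such a redefinition (it reuses the hypothetical BANK_ACCOUNT class from the example above; the exact `redefine' clause depends on how `set_up' is declared in CDD_TEST_CASE):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
class TEST_BANK_ACCOUNT&lt;br /&gt;
inherit CDD_TEST_CASE&lt;br /&gt;
  redefine set_up end&lt;br /&gt;
feature&lt;br /&gt;
  ba: BANK_ACCOUNT&lt;br /&gt;
&lt;br /&gt;
  set_up&lt;br /&gt;
      -- Executed before every test routine: start from a fresh account.&lt;br /&gt;
    do&lt;br /&gt;
      create ba.make_with_balance (0)&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
  test_deposit&lt;br /&gt;
    do&lt;br /&gt;
      ba.deposit (100)&lt;br /&gt;
      check money_deposited: ba.balance = 100 end&lt;br /&gt;
    end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;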
&lt;br /&gt;
[[Image:cdd_buttons_2.png|center]]&lt;br /&gt;
&lt;br /&gt;
=== Search for tags ===&lt;br /&gt;
Enter keywords to search for particular test cases. Some tags, such as the name of the test case, are set for you automatically. You can also easily add your own tags by adding an indexing item &amp;quot;tag&amp;quot; to your test class or routine:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
indexing&lt;br /&gt;
  tag: &amp;quot;fixme&amp;quot;&lt;br /&gt;
class TEST_BANK_ACCOUNT&lt;br /&gt;
inherit CDD_TEST_CASE&lt;br /&gt;
feature&lt;br /&gt;
   ...&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Tags can be added to all test routines and classes. Whether they are extracted or manually written does not matter.&lt;br /&gt;
&lt;br /&gt;
=== Restrict execution of test cases ===&lt;br /&gt;
&lt;br /&gt;
Once you have many test cases, you will run into situations where you don't want to execute all of them. The restrict button helps you achieve this. As long as the &amp;quot;Restrict&amp;quot; button is pushed, test cases that don't show up in the test case view will not be tested. Execution is restricted to those test cases that do show up.&lt;br /&gt;
&lt;br /&gt;
=== Change test case view ===&lt;br /&gt;
&lt;br /&gt;
Select one of several predefined test case views. For example, you can group test cases by their outcome to quickly see only failing test cases.&lt;br /&gt;
&lt;br /&gt;
== Further Documentation and Common Problems ==&lt;br /&gt;
Please visit [[Using CDD]] for further documentation or look at &lt;br /&gt;
[[CDD Common Problems|Common Problems]] if you run into problems.&lt;br /&gt;
&lt;br /&gt;
== Related Publications ==&lt;br /&gt;
&lt;br /&gt;
* Leitner, A., Ciupa, I., Oriol, M., Meyer, B., Fiva, A., &amp;quot;Contract Driven Development = Test Driven Development - Writing Test Cases&amp;quot;, Proceedings of ESEC/FSE'07: European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering 2007, (Dubrovnik, Croatia), September 2007 [http://se.ethz.ch/people/leitner/publications/cdd_leitner_esec_fse_2007.pdf (pdf)]&lt;br /&gt;
* Sunghun Kim, Shay Artzi, and Michael D. Ernst, &amp;quot;reCrash: Making Crash Reproducible&amp;quot; MIT Computer Science and Artificial Intelligence Laboratory technical report MIT-CSAIL-TR-2007-054, (Cambridge, MA), November 20, 2007. [http://recrash.googlecode.com/files/MIT-CSAIL-TR-2007-054.pdf (pdf)]&lt;br /&gt;
&lt;br /&gt;
== Project Internal Stuff ==&lt;br /&gt;
&lt;br /&gt;
[[CddBranchInternal]]&lt;/div&gt;</summary>
		<author><name>Mogh</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=CddBranch&amp;diff=10997</id>
		<title>CddBranch</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=CddBranch&amp;diff=10997"/>
				<updated>2008-04-28T15:03:20Z</updated>
		
		<summary type="html">&lt;p&gt;Mogh: /* Windows Patch */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Testing]]&lt;br /&gt;
[[Category:EiffelDebugger]]&lt;br /&gt;
[[Category:CDD]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== What is the CDD EiffelStudio? ==&lt;br /&gt;
&lt;br /&gt;
[[Image:cdd_logo.png|center]]&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[http://se.ethz.ch/people/leitner/cdd/video/ '''Play CDD Video''', click here]&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
CDD (short for Contract Driven Development) is a project developed at [http://se.ethz.ch ETH Zurich]. It adds advanced support for unit testing to EiffelStudio. With CDD EiffelStudio you can&lt;br /&gt;
&lt;br /&gt;
* Write test cases&lt;br /&gt;
* Manage test cases (using tags)&lt;br /&gt;
* Run test cases&lt;br /&gt;
* View test outcomes&lt;br /&gt;
* '''Automatically extract test cases'''&lt;br /&gt;
&lt;br /&gt;
If you have questions, feedback, or would like to report a bug, please visit the [http://eiffelstudio.origo.ethz.ch/forum/20 CDD forum]. &lt;br /&gt;
&lt;br /&gt;
CDD EiffelStudio adds the following panel to regular EiffelStudio:&lt;br /&gt;
&lt;br /&gt;
[[Image:cdd_panel.png|center]]&lt;br /&gt;
&lt;br /&gt;
== News ==&lt;br /&gt;
10.April.2008: If installation exits with cryptic error codes, please have a look at the [[CDD Common Problems|Common Problems]] page.&lt;br /&gt;
&lt;br /&gt;
8.April.2008: CDD EiffelStudio final 7 available&lt;br /&gt;
* Important bug fix: EiffelStudio no longer freezes when opening projects with many test cases&lt;br /&gt;
&lt;br /&gt;
2.April.2008: CDD EiffelStudio Final 6 available&lt;br /&gt;
* Less mess: Redundant or duplicate test cases are no longer extracted&lt;br /&gt;
* Smoother upgrade: Cleaning a project will automatically clean the test suite&lt;br /&gt;
* Fixed bug in invariant checking&lt;br /&gt;
* Fixed bug that caused EiffelStudio to freeze in some situations&lt;br /&gt;
* Improved logging&lt;br /&gt;
* Installer ''might'' work with mingw on Windows now. (Not yet tested, reports welcome)&lt;br /&gt;
&lt;br /&gt;
== Download CDD ==&lt;br /&gt;
&lt;br /&gt;
The following packages contain the full EiffelStudio 6.1 plus the CDD extension. You do not need to have EiffelStudio installed already in order to install the packages below. On Windows you must have either the Platform SDK or Visual C++ installed. Do not use EiffelStudio with the gcc/mingw or the .NET backend.&lt;br /&gt;
&lt;br /&gt;
=== Linux ===&lt;br /&gt;
* Full Linux version: http://se.ethz.ch/people/leitner/cdd/Eiffel61_cdd_final_7.tar.bz2&lt;br /&gt;
* Full Linux version (for Debian stable): http://se.ethz.ch/people/leitner/cdd/Eiffel61_cdd_final_7_debian_stable.tar.bz2&lt;br /&gt;
* Installation instructions: http://docs.eiffel.com/eiffelstudio/installation/studio/060_linux.html&lt;br /&gt;
Notes:&lt;br /&gt;
* If you are '''upgrading''' from a previous version, make sure you delete the old version first (rm -rf)&lt;br /&gt;
&lt;br /&gt;
Download the above file and install it just like you would install an EiffelStudio tarball. Afterwards proceed to the section &amp;quot;Using CDD&amp;quot;. Make sure you set/update the environment variables PATH, ISE_EIFFEL, and ISE_PLATFORM according to the installation instructions.&lt;br /&gt;
&lt;br /&gt;
=== Linux Patch ===&lt;br /&gt;
* Patch for CDD EiffelStudio Final 7: Download http://se.ethz.ch/people/leitner/cdd/ec and use it to replace the '$ISE_EIFFEL/studio/spec/linux-x86/bin/ec' from Final 7. This patch fixes a crash that sometimes occurs during the extraction of test cases. You may need to delete the precompiles and the EIFGENs of your project after applying the patch.&lt;br /&gt;
&lt;br /&gt;
=== Windows ===&lt;br /&gt;
&lt;br /&gt;
* Full Windows version (with installer): http://n.ethz.ch/~moris/download/Eiffel61_cdd_final_7-windows.msi&lt;br /&gt;
Notes:&lt;br /&gt;
* Installation is independent of installations of official EiffelStudio 6.1 (neither overwrites nor invalidates nor is influenced by those) &lt;br /&gt;
* If you are '''upgrading''' from a previous version, you first have to uninstall the old version, then delete the existing precompilations (= delete the EIFGENs in the subdirectories of &amp;lt;INSTALL_DIRECTORY&amp;gt;/precomp/spec/windows/), and then install the new version.&lt;br /&gt;
* Which C compiler to install?&lt;br /&gt;
** Use the Microsoft C compiler either from Visual Studio or the Windows SDK:&lt;br /&gt;
*** Visual Studio (up to Visual Studio 2005, but no later, and only the non-express version)&lt;br /&gt;
*** Windows SDK (up to version 6.0, but no later)&lt;br /&gt;
** Do not use the gcc/mingw.&lt;br /&gt;
** Do not use the .NET compiler backend. &lt;br /&gt;
&lt;br /&gt;
Have a look at http://dev.eiffel.com/Installing_Microsoft_C_compiler_6.1_and_older to learn how to install either Visual C++ or the Windows SDK for use with EiffelStudio. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Windows Patch ====&lt;br /&gt;
&lt;br /&gt;
* Patch version for an existing EiffelStudio CDD Edition Final 5 or Final 6 installation (or Final 7, where it fixes a bug that causes a crash under uncommon circumstances): http://n.ethz.ch/~moris/download/Eiffel61_cdd_final_7-windows_patch_01.zip&lt;br /&gt;
* Does NOT work with an official EiffelStudio 6.1 installation (or pre-Final 5 versions of the CDD Edition)! &lt;br /&gt;
* Replace &amp;lt;INSTALL_DIRECTORY&amp;gt;/studio/spec/windows/bin/ec.exe with the ec.exe contained in the archive&lt;br /&gt;
* Delete existing precompilations (= delete EIFGENs in subdirectories of &amp;lt;INSTALL_DIRECTORY&amp;gt;/precomp/spec/windows/)&lt;br /&gt;
* Recompile existing projects from scratch (use the &amp;quot;clean&amp;quot; option in the project load screen)&lt;br /&gt;
&lt;br /&gt;
== Documentation ==&lt;br /&gt;
&lt;br /&gt;
=== Manually Written Test Cases ===&lt;br /&gt;
CDD EiffelStudio allows you to create a new “empty” test case. These test cases are similar to JUnit test cases. A manually written test class name must start with the word “TEST”, and all test routine names have to start with the word “test”. The class also has to inherit from CDD_TEST_CASE.&lt;br /&gt;
&lt;br /&gt;
=== Extracted Test Cases ===&lt;br /&gt;
CDD EiffelStudio automatically extracts test cases whenever you run your program and an exception is triggered. This feature is novel and not yet part of any other testing environment. You will be the first to try it out.&lt;br /&gt;
&lt;br /&gt;
=== Test outcomes ===&lt;br /&gt;
A test case checks whether your program contains a particular bug. A test case can fail, indicating that the bug is present in your program, or pass, indicating that your program does not contain this bug. Sometimes test cases will be unresolved, meaning the testing framework was unable to determine whether the test case passed or failed. A test case can be unresolved for several reasons.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:cdd_buttons.png|center]]&lt;br /&gt;
&lt;br /&gt;
=== Debug Test Case === &lt;br /&gt;
Select a test case and press this button to run it in the debugger.&lt;br /&gt;
&lt;br /&gt;
=== Enable/Disable Execution of Test Cases in Background === &lt;br /&gt;
If enabled, all test cases are retested every time you compile. If disabled, no test cases are executed.&lt;br /&gt;
&lt;br /&gt;
=== Enable/Disable automatic extraction of Test Cases === &lt;br /&gt;
If enabled, every time an exception is triggered, a set of test cases that try to reproduce this exception is extracted. If disabled, no test cases are extracted.&lt;br /&gt;
&lt;br /&gt;
=== Clean up/Delete === &lt;br /&gt;
You can use the “Clean Up/Delete” button in two different ways. By simply pressing it you delete all unresolved test cases. By pick-and-dropping a test case onto the “Clean Up/Delete” button (right-click the test case, move the mouse to the button, and right-click again) you can delete that test case. By the way, test cases are just regular classes, so you can also use all existing EiffelStudio tools that apply to classes.&lt;br /&gt;
&lt;br /&gt;
Update: To remove duplicate test cases (until the next CDD update), please use the command-line tool from the [http://clean-cdd.origo.ethz.ch/ clean-cdd project].&lt;br /&gt;
&lt;br /&gt;
=== Create new manual test class ===&lt;br /&gt;
Press this button to create an empty test class. You can then edit the class to add manually written test cases. This is what a manually written test case can look like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
class TEST_BANK_ACCOUNT&lt;br /&gt;
inherit CDD_TEST_CASE&lt;br /&gt;
feature&lt;br /&gt;
  test_deposit&lt;br /&gt;
    local&lt;br /&gt;
      ba: BANK_ACCOUNT&lt;br /&gt;
    do&lt;br /&gt;
      create ba.make_with_balance (0)&lt;br /&gt;
      ba.deposit (100)&lt;br /&gt;
      check&lt;br /&gt;
         money_deposited: ba.balance = 100&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Redefine routine `set_up' or `tear_down' if you want something to be executed before or after, respectively, every test routine of a class.&lt;br /&gt;
&lt;br /&gt;
[[Image:cdd_buttons_2.png|center]]&lt;br /&gt;
&lt;br /&gt;
=== Search for tags ===&lt;br /&gt;
Enter keywords to search for particular test cases. Some tags, such as the name of the test case, are set for you automatically. You can also easily add your own tags by adding an indexing item &amp;quot;tag&amp;quot; to your test class or routine:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
indexing&lt;br /&gt;
  tag: &amp;quot;fixme&amp;quot;&lt;br /&gt;
class TEST_BANK_ACCOUNT&lt;br /&gt;
inherit CDD_TEST_CASE&lt;br /&gt;
feature&lt;br /&gt;
   ...&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Tags can be added to all test routines and classes. Whether they are extracted or manually written does not matter.&lt;br /&gt;
&lt;br /&gt;
=== Restrict execution of test cases ===&lt;br /&gt;
&lt;br /&gt;
Once you have many test cases, you will run into situations where you don't want to execute all of them. The restrict button helps you achieve this. As long as the &amp;quot;Restrict&amp;quot; button is pushed, test cases that don't show up in the test case view will not be tested. Execution is restricted to those test cases that do show up.&lt;br /&gt;
&lt;br /&gt;
=== Change test case view ===&lt;br /&gt;
&lt;br /&gt;
Select one of several predefined test case views. For example, you can group test cases by their outcome to quickly see only failing test cases.&lt;br /&gt;
&lt;br /&gt;
== Further Documentation and Common Problems ==&lt;br /&gt;
Please visit [[Using CDD]] for further documentation or look at &lt;br /&gt;
[[CDD Common Problems|Common Problems]] if you run into problems.&lt;br /&gt;
&lt;br /&gt;
== Related Publications ==&lt;br /&gt;
&lt;br /&gt;
* Leitner, A., Ciupa, I., Oriol, M., Meyer, B., Fiva, A., &amp;quot;Contract Driven Development = Test Driven Development - Writing Test Cases&amp;quot;, Proceedings of ESEC/FSE'07: European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering 2007, (Dubrovnik, Croatia), September 2007 [http://se.ethz.ch/people/leitner/publications/cdd_leitner_esec_fse_2007.pdf (pdf)]&lt;br /&gt;
* Sunghun Kim, Shay Artzi, and Michael D. Ernst, &amp;quot;reCrash: Making Crash Reproducible&amp;quot; MIT Computer Science and Artificial Intelligence Laboratory technical report MIT-CSAIL-TR-2007-054, (Cambridge, MA), November 20, 2007. [http://recrash.googlecode.com/files/MIT-CSAIL-TR-2007-054.pdf (pdf)]&lt;br /&gt;
&lt;br /&gt;
== Project Internal Stuff ==&lt;br /&gt;
&lt;br /&gt;
[[CddBranchInternal]]&lt;/div&gt;</summary>
		<author><name>Mogh</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=CDD_Common_Problems&amp;diff=10950</id>
		<title>CDD Common Problems</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=CDD_Common_Problems&amp;diff=10950"/>
				<updated>2008-04-15T09:21:21Z</updated>
		
		<summary type="html">&lt;p&gt;Mogh: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:CDD]]&lt;br /&gt;
&lt;br /&gt;
Problem: ''''My system suddenly has the wrong root class / doesn't compile anymore because it doesn't find the strange wrong root class''''&lt;br /&gt;
* This issue arises if EiffelStudio crashes or is killed during the (foreground) debugging of a test case (via the corresponding button in the testing tool). On Windows, crashes should not happen, but there is an issue on Linux with the foreground debugging of test cases that we haven't been able to solve: if you are debugging a test case that passes, OR you are debugging a test case that fails but press &amp;quot;Continue&amp;quot; after the exception occurred (instead of &amp;quot;Stop&amp;quot;, to end the execution immediately), EiffelStudio will crash.&lt;br /&gt;
* Solution: You have to manually set the root class of the system back to the correct class (via Project Settings, or by directly editing the .ecf file)&lt;br /&gt;
* Prevention: On Linux, use test case debugging ONLY for FAILING test cases, and after the exception has occurred, STOP the execution (with the &amp;quot;Stop&amp;quot; button)!&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Problem: ''''My test cases time out''''&lt;br /&gt;
* CDD automatically aborts test cases that run longer than a given period of time. The default value (a few seconds) can be changed by setting the environment variable ''''CDD_TESTER_TIMEOUT''''. On Linux, ''''&amp;quot;export CDD_TESTER_TIMEOUT=120&amp;quot;'''' sets the timeout to 120 seconds. On Windows, the same thing is accomplished with ''''&amp;quot;set CDD_TESTER_TIMEOUT=120&amp;quot;''''.&lt;br /&gt;
&lt;br /&gt;
Problem: '''Installing EiffelStudio reports error codes like '2908' and '2909' '''&lt;br /&gt;
* This seems to be a rare problem. The following report might help:&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
&amp;lt;&amp;lt;&lt;br /&gt;
I finally found the solution. Have a look at the following URLs:&lt;br /&gt;
- http://tinyurl.com/2rj93p&lt;br /&gt;
- http://tinyurl.com/3ey6fe&lt;br /&gt;
 &lt;br /&gt;
I believe you should have the same issue. In my case, I think it is because&lt;br /&gt;
I had just installed the .NET 3.0 runtime, which is part of the Windows Update.&lt;br /&gt;
 &lt;br /&gt;
If you rename the `Components' registry key, assuming that like me you only&lt;br /&gt;
have one child key, the installation should progress fine.&lt;br /&gt;
&amp;gt;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This often happens when uninstalling and then reinstalling EiffelStudio&lt;br /&gt;
although it is clearly not a bug in our installer.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Problem: ''''I cannot see the testing window.''''&lt;br /&gt;
* Make it visible via ''''View -&amp;gt; Tools -&amp;gt; CDD Output''''&lt;br /&gt;
&lt;br /&gt;
Problem: ''''Is there some kind of CDD log window?''''&lt;br /&gt;
* Yes, there is; it is called ''''CDD Output''''. If you don't see this window, you can make it visible via ''''&amp;quot;View -&amp;gt; Tools -&amp;gt; CDD Output&amp;quot;''''&lt;br /&gt;
&lt;br /&gt;
Problem: ''''I am getting a duplicate class error'''' or ''''I am trying to use CDD on an existing project but it doesn't work''''&lt;br /&gt;
* If you have a look at the ''''CDD Output'''' window you will probably find an error message like the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Error code: VSCN&lt;br /&gt;
Configuration error: cluster has two classes with the same name.&lt;br /&gt;
What to do: if both classes are needed, change name of one of them.&lt;br /&gt;
&lt;br /&gt;
Cluster name: erl_g_tests&lt;br /&gt;
First class: CDD_INTERPRETER&lt;br /&gt;
First file: &amp;quot;/home/aleitner/src/erl_g/src/erl_g/cdd_tests/erl_g/cdd_interpreter.e&amp;quot;&lt;br /&gt;
Second class: CDD_INTERPRETER&lt;br /&gt;
Second file: &amp;quot;/home/aleitner/src/erl_g/library/cdd_tests/erl_g_library/cdd_interpreter.e&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To fix this go to the project settings, to your active target, '''Groups -&amp;gt; Clusters''' and then select your root cluster. Open the '''Advanced''' tree-item and click on the value part of the '''Exclude Rules'''. Now add rule ''''/cdd_tests$'''' and click ''''OK''''. &lt;br /&gt;
&lt;br /&gt;
Alternatively, you can open your ecf file with a text editor. You will find a stanza that looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;file_rule&amp;gt;&lt;br /&gt;
   &amp;lt;exclude&amp;gt;/.svn$&amp;lt;/exclude&amp;gt;&lt;br /&gt;
   &amp;lt;exclude&amp;gt;/EIFGENs$&amp;lt;/exclude&amp;gt;&lt;br /&gt;
&amp;lt;/file_rule&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Edit it to look like the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;file_rule&amp;gt;&lt;br /&gt;
   &amp;lt;exclude&amp;gt;/.svn$&amp;lt;/exclude&amp;gt;&lt;br /&gt;
   &amp;lt;exclude&amp;gt;/EIFGENs$&amp;lt;/exclude&amp;gt;&lt;br /&gt;
   &amp;lt;exclude&amp;gt;/cdd_tests$&amp;lt;/exclude&amp;gt;&lt;br /&gt;
&amp;lt;/file_rule&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Problem: ''''I have written or extracted a test case. How can I execute it?''''&lt;br /&gt;
* All tests are automatically executed in the background right after compiling. You can enable or disable background testing via the ''''Enable/Disable automatic background execution of tests'''' button.&lt;br /&gt;
&lt;br /&gt;
[[Image:enable_testing.png|center]]&lt;br /&gt;
&lt;br /&gt;
Problem: ''''The environment is unstable when using the .Net backend.''''&lt;br /&gt;
Solution: CDD currently does not support the .NET backend. Please use the C backend.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Problem: ''''How to disable the CDD extension in EiffelStudio''''&lt;br /&gt;
Solution: Make sure the push-buttons for Extraction and Execution are not pressed. The buttons are shown in the picture below.&lt;br /&gt;
[[Image:enable_exec_and_extract.png|center]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Problem: ''''My extracted test cases get overwritten in the SVN Repository, because my colleagues commit their extracted test cases which have the same name.''''&lt;br /&gt;
* Solution: Define a new environment variable &amp;quot;CDD_TESTER_ID&amp;quot;. The value of the variable will be used as a suffix for all extracted test class names. So if all members of the project define a unique &amp;quot;CDD_TESTER_ID&amp;quot;, the individually extracted test cases can be committed to the SVN repository without collisions.&lt;br /&gt;
* ATTENTION: The value of CDD_TESTER_ID has to form a valid Eiffel class name, i.e. it has to consist of (uppercase) letters and '_' only!&lt;br /&gt;
* EXAMPLE: You define CDD_TESTER_ID and set it to the value &amp;quot;MARKUS&amp;quot;. Each time a new test case gets extracted for a class ROOT_CLASS, the generated class name for the test case will be CDD_TEST_ROOT_CLASS_xyz_MARKUS. Likewise, it will get stored in a file called cdd_test_root_class_xyz_markus.e&lt;/div&gt;</summary>
		<author><name>Mogh</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=CDD_Common_Problems&amp;diff=10929</id>
		<title>CDD Common Problems</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=CDD_Common_Problems&amp;diff=10929"/>
				<updated>2008-04-09T08:48:02Z</updated>
		
		<summary type="html">&lt;p&gt;Mogh: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:CDD]]&lt;br /&gt;
&lt;br /&gt;
Problem: ''''My test cases time out''''&lt;br /&gt;
* CDD automatically aborts test cases that run longer than a given period of time. The default value (a few seconds) can be changed by setting the environment variable ''''CDD_TESTER_TIMEOUT''''. On Linux, ''''&amp;quot;export CDD_TESTER_TIMEOUT=120&amp;quot;'''' sets the timeout to 120 seconds. On Windows, the same thing is accomplished with ''''&amp;quot;set CDD_TESTER_TIMEOUT=120&amp;quot;''''.&lt;br /&gt;
&lt;br /&gt;
Problem: ''''I cannot see the testing window.''''&lt;br /&gt;
* Make it visible via ''''View -&amp;gt; Tools -&amp;gt; CDD Output''''&lt;br /&gt;
&lt;br /&gt;
Problem: ''''Is there some kind of CDD log window?''''&lt;br /&gt;
* Yes, there is; it is called ''''CDD Output''''. If you don't see this window, you can make it visible via ''''&amp;quot;View -&amp;gt; Tools -&amp;gt; CDD Output&amp;quot;''''&lt;br /&gt;
&lt;br /&gt;
Problem: ''''I am getting a duplicate class error'''' or ''''I am trying to use CDD on an existing project but it doesn't work''''&lt;br /&gt;
* If you have a look at the ''''CDD Output'''' window, you will probably find this error message:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Error code: VSCN&lt;br /&gt;
Configuration error: cluster has two classes with the same name.&lt;br /&gt;
What to do: if both classes are needed, change name of one of them.&lt;br /&gt;
&lt;br /&gt;
Cluster name: erl_g_tests&lt;br /&gt;
First class: CDD_INTERPRETER&lt;br /&gt;
First file: &amp;quot;/home/aleitner/src/erl_g/src/erl_g/cdd_tests/erl_g/cdd_interpreter.e&amp;quot;&lt;br /&gt;
Second class: CDD_INTERPRETER&lt;br /&gt;
Second file: &amp;quot;/home/aleitner/src/erl_g/library/cdd_tests/erl_g_library/cdd_interpreter.e&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To fix this go to the project settings, to your active target, '''Groups -&amp;gt; Clusters''' and then select your root cluster. Open the '''Advanced''' tree-item and click on the value part of the '''Exclude Rules'''. Now add rule ''''/cdd_tests$'''' and click ''''OK''''. &lt;br /&gt;
&lt;br /&gt;
Alternatively, you can open your ecf file with a text editor. You will find a stanza that looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;file_rule&amp;gt;&lt;br /&gt;
   &amp;lt;exclude&amp;gt;/.svn$&amp;lt;/exclude&amp;gt;&lt;br /&gt;
   &amp;lt;exclude&amp;gt;/EIFGENs$&amp;lt;/exclude&amp;gt;&lt;br /&gt;
&amp;lt;/file_rule&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Edit it to look like the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;file_rule&amp;gt;&lt;br /&gt;
   &amp;lt;exclude&amp;gt;/.svn$&amp;lt;/exclude&amp;gt;&lt;br /&gt;
   &amp;lt;exclude&amp;gt;/EIFGENs$&amp;lt;/exclude&amp;gt;&lt;br /&gt;
   &amp;lt;exclude&amp;gt;/cdd_tests$&amp;lt;/exclude&amp;gt;&lt;br /&gt;
&amp;lt;/file_rule&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Problem: ''''I have written or extracted a test case. How can I execute it?''''&lt;br /&gt;
* All tests are automatically executed in the background right after compilation. You can enable or disable background testing via the ''''Enable/Disable automatic background execution of tests'''' button.&lt;br /&gt;
&lt;br /&gt;
[[Image:enable_testing.png|center]]&lt;br /&gt;
&lt;br /&gt;
Problem: ''''The environment is unstable when using the .Net backend.''''&lt;br /&gt;
Solution: CDD currently does not support the .Net backend. Please use the C backend.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Problem: ''''How to disable the CDD extension in EiffelStudio''''&lt;br /&gt;
Solution: Make sure the push-buttons for Extraction and Execution are not pressed. The buttons are shown in the picture below.&lt;br /&gt;
[[Image:enable_exec_and_extract.png|center]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Problem: ''''My extracted test cases get overwritten in the SVN repository, because my colleagues commit extracted test cases with the same names.''''&lt;br /&gt;
* Solution: Define a new environment variable &amp;quot;CDD_TESTER_ID&amp;quot;. Its value will be used as a suffix for all extracted test class names. If every member of the project defines a unique &amp;quot;CDD_TESTER_ID&amp;quot;, the individually extracted test cases can be committed to the SVN repository without collisions.&lt;br /&gt;
* ATTENTION: The value of CDD_TESTER_ID must be valid as part of an Eiffel class name, i.e. it must consist of (uppercase) letters and '_' only!&lt;br /&gt;
* EXAMPLE: You define CDD_TESTER_ID and set its value to &amp;quot;MARKUS&amp;quot;. Each time a new test case is extracted for a class ROOT_CLASS, the generated class name for the test case will be CDD_TEST_ROOT_CLASS_xyz_MARKUS. It will likewise be stored in a file called cdd_test_root_class_xyz_markus.e&lt;/div&gt;</summary>
		<author><name>Mogh</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=CDD_Common_Problems&amp;diff=10928</id>
		<title>CDD Common Problems</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=CDD_Common_Problems&amp;diff=10928"/>
				<updated>2008-04-09T08:46:14Z</updated>
		
		<summary type="html">&lt;p&gt;Mogh: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:CDD]]&lt;br /&gt;
&lt;br /&gt;
Problem: ''''My test cases time out''''&lt;br /&gt;
* CDD automatically aborts test cases that run longer than a given period of time. The default value (a few seconds) can be changed by setting the environment variable ''''CDD_TESTER_TIMEOUT''''. On Linux, ''''&amp;quot;export CDD_TESTER_TIMEOUT=120&amp;quot;'''' sets the timeout to 120 seconds. On Windows, the same thing is accomplished with ''''&amp;quot;set CDD_TESTER_TIMEOUT=120&amp;quot;''''.&lt;br /&gt;
&lt;br /&gt;
Problem: ''''I cannot see the testing window.''''&lt;br /&gt;
* Make it visible via ''''View -&amp;gt; Tools -&amp;gt; CDD Output''''&lt;br /&gt;
&lt;br /&gt;
Problem: ''''Is there some kind of CDD log window?''''&lt;br /&gt;
* Yes, there is; it is called ''''CDD Output''''. If you don't see this window, you can make it visible via ''''&amp;quot;View -&amp;gt; Tools -&amp;gt; CDD Output&amp;quot;''''&lt;br /&gt;
&lt;br /&gt;
Problem: ''''I am getting a duplicate class error'''' or ''''I am trying to use CDD on an existing project but it doesn't work''''&lt;br /&gt;
* If you have a look at the ''''CDD Output'''' window, you will probably find this error message:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Error code: VSCN&lt;br /&gt;
Configuration error: cluster has two classes with the same name.&lt;br /&gt;
What to do: if both classes are needed, change name of one of them.&lt;br /&gt;
&lt;br /&gt;
Cluster name: erl_g_tests&lt;br /&gt;
First class: CDD_INTERPRETER&lt;br /&gt;
First file: &amp;quot;/home/aleitner/src/erl_g/src/erl_g/cdd_tests/erl_g/cdd_interpreter.e&amp;quot;&lt;br /&gt;
Second class: CDD_INTERPRETER&lt;br /&gt;
Second file: &amp;quot;/home/aleitner/src/erl_g/library/cdd_tests/erl_g_library/cdd_interpreter.e&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To fix this go to the project settings, to your active target, '''Groups -&amp;gt; Clusters''' and then select your root cluster. Open the '''Advanced''' tree-item and click on the value part of the '''Exclude Rules'''. Now add rule ''''/cdd_tests$'''' and click ''''OK''''. &lt;br /&gt;
&lt;br /&gt;
Alternatively, you can open your ecf file with a text editor. You will find a stanza that looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;file_rule&amp;gt;&lt;br /&gt;
   &amp;lt;exclude&amp;gt;/.svn$&amp;lt;/exclude&amp;gt;&lt;br /&gt;
   &amp;lt;exclude&amp;gt;/EIFGENs$&amp;lt;/exclude&amp;gt;&lt;br /&gt;
&amp;lt;/file_rule&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Edit it to look like the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;file_rule&amp;gt;&lt;br /&gt;
   &amp;lt;exclude&amp;gt;/.svn$&amp;lt;/exclude&amp;gt;&lt;br /&gt;
   &amp;lt;exclude&amp;gt;/EIFGENs$&amp;lt;/exclude&amp;gt;&lt;br /&gt;
   &amp;lt;exclude&amp;gt;/cdd_tests$&amp;lt;/exclude&amp;gt;&lt;br /&gt;
&amp;lt;/file_rule&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Problem: ''''I have written or extracted a test case. How can I execute it?''''&lt;br /&gt;
* All tests are automatically executed in the background right after compilation. You can enable or disable background testing via the ''''Enable/Disable automatic background execution of tests'''' button.&lt;br /&gt;
&lt;br /&gt;
[[Image:enable_testing.png|center]]&lt;br /&gt;
&lt;br /&gt;
Problem: ''''The environment is unstable when using the .Net backend.''''&lt;br /&gt;
Solution: CDD currently does not support the .Net backend. Please use the C backend.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Problem: ''''How to disable the CDD extension in EiffelStudio''''&lt;br /&gt;
Solution: Make sure the push-buttons for Extraction and Execution are not pressed. The buttons are shown in the picture below.&lt;br /&gt;
[[Image:enable_exec_and_extract.png|center]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Problem: ''''My extracted test cases get overwritten in the SVN repository, because my colleagues commit extracted test cases with the same names.''''&lt;br /&gt;
Solution: Define a new environment variable &amp;quot;CDD_TESTER_ID&amp;quot;. Its value will be used as a suffix for all extracted test class names. If every member of the project defines a unique &amp;quot;CDD_TESTER_ID&amp;quot;, the individually extracted test cases can be committed to the SVN repository without collisions.&lt;br /&gt;
ATTENTION: The value of CDD_TESTER_ID must be valid as part of an Eiffel class name, i.e. it must consist of (uppercase) letters and '_' only!&lt;br /&gt;
EXAMPLE: You define CDD_TESTER_ID and set its value to &amp;quot;MARKUS&amp;quot;. Each time a new test case is extracted for a class ROOT_CLASS, the generated class name for the test case will be CDD_TEST_ROOT_CLASS_xyz_MARKUS. It will likewise be stored in a file called cdd_test_root_class_xyz_markus.e&lt;/div&gt;</summary>
		<author><name>Mogh</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=CDD_Common_Problems&amp;diff=10927</id>
		<title>CDD Common Problems</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=CDD_Common_Problems&amp;diff=10927"/>
				<updated>2008-04-09T08:39:27Z</updated>
		
		<summary type="html">&lt;p&gt;Mogh: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:CDD]]&lt;br /&gt;
&lt;br /&gt;
Problem: ''''My test cases time out''''&lt;br /&gt;
* CDD automatically aborts test cases that run longer than a given period of time. The default value (a few seconds) can be changed by setting the environment variable ''''CDD_TESTER_TIMEOUT''''. On Linux, ''''&amp;quot;export CDD_TESTER_TIMEOUT=120&amp;quot;'''' sets the timeout to 120 seconds. On Windows, the same thing is accomplished with ''''&amp;quot;set CDD_TESTER_TIMEOUT=120&amp;quot;''''.&lt;br /&gt;
&lt;br /&gt;
Problem: ''''I cannot see the testing window.''''&lt;br /&gt;
* Make it visible via ''''View -&amp;gt; Tools -&amp;gt; CDD Output''''&lt;br /&gt;
&lt;br /&gt;
Problem: ''''Is there some kind of CDD log window?''''&lt;br /&gt;
* Yes, there is; it is called ''''CDD Output''''. If you don't see this window, you can make it visible via ''''&amp;quot;View -&amp;gt; Tools -&amp;gt; CDD Output&amp;quot;''''&lt;br /&gt;
&lt;br /&gt;
Problem: ''''I am getting a duplicate class error'''' or ''''I am trying to use CDD on an existing project but it doesn't work''''&lt;br /&gt;
* If you have a look at the ''''CDD Output'''' window, you will probably find this error message:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Error code: VSCN&lt;br /&gt;
Configuration error: cluster has two classes with the same name.&lt;br /&gt;
What to do: if both classes are needed, change name of one of them.&lt;br /&gt;
&lt;br /&gt;
Cluster name: erl_g_tests&lt;br /&gt;
First class: CDD_INTERPRETER&lt;br /&gt;
First file: &amp;quot;/home/aleitner/src/erl_g/src/erl_g/cdd_tests/erl_g/cdd_interpreter.e&amp;quot;&lt;br /&gt;
Second class: CDD_INTERPRETER&lt;br /&gt;
Second file: &amp;quot;/home/aleitner/src/erl_g/library/cdd_tests/erl_g_library/cdd_interpreter.e&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To fix this go to the project settings, to your active target, '''Groups -&amp;gt; Clusters''' and then select your root cluster. Open the '''Advanced''' tree-item and click on the value part of the '''Exclude Rules'''. Now add rule ''''/cdd_tests$'''' and click ''''OK''''. &lt;br /&gt;
&lt;br /&gt;
Alternatively, you can open your ecf file with a text editor. You will find a stanza that looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;file_rule&amp;gt;&lt;br /&gt;
   &amp;lt;exclude&amp;gt;/.svn$&amp;lt;/exclude&amp;gt;&lt;br /&gt;
   &amp;lt;exclude&amp;gt;/EIFGENs$&amp;lt;/exclude&amp;gt;&lt;br /&gt;
&amp;lt;/file_rule&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Edit it to look like the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;file_rule&amp;gt;&lt;br /&gt;
   &amp;lt;exclude&amp;gt;/.svn$&amp;lt;/exclude&amp;gt;&lt;br /&gt;
   &amp;lt;exclude&amp;gt;/EIFGENs$&amp;lt;/exclude&amp;gt;&lt;br /&gt;
   &amp;lt;exclude&amp;gt;/cdd_tests$&amp;lt;/exclude&amp;gt;&lt;br /&gt;
&amp;lt;/file_rule&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Problem: ''''I have written or extracted a test case. How can I execute it?''''&lt;br /&gt;
* All tests are automatically executed in the background right after compilation. You can enable or disable background testing via the ''''Enable/Disable automatic background execution of tests'''' button.&lt;br /&gt;
&lt;br /&gt;
[[Image:enable_testing.png|center]]&lt;br /&gt;
&lt;br /&gt;
Problem: ''''The environment is unstable when using the .Net backend.''''&lt;br /&gt;
Solution: CDD currently does not support the .Net backend. Please use the C backend.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Problem: ''''How to disable the CDD extension in EiffelStudio''''&lt;br /&gt;
Solution: Make sure the push-buttons for Extraction and Execution are not pressed. The buttons are shown in the picture below.&lt;br /&gt;
[[Image:enable_exec_and_extract.png|center]]&lt;br /&gt;
&lt;br /&gt;
Problem: ''''My extracted test cases get overwritten in the SVN repository, because my colleagues commit extracted test cases with the same names.''''&lt;br /&gt;
Solution: Define a new environment variable &amp;quot;CDD_TESTER_ID&amp;quot;. Its value will be used as a suffix for all extracted test class names. If every member of the project defines a unique &amp;quot;CDD_TESTER_ID&amp;quot;, the individually extracted test cases can be committed to the SVN repository without collisions.&lt;/div&gt;</summary>
		<author><name>Mogh</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=CddBranch&amp;diff=10924</id>
		<title>CddBranch</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=CddBranch&amp;diff=10924"/>
				<updated>2008-04-08T12:11:10Z</updated>
		
		<summary type="html">&lt;p&gt;Mogh: /* Windows */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Testing]]&lt;br /&gt;
[[Category:EiffelDebugger]]&lt;br /&gt;
[[Category:CDD]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== What is the CDD EiffelStudio? ==&lt;br /&gt;
&lt;br /&gt;
[[Image:cdd_logo.png|center]]&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[http://se.ethz.ch/people/leitner/cdd/video/ '''Play CDD Video''', click here]&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
CDD (short for Contract Driven Development) is a project developed at [http://se.ethz.ch ETH Zurich]. It adds advanced support for unit testing to EiffelStudio. With CDD EiffelStudio you can&lt;br /&gt;
&lt;br /&gt;
* Write test cases&lt;br /&gt;
* Manage test cases (using tags)&lt;br /&gt;
* Run test cases&lt;br /&gt;
* View test outcomes&lt;br /&gt;
* '''Automatically extract test cases'''&lt;br /&gt;
&lt;br /&gt;
If you have questions, feedback, or would like to report a bug please visit the [http://eiffelstudio.origo.ethz.ch/forum/20 CDD forum]. &lt;br /&gt;
&lt;br /&gt;
CDD EiffelStudio adds the following panel to regular EiffelStudio:&lt;br /&gt;
&lt;br /&gt;
[[Image:cdd_panel.png|center]]&lt;br /&gt;
&lt;br /&gt;
== News ==&lt;br /&gt;
&lt;br /&gt;
2 April 2008: CDD EiffelStudio Final 6 available&lt;br /&gt;
* Less mess: Redundant or duplicate test cases are no longer extracted&lt;br /&gt;
* Smoother upgrade: Cleaning a project will automatically clean the test suite&lt;br /&gt;
* Fixed bug in invariant checking&lt;br /&gt;
* Fixed bug that caused EiffelStudio to freeze in some situations&lt;br /&gt;
* Improved logging&lt;br /&gt;
* Installer ''might'' work with mingw on Windows now. (Not yet tested, reports welcome)&lt;br /&gt;
&lt;br /&gt;
== Download CDD ==&lt;br /&gt;
&lt;br /&gt;
The following packages contain the full EiffelStudio 6.1 plus the CDD extension. You do not need to have EiffelStudio installed already in order to install the packages below. On Windows, you must have either the Platform SDK or Visual C++ installed. Do not use EiffelStudio with the gcc/mingw or the .Net backend.&lt;br /&gt;
&lt;br /&gt;
=== Linux ===&lt;br /&gt;
* Full Linux version: http://se.ethz.ch/people/leitner/cdd/Eiffel61_cdd_final_7.tar.bz2&lt;br /&gt;
* Installation instructions: http://docs.eiffel.com/eiffelstudio/installation/studio/060_linux.html&lt;br /&gt;
Notes:&lt;br /&gt;
* If you are '''upgrading''' from a previous version, make sure you delete the old version first (rm -rf)&lt;br /&gt;
&lt;br /&gt;
Download the above file and install it just like you would install an EiffelStudio tarball. Afterwards, proceed to section &amp;quot;Using CDD&amp;quot;. Make sure you set/update the environment variables PATH, ISE_EIFFEL, and ISE_PLATFORM according to the installation instructions.&lt;br /&gt;
&lt;br /&gt;
=== Windows ===&lt;br /&gt;
&lt;br /&gt;
* Full Windows version (with installer): http://n.ethz.ch/~moris/download/Eiffel61_cdd_final_7-windows.msi&lt;br /&gt;
Notes:&lt;br /&gt;
* Installation is independent of any official EiffelStudio 6.1 installation (it neither overwrites nor invalidates those installations, nor is it influenced by them) &lt;br /&gt;
* If you are '''upgrading''' from a previous version, you first have to uninstall the old version then delete the existing precompilations (= delete EIFGENs in subdirectories of &amp;lt;INSTALL_DIRECTORY&amp;gt;/precomp/spec/windows/) and then install the new version.&lt;br /&gt;
* Which C compiler to install?&lt;br /&gt;
** Use the Microsoft C compiler either from Visual Studio or the Windows SDK:&lt;br /&gt;
*** Visual Studio (up to Visual Studio 2005, but no later, and only the non-express version)&lt;br /&gt;
*** Windows SDK (up to version 6.0, but no later)&lt;br /&gt;
** Do not use the gcc/mingw.&lt;br /&gt;
** Do not use the .NET compiler backend. &lt;br /&gt;
&lt;br /&gt;
Have a look at http://dev.eiffel.com/Installing_Microsoft_C_compiler_6.1_and_older to learn how to install either Visual C++ or the Windows SDK for use with EiffelStudio. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Windows Patch ====&lt;br /&gt;
&lt;br /&gt;
* Patch version for existing EiffelStudio CDD Edition Final 5 or Final 6 installation (does NOT work for official EiffelStudio 6.1 installation!): http://n.ethz.ch/~moris/download/Eiffel61_cdd_final_7-windows.zip&lt;br /&gt;
* Replace &amp;lt;INSTALL_DIRECTORY&amp;gt;/studio/spec/windows/bin/ec.exe with ec.exe contained in archive&lt;br /&gt;
* Delete existing precompilations (= delete EIFGENs in subdirectories of &amp;lt;INSTALL_DIRECTORY&amp;gt;/precomp/spec/windows/)&lt;br /&gt;
* Recompile existing projects from scratch (use the &amp;quot;clean&amp;quot; option in the project load screen)&lt;br /&gt;
&lt;br /&gt;
== Documentation ==&lt;br /&gt;
&lt;br /&gt;
=== Manually Written Test Cases ===&lt;br /&gt;
CDD EiffelStudio allows you to create a new “empty” test case. These test cases are similar to JUnit test cases. The name of a manually written test class must start with the word “TEST”, and all test routine names must start with the word “test”. The class must also inherit from CDD_TEST_CASE.&lt;br /&gt;
&lt;br /&gt;
=== Extracted Test Cases ===&lt;br /&gt;
CDD EiffelStudio automatically extracts test cases whenever you run your program and an exception is triggered. This feature is novel and not yet part of any other testing environment. You will be the first to try it out.&lt;br /&gt;
&lt;br /&gt;
=== Test outcomes ===&lt;br /&gt;
A test case checks whether your program contains a particular bug. A test case can fail, indicating that the bug is present in your program, or pass, indicating that your program does not contain this bug. Sometimes test cases will be unresolved, in which case the testing framework was unable to determine whether the test case passed or failed. A test case can be unresolved for several reasons.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:cdd_buttons.png|center]]&lt;br /&gt;
&lt;br /&gt;
=== Debug Test Case === &lt;br /&gt;
Select a test case and press this button to run a test case in the debugger.&lt;br /&gt;
&lt;br /&gt;
=== Enable/Disable Execution of Test Cases in Background === &lt;br /&gt;
If enabled, all test cases are retested every time you compile. If disabled, no test cases are executed.&lt;br /&gt;
&lt;br /&gt;
=== Enable/Disable automatic extraction of Test Cases === &lt;br /&gt;
If enabled, every time an exception is triggered, a set of test cases that try to reproduce this exception is extracted. If disabled, no test cases are extracted.&lt;br /&gt;
&lt;br /&gt;
=== Clean up/Delete === &lt;br /&gt;
You can use the “Clean Up/Delete” button in two different ways. Simply pressing it deletes all unresolved test cases. By pick-and-dropping a test case onto the “Clean Up/Delete” button (right-click on the test case, move the mouse to the button, and right-click again) you can delete that test case. By the way, test cases are just regular classes, so you can also use all existing EiffelStudio tools that apply to classes.&lt;br /&gt;
&lt;br /&gt;
Update: To remove duplicate test cases (until the next cdd update), please use the command-line tool from the [http://clean-cdd.origo.ethz.ch/ clean-cdd project].&lt;br /&gt;
&lt;br /&gt;
=== Create new manual test class ===&lt;br /&gt;
Press this button to create an empty test class. You can then edit the class to add manually written test cases. This is what a manually written test case can look like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
class TEST_BANK_ACCOUNT&lt;br /&gt;
inherit CDD_TEST_CASE&lt;br /&gt;
feature&lt;br /&gt;
  test_deposit&lt;br /&gt;
    local&lt;br /&gt;
      ba: BANK_ACCOUNT&lt;br /&gt;
    do&lt;br /&gt;
      create ba.make_with_balance (0)&lt;br /&gt;
      ba.deposit (100)&lt;br /&gt;
      check&lt;br /&gt;
         money_deposited: ba.balance = 100&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Redefine routine `set_up' or `tear_down' if you want something to be executed before or after, respectively, every test routine of a class.&lt;br /&gt;
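&lt;br /&gt;
As a minimal sketch of such a redefinition, reusing the BANK_ACCOUNT class from the example above (the exact inheritance clause may vary depending on how CDD_TEST_CASE declares these routines), `set_up' can move the common test fixture out of the individual test routines:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
class TEST_BANK_ACCOUNT&lt;br /&gt;
inherit CDD_TEST_CASE&lt;br /&gt;
  redefine set_up end&lt;br /&gt;
feature&lt;br /&gt;
  ba: BANK_ACCOUNT&lt;br /&gt;
  set_up&lt;br /&gt;
      -- Executed before every test routine of this class.&lt;br /&gt;
    do&lt;br /&gt;
      create ba.make_with_balance (0)&lt;br /&gt;
    end&lt;br /&gt;
  test_deposit&lt;br /&gt;
    do&lt;br /&gt;
      ba.deposit (100)&lt;br /&gt;
      check&lt;br /&gt;
        money_deposited: ba.balance = 100&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;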
&lt;br /&gt;
[[Image:cdd_buttons_2.png|center]]&lt;br /&gt;
&lt;br /&gt;
=== Search for tags ===&lt;br /&gt;
Enter keywords to search for particular test cases. Some tags, like the name of the test case, are automatically set for you. You can also easily add your own tags by adding an indexing item &amp;quot;tag&amp;quot; to your test class or routine:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
indexing&lt;br /&gt;
  tag: &amp;quot;fixme&amp;quot;&lt;br /&gt;
class TEST_BANK_ACCOUNT&lt;br /&gt;
inherit CDD_TEST_CASE&lt;br /&gt;
feature&lt;br /&gt;
   ...&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Tags can be added to all test routines and classes. Whether they are extracted or manually written does not matter.&lt;br /&gt;
&lt;br /&gt;
=== Restrict execution of test cases ===&lt;br /&gt;
&lt;br /&gt;
Once you have many test cases, you will run into situations where you don't want to execute all of them. The restrict button helps you achieve this. As long as the &amp;quot;Restrict&amp;quot; button is pushed, test cases that don't show up in the test case view will not be tested. Execution is restricted to those test cases that do show up.&lt;br /&gt;
&lt;br /&gt;
=== Change test case view ===&lt;br /&gt;
&lt;br /&gt;
Select one of several predefined test case views. For example you can group test cases by their outcome to quickly see only failing test cases.&lt;br /&gt;
&lt;br /&gt;
== Further Documentation and Common Problems ==&lt;br /&gt;
Please visit [[Using CDD]] for further documentation or look at &lt;br /&gt;
[[CDD Common Problems|Common Problems]] if you run into problems.&lt;br /&gt;
&lt;br /&gt;
== Related Publications ==&lt;br /&gt;
&lt;br /&gt;
* Leitner, A., Ciupa, I., Oriol, M., Meyer, B., Fiva, A., &amp;quot;Contract Driven Development = Test Driven Development - Writing Test Cases&amp;quot;, Proceedings of ESEC/FSE'07: European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering 2007, (Dubrovnik, Croatia), September 2007 [http://se.ethz.ch/people/leitner/publications/cdd_leitner_esec_fse_2007.pdf (pdf)]&lt;br /&gt;
* Sunghun Kim, Shay Artzi, and Michael D. Ernst, &amp;quot;reCrash: Making Crash Reproducible&amp;quot; MIT Computer Science and Artificial Intelligence Laboratory technical report MIT-CSAIL-TR-2007-054, (Cambridge, MA), November 20, 2007. [http://recrash.googlecode.com/files/MIT-CSAIL-TR-2007-054.pdf (pdf)]&lt;br /&gt;
&lt;br /&gt;
== Project Internal Stuff ==&lt;br /&gt;
&lt;br /&gt;
[[CddBranchInternal]]&lt;/div&gt;</summary>
		<author><name>Mogh</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=CddBranch&amp;diff=10884</id>
		<title>CddBranch</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=CddBranch&amp;diff=10884"/>
				<updated>2008-04-01T20:16:01Z</updated>
		
		<summary type="html">&lt;p&gt;Mogh: /* Windows Patch */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Testing]]&lt;br /&gt;
[[Category:EiffelDebugger]]&lt;br /&gt;
[[Category:CDD]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== What is the CDD EiffelStudio? ==&lt;br /&gt;
&lt;br /&gt;
[[Image:cdd_logo.png|center]]&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[http://se.ethz.ch/people/leitner/cdd/video/ '''Play CDD Video''', click here]&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
CDD (short for Contract Driven Development) is a project developed at [http://se.ethz.ch ETH Zurich]. It adds advanced support for unit testing to EiffelStudio. With CDD EiffelStudio you can&lt;br /&gt;
&lt;br /&gt;
* Write test cases&lt;br /&gt;
* Manage test cases (using tags)&lt;br /&gt;
* Run test cases&lt;br /&gt;
* View test outcomes&lt;br /&gt;
* '''Automatically extract test cases'''&lt;br /&gt;
&lt;br /&gt;
If you have questions, feedback, or would like to report a bug please visit the [http://eiffelstudio.origo.ethz.ch/forum/20 CDD forum]. &lt;br /&gt;
&lt;br /&gt;
CDD EiffelStudio adds the following panel to regular EiffelStudio:&lt;br /&gt;
&lt;br /&gt;
[[Image:cdd_panel.png|center]]&lt;br /&gt;
&lt;br /&gt;
== Download CDD ==&lt;br /&gt;
&lt;br /&gt;
The following packages contain the full EiffelStudio 6.1 plus the CDD extension. You do not need to have EiffelStudio installed already in order to install the packages below. On Windows, you must have either the Platform SDK or Visual C++ installed. Do not use EiffelStudio with the gcc/mingw or the .Net backend.&lt;br /&gt;
&lt;br /&gt;
=== Linux ===&lt;br /&gt;
* Full Linux version: http://se.ethz.ch/people/leitner/cdd/Eiffel61_cdd_final_6.tar.bz2&lt;br /&gt;
* Installation instructions: http://docs.eiffel.com/eiffelstudio/installation/studio/060_linux.html&lt;br /&gt;
&lt;br /&gt;
Download the above file and install it just like you would install an EiffelStudio tarball. Afterwards, proceed to section &amp;quot;Using CDD&amp;quot;. Make sure you set/update the environment variables PATH, ISE_EIFFEL, and ISE_PLATFORM according to the installation instructions.&lt;br /&gt;
&lt;br /&gt;
=== Windows ===&lt;br /&gt;
&lt;br /&gt;
* Full Windows version (with installer): http://n.ethz.ch/~moris/download/Eiffel61_cdd_final_6-windows.msi&lt;br /&gt;
* Note 1: Installation is independent of any official EiffelStudio 6.1 installation (it neither overwrites nor invalidates those installations, nor is it influenced by them) &lt;br /&gt;
* Note 2: If you have a previous version of the CDD Edition of EiffelStudio installed, you need to uninstall it first. If you want to reuse the installation directory, you need to manually delete all EIFGENs in its subdirectories after the uninstall procedure.&lt;br /&gt;
* Note 3: Do not use the gcc/mingw or the .NET compiler backend. You will have to use the Microsoft C compiler. You can get it either by installing Visual C++, or via the (freely available) Microsoft Platform SDK. Have a look at http://eiffelsoftware.origo.ethz.ch/Installing_Microsoft_C_compiler to learn how to install either compiler.&lt;br /&gt;
&lt;br /&gt;
==== Windows Patch ====&lt;br /&gt;
&lt;br /&gt;
* Patch version for existing EiffelStudio CDD Edition Final 5 installation (does NOT work for official EiffelStudio 6.1 installation!): http://n.ethz.ch/~moris/download/Eiffel61_cdd_final_6-windows.zip&lt;br /&gt;
* Replace &amp;lt;INSTALL_DIRECTORY&amp;gt;/studio/spec/windows/bin/ec.exe with ec.exe contained in archive&lt;br /&gt;
* Delete existing precompilations (= delete EIFGENs in subdirectories of &amp;lt;INSTALL_DIRECTORY&amp;gt;/precomp/spec/windows/)&lt;br /&gt;
* Recompile existing projects from scratch (use the &amp;quot;clean&amp;quot; option in the project load screen)&lt;br /&gt;
&lt;br /&gt;
== Documentation ==&lt;br /&gt;
&lt;br /&gt;
=== Manually Written Test Cases ===&lt;br /&gt;
CDD EiffelStudio allows you to create a new “empty” test case. These test cases are similar to JUnit test cases. The name of a manually written test class must start with the word “TEST”, and all test routine names must start with the word “test”. The class must also inherit from CDD_TEST_CASE.&lt;br /&gt;
&lt;br /&gt;
=== Extracted Test Cases ===&lt;br /&gt;
CDD EiffelStudio automatically extracts test cases whenever you run your program and an exception is triggered. This feature is novel and not yet part of any other testing environment. You will be the first to try it out.&lt;br /&gt;
&lt;br /&gt;
=== Test outcomes ===&lt;br /&gt;
A test case checks whether your program contains a particular bug. A test case can fail, indicating that the bug is present in your program, or pass, indicating that your program does not contain this bug. Sometimes test cases will be unresolved, in which case the testing framework was unable to determine whether the test case passed or failed. A test case can be unresolved for several reasons.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:cdd_buttons.png|center]]&lt;br /&gt;
&lt;br /&gt;
=== Debug Test Case === &lt;br /&gt;
Select a test case and press this button to run a test case in the debugger.&lt;br /&gt;
&lt;br /&gt;
=== Enable/Disable Execution of Test Cases in Background === &lt;br /&gt;
If enabled, all test cases are retested every time you compile. If disabled, no test cases are executed.&lt;br /&gt;
&lt;br /&gt;
=== Enable/Disable automatic extraction of Test Cases === &lt;br /&gt;
If enabled, every time an exception is triggered, a set of test cases that try to reproduce this exception is extracted. If disabled, no test cases are extracted.&lt;br /&gt;
&lt;br /&gt;
=== Clean up/Delete === &lt;br /&gt;
You can use the “Clean Up/Delete” button in two different ways. Simply pressing it deletes all unresolved test cases. By pick-and-dropping a test case onto the “Clean Up/Delete” button (right-click the test case, move the mouse to the button, and right-click again) you can delete an individual test case. By the way, test cases are just regular classes, so you can also use all existing EiffelStudio tools that apply to classes.&lt;br /&gt;
&lt;br /&gt;
Update: To remove duplicate test cases (until the next CDD update), please use the command-line tool from the [http://clean-cdd.origo.ethz.ch/ clean-cdd project].&lt;br /&gt;
&lt;br /&gt;
=== Create new manual test class ===&lt;br /&gt;
Press this button to create an empty test class. You can then edit the class to add manually written test cases. This is how a manually written test case can look:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
class TEST_BANK_ACCOUNT&lt;br /&gt;
inherit CDD_TEST_CASE&lt;br /&gt;
feature&lt;br /&gt;
  test_deposit&lt;br /&gt;
    local&lt;br /&gt;
      ba: BANK_ACCOUNT&lt;br /&gt;
    do&lt;br /&gt;
      create ba.make_with_balance (0)&lt;br /&gt;
      ba.deposit (100)&lt;br /&gt;
      check&lt;br /&gt;
         money_deposited: ba.balance = 100&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:cdd_buttons_2.png|center]]&lt;br /&gt;
&lt;br /&gt;
=== Search for tags ===&lt;br /&gt;
Enter keywords to search for particular test cases. Some tags, such as the name of the test case, are set automatically for you. You can also easily add your own tags by adding an indexing item &amp;quot;tag&amp;quot; to your test class or routine:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
indexing&lt;br /&gt;
  tag: &amp;quot;fixme&amp;quot;&lt;br /&gt;
class TEST_BANK_ACCOUNT&lt;br /&gt;
inherit CDD_TEST_CASE&lt;br /&gt;
feature&lt;br /&gt;
   ...&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Tags can be added to all test routines and classes. Whether they are extracted or manually written does not matter.&lt;br /&gt;
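For instance, assuming CDD treats each value of the indexing entry as a separate tag (the single-value form above is the documented one; the tag names here are made up), a class could carry several tags at once:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
indexing&lt;br /&gt;
  -- Hypothetical: two tags on one test class&lt;br /&gt;
  tag: &amp;quot;fixme&amp;quot;, &amp;quot;deposit&amp;quot;&lt;br /&gt;
class TEST_BANK_ACCOUNT&lt;br /&gt;
inherit CDD_TEST_CASE&lt;br /&gt;
feature&lt;br /&gt;
   ...&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;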
&lt;br /&gt;
=== Restrict execution of test cases ===&lt;br /&gt;
&lt;br /&gt;
Once you have many test cases, you will run into situations where you don't want to execute all of them. The &amp;quot;Restrict&amp;quot; button helps you achieve this. As long as it is pushed, test cases that don't show up in the test case view will not be tested; execution is restricted to those test cases that do show up.&lt;br /&gt;
&lt;br /&gt;
=== Change test case view ===&lt;br /&gt;
&lt;br /&gt;
Select one of several predefined test case views. For example, you can group test cases by their outcome to quickly see only failing test cases.&lt;br /&gt;
&lt;br /&gt;
== Further Documentation and Common Problems ==&lt;br /&gt;
Please visit [[Using CDD]] for further documentation or look at &lt;br /&gt;
[[CDD Common Problems|Common Problems]] if you run into problems.&lt;br /&gt;
&lt;br /&gt;
== Related Publications ==&lt;br /&gt;
&lt;br /&gt;
* Leitner, A., Ciupa, I., Oriol, M., Meyer, B., Fiva, A., &amp;quot;Contract Driven Development = Test Driven Development - Writing Test Cases&amp;quot;, Proceedings of ESEC/FSE'07: European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering 2007, (Dubrovnik, Croatia), September 2007 [http://se.ethz.ch/people/leitner/publications/cdd_leitner_esec_fse_2007.pdf (pdf)]&lt;br /&gt;
* Sunghun Kim, Shay Artzi, and Michael D. Ernst, &amp;quot;reCrash: Making Crash Reproducible&amp;quot; MIT Computer Science and Artificial Intelligence Laboratory technical report MIT-CSAIL-TR-2007-054, (Cambridge, MA), November 20, 2007. [http://recrash.googlecode.com/files/MIT-CSAIL-TR-2007-054.pdf (pdf)]&lt;br /&gt;
&lt;br /&gt;
== Project Internal Stuff ==&lt;br /&gt;
&lt;br /&gt;
[[CddBranchInternal]]&lt;/div&gt;</summary>
		<author><name>Mogh</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=CddBranch&amp;diff=10883</id>
		<title>CddBranch</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=CddBranch&amp;diff=10883"/>
				<updated>2008-04-01T20:07:13Z</updated>
		
		<summary type="html">&lt;p&gt;Mogh: /* Windows Patch */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Testing]]&lt;br /&gt;
[[Category:EiffelDebugger]]&lt;br /&gt;
[[Category:CDD]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== What is the CDD EiffelStudio? ==&lt;br /&gt;
&lt;br /&gt;
[[Image:cdd_logo.png|center]]&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[http://se.ethz.ch/people/leitner/cdd/video/ '''Play CDD Video''', click here]&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
CDD (short for Contract Driven Development) is a project developed at [http://se.ethz.ch ETH Zurich]. It adds advanced support for unit testing to EiffelStudio. With CDD EiffelStudio you can&lt;br /&gt;
&lt;br /&gt;
* Write test cases&lt;br /&gt;
* Manage test cases (using tags)&lt;br /&gt;
* Run test cases&lt;br /&gt;
* View test outcomes&lt;br /&gt;
* '''Automatically extract test cases'''&lt;br /&gt;
&lt;br /&gt;
If you have questions, feedback, or would like to report a bug please visit the [http://eiffelstudio.origo.ethz.ch/forum/20 CDD forum]. &lt;br /&gt;
&lt;br /&gt;
CDD EiffelStudio adds the following panel to regular EiffelStudio:&lt;br /&gt;
&lt;br /&gt;
[[Image:cdd_panel.png|center]]&lt;br /&gt;
&lt;br /&gt;
== Download CDD ==&lt;br /&gt;
&lt;br /&gt;
The following packages contain the full EiffelStudio 6.1 plus the CDD extension. You do not need to have EiffelStudio installed already in order to install the packages below. On Windows you do need either the Platform SDK or Visual C++ installed. Do not use EiffelStudio with the gcc/mingw or the .NET backend.&lt;br /&gt;
&lt;br /&gt;
=== Linux ===&lt;br /&gt;
* Full Linux version: http://se.ethz.ch/people/leitner/cdd/Eiffel61_cdd_final_6.tar.bz2&lt;br /&gt;
* Installation instructions: http://docs.eiffel.com/eiffelstudio/installation/studio/060_linux.html&lt;br /&gt;
&lt;br /&gt;
Download the above file and install it just like you would install an EiffelStudio tarball. Afterwards, proceed to section &amp;quot;Using CDD&amp;quot;. Make sure you set/update the environment variables PATH, ISE_EIFFEL, and ISE_PLATFORM according to the installation instructions.&lt;br /&gt;
&lt;br /&gt;
=== Windows ===&lt;br /&gt;
&lt;br /&gt;
* Full Windows version (with installer): http://n.ethz.ch/~moris/download/Eiffel61_cdd_final_6-windows.msi&lt;br /&gt;
* Note 1: The installation is independent of any official EiffelStudio 6.1 installation (it neither overwrites, invalidates, nor is influenced by one)&lt;br /&gt;
* Note 2: If you have a previous installation of the CDD Edition of EiffelStudio installed, you need to uninstall it first. If you want to reuse the installation directory, you need to manually delete all EIFGENs in its subdirectories after the uninstall procedure.&lt;br /&gt;
* Note 3: Do not use the gcc/mingw or the .NET compiler backend. You will have to use the Microsoft C compiler. You can get it either by installing Visual C++ or via the freely available Microsoft Platform SDK. Have a look at http://eiffelsoftware.origo.ethz.ch/Installing_Microsoft_C_compiler to learn how to install either compiler.&lt;br /&gt;
&lt;br /&gt;
==== Windows Patch ====&lt;br /&gt;
&lt;br /&gt;
* Patch version: http://n.ethz.ch/~moris/download/Eiffel61_cdd_final_6-windows.zip&lt;br /&gt;
&lt;br /&gt;
== Documentation ==&lt;br /&gt;
&lt;br /&gt;
=== Manually Written Test Cases ===&lt;br /&gt;
CDD EiffelStudio allows you to create a new “empty” test case. These test cases are similar to JUnit test cases. The name of a manually written test class must start with “TEST”, and the names of all test routines must start with “test”. The class must also inherit from class CDD_TEST_CASE.&lt;br /&gt;
&lt;br /&gt;
=== Extracted Test Cases ===&lt;br /&gt;
CDD EiffelStudio automatically extracts test cases whenever you run your program and an exception is triggered. This feature is novel and not yet part of any other testing environment. You will be the first to try it out.&lt;br /&gt;
&lt;br /&gt;
=== Test outcomes ===&lt;br /&gt;
A test case checks whether your program contains a particular bug. A test case can fail, indicating that the bug is present in your program, or pass, indicating that your program does not contain the bug. Sometimes test cases are unresolved, meaning the testing framework was unable to determine whether the test case passed or failed. A test case can be unresolved for several reasons.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:cdd_buttons.png|center]]&lt;br /&gt;
&lt;br /&gt;
=== Debug Test Case === &lt;br /&gt;
Select a test case and press this button to run a test case in the debugger.&lt;br /&gt;
&lt;br /&gt;
=== Enable/Disable Execution of Test Cases in Background === &lt;br /&gt;
If enabled, all test cases are retested every time you compile. If disabled, no test cases are executed.&lt;br /&gt;
&lt;br /&gt;
=== Enable/Disable automatic extraction of Test Cases === &lt;br /&gt;
If enabled, every time an exception is triggered, a set of test cases that try to reproduce this exception is extracted. If disabled, no test cases are extracted.&lt;br /&gt;
&lt;br /&gt;
=== Clean up/Delete === &lt;br /&gt;
You can use the “Clean Up/Delete” button in two different ways. Simply pressing it deletes all unresolved test cases. By pick-and-dropping a test case onto the “Clean Up/Delete” button (right-click the test case, move the mouse to the button, and right-click again) you can delete an individual test case. By the way, test cases are just regular classes, so you can also use all existing EiffelStudio tools that apply to classes.&lt;br /&gt;
&lt;br /&gt;
Update: To remove duplicate test cases (until the next CDD update), please use the command-line tool from the [http://clean-cdd.origo.ethz.ch/ clean-cdd project].&lt;br /&gt;
&lt;br /&gt;
=== Create new manual test class ===&lt;br /&gt;
Press this button to create an empty test class. You can then edit the class to add manually written test cases. This is how a manually written test case can look:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
class TEST_BANK_ACCOUNT&lt;br /&gt;
inherit CDD_TEST_CASE&lt;br /&gt;
feature&lt;br /&gt;
  test_deposit&lt;br /&gt;
    local&lt;br /&gt;
      ba: BANK_ACCOUNT&lt;br /&gt;
    do&lt;br /&gt;
      create ba.make_with_balance (0)&lt;br /&gt;
      ba.deposit (100)&lt;br /&gt;
      check&lt;br /&gt;
         money_deposited: ba.balance = 100&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:cdd_buttons_2.png|center]]&lt;br /&gt;
&lt;br /&gt;
=== Search for tags ===&lt;br /&gt;
Enter keywords to search for particular test cases. Some tags, such as the name of the test case, are set automatically for you. You can also easily add your own tags by adding an indexing item &amp;quot;tag&amp;quot; to your test class or routine:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
indexing&lt;br /&gt;
  tag: &amp;quot;fixme&amp;quot;&lt;br /&gt;
class TEST_BANK_ACCOUNT&lt;br /&gt;
inherit CDD_TEST_CASE&lt;br /&gt;
feature&lt;br /&gt;
   ...&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Tags can be added to all test routines and classes. Whether they are extracted or manually written does not matter.&lt;br /&gt;
&lt;br /&gt;
=== Restrict execution of test cases ===&lt;br /&gt;
&lt;br /&gt;
Once you have many test cases, you will run into situations where you don't want to execute all of them. The &amp;quot;Restrict&amp;quot; button helps you achieve this. As long as it is pushed, test cases that don't show up in the test case view will not be tested; execution is restricted to those test cases that do show up.&lt;br /&gt;
&lt;br /&gt;
=== Change test case view ===&lt;br /&gt;
&lt;br /&gt;
Select one of several predefined test case views. For example, you can group test cases by their outcome to quickly see only failing test cases.&lt;br /&gt;
&lt;br /&gt;
== Further Documentation and Common Problems ==&lt;br /&gt;
Please visit [[Using CDD]] for further documentation or look at &lt;br /&gt;
[[CDD Common Problems|Common Problems]] if you run into problems.&lt;br /&gt;
&lt;br /&gt;
== Related Publications ==&lt;br /&gt;
&lt;br /&gt;
* Leitner, A., Ciupa, I., Oriol, M., Meyer, B., Fiva, A., &amp;quot;Contract Driven Development = Test Driven Development - Writing Test Cases&amp;quot;, Proceedings of ESEC/FSE'07: European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering 2007, (Dubrovnik, Croatia), September 2007 [http://se.ethz.ch/people/leitner/publications/cdd_leitner_esec_fse_2007.pdf (pdf)]&lt;br /&gt;
* Sunghun Kim, Shay Artzi, and Michael D. Ernst, &amp;quot;reCrash: Making Crash Reproducible&amp;quot; MIT Computer Science and Artificial Intelligence Laboratory technical report MIT-CSAIL-TR-2007-054, (Cambridge, MA), November 20, 2007. [http://recrash.googlecode.com/files/MIT-CSAIL-TR-2007-054.pdf (pdf)]&lt;br /&gt;
&lt;br /&gt;
== Project Internal Stuff ==&lt;br /&gt;
&lt;br /&gt;
[[CddBranchInternal]]&lt;/div&gt;</summary>
		<author><name>Mogh</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=CddBranch&amp;diff=10882</id>
		<title>CddBranch</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=CddBranch&amp;diff=10882"/>
				<updated>2008-04-01T20:06:48Z</updated>
		
		<summary type="html">&lt;p&gt;Mogh: /* Windows */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Testing]]&lt;br /&gt;
[[Category:EiffelDebugger]]&lt;br /&gt;
[[Category:CDD]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== What is the CDD EiffelStudio? ==&lt;br /&gt;
&lt;br /&gt;
[[Image:cdd_logo.png|center]]&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[http://se.ethz.ch/people/leitner/cdd/video/ '''Play CDD Video''', click here]&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
CDD (short for Contract Driven Development) is a project developed at [http://se.ethz.ch ETH Zurich]. It adds advanced support for unit testing to EiffelStudio. With CDD EiffelStudio you can&lt;br /&gt;
&lt;br /&gt;
* Write test cases&lt;br /&gt;
* Manage test cases (using tags)&lt;br /&gt;
* Run test cases&lt;br /&gt;
* View test outcomes&lt;br /&gt;
* '''Automatically extract test cases'''&lt;br /&gt;
&lt;br /&gt;
If you have questions, feedback, or would like to report a bug please visit the [http://eiffelstudio.origo.ethz.ch/forum/20 CDD forum]. &lt;br /&gt;
&lt;br /&gt;
CDD EiffelStudio adds the following panel to regular EiffelStudio:&lt;br /&gt;
&lt;br /&gt;
[[Image:cdd_panel.png|center]]&lt;br /&gt;
&lt;br /&gt;
== Download CDD ==&lt;br /&gt;
&lt;br /&gt;
The following packages contain the full EiffelStudio 6.1 plus the CDD extension. You do not need to have EiffelStudio installed already in order to install the packages below. On Windows you do need either the Platform SDK or Visual C++ installed. Do not use EiffelStudio with the gcc/mingw or the .NET backend.&lt;br /&gt;
&lt;br /&gt;
=== Linux ===&lt;br /&gt;
* Full Linux version: http://se.ethz.ch/people/leitner/cdd/Eiffel61_cdd_final_6.tar.bz2&lt;br /&gt;
* Installation instructions: http://docs.eiffel.com/eiffelstudio/installation/studio/060_linux.html&lt;br /&gt;
&lt;br /&gt;
Download the above file and install it just like you would install an EiffelStudio tarball. Afterwards, proceed to section &amp;quot;Using CDD&amp;quot;. Make sure you set/update the environment variables PATH, ISE_EIFFEL, and ISE_PLATFORM according to the installation instructions.&lt;br /&gt;
&lt;br /&gt;
=== Windows ===&lt;br /&gt;
&lt;br /&gt;
* Full Windows version (with installer): http://n.ethz.ch/~moris/download/Eiffel61_cdd_final_6-windows.msi&lt;br /&gt;
* Note 1: The installation is independent of any official EiffelStudio 6.1 installation (it neither overwrites, invalidates, nor is influenced by one)&lt;br /&gt;
* Note 2: If you have a previous installation of the CDD Edition of EiffelStudio installed, you need to uninstall it first. If you want to reuse the installation directory, you need to manually delete all EIFGENs in its subdirectories after the uninstall procedure.&lt;br /&gt;
* Note 3: Do not use the gcc/mingw or the .NET compiler backend. You will have to use the Microsoft C compiler. You can get it either by installing Visual C++ or via the freely available Microsoft Platform SDK. Have a look at http://eiffelsoftware.origo.ethz.ch/Installing_Microsoft_C_compiler to learn how to install either compiler.&lt;br /&gt;
&lt;br /&gt;
== Windows Patch ==&lt;br /&gt;
&lt;br /&gt;
* Patch version: http://n.ethz.ch/~moris/download/Eiffel61_cdd_final_6-windows.zip&lt;br /&gt;
&lt;br /&gt;
== Documentation ==&lt;br /&gt;
&lt;br /&gt;
=== Manually Written Test Cases ===&lt;br /&gt;
CDD EiffelStudio allows you to create a new “empty” test case. These test cases are similar to JUnit test cases. The name of a manually written test class must start with “TEST”, and the names of all test routines must start with “test”. The class must also inherit from class CDD_TEST_CASE.&lt;br /&gt;
&lt;br /&gt;
=== Extracted Test Cases ===&lt;br /&gt;
CDD EiffelStudio automatically extracts test cases whenever you run your program and an exception is triggered. This feature is novel and not yet part of any other testing environment. You will be the first to try it out.&lt;br /&gt;
&lt;br /&gt;
=== Test outcomes ===&lt;br /&gt;
A test case checks whether your program contains a particular bug. A test case can fail, indicating that the bug is present in your program, or pass, indicating that your program does not contain the bug. Sometimes test cases are unresolved, meaning the testing framework was unable to determine whether the test case passed or failed. A test case can be unresolved for several reasons.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:cdd_buttons.png|center]]&lt;br /&gt;
&lt;br /&gt;
=== Debug Test Case === &lt;br /&gt;
Select a test case and press this button to run a test case in the debugger.&lt;br /&gt;
&lt;br /&gt;
=== Enable/Disable Execution of Test Cases in Background === &lt;br /&gt;
If enabled, all test cases are retested every time you compile. If disabled, no test cases are executed.&lt;br /&gt;
&lt;br /&gt;
=== Enable/Disable automatic extraction of Test Cases === &lt;br /&gt;
If enabled, every time an exception is triggered, a set of test cases that try to reproduce this exception is extracted. If disabled, no test cases are extracted.&lt;br /&gt;
&lt;br /&gt;
=== Clean up/Delete === &lt;br /&gt;
You can use the “Clean Up/Delete” button in two different ways. Simply pressing it deletes all unresolved test cases. By pick-and-dropping a test case onto the “Clean Up/Delete” button (right-click the test case, move the mouse to the button, and right-click again) you can delete an individual test case. By the way, test cases are just regular classes, so you can also use all existing EiffelStudio tools that apply to classes.&lt;br /&gt;
&lt;br /&gt;
Update: To remove duplicate test cases (until the next CDD update), please use the command-line tool from the [http://clean-cdd.origo.ethz.ch/ clean-cdd project].&lt;br /&gt;
&lt;br /&gt;
=== Create new manual test class ===&lt;br /&gt;
Press this button to create an empty test class. You can then edit the class to add manually written test cases. This is how a manually written test case can look:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
class TEST_BANK_ACCOUNT&lt;br /&gt;
inherit CDD_TEST_CASE&lt;br /&gt;
feature&lt;br /&gt;
  test_deposit&lt;br /&gt;
    local&lt;br /&gt;
      ba: BANK_ACCOUNT&lt;br /&gt;
    do&lt;br /&gt;
      create ba.make_with_balance (0)&lt;br /&gt;
      ba.deposit (100)&lt;br /&gt;
      check&lt;br /&gt;
         money_deposited: ba.balance = 100&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:cdd_buttons_2.png|center]]&lt;br /&gt;
&lt;br /&gt;
=== Search for tags ===&lt;br /&gt;
Enter keywords to search for particular test cases. Some tags, such as the name of the test case, are set automatically for you. You can also easily add your own tags by adding an indexing item &amp;quot;tag&amp;quot; to your test class or routine:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
indexing&lt;br /&gt;
  tag: &amp;quot;fixme&amp;quot;&lt;br /&gt;
class TEST_BANK_ACCOUNT&lt;br /&gt;
inherit CDD_TEST_CASE&lt;br /&gt;
feature&lt;br /&gt;
   ...&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Tags can be added to all test routines and classes. Whether they are extracted or manually written does not matter.&lt;br /&gt;
&lt;br /&gt;
=== Restrict execution of test cases ===&lt;br /&gt;
&lt;br /&gt;
Once you have many test cases, you will run into situations where you don't want to execute all of them. The &amp;quot;Restrict&amp;quot; button helps you achieve this. As long as it is pushed, test cases that don't show up in the test case view will not be tested; execution is restricted to those test cases that do show up.&lt;br /&gt;
&lt;br /&gt;
=== Change test case view ===&lt;br /&gt;
&lt;br /&gt;
Select one of several predefined test case views. For example, you can group test cases by their outcome to quickly see only failing test cases.&lt;br /&gt;
&lt;br /&gt;
== Further Documentation and Common Problems ==&lt;br /&gt;
Please visit [[Using CDD]] for further documentation or look at &lt;br /&gt;
[[CDD Common Problems|Common Problems]] if you run into problems.&lt;br /&gt;
&lt;br /&gt;
== Related Publications ==&lt;br /&gt;
&lt;br /&gt;
* Leitner, A., Ciupa, I., Oriol, M., Meyer, B., Fiva, A., &amp;quot;Contract Driven Development = Test Driven Development - Writing Test Cases&amp;quot;, Proceedings of ESEC/FSE'07: European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering 2007, (Dubrovnik, Croatia), September 2007 [http://se.ethz.ch/people/leitner/publications/cdd_leitner_esec_fse_2007.pdf (pdf)]&lt;br /&gt;
* Sunghun Kim, Shay Artzi, and Michael D. Ernst, &amp;quot;reCrash: Making Crash Reproducible&amp;quot; MIT Computer Science and Artificial Intelligence Laboratory technical report MIT-CSAIL-TR-2007-054, (Cambridge, MA), November 20, 2007. [http://recrash.googlecode.com/files/MIT-CSAIL-TR-2007-054.pdf (pdf)]&lt;br /&gt;
&lt;br /&gt;
== Project Internal Stuff ==&lt;br /&gt;
&lt;br /&gt;
[[CddBranchInternal]]&lt;/div&gt;</summary>
		<author><name>Mogh</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=CddBranch&amp;diff=10881</id>
		<title>CddBranch</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=CddBranch&amp;diff=10881"/>
				<updated>2008-04-01T20:05:45Z</updated>
		
		<summary type="html">&lt;p&gt;Mogh: /* Windows */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Testing]]&lt;br /&gt;
[[Category:EiffelDebugger]]&lt;br /&gt;
[[Category:CDD]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== What is the CDD EiffelStudio? ==&lt;br /&gt;
&lt;br /&gt;
[[Image:cdd_logo.png|center]]&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[http://se.ethz.ch/people/leitner/cdd/video/ '''Play CDD Video''', click here]&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
CDD (short for Contract Driven Development) is a project developed at [http://se.ethz.ch ETH Zurich]. It adds advanced support for unit testing to EiffelStudio. With CDD EiffelStudio you can&lt;br /&gt;
&lt;br /&gt;
* Write test cases&lt;br /&gt;
* Manage test cases (using tags)&lt;br /&gt;
* Run test cases&lt;br /&gt;
* View test outcomes&lt;br /&gt;
* '''Automatically extract test cases'''&lt;br /&gt;
&lt;br /&gt;
If you have questions, feedback, or would like to report a bug please visit the [http://eiffelstudio.origo.ethz.ch/forum/20 CDD forum]. &lt;br /&gt;
&lt;br /&gt;
CDD EiffelStudio adds the following panel to regular EiffelStudio:&lt;br /&gt;
&lt;br /&gt;
[[Image:cdd_panel.png|center]]&lt;br /&gt;
&lt;br /&gt;
== Download CDD ==&lt;br /&gt;
&lt;br /&gt;
The following packages contain the full EiffelStudio 6.1 plus the CDD extension. You do not need to have EiffelStudio installed already in order to install the packages below. On Windows you do need either the Platform SDK or Visual C++ installed. Do not use EiffelStudio with the gcc/mingw or the .NET backend.&lt;br /&gt;
&lt;br /&gt;
=== Linux ===&lt;br /&gt;
* Full Linux version: http://se.ethz.ch/people/leitner/cdd/Eiffel61_cdd_final_6.tar.bz2&lt;br /&gt;
* Installation instructions: http://docs.eiffel.com/eiffelstudio/installation/studio/060_linux.html&lt;br /&gt;
&lt;br /&gt;
Download the above file and install it just like you would install an EiffelStudio tarball. Afterwards, proceed to section &amp;quot;Using CDD&amp;quot;. Make sure you set/update the environment variables PATH, ISE_EIFFEL, and ISE_PLATFORM according to the installation instructions.&lt;br /&gt;
&lt;br /&gt;
=== Windows ===&lt;br /&gt;
&lt;br /&gt;
* Full Windows version (with installer): http://n.ethz.ch/~moris/download/Eiffel61_cdd_final_6-windows.msi&lt;br /&gt;
* Note 1: The installation is independent of any official EiffelStudio 6.1 installation (it neither overwrites, invalidates, nor is influenced by one)&lt;br /&gt;
* Note 2: If you have a previous installation of the CDD Edition of EiffelStudio installed, you need to uninstall it first. If you want to reuse the installation directory, you need to manually delete all EIFGENs in its subdirectories after the uninstall procedure.&lt;br /&gt;
* Note 3: Do not use the gcc/mingw or the .NET compiler backend. You will have to use the Microsoft C compiler. You can get it either by installing Visual C++ or via the freely available Microsoft Platform SDK. Have a look at http://eiffelsoftware.origo.ethz.ch/Installing_Microsoft_C_compiler to learn how to install either compiler.&lt;br /&gt;
&lt;br /&gt;
== Documentation ==&lt;br /&gt;
&lt;br /&gt;
=== Manually Written Test Cases ===&lt;br /&gt;
CDD EiffelStudio allows you to create a new “empty” test case. These test cases are similar to JUnit test cases. The name of a manually written test class must start with “TEST”, and the names of all test routines must start with “test”. The class must also inherit from class CDD_TEST_CASE.&lt;br /&gt;
&lt;br /&gt;
=== Extracted Test Cases ===&lt;br /&gt;
CDD EiffelStudio automatically extracts test cases whenever you run your program and an exception is triggered. This feature is novel and not yet part of any other testing environment. You will be the first to try it out.&lt;br /&gt;
&lt;br /&gt;
=== Test outcomes ===&lt;br /&gt;
A test case checks whether your program contains a particular bug. A test case can fail, indicating that the bug is present in your program, or pass, indicating that your program does not contain the bug. Sometimes test cases are unresolved, meaning the testing framework was unable to determine whether the test case passed or failed. A test case can be unresolved for several reasons.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:cdd_buttons.png|center]]&lt;br /&gt;
&lt;br /&gt;
=== Debug Test Case === &lt;br /&gt;
Select a test case and press this button to run a test case in the debugger.&lt;br /&gt;
&lt;br /&gt;
=== Enable/Disable Execution of Test Cases in Background === &lt;br /&gt;
If enabled, all test cases are retested every time you compile. If disabled, no test cases are executed.&lt;br /&gt;
&lt;br /&gt;
=== Enable/Disable automatic extraction of Test Cases === &lt;br /&gt;
If enabled, every time an exception is triggered, a set of test cases that try to reproduce this exception is extracted. If disabled, no test cases are extracted.&lt;br /&gt;
&lt;br /&gt;
=== Clean up/Delete === &lt;br /&gt;
You can use the “Clean Up/Delete” button in two different ways. Simply pressing it deletes all unresolved test cases. By pick-and-dropping a test case onto the “Clean Up/Delete” button (right-click the test case, move the mouse to the button, and right-click again) you can delete an individual test case. By the way, test cases are just regular classes, so you can also use all existing EiffelStudio tools that apply to classes.&lt;br /&gt;
&lt;br /&gt;
Update: To remove duplicate test cases (until the next CDD update), please use the command-line tool from the [http://clean-cdd.origo.ethz.ch/ clean-cdd project].&lt;br /&gt;
&lt;br /&gt;
=== Create new manual test class ===&lt;br /&gt;
Press this button to create an empty test class. You can then edit the class to add manually written test cases. This is how a manually written test case can look:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
class TEST_BANK_ACCOUNT&lt;br /&gt;
inherit CDD_TEST_CASE&lt;br /&gt;
feature&lt;br /&gt;
  test_deposit&lt;br /&gt;
    local&lt;br /&gt;
      ba: BANK_ACCOUNT&lt;br /&gt;
    do&lt;br /&gt;
      create ba.make_with_balance (0)&lt;br /&gt;
      ba.deposit (100)&lt;br /&gt;
      check&lt;br /&gt;
         money_deposited: ba.balance = 100&lt;br /&gt;
      end&lt;br /&gt;
    end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:cdd_buttons_2.png|center]]&lt;br /&gt;
&lt;br /&gt;
=== Search for tags ===&lt;br /&gt;
Enter keywords to search for particular test cases. Some tags, such as the name of the test case, are set automatically for you. You can also easily add your own tags by adding an indexing item &amp;quot;tag&amp;quot; to your test class or routine:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;eiffel&amp;gt;&lt;br /&gt;
indexing&lt;br /&gt;
  tag: &amp;quot;fixme&amp;quot;&lt;br /&gt;
class TEST_BANK_ACCOUNT&lt;br /&gt;
inherit CDD_TEST_CASE&lt;br /&gt;
feature&lt;br /&gt;
   ...&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/eiffel&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Tags can be added to all test routines and classes. Whether they are extracted or manually written does not matter.&lt;br /&gt;
&lt;br /&gt;
=== Restrict execution of test cases ===&lt;br /&gt;
&lt;br /&gt;
Once you have many test cases, you will run into situations where you don't want to execute all of them. The &amp;quot;Restrict&amp;quot; button helps you achieve this. As long as it is pushed, test cases that don't show up in the test case view will not be tested; execution is restricted to those test cases that do show up.&lt;br /&gt;
&lt;br /&gt;
=== Change test case view ===&lt;br /&gt;
&lt;br /&gt;
Select one of several predefined test case views. For example, you can group test cases by their outcome to quickly see only failing test cases.&lt;br /&gt;
&lt;br /&gt;
== Further Documentation and Common Problems ==&lt;br /&gt;
Please visit [[Using CDD]] for further documentation or look at &lt;br /&gt;
[[CDD Common Problems|Common Problems]] if you run into problems.&lt;br /&gt;
&lt;br /&gt;
== Related Publications ==&lt;br /&gt;
&lt;br /&gt;
* Leitner, A., Ciupa, I., Oriol, M., Meyer, B., Fiva, A., &amp;quot;Contract Driven Development = Test Driven Development - Writing Test Cases&amp;quot;, Proceedings of ESEC/FSE'07: European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering 2007, (Dubrovnik, Croatia), September 2007 [http://se.ethz.ch/people/leitner/publications/cdd_leitner_esec_fse_2007.pdf (pdf)]&lt;br /&gt;
* Sunghun Kim, Shay Artzi, and Michael D. Ernst, &amp;quot;ReCrash: Making Crashes Reproducible&amp;quot;, MIT Computer Science and Artificial Intelligence Laboratory technical report MIT-CSAIL-TR-2007-054, (Cambridge, MA), November 20, 2007. [http://recrash.googlecode.com/files/MIT-CSAIL-TR-2007-054.pdf (pdf)]&lt;br /&gt;
&lt;br /&gt;
== Project Internal Stuff ==&lt;br /&gt;
&lt;br /&gt;
[[CddBranchInternal]]&lt;/div&gt;</summary>
		<author><name>Mogh</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=CddMeeting_13_02_2008&amp;diff=10880</id>
		<title>CddMeeting 13 02 2008</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=CddMeeting_13_02_2008&amp;diff=10880"/>
				<updated>2008-04-01T19:31:56Z</updated>
		
		<summary type="html">&lt;p&gt;Mogh: /* Stefan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:CDD]]&lt;br /&gt;
=CDD Meeting, Wednesday, 14.02.2008, 10:00=&lt;br /&gt;
&lt;br /&gt;
== Next Meeting ==&lt;br /&gt;
* 15.02.2008; 14:15&lt;br /&gt;
&lt;br /&gt;
== Tasks ==&lt;br /&gt;
&lt;br /&gt;
===Andreas===&lt;br /&gt;
* Formulate Experiment Hypothesis (Andreas)&lt;br /&gt;
* Fix AutoTest for courses&lt;br /&gt;
** New release&lt;br /&gt;
* Write documentation and video tutorials (together with final release)&lt;br /&gt;
* Finish tuple_002 test case&lt;br /&gt;
* [delayed] Retest if test cases with errors are properly ignored (after 6.1 port)&lt;br /&gt;
* [done] Add timeout judgement&lt;br /&gt;
* Build releasable delivery for Linux (after each Beta I guess...)&lt;br /&gt;
&lt;br /&gt;
===Arno===&lt;br /&gt;
&lt;br /&gt;
====Bug Fixing====&lt;br /&gt;
* [ON WINDOWS NO CRASH OCCURS] Check why EiffelStudio quits after debugging a test routine and ignoring violations (ask Jocelyn)&lt;br /&gt;
&lt;br /&gt;
===Ilinca===&lt;br /&gt;
&lt;br /&gt;
* Integrate variable declarations into AutoTest trunk (by 8.2.2008)&lt;br /&gt;
&lt;br /&gt;
===Stefan===&lt;br /&gt;
* [RECURRENT] Build releasable delivery on Windows&lt;br /&gt;
* [RECURRENT, currently valid] Check log for XML validity&lt;br /&gt;
&lt;br /&gt;
* [DONE] Don't store test case if exactly the same one exists already.&lt;br /&gt;
* [DONE] Look at test case from Manu that doesn't compile (ARRAY is printed instead of ARRAY[STRING])&lt;br /&gt;
* [DONE] Add link to &amp;quot;common problems&amp;quot; website in context window for test cases that time out&lt;br /&gt;
&lt;br /&gt;
* [DONE] Invariant violations occurring before routine invocation make a test case fail. It should be invalid. How to reproduce: Launch bank account example with final 2. Implement `BANK_ACCOUNT.withdraw' correctly. Replace postcondition with &amp;quot;withdrawn: balance = old balance + an_amount&amp;quot;. Launch app, try to withdraw 700. You will get a test case that fails. It should be invalid though.&lt;br /&gt;
&lt;br /&gt;
* [DONE] Inspect the &amp;quot;cannot read file&amp;quot; problem that occurred in the lecture. I can now also reproduce it on my Linux box.&lt;br /&gt;
* [DONE] Log:&lt;br /&gt;
** Playing around (duration and number of times)&lt;br /&gt;
** FG-test case execution (duration, number of times, exception thrown, which test case, outcome)&lt;br /&gt;
* [DONE] Clean cdd target when main target is cleaned too.&lt;br /&gt;
* When detecting whether compilation succeeded, include &amp;quot;corrupted&amp;quot; as a keyword when parsing the output message (as in the &amp;quot;project corrupted, cannot continue&amp;quot; message)&lt;br /&gt;
&lt;br /&gt;
* Make newly extracted test cases show up &amp;quot;expanded&amp;quot; in GUI treeview&lt;br /&gt;
* Only execute unresolved test cases once. Disable them afterwards. (Needs discussion)&lt;br /&gt;
* Make general encoding/decoding routines for special feature names&lt;br /&gt;
** e.g. infix &amp;quot;+&amp;quot; (ES) &amp;lt;=&amp;gt; infix_plus (CDD)&lt;br /&gt;
* Cache debug values when extracting several test cases.&lt;br /&gt;
* Allow for test case extraction of passing routine invocations (with Jocelyn)&lt;br /&gt;
* Rebuilding manual test suite through extraction and synthesizing&lt;br /&gt;
* Find performance bottleneck of test case extraction and propose extraction method for second chance&lt;br /&gt;
* [LOW PRIORITY since inv prob solved] When the interpreter's runtime is severely damaged, a popup box shows up on Windows. The popup can be ignored and goes away. It does not come up when we shoot down the interpreter, but when a faulty instruction is executed. (E.g. the inv bug can trigger this dialog)&lt;br /&gt;
* [LOW PRIORITY since unlikely situation] Test cases with a 'False' (not satisfiable) invariant make the interpreter hang and ES crash. Add a test case and fix it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Bugs/Things to look at====&lt;br /&gt;
&lt;br /&gt;
* For big projects (like ES itself) background compilation of the interpreter leads to completely unresponsive ES&lt;br /&gt;
* [NO EASY SOLUTION POSSIBLE] Crash upon closing of EiffelStudio (feature call on void target in breakpoint tool)&lt;br /&gt;
&lt;br /&gt;
===Manu===&lt;br /&gt;
* Install CDD in student labs (Manu)&lt;br /&gt;
* Devise questionnaires&lt;br /&gt;
** Initial (due next meeting after Manu's vacation)&lt;br /&gt;
** Midterm&lt;br /&gt;
** Final&lt;br /&gt;
* Analyze questionnaires&lt;br /&gt;
* Rework example profiles&lt;br /&gt;
* Assis will use CDD to get a feel for it and create a test suite for the students to start with&lt;br /&gt;
&lt;br /&gt;
=== Bernd ===&lt;br /&gt;
* Define Project for SoftEng&lt;br /&gt;
** Find test suite for us to test students code&lt;br /&gt;
** Find project with pure functional part&lt;br /&gt;
&lt;br /&gt;
===Unassigned===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Beta Tester Feedback====&lt;br /&gt;
(Please put your name so we can get back to you in the case of questions)&lt;br /&gt;
* (Jocelyn) It should be possible to set the location of the cdd_tests directory (what if location of .ecf file is not readable?) [delayed]&lt;br /&gt;
** home directory? application_data directory?  [Jocelyn]&lt;br /&gt;
* (Jocelyn) There should be UI support for deletion of Test Case [difficult for test routines, possible for classes. documentation added.]&lt;br /&gt;
* (Jocelyn) [BUG] the manual test case creation dialog should check whether a class with the chosen name is already in the system [delayed]&lt;br /&gt;
* (Jocelyn) It would be nice if there was a way to configure the timeout for the interpreter [Stefan will ask Jocelyn for concrete example]&lt;br /&gt;
&lt;br /&gt;
== Questionnaires ==&lt;br /&gt;
&lt;br /&gt;
* Use ELBA&lt;br /&gt;
&lt;br /&gt;
== Software Engineering Project ==&lt;br /&gt;
&lt;br /&gt;
* Task 1: Implement VCard API&lt;br /&gt;
* Task 2: Implement Mime API&lt;br /&gt;
* Task 3: Write test cases to reveal faults in foreign VCard implementations&lt;br /&gt;
* Task 4: Write test cases to reveal faults in foreign Mime implementations&lt;br /&gt;
&lt;br /&gt;
* Group A:&lt;br /&gt;
** Task 1, Manual Tests&lt;br /&gt;
** Task 2, Extracted Tests&lt;br /&gt;
** Task 3, Manual Tests&lt;br /&gt;
** Task 4, Extracted Tests&lt;br /&gt;
* Group B:&lt;br /&gt;
** Task 1, Extracted Tests&lt;br /&gt;
** Task 2, Manual Tests&lt;br /&gt;
** Task 3, Extracted Tests&lt;br /&gt;
** Task 4, Manual Tests&lt;br /&gt;
&lt;br /&gt;
* One large project, but divided into testable subcomponents&lt;br /&gt;
* Students required to write test cases&lt;br /&gt;
* Fixed API to make things uniformly testable&lt;br /&gt;
* Public/Secret test cases (similar to Zeller course)&lt;br /&gt;
* Competitions:&lt;br /&gt;
** Group A test cases applied to Group A project&lt;br /&gt;
** Group A test cases applied to Group B project&lt;br /&gt;
&lt;br /&gt;
* Idea how to cancel out bias while allowing fair grading:&lt;br /&gt;
** Subtasks 1 and 2, Students divided into groups A and B&lt;br /&gt;
** First both groups do 1, A is allowed to use tool, B not&lt;br /&gt;
** Then both groups do 2, B is allowed to use tool, A not&lt;br /&gt;
** Bias cancellation:&lt;br /&gt;
*** Project complexity&lt;br /&gt;
*** Experience of students&lt;br /&gt;
*** Experience gained in first subtask, when developing second&lt;br /&gt;
*** Risk: One task might be better suited for the tool than the other&lt;br /&gt;
&lt;br /&gt;
== Data to harvest ==&lt;br /&gt;
* IDE Time with CDD(extraction) enabled / IDE Time with CDD(extraction) disabled&lt;br /&gt;
* Test Case Source (just final version, or all versions?)&lt;br /&gt;
** Use Profiler to get coverage approximation&lt;br /&gt;
* TC Meta Data (with timestamps -&amp;gt; Evolution of Test Case)&lt;br /&gt;
** TC Added/Removed/Changed&lt;br /&gt;
** TC Outcome (transitions from FAIL/PASS/UNRESOLVED[bad_communication &amp;lt;-&amp;gt; does_not_compile &amp;lt;-&amp;gt; bad_input])&lt;br /&gt;
** TC execution time&lt;br /&gt;
** Modifications to a test case (compiler needs to recompile)&lt;br /&gt;
* Development Session Data&lt;br /&gt;
** IDE Startup&lt;br /&gt;
** File save&lt;br /&gt;
* Questionnaires&lt;br /&gt;
** Initial&lt;br /&gt;
** Final&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Logging ==&lt;br /&gt;
&lt;br /&gt;
* &amp;quot;Meta&amp;quot; log entries&lt;br /&gt;
** Project opened (easy)&lt;br /&gt;
** CDD enable/disable (easy)&lt;br /&gt;
** general EiffelStudio action log entries for Developer Behaviour (harder... what do we need??)&lt;br /&gt;
&lt;br /&gt;
* CDD actions log entries&lt;br /&gt;
** Compilation of interpreter (start, end, duration)&lt;br /&gt;
** Execution of test cases (start, end, do we need individual duration of each test cases that gets executed?)&lt;br /&gt;
** Extraction of new test case (extraction time)&lt;br /&gt;
&lt;br /&gt;
* Test Suite Status&lt;br /&gt;
** Test suite: after each refresh log list of all test cases (class level, needed because it's not possible to know when manual test cases get added...)&lt;br /&gt;
** Test class: (do we need info on this level)&lt;br /&gt;
** Test routine: status (basically as you see it in the tool)&lt;br /&gt;
&lt;br /&gt;
==Experiment Hypotheses==&lt;br /&gt;
&lt;br /&gt;
===Comparisons===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
! contracts&lt;br /&gt;
! man. tests&lt;br /&gt;
! playing&lt;br /&gt;
! extr.&lt;br /&gt;
! synth.&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
| A B&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A B&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| experiment&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| B&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A B&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
|-&lt;br /&gt;
| ISSTA&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| B&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| B&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===How reliably can we extract test cases that reproduce the original failure?===&lt;br /&gt;
* Log original exception and exception received from first test execution&lt;br /&gt;
&lt;br /&gt;
===Are the extracted tests useful for debugging?===&lt;br /&gt;
* Ask developers, using CDD&lt;br /&gt;
&lt;br /&gt;
===What is the (time and memory) overhead of enabling extraction?===&lt;br /&gt;
&lt;br /&gt;
===What is the size of the extracted test cases?===&lt;br /&gt;
&lt;br /&gt;
===Are we able to reproduce bugs from industry?===&lt;br /&gt;
Take bug repository (say EiffelStudio, Gobo, eposix, ...). Use buggy version, try to reproduce bug, see if extracted test case is good.&lt;br /&gt;
&lt;br /&gt;
===Does it make a difference in the quality of the code, whether one tests manually or extracts them?===&lt;br /&gt;
* Compare projects using extracted tests and manual tests to ref test suite&lt;br /&gt;
&lt;br /&gt;
===Do contracts replace traditional testing oracles?===&lt;br /&gt;
* Original API without contracts&lt;br /&gt;
* Run failing test cases (the ones we get from second part) with reference API with contracts&lt;br /&gt;
* How many times does the contract replace the testing oracle?&lt;/div&gt;</summary>
		<author><name>Mogh</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=CddMeeting_13_02_2008&amp;diff=10875</id>
		<title>CddMeeting 13 02 2008</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=CddMeeting_13_02_2008&amp;diff=10875"/>
				<updated>2008-03-26T17:25:34Z</updated>
		
		<summary type="html">&lt;p&gt;Mogh: /* Stefan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:CDD]]&lt;br /&gt;
=CDD Meeting, Wednesday, 14.02.2008, 10:00=&lt;br /&gt;
&lt;br /&gt;
== Next Meeting ==&lt;br /&gt;
* 15.02.2008; 14:15&lt;br /&gt;
&lt;br /&gt;
== Tasks ==&lt;br /&gt;
&lt;br /&gt;
===Andreas===&lt;br /&gt;
* Formulate Experiment Hypothesis (Andreas)&lt;br /&gt;
* Fix AutoTest for courses&lt;br /&gt;
** New release&lt;br /&gt;
* Write documentation and video tutorials (together with final release)&lt;br /&gt;
* Finish tuple_002 test case&lt;br /&gt;
* [delayed] Retest if test cases with errors are properly ignored (after 6.1 port)&lt;br /&gt;
* [done] Add timeout judgement&lt;br /&gt;
* Build releasable delivery for Linux (after each Beta I guess...)&lt;br /&gt;
&lt;br /&gt;
===Arno===&lt;br /&gt;
&lt;br /&gt;
====Bug Fixing====&lt;br /&gt;
* [ON WINDOWS NO CRASH OCCURS] Check why EiffelStudio quits after debugging a test routine and ignoring violations (ask Jocelyn)&lt;br /&gt;
&lt;br /&gt;
===Ilinca===&lt;br /&gt;
&lt;br /&gt;
* Integrate variable declarations into AutoTest trunk (by 8.2.2008)&lt;br /&gt;
&lt;br /&gt;
===Stefan===&lt;br /&gt;
* [RECURRENT] Build releasable delivery on Windows&lt;br /&gt;
* [RECURRENT, currently valid] Check log for XML validity&lt;br /&gt;
&lt;br /&gt;
* [DONE] Don't store test case if exactly the same one exists already.&lt;br /&gt;
* [DONE] Look at test case from Manu that doesn't compile (ARRAY is printed instead of ARRAY[STRING])&lt;br /&gt;
* [DONE] Add link to &amp;quot;common problems&amp;quot; website in context window for test cases that time out&lt;br /&gt;
&lt;br /&gt;
* [DONE] Invariant violations occurring before routine invocation make a test case fail. It should be invalid. How to reproduce: Launch bank account example with final 2. Implement `BANK_ACCOUNT.withdraw' correctly. Replace postcondition with &amp;quot;withdrawn: balance = old balance + an_amount&amp;quot;. Launch app, try to withdraw 700. You will get a test case that fails. It should be invalid though.&lt;br /&gt;
&lt;br /&gt;
* Inspect the &amp;quot;cannot read file&amp;quot; problem that occurred in the lecture. I can now also reproduce it on my Linux box.&lt;br /&gt;
* Log:&lt;br /&gt;
** Playing around (duration and number of times)&lt;br /&gt;
** FG-test case execution (duration, number of times, exception thrown, which test case, outcome)&lt;br /&gt;
* Clean cdd target when main target is cleaned too.&lt;br /&gt;
* When detecting whether compilation succeeded, include &amp;quot;corrupted&amp;quot; as a keyword when parsing the output message (as in the &amp;quot;project corrupted, cannot continue&amp;quot; message)&lt;br /&gt;
&lt;br /&gt;
* Make newly extracted test cases show up &amp;quot;expanded&amp;quot; in GUI treeview&lt;br /&gt;
* Only execute unresolved test cases once. Disable them afterwards. (Needs discussion)&lt;br /&gt;
* Make general encoding/decoding routines for special feature names&lt;br /&gt;
** e.g. infix &amp;quot;+&amp;quot; (ES) &amp;lt;=&amp;gt; infix_plus (CDD)&lt;br /&gt;
* Cache debug values when extracting several test cases.&lt;br /&gt;
* Allow for test case extraction of passing routine invocations (with Jocelyn)&lt;br /&gt;
* Rebuilding manual test suite through extraction and synthesizing&lt;br /&gt;
* Find performance bottleneck of test case extraction and propose extraction method for second chance&lt;br /&gt;
* [LOW PRIORITY since inv prob solved] When the interpreter's runtime is severely damaged, a popup box shows up on Windows. The popup can be ignored and goes away. It does not come up when we shoot down the interpreter, but when a faulty instruction is executed. (E.g. the inv bug can trigger this dialog)&lt;br /&gt;
* [LOW PRIORITY since unlikely situation] Test cases with a 'False' (not satisfiable) invariant make the interpreter hang and ES crash. Add a test case and fix it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Bugs/Things to look at====&lt;br /&gt;
&lt;br /&gt;
* For big projects (like ES itself) background compilation of the interpreter leads to completely unresponsive ES&lt;br /&gt;
* [NO EASY SOLUTION POSSIBLE] Crash upon closing of EiffelStudio (feature call on void target in breakpoint tool)&lt;br /&gt;
&lt;br /&gt;
===Manu===&lt;br /&gt;
* Install CDD in student labs (Manu)&lt;br /&gt;
* Devise questionnaires&lt;br /&gt;
** Initial (due next meeting after Manu's vacation)&lt;br /&gt;
** Midterm&lt;br /&gt;
** Final&lt;br /&gt;
* Analyze questionnaires&lt;br /&gt;
* Rework example profiles&lt;br /&gt;
* Assis will use CDD to get a feel for it and create a test suite for the students to start with&lt;br /&gt;
&lt;br /&gt;
=== Bernd ===&lt;br /&gt;
* Define Project for SoftEng&lt;br /&gt;
** Find test suite for us to test students code&lt;br /&gt;
** Find project with pure functional part&lt;br /&gt;
&lt;br /&gt;
===Unassigned===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Beta Tester Feedback====&lt;br /&gt;
(Please put your name so we can get back to you in the case of questions)&lt;br /&gt;
* (Jocelyn) It should be possible to set the location of the cdd_tests directory (what if location of .ecf file is not readable?) [delayed]&lt;br /&gt;
** home directory? application_data directory?  [Jocelyn]&lt;br /&gt;
* (Jocelyn) There should be UI support for deletion of Test Case [difficult for test routines, possible for classes. documentation added.]&lt;br /&gt;
* (Jocelyn) [BUG] the manual test case creation dialog should check whether a class with the chosen name is already in the system [delayed]&lt;br /&gt;
* (Jocelyn) It would be nice if there was a way to configure the timeout for the interpreter [Stefan will ask Jocelyn for concrete example]&lt;br /&gt;
&lt;br /&gt;
== Questionnaires ==&lt;br /&gt;
&lt;br /&gt;
* Use ELBA&lt;br /&gt;
&lt;br /&gt;
== Software Engineering Project ==&lt;br /&gt;
&lt;br /&gt;
* Task 1: Implement VCard API&lt;br /&gt;
* Task 2: Implement Mime API&lt;br /&gt;
* Task 3: Write test cases to reveal faults in foreign VCard implementations&lt;br /&gt;
* Task 4: Write test cases to reveal faults in foreign Mime implementations&lt;br /&gt;
&lt;br /&gt;
* Group A:&lt;br /&gt;
** Task 1, Manual Tests&lt;br /&gt;
** Task 2, Extracted Tests&lt;br /&gt;
** Task 3, Manual Tests&lt;br /&gt;
** Task 4, Extracted Tests&lt;br /&gt;
* Group B:&lt;br /&gt;
** Task 1, Extracted Tests&lt;br /&gt;
** Task 2, Manual Tests&lt;br /&gt;
** Task 3, Extracted Tests&lt;br /&gt;
** Task 4, Manual Tests&lt;br /&gt;
&lt;br /&gt;
* One large project, but divided into testable subcomponents&lt;br /&gt;
* Students required to write test cases&lt;br /&gt;
* Fixed API to make things uniformly testable&lt;br /&gt;
* Public/Secret test cases (similar to Zeller course)&lt;br /&gt;
* Competitions:&lt;br /&gt;
** Group A test cases applied to Group A project&lt;br /&gt;
** Group A test cases applied to Group B project&lt;br /&gt;
&lt;br /&gt;
* Idea how to cancel out bias while allowing fair grading:&lt;br /&gt;
** Subtasks 1 and 2, Students divided into groups A and B&lt;br /&gt;
** First both groups do 1, A is allowed to use tool, B not&lt;br /&gt;
** Then both groups do 2, B is allowed to use tool, A not&lt;br /&gt;
** Bias cancellation:&lt;br /&gt;
*** Project complexity&lt;br /&gt;
*** Experience of students&lt;br /&gt;
*** Experience gained in first subtask, when developing second&lt;br /&gt;
*** Risk: One task might be better suited for the tool than the other&lt;br /&gt;
&lt;br /&gt;
== Data to harvest ==&lt;br /&gt;
* IDE Time with CDD(extraction) enabled / IDE Time with CDD(extraction) disabled&lt;br /&gt;
* Test Case Source (just final version, or all versions?)&lt;br /&gt;
** Use Profiler to get coverage approximation&lt;br /&gt;
* TC Meta Data (with timestamps -&amp;gt; Evolution of Test Case)&lt;br /&gt;
** TC Added/Removed/Changed&lt;br /&gt;
** TC Outcome (transitions from FAIL/PASS/UNRESOLVED[bad_communication &amp;lt;-&amp;gt; does_not_compile &amp;lt;-&amp;gt; bad_input])&lt;br /&gt;
** TC execution time&lt;br /&gt;
** Modifications to a test case (compiler needs to recompile)&lt;br /&gt;
* Development Session Data&lt;br /&gt;
** IDE Startup&lt;br /&gt;
** File save&lt;br /&gt;
* Questionnaires&lt;br /&gt;
** Initial&lt;br /&gt;
** Final&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Logging ==&lt;br /&gt;
&lt;br /&gt;
* &amp;quot;Meta&amp;quot; log entries&lt;br /&gt;
** Project opened (easy)&lt;br /&gt;
** CDD enable/disable (easy)&lt;br /&gt;
** general EiffelStudio action log entries for Developer Behaviour (harder... what do we need??)&lt;br /&gt;
&lt;br /&gt;
* CDD actions log entries&lt;br /&gt;
** Compilation of interpreter (start, end, duration)&lt;br /&gt;
** Execution of test cases (start, end, do we need individual duration of each test cases that gets executed?)&lt;br /&gt;
** Extraction of new test case (extraction time)&lt;br /&gt;
&lt;br /&gt;
* Test Suite Status&lt;br /&gt;
** Test suite: after each refresh log list of all test cases (class level, needed because it's not possible to know when manual test cases get added...)&lt;br /&gt;
** Test class: (do we need info on this level)&lt;br /&gt;
** Test routine: status (basically as you see it in the tool)&lt;br /&gt;
&lt;br /&gt;
==Experiment Hypotheses==&lt;br /&gt;
&lt;br /&gt;
===Comparisons===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
! contracts&lt;br /&gt;
! man. tests&lt;br /&gt;
! playing&lt;br /&gt;
! extr.&lt;br /&gt;
! synth.&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
| A B&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A B&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| experiment&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| B&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A B&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
|-&lt;br /&gt;
| ISSTA&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| B&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| B&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===How reliably can we extract test cases that reproduce the original failure?===&lt;br /&gt;
* Log original exception and exception received from first test execution&lt;br /&gt;
&lt;br /&gt;
===Are the extracted tests useful for debugging?===&lt;br /&gt;
* Ask developers, using CDD&lt;br /&gt;
&lt;br /&gt;
===What is the (time and memory) overhead of enabling extraction?===&lt;br /&gt;
&lt;br /&gt;
===What is the size of the extracted test cases?===&lt;br /&gt;
&lt;br /&gt;
===Are we able to reproduce bugs from industry?===&lt;br /&gt;
Take bug repository (say EiffelStudio, Gobo, eposix, ...). Use buggy version, try to reproduce bug, see if extracted test case is good.&lt;br /&gt;
&lt;br /&gt;
===Does it make a difference in the quality of the code, whether one tests manually or extracts them?===&lt;br /&gt;
* Compare projects using extracted tests and manual tests to ref test suite&lt;br /&gt;
&lt;br /&gt;
===Do contracts replace traditional testing oracles?===&lt;br /&gt;
* Original API without contracts&lt;br /&gt;
* Run failing test cases (the ones we get from second part) with reference API with contracts&lt;br /&gt;
* How many times does the contract replace the testing oracle?&lt;/div&gt;</summary>
		<author><name>Mogh</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=CddMeeting_13_02_2008&amp;diff=10873</id>
		<title>CddMeeting 13 02 2008</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=CddMeeting_13_02_2008&amp;diff=10873"/>
				<updated>2008-03-26T15:15:09Z</updated>
		
		<summary type="html">&lt;p&gt;Mogh: /* Stefan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:CDD]]&lt;br /&gt;
=CDD Meeting, Wednesday, 14.02.2008, 10:00=&lt;br /&gt;
&lt;br /&gt;
== Next Meeting ==&lt;br /&gt;
* 15.02.2008; 14:15&lt;br /&gt;
&lt;br /&gt;
== Tasks ==&lt;br /&gt;
&lt;br /&gt;
===Andreas===&lt;br /&gt;
* Formulate Experiment Hypothesis (Andreas)&lt;br /&gt;
* Fix AutoTest for courses&lt;br /&gt;
** New release&lt;br /&gt;
* Write documentation and video tutorials (together with final release)&lt;br /&gt;
* Finish tuple_002 test case&lt;br /&gt;
* [delayed] Retest if test cases with errors are properly ignored (after 6.1 port)&lt;br /&gt;
* [done] Add timeout judgement&lt;br /&gt;
* Build releasable delivery for Linux (after each Beta I guess...)&lt;br /&gt;
&lt;br /&gt;
===Arno===&lt;br /&gt;
&lt;br /&gt;
====Bug Fixing====&lt;br /&gt;
* [ON WINDOWS NO CRASH OCCURS] Check why EiffelStudio quits after debugging a test routine and ignoring violations (ask Jocelyn)&lt;br /&gt;
&lt;br /&gt;
===Ilinca===&lt;br /&gt;
&lt;br /&gt;
* Integrate variable declarations into AutoTest trunk (by 8.2.2008)&lt;br /&gt;
&lt;br /&gt;
===Stefan===&lt;br /&gt;
* [RECURRENT] Build releasable delivery on Windows&lt;br /&gt;
* [RECURRENT, currently valid] Check log for XML validity&lt;br /&gt;
&lt;br /&gt;
* [DONE] Don't store test case if exactly the same one exists already.&lt;br /&gt;
* [DONE] Look at test case from Manu that doesn't compile (ARRAY is printed instead of ARRAY[STRING])&lt;br /&gt;
* [DONE] Add link to &amp;quot;common problems&amp;quot; website in context window for test cases that time out&lt;br /&gt;
&lt;br /&gt;
* Invariant violations occurring before routine invocation make a test case fail. It should be invalid. How to reproduce: Launch bank account example with final 2. Implement `BANK_ACCOUNT.withdraw' correctly. Replace postcondition with &amp;quot;withdrawn: balance = old balance + an_amount&amp;quot;. Launch app, try to withdraw 700. You will get a test case that fails. It should be invalid though.&lt;br /&gt;
&lt;br /&gt;
* Inspect the &amp;quot;cannot read file&amp;quot; problem that occurred in the lecture. I can now also reproduce it on my Linux box.&lt;br /&gt;
* When detecting whether compilation succeeded, include &amp;quot;corrupted&amp;quot; as a keyword when parsing the output message (as in the &amp;quot;project corrupted, cannot continue&amp;quot; message)&lt;br /&gt;
* Log:&lt;br /&gt;
** Playing around (duration and number of times)&lt;br /&gt;
** FG-test case execution (duration, number of times, exception thrown, which test case, outcome)&lt;br /&gt;
&lt;br /&gt;
* Make newly extracted test cases show up &amp;quot;expanded&amp;quot; in GUI treeview&lt;br /&gt;
* Only execute unresolved test cases once. Disable them afterwards. (Needs discussion)&lt;br /&gt;
* Make general encoding/decoding routines for special feature names&lt;br /&gt;
** e.g. infix &amp;quot;+&amp;quot; (ES) &amp;lt;=&amp;gt; infix_plus (CDD)&lt;br /&gt;
* Cache debug values when extracting several test cases.&lt;br /&gt;
* Allow for test case extraction of passing routine invocations (with Jocelyn)&lt;br /&gt;
* Rebuilding manual test suite through extraction and synthesizing&lt;br /&gt;
* Find performance bottleneck of test case extraction and propose extraction method for second chance&lt;br /&gt;
* [LOW PRIORITY since inv prob solved] When the interpreter's runtime is severely damaged, a popup box shows up on Windows. The popup can be ignored and goes away. It does not come up when we shoot down the interpreter, but when a faulty instruction is executed. (E.g. the inv bug can trigger this dialog)&lt;br /&gt;
* [LOW PRIORITY since unlikely situation] Test cases with a 'False' (not satisfiable) invariant make the interpreter hang and ES crash. Add a test case and fix it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Bugs/Things to look at====&lt;br /&gt;
&lt;br /&gt;
* For big projects (like ES itself) background compilation of the interpreter leads to completely unresponsive ES&lt;br /&gt;
* [NO EASY SOLUTION POSSIBLE] Crash upon closing of EiffelStudio (feature call on void target in breakpoint tool)&lt;br /&gt;
&lt;br /&gt;
===Manu===&lt;br /&gt;
* Install CDD in student labs (Manu)&lt;br /&gt;
* Devise questionnaires&lt;br /&gt;
** Initial (due next meeting after Manu's vacation)&lt;br /&gt;
** Midterm&lt;br /&gt;
** Final&lt;br /&gt;
* Analyze questionnaires&lt;br /&gt;
* Rework example profiles&lt;br /&gt;
* Assis will use CDD to get a feel for it and create a test suite for the students to start with&lt;br /&gt;
&lt;br /&gt;
=== Bernd ===&lt;br /&gt;
* Define Project for SoftEng&lt;br /&gt;
** Find test suite for us to test students code&lt;br /&gt;
** Find project with pure functional part&lt;br /&gt;
&lt;br /&gt;
===Unassigned===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Beta Tester Feedback====&lt;br /&gt;
(Please put your name so we can get back to you in the case of questions)&lt;br /&gt;
* (Jocelyn) It should be possible to set the location of the cdd_tests directory (what if location of .ecf file is not readable?) [delayed]&lt;br /&gt;
** home directory? application_data directory?  [Jocelyn]&lt;br /&gt;
* (Jocelyn) There should be UI support for deletion of Test Case [difficult for test routines, possible for classes. documentation added.]&lt;br /&gt;
* (Jocelyn) [BUG] the manual test case creation dialog should check whether a class with the chosen name is already in the system [delayed]&lt;br /&gt;
* (Jocelyn) It would be nice if there was a way to configure the timeout for the interpreter [Stefan will ask Jocelyn for concrete example]&lt;br /&gt;
&lt;br /&gt;
== Questionnaires ==&lt;br /&gt;
&lt;br /&gt;
* Use ELBA&lt;br /&gt;
&lt;br /&gt;
== Software Engineering Project ==&lt;br /&gt;
&lt;br /&gt;
* Task 1: Implement VCard API&lt;br /&gt;
* Task 2: Implement Mime API&lt;br /&gt;
* Task 3: Write test cases to reveal faults in foreign VCard implementations&lt;br /&gt;
* Task 4: Write test cases to reveal faults in foreign Mime implementations&lt;br /&gt;
&lt;br /&gt;
* Group A:&lt;br /&gt;
** Task 1, Manual Tests&lt;br /&gt;
** Task 2, Extracted Tests&lt;br /&gt;
** Task 3, Manual Tests&lt;br /&gt;
** Task 4, Extracted Tests&lt;br /&gt;
* Group B:&lt;br /&gt;
** Task 1, Extracted Tests&lt;br /&gt;
** Task 2, Manual Tests&lt;br /&gt;
** Task 3, Extracted Tests&lt;br /&gt;
** Task 4, Manual Tests&lt;br /&gt;
&lt;br /&gt;
* One large project, but divided into testable subcomponents&lt;br /&gt;
* Students required to write test cases&lt;br /&gt;
* Fixed API to make things uniformly testable&lt;br /&gt;
* Public/Secret test cases (similar to Zeller course)&lt;br /&gt;
* Competitions:&lt;br /&gt;
** Group A test cases applied to Group A project&lt;br /&gt;
** Group A test cases applied to Group B project&lt;br /&gt;
&lt;br /&gt;
* Idea how to cancel out bias while allowing fair grading:&lt;br /&gt;
** Subtasks 1 and 2, Students divided into groups A and B&lt;br /&gt;
** First both groups do 1, A is allowed to use tool, B not&lt;br /&gt;
** Then both groups do 2, B is allowed to use tool, A not&lt;br /&gt;
** Bias cancellation:&lt;br /&gt;
*** Project complexity&lt;br /&gt;
*** Experience of students&lt;br /&gt;
*** Experience gained in first subtask, when developing second&lt;br /&gt;
*** Risk: One task might be better suited for the tool than the other&lt;br /&gt;
&lt;br /&gt;
== Data to harvest ==&lt;br /&gt;
* IDE Time with CDD(extraction) enabled / IDE Time with CDD(extraction) disabled&lt;br /&gt;
* Test Case Source (just final version, or all versions?)&lt;br /&gt;
** Use Profiler to get coverage approximation&lt;br /&gt;
* TC Meta Data (with timestamps -&amp;gt; Evolution of Test Case)&lt;br /&gt;
** TC Added/Removed/Changed&lt;br /&gt;
** TC Outcome (transitions from FAIL/PASS/UNRESOLVED[bad_communication &amp;lt;-&amp;gt; does_not_compile &amp;lt;-&amp;gt; bad_input])&lt;br /&gt;
** TC execution time&lt;br /&gt;
** Modifications to a test case (compiler needs to recompile)&lt;br /&gt;
* Development Session Data&lt;br /&gt;
** IDE Startup&lt;br /&gt;
** File save&lt;br /&gt;
* Questionnaires&lt;br /&gt;
** Initial&lt;br /&gt;
** Final&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Logging ==&lt;br /&gt;
&lt;br /&gt;
* &amp;quot;Meta&amp;quot; log entries&lt;br /&gt;
** Project opened (easy)&lt;br /&gt;
** CDD enable/disable (easy)&lt;br /&gt;
** general EiffelStudio action log entries for Developer Behaviour (harder... what do we need??)&lt;br /&gt;
&lt;br /&gt;
* CDD actions log entries&lt;br /&gt;
** Compilation of interpreter (start, end, duration)&lt;br /&gt;
** Execution of test cases (start, end; do we need the individual duration of each test case that gets executed?)&lt;br /&gt;
** Extraction of new test case (extraction time)&lt;br /&gt;
&lt;br /&gt;
* Test Suite Status&lt;br /&gt;
** Test suite: after each refresh log list of all test cases (class level, needed because it's not possible to know when manual test cases get added...)&lt;br /&gt;
** Test class: (do we need info on this level)&lt;br /&gt;
** Test routine: status (basically as you see it in the tool)&lt;br /&gt;
&lt;br /&gt;
==Experiment Hypotheses==&lt;br /&gt;
&lt;br /&gt;
===Comparisons===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
! contracts&lt;br /&gt;
! man. tests&lt;br /&gt;
! playing&lt;br /&gt;
! extr.&lt;br /&gt;
! synth.&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
| A B&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A B&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| experiment&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| B&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A B&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
|-&lt;br /&gt;
| ISSTA&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| B&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| B&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===How reliably can we extract test cases that reproduce the original failure?===&lt;br /&gt;
* Log original exception and exception received from first test execution&lt;br /&gt;
&lt;br /&gt;
===Are the extracted tests useful for debugging?===&lt;br /&gt;
* Ask developers using CDD&lt;br /&gt;
&lt;br /&gt;
===What is the (time and memory) overhead of enabling extraction?===&lt;br /&gt;
&lt;br /&gt;
===What is the size of the extracted test cases?===&lt;br /&gt;
&lt;br /&gt;
===Are we able to reproduce bugs from industry?===&lt;br /&gt;
Take a bug repository (say EiffelStudio, Gobo, eposix, ...). Use the buggy version, try to reproduce the bug, and see whether the extracted test case is good.&lt;br /&gt;
&lt;br /&gt;
===Does it make a difference to the quality of the code whether one writes tests manually or extracts them?===&lt;br /&gt;
* Compare projects using extracted tests and manual tests to ref test suite&lt;br /&gt;
&lt;br /&gt;
===Do contracts replace traditional testing oracles?===&lt;br /&gt;
* Original API without contracts&lt;br /&gt;
* Run failing test cases (the ones we get from second part) with reference API with contracts&lt;br /&gt;
* How many times does the contract replace the testing oracle?&lt;/div&gt;</summary>
		<author><name>Mogh</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=CddMeeting_13_02_2008&amp;diff=10872</id>
		<title>CddMeeting 13 02 2008</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=CddMeeting_13_02_2008&amp;diff=10872"/>
				<updated>2008-03-26T14:50:12Z</updated>
		
		<summary type="html">&lt;p&gt;Mogh: /* Stefan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:CDD]]&lt;br /&gt;
=CDD Meeting, Wednesday, 14.02.2008, 10:00=&lt;br /&gt;
&lt;br /&gt;
== Next Meeting ==&lt;br /&gt;
* 15.02.2008; 14:15&lt;br /&gt;
&lt;br /&gt;
== Tasks ==&lt;br /&gt;
&lt;br /&gt;
===Andreas===&lt;br /&gt;
* Formulate Experiment Hypothesis (Andreas)&lt;br /&gt;
* Fix AutoTest for courses&lt;br /&gt;
** New release&lt;br /&gt;
* Write documentation and videos tutorials (together with final release)&lt;br /&gt;
* Finish tuple_002 test case&lt;br /&gt;
* [delayed] Retest if test cases with errors are properly ignored (after 6.1 port)&lt;br /&gt;
* [done] Add timeout judgement&lt;br /&gt;
* Build releasable delivery for Linux (after each Beta I guess...)&lt;br /&gt;
&lt;br /&gt;
===Arno===&lt;br /&gt;
&lt;br /&gt;
====Bug Fixing====&lt;br /&gt;
* [ON WINDOWS NO CRASH OCCURS] Check why EiffelStudio quits after debugging a test routine and ignoring violations (ask Jocelyn)&lt;br /&gt;
&lt;br /&gt;
===Ilinca===&lt;br /&gt;
&lt;br /&gt;
* Integrate variable declarations into AutoTest trunk (by 8.2.2008)&lt;br /&gt;
&lt;br /&gt;
===Stefan===&lt;br /&gt;
* [RECURRENT] Build releasable delivery on Windows&lt;br /&gt;
* [RECURRENT, currently valid] Check log for XML validity&lt;br /&gt;
&lt;br /&gt;
* [DONE] Don't store test case if exactly the same one exists already.&lt;br /&gt;
* [DONE] Look at test case from Manu that doesn't compile (ARRAY is printed instead of ARRAY[STRING])&lt;br /&gt;
&lt;br /&gt;
* Invariant violations occurring before routine invocation make a test case fail. It should be invalid. How to reproduce: Launch bank account example with final 2. Implement `BANK_ACCOUNT.withdraw' correctly. Replace postcondition with &amp;quot;withdrawn: balance = old balance + an_amount&amp;quot;. Launch app, try to withdraw 700. You will get a test case that fails. It should be invalid though.&lt;br /&gt;
&lt;br /&gt;
* Add link to &amp;quot;common problems&amp;quot; website in context window for test cases that time out&lt;br /&gt;
* Inspect the cannot-read-file problem that occurred in the lecture. I can now reproduce it on my Linux box as well.&lt;br /&gt;
* When detecting if compilation succeeded, include &amp;quot;corrupted&amp;quot; as a keyword when parsing the output message (as in the &amp;quot;project corrupted, cannot continue&amp;quot; message)&lt;br /&gt;
* Log:&lt;br /&gt;
** Playing around (duration and number of times)&lt;br /&gt;
** FG-test case execution (duration, number of times, exception thrown, which test case, outcome)&lt;br /&gt;
&lt;br /&gt;
* Make newly extracted test cases show up &amp;quot;expanded&amp;quot; in GUI treeview&lt;br /&gt;
* Only execute unresolved test cases once. Disable them afterwards. (Needs discussion)&lt;br /&gt;
* Make general encoding/decoding routines for special feature names&lt;br /&gt;
** e.g. infix &amp;quot;+&amp;quot; (ES) &amp;lt;=&amp;gt; infix_plus (CDD)&lt;br /&gt;
* Cache debug values when extracting several test cases.&lt;br /&gt;
* Allow for test case extraction of passing routine invocations (with Jocelyn)&lt;br /&gt;
* Rebuilding manual test suite through extraction and synthesizing&lt;br /&gt;
* Find performance bottleneck of test case extraction and propose extraction method for second chance&lt;br /&gt;
* [LOW PRIORITY since inv prob solved] When the runtime of the interpreter is severely damaged, a popup box shows up on Windows. The popup can be ignored and goes away. This popup does not come up when we shoot down the interpreter, but when a faulty instruction is executed. (E.g. the inv bug can trigger this dialog)&lt;br /&gt;
* [LOW PRIORITY since unlikely situation] Test cases with a 'False' (not satisfiable) invariant make the interpreter hang and ES crash. Add a test case and fix it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Bugs/Things to look at====&lt;br /&gt;
&lt;br /&gt;
* For big projects (like ES itself) background compilation of the interpreter leads to completely unresponsive ES&lt;br /&gt;
* [NO EASY SOLUTION POSSIBLE] Crash upon closing of EiffelStudio (feature call on void target in breakpoint tool)&lt;br /&gt;
&lt;br /&gt;
===Manu===&lt;br /&gt;
* Install CDD in student labs (Manu)&lt;br /&gt;
* Devise questionnaires&lt;br /&gt;
** Initial (due next meeting after Manu's vacation)&lt;br /&gt;
** Midterm&lt;br /&gt;
** Final&lt;br /&gt;
* Analyze questionnaires&lt;br /&gt;
* Rework example profiles&lt;br /&gt;
* Assis will use CDD to get a feel for it and create a test suite for the students to start with&lt;br /&gt;
&lt;br /&gt;
=== Bernd ===&lt;br /&gt;
* Define Project for SoftEng&lt;br /&gt;
** Find test suite for us to test students' code&lt;br /&gt;
** Find project with pure functional part&lt;br /&gt;
&lt;br /&gt;
===Unassigned===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Beta Tester Feedback====&lt;br /&gt;
(Please put your name so we can get back to you in the case of questions)&lt;br /&gt;
* (Jocelyn) It should be possible to set the location of the cdd_tests directory (what if location of .ecf file is not readable?) [delayed]&lt;br /&gt;
** home directory? application_data directory?  [Jocelyn]&lt;br /&gt;
* (Jocelyn) There should be UI support for deletion of Test Case [difficult for test routines, possible for classes. documentation added.]&lt;br /&gt;
* (Jocelyn) [BUG] the manual test case creation dialog should check if class with chosen name is already in the system [delayed]&lt;br /&gt;
* (Jocelyn) It would be nice if there was a way to configure the timeout for the interpreter [Stefan will ask Jocelyn for concrete example]&lt;br /&gt;
&lt;br /&gt;
== Questionnaires ==&lt;br /&gt;
&lt;br /&gt;
* Use ELBA&lt;br /&gt;
&lt;br /&gt;
== Software Engineering Project ==&lt;br /&gt;
&lt;br /&gt;
* Task 1: Implement VCard API&lt;br /&gt;
* Task 2: Implement Mime API&lt;br /&gt;
* Task 3: Write test cases to reveal faults in foreign VCard implementations&lt;br /&gt;
* Task 4: Write test cases to reveal faults in foreign Mime implementations&lt;br /&gt;
&lt;br /&gt;
* Group A:&lt;br /&gt;
** Task 1, Manual Tests&lt;br /&gt;
** Task 2, Extracted Tests&lt;br /&gt;
** Task 3, Manual Tests&lt;br /&gt;
** Task 4, Extracted Tests&lt;br /&gt;
* Group B:&lt;br /&gt;
** Task 1, Extracted Tests&lt;br /&gt;
** Task 2, Manual Tests&lt;br /&gt;
** Task 3, Extracted Tests&lt;br /&gt;
** Task 4, Manual Tests&lt;br /&gt;
&lt;br /&gt;
* One large project, but divided into testable subcomponents&lt;br /&gt;
* Students required to write test cases&lt;br /&gt;
* Fixed API to make things uniformly testable&lt;br /&gt;
* Public/Secret test cases (similar to Zeller course)&lt;br /&gt;
* Competitions:&lt;br /&gt;
** Group A test cases applied to Group A project&lt;br /&gt;
** Group A test cases applied to Group B project&lt;br /&gt;
&lt;br /&gt;
* Idea how to cancel out bias while allowing fair grading:&lt;br /&gt;
** Subtasks 1 and 2, Students divided into groups A and B&lt;br /&gt;
** First both groups do 1, A is allowed to use tool, B not&lt;br /&gt;
** Then both groups do 2, B is allowed to use tool, A not&lt;br /&gt;
** Bias cancellation:&lt;br /&gt;
*** Project complexity&lt;br /&gt;
*** Experience of students&lt;br /&gt;
*** Experience gained in first subtask, when developing second&lt;br /&gt;
*** Risk: One task might be better suited for the tool than the other&lt;br /&gt;
&lt;br /&gt;
== Data to harvest ==&lt;br /&gt;
* IDE Time with CDD(extraction) enabled / IDE Time with CDD(extraction) disabled&lt;br /&gt;
* Test Case Source (just final version, or all versions?)&lt;br /&gt;
** Use Profiler to get coverage approximation&lt;br /&gt;
* TC Meta Data (with timestamps -&amp;gt; Evolution of Test Case)&lt;br /&gt;
** TC Added/Removed/Changed&lt;br /&gt;
** TC Outcome (transitions from FAIL/PASS/UNRESOLVED[bad_communication &amp;lt;-&amp;gt; does_not_compile &amp;lt;-&amp;gt; bad_input])&lt;br /&gt;
** TC execution time&lt;br /&gt;
** Modifications to a test case (compiler needs to recompile)&lt;br /&gt;
* Development Session Data&lt;br /&gt;
** IDE Startup&lt;br /&gt;
** File save&lt;br /&gt;
* Questionnaires&lt;br /&gt;
** Initial&lt;br /&gt;
** Final&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Logging ==&lt;br /&gt;
&lt;br /&gt;
* &amp;quot;Meta&amp;quot; log entries&lt;br /&gt;
** Project opened (easy)&lt;br /&gt;
** CDD enable/disable (easy)&lt;br /&gt;
** general EiffelStudio action log entries for Developer Behaviour (harder... what do we need??)&lt;br /&gt;
&lt;br /&gt;
* CDD actions log entries&lt;br /&gt;
** Compilation of interpreter (start, end, duration)&lt;br /&gt;
** Execution of test cases (start, end; do we need the individual duration of each test case that gets executed?)&lt;br /&gt;
** Extraction of new test case (extraction time)&lt;br /&gt;
&lt;br /&gt;
* Test Suite Status&lt;br /&gt;
** Test suite: after each refresh log list of all test cases (class level, needed because it's not possible to know when manual test cases get added...)&lt;br /&gt;
** Test class: (do we need info on this level)&lt;br /&gt;
** Test routine: status (basically as you see it in the tool)&lt;br /&gt;
&lt;br /&gt;
==Experiment Hypotheses==&lt;br /&gt;
&lt;br /&gt;
===Comparisons===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
! contracts&lt;br /&gt;
! man. tests&lt;br /&gt;
! playing&lt;br /&gt;
! extr.&lt;br /&gt;
! synth.&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
| A B&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A B&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| experiment&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| B&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A B&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
|-&lt;br /&gt;
| ISSTA&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| B&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| B&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===How reliably can we extract test cases that reproduce the original failure?===&lt;br /&gt;
* Log original exception and exception received from first test execution&lt;br /&gt;
&lt;br /&gt;
===Are the extracted tests useful for debugging?===&lt;br /&gt;
* Ask developers using CDD&lt;br /&gt;
&lt;br /&gt;
===What is the (time and memory) overhead of enabling extraction?===&lt;br /&gt;
&lt;br /&gt;
===What is the size of the extracted test cases?===&lt;br /&gt;
&lt;br /&gt;
===Are we able to reproduce bugs from industry?===&lt;br /&gt;
Take a bug repository (say EiffelStudio, Gobo, eposix, ...). Use the buggy version, try to reproduce the bug, and see whether the extracted test case is good.&lt;br /&gt;
&lt;br /&gt;
===Does it make a difference to the quality of the code whether one writes tests manually or extracts them?===&lt;br /&gt;
* Compare projects using extracted tests and manual tests to ref test suite&lt;br /&gt;
&lt;br /&gt;
===Do contracts replace traditional testing oracles?===&lt;br /&gt;
* Original API without contracts&lt;br /&gt;
* Run failing test cases (the ones we get from second part) with reference API with contracts&lt;br /&gt;
* How many times does the contract replace the testing oracle?&lt;/div&gt;</summary>
		<author><name>Mogh</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=CddMeeting_13_02_2008&amp;diff=10871</id>
		<title>CddMeeting 13 02 2008</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=CddMeeting_13_02_2008&amp;diff=10871"/>
				<updated>2008-03-26T14:48:11Z</updated>
		
		<summary type="html">&lt;p&gt;Mogh: /* Bug Fixing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:CDD]]&lt;br /&gt;
=CDD Meeting, Wednesday, 14.02.2008, 10:00=&lt;br /&gt;
&lt;br /&gt;
== Next Meeting ==&lt;br /&gt;
* 15.02.2008; 14:15&lt;br /&gt;
&lt;br /&gt;
== Tasks ==&lt;br /&gt;
&lt;br /&gt;
===Andreas===&lt;br /&gt;
* Formulate Experiment Hypothesis (Andreas)&lt;br /&gt;
* Fix AutoTest for courses&lt;br /&gt;
** New release&lt;br /&gt;
* Write documentation and videos tutorials (together with final release)&lt;br /&gt;
* Finish tuple_002 test case&lt;br /&gt;
* [delayed] Retest if test cases with errors are properly ignored (after 6.1 port)&lt;br /&gt;
* [done] Add timeout judgement&lt;br /&gt;
* Build releasable delivery for Linux (after each Beta I guess...)&lt;br /&gt;
&lt;br /&gt;
===Arno===&lt;br /&gt;
&lt;br /&gt;
====Bug Fixing====&lt;br /&gt;
* [ON WINDOWS NO CRASH OCCURS] Check why EiffelStudio quits after debugging a test routine and ignoring violations (ask Jocelyn)&lt;br /&gt;
&lt;br /&gt;
===Ilinca===&lt;br /&gt;
&lt;br /&gt;
* Integrate variable declarations into AutoTest trunk (by 8.2.2008)&lt;br /&gt;
&lt;br /&gt;
===Stefan===&lt;br /&gt;
* [RECURRENT] Build releasable delivery on Windows&lt;br /&gt;
* [RECURRENT, currently valid] Check log for XML validity&lt;br /&gt;
&lt;br /&gt;
* Invariant violations occurring before routine invocation make a test case fail. It should be invalid. How to reproduce: Launch bank account example with final 2. Implement `BANK_ACCOUNT.withdraw' correctly. Replace postcondition with &amp;quot;withdrawn: balance = old balance + an_amount&amp;quot;. Launch app, try to withdraw 700. You will get a test case that fails. It should be invalid though.&lt;br /&gt;
&lt;br /&gt;
* Add link to &amp;quot;common problems&amp;quot; website in context window for test cases that time out&lt;br /&gt;
* Inspect the cannot-read-file problem that occurred in the lecture. I can now reproduce it on my Linux box as well.&lt;br /&gt;
* When detecting if compilation succeeded, include &amp;quot;corrupted&amp;quot; as a keyword when parsing the output message (as in the &amp;quot;project corrupted, cannot continue&amp;quot; message)&lt;br /&gt;
* Don't store test case if exactly the same one exists already.&lt;br /&gt;
* Look at test case from Manu that doesn't compile (ARRAY is printed instead of ARRAY[STRING])&lt;br /&gt;
* Log:&lt;br /&gt;
** Playing around (duration and number of times)&lt;br /&gt;
** FG-test case execution (duration, number of times, exception thrown, which test case, outcome)&lt;br /&gt;
&lt;br /&gt;
* Make newly extracted test cases show up &amp;quot;expanded&amp;quot; in GUI treeview&lt;br /&gt;
* Only execute unresolved test cases once. Disable them afterwards. (Needs discussion)&lt;br /&gt;
* Make general encoding/decoding routines for special feature names&lt;br /&gt;
** e.g. infix &amp;quot;+&amp;quot; (ES) &amp;lt;=&amp;gt; infix_plus (CDD)&lt;br /&gt;
* Cache debug values when extracting several test cases.&lt;br /&gt;
* Allow for test case extraction of passing routine invocations (with Jocelyn)&lt;br /&gt;
* Rebuilding manual test suite through extraction and synthesizing&lt;br /&gt;
* Find performance bottleneck of test case extraction and propose extraction method for second chance&lt;br /&gt;
* [LOW PRIORITY since inv prob solved] When the runtime of the interpreter is severely damaged, a popup box shows up on Windows. The popup can be ignored and goes away. This popup does not come up when we shoot down the interpreter, but when a faulty instruction is executed. (E.g. the inv bug can trigger this dialog)&lt;br /&gt;
* [LOW PRIORITY since unlikely situation] Test cases with a 'False' (not satisfiable) invariant make the interpreter hang and ES crash. Add a test case and fix it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Bugs/Things to look at====&lt;br /&gt;
&lt;br /&gt;
* For big projects (like ES itself) background compilation of the interpreter leads to completely unresponsive ES&lt;br /&gt;
* [NO EASY SOLUTION POSSIBLE] Crash upon closing of EiffelStudio (feature call on void target in breakpoint tool)&lt;br /&gt;
&lt;br /&gt;
===Manu===&lt;br /&gt;
* Install CDD in student labs (Manu)&lt;br /&gt;
* Devise questionnaires&lt;br /&gt;
** Initial (due next meeting after Manu's vacation)&lt;br /&gt;
** Midterm&lt;br /&gt;
** Final&lt;br /&gt;
* Analyze questionnaires&lt;br /&gt;
* Rework example profiles&lt;br /&gt;
* Assis will use CDD to get a feel for it and create a test suite for the students to start with&lt;br /&gt;
&lt;br /&gt;
=== Bernd ===&lt;br /&gt;
* Define Project for SoftEng&lt;br /&gt;
** Find test suite for us to test students' code&lt;br /&gt;
** Find project with pure functional part&lt;br /&gt;
&lt;br /&gt;
===Unassigned===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Beta Tester Feedback====&lt;br /&gt;
(Please put your name so we can get back to you in the case of questions)&lt;br /&gt;
* (Jocelyn) It should be possible to set the location of the cdd_tests directory (what if location of .ecf file is not readable?) [delayed]&lt;br /&gt;
** home directory? application_data directory?  [Jocelyn]&lt;br /&gt;
* (Jocelyn) There should be UI support for deletion of Test Case [difficult for test routines, possible for classes. documentation added.]&lt;br /&gt;
* (Jocelyn) [BUG] the manual test case creation dialog should check if class with chosen name is already in the system [delayed]&lt;br /&gt;
* (Jocelyn) It would be nice if there was a way to configure the timeout for the interpreter [Stefan will ask Jocelyn for concrete example]&lt;br /&gt;
&lt;br /&gt;
== Questionnaires ==&lt;br /&gt;
&lt;br /&gt;
* Use ELBA&lt;br /&gt;
&lt;br /&gt;
== Software Engineering Project ==&lt;br /&gt;
&lt;br /&gt;
* Task 1: Implement VCard API&lt;br /&gt;
* Task 2: Implement Mime API&lt;br /&gt;
* Task 3: Write test cases to reveal faults in foreign VCard implementations&lt;br /&gt;
* Task 4: Write test cases to reveal faults in foreign Mime implementations&lt;br /&gt;
&lt;br /&gt;
* Group A:&lt;br /&gt;
** Task 1, Manual Tests&lt;br /&gt;
** Task 2, Extracted Tests&lt;br /&gt;
** Task 3, Manual Tests&lt;br /&gt;
** Task 4, Extracted Tests&lt;br /&gt;
* Group B:&lt;br /&gt;
** Task 1, Extracted Tests&lt;br /&gt;
** Task 2, Manual Tests&lt;br /&gt;
** Task 3, Extracted Tests&lt;br /&gt;
** Task 4, Manual Tests&lt;br /&gt;
&lt;br /&gt;
* One large project, but divided into testable subcomponents&lt;br /&gt;
* Students required to write test cases&lt;br /&gt;
* Fixed API to make things uniformly testable&lt;br /&gt;
* Public/Secret test cases (similar to Zeller course)&lt;br /&gt;
* Competitions:&lt;br /&gt;
** Group A test cases applied to Group A project&lt;br /&gt;
** Group A test cases applied to Group B project&lt;br /&gt;
&lt;br /&gt;
* Idea how to cancel out bias while allowing fair grading:&lt;br /&gt;
** Subtasks 1 and 2, Students divided into groups A and B&lt;br /&gt;
** First both groups do 1, A is allowed to use tool, B not&lt;br /&gt;
** Then both groups do 2, B is allowed to use tool, A not&lt;br /&gt;
** Bias cancellation:&lt;br /&gt;
*** Project complexity&lt;br /&gt;
*** Experience of students&lt;br /&gt;
*** Experience gained in first subtask, when developing second&lt;br /&gt;
*** Risk: One task might be better suited for the tool than the other&lt;br /&gt;
&lt;br /&gt;
== Data to harvest ==&lt;br /&gt;
* IDE Time with CDD(extraction) enabled / IDE Time with CDD(extraction) disabled&lt;br /&gt;
* Test Case Source (just final version, or all versions?)&lt;br /&gt;
** Use Profiler to get coverage approximation&lt;br /&gt;
* TC Meta Data (with timestamps -&amp;gt; Evolution of Test Case)&lt;br /&gt;
** TC Added/Removed/Changed&lt;br /&gt;
** TC Outcome (transitions from FAIL/PASS/UNRESOLVED[bad_communication &amp;lt;-&amp;gt; does_not_compile &amp;lt;-&amp;gt; bad_input])&lt;br /&gt;
** TC execution time&lt;br /&gt;
** Modifications to a test case (compiler needs to recompile)&lt;br /&gt;
* Development Session Data&lt;br /&gt;
** IDE Startup&lt;br /&gt;
** File save&lt;br /&gt;
* Questionnaires&lt;br /&gt;
** Initial&lt;br /&gt;
** Final&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Logging ==&lt;br /&gt;
&lt;br /&gt;
* &amp;quot;Meta&amp;quot; log entries&lt;br /&gt;
** Project opened (easy)&lt;br /&gt;
** CDD enable/disable (easy)&lt;br /&gt;
** general EiffelStudio action log entries for Developer Behaviour (harder... what do we need??)&lt;br /&gt;
&lt;br /&gt;
* CDD actions log entries&lt;br /&gt;
** Compilation of interpreter (start, end, duration)&lt;br /&gt;
** Execution of test cases (start, end; do we need the individual duration of each test case that gets executed?)&lt;br /&gt;
** Extraction of new test case (extraction time)&lt;br /&gt;
&lt;br /&gt;
* Test Suite Status&lt;br /&gt;
** Test suite: after each refresh log list of all test cases (class level, needed because it's not possible to know when manual test cases get added...)&lt;br /&gt;
** Test class: (do we need info on this level)&lt;br /&gt;
** Test routine: status (basically as you see it in the tool)&lt;br /&gt;
&lt;br /&gt;
==Experiment Hypotheses==&lt;br /&gt;
&lt;br /&gt;
===Comparisons===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
! contracts&lt;br /&gt;
! man. tests&lt;br /&gt;
! playing&lt;br /&gt;
! extr.&lt;br /&gt;
! synth.&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
| A B&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A B&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| experiment&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| B&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A B&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
|-&lt;br /&gt;
| ISSTA&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| B&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| B&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===How reliably can we extract test cases that reproduce the original failure?===&lt;br /&gt;
* Log original exception and exception received from first test execution&lt;br /&gt;
&lt;br /&gt;
===Are the extracted tests useful for debugging?===&lt;br /&gt;
* Ask developers using CDD&lt;br /&gt;
&lt;br /&gt;
===What is the (time and memory) overhead of enabling extraction?===&lt;br /&gt;
&lt;br /&gt;
===What is the size of the extracted test cases?===&lt;br /&gt;
&lt;br /&gt;
===Are we able to reproduce bugs from industry?===&lt;br /&gt;
Take a bug repository (say EiffelStudio, Gobo, eposix, ...). Use the buggy version, try to reproduce the bug, and see whether the extracted test case is good.&lt;br /&gt;
&lt;br /&gt;
===Does it make a difference to the quality of the code whether one writes tests manually or extracts them?===&lt;br /&gt;
* Compare projects using extracted tests and manual tests to ref test suite&lt;br /&gt;
&lt;br /&gt;
===Do contracts replace traditional testing oracles?===&lt;br /&gt;
* Original API without contracts&lt;br /&gt;
* Run failing test cases (the ones we get from second part) with reference API with contracts&lt;br /&gt;
* How many times does the contract replace the testing oracle?&lt;/div&gt;</summary>
		<author><name>Mogh</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=CddBranch&amp;diff=10739</id>
		<title>CddBranch</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=CddBranch&amp;diff=10739"/>
				<updated>2008-03-12T14:38:49Z</updated>
		
		<summary type="html">&lt;p&gt;Mogh: /* Windows */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Testing]]&lt;br /&gt;
[[Category:EiffelDebugger]]&lt;br /&gt;
[[Category:CDD]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== What is the CDD extension for EiffelStudio? ==&lt;br /&gt;
CDD (short for Contract Driven Development) is a project developed at [http://se.ethz.ch ETH Zurich]. The extension adds full support for unit testing to EiffelStudio. It also introduces the new idea of extracting test cases automatically from failures observed via the debugger. The following lists the main features of CDD:&lt;br /&gt;
&lt;br /&gt;
* Automated extraction of test cases from failures (for every exception thrown, a new test case is created)&lt;br /&gt;
* Visualization of test cases and their outcomes&lt;br /&gt;
* One button creation of manual test cases&lt;br /&gt;
* Automated execution of test cases in the background&lt;br /&gt;
* Limit visible test cases via predefined filters and custom tags&lt;br /&gt;
* Testing occurs in the background and does not disrupt the developer&lt;br /&gt;
* Easy test case management through tags&lt;br /&gt;
&lt;br /&gt;
If you have questions, feedback, or would like to report a bug please visit the [http://eiffelstudio.origo.ethz.ch/forum/20 CDD forum]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:video_still.png|center]]&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;h3&amp;gt;&lt;br /&gt;
[http://se.ethz.ch/people/leitner/cdd/video Play Video!]&lt;br /&gt;
&amp;lt;/h3&amp;gt;&lt;br /&gt;
(TODO: Update with more recent screenshot and video)&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Download CDD ==&lt;br /&gt;
&lt;br /&gt;
The following packages contain the full EiffelStudio 6.1 plus the CDD extension. You do not need to have EiffelStudio installed already in order to install the packages below. On Windows you must have either the Platform SDK or Visual C++ installed. Do not use EiffelStudio with the gcc/mingw or the .NET backend.&lt;br /&gt;
&lt;br /&gt;
=== Linux ===&lt;br /&gt;
* Full Linux version: http://se.ethz.ch/people/leitner/cdd/Eiffel61_cdd_final_5.tar.bz2&lt;br /&gt;
* Installation instructions: http://docs.eiffel.com/eiffelstudio/installation/studio/060_linux.html&lt;br /&gt;
&lt;br /&gt;
Download the above file and install it just like you would install an EiffelStudio tarball. Afterwards proceed to the section &amp;quot;Using CDD&amp;quot;. Make sure you set/update the environment variables PATH, ISE_EIFFEL, and ISE_PLATFORM according to the installation instructions.&lt;br /&gt;
&lt;br /&gt;
=== Windows ===&lt;br /&gt;
&lt;br /&gt;
* Full Windows version (with installer): http://n.ethz.ch/~moris/download/Eiffel61_cdd_final_5-windows.msi&lt;br /&gt;
* Note 1: The installation is independent of any official EiffelStudio 6.1 installation (it neither overwrites nor invalidates such installations, nor is it influenced by them)&lt;br /&gt;
* Note 2: If you have a previous installation of the CDD Edition of EiffelStudio installed, you need to uninstall it first. If you want to reuse the installation directory, you need to manually delete all EIFGENs in its subdirectories after the uninstall procedure.&lt;br /&gt;
* Note 3: Do not use the gcc/mingw or the .NET compiler backend. You will have to use the Microsoft C compiler. You can get it either by installing Visual C++ or via the freely available Microsoft Platform SDK. Have a look at http://eiffelsoftware.origo.ethz.ch/Installing_Microsoft_C_compiler to learn how to install either compiler.&lt;br /&gt;
&lt;br /&gt;
== Documentation ==&lt;br /&gt;
&lt;br /&gt;
* [[Using CDD]]&lt;br /&gt;
* [[CDD Common Problems|Common Problems]]&lt;br /&gt;
&lt;br /&gt;
== Old Documentation == &lt;br /&gt;
&lt;br /&gt;
Documentation for the release of CDD for EiffelStudio version 5.7 is available from [[CddOldDocumentation]].&lt;br /&gt;
&lt;br /&gt;
= Project Internal Stuff =&lt;br /&gt;
&lt;br /&gt;
== Schedule ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* 04.01.2008: Final experiment definition (questions to ask, how to conduct experiment)&lt;br /&gt;
* 08.01.2008: Finalized list of features to go in release (including logging and log submission)&lt;br /&gt;
* 27.01.2008: Beta 1 (feature complete version online)&lt;br /&gt;
* 04.02.2008: Beta 2 (designated testers test release, # &amp;gt; 3)&lt;br /&gt;
* 11.02.2008: Beta tester feedback in&lt;br /&gt;
* 11.02.2008: Beta 3&lt;br /&gt;
* 18.02.2008: Final 1 release&lt;br /&gt;
* 19.02.2008: Initial Questionnaire&lt;br /&gt;
* 20.02.2008: Final 1 handover to ISG&lt;br /&gt;
* 22.02.2008: Final 1 installed on students' machines&lt;br /&gt;
* 29.02.2008: Final 2 release&lt;br /&gt;
* 05.03.2008: Final 2 handover to ISG&lt;br /&gt;
* 07.03.2008: Final 2 installed on students' machines&lt;br /&gt;
* 25.03.2008: Midterm Questionnaire&lt;br /&gt;
* 19.05.2008: Final Questionnaire&lt;br /&gt;
* 20.05.2008: Having all data&lt;br /&gt;
* 06.06.2008: Finished analysis&lt;br /&gt;
&lt;br /&gt;
== Stefan's Master Plan ==&lt;br /&gt;
&lt;br /&gt;
* MA Start ca 17.12.2007&lt;br /&gt;
* MA End ca 17.6.2008&lt;br /&gt;
&lt;br /&gt;
* Testing the tester&lt;br /&gt;
** System level test for CDD (incl. framework)&lt;br /&gt;
** Recreating existing unit test suite with CDD&lt;br /&gt;
** Large scale validation of CDD&lt;br /&gt;
*** Info 4 and/or Software Engineering&lt;br /&gt;
*** Questions&lt;br /&gt;
**** Does testing (manual/extracted) increase developer productivity?&lt;br /&gt;
**** How many tests do people end up with (manual/extracted)?&lt;br /&gt;
**** ...&lt;br /&gt;
&lt;br /&gt;
== Various ==&lt;br /&gt;
* Regression testing: [[CddRegressionTesting]]&lt;br /&gt;
* TreeView Specification: [[CddTreeViewSpec]]&lt;br /&gt;
* [[CDDHowtoRollARelease]]&lt;br /&gt;
&lt;br /&gt;
=== Things we need from estudio ===&lt;br /&gt;
* Invariants should be checked during debugging just like pre- and postconditions (they could also be visualized in the flat view the same way pre- and postconditions are)&lt;br /&gt;
* The information whether a call is a creation call or a normal routine call (not sure whether this is really necessary; what if we assume every call to a creation procedure is always a creation call?)&lt;br /&gt;
* Support for multiple open targets&lt;/div&gt;</summary>
		<author><name>Mogh</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=CddBranch&amp;diff=10736</id>
		<title>CddBranch</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=CddBranch&amp;diff=10736"/>
				<updated>2008-03-11T20:31:42Z</updated>
		
		<summary type="html">&lt;p&gt;Mogh: /* Windows */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Testing]]&lt;br /&gt;
[[Category:EiffelDebugger]]&lt;br /&gt;
[[Category:CDD]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== What is the CDD extension for EiffelStudio? ==&lt;br /&gt;
CDD (short for Contract Driven Development) is a project developed at [http://se.ethz.ch ETH Zurich]. The extension adds full support for unit testing to EiffelStudio. It also introduces the new idea of extracting test cases automatically from failures observed via the debugger. The following lists the main features of CDD:&lt;br /&gt;
&lt;br /&gt;
* Automated extraction of test cases from failures (For every exception thrown a new test case is created)&lt;br /&gt;
* Visualization of test cases and their outcomes&lt;br /&gt;
* One button creation of manual test cases&lt;br /&gt;
* Automated execution of test cases in the background&lt;br /&gt;
* Limit visible test cases via predefined filters and custom tags&lt;br /&gt;
* Testing runs in the background without disrupting the developer&lt;br /&gt;
* Easy test case management through tags&lt;br /&gt;
&lt;br /&gt;
If you have questions or feedback, or would like to report a bug, please visit the [http://eiffelstudio.origo.ethz.ch/forum/20 CDD forum].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:video_still.png|center]]&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;h3&amp;gt;&lt;br /&gt;
[http://se.ethz.ch/people/leitner/cdd/video Play Video!]&lt;br /&gt;
&amp;lt;/h3&amp;gt;&lt;br /&gt;
(TODO: Update with more recent screenshot and video)&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Download CDD ==&lt;br /&gt;
&lt;br /&gt;
The following packages contain the full EiffelStudio 6.1 plus the CDD extension. You do not need an existing EiffelStudio installation to install the packages below. On Windows, however, you must have either the Platform SDK or Visual C++ installed. Do not use EiffelStudio with the gcc/mingw or the .NET backend.&lt;br /&gt;
&lt;br /&gt;
=== Linux ===&lt;br /&gt;
* Full Linux version: http://se.ethz.ch/people/leitner/cdd/Eiffel61_cdd_final_3.tar.bz2&lt;br /&gt;
* Installation instructions: http://docs.eiffel.com/eiffelstudio/installation/studio/060_linux.html&lt;br /&gt;
&lt;br /&gt;
Download the above file and install it just like you would install an EiffelStudio tarball. Afterwards, proceed to the section &amp;quot;Using CDD&amp;quot;. Make sure you set/update the environment variables PATH, ISE_EIFFEL, and ISE_PLATFORM according to the installation instructions.&lt;br /&gt;
&lt;br /&gt;
=== Windows ===&lt;br /&gt;
&lt;br /&gt;
* Full Windows version (with installer): http://n.ethz.ch/~moris/download/Eiffel61_cdd_final_4-windows.msi&lt;br /&gt;
* Note 1: The installation is independent of any official EiffelStudio 6.1 installation (it neither overwrites nor invalidates such installations, nor is it influenced by them)&lt;br /&gt;
* Note 2: If you have a previous installation of the CDD Edition of EiffelStudio installed, you need to uninstall it first. If you want to reuse the installation directory, you need to manually delete all EIFGENs in its subdirectories after the uninstall procedure.&lt;br /&gt;
* Note 3: Do not use the gcc/mingw or the .NET compiler backend. You will have to use the Microsoft C compiler. You can get it either by installing Visual C++ or via the freely available Microsoft Platform SDK. Have a look at http://eiffelsoftware.origo.ethz.ch/Installing_Microsoft_C_compiler to learn how to install either compiler.&lt;br /&gt;
&lt;br /&gt;
== Documentation ==&lt;br /&gt;
&lt;br /&gt;
* [[Using CDD]]&lt;br /&gt;
* [[CDD Common Problems|Common Problems]]&lt;br /&gt;
&lt;br /&gt;
== Old Documentation == &lt;br /&gt;
&lt;br /&gt;
Documentation for the release of CDD for EiffelStudio version 5.7 is available from [[CddOldDocumentation]].&lt;br /&gt;
&lt;br /&gt;
= Project Internal Stuff =&lt;br /&gt;
&lt;br /&gt;
== Schedule ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* 04.01.2008: Final experiment definition (questions to ask, how to conduct experiment)&lt;br /&gt;
* 08.01.2008: Finalized list of features to go in release (including logging and log submission)&lt;br /&gt;
* 27.01.2008: Beta 1 (feature complete version online)&lt;br /&gt;
* 04.02.2008: Beta 2 (designated testers test release, # &amp;gt; 3)&lt;br /&gt;
* 11.02.2008: Beta tester feedback in&lt;br /&gt;
* 11.02.2008: Beta 3&lt;br /&gt;
* 18.02.2008: Final 1 release&lt;br /&gt;
* 19.02.2008: Initial Questionnaire&lt;br /&gt;
* 20.02.2008: Final 1 handover to ISG&lt;br /&gt;
* 22.02.2008: Final 1 installed on students' machines&lt;br /&gt;
* 29.02.2008: Final 2 release&lt;br /&gt;
* 05.03.2008: Final 2 handover to ISG&lt;br /&gt;
* 07.03.2008: Final 2 installed on students' machines&lt;br /&gt;
* 25.03.2008: Midterm Questionnaire&lt;br /&gt;
* 19.05.2008: Final Questionnaire&lt;br /&gt;
* 20.05.2008: Having all data&lt;br /&gt;
* 06.06.2008: Finished analysis&lt;br /&gt;
&lt;br /&gt;
== Stefan's Master Plan ==&lt;br /&gt;
&lt;br /&gt;
* MA Start ca 17.12.2007&lt;br /&gt;
* MA End ca 17.6.2008&lt;br /&gt;
&lt;br /&gt;
* Testing the tester&lt;br /&gt;
** System level test for CDD (incl. framework)&lt;br /&gt;
** Recreating existing unit test suite with CDD&lt;br /&gt;
** Large scale validation of CDD&lt;br /&gt;
*** Info 4 and/or Software Engineering&lt;br /&gt;
*** Questions&lt;br /&gt;
**** Does testing (manual/extracted) increase developer productivity?&lt;br /&gt;
**** How many tests do people end up with (manual/extracted)?&lt;br /&gt;
**** ...&lt;br /&gt;
&lt;br /&gt;
== Various ==&lt;br /&gt;
* Regression testing: [[CddRegressionTesting]]&lt;br /&gt;
* TreeView Specification: [[CddTreeViewSpec]]&lt;br /&gt;
* [[CDDHowtoRollARelease]]&lt;br /&gt;
&lt;br /&gt;
=== Things we need from estudio ===&lt;br /&gt;
* Invariants should be checked during debugging just like pre- and postconditions (they could also be visualized in the flat view the same way pre- and postconditions are)&lt;br /&gt;
* The information whether a call is a creation call or a normal routine call (not sure whether this is really necessary; what if we assume every call to a creation procedure is always a creation call?)&lt;br /&gt;
* Support for multiple open targets&lt;/div&gt;</summary>
		<author><name>Mogh</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=CddBranch&amp;diff=10731</id>
		<title>CddBranch</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=CddBranch&amp;diff=10731"/>
				<updated>2008-03-10T18:39:19Z</updated>
		
		<summary type="html">&lt;p&gt;Mogh: /* Windows */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Testing]]&lt;br /&gt;
[[Category:EiffelDebugger]]&lt;br /&gt;
[[Category:CDD]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== What is the CDD extension for EiffelStudio? ==&lt;br /&gt;
CDD (short for Contract Driven Development) is a project developed at [http://se.ethz.ch ETH Zurich]. The extension adds full support for unit testing to EiffelStudio. It also introduces the new idea of extracting test cases automatically from failures observed via the debugger. The following lists the main features of CDD:&lt;br /&gt;
&lt;br /&gt;
* Automated extraction of test cases from failures (For every exception thrown a new test case is created)&lt;br /&gt;
* Visualization of test cases and their outcomes&lt;br /&gt;
* One button creation of manual test cases&lt;br /&gt;
* Automated execution of test cases in the background&lt;br /&gt;
* Limit visible test cases via predefined filters and custom tags&lt;br /&gt;
* Testing runs in the background without disrupting the developer&lt;br /&gt;
* Easy test case management through tags&lt;br /&gt;
&lt;br /&gt;
If you have questions or feedback, or would like to report a bug, please visit the [http://eiffelstudio.origo.ethz.ch/forum/20 CDD forum].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:video_still.png|center]]&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;h3&amp;gt;&lt;br /&gt;
[http://se.ethz.ch/people/leitner/cdd/video Play Video!]&lt;br /&gt;
&amp;lt;/h3&amp;gt;&lt;br /&gt;
(TODO: Update with more recent screenshot and video)&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Download CDD ==&lt;br /&gt;
&lt;br /&gt;
The following packages contain the full EiffelStudio 6.1 plus the CDD extension. You do not need an existing EiffelStudio installation to install the packages below. On Windows, however, you must have either the Platform SDK or Visual C++ installed. Do not use EiffelStudio with the gcc/mingw or the .NET backend.&lt;br /&gt;
&lt;br /&gt;
=== Linux ===&lt;br /&gt;
* Full Linux version: http://se.ethz.ch/people/leitner/cdd/Eiffel61_cdd_final_3.tar.bz2&lt;br /&gt;
* Installation instructions: http://docs.eiffel.com/eiffelstudio/installation/studio/060_linux.html&lt;br /&gt;
&lt;br /&gt;
Download the above file and install it just like you would install an EiffelStudio tarball. Afterwards, proceed to the section &amp;quot;Using CDD&amp;quot;. Make sure you set/update the environment variables PATH, ISE_EIFFEL, and ISE_PLATFORM according to the installation instructions.&lt;br /&gt;
&lt;br /&gt;
=== Windows ===&lt;br /&gt;
&lt;br /&gt;
* Full Windows version (with installer): http://n.ethz.ch/~moris/download/Eiffel61_cdd_final_3-windows.msi&lt;br /&gt;
* Note 1: The installation is independent of any official EiffelStudio 6.1 installation (it neither overwrites nor invalidates such installations, nor is it influenced by them)&lt;br /&gt;
* Note 2: If you have a previous installation of the CDD Edition of EiffelStudio installed, you need to uninstall it first. If you want to reuse the installation directory, you need to manually delete all EIFGENs in its subdirectories after the uninstall procedure.&lt;br /&gt;
* Note 3: Do not use the gcc/mingw or the .NET compiler backend. You will have to use the Microsoft C compiler. You can get it either by installing Visual C++ or via the freely available Microsoft Platform SDK. Have a look at http://eiffelsoftware.origo.ethz.ch/Installing_Microsoft_C_compiler to learn how to install either compiler.&lt;br /&gt;
&lt;br /&gt;
== Documentation ==&lt;br /&gt;
&lt;br /&gt;
* [[Using CDD]]&lt;br /&gt;
* [[CDD Common Problems|Common Problems]]&lt;br /&gt;
&lt;br /&gt;
== Old Documentation == &lt;br /&gt;
&lt;br /&gt;
Documentation for the release of CDD for EiffelStudio version 5.7 is available from [[CddOldDocumentation]].&lt;br /&gt;
&lt;br /&gt;
= Project Internal Stuff =&lt;br /&gt;
&lt;br /&gt;
== Schedule ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* 04.01.2008: Final experiment definition (questions to ask, how to conduct experiment)&lt;br /&gt;
* 08.01.2008: Finalized list of features to go in release (including logging and log submission)&lt;br /&gt;
* 27.01.2008: Beta 1 (feature complete version online)&lt;br /&gt;
* 04.02.2008: Beta 2 (designated testers test release, # &amp;gt; 3)&lt;br /&gt;
* 11.02.2008: Beta tester feedback in&lt;br /&gt;
* 11.02.2008: Beta 3&lt;br /&gt;
* 18.02.2008: Final 1 release&lt;br /&gt;
* 19.02.2008: Initial Questionnaire&lt;br /&gt;
* 20.02.2008: Final 1 handover to ISG&lt;br /&gt;
* 22.02.2008: Final 1 installed on students' machines&lt;br /&gt;
* 29.02.2008: Final 2 release&lt;br /&gt;
* 05.03.2008: Final 2 handover to ISG&lt;br /&gt;
* 07.03.2008: Final 2 installed on students' machines&lt;br /&gt;
* 25.03.2008: Midterm Questionnaire&lt;br /&gt;
* 19.05.2008: Final Questionnaire&lt;br /&gt;
* 20.05.2008: Having all data&lt;br /&gt;
* 06.06.2008: Finished analysis&lt;br /&gt;
&lt;br /&gt;
== Stefan's Master Plan ==&lt;br /&gt;
&lt;br /&gt;
* MA Start ca 17.12.2007&lt;br /&gt;
* MA End ca 17.6.2008&lt;br /&gt;
&lt;br /&gt;
* Testing the tester&lt;br /&gt;
** System level test for CDD (incl. framework)&lt;br /&gt;
** Recreating existing unit test suite with CDD&lt;br /&gt;
** Large scale validation of CDD&lt;br /&gt;
*** Info 4 and/or Software Engineering&lt;br /&gt;
*** Questions&lt;br /&gt;
**** Does testing (manual/extracted) increase developer productivity?&lt;br /&gt;
**** How many tests do people end up with (manual/extracted)?&lt;br /&gt;
**** ...&lt;br /&gt;
&lt;br /&gt;
== Various ==&lt;br /&gt;
* Regression testing: [[CddRegressionTesting]]&lt;br /&gt;
* TreeView Specification: [[CddTreeViewSpec]]&lt;br /&gt;
* [[CDDHowtoRollARelease]]&lt;br /&gt;
&lt;br /&gt;
=== Things we need from estudio ===&lt;br /&gt;
* Invariants should be checked during debugging just like pre- and postconditions (they could also be visualized in the flat view the same way pre- and postconditions are)&lt;br /&gt;
* The information whether a call is a creation call or a normal routine call (not sure whether this is really necessary; what if we assume every call to a creation procedure is always a creation call?)&lt;br /&gt;
* Support for multiple open targets&lt;/div&gt;</summary>
		<author><name>Mogh</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=CddBranch&amp;diff=10711</id>
		<title>CddBranch</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=CddBranch&amp;diff=10711"/>
				<updated>2008-03-05T19:05:31Z</updated>
		
		<summary type="html">&lt;p&gt;Mogh: /* Download CDD Final 1 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Testing]]&lt;br /&gt;
[[Category:EiffelDebugger]]&lt;br /&gt;
[[Category:CDD]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== What is the CDD extension for EiffelStudio? ==&lt;br /&gt;
CDD (short for Contract Driven Development) is a project developed at ETH Zurich. The extension adds full support for unit testing to EiffelStudio. It also introduces the new idea of extracting test cases automatically from failures observed via the debugger. The following lists the main features of CDD:&lt;br /&gt;
&lt;br /&gt;
* Automated extraction of test cases from failures (For every exception thrown a new test case is created)&lt;br /&gt;
* Visualization of test cases and their outcomes&lt;br /&gt;
* One button creation of manual test cases&lt;br /&gt;
* Automated execution of test cases in the background&lt;br /&gt;
* Limit visible test cases via predefined filters and custom tags&lt;br /&gt;
* Testing runs in the background without disrupting the developer&lt;br /&gt;
* Easy test case management through tags&lt;br /&gt;
&lt;br /&gt;
If you have questions or feedback, or would like to report a bug, please visit the [http://eiffelstudio.origo.ethz.ch/forum/20 CDD forum].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:video_still.png|center]]&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;h3&amp;gt;&lt;br /&gt;
[http://se.ethz.ch/people/leitner/cdd/video Play Video!]&lt;br /&gt;
&amp;lt;/h3&amp;gt;&lt;br /&gt;
(TODO: Update with more recent screenshot and video)&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Download CDD Final 2 ==&lt;br /&gt;
&lt;br /&gt;
The following packages contain the full EiffelStudio 6.1 plus the CDD extension. You do not need an existing EiffelStudio installation to install the packages below. On Windows, however, you must have either the Platform SDK or Visual C++ installed. Do not use EiffelStudio with the gcc/mingw or the .NET backend.&lt;br /&gt;
&lt;br /&gt;
=== Linux ===&lt;br /&gt;
* Full Linux version: http://se.ethz.ch/people/leitner/cdd/Eiffel61_cdd_final_1.tar.bz2&lt;br /&gt;
* Installation instructions: http://docs.eiffel.com/eiffelstudio/installation/studio/060_linux.html&lt;br /&gt;
&lt;br /&gt;
Download the above file and install it just like you would install an EiffelStudio tarball. Afterwards, proceed to the section &amp;quot;Using CDD&amp;quot;. Make sure you set/update the environment variables PATH, ISE_EIFFEL, and ISE_PLATFORM according to the installation instructions.&lt;br /&gt;
&lt;br /&gt;
=== Windows ===&lt;br /&gt;
&lt;br /&gt;
* Full Windows version (with installer): http://n.ethz.ch/~moris/download/Eiffel61_cdd_edition_gpl_72290-windows.msi&lt;br /&gt;
* Note 1: The installation is independent of any official EiffelStudio 6.1 installation (it neither overwrites nor invalidates such installations, nor is it influenced by them)&lt;br /&gt;
* Note 2: If you have a previous installation of the CDD Edition of EiffelStudio installed, you need to uninstall it first. If you want to reuse the installation directory, you need to manually delete all EIFGENs in its subdirectories after the uninstall procedure.&lt;br /&gt;
* Note 3: Do not use the gcc/mingw or the .NET compiler backend. You will have to use the Microsoft C compiler. You can get it either by installing Visual C++ or via the freely available Microsoft Platform SDK. Have a look at http://eiffelsoftware.origo.ethz.ch/Installing_Microsoft_C_compiler to learn how to install either compiler.&lt;br /&gt;
&lt;br /&gt;
== Documentation ==&lt;br /&gt;
&lt;br /&gt;
* [[Using CDD]]&lt;br /&gt;
* [[CDD Common Problems|Common Problems]]&lt;br /&gt;
&lt;br /&gt;
== Old Documentation == &lt;br /&gt;
&lt;br /&gt;
Documentation for the release of CDD for EiffelStudio version 5.7 is available from [[CddOldDocumentation]].&lt;br /&gt;
&lt;br /&gt;
= Project Internal Stuff =&lt;br /&gt;
&lt;br /&gt;
== Schedule ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* 04.01.2008: Final experiment definition (questions to ask, how to conduct experiment)&lt;br /&gt;
* 08.01.2008: Finalized list of features to go in release (including logging and log submission)&lt;br /&gt;
* 27.01.2008: Beta 1 (feature complete version online)&lt;br /&gt;
* 04.02.2008: Beta 2 (designated testers test release, # &amp;gt; 3)&lt;br /&gt;
* 11.02.2008: Beta tester feedback in&lt;br /&gt;
* 11.02.2008: Beta 3&lt;br /&gt;
* 18.02.2008: Final 1 release&lt;br /&gt;
* 19.02.2008: Initial Questionnaire&lt;br /&gt;
* 20.02.2008: Final 1 handover to ISG&lt;br /&gt;
* 22.02.2008: Final 1 installed on students' machines&lt;br /&gt;
* 29.02.2008: Final 2 release&lt;br /&gt;
* 05.03.2008: Final 2 handover to ISG&lt;br /&gt;
* 07.03.2008: Final 2 installed on students' machines&lt;br /&gt;
* 25.03.2008: Midterm Questionnaire&lt;br /&gt;
* 19.05.2008: Final Questionnaire&lt;br /&gt;
* 20.05.2008: Having all data&lt;br /&gt;
* 06.06.2008: Finished analysis&lt;br /&gt;
&lt;br /&gt;
== Stefan's Master Plan ==&lt;br /&gt;
&lt;br /&gt;
* MA Start ca 17.12.2007&lt;br /&gt;
* MA End ca 17.6.2008&lt;br /&gt;
&lt;br /&gt;
* Testing the tester&lt;br /&gt;
** System level test for CDD (incl. framework)&lt;br /&gt;
** Recreating existing unit test suite with CDD&lt;br /&gt;
** Large scale validation of CDD&lt;br /&gt;
*** Info 4 and/or Software Engineering&lt;br /&gt;
*** Questions&lt;br /&gt;
**** Does testing (manual/extracted) increase developer productivity?&lt;br /&gt;
**** How many tests do people end up with (manual/extracted)?&lt;br /&gt;
**** ...&lt;br /&gt;
&lt;br /&gt;
== Various ==&lt;br /&gt;
* Regression testing: [[CddRegressionTesting]]&lt;br /&gt;
* TreeView Specification: [[CddTreeViewSpec]]&lt;br /&gt;
* [[CDDHowtoRollARelease]]&lt;br /&gt;
&lt;br /&gt;
=== Things we need from estudio ===&lt;br /&gt;
* Invariants should be checked during debugging just like pre- and postconditions (they could also be visualized in the flat view the same way pre- and postconditions are)&lt;br /&gt;
* The information whether a call is a creation call or a normal routine call (not sure whether this is really necessary; what if we assume every call to a creation procedure is always a creation call?)&lt;br /&gt;
* Support for multiple open targets&lt;/div&gt;</summary>
		<author><name>Mogh</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=CDDHowtoRollARelease&amp;diff=10688</id>
		<title>CDDHowtoRollARelease</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=CDDHowtoRollARelease&amp;diff=10688"/>
				<updated>2008-03-03T21:40:11Z</updated>
		
		<summary type="html">&lt;p&gt;Mogh: /* List of CDD specific files */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:CDD]]&lt;br /&gt;
== List of CDD specific files ==&lt;br /&gt;
The following files have to be overwritten or added to a stock EiffelStudio release:&lt;br /&gt;
* ec(.exe) -&amp;gt; $ISE_EIFFEL/studio/spec/$ISE_PLATFORM/bin&lt;br /&gt;
* Src/library/base/ise/support/cdd -&amp;gt; $ISE_EIFFEL/library/base/ise/support/cdd&lt;br /&gt;
* Src/examples/cdd -&amp;gt; $ISE_EIFFEL/examples/cdd&lt;br /&gt;
* Delivery/studio/help/defaults -&amp;gt; $ISE_EIFFEL/studio/help/defaults&lt;br /&gt;
* Delivery/studio/bitmaps/png/16x16.png -&amp;gt; $ISE_EIFFEL/studio/bitmaps/png/16x16.png&lt;br /&gt;
* Delivery/studio/bitmaps/png/splash_shadow.png -&amp;gt; $ISE_EIFFEL/studio/bitmaps/png/splash_shadow.png&lt;br /&gt;
* Delivery/studio/bitmaps/png/splash.png -&amp;gt; $ISE_EIFFEL/studio/bitmaps/png/splash.png&lt;br /&gt;
* [For Windows Only] estudio.exe -&amp;gt; $ISE_EIFFEL/studio/spec/$ISE_PLATFORM/bin&lt;br /&gt;
RUNTIME patch&lt;br /&gt;
* eif_except.h, eif_types.h: Src/C/run-time -&amp;gt; $ISE_EIFFEL/studio/spec/$ISE_PLATFORM/include/&lt;br /&gt;
* finalized.lib, mtfinalized.lib, mtwkbench.lib, wkbench.lib: Src/C/run-time/LIB -&amp;gt; $ISE_EIFFEL/studio/spec/$ISE_PLATFORM/lib(/msc)/ [not sure about linux!]&lt;br /&gt;
** NOTE: the libs mentioned above can be built with the geant script: geant -b %EIFFEL_SRC%/build.eant prepare (this will create the LIB folder and its contents)&lt;br /&gt;
&lt;br /&gt;
== How to roll a release on Windows ==&lt;br /&gt;
* prepare some directory &amp;lt;INSTALL_DIR&amp;gt; like this:&lt;br /&gt;
** create 3 subdirectories &amp;lt;INSTALL_DIR&amp;gt;/EiffelStudio, &amp;lt;INSTALL_DIR&amp;gt;/gcc, &amp;lt;INSTALL_DIR&amp;gt;/releases. Fill them according to the following description&lt;br /&gt;
*** [&amp;lt;INSTALL_DIR&amp;gt;/EiffelStudio] has to contain a complete delivery without the ec.exe binaries. Without a working delivery script, this can be achieved by &amp;quot;patching&amp;quot; an official installation:&lt;br /&gt;
**** copy the contents of the installation directory of a fresh official installation into &amp;lt;INSTALL_DIR&amp;gt;/EiffelStudio (fresh = NO EIFGENs produced. To be sure, uninstall the official version if it already exists, manually delete the remains in the installation directory, and reinstall it WITHOUT building any precompilations)&lt;br /&gt;
**** Remove the ec.exe from  &amp;lt;INSTALL_DIR&amp;gt;/EiffelStudio/studio/spec/windows/bin&lt;br /&gt;
**** Replace/Add the cdd specific files (see list above). Watch out for the proper place to add/replace them (the above list is svn-specific)&lt;br /&gt;
*** [&amp;lt;INSTALL_DIR&amp;gt;/gcc] needs to contain content of &amp;lt;svn-eiffel-branch&amp;gt;/free_add_ons/gcc&lt;br /&gt;
*** [&amp;lt;INSTALL_DIR&amp;gt;/releases] needs to contain the ec.exe binaries (the cdd versions of course)&lt;br /&gt;
**** &amp;lt;INSTALL_DIR&amp;gt;/releases/gpl_version/ needs to contain ec.exe&lt;br /&gt;
**** &amp;lt;INSTALL_DIR&amp;gt;/releases/enterprise_version/ needs to contain ec.exe (take the same ec.exe, it's a dummy for using the scripts later on)&lt;br /&gt;
&lt;br /&gt;
* let env variable %INSTALL_DIR% point to the &amp;lt;INSTALL_DIR&amp;gt;&lt;br /&gt;
* let env variable %INIT_DIR% point to the &amp;lt;svn-eiffel-branch&amp;gt;/Delivery/scripts/windows folder&lt;br /&gt;
* finalize the &amp;quot;hallow&amp;quot; tool (&amp;lt;svn-eiffel-branch&amp;gt;/Src/tools/hallow/hallow.ecf)&lt;br /&gt;
* create the directory %INIT_DIR%/install/bin if it does not exist&lt;br /&gt;
* copy the content of &amp;lt;svn-eiffel-branch&amp;gt;/Src/tools/hallow/EIFGENs/hallow/F_code to %INIT_DIR%/install/bin&lt;br /&gt;
* create the directory %INIT_DIR%/install/binaries/x86 if it does not exist&lt;br /&gt;
* get a proper setup.dll (from manus probably, or build yourself, instructions for this will be added soon) and put it into %INIT_DIR%/install/binaries/x86/&lt;br /&gt;
* start command line, go to %INIT_DIR%/install/content/eiffelstudio and run:&lt;br /&gt;
** nmake /nologo clean&lt;br /&gt;
** nmake /nologo&lt;br /&gt;
** nmake /nologo gpl_x86&lt;br /&gt;
* wait some minutes .... and pray :-)&lt;/div&gt;</summary>
		<author><name>Mogh</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=CDDHowtoRollARelease&amp;diff=10687</id>
		<title>CDDHowtoRollARelease</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=CDDHowtoRollARelease&amp;diff=10687"/>
				<updated>2008-03-03T21:20:19Z</updated>
		
		<summary type="html">&lt;p&gt;Mogh: /* List of CDD specific files */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:CDD]]&lt;br /&gt;
== List of CDD specific files ==&lt;br /&gt;
The following files have to be overwritten or added to a stock EiffelStudio release:&lt;br /&gt;
* ec(.exe) -&amp;gt; $ISE_EIFFEL/studio/spec/$ISE_PLATFORM/bin&lt;br /&gt;
* Src/library/base/ise/support/cdd -&amp;gt; $ISE_EIFFEL/library/base/ise/support/cdd&lt;br /&gt;
* Src/examples/cdd -&amp;gt; $ISE_EIFFEL/examples/cdd&lt;br /&gt;
* Delivery/studio/help/defaults -&amp;gt; $ISE_EIFFEL/studio/help/defaults&lt;br /&gt;
* Delivery/studio/bitmaps/png/16x16.png -&amp;gt; $ISE_EIFFEL/studio/bitmaps/png/16x16.png&lt;br /&gt;
* Delivery/studio/bitmaps/png/splash_shadow.png -&amp;gt; $ISE_EIFFEL/studio/bitmaps/png/splash_shadow.png&lt;br /&gt;
* Delivery/studio/bitmaps/png/splash.png -&amp;gt; $ISE_EIFFEL/studio/bitmaps/png/splash.png&lt;br /&gt;
* [For Windows Only] estudio.exe -&amp;gt; $ISE_EIFFEL/studio/spec/$ISE_PLATFORM/bin&lt;br /&gt;
RUNTIME patch&lt;br /&gt;
* eif_except.h, eif_types.h: Src/C/run-time -&amp;gt; $ISE_EIFFEL/studio/spec/$ISE_PLATFORM/include/&lt;br /&gt;
* finalized.lib, mtfinalized.lib, mtwkbench.lib, wkbench.lib: Src/C/run-time/LIB -&amp;gt; $ISE_EIFFEL/studio/spec/$ISE_PLATFORM/lib(/msc)/ [not sure about linux!]&lt;br /&gt;
** NOTE: the libs mentioned above can be built with the geant script: geant -b %EIFFEL_SRC%/build.eant prepare&lt;br /&gt;
&lt;br /&gt;
== How to roll a release on Windows ==&lt;br /&gt;
* prepare some directory &amp;lt;INSTALL_DIR&amp;gt; like this:&lt;br /&gt;
** create 3 subdirectories &amp;lt;INSTALL_DIR&amp;gt;/EiffelStudio, &amp;lt;INSTALL_DIR&amp;gt;/gcc, &amp;lt;INSTALL_DIR&amp;gt;/releases. Fill them according to the following description&lt;br /&gt;
*** [&amp;lt;INSTALL_DIR&amp;gt;/EiffelStudio] has to contain a complete delivery without the ec.exe binaries. Without a working delivery script, this can be achieved by &amp;quot;patching&amp;quot; an official installation:&lt;br /&gt;
**** copy the contents of the installation directory of a fresh official installation into &amp;lt;INSTALL_DIR&amp;gt;/EiffelStudio (fresh = NO EIFGENs produced. To be sure, uninstall the official version if it already exists, manually delete the remains in the installation directory, and reinstall it WITHOUT building any precompilations)&lt;br /&gt;
**** Remove the ec.exe from  &amp;lt;INSTALL_DIR&amp;gt;/EiffelStudio/studio/spec/windows/bin&lt;br /&gt;
**** Replace/Add the cdd specific files (see list above). Watch out for the proper place to add/replace them (the above list is svn-specific)&lt;br /&gt;
*** [&amp;lt;INSTALL_DIR&amp;gt;/gcc] needs to contain content of &amp;lt;svn-eiffel-branch&amp;gt;/free_add_ons/gcc&lt;br /&gt;
*** [&amp;lt;INSTALL_DIR&amp;gt;/releases] needs to contain the ec.exe binaries (the cdd versions of course)&lt;br /&gt;
**** &amp;lt;INSTALL_DIR&amp;gt;/releases/gpl_version/ needs to contain ec.exe&lt;br /&gt;
**** &amp;lt;INSTALL_DIR&amp;gt;/releases/enterprise_version/ needs to contain ec.exe (take the same ec.exe, it's a dummy for using the scripts later on)&lt;br /&gt;
&lt;br /&gt;
* let env variable %INSTALL_DIR% point to the &amp;lt;INSTALL_DIR&amp;gt;&lt;br /&gt;
* let env variable %INIT_DIR% point to the &amp;lt;svn-eiffel-branch&amp;gt;/Delivery/scripts/windows folder&lt;br /&gt;
* finalize the &amp;quot;hallow&amp;quot; tool (&amp;lt;svn-eiffel-branch&amp;gt;/Src/tools/hallow/hallow.ecf)&lt;br /&gt;
* create the directory %INIT_DIR%/install/bin if it does not exist&lt;br /&gt;
* copy content of &amp;lt;svn-eiffel-branch&amp;gt;/Src/tools/hallow/EIFGENs/hallow/F_code to INIT_DIR/install/bin&lt;br /&gt;
* create if not exists directory %INIT_DIR%/install/binaries/x86&lt;br /&gt;
* get a proper setup.dll (from manus probably, or build yourself, instructions for this will be added soon) and put it into %INIT_DIR%/install/binaries/x86/&lt;br /&gt;
* start a command line, go to %INIT_DIR%/install/content/eiffelstudio and run:&lt;br /&gt;
** nmake /nologo clean&lt;br /&gt;
** nmake /nologo&lt;br /&gt;
** nmake /nologo gpl_x86&lt;br /&gt;
* wait some minutes .... and pray :-)&lt;/div&gt;</summary>
		<author><name>Mogh</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=CddMeeting_13_02_2008&amp;diff=10677</id>
		<title>CddMeeting 13 02 2008</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=CddMeeting_13_02_2008&amp;diff=10677"/>
				<updated>2008-03-01T15:45:23Z</updated>
		
		<summary type="html">&lt;p&gt;Mogh: /* Stefan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:CDD]]&lt;br /&gt;
=CDD Meeting, Wednesday, 14.02.2008, 10:00=&lt;br /&gt;
&lt;br /&gt;
== Next Meeting ==&lt;br /&gt;
* 15.02.2008; 14:15&lt;br /&gt;
&lt;br /&gt;
== Tasks ==&lt;br /&gt;
&lt;br /&gt;
===Andreas===&lt;br /&gt;
* Formulate Experiment Hypotheses (Andreas)&lt;br /&gt;
* Fix AutoTest for courses&lt;br /&gt;
** New release&lt;br /&gt;
* Write documentation and video tutorials (together with final release)&lt;br /&gt;
* Finish tuple_002 test case&lt;br /&gt;
* [delayed] Retest if test cases with errors are properly ignored (after 6.1 port)&lt;br /&gt;
* [done] Add timeout judgement&lt;br /&gt;
* Build releasable delivery for Linux (after each Beta I guess...)&lt;br /&gt;
&lt;br /&gt;
===Arno===&lt;br /&gt;
&lt;br /&gt;
====Bug Fixing====&lt;br /&gt;
* [ON WINDOWS NO CRASH OCCURS] Check why EiffelStudio quits after debugging a test routine and ignoring violations (ask Jocelyn)&lt;br /&gt;
* [DONE!] CDD Output Tool window display is not saved (on initial start after installation it is there. But after closing and reopening it is gone)&lt;br /&gt;
&lt;br /&gt;
===Ilinca===&lt;br /&gt;
&lt;br /&gt;
* Integrate variable declarations into AutoTest trunk (by 8.2.2008)&lt;br /&gt;
&lt;br /&gt;
===Stefan===&lt;br /&gt;
* [RECURRENT] Build releasable delivery on Windows&lt;br /&gt;
* [RECURRENT, currently valid] Check log for XML validity&lt;br /&gt;
&lt;br /&gt;
* [DONE] Clean up Logging; add information that allows associating an extracted test case with its &amp;quot;extracted&amp;quot; message&lt;br /&gt;
** [DONE] Log original exception (make it part of test routine's state)&lt;br /&gt;
** [DONE] Log all thrown exceptions (no matter whether extraction is enabled)&lt;br /&gt;
** [DONE] Split log files&lt;br /&gt;
* [DONE] Refactor CDD Manager-Routine invocation cache-CDD Capturer&lt;br /&gt;
* [DONE - WORKS ONLY WITH PRECONDITION CHECK ENABLED!] Sometimes when a CDD breakpoint is hit no routine invocation is extracted. Find out why and fix it.&lt;br /&gt;
* [DONE] Fix problem where the exe of the target is non-standard (Anders' report)&lt;br /&gt;
* [DONE] Fix config files for system level tests (remove cdd tag)&lt;br /&gt;
* [DONE] Revive system level test suite&lt;br /&gt;
* [DONE] Adding a new test case, changing it, saving it, and recompiling results in an error message about a &amp;quot;.swp&amp;quot; file. Make the message go away.&lt;br /&gt;
* [DONE] &amp;quot;Cleanup button&amp;quot; deletes filtered unresolved extracted test cases&lt;br /&gt;
&lt;br /&gt;
* When detecting whether compilation succeeded, include &amp;quot;corrupted&amp;quot; as a keyword when parsing the output message (as in the &amp;quot;project corrupted. cannot continue&amp;quot; message)&lt;br /&gt;
&lt;br /&gt;
* Make newly extracted test cases show up &amp;quot;expanded&amp;quot; in GUI treeview&lt;br /&gt;
* Only execute unresolved test cases once. Disable them afterwards. (Needs discussion)&lt;br /&gt;
* Make general encoding/decoding routines for special feature names&lt;br /&gt;
** e.g. infix &amp;quot;+&amp;quot; (ES) &amp;lt;=&amp;gt; infix_plus (CDD)&lt;br /&gt;
* Cache debug values when extracting several test cases.&lt;br /&gt;
* Allow for test case extraction of passing routine invocations (with Jocelyn)&lt;br /&gt;
* Rebuilding manual test suite through extraction and synthesizing&lt;br /&gt;
* Find performance bottleneck of test case extraction and propose extraction method for second chance&lt;br /&gt;
* [LOW PRIORITY since inv prob solved] When the runtime of the interpreter is severely damaged, a popup box shows up on Windows. The popup can be ignored and goes away. It does not come up when we shoot down the interpreter, but when a faulty instruction is executed. (E.g. the inv bug can trigger this dialog)&lt;br /&gt;
* [LOW PRIORITY since unlikely situation] Test cases with a 'False' (not satisfiable) invariant make the interpreter hang and ES crash. Add a test case and fix it.&lt;br /&gt;
&lt;br /&gt;
====Bugs/Things to look at====&lt;br /&gt;
&lt;br /&gt;
* For big projects (like ES itself) background compilation of the interpreter leads to completely unresponsive ES&lt;br /&gt;
* [NO EASY SOLUTION POSSIBLE] Crash upon closing of EiffelStudio (feature call on void target in breakpoint tool)&lt;br /&gt;
&lt;br /&gt;
===Manu===&lt;br /&gt;
* Install CDD in student labs (Manu)&lt;br /&gt;
* Devise questionnaires&lt;br /&gt;
** Initial (due next meeting after Manu's vacation)&lt;br /&gt;
** Midterm&lt;br /&gt;
** Final&lt;br /&gt;
* Analyze questionnaires&lt;br /&gt;
* Rework example profiles&lt;br /&gt;
* Assis will use CDD to get a feel for it and create a test suite for the students to start with&lt;br /&gt;
&lt;br /&gt;
=== Bernd ===&lt;br /&gt;
* Define Project for SoftEng&lt;br /&gt;
** Find a test suite for us to test students' code&lt;br /&gt;
** Find project with pure functional part&lt;br /&gt;
&lt;br /&gt;
===Unassigned===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Beta Tester Feedback====&lt;br /&gt;
(Please put your name so we can get back to you in case of questions)&lt;br /&gt;
* (Jocelyn) It should be possible to set the location of the cdd_tests directory (what if location of .ecf file is not readable?) [delayed]&lt;br /&gt;
** home directory? application_data directory?  [Jocelyn]&lt;br /&gt;
* (Jocelyn) There should be UI support for deletion of Test Case [difficult for test routines, possible for classes. documentation added.]&lt;br /&gt;
* (Jocelyn) [BUG] the manual test case creation dialog should check if class with chosen name is already in the system [delayed]&lt;br /&gt;
* (Jocelyn) It would be nice if there was a way to configure the timeout for the interpreter [Stefan will ask Jocelyn for concrete example]&lt;br /&gt;
&lt;br /&gt;
== Questionnaires ==&lt;br /&gt;
&lt;br /&gt;
* Use ELBA&lt;br /&gt;
&lt;br /&gt;
== Software Engineering Project ==&lt;br /&gt;
&lt;br /&gt;
* Task 1: Implement VCard API&lt;br /&gt;
* Task 2: Implement Mime API&lt;br /&gt;
* Task 3: Write test cases to reveal faults in foreign VCard implementations&lt;br /&gt;
* Task 4: Write test cases to reveal faults in foreign Mime implementations&lt;br /&gt;
&lt;br /&gt;
* Group A:&lt;br /&gt;
** Task 1, Manual Tests&lt;br /&gt;
** Task 2, Extracted Tests&lt;br /&gt;
** Task 3, Manual Tests&lt;br /&gt;
** Task 4, Extracted Tests&lt;br /&gt;
* Group B:&lt;br /&gt;
** Task 1, Extracted Tests&lt;br /&gt;
** Task 2, Manual Tests&lt;br /&gt;
** Task 3, Extracted Tests&lt;br /&gt;
** Task 4, Manual Tests&lt;br /&gt;
&lt;br /&gt;
* One large project, but divided into testable subcomponents&lt;br /&gt;
* Students required to write test cases&lt;br /&gt;
* Fixed API to make things uniformly testable&lt;br /&gt;
* Public/Secret test cases (similar to Zeller course)&lt;br /&gt;
* Competitions:&lt;br /&gt;
** Group A test cases applied to Group A project&lt;br /&gt;
** Group A test cases applied to Group B project&lt;br /&gt;
&lt;br /&gt;
* Idea how to cancel out bias while allowing fair grading:&lt;br /&gt;
** Subtasks 1 and 2, Students divided into groups A and B&lt;br /&gt;
** First both groups do 1, A is allowed to use tool, B not&lt;br /&gt;
** Then both groups do 2, B is allowed to use tool, A not&lt;br /&gt;
** Bias cancellation:&lt;br /&gt;
*** Project complexity&lt;br /&gt;
*** Experience of students&lt;br /&gt;
*** Experience gained in the first subtask when developing the second&lt;br /&gt;
*** Risk: One task might be better suited for the tool than the other&lt;br /&gt;
&lt;br /&gt;
== Data to harvest ==&lt;br /&gt;
* IDE Time with CDD(extraction) enabled / IDE Time with CDD(extraction) disabled&lt;br /&gt;
* Test Case Source (just final version, or all versions?)&lt;br /&gt;
** Use Profiler to get coverage approximation&lt;br /&gt;
* TC Meta Data (with timestamps -&amp;gt; Evolution of Test Case)&lt;br /&gt;
** TC Added/Removed/Changed&lt;br /&gt;
** TC Outcome (transitions from FAIL/PASS/UNRESOLVED[bad_communication &amp;lt;-&amp;gt; does_not_compile &amp;lt;-&amp;gt; bad_input])&lt;br /&gt;
** TC execution time&lt;br /&gt;
** Modifications to a test case (compiler needs to recompile)&lt;br /&gt;
* Development Session Data&lt;br /&gt;
** IDE Startup&lt;br /&gt;
** File save&lt;br /&gt;
* Questionnaires&lt;br /&gt;
** Initial&lt;br /&gt;
** Final&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Logging ==&lt;br /&gt;
&lt;br /&gt;
* &amp;quot;Meta&amp;quot; log entries&lt;br /&gt;
** Project opened (easy)&lt;br /&gt;
** CDD enable/disable (easy)&lt;br /&gt;
** general EiffelStudio action log entries for Developer Behaviour (harder... what do we need??)&lt;br /&gt;
&lt;br /&gt;
* CDD actions log entries&lt;br /&gt;
** Compilation of interpreter (start, end, duration)&lt;br /&gt;
** Execution of test cases (start, end; do we need the individual duration of each test case that gets executed?)&lt;br /&gt;
** Extraction of new test case (extraction time)&lt;br /&gt;
&lt;br /&gt;
* Test Suite Status&lt;br /&gt;
** Test suite: after each refresh log list of all test cases (class level, needed because it's not possible to know when manual test cases get added...)&lt;br /&gt;
** Test class: (do we need info on this level)&lt;br /&gt;
** Test routine: status (basically as you see it in the tool)&lt;br /&gt;
&lt;br /&gt;
==Experiment Hypotheses==&lt;br /&gt;
&lt;br /&gt;
===Comparisons===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
! contracts&lt;br /&gt;
! man. tests&lt;br /&gt;
! playing&lt;br /&gt;
! extr.&lt;br /&gt;
! synth.&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
| A B&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A B&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| experiment&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| B&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A B&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
|-&lt;br /&gt;
| ISSTA&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| B&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| B&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===How reliably can we extract test cases that reproduce the original failure?===&lt;br /&gt;
* Log original exception and exception received from first test execution&lt;br /&gt;
&lt;br /&gt;
===Are the extracted tests useful for debugging?===&lt;br /&gt;
* Ask developers, using CDD&lt;br /&gt;
&lt;br /&gt;
===What is the (time and memory) overhead of enabling extraction?===&lt;br /&gt;
&lt;br /&gt;
===What is the size of the extracted test cases?===&lt;br /&gt;
&lt;br /&gt;
===Are we able to reproduce bugs from industry?===&lt;br /&gt;
Take a bug repository (say EiffelStudio, Gobo, eposix, ...). Use the buggy version, try to reproduce the bug, and see whether the extracted test case is good.&lt;br /&gt;
&lt;br /&gt;
===Does it make a difference to the quality of the code whether one writes tests manually or extracts them?===&lt;br /&gt;
* Compare projects using extracted tests and manual tests against a reference test suite&lt;br /&gt;
&lt;br /&gt;
===Do contracts replace traditional testing oracles?===&lt;br /&gt;
* Original API without contracts&lt;br /&gt;
* Run failing test cases (the ones we get from second part) with reference API with contracts&lt;br /&gt;
* How many times does the contract replace the testing oracle?&lt;/div&gt;</summary>
		<author><name>Mogh</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=CddMeeting_13_02_2008&amp;diff=10676</id>
		<title>CddMeeting 13 02 2008</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=CddMeeting_13_02_2008&amp;diff=10676"/>
				<updated>2008-03-01T15:03:45Z</updated>
		
		<summary type="html">&lt;p&gt;Mogh: /* Bug Fixing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:CDD]]&lt;br /&gt;
=CDD Meeting, Wednesday, 14.02.2008, 10:00=&lt;br /&gt;
&lt;br /&gt;
== Next Meeting ==&lt;br /&gt;
* 15.02.2008; 14:15&lt;br /&gt;
&lt;br /&gt;
== Tasks ==&lt;br /&gt;
&lt;br /&gt;
===Andreas===&lt;br /&gt;
* Formulate Experiment Hypotheses (Andreas)&lt;br /&gt;
* Fix AutoTest for courses&lt;br /&gt;
** New release&lt;br /&gt;
* Write documentation and video tutorials (together with final release)&lt;br /&gt;
* Finish tuple_002 test case&lt;br /&gt;
* [delayed] Retest if test cases with errors are properly ignored (after 6.1 port)&lt;br /&gt;
* [done] Add timeout judgement&lt;br /&gt;
* Build releasable delivery for Linux (after each Beta I guess...)&lt;br /&gt;
&lt;br /&gt;
===Arno===&lt;br /&gt;
&lt;br /&gt;
====Bug Fixing====&lt;br /&gt;
* [ON WINDOWS NO CRASH OCCURS] Check why EiffelStudio quits after debugging a test routine and ignoring violations (ask Jocelyn)&lt;br /&gt;
* [DONE!] CDD Output Tool window display is not saved (on initial start after installation it is there. But after closing and reopening it is gone)&lt;br /&gt;
&lt;br /&gt;
===Ilinca===&lt;br /&gt;
&lt;br /&gt;
* Integrate variable declarations into AutoTest trunk (by 8.2.2008)&lt;br /&gt;
&lt;br /&gt;
===Stefan===&lt;br /&gt;
* [RECURRENT] Build releasable delivery on Windows&lt;br /&gt;
* [RECURRENT, currently valid] Check log for XML validity&lt;br /&gt;
&lt;br /&gt;
* [DONE] Clean up Logging; add information that allows associating an extracted test case with its &amp;quot;extracted&amp;quot; message&lt;br /&gt;
** [DONE] Log original exception (make it part of test routine's state)&lt;br /&gt;
** [DONE] Log all thrown exceptions (no matter whether extraction is enabled)&lt;br /&gt;
** [DONE] Split log files&lt;br /&gt;
* [DONE] Refactor CDD Manager-Routine invocation cache-CDD Capturer&lt;br /&gt;
* [DONE - WORKS ONLY WITH PRECONDITION CHECK ENABLED!] Sometimes when a CDD breakpoint is hit no routine invocation is extracted. Find out why and fix it.&lt;br /&gt;
* [DONE] Fix problem where the exe of the target is non-standard (Anders' report)&lt;br /&gt;
* [DONE] Fix config files for system level tests (remove cdd tag)&lt;br /&gt;
* [DONE] Revive system level test suite&lt;br /&gt;
* [DONE] Adding a new test case, changing it, saving it, and recompiling results in an error message about a &amp;quot;.swp&amp;quot; file. Make the message go away.&lt;br /&gt;
* &amp;quot;Cleanup button&amp;quot; deletes filtered unresolved extracted test cases&lt;br /&gt;
&lt;br /&gt;
* When detecting whether compilation succeeded, include &amp;quot;corrupted&amp;quot; as a keyword when parsing the output message (as in the &amp;quot;project corrupted. cannot continue&amp;quot; message)&lt;br /&gt;
&lt;br /&gt;
* Make newly extracted test cases show up &amp;quot;expanded&amp;quot; in GUI treeview&lt;br /&gt;
* Only execute unresolved test cases once. Disable them afterwards. (Needs discussion)&lt;br /&gt;
* Make general encoding/decoding routines for special feature names&lt;br /&gt;
** e.g. infix &amp;quot;+&amp;quot; (ES) &amp;lt;=&amp;gt; infix_plus (CDD)&lt;br /&gt;
* Cache debug values when extracting several test cases.&lt;br /&gt;
* Allow for test case extraction of passing routine invocations (with Jocelyn)&lt;br /&gt;
* Rebuilding manual test suite through extraction and synthesizing&lt;br /&gt;
* Find performance bottleneck of test case extraction and propose extraction method for second chance&lt;br /&gt;
* [LOW PRIORITY since inv prob solved] When the runtime of the interpreter is severely damaged, a popup box shows up on Windows. The popup can be ignored and goes away. It does not come up when we shoot down the interpreter, but when a faulty instruction is executed. (E.g. the inv bug can trigger this dialog)&lt;br /&gt;
* [LOW PRIORITY since unlikely situation] Test cases with a 'False' (not satisfiable) invariant make the interpreter hang and ES crash. Add a test case and fix it.&lt;br /&gt;
&lt;br /&gt;
====Bugs/Things to look at====&lt;br /&gt;
&lt;br /&gt;
* For big projects (like ES itself) background compilation of the interpreter leads to completely unresponsive ES&lt;br /&gt;
* [NO EASY SOLUTION POSSIBLE] Crash upon closing of EiffelStudio (feature call on void target in breakpoint tool)&lt;br /&gt;
&lt;br /&gt;
===Manu===&lt;br /&gt;
* Install CDD in student labs (Manu)&lt;br /&gt;
* Devise questionnaires&lt;br /&gt;
** Initial (due next meeting after Manu's vacation)&lt;br /&gt;
** Midterm&lt;br /&gt;
** Final&lt;br /&gt;
* Analyze questionnaires&lt;br /&gt;
* Rework example profiles&lt;br /&gt;
* Assis will use CDD to get a feel for it and create a test suite for the students to start with&lt;br /&gt;
&lt;br /&gt;
=== Bernd ===&lt;br /&gt;
* Define Project for SoftEng&lt;br /&gt;
** Find a test suite for us to test students' code&lt;br /&gt;
** Find project with pure functional part&lt;br /&gt;
&lt;br /&gt;
===Unassigned===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Beta Tester Feedback====&lt;br /&gt;
(Please put your name so we can get back to you in case of questions)&lt;br /&gt;
* (Jocelyn) It should be possible to set the location of the cdd_tests directory (what if location of .ecf file is not readable?) [delayed]&lt;br /&gt;
** home directory? application_data directory?  [Jocelyn]&lt;br /&gt;
* (Jocelyn) There should be UI support for deletion of Test Case [difficult for test routines, possible for classes. documentation added.]&lt;br /&gt;
* (Jocelyn) [BUG] the manual test case creation dialog should check if class with chosen name is already in the system [delayed]&lt;br /&gt;
* (Jocelyn) It would be nice if there was a way to configure the timeout for the interpreter [Stefan will ask Jocelyn for concrete example]&lt;br /&gt;
&lt;br /&gt;
== Questionnaires ==&lt;br /&gt;
&lt;br /&gt;
* Use ELBA&lt;br /&gt;
&lt;br /&gt;
== Software Engineering Project ==&lt;br /&gt;
&lt;br /&gt;
* Task 1: Implement VCard API&lt;br /&gt;
* Task 2: Implement Mime API&lt;br /&gt;
* Task 3: Write test cases to reveal faults in foreign VCard implementations&lt;br /&gt;
* Task 4: Write test cases to reveal faults in foreign Mime implementations&lt;br /&gt;
&lt;br /&gt;
* Group A:&lt;br /&gt;
** Task 1, Manual Tests&lt;br /&gt;
** Task 2, Extracted Tests&lt;br /&gt;
** Task 3, Manual Tests&lt;br /&gt;
** Task 4, Extracted Tests&lt;br /&gt;
* Group B:&lt;br /&gt;
** Task 1, Extracted Tests&lt;br /&gt;
** Task 2, Manual Tests&lt;br /&gt;
** Task 3, Extracted Tests&lt;br /&gt;
** Task 4, Manual Tests&lt;br /&gt;
&lt;br /&gt;
* One large project, but divided into testable subcomponents&lt;br /&gt;
* Students required to write test cases&lt;br /&gt;
* Fixed API to make things uniformly testable&lt;br /&gt;
* Public/Secret test cases (similar to Zeller course)&lt;br /&gt;
* Competitions:&lt;br /&gt;
** Group A test cases applied to Group A project&lt;br /&gt;
** Group A test cases applied to Group B project&lt;br /&gt;
&lt;br /&gt;
* Idea how to cancel out bias while allowing fair grading:&lt;br /&gt;
** Subtasks 1 and 2, Students divided into groups A and B&lt;br /&gt;
** First both groups do 1, A is allowed to use tool, B not&lt;br /&gt;
** Then both groups do 2, B is allowed to use tool, A not&lt;br /&gt;
** Bias cancellation:&lt;br /&gt;
*** Project complexity&lt;br /&gt;
*** Experience of students&lt;br /&gt;
*** Experience gained in the first subtask when developing the second&lt;br /&gt;
*** Risk: One task might be better suited for the tool than the other&lt;br /&gt;
&lt;br /&gt;
== Data to harvest ==&lt;br /&gt;
* IDE Time with CDD(extraction) enabled / IDE Time with CDD(extraction) disabled&lt;br /&gt;
* Test Case Source (just final version, or all versions?)&lt;br /&gt;
** Use Profiler to get coverage approximation&lt;br /&gt;
* TC Meta Data (with timestamps -&amp;gt; Evolution of Test Case)&lt;br /&gt;
** TC Added/Removed/Changed&lt;br /&gt;
** TC Outcome (transitions from FAIL/PASS/UNRESOLVED[bad_communication &amp;lt;-&amp;gt; does_not_compile &amp;lt;-&amp;gt; bad_input])&lt;br /&gt;
** TC execution time&lt;br /&gt;
** Modifications to a test case (compiler needs to recompile)&lt;br /&gt;
* Development Session Data&lt;br /&gt;
** IDE Startup&lt;br /&gt;
** File save&lt;br /&gt;
* Questionnaires&lt;br /&gt;
** Initial&lt;br /&gt;
** Final&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Logging ==&lt;br /&gt;
&lt;br /&gt;
* &amp;quot;Meta&amp;quot; log entries&lt;br /&gt;
** Project opened (easy)&lt;br /&gt;
** CDD enable/disable (easy)&lt;br /&gt;
** general EiffelStudio action log entries for Developer Behaviour (harder... what do we need??)&lt;br /&gt;
&lt;br /&gt;
* CDD actions log entries&lt;br /&gt;
** Compilation of interpreter (start, end, duration)&lt;br /&gt;
** Execution of test cases (start, end; do we need the individual duration of each test case that gets executed?)&lt;br /&gt;
** Extraction of new test case (extraction time)&lt;br /&gt;
&lt;br /&gt;
* Test Suite Status&lt;br /&gt;
** Test suite: after each refresh log list of all test cases (class level, needed because it's not possible to know when manual test cases get added...)&lt;br /&gt;
** Test class: (do we need info on this level)&lt;br /&gt;
** Test routine: status (basically as you see it in the tool)&lt;br /&gt;
&lt;br /&gt;
==Experiment Hypotheses==&lt;br /&gt;
&lt;br /&gt;
===Comparisons===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
! contracts&lt;br /&gt;
! man. tests&lt;br /&gt;
! playing&lt;br /&gt;
! extr.&lt;br /&gt;
! synth.&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
| A B&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A B&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| experiment&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| B&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A B&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
|-&lt;br /&gt;
| ISSTA&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| B&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| B&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===How reliably can we extract test cases that reproduce the original failure?===&lt;br /&gt;
* Log original exception and exception received from first test execution&lt;br /&gt;
&lt;br /&gt;
===Are the extracted tests useful for debugging?===&lt;br /&gt;
* Ask developers, using CDD&lt;br /&gt;
&lt;br /&gt;
===What is the (time and memory) overhead of enabling extraction?===&lt;br /&gt;
&lt;br /&gt;
===What is the size of the extracted test cases?===&lt;br /&gt;
&lt;br /&gt;
===Are we able to reproduce bugs from industry?===&lt;br /&gt;
Take a bug repository (say EiffelStudio, Gobo, eposix, ...). Use the buggy version, try to reproduce the bug, and see whether the extracted test case is good.&lt;br /&gt;
&lt;br /&gt;
===Does it make a difference to the quality of the code whether one writes tests manually or extracts them?===&lt;br /&gt;
* Compare projects using extracted tests and manual tests against a reference test suite&lt;br /&gt;
&lt;br /&gt;
===Do contracts replace traditional testing oracles?===&lt;br /&gt;
* Original API without contracts&lt;br /&gt;
* Run failing test cases (the ones we get from second part) with reference API with contracts&lt;br /&gt;
* How many times does the contract replace the testing oracle?&lt;/div&gt;</summary>
		<author><name>Mogh</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=CddMeeting_13_02_2008&amp;diff=10675</id>
		<title>CddMeeting 13 02 2008</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=CddMeeting_13_02_2008&amp;diff=10675"/>
				<updated>2008-03-01T15:02:53Z</updated>
		
		<summary type="html">&lt;p&gt;Mogh: /* Stefan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:CDD]]&lt;br /&gt;
=CDD Meeting, Wednesday, 14.02.2008, 10:00=&lt;br /&gt;
&lt;br /&gt;
== Next Meeting ==&lt;br /&gt;
* 15.02.2008; 14:15&lt;br /&gt;
&lt;br /&gt;
== Tasks ==&lt;br /&gt;
&lt;br /&gt;
===Andreas===&lt;br /&gt;
* Formulate Experiment Hypotheses (Andreas)&lt;br /&gt;
* Fix AutoTest for courses&lt;br /&gt;
** New release&lt;br /&gt;
* Write documentation and video tutorials (together with final release)&lt;br /&gt;
* Finish tuple_002 test case&lt;br /&gt;
* [delayed] Retest if test cases with errors are properly ignored (after 6.1 port)&lt;br /&gt;
* [done] Add timeout judgement&lt;br /&gt;
* Build releasable delivery for Linux (after each Beta I guess...)&lt;br /&gt;
&lt;br /&gt;
===Arno===&lt;br /&gt;
&lt;br /&gt;
====Bug Fixing====&lt;br /&gt;
* Check why EiffelStudio quits after debugging a test routine and ignoring violations (ask Jocelyn)&lt;br /&gt;
* CDD Output Tool window display is not saved (on initial start after installation it is there. But after closing and reopening it is gone)&lt;br /&gt;
&lt;br /&gt;
===Ilinca===&lt;br /&gt;
&lt;br /&gt;
* Integrate variable declarations into AutoTest trunk (by 8.2.2008)&lt;br /&gt;
&lt;br /&gt;
===Stefan===&lt;br /&gt;
* [RECURRENT] Build releasable delivery on Windows&lt;br /&gt;
* [RECURRENT, currently valid] Check log for XML validity&lt;br /&gt;
&lt;br /&gt;
* [DONE] Clean up Logging; add information that allows associating an extracted test case with its &amp;quot;extracted&amp;quot; message&lt;br /&gt;
** [DONE] Log original exception (make it part of test routine's state)&lt;br /&gt;
** [DONE] Log all thrown exceptions (no matter whether extraction is enabled)&lt;br /&gt;
** [DONE] Split log files&lt;br /&gt;
* [DONE] Refactor CDD Manager-Routine invocation cache-CDD Capturer&lt;br /&gt;
* [DONE - WORKS ONLY WITH PRECONDITION CHECK ENABLED!] Sometimes when a CDD breakpoint is hit no routine invocation is extracted. Find out why and fix it.&lt;br /&gt;
* [DONE] Fix problem where the exe of the target is non-standard (Anders' report)&lt;br /&gt;
* [DONE] Fix config files for system level tests (remove cdd tag)&lt;br /&gt;
* [DONE] Revive system level test suite&lt;br /&gt;
* [DONE] Adding a new test case, changing it, saving it, and recompiling results in an error message about a &amp;quot;.swp&amp;quot; file. Make the message go away.&lt;br /&gt;
* &amp;quot;Cleanup button&amp;quot; deletes filtered unresolved extracted test cases&lt;br /&gt;
&lt;br /&gt;
* When detecting whether compilation succeeded, include &amp;quot;corrupted&amp;quot; as a keyword when parsing the output message (as in the &amp;quot;project corrupted. cannot continue&amp;quot; message)&lt;br /&gt;
&lt;br /&gt;
* Make newly extracted test cases show up &amp;quot;expanded&amp;quot; in GUI treeview&lt;br /&gt;
* Only execute unresolved test cases once. Disable them afterwards. (Needs discussion)&lt;br /&gt;
* Make general encoding/decoding routines for special feature names&lt;br /&gt;
** e.g. infix &amp;quot;+&amp;quot; (ES) &amp;lt;=&amp;gt; infix_plus (CDD)&lt;br /&gt;
* Cache debug values when extracting several test cases.&lt;br /&gt;
* Allow for test case extraction of passing routine invocations (with Jocelyn)&lt;br /&gt;
* Rebuilding manual test suite through extraction and synthesizing&lt;br /&gt;
* Find performance bottleneck of test case extraction and propose extraction method for second chance&lt;br /&gt;
* [LOW PRIORITY since inv prob solved] When the runtime of the interpreter is severely damaged, a popup box shows up on Windows. The popup can be ignored and goes away. It does not come up when we shoot down the interpreter, but when a faulty instruction is executed. (E.g. the inv bug can trigger this dialog)&lt;br /&gt;
* [LOW PRIORITY since unlikely situation] Test cases with a 'False' (not satisfiable) invariant make the interpreter hang and ES crash. Add a test case and fix it.&lt;br /&gt;
&lt;br /&gt;
====Bugs/Things to look at====&lt;br /&gt;
&lt;br /&gt;
* For big projects (like ES itself) background compilation of the interpreter leads to completely unresponsive ES&lt;br /&gt;
* [NO EASY SOLUTION POSSIBLE] Crash upon closing of EiffelStudio (feature call on void target in breakpoint tool)&lt;br /&gt;
&lt;br /&gt;
===Manu===&lt;br /&gt;
* Install CDD in student labs (Manu)&lt;br /&gt;
* Devise questionnaires&lt;br /&gt;
** Initial (due next meeting after Manu's vacation)&lt;br /&gt;
** Midterm&lt;br /&gt;
** Final&lt;br /&gt;
* Analyze questionnaires&lt;br /&gt;
* Rework example profiles&lt;br /&gt;
* Assis will use CDD to get a feel for it and create a test suite for the students to start with&lt;br /&gt;
&lt;br /&gt;
=== Bernd ===&lt;br /&gt;
* Define Project for SoftEng&lt;br /&gt;
** Find a test suite for us to test students' code&lt;br /&gt;
** Find project with pure functional part&lt;br /&gt;
&lt;br /&gt;
===Unassigned===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Beta Tester Feedback====&lt;br /&gt;
(Please put your name so we can get back to you in case of questions)&lt;br /&gt;
* (Jocelyn) It should be possible to set the location of the cdd_tests directory (what if location of .ecf file is not readable?) [delayed]&lt;br /&gt;
** home directory? application_data directory?  [Jocelyn]&lt;br /&gt;
* (Jocelyn) There should be UI support for deletion of Test Case [difficult for test routines, possible for classes. documentation added.]&lt;br /&gt;
* (Jocelyn) [BUG] the manual test case creation dialog should check if class with chosen name is already in the system [delayed]&lt;br /&gt;
* (Jocelyn) It would be nice if there was a way to configure the timeout for the interpreter [Stefan will ask Jocelyn for concrete example]&lt;br /&gt;
&lt;br /&gt;
== Questionnaires ==&lt;br /&gt;
&lt;br /&gt;
* Use ELBA&lt;br /&gt;
&lt;br /&gt;
== Software Engineering Project ==&lt;br /&gt;
&lt;br /&gt;
* Task 1: Implement VCard API&lt;br /&gt;
* Task 2: Implement Mime API&lt;br /&gt;
* Task 3: Write test cases to reveal faults in foreign VCard implementations&lt;br /&gt;
* Task 4: Write test cases to reveal faults in foreign Mime implementations&lt;br /&gt;
&lt;br /&gt;
* Group A:&lt;br /&gt;
** Task 1, Manual Tests&lt;br /&gt;
** Task 2, Extracted Tests&lt;br /&gt;
** Task 3, Manual Tests&lt;br /&gt;
** Task 4, Extracted Tests&lt;br /&gt;
* Group B:&lt;br /&gt;
** Task 1, Extracted Tests&lt;br /&gt;
** Task 2, Manual Tests&lt;br /&gt;
** Task 3, Extracted Tests&lt;br /&gt;
** Task 4, Manual Tests&lt;br /&gt;
&lt;br /&gt;
* One large project, but divided into testable subcomponents&lt;br /&gt;
* Students required to write test cases&lt;br /&gt;
* Fixed API to make things uniformly testable&lt;br /&gt;
* Public/Secret test cases (similar to Zeller course)&lt;br /&gt;
* Competitions:&lt;br /&gt;
** Group A test cases applied to Group A project&lt;br /&gt;
** Group A test cases applied to Group B project&lt;br /&gt;
&lt;br /&gt;
* Idea how to cancel out bias while allowing fair grading:&lt;br /&gt;
** Subtasks 1 and 2, Students divided into groups A and B&lt;br /&gt;
** First both groups do 1, A is allowed to use tool, B not&lt;br /&gt;
** Then both groups do 2, B is allowed to use tool, A not&lt;br /&gt;
** Bias cancellation:&lt;br /&gt;
*** Project complexity&lt;br /&gt;
*** Experience of students&lt;br /&gt;
*** Experience gained in first subtask, when developing second&lt;br /&gt;
*** Risk: One task might be better suited for the tool than the other&lt;br /&gt;
&lt;br /&gt;
== Data to harvest ==&lt;br /&gt;
* IDE Time with CDD(extraction) enabled / IDE Time with CDD(extraction) disabled&lt;br /&gt;
* Test Case Source (just final version, or all versions?)&lt;br /&gt;
** Use Profiler to get coverage approximation&lt;br /&gt;
* TC Meta Data (with timestamps -&amp;gt; Evolution of Test Case)&lt;br /&gt;
** TC Added/Removed/Changed&lt;br /&gt;
** TC Outcome (transitions from FAIL/PASS/UNRESOLVED[bad_communication &amp;lt;-&amp;gt; does_not_compile &amp;lt;-&amp;gt; bad_input])&lt;br /&gt;
** TC execution time&lt;br /&gt;
** Modifications to a test case (compiler needs to recompile)&lt;br /&gt;
* Development Session Data&lt;br /&gt;
** IDE Startup&lt;br /&gt;
** File save&lt;br /&gt;
* Questionnaires&lt;br /&gt;
** Initial&lt;br /&gt;
** Final&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Logging ==&lt;br /&gt;
&lt;br /&gt;
* &amp;quot;Meta&amp;quot; log entries&lt;br /&gt;
** Project opened (easy)&lt;br /&gt;
** CDD enable/disable (easy)&lt;br /&gt;
** general EiffelStudio action log entries for Developer Behaviour (harder... what do we need??)&lt;br /&gt;
&lt;br /&gt;
* CDD actions log entries&lt;br /&gt;
** Compilation of interpreter (start, end, duration)&lt;br /&gt;
** Execution of test cases (start, end, do we need individual duration of each test case that gets executed?)&lt;br /&gt;
** Extraction of new test case (extraction time)&lt;br /&gt;
&lt;br /&gt;
* Test Suite Status&lt;br /&gt;
** Test suite: after each refresh log list of all test cases (class level, needed because it's not possible to know when manual test cases get added...)&lt;br /&gt;
** Test class: (do we need info on this level?)&lt;br /&gt;
** Test routine: status (basically as you see it in the tool)&lt;br /&gt;
&lt;br /&gt;
==Experiment Hypotheses==&lt;br /&gt;
&lt;br /&gt;
===Comparisons===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
! contracts&lt;br /&gt;
! man. tests&lt;br /&gt;
! playing&lt;br /&gt;
! extr.&lt;br /&gt;
! synth.&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
| A B&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A B&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| experiment&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| B&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A B&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
|-&lt;br /&gt;
| ISSTA&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| B&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| B&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===How reliably can we extract test cases that reproduce the original failure?===&lt;br /&gt;
* Log original exception and exception received from first test execution&lt;br /&gt;
&lt;br /&gt;
===Are the extracted tests useful for debugging?===&lt;br /&gt;
* Ask developers using CDD&lt;br /&gt;
&lt;br /&gt;
===What is the (time and memory) overhead of enabling extraction?===&lt;br /&gt;
&lt;br /&gt;
===What is the size of the extracted test cases?===&lt;br /&gt;
&lt;br /&gt;
===Are we able to reproduce bugs from industry?===&lt;br /&gt;
Take bug repository (say EiffelStudio, Gobo, eposix, ...). Use buggy version, try to reproduce bug, see if extracted test case is good.&lt;br /&gt;
&lt;br /&gt;
===Does it make a difference to code quality whether one writes tests manually or extracts them?===&lt;br /&gt;
* Compare projects using extracted tests and manual tests to ref test suite&lt;br /&gt;
&lt;br /&gt;
===Do contracts replace traditional testing oracles?===&lt;br /&gt;
* Original API without contracts&lt;br /&gt;
* Run failing test cases (the ones we get from second part) with reference API with contracts&lt;br /&gt;
* How many times does the contract replace the testing oracle?&lt;/div&gt;</summary>
		<author><name>Mogh</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=CddMeeting_13_02_2008&amp;diff=10673</id>
		<title>CddMeeting 13 02 2008</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=CddMeeting_13_02_2008&amp;diff=10673"/>
				<updated>2008-02-29T14:38:04Z</updated>
		
		<summary type="html">&lt;p&gt;Mogh: /* Stefan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:CDD]]&lt;br /&gt;
=CDD Meeting, Wednesday, 14.02.2008, 10:00=&lt;br /&gt;
&lt;br /&gt;
== Next Meeting ==&lt;br /&gt;
* 15.02.2008; 14:15&lt;br /&gt;
&lt;br /&gt;
== Tasks ==&lt;br /&gt;
&lt;br /&gt;
===Andreas===&lt;br /&gt;
* Formulate Experiment Hypotheses (Andreas)&lt;br /&gt;
* Fix AutoTest for courses&lt;br /&gt;
** New release&lt;br /&gt;
* Write documentation and video tutorials (together with final release)&lt;br /&gt;
* Finish tuple_002 test case&lt;br /&gt;
* [delayed] Retest if test cases with errors are properly ignored (after 6.1 port)&lt;br /&gt;
* [done] Add timeout judgement&lt;br /&gt;
* Build releasable delivery for Linux (after each Beta I guess...)&lt;br /&gt;
&lt;br /&gt;
===Arno===&lt;br /&gt;
&lt;br /&gt;
====Bug Fixing====&lt;br /&gt;
* Check why EiffelStudio quits after debugging a test routine and ignoring violations (ask Jocelyn)&lt;br /&gt;
* CDD Output Tool window display is not saved (on initial start after installation it is there. But after closing and reopening it is gone)&lt;br /&gt;
&lt;br /&gt;
===Ilinca===&lt;br /&gt;
&lt;br /&gt;
* Integrate variable declarations into AutoTest trunk (by 8.2.2008)&lt;br /&gt;
&lt;br /&gt;
===Stefan===&lt;br /&gt;
* [RECURRENT] Build releasable delivery on Windows&lt;br /&gt;
* [RECURRENT, currently valid] Check log for XML validity&lt;br /&gt;
&lt;br /&gt;
* [DONE] Clean up Logging, add information that allows for associating an extracted test case with its &amp;quot;extracted&amp;quot; message&lt;br /&gt;
** [DONE] Log original exception (make it part of test routine's state)&lt;br /&gt;
** [DONE] Logging all thrown exceptions (no matter whether extraction is enabled)&lt;br /&gt;
** [DONE] Split log files&lt;br /&gt;
* [DONE] Refactor CDD Manager-Routine invocation cache-CDD Capturer&lt;br /&gt;
* [DONE - WORKS ONLY WITH PRECONDITION CHECK ENABLED!] Sometimes when a CDD breakpoint is hit no routine invocation is extracted. Find out why and fix it.&lt;br /&gt;
* [DONE] Fix problem where exe of target is non standard (Anders report)&lt;br /&gt;
* [DONE] Fix config files for system level tests (remove cdd tag)&lt;br /&gt;
* [DONE] Revive system level test suite&lt;br /&gt;
* [DONE] Adding a new test case, changing it, saving it, and recompiling it results in an error message about a &amp;quot;.swp&amp;quot; file. Make the message go away.&lt;br /&gt;
* &amp;quot;Cleanup button&amp;quot; deletes filtered unresolved extracted test cases&lt;br /&gt;
&lt;br /&gt;
* When detecting if compilation succeeded, include &amp;quot;corrupted&amp;quot; as a keyword when parsing the output message (as in the &amp;quot;project corrupted, cannot continue&amp;quot; message)&lt;br /&gt;
&lt;br /&gt;
* Make newly extracted test cases show up &amp;quot;expanded&amp;quot; in GUI treeview&lt;br /&gt;
* Only execute unresolved test cases once. Disable them afterwards. (Needs discussion)&lt;br /&gt;
* Make general encoding/decoding routines for special feature names&lt;br /&gt;
** e.g. infix &amp;quot;+&amp;quot; (ES) &amp;lt;=&amp;gt; infix_plus (CDD)&lt;br /&gt;
* Cache debug values when extracting several test cases.&lt;br /&gt;
* Allow for test case extraction of passing routine invocations (with Jocelyn)&lt;br /&gt;
* Rebuilding manual test suite through extraction and synthesizing&lt;br /&gt;
* Find performance bottleneck of test case extraction and propose extraction method for second chance&lt;br /&gt;
* When the runtime of the interpreter is severely damaged, a popup box shows up on Windows. The popup can be ignored and goes away. This popup does not come up when we shoot down the interpreter, but when a faulty instruction is executed. (E.g. the inv bug can trigger this dialog.)&lt;br /&gt;
* Test cases with a 'False' (not satisfiable) invariant make the interpreter hang and ES crash. Add a test case and fix it.&lt;br /&gt;
&lt;br /&gt;
====Bugs/Things to look at====&lt;br /&gt;
&lt;br /&gt;
* For big projects (like ES itself) background compilation of the interpreter leads to completely unresponsive ES&lt;br /&gt;
* [NO EASY SOLUTION POSSIBLE] Crash upon closing of EiffelStudio (feature call on void target in breakpoint tool)&lt;br /&gt;
&lt;br /&gt;
===Manu===&lt;br /&gt;
* Install CDD in student labs (Manu)&lt;br /&gt;
* Devise questionnaires&lt;br /&gt;
** Initial (due next meeting after Manu's vacation)&lt;br /&gt;
** Midterm&lt;br /&gt;
** Final&lt;br /&gt;
* Analyze questionnaires&lt;br /&gt;
* Rework example profiles&lt;br /&gt;
* Assis will use CDD to get a feel for it and create a test suite for the students to start with&lt;br /&gt;
&lt;br /&gt;
=== Bernd ===&lt;br /&gt;
* Define Project for SoftEng&lt;br /&gt;
** Find test suite for us to test students' code&lt;br /&gt;
** Find project with pure functional part&lt;br /&gt;
&lt;br /&gt;
===Unassigned===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Beta Tester Feedback====&lt;br /&gt;
(Please put your name so we can get back to you in the case of questions)&lt;br /&gt;
* (Jocelyn) It should be possible to set the location of the cdd_tests directory (what if location of .ecf file is not readable?) [delayed]&lt;br /&gt;
** home directory? application_data directory?  [Jocelyn]&lt;br /&gt;
* (Jocelyn) There should be UI support for deletion of Test Case [difficult for test routines, possible for classes. documentation added.]&lt;br /&gt;
* (Jocelyn) [BUG] the manual test case creation dialog should check if class with chosen name is already in the system [delayed]&lt;br /&gt;
* (Jocelyn) It would be nice if there was a way to configure the timeout for the interpreter [Stefan will ask Jocelyn for concrete example]&lt;br /&gt;
&lt;br /&gt;
== Questionnaires ==&lt;br /&gt;
&lt;br /&gt;
* Use ELBA&lt;br /&gt;
&lt;br /&gt;
== Software Engineering Project ==&lt;br /&gt;
&lt;br /&gt;
* Task 1: Implement VCard API&lt;br /&gt;
* Task 2: Implement Mime API&lt;br /&gt;
* Task 3: Write test cases to reveal faults in foreign VCard implementations&lt;br /&gt;
* Task 4: Write test cases to reveal faults in foreign Mime implementations&lt;br /&gt;
&lt;br /&gt;
* Group A:&lt;br /&gt;
** Task 1, Manual Tests&lt;br /&gt;
** Task 2, Extracted Tests&lt;br /&gt;
** Task 3, Manual Tests&lt;br /&gt;
** Task 4, Extracted Tests&lt;br /&gt;
* Group B:&lt;br /&gt;
** Task 1, Extracted Tests&lt;br /&gt;
** Task 2, Manual Tests&lt;br /&gt;
** Task 3, Extracted Tests&lt;br /&gt;
** Task 4, Manual Tests&lt;br /&gt;
&lt;br /&gt;
* One large project, but divided into testable subcomponents&lt;br /&gt;
* Students required to write test cases&lt;br /&gt;
* Fixed API to make things uniformly testable&lt;br /&gt;
* Public/Secret test cases (similar to Zeller course)&lt;br /&gt;
* Competitions:&lt;br /&gt;
** Group A test cases applied to Group A project&lt;br /&gt;
** Group A test cases applied to Group B project&lt;br /&gt;
&lt;br /&gt;
* Idea how to cancel out bias while allowing fair grading:&lt;br /&gt;
** Subtasks 1 and 2, Students divided into groups A and B&lt;br /&gt;
** First both groups do 1, A is allowed to use tool, B not&lt;br /&gt;
** Then both groups do 2, B is allowed to use tool, A not&lt;br /&gt;
** Bias cancellation:&lt;br /&gt;
*** Project complexity&lt;br /&gt;
*** Experience of students&lt;br /&gt;
*** Experience gained in first subtask, when developing second&lt;br /&gt;
*** Risk: One task might be better suited for the tool than the other&lt;br /&gt;
&lt;br /&gt;
== Data to harvest ==&lt;br /&gt;
* IDE Time with CDD(extraction) enabled / IDE Time with CDD(extraction) disabled&lt;br /&gt;
* Test Case Source (just final version, or all versions?)&lt;br /&gt;
** Use Profiler to get coverage approximation&lt;br /&gt;
* TC Meta Data (with timestamps -&amp;gt; Evolution of Test Case)&lt;br /&gt;
** TC Added/Removed/Changed&lt;br /&gt;
** TC Outcome (transitions from FAIL/PASS/UNRESOLVED[bad_communication &amp;lt;-&amp;gt; does_not_compile &amp;lt;-&amp;gt; bad_input])&lt;br /&gt;
** TC execution time&lt;br /&gt;
** Modifications to a test case (compiler needs to recompile)&lt;br /&gt;
* Development Session Data&lt;br /&gt;
** IDE Startup&lt;br /&gt;
** File save&lt;br /&gt;
* Questionnaires&lt;br /&gt;
** Initial&lt;br /&gt;
** Final&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Logging ==&lt;br /&gt;
&lt;br /&gt;
* &amp;quot;Meta&amp;quot; log entries&lt;br /&gt;
** Project opened (easy)&lt;br /&gt;
** CDD enable/disable (easy)&lt;br /&gt;
** general EiffelStudio action log entries for Developer Behaviour (harder... what do we need??)&lt;br /&gt;
&lt;br /&gt;
* CDD actions log entries&lt;br /&gt;
** Compilation of interpreter (start, end, duration)&lt;br /&gt;
** Execution of test cases (start, end, do we need individual duration of each test case that gets executed?)&lt;br /&gt;
** Extraction of new test case (extraction time)&lt;br /&gt;
&lt;br /&gt;
* Test Suite Status&lt;br /&gt;
** Test suite: after each refresh log list of all test cases (class level, needed because it's not possible to know when manual test cases get added...)&lt;br /&gt;
** Test class: (do we need info on this level?)&lt;br /&gt;
** Test routine: status (basically as you see it in the tool)&lt;br /&gt;
&lt;br /&gt;
==Experiment Hypotheses==&lt;br /&gt;
&lt;br /&gt;
===Comparisons===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
! contracts&lt;br /&gt;
! man. tests&lt;br /&gt;
! playing&lt;br /&gt;
! extr.&lt;br /&gt;
! synth.&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
| A B&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A B&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| experiment&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| B&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A B&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
|-&lt;br /&gt;
| ISSTA&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| B&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| B&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===How reliably can we extract test cases that reproduce the original failure?===&lt;br /&gt;
* Log original exception and exception received from first test execution&lt;br /&gt;
&lt;br /&gt;
===Are the extracted tests useful for debugging?===&lt;br /&gt;
* Ask developers using CDD&lt;br /&gt;
&lt;br /&gt;
===What is the (time and memory) overhead of enabling extraction?===&lt;br /&gt;
&lt;br /&gt;
===What is the size of the extracted test cases?===&lt;br /&gt;
&lt;br /&gt;
===Are we able to reproduce bugs from industry?===&lt;br /&gt;
Take bug repository (say EiffelStudio, Gobo, eposix, ...). Use buggy version, try to reproduce bug, see if extracted test case is good.&lt;br /&gt;
&lt;br /&gt;
===Does it make a difference to code quality whether one writes tests manually or extracts them?===&lt;br /&gt;
* Compare projects using extracted tests and manual tests to ref test suite&lt;br /&gt;
&lt;br /&gt;
===Do contracts replace traditional testing oracles?===&lt;br /&gt;
* Original API without contracts&lt;br /&gt;
* Run failing test cases (the ones we get from second part) with reference API with contracts&lt;br /&gt;
* How many times does the contract replace the testing oracle?&lt;/div&gt;</summary>
		<author><name>Mogh</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=CddMeeting_13_02_2008&amp;diff=10672</id>
		<title>CddMeeting 13 02 2008</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=CddMeeting_13_02_2008&amp;diff=10672"/>
				<updated>2008-02-29T10:09:37Z</updated>
		
		<summary type="html">&lt;p&gt;Mogh: /* Stefan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:CDD]]&lt;br /&gt;
=CDD Meeting, Wednesday, 14.02.2008, 10:00=&lt;br /&gt;
&lt;br /&gt;
== Next Meeting ==&lt;br /&gt;
* 15.02.2008; 14:15&lt;br /&gt;
&lt;br /&gt;
== Tasks ==&lt;br /&gt;
&lt;br /&gt;
===Andreas===&lt;br /&gt;
* Formulate Experiment Hypotheses (Andreas)&lt;br /&gt;
* Fix AutoTest for courses&lt;br /&gt;
** New release&lt;br /&gt;
* Write documentation and video tutorials (together with final release)&lt;br /&gt;
* Finish tuple_002 test case&lt;br /&gt;
* [delayed] Retest if test cases with errors are properly ignored (after 6.1 port)&lt;br /&gt;
* [done] Add timeout judgement&lt;br /&gt;
* Build releasable delivery for Linux (after each Beta I guess...)&lt;br /&gt;
&lt;br /&gt;
===Arno===&lt;br /&gt;
&lt;br /&gt;
====Bug Fixing====&lt;br /&gt;
* Check why EiffelStudio quits after debugging a test routine and ignoring violations (ask Jocelyn)&lt;br /&gt;
* CDD Output Tool window display is not saved (on initial start after installation it is there. But after closing and reopening it is gone)&lt;br /&gt;
&lt;br /&gt;
===Ilinca===&lt;br /&gt;
&lt;br /&gt;
* Integrate variable declarations into AutoTest trunk (by 8.2.2008)&lt;br /&gt;
&lt;br /&gt;
===Stefan===&lt;br /&gt;
* [RECURRENT] Build releasable delivery on Windows&lt;br /&gt;
* [RECURRENT, currently valid] Check log for XML validity&lt;br /&gt;
&lt;br /&gt;
* Clean up Logging, add information that allows for associating an extracted test case with its &amp;quot;extracted&amp;quot; message&lt;br /&gt;
** Log original exception (make it part of test routine's state)&lt;br /&gt;
** Logging all thrown exceptions (no matter whether extraction is enabled)&lt;br /&gt;
** Split log files&lt;br /&gt;
* [DONE] Refactor CDD Manager-Routine invocation cache-CDD Capturer&lt;br /&gt;
* Sometimes when a CDD breakpoint is hit no routine invocation is extracted. Find out why and fix it.&lt;br /&gt;
* [DONE] Fix problem where exe of target is non standard (Anders report)&lt;br /&gt;
* &amp;quot;Cleanup button&amp;quot; deletes filtered unresolved extracted test cases&lt;br /&gt;
&lt;br /&gt;
* Pressing the manual test case button may cause an error.&lt;br /&gt;
* Fix CDD BP bug (in some cases there is a visible, disabled user bp generated which DOES make the application stop!)&lt;br /&gt;
* When detecting if compilation succeeded, include &amp;quot;corrupted&amp;quot; as a keyword when parsing the output message (as in the &amp;quot;project corrupted, cannot continue&amp;quot; message)&lt;br /&gt;
* Revive system level test suite&lt;br /&gt;
* Fix config files for system level tests (remove cdd tag)&lt;br /&gt;
&lt;br /&gt;
* Make newly extracted test cases show up &amp;quot;expanded&amp;quot; in GUI treeview&lt;br /&gt;
* Only execute unresolved test cases once. Disable them afterwards. (Needs discussion)&lt;br /&gt;
* Make general encoding/decoding routines for special feature names&lt;br /&gt;
** e.g. infix &amp;quot;+&amp;quot; (ES) &amp;lt;=&amp;gt; infix_plus (CDD)&lt;br /&gt;
* Cache debug values when extracting several test cases.&lt;br /&gt;
* Allow for test case extraction of passing routine invocations (with Jocelyn)&lt;br /&gt;
* Rebuilding manual test suite through extraction and synthesizing&lt;br /&gt;
* Find performance bottleneck of test case extraction and propose extraction method for second chance&lt;br /&gt;
* When the runtime of the interpreter is severely damaged, a popup box shows up on Windows. The popup can be ignored and goes away. This popup does not come up when we shoot down the interpreter, but when a faulty instruction is executed. (E.g. the inv bug can trigger this dialog.)&lt;br /&gt;
* Adding a new test case, changing it, saving it, and recompiling it results in an error message about a &amp;quot;.swp&amp;quot; file. Make the message go away.&lt;br /&gt;
* Test cases with a 'False' (not satisfiable) invariant make the interpreter hang and ES crash. Add a test case and fix it.&lt;br /&gt;
&lt;br /&gt;
====Bugs/Things to look at====&lt;br /&gt;
&lt;br /&gt;
* For big projects (like ES itself) background compilation of the interpreter leads to completely unresponsive ES&lt;br /&gt;
* [NO EASY SOLUTION POSSIBLE] Crash upon closing of EiffelStudio (feature call on void target in breakpoint tool)&lt;br /&gt;
&lt;br /&gt;
===Manu===&lt;br /&gt;
* Install CDD in student labs (Manu)&lt;br /&gt;
* Devise questionnaires&lt;br /&gt;
** Initial (due next meeting after Manu's vacation)&lt;br /&gt;
** Midterm&lt;br /&gt;
** Final&lt;br /&gt;
* Analyze questionnaires&lt;br /&gt;
* Rework example profiles&lt;br /&gt;
* Assis will use CDD to get a feel for it and create a test suite for the students to start with&lt;br /&gt;
&lt;br /&gt;
=== Bernd ===&lt;br /&gt;
* Define Project for SoftEng&lt;br /&gt;
** Find test suite for us to test students' code&lt;br /&gt;
** Find project with pure functional part&lt;br /&gt;
&lt;br /&gt;
===Unassigned===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Beta Tester Feedback====&lt;br /&gt;
(Please put your name so we can get back to you in the case of questions)&lt;br /&gt;
* (Jocelyn) It should be possible to set the location of the cdd_tests directory (what if location of .ecf file is not readable?) [delayed]&lt;br /&gt;
** home directory? application_data directory?  [Jocelyn]&lt;br /&gt;
* (Jocelyn) There should be UI support for deletion of Test Case [difficult for test routines, possible for classes. documentation added.]&lt;br /&gt;
* (Jocelyn) [BUG] the manual test case creation dialog should check if class with chosen name is already in the system [delayed]&lt;br /&gt;
* (Jocelyn) It would be nice if there was a way to configure the timeout for the interpreter [Stefan will ask Jocelyn for concrete example]&lt;br /&gt;
&lt;br /&gt;
== Questionnaires ==&lt;br /&gt;
&lt;br /&gt;
* Use ELBA&lt;br /&gt;
&lt;br /&gt;
== Software Engineering Project ==&lt;br /&gt;
&lt;br /&gt;
* Task 1: Implement VCard API&lt;br /&gt;
* Task 2: Implement Mime API&lt;br /&gt;
* Task 3: Write test cases to reveal faults in foreign VCard implementations&lt;br /&gt;
* Task 4: Write test cases to reveal faults in foreign Mime implementations&lt;br /&gt;
&lt;br /&gt;
* Group A:&lt;br /&gt;
** Task 1, Manual Tests&lt;br /&gt;
** Task 2, Extracted Tests&lt;br /&gt;
** Task 3, Manual Tests&lt;br /&gt;
** Task 4, Extracted Tests&lt;br /&gt;
* Group B:&lt;br /&gt;
** Task 1, Extracted Tests&lt;br /&gt;
** Task 2, Manual Tests&lt;br /&gt;
** Task 3, Extracted Tests&lt;br /&gt;
** Task 4, Manual Tests&lt;br /&gt;
&lt;br /&gt;
* One large project, but divided into testable subcomponents&lt;br /&gt;
* Students required to write test cases&lt;br /&gt;
* Fixed API to make things uniformly testable&lt;br /&gt;
* Public/Secret test cases (similar to Zeller course)&lt;br /&gt;
* Competitions:&lt;br /&gt;
** Group A test cases applied to Group A project&lt;br /&gt;
** Group A test cases applied to Group B project&lt;br /&gt;
&lt;br /&gt;
* Idea how to cancel out bias while allowing fair grading:&lt;br /&gt;
** Subtasks 1 and 2, Students divided into groups A and B&lt;br /&gt;
** First both groups do 1, A is allowed to use tool, B not&lt;br /&gt;
** Then both groups do 2, B is allowed to use tool, A not&lt;br /&gt;
** Bias cancellation:&lt;br /&gt;
*** Project complexity&lt;br /&gt;
*** Experience of students&lt;br /&gt;
*** Experience gained in first subtask, when developing second&lt;br /&gt;
*** Risk: One task might be better suited for the tool than the other&lt;br /&gt;
&lt;br /&gt;
== Data to harvest ==&lt;br /&gt;
* IDE Time with CDD(extraction) enabled / IDE Time with CDD(extraction) disabled&lt;br /&gt;
* Test Case Source (just final version, or all versions?)&lt;br /&gt;
** Use Profiler to get coverage approximation&lt;br /&gt;
* TC Meta Data (with timestamps -&amp;gt; Evolution of Test Case)&lt;br /&gt;
** TC Added/Removed/Changed&lt;br /&gt;
** TC Outcome (transitions from FAIL/PASS/UNRESOLVED[bad_communication &amp;lt;-&amp;gt; does_not_compile &amp;lt;-&amp;gt; bad_input])&lt;br /&gt;
** TC execution time&lt;br /&gt;
** Modifications to a test case (compiler needs to recompile)&lt;br /&gt;
* Development Session Data&lt;br /&gt;
** IDE Startup&lt;br /&gt;
** File save&lt;br /&gt;
* Questionnaires&lt;br /&gt;
** Initial&lt;br /&gt;
** Final&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Logging ==&lt;br /&gt;
&lt;br /&gt;
* &amp;quot;Meta&amp;quot; log entries&lt;br /&gt;
** Project opened (easy)&lt;br /&gt;
** CDD enable/disable (easy)&lt;br /&gt;
** general EiffelStudio action log entries for Developer Behaviour (harder... what do we need??)&lt;br /&gt;
&lt;br /&gt;
* CDD actions log entries&lt;br /&gt;
** Compilation of interpreter (start, end, duration)&lt;br /&gt;
** Execution of test cases (start, end, do we need individual duration of each test case that gets executed?)&lt;br /&gt;
** Extraction of new test case (extraction time)&lt;br /&gt;
&lt;br /&gt;
* Test Suite Status&lt;br /&gt;
** Test suite: after each refresh log list of all test cases (class level, needed because it's not possible to know when manual test cases get added...)&lt;br /&gt;
** Test class: (do we need info on this level?)&lt;br /&gt;
** Test routine: status (basically as you see it in the tool)&lt;br /&gt;
&lt;br /&gt;
==Experiment Hypotheses==&lt;br /&gt;
&lt;br /&gt;
===Comparisons===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
! contracts&lt;br /&gt;
! man. tests&lt;br /&gt;
! playing&lt;br /&gt;
! extr.&lt;br /&gt;
! synth.&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
| A B&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A B&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| experiment&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| B&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A B&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
|-&lt;br /&gt;
| ISSTA&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| B&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| B&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===How reliably can we extract test cases that reproduce the original failure?===&lt;br /&gt;
* Log original exception and exception received from first test execution&lt;br /&gt;
&lt;br /&gt;
===Are the extracted tests useful for debugging?===&lt;br /&gt;
* Ask developers using CDD&lt;br /&gt;
&lt;br /&gt;
===What is the (time and memory) overhead of enabling extraction?===&lt;br /&gt;
&lt;br /&gt;
===What is the size of the extracted test cases?===&lt;br /&gt;
&lt;br /&gt;
===Are we able to reproduce bugs from industry?===&lt;br /&gt;
Take bug repository (say EiffelStudio, Gobo, eposix, ...). Use buggy version, try to reproduce bug, see if extracted test case is good.&lt;br /&gt;
&lt;br /&gt;
===Does it make a difference to code quality whether one writes tests manually or extracts them?===&lt;br /&gt;
* Compare projects using extracted tests and manual tests to ref test suite&lt;br /&gt;
&lt;br /&gt;
===Do contracts replace traditional testing oracles?===&lt;br /&gt;
* Original API without contracts&lt;br /&gt;
* Run failing test cases (the ones we get from second part) with reference API with contracts&lt;br /&gt;
* How many times does the contract replace the testing oracle?&lt;/div&gt;</summary>
		<author><name>Mogh</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=CddMeeting_13_02_2008&amp;diff=10671</id>
		<title>CddMeeting 13 02 2008</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=CddMeeting_13_02_2008&amp;diff=10671"/>
				<updated>2008-02-29T10:05:45Z</updated>
		
		<summary type="html">&lt;p&gt;Mogh: /* Stefan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:CDD]]&lt;br /&gt;
=CDD Meeting, Wednesday, 14.02.2008, 10:00=&lt;br /&gt;
&lt;br /&gt;
== Next Meeting ==&lt;br /&gt;
* 15.02.2008; 14:15&lt;br /&gt;
&lt;br /&gt;
== Tasks ==&lt;br /&gt;
&lt;br /&gt;
===Andreas===&lt;br /&gt;
* Formulate Experiment Hypotheses (Andreas)&lt;br /&gt;
* Fix AutoTest for courses&lt;br /&gt;
** New release&lt;br /&gt;
* Write documentation and video tutorials (together with final release)&lt;br /&gt;
* Finish tuple_002 test case&lt;br /&gt;
* [delayed] Retest if test cases with errors are properly ignored (after 6.1 port)&lt;br /&gt;
* [done] Add timeout judgement&lt;br /&gt;
* Build releasable delivery for Linux (after each Beta I guess...)&lt;br /&gt;
&lt;br /&gt;
===Arno===&lt;br /&gt;
&lt;br /&gt;
====Bug Fixing====&lt;br /&gt;
* Check why EiffelStudio quits after debugging a test routine and ignoring violations (ask Jocelyn)&lt;br /&gt;
* CDD Output Tool window display is not saved (on initial start after installation it is there. But after closing and reopening it is gone)&lt;br /&gt;
&lt;br /&gt;
===Ilinca===&lt;br /&gt;
&lt;br /&gt;
* Integrate variable declarations into AutoTest trunk (by 8.2.2008)&lt;br /&gt;
&lt;br /&gt;
===Stefan===&lt;br /&gt;
* [RECURRENT] Build releasable delivery on Windows&lt;br /&gt;
* [RECURRENT, currently valid] Check log for XML validity&lt;br /&gt;
&lt;br /&gt;
* Clean up Logging, add information that allows for associating an extracted test case with its &amp;quot;extracted&amp;quot; message&lt;br /&gt;
** Log original exception (make it part of test routine's state)&lt;br /&gt;
** Logging all thrown exceptions (no matter whether extraction is enabled)&lt;br /&gt;
** Split log files&lt;br /&gt;
* [DONE] Refactor CDD Manager-Routine invocation cache-CDD Capturer&lt;br /&gt;
* Sometimes when a CDD breakpoint is hit no routine invocation is extracted. Find out why and fix it.&lt;br /&gt;
* [DONE] Fix problem where exe of target is non standard (Anders report)&lt;br /&gt;
* &amp;quot;Cleanup button&amp;quot; deletes filtered unresolved extracted test cases&lt;br /&gt;
&lt;br /&gt;
* Pressing the manual test case button may cause an error.&lt;br /&gt;
* Fix CDD BP bug (in some cases there is a visible, disabled user bp generated which DOES make the application stop!)&lt;br /&gt;
* When detecting if compilation succeeded, include &amp;quot;corrupted&amp;quot; as a keyword when parsing the output message (as in the &amp;quot;project corrupted, cannot continue&amp;quot; message)&lt;br /&gt;
* Revive system level test suite&lt;br /&gt;
* Fix config files for system level tests (remove cdd tag)&lt;br /&gt;
&lt;br /&gt;
* Make newly extracted test cases show up &amp;quot;expanded&amp;quot; in GUI treeview&lt;br /&gt;
* Only execute unresolved test cases once. Disable them afterwards. (Needs discussion)&lt;br /&gt;
* Make general encoding/decoding routines for special feature names&lt;br /&gt;
** e.g. infix &amp;quot;+&amp;quot; (ES) &amp;lt;=&amp;gt; infix_plus (CDD)&lt;br /&gt;
* Cache debug values when extracting several test cases.&lt;br /&gt;
* Allow for test case extraction of passing routine invocations (with Jocelyn)&lt;br /&gt;
* Rebuilding manual test suite through extraction and synthesizing&lt;br /&gt;
* Find performance bottleneck of test case extraction and propose extraction method for second chance&lt;br /&gt;
* When the runtime of the interpreter is severely damaged, a popup box shows up on Windows. The popup can be ignored and goes away. It does not come up when we shoot down the interpreter, but when a faulty instruction is executed. (E.g. the inv bug can trigger this dialog.)&lt;br /&gt;
* Adding a new test case, changing it, saving it, and recompiling it results in an error message about a &amp;quot;.swp&amp;quot; file. Make the message go away.&lt;br /&gt;
&lt;br /&gt;
====Bugs/Things to look at====&lt;br /&gt;
&lt;br /&gt;
* For big projects (like ES itself) background compilation of the interpreter leads to completely unresponsive ES&lt;br /&gt;
* [NO EASY SOLUTION POSSIBLE] Crash upon closing of EiffelStudio (feature call on void target in breakpoint tool)&lt;br /&gt;
&lt;br /&gt;
===Manu===&lt;br /&gt;
* Install CDD in student labs (Manu)&lt;br /&gt;
* Devise questionnaires&lt;br /&gt;
** Initial (due next meeting after Manu's vacation)&lt;br /&gt;
** Midterm&lt;br /&gt;
** Final&lt;br /&gt;
* Analyze questionnaires&lt;br /&gt;
* Rework example profiles&lt;br /&gt;
* Assis will use CDD to get a feel for it and create a test suite for the students to start with&lt;br /&gt;
&lt;br /&gt;
=== Bernd ===&lt;br /&gt;
* Define Project for SoftEng&lt;br /&gt;
** Find test suite for us to test students' code&lt;br /&gt;
** Find project with pure functional part&lt;br /&gt;
&lt;br /&gt;
===Unassigned===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Beta Tester Feedback====&lt;br /&gt;
(Please put your name so we can get back to you in the case of questions)&lt;br /&gt;
* (Jocelyn) It should be possible to set the location of the cdd_tests directory (what if location of .ecf file is not readable?) [delayed]&lt;br /&gt;
** home directory? application_data directory?  [Jocelyn]&lt;br /&gt;
* (Jocelyn) There should be UI support for deletion of Test Case [difficult for test routines, possible for classes. documentation added.]&lt;br /&gt;
* (Jocelyn) [BUG] the manual test case creation dialog should check if class with chosen name is already in the system [delayed]&lt;br /&gt;
* (Jocelyn) It would be nice if there was a way to configure the timeout for the interpreter [Stefan will ask Jocelyn for concrete example]&lt;br /&gt;
&lt;br /&gt;
== Questionnaires ==&lt;br /&gt;
&lt;br /&gt;
* Use ELBA&lt;br /&gt;
&lt;br /&gt;
== Software Engineering Project ==&lt;br /&gt;
&lt;br /&gt;
* Task 1: Implement VCard API&lt;br /&gt;
* Task 2: Implement Mime API&lt;br /&gt;
* Task 3: Write test cases to reveal faults in foreign VCard implementations&lt;br /&gt;
* Task 4: Write test cases to reveal faults in foreign Mime implementations&lt;br /&gt;
&lt;br /&gt;
* Group A:&lt;br /&gt;
** Task 1, Manual Tests&lt;br /&gt;
** Task 2, Extracted Tests&lt;br /&gt;
** Task 3, Manual Tests&lt;br /&gt;
** Task 4, Extracted Tests&lt;br /&gt;
* Group B:&lt;br /&gt;
** Task 1, Extracted Tests&lt;br /&gt;
** Task 2, Manual Tests&lt;br /&gt;
** Task 3, Extracted Tests&lt;br /&gt;
** Task 4, Manual Tests&lt;br /&gt;
&lt;br /&gt;
* One large project, but divided into testable subcomponents&lt;br /&gt;
* Students required to write test cases&lt;br /&gt;
* Fixed API to make things uniformly testable&lt;br /&gt;
* Public/Secret test cases (similar to Zeller course)&lt;br /&gt;
* Competitions:&lt;br /&gt;
** Group A test cases applied to Group A project&lt;br /&gt;
** Group A test cases applied to Group B project&lt;br /&gt;
&lt;br /&gt;
* Idea how to cancel out bias while allowing fair grading:&lt;br /&gt;
** Subtasks 1 and 2, Students divided into groups A and B&lt;br /&gt;
** First both groups do 1, A is allowed to use tool, B not&lt;br /&gt;
** Then both groups do 2, B is allowed to use tool, A not&lt;br /&gt;
** Bias cancellation:&lt;br /&gt;
*** Project complexity&lt;br /&gt;
*** Experience of students&lt;br /&gt;
*** Experience gained in first subtask, when developing second&lt;br /&gt;
*** Risk: One task might be better suited for the tool than the other&lt;br /&gt;
&lt;br /&gt;
== Data to harvest ==&lt;br /&gt;
* IDE Time with CDD(extraction) enabled / IDE Time with CDD(extraction) disabled&lt;br /&gt;
* Test Case Source (just final version, or all versions?)&lt;br /&gt;
** Use Profiler to get coverage approximation&lt;br /&gt;
* TC Meta Data (with timestamps -&amp;gt; Evolution of Test Case)&lt;br /&gt;
** TC Added/Removed/Changed&lt;br /&gt;
** TC Outcome (transitions from FAIL/PASS/UNRESOLVED[bad_communication &amp;lt;-&amp;gt; does_not_compile &amp;lt;-&amp;gt; bad_input])&lt;br /&gt;
** TC execution time&lt;br /&gt;
** Modifications to a test case (compiler needs to recompile)&lt;br /&gt;
* Development Session Data&lt;br /&gt;
** IDE Startup&lt;br /&gt;
** File save&lt;br /&gt;
* Questionnaires&lt;br /&gt;
** Initial&lt;br /&gt;
** Final&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Logging ==&lt;br /&gt;
&lt;br /&gt;
* &amp;quot;Meta&amp;quot; log entries&lt;br /&gt;
** Project opened (easy)&lt;br /&gt;
** CDD enable/disable (easy)&lt;br /&gt;
** general EiffelStudio action log entries for Developer Behaviour (harder... what do we need??)&lt;br /&gt;
&lt;br /&gt;
* CDD actions log entries&lt;br /&gt;
** Compilation of interpreter (start, end, duration)&lt;br /&gt;
** Execution of test cases (start, end, do we need individual duration of each test cases that gets executed?)&lt;br /&gt;
** Extraction of new test case (extraction time)&lt;br /&gt;
&lt;br /&gt;
* Test Suite Status&lt;br /&gt;
** Test suite: after each refresh log list of all test cases (class level, needed because it's not possible to know when manual test cases get added...)&lt;br /&gt;
** Test class: (do we need info on this level)&lt;br /&gt;
** Test routine: status (basically as you see it in the tool)&lt;br /&gt;
&lt;br /&gt;
==Experiment Hypotheses==&lt;br /&gt;
&lt;br /&gt;
===Comparisons===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
! contracts&lt;br /&gt;
! man. tests&lt;br /&gt;
! playing&lt;br /&gt;
! extr.&lt;br /&gt;
! synth.&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
| A B&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A B&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| experiment&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| B&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A B&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
|-&lt;br /&gt;
| ISSTA&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| B&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| B&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===How reliably can we extract test cases that reproduce the original failure?===&lt;br /&gt;
* Log original exception and exception received from first test execution&lt;br /&gt;
&lt;br /&gt;
===Are the extracted tests useful for debugging?===&lt;br /&gt;
* Ask developers using CDD&lt;br /&gt;
&lt;br /&gt;
===What is the (time and memory) overhead of enabling extraction?===&lt;br /&gt;
&lt;br /&gt;
===What is the size of the extracted test cases?===&lt;br /&gt;
&lt;br /&gt;
===Are we able to reproduce bugs from industry?===&lt;br /&gt;
Take bug repository (say EiffelStudio, Gobo, eposix, ...). Use buggy version, try to reproduce bug, see if extracted test case is good.&lt;br /&gt;
&lt;br /&gt;
===Does it make a difference to code quality whether one writes tests manually or extracts them?===&lt;br /&gt;
* Compare projects using extracted tests and manual tests to ref test suite&lt;br /&gt;
&lt;br /&gt;
===Do contracts replace traditional testing oracles?===&lt;br /&gt;
* Original API without contracts&lt;br /&gt;
* Run failing test cases (the ones we get from second part) with reference API with contracts&lt;br /&gt;
* How many times does the contract replace the testing oracle?&lt;/div&gt;</summary>
		<author><name>Mogh</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=CddMeeting_13_02_2008&amp;diff=10663</id>
		<title>CddMeeting 13 02 2008</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=CddMeeting_13_02_2008&amp;diff=10663"/>
				<updated>2008-02-27T09:45:43Z</updated>
		
		<summary type="html">&lt;p&gt;Mogh: /* Stefan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:CDD]]&lt;br /&gt;
=CDD Meeting, Wednesday, 13.02.2008, 10:00=&lt;br /&gt;
&lt;br /&gt;
== Next Meeting ==&lt;br /&gt;
* 15.02.2008; 14:15&lt;br /&gt;
&lt;br /&gt;
== Tasks ==&lt;br /&gt;
&lt;br /&gt;
===Andreas===&lt;br /&gt;
* Formulate Experiment Hypotheses (Andreas)&lt;br /&gt;
* Fix AutoTest for courses&lt;br /&gt;
** New release&lt;br /&gt;
* Write documentation and video tutorials (together with final release)&lt;br /&gt;
* Finish tuple_002 test case&lt;br /&gt;
* [delayed] Retest if test cases with errors are properly ignored (after 6.1 port)&lt;br /&gt;
* [done] Add timeout judgement&lt;br /&gt;
* Build releasable delivery for Linux (after each Beta I guess...)&lt;br /&gt;
&lt;br /&gt;
===Arno===&lt;br /&gt;
&lt;br /&gt;
====Bug Fixing====&lt;br /&gt;
* Check why EiffelStudio quits after debugging a test routine and ignoring violations (ask Jocelyn)&lt;br /&gt;
* CDD Output Tool window display is not saved (on initial start after installation it is there, but after closing and reopening it is gone)&lt;br /&gt;
&lt;br /&gt;
===Ilinca===&lt;br /&gt;
&lt;br /&gt;
* Integrate variable declarations into AutoTest trunk (by 8.2.2008)&lt;br /&gt;
&lt;br /&gt;
===Stefan===&lt;br /&gt;
* [RECURRENT] Build releasable delivery on Windows&lt;br /&gt;
* [RECURRENT, currently valid] Check log for XML validity&lt;br /&gt;
&lt;br /&gt;
* Clean up logging; add information that allows associating an extracted test case with its &amp;quot;extracted&amp;quot; message&lt;br /&gt;
** Log original exception (make it part of test routine's state)&lt;br /&gt;
** Logging all thrown exceptions (no matter whether extraction is enabled)&lt;br /&gt;
** Split log files&lt;br /&gt;
* [DONE] Refactor CDD Manager-Routine invocation cache-CDD Capturer&lt;br /&gt;
* Sometimes when a CDD breakpoint is hit no routine invocation is extracted. Find out why and fix it.&lt;br /&gt;
* [DONE] Fix problem where exe of target is non standard (Anders report)&lt;br /&gt;
* &amp;quot;Cleanup button&amp;quot; deletes filtered unresolved extracted test cases&lt;br /&gt;
&lt;br /&gt;
* Pressing the manual test case button may cause an error.&lt;br /&gt;
* Fix CDD BP bug (in some cases there is a visible, disabled user bp generated which DOES make the application stop!)&lt;br /&gt;
* When detecting if compilation succeeded, include &amp;quot;corrupted&amp;quot; as a keyword when parsing the output message (as in the &amp;quot;project corrupted, cannot continue&amp;quot; message)&lt;br /&gt;
* Revive system level test suite&lt;br /&gt;
* Fix config files for system level tests (remove cdd tag)&lt;br /&gt;
&lt;br /&gt;
* Make newly extracted test cases show up &amp;quot;expanded&amp;quot; in GUI treeview&lt;br /&gt;
* Only execute unresolved test cases once. Disable them afterwards. (Needs discussion)&lt;br /&gt;
* Make general encoding/decoding routines for special feature names&lt;br /&gt;
** e.g. infix &amp;quot;+&amp;quot; (ES) &amp;lt;=&amp;gt; infix_plus (CDD)&lt;br /&gt;
* Cache debug values when extracting several test cases.&lt;br /&gt;
* Allow for test case extraction of passing routine invocations (with Jocelyn)&lt;br /&gt;
* Rebuilding manual test suite through extraction and synthesizing&lt;br /&gt;
* Find performance bottleneck of test case extraction and propose extraction method for second chance&lt;br /&gt;
* When the runtime of the interpreter is severely damaged, a popup box shows up on Windows. The popup can be ignored and goes away. It does not come up when we shoot down the interpreter, but when a faulty instruction is executed. (E.g. the inv bug can trigger this dialog.)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Bugs/Things to look at====&lt;br /&gt;
&lt;br /&gt;
* For big projects (like ES itself) background compilation of the interpreter leads to completely unresponsive ES&lt;br /&gt;
* [NO EASY SOLUTION POSSIBLE] Crash upon closing of EiffelStudio (feature call on void target in breakpoint tool)&lt;br /&gt;
&lt;br /&gt;
===Manu===&lt;br /&gt;
* Install CDD in student labs (Manu)&lt;br /&gt;
* Devise questionnaires&lt;br /&gt;
** Initial (due next meeting after Manu's vacation)&lt;br /&gt;
** Midterm&lt;br /&gt;
** Final&lt;br /&gt;
* Analyze questionnaires&lt;br /&gt;
* Rework example profiles&lt;br /&gt;
* Assis will use CDD to get a feel for it and create a test suite for the students to start with&lt;br /&gt;
&lt;br /&gt;
=== Bernd ===&lt;br /&gt;
* Define Project for SoftEng&lt;br /&gt;
** Find test suite for us to test students' code&lt;br /&gt;
** Find project with pure functional part&lt;br /&gt;
&lt;br /&gt;
===Unassigned===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Beta Tester Feedback====&lt;br /&gt;
(Please put your name so we can get back to you in the case of questions)&lt;br /&gt;
* (Jocelyn) It should be possible to set the location of the cdd_tests directory (what if location of .ecf file is not readable?) [delayed]&lt;br /&gt;
** home directory? application_data directory?  [Jocelyn]&lt;br /&gt;
* (Jocelyn) There should be UI support for deletion of Test Case [difficult for test routines, possible for classes. documentation added.]&lt;br /&gt;
* (Jocelyn) [BUG] the manual test case creation dialog should check if class with chosen name is already in the system [delayed]&lt;br /&gt;
* (Jocelyn) It would be nice if there was a way to configure the timeout for the interpreter [Stefan will ask Jocelyn for concrete example]&lt;br /&gt;
&lt;br /&gt;
== Questionnaires ==&lt;br /&gt;
&lt;br /&gt;
* Use ELBA&lt;br /&gt;
&lt;br /&gt;
== Software Engineering Project ==&lt;br /&gt;
&lt;br /&gt;
* Task 1: Implement VCard API&lt;br /&gt;
* Task 2: Implement Mime API&lt;br /&gt;
* Task 3: Write test cases to reveal faults in foreign VCard implementations&lt;br /&gt;
* Task 4: Write test cases to reveal faults in foreign Mime implementations&lt;br /&gt;
&lt;br /&gt;
* Group A:&lt;br /&gt;
** Task 1, Manual Tests&lt;br /&gt;
** Task 2, Extracted Tests&lt;br /&gt;
** Task 3, Manual Tests&lt;br /&gt;
** Task 4, Extracted Tests&lt;br /&gt;
* Group B:&lt;br /&gt;
** Task 1, Extracted Tests&lt;br /&gt;
** Task 2, Manual Tests&lt;br /&gt;
** Task 3, Extracted Tests&lt;br /&gt;
** Task 4, Manual Tests&lt;br /&gt;
&lt;br /&gt;
* One large project, but divided into testable subcomponents&lt;br /&gt;
* Students required to write test cases&lt;br /&gt;
* Fixed API to make things uniformly testable&lt;br /&gt;
* Public/Secret test cases (similar to Zeller course)&lt;br /&gt;
* Competitions:&lt;br /&gt;
** Group A test cases applied to Group A project&lt;br /&gt;
** Group A test cases applied to Group B project&lt;br /&gt;
&lt;br /&gt;
* Idea how to cancel out bias while allowing fair grading:&lt;br /&gt;
** Subtasks 1 and 2, Students divided into groups A and B&lt;br /&gt;
** First both groups do 1, A is allowed to use tool, B not&lt;br /&gt;
** Then both groups do 2, B is allowed to use tool, A not&lt;br /&gt;
** Bias cancellation:&lt;br /&gt;
*** Project complexity&lt;br /&gt;
*** Experience of students&lt;br /&gt;
*** Experience gained in first subtask, when developing second&lt;br /&gt;
*** Risk: One task might be better suited for the tool than the other&lt;br /&gt;
&lt;br /&gt;
== Data to harvest ==&lt;br /&gt;
* IDE Time with CDD(extraction) enabled / IDE Time with CDD(extraction) disabled&lt;br /&gt;
* Test Case Source (just final version, or all versions?)&lt;br /&gt;
** Use Profiler to get coverage approximation&lt;br /&gt;
* TC Meta Data (with timestamps -&amp;gt; Evolution of Test Case)&lt;br /&gt;
** TC Added/Removed/Changed&lt;br /&gt;
** TC Outcome (transitions from FAIL/PASS/UNRESOLVED[bad_communication &amp;lt;-&amp;gt; does_not_compile &amp;lt;-&amp;gt; bad_input])&lt;br /&gt;
** TC execution time&lt;br /&gt;
** Modifications to a test case (compiler needs to recompile)&lt;br /&gt;
* Development Session Data&lt;br /&gt;
** IDE Startup&lt;br /&gt;
** File save&lt;br /&gt;
* Questionnaires&lt;br /&gt;
** Initial&lt;br /&gt;
** Final&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Logging ==&lt;br /&gt;
&lt;br /&gt;
* &amp;quot;Meta&amp;quot; log entries&lt;br /&gt;
** Project opened (easy)&lt;br /&gt;
** CDD enable/disable (easy)&lt;br /&gt;
** general EiffelStudio action log entries for Developer Behaviour (harder... what do we need??)&lt;br /&gt;
&lt;br /&gt;
* CDD actions log entries&lt;br /&gt;
** Compilation of interpreter (start, end, duration)&lt;br /&gt;
** Execution of test cases (start, end, do we need individual duration of each test cases that gets executed?)&lt;br /&gt;
** Extraction of new test case (extraction time)&lt;br /&gt;
&lt;br /&gt;
* Test Suite Status&lt;br /&gt;
** Test suite: after each refresh log list of all test cases (class level, needed because it's not possible to know when manual test cases get added...)&lt;br /&gt;
** Test class: (do we need info on this level)&lt;br /&gt;
** Test routine: status (basically as you see it in the tool)&lt;br /&gt;
&lt;br /&gt;
==Experiment Hypotheses==&lt;br /&gt;
&lt;br /&gt;
===Comparisons===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
! contracts&lt;br /&gt;
! man. tests&lt;br /&gt;
! playing&lt;br /&gt;
! extr.&lt;br /&gt;
! synth.&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
| A B&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A B&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| experiment&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| B&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A B&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
|-&lt;br /&gt;
| ISSTA&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| B&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| B&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===How reliably can we extract test cases that reproduce the original failure?===&lt;br /&gt;
* Log original exception and exception received from first test execution&lt;br /&gt;
&lt;br /&gt;
===Are the extracted tests useful for debugging?===&lt;br /&gt;
* Ask developers using CDD&lt;br /&gt;
&lt;br /&gt;
===What is the (time and memory) overhead of enabling extraction?===&lt;br /&gt;
&lt;br /&gt;
===What is the size of the extracted test cases?===&lt;br /&gt;
&lt;br /&gt;
===Are we able to reproduce bugs from industry?===&lt;br /&gt;
Take bug repository (say EiffelStudio, Gobo, eposix, ...). Use buggy version, try to reproduce bug, see if extracted test case is good.&lt;br /&gt;
&lt;br /&gt;
===Does it make a difference to code quality whether one writes tests manually or extracts them?===&lt;br /&gt;
* Compare projects using extracted tests and manual tests to ref test suite&lt;br /&gt;
&lt;br /&gt;
===Do contracts replace traditional testing oracles?===&lt;br /&gt;
* Original API without contracts&lt;br /&gt;
* Run failing test cases (the ones we get from second part) with reference API with contracts&lt;br /&gt;
* How many times does the contract replace the testing oracle?&lt;/div&gt;</summary>
		<author><name>Mogh</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=CddBranch&amp;diff=10632</id>
		<title>CddBranch</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=CddBranch&amp;diff=10632"/>
				<updated>2008-02-22T15:19:37Z</updated>
		
		<summary type="html">&lt;p&gt;Mogh: /* Download */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Testing]]&lt;br /&gt;
[[Category:EiffelDebugger]]&lt;br /&gt;
[[Category:CDD]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== What is the CDD extension for EiffelStudio? ==&lt;br /&gt;
CDD (short for Contract Driven Development) is a project developed at ETH Zurich. The extension adds full support for unit testing to EiffelStudio. It also introduces the new idea of extracting test cases automatically from failures observed via the debugger. The following lists the main features of CDD:&lt;br /&gt;
&lt;br /&gt;
* Automated extraction of test cases from failures (For every exception thrown a new test case is created)&lt;br /&gt;
* Visualization of test cases and their outcomes&lt;br /&gt;
* One button creation of manual test cases&lt;br /&gt;
* Automated execution of test cases in the background&lt;br /&gt;
* Limit visible test cases via predefined filters and custom tags&lt;br /&gt;
* Testing occurs in the background and does not disrupt the developer&lt;br /&gt;
* Easy test case management through tags&lt;br /&gt;
&lt;br /&gt;
If you have questions, feedback, or would like to report a bug, please visit the [http://eiffelstudio.origo.ethz.ch/forum/20 CDD forum].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:video_still.png|center]]&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;h3&amp;gt;&lt;br /&gt;
[http://se.ethz.ch/people/leitner/cdd/video Play Video!]&lt;br /&gt;
&amp;lt;/h3&amp;gt;&lt;br /&gt;
(TODO: Update with more recent screenshot and video)&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Download ==&lt;br /&gt;
&lt;br /&gt;
* Download CDD Final 1&lt;br /&gt;
** Linux: http://se.ethz.ch/people/leitner/cdd/Eiffel61_cdd_final_1.tar.bz2 (Download the above file and install it just like you would install an EiffelStudio tarball. Afterwards proceed to section &amp;quot;Using CDD&amp;quot;.)&lt;br /&gt;
** Windows&lt;br /&gt;
*** Installer: http://n.ethz.ch/~moris/download/Eiffel61cdd_gpl_final1-windows.msi (Note: Installation is independent of installations of official EiffelStudio 6.1)&lt;br /&gt;
*** Patch 1 for final 1 (fixes critical issue for windows): http://n.ethz.ch/~moris/download/EiffelStudio61cdd_final_1_patch_1.zip (replace existing ec.exe in &amp;lt;install_dir&amp;gt;\studio\spec\windows\bin with ec.exe found in archive. Delete EIFGENs in subdirectories of &amp;lt;install_dir&amp;gt;\precomp, since precompilations might not be compatible. Also build your projects from scratch by deleting their EIFGENs)&lt;br /&gt;
&lt;br /&gt;
== Documentation ==&lt;br /&gt;
&lt;br /&gt;
* [[Using CDD]]&lt;br /&gt;
* [[CDD Common Problems|Common Problems]]&lt;br /&gt;
&lt;br /&gt;
== Old Documentation == &lt;br /&gt;
&lt;br /&gt;
Documentation for the release of CDD for EiffelStudio version 5.7 is available from [[CddOldDocumentation]].&lt;br /&gt;
&lt;br /&gt;
= Project Internal Stuff =&lt;br /&gt;
&lt;br /&gt;
== Schedule ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* 04.01.2008: Final experiment definition (questions to ask, how to conduct experiment)&lt;br /&gt;
* 08.01.2008: Finalized list of features to go in release (including logging and log submission)&lt;br /&gt;
* 27.01.2008: Beta 1 (feature complete version online)&lt;br /&gt;
* 04.02.2008: Beta 2 (designated testers test release, # &amp;gt; 3)&lt;br /&gt;
* 11.02.2008: Beta tester feedback in&lt;br /&gt;
* 11.02.2008: Beta 3&lt;br /&gt;
* 18.02.2008: Final 1 release&lt;br /&gt;
* 19.02.2008: Initial Questionnaire&lt;br /&gt;
* 20.02.2008: Final 1 handover to ISG&lt;br /&gt;
* 22.02.2008: Final 1 installed on students machines&lt;br /&gt;
* 29.02.2008: Final 2 release&lt;br /&gt;
* 05.03.2008: Final 2 handover to ISG&lt;br /&gt;
* 07.03.2008: Final 2 installed on students machines&lt;br /&gt;
* 25.03.2008: Midterm Questionnaire&lt;br /&gt;
* 19.05.2008: Final Questionnaire&lt;br /&gt;
* 20.05.2008: Having all data&lt;br /&gt;
* 06.06.2008: Finished analysis&lt;br /&gt;
&lt;br /&gt;
== Stefan's Master Plan == &lt;br /&gt;
&lt;br /&gt;
* MA Start ca 17.12.2007&lt;br /&gt;
* MA End ca 17.6.2008&lt;br /&gt;
&lt;br /&gt;
* Testing the tester&lt;br /&gt;
** System level test for CDD (incl. framework)&lt;br /&gt;
** Recreating existing unit test suite with CDD&lt;br /&gt;
** Large scale validation of CDD&lt;br /&gt;
*** Info 4 and/or Software Engineering&lt;br /&gt;
*** Questions&lt;br /&gt;
**** Does testing (manual/extracted) increase developer productivity?&lt;br /&gt;
**** How many tests do people end up with (manual/extracted)?&lt;br /&gt;
**** ...&lt;br /&gt;
&lt;br /&gt;
== Various ==&lt;br /&gt;
* Regression testing: [[CddRegressionTesting]]&lt;br /&gt;
* TreeView Specification: [[CddTreeViewSpec]]&lt;br /&gt;
* [[CDDHowtoRollARelease]]&lt;br /&gt;
&lt;br /&gt;
=== Things we need from estudio ===&lt;br /&gt;
* Invariants should be checked during debugging in the same way as pre- and postconditions (they could also be visualised in the flat view the same way pre- and postconditions are)&lt;br /&gt;
* The information whether some call is a creation call or a normal routine call (Not sure if this is really necessary, what if we assume every call to some creation procedure is always a creation call?)&lt;br /&gt;
* Support for multiple open targets&lt;/div&gt;</summary>
		<author><name>Mogh</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=CddBranch&amp;diff=10631</id>
		<title>CddBranch</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=CddBranch&amp;diff=10631"/>
				<updated>2008-02-22T15:18:16Z</updated>
		
		<summary type="html">&lt;p&gt;Mogh: /* Download */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Testing]]&lt;br /&gt;
[[Category:EiffelDebugger]]&lt;br /&gt;
[[Category:CDD]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== What is the CDD extension for EiffelStudio? ==&lt;br /&gt;
CDD (short for Contract Driven Development) is a project developed at ETH Zurich. The extension adds full support for unit testing to EiffelStudio. It also introduces the new idea of extracting test cases automatically from failures observed via the debugger. The following lists the main features of CDD:&lt;br /&gt;
&lt;br /&gt;
* Automated extraction of test cases from failures (For every exception thrown a new test case is created)&lt;br /&gt;
* Visualization of test cases and their outcomes&lt;br /&gt;
* One button creation of manual test cases&lt;br /&gt;
* Automated execution of test cases in the background&lt;br /&gt;
* Limit visible test cases via predefined filters and custom tags&lt;br /&gt;
* Testing occurs in the background and does not disrupt the developer&lt;br /&gt;
* Easy test case management through tags&lt;br /&gt;
&lt;br /&gt;
If you have questions, feedback, or would like to report a bug, please visit the [http://eiffelstudio.origo.ethz.ch/forum/20 CDD forum].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:video_still.png|center]]&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;h3&amp;gt;&lt;br /&gt;
[http://se.ethz.ch/people/leitner/cdd/video Play Video!]&lt;br /&gt;
&amp;lt;/h3&amp;gt;&lt;br /&gt;
(TODO: Update with more recent screenshot and video)&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Download ==&lt;br /&gt;
&lt;br /&gt;
* Download CDD Final 1&lt;br /&gt;
** Linux: http://se.ethz.ch/people/leitner/cdd/Eiffel61_cdd_final_1.tar.bz2 (Download the above file and install it just like you would install an EiffelStudio tarball. Afterwards proceed to section &amp;quot;Using CDD&amp;quot;.)&lt;br /&gt;
** Windows&lt;br /&gt;
*** Installer: http://n.ethz.ch/~moris/download/Eiffel61cdd_gpl_final1-windows.msi (Note: Installation is independent of installations of official EiffelStudio 6.1)&lt;br /&gt;
*** Patch 1 for final 1 (fixes critical issue for windows): http://n.ethz.ch/~moris/download/EiffelStudio61cdd_final_1_patch_1.zip (replace existing ec.exe in &amp;lt;install_dir&amp;gt;\studio\spec\windows\bin with ec.exe found in archive. Delete EIFGENs in subdirectories of &amp;lt;install_dir&amp;gt;\precomp, since precompilations might not be compatible. Also build your projects from scratch)&lt;br /&gt;
&lt;br /&gt;
== Documentation ==&lt;br /&gt;
&lt;br /&gt;
* [[Using CDD]]&lt;br /&gt;
* [[CDD Common Problems|Common Problems]]&lt;br /&gt;
&lt;br /&gt;
== Old Documentation == &lt;br /&gt;
&lt;br /&gt;
Documentation for the release of CDD for EiffelStudio version 5.7 is available from [[CddOldDocumentation]].&lt;br /&gt;
&lt;br /&gt;
= Project Internal Stuff =&lt;br /&gt;
&lt;br /&gt;
== Schedule ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* 04.01.2008: Final experiment definition (questions to ask, how to conduct experiment)&lt;br /&gt;
* 08.01.2008: Finalized list of features to go in release (including logging and log submission)&lt;br /&gt;
* 27.01.2008: Beta 1 (feature complete version online)&lt;br /&gt;
* 04.02.2008: Beta 2 (designated testers test release, # &amp;gt; 3)&lt;br /&gt;
* 11.02.2008: Beta tester feedback in&lt;br /&gt;
* 11.02.2008: Beta 3&lt;br /&gt;
* 18.02.2008: Final 1 release&lt;br /&gt;
* 19.02.2008: Initial Questionnaire&lt;br /&gt;
* 20.02.2008: Final 1 handover to ISG&lt;br /&gt;
* 22.02.2008: Final 1 installed on students machines&lt;br /&gt;
* 29.02.2008: Final 2 release&lt;br /&gt;
* 05.03.2008: Final 2 handover to ISG&lt;br /&gt;
* 07.03.2008: Final 2 installed on students machines&lt;br /&gt;
* 25.03.2008: Midterm Questionnaire&lt;br /&gt;
* 19.05.2008: Final Questionnaire&lt;br /&gt;
* 20.05.2008: Having all data&lt;br /&gt;
* 06.06.2008: Finished analysis&lt;br /&gt;
&lt;br /&gt;
== Stefan's Master Plan == &lt;br /&gt;
&lt;br /&gt;
* MA Start ca 17.12.2007&lt;br /&gt;
* MA End ca 17.6.2008&lt;br /&gt;
&lt;br /&gt;
* Testing the tester&lt;br /&gt;
** System level test for CDD (incl. framework)&lt;br /&gt;
** Recreating existing unit test suite with CDD&lt;br /&gt;
** Large scale validation of CDD&lt;br /&gt;
*** Info 4 and/or Software Engineering&lt;br /&gt;
*** Questions&lt;br /&gt;
**** Does testing (manual/extracted) increase developer productivity?&lt;br /&gt;
**** How many tests do people end up with (manual/extracted)?&lt;br /&gt;
**** ...&lt;br /&gt;
&lt;br /&gt;
== Various ==&lt;br /&gt;
* Regression testing: [[CddRegressionTesting]]&lt;br /&gt;
* TreeView Specification: [[CddTreeViewSpec]]&lt;br /&gt;
* [[CDDHowtoRollARelease]]&lt;br /&gt;
&lt;br /&gt;
=== Things we need from estudio ===&lt;br /&gt;
* Invariants should be checked during debugging in the same way as pre- and postconditions (they could also be visualised in the flat view the same way pre- and postconditions are)&lt;br /&gt;
* The information whether a call is a creation call or a normal routine call (not sure if this is really necessary; what if we assume every call to a creation procedure is always a creation call?)&lt;br /&gt;
* Support for multiple open targets&lt;/div&gt;</summary>
		<author><name>Mogh</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=CddMeeting_13_02_2008&amp;diff=10628</id>
		<title>CddMeeting 13 02 2008</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=CddMeeting_13_02_2008&amp;diff=10628"/>
				<updated>2008-02-20T13:51:18Z</updated>
		
		<summary type="html">&lt;p&gt;Mogh: /* Stefan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:CDD]]&lt;br /&gt;
=CDD Meeting, Wednesday, 14.02.2008, 10:00=&lt;br /&gt;
&lt;br /&gt;
== Next Meeting ==&lt;br /&gt;
* 15.02.2008; 14:15&lt;br /&gt;
&lt;br /&gt;
== Tasks ==&lt;br /&gt;
&lt;br /&gt;
===Andreas===&lt;br /&gt;
* Formulate Experiment Hypotheses (Andreas)&lt;br /&gt;
* Fix AutoTest for courses&lt;br /&gt;
** New release&lt;br /&gt;
* Write documentation and video tutorials (together with final release)&lt;br /&gt;
* Finish tuple_002 test case&lt;br /&gt;
* [delayed] Retest if test cases with errors are properly ignored (after 6.1 port)&lt;br /&gt;
* [done] Add timeout judgement&lt;br /&gt;
&lt;br /&gt;
===Arno===&lt;br /&gt;
* Build releasable delivery for Linux (after each Beta I guess...)&lt;br /&gt;
* Make general encoding/decoding routines for special feature names&lt;br /&gt;
** e.g. infix &amp;quot;+&amp;quot; (ES) &amp;lt;=&amp;gt; infix_plus (CDD)&lt;br /&gt;
* Make newly extracted test cases show up &amp;quot;expanded&amp;quot; in GUI treeview&lt;br /&gt;
* &amp;quot;Cleanup button&amp;quot; deletes filtered unresolved extracted test cases&lt;br /&gt;
====Bug Fixing====&lt;br /&gt;
* Check why EiffelStudio quits after debugging a test routine and ignoring violations (ask Jocelyn)&lt;br /&gt;
* CDD Output Tool window display is not saved (on initial start after installation it is there, but after closing and reopening it is gone)&lt;br /&gt;
&lt;br /&gt;
===Ilinca===&lt;br /&gt;
&lt;br /&gt;
* Integrate variable declarations into AutoTest trunk (by 8.2.2008)&lt;br /&gt;
&lt;br /&gt;
===Stefan===&lt;br /&gt;
* [RECURRENT] Build releasable delivery on Windows&lt;br /&gt;
* [RECURRENT, currently valid] Check log for XML validity&lt;br /&gt;
* [DONE, temporary solution] Log original exception (make it part of test routine's state)&lt;br /&gt;
* Clean up logging; add information that allows associating an extracted test case with its &amp;quot;extracted&amp;quot; message&lt;br /&gt;
* Fix CDD BP bug (in some cases there is a visible, disabled user bp generated which DOES make the application stop!)&lt;br /&gt;
* Fix indentation bug in test class printer (not properly reset)&lt;br /&gt;
* When detecting whether compilation succeeded, include &amp;quot;corrupted&amp;quot; as a keyword when parsing the output message (as in the &amp;quot;project corrupted, cannot continue&amp;quot; message)&lt;br /&gt;
* Refactor CDD Manager-Routine invocation cache-CDD Capturer&lt;br /&gt;
* Second Chance re-run to find true prestate (with Jocelyn)&lt;br /&gt;
* Allow for test case extraction of passing routine invocations (with Jocelyn)&lt;br /&gt;
* Revive system level test suite&lt;br /&gt;
* Fix config files for system level tests (remove cdd tag)&lt;br /&gt;
&lt;br /&gt;
* Rebuilding manual test suite through extraction and synthesizing&lt;br /&gt;
* Find performance bottleneck of test case extraction and propose extraction method for second chance&lt;br /&gt;
&lt;br /&gt;
====Bugs/Things to look at====&lt;br /&gt;
&lt;br /&gt;
* For big projects (like ES itself) background compilation of the interpreter leads to completely unresponsive ES&lt;br /&gt;
* [NO EASY SOLUTION POSSIBLE] Crash upon closing of EiffelStudio (feature call on void target in breakpoint tool)&lt;br /&gt;
&lt;br /&gt;
===Manu===&lt;br /&gt;
* Install CDD in student labs (Manu)&lt;br /&gt;
* Devise questionnaires&lt;br /&gt;
** Initial (due next meeting after Manu's vacation)&lt;br /&gt;
** Midterm&lt;br /&gt;
** Final&lt;br /&gt;
* Analyze questionnaires&lt;br /&gt;
* Rework example profiles&lt;br /&gt;
* Assis will use CDD to get a feel for it and create a test suite for the students to start with&lt;br /&gt;
&lt;br /&gt;
=== Bernd ===&lt;br /&gt;
* Define Project for SoftEng&lt;br /&gt;
** Find test suite for us to test students code&lt;br /&gt;
** Find project with pure functional part&lt;br /&gt;
&lt;br /&gt;
===Unassigned===&lt;br /&gt;
&lt;br /&gt;
* Only execute unresolved test cases once. Disable them afterwards. (Needs discussion)&lt;br /&gt;
* Cache debug values when extracting several test cases.&lt;br /&gt;
* Aborting the C compilation of the interpreter may leave the interpreter EIFGEN in an invalid state.&lt;br /&gt;
&lt;br /&gt;
====Beta Tester Feedback====&lt;br /&gt;
(Please put your name so we can get back to you in the case of questions)&lt;br /&gt;
* (Jocelyn) It should be possible to set the location of the cdd_tests directory (what if location of .ecf file is not readable?) [delayed]&lt;br /&gt;
** home directory? application_data directory?  [Jocelyn]&lt;br /&gt;
* (Jocelyn) There should be UI support for deletion of Test Case [difficult for test routines, possible for classes. documentation added.]&lt;br /&gt;
* (Jocelyn) [BUG] the manual test case creation dialog should check if class with chosen name is already in the system [delayed]&lt;br /&gt;
* (Jocelyn) It would be nice if there was a way to configure the timeout for the interpreter [Stefan will ask Jocelyn for concrete example]&lt;br /&gt;
&lt;br /&gt;
== Questionnaires ==&lt;br /&gt;
&lt;br /&gt;
* Use ELBA&lt;br /&gt;
&lt;br /&gt;
== Software Engineering Project ==&lt;br /&gt;
&lt;br /&gt;
* Task 1: Implement VCard API&lt;br /&gt;
* Task 2: Implement Mime API&lt;br /&gt;
* Task 3: Write test cases to reveal faults in foreign VCard implementations&lt;br /&gt;
* Task 4: Write test cases to reveal faults in foreign Mime implementations&lt;br /&gt;
&lt;br /&gt;
* Group A:&lt;br /&gt;
** Task 1, Manual Tests&lt;br /&gt;
** Task 2, Extracted Tests&lt;br /&gt;
** Task 3, Manual Tests&lt;br /&gt;
** Task 4, Extracted Tests&lt;br /&gt;
* Group B:&lt;br /&gt;
** Task 1, Extracted Tests&lt;br /&gt;
** Task 2, Manual Tests&lt;br /&gt;
** Task 3, Extracted Tests&lt;br /&gt;
** Task 4, Manual Tests&lt;br /&gt;
&lt;br /&gt;
* One large project, but divided into testable subcomponents&lt;br /&gt;
* Students required to write test cases&lt;br /&gt;
* Fixed API to make things uniformly testable&lt;br /&gt;
* Public/Secret test cases (similar to Zeller course)&lt;br /&gt;
* Competitions:&lt;br /&gt;
** Group A test cases applied to Group A project&lt;br /&gt;
** Group A test cases applied to Group B project&lt;br /&gt;
&lt;br /&gt;
* Idea how to cancel out bias while allowing fair grading:&lt;br /&gt;
** Subtasks 1 and 2, Students divided into groups A and B&lt;br /&gt;
** First both groups do 1, A is allowed to use tool, B not&lt;br /&gt;
** Then both groups do 2, B is allowed to use tool, A not&lt;br /&gt;
** Bias cancellation:&lt;br /&gt;
*** Project complexity&lt;br /&gt;
*** Experience of students&lt;br /&gt;
*** Experience gained in first subtask, when developing second&lt;br /&gt;
*** Risk: One task might be better suited for the tool than the other&lt;br /&gt;
&lt;br /&gt;
== Data to harvest ==&lt;br /&gt;
* IDE Time with CDD(extraction) enabled / IDE Time with CDD(extraction) disabled&lt;br /&gt;
* Test Case Source (just final version, or all versions?)&lt;br /&gt;
** Use Profiler to get coverage approximation&lt;br /&gt;
* TC Meta Data (with timestamps -&amp;gt; Evolution of Test Case)&lt;br /&gt;
** TC Added/Removed/Changed&lt;br /&gt;
** TC Outcome (transitions from FAIL/PASS/UNRESOLVED[bad_communication &amp;lt;-&amp;gt; does_not_compile &amp;lt;-&amp;gt; bad_input])&lt;br /&gt;
** TC execution time&lt;br /&gt;
** Modifications to a test case (compiler needs to recompile)&lt;br /&gt;
* Development Session Data&lt;br /&gt;
** IDE Startup&lt;br /&gt;
** File save&lt;br /&gt;
* Questionnaires&lt;br /&gt;
** Initial&lt;br /&gt;
** Final&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Logging ==&lt;br /&gt;
&lt;br /&gt;
* &amp;quot;Meta&amp;quot; log entries&lt;br /&gt;
** Project opened (easy)&lt;br /&gt;
** CDD enable/disable (easy)&lt;br /&gt;
** general EiffelStudio action log entries for Developer Behaviour (harder... what do we need??)&lt;br /&gt;
&lt;br /&gt;
* CDD actions log entries&lt;br /&gt;
** Compilation of interpreter (start, end, duration)&lt;br /&gt;
** Execution of test cases (start, end, do we need individual duration of each test cases that gets executed?)&lt;br /&gt;
** Extraction of new test case (extraction time)&lt;br /&gt;
&lt;br /&gt;
* Test Suite Status&lt;br /&gt;
** Test suite: after each refresh log list of all test cases (class level, needed because it's not possible to know when manual test cases get added...)&lt;br /&gt;
** Test class: (do we need info on this level)&lt;br /&gt;
** Test routine: status (basically as you see it in the tool)&lt;br /&gt;
&lt;br /&gt;
==Experiment Hypotheses==&lt;br /&gt;
&lt;br /&gt;
===Comparisons===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
! contracts&lt;br /&gt;
! man. tests&lt;br /&gt;
! playing&lt;br /&gt;
! extr.&lt;br /&gt;
! synth.&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
| A B&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A B&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| experiment&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| B&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A B&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
|-&lt;br /&gt;
| ISSTA&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| B&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| B&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===How reliably can we extract test cases that reproduce the original failure?===&lt;br /&gt;
* Log original exception and exception received from first test execution&lt;br /&gt;
&lt;br /&gt;
===Are the extracted tests useful for debugging?===&lt;br /&gt;
* Ask developers using CDD&lt;br /&gt;
&lt;br /&gt;
===What is the (time and memory) overhead of enabling extraction?===&lt;br /&gt;
&lt;br /&gt;
===What is the size of the extracted test cases?===&lt;br /&gt;
&lt;br /&gt;
===Are we able to reproduce bugs from industry?===&lt;br /&gt;
Take a bug repository (say EiffelStudio, Gobo, eposix, ...). Use a buggy version, try to reproduce the bug, and see whether the extracted test case is good.&lt;br /&gt;
&lt;br /&gt;
===Does it make a difference to the quality of the code whether one writes tests manually or extracts them?===&lt;br /&gt;
* Compare projects using extracted tests and manual tests to ref test suite&lt;br /&gt;
&lt;br /&gt;
===Do contracts replace traditional testing oracles?===&lt;br /&gt;
* Original API without contracts&lt;br /&gt;
* Run failing test cases (the ones we get from second part) with reference API with contracts&lt;br /&gt;
* How many times does the contract replace the testing oracle?&lt;/div&gt;</summary>
		<author><name>Mogh</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=CddMeeting_13_02_2008&amp;diff=10623</id>
		<title>CddMeeting 13 02 2008</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=CddMeeting_13_02_2008&amp;diff=10623"/>
				<updated>2008-02-20T09:31:55Z</updated>
		
		<summary type="html">&lt;p&gt;Mogh: /* Stefan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:CDD]]&lt;br /&gt;
=CDD Meeting, Wednesday, 14.02.2008, 10:00=&lt;br /&gt;
&lt;br /&gt;
== Next Meeting ==&lt;br /&gt;
* 15.02.2008; 14:15&lt;br /&gt;
&lt;br /&gt;
== Tasks ==&lt;br /&gt;
&lt;br /&gt;
===Andreas===&lt;br /&gt;
* Formulate Experiment Hypotheses (Andreas)&lt;br /&gt;
* Fix AutoTest for courses&lt;br /&gt;
** New release&lt;br /&gt;
* Write documentation and video tutorials (together with final release)&lt;br /&gt;
* Finish tuple_002 test case&lt;br /&gt;
* [delayed] Retest if test cases with errors are properly ignored (after 6.1 port)&lt;br /&gt;
* [done] Add timeout judgement&lt;br /&gt;
&lt;br /&gt;
===Arno===&lt;br /&gt;
* Build releasable delivery for Linux (after each Beta I guess...)&lt;br /&gt;
* Make general encoding/decoding routines for special feature names&lt;br /&gt;
** e.g. infix &amp;quot;+&amp;quot; (ES) &amp;lt;=&amp;gt; infix_plus (CDD)&lt;br /&gt;
* Make newly extracted test cases show up &amp;quot;expanded&amp;quot; in GUI treeview&lt;br /&gt;
* &amp;quot;Cleanup button&amp;quot; deletes filtered unresolved extracted test cases&lt;br /&gt;
====Bug Fixing====&lt;br /&gt;
* Check why EiffelStudio quits after debugging a test routine and ignoring violations (ask Jocelyn)&lt;br /&gt;
* CDD Output Tool window display is not saved (on initial start after installation it is there, but after closing and reopening it is gone)&lt;br /&gt;
&lt;br /&gt;
===Ilinca===&lt;br /&gt;
&lt;br /&gt;
* Integrate variable declarations into AutoTest trunk (by 8.2.2008)&lt;br /&gt;
&lt;br /&gt;
===Stefan===&lt;br /&gt;
* [RECURRENT] Build releasable delivery on Windows&lt;br /&gt;
* [RECURRENT, currently valid] Check log for XML validity&lt;br /&gt;
* [DONE, temporary solution] Log original exception (make it part of test routine's state)&lt;br /&gt;
* Clean up logging; add information that allows associating an extracted test case with its &amp;quot;extracted&amp;quot; message&lt;br /&gt;
* Fix CDD BP bug (in some cases there is a visible, disabled user bp generated which DOES make the application stop!)&lt;br /&gt;
* Refactor CDD Manager-Routine invocation cache-CDD Capturer&lt;br /&gt;
* Second Chance re-run to find true prestate (with Jocelyn)&lt;br /&gt;
* Allow for test case extraction of passing routine invocations (with Jocelyn)&lt;br /&gt;
* Revive system level test suite&lt;br /&gt;
* Fix config files for system level tests (remove cdd tag)&lt;br /&gt;
&lt;br /&gt;
* Rebuilding manual test suite through extraction and synthesizing&lt;br /&gt;
* Find performance bottleneck of test case extraction and propose extraction method for second chance&lt;br /&gt;
&lt;br /&gt;
====Bugs/Things to look at====&lt;br /&gt;
&lt;br /&gt;
* For big projects (like ES itself) background compilation of the interpreter leads to completely unresponsive ES&lt;br /&gt;
* [NO EASY SOLUTION POSSIBLE] Crash upon closing of EiffelStudio (feature call on void target in breakpoint tool)&lt;br /&gt;
&lt;br /&gt;
===Manu===&lt;br /&gt;
* Install CDD in student labs (Manu)&lt;br /&gt;
* Devise questionnaires&lt;br /&gt;
** Initial (due next meeting after Manu's vacation)&lt;br /&gt;
** Midterm&lt;br /&gt;
** Final&lt;br /&gt;
* Analyze questionnaires&lt;br /&gt;
* Rework example profiles&lt;br /&gt;
* Assis will use CDD to get a feel for it and create a test suite for the students to start with&lt;br /&gt;
&lt;br /&gt;
=== Bernd ===&lt;br /&gt;
* Define Project for SoftEng&lt;br /&gt;
** Find test suite for us to test students code&lt;br /&gt;
** Find project with pure functional part&lt;br /&gt;
&lt;br /&gt;
===Unassigned===&lt;br /&gt;
&lt;br /&gt;
* Only execute unresolved test cases once. Disable them afterwards. (Needs discussion)&lt;br /&gt;
* Cache debug values when extracting several test cases.&lt;br /&gt;
* Aborting the C compilation of the interpreter may leave the interpreter EIFGEN in an invalid state.&lt;br /&gt;
&lt;br /&gt;
====Beta Tester Feedback====&lt;br /&gt;
(Please put your name so we can get back to you in the case of questions)&lt;br /&gt;
* (Jocelyn) It should be possible to set the location of the cdd_tests directory (what if location of .ecf file is not readable?) [delayed]&lt;br /&gt;
** home directory? application_data directory?  [Jocelyn]&lt;br /&gt;
* (Jocelyn) There should be UI support for deletion of Test Case [difficult for test routines, possible for classes. documentation added.]&lt;br /&gt;
* (Jocelyn) [BUG] the manual test case creation dialog should check if class with chosen name is already in the system [delayed]&lt;br /&gt;
* (Jocelyn) It would be nice if there was a way to configure the timeout for the interpreter [Stefan will ask Jocelyn for concrete example]&lt;br /&gt;
&lt;br /&gt;
== Questionnaires ==&lt;br /&gt;
&lt;br /&gt;
* Use ELBA&lt;br /&gt;
&lt;br /&gt;
== Software Engineering Project ==&lt;br /&gt;
&lt;br /&gt;
* Task 1: Implement VCard API&lt;br /&gt;
* Task 2: Implement Mime API&lt;br /&gt;
* Task 3: Write test cases to reveal faults in foreign VCard implementations&lt;br /&gt;
* Task 4: Write test cases to reveal faults in foreign Mime implementations&lt;br /&gt;
&lt;br /&gt;
* Group A:&lt;br /&gt;
** Task 1, Manual Tests&lt;br /&gt;
** Task 2, Extracted Tests&lt;br /&gt;
** Task 3, Manual Tests&lt;br /&gt;
** Task 4, Extracted Tests&lt;br /&gt;
* Group B:&lt;br /&gt;
** Task 1, Extracted Tests&lt;br /&gt;
** Task 2, Manual Tests&lt;br /&gt;
** Task 3, Extracted Tests&lt;br /&gt;
** Task 4, Manual Tests&lt;br /&gt;
&lt;br /&gt;
* One large project, but divided into testable subcomponents&lt;br /&gt;
* Students required to write test cases&lt;br /&gt;
* Fixed API to make things uniformly testable&lt;br /&gt;
* Public/Secret test cases (similar to Zeller course)&lt;br /&gt;
* Competitions:&lt;br /&gt;
** Group A test cases applied to Group A project&lt;br /&gt;
** Group A test cases applied to Group B project&lt;br /&gt;
&lt;br /&gt;
* Idea how to cancel out bias while allowing fair grading:&lt;br /&gt;
** Subtasks 1 and 2, Students divided into groups A and B&lt;br /&gt;
** First both groups do 1, A is allowed to use tool, B not&lt;br /&gt;
** Then both groups do 2, B is allowed to use tool, A not&lt;br /&gt;
** Bias cancellation:&lt;br /&gt;
*** Project complexity&lt;br /&gt;
*** Experience of students&lt;br /&gt;
*** Experience gained in first subtask, when developing second&lt;br /&gt;
*** Risk: One task might be better suited for the tool than the other&lt;br /&gt;
&lt;br /&gt;
== Data to harvest ==&lt;br /&gt;
* IDE Time with CDD(extraction) enabled / IDE Time with CDD(extraction) disabled&lt;br /&gt;
* Test Case Source (just final version, or all versions?)&lt;br /&gt;
** Use Profiler to get coverage approximation&lt;br /&gt;
* TC Meta Data (with timestamps -&amp;gt; Evolution of Test Case)&lt;br /&gt;
** TC Added/Removed/Changed&lt;br /&gt;
** TC Outcome (transitions from FAIL/PASS/UNRESOLVED[bad_communication &amp;lt;-&amp;gt; does_not_compile &amp;lt;-&amp;gt; bad_input])&lt;br /&gt;
** TC execution time&lt;br /&gt;
** Modifications to a test case (compiler needs to recompile)&lt;br /&gt;
* Development Session Data&lt;br /&gt;
** IDE Startup&lt;br /&gt;
** File save&lt;br /&gt;
* Questionnaires&lt;br /&gt;
** Initial&lt;br /&gt;
** Final&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Logging ==&lt;br /&gt;
&lt;br /&gt;
* &amp;quot;Meta&amp;quot; log entries&lt;br /&gt;
** Project opened (easy)&lt;br /&gt;
** CDD enable/disable (easy)&lt;br /&gt;
** general EiffelStudio action log entries for Developer Behaviour (harder... what do we need??)&lt;br /&gt;
&lt;br /&gt;
* CDD actions log entries&lt;br /&gt;
** Compilation of interpreter (start, end, duration)&lt;br /&gt;
** Execution of test cases (start, end, do we need individual duration of each test cases that gets executed?)&lt;br /&gt;
** Extraction of new test case (extraction time)&lt;br /&gt;
&lt;br /&gt;
* Test Suite Status&lt;br /&gt;
** Test suite: after each refresh log list of all test cases (class level, needed because it's not possible to know when manual test cases get added...)&lt;br /&gt;
** Test class: (do we need info on this level)&lt;br /&gt;
** Test routine: status (basically as you see it in the tool)&lt;br /&gt;
&lt;br /&gt;
==Experiment Hypotheses==&lt;br /&gt;
&lt;br /&gt;
===Comparisons===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
! contracts&lt;br /&gt;
! man. tests&lt;br /&gt;
! playing&lt;br /&gt;
! extr.&lt;br /&gt;
! synth.&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
| A B&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A B&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| experiment&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| B&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| A B&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| B&lt;br /&gt;
|-&lt;br /&gt;
| ISSTA&lt;br /&gt;
| A B&lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| B&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
| &lt;br /&gt;
| A&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| B&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===How reliably can we extract test cases that reproduce the original failure?===&lt;br /&gt;
* Log original exception and exception received from first test execution&lt;br /&gt;
&lt;br /&gt;
===Are the extracted tests useful for debugging?===&lt;br /&gt;
* Ask developers using CDD&lt;br /&gt;
&lt;br /&gt;
===What is the (time and memory) overhead of enabling extraction?===&lt;br /&gt;
&lt;br /&gt;
===What is the size of the extracted test cases?===&lt;br /&gt;
&lt;br /&gt;
===Are we able to reproduce bugs from industry?===&lt;br /&gt;
Take a bug repository (say EiffelStudio, Gobo, eposix, ...). Use a buggy version, try to reproduce the bug, and see whether the extracted test case is good.&lt;br /&gt;
&lt;br /&gt;
===Does it make a difference to the quality of the code whether one writes tests manually or extracts them?===&lt;br /&gt;
* Compare projects using extracted tests and manual tests to ref test suite&lt;br /&gt;
&lt;br /&gt;
===Do contracts replace traditional testing oracles?===&lt;br /&gt;
* Original API without contracts&lt;br /&gt;
* Run failing test cases (the ones we get from second part) with reference API with contracts&lt;br /&gt;
* How many times does the contract replace the testing oracle?&lt;/div&gt;</summary>
		<author><name>Mogh</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=CddBranch&amp;diff=10621</id>
		<title>CddBranch</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=CddBranch&amp;diff=10621"/>
				<updated>2008-02-18T21:11:11Z</updated>
		
		<summary type="html">&lt;p&gt;Mogh: /* Download */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Testing]]&lt;br /&gt;
[[Category:EiffelDebugger]]&lt;br /&gt;
[[Category:CDD]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== What is the CDD extension for EiffelStudio? ==&lt;br /&gt;
CDD (short for Contract Driven Development) is a project developed at ETH Zurich. The extension adds full support for unit testing to EiffelStudio. It also introduces the new idea of extracting test cases automatically from failures observed via the debugger. The following lists the main features of CDD:&lt;br /&gt;
&lt;br /&gt;
* Automated extraction of test cases from failures (For every exception thrown a new test case is created)&lt;br /&gt;
* Visualization of test cases and their outcomes&lt;br /&gt;
* One button creation of manual test cases&lt;br /&gt;
* Automated execution of test cases in the background&lt;br /&gt;
* Limit visible test cases via predefined filters and custom tags&lt;br /&gt;
* Testing occurs in the background and does not disrupt the developer&lt;br /&gt;
* Easy test case management through tags&lt;br /&gt;
&lt;br /&gt;
If you have questions, feedback, or would like to report a bug, please visit the [http://eiffelstudio.origo.ethz.ch/forum/20 CDD forum].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:video_still.png|center]]&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;h3&amp;gt;&lt;br /&gt;
[http://se.ethz.ch/people/leitner/cdd/video Play Video!]&lt;br /&gt;
&amp;lt;/h3&amp;gt;&lt;br /&gt;
(TODO: Update with more recent screenshot and video)&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Download ==&lt;br /&gt;
&lt;br /&gt;
* Download CDD Beta 3&lt;br /&gt;
** Linux: http://se.ethz.ch/people/leitner/cdd/Eiffel61_cdd_beta_3.tar.gz (Download the above file and install it just like you would install an EiffelStudio tar ball. Afterwards proceed to the section &amp;quot;Using CDD&amp;quot;.)&lt;br /&gt;
** Windows&lt;br /&gt;
*** Installer: http://n.ethz.ch/~moris/download/Eiffel61cdd_gpl_final1-windows.msi (Note: Installation is independent of installations of official EiffelStudio 6.1)&lt;br /&gt;
*** Patch: Not available for Final 1. (The CDD Edition of EiffelStudio no longer overwrites installations of official EiffelStudio 6.1, so it needs its own registry entries, which are set by the installer. For future releases a patch might again be available for patching installations produced by the installer above.)&lt;br /&gt;
&lt;br /&gt;
== Documentation ==&lt;br /&gt;
&lt;br /&gt;
* [[Using CDD]]&lt;br /&gt;
* [[CDD Common Problems|Common Problems]]&lt;br /&gt;
&lt;br /&gt;
== Old Documentation == &lt;br /&gt;
&lt;br /&gt;
Documentation for the release of CDD for EiffelStudio version 5.7 is available from [[CddOldDocumentation]].&lt;br /&gt;
&lt;br /&gt;
= Project Internal Stuff =&lt;br /&gt;
&lt;br /&gt;
== Schedule ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* 04.01.2008: Final experiment definition (questions to ask, how to conduct experiment)&lt;br /&gt;
* 08.01.2008: Finalized list of features to go in release (including logging and log submission)&lt;br /&gt;
* 27.01.2008: Beta 1 (feature complete version online)&lt;br /&gt;
* 04.02.2008: Beta 2 (designated testers test release, # &amp;gt; 3)&lt;br /&gt;
* 11.02.2008: Beta tester feedback in&lt;br /&gt;
* 11.02.2008: Beta 3&lt;br /&gt;
* 18.02.2008: Final 1 release&lt;br /&gt;
* 19.02.2008: Initial Questionnaire&lt;br /&gt;
* 22.02.2008: Final 1 handover to ISG&lt;br /&gt;
* 22.02.2008: Final 1 installed on students machines&lt;br /&gt;
* 29.02.2008: Final 2 release&lt;br /&gt;
* 05.03.2008: Final 2 handover to ISG&lt;br /&gt;
* 07.03.2008: Final 2 installed on students machines&lt;br /&gt;
* 25.03.2008: Midterm Questionnaire&lt;br /&gt;
* 19.05.2008: Final Questionnaire&lt;br /&gt;
* 20.05.2008: Having all data&lt;br /&gt;
* 06.06.2008: Finished analysis&lt;br /&gt;
&lt;br /&gt;
== Stefan's Master Plan == &lt;br /&gt;
&lt;br /&gt;
* MA Start ca 17.12.2007&lt;br /&gt;
* MA End ca 17.6.2008&lt;br /&gt;
&lt;br /&gt;
* Testing the tester&lt;br /&gt;
** System level test for CDD (incl. framework)&lt;br /&gt;
** Recreating existing unit test suite with CDD&lt;br /&gt;
** Large scale validation of CDD&lt;br /&gt;
*** Info 4 and/or Software Engineering&lt;br /&gt;
*** Questions&lt;br /&gt;
**** Does testing (manual/extracted) increase developer productivity?&lt;br /&gt;
**** How many tests do people end up with (manual/extracted)?&lt;br /&gt;
**** ...&lt;br /&gt;
&lt;br /&gt;
== Various ==&lt;br /&gt;
* Regression testing: [[CddRegressionTesting]]&lt;br /&gt;
* TreeView Specification: [[CddTreeViewSpec]]&lt;br /&gt;
* [[CDDHowtoRollARelease]]&lt;br /&gt;
&lt;br /&gt;
=== Things we need from estudio ===&lt;br /&gt;
* Invariants should be checked during debugging in the same way as pre- and postconditions (they could also be visualised in the flat view the same way pre- and postconditions are)&lt;br /&gt;
* The information whether a call is a creation call or a normal routine call (not sure if this is really necessary; what if we assume every call to a creation procedure is always a creation call?)&lt;br /&gt;
* Support for multiple open targets&lt;/div&gt;</summary>
		<author><name>Mogh</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=CDDHowtoRollARelease&amp;diff=10620</id>
		<title>CDDHowtoRollARelease</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=CDDHowtoRollARelease&amp;diff=10620"/>
				<updated>2008-02-18T20:29:42Z</updated>
		
		<summary type="html">&lt;p&gt;Mogh: /* List of CDD specific files */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:CDD]]&lt;br /&gt;
== List of CDD specific files ==&lt;br /&gt;
The following files have to be overwritten or added to a stock EiffelStudio release:&lt;br /&gt;
* ec(.exe) -&amp;gt; $ISE_EIFFEL/studio/spec/$ISE_PLATFORM/bin&lt;br /&gt;
* Src/library/base/ise/support/cdd -&amp;gt; $ISE_EIFFEL/library/base/ise/support/cdd&lt;br /&gt;
* [OBSOLETE, file has been reverted to version of official ES 6.1] Src/library/base/ise/runtime/debug/classic/rt_extension.e -&amp;gt; $ISE_EIFFEL/library/base/ise/runtime/debug/classic/rt_extension.e&lt;br /&gt;
* Src/examples/cdd -&amp;gt; $ISE_EIFFEL/examples/cdd&lt;br /&gt;
* Delivery/studio/help/defaults -&amp;gt; $ISE_EIFFEL/studio/help/defaults&lt;br /&gt;
* Delivery/studio/bitmaps/png/16x16.png -&amp;gt; $ISE_EIFFEL/studio/bitmaps/png/16x16.png&lt;br /&gt;
* Delivery/studio/bitmaps/png/splash_shadow.png -&amp;gt; $ISE_EIFFEL/studio/bitmaps/png/splash_shadow.png&lt;br /&gt;
* Delivery/studio/bitmaps/png/splash.png -&amp;gt; $ISE_EIFFEL/studio/bitmaps/png/splash.png&lt;br /&gt;
* [For Windows Only] estudio.exe -&amp;gt; $ISE_EIFFEL/studio/spec/$ISE_PLATFORM/bin&lt;br /&gt;
&lt;br /&gt;
== How to roll a release on Windows ==&lt;br /&gt;
* prepare some directory &amp;lt;INSTALL_DIR&amp;gt; like this:&lt;br /&gt;
** create 3 subdirectories &amp;lt;INSTALL_DIR&amp;gt;/EiffelStudio, &amp;lt;INSTALL_DIR&amp;gt;/gcc, &amp;lt;INSTALL_DIR&amp;gt;/releases. Fill them according to the following description&lt;br /&gt;
*** [&amp;lt;INSTALL_DIR&amp;gt;/EiffelStudio] has to contain a complete delivery without the ec.exe binaries. Without a working delivery script, this can be achieved by &amp;quot;patching&amp;quot; an official installation:&lt;br /&gt;
**** copy the contents of the installation directory of a fresh official installation into &amp;lt;INSTALL_DIR&amp;gt;/EiffelStudio (fresh = NO EIFGENs produced. To be sure, uninstall the official version if already installed, manually delete the remains in the installation directory, and reinstall it WITHOUT building any precompilations)&lt;br /&gt;
**** Remove the ec.exe from  &amp;lt;INSTALL_DIR&amp;gt;/EiffelStudio/studio/spec/windows/bin&lt;br /&gt;
**** Replace/Add the cdd specific files (see list above). Watch out for the proper place to add/replace them (the above list is svn-specific)&lt;br /&gt;
*** [&amp;lt;INSTALL_DIR&amp;gt;/gcc] needs to contain content of &amp;lt;svn-eiffel-branch&amp;gt;/free_add_ons/gcc&lt;br /&gt;
*** [&amp;lt;INSTALL_DIR&amp;gt;/releases] needs to contain the ec.exe binaries (the cdd versions of course)&lt;br /&gt;
**** &amp;lt;INSTALL_DIR&amp;gt;/releases/gpl_version/ needs to contain ec.exe&lt;br /&gt;
**** &amp;lt;INSTALL_DIR&amp;gt;/releases/enterprise_version/ needs to contain ec.exe (take the same ec.exe, it's a dummy for using the scripts later on)&lt;br /&gt;
&lt;br /&gt;
* let env variable %INSTALL_DIR% point to the &amp;lt;INSTALL_DIR&amp;gt;&lt;br /&gt;
* let env variable %INIT_DIR% point to the &amp;lt;svn-eiffel-branch&amp;gt;/Delivery/scripts/windows folder&lt;br /&gt;
* finalize the &amp;quot;hallow&amp;quot; tool (&amp;lt;svn-eiffel-branch&amp;gt;/Src/tools/hallow/hallow.ecf)&lt;br /&gt;
* create the directory %INIT_DIR%/install/bin if it does not exist&lt;br /&gt;
* copy the content of &amp;lt;svn-eiffel-branch&amp;gt;/Src/tools/hallow/EIFGENs/hallow/F_code to %INIT_DIR%/install/bin&lt;br /&gt;
* create the directory %INIT_DIR%/install/binaries/x86 if it does not exist&lt;br /&gt;
* get a proper setup.dll (probably from manus, or build it yourself; instructions for this will be added soon) and put it into %INIT_DIR%/install/binaries/x86/&lt;br /&gt;
* start command line, go to %INIT_DIR%/install/content/eiffelstudio and run:&lt;br /&gt;
** nmake /nologo clean&lt;br /&gt;
** nmake /nologo&lt;br /&gt;
** nmake /nologo gpl_x86&lt;br /&gt;
* wait some minutes .... and pray :-)&lt;/div&gt;</summary>
		<author><name>Mogh</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=CddMeeting_13_02_2008&amp;diff=10609</id>
		<title>CddMeeting 13 02 2008</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=CddMeeting_13_02_2008&amp;diff=10609"/>
				<updated>2008-02-14T19:10:11Z</updated>
		
		<summary type="html">&lt;p&gt;Mogh: /* Stefan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:CDD]]&lt;br /&gt;
=CDD Meeting, Wednesday, 14.02.2008, 10:00=&lt;br /&gt;
&lt;br /&gt;
== Next Meeting ==&lt;br /&gt;
* ?&lt;br /&gt;
&lt;br /&gt;
== Tasks ==&lt;br /&gt;
&lt;br /&gt;
===Andreas===&lt;br /&gt;
* Formulate Experiment Hypothesis (Andreas)&lt;br /&gt;
* Fix AutoTest for courses&lt;br /&gt;
** New release&lt;br /&gt;
* Write documentation and video tutorials (together with final release)&lt;br /&gt;
* Finish tuple_002 test case&lt;br /&gt;
* [delayed] Retest if test cases with errors are properly ignored (after 6.1 port)&lt;br /&gt;
* [done] Add timeout judgement&lt;br /&gt;
&lt;br /&gt;
===Arno===&lt;br /&gt;
* Build releasable delivery for Linux (after each Beta I guess...)&lt;br /&gt;
* Make general encoding/decoding routines for special feature names&lt;br /&gt;
** e.g. infix &amp;quot;+&amp;quot; (ES) &amp;lt;=&amp;gt; infix_plus (CDD)&lt;br /&gt;
* Make newly extracted test cases show up &amp;quot;expanded&amp;quot; in GUI treeview&lt;br /&gt;
* &amp;quot;Cleanup button&amp;quot; deletes filtered unresolved extracted test cases&lt;br /&gt;
====Bug Fixing====&lt;br /&gt;
* Check why EiffelStudio quits after debugging a test routine and ignoring violations (ask Jocelyn)&lt;br /&gt;
* CDD Output Tool window display is not saved (it is present on the first start after installation, but after closing and reopening it is gone)&lt;br /&gt;
&lt;br /&gt;
===Ilinca===&lt;br /&gt;
&lt;br /&gt;
* Integrate variable declarations into AutoTest trunk (by 8.2.2008)&lt;br /&gt;
&lt;br /&gt;
===Stefan===&lt;br /&gt;
* [RECURRENT] Build releasable delivery on Windows&lt;br /&gt;
* [RECURRENT, currently valid] Check log for XML validity&lt;br /&gt;
* [DONE] Add timeout information to log&lt;br /&gt;
* [DONE] Distinguish extracted, synthesized and manual test cases in logs&lt;br /&gt;
* [DONE] Log TS Snapshot after execution (only executed routines)&lt;br /&gt;
* [DONE] Log TS Snapshot after compilation (complete test suite)&lt;br /&gt;
* [DONE] Log when ES starts up and shuts down&lt;br /&gt;
* [DONE] Log time it takes to extract test case&lt;br /&gt;
* [DONE] Log time it takes to compile SUT&lt;br /&gt;
* [DONE] Log time it takes to compile test suite&lt;br /&gt;
* [DONE, temporary solution] Log original exception (make it part of test routine's state)&lt;br /&gt;
* Second Chance re-run to find true prestate (with Jocelyn)&lt;br /&gt;
* Allow for test case extraction of passing routine invocations (with Jocelyn)&lt;br /&gt;
* Revive system level test suite&lt;br /&gt;
* Fix config files for system level tests (remove cdd tag)&lt;br /&gt;
&lt;br /&gt;
* Rebuilding manual test suite through extraction and synthesizing&lt;br /&gt;
* Find performance bottleneck of test case extraction and propose extraction method for second chance&lt;br /&gt;
&lt;br /&gt;
====Bugs/Things to look at====&lt;br /&gt;
&lt;br /&gt;
* For big projects (like ES itself) background compilation of the interpreter leads to completely unresponsive ES&lt;br /&gt;
* [WAITING FOR ANSWER TO EMAIL] Crash upon closing of EiffelStudio (feature call on void target in breakpoint tool)&lt;br /&gt;
&lt;br /&gt;
===Manu===&lt;br /&gt;
* Install CDD in student labs (Manu)&lt;br /&gt;
* Devise questionnaires&lt;br /&gt;
** Initial (due next meeting after Manu's vacation)&lt;br /&gt;
** Midterm&lt;br /&gt;
** Final&lt;br /&gt;
* Analyze questionnaires&lt;br /&gt;
* Rework example profiles&lt;br /&gt;
* Assis will use CDD to get a feel for it and create a test suite for the students to start with&lt;br /&gt;
&lt;br /&gt;
=== Bernd ===&lt;br /&gt;
* Define Project for SoftEng&lt;br /&gt;
** Find a test suite for us to test students' code&lt;br /&gt;
** Find project with pure functional part&lt;br /&gt;
&lt;br /&gt;
===Unassigned===&lt;br /&gt;
&lt;br /&gt;
* Only execute unresolved test cases once. Disable them afterwards. (Needs discussion)&lt;br /&gt;
* Cache debug values when extracting several test cases.&lt;br /&gt;
&lt;br /&gt;
====Beta Tester Feedback====&lt;br /&gt;
(Please put your name so we can get back to you in case of questions)&lt;br /&gt;
* (Jocelyn) It should be possible to set the location of the cdd_tests directory (what if location of .ecf file is not readable?) [delayed]&lt;br /&gt;
** home directory? application_data directory?  [Jocelyn]&lt;br /&gt;
* (Jocelyn) There should be UI support for deletion of Test Case [difficult for test routines, possible for classes. documentation added.]&lt;br /&gt;
* (Jocelyn) [BUG] the manual test case creation dialog should check if class with chosen name is already in the system [delayed]&lt;br /&gt;
* (Jocelyn) It would be nice if there was a way to configure the timeout for the interpreter [Stefan will ask Jocelyn for concrete example]&lt;br /&gt;
&lt;br /&gt;
== Questionnaires ==&lt;br /&gt;
&lt;br /&gt;
* Use ELBA&lt;br /&gt;
&lt;br /&gt;
== Software Engineering Project ==&lt;br /&gt;
&lt;br /&gt;
* Task 1: Implement VCard API&lt;br /&gt;
* Task 2: Implement Mime API&lt;br /&gt;
* Task 3: Write test cases to reveal faults in foreign VCard implementations&lt;br /&gt;
* Task 4: Write test cases to reveal faults in foreign Mime implementations&lt;br /&gt;
&lt;br /&gt;
* Group A:&lt;br /&gt;
** Task 1, Manual Tests&lt;br /&gt;
** Task 2, Extracted Tests&lt;br /&gt;
** Task 3, Manual Tests&lt;br /&gt;
** Task 4, Extracted Tests&lt;br /&gt;
* Group B:&lt;br /&gt;
** Task 1, Extracted Tests&lt;br /&gt;
** Task 2, Manual Tests&lt;br /&gt;
** Task 3, Extracted Tests&lt;br /&gt;
** Task 4, Manual Tests&lt;br /&gt;
&lt;br /&gt;
* One large project, but divided into testable subcomponents&lt;br /&gt;
* Students required to write test cases&lt;br /&gt;
* Fixed API to make things uniformly testable&lt;br /&gt;
* Public/Secret test cases (similar to Zeller course)&lt;br /&gt;
* Competitions:&lt;br /&gt;
** Group A test cases applied to Group A project&lt;br /&gt;
** Group A test cases applied to Group B project&lt;br /&gt;
&lt;br /&gt;
* Idea how to cancel out bias while allowing fair grading:&lt;br /&gt;
** Subtasks 1 and 2, Students divided into groups A and B&lt;br /&gt;
** First both groups do 1, A is allowed to use tool, B not&lt;br /&gt;
** Then both groups do 2, B is allowed to use tool, A not&lt;br /&gt;
** Bias cancellation:&lt;br /&gt;
*** Project complexity&lt;br /&gt;
*** Experience of students&lt;br /&gt;
*** Experience gained in first subtask, when developing second&lt;br /&gt;
*** Risk: One task might be better suited for the tool than the other&lt;br /&gt;
&lt;br /&gt;
== Data to harvest ==&lt;br /&gt;
* IDE Time with CDD(extraction) enabled / IDE Time with CDD(extraction) disabled&lt;br /&gt;
* Test Case Source (just final version, or all versions?)&lt;br /&gt;
** Use Profiler to get coverage approximation&lt;br /&gt;
* TC Meta Data (with timestamps -&amp;gt; Evolution of Test Case)&lt;br /&gt;
** TC Added/Removed/Changed&lt;br /&gt;
** TC Outcome (transitions from FAIL/PASS/UNRESOLVED[bad_communication &amp;lt;-&amp;gt; does_not_compile &amp;lt;-&amp;gt; bad_input])&lt;br /&gt;
** TC execution time&lt;br /&gt;
** Modifications to a test case (compiler needs to recompile)&lt;br /&gt;
* Development Session Data&lt;br /&gt;
** IDE Startup&lt;br /&gt;
** File save&lt;br /&gt;
* Questionnaires&lt;br /&gt;
** Initial&lt;br /&gt;
** Final&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Logging ==&lt;br /&gt;
&lt;br /&gt;
* &amp;quot;Meta&amp;quot; log entries&lt;br /&gt;
** Project opened (easy)&lt;br /&gt;
** CDD enable/disable (easy)&lt;br /&gt;
** general EiffelStudio action log entries for Developer Behaviour (harder... what do we need??)&lt;br /&gt;
&lt;br /&gt;
* CDD actions log entries&lt;br /&gt;
** Compilation of interpreter (start, end, duration)&lt;br /&gt;
** Execution of test cases (start, end; do we need the individual duration of each test case that gets executed?)&lt;br /&gt;
** Extraction of new test case (extraction time)&lt;br /&gt;
&lt;br /&gt;
* Test Suite Status&lt;br /&gt;
** Test suite: after each refresh log list of all test cases (class level, needed because it's not possible to know when manual test cases get added...)&lt;br /&gt;
** Test class: (do we need info on this level)&lt;br /&gt;
** Test routine: status (basically as you see it in the tool)&lt;br /&gt;
&lt;br /&gt;
==Experiment Hypotheses==&lt;br /&gt;
&lt;br /&gt;
===How reliably can we extract test cases that reproduce the original failure?===&lt;br /&gt;
* Log original exception and exception received from first test execution&lt;br /&gt;
&lt;br /&gt;
===Are the extracted tests useful for debugging?===&lt;br /&gt;
* Ask developers using CDD&lt;br /&gt;
&lt;br /&gt;
===What is the (time and memory) overhead of enabling extraction?===&lt;br /&gt;
&lt;br /&gt;
===What is the size of the extracted test cases?===&lt;br /&gt;
&lt;br /&gt;
===Are we able to reproduce bugs from industry?===&lt;br /&gt;
Take bug repository (say EiffelStudio, Gobo, eposix, ...). Use buggy version, try to reproduce bug, see if extracted test case is good.&lt;br /&gt;
&lt;br /&gt;
===Does it make a difference in the quality of the code whether one writes tests manually or extracts them?===&lt;br /&gt;
* Compare projects using extracted tests and manual tests to ref test suite&lt;br /&gt;
&lt;br /&gt;
===Do contracts replace traditional testing oracles?===&lt;br /&gt;
* Original API without contracts&lt;br /&gt;
* Run failing test cases (the ones we get from the second part) with the reference API with contracts&lt;br /&gt;
* How many times does the contract replace the testing oracle?&lt;/div&gt;</summary>
		<author><name>Mogh</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=CddMeeting_13_02_2008&amp;diff=10608</id>
		<title>CddMeeting 13 02 2008</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=CddMeeting_13_02_2008&amp;diff=10608"/>
				<updated>2008-02-14T13:42:33Z</updated>
		
		<summary type="html">&lt;p&gt;Mogh: /* Stefan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:CDD]]&lt;br /&gt;
=CDD Meeting, Wednesday, 14.02.2008, 10:00=&lt;br /&gt;
&lt;br /&gt;
== Next Meeting ==&lt;br /&gt;
* ?&lt;br /&gt;
&lt;br /&gt;
== Tasks ==&lt;br /&gt;
&lt;br /&gt;
===Andreas===&lt;br /&gt;
* Formulate Experiment Hypothesis (Andreas)&lt;br /&gt;
* Fix AutoTest for courses&lt;br /&gt;
** New release&lt;br /&gt;
* Write documentation and video tutorials (together with final release)&lt;br /&gt;
* Finish tuple_002 test case&lt;br /&gt;
* [delayed] Retest if test cases with errors are properly ignored (after 6.1 port)&lt;br /&gt;
* [done] Add timeout judgement&lt;br /&gt;
&lt;br /&gt;
===Arno===&lt;br /&gt;
* Build releasable delivery for Linux (after each Beta I guess...)&lt;br /&gt;
* Make general encoding/decoding routines for special feature names&lt;br /&gt;
** e.g. infix &amp;quot;+&amp;quot; (ES) &amp;lt;=&amp;gt; infix_plus (CDD)&lt;br /&gt;
* Make newly extracted test cases show up &amp;quot;expanded&amp;quot; in GUI treeview&lt;br /&gt;
* &amp;quot;Cleanup button&amp;quot; deletes filtered unresolved extracted test cases&lt;br /&gt;
====Bug Fixing====&lt;br /&gt;
* Check why EiffelStudio quits after debugging a test routine and ignoring violations (ask Jocelyn)&lt;br /&gt;
* CDD Output Tool window display is not saved (it is present on the first start after installation, but after closing and reopening it is gone)&lt;br /&gt;
&lt;br /&gt;
===Ilinca===&lt;br /&gt;
&lt;br /&gt;
* Integrate variable declarations into AutoTest trunk (by 8.2.2008)&lt;br /&gt;
&lt;br /&gt;
===Stefan===&lt;br /&gt;
* [RECURRENT] Build releasable delivery on Windows&lt;br /&gt;
* Check log for XML validity&lt;br /&gt;
* [DONE] Add timeout information to log&lt;br /&gt;
* [DONE] Distinguish extracted, synthesized and manual test cases in logs&lt;br /&gt;
* [DONE] Log TS Snapshot after execution (only executed routines)&lt;br /&gt;
* [DONE] Log TS Snapshot after compilation (complete test suite)&lt;br /&gt;
* Log when ES starts up and shuts down&lt;br /&gt;
* Log time it takes to extract test case&lt;br /&gt;
* [DONE] Log time it takes to compile SUT&lt;br /&gt;
* [DONE] Log time it takes to compile test suite&lt;br /&gt;
* Log original exception (make it part of test routine's state)&lt;br /&gt;
* Second Chance re-run to find true prestate (with Jocelyn)&lt;br /&gt;
* Allow for test case extraction of passing routine invocations (with Jocelyn)&lt;br /&gt;
* Revive system level test suite&lt;br /&gt;
* Fix config files for system level tests (remove cdd tag)&lt;br /&gt;
&lt;br /&gt;
* Rebuilding manual test suite through extraction and synthesizing&lt;br /&gt;
* Find performance bottleneck of test case extraction and propose extraction method for second chance&lt;br /&gt;
&lt;br /&gt;
====Bugs/Things to look at====&lt;br /&gt;
&lt;br /&gt;
* For big projects (like ES itself) background compilation of the interpreter leads to completely unresponsive ES&lt;br /&gt;
* Crash upon closing of EiffelStudio (feature call on void target in breakpoint tool)&lt;br /&gt;
&lt;br /&gt;
===Manu===&lt;br /&gt;
* Install CDD in student labs (Manu)&lt;br /&gt;
* Devise questionnaires&lt;br /&gt;
** Initial (due next meeting after Manu's vacation)&lt;br /&gt;
** Midterm&lt;br /&gt;
** Final&lt;br /&gt;
* Analyze questionnaires&lt;br /&gt;
* Rework example profiles&lt;br /&gt;
* Assis will use CDD to get a feel for it and create a test suite for the students to start with&lt;br /&gt;
&lt;br /&gt;
=== Bernd ===&lt;br /&gt;
* Define Project for SoftEng&lt;br /&gt;
** Find a test suite for us to test students' code&lt;br /&gt;
** Find project with pure functional part&lt;br /&gt;
&lt;br /&gt;
===Unassigned===&lt;br /&gt;
&lt;br /&gt;
* Only execute unresolved test cases once. Disable them afterwards. (Needs discussion)&lt;br /&gt;
* Cache debug values when extracting several test cases.&lt;br /&gt;
&lt;br /&gt;
====Beta Tester Feedback====&lt;br /&gt;
(Please put your name so we can get back to you in case of questions)&lt;br /&gt;
* (Jocelyn) It should be possible to set the location of the cdd_tests directory (what if location of .ecf file is not readable?) [delayed]&lt;br /&gt;
** home directory? application_data directory?  [Jocelyn]&lt;br /&gt;
* (Jocelyn) There should be UI support for deletion of Test Case [difficult for test routines, possible for classes. documentation added.]&lt;br /&gt;
* (Jocelyn) [BUG] the manual test case creation dialog should check if class with chosen name is already in the system [delayed]&lt;br /&gt;
* (Jocelyn) It would be nice if there was a way to configure the timeout for the interpreter [Stefan will ask Jocelyn for concrete example]&lt;br /&gt;
&lt;br /&gt;
== Questionnaires ==&lt;br /&gt;
&lt;br /&gt;
* Use ELBA&lt;br /&gt;
&lt;br /&gt;
== Software Engineering Project ==&lt;br /&gt;
&lt;br /&gt;
* Task 1: Implement VCard API&lt;br /&gt;
* Task 2: Implement Mime API&lt;br /&gt;
* Task 3: Write test cases to reveal faults in foreign VCard implementations&lt;br /&gt;
* Task 4: Write test cases to reveal faults in foreign Mime implementations&lt;br /&gt;
&lt;br /&gt;
* Group A:&lt;br /&gt;
** Task 1, Manual Tests&lt;br /&gt;
** Task 2, Extracted Tests&lt;br /&gt;
** Task 3, Manual Tests&lt;br /&gt;
** Task 4, Extracted Tests&lt;br /&gt;
* Group B:&lt;br /&gt;
** Task 1, Extracted Tests&lt;br /&gt;
** Task 2, Manual Tests&lt;br /&gt;
** Task 3, Extracted Tests&lt;br /&gt;
** Task 4, Manual Tests&lt;br /&gt;
&lt;br /&gt;
* One large project, but divided into testable subcomponents&lt;br /&gt;
* Students required to write test cases&lt;br /&gt;
* Fixed API to make things uniformly testable&lt;br /&gt;
* Public/Secret test cases (similar to Zeller course)&lt;br /&gt;
* Competitions:&lt;br /&gt;
** Group A test cases applied to Group A project&lt;br /&gt;
** Group A test cases applied to Group B project&lt;br /&gt;
&lt;br /&gt;
* Idea how to cancel out bias while allowing fair grading:&lt;br /&gt;
** Subtasks 1 and 2, Students divided into groups A and B&lt;br /&gt;
** First both groups do 1, A is allowed to use tool, B not&lt;br /&gt;
** Then both groups do 2, B is allowed to use tool, A not&lt;br /&gt;
** Bias cancellation:&lt;br /&gt;
*** Project complexity&lt;br /&gt;
*** Experience of students&lt;br /&gt;
*** Experience gained in first subtask, when developing second&lt;br /&gt;
*** Risk: One task might be better suited for the tool than the other&lt;br /&gt;
&lt;br /&gt;
== Data to harvest ==&lt;br /&gt;
* IDE Time with CDD(extraction) enabled / IDE Time with CDD(extraction) disabled&lt;br /&gt;
* Test Case Source (just final version, or all versions?)&lt;br /&gt;
** Use Profiler to get coverage approximation&lt;br /&gt;
* TC Meta Data (with timestamps -&amp;gt; Evolution of Test Case)&lt;br /&gt;
** TC Added/Removed/Changed&lt;br /&gt;
** TC Outcome (transitions from FAIL/PASS/UNRESOLVED[bad_communication &amp;lt;-&amp;gt; does_not_compile &amp;lt;-&amp;gt; bad_input])&lt;br /&gt;
** TC execution time&lt;br /&gt;
** Modifications to a test case (compiler needs to recompile)&lt;br /&gt;
* Development Session Data&lt;br /&gt;
** IDE Startup&lt;br /&gt;
** File save&lt;br /&gt;
* Questionnaires&lt;br /&gt;
** Initial&lt;br /&gt;
** Final&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Logging ==&lt;br /&gt;
&lt;br /&gt;
* &amp;quot;Meta&amp;quot; log entries&lt;br /&gt;
** Project opened (easy)&lt;br /&gt;
** CDD enable/disable (easy)&lt;br /&gt;
** general EiffelStudio action log entries for Developer Behaviour (harder... what do we need??)&lt;br /&gt;
&lt;br /&gt;
* CDD actions log entries&lt;br /&gt;
** Compilation of interpreter (start, end, duration)&lt;br /&gt;
** Execution of test cases (start, end; do we need the individual duration of each test case that gets executed?)&lt;br /&gt;
** Extraction of new test case (extraction time)&lt;br /&gt;
&lt;br /&gt;
* Test Suite Status&lt;br /&gt;
** Test suite: after each refresh log list of all test cases (class level, needed because it's not possible to know when manual test cases get added...)&lt;br /&gt;
** Test class: (do we need info on this level)&lt;br /&gt;
** Test routine: status (basically as you see it in the tool)&lt;br /&gt;
&lt;br /&gt;
==Experiment Hypotheses==&lt;br /&gt;
&lt;br /&gt;
===How reliably can we extract test cases that reproduce the original failure?===&lt;br /&gt;
* Log original exception and exception received from first test execution&lt;br /&gt;
&lt;br /&gt;
===Are the extracted tests useful for debugging?===&lt;br /&gt;
* Ask developers using CDD&lt;br /&gt;
&lt;br /&gt;
===What is the (time and memory) overhead of enabling extraction?===&lt;br /&gt;
&lt;br /&gt;
===What is the size of the extracted test cases?===&lt;br /&gt;
&lt;br /&gt;
===Are we able to reproduce bugs from industry?===&lt;br /&gt;
Take bug repository (say EiffelStudio, Gobo, eposix, ...). Use buggy version, try to reproduce bug, see if extracted test case is good.&lt;br /&gt;
&lt;br /&gt;
===Does it make a difference in the quality of the code whether one writes tests manually or extracts them?===&lt;br /&gt;
* Compare projects using extracted tests and manual tests to ref test suite&lt;br /&gt;
&lt;br /&gt;
===Do contracts replace traditional testing oracles?===&lt;br /&gt;
* Original API without contracts&lt;br /&gt;
* Run failing test cases (the ones we get from the second part) with the reference API with contracts&lt;br /&gt;
* How many times does the contract replace the testing oracle?&lt;/div&gt;</summary>
		<author><name>Mogh</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=CddBranch&amp;diff=10601</id>
		<title>CddBranch</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=CddBranch&amp;diff=10601"/>
				<updated>2008-02-14T09:37:27Z</updated>
		
		<summary type="html">&lt;p&gt;Mogh: /* Download */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Testing]]&lt;br /&gt;
[[Category:EiffelDebugger]]&lt;br /&gt;
[[Category:CDD]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== What is the CDD extension for EiffelStudio? ==&lt;br /&gt;
CDD (short for Contract Driven Development) is a project developed at ETH Zurich. The extension adds full support for unit testing to EiffelStudio. It also introduces the new idea of extracting test cases automatically from failures observed via the debugger. The following lists the main features of CDD:&lt;br /&gt;
&lt;br /&gt;
* Automated extraction of test cases from failures (For every exception thrown a new test case is created)&lt;br /&gt;
* Visualization of test cases and their outcomes&lt;br /&gt;
* One button creation of manual test cases&lt;br /&gt;
* Automated execution of test cases in the background&lt;br /&gt;
* Limit visible test cases via predefined filters and custom tags&lt;br /&gt;
* Testing occurs in the background and does not disrupt the developer&lt;br /&gt;
* Easy test case management through tags&lt;br /&gt;
&lt;br /&gt;
If you have questions, feedback, or would like to report a bug, please visit the [http://eiffelstudio.origo.ethz.ch/forum/20 CDD forum].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:video_still.png|center]]&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;h3&amp;gt;&lt;br /&gt;
[http://se.ethz.ch/people/leitner/cdd/video Play Video!]&lt;br /&gt;
&amp;lt;/h3&amp;gt;&lt;br /&gt;
(TODO: Update with more recent screenshot and video)&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Download ==&lt;br /&gt;
&lt;br /&gt;
* Download CDD Beta 2&lt;br /&gt;
** Linux: http://se.ethz.ch/people/leitner/cdd/Eiffel61_cdd_beta_2.tar.gz (Download the above file and install it just like you would install an EiffelStudio tarball. Afterwards proceed to the section &amp;quot;Using CDD&amp;quot;.)&lt;br /&gt;
** Windows&lt;br /&gt;
*** Patch: There is no patch release available for Beta 3, since the CDD Edition of EiffelStudio no longer overwrites installations of the official EiffelStudio 6.1 and therefore needs its own registry entries, which are set by the installer. (For future releases a patch might again be available for patching installations produced by the installer below.)&lt;br /&gt;
*** Installer: http://n.ethz.ch/~moris/download/Eiffel61cdd_gpl_beta3-windows.msi (Note: Installation is independent of installations of official EiffelStudio 6.1)&lt;br /&gt;
&lt;br /&gt;
== Documentation ==&lt;br /&gt;
&lt;br /&gt;
* [[Using CDD]]&lt;br /&gt;
* [[CDD Common Problems|Common Problems]]&lt;br /&gt;
&lt;br /&gt;
== Old Documentation == &lt;br /&gt;
&lt;br /&gt;
Documentation for the release of CDD for EiffelStudio version 5.7 is available from [[CddOldDocumentation]].&lt;br /&gt;
&lt;br /&gt;
= Project Internal Stuff =&lt;br /&gt;
&lt;br /&gt;
== Schedule ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* 04.01.2008: Final experiment definition (questions to ask, how to conduct experiment)&lt;br /&gt;
* 08.01.2008: Finalized list of features to go in release (including logging and log submission)&lt;br /&gt;
* 27.01.2008: Beta 1 (feature complete version online)&lt;br /&gt;
* 04.02.2008: Beta 2 (designated testers test release, # &amp;gt; 3)&lt;br /&gt;
* 11.02.2008: Beta tester feedback in&lt;br /&gt;
* 11.02.2008: Beta 3&lt;br /&gt;
* 18.02.2008: Final CDD release for experiment online&lt;br /&gt;
* 19.02.2008: Initial Questionnaire&lt;br /&gt;
* 25.03.2008: Midterm Questionnaire&lt;br /&gt;
* 19.05.2008: Final Questionnaire&lt;br /&gt;
* 20.05.2008: Having all data&lt;br /&gt;
* 06.06.2008: Finished analysis&lt;br /&gt;
&lt;br /&gt;
== Stefan's Master Plan == &lt;br /&gt;
&lt;br /&gt;
* MA Start ca 17.12.2007&lt;br /&gt;
* MA End ca 17.6.2008&lt;br /&gt;
&lt;br /&gt;
* Testing the tester&lt;br /&gt;
** System level test for CDD (incl. framework)&lt;br /&gt;
** Recreating existing unit test suite with CDD&lt;br /&gt;
** Large scale validation of CDD&lt;br /&gt;
*** Info 4 and/or Software Engineering&lt;br /&gt;
*** Questions&lt;br /&gt;
**** Does testing (manual/extracted) increase developer productivity?&lt;br /&gt;
**** How many tests do people end up with (manual/extracted)?&lt;br /&gt;
**** ...&lt;br /&gt;
&lt;br /&gt;
== Various ==&lt;br /&gt;
* Regression testing: [[CddRegressionTesting]]&lt;br /&gt;
* TreeView Specification: [[CddTreeViewSpec]]&lt;br /&gt;
* [[CDDHowtoRollARelease]]&lt;br /&gt;
&lt;br /&gt;
=== Things we need from estudio ===&lt;br /&gt;
* Invariants should be checked during debugging just like pre- and postconditions (they could also be visualised in the flat view the same way as pre- and postconditions are)&lt;br /&gt;
* The information whether some call is a creation call or a normal routine call (Not sure if this is really necessary, what if we assume every call to some creation procedure is always a creation call?)&lt;br /&gt;
* Support for multiple open targets&lt;/div&gt;</summary>
		<author><name>Mogh</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=CDDHowtoRollARelease&amp;diff=10600</id>
		<title>CDDHowtoRollARelease</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=CDDHowtoRollARelease&amp;diff=10600"/>
				<updated>2008-02-14T08:13:28Z</updated>
		
		<summary type="html">&lt;p&gt;Mogh: /* List of CDD specific files */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:CDD]]&lt;br /&gt;
== List of CDD specific files ==&lt;br /&gt;
The following files have to be overwritten or added to a stock EiffelStudio release:&lt;br /&gt;
* ec(.exe) -&amp;gt; $ISE_EIFFEL/studio/spec/$ISE_PLATFORM/bin&lt;br /&gt;
* Src/library/base/ise/support/cdd -&amp;gt; $ISE_EIFFEL/library/base/ise/support/cdd&lt;br /&gt;
* Src/library/base/ise/runtime/debug/classic/rt_extension.e -&amp;gt; $ISE_EIFFEL/library/base/ise/runtime/debug/classic/rt_extension.e&lt;br /&gt;
* Src/examples/cdd -&amp;gt; $ISE_EIFFEL/examples/cdd&lt;br /&gt;
* Delivery/studio/help/defaults -&amp;gt; $ISE_EIFFEL/studio/help/defaults&lt;br /&gt;
* Delivery/studio/bitmaps/png/16x16.png -&amp;gt; $ISE_EIFFEL/studio/bitmaps/png/16x16.png&lt;br /&gt;
* Delivery/studio/bitmaps/png/splash_shadow.png -&amp;gt; $ISE_EIFFEL/studio/bitmaps/png/splash_shadow.png&lt;br /&gt;
* Delivery/studio/bitmaps/png/splash.png -&amp;gt; $ISE_EIFFEL/studio/bitmaps/png/splash.png&lt;br /&gt;
* [For Windows Only] estudio.exe -&amp;gt; $ISE_EIFFEL/studio/spec/$ISE_PLATFORM/bin&lt;br /&gt;
&lt;br /&gt;
== How to roll a release on Windows ==&lt;br /&gt;
* prepare some directory &amp;lt;INSTALL_DIR&amp;gt; like this:&lt;br /&gt;
** create 3 subdirectories &amp;lt;INSTALL_DIR&amp;gt;/EiffelStudio, &amp;lt;INSTALL_DIR&amp;gt;/gcc, &amp;lt;INSTALL_DIR&amp;gt;/releases. Fill them according to the following description&lt;br /&gt;
*** [&amp;lt;INSTALL_DIR&amp;gt;/EiffelStudio] has to contain a complete delivery without the ec.exe binaries. Without a working delivery script, this can be achieved by &amp;quot;patching&amp;quot; an official installation:&lt;br /&gt;
**** copy the contents of the installation directory of a fresh official installation into &amp;lt;INSTALL_DIR&amp;gt;/EiffelStudio (fresh = NO EIFGENs produced. To be sure, uninstall the official version if already installed, manually delete the remains in the installation directory, and reinstall it WITHOUT building any precompilations)&lt;br /&gt;
**** Remove the ec.exe from  &amp;lt;INSTALL_DIR&amp;gt;/EiffelStudio/studio/spec/windows/bin&lt;br /&gt;
**** Replace/Add the cdd specific files (see list above). Watch out for the proper place to add/replace them (the above list is svn-specific)&lt;br /&gt;
*** [&amp;lt;INSTALL_DIR&amp;gt;/gcc] needs to contain content of &amp;lt;svn-eiffel-branch&amp;gt;/free_add_ons/gcc&lt;br /&gt;
*** [&amp;lt;INSTALL_DIR&amp;gt;/releases] needs to contain the ec.exe binaries (the cdd versions of course)&lt;br /&gt;
**** &amp;lt;INSTALL_DIR&amp;gt;/releases/gpl_version/ needs to contain ec.exe&lt;br /&gt;
**** &amp;lt;INSTALL_DIR&amp;gt;/releases/enterprise_version/ needs to contain ec.exe (take the same ec.exe, it's a dummy for using the scripts later on)&lt;br /&gt;
&lt;br /&gt;
* let env variable %INSTALL_DIR% point to the &amp;lt;INSTALL_DIR&amp;gt;&lt;br /&gt;
* let env variable %INIT_DIR% point to the &amp;lt;svn-eiffel-branch&amp;gt;/Delivery/scripts/windows folder&lt;br /&gt;
* finalize the &amp;quot;hallow&amp;quot; tool (&amp;lt;svn-eiffel-branch&amp;gt;/Src/tools/hallow/hallow.ecf)&lt;br /&gt;
* create the directory %INIT_DIR%/install/bin if it does not exist&lt;br /&gt;
* copy the content of &amp;lt;svn-eiffel-branch&amp;gt;/Src/tools/hallow/EIFGENs/hallow/F_code to %INIT_DIR%/install/bin&lt;br /&gt;
* create the directory %INIT_DIR%/install/binaries/x86 if it does not exist&lt;br /&gt;
* get a proper setup.dll (probably from manus, or build it yourself; instructions for this will be added soon) and put it into %INIT_DIR%/install/binaries/x86/&lt;br /&gt;
* start command line, go to %INIT_DIR%/install/content/eiffelstudio and run:&lt;br /&gt;
** nmake /nologo clean&lt;br /&gt;
** nmake /nologo&lt;br /&gt;
** nmake /nologo gpl_x86&lt;br /&gt;
* wait some minutes .... and pray :-)&lt;/div&gt;</summary>
		<author><name>Mogh</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=CDDHowtoRollARelease&amp;diff=10599</id>
		<title>CDDHowtoRollARelease</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=CDDHowtoRollARelease&amp;diff=10599"/>
				<updated>2008-02-14T08:09:12Z</updated>
		
		<summary type="html">&lt;p&gt;Mogh: /* List of CDD specific files */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:CDD]]&lt;br /&gt;
== List of CDD specific files ==&lt;br /&gt;
The following files have to be overwritten or added to a stock EiffelStudio release:&lt;br /&gt;
* ec(.exe) -&amp;gt; $ISE_EIFFEL/studio/spec/$ISE_PLATFORM/bin&lt;br /&gt;
* Src/library/base/ise/support/cdd -&amp;gt; $ISE_EIFFEL/library/base/ise/support/cdd&lt;br /&gt;
* Src/library/base/ise/runtime/debug/classic/rt_extension.e -&amp;gt; $ISE_EIFFEL/library/base/ise/runtime/debug/classic/rt_extension.e&lt;br /&gt;
* Src/examples/cdd -&amp;gt; $ISE_EIFFEL/examples/cdd&lt;br /&gt;
* Delivery/studio/help/defaults -&amp;gt; $ISE_EIFFEL/studio/help/defaults&lt;br /&gt;
* Delivery/studio/bitmaps/png/16x16.png -&amp;gt; $ISE_EIFFEL/studio/bitmaps/png/16x16.png&lt;br /&gt;
* Delivery/studio/bitmaps/png/splash_shadow.png -&amp;gt; $ISE_EIFFEL/studio/bitmaps/png/splash_shadow.png&lt;br /&gt;
* Delivery/studio/bitmaps/png/splash.png -&amp;gt; $ISE_EIFFEL/studio/bitmaps/png/splash.png&lt;br /&gt;
&lt;br /&gt;
== How to roll a release on Windows ==&lt;br /&gt;
* prepare some directory &amp;lt;INSTALL_DIR&amp;gt; like this:&lt;br /&gt;
** create 3 subdirectories &amp;lt;INSTALL_DIR&amp;gt;/EiffelStudio, &amp;lt;INSTALL_DIR&amp;gt;/gcc, &amp;lt;INSTALL_DIR&amp;gt;/releases. Fill them according to the following description&lt;br /&gt;
*** [&amp;lt;INSTALL_DIR&amp;gt;/EiffelStudio] has to contain a complete delivery without the ec.exe binaries. Without a working delivery script, this can be achieved by &amp;quot;patching&amp;quot; an official installation:&lt;br /&gt;
**** copy the contents of the installation directory of a fresh official installation into &amp;lt;INSTALL_DIR&amp;gt;/EiffelStudio (fresh = NO EIFGENs produced. To be sure, uninstall the official version if already existing, manually delete the remains in the installation directory, and reinstall it WITHOUT building any precompilations)&lt;br /&gt;
**** Remove the ec.exe from  &amp;lt;INSTALL_DIR&amp;gt;/EiffelStudio/studio/spec/windows/bin&lt;br /&gt;
**** Replace/Add the cdd specific files (see list above). Watch out for the proper place to add/replace them (the above list is svn-specific)&lt;br /&gt;
*** [&amp;lt;INSTALL_DIR&amp;gt;/gcc] needs to contain content of &amp;lt;svn-eiffel-branch&amp;gt;/free_add_ons/gcc&lt;br /&gt;
*** [&amp;lt;INSTALL_DIR&amp;gt;/releases] needs to contain the ec.exe binaries (the cdd versions of course)&lt;br /&gt;
**** &amp;lt;INSTALL_DIR&amp;gt;/releases/gpl_version/ needs to contain ec.exe&lt;br /&gt;
**** &amp;lt;INSTALL_DIR&amp;gt;/releases/enterprise_version/ needs to contain ec.exe (take the same ec.exe, it's a dummy for using the scripts later on)&lt;br /&gt;
&lt;br /&gt;
* let env variable %INSTALL_DIR% point to the &amp;lt;INSTALL_DIR&amp;gt;&lt;br /&gt;
* let env variable %INIT_DIR% point to the &amp;lt;svn-eiffel-branch&amp;gt;/Delivery/scripts/windows folder&lt;br /&gt;
* finalize the &amp;quot;hallow&amp;quot; tool (&amp;lt;svn-eiffel-branch&amp;gt;/Src/tools/hallow/hallow.ecf)&lt;br /&gt;
* create if not exists directory %INIT_DIR%/install/bin&lt;br /&gt;
* copy content of &amp;lt;svn-eiffel-branch&amp;gt;/Src/tools/hallow/EIFGENs/hallow/F_code to INIT_DIR/install/bin&lt;br /&gt;
* create if not exists directory %INIT_DIR%/install/binaries/x86&lt;br /&gt;
* get a proper setup.dll (from manus probably, or build yourself, instructions for this will be added soon) and put it into %INIT_DIR%/install/binaries/x86/&lt;br /&gt;
* start command line, go to %INIT_DIR%/install/content/eiffelstudio and run:&lt;br /&gt;
** nmake /nologo clean&lt;br /&gt;
** nmake /nologo&lt;br /&gt;
** nmake /nologo gpl_x86&lt;br /&gt;
* wait some minutes .... and pray :-)&lt;/div&gt;</summary>
		<author><name>Mogh</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=CddMeeting_05_02_2008&amp;diff=10581</id>
		<title>CddMeeting 05 02 2008</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=CddMeeting_05_02_2008&amp;diff=10581"/>
				<updated>2008-02-07T09:17:29Z</updated>
		
		<summary type="html">&lt;p&gt;Mogh: /* Stefan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:CDD]]&lt;br /&gt;
=CDD Meeting, Wednesday, 06.02.2008, 10:00=&lt;br /&gt;
&lt;br /&gt;
== Next Meeting ==&lt;br /&gt;
* Tuesday, 12.02.2008, 10:00&lt;br /&gt;
&lt;br /&gt;
== Tasks ==&lt;br /&gt;
&lt;br /&gt;
===Andreas===&lt;br /&gt;
* Formulate Experiment Hypothesis (Andreas)&lt;br /&gt;
* Fix AutoTest for courses&lt;br /&gt;
** New release&lt;br /&gt;
* Write documentation and video tutorials (together with final release)&lt;br /&gt;
* Finish tuple_002 test case&lt;br /&gt;
* Retest if test cases with errors are properly ignored (after 6.1 port)&lt;br /&gt;
* Add timeout judgement&lt;br /&gt;
* Timeout -&amp;gt; 5 sec&lt;br /&gt;
&lt;br /&gt;
===Arno===&lt;br /&gt;
* When test class gets removed manually, update test suite&lt;br /&gt;
* Build releasable delivery for Linux (after each Beta I guess...)&lt;br /&gt;
* Display ignored test class compilation errors (looks like we will have this for free in 6.1)&lt;br /&gt;
* Red bg for failing test cases in view &lt;br /&gt;
* When debugging extracted test case, set first breakpoint in &amp;quot;covers.&amp;quot; feature&lt;br /&gt;
* Extraction for inline agents not currently working (at least not always)&lt;br /&gt;
** Create inline agent test case&lt;br /&gt;
** Fix extraction for inline agents&lt;br /&gt;
* Make general encoding/decoding routines for special feature names&lt;br /&gt;
** e.g. infix &amp;quot;+&amp;quot; (ES) &amp;lt;=&amp;gt; infix_plus (CDD)&lt;br /&gt;
====Bug Fixing====&lt;br /&gt;
* Result type (like Current) produces syntax error in new test class&lt;br /&gt;
* Check why EiffelStudio quits after debugging a test routine and ignoring violations&lt;br /&gt;
&lt;br /&gt;
===Ilinca===&lt;br /&gt;
&lt;br /&gt;
* Integrate variable declarations into AutoTest trunk (by 8.2.2008)&lt;br /&gt;
&lt;br /&gt;
===Stefan===&lt;br /&gt;
* [RECURRENT] Build releasable delivery on Windows&lt;br /&gt;
* Distinguish extracted, synthesized and manual test cases in logs&lt;br /&gt;
* Log TS Snapshot after compilation&lt;br /&gt;
* Log TS Snapshot after testing&lt;br /&gt;
* Log when ES starts up and shuts down&lt;br /&gt;
* Log time it takes to extract test case&lt;br /&gt;
* Log time it takes to compile SUT&lt;br /&gt;
* Log time it takes to compile test suite&lt;br /&gt;
* Log original exception (make it part of test routine's state)&lt;br /&gt;
* Second Chance re-run to find true prestate (with Jocelyn)&lt;br /&gt;
* Allow for test case extraction of passing routine invocations (with Jocelyn)&lt;br /&gt;
* Revive system level test suite&lt;br /&gt;
&lt;br /&gt;
* Rebuilding manual test suite through extraction and synthesizing&lt;br /&gt;
* Find performance bottleneck of test case extraction and propose extraction method for second chance&lt;br /&gt;
&lt;br /&gt;
====Bugs/Things to look at====&lt;br /&gt;
&lt;br /&gt;
* For big projects (like ES itself) background compilation of the interpreter leads to completely unresponsive ES&lt;br /&gt;
* Crash upon closing of EiffelStudio (feature call on void target in breakpoint tool)&lt;br /&gt;
* CDD Output Tool window display is not saved (on initial start after installation it is there. But after closing and reopening it is gone)&lt;br /&gt;
&lt;br /&gt;
===Manu===&lt;br /&gt;
* Install CDD in student labs (Manu)&lt;br /&gt;
* Devise questionnaires&lt;br /&gt;
** Initial (due next meeting after Manu's vacation)&lt;br /&gt;
** Midterm&lt;br /&gt;
** Final&lt;br /&gt;
* Analyze questionnaires&lt;br /&gt;
* Rework example profiles&lt;br /&gt;
* Assis will use CDD to get a feel for it and create a test suite for the students to start with&lt;br /&gt;
&lt;br /&gt;
=== Bernd ===&lt;br /&gt;
* Define Project for SoftEng&lt;br /&gt;
** Find test suite for us to test students code&lt;br /&gt;
** Find project with pure functional part&lt;br /&gt;
&lt;br /&gt;
===Unassigned===&lt;br /&gt;
&lt;br /&gt;
* Only execute unresolved test cases once. Disable them afterwards. (Needs discussion)&lt;br /&gt;
* Cache debug values when extracting several test cases.&lt;br /&gt;
* Fix config files for system level tests (remove cdd tag)&lt;br /&gt;
&lt;br /&gt;
====Beta Tester Feedback====&lt;br /&gt;
(Please put your name so we can get back to you in case of questions)&lt;br /&gt;
* (Jocelyn) It should be possible to set the location of the cdd_tests directory (what if location of .ecf file is not readable?) [Jocelyn]&lt;br /&gt;
** home directory? application_data directory?  [Jocelyn]&lt;br /&gt;
* (Jocelyn) There should be UI support for deletion of Test Case [Jocelyn]&lt;br /&gt;
* (Jocelyn) [BUG] the manual test case creation dialog should check if class with chosen name is already in the system [Jocelyn]&lt;br /&gt;
* (Jocelyn) It would be nice if there was a way to configure the timeout for the interpreter [Jocelyn]&lt;br /&gt;
&lt;br /&gt;
== Questionnaires ==&lt;br /&gt;
&lt;br /&gt;
* Use ELBA&lt;br /&gt;
&lt;br /&gt;
== Software Engineering Project ==&lt;br /&gt;
&lt;br /&gt;
* Task 1: Implement VCard API&lt;br /&gt;
* Task 2: Implement Mime API&lt;br /&gt;
* Task 3: Write test cases to reveal faults in foreign VCard implementations&lt;br /&gt;
* Task 4: Write test cases to reveal faults in foreign Mime implementations&lt;br /&gt;
&lt;br /&gt;
* Group A:&lt;br /&gt;
** Task 1, Manual Tests&lt;br /&gt;
** Task 2, Extracted Tests&lt;br /&gt;
** Task 3, Manual Tests&lt;br /&gt;
** Task 4, Extracted Tests&lt;br /&gt;
* Group B:&lt;br /&gt;
** Task 1, Extracted Tests&lt;br /&gt;
** Task 2, Manual Tests&lt;br /&gt;
** Task 3, Extracted Tests&lt;br /&gt;
** Task 4, Manual Tests&lt;br /&gt;
&lt;br /&gt;
* One large project, but divided into testable subcomponents&lt;br /&gt;
* Students required to write test cases&lt;br /&gt;
* Fixed API to make things uniformly testable&lt;br /&gt;
* Public/Secret test cases (similar to Zeller course)&lt;br /&gt;
* Competitions:&lt;br /&gt;
** Group A test cases applied to Group A project&lt;br /&gt;
** Group A test cases applied to Group B project&lt;br /&gt;
&lt;br /&gt;
* Idea how to cancel out bias while allowing fair grading:&lt;br /&gt;
** Subtasks 1 and 2, Students divided into groups A and B&lt;br /&gt;
** First both groups do 1, A is allowed to use tool, B not&lt;br /&gt;
** Then both groups do 2, B is allowed to use tool, A not&lt;br /&gt;
** Bias cancellation:&lt;br /&gt;
*** Project complexity&lt;br /&gt;
*** Experience of students&lt;br /&gt;
*** Experience gained in first subtask, when developing second&lt;br /&gt;
*** Risk: One task might be better suited for the tool than the other&lt;br /&gt;
&lt;br /&gt;
== Data to harvest ==&lt;br /&gt;
* IDE Time with CDD(extraction) enabled / IDE Time with CDD(extraction) disabled&lt;br /&gt;
* Test Case Source (just final version, or all versions?)&lt;br /&gt;
** Use Profiler to get coverage approximation&lt;br /&gt;
* TC Meta Data (with timestamps -&amp;gt; Evolution of Test Case)&lt;br /&gt;
** TC Added/Removed/Changed&lt;br /&gt;
** TC Outcome (transitions from FAIL/PASS/UNRESOLVED[bad_communication &amp;lt;-&amp;gt; does_not_compile &amp;lt;-&amp;gt; bad_input])&lt;br /&gt;
** TC execution time&lt;br /&gt;
** Modifications to a test case (compiler needs to recompile)&lt;br /&gt;
* Development Session Data&lt;br /&gt;
** IDE Startup&lt;br /&gt;
** File save&lt;br /&gt;
* Questionnaires&lt;br /&gt;
** Initial&lt;br /&gt;
** Final&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Logging ==&lt;br /&gt;
&lt;br /&gt;
* &amp;quot;Meta&amp;quot; log entries&lt;br /&gt;
** Project opened (easy)&lt;br /&gt;
** CDD enable/disable (easy)&lt;br /&gt;
** general EiffelStudio action log entries for Developer Behaviour (harder... what do we need??)&lt;br /&gt;
&lt;br /&gt;
* CDD actions log entries&lt;br /&gt;
** Compilation of interpreter (start, end, duration)&lt;br /&gt;
** Execution of test cases (start, end, do we need the individual duration of each test case that gets executed?)&lt;br /&gt;
** Extraction of new test case (extraction time)&lt;br /&gt;
&lt;br /&gt;
* Test Suite Status&lt;br /&gt;
** Test suite: after each refresh log list of all test cases (class level, needed because it's not possible to know when manual test cases get added...)&lt;br /&gt;
** Test class: (do we need info on this level)&lt;br /&gt;
** Test routine: status (basically as you see it in the tool)&lt;br /&gt;
&lt;br /&gt;
==Experiment Hypotheses==&lt;br /&gt;
&lt;br /&gt;
===How reliably can we extract test cases that reproduce the original failure?===&lt;br /&gt;
* Log original exception and exception received from first test execution&lt;br /&gt;
&lt;br /&gt;
===Are the extracted tests useful for debugging?===&lt;br /&gt;
* Ask developers, using CDD&lt;br /&gt;
&lt;br /&gt;
===What is the (time and memory) overhead of enabling extraction?===&lt;br /&gt;
&lt;br /&gt;
===What is the size of the extracted test cases?===&lt;br /&gt;
&lt;br /&gt;
===Are we able to reproduce bugs from industry?===&lt;br /&gt;
Take bug repository (say EiffelStudio, Gobo, eposix, ...). Use buggy version, try to reproduce bug, see if extracted test case is good.&lt;br /&gt;
&lt;br /&gt;
===Does it make a difference in the quality of the code, whether one tests manually or extracts them?===&lt;br /&gt;
* Compare projects using extracted tests and manual tests to ref test suite&lt;br /&gt;
&lt;br /&gt;
===Do contracts replace traditional testing oracles?===&lt;br /&gt;
* Original API without contracts&lt;br /&gt;
* Run failing test cases (the ones we get from second part) with reference API with contracts&lt;br /&gt;
* How many times does the contract replace the testing oracle?&lt;/div&gt;</summary>
		<author><name>Mogh</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=CddMeeting_05_02_2008&amp;diff=10580</id>
		<title>CddMeeting 05 02 2008</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=CddMeeting_05_02_2008&amp;diff=10580"/>
				<updated>2008-02-07T09:14:03Z</updated>
		
		<summary type="html">&lt;p&gt;Mogh: /* Beta Tester Feedback */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:CDD]]&lt;br /&gt;
=CDD Meeting, Wednesday, 06.02.2008, 10:00=&lt;br /&gt;
&lt;br /&gt;
== Next Meeting ==&lt;br /&gt;
* Tuesday, 12.02.2008, 10:00&lt;br /&gt;
&lt;br /&gt;
== Tasks ==&lt;br /&gt;
&lt;br /&gt;
===Andreas===&lt;br /&gt;
* Formulate Experiment Hypothesis (Andreas)&lt;br /&gt;
* Fix AutoTest for courses&lt;br /&gt;
** New release&lt;br /&gt;
* Write documentation and video tutorials (together with final release)&lt;br /&gt;
* Finish tuple_002 test case&lt;br /&gt;
* Retest if test cases with errors are properly ignored (after 6.1 port)&lt;br /&gt;
* Add timeout judgement&lt;br /&gt;
* Timeout -&amp;gt; 5 sec&lt;br /&gt;
&lt;br /&gt;
===Arno===&lt;br /&gt;
* When test class gets removed manually, update test suite&lt;br /&gt;
* Build releasable delivery for Linux (after each Beta I guess...)&lt;br /&gt;
* Display ignored test class compilation errors (looks like we will have this for free in 6.1)&lt;br /&gt;
* Red bg for failing test cases in view &lt;br /&gt;
* When debugging extracted test case, set first breakpoint in &amp;quot;covers.&amp;quot; feature&lt;br /&gt;
* Extraction for inline agents not currently working (at least not always)&lt;br /&gt;
** Create inline agent test case&lt;br /&gt;
** Fix extraction for inline agents&lt;br /&gt;
* Make general encoding/decoding routines for special feature names&lt;br /&gt;
** e.g. infix &amp;quot;+&amp;quot; (ES) &amp;lt;=&amp;gt; infix_plus (CDD)&lt;br /&gt;
====Bug Fixing====&lt;br /&gt;
* Result type (like Current) produces syntax error in new test class&lt;br /&gt;
* Check why EiffelStudio quits after debugging a test routine and ignoring violations&lt;br /&gt;
&lt;br /&gt;
===Ilinca===&lt;br /&gt;
&lt;br /&gt;
* Integrate variable declarations into AutoTest trunk (by 8.2.2008)&lt;br /&gt;
&lt;br /&gt;
===Stefan===&lt;br /&gt;
* [RECURRENT] Build releasable delivery on Windows&lt;br /&gt;
* Distinguish extracted, synthesized and manual test cases in logs&lt;br /&gt;
* Log TS Snapshot after compilation&lt;br /&gt;
* Log TS Snapshot after testing&lt;br /&gt;
* Log when ES starts up and shuts down&lt;br /&gt;
* Log time it takes to extract test case&lt;br /&gt;
* Log time it takes to compile SUT&lt;br /&gt;
* Log time it takes to compile test suite&lt;br /&gt;
* Log original exception (make it part of test routine's state)&lt;br /&gt;
* Second Chance re-run to find true prestate (with Jocelyn)&lt;br /&gt;
* Allow for test case extraction of passing routine invocations (with Jocelyn)&lt;br /&gt;
* Revive system level test suite&lt;br /&gt;
&lt;br /&gt;
* Rebuilding manual test suite through extraction and synthesizing&lt;br /&gt;
* Find performance bottleneck of test case extraction and propose extraction method for second chance&lt;br /&gt;
&lt;br /&gt;
====Bugs/Things to look at====&lt;br /&gt;
&lt;br /&gt;
* For big projects (like ES itself) background compilation of the interpreter leads to completely unresponsive ES&lt;br /&gt;
* Crash upon closing of EiffelStudio (feature call on void target in breakpoint tool)&lt;br /&gt;
&lt;br /&gt;
===Manu===&lt;br /&gt;
* Install CDD in student labs (Manu)&lt;br /&gt;
* Devise questionnaires&lt;br /&gt;
** Initial (due next meeting after Manu's vacation)&lt;br /&gt;
** Midterm&lt;br /&gt;
** Final&lt;br /&gt;
* Analyze questionnaires&lt;br /&gt;
* Rework example profiles&lt;br /&gt;
* Assis will use CDD to get a feel for it and create a test suite for the students to start with&lt;br /&gt;
&lt;br /&gt;
=== Bernd ===&lt;br /&gt;
* Define Project for SoftEng&lt;br /&gt;
** Find test suite for us to test students code&lt;br /&gt;
** Find project with pure functional part&lt;br /&gt;
&lt;br /&gt;
===Unassigned===&lt;br /&gt;
&lt;br /&gt;
* Only execute unresolved test cases once. Disable them afterwards. (Needs discussion)&lt;br /&gt;
* Cache debug values when extracting several test cases.&lt;br /&gt;
* Fix config files for system level tests (remove cdd tag)&lt;br /&gt;
&lt;br /&gt;
====Beta Tester Feedback====&lt;br /&gt;
(Please put your name so we can get back to you in case of questions)&lt;br /&gt;
* (Jocelyn) It should be possible to set the location of the cdd_tests directory (what if location of .ecf file is not readable?) [Jocelyn]&lt;br /&gt;
** home directory? application_data directory?  [Jocelyn]&lt;br /&gt;
* (Jocelyn) There should be UI support for deletion of Test Case [Jocelyn]&lt;br /&gt;
* (Jocelyn) [BUG] the manual test case creation dialog should check if class with chosen name is already in the system [Jocelyn]&lt;br /&gt;
* (Jocelyn) It would be nice if there was a way to configure the timeout for the interpreter [Jocelyn]&lt;br /&gt;
&lt;br /&gt;
== Questionnaires ==&lt;br /&gt;
&lt;br /&gt;
* Use ELBA&lt;br /&gt;
&lt;br /&gt;
== Software Engineering Project ==&lt;br /&gt;
&lt;br /&gt;
* Task 1: Implement VCard API&lt;br /&gt;
* Task 2: Implement Mime API&lt;br /&gt;
* Task 3: Write test cases to reveal faults in foreign VCard implementations&lt;br /&gt;
* Task 4: Write test cases to reveal faults in foreign Mime implementations&lt;br /&gt;
&lt;br /&gt;
* Group A:&lt;br /&gt;
** Task 1, Manual Tests&lt;br /&gt;
** Task 2, Extracted Tests&lt;br /&gt;
** Task 3, Manual Tests&lt;br /&gt;
** Task 4, Extracted Tests&lt;br /&gt;
* Group B:&lt;br /&gt;
** Task 1, Extracted Tests&lt;br /&gt;
** Task 2, Manual Tests&lt;br /&gt;
** Task 3, Extracted Tests&lt;br /&gt;
** Task 4, Manual Tests&lt;br /&gt;
&lt;br /&gt;
* One large project, but divided into testable subcomponents&lt;br /&gt;
* Students required to write test cases&lt;br /&gt;
* Fixed API to make things uniformly testable&lt;br /&gt;
* Public/Secret test cases (similar to Zeller course)&lt;br /&gt;
* Competitions:&lt;br /&gt;
** Group A test cases applied to Group A project&lt;br /&gt;
** Group A test cases applied to Group B project&lt;br /&gt;
&lt;br /&gt;
* Idea how to cancel out bias while allowing fair grading:&lt;br /&gt;
** Subtasks 1 and 2, Students divided into groups A and B&lt;br /&gt;
** First both groups do 1, A is allowed to use tool, B not&lt;br /&gt;
** Then both groups do 2, B is allowed to use tool, A not&lt;br /&gt;
** Bias cancellation:&lt;br /&gt;
*** Project complexity&lt;br /&gt;
*** Experience of students&lt;br /&gt;
*** Experience gained in first subtask, when developing second&lt;br /&gt;
*** Risk: One task might be better suited for the tool than the other&lt;br /&gt;
&lt;br /&gt;
== Data to harvest ==&lt;br /&gt;
* IDE Time with CDD(extraction) enabled / IDE Time with CDD(extraction) disabled&lt;br /&gt;
* Test Case Source (just final version, or all versions?)&lt;br /&gt;
** Use Profiler to get coverage approximation&lt;br /&gt;
* TC Meta Data (with timestamps -&amp;gt; Evolution of Test Case)&lt;br /&gt;
** TC Added/Removed/Changed&lt;br /&gt;
** TC Outcome (transitions from FAIL/PASS/UNRESOLVED[bad_communication &amp;lt;-&amp;gt; does_not_compile &amp;lt;-&amp;gt; bad_input])&lt;br /&gt;
** TC execution time&lt;br /&gt;
** Modifications to a test case (compiler needs to recompile)&lt;br /&gt;
* Development Session Data&lt;br /&gt;
** IDE Startup&lt;br /&gt;
** File save&lt;br /&gt;
* Questionnaires&lt;br /&gt;
** Initial&lt;br /&gt;
** Final&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Logging ==&lt;br /&gt;
&lt;br /&gt;
* &amp;quot;Meta&amp;quot; log entries&lt;br /&gt;
** Project opened (easy)&lt;br /&gt;
** CDD enable/disable (easy)&lt;br /&gt;
** general EiffelStudio action log entries for Developer Behaviour (harder... what do we need??)&lt;br /&gt;
&lt;br /&gt;
* CDD actions log entries&lt;br /&gt;
** Compilation of interpreter (start, end, duration)&lt;br /&gt;
** Execution of test cases (start, end, do we need the individual duration of each test case that gets executed?)&lt;br /&gt;
** Extraction of new test case (extraction time)&lt;br /&gt;
&lt;br /&gt;
* Test Suite Status&lt;br /&gt;
** Test suite: after each refresh log list of all test cases (class level, needed because it's not possible to know when manual test cases get added...)&lt;br /&gt;
** Test class: (do we need info on this level)&lt;br /&gt;
** Test routine: status (basically as you see it in the tool)&lt;br /&gt;
&lt;br /&gt;
==Experiment Hypotheses==&lt;br /&gt;
&lt;br /&gt;
===How reliably can we extract test cases that reproduce the original failure?===&lt;br /&gt;
* Log original exception and exception received from first test execution&lt;br /&gt;
&lt;br /&gt;
===Are the extracted tests useful for debugging?===&lt;br /&gt;
* Ask developers, using CDD&lt;br /&gt;
&lt;br /&gt;
===What is the (time and memory) overhead of enabling extraction?===&lt;br /&gt;
&lt;br /&gt;
===What is the size of the extracted test cases?===&lt;br /&gt;
&lt;br /&gt;
===Are we able to reproduce bugs from industry?===&lt;br /&gt;
Take bug repository (say EiffelStudio, Gobo, eposix, ...). Use buggy version, try to reproduce bug, see if extracted test case is good.&lt;br /&gt;
&lt;br /&gt;
===Does it make a difference in the quality of the code, whether one tests manually or extracts them?===&lt;br /&gt;
* Compare projects using extracted tests and manual tests to ref test suite&lt;br /&gt;
&lt;br /&gt;
===Do contracts replace traditional testing oracles?===&lt;br /&gt;
* Original API without contracts&lt;br /&gt;
* Run failing test cases (the ones we get from second part) with reference API with contracts&lt;br /&gt;
* How many times does the contract replace the testing oracle?&lt;/div&gt;</summary>
		<author><name>Mogh</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=CddMeeting_31_01_2008&amp;diff=10564</id>
		<title>CddMeeting 31 01 2008</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=CddMeeting_31_01_2008&amp;diff=10564"/>
				<updated>2008-02-05T16:01:46Z</updated>
		
		<summary type="html">&lt;p&gt;Mogh: /* Beta Tester Feedback */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:CDD]]&lt;br /&gt;
=CDD Meeting, Tuesday, 31.1.2008, 10:00=&lt;br /&gt;
&lt;br /&gt;
== Next Meeting ==&lt;br /&gt;
* Tuesday, 5.2.2008, 10:00&lt;br /&gt;
&lt;br /&gt;
== Tasks ==&lt;br /&gt;
&lt;br /&gt;
===Andreas===&lt;br /&gt;
* Formulate Experiment Hypothesis (Andreas)&lt;br /&gt;
* Fix AutoTest for courses&lt;br /&gt;
** New release&lt;br /&gt;
* Write documentation and video tutorials (together with final release)&lt;br /&gt;
* [done] Commit dangling patch from 6.0 to 6.1&lt;br /&gt;
* [done] Make it so that tester target never has extraction or execution enabled&lt;br /&gt;
** remove hack from CDD_MANAGER.schedule_testing_restart&lt;br /&gt;
* [done] Make CDD windows appear by default&lt;br /&gt;
* Finish tuple_002 test case&lt;br /&gt;
* Retest if test cases with errors are properly ignored (after 6.1 port)&lt;br /&gt;
&lt;br /&gt;
===Arno===&lt;br /&gt;
* When test class gets removed manually, update test suite&lt;br /&gt;
* Clean up test case in interpreter after each execution (through garbage collection?)&lt;br /&gt;
* Build releasable delivery for Linux (after each Beta I guess...)&lt;br /&gt;
* Display ignored test class compilation errors (looks like we will have this for free in 6.1)&lt;br /&gt;
* Make sure CDD Tools are visible by default (what layout would you prefer?)&lt;br /&gt;
** Main tool shares tabs with clusters/features tool, output tool after C output tool&lt;br /&gt;
* Red bg for failing test cases in view &lt;br /&gt;
* Write new simple &amp;quot;New Manual Test Case&amp;quot; dialog&lt;br /&gt;
* Test case for (user defined) expanded types&lt;br /&gt;
* Test case containing feature names with underscores and &amp;quot;like Current&amp;quot;&lt;br /&gt;
* When debugging extracted test case, set first breakpoint in &amp;quot;covers.&amp;quot; feature&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Bug Fixing====&lt;br /&gt;
* Result type (like Current) produces syntax error in new test class&lt;br /&gt;
* Fix interpreter hang after runtime crash&lt;br /&gt;
* Check why EiffelStudio quits after debugging a test routine and ignoring violations&lt;br /&gt;
* Check if interpreter compilation errors are propagated correctly (seems to start interpreter even though compilation has failed)&lt;br /&gt;
&lt;br /&gt;
===Ilinca===&lt;br /&gt;
&lt;br /&gt;
* Integrate variable declarations into AutoTest trunk (by 8.2.2008)&lt;br /&gt;
&lt;br /&gt;
===Stefan===&lt;br /&gt;
* [DONE] Unique id to tag test cases with, to be used in logs, so that test logs are resilient to test class renamings&lt;br /&gt;
* [DONE, except probably for bad memory corruption bugs which didn't occur anymore since agents are ignored] Make popup on interpreter crash go away (win32 only)&lt;br /&gt;
&lt;br /&gt;
* [RECURRENT] Build releasable delivery on Windows&lt;br /&gt;
&lt;br /&gt;
* Logging&lt;br /&gt;
** What data to log?&lt;br /&gt;
** Implement storing&lt;br /&gt;
** Define how students should submit logs&lt;br /&gt;
* Data Gathering&lt;br /&gt;
** Define what data to gather&lt;br /&gt;
** Define how to process gathered data&lt;br /&gt;
* Second Chance re-run to find true prestate (with Jocelyn)&lt;br /&gt;
* Allow for test case extraction of passing routine invocations (with Jocelyn)&lt;br /&gt;
&lt;br /&gt;
* Rebuilding manual test suite through extraction and synthesizing&lt;br /&gt;
* Find performance bottleneck of test case extraction and propose extraction method for second chance&lt;br /&gt;
&lt;br /&gt;
====Bugs/Things to look at====&lt;br /&gt;
&lt;br /&gt;
* For big projects (like ES itself) background compilation of the interpreter leads to completely unresponsive ES&lt;br /&gt;
* Is it still necessary to ever call the routine update actions with argument &amp;quot;void&amp;quot;?&lt;br /&gt;
* Crash upon closing of EiffelStudio (feature call on void target in breakpoint tool)&lt;br /&gt;
&lt;br /&gt;
===Manu===&lt;br /&gt;
* Define Project for SoftEng (due by next meeting)&lt;br /&gt;
** Find System level test suite for us to test students code&lt;br /&gt;
** Find project with pure functional part&lt;br /&gt;
* Install CDD in student labs (Manu)&lt;br /&gt;
* Devise questionnaires&lt;br /&gt;
** Initial (due next meeting after Manu's vacation)&lt;br /&gt;
** Midterm&lt;br /&gt;
** Final&lt;br /&gt;
* Analyze questionnaires&lt;br /&gt;
* Rework example profiles&lt;br /&gt;
* Assis will use CDD to get a feel for it and create a test suite for the students to start with&lt;br /&gt;
&lt;br /&gt;
===Unassigned===&lt;br /&gt;
&lt;br /&gt;
* Cache debug values when extracting several test cases.&lt;br /&gt;
* Enable execution and extraction by default for new projects.&lt;br /&gt;
* Make CDD Window and CDD Log Window visible by default&lt;br /&gt;
* &amp;quot;Debug selected test routine&amp;quot; should be grayed out if no test case is currently selected&lt;br /&gt;
* Testing V2 Application should not interrupt flow&lt;br /&gt;
* Retest if test cases with errors are properly ignored (after 6.1 port)&lt;br /&gt;
* Extraction for inline agents not currently working (at least not always)&lt;br /&gt;
** Create inline agent test case&lt;br /&gt;
** Fix extraction for inline agents&lt;br /&gt;
* Revive system level test suite&lt;br /&gt;
* There is a performance problem when compiling a test suite for 'ec' itself. There are indications that the parent ec process consumes too many CPU cycles. Maybe the CDD output refresh is too eager. Find out what the problem is and fix it.&lt;br /&gt;
&lt;br /&gt;
====Beta Tester Feedback====&lt;br /&gt;
* It should be possible to set the location of the cdd_tests directory (what if location of .ecf file is not readable?)&lt;br /&gt;
** home directory? application_data directory? &lt;br /&gt;
* There should be UI support for deletion of Test Case&lt;br /&gt;
* [BUG] the manual test case creation dialog should check if class with chosen name is already in the system&lt;br /&gt;
* It would be nice if there was a way to configure the timeout for the interpreter&lt;br /&gt;
&lt;br /&gt;
== Questionnaires ==&lt;br /&gt;
&lt;br /&gt;
* Use ELBA&lt;br /&gt;
&lt;br /&gt;
== Software Engineering Project ==&lt;br /&gt;
&lt;br /&gt;
* One large project, but divided into testable subcomponents&lt;br /&gt;
* Students required to write test cases&lt;br /&gt;
* Fixed API to make things uniformly testable&lt;br /&gt;
* Public/Secret test cases (similar to Zeller course)&lt;br /&gt;
* Competitions:&lt;br /&gt;
** Group A test cases applied to Group A project&lt;br /&gt;
** Group A test cases applied to Group B project&lt;br /&gt;
&lt;br /&gt;
* Idea how to cancel out bias while allowing fair grading:&lt;br /&gt;
** Subtasks 1 and 2, Students divided into groups A and B&lt;br /&gt;
** First both groups do 1, A is allowed to use tool, B not&lt;br /&gt;
** Then both groups do 2, B is allowed to use tool, A not&lt;br /&gt;
** Bias cancellation:&lt;br /&gt;
*** Project complexity&lt;br /&gt;
*** Experience of students&lt;br /&gt;
*** Experience gained in first subtask, when developing second&lt;br /&gt;
*** Risk: One task might be better suited for the tool than the other&lt;br /&gt;
&lt;br /&gt;
== Data to harvest ==&lt;br /&gt;
* IDE Time with CDD(extraction) enabled / IDE Time with CDD(extraction) disabled&lt;br /&gt;
* Test Case Source (just final version, or all versions?)&lt;br /&gt;
** Use Profiler to get coverage approximation&lt;br /&gt;
* TC Meta Data (with timestamps -&amp;gt; Evolution of Test Case)&lt;br /&gt;
** TC Added/Removed/Changed&lt;br /&gt;
** TC Outcome (transitions from FAIL/PASS/UNRESOLVED[bad_communication &amp;lt;-&amp;gt; does_not_compile &amp;lt;-&amp;gt; bad_input])&lt;br /&gt;
** TC execution time&lt;br /&gt;
** Modifications to a test case (compiler needs to recompile)&lt;br /&gt;
* Development Session Data&lt;br /&gt;
** IDE Startup&lt;br /&gt;
** File save&lt;br /&gt;
* Questionnaires&lt;br /&gt;
** Initial&lt;br /&gt;
** Final&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Logging ==&lt;br /&gt;
&lt;br /&gt;
* &amp;quot;Meta&amp;quot; log entries&lt;br /&gt;
** Project opened (easy)&lt;br /&gt;
** CDD enable/disable (easy)&lt;br /&gt;
** general EiffelStudio action log entries for Developer Behaviour (harder... what do we need??)&lt;br /&gt;
&lt;br /&gt;
* CDD actions log entries&lt;br /&gt;
** Compilation of interpreter (start, end, duration)&lt;br /&gt;
** Execution of test cases (start, end, do we need individual durations for each test case that gets executed?)&lt;br /&gt;
** Extraction of new test case (extraction time)&lt;br /&gt;
&lt;br /&gt;
* Test Suite Status&lt;br /&gt;
** Test suite: after each refresh log list of all test cases (class level, needed because it's not possible to know when manual test cases get added...)&lt;br /&gt;
** Test class: (do we need info on this level)&lt;br /&gt;
** Test routine: status (basically as you see it in the tool)&lt;br /&gt;
&lt;br /&gt;
==Experiment Hypotheses==&lt;br /&gt;
&lt;br /&gt;
==Do Contracts improve Tests?==&lt;br /&gt;
&lt;br /&gt;
* Is there a correlation between Tests quantity or quality and the quantity or quality of contracts?&lt;br /&gt;
&lt;br /&gt;
== Correlation between failure/fault type and test type? ==&lt;br /&gt;
&lt;br /&gt;
* Do certain kinds of tests find certain kinds of failures/faults?&lt;br /&gt;
&lt;br /&gt;
===Use of CDD increases development productivity===&lt;br /&gt;
* Did the use of testing decrease development time?&lt;br /&gt;
&lt;br /&gt;
* Measures:&lt;br /&gt;
** Number of compilations&lt;br /&gt;
** Number of saves&lt;br /&gt;
** Number of revisions&lt;br /&gt;
** IDE time&lt;br /&gt;
** Asking the students&lt;br /&gt;
&lt;br /&gt;
Emphasis on questionnaire results. Correlation with logs only if it makes sense.&lt;br /&gt;
&lt;br /&gt;
===Use of CDD increases code correctness===&lt;br /&gt;
* Is there a relation between code correctness of project (vs. some system level test suite) and test activity?&lt;br /&gt;
&lt;br /&gt;
* Measures:&lt;br /&gt;
** number of tests&lt;br /&gt;
** number of times tests were run&lt;br /&gt;
** Number of pass/fail, fail/pass transitions, (also consider unresolved/* transitions ?)&lt;br /&gt;
** Secret test suite&lt;br /&gt;
&lt;br /&gt;
===Developer Profile: Is there a correlation between Developer Profile and the way they use testing tools===&lt;br /&gt;
* How did students use the testing tools?&lt;br /&gt;
* Are there clusters of similar use?&lt;br /&gt;
* What is characteristic of these clusters?&lt;br /&gt;
* Measures:&lt;br /&gt;
** Asking students before and after&lt;br /&gt;
** Are there projects where tests initially always fail or always pass?&lt;br /&gt;
** How often do they test?&lt;br /&gt;
** How correct is their project?&lt;br /&gt;
&lt;br /&gt;
Midterm questionnaire will be used to phrase questions for final questionnaire.&lt;br /&gt;
====Example profiles====&lt;br /&gt;
* Waldundwiesen Hacker&lt;br /&gt;
** No explicit structure. Does whatever seems appropriate at the time. No QA plan.&lt;br /&gt;
* Agile&lt;br /&gt;
** Processes interleave. Conscious of QA. Maybe even Test First or TDD.&lt;br /&gt;
* Waterfall inspired&lt;br /&gt;
** Explicit process model. Phases don't interleave.&lt;br /&gt;
* ?&lt;br /&gt;
&lt;br /&gt;
===How do extracted, synthesized and manually written test cases compare?===&lt;br /&gt;
* Which tests are the most useful to students?&lt;br /&gt;
* How many tests are there in each category?&lt;br /&gt;
* What's the test suite quality of each category?&lt;br /&gt;
* Were some excluded from testing more often than others?&lt;br /&gt;
* How many red/green and green/red transitions are there in each category?&lt;br /&gt;
* Which had compile-time errors most often that did not get fixed?&lt;br /&gt;
* Measures:&lt;br /&gt;
** LOC&lt;br /&gt;
** Number of tests&lt;br /&gt;
** Number of executions&lt;br /&gt;
** Outcome transitions&lt;/div&gt;</summary>
		<author><name>Mogh</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=CddMeeting_31_01_2008&amp;diff=10563</id>
		<title>CddMeeting 31 01 2008</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=CddMeeting_31_01_2008&amp;diff=10563"/>
				<updated>2008-02-05T14:17:20Z</updated>
		
		<summary type="html">&lt;p&gt;Mogh: /* Beta Tester Feedback */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:CDD]]&lt;br /&gt;
=CDD Meeting, Tuesday, 31.1.2008, 10:00=&lt;br /&gt;
&lt;br /&gt;
== Next Meeting ==&lt;br /&gt;
* Tuesday, 5.2.2008, 10:00&lt;br /&gt;
&lt;br /&gt;
== Tasks ==&lt;br /&gt;
&lt;br /&gt;
===Andreas===&lt;br /&gt;
* Formulate Experiment Hypotheses (Andreas)&lt;br /&gt;
* Fix AutoTest for courses&lt;br /&gt;
** New release&lt;br /&gt;
* Write documentation and video tutorials (together with final release)&lt;br /&gt;
* [done] Commit dangling patch from 6.0 to 6.1&lt;br /&gt;
* [done] Make it so that tester target never has extraction or execution enabled&lt;br /&gt;
** remove hack from CDD_MANAGER.schedule_testing_restart&lt;br /&gt;
* [done] Make CDD Windows appear by default&lt;br /&gt;
* Finish tuple_002 test case&lt;br /&gt;
* Retest if test cases with errors are properly ignored (after 6.1 port)&lt;br /&gt;
&lt;br /&gt;
===Arno===&lt;br /&gt;
* When test class gets removed manually, update test suite&lt;br /&gt;
* Clean up test case in interpreter after each execution (through garbage collection?)&lt;br /&gt;
* Build releasable delivery for Linux (after each Beta I guess...)&lt;br /&gt;
* Display ignored test class compilation errors (looks like we will have this for free in 6.1)&lt;br /&gt;
* Make sure CDD Tools are visible by default (what layout would you prefer?)&lt;br /&gt;
** Main tool shares tabs with clusters/features tool, output tool after C output tool&lt;br /&gt;
* Red bg for failing test cases in view &lt;br /&gt;
* Write new simple &amp;quot;New Manual Test Case&amp;quot; dialog&lt;br /&gt;
* Test case for (user defined) expanded types&lt;br /&gt;
* Test case containing feature names with underscores and &amp;quot;like Current&amp;quot;&lt;br /&gt;
* When debugging extracted test case, set first breakpoint in &amp;quot;covers.&amp;quot; feature&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Bug Fixing====&lt;br /&gt;
* Result type (like Current) produces syntax error in new test class&lt;br /&gt;
* Fix interpreter hang after runtime crash&lt;br /&gt;
* Check why EiffelStudio quits after debugging a test routine and ignoring violations&lt;br /&gt;
* Check if interpreter compilation errors are propagated correctly (seems to start interpreter even though compilation has failed)&lt;br /&gt;
&lt;br /&gt;
===Ilinca===&lt;br /&gt;
&lt;br /&gt;
* Integrate variable declarations into AutoTest trunk (by 8.2.2008)&lt;br /&gt;
&lt;br /&gt;
===Stefan===&lt;br /&gt;
* [DONE] Unique id to tag test cases with. To be used in logs, so test logs are resilient to test class renamings&lt;br /&gt;
* [DONE, except probably for bad memory corruption bugs which didn't occur anymore since agents are ignored] Make popup on interpreter crash go away (win32 only)&lt;br /&gt;
&lt;br /&gt;
* [RECURRENT] Build releasable delivery on Windows&lt;br /&gt;
&lt;br /&gt;
* Logging&lt;br /&gt;
** What data to log?&lt;br /&gt;
** Implement storing&lt;br /&gt;
** Define how students should submit logs&lt;br /&gt;
* Data Gathering&lt;br /&gt;
** Define what data to gather&lt;br /&gt;
** Define how to process gathered data&lt;br /&gt;
* Second Chance re-run to find true prestate (with Jocelyn)&lt;br /&gt;
* Allow for test case extraction of passing routine invocations (with Jocelyn)&lt;br /&gt;
&lt;br /&gt;
* Rebuilding manual test suite through extraction and synthesizing&lt;br /&gt;
* Find performance bottleneck of test case extraction and propose extraction method for second chance&lt;br /&gt;
&lt;br /&gt;
====Bugs/Things to look at====&lt;br /&gt;
&lt;br /&gt;
* For big projects (like ES itself) background compilation of the interpreter leads to completely unresponsive ES&lt;br /&gt;
* Is it still necessary to ever call the routine update actions with argument &amp;quot;void&amp;quot;?&lt;br /&gt;
* Crash upon closing of EiffelStudio (feature call on void target in breakpoint tool)&lt;br /&gt;
&lt;br /&gt;
===Manu===&lt;br /&gt;
* Define Project for SoftEng (due by next meeting)&lt;br /&gt;
** Find System level test suite for us to test students code&lt;br /&gt;
** Find project with pure functional part&lt;br /&gt;
* Install CDD in student labs (Manu)&lt;br /&gt;
* Devise questionnaires&lt;br /&gt;
** Initial (due next meeting after Manu's vacation)&lt;br /&gt;
** Midterm&lt;br /&gt;
** Final&lt;br /&gt;
* Analyze questionnaires&lt;br /&gt;
* Rework example profiles&lt;br /&gt;
* Assis will use CDD to get a feel for it and create a test suite for the students to start with&lt;br /&gt;
&lt;br /&gt;
===Unassigned===&lt;br /&gt;
&lt;br /&gt;
* Cache debug values when extracting several test cases.&lt;br /&gt;
* Enable execution and extraction by default for new projects.&lt;br /&gt;
* Make CDD Window and CDD Log Window visible by default&lt;br /&gt;
* &amp;quot;Debug selected test routine&amp;quot; should be grayed out if no test case is currently selected&lt;br /&gt;
* Testing V2 Application should not interrupt flow&lt;br /&gt;
* Retest if test cases with errors are properly ignored (after 6.1 port)&lt;br /&gt;
* Extraction for inline agents not currently working (at least not always)&lt;br /&gt;
** Create inline agent test case&lt;br /&gt;
** Fix extraction for inline agents&lt;br /&gt;
* Revive system level test suite&lt;br /&gt;
* There is a performance problem when compiling a test suite for 'ec' itself. There are indications that the parent ec process consumes too many CPU cycles. Maybe the CDD output refresh is too eager. Find out what the problem is and fix it.&lt;br /&gt;
&lt;br /&gt;
====Beta Tester Feedback====&lt;br /&gt;
* It should be possible to set the location of the cdd_tests directory (what if location of .ecf file is not readable?)&lt;br /&gt;
** home directory? application_data directory? &lt;br /&gt;
* There should be UI support for deletion of Test Case&lt;br /&gt;
* [BUG] the manual test case creation dialog should check if class with chosen name is already in the system&lt;br /&gt;
&lt;br /&gt;
== Questionnaires ==&lt;br /&gt;
&lt;br /&gt;
* Use ELBA&lt;br /&gt;
&lt;br /&gt;
== Software Engineering Project ==&lt;br /&gt;
&lt;br /&gt;
* One large project, but divided into testable subcomponents&lt;br /&gt;
* Students required to write test cases&lt;br /&gt;
* Fixed API to make things uniformly testable&lt;br /&gt;
* Public/Secret test cases (similar to Zeller course)&lt;br /&gt;
* Competitions:&lt;br /&gt;
** Group A test cases applied to Group A project&lt;br /&gt;
** Group A test cases applied to Group B project&lt;br /&gt;
&lt;br /&gt;
* Idea how to cancel out bias while allowing fair grading:&lt;br /&gt;
** Subtasks 1 and 2, Students divided into groups A and B&lt;br /&gt;
** First both groups do 1, A is allowed to use tool, B not&lt;br /&gt;
** Then both groups do 2, B is allowed to use tool, A not&lt;br /&gt;
** Bias cancellation:&lt;br /&gt;
*** Project complexity&lt;br /&gt;
*** Experience of students&lt;br /&gt;
*** Experience gained in first subtask, when developing second&lt;br /&gt;
*** Risk: One task might be better suited for the tool than the other&lt;br /&gt;
&lt;br /&gt;
== Data to harvest ==&lt;br /&gt;
* IDE Time with CDD(extraction) enabled / IDE Time with CDD(extraction) disabled&lt;br /&gt;
* Test Case Source (just final version, or all versions?)&lt;br /&gt;
** Use Profiler to get coverage approximation&lt;br /&gt;
* TC Meta Data (with timestamps -&amp;gt; Evolution of Test Case)&lt;br /&gt;
** TC Added/Removed/Changed&lt;br /&gt;
** TC Outcome (transitions from FAIL/PASS/UNRESOLVED[bad_communication &amp;lt;-&amp;gt; does_not_compile &amp;lt;-&amp;gt; bad_input])&lt;br /&gt;
** TC execution time&lt;br /&gt;
** Modifications to a test case (compiler needs to recompile)&lt;br /&gt;
* Development Session Data&lt;br /&gt;
** IDE Startup&lt;br /&gt;
** File save&lt;br /&gt;
* Questionnaires&lt;br /&gt;
** Initial&lt;br /&gt;
** Final&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Logging ==&lt;br /&gt;
&lt;br /&gt;
* &amp;quot;Meta&amp;quot; log entries&lt;br /&gt;
** Project opened (easy)&lt;br /&gt;
** CDD enable/disable (easy)&lt;br /&gt;
** general EiffelStudio action log entries for Developer Behaviour (harder... what do we need??)&lt;br /&gt;
&lt;br /&gt;
* CDD actions log entries&lt;br /&gt;
** Compilation of interpreter (start, end, duration)&lt;br /&gt;
** Execution of test cases (start, end, do we need individual durations for each test case that gets executed?)&lt;br /&gt;
** Extraction of new test case (extraction time)&lt;br /&gt;
&lt;br /&gt;
* Test Suite Status&lt;br /&gt;
** Test suite: after each refresh log list of all test cases (class level, needed because it's not possible to know when manual test cases get added...)&lt;br /&gt;
** Test class: (do we need info on this level)&lt;br /&gt;
** Test routine: status (basically as you see it in the tool)&lt;br /&gt;
&lt;br /&gt;
==Experiment Hypotheses==&lt;br /&gt;
&lt;br /&gt;
==Do Contracts improve Tests?==&lt;br /&gt;
&lt;br /&gt;
* Is there a correlation between Tests quantity or quality and the quantity or quality of contracts?&lt;br /&gt;
&lt;br /&gt;
== Correlation between failure/fault type and test type? ==&lt;br /&gt;
&lt;br /&gt;
* Do certain kinds of tests find certain kinds of failures/faults?&lt;br /&gt;
&lt;br /&gt;
===Use of CDD increases development productivity===&lt;br /&gt;
* Did the use of testing decrease development time?&lt;br /&gt;
&lt;br /&gt;
* Measures:&lt;br /&gt;
** Number of compilations&lt;br /&gt;
** Number of saves&lt;br /&gt;
** Number of revisions&lt;br /&gt;
** IDE time&lt;br /&gt;
** Asking the students&lt;br /&gt;
&lt;br /&gt;
Emphasis on questionnaire results. Correlation with logs only if it makes sense.&lt;br /&gt;
&lt;br /&gt;
===Use of CDD increases code correctness===&lt;br /&gt;
* Is there a relation between code correctness of project (vs. some system level test suite) and test activity?&lt;br /&gt;
&lt;br /&gt;
* Measures:&lt;br /&gt;
** number of tests&lt;br /&gt;
** number of times tests were run&lt;br /&gt;
** Number of pass/fail, fail/pass transitions, (also consider unresolved/* transitions ?)&lt;br /&gt;
** Secret test suite&lt;br /&gt;
&lt;br /&gt;
===Developer Profile: Is there a correlation between Developer Profile and the way they use testing tools===&lt;br /&gt;
* How did students use the testing tools?&lt;br /&gt;
* Are there clusters of similar use?&lt;br /&gt;
* What is characteristic of these clusters?&lt;br /&gt;
* Measures:&lt;br /&gt;
** Asking students before and after&lt;br /&gt;
** Are there projects where tests initially always fail or always pass?&lt;br /&gt;
** How often do they test?&lt;br /&gt;
** How correct is their project?&lt;br /&gt;
&lt;br /&gt;
Midterm questionnaire will be used to phrase questions for final questionnaire.&lt;br /&gt;
====Example profiles====&lt;br /&gt;
* Waldundwiesen Hacker&lt;br /&gt;
** No explicit structure. Does whatever seems appropriate at the time. No QA plan.&lt;br /&gt;
* Agile&lt;br /&gt;
** Processes interleave. Conscious of QA. Maybe even Test First or TDD.&lt;br /&gt;
* Waterfall inspired&lt;br /&gt;
** Explicit process model. Phases don't interleave.&lt;br /&gt;
* ?&lt;br /&gt;
&lt;br /&gt;
===How do extracted, synthesized and manually written test cases compare?===&lt;br /&gt;
* Which tests are the most useful to students?&lt;br /&gt;
* How many tests are there in each category?&lt;br /&gt;
* What's the test suite quality of each category?&lt;br /&gt;
* Were some excluded from testing more often than others?&lt;br /&gt;
* How many red/green and green/red transitions are there in each category?&lt;br /&gt;
* Which had compile-time errors most often that did not get fixed?&lt;br /&gt;
* Measures:&lt;br /&gt;
** LOC&lt;br /&gt;
** Number of tests&lt;br /&gt;
** Number of executions&lt;br /&gt;
** Outcome transitions&lt;/div&gt;</summary>
		<author><name>Mogh</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=CddMeeting_31_01_2008&amp;diff=10562</id>
		<title>CddMeeting 31 01 2008</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=CddMeeting_31_01_2008&amp;diff=10562"/>
				<updated>2008-02-05T14:12:47Z</updated>
		
		<summary type="html">&lt;p&gt;Mogh: /* Unassigned */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:CDD]]&lt;br /&gt;
=CDD Meeting, Tuesday, 31.1.2008, 10:00=&lt;br /&gt;
&lt;br /&gt;
== Next Meeting ==&lt;br /&gt;
* Tuesday, 5.2.2008, 10:00&lt;br /&gt;
&lt;br /&gt;
== Tasks ==&lt;br /&gt;
&lt;br /&gt;
===Andreas===&lt;br /&gt;
* Formulate Experiment Hypotheses (Andreas)&lt;br /&gt;
* Fix AutoTest for courses&lt;br /&gt;
** New release&lt;br /&gt;
* Write documentation and video tutorials (together with final release)&lt;br /&gt;
* [done] Commit dangling patch from 6.0 to 6.1&lt;br /&gt;
* [done] Make it so that tester target never has extraction or execution enabled&lt;br /&gt;
** remove hack from CDD_MANAGER.schedule_testing_restart&lt;br /&gt;
* [done] Make CDD Windows appear by default&lt;br /&gt;
* Finish tuple_002 test case&lt;br /&gt;
* Retest if test cases with errors are properly ignored (after 6.1 port)&lt;br /&gt;
&lt;br /&gt;
===Arno===&lt;br /&gt;
* When test class gets removed manually, update test suite&lt;br /&gt;
* Clean up test case in interpreter after each execution (through garbage collection?)&lt;br /&gt;
* Build releasable delivery for Linux (after each Beta I guess...)&lt;br /&gt;
* Display ignored test class compilation errors (looks like we will have this for free in 6.1)&lt;br /&gt;
* Make sure CDD Tools are visible by default (what layout would you prefer?)&lt;br /&gt;
** Main tool shares tabs with clusters/features tool, output tool after C output tool&lt;br /&gt;
* Red bg for failing test cases in view &lt;br /&gt;
* Write new simple &amp;quot;New Manual Test Case&amp;quot; dialog&lt;br /&gt;
* Test case for (user defined) expanded types&lt;br /&gt;
* Test case containing feature names with underscores and &amp;quot;like Current&amp;quot;&lt;br /&gt;
* When debugging extracted test case, set first breakpoint in &amp;quot;covers.&amp;quot; feature&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Bug Fixing====&lt;br /&gt;
* Result type (like Current) produces syntax error in new test class&lt;br /&gt;
* Fix interpreter hang after runtime crash&lt;br /&gt;
* Check why EiffelStudio quits after debugging a test routine and ignoring violations&lt;br /&gt;
* Check if interpreter compilation errors are propagated correctly (seems to start interpreter even though compilation has failed)&lt;br /&gt;
&lt;br /&gt;
===Ilinca===&lt;br /&gt;
&lt;br /&gt;
* Integrate variable declarations into AutoTest trunk (by 8.2.2008)&lt;br /&gt;
&lt;br /&gt;
===Stefan===&lt;br /&gt;
* [DONE] Unique id to tag test cases with. To be used in logs, so test logs are resilient to test class renamings&lt;br /&gt;
* [DONE, except probably for bad memory corruption bugs which didn't occur anymore since agents are ignored] Make popup on interpreter crash go away (win32 only)&lt;br /&gt;
&lt;br /&gt;
* [RECURRENT] Build releasable delivery on Windows&lt;br /&gt;
&lt;br /&gt;
* Logging&lt;br /&gt;
** What data to log?&lt;br /&gt;
** Implement storing&lt;br /&gt;
** Define how students should submit logs&lt;br /&gt;
* Data Gathering&lt;br /&gt;
** Define what data to gather&lt;br /&gt;
** Define how to process gathered data&lt;br /&gt;
* Second Chance re-run to find true prestate (with Jocelyn)&lt;br /&gt;
* Allow for test case extraction of passing routine invocations (with Jocelyn)&lt;br /&gt;
&lt;br /&gt;
* Rebuilding manual test suite through extraction and synthesizing&lt;br /&gt;
* Find performance bottleneck of test case extraction and propose extraction method for second chance&lt;br /&gt;
&lt;br /&gt;
====Bugs/Things to look at====&lt;br /&gt;
&lt;br /&gt;
* For big projects (like ES itself) background compilation of the interpreter leads to completely unresponsive ES&lt;br /&gt;
* Is it still necessary to ever call the routine update actions with argument &amp;quot;void&amp;quot;?&lt;br /&gt;
* Crash upon closing of EiffelStudio (feature call on void target in breakpoint tool)&lt;br /&gt;
&lt;br /&gt;
===Manu===&lt;br /&gt;
* Define Project for SoftEng (due by next meeting)&lt;br /&gt;
** Find System level test suite for us to test students code&lt;br /&gt;
** Find project with pure functional part&lt;br /&gt;
* Install CDD in student labs (Manu)&lt;br /&gt;
* Devise questionnaires&lt;br /&gt;
** Initial (due next meeting after Manu's vacation)&lt;br /&gt;
** Midterm&lt;br /&gt;
** Final&lt;br /&gt;
* Analyze questionnaires&lt;br /&gt;
* Rework example profiles&lt;br /&gt;
* Assis will use CDD to get a feel for it and create a test suite for the students to start with&lt;br /&gt;
&lt;br /&gt;
===Unassigned===&lt;br /&gt;
&lt;br /&gt;
* Cache debug values when extracting several test cases.&lt;br /&gt;
* Enable execution and extraction by default for new projects.&lt;br /&gt;
* Make CDD Window and CDD Log Window visible by default&lt;br /&gt;
* &amp;quot;Debug selected test routine&amp;quot; should be grayed out if no test case is currently selected&lt;br /&gt;
* Testing V2 Application should not interrupt flow&lt;br /&gt;
* Retest if test cases with errors are properly ignored (after 6.1 port)&lt;br /&gt;
* Extraction for inline agents not currently working (at least not always)&lt;br /&gt;
** Create inline agent test case&lt;br /&gt;
** Fix extraction for inline agents&lt;br /&gt;
* Revive system level test suite&lt;br /&gt;
* There is a performance problem when compiling a test suite for 'ec' itself. There are indications that the parent ec process consumes too many CPU cycles. Maybe the CDD output refresh is too eager. Find out what the problem is and fix it.&lt;br /&gt;
&lt;br /&gt;
====Beta Tester Feedback====&lt;br /&gt;
* It should be possible to set the location of the cdd_tests directory (what if location of .ecf file is not readable?)&lt;br /&gt;
* There should be UI support for deletion of Test Case&lt;br /&gt;
&lt;br /&gt;
== Questionnaires ==&lt;br /&gt;
&lt;br /&gt;
* Use ELBA&lt;br /&gt;
&lt;br /&gt;
== Software Engineering Project ==&lt;br /&gt;
&lt;br /&gt;
* One large project, but divided into testable subcomponents&lt;br /&gt;
* Students required to write test cases&lt;br /&gt;
* Fixed API to make things uniformly testable&lt;br /&gt;
* Public/Secret test cases (similar to Zeller course)&lt;br /&gt;
* Competitions:&lt;br /&gt;
** Group A test cases applied to Group A project&lt;br /&gt;
** Group A test cases applied to Group B project&lt;br /&gt;
&lt;br /&gt;
* Idea how to cancel out bias while allowing fair grading:&lt;br /&gt;
** Subtasks 1 and 2, Students divided into groups A and B&lt;br /&gt;
** First both groups do 1, A is allowed to use tool, B not&lt;br /&gt;
** Then both groups do 2, B is allowed to use tool, A not&lt;br /&gt;
** Bias cancellation:&lt;br /&gt;
*** Project complexity&lt;br /&gt;
*** Experience of students&lt;br /&gt;
*** Experience gained in first subtask, when developing second&lt;br /&gt;
*** Risk: One task might be better suited for the tool than the other&lt;br /&gt;
&lt;br /&gt;
== Data to harvest ==&lt;br /&gt;
* IDE Time with CDD(extraction) enabled / IDE Time with CDD(extraction) disabled&lt;br /&gt;
* Test Case Source (just final version, or all versions?)&lt;br /&gt;
** Use Profiler to get coverage approximation&lt;br /&gt;
* TC Meta Data (with timestamps -&amp;gt; Evolution of Test Case)&lt;br /&gt;
** TC Added/Removed/Changed&lt;br /&gt;
** TC Outcome (transitions from FAIL/PASS/UNRESOLVED[bad_communication &amp;lt;-&amp;gt; does_not_compile &amp;lt;-&amp;gt; bad_input])&lt;br /&gt;
** TC execution time&lt;br /&gt;
** Modifications to a test case (compiler needs to recompile)&lt;br /&gt;
* Development Session Data&lt;br /&gt;
** IDE Startup&lt;br /&gt;
** File save&lt;br /&gt;
* Questionnaires&lt;br /&gt;
** Initial&lt;br /&gt;
** Final&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Logging ==&lt;br /&gt;
&lt;br /&gt;
* &amp;quot;Meta&amp;quot; log entries&lt;br /&gt;
** Project opened (easy)&lt;br /&gt;
** CDD enable/disable (easy)&lt;br /&gt;
** general EiffelStudio action log entries for Developer Behaviour (harder... what do we need??)&lt;br /&gt;
&lt;br /&gt;
* CDD actions log entries&lt;br /&gt;
** Compilation of interpreter (start, end, duration)&lt;br /&gt;
** Execution of test cases (start, end, do we need individual durations for each test case that gets executed?)&lt;br /&gt;
** Extraction of new test case (extraction time)&lt;br /&gt;
&lt;br /&gt;
* Test Suite Status&lt;br /&gt;
** Test suite: after each refresh log list of all test cases (class level, needed because it's not possible to know when manual test cases get added...)&lt;br /&gt;
** Test class: (do we need info on this level)&lt;br /&gt;
** Test routine: status (basically as you see it in the tool)&lt;br /&gt;
&lt;br /&gt;
==Experiment Hypotheses==&lt;br /&gt;
&lt;br /&gt;
==Do Contracts improve Tests?==&lt;br /&gt;
&lt;br /&gt;
* Is there a correlation between Tests quantity or quality and the quantity or quality of contracts?&lt;br /&gt;
&lt;br /&gt;
== Correlation between failure/fault type and test type? ==&lt;br /&gt;
&lt;br /&gt;
* Do certain kinds of tests find certain kinds of failures/faults?&lt;br /&gt;
&lt;br /&gt;
===Use of CDD increases development productivity===&lt;br /&gt;
* Did the use of testing decrease development time?&lt;br /&gt;
&lt;br /&gt;
* Measures:&lt;br /&gt;
** Number of compilations&lt;br /&gt;
** Number of saves&lt;br /&gt;
** Number of revisions&lt;br /&gt;
** IDE time&lt;br /&gt;
** Asking the students&lt;br /&gt;
&lt;br /&gt;
Emphasis on questionnaire results. Correlation with logs only if it makes sense.&lt;br /&gt;
&lt;br /&gt;
===Use of CDD increases code correctness===&lt;br /&gt;
* Is there a relation between code correctness of project (vs. some system level test suite) and test activity?&lt;br /&gt;
&lt;br /&gt;
* Measures:&lt;br /&gt;
** number of tests&lt;br /&gt;
** number of times tests were run&lt;br /&gt;
** Number of pass/fail, fail/pass transitions, (also consider unresolved/* transitions ?)&lt;br /&gt;
** Secret test suite&lt;br /&gt;
&lt;br /&gt;
===Developer Profile: Is there a correlation between Developer Profile and the way they use testing tools===&lt;br /&gt;
* How did students use the testing tools?&lt;br /&gt;
* Are there clusters of similar use?&lt;br /&gt;
* What is characteristic of these clusters?&lt;br /&gt;
* Measures:&lt;br /&gt;
** Asking students before and after&lt;br /&gt;
** Are there projects where tests initially always fail or always pass?&lt;br /&gt;
** How often do they test?&lt;br /&gt;
** How correct is their project?&lt;br /&gt;
&lt;br /&gt;
Midterm questionnaire will be used to phrase questions for final questionnaire.&lt;br /&gt;
====Example profiles====&lt;br /&gt;
* Waldundwiesen Hacker&lt;br /&gt;
** No explicit structure. Does whatever seems appropriate at the time. No QA plan.&lt;br /&gt;
* Agile&lt;br /&gt;
** Processes interleave. Conscious of QA. Maybe even Test First or TDD.&lt;br /&gt;
* Waterfall inspired&lt;br /&gt;
** Explicit process model. Phases don't interleave.&lt;br /&gt;
* ?&lt;br /&gt;
&lt;br /&gt;
===How do extracted, synthesized and manually written test cases compare?===&lt;br /&gt;
* Which tests are the most useful to students?&lt;br /&gt;
* How many tests are there in each category?&lt;br /&gt;
* What's the test suite quality of each category?&lt;br /&gt;
* Were some excluded from testing more often than others?&lt;br /&gt;
* How many red/green and green/red transitions are there in each category?&lt;br /&gt;
* Which had compile-time errors most often that did not get fixed?&lt;br /&gt;
* Measures:&lt;br /&gt;
** LOC&lt;br /&gt;
** Number of tests&lt;br /&gt;
** Number of executions&lt;br /&gt;
** Outcome transitions&lt;/div&gt;</summary>
		<author><name>Mogh</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=CddBranch&amp;diff=10560</id>
		<title>CddBranch</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=CddBranch&amp;diff=10560"/>
				<updated>2008-02-05T11:44:01Z</updated>
		
		<summary type="html">&lt;p&gt;Mogh: /* Documentation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Testing]]&lt;br /&gt;
[[Category:EiffelDebugger]]&lt;br /&gt;
[[Category:CDD]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== What is the CDD extension for EiffelStudio? ==&lt;br /&gt;
CDD (short for Contract Driven Development) is a project developed at ETH Zurich. The extension adds full support for unit testing to EiffelStudio. It also introduces the new idea of extracting test cases automatically from failures observed via the debugger. The following lists the main features of CDD:&lt;br /&gt;
&lt;br /&gt;
* Automated extraction of test cases from failures (For every exception thrown a new test case is created)&lt;br /&gt;
* Visualization of test cases and their outcomes&lt;br /&gt;
* One button creation of manual test cases&lt;br /&gt;
* Automated execution of test cases in the background&lt;br /&gt;
* Limit visible test cases via predefined filters and custom tags&lt;br /&gt;
* Testing occurs in the background and does not disrupt the developer&lt;br /&gt;
* Easy test case management through tags&lt;br /&gt;
&lt;br /&gt;
If you have questions, feedback, or would like to report a bug please visit the [http://eiffelstudio.origo.ethz.ch/forum/20 CDD forum].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:video_still.png|center]]&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;h3&amp;gt;&lt;br /&gt;
[http://se.ethz.ch/people/leitner/cdd/video Play Video!]&lt;br /&gt;
&amp;lt;/h3&amp;gt;&lt;br /&gt;
(TODO: Update with more recent screenshot and video)&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Documentation ==&lt;br /&gt;
&lt;br /&gt;
* Download CDD Beta 2&lt;br /&gt;
** Linux: http://se.ethz.ch/people/leitner/cdd/Eiffel61_cdd_beta_2.tar.gz (Download the above file and install it just like you would install an EiffelStudio tarball. Afterwards proceed to section &amp;quot;Using CDD&amp;quot;.)&lt;br /&gt;
** Windows&lt;br /&gt;
*** Patch: http://n.ethz.ch/~moris/download/Eiffel61_cdd_beta_2_patch.zip (follow the instructions of the readme.txt contained in the archive)&lt;br /&gt;
*** Installer: http://n.ethz.ch/~moris/download/Eiffel_61_gpl_cdd_beta2-windows.msi (Note that the installer will overwrite an existing installation of EiffelStudio 6.1.)&lt;br /&gt;
* [[Using CDD]]&lt;br /&gt;
* [[CDD Common Problems|Common Problems]]&lt;br /&gt;
&lt;br /&gt;
== Old Documentation == &lt;br /&gt;
&lt;br /&gt;
Documentation for the release of CDD for EiffelStudio version 5.7 is available from [[CddOldDocumentation]].&lt;br /&gt;
&lt;br /&gt;
= Project Internal Stuff =&lt;br /&gt;
&lt;br /&gt;
== Schedule ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* 04.01.2008: Final experiment definition (questions to ask, how to conduct experiment)&lt;br /&gt;
* 08.01.2008: Finalized list of features to go in release (including logging and log submission)&lt;br /&gt;
* 27.01.2008: Beta 1 (feature complete version online)&lt;br /&gt;
* 04.02.2008: Beta 2 (designated testers test release, # &amp;gt; 3)&lt;br /&gt;
* 11.02.2008: Beta tester feedback in&lt;br /&gt;
* 18.02.2008: Final CDD release for experiment online&lt;br /&gt;
* 19.02.2008: Initial Questionnaire&lt;br /&gt;
* 25.03.2008: Midterm Questionnaire&lt;br /&gt;
* 19.05.2008: Final Questionnaire&lt;br /&gt;
* 20.05.2008: Having all data&lt;br /&gt;
* 06.06.2008: Finished analysis&lt;br /&gt;
&lt;br /&gt;
== Stefan's Master Plan ==&lt;br /&gt;
&lt;br /&gt;
* MA Start ca 17.12.2007&lt;br /&gt;
* MA End ca 17.6.2008&lt;br /&gt;
&lt;br /&gt;
* Testing the tester&lt;br /&gt;
** System level test for CDD (incl. framework)&lt;br /&gt;
** Recreating existing unit test suite with CDD&lt;br /&gt;
** Large scale validation of CDD&lt;br /&gt;
*** Info 4 and/or Software Engineering&lt;br /&gt;
*** Questions&lt;br /&gt;
**** Does testing (manual/extracted) increase developer productivity?&lt;br /&gt;
**** How many tests do people end up with (manual/extracted)?&lt;br /&gt;
**** ...&lt;br /&gt;
&lt;br /&gt;
== Various ==&lt;br /&gt;
* Regression testing: [[CddRegressionTesting]]&lt;br /&gt;
* TreeView Specification: [[CddTreeViewSpec]]&lt;br /&gt;
* [[CDDHowtoRollARelease]]&lt;br /&gt;
&lt;br /&gt;
=== Things we need from estudio ===&lt;br /&gt;
* Invariants should be checked during debugging just like pre- and postconditions (they could also be visualised in the flat view the same way as pre- and postconditions)&lt;br /&gt;
* Information on whether a call is a creation call or a normal routine call (not sure this is really necessary; what if we assume every call to a creation procedure is always a creation call?)&lt;br /&gt;
* Support for multiple open targets&lt;/div&gt;</summary>
		<author><name>Mogh</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=CddBranch&amp;diff=10551</id>
		<title>CddBranch</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=CddBranch&amp;diff=10551"/>
				<updated>2008-02-04T22:45:47Z</updated>
		
		<summary type="html">&lt;p&gt;Mogh: /* Documentation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Testing]]&lt;br /&gt;
[[Category:EiffelDebugger]]&lt;br /&gt;
[[Category:CDD]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== What is the CDD extension for EiffelStudio? ==&lt;br /&gt;
CDD (short for Contract Driven Development) is a project developed at ETH Zurich. The extension adds full support for unit testing to EiffelStudio. It also introduces the new idea of extracting test cases automatically from failures observed via the debugger. The following lists the main features of CDD:&lt;br /&gt;
&lt;br /&gt;
* Automated extraction of test cases from failures (For every exception thrown a new test case is created)&lt;br /&gt;
* Visualization of test cases and their outcomes&lt;br /&gt;
* One button creation of manual test cases&lt;br /&gt;
* Automated execution of test cases in the background&lt;br /&gt;
* Limit visible test cases via predefined filters and custom tags&lt;br /&gt;
* Testing occurs in the background and does not disrupt the developer&lt;br /&gt;
* Easy test case management through tags&lt;br /&gt;
&lt;br /&gt;
If you have questions, feedback, or would like to report a bug, please visit the [http://eiffelstudio.origo.ethz.ch/forum/20 CDD forum].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:video_still.png|center]]&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;h3&amp;gt;&lt;br /&gt;
[http://se.ethz.ch/people/leitner/cdd/video Play Video!]&lt;br /&gt;
&amp;lt;/h3&amp;gt;&lt;br /&gt;
(TODO: Update with more recent screenshot and video)&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Documentation ==&lt;br /&gt;
&lt;br /&gt;
* Download CDD Beta 2&lt;br /&gt;
** Linux: http://se.ethz.ch/people/leitner/cdd/Eiffel61_cdd_beta_2.tar.gz (Download the above file and install it just like you would install an EiffelStudio tarball. Afterwards proceed to the section &amp;quot;Using CDD&amp;quot;.)&lt;br /&gt;
** Windows&lt;br /&gt;
*** Patch: http://n.ethz.ch/~moris/download/Eiffel61_cdd_beta_2_patch.zip (follow the instructions of the readme.txt contained in the archive)&lt;br /&gt;
*** Installer: (TODO) (Note that the installer will overwrite an existing installation of EiffelStudio 6.1.)&lt;br /&gt;
* [[Using CDD]]&lt;br /&gt;
* [[CDD Common Problems|Common Problems]]&lt;br /&gt;
&lt;br /&gt;
== Old Documentation == &lt;br /&gt;
&lt;br /&gt;
Documentation for the release of CDD for EiffelStudio version 5.7 is available from [[CddOldDocumentation]].&lt;br /&gt;
&lt;br /&gt;
= Project Internal Stuff =&lt;br /&gt;
&lt;br /&gt;
== Schedule ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* 04.01.2008: Final experiment definition (questions to ask, how to conduct experiment)&lt;br /&gt;
* 08.01.2008: Finalized list of features to go in release (including logging and log submission)&lt;br /&gt;
* 27.01.2008: Beta 1 (feature complete version online)&lt;br /&gt;
* 04.02.2008: Beta 2 (designated testers test release, # &amp;gt; 3)&lt;br /&gt;
* 11.02.2008: Beta tester feedback in&lt;br /&gt;
* 18.02.2008: Final CDD release for experiment online&lt;br /&gt;
* 19.02.2008: Initial Questionnaire&lt;br /&gt;
* 25.03.2008: Midterm Questionnaire&lt;br /&gt;
* 19.05.2008: Final Questionnaire&lt;br /&gt;
* 20.05.2008: Having all data&lt;br /&gt;
* 06.06.2008: Finished analysis&lt;br /&gt;
&lt;br /&gt;
== Stefan's Master Plan == &lt;br /&gt;
&lt;br /&gt;
* MA start: ca. 17.12.2007&lt;br /&gt;
* MA end: ca. 17.6.2008&lt;br /&gt;
&lt;br /&gt;
* Testing the tester&lt;br /&gt;
** System level test for CDD (incl. framework)&lt;br /&gt;
** Recreating existing unit test suite with CDD&lt;br /&gt;
** Large scale validation of CDD&lt;br /&gt;
*** Info 4 and/or Software Engineering&lt;br /&gt;
*** Questions&lt;br /&gt;
**** Does testing (manual/extracted) increase developer productivity?&lt;br /&gt;
**** How many tests do people end up with (manual/extracted)?&lt;br /&gt;
**** ...&lt;br /&gt;
&lt;br /&gt;
== Various ==&lt;br /&gt;
* Regression testing: [[CddRegressionTesting]]&lt;br /&gt;
* TreeView Specification: [[CddTreeViewSpec]]&lt;br /&gt;
* [[CDDHowtoRollARelease]]&lt;br /&gt;
&lt;br /&gt;
=== Things we need from estudio ===&lt;br /&gt;
* Invariants should be checked during debugging just like pre- and postconditions (they could also be visualised in the flat view the same way as pre- and postconditions)&lt;br /&gt;
* Information on whether a call is a creation call or a normal routine call (not sure this is really necessary; what if we assume every call to a creation procedure is always a creation call?)&lt;br /&gt;
* Support for multiple open targets&lt;/div&gt;</summary>
		<author><name>Mogh</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=CddBranch&amp;diff=10550</id>
		<title>CddBranch</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=CddBranch&amp;diff=10550"/>
				<updated>2008-02-04T22:41:48Z</updated>
		
		<summary type="html">&lt;p&gt;Mogh: /* Documentation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Testing]]&lt;br /&gt;
[[Category:EiffelDebugger]]&lt;br /&gt;
[[Category:CDD]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== What is the CDD extension for EiffelStudio? ==&lt;br /&gt;
CDD (short for Contract Driven Development) is a project developed at ETH Zurich. The extension adds full support for unit testing to EiffelStudio. It also introduces the new idea of extracting test cases automatically from failures observed via the debugger. The following lists the main features of CDD:&lt;br /&gt;
&lt;br /&gt;
* Automated extraction of test cases from failures (For every exception thrown a new test case is created)&lt;br /&gt;
* Visualization of test cases and their outcomes&lt;br /&gt;
* One button creation of manual test cases&lt;br /&gt;
* Automated execution of test cases in the background&lt;br /&gt;
* Limit visible test cases via predefined filters and custom tags&lt;br /&gt;
* Testing occurs in the background and does not disrupt the developer&lt;br /&gt;
* Easy test case management through tags&lt;br /&gt;
&lt;br /&gt;
If you have questions, feedback, or would like to report a bug, please visit the [http://eiffelstudio.origo.ethz.ch/forum/20 CDD forum].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:video_still.png|center]]&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;h3&amp;gt;&lt;br /&gt;
[http://se.ethz.ch/people/leitner/cdd/video Play Video!]&lt;br /&gt;
&amp;lt;/h3&amp;gt;&lt;br /&gt;
(TODO: Update with more recent screenshot and video)&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Documentation ==&lt;br /&gt;
&lt;br /&gt;
* Download CDD Beta 2&lt;br /&gt;
** Linux: http://se.ethz.ch/people/leitner/cdd/Eiffel61_cdd_beta_2.tar.gz (Download the above file and install it just like you would install an EiffelStudio tarball. Afterwards proceed to the section &amp;quot;Using CDD&amp;quot;.)&lt;br /&gt;
** Windows&lt;br /&gt;
*** Patch: http://n.ethz.ch/~moris/download/EiffelStudio_6.1_CDD_BETA_2_patch.zip (follow the instructions of the readme.txt contained in the archive)&lt;br /&gt;
*** Installer: (TODO) (Note that the installer will overwrite an existing installation of EiffelStudio 6.1.)&lt;br /&gt;
* [[Using CDD]]&lt;br /&gt;
* [[CDD Common Problems|Common Problems]]&lt;br /&gt;
&lt;br /&gt;
== Old Documentation == &lt;br /&gt;
&lt;br /&gt;
Documentation for the release of CDD for EiffelStudio version 5.7 is available from [[CddOldDocumentation]].&lt;br /&gt;
&lt;br /&gt;
= Project Internal Stuff =&lt;br /&gt;
&lt;br /&gt;
== Schedule ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* 04.01.2008: Final experiment definition (questions to ask, how to conduct experiment)&lt;br /&gt;
* 08.01.2008: Finalized list of features to go in release (including logging and log submission)&lt;br /&gt;
* 27.01.2008: Beta 1 (feature complete version online)&lt;br /&gt;
* 04.02.2008: Beta 2 (designated testers test release, # &amp;gt; 3)&lt;br /&gt;
* 11.02.2008: Beta tester feedback in&lt;br /&gt;
* 18.02.2008: Final CDD release for experiment online&lt;br /&gt;
* 19.02.2008: Initial Questionnaire&lt;br /&gt;
* 25.03.2008: Midterm Questionnaire&lt;br /&gt;
* 19.05.2008: Final Questionnaire&lt;br /&gt;
* 20.05.2008: Having all data&lt;br /&gt;
* 06.06.2008: Finished analysis&lt;br /&gt;
&lt;br /&gt;
== Stefan's Master Plan == &lt;br /&gt;
&lt;br /&gt;
* MA start: ca. 17.12.2007&lt;br /&gt;
* MA end: ca. 17.6.2008&lt;br /&gt;
&lt;br /&gt;
* Testing the tester&lt;br /&gt;
** System level test for CDD (incl. framework)&lt;br /&gt;
** Recreating existing unit test suite with CDD&lt;br /&gt;
** Large scale validation of CDD&lt;br /&gt;
*** Info 4 and/or Software Engineering&lt;br /&gt;
*** Questions&lt;br /&gt;
**** Does testing (manual/extracted) increase developer productivity?&lt;br /&gt;
**** How many tests do people end up with (manual/extracted)?&lt;br /&gt;
**** ...&lt;br /&gt;
&lt;br /&gt;
== Various ==&lt;br /&gt;
* Regression testing: [[CddRegressionTesting]]&lt;br /&gt;
* TreeView Specification: [[CddTreeViewSpec]]&lt;br /&gt;
* [[CDDHowtoRollARelease]]&lt;br /&gt;
&lt;br /&gt;
=== Things we need from estudio ===&lt;br /&gt;
* Invariants should be checked during debugging just like pre- and postconditions (they could also be visualised in the flat view the same way as pre- and postconditions)&lt;br /&gt;
* Information on whether a call is a creation call or a normal routine call (not sure this is really necessary; what if we assume every call to a creation procedure is always a creation call?)&lt;br /&gt;
* Support for multiple open targets&lt;/div&gt;</summary>
		<author><name>Mogh</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=CDDHowtoRollARelease&amp;diff=10535</id>
		<title>CDDHowtoRollARelease</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=CDDHowtoRollARelease&amp;diff=10535"/>
				<updated>2008-02-04T15:18:35Z</updated>
		
		<summary type="html">&lt;p&gt;Mogh: /* How to roll a release on Windows */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:CDD]]&lt;br /&gt;
== List of CDD specific files ==&lt;br /&gt;
The following files have to be overwritten or added to a stock EiffelStudio release:&lt;br /&gt;
* cdd base library classes -&amp;gt; Src/library/base/ise/support/cdd&lt;br /&gt;
* the manual_test_class.cls file -&amp;gt; Delivery/studio/help/defaults&lt;br /&gt;
* the new 16x16.png -&amp;gt; Delivery/studio/bitmaps/png/&lt;br /&gt;
* the cdd examples folder -&amp;gt; Src/examples/cdd&lt;br /&gt;
* Splash screens -&amp;gt;  Delivery/studio/bitmaps/png/splash_shadow.png Delivery/studio/bitmaps/png/splash.png&lt;br /&gt;
&lt;br /&gt;
== How to roll a release on Windows ==&lt;br /&gt;
* prepare a directory &amp;lt;INSTALL_DIR&amp;gt; as follows:&lt;br /&gt;
** create three subdirectories &amp;lt;INSTALL_DIR&amp;gt;/EiffelStudio, &amp;lt;INSTALL_DIR&amp;gt;/gcc, and &amp;lt;INSTALL_DIR&amp;gt;/releases, and fill them as described below&lt;br /&gt;
*** [&amp;lt;INSTALL_DIR&amp;gt;/EiffelStudio] has to contain a complete delivery without the ec.exe binaries. Without a working delivery script, this can be achieved by &amp;quot;patching&amp;quot; an official installation:&lt;br /&gt;
**** copy the contents of the installation directory of a fresh official installation into &amp;lt;INSTALL_DIR&amp;gt;/EiffelStudio (fresh = NO EIFGENs produced; to be sure, uninstall the official version if already installed, manually delete the remains in the installation directory, and reinstall it WITHOUT building any precompilations)&lt;br /&gt;
**** Remove the ec.exe from  &amp;lt;INSTALL_DIR&amp;gt;/EiffelStudio/studio/spec/windows/bin&lt;br /&gt;
**** Replace/Add the cdd specific files (see list above). Watch out for the proper place to add/replace them (the above list is svn-specific)&lt;br /&gt;
*** [&amp;lt;INSTALL_DIR&amp;gt;/gcc] needs to contain content of &amp;lt;svn-eiffel-branch&amp;gt;/free_add_ons/gcc&lt;br /&gt;
*** [&amp;lt;INSTALL_DIR&amp;gt;/releases] needs to contain the ec.exe binaries (the cdd versions of course)&lt;br /&gt;
**** &amp;lt;INSTALL_DIR&amp;gt;/releases/gpl_version/ needs to contain ec.exe&lt;br /&gt;
**** &amp;lt;INSTALL_DIR&amp;gt;/releases/enterprise_version/ needs to contain ec.exe (take the same ec.exe, it's a dummy for using the scripts later on)&lt;br /&gt;
&lt;br /&gt;
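The directory layout above can be sketched as a small shell script (a hypothetical illustration only: INSTALL_DIR and the subdirectory names are taken from the steps above, and the script merely creates the empty layout, it does not fill in the delivery files):&lt;br /&gt;

```shell
#!/bin/sh
# Hypothetical sketch of the INSTALL_DIR layout the release steps expect.
# INSTALL_DIR defaults to a scratch location; point it wherever you like.
set -e
INSTALL_DIR="${INSTALL_DIR:-$HOME/cdd_install}"

# Complete delivery minus the ec.exe binaries goes here:
mkdir -p "$INSTALL_DIR/EiffelStudio"
# Content of free_add_ons/gcc from the svn eiffel branch goes here:
mkdir -p "$INSTALL_DIR/gcc"
# The CDD ec.exe binaries go here (the enterprise one is a dummy copy):
mkdir -p "$INSTALL_DIR/releases/gpl_version"
mkdir -p "$INSTALL_DIR/releases/enterprise_version"

ls "$INSTALL_DIR"
```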
* let env variable %INSTALL_DIR% point to the &amp;lt;INSTALL_DIR&amp;gt;&lt;br /&gt;
* let env variable %INIT_DIR% point to the &amp;lt;svn-eiffel-branch&amp;gt;/Delivery/scripts/windows folder&lt;br /&gt;
* finalize the &amp;quot;hallow&amp;quot; tool (&amp;lt;svn-eiffel-branch&amp;gt;/Src/tools/hallow/hallow.ecf)&lt;br /&gt;
* create the directory %INIT_DIR%/install/bin if it does not exist&lt;br /&gt;
* copy the content of &amp;lt;svn-eiffel-branch&amp;gt;/Src/tools/hallow/EIFGENs/hallow/F_code to %INIT_DIR%/install/bin&lt;br /&gt;
* create the directory %INIT_DIR%/install/binaries/x86 if it does not exist&lt;br /&gt;
* get a proper setup.dll (probably from manus, or build it yourself; instructions will be added soon) and put it into %INIT_DIR%/install/binaries/x86/&lt;br /&gt;
* start a command line, go to %INIT_DIR%/install/content/eiffelstudio, and run:&lt;br /&gt;
** nmake /nologo clean&lt;br /&gt;
** nmake /nologo&lt;br /&gt;
** nmake /nologo gpl_x86&lt;br /&gt;
* wait some minutes .... and pray :-)&lt;/div&gt;</summary>
		<author><name>Mogh</name></author>	</entry>

	<entry>
		<id>https://dev.eiffel.com/index.php?title=CDDHowtoRollARelease&amp;diff=10534</id>
		<title>CDDHowtoRollARelease</title>
		<link rel="alternate" type="text/html" href="https://dev.eiffel.com/index.php?title=CDDHowtoRollARelease&amp;diff=10534"/>
				<updated>2008-02-04T15:17:26Z</updated>
		
		<summary type="html">&lt;p&gt;Mogh: /* How to roll a release on Windows */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:CDD]]&lt;br /&gt;
== List of CDD specific files ==&lt;br /&gt;
The following files have to be overwritten or added to a stock EiffelStudio release:&lt;br /&gt;
* cdd base library classes -&amp;gt; Src/library/base/ise/support/cdd&lt;br /&gt;
* the manual_test_class.cls file -&amp;gt; Delivery/studio/help/defaults&lt;br /&gt;
* the new 16x16.png -&amp;gt; Delivery/studio/bitmaps/png/&lt;br /&gt;
* the cdd examples folder -&amp;gt; Src/examples/cdd&lt;br /&gt;
* Splash screens -&amp;gt;  Delivery/studio/bitmaps/png/splash_shadow.png Delivery/studio/bitmaps/png/splash.png&lt;br /&gt;
&lt;br /&gt;
== How to roll a release on Windows ==&lt;br /&gt;
* prepare a directory &amp;lt;INSTALL_DIR&amp;gt; as follows:&lt;br /&gt;
** create three subdirectories &amp;lt;INSTALL_DIR&amp;gt;/EiffelStudio, &amp;lt;INSTALL_DIR&amp;gt;/gcc, and &amp;lt;INSTALL_DIR&amp;gt;/releases, and fill them as described below&lt;br /&gt;
** [&amp;lt;INSTALL_DIR&amp;gt;/EiffelStudio] has to contain a complete delivery without the ec.exe binaries. Without a working delivery script, this can be achieved by &amp;quot;patching&amp;quot; an official installation:&lt;br /&gt;
*** copy the contents of the installation directory of a fresh official installation into &amp;lt;INSTALL_DIR&amp;gt;/EiffelStudio (fresh = NO EIFGENs produced; to be sure, uninstall the official version if already installed, manually delete the remains in the installation directory, and reinstall it WITHOUT building any precompilations)&lt;br /&gt;
*** Remove the ec.exe from  &amp;lt;INSTALL_DIR&amp;gt;/EiffelStudio/studio/spec/windows/bin&lt;br /&gt;
*** Replace/Add the cdd specific files (see list above). Watch out for the proper place to add/replace them (the above list is svn-specific)&lt;br /&gt;
** [&amp;lt;INSTALL_DIR&amp;gt;/gcc] needs to contain content of &amp;lt;svn-eiffel-branch&amp;gt;/free_add_ons/gcc&lt;br /&gt;
** [&amp;lt;INSTALL_DIR&amp;gt;/releases] needs to contain the ec.exe binaries (the cdd versions of course)&lt;br /&gt;
*** &amp;lt;INSTALL_DIR&amp;gt;/releases/gpl_version/ needs to contain ec.exe&lt;br /&gt;
*** &amp;lt;INSTALL_DIR&amp;gt;/releases/enterprise_version needs to contain ec.exe (take the same ec.exe, it's a dummy for using the scripts later on)&lt;br /&gt;
&lt;br /&gt;
* let env variable %INSTALL_DIR% point to the &amp;lt;INSTALL_DIR&amp;gt;&lt;br /&gt;
* let env variable %INIT_DIR% point to the &amp;lt;svn-eiffel-branch&amp;gt;/Delivery/scripts/windows folder&lt;br /&gt;
* finalize the &amp;quot;hallow&amp;quot; tool (&amp;lt;svn-eiffel-branch&amp;gt;/Src/tools/hallow/hallow.ecf)&lt;br /&gt;
* create the directory %INIT_DIR%/install/bin if it does not exist&lt;br /&gt;
* copy the content of &amp;lt;svn-eiffel-branch&amp;gt;/Src/tools/hallow/EIFGENs/hallow/F_code to %INIT_DIR%/install/bin&lt;br /&gt;
* create the directory %INIT_DIR%/install/binaries/x86 if it does not exist&lt;br /&gt;
* get a proper setup.dll (probably from manus, or build it yourself; instructions will be added soon) and put it into %INIT_DIR%/install/binaries/x86/&lt;br /&gt;
* start a command line, go to %INIT_DIR%/install/content/eiffelstudio, and run:&lt;br /&gt;
** nmake /nologo clean&lt;br /&gt;
** nmake /nologo&lt;br /&gt;
** nmake /nologo gpl_x86&lt;br /&gt;
* wait some minutes .... and pray :-)&lt;/div&gt;</summary>
		<author><name>Mogh</name></author>	</entry>

	</feed>