CddMeeting01712008
 
 
CDD Meeting, Tuesday, 17.1.2008, 14:00
Next Meeting
- Friday, 23.1.2008, 10:00
 
Tasks
Andreas
- Formulate Experiment Hypotheses (Andreas)
- Fix AutoTest for courses
- Integrate AUT_TEST_CASE into CDD_TEST_CASE hierarchy
- New release
- Break recursion in Erl-G lookup printer
- Write documentation and video tutorials (together with final release)
- Find out why errors in test classes are no longer ignored
 
Arno
- Remove cdd_enabled and capture/replay and add execute tag in config
- Right click clears CDD log window
- Add CDD IDE log entry when a new test case is extracted
- Implement "New Manual Test Case" button
- Better icons for GUI (Arno)
- Status bar
- Grid items contain number of (failing) test routines
- Dropping a stone on the filter text automatically sets a new filter text (e.g. drop a stone for ROOT_CLASS and the filter text becomes covers.ROOT_CLASS; for class MY_TESTS the new filter text will be name.MY_TESTS); see the sketch after this list
- When a test class gets removed manually, update the test suite
- Restore open nodes and selection after full updates (incremental updates work already)
- Implement failure context window
  - Maybe also additional information such as previous outcomes?
- Environment variable (or better, a user preference) for qualifying class names (to avoid svn conflicts)
- TreeView
  - By Type, by Outcome, by Testcase (default), by Tested Class, by Tag; implement via drop-down box
- Start background execution after extraction (even if the user is still debugging)
- Clean up test case in interpreter after each execution (through garbage collection?)
- Port to 6.1 (right after Beta 1)
- Build releasable delivery for Linux (after each Beta, I guess...)
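
A minimal sketch of the drop-to-filter rule above. The helper class and routine names are hypothetical, and the assumption that the choice between the covers. and name. prefixes depends on whether the dropped class is a test class is inferred from the two examples in the notes:

 class
     CDD_FILTER_SUGGESTION
     -- Hypothetical helper illustrating the drop-to-filter rule.

 feature -- Access

     filter_text_for_drop (a_class_name: STRING; a_is_test_class: BOOLEAN): STRING
             -- Filter text when the class named `a_class_name' is dropped
             -- on the filter field.
         require
             class_name_not_empty: not a_class_name.is_empty
         do
             if a_is_test_class then
                     -- e.g. MY_TESTS -> "name.MY_TESTS"
                 Result := "name." + a_class_name
             else
                     -- e.g. ROOT_CLASS -> "covers.ROOT_CLASS"
                 Result := "covers." + a_class_name
             end
         ensure
             result_not_empty: not Result.is_empty
         end

 end

In the IDE the class name and the test-class flag would come from the dropped stone; the sketch keeps them as plain arguments to stay self-contained.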
 
Bug Fixing
- Extracted test case does not appear in grid (AST does not contain test feature)
- Fix interpreter hang after runtime crash
- Scrolling in CDD output window
- Make sure the tests directory exists before any of the printers tries to print a class (see the sketch after this list)
- Check why EiffelStudio quits after debugging a test routine and ignoring violations
- Check if interpreter compilation errors are propagated correctly (it seems to start the interpreter even though compilation has failed)
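
For the tests-directory item above, the fix presumably amounts to creating the directory on demand before any printer writes a class file. A minimal sketch using EiffelBase's DIRECTORY; the helper class and routine name are assumptions:

 class
     CDD_TESTS_DIRECTORY_HELPER
     -- Hypothetical helper; only the requirement comes from the notes.

 feature -- Basic operations

     ensure_tests_directory (a_path: STRING)
             -- Create the directory `a_path' if it does not exist yet,
             -- so that the printers can safely write test classes into it.
         local
             l_dir: DIRECTORY
         do
             create l_dir.make (a_path)
             if not l_dir.exists then
                 l_dir.create_dir
             end
         end

 end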
 
Ilinca
- Integrate variable declarations into AutoTest trunk (by 8.2.2008)
 
Stefan
- [DONE] Add filters and tags for extracted, manual and synthesized tests
- [DONE] Look at/fix test case execution for agents
- Second Chance re-run to find true prestate (with Jocelyn)
- Allow for test case extraction of passing routine invocations (with Jocelyn)
- Logging
  - What data to log?
  - Implement storing
  - Define how students should submit logs
- Data Gathering
  - Define what data to gather
  - Define how to process gathered data
- Automate CDD system-level tests
- [DONE] Add most important convenience routine to CDD_TEST_CASE (Stefan)
- Move logs below cdd_tests
- Unique id to tag test cases with, to be used in logs, so that test logs are resilient to test class renamings
- [DONE] While extracting test cases, flag objects that are the target of a currently executing routine
- [DONE] During setup, check the invariant of all objects that are not flagged
- Make popup on interpreter crash go away (win32 only)
- Add info to indexing clause (see the sketch after this list)
  - "This class has been automatically created by CDD"
  - "Visit ... to learn more about extracted test cases"
  - Creation date
- Build releasable delivery on Windows
- Rebuild manual test suite through extraction and synthesizing
- [DONE] In extracted test cases: break long manifest strings into smaller parts
- Extract STRINGs correctly
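
A minimal sketch of how the header of an extracted test class could look once the indexing-clause, unique-id and manifest-string items above are done. The class name, tag names, date format and cdd_id value are assumptions; the elided URL is left as in the notes:

 indexing
     description: "This class has been automatically created by CDD"
     instructions: "Visit ... to learn more about extracted test cases"
     creation_date: "17.1.2008"
     cdd_id: "tc_0001"
         -- Hypothetical unique id, so that test logs survive class renamings.

 class
     EXAMPLE_EXTRACTED_TEST

 feature -- Test routines

     test_extracted
             -- Illustrative body only: long extracted manifest strings are
             -- broken into smaller parts with Eiffel's `%' continuation syntax.
         local
             l_input: STRING
         do
             l_input := "first part of a long extracted string %
                 %and the second part of the same string"
         end

 end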
 
Manu
- Define project for SoftEng (due by next meeting)
  - Find system-level test suite for us to test students' code
  - Find project with pure functional part
- Install CDD in student labs (Manu)
- Devise questionnaires
  - Initial (due next meeting after Manu's vacation)
  - Midterm
  - Final
- Analyze questionnaires
- Rework example profiles
- Assistants will use CDD to get a feel for it and create a test suite for the students to start with
 
Unassigned
- Disable GUI visibility when running tests in interpreter (background testing)
- Display ignored test class compilation errors (looks like we will have this for free in 6.1)
- Do not extract test cases for C calls like {CLASS_WITH_EXTERNALS}.some_function
 
Questionnaires
- Use ELBA
 
Software Engineering Project
- One large project, but divided into testable subcomponents
- Students required to write test cases
- Fixed API to make things uniformly testable
- Public/secret test cases (similar to Zeller course)
- Competitions:
  - Group A test cases applied to Group A project
  - Group A test cases applied to Group B project
 
 
Data to harvest
- IDE Time with CDD (extraction) enabled / IDE Time with CDD (extraction) disabled
- Test Case Source (just final version, or all versions?)
  - Use Profiler to get coverage approximation
- TC Meta Data (with timestamps -> Evolution of Test Case); see the sketch after this list
  - TC Added/Removed/Changed
  - TC Outcome (transitions from FAIL/PASS/UNRESOLVED [bad_communication <-> does_not_compile <-> bad_input])
  - TC execution time
  - Modifications to a test case (compiler needs to recompile)
- Development Session Data
  - IDE Startup
  - File save
- Questionnaires
  - Initial
  - Final
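
Several hypotheses below count outcome transitions, so here is a minimal sketch of how they could be computed from a per-test-case outcome history. The class and feature names and the string encoding of outcomes are assumptions; only the outcome categories come from the notes:

 class
     CDD_OUTCOME_STATISTICS
     -- Hypothetical helper for counting outcome transitions in a test log.

 feature -- Measurement

     transition_count (a_outcomes: ARRAY [STRING]; a_from, a_to: STRING): INTEGER
             -- Number of `a_from' -> `a_to' transitions in the chronological
             -- outcome history `a_outcomes' (e.g. "fail" -> "pass").
         local
             i: INTEGER
         do
             from
                 i := a_outcomes.lower + 1
             until
                 i > a_outcomes.upper
             loop
                 if a_outcomes [i - 1].is_equal (a_from) and a_outcomes [i].is_equal (a_to) then
                     Result := Result + 1
                 end
                 i := i + 1
             end
         ensure
             non_negative: Result >= 0
         end

 end

For example, transition_count (<<"pass", "fail", "pass">>, "fail", "pass") yields 1.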
 
 
Experiment Hypotheses
Use of CDD increases development productivity
- Did the use of testing decrease development time?
- Measures:
  - Number of compilations
  - Number of saves
  - Number of revisions
  - IDE time
  - Asking the students

Emphasis on questionnaire results; correlation with logs only if it makes sense.
Use of CDD increases code correctness
- Is there a relation between the code correctness of a project (vs. some system-level test suite) and test activity?
- Measures:
  - Number of tests
  - Number of times tests were run
  - Number of pass/fail and fail/pass transitions (also consider unresolved/* transitions?)
  - Secret test suite
 
 
Developer Profile: Is there a correlation between developer profile and the way students use testing tools?
- How did students use the testing tools?
- Are there clusters of similar use?
- What is characteristic for these clusters?
- Measures:
  - Asking students before and after
  - Are there projects where tests initially always fail resp. pass?
  - How often do they test?
  - How correct is their project?
 
 
The midterm questionnaire will be used to phrase questions for the final questionnaire.
Example profiles
- Wald-und-Wiesen hacker (garden-variety hacker)
  - No explicit structure. Does whatever seems appropriate at the time. No QA plan.
- Agile
  - Processes interleave. Awareness of QA. Maybe even test-first or TDD.
- Waterfall-inspired
  - Explicit process model. Phases don't interleave.
- ?
 
How do extracted, synthesized and manually written test cases compare?
- Which tests are the most useful to students?
- How many tests are there in each category?
- What's the test suite quality of each category?
- Were some excluded from testing more often than others?
- How many red/green and green/red transitions are there in each category?
- Which had compile-time errors most often that did not get fixed?
- Measures:
  - LOC
  - Number of tests
  - Number of executions
  - Outcome transitions
 
 

