CDD Meeting, Tuesday, 17.1.2008, 14:00
Next Meeting
- Thursday, 31.1.2008, 10:00
Tasks
Andreas
- Formulate experiment hypotheses (Andreas)
- Fix AutoTest for courses
- New release
- Write documentation and video tutorials (together with final release)
- Commit dangling patch from 6.0 to 6.1
- Make it so that tester target never has extraction or execution enabled
- remove hack from CDD_MANAGER.schedule_testing_restart
Arno
- Add CDD IDE log entry when new test case is extracted
- Implement "New Manual Test Case" Button
- Better Icons for GUI (Arno)
- Grid items contain number of (failing) test routines
- When test class gets removed manually, update test suite
- Restore open nodes and selection after full updates (incremental updates work already)
- Implement failure context window
- Maybe also additional information such as previous outcomes?
- Clean up test case in interpreter after each execution (through garbage collection?)
- Port to 6.1 (right after Beta 1)
- Build releasable delivery for Linux (after each Beta I guess...)
- Display ignored test class compilation errors (looks like we will have this for free in 6.1)
- Disable GUI visibility when running tests in interpreter (background testing)
- First step: Ask Ian what could be done.
- Do not extract test case for C calls like {CLASS_WITH_EXTERNALS}.some_function
- Don't extract when failure is due to developer exception
- Make sure CDD Tools are visible by default (what layout would you prefer?)
- Main tool shares tabs with clusters/features tool, output tool after C output tool
Bug Fixing
- More than one EIFFEL_CLASS_C with same name when EIFGENs is messed up
- Result type (like Current) produces syntax error in new test class
- Fix interpreter hang after runtime crash
- Scrolling in CDD output window
- Check why EiffelStudio quits after debugging a test routine and ignoring violations
- Check if interpreter compilation errors are propagated correctly (seems to start interpreter even though compilation has failed)
Ilinca
- Integrate variable declarations into AutoTest trunk (by 8.2.2008)
Stefan
- Clean up config handling (always add tester target, remove "enabled" attribute)
- Unique id to tag test cases with, to be used in logs, so that test logs are resilient to test class renamings
- Logging
- What data to log?
- Implement storing
- Define how students should submit logs
- Data Gathering
- Define what data to gather
- Define how to process gathered data
- Second Chance re-run to find true prestate (with Jocelyn)
- Allow for test case extraction of passing routine invocations (with Jocelyn)
- Make popup on interpreter crash go away (win32 only)
- Build releasable delivery on Windows
- Rebuilding manual test suite through extraction and synthesizing
- Find performance bottleneck of test case extraction and propose extraction method for second chance
-- Bugs
- POINTER support for special and tuple objects
- Bank Account Example: make Vision 2 library read only
-- Building an MSI...
- prepare some directory <INSTALL_DIR> like this (a scripted sketch of these steps follows this list):
- INSTALL_DIR/EiffelStudio: contains the complete delivery without the ec.exe binaries (there are several ways to do this; one is to take the official release version and simply add/replace the new CDD files. Currently for 6.0 these are: the CDD base library classes -> library/base/ise/support/cdd, the manual_test_class.cls file -> studio/help/defaults, the new 16x16.png -> studio/bitmaps/png/, the CDD examples folder -> /examples)
- INSTALL_DIR/gcc: the gcc directory (check out $EIFFEL_SRC/free_add_ons/gcc as this directory)
- INSTALL_DIR/releases/gpl_version/ec.exe (the CDD version of the exe, obviously)
- INSTALL_DIR/releases/enterprise_version/ec.exe (take the same ec.exe; it's a dummy for the script and won't be used)
- let env variable INSTALL_DIR point to this <INSTALL_DIR>
- let env variable INIT_DIR point to your Delivery/scripts/windows folder (I checked out trunk)
- finalize the "hallow" tool (Src/tools/hallow/hallow.ecf)
- create directory INIT_DIR/install/bin if it does not exist
- copy content of Src/tools/hallow/EIFGENs/hallow/F_code to INIT_DIR/install/bin
- get a proper setup.dll (from Manu probably) and put it into INIT_DIR/install/binaries/x86/
- (create directories that don't exist)
- start command line, go to INIT_DIR/install/content/eiffelstudio and run:
- nmake /nologo clean
- nmake /nologo
- nmake /nologo gpl_x86
- wait some minutes .... and pray :-)
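A minimal scripted sketch of the staging and nmake steps above, assuming hypothetical local paths (the <INSTALL_DIR>, the Delivery/scripts/windows checkout, and the location of the CDD ec.exe); this is not the official build script, just the listed steps in executable form:

 # Sketch only: stage <INSTALL_DIR> and run the nmake steps from the notes.
 # All paths below are assumptions; adjust them to your checkout.
 import os
 import shutil
 import subprocess
 
 INSTALL_DIR = r"C:\cdd_install"            # the <INSTALL_DIR> prepared above
 INIT_DIR = r"C:\Delivery\scripts\windows"  # your Delivery/scripts/windows folder
 CDD_EC = r"C:\cdd_build\ec.exe"            # the CDD version of ec.exe
 
 def stage():
     # Both releases targets get the same ec.exe; the enterprise copy
     # is only a dummy for the script and won't be used.
     for target in ("gpl_version", "enterprise_version"):
         d = os.path.join(INSTALL_DIR, "releases", target)
         os.makedirs(d, exist_ok=True)
         shutil.copy(CDD_EC, os.path.join(d, "ec.exe"))
     # bin directory that receives the finalized hallow F_code
     os.makedirs(os.path.join(INIT_DIR, "install", "bin"), exist_ok=True)
 
 def build():
     # nmake runs with INSTALL_DIR and INIT_DIR exported, as described above.
     env = dict(os.environ, INSTALL_DIR=INSTALL_DIR, INIT_DIR=INIT_DIR)
     cwd = os.path.join(INIT_DIR, "install", "content", "eiffelstudio")
     for args in (["nmake", "/nologo", "clean"],
                  ["nmake", "/nologo"],
                  ["nmake", "/nologo", "gpl_x86"]):
         subprocess.run(args, cwd=cwd, env=env, check=True)
 
 if __name__ == "__main__":
     stage()
     build()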
Manu
- Define Project for SoftEng (due by next meeting)
- Find system-level test suite for us to test students' code
- Find project with pure functional part
- Install CDD in student labs (Manu)
- Devise questionnaires
- Initial (due next meeting after Manu's vacation)
- Midterm
- Final
- Analyze questionnaires
- Rework example profiles
- Assistants will use CDD to get a feel for it and create a test suite for the students to start with
Unassigned
- Cache debug values when extracting several test cases.
- Enable execution and extraction by default for new projects.
- Make CDD Window and CDD Log Window visible by default
- "Debug selected test routine" should be grayed out if no test case is currently selected
- Fix spacing in "Create new test routine" dialog
- "Create new test routine" dialog
- Simplify base case (no specific implementation under test)
- Default class name to "TEST_"
- Gray out "Create" button if class name or routine name do not contain "test" (case insensitive)
Questionnaires
- Use ELBA
Software Engineering Project
- One large project, but divided into testable subcomponents
- Students required to write test cases
- Fixed API to make things uniformly testable
- Public/Secret test cases (similar to Zeller course)
- Competitions:
- Group A test cases applied to Group A project
- Group A test cases applied to Group B project
Data to harvest
- IDE time with CDD (extraction) enabled / IDE time with CDD (extraction) disabled
- Test Case Source (just final version, or all versions?)
- Use Profiler to get coverage approximation
- TC Meta Data (with timestamps -> Evolution of Test Case; a sketch of a possible log record follows this list)
- TC Added/Removed/Changed
- TC Outcome (transitions between FAIL/PASS/UNRESOLVED [bad_communication <-> does_not_compile <-> bad_input])
- TC execution time
- Modifications to a test case (compiler needs to recompile)
- Development Session Data
- IDE Startup
- File save
- Questionnaires
- Initial
- Final
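A sketch of what a per-event log record covering these data points could look like; the field names, and using a UUID as the stable test case tag (tying in with the unique-id task above), are assumptions rather than the implemented format:

 # Sketch only: one possible shape for a harvested log record.
 # Field names and the UUID tag are assumptions, not the implemented format.
 import time
 import uuid
 
 def make_log_record(event, test_case_id=None, outcome=None, duration=None):
     """event: e.g. 'ide_startup', 'file_save', 'tc_added', 'tc_run'.
     outcome: 'pass', 'fail' or 'unresolved:<reason>' with reason one of
     bad_communication, does_not_compile, bad_input."""
     return {
         "timestamp": time.time(),
         "event": event,
         "test_case": test_case_id,  # stable id, survives test class renamings
         "outcome": outcome,
         "duration": duration,       # execution time in seconds, for runs
     }
 
 # Example: tag a test case once, then log events against that id.
 tc_id = str(uuid.uuid4())
 record = make_log_record("tc_run", tc_id, outcome="fail", duration=0.12)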
Experiment Hypotheses
Use of CDD increases development productivity
- Did the use of testing decrease development time?
- Measures:
- Number of compilations
- Number of saves
- Number of revisions
- IDE time
- Asking the students
Emphasis on questionnaire results. Correlation with logs only if it makes sense.
Use of CDD increases code correctness
- Is there a relation between the correctness of a project's code (measured against some system-level test suite) and test activity?
- Measures:
- number of tests
- number of times tests were run
- Number of pass/fail, fail/pass transitions (also consider unresolved/* transitions?); a counting sketch follows this list
- Secret test suite
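A minimal sketch of how such transition counts could be computed from a chronological outcome list per test case; the outcome labels follow the log record sketch above and are assumptions, not the implemented analysis:

 # Sketch only: count outcome transitions (pass/fail, fail/pass, unresolved/*)
 # from a chronological list of outcomes for one test case.
 from collections import Counter
 
 def count_transitions(outcomes):
     """outcomes: chronological list like ['pass', 'fail', 'unresolved:bad_input']"""
     normalize = lambda o: o.split(":")[0]  # fold 'unresolved:<reason>' together
     pairs = zip(outcomes, outcomes[1:])
     return Counter((normalize(a), normalize(b))
                    for a, b in pairs if normalize(a) != normalize(b))
 
 print(count_transitions(["pass", "fail", "fail", "pass", "unresolved:bad_input"]))
 # Counter({('pass', 'fail'): 1, ('fail', 'pass'): 1, ('pass', 'unresolved'): 1})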
Developer Profile: Is there a correlation between developer profile and the way they use testing tools?
- How did students use the testing tools?
- Are there clusters of similar use?
- What is characteristic for these clusters?
- Measures:
- Asking students before and after
- Are there projects where tests initially always fail or always pass?
- How often do they test?
- How correct is their project?
Midterm questionnaire will be used to phrase questions for final questionnaire.
Example profiles
- Wald-und-Wiesen hacker (German idiom: garden-variety hacker)
- No explicit structure. Does whatever seems appropriate at the time. No QA plan.
- Agile
- Processes interleave. Conscious of QA. Maybe even Test First or TDD.
- Waterfall inspired
- Explicit process model. Phases don't interleave.
- ?
How do extracted, synthesized and manually written test cases compare?
- Which tests are the most useful to students?
- How many tests are there in each category?
- What's the test suite quality of each category?
- Were some excluded from testing more often than others?
- How many red/green and green/red transitions are there in each category?
- Which had compile-time errors most often that did not get fixed?
- Measures:
- LOC
- Number of tests
- Number of executions
- Outcome transitions