CddMeeting 13 02 2008
Revision as of 11:10, 14 February 2008
CDD Meeting, Wednesday, 14.02.2008, 10:00
Next Meeting
- ?
Tasks
Andreas
- Formulate Experiment Hypotheses (Andreas)
- Fix AutoTest for courses
- New release
- Write documentation and video tutorials (together with final release)
- Finish tuple_002 test case
- [delayed] Retest if test cases with errors are properly ignored (after 6.1 port)
- [done] Add timeout judgement
Arno
- Build releasable delivery for Linux (after each Beta I guess...)
- Make general encoding/decoding routines for special feature names
- e.g. infix "+" (ES) <=> infix_plus (CDD) (see the sketch after this list)
- Make newly extracted test cases show up "expanded" in GUI treeview
- "Cleanup button" deletes filtered unresolved extracted test cases
Bug Fixing
- Check why EiffelStudio quits after debugging a test routine and ignoring violations (ask Jocelyn)
- CDD Output Tool window display is not saved (it is there on the initial start after installation, but gone after closing and reopening)
Ilinca
- Integrate variable declarations into AutoTest trunk (by 8.2.2008)
Stefan
- [RECURRENT] Build releasable delivery on Windows
- [RECURRENT, currently valid] Check log for XML validity (see the sketch after this list)
- [DONE] Add timeout information to log
- [DONE] Distinguish extracted, synthesized and manual test cases in logs
- [DONE] Log TS Snapshot after execution (only executed routines)
- [DONE] Log TS Snapshot after compilation (complete test suite)
- [DONE] Log when ES starts up and shuts down
- [DONE] Log time it takes to extract test case
- [DONE] Log time it takes to compile SUT
- [DONE] Log time it takes to compile test suite
- [DONE, temporary solution] Log original exception (make it part of test routine's state)
- Second Chance re-run to find true prestate (with Jocelyn)
- Allow for test case extraction of passing routine invocations (with Jocelyn)
- Revive system level test suite
- Fix config files for system level tests (remove cdd tag)
- Rebuilding manual test suite through extraction and synthesizing
- Find performance bottleneck of test case extraction and propose extraction method for second chance
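For the recurrent "check log for XML validity" task, a minimal sketch (Python, not part of CDD) that only checks well-formedness; validating against the actual log schema would need that schema, and the file name below is a placeholder:

 # Check that a CDD log file parses as well-formed XML.
 import sys
 import xml.etree.ElementTree as ET

 def check_log_well_formed(path):
     try:
         ET.parse(path)
         print('%s: well-formed XML' % path)
         return True
     except ET.ParseError as error:
         print('%s: NOT well-formed XML: %s' % (path, error))
         return False

 if __name__ == '__main__':
     # 'cdd_log.xml' is a placeholder file name, not the real log location.
     check_log_well_formed(sys.argv[1] if len(sys.argv) > 1 else 'cdd_log.xml')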
Bugs/Things to look at
- For big projects (like ES itself) background compilation of the interpreter leads to completely unresponsive ES
- [WAITING FOR ANSWER TO EMAIL] Crash upon closing of EiffelStudio (feature call on void target in breakpoint tool)
Manu
- Install CDD in student labs (Manu)
- Devise questionnaires
- Initial (due next meeting after Manu's vacation)
- Midterm
- Final
- Analyze questionnaires
- Rework example profiles
- Assistants will use CDD to get a feel for it and create a test suite for the students to start with
Bernd
- Define Project for SoftEng
- Find test suite for us to test students' code
- Find project with pure functional part
Unassigned
- Only execute unresolved test cases once. Disable them afterwards. (Needs discussion)
- Cache debug values when extracting several test cases.
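For the "cache debug values" item, a rough sketch of the idea, assuming some debugger query callback exists (all names here are invented; the real extraction code lives in EiffelStudio/CDD, not Python):

 # Memoize values read from the stopped program so that extracting several
 # test cases from the same prestate does not query the debugger twice
 # for the same (object, attribute) pair.
 class DebugValueCache(object):
     def __init__(self, query):
         self._query = query   # callback: (object_id, attribute) -> value
         self._cache = {}

     def value(self, object_id, attribute):
         key = (object_id, attribute)
         if key not in self._cache:
             self._cache[key] = self._query(object_id, attribute)
         return self._cache[key]

     def invalidate(self):
         """Drop everything, e.g. when execution resumes and state may change."""
         self._cache.clear()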
Beta Tester Feedback
(Please put your name so we can get back to you in the case of questions)
- (Jocelyn) It should be possible to set the location of the cdd_tests directory (what if location of .ecf file is not readable?) [delayed]
- home directory? application_data directory? [Jocelyn]
- (Jocelyn) There should be UI support for deletion of Test Case [difficult for test routines, possible for classes. documentation added.]
- (Jocelyn) [BUG] the manual test case creation dialog should check if class with chosen name is already in the system [delayed]
- (Jocelyn) It would be nice if there was a way to configure the timeout for the interpreter [Stefan will ask Jocelyn for concrete example]
Questionnaires
- Use ELBA
Software Engineering Project
- Task 1: Implement VCard API
- Task 2: Implement Mime API
- Task 3: Write test cases to reveal faults in foreign VCard implementations
- Task 4: Write test cases to reveal faults in foreign Mime implementations
- Group A:
- Task 1, Manual Tests
- Task 2, Extracted Tests
- Task 3, Manual Tests
- Task 4, Extracted Tests
- Group B:
- Task 1, Extracted Tests
- Task 2, Manual Tests
- Task 3, Extracted Tests
- Task 4, Manual Tests
- One large project, but divided into testable subcomponents
- Students required to write test cases
- Fixed API to make things uniformly testable
- Public/Secret test cases (similar to Zeller course)
- Competitions:
- Group A test cases applied to Group A project
- Group A test cases applied to Group B project
- Idea for how to cancel out bias while allowing fair grading:
- Subtasks 1 and 2, Students divided into groups A and B
- First both groups do 1, A is allowed to use tool, B not
- Then both groups do 2, B is allowed to use tool, A not
- Bias cancellation:
- Project complexity
- Experience of students
- Experience gained in first subtask, when developing second
- Risk: One task might be better suited for the tool than the other
Data to harvest
- IDE time with CDD (extraction) enabled / IDE time with CDD (extraction) disabled
- Test Case Source (just final version, or all versions?)
- Use Profiler to get coverage approximation
- TC Meta Data (with timestamps -> Evolution of Test Case)
- TC Added/Removed/Changed
- TC Outcome (transitions from FAIL/PASS/UNRESOLVED[bad_communication <-> does_not_compile <-> bad_input]) (a possible record shape is sketched after this list)
- TC execution time
- Modifications to a test case (compiler needs to recompile)
- Development Session Data
- IDE Startup
- File save
- Questionnaires
- Initial
- Final
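One possible record shape for the per-test-case data above, as a sketch; field names and types are assumptions, while the outcome values are the ones listed:

 # One harvested record per test-case outcome transition.
 from dataclasses import dataclass
 from typing import Optional

 OUTCOMES = ('PASS', 'FAIL',
             'UNRESOLVED_bad_communication',
             'UNRESOLVED_does_not_compile',
             'UNRESOLVED_bad_input')

 @dataclass
 class OutcomeTransition:
     test_case: str               # class name + routine name
     source: str                  # 'extracted', 'synthesized' or 'manual'
     old_outcome: Optional[str]   # one of OUTCOMES, or None for a new test case
     new_outcome: str             # one of OUTCOMES
     timestamp: float             # when the transition happened (evolution over time)
     execution_time: float        # seconds taken by this run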
Logging
- "Meta" log entries
- Project opened (easy)
- CDD enable/disable (easy)
- general EiffelStudio action log entries for Developer Behaviour (harder... what do we need??)
- CDD actions log entries
- Compilation of interpreter (start, end, duration)
- Execution of test cases (start, end, do we need the individual duration of each test case that gets executed?)
- Extraction of new test case (extraction time)
- Test Suite Status
- Test suite: after each refresh log list of all test cases (class level, needed because it's not possible to know when manual test cases get added...)
- Test class: (do we need info on this level?)
- Test routine: status (basically as you see it in the tool)
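To make the three kinds of entries concrete, here is a sketch of how they could be serialized into the XML log. Element and attribute names are assumptions, not the actual CDD log format:

 import time
 import xml.etree.ElementTree as ET

 def meta_entry(kind):
     # e.g. kind = 'project_opened', 'cdd_enabled', 'cdd_disabled'
     return ET.Element('meta', {'kind': kind, 'timestamp': str(time.time())})

 def action_entry(kind, start, end):
     # e.g. kind = 'interpreter_compilation', 'test_execution', 'extraction'
     return ET.Element('action', {'kind': kind, 'start': str(start),
                                  'end': str(end), 'duration': str(end - start)})

 def snapshot_entry(statuses):
     # Test-suite status after a refresh: one child element per test case.
     snapshot = ET.Element('snapshot', {'timestamp': str(time.time())})
     for name, status in statuses.items():
         ET.SubElement(snapshot, 'test_case', {'name': name, 'status': status})
     return snapshot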
Experiment Hypotheses
How reliably can we extract test cases that reproduce the original failure?
- Log original exception and exception received from first test execution
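Given those two logged exceptions, the reliability could be reported as a simple ratio; a sketch (the pair format is an assumption about what the log will contain):

 def reproduction_rate(pairs):
     """pairs: iterable of (original_exception, first_run_exception)."""
     pairs = list(pairs)
     if not pairs:
         return 0.0
     matches = sum(1 for original, replayed in pairs if original == replayed)
     return matches / float(len(pairs))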
Are the extracted tests useful for debugging?
- Ask developers, using CDD
What is the (time and memory) overhead of enabling extraction?
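Assuming the harvested "IDE time with extraction enabled / disabled" figures are available, the relative overhead could be computed as below (the same formula applies to peak memory); the names are placeholders:

 def relative_overhead(with_extraction, without_extraction):
     # e.g. a result of 0.15 means enabling extraction costs 15% extra
     return (with_extraction - without_extraction) / float(without_extraction)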
What is the size of the extracted test cases?
Are we able to reproduce bugs from industry?
- Take a bug repository (say EiffelStudio, Gobo, eposix, ...): use the buggy version, try to reproduce the bug, and see whether the extracted test case is good.
Does it make a difference to the quality of the code whether one writes tests manually or extracts them?
- Compare projects using extracted tests and projects using manual tests against a reference test suite
Do contracts replace traditional testing oracles?
- Original API without contracts
- Run failing test cases (the ones we get from second part) with reference API with contracts
- How many times does the contract replace the testing oracle?