CddMeeting 05 02 2008
CDD Meeting, Tuesday, 05.02.2008, 10:00
Next Meeting
- Tuesday, 12.02.2008, 10:00
Tasks
Andreas
- Formulate experiment hypotheses (Andreas)
- Fix AutoTest for courses
- New release
- Write documentation and video tutorials (together with final release)
- Finish tuple_002 test case
- Retest if test cases with errors are properly ignored (after 6.1 port)
- Add timeout judgement
- Set timeout to 5 sec
Arno
- When test class gets removed manually, update test suite
- Build releasable delivery for Linux (after each Beta I guess...)
- Display ignored test class compilation errors (looks like we will have this for free in 6.1)
- Red background for failing test cases in view
- When debugging an extracted test case, set the first breakpoint in the "covers." feature
- Extraction for inline agents is not currently working (at least not always)
- Create inline agent test case
- Fix extraction for inline agents
Bug Fixing
- Result type (like Current) produces syntax error in new test class
- Check why EiffelStudio quits after debugging a test routine and ignoring violations
Ilinca
- Integrate variable declarations into AutoTest trunk (by 8.2.2008)
Stefan
- [RECURRENT] Build releasable delivery on Windows
- Distinguish extracted, synthesized and manual test cases in logs
- Log TS Snapshot after compilation
- Log TS Snapshot after testing
- Log when ES starts up and shuts down
- Log time it takes to extract test case
- Log time it takes to compile SUT
- Log time it takes to compile test suite
- Log original exception (make it part of test routine's state)
- Second Chance re-run to find true prestate (with Jocelyn)
- Allow for test case extraction of passing routine invocations (with Jocelyn)
- Revive system level test suite
- Rebuilding manual test suite through extraction and synthesizing
- Find performance bottleneck of test case extraction and propose extraction method for second chance
Bugs/Things to look at
- For big projects (like ES itself), background compilation of the interpreter leads to a completely unresponsive ES
- Crash upon closing of EiffelStudio (feature call on void target in breakpoint tool)
Manu
- Install CDD in student labs (Manu)
- Devise questionnaires
- Initial (due next meeting after Manu's vacation)
- Midterm
- Final
- Analyze questionnaires
- Rework example profiles
- Assistants will use CDD to get a feel for it and create a test suite for the students to start with
Bernd
- Define Project for SoftEng
- Find test suite for us to test students' code
- Find project with pure functional part
Unassigned
- Only execute unresolved test cases once. Disable them afterwards. (Needs discussion)
- Cache debug values when extracting several test cases.
Beta Tester Feedback
(Please put your name so we can get back to you in case of questions)
- It should be possible to set the location of the cdd_tests directory (what if location of .ecf file is not readable?) [Jocelyn]
- home directory? application_data directory? [Jocelyn]
- There should be UI support for deletion of Test Case [Jocelyn]
- [BUG] the manual test case creation dialog should check if class with chosen name is already in the system [Jocelyn]
- It would be nice if there was a way to configure the timeout for the interpreter [Jocelyn]
Questionnaires
- Use ELBA
Software Engineering Project
- Task 1: Implement VCard API
- Task 2: Implement Mime API
- Task 3: Write test cases to reveal faults in foreign VCard implementations
- Task 4: Write test cases to reveal faults in foreign Mime implementations
- Group A:
- Task 1, Manual Tests
- Task 2, Extracted Tests
- Task 3, Manual Tests
- Task 4, Extracted Tests
- Group B:
- Task 1, Extracted Tests
- Task 2, Manual Tests
- Task 3, Extracted Tests
- Task 4, Manual Tests
- One large project, but divided into testable subcomponents
- Students required to write test cases
- Fixed API to make things uniformly testable (see the interface sketch after this list)
- Public/Secret test cases (similar to Zeller course)
- Competitions:
- Group A test cases applied to Group A project
- Group A test cases applied to Group B project
- Idea for how to cancel out bias while allowing fair grading:
- Subtasks 1 and 2, Students divided into groups A and B
- First both groups do 1; A is allowed to use the tool, B is not
- Then both groups do 2; B is allowed to use the tool, A is not
- Biases canceled out by this design:
- Project complexity
- Experience of students
- Experience gained in the first subtask when developing the second
- Risk: One task might be better suited for the tool than the other
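For illustration, the fixed API could pin down the interfaces as deferred classes, along the following lines (a sketch only; class and feature names are invented, the real API is still to be defined):

    deferred class
        VCARD

    feature -- Access

        full_name: STRING
                -- Formatted name ("FN" property).
            deferred
            end

        email_addresses: LIST [STRING]
                -- Email addresses ("EMAIL" properties).
            deferred
            end

    feature -- Element change

        set_full_name (a_name: STRING)
                -- Set `full_name' to `a_name'.
            deferred
            end

    end

Fixing the interface this way lets test cases written against one group's implementation run unchanged against the other group's.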
Data to harvest
- IDE time with CDD (extraction) enabled / IDE time with CDD (extraction) disabled
- Test Case Source (just final version, or all versions?)
- Use Profiler to get coverage approximation
- TC Meta Data (with timestamps -> Evolution of Test Case)
- TC Added/Removed/Changed
- TC outcome (transitions between FAIL, PASS, and UNRESOLVED; UNRESOLVED breaks down into bad_communication, does_not_compile, and bad_input)
- TC execution time
- Modifications to a test case (compiler needs to recompile)
- Development Session Data
- IDE Startup
- File save
- Questionnaires
- Initial
- Final
Logging
- "Meta" log entries
- Project opened (easy)
- CDD enable/disable (easy)
- General EiffelStudio action log entries for developer behaviour (harder; what exactly do we need?)
- CDD actions log entries
- Compilation of interpreter (start, end, duration)
- Execution of test cases (start, end; do we need the individual duration of each test case that gets executed?)
- Extraction of new test case (extraction time)
- Test Suite Status
- Test suite: after each refresh, log the list of all test cases (class level; needed because it is not possible to know when manual test cases get added)
- Test class: (do we need info on this level?)
- Test routine: status (basically as you see it in the tool)
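For concreteness, log entries along these lines would cover the points above (timestamps, event names and field layout are placeholders, not a decided format):

    2008-02-05 10:00:03  META    project_opened                 project=vcard.ecf
    2008-02-05 10:00:03  META    cdd_enabled
    2008-02-05 10:03:10  CDD     interpreter_compilation_start
    2008-02-05 10:04:45  CDD     interpreter_compilation_end    duration=95s
    2008-02-05 10:04:51  CDD     extraction                     test=TC_EXTRACTED_17 duration=1.2s
    2008-02-05 10:05:30  STATUS  suite_snapshot                 pass=42 fail=3 unresolved=1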
Experiment Hypotheses
How reliably can we extract test cases that reproduce the original failure?
- Log original exception and exception received from first test execution
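One concrete reading of "reproduces the original failure", assuming both the original exception and the one from the first replay are logged with code and tag (feature and argument names below are invented):

    same_failure (a_original_code, a_replay_code: INTEGER;
            a_original_tag, a_replay_tag: STRING): BOOLEAN
            -- Treat the extracted test as reproducing the original
            -- failure iff exception code and tag agree.
        do
            Result := a_original_code = a_replay_code and then
                a_original_tag.is_equal (a_replay_tag)
        end

Reliability would then be the fraction of extracted test cases for which this predicate holds on first execution.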
Are the extracted tests useful for debugging?
- Ask developers using CDD
What is the (time and memory) overhead of enabling extraction?
What is the size of the extracted test cases?
Does it make a difference to the quality of the code whether one writes tests manually or extracts them?
- Compare projects using extracted tests and projects using manual tests against a reference test suite
Do contracts replace traditional testing oracles?
- Original API without contracts
- Run failing test cases (the ones we get from the second part) against the reference API with contracts
- How many times does the contract replace the testing oracle?
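As a minimal sketch of the distinction (ACCOUNT with `deposit' and an INTEGER attribute `balance' are invented for illustration): with the contract-equipped reference API, the postcondition checks every call, so a failing test needs no hand-written assertion:

    deposit (an_amount: INTEGER)
            -- Add `an_amount' to `balance'.
        require
            amount_positive: an_amount > 0
        do
            balance := balance + an_amount
        ensure
            balance_increased: balance = old balance + an_amount
        end

Against the contract-free original API, the same check has to be written into each test by hand (e.g. asserting that `balance' equals 100 after `deposit (100)'); the hypothesis asks in how many of the failing runs the contract alone would have served as the oracle.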