CDD Meeting, Tuesday, 8.1.2008, 10:00

Next Meeting

  • Friday, 11.1.2008, 10:00

Tasks

  • Add filters and tags for extracted, manual, and automated test cases
  • Fix extraction for tuples -> DONE, but needs testing; there are probably still problems with agents, though it is not certain whether they are related to tuples or to extraction (Stefan)
  • Look at/fix test case execution for agents (Stefan)
  • Add non-committed test cases (Stefan)
  • CDD log window in IDE (Arno)
  • "New manual test case" Button (Arno)
  • Better Icons for GUI (Arno)
  • Status / Progress bar (Arno)
  • Port to 6.1 (?, probably only after Beta 1)
  • Manual re-run to find true prestate (Jocelyn, Stefan)
  • Logging (Stefan)
    • What data to log?
    • Implement storing
    • Define how students should submit logs
  • Data Gathering (Stefan)
    • Define what data to gather
    • Define how to process gathered data
  • Formulate Experiment Hypotheses (Andreas)
  • Define Project for SoftEng (Manu)
    • Find system-level test suite for us to test students' code
    • Find project with pure functional part
  • "Execute visible test cases only" Button (?)
  • Restore open nodes and selection after grid update (Arno)
    • Maybe better/easier solved via incremental updates from tree
  • Automate CDD System level tests (Stefan)
  • Install CDD in student labs (Manu)
  • Pause test execution and compilation during regular compilation and execution (Arno)
  • Add the most important convenience routines to CDD_TEST_CASE (Stefan)
  • Add failure context window (Arno)
    • Maybe also additional information such as previous outcomes?
  • Check why Gobo slows down compilation of projects not using Gobo when melting (performance issue when compiling the interpreter)
  • Fix AutoTest for courses
    • Integrate AUT_TEST_CASE into CDD_TEST_CASE hierarchy
    • Variable declaration for failing test cases
    • New release
  • Move logs below cdd_tests
  • Environment variable (or better: user preference) for qualifying class names (to avoid svn conflicts)
  • Unique ID to tag test cases with, to be used in logs, so that test logs are resilient to test class renamings (sketched below)
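
A minimal sketch of the unique-ID idea, in Python for illustration only; the real code would live in the Eiffel test classes, and the helper names and fields here are assumptions, not the agreed design. The point is that each test case carries a generated ID that is logged instead of the class name, so renaming the class does not orphan earlier log entries.

    import uuid

    def new_test_id() -> str:
        # Generated once when a test case is created and embedded in the
        # test class, e.g. as a constant; hex form keeps it log-friendly.
        return uuid.uuid4().hex

    def log_outcome(log: list, test_id: str, outcome: str) -> None:
        # The log keys entries on the stable ID, not the class name.
        log.append({"test_id": test_id, "outcome": outcome})

    log: list = []
    tid = new_test_id()
    log_outcome(log, tid, "pass")
    # ... test class renamed from TEST_FOO to TEST_BAR ...
    log_outcome(log, tid, "fail")
    assert len({entry["test_id"] for entry in log}) == 1  # still one test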


Software Engineering Project

  • One large project, but divided into testable subcomponents
  • Students required to write test cases
  • Fixed API to make things uniformly testable
  • Public/Secret test cases (similar to Zeller course)
  • Competitions:
    • Group A test cases applied to Group A project
    • Group A test cases applied to Group B project (scoring sketched below)
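
To make the competition scoring concrete, a rough tabulation sketch in Python; the pass-rate metric and all numbers are invented for illustration, not something decided at the meeting.

    # results[(tests_of, applied_to)] = (passed, total); numbers are made up.
    results = {
        ("A", "A"): (18, 20),
        ("A", "B"): (12, 20),
        ("B", "A"): (15, 20),
        ("B", "B"): (19, 20),
    }
    for (tests_of, applied_to), (passed, total) in sorted(results.items()):
        rate = 100.0 * passed / total
        print(f"tests of group {tests_of} on project of group {applied_to}: "
              f"{passed}/{total} passed ({rate:.0f}%)")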

Data to harvest

  • Test Case Source (just final version, or all versions?)
    • Use Profiler to get coverage approximation
  • TC Meta Data (with timestamps; record shape sketched after this list)
    • TC Added/Removed
    • TC Outcome
    • TC execution time
  • Development Session Data
    • IDE Startup
    • File save
  • Questionnaires
    • Initial
    • Final
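
A possible record shape for the harvested test-case metadata, sketched in Python; the JSON-lines format and the field names are assumptions meant to make the "what data to gather" discussion concrete, not a settled schema.

    import json
    import time
    from typing import Optional

    def tc_event(test_id: str, event: str, outcome: Optional[str] = None) -> str:
        # One timestamped record per test-case event; "added", "removed",
        # and "run" (with an outcome) would all share this shape.
        record = {
            "timestamp": time.time(),
            "test_id": test_id,
            "event": event,
            "outcome": outcome,
        }
        return json.dumps(record)

    print(tc_event("3f2a9b", "run", outcome="pass"))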

Experiment Hypotheses

Use of CDD increases development productivity

Use of CDD increases code correctness

Developer Profile

How did students use the testing tools? Are there clusters of similar use? What is characteristic of these clusters?
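
One way to look for such clusters, sketched in Python: represent each student as a small usage vector and group the vectors with a plain k-means. The features chosen here (counts of extracted, synthesized, and manual tests, plus runs per session) and the sample data are assumptions for illustration; a real analysis would derive features and k from the actual logs.

    import random

    def kmeans(points, k=2, iterations=20):
        # Tiny k-means: assign each point to the nearest center,
        # then recompute each center as its cluster's mean.
        centers = random.sample(points, k)
        for _ in range(iterations):
            clusters = [[] for _ in range(k)]
            for p in points:
                nearest = min(
                    range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])),
                )
                clusters[nearest].append(p)
            centers = [
                [sum(xs) / len(xs) for xs in zip(*cl)] if cl else centers[i]
                for i, cl in enumerate(clusters)
            ]
        return centers, clusters

    random.seed(0)
    # Invented vectors: [extracted, synthesized, manual, runs per session]
    students = [[12, 3, 1, 9], [10, 4, 0, 8], [1, 0, 7, 2], [2, 1, 9, 3]]
    centers, clusters = kmeans(students, k=2)
    print(centers)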

How do extracted, synthesized and manually written test cases compare?