This page describes the testing principles and schedule for release 6.0 of EiffelStudio.
When the release is out, this page should be reorganized to keep only the generic elements; release-specific material will be moved to other pages.
OVERVIEW
You can get an overview in the Testing category.
TESTING PRINCIPLES
Test Case Specification (TCS)
S01. Name
S02. Id (we need to devise a coding scheme to identify tests uniquely; one hypothetical scheme is sketched after this specification)
S03. Source of test: one of
/ / Test devised by tester during QA process
/ / EiffelWeasel test
/ / Bug report originating with Eiffel Software
/ / Bug report originating with non-Eiffel Software developer (open-source contributor, e.g. ETH)
/ / Bug report from another source, e.g. a commercial customer
/ / Automatically generated test, e.g. AutoTest
S04. Original author, date
S05. Any successive revision, author, date
S06. Other references, such as (zero or more):
/ / Bug database entry: _______________
/ / Email message from ______ to _____, date: ____________
/ / Minutes of meeting: reference ________________________
/ / ISO/ECMA 367: Version: ____________ section, page: _________
/ / Web page: URL: __________________
/ / Other document: ________________ section, page: ____________
/ / Other: __________________________
S07. Product or products affected (e.g. EiffelStudio, EiffelVision)
S08. Purpose (typically one to a few lines)
S09. Nature: one of
/ / Functional correctness
/ / Platform portability or compatibility. Platforms involved: ________________
/ / Performance: time
/ / Performance: memory
/ / Performance, other: ________________ (e.g. disk usage, both CPU time and memory, ...)
/ / Usability
S10. Context: one of
/ / Normal usage
/ / Stress/boundary/extreme conditions testing
S11. Scope: one of
/ / Feature: ____________ (in this case fill in "Class" next)
/ / Class: ___________ (in this case fill in "Cluster" next)
/ / Cluster/subsystem: ________________
/ / Collaboration test: to check how two or more software tools, libraries or systems work together. Which ones: ____________________
/ / System test: entire tool (e.g. EiffelStudio, EiffelBuild), entire application, ...
/ / Eiffel language mechanism. Name of mechanism: ________________
/ / Other language mechanism, e.g. Lace, ECF. Name of mechanism: ___________________________
S12. Release where this test must succeed: _________
S13. Severity if test fails
/ / Minor, does not prevent release
/ / Serious, requires management decision to approve release
/ / Blocking, prevents release
S14. Relations to other TCS
List the id of each related TCS and the nature of the relationship
S15. Platform requirements if any
S16. Initial conditions
Any actions that must be performed or properties (e.g. database state) that must be satisfied for the test to make sense
S17. Expected results
This is a description of what constitutes expected vs. abnormal behavior for this test. It can be more or less detailed and rigorous.
S18. Any scripts needed to run this test
S19. Test procedure
How to run the test
S20. Status of last test run: one of
/ / Passed
/ / Failed. TRR id: ______________
S21. Regression status: one of
/ / Some past test runs have failed
/ / Some past test runs have passed
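To make the form concrete, here is a rough sketch of the TCS as a record type. It is written in Python purely for illustration (nothing in the project mandates that language); the class and field names are hypothetical, only a representative subset of the fields S01-S21 is modeled, and the coding scheme in make_tcs_id is merely one possible answer to the open question in S02.

    from dataclasses import dataclass, field
    from typing import List, Optional

    def make_tcs_id(release: str, area: str, number: int) -> str:
        """One possible S02 coding scheme: product, release, functional area, sequence.
        Example: make_tcs_id("6.0", "COMP", 1) -> "ES-6.0-COMP-0001"."""
        return f"ES-{release}-{area}-{number:04d}"

    @dataclass
    class TestCaseSpecification:
        # S01/S02: name and unique id.
        name: str
        id: str
        # S03: source of the test, e.g. "tester", "EiffelWeasel", "bug report", "AutoTest".
        source: str
        # S08: purpose, typically one to a few lines.
        purpose: str
        # S09-S11: nature, context and scope, kept as free text in this sketch.
        nature: str = "functional correctness"
        context: str = "normal usage"
        scope: str = ""
        # S12/S13: release where the test must succeed, and severity if it fails.
        release: str = "6.0"
        severity: str = "minor"  # "minor" | "serious" | "blocking"
        # S15/S17: platform requirements and expected results.
        platforms: List[str] = field(default_factory=list)
        expected_results: str = ""
        # S20/S21: status of the last run and regression history.
        last_run_status: Optional[str] = None  # "passed" | "failed" | None (never run)
        some_past_runs_passed: bool = False
        some_past_runs_failed: bool = False

A compiler test might then be registered as, say, TestCaseSpecification(name="Generic conformance", id=make_tcs_id("6.0", "COMP", 1), source="tester", purpose="...").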
Test Run Report (TRR)
R01. TCS id (refers to S02 above)
R02. Test run id; should be unique, and automatically generated (one possible generation scheme is sketched after this specification)
R03. Date and time run
R04. Precise identification of test run
Platform: ________________________
Software tools involved (SUT, i.e. system under test, plus any others needed, including any testing tools); include version numbers: ________________ ________________ ________________
Any other relevant information on conditions of the test run: ________________
R05. Name of tester
R06. Overall result as assessed by tester: one of
/ / Pass
/ / Fail
R07. Other test run data, e.g. performance figures (time, memory)
R08. More detailed description of the test run, if necessary, and any other relevant details
R09. Caused update of TCS?
/ / Yes -- what was changed? ___________________
/ / No
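In the same spirit, here is a rough sketch of the TRR, together with one way a completed run could be fed back into the TCS status fields (S20 and S21). It builds on the hypothetical TestCaseSpecification sketched above; the generation scheme in make_run_id (timestamp plus random suffix) is just one simple way to satisfy the uniqueness requirement of R02, not a prescribed format.

    import uuid
    from dataclasses import dataclass
    from datetime import datetime, timezone
    from typing import List

    def make_run_id() -> str:
        """R02: unique, automatically generated run id."""
        stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
        return f"TRR-{stamp}-{uuid.uuid4().hex[:8]}"

    @dataclass
    class TestRunReport:
        tcs_id: str                      # R01: refers to S02 of the TCS.
        run_id: str                      # R02.
        run_at: datetime                 # R03: date and time run.
        platform: str                    # R04: precise identification of the run.
        tools: List[str]                 # R04: SUT plus testing tools, with versions.
        tester: str                      # R05: name of tester.
        passed: bool                     # R06: overall result as assessed by tester.
        details: str = ""                # R07/R08: figures, detailed description.
        caused_tcs_update: bool = False  # R09.

    def record_run(tcs: "TestCaseSpecification", trr: TestRunReport) -> None:
        """Feed one run back into the TCS: S20 (last status) and S21 (regression history)."""
        tcs.last_run_status = "passed" if trr.passed else "failed"
        if trr.passed:
            tcs.some_past_runs_passed = True
        else:
            tcs.some_past_runs_failed = True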
FINAL RELEASE PREPARATION PLAN FOR EIFFELSTUDIO 6.0
Phase 1: 1 week -- preparation
2 to 9 May 2007
Goal
- For testers, to become familiar with the tested product and to put together a general plan of attack
General task (tester, with help of developer if needed):
- Explore the product, identify major functional areas, list the TCS to be performed, and start writing these TCS (the list should come first; the texts themselves will be written as the testing effort progresses).
Outcome
- List of TCS, each with at least S01 (id) and S08 (purpose)
- Some fully written TCS themselves (as many as feasible)
- Description of overall test strategy
- Description of release (functionality, platforms...)
Rules
- The tester can play around with the SUT, but is not expected to produce TRRs during this phase (although this is permitted).
Approval for bug fixes
- Developer
Discussions between developer and tester
- Permitted and encouraged, as long as the aim is to identify major areas requiring testing and understand how to use the SUT.
Phase 2: 3 weeks -- exploration
10 to 30 May 2007
Goal
- To perform as many tests as possible, and fix problems as they are uncovered.
Tasks
- Tester: finalize TCSs, perform extensive testing, fill in TRRs
- Developer: as TRRs are sent, fix bugs
Outcome
- TRRs (tester)
- Bug fixes (developer)
Rules
- During this phase all testing should be done within the official framework: apply a TCS (writing it as part of the process if it doesn't yet exist, e.g. when a bug is found while testing for something else) and produce a TRR. (A sketch of this loop appears at the end of this phase's description.)
Approval for bug fixes
- Developer + Tester
- (No commit without the approval of the tester, i.e. the tester has to re-run the TCS and authorize the fix.)
Discussions between developer and tester
- Limited to the exchange of TRRs (in particular, flagging those that indicate test failures) and anything that is strictly necessary. In general, minimize tester-developer interaction during this phase to preserve the independence of testing.
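To make the rule above concrete, here is a rough sketch of the Phase 2 loop, reusing the hypothetical record types sketched earlier. The run_test callable stands in for whatever actually executes a test (a script from S18, an EiffelWeasel run, ...); it is an assumption of this sketch, not an existing tool.

    from datetime import datetime, timezone
    from typing import Callable, List

    def phase_2_pass(specs: List["TestCaseSpecification"],
                     run_test: Callable[["TestCaseSpecification"], bool],
                     tester: str,
                     platform: str) -> List["TestRunReport"]:
        """Apply every TCS once, produce one TRR per run, and return the
        failing reports so they can be sent to the developer for fixing."""
        failures: List["TestRunReport"] = []
        for tcs in specs:
            passed = run_test(tcs)
            trr = TestRunReport(
                tcs_id=tcs.id,
                run_id=make_run_id(),
                run_at=datetime.now(timezone.utc),
                platform=platform,
                tools=["EiffelStudio 6.0"],  # plus versions of any testing tools
                tester=tester,
                passed=passed,
            )
            record_run(tcs, trr)  # keep S20/S21 of the TCS up to date
            if not passed:
                failures.append(trr)
        return failures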
Phase 3: 1 week -- consolidation
31 May to 6 June 2007
Goal
- To finalize corrections, in particular of delicate aspects.
Tasks
- Tester: check that all reported bugs have been fixed and that fixes work individually and together.
- Developer: maintain integrity of SUT, consistency of fixes
Outcome
- More TRRs
- In particular, more TRRs reporting success for previously failing TCSs
Rules
- Special care should be exercised during this phase. For any functionality that doesn't work, a management decision has to be made: either urgently fix the problem or remove the functionality from the release.
Approval for bug fixes and any changes whatsoever
- Developer + Tester + Lead engineer
- (No commit without these three approvals. The approval requirements across all phases are sketched at the end of this plan.)
Discussions between developer and tester
- Permitted without restriction, and encouraged. The goal is to get things right using all possible techniques. The more eyes the better (on all parts of the project) to find any remaining problems, each of which leads to either a fix or the removal of the functionality.
Phase 4: 1 week -- distribution
7 to 13 June 2007
Goal
- To prepare the release
Tasks
- Integration, compilation on different platforms, tests of installation procedure
Outcome
- Release 6.0 on platforms defined in advance (see Phase 1) (at a minimum Windows, Linux and Solaris)
Rules
- Extreme care. No changes except if critically needed. Normally at this stage no functionality should be removed, unless a critical bug is found late in the process.
Approval for any changes whatsoever
- As in Phase 3.
Discussions between developer and tester
- As in Phase 3.
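Finally, the commit-approval rules above differ across phases only in who must sign off. As a summary, here is a minimal sketch of that rule; the phase numbers and role names come from the plan itself, while the function is purely illustrative.

    from typing import Set

    # Who must approve a commit in each phase, per the rules above:
    # Phase 1: developer; Phase 2: developer + tester;
    # Phases 3 and 4: developer + tester + lead engineer.
    REQUIRED_APPROVALS = {
        1: {"developer"},
        2: {"developer", "tester"},
        3: {"developer", "tester", "lead engineer"},
        4: {"developer", "tester", "lead engineer"},
    }

    def may_commit(phase: int, approvals: Set[str]) -> bool:
        """True if every role required in the given phase has approved the change."""
        return REQUIRED_APPROVALS[phase] <= approvals

    # Example: in Phase 3 a fix approved only by developer and tester is blocked.
    assert not may_commit(3, {"developer", "tester"})
    assert may_commit(3, {"developer", "tester", "lead engineer"})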