Important note: This page is being worked on. You should regularly check for updates.
How to write manual tests
This document contains requirements and guidelines for writing good manual test cases. Here you will find information about what should be tested and how to use our Testlink server. Rules for reviewing will be provided as well. When writing manual tests, do not forget the general guideline of the project: Work towards the goal!
Test cases
We use a Testlink server for writing, executing and tracking manual test cases. The homepage of the Testlink project is http://testlink.sourceforge.net/docs/testLink.php.
Here follow some basic rules about how to write test cases:
- Each use case must be decomposed into simple (single-action) steps (see the example after this list).
- The use cases must cover all of the Sophie2 functionality.
- Every use case must consist of at most 15 steps.
- When you write a new use case, make sure that it does not already exist.
- Use cases must be organized by categories.
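For illustration, a compound step such as "Create a book and add a text frame to its first page" should be split into single-action steps, each with its own expected result. The menu and palette names below are only assumptions, not a prescribed test case:
- Step 1: Select File > New Book. Expected: a new book opens with one empty page.
- Step 2: Drag a text frame from the frame palette onto page 1. Expected: the frame appears on the page and is selected.
- Step 3: Type some text into the frame. Expected: the text is displayed inside the frame.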
The following basic test plan should be executed on every iteration. It consists of basic functionality that should always work (a sample Testlink write-up follows the list):
- Open Sophie2
- Create/open a new book
- Add/delete pages
- Add/delete frames
- Save/close the book
- Exit Sophie2
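A possible Testlink write-up of this plan, with one expected result per step, might look like the following (the test case name and exact wording are only suggestions):
Test case: BASIC_SMOKE_TEST
- Step 1: Start Sophie2. Expected: the application window opens without errors.
- Step 2: Create a new book or open an existing one. Expected: the book is shown in the work area.
- Step 3: Add a page, then delete it. Expected: the page count changes accordingly.
- Step 4: Add a frame to a page, then delete it. Expected: the frame appears on and disappears from the page.
- Step 5: Save the book, then close it. Expected: no errors occur and the book file is created or updated on disk.
- Step 6: Exit Sophie2. Expected: the application closes without errors.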
As testing progresses, this plan will be expanded and more test plans will be added.
Reporting bugs
We use our Trac to report and track bugs. The homepage of the Trac project is http://trac.edgewall.org/.
In order to report a new bug, just add a new ticket. Fill in the ticket fields according to the following rules (a sample ticket follows the list):
- The name of the ticket should be in capital letters and should start with BUG_, followed by a couple of words briefly describing the bug (e.g. BUG_PRO_LIB_OWN).
- The description should be short but explanatory. It will be expanded in the Analysis section of the bug's wiki page, which you should link to from the description.
- You may leave the Assign to and Priority fields at their defaults.
- Make sure you have selected bug in the Type field.
- Select the current milestone from the drop-down menu and the version (2.0).
- Select the component this bug belongs to. When unable to tell, select uncategorized.
- Estimate the importance of the bug and fill in the Importance field.
- Also estimate the effort that it will take to fix the bug.
- Finally, fill in your name in the Analysis owners field.
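A sample ticket, with made-up values for illustration only, might look like this:
- Summary: BUG_BOOK_SAVE_FAIL
- Description: Saving a book that contains an embedded image fails with an exception. Analysis: [wiki:BUG_BOOK_SAVE_FAIL]
- Type: bug
- Milestone: <current milestone>
- Version: 2.0
- Component: uncategorized (only if you cannot tell which component is affected)
- Importance: <your estimate>
- Effort: <your estimate>
- Analysis owners: <your name>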
Once you have created the bug's wiki page, fill in its Analysis section according to PLATFORM_STANDARDS_ANALYSIS.
Think carefully about which category the bug belongs to. Current suggestions are:
- data loss
- memory leak
- low performance
- regression (a previously working feature is now broken)
- exception causing
- unexpected behaviour
You may add a new suggestion here. After some bugs are categorized, a custom field for the category will be added to the Trac in order to ease bug tracking.
Reviewing
This section contains rules for reviewing manual testing scenarios as well as for doing and reviewing the testing phase of tasks.
Rules
Here follows a description of the contents of the Testing section of a task's wiki page (a skeleton follows the list). It should contain:
- A link to the user documentation describing this task.
- A link to the release documentation if the result of this task will be part of the next release.
- Links to use cases in Testlink where applicable.
- Related test cases should be considered and listed as well.
- Links to all auto tests related to this task.
- See PLATFORM_STANDARDS_AUTO_TESTS for more information on automated testing.
- (recommended) A link to the Hudson test report regarding these tests.
- Explanations of the results of the tests.
- A brief explanation of the bugs reported with links to their trac tickets.
- Links to related bugs should be provided as well.
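A possible skeleton of such a Testing section, with placeholder links, might look like this (the layout is only a suggestion):
Testing
- User documentation: <link>
- Release documentation: <link> (only if the task is part of the next release)
- Manual test cases: <links to the Testlink use cases>; related test cases: <links>
- Auto tests: <links to the automated tests>; Hudson report: <link>
- Results: <short explanation of the test results>
- Bugs: <brief explanation of each reported bug with a link to its Trac ticket>; related bugs: <links>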
Scoring
The testing reviewer should make sure everything listed in the above section is in order - the documentation is well written, the manual testing scenarios cover all aspects of the task, bugs are adequately reported, etc. Reviewers should either follow the standards in this document or comment on them in the Comments section of this page. If you state that a task does not comply with the standards, point to the requirements that are not met. Scores are in the range 1-5. Here are the rules for scoring the testing phase:
- Score 1 (fail): The testing phase is not structured according to the standards (or only to a very small extent).
- Score 2 (fail): The testing phase is mostly structured according to the standards, but some items are missing, bugs are not linked or explained, test cases are not suitable, etc. In general, the testing does not cover all aspects of the functionality.
- Score 3 (pass): The testing phase is structured according to the standards and covers the functionality, but lacks some descriptions and more could be added.
- Score 4 (pass): The testing phase is structured according to the standards and provides detailed information according to the requirements mentioned above.
- Score 5 (pass): The testing phase is structured according to the standards and there is nothing more to be added - it is complete enough that a person who is not familiar with the project can clearly see that the feature(s) are implemented really well.
All reviews should be motivated. A detailed comment explaining why the testing phase fails is required. For a score of 3, a list of things that could be improved should be provided. Comments are encouraged for higher scores as well. Non-integer scores are STRONGLY discouraged. If you give the testing a score of 3.5, you probably have not reviewed it thoroughly enough and cannot clearly state whether it is good or not. Once the testing phase has been reviewed, it cannot be altered. If you think the review is wrong, you should request a super review. Currently all super reviews should be discussed with Milo. Make sure you can provide clear arguments about what the testing lacks before you request a super review.
Comments
Your comment here --developer.id@YYYY-MM-DD