Important note: This page is being worked on. You should regularly check for updates.
How to write manual tests
This document contains requirements and guidelines for writing good manual test cases. Here you will find information about what should be tested and how to use our TestLink server. Rules for reviewing will be provided as well. When writing manual tests, do not forget the general guideline of the project: Work towards the goal!
Test cases
We use a TestLink server for writing, executing and tracking manual test cases. The homepage of the TestLink project is http://testlink.sourceforge.net/docs/testLink.php.
Here follow some basic rules about how to write test cases:
- Each use case must be decomposed to simple (single action) steps.
- The use cases must cover all of the Sophie2 functionality.
- Every use case must consist of at most 15 steps.
- When you write a new use case, make sure that it does not already exist.
- Use cases must be organized by categories.
The following testplan cases cover basic functionality; they should be executed on every iteration and should always pass:
| Testcase | Expected Results |
| --- | --- |
| Start Sophie2. | Sophie2 should start in the default skin with the left, bottom and right flaps open. |
| Create a new book. | A new book should be created. |
| Add a page to the book. | A new page should be added to the book. |
| Add a resource frame to a page of the book. | A resource frame is added to the page. |
| Delete a frame from a page in the book. | The wanted frame is deleted from the page. |
| Delete a page from the book. | The wanted page is deleted from the book. |
| Save the book. | The book is saved. |
| Close the book. | The book is closed. |
| Open an existing book. | The book is loaded. |
| Exit Sophie2. | Sophie2 is closed. |
All testcases must begin with one of the following openings:
| Steps | Expected Results |
| --- | --- |
| 1. Start Sophie2. | 1. Sophie2 should start in the default skin with the left, bottom and right flaps open. |
| 2. Create a new book. | 2. A new book should be created. |
or
| Steps | Expected Results |
| --- | --- |
| 1. Start Sophie2. | 1. Sophie2 should start in the default skin with the left, bottom and right flaps open. |
| 2. Open an existing book. | 2. The book is loaded. |
and must end with these steps:
| Steps | Expected Results |
| --- | --- |
| ... | ... |
| 8. Save the book. | 8. The book is saved with all the changes made. |
| 9. Close the book. | 9. The book is closed. |
| 10. Open the saved book. | 10. The book should load with all the changes made before saving. |
Apart from the fixed opening and closing steps, a testcase may contain any number of additional steps (within the 15-step limit above). A testplan consists of a number of existing testcases. The testcases in a testplan should be picked so that executing them leads to the expected result. As the project develops, different testplans will be added, depending on the expected results.
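The structural rules above (a valid opening, the fixed closing steps, and the 15-step limit) can be sketched as a small validation helper. This is a hypothetical illustration, not part of Sophie2 or TestLink; the name `validate_testcase` is made up for this example:

```python
# Hypothetical helper that checks a manual testcase against the
# structural rules in this document. Not part of Sophie2 or TestLink.

MAX_STEPS = 15  # "Every use case must consist of at most 15 steps."

# A testcase must begin with one of these openings ...
VALID_OPENINGS = [
    ["Start Sophie2.", "Create a new book."],
    ["Start Sophie2.", "Open an existing book."],
]

# ... and must end with these closing steps.
REQUIRED_CLOSING = ["Save the book.", "Close the book.", "Open the saved book."]

def validate_testcase(steps):
    """Return a list of rule violations (an empty list means the testcase is OK)."""
    problems = []
    if len(steps) > MAX_STEPS:
        problems.append("more than %d steps" % MAX_STEPS)
    if steps[:2] not in VALID_OPENINGS:
        problems.append("does not begin with a valid opening")
    if steps[-3:] != REQUIRED_CLOSING:
        problems.append("does not end with save/close/reopen")
    return problems

if __name__ == "__main__":
    case = [
        "Start Sophie2.",
        "Create a new book.",
        "Add a page to the book.",
        "Save the book.",
        "Close the book.",
        "Open the saved book.",
    ]
    print(validate_testcase(case))  # → []
```

A testcase that skips the closing save/close/reopen sequence, or exceeds 15 steps, would come back with the corresponding violation listed.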
Reporting bugs
Reviewing
This section contains rules for reviewing manual testing scenarios as well as for doing and reviewing the testing phase of tasks.
Rules
The Testing section of a task's wiki page should contain:
- A link to the user documentation describing this task.
- A link to the release documentation if the result of this task will be part of the next release.
- Links to use cases in TestLink where applicable.
- Related test cases should be considered and listed as well.
- Links to all auto tests related to this task.
- See PLATFORM_STANDARDS_AUTO_TESTS for more information on automatic testing.
- (recommended) A link to the Hudson test report regarding these tests.
- Explanations of the results of the tests.
- A brief explanation of the bugs reported with links to their trac tickets.
- Links to related bugs should be provided as well.
- Links to the related attachments for testing should be provided if needed.
Scoring
The testing reviewer should make sure everything listed in the above section is OK - the documentation is well written, manual testing scenarios cover all aspects of the task, bugs are adequately reported, etc. Reviewers should either follow the standards in this document or comment on them in the Comments section of this page. If you state that a task does not comply with the standards, point to the requirements that are not met. Scores are in the range 1-5. Here are the rules for scoring the testing phase:
- Score 1 (fail): The testing phase is not structured according to the standards (or only to a very small extent).
- Score 2 (fail): The testing phase is structured according to the standards for the most part, but some things are missing: bugs are not linked or explained, test cases are not suitable, etc. In general, the testing does not cover all aspects of the functionality.
- Score 3 (pass): The testing phase is structured according to the standards and covers the functionality, but lacks some descriptions and leaves room for additions.
- Score 4 (pass): The testing phase is structured according to the standards and provides detailed information according to the requirements mentioned above.
- Score 5 (pass): The testing phase is structured according to the standards and there is nothing more to be added - it is good enough that even a person not quite familiar with the project can clearly see that the feature(s) are implemented well.
All reviews should be motivated. A detailed comment about why the testing phase fails is required. For a score of 3, a list of things that could be better should be provided. Comments are encouraged for higher scores as well. Integer scores with a step of 1 are recommended; a 0.5 step is also possible, but if you give the testing a score of 3.5 you probably have not reviewed it thoroughly enough to state clearly whether it is good or not. Once the testing phase has been reviewed, it cannot be altered. If you think a review is wrong, you should request a super review. Currently all super reviews should be discussed with Milo. Make sure you can provide clear arguments about what the testing lacks before you request a super review.
Comments
Your comment here --developer.id@YYYY-MM-DD
- Test cases may contain prerequisites - of course, opening Sophie and creating a new book won't be a step in every test case. These should be marked as Step0.
- Talking about "obeying rules" in bug reports is a good wish, but does not make much sense since we allow people to commit bugs on their own. Unless you think someone will go through and moderate tickets, this is not useful. Requirements for bug reporting should be minimalistic, as bugs will be reported not only by people with technical knowledge. This also applies to bug categories, where we definitely need an "uncategorized" category.
- Bold and italics are not used by many developers; using them here probably makes this page a little inconsistent with the rest of the wiki contents. We use headings and bullets.
- Sentences like "When writing manual tests, do not forget the general guideline of the project: Work towards the goal!" are out of place - this is a serious project, not a daycare.
- Sentences like "Non-integer scores are STRONGLY discouraged." are out of place - this is a serious project, not a daycare. You do not have to SHOUT to get understood.
--deyan@2009-02-05
- Here is a good example of a bug report form: https://bugs.opera.com/wizard/. So simple. I propose making a similar form. --kyli@2009-02-16