How to write manual tests
This document contains requirements and guidelines for writing good manual test cases: what should be tested, how to use our Testlink server, and how testing is reviewed. When writing manual tests, do not forget the general guideline of the project: work towards the goal!
Test cases
We use a Testlink server for writing, executing and tracking manual test cases. The homepage of the Testlink project is http://testlink.sourceforge.net/docs/testLink.php.
Here are some basic rules for writing test cases:
- Each use case must be decomposed into simple (single-action) steps.
- The use cases must cover all of the Sophie2 functionality.
- Every use case must consist of at most 15 steps.
- When you write a new use case, make sure that it does not already exist.
- Use cases must be organized by categories.
The following testplan cases cover basic functionality; they should be executed on every iteration and should always pass:
Testcase: | Expected Results: |
Start Sophie2. | Sophie2 should start in the default skin with the left, bottom and right flaps open. |
Create a new book. | A new book should be created. |
Add a page to the book. | A new page should be added to the book. |
Add a resource frame to a page of the book. | A resource frame is added to the page. |
Delete a frame from a page in the book. | The wanted frame is deleted from the page. |
Delete a page from the book. | The wanted page is deleted from the book. |
Save the book. | The book is saved. |
Close the book. | The book is closed. |
Open an existing book. | The book is loaded. |
Exit Sophie2. | Sophie2 is closed. |
All testcases must
- begin with one of these two step sequences (a complete example testcase is sketched after the tables below):
Steps: | Expected Results: |
1. Start Sophie2. | 1. Sophie2 should start in the default skin with the left, bottom and right flaps open. |
2. Create a new book. | 2. A new book should be created. |
or
Steps: | Expected Results: |
1. Start Sophie2. | 1. Sophie2 should start in the default skin with the left, bottom and right flaps open. |
2. Open an existing book. | 2. The book is loaded. |
and end with these steps:
Steps: | Expected Results: |
... | ... |
8. Save the book. | 8. The book is saved with all changes made. |
9. Close the book. | 9. The book is closed. |
10. Open the saved book. | 10. The book should load with all changes made before saving. |
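For illustration, a complete testcase that follows these rules might look like the sketch below; the middle steps are only an example scenario, not a required one:
Steps: | Expected Results: |
1. Start Sophie2. | 1. Sophie2 should start in the default skin with the left, bottom and right flaps open. |
2. Create a new book. | 2. A new book should be created. |
3. Add a page to the book. | 3. A new page should be added to the book. |
4. Add a resource frame to the new page. | 4. A resource frame is added to the page. |
5. Delete the frame from the page. | 5. The frame is deleted from the page. |
6. Save the book. | 6. The book is saved with all changes made. |
7. Close the book. | 7. The book is closed. |
8. Open the saved book. | 8. The book should load with all changes made before saving. |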
There is no limit to the number of steps in a testcase. A testplan consists of a number of existing testcases, picked so that executing them leads to the expected results. As the project develops, different testplans will be added depending on the expectations towards their results.
Reporting bugs
Bugs are described and fixed in trac tickets.
Analysis of bugs
Reporting an issue constitutes the analysis of the bug. Bugs can be reported by all registered users, and by anonymous users with some limitations. If the following items are not described, the bug may be considered invalid. The text in italic is not mandatory.
- Summary:
- May start with "Crash:", "Tweak:", or "Unexpected behavior:". Crash means that the error report form is invoked; the error report is needed as an attachment.
- "TLID:" the Testlink id, if the bug can be reproduced by executing a testcase
- Short summary: custom text describing what exactly happens in a few words
- Type: bug
- Priority: determine the priority of the bug
- Keywords: fill in related keywords, for example "text"
- Components: fill in the component if you can determine it
- Reporter: DevID, automatically added by trac
- Milestone: the current milestone
- Cc: a reminder to someone
- Analysis_owners: your DevID
- Description: contains a more detailed description and the steps needed to reproduce the bug.
- If the bug has a TLID, a comment of the ticket must clarify at which step the application crashes.
- Attachments: you may add attachments to your ticket (screenshots, crash logs, books). Referring to an attachment named "filename.ext" is done by [attachment:filename.ext].
- It is recommended that attachment names contain no spaces or special symbols; otherwise you might not be able to link them properly in the ticket.
When you create the ticket, move its status to analysis finished. A hypothetical example of a filled-in ticket is sketched below.
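For illustration only, this sketch shows how the fields of such a ticket might be filled in; all values, including the Testlink id and the scenario in the description, are invented placeholders and do not refer to a real ticket:
  Summary: Crash: TLID:<testcase id> Deleting a frame closes the application
  Type: bug
  Priority: major
  Keywords: frame, delete
  Components: <component, if you can determine it>
  Milestone: <current milestone>
  Analysis_owners: <your DevID>
  Description: Start Sophie2, open an existing book, add a resource frame to a page and delete it twice in a row. On the second delete the crash report form appears. The crash log is attached as [attachment:crash.log].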
- Design
- Design and implementation are done in a separate branch.
- The design can be described in a comment of the ticket. If the space isn't enough (the fix is not trivial), the designer may create a wiki page named BUG_<ticket_id> (where <ticket_id> is the number of the ticket in trac) and put his design section there. This page should be linked in the description field of the ticket. Bug pages are created using the bug page template. When the design is finished, the status should be updated and implementation may start without a design review. The same goes for the implementation.
- The design should include a system test that reproduces the bug. It should fail at the design phase and pass after the implementation (see the sketch after this list).
- In the design and implementation, any mentioned attachments must be linked properly.
- Implementation: the implementation changeset should be linked. The ticket status is moved to implementation finished.
- Test: the test is performed by the reporter or, in rare cases, by an integrator. If the bug is no longer present, the ticket is closed.
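As a rough sketch of such a bug-reproducing system test, the JUnit example below shows the intended shape; the Sophie2-specific names (SophieApp, Book, Page and their methods) are hypothetical placeholders, not the real Sophie2 API:
```java
// A minimal sketch of a bug-reproducing system test, assuming JUnit 4.
// All Sophie2-specific names (SophieApp, Book, Page, ...) are hypothetical
// placeholders, not the real Sophie2 API.
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class Bug1685SystemTest {

    @Test
    public void deletingAPageKeepsTheBookConsistent() {
        SophieApp app = SophieApp.start();   // start the application
        Book book = app.createNewBook();     // reproduce the reported scenario
        Page page = book.addPage();
        book.deletePage(page);               // the step that used to crash

        // The test should fail while the bug is present (design phase)
        // and pass once the fix is implemented.
        assertEquals(1, book.getPageCount());
        app.exit();
    }
}
```
Written this way, the test fails while the bug is still present and passes once the fix is in place, which is what the design phase requires.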
Reviews and resolutions: When the developer understands the bug, he moves it to the analysis ok phase and starts the design. The review of the design and implementation is done by the integrators.
- Analysis
- The description describes the problem and the bug can be reproduced by following the instructions there - 3p
- The ticket contains crash report +0.5p
- The ticket contains attached books, screenshots, etc +0.5p (+1p)
- The ticket contains linked test case +1p
- The ticket contains links to related bugs, tasks, external links +0.5p
Design and implementation are usually reviewed at once.
- Design
- Explanation of how it will be fixed
- Methods
- New classes, if needed
- Tests: a system test that proves the bug is present before the implementation and fixed after it
- What caused the issue?
- Implementation
- Lame implementation (bad code) and hacks are not allowed
- Should follow the design
- Test: testing is performed by the reporter or, when needed, by an integrator. If the problem is no longer present, the ticket is closed. The test is not reviewed. If any testcases are related, they should be executed.
Resolutions:
- fixed - set by the tester, if the problem is no longer present and is fixed in this ticket
- invalid - set by anyone, if the problem is not present or something else is wrong, for example, the description is missing
- wontfix - if something is designed this way, isn't worth fixing, or will be fixed after the final release
- later - this will be fixed later, as part of another task
- worksforme - the problem cannot be reproduced by whoever tries it
Note: In the daily report, you state the bug task as BUG_<ticketid>, for example:
an BUG_1685 100% 25m
de BUG_1685 40% 25m
im-re BUG_1685 4p 10m
Reviewing
This section contains rules for reviewing manual testing scenarios as well as for doing and reviewing the testing phase of tasks.
Rules
Here follows a description of the contents of the Testing section of a task's wiki page (an illustrative sketch is given after the list). It should contain:
- A link to the user documentation describing this task.
- A link to the release documentation if the result of this task will be part of the next release.
- Links to use cases in Testlink where applicable.
- related test cases should be considered and listed as well.
- Links to all auto tests related to this task.
- see PLATFORM_STANDARDS_AUTO_TESTS for more information on automatic testing.
- (recommended) A link to the Hudson test report regarding these tests.
- Explanations of the results of the tests.
- A brief explanation of the bugs reported with links to their trac tickets.
- links to related bugs should be provided as well.
- Links to the related attachments for testing should be provided if needed.
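For illustration, a Testing section built according to these rules might look roughly like the sketch below; the links and the ticket number are placeholders, not real project artifacts:
  Testing
  - User documentation: <link to the user documentation page>
  - Release documentation: <link, if the task is part of the next release>
  - Use cases: <links to the related Testlink test cases>
  - Auto tests: <links to the related automatic tests>, see PLATFORM_STANDARDS_AUTO_TESTS
  - Hudson report: <link to the test report>
  - Results: all scenarios pass except scenario 3, which revealed a bug reported as BUG_<ticket_id> (<link to the trac ticket>)
  - Attachments: <links to books or screenshots used for testing, if needed>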
Scoring
The testing reviewer should make sure everything listed in the above section is ok - the documentation is well written, the manual testing scenarios cover all aspects of the task, bugs are adequately reported, etc. Reviewers should either follow the standards in this document or comment on them in the Comments section of this page. If you state that a task does not comply with the standards, point to the requirements that are not met. Scores are in the range 1-5. Here are the rules for scoring the testing phase:
- Score 1 (fail): The testing phase is not structured according to the standards (or is to very little extent).
- Score 2 (fail): The testing phase is structured according to the standards in the most part but has some things that are missing or bugs that are not linked or explained or test cases are not suitable, etc. - in general - the testing does not cover all aspects of the functionality.
- Score 3 (pass): The testing phase is structured according to the standards, covers the functionality but lacks some descriptions and more things can be added.
- Score 4 (pass): The testing phase is structured according to the standards and provides detailed information according to the requirements mentioned above.
- Score 5 (pass): The testing phase is structured according to the standards and there's nothing more to be added - it's perfect in such a way that a person who is not quite familiar with the project can clearly see that the feature(s) is/are implemented really well.
All reviews should be motivated. A detailed comment about why the testing phase fails is required, and for a score of 3 a list of things that could be better should be provided. Comments are encouraged for higher scores as well. Scores should be in the range from 1 to 5, with a recommended step of 1; a 0.5 step is also possible, but if you give the testing a score of 3.5 you probably have not reviewed it thoroughly enough to state clearly whether it is good or not, so integer scores are recommended. Once the testing phase has been reviewed, it cannot be altered. If you think the review is wrong, you should request a super review. Currently all super reviews should be discussed with Milo. Make sure you are able to provide clear arguments about what the testing lacks before you request a super review.