wiki:PLATFORM_STANDARDS_AUTO_TESTS
Last modified on 01/26/09 21:35:53

Important note: This page is being worked on. You should regularly check for updates.

How to write automatic tests

This document contains requirements and guidelines for writing good auto tests (unit, integration, system). Here you will find information about what should be tested and how. Rules for reviewing will be provided as well. When writing automatic tests, do not forget the general guideline of the project: Work towards the goal!

General conventions

Here is some general information about the testing infrastructure of Sophie2:

  • We are using JUnit as a testing framework.
  • The base classes that are used for testing are in the org.sophie2.core.testing package.
  • Tests should either extend one of UnitTestBase, IntegrationTestBase, or SystemTestBase, or be marked with the @UnitTest, @IntegrationTest, or @SystemTest annotation when they need to inherit from another class.
  • All resources needed for tests should be placed in the /src/test/resources folder of the module where the test is in.
  • Resources should be loaded via the Class.getResource method (as an example, see how the log4j configuration file is loaded in the TestBase class).
  • Unit tests are required for all classes; integration tests should be written where needed; system tests are not required at the current stage of development.

Here follow some conventions about writing tests (these apply to all kinds of auto tests):

  • A test method should test a specific thing only.
  • A test class should have several test methods.
  • A test class/method should test only what it is intended to.
  • Test methods should be independent from one another.
  • Tests should not initialize instance fields outside of the setUp method; doing so lets state leak between test methods.
  • If you override setUp(), the first line of your setUp should be super.setUp().
  • If you override tearDown(), the last line of your tearDown should be super.tearDown().
  • Don't test trivial things.
  • Don't use random test data.
  • Test special (boundary) cases.
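A minimal sketch of how these conventions fit together. UnitTestBase here is a stand-in stub written only so the example compiles and runs on its own; the real base class lives in org.sophie2.core.testing and will differ:

```java
// Stand-in for the project's UnitTestBase, only so this sketch is
// self-contained; the real base class is in org.sophie2.core.testing.
class UnitTestBase {
    protected StringBuilder log = new StringBuilder();
    protected void setUp() { log.append("base-setUp;"); }
    protected void tearDown() { log.append("base-tearDown;"); }
}

public class ConventionsSketch extends UnitTestBase {
    private java.util.List<String> items; // initialized in setUp, not here

    @Override
    protected void setUp() {
        super.setUp(); // first line: call super.setUp()
        items = new java.util.ArrayList<String>();
        log.append("my-setUp;");
    }

    @Override
    protected void tearDown() {
        log.append("my-tearDown;");
        super.tearDown(); // last line: call super.tearDown()
    }

    // Each test method checks one specific thing and does not depend
    // on state left behind by any other test method.
    public void testAddStoresItem() {
        items.add("a");
        if (items.size() != 1) throw new AssertionError("expected one item");
    }

    public static void main(String[] args) {
        ConventionsSketch test = new ConventionsSketch();
        test.setUp();
        test.testAddStoresItem();
        test.tearDown();
        // Verify the lifecycle ran in the required order.
        if (!test.log.toString().equals("base-setUp;my-setUp;my-tearDown;base-tearDown;"))
            throw new AssertionError("wrong lifecycle order: " + test.log);
        System.out.println("lifecycle order ok");
    }
}
```

The main method only simulates what the JUnit runner does for real tests; it is there so the ordering rule is visible.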

Writing Javadoc is compulsory for all tests. Avoid fake Javadocs that merely repeat the method name in a sentence, as in the following example. Instead, provide a brief description of how the functionality is tested.

/**
 * Test method for {@link Storage#makeChild(String)}.
 */
@Test
public void testMakeChild() {
    // test method code here
    ...
}
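By contrast, a useful Javadoc says what the test actually verifies. The Storage class below is a hypothetical stub, included only so this sketch compiles and runs standalone; the real Storage API will differ:

```java
// Hypothetical stand-in for the real Storage class, for illustration only.
class Storage {
    private final String name;
    Storage(String name) { this.name = name; }
    String getName() { return name; }
    Storage makeChild(String child) { return new Storage(name + "/" + child); }
}

public class StorageTestSketch {
    /**
     * Checks that {@link Storage#makeChild(String)} returns a child whose
     * name is the parent's name plus the given segment, and that the
     * parent itself is left untouched.
     */
    public void testMakeChild() {
        Storage parent = new Storage("root");
        Storage child = parent.makeChild("pages");
        if (!child.getName().equals("root/pages"))
            throw new AssertionError("unexpected child name: " + child.getName());
        if (!parent.getName().equals("root"))
            throw new AssertionError("parent was modified");
    }

    public static void main(String[] args) {
        new StorageTestSketch().testMakeChild();
        System.out.println("testMakeChild passed");
    }
}
```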

Unit Testing

Units are the smallest testable parts of an application. In our case these are methods and classes. Unit tests ensure that every part of the application works correctly in isolation. We require unit tests to be written for all classes, because they:

  • Make refactoring easier - if you break something, you will notice immediately.
  • Simplify integration - when you know that separate units are working, you can easily integrate them.
  • Provide documentation - unit tests actually provide examples of using the features.
  • Provide design benefits - unit tests actually define the required functionality of the tested unit.

You can quickly start writing a unit test by making Eclipse generate its skeleton (i.e., one test method for each method of the tested class). Follow these steps to do that:

  • Go to the /src/test/java folder of the module the tested class is in.
  • Create a package with the same name as the package of the tested class (if it does not already exist).
  • Select New -> JUnit Test Case from the File or context menu.
  • Fill in the name of the test class and the class under test.
    • The name of the test class should start or end with Test. It is usually the name of the tested class, followed by Test.
    • You have to start typing in the class under test selection dialog in order to get results.
    • Since Javadoc is required, select Generate comments and click Next.
  • Select the methods you want to test and click Finish.
    • Simple getter methods do not need to be tested.
  • The new class is generated. Don't forget to write more informative Javadoc and actually fill in the test methods.
  • Change your class to extend UnitTestBase and feel free to add more test methods.
  • NOTE: Assertions must be enabled. To do that you may need to execute these steps:
    • Go to Window -> Preferences -> Java
      • In the JUnit subcategory make sure Enable assertions for new JUnit launch configurations is checked.
      • In the Installed JREs subcategory select the JRE you are using and add a -ea switch to the VM arguments.
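Because plain assert statements are silently skipped when the JVM runs without -ea, a quick way to verify your launch configuration is the classic side-effect idiom:

```java
public class AssertionCheck {
    public static void main(String[] args) {
        // The assignment inside the assert only executes when the JVM
        // was started with -ea; without it the flag stays false.
        boolean enabled = false;
        assert enabled = true;
        System.out.println("assertions enabled: " + enabled);
    }
}
```

Run this with your JUnit launch configuration: if it prints false, your asserts are doing nothing.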

For further reading on unit testing, see:
http://www.basilv.com/psd/blog/2006/how-to-write-good-unit-tests
http://en.wikipedia.org/wiki/Unit_testing

Integration Testing

Integration tests check the communication (integration) between two or more units of the application, specifically the flow of data and control from one component to another. Integration tests should be run after the participating units have been tested. Integration test cases should typically focus on scenarios where one component is called from another. The overall application functionality should also be tested to make sure it works when the different components are brought together. As a rule of thumb, if you call methods of a class other than the tested one, you are writing an integration test. Currently, integration tests should be written when they are a task requirement or when you feel that the task you work on can benefit from them.
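To make the rule of thumb concrete, here is a sketch with two hypothetical components (neither exists in the code base; the names are invented for illustration): a Parser that delegates lookups to a Registry. The test exercises the call from one into the other rather than either class in isolation:

```java
// Hypothetical components, for illustration only.
class Registry {
    private final java.util.Map<String, String> entries =
            new java.util.HashMap<String, String>();
    void register(String key, String value) { entries.put(key, value); }
    String lookup(String key) { return entries.get(key); }
}

class Parser {
    private final Registry registry;
    Parser(Registry registry) { this.registry = registry; }
    // Resolves "$name" tokens through the registry.
    String resolve(String token) {
        if (token.startsWith("$")) return registry.lookup(token.substring(1));
        return token;
    }
}

public class ParserRegistryIntegrationSketch {
    // Integration test: verifies the data flow Parser -> Registry,
    // not the internals of either class on its own.
    public void testResolveGoesThroughRegistry() {
        Registry registry = new Registry();
        registry.register("title", "Sophie2");
        Parser parser = new Parser(registry);
        if (!"Sophie2".equals(parser.resolve("$title")))
            throw new AssertionError("parser did not fetch the value from the registry");
    }

    public static void main(String[] args) {
        new ParserRegistryIntegrationSketch().testResolveGoesThroughRegistry();
        System.out.println("integration sketch passed");
    }
}
```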

For further reading on integration testing, see:
http://en.wikipedia.org/wiki/Integration_testing

System Testing

System (functional) testing ensures that the functionality achieved is the functionality expected. It does not require knowledge of the inner design of the application. Currently we have not reached the phase of system testing.

For further reading on system testing, see:
http://en.wikipedia.org/wiki/System_testing

Running tests

If you want to run a single test, right-click on the test class and select Run As... -> JUnit Test. A JUnit view will open with the test results. You can also run all tests via Maven or Hudson.

Maven

When building, Maven automatically runs all tests (except when -Dmaven.test.skip=true is passed as an argument). The results from the tests can be found in the /trunk/sophie2-platform/modules/<module-name>/target/surefire-reports folder. If you want to run the tests without building, you have two ways of doing that:

  • In Eclipse - right-click on the pom.xml file of the module that you want to run all tests for (sophie2-platform if you want to test all modules) and select Run as... -> Maven test.
  • In console - from the root folder of the module you want to test (or sophie2-platform if you want to test all modules), run the mvn test command.

Hudson

When building, Hudson also automatically runs tests. You can see the test results using the web interface at http://sophie2.org:8080. When you select the Sophie2.0 project from the home page, you will be taken to a summary page with a link to the latest test result. From there you can select tests and see their output. You can also select a build and see its tests - this is useful for finding out when a test was broken.

GUI testing

GUI testing is somewhat difficult, because it involves clicking, selecting, dragging, etc. - user actions that are hard to reproduce. Currently, GUI testing should be performed by using helper classes (like those in base.dialogs.mock) or by executing manual tests. At a later stage, a GUI testing library will be introduced to help with GUI testing.

Reviewing

This section contains rules for reviewing auto-tests as well as for doing and reviewing the testing phase of tasks.

Rules

Implementation reviewers should make sure that the created auto tests comply with the code and Javadoc conventions and the rules specified in this document; otherwise the implementation should not pass. An implementation whose unit tests fail due to a bug outside the tested functionality may pass, but this should be commented in the ticket.

In the testing phase all auto tests should be run. If there is a failure, it should be reviewed:

  • If it is due to a bad implementation of the task (i.e. functionality covered by this task is causing errors), a super review should be requested.
  • If it is due to an error in the test itself, it should be corrected.
  • If it is due to a bug in another part of the application, that bug should be reported.
  • If it is introduced by code committed later, there is no universal rule. Seek the best solution in each specific case.

In all cases the results from the tests should be described in the Testing section of the task's wiki page. Here is a more thorough description of what this section should contain:

  • A link to the user documentation describing this task.
  • A link to the release documentation if the result of this task will be part of the next release.
  • Links to use cases in Testlink where applicable.
    • see PLATFORM_STANDARDS_MANUAL_TESTS for more information on Testlink and manual testing scenarios.
    • related test cases should be considered and listed as well.
  • Links to all auto tests related to this task.
    • (recommended) A link to the Hudson test report regarding these tests.
  • Explanations of the results of the tests.
    • when there are failures an explanation of what the errors are due to.
  • A brief explanation of the bugs reported with links to their trac tickets.
    • links to related bugs should be provided as well.

Scoring

Reviewers should either follow the standards in this document or comment them in the Comments section of this page. If you state a task does not comply with the standards, point to the requirements that are not met. Scores are in the range 1-5. Here are the rules for scoring the testing phase:

  • Score 1 (fail): The testing phase is not structured according to the standards (or only to a very small extent).
  • Score 2 (fail): The testing phase is mostly structured according to the standards, but has missing pieces, bugs that are not linked or explained, unsuitable test cases, etc. In general, the testing does not cover all aspects of the functionality.
  • Score 3 (pass): The testing phase is structured according to the standards, covers the functionality but lacks some descriptions and more things can be added.
  • Score 4 (pass): The testing phase is structured according to the standards and provides detailed information according to the requirements mentioned above.
  • Score 5 (pass): The testing phase is structured according to the standards and there is nothing more to be added: it is good enough that even a person who is not familiar with the project can clearly see that the feature(s) are implemented really well.

All reviews should be motivated. A detailed comment about why the testing phase fails is required. For a score of 3, a list of things that could be better should be provided. Comments are encouraged for higher scores as well. Non-integer scores are STRONGLY discouraged. If you give the testing a score of 3.5, then you probably have not reviewed it thoroughly enough and cannot clearly state whether it is good or not. Once the testing phase has been reviewed, it cannot be altered. If you think the review is wrong, you should request a super review. Currently all super reviews should be discussed with Milo. Make sure you can provide clear arguments about what the testing lacks before you request a super review.

Comments

Your comment here --developer.id@YYYY-MM-DD