IMPORTANT NOTE: This document has not yet passed the implementation phase. To see the latest approved version, go to PLATFORM_STANDARDS_GENERAL.

General Platform Standards

This document contains information about the general standards for analysing, designing, implementing and testing the different kinds of tasks. For the specific requirements of a given phase, follow the links in the Other standards section at the end of this document. When doing a given task, you should not forget the general guideline of this project: Work towards the goal!

Task kinds

Currently we have five task kinds, each with different requirements for what should be done in each of the four phases - analysis, design, implementation and testing. Note that the examples given were created according to the old standards and do not fully reflect the new ones. Some tasks may belong to several of the categories below.

Coding task

These are tasks that are related to code - adding new functionality, improving features, refactoring, etc. The different phases' requirements are as follows:

  • Analysis - contains a brief overview of the task and information about the required functionality, the expected results, ideas for implementation and demonstration, as well as links to related tasks.
  • Design - describes the technologies that will be used for reaching the task's requirements. It should contain initial tests, libraries needed, rough algorithm explanation, class diagrams, etc.
  • Implementation - describes what has been done. The following things are required:
    • a link to the changeset of the commit(s) where the modifications were done
    • explanation of the changes made (improvements, added functionality, etc.)
    • links to any new classes/packages/modules created
    • links to any new auto tests added (a minimal sketch follows this list)
  • Testing - includes writing user documentation, release documentation (where applicable), manual test cases in Testlink, executing test cases and reporting bugs.
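
To make the auto tests mentioned above concrete, here is a minimal sketch in Java using JUnit 4. The StringUtils helper and its capitalize method are hypothetical stand-ins, nested inside the test class only to keep the example self-contained:

{{{
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Minimal JUnit 4 sketch of the kind of auto test an Implementation
// section should link to. StringUtils is a hypothetical helper,
// defined inline so the example compiles on its own.
public class StringUtilsTest {

    static class StringUtils {
        static String capitalize(String s) {
            if (s.isEmpty()) {
                return s;
            }
            return Character.toUpperCase(s.charAt(0)) + s.substring(1);
        }
    }

    @Test
    public void capitalizeMakesFirstLetterUpperCase() {
        assertEquals("Hello", StringUtils.capitalize("hello"));
    }

    @Test
    public void capitalizeLeavesEmptyStringUnchanged() {
        assertEquals("", StringUtils.capitalize(""));
    }
}
}}}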

Coding tasks can be split into the following subkinds that have specific requirements for the different phases:

  • External feature (visible to the user):
    • Analysis should include a specification (a draft of a specification diagram is recommended).
    • Design should provide:
      • a manual testing scenario and/or initial auto tests/demos
      • if the code exists - a description of what is to be added/changed
    • Implementation should follow the design and make sure all features work and all tests pass. During the process more tests should be added where needed.
    • Testing does not have any specific requirements for this kind of task.
    • Example - PRO_LIB_INSPECTOR_R0
  • Researching a technology or library
    • Analysis should state what needs to be researched - what the new technology/library will solve and why it is needed.
    • Design should do the actual research. You can experiment with things but you should not pollute the main source - use the new libraries in a copy of the project or in another branch.
    • Implementation should present the written results/conclusions of your research, demo code, tutorials and how-tos, etc.
    • Testing should make sure the library is usable and suits our needs.
    • Example - S2S_DEPLOY_TECHNOLOGIES_R0
  • Internal feature (not directly visible)
    • Analysis should state what the new feature should provide.
    • Design does not have any specifics related to this kind of task.
    • Implementation should follow the design, make sure the required functionality is achieved and add tests where needed.
    • Testing does not have any specific requirements for this kind of task.
    • Example - BASE_PERSISTENCE_COMMONS_R0
  • Structure changes (refactoring)
    • Analysis should state what needs to be changed (and why).
    • Design should explain the changes to be made and how the problems can be fixed.
    • Implementation should do the actual refactoring and describe the changes made (a minimal before/after sketch follows this list).
    • Testing should make sure the changes do not break the old code.
    • Example - PRO_LIB_CORE_COMMONS_R1
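
As a sketch of the structure-change subkind, here is a hypothetical before/after extract-method refactoring in Java. The OrderService class and its methods are invented for illustration; the point is that external behaviour stays the same while duplication is removed, which the existing auto tests should confirm.

{{{
// Before: the same validation logic is duplicated in two methods.
public class OrderService {
    public void placeOrder(String id) {
        if (id == null || id.isEmpty()) {
            throw new IllegalArgumentException("id must not be empty");
        }
        // ... place the order ...
    }

    public void cancelOrder(String id) {
        if (id == null || id.isEmpty()) {
            throw new IllegalArgumentException("id must not be empty");
        }
        // ... cancel the order ...
    }
}
}}}

{{{
// After: the duplication is extracted into one private helper;
// the observable behaviour is unchanged.
public class OrderService {
    public void placeOrder(String id) {
        requireNonEmpty(id);
        // ... place the order ...
    }

    public void cancelOrder(String id) {
        requireNonEmpty(id);
        // ... cancel the order ...
    }

    private void requireNonEmpty(String id) {
        if (id == null || id.isEmpty()) {
            throw new IllegalArgumentException("id must not be empty");
        }
    }
}
}}}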

Bug Fix

Bug fixes are unplanned tasks. They represent different kinds of unwanted application behavior - missing functionality, incorrect functionality, errors, etc. Bug tasks should be entered in Trac as "BUG_TASK_NAME". Their phases are the same as for other tasks but differ in meaning. The analysis is a thorough description of the bug and is written when the bug is reported. There is no testing phase for bug fixes - the implementation reviewer should make sure the bug is fixed. Here is a brief explanation of the requirements of each phase:

  • Analysis - contains a detailed description of the bug.
  • Design - suggests ideas for fixing the bug.
  • Implementation - does the actual fixing and describes it (see the regression-test sketch after this list).
  • Testing - currently there is no testing phase for bug fixing tasks.
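
Since there is no testing phase, a regression test committed with the fix is a good way to show the bug is gone. Here is a hypothetical sketch in Java/JUnit 4; the Range class and the off-by-one bug are invented for illustration. The test fails before the fix and passes after it, so the bug cannot silently return in a later revision:

{{{
import static org.junit.Assert.assertTrue;
import org.junit.Test;

// Hypothetical regression test for an imaginary bug report:
// Range.contains(x) used to return false for the upper bound
// because the buggy code compared with '<' instead of '<='.
public class RangeContainsBugTest {

    static class Range {
        final int low, high;
        Range(int low, int high) { this.low = low; this.high = high; }
        boolean contains(int x) { return low <= x && x <= high; } // fixed version
    }

    @Test
    public void upperBoundIsInsideTheRange() {
        assertTrue(new Range(1, 10).contains(10));
    }
}
}}}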

Document

Document tasks require the creation/improvement of different documents. In most cases, these documents are auxiliary to other tasks. The most common type of document we use is a wiki page, but the result may also be a Google doc or something else. The content of the documents varies and may include text, diagrams, media files, spreadsheets, etc. There is no testing phase for document tasks. Instead, the implementation reviewer should make sure the document is written as expected. Here are the requirements for the different phases:

  • Analysis - contains document requirements (type, structure and contents of the document).
  • Design - provides an outline of the document with the sections that it contains and a description of each section's content.
  • Implementation - contains links to the document(s) and a brief overview. An explanation of things added/changed should be provided.
  • Testing - currently there is no testing phase for the document tasks.
  • Example - PLATFORM_DEPLOYMENT_BUILD_ECLIPSE_R1

Setup

The result of these tasks is the hardware/software setup of different computer appliances that will be used for executing other tasks. These include website, wiki, developer platform setup, etc. There is no testing phase for setup tasks. Instead, the implementation reviewer should make sure the setup is done well. Here are the requirements for the different phases:

  • Analysis - states what the requirements for this appliance are - both hardware and software. For example, some of the community server hardware requirements are hard disk space and bandwidth, while the software ones include a running web server, security configuration, etc.
  • Design - describes which computer appliance will satisfy the requirements, how it will be set up, what technologies will be used.
  • Implementation - describes how the appliance was set up and links to the result (a new server, some wiki pages, etc.).
  • Testing - currently there is no testing phase for setup tasks.
  • Example - SCS_MACHINE_SETUP_R1

Maintenance

Maintenance tasks should keep servers, important documents, code, etc. in perfect working condition. These tasks have revisions in every iteration. There is no testing phase for maintenance tasks. Instead, the implementation reviewer should make sure the maintenance is performed well. Here are the requirements for the different phases:

  • Analysis - covers the current issues of the server/document/code and suggestions for improvement. It should also contain a list of trivial actions that have to be performed in every revision.
  • Design - explains what should be done to meet the requirements and links to the tools that will be used, algorithms, diagrams and whatever else is needed for an easy implementation.
  • Implementation - consists of the trivial actions performed in every maintenance pass and the improvements listed in the design. The implementation steps should be described, and a link should be provided to any documents maintained.
  • Testing - currently there is no testing phase for maintenance tasks.
  • Example - INTERNAL_BACKLOG_MAINTENANCE_R2

Task results

The results of a task should be described in the Implementation section of the task's page. Depending on the task kind, results can include (but are not limited to):

  • Source code (with unit tests)
  • Diagrams (or the design section of the same task)
  • Documents (wiki pages, googledocs, etc.)

The expected results should be described in the Task Result section of the analysis and linked in the Implementation section. Different revisions may link to the same results, but the changes made in the current revision should be described.

Reviewing

Requirements

The general requirements for each phase are described in the task kind sections above. For specific requirements, see the other standards pages linked below.

Scoring

Reviewers should either follow the standards in this document or comment on them in the Comments section of this page. If you state that a task does not comply with the standards, point to the requirements that are not met. Scores are in the range 1-5. Here are general guidelines for what score a given task should get. Specific guidelines for the separate phases can be found in the other documents (linked below).

  • Score 1 - the phase reviewed does not comply with the standards in this document or the other standards documents.
  • Score 2 - the phase reviewed complies with the standards for the most part but has some things missing, is unclear, or might mislead the designer/implementer.
  • Score 3 - the phase reviewed complies with the standards but could be structured better and could include considerably more detail.
  • Score 4 - the phase reviewed complies with the standards and is clear, well-structured and sufficient.
  • Score 5 - the phase reviewed complies with the standards; it is clear, well-structured and sufficient, and there is nothing more to add - even a person who is not deeply involved in the project can understand it.

All reviews should be motivated. A detailed comment explaining why a task phase fails is required. For a score of 3, a list of things that have to be improved/added should be provided. Comments are encouraged for higher scores as well. Non-integer scores are STRONGLY discouraged. If you give a task a score of 3.5, you probably have not reviewed it thoroughly enough and cannot clearly state whether it is good or not. Once a given phase has been reviewed, it cannot be altered. If you think a review is wrong, you should request a super review. Currently, all super reviews should be discussed with Milo. Make sure you can provide clear arguments about what is wrong before you request a super review.

Naming conventions for wiki pages

When creating a new wiki page, comply with the following conventions:

  • Page names consist of uppercase words separated by underscores (e.g. PLATFORM_STANDARDS_GENERAL).
  • Task pages carry a revision suffix _R<number> (e.g. PRO_LIB_INSPECTOR_R0, INTERNAL_BACKLOG_MAINTENANCE_R2).
  • Bug task pages are named "BUG_TASK_NAME".

Other standards

This document provides only general rules. For specific ones, take a look at:
PLATFORM_STANDARDS_ANALYSIS
PLATFORM_STANDARDS_DESIGN
PLATFORM_STANDARDS_CODE
PLATFORM_STANDARDS_AUTO_TESTS
PLATFORM_STANDARDS_MANUAL_TESTS

Comments

Your comment here --developer-id@YYYY-MM-DD