wiki:REPORTS
Last modified on 10/15/09 23:28:39


Overview

Since we need a way to track each developer's activities both daily and over each sprint as a whole, we have defined two kinds of reports: the daily report and the iteration report. Daily reports describe everyone's work for a single day. Iteration reports give an overall snapshot of a developer's work during the iteration, together with statistics about time, productivity, etc.

Daily report

  • All reports must be put in a file named "<user>-<YYYY-MM-DD>.txt", where <user> is the user id and the rest is today's date in ISO format.
  • The naming is case sensitive. Example: "tanya-2009-09-13.txt"
  • The report must contain several lines, each describing one performed activity.
  • Each activity must be described on a single line.
  • You must follow the format strictly, or the report-generating scripts will fail.
  • To check the consistency of a daily report, run the '0check.py' file, which is located in '/manage/reports/'.
  • Running this file requires Python.
  • On Windows: open Command Prompt and enter "python 0check.py"
  • On Linux: open Terminal and enter "python2.5 0check.py"
  • The script reports whether there are mistakes and where they are located.
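As a sketch of the naming rule above (the function names are hypothetical; the official consistency check is 0check.py, not reproduced here):

```python
import datetime
import re

def daily_report_filename(user: str, date: datetime.date) -> str:
    """Build the case-sensitive daily report name <user>-<YYYY-MM-DD>.txt."""
    return f"{user}-{date.isoformat()}.txt"

def is_valid_daily_name(name: str) -> bool:
    """Check the <user>-<YYYY-MM-DD>.txt pattern (lowercase user id assumed)."""
    return re.fullmatch(r"[a-z]+-\d{4}-\d{2}-\d{2}\.txt", name) is not None

print(daily_report_filename("tanya", datetime.date(2009, 9, 13)))  # tanya-2009-09-13.txt
```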

Each line must be formatted as: <activity> <task_id> <status> <time> [comment]

<activity>:

  • one of: an, de, im, te, an-re, de-re, im-re, te-re, an2, de2, im2, te2, an-re2, de-re2, im-re2, te-re2, an3, ...
  • meaning analysis, design, implementation, test, analysis-review, etc.
  • if a review fails, the index of the activity increases: if "an-re" indicates that "an" failed, the next round is "an2" (which is reviewed in "an-re2"), and so on.
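The round-numbering rule can be sketched in Python; the helper names below are illustrative, not part of the project's tooling:

```python
def next_round(activity: str) -> str:
    """Return the work activity code for the round after a failed review:
    an -> an2, an2 -> an3, de2 -> de3, ..."""
    base = activity.rstrip("0123456789")   # strip the round index, if any
    digits = activity[len(base):]
    index = int(digits) if digits else 1   # an unnumbered code is round 1
    return f"{base}{index + 1}"

def review_of(activity: str) -> str:
    """Return the review code for a work activity: an -> an-re, im2 -> im-re2."""
    base = activity.rstrip("0123456789")
    digits = activity[len(base):]
    return f"{base}-re{digits}"
```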

<task_id>:

  • either a real task ID (according to the WBS or the wiki, like PLATFORM_STRUCTURE_R0)
  • or an UNTRACKED_BLA_BLA identifier describing what you have done. In this case, it should be something that was determined to be necessary for whatever reason but is not part of a given task. Spending time on such things is a negative indicator.

<status>:

  • If the activity is work (not review), the status is the current progress (after the performed work) as a percentage, like "30%" or "100%". Please give realistic estimates.
  • If the activity is a review, use 1p, 2p, 3p, 4p, or 5p for the review score. 1p or 2p means not acceptable; 3p, 4p, or 5p means acceptable.

<time>:

  • Time describes the amount of time spent on the activity.
  • Use a number in minutes or hours.
  • Suffix the number with the unit used ('m' for minutes, 'h' for hours). Floating point is allowed. For example:
  • 20m
  • 0.5h
  • 12h

<comment>:

  • Put a brief comment in free text. You can write whatever you want up to the end of the line. For a review, it should be only a summary of the remarks provided in the task log.
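Assuming the rules above, a daily report line could be parsed with a regular expression like the following sketch. This is not the actual 0check.py logic; the pattern also allows half-point review scores such as 3.5p, which appear in the real example reports.

```python
import re

# <activity> <task_id> <status> <time> [comment]
LINE_RE = re.compile(
    r"^(?P<activity>[a-z]+(?:-re)?\d*)\s+"     # an, de2, im-re3, ...
    r"(?P<task_id>[A-Z][A-Z0-9_]*)\s+"         # PLATFORM_STRUCTURE_R0, UNTRACKED_...
    r"(?P<status>\d{1,3}%|[1-5](?:\.5)?p)\s+"  # 30%, 100%, 2p, 3.5p
    r"(?P<time>\d+(?:\.\d+)?[mh])"             # 20m, 0.5h
    r"(?:\s+(?P<comment>.*))?$"                # optional free-text comment
)

def parse_line(line: str):
    """Return the fields of one report line as a dict, or None if malformed."""
    m = LINE_RE.match(line.strip())
    return m.groupdict() if m else None
```

Fields are separated by arbitrary whitespace, so the tab-aligned comments in the examples below parse as well.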

Other notes:

  • If two developers work together, they should both log the work in their reports.

Example: (tanya-2008-09-22.txt)

an-re PLATFORM_STRUCTURE_R0	2p 15m 		 Unclear and a bit wrong (self conflicting).
an-re PLUGIN_SUPPORT_LIB_BASE_R0 2p 5m 		 Not on the template, not clear enough.
an2 CORE_MVC_BASE_R0 100% 40m			 Refactored.
an2 BASE_MODEL_FRAME_CONTENT_R0 100% 40m	 Refactored.
de BASE_MODEL_FRAME_CONTENT_R0 100% 70m		 Should be ready.
an-re PLATFORM_STANDARDS_REPOSITORY_R0 2p 15m	 Some parts of the analysis are missing.
an-re PRO_LIB_CORE_TUTORIAL_R0 2p 5m		 Does not apply to the template.
an-re BASE_BOUND_CONTROLS_R0 2p 10m 	         This analysis is for the whole task.
an-re BOOK_WINDOW_R0 1p 0.25h			 Does not apply to the template, other problems also.

Weekly Subteam Report

  • The purpose of this report is to shorten the weeklies and help the subteams prepare for the sprint closings more easily; the result is smaller overhead and less time lost in meetings.
  • This report should be written by each subteam's analysis person. It is recommended to discuss some aspects of it with the subteam leader.
  • The Weekly Subteam Report is a .txt document which contains:
    • Subteam members. This is needed because some teams may change in the future.
    • Information about the regular committing of daily reports
      • Each team member should have daily reports for the days he/she was working. It is the duty of the analysis person in each subteam to check that all subteam members write and commit their daily reports regularly.
    • Information about the Internal Backlog filling
      • Whether all subteam members have filled their availability
      • Collect the impediments and the open questions submitted by the subteam members
    • Information about the tasks that the subteam took on the weekly
      • What tasks were declared
      • Which of them are finished
      • Which are not
      • Propositions on how to proceed with the unfinished tasks: whether another person should take them over or whether they will be done by the same person (not recommended). This part should be discussed with the subteam leader.
    • This report could be in plain text.
  • File names are t?-m??-week?.txt
    • t? - stands for team name
    • m?? - stands for the current iteration.
    • week? - stands for the number of the week.
  • Example
    t3-m08-week3.txt
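A sketch of how this naming scheme could be checked (not part of the official scripts; the pattern name is an assumption):

```python
import re

# t? - team, m?? - iteration, week? - week number, e.g. t3-m08-week3.txt
WEEKLY_NAME = re.compile(r"^t\d+-m\d{2}-week\d+\.txt$")

def is_valid_weekly_name(name: str) -> bool:
    return WEEKLY_NAME.match(name) is not None
```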
    

Iteration report

  • The name of the file where it is stored should follow the format: <name>-<milestone_id>-final.txt where:
    • <name> is the name of the person whose report is written in the file.
    • <milestone_id> is the id of the milestone for example: m01, m02 ... m11, m12
    • final points out that this is the final report for the current iteration.
  • It should be written after the sprint is over and before the next one has started.
  • Example:
    tanya-m01-final.txt
    
  • The first part of the report is for statistics and things are arranged in the following order:
    • work-time - total time spent working in hours.
    • reported-time - total time according to reports in hours.
    • reported-factor - reported-time/work-time (in percents).
    • tracked-work - the total reported time spent on tracked tasks followed by the list of tasks that the person has worked on as they appear on daily reports.
    • untracked-work - the total reported time spent on untracked tasks followed by the list of tasks that the person has worked on as they appear on daily reports.
    • There are four formulae whose sum represents the score one accumulates over the sprint:
      • Let 't' be a task. It provides some attributes used in the formulae to obtain the score. The attributes we use to calculate the formulae are:
        • t-effort - represents the effort of every task.
        • t-part - is the part of the task that is done by the particular person. For example, if an analysis is done by two people, each of them will have a t-part of 0.5. Note that we have a t-part for analysis, design, implementation, and testing.
        • t-an-score - these are the points assigned to the analysis part after the an-re has passed.
        • t-im-score - these are the points assigned to the implementation part after im-re has passed. Note that we do not have a t-de-score, since design and implementation go together.
        • t-te-score - these are the points assigned to the testing part after the te-re has passed.
      • analysis-score:
        • represents the score one accumulates for done analysis.
        • is calculated by the formula: sum(( t-effort * t-part * t-an-score ) / 16)
      • de-im-score:
        • represents the score one accumulates for done design + implementation.
        • is calculated by the formula: sum(( t-effort * t-part * t-im-score ) / 4)
      • te-score:
        • represents the score one accumulates for done testing.
        • is calculated by the formula: sum(( t-effort * t-part * t-te-score ) / 16)
      • re-score:
        • represent the score one accumulates for done review.
        • is calculated by the formula: sum(( t-effort * (-1)^wrong ) / 16), where wrong is a boolean value (1 or 0) defining whether the review was WRONG or NOT WRONG. This way (-1)^wrong == 1 when wrong == 0, and (-1)^wrong == -1 when wrong == 1.
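The four formulae can be combined into a small scoring sketch. The Task structure and its field names are assumptions for illustration; only the arithmetic follows the formulae above.

```python
from dataclasses import dataclass

@dataclass
class Task:
    effort: float          # t-effort
    an_part: float = 0.0   # t-part for analysis
    im_part: float = 0.0   # t-part for design + implementation
    te_part: float = 0.0   # t-part for testing
    an_score: float = 0.0  # t-an-score
    im_score: float = 0.0  # t-im-score
    te_score: float = 0.0  # t-te-score

def analysis_score(tasks):
    return sum(t.effort * t.an_part * t.an_score / 16 for t in tasks)

def de_im_score(tasks):
    return sum(t.effort * t.im_part * t.im_score / 4 for t in tasks)

def te_score(tasks):
    return sum(t.effort * t.te_part * t.te_score / 16 for t in tasks)

def re_score(reviews):
    # reviews: iterable of (t_effort, wrong) pairs, wrong being 0 or 1
    return sum(effort * (-1) ** wrong / 16 for effort, wrong in reviews)
```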
  • The second part of the report is for other information which is more general and analytical and it contains the following:
    • What went well - everyone describes by bullets what, in their opinion, went well during the sprint.
    • What could be better - everyone describes by bullets what could be improved. Note that these are general ideas that would help the team.
    • How to improve - for every bullet of the previous section, one should write a solution to the problem.
    • Comments - this part is optional. It contains general comments pointed out by bullets. It can contain complaints, wishes, desires, ideas, proposals etc.
  • Every report should end with a newline.

Example:

work-time: 70h
reported-time: 59h
reported-factor: 80.8%
tracked-work: 59h
untracked-work: 0h
analysis-score: 10p
de-im-score: 10p
te-score: 10p
re-score: 10p 
list:
an PLATFORM_STANDARDS_AUTO_TESTS_R0 100% 20m
an GLOBAL_SPEC_STRUCTURE_R0 100% 25m
an SCS_MAIL_LIST_R0 100% 20m
an SCS_FORUM_R0 100% 15m
an INTERNAL_BACKLOG_STRUCTURE_R0 100% 25m
an PLATFORM_STANDARDS_CODE_R0 50% 30m
an APP_MAIN_WINDOW_R0 100% 35m
im UNTRACKED_GENERAL_DISCUSSION 1% 1.5h
im UNTRACKED_WBS 1% 40m Adding Revisions
an-re PLATFORM_STANDARDS_ANALYSIS_R0 4p 30m
im-re SCHEDULE_WBS_TIME_ALLOC_R0 3.5p 2h
an2 PLATFORM_STANDARDS_CODE_R0 100% 20m
de PLATFORM_STANDARDS_CODE_R0 100% 20m
an2 INTERNAL_BACKLOG_STRUCTURE_R0 100% 20m
de INTERNAL_BACKLOG_STRUCTURE_R0 100% 10m
im INTERNAL_BACKLOG_STRUCTURE_R0 100% 3h
im-re SCS_FORUM_R0 4p 5m
im-re PLATFORM_STANDARDS_AUTO_TESTS_R0 3p 40m
an2 PLUGIN_MODULE_STRUCTURE_R0 100% 40m
im UNTRACKED_ANALYSIS_REVIEW 1% 2h
an-re PLATFORM_STRUCTURE_R0	2p 15m 				Unclear and a bit wrong (self conflicting).
an-re PLUGIN_SUPPORT_LIB_BASE_R0 2p 5m 			Not on the template, not clear enough.
an2 CORE_MVC_BASE_R0 100% 40m					Refactored.
an2 BASE_MODEL_FRAME_CONTENT_R0 100% 40m		Refactored.
de BASE_MODEL_FRAME_CONTENT_R0 100% 70m			Should be ready.
an-re PLATFORM_STANDARDS_REPOSITORY_R0 2p 15m	Some parts of the analysis are missing.
an-re PRO_LIB_CORE_TUTORIAL_R0 2p 5m			Does not apply to the template.
an-re BASE_BOUND_CONTROLS_R0 2p 10m 			This analysis is for the whole task.
an-re BOOK_WINDOW_R0 1p 0.25h					Does not apply to the template, other problems also.
an-re PLATFORM_STANDARDS_REPOSITORY_R0 3p 10m
de-re PLATFORM_STANDARDS_REPOSITORY_R0 3p 10m
im-re PLATFORM_STANDARDS_REPOSITORY_R0 3.5p 10m
an-re PLUGIN_DECOMPOSITION_R0 3.5p 30m
im UNTRACKED_DISCUSSION_ANALYSIS 1% 4h explanations of how to do things right
de-re PLUGIN_DECOMPOSITION_R0 3p 10m
an-re PLATFORM_NFR_EXTENSIBILITY_R0 2p 10m The analysis should be more detailed.
an-re PLATFORM_STANDARDS_DESIGN_R0 4p 10m OK
im UNTRACKED_ALL_TASKS_REVIEW 1% 15m
de-re PLATFORM_STANDARDS_DESIGN_R0 3p 10m One of the points was a little incorrect.
im PLATFORM_NFR_COMPATIBILITY_R0 100% 2h Wiki page PLATFORM_NFR_COMPATIBILITY is created.
im UNTRACKED_REPORTS_REVIEW 20% 10m Reports Template is applied to one of the old results.
im-re PLATFORM_STANDARDS_DESIGN_R0 2p 5m Design template is not done.
an-re PLATFORM_STANDARDS_GENERAL_R0 3.5p 10m
de-re PLATFORM_STANDARDS_GENERAL_R0 3p 10m
im-re PLATFORM_STANDARDS_GENERAL_R0 3.5p 30m
an-re PLATFORM_INFRASTRUCTURE_OVERVIEW_R0 4p 5m
de-re PLATFORM_INFRASTRUCTURE_OVERVIEW_R0 4p 5m
im-re PLATFORM_INFRASTRUCTURE_OVERVIEW_R0 4p 1h
an-re PLUGIN_MODULE_STRUCTURE_R0 3.5p 30m Review + Refactor
an-re SCS_ISSUE_TRACKER_MAINTAINCE_R0 3.5p 10m
de-re SCS_ISSUE_TRACKER_MAINTAINCE_R0 3.5p 10m
im UNTRACKED_REPORTS_FIX 100% 30m Apply template to old reports.
an-re BASE_BOUND_CONTROLS_R0 3.5p 10m
im UNTRACKED_ECLIPSE_CONFIG 100% 3h
de PLUGIN_MODULE_STRUCTURE_R0 25% 3h Research some things. Get into JPF and Maven.
im UNTRACKED_MAVEN_JPF_RESEARCH 1% 4h
de PRO_LIB_CORE_TUTORIAL_R0 100% 30m
im PRO_LIB_CORE_TUTORIAL_R0 50% 6h
an-re PLATFORM_NFR_EXTENSIBILITY_R0 2p 5m Incorrect. You don't have to describe plugin architecture.
de-re PLATFORM_NFR_EXTENSIBILITY_R0 1p 5m It is analysis not design.
im PRO_LIB_CORE_TUTORIAL_R0 25% 1h
im PRO_LIB_CORE_TUTORIAL_R0 100% 6h Finished!
im UNTRACKED_SPRINT_REVIEW 100% 5h
untracked-work: 0h
list:
What went well:
* We learnt how to write analysis and design and how to make review.
* We learnt how easily to track our work (for example how to write reports). It will be very useful.
* Have tutorials and unit-tests helpers - it will be easier for the development process.
What could be better:
* Better analysis
* Dependencies
How to improve:
* List the tasks according to the dependencies
* Ask! if you don't know what the task is about

Comments

  • The name of the page is incorrect. It should be Reports, or a sub page of the task page that generated it. --milo @ 2008-12-22
  • The example report for the iteration is not entirely correct. You should include some of the information, but the statistics are now generated by manage/reports/0check.py; you can see an example of the generated data at manage/reports/mega-report-2008-12-22.txt