Conduct test metrics analysis in several steps:
• sorting data into categories,
• performing calculations,
• analyzing the results to determine trends.
Sort Data into Categories
Sort the data gathered into the applicable categories, based on the objectives defined at the beginning of the test metrics evaluation process. For example, if the objective is to determine the most frequent source of defects, sort the data by source of defect.
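As an illustration, a minimal Python sketch of this grouping step might look like the following; the defect records and their field names (source, hours_to_fix) are hypothetical, not a prescribed format.

```python
from collections import defaultdict

# Hypothetical defect records; the field names are illustrative only.
defects = [
    {"id": 1, "source": "coding", "hours_to_fix": 2.0},
    {"id": 2, "source": "functional specification", "hours_to_fix": 9.5},
    {"id": 3, "source": "coding", "hours_to_fix": 1.5},
]

# Group the records by the category that matches the objective.
by_source = defaultdict(list)
for defect in defects:
    by_source[defect["source"]].append(defect)
```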
Perform Calculations
Perform calculations, such as percentages and averages, to compare the types of data gathered.
For example, to determine the percentage for each source-of-defect category, count the defects from each source and the total number of defects from all sources. Then divide each source's count by the total number of defects to identify the category with the highest percentage of occurrence.
Frequently, additional calculations are needed. For example, to determine the cost to fix each source-of-defect category, calculate the percentage of effort required to fix the defects in each category: total the hours spent fixing defects in each category and the hours spent fixing all defects, then divide each category's hours by the total hours for all defects.
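Continuing the hypothetical sketch above, both the frequency percentage and the effort percentage can be derived from the grouped records:

```python
# Totals across all categories.
total_defects = sum(len(records) for records in by_source.values())
total_hours = sum(
    d["hours_to_fix"] for records in by_source.values() for d in records
)

# Percentage of occurrence and percentage of fix effort per category.
for source, records in by_source.items():
    pct_defects = 100 * len(records) / total_defects
    pct_effort = 100 * sum(d["hours_to_fix"] for d in records) / total_hours
    print(f"{source}: {pct_defects:.1f}% of defects, {pct_effort:.1f}% of effort")
```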
Analyze to Determine Trends
Review the calculated findings to determine trends. For example, the analysis might show that the majority of the defects were coding defects, but that the functional specification defects were the most costly to fix.
Additional calculations may also be necessary. For example, determining the average number of hours to fix each source-of-defect category may allow a better comparison of the cost of defects.
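Using the same hypothetical records, that average could be computed as:

```python
# Average hours to fix a defect in each category, for cost comparison.
for source, records in by_source.items():
    avg_hours = sum(d["hours_to_fix"] for d in records) / len(records)
    print(f"{source}: {avg_hours:.1f} hours per defect on average")
```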
Evaluate the causes of the findings. For example, further discussion with the programmers might reveal that functional specification defects take longer to fix because the analysis and design processes must be repeated.
The analysis process may also reveal additional factors that need to be collected to improve the reliability of the findings. For example, to clarify the causes of the coding errors, it may help to identify the programs where each defect occurred, as well as the complexity of those programs. When determining complexity, established metrics such as McCabe's Complexity Measure and Function Point Estimating are useful.
Whenever possible, use automated means, such as a database, to accumulate and analyze test metrics.
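As one possible approach, a lightweight database such as SQLite can accumulate the records and answer the earlier questions with a single query; the table and column names below are illustrative assumptions:

```python
import sqlite3

# Hypothetical schema; adapt the table and columns to the metrics collected.
conn = sqlite3.connect("test_metrics.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS defects (
           id INTEGER PRIMARY KEY,
           source TEXT,
           hours_to_fix REAL,
           found_on TEXT
       )"""
)

# Frequency and cost per source-of-defect category, in a single query.
for source, count, hours in conn.execute(
    """SELECT source, COUNT(*), SUM(hours_to_fix)
       FROM defects
       GROUP BY source
       ORDER BY COUNT(*) DESC"""
):
    print(source, count, hours)
```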
When to Complete the Analysis
Depending on the objective, the analysis process can be done on a continual basis or at the end of the data gathering period. When the objective is improvement of the current project, the analysis process should be done on a continual basis during the data collection. When the objective is improvement of a future project, the analysis process can be done at the end of the data gathering period.
Continuous analysis can be comparative or cumulative. For example, monitoring system readiness by calculating and comparing the number of defects found each day, to determine whether there is a downward trend, illustrates comparative analysis. In other situations, the analysis needs to be done on a cumulative basis; for example, when determining the most frequent and costly types of defects, continuously analyzing the information as it accumulates helps identify areas that require corrective action early.
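A minimal sketch of the comparative case, assuming daily defect counts have been collected (the figures are placeholders):

```python
# Hypothetical defects found per day during system readiness monitoring.
daily_defect_counts = [14, 12, 12, 9, 7, 6]

# Comparative check: does each day improve on (or match) the day before?
downward = all(
    today <= yesterday
    for yesterday, today in zip(daily_defect_counts, daily_defect_counts[1:])
)
print("Downward trend" if downward else "No consistent downward trend")
```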
An example of analysis at the end of the data gathering period is collecting the work hours spent on testing in order to estimate future testing efforts. For greater reliability, accumulate and analyze data from as many projects as possible.
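A simple sketch of that end-of-period use, assuming hours have been recorded per project (the figures are placeholders):

```python
# Hypothetical testing work hours recorded at the end of past projects.
hours_per_project = [320, 410, 365, 390]

# A simple estimating basis: the average effort across collected projects.
estimate = sum(hours_per_project) / len(hours_per_project)
print(f"Estimated testing effort for a similar project: {estimate:.0f} hours")
```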