Wednesday, June 10, 2009

Test Process Improvement


Test Process Improvement (TPI) is a tried and tested, structured assessment of an organization's testing maturity, carried out with a view to improving the overall effectiveness and efficiency of its testing and QA.

TPI was developed by Sogeti, a wholly owned subsidiary of the international Capgemini organization. It goes hand in hand with the Test Management Approach (TMap), another testing methodology from Sogeti.

The TPI methodology evaluates the current state of an organization's QA and testing processes along 20 key dimensions (Key Areas). It provides a quick, well-structured status report on the existing maturity of the test processes, as well as a detailed road map of the steps needed to increase the maturity and quality of the testing process.

The model has four basic components:
  1. Key Areas
  2. Levels
  3. Checkpoints
  4. Improvement Suggestions

Key Areas

Life Cycle

Test strategy
Life-cycle model
Moment of involvement

Techniques

Estimating and planning
Test specification techniques
Static test techniques
Metrics

Infrastructure

Test automation
Test environment
Office environment

Organization

Commitment and motivation
Test functions and training
Scope of methodology
Communication
Reporting
Defect management
Testware management
Test process management

All Cornerstones

Evaluation
Low-level testing

Levels
Each of the above key areas is assessed at one of several maturity levels: A, B, C and D.
The number of levels is not the same for every key area. For example, the 'Static test techniques' key area has only two levels (A and B), while 'Test strategy' has four (A through D).

Checkpoints

Each level defines a set of checkpoints for each key area.
The test process under assessment must satisfy all the checkpoints of a level to be certified at that level.

Improvement Suggestions

The model also includes improvement suggestions to assist organizations in achieving higher levels of maturity.
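The levels-and-checkpoints mechanism described above can be sketched in code. Below is a minimal Python sketch; the checkpoint texts are invented placeholders for illustration, not the official TPI checklist.

```python
# Hypothetical sketch of the TPI key-area / level / checkpoint structure.
# Key-area names follow the article; checkpoint texts are invented.

# Levels available per key area; the number differs per area, as noted above.
KEY_AREA_LEVELS = {
    "Test strategy": ["A", "B", "C", "D"],
    "Static test techniques": ["A", "B"],
}

def assessed_level(key_area, passed_checkpoints, checkpoints_per_level):
    """Return the highest certified level, or None if even level A fails.

    Levels are cumulative: a key area is certified at a level only if all
    checkpoints of that level (and of every lower level) are satisfied.
    """
    certified = None
    for level in KEY_AREA_LEVELS[key_area]:
        if all(cp in passed_checkpoints for cp in checkpoints_per_level[level]):
            certified = level
        else:
            break  # a failed level blocks all higher levels
    return certified

# Invented checkpoints, for illustration only.
checkpoints = {
    "A": {"inspections are performed on test products"},
    "B": {"inspections are performed on test products",
          "a checklist is used during inspections"},
}

print(assessed_level("Static test techniques",
                     {"inspections are performed on test products"},
                     checkpoints))  # only level A's checkpoints are met
```

This mirrors the certification rule from the Checkpoints section: a key area stays at a level until every checkpoint of the next level is satisfied.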

Monday, May 11, 2009

Defect Removal Efficiency...

(From Systematic Software Testing by Rick D. Craig and Stefan P. Jaskiel; ref: Google Books)

A measure of the number of defects discovered in an activity versus the number that could have been found. Often used as a measure of testing effectiveness.

Defect Removal Efficiency (DRE) is a measure of the efficacy of your SQA activities. For example, if DRE is low during analysis and design, it means you should spend time improving the way you conduct formal technical reviews.

DRE = E / ( E + D )


where E = the number of errors found before delivery of the software and D = the number of errors found after delivery of the software.

The ideal value of DRE is 1, which means no defects were found after delivery. A low DRE score means you need to re-examine your existing process. In essence, DRE is an indicator of the filtering ability of quality control and quality assurance activities: it encourages the team to find as many defects as possible before they are passed on to the next stage.
Strictly speaking, DRE does not measure efficiency; a more accurate name would be Defect Detection Percentage. Expressed as a percentage:

DRE = (number of bugs found in testing / (number of bugs found in testing + number of bugs not found in testing)) * 100
Suppose testing across the phases found 80 + 40 + 100 + 20 + 50 + 30 = 320 bugs, and another 30 bugs escaped to production:
DRE = 320 / (320 + 30) * 100 = 91.4%
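The arithmetic can be checked with a few lines of Python, using the per-phase bug counts from the example:

```python
# Overall DRE from the example: bugs found per test phase vs. bugs that
# escaped to production.
found_in_testing = [80, 40, 100, 20, 50, 30]  # bugs found in each test phase
escaped = 30                                  # bugs found after release

e = sum(found_in_testing)                     # total found in testing: 320
dre = e / (e + escaped) * 100
print(f"DRE = {dre:.1f}%")                    # DRE = 91.4%
```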

NOTE: DRE is also sometimes used to measure the effectiveness of a particular level of test. For example, the system test manager may want to know the DRE for system testing. The number of bugs found in system testing goes in the numerator, while those same bugs plus the acceptance-test and production bugs go in the denominator. Continuing the example above:

System test DRE = (# of bugs found in ST / (# of bugs found in ST + # of bugs found in AT and production)) * 100

System test DRE = (50 / (50 + 30 + 30)) * 100 = 45.45%
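The level-specific calculation can likewise be checked in Python, with the same figures (50 bugs in system test, 30 in acceptance test, 30 in production):

```python
# Level-specific DRE: only system-test bugs go in the numerator; those same
# bugs plus everything found later (acceptance test, production) go in the
# denominator.
st_bugs, at_bugs, prod_bugs = 50, 30, 30

st_dre = st_bugs / (st_bugs + at_bugs + prod_bugs) * 100
print(f"System test DRE = {st_dre:.2f}%")  # System test DRE = 45.45%
```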

If we already have automation, what's the need for Agents?

“Automation” and “agent” sound similar — but they solve very different classes of problems. Automation = Fixed Instruction → Fixed Outcome ...