Tuesday, August 26, 2008

Implementing Configuration Management for Software Testing Projects

To analyze test process performance, testers typically review and analyze the test process artifacts produced and used during a project cycle. However, these testing artifacts, along with their related use cases, evolve during a project cycle and can frequently have multiple versions by project end. Hence, analysis of the process performance from different perspectives requires that testers know exactly which versions of artifacts they used for different tasks. For example, to analyze why the test effort estimates were not sufficiently accurate, testers need the initial versions of use cases, test analysis, and test design specifications they used as a basis for the effort estimation. In contrast, a causal analysis of software defects missed in testing requires testers to have the latest versions of use cases, test analysis, and test design specifications used in test execution.
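The traceability described above can be sketched as a simple record of which artifact version each task used. A minimal sketch in Python; all task names, artifact names, and version strings below are invented for illustration:

```python
# Record which version of each test artifact a given task used, so that
# later analysis (effort-estimation review, defect causal analysis) can
# retrieve exactly the right versions. All data here is illustrative.
from collections import defaultdict

usage = defaultdict(dict)   # task -> {artifact: version}

def record(task, artifact, version):
    usage[task][artifact] = version

record("effort_estimation", "use_cases", "v1.0")
record("effort_estimation", "test_design_spec", "v1.0")
record("test_execution", "use_cases", "v3.2")
record("test_execution", "test_design_spec", "v2.1")

# Effort-estimation analysis needs the initial versions...
print(usage["effort_estimation"]["use_cases"])   # v1.0
# ...while defect causal analysis needs the versions used in execution.
print(usage["test_execution"]["use_cases"])      # v3.2
```

Without such a record, a project that ends with multiple versions of each artifact cannot tell which version fed which analysis.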

Read rest of the article here: http://www.stsc.hill.af.mil/crosstalk/2005/07/0507Boycan.html
Courtesy: STSC (Software Technology Support Center)

Sunday, August 24, 2008

S/w program vs. industrial product

One advantage a software program has over an industrial product is that it can be reworked and reverted to a previous state if defects are found later. This is not the case with an industrial product: once a defect is injected, it stays, leaving the product with lesser or no value.

Lehman's laws of software evolution

In 1980, Lehman and Belady published laws explaining why software evolution can be the longest of the life-cycle processes. The first two can be summarized as follows:

  1. Law of continuing change: To remain useful, a system must undergo continual change.
  2. Law of increasing complexity: A program's structure deteriorates as changes are introduced over time. Eventually, complexity rises to the point where it is more cost-effective to write a new program than to keep maintaining the old one.

Sunday, August 17, 2008

T&M and Fixed Bid - Testing Services

Even before companies issue an RFP for a project or outsourced work (and likewise before initiating any software project), the work is estimated at a high level to understand how long the various phases will take. A typical effort distribution across the software development life cycle is: design 15%, construction/coding 50%, testing 30%, and documentation 5%.

In the generic estimate above, unit testing is considered part of the development work. The figures tell us that if coding itself consumes 50% of the effort, the remaining 50% is the cumulative effort spent on design, testing, and documentation. Therefore, if we estimate 200 man-days of coding effort, the complete project will take roughly another 200 man-days on top of that!
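The arithmetic above can be sketched as a quick estimator. A minimal sketch, assuming the generic 15/50/30/5 split quoted above; the function name is illustrative:

```python
# Derive phase-level effort from a coding estimate, assuming the generic
# distribution: design 15%, coding 50%, testing 30%, documentation 5%.
DISTRIBUTION = {"design": 0.15, "coding": 0.50,
                "testing": 0.30, "documentation": 0.05}

def estimate_phases(coding_man_days):
    """Scale all phases off the coding figure (coding = 50% of total)."""
    total = coding_man_days / DISTRIBUTION["coding"]
    return {phase: total * share for phase, share in DISTRIBUTION.items()}

phases = estimate_phases(200)
print(phases)                  # coding stays at 200.0 man-days
print(sum(phases.values()))    # total project: 400.0 man-days
```

So a 200 man-day coding estimate implies a 400 man-day project, of which 120 man-days would go to testing under this split.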

We came across a scenario where a client was trying to develop generic principles for choosing between T&M and fixed bid for testing services, so that he could apply them directly, for example: unit testing on a fixed bid, stress/volume testing on T&M!

However, the problem here is that the client is not confident of the project estimate. The sub-allocation of tasks within testing services will therefore change along with the project estimate: 30% of X keeps changing as X changes. What the client does not realize is that certain factors unique to each project must be considered; there is no rule of thumb of the kind the client expects. Depending on the time, money, and resources available and the quality of work expected from the project (we should desist from talking about an "accepted level of quality", as nothing is truly "acceptable"), we define which kinds of tests are critical for the project, given the shipping date.

Usually, companies go for a fixed bid when the budget is limited, and for T&M when the schedule is the priority or continuous support is needed. To summarize, these factors are important when deciding what kind of bid a company should choose when offshoring work, especially testing services.

Tuesday, August 12, 2008

On SCM

We had an interesting discussion on SCM today. A PM was arguing that configuration management of artifacts involves checking in only the final versions, working on the intermediate versions without bothering to check them in on a daily basis. Many people are confused, or rather careless, about checking in their work. The configuration manager must ensure that not only the code but all other client deliverables, including the technical design and technical specifications, are checked into version control. This helps: in the absence of one person, another can take up the work and continue on the code or deliverables. Thus, we ensure the work is not person-dependent! And since the deliverables are checked in daily, all versions of them are available at any moment, so that if required a branch can be taken and worked upon.
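The daily check-in and branching workflow described above can be sketched with git. A minimal sketch under stated assumptions: the original discussion does not name a specific version-control tool, and the file names and commit messages are illustrative:

```shell
# A minimal sketch, assuming git is available. Every deliverable is
# checked in daily, so any earlier version can be branched later.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com && git config user.name Demo

echo "tech design, day 1 draft" > tech-design.md
git add tech-design.md
git commit -qm "daily check-in: tech design (day 1)"

echo "tech design, day 2 revision" > tech-design.md
git commit -aqm "daily check-in: tech design (day 2)"

# Every daily version is recoverable; branch from day 1 if needed:
git branch rework-day1 HEAD~1
git log --oneline
```

Had only the "final" version been checked in, the day-1 state used for early decisions would be unrecoverable, and only the person holding the working copy could continue the work.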
