
Measuring Requirements and Design Effectiveness in Software Projects

Tuesday, September 02, 2008 10:06 pm - by ccountouris

Good user experience is critical to a good software product, especially a software product delivered within the confines and quirks of modern web browsers. For this reason, all design and requirements gathering activities at Accolo are performed by full-time employees, while almost all implementation is off-shored through contractors and vendors.

We know how to measure implementation effectiveness and productivity using a development backlog and concepts like Running Tested Features (RTF).

So, how do we measure design and requirements effectiveness? Implementation measures do not do a good job of measuring design effectiveness, so we devised a measurement that we lovingly call the Feature Punt Ratio (FPR).

For you football fans (“American Football” for our friends outside the U.S.), you are painfully aware of what a punt means: you didn’t reach your goal and you have to relinquish control of the rock to the other team. It’s not much different in software development: you didn’t reach your goal on a particular feature, and you have to punt it into another development iteration.

The Feature Punt count is the number of features that were planned for implementation in an iteration but failed to be implemented because of requirements and design inconsistencies, changes, or flaws. Compare the number of features punted due to design and requirements problems against the number of features implemented in the iteration and you have the Feature Punt Ratio.
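For example (hypothetical numbers): if one feature is punted for requirements or design reasons in an iteration that delivers 20 features, the Feature Punt Ratio is 1 / 20 * 100 = 5%.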

Features may be punted for various reasons. Like most applications, ours has actions that are available to some users and not others. We had this user permissions information sprinkled throughout each action’s requirements document. During the iteration it became clear that the engineers did not understand how the action permissions related to each other. The best way to communicate the permission interactions was to put together a permissions matrix. When the engineers got the permissions matrix, mid-iteration, they determined they would not be able to complete the feature.
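As an illustration only (the roles and actions below are hypothetical, not Accolo's actual permissions), a permissions matrix simply collects in one place what had been sprinkled across the individual requirements documents:

# Hypothetical permissions matrix: which roles may perform which actions.
PERMISSIONS = {
    "create_job":  {"admin", "hiring_manager"},
    "edit_job":    {"admin", "hiring_manager", "recruiter"},
    "view_report": {"admin", "recruiter"},
}

def is_allowed(role: str, action: str) -> bool:
    # True if the given role may perform the given action.
    return role in PERMISSIONS.get(action, set())

print(is_allowed("recruiter", "create_job"))  # False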

More generically, we’ve seen features punted because we failed to consider a particular use-case. We changed the requirements to account for the missed use-case in the middle of the iteration. The changes were significant enough that the engineer was no longer confident the feature could be implemented in the original iteration, so it was moved to the next one. It is not uncommon to discover missed use-cases once implementation begins, so this is our most common reason for punting a feature.

After an iteration is complete, take an inventory of the features implemented. Hopefully all of the features that were committed to were completed in the iteration, and maybe some more. If some features were not implemented, evaluate them to determine whether they were punted because of design and/or requirements issues or because of implementation problems.

You then apply this basic ratio:

Feature Punt Ratio = (Features Punted / Total Features Implemented) * 100
You get a percentage of features punted versus the total features implemented for the iteration.
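As a minimal sketch of that end-of-iteration calculation (the feature names and counts below are hypothetical, not Accolo's actual tooling or data):

# Hypothetical end-of-iteration inventory: each planned feature is either
# implemented, or punted with the reason for the punt.
iteration_results = {
    "bulk candidate upload": "implemented",
    "interview scheduling":  "implemented",
    "offer letter workflow": "implemented",
    "permissions overhaul":  "punted: design/requirements",
}

implemented = sum(1 for s in iteration_results.values() if s == "implemented")
design_punts = sum(1 for s in iteration_results.values()
                   if s.startswith("punted: design"))

# Feature Punt Ratio: design/requirements punts versus features implemented.
fpr = design_punts / implemented * 100
print(f"Feature Punt Ratio: {fpr:.0f}%")  # 1 / 3 * 100 = 33%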

We strive for a feature punt ratio of <= 5%.