Recently I conducted a study of projects sized in function points, covering projects put into production from 1990 to the present, with a focus on those completed since 2000. For an analyst like me, one of the fun things about a study like this is that you can identify trends and then consider possible explanations for why they are occurring. A notable trend from this study of over 2000 projects is that productivity, whether measured in function points per person-month (FP/PM) or hours per function point, is about half of what it was in the 1990 to 1994 time frame.
Part of this decline can be attributed to a sustained decrease in average project size over time. The overhead on small projects just doesn't scale down with their size, so they are inherently less productive. Technology has changed, too. But aren't the tools and application development software languages of today more powerful than they were 25 years ago?
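To see why fixed overhead hurts small projects disproportionately, here is a toy model. All of the numbers in it (3 person-months of overhead, 8 person-months per 100 function points) are illustrative assumptions for the sketch, not figures from the study:

```python
# Toy model: per-project overhead (kickoff, environment setup, deployment,
# governance) is roughly fixed regardless of size, so small projects pay
# proportionally more for it. Numbers are assumptions, not study data.

def fp_per_pm(size_fp, overhead_pm=3.0, pm_per_100fp=8.0):
    """Productivity in function points per person-month."""
    effort_pm = overhead_pm + (size_fp / 100.0) * pm_per_100fp
    return size_fp / effort_pm

small = fp_per_pm(50)    # a 50 FP project
large = fp_per_pm(500)   # a 500 FP project
print(f"small: {small:.1f} FP/PM, large: {large:.1f} FP/PM")
```

With these assumed parameters, the 50 FP project delivers roughly 7 FP/PM while the 500 FP project delivers about 12 FP/PM, even though the underlying development rate is identical. As the project mix shifts smaller, average measured productivity falls.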
I want to propose a simple answer to the question of why productivity has decreased: the problems with productivity are principally due to management choices rather than issues with developers.
Allow me to elaborate. Measures aimed at improving productivity have focused on two areas: process and tool improvements. What’s wrong with process improvement? Well, nothing except that it is often too cumbersome for the small projects that are currently being done and is frequently viewed by developers as make-work that distracts them from their real jobs. And what’s wrong with improved tools? Again, nothing. Developers and project managers alike are fond of good tools. The point is, however, that neither process improvement nor better tools address the real issues surrounding software project productivity. Here are some reasons:
- Coding and unit testing do not account for the bulk of software project effort. One popular estimate for a medium to large project puts these activities at 30 percent of total project effort. So even if tools and processes make them twice as efficient, total project effort falls by only 15 percent. Simply stated, the bulk of the activities that go into a software project have been ignored by productivity improvement measures.
- Misdirected project effort. QSM did a study a few years ago in which we compared the productivity, quality, and time to market of projects that allocated more than the average amount of effort to analysis and design with those that didn't. The average at that time was 20% and has since decreased. The projects that spent more time and effort up front outperformed the other group on every measure. I repeated this analysis looking only at projects completed since the beginning of 2000, with similar results (see table below). Since the amount of effort spent in analysis and design has decreased since the first study, this does not bode well for either productivity or quality.
- Failure to understand the impact of "aggressive schedules." I'm being generous here: all too often it is willful ignorance. Schedule pressure is the principal reason project teams don't spend adequate time in analysis and design. There is a steady progression apparent in the data: productivity decreases sharply the more schedules are compressed. Projects with schedules in the longest third were 3 times more productive than those in the shortest third. You get to choose one, quick to market or high productivity, but not both.
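The arithmetic in the first point above is an Amdahl's-law-style calculation, and it is worth making explicit. The 30 percent figure is the estimate quoted in the text; the twofold speedup is the hypothetical from the same bullet:

```python
# If coding and unit testing are 30% of total project effort and better
# tools make those activities twice as efficient, how much does total
# effort shrink? This mirrors Amdahl's law: the unimproved 70% dominates.

coding_fraction = 0.30   # share of total effort spent coding and unit testing
speedup = 2.0            # assumed efficiency gain from better tools/processes

new_effort = (1 - coding_fraction) + coding_fraction / speedup
reduction = 1 - new_effort
print(f"total effort reduced by {reduction:.0%}")  # prints "total effort reduced by 15%"
```

No matter how large the speedup on the coded 30 percent, total effort can never fall by more than 30 percent, which is why tooling alone cannot fix project productivity.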
Whether to spend more time in analysis and design, and whether to optimize for schedule or for productivity, are not choices developers make; they are management decisions that directly impact productivity and quality.
Want to learn more about how ALM software can benefit your company and increase productivity? Browse additional blog posts, product reviews, and our free top 10 application development software reports on Business-Software.com's ALM resource page.