On reading "Introduction to the Personal Software Process" by Watts S. Humphrey (1997)
The Personal Software Process (PSP) is an extension of the Capability Maturity Model from the organization to the individual. Its basic idea is really very simple: measure the status quo, use these results to extrapolate into the future, and then measure again to compare against the prediction. This approach combines just two very simple, but also very powerful concepts: empiricism and feedback.
In his book, the author applies these ideas to two fields primarily: time planning and defect reduction. In the time planning part, he suggests measuring how much time one spends on various activities and using this data to predict how much time future tasks will take. Although this is a very interesting idea, it rests on the assumption that one has done a lot of basically similar work before, and will continue to do comparable work in the future. This assumption may be fulfilled for practitioners working for large, structured organizations, but it may not be very applicable for small projects in organizations where each individual has to take on a variety of different roles over the course of the project.
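The feedback loop described here can be sketched very simply: keep a log of estimated versus actual times, and scale new estimates by the historical ratio. The function name and the log format below are my own illustration, not the book's notation or method.

```python
# Illustrative sketch of estimation by feedback (names and log format
# are hypothetical, not taken from the book).

def estimate(planned_minutes, history):
    """Scale a raw estimate by the historical actual/estimated ratio.

    history is a list of (estimated, actual) pairs from past tasks.
    """
    if not history:
        return planned_minutes  # no data yet: take the raw guess as-is
    total_estimated = sum(e for e, _ in history)
    total_actual = sum(a for _, a in history)
    correction = total_actual / total_estimated
    return planned_minutes * correction

# Past tasks consistently ran about 25% over their estimates...
history = [(60, 75), (120, 150), (30, 37)]
# ...so a new 100-minute raw estimate is corrected upward.
print(round(estimate(100, history)))  # -> 125
```

The point is not the arithmetic, which is trivial, but the discipline of recording the (estimated, actual) pairs in the first place.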
However, in developing his ideas, the author makes a number of very interesting points. One is the (in the software field) revolutionary idea of keeping an engineering notebook. This arises quite naturally, since all the time measurements have to be recorded somewhere. From there, it is only a small step to record all kinds of other information, such as design notes and calculations, on a frequent basis. In other fields, specifically laboratory sciences, such notebooks are common practice, and serve not only to record measurements, but also to provide proof for the authenticity of measurements reported elsewhere over time. In a similar spirit, the software engineering notebook is (among other things) seen as a tool to prove that reasonable standards of care have been taken during the software development process, should the organization ever have to defend the product in a liability lawsuit. This consideration may not apply to all projects equally, but it is worth a thought!
Clearly, the author places a (currently unfashionable) emphasis on planning. He distinguishes between two kinds of plans: period plans (schedules) and product plans (feature lists). Among the benefits of plans he lists some that are rather subtle and indirect, for instance when he points out that "many of the problems in software engineering are caused by ill-considered shortcuts, carelessness, and inattention to detail", and that "in most cases, the proper methods were known and specified but not followed". A well-thought-out plan, set up in advance, might therefore help to keep the practitioner doing what he (or she) knows he (or she) should be doing.
The author acknowledges that planning is hard, but can be made easier and more successful over time, by actively trying to use past experience in a feedback process. And he underlines the importance of proper planning by reminding everyone of the economic environment in which most software engineering takes place, when he points out that plans are used to allocate funds for software projects and continues: "Without adequate funds, engineering and manufacturing might have to cut back their plans. When they cut back their plans, YOU might be laid off."
The author also sees a product plan as a tool to focus effort, since it provides a tangible, fixed description of the project goal, and as a way to avoid wasting time figuring out what to do next. While he clearly has a point (projects that have no idea what they are supposed to achieve should not be undertaken, at least not as software projects, but rather as research and inception projects), this also raises what I believe to be one of the fundamental limitations, if not outright blind spots, of the entire discussion: non-manufacturing activities (such as requirements elicitation) are never mentioned! The author's description seems to presuppose that the software engineer's activities are limited to design and implementation from a functional spec! One has to keep this limitation of the approach under discussion here squarely in mind: the author's recommendations are primarily applicable to the construction phase of a software project!
The truly sensational suggestion, however, is made in the context of defect reduction. To reduce the number of defects "injected" (in the author's parlance) into one's code, the author suggests printing out all new code and reviewing it before the first compile! While doing this, one again keeps track of every defect found, and groups them into the following categories (in ascending order of complexity):
While code reviews (either by the author or by team members) are not uncommon, I had never considered reviews of uncompiled code! The author offers four reasons to have the review before the first compilation:
The arguments with the most weight are, of course, items 2 and 3 in the list above - if one is honest with oneself, one will have to admit that the author is right. In particular, in an environment where reviews (even if only by the author) are common, the observation that reviews will be necessary, and will take about the same amount of time whether done before or after compilation, is quite correct.
Still, this goes against every instinct and habit in common practice. Could the author really be serious? After considering this for a while, it reminded me of an experience that I have had occasionally when developing some particularly critical and intricate piece of code. In such situations, I would actually work out the code in full detail on a piece of paper (actually, many pieces, with most of them ending up crumpled on the floor or in the waste basket). I would verify the code formally and then code it very carefully from paper into the computer. Naturally, such code would often result in a clean compile (and correct execution) on the first attempt. Ahh, what a powerful experience!
All the "review before compile" approach attempts to do is to make this experience the commonplace experience. What a concept!
I am still not sure how realistic this suggestion is. It certainly requires an enormous amount of discipline on the part of the programmer. It is also in almost direct contradiction to the development of modern programming tools, all of which try to make the compile/run/debug cycle as tight and integrated as possible. However, as a model, it is a very powerful concept!
As with time planning, the author again suggests using the data gathered to improve the process in the future. By keeping track of the kinds of errors made most often, one can build personalized checklists to be used in reviews. This helps to place focus on those constructs in one's code which are most likely to be defective. These checklists should be revised periodically, in light of possible improvements in areas that have been weak in the past.
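Building such a personalized checklist is mechanical once the defect log exists. A minimal sketch, where the category names and the log format are invented for illustration rather than taken from the book:

```python
from collections import Counter

# Hypothetical defect log: one category label per defect found in
# past reviews (category names are my own, not the book's).
defect_log = [
    "off-by-one", "missing-initialization", "off-by-one",
    "wrong-operator", "off-by-one", "missing-initialization",
]

def build_checklist(log, top_n=3):
    """Return the most frequent defect categories, most common first."""
    return [category for category, _ in Counter(log).most_common(top_n)]

print(build_checklist(defect_log))
# -> ['off-by-one', 'missing-initialization', 'wrong-operator']
```

Revising the checklist periodically then amounts to rerunning this tally over the most recent portion of the log.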
The author also suggests using statistics on defects found to predict the most likely number of defects remaining. The idea is to keep track of the number of defects "injected" per project phase, where the phases of a project are:
The "phase yield", i.e. the number of defects found during the current phase, divided by the total number of defects found so far, should drop rapidly as the project progresses. Failure to decrease rapidly is a strong sign of an unreliable, low-quality product. As a rule of thumb, the author suggests assuming that the number of undetected defects remaining in the code is equal to the number of defects found in the most recent project phase.
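With per-phase defect counts in hand, both the yield figure (as defined here) and the rule of thumb can be computed mechanically. A sketch with invented numbers and phase names:

```python
# Defects found per phase (phase names and counts are invented
# for illustration, not taken from the book).
found = {"design review": 12, "code review": 9, "compile": 4, "test": 2}

total_so_far = 0
for phase, n in found.items():
    total_so_far += n
    # yield as defined above: found this phase / total found so far
    print(f"{phase}: yield {n / total_so_far:.2f}")

# Rule of thumb: remaining defects ~= defects found in the last phase.
remaining = list(found.values())[-1]
print("estimated defects remaining:", remaining)
```

In this made-up run the yield falls from 1.00 in design review to 0.07 in test, the rapid decrease the author wants to see; the rule of thumb would then put the remaining defects at 2.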
The suggestions on time planning are interesting, but not necessarily earth-shattering, sometimes even a bit on the whimsical side (for instance, when the author suggests using a stop-watch to time interruptions by the phone etc.).
The suggestions on defect reduction, however, strike me as incredibly powerful. The notion of writing correct code (and this includes syntactically correct code) the first time around is very appealing. The organized approach of using data from past experience to predict not only one's own most likely trouble areas, but also to determine the likely defect rate for the entire product, holds the promise of quite tangible benefits. (Note that the use of empirical data may be more applicable in this context than for time planning, since the activity under consideration, namely coding, is more likely to be the same.)
However, the big question remains how compatible any of these ideas are with reality. The promise that lies in the suggested organized, repeatable, and structured mode of working is undeniable. But how likely is it that real developers will accept these suggestions? Too much of it seems almost intentionally designed to have "as little fun as possible"!