Lehman's Laws of Software Evolution and the Staged-Model

Abstract

Many of today's software lifecycle models focus on and prioritize the initial development of software, with many implicitly suggesting that the maintenance of software is a homogeneous, open-ended process whose termination is made at the leisure of its maintainers. While these models may hold true in theory, they require circumstances and conditions so ideal that they are rarely seen in the real world; moreover, even fewer of them are structured in a manner that reflects the truisms and principles of Lehman's Laws.

Implicitly based on Lehman's Laws, Bennett, Rajlich & Wilde offer a new model for the software development lifecycle that they term the staged-model. It consists of five stages through which the software and its development team progress. The model represents a lifecycle arc in which all systems are conceived, continue to evolve to meet the needs of their users, and eventually succumb to software decay and are retired. No other software model today attempts to incorporate the roles that preservation of architectural integrity and retention of system-expertise play in the lifecycle of a product.

This paper opens with a review of the current state of software lifecycle theory, continues with a discussion of software evolution and Lehman's Laws, and concludes with a detailed walk-through of each of the five stages that comprise the staged-model, including a discussion of how each stage reaffirms Lehman's Laws and how traditional views of software maintenance may be insufficient.

Introduction

In order to understand how traditional views of software lifecycle theory and software maintenance may be insufficient, one must first understand what those traditional concepts are, for the sake of comparison. In the following sections, this traditional view will be summarized, including some of the early work upon which it is based, before a discussion of Lehman's Laws and software evolution. His laws, a set of observations of behaviors and phenomena derived from empirical evidence of large-scale systems over the past thirty years, serve to indicate how traditional views of software maintenance may be too simplistic.

No one can be sure where the term software maintenance was coined; however, its widely accepted meaning can be expressed through the commonly cited definition offered by IEEE: "The process of modifying a software system or component after delivery to correct faults, improve performance or other attributes, or adapt to a changed environment" (IEEE 610.12, 1990).

With regard to how this definition fits into the traditional view of the software development lifecycle, IEEE again provides a widely accepted and cited definition:

The period of time that begins when a software product is conceived and ends when the software is no longer available for use. The software life cycle typically includes a concept phase, requirements phase, design phase, implementation phase, test phase, installation and checkout phase, operation and maintenance phase, and, sometimes, retirement phase. Note: These phases may overlap or be performed iteratively (IEEE 610.12, 1990).

Upon casual glance, arguably few would have a problem with this definition; however, upon further examination, notice how it contains only a single one-word reference to the role that maintenance plays within the software lifecycle, and even that it shares with a reference to the role of operation (i.e. the operation and maintenance phase). Some may argue that this is fitting, agreeing with the notion that the majority of the engineering and thought put into a software system occurs during its initial development; this, however, may be a misconception, for the empirical evidence indicates otherwise.

According to several sources, and perhaps counter to intuition, the maintenance of software comprises from 50% to 90% of overall lifecycle costs (Pigoski, 1997; Lientz & Swanson, 1980). Clearly, its role in the overall software lifecycle is not well represented by traditional definitions.

Despite this, and to highlight the bias toward the significance of initial development over maintenance, one can also look at the commonly cited definition of a successful software system. According to the Standish Group, in their frequently cited CHAOS report (1995), a software success -- or what they term a Resolution Type 1 -- is defined as, "[when] the project is completed on-time and on-budget, with all features and functions as initially specified" (p. 2). Whereas this may be an appropriate definition solely from the perspective of development, upon further review it indicates two things: first, the very use of the term resolution suggests that the project is complete once it is initially fielded for use, thereby completely ignoring the significant role that maintenance plays in the overall lifecycle; second, the use of the term initially specified suggests that all subsequent specifications are immaterial to designating a system as a success or not.

The Standish Group reports that a full 83.8% of software projects surveyed fail to achieve this definition of success. They also cite several common reasons why software projects fail, chief among them shifting requirements. Given that requirements-shift is nearly constant in software engineering, this suggests that the primary reason software projects fail is their inability to evolve efficiently to match shifting requirements. Given also that perfective and adaptive modifications are two of the three types of modifications performed during software maintenance (Pigoski, 1997), does this not indicate that the evolution of a system continues well past its initial development? If so, how can failure to evolve be the primary reason for failure before deployment, yet be considered of no significance after deployment? Moreover, the very definitions of success and failure do not account for any activity after deployment.

To make sense of this, perhaps what is needed is to stop placing such priority on when software evolution occurs in the lifecycle and instead focus on the role it plays throughout the lifecycle as a whole. As such, the next section conducts a closer examination of software evolution, along with an introduction of six laws offered by Dr. Meir Lehman that serve to define several observed behaviors regarding it.

Section 1: Software Evolution & Lehman's Laws

The term software evolution stems from a series of works, commonly referred to today as Lehman's Laws, first proposed by Dr. Meir Lehman in his work Programs, Life Cycles, and Laws of Software Evolution (1980). In order to understand the concept of software evolution, one should step back a moment and recall that the purpose of software systems is to provide solutions to problems. These problems are human-based, and software is developed to alleviate them through the automation and functionality it provides. Often, the term software solution is used to highlight this purpose; however, human problems are not static and change over time (e.g. new regulations are passed, workflow procedures change, etc.). In order to keep software solutions relevant to the problems they are intended to solve, they too must be modified along with their corresponding problems. This is the crux of software evolution: it is the concept that software is modified to keep it relevant to its users.

As such, Lehman's Laws essentially describe software evolution as a force that drives new and revised developments in a system while simultaneously hindering future progress through a byproduct of evolution: software decay. Later in this document, six of Lehman's Laws will be discussed; before proceeding, however, it is important to understand the types of systems to which they pertain, as defined by Lehman.

Section 1.1: Lehman's System Types

The identification of this force, or rather the origin of the concept of evolution, derives from a series of empirical observations Lehman made of large-system development (Programs, Life Cycles, and Laws of Software Evolution, 1980). Through those observations, Lehman categorized all software systems as one of three types: S-type, P-type, and E-type.

Section 1.1.1: S-Type Systems

S-type, or static-type, systems are those that can be described formally in a set of specifications and whose solution is equally well understood. Essentially, these are systems whose stakeholders understand the problem and know exactly what is needed to resolve it. The static part of the name refers to their distinction of having specifications that do not change. An example of this type would be a system that performs mathematical computations, where both the desired functionality and how to accomplish it are well understood beforehand and unlikely to change. Systems of this type are generally the simplest of the three and the least subject to evolutionary forces. For this reason, since little change occurs once they are initially developed, Lehman's Laws do not pertain to these simple systems.
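
To make the idea concrete, the following minimal sketch (an invented example, not from the paper) shows a routine whose specification - return the greatest common divisor of two positive integers - is formal, complete, and not expected to change:

```python
# A minimal sketch of an S-type program: the specification is formal and
# fixed, so once the code is correct it faces no evolutionary pressure.
def gcd(a: int, b: int) -> int:
    """Return the greatest common divisor of two positive integers (Euclid's algorithm)."""
    while b:
        a, b = b, a % b
    return a

assert gcd(48, 18) == 6  # the spec never changes, so the program can be 'finished'
```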

Section 1.1.2: P-Type Systems

P-type, or practical-type, systems are those whose problem can be stated formally but whose solution is not immediately apparent. In essence, these are systems where the stakeholders know what end result is needed but have no good means of describing how to get there. Systems of this type require an iterative approach to their discovery, facilitating the feedback necessary for stakeholders to understand what they need, what they do not need, and how to articulate it. This iterative feedback process is analogous to how a witness of a crime works with a sketch-artist to derive an accurate picture of a suspect.

To illustrate this analogy, imagine the difficulty a witness would have in articulating a complete and accurate description of a suspect to a sketch-artist without the benefit of iteratively seeing how the artist is rendering that description. It is logical to assume that portraits developed in this manner would rarely result in an accurate likeness of the suspect. Instead, witnesses must sit with artists, and only through the iterative feedback of seeing how their descriptions are rendered can they refine and articulate exactly what they need. P-type systems are similar in this respect: the system's stakeholders (i.e. the witness) require the iterative feedback provided by the system's engineers (i.e. the sketch-artist) in order to effectively state the problem the system is to solve and to develop a design that accurately addresses it. Largely because their solutions cannot be specified completely in advance, Lehman's Laws do not pertain to this type of system either.

Section 1.1.3: E-Type Systems

The final type of system proposed by Lehman is the E-type, short for embedded-type. E-type systems characterize the majority of software in everyday use (Bennett, Rajlich & Wilde, 2002). The embedded part of the name refers to the notion that these systems model real-world processes and, through their use, become components of the world they are intended to model. In other words, as opposed to a simple calculator program (i.e. an S-type system), systems of this type become components of real-world processes; processes which would fail without them. An example of a highly embedded system would be the air-traffic control system of an airport; without it, the business of orchestrating the safe departure and arrival of flights would be impossible. In this example, it is easy to see how the system has become a component of the real world.

Because the real world is constantly in flux, systems of this type must change as the world does in order to remain relevant. An example of such a system would be one used by an investment firm to execute trades. As the financial regulations that govern trading change, or as new ones are introduced, so must the software change to reflect them; otherwise, its relevance diminishes along with its value to the firm. Due to their high occurrence of evolution, it is to systems of this type that Lehman's Laws pertain.

Section 1.2: Lehman's Laws

With an understanding of the type of system to which Lehman's Laws pertain (i.e. E-type), a discussion of those laws can commence. Lehman offers eight laws, presented not as laws of nature but as the product of observations made during the aforementioned survey of large systems. It is important to understand that they pertain to all E-type systems regardless of the development or management methodologies or technologies used to build them. As they relate to the Staged-Model, which will be introduced later, six of them will be discussed.

Section 1.2.1: Lehman's 1st Law - The Law of Continuing Change

The first of Lehman's laws is the Law of Continuing Change. As its name suggests, since the real-world environment is ever changing (e.g. new laws and regulations are passed, the base expectations of users constantly shift, etc.), and since E-type systems are components of those real-world processes, a system must adapt (i.e. evolve) as the world changes in order to remain relevant, or face becoming progressively less applicable and useful.

Section 1.2.2: Lehman's 2nd Law - The Law of Increasing Complexity

The second of Lehman's laws is the Law of Increasing Complexity. This law states that, without direct intervention and proactive efforts to mitigate the risk, the implementation of all E-type systems will continue to become more complex as they evolve. The implication of this, as will be seen in Lehman's third law (described next), is that as the complexity of the system increases, the ability to conserve familiarity with its implementation diminishes, thereby hindering its ability to continue to evolve. This diminishing ability to evolve is a product of software decay, which will be discussed later in this paper.

Section 1.2.3: Lehman's 3rd Law - The Law of Conservation of Familiarity

The third of Lehman's Laws is the Law of Conservation of Familiarity. This law refers to the idea that in order for an E-type system to continue to evolve, its maintainers must possess and maintain a mastery of its subject matter and implementation. In other words, in order for a system to continue to evolve efficiently to meet the forces of change exerted on it by the real world, a deep understanding of how the system functions (i.e. how it works) and why it was developed to function in that manner (i.e. why it does, and must, work that way) must be preserved at all costs. Logically, to get somewhere, one must first know where one is. The concept here is similar: without familiarity with how and why the system was designed the way it was, it becomes very difficult to implement changes without compromising the ability to understand it. This law refers back to the second of Lehman's laws, the Law of Increasing Complexity.

Section 1.2.4: Lehman's 4th Law - The Law of Continuing Growth

The fourth law relevant to the discussion of the Staged-Model is the Law of Continuing Growth. It states that in order for an E-type system to remain relevant to the business problems it is intended to resolve, the size of the system's implementation will continue to increase over its lifecycle. This is implicitly tied to Lehman's second law, the Law of Increasing Complexity, in that there is a direct relationship between an increase in the number of function points a system offers and the complexity required to achieve that increased functionality. This law therefore reaffirms the second: without due care and direct efforts to mitigate it, the expansion of the system's size can negatively affect its ability to be comprehended, along with its ability to evolve.

Section 1.2.5: Lehman's 5th Law - The Law of Declining Quality

The fifth of Lehman's laws is the Law of Declining Quality. Poorly conceived modifications lead to the introduction of defects and the partial fulfillment of specifications. This law also reaffirms Lehman's second through fourth laws: without a direct and rigorous effort to preserve the system's structural integrity by minimizing the effects continuous evolution has on its size, complexity, and comprehensibility, the overall effect will be a decline in the system's quality.

Section 1.2.6: Lehman's 6th Law - The Law of Feedback Systems

The final law of Lehman's relevant to the Staged-Model is the Law of Feedback Systems. It states that in order to sustain the continuous change, or evolution, of a system in a manner that minimizes the threats posed by software decay and loss of familiarity, there must be a means of monitoring its performance. This law speaks to the importance of all E-type systems including feedback mechanisms that permit maintainers to collect metrics on the performance of both the system and the maintenance effort. Whereas the nominal value of these metrics may not intrinsically provide much insight, trend analysis of their values over time serves as a barometer-like indication of how the system's evolution is proceeding.

Consider, for example, a common metric such as the break/fix ratio: the number of discrepancy report (DR) fixes or service request (SR) changes divided by the sum of closed DRs and SRs over the same period of time (Oman & Pfleeger, 1997). When its value is trended over time, it provides a powerful feedback mechanism indicating how much time is being spent correcting the system versus enhancing it. A break/fix ratio weighted toward the SR side indicates that the software baseline is relatively stable and that the maintenance team is spending more time improving the system than stabilizing it; likewise, a ratio weighted toward the DR side indicates that the software is unstable and more in need of correction than enhancement. Without the ability to capitalize on feedback mechanisms such as these, a system's maintainers are essentially flying blind, unable to judge the course of the system's evolution or to see whether corrective actions should be taken to minimize its detrimental effects.
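
To illustrate, here is a minimal sketch of how such a trend might be computed, reading the ratio as closed DRs over all closed items; the data shapes, period labels, and counts are invented for demonstration:

```python
# Sketch of the break/fix ratio trended per period. Each closed item is
# recorded as (period, kind), where kind is "DR" (fix) or "SR" (enhancement).
from collections import Counter

closures = [
    ("Q1", "DR"), ("Q1", "DR"), ("Q1", "SR"),
    ("Q2", "DR"), ("Q2", "SR"), ("Q2", "SR"),
    ("Q3", "DR"), ("Q3", "SR"), ("Q3", "SR"), ("Q3", "SR"),
]

counts = Counter(closures)  # (period, kind) -> number closed
for period in sorted({p for p, _ in closures}):
    drs, srs = counts[(period, "DR")], counts[(period, "SR")]
    ratio = drs / (drs + srs)  # share of closures that were corrective
    print(f"{period}: break/fix = {ratio:.2f} ({drs} DRs, {srs} SRs)")

# A falling ratio suggests a stabilizing baseline (enhancement dominates);
# a rising ratio suggests corrective work is crowding out evolution.
```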

Section 2: The Staged-Model of the Software Lifecycle

In discussing the Staged-Model, it is easiest to understand what it is by first examining what it is not. In contrast to IEEE 12207, International Standard: Software Engineering - Software Life Cycle Processes, which defines the processes, activities, and tasks required for designing, developing, and maintaining software (2008), and in contrast to IEEE 14764, International Standard: Software Engineering - Software Life Cycle Processes - Maintenance (2006), which offers expanded guidance on the processes that comprise the Maintenance category defined in IEEE Std. 12207, the Staged-Model is not an alternative or contradictory set of guidance enumerating best practices for developing and maintaining software. For that matter, the Staged-Model does not directly address these activities at all, but rather the considerations that should be made in maintaining software effectively.

In contrast to development methodologies such as Waterfall and Agile, or management methodologies such as Scrum, the Staged-Model is not a methodology to be followed and executed. Rather, the tenets of the Staged-Model are meant to supplement whatever development and management methodology a project team elects to use, by setting expectations of how efficient those activities will be given the state of the system.

So what is the Staged-Model? It is an instantiation of the observations made by Lehman in his laws, resulting in two fundamental tenets: first, all E-type systems will require change to remain relevant; second, in order to mitigate the detrimental effects that change will have on a system, rigorous and deliberate actions must be taken to minimize those effects, which can only be accomplished through the retention of expertise in both the system's implementation and the business domain it serves.

As such, the Staged-Model proposes that the maintenance of software systems is not a single uniform phase, as unintentionally alluded to in IEEE 12207 and IEEE 14764; rather, the lifecycle comprises five distinct and sequential stages: initial development, evolution, servicing, phase-out, and close-down (Software Evolution and the Staged Model of the Software Lifecycle, 2002).

Bennett, Rajlich & Wilde state that these stages are not technically distinct, but that each requires its own business perspective and therefore carries varying implications that should set expectations of what can be accomplished. In the subsequent sections, each stage is described along with some of the observed considerations that should be kept in mind in order to understand its respective implications.

Section 2.1: Stage 1 - Initial Development

The first stage in the model is initial development, which, as its name suggests, encompasses the initial delivery and production use of the first version of the software system. As Bennett, Rajlich & Wilde indicate, this stage has been well described and documented in the software literature, with many models, methodologies, and standards addressing it. Two such standards are those mentioned previously, IEEE 12207 and IEEE 14764, which offer guidance and best practices associated with controlling the software development process and formulating a subsequent maintenance engagement, respectively. However, few of these models directly address the effects that the retention (or loss) of system and business-domain expertise has on this process, or the effects that continued unchecked software decay can have on the ability to perform it.

In the earlier days of software development, it was quite common for systems to be developed completely from scratch; today, however, it is much more common for the initial development of a system to involve inheriting a legacy system or integrating prefabricated components such as COTS products. Regardless of the circumstances of its inception, the outcome of this stage produces two paramount assets that ultimately determine the effectiveness of maintenance throughout the rest of the system's lifecycle: the architecture of the system and the software team's expertise.

The initial architecture of the delivered system has enormous implications for its future maintainability, and therefore for the feasibility of sustaining evolution. The architecture of a system consists of the components from which it is built, the logical organization of those components, the manner in which they interact, and the properties that define them. The initial architecture, which is really an indication of the system's elasticity or flexibility to change, is one of the primary determinants of whether the architecture will maintain its integrity and accommodate the changes made during evolution, or collapse under the weight of their impact - here again, the concept known as software decay.

Bennett, Rajlich & Wilde note that one significant factor influencing the strength of the architecture's initial integrity is the degree to which its development process was subject to requirements-creep. Requirements-creep, essentially the concept that requirements change as the software is being developed, has a detrimental effect on the architecture's integrity, for it hinders the ability of its designers to establish a clear and logical architecture under a single vision. To illustrate with an analogy: if one were to set out to develop an engine that is fuel efficient, and later be required to make it powerful as well, one should expect the resulting engine design to be neither very fuel efficient nor very powerful.

The implication is that the initial development of a system should produce an architecture that not only meets its initial requirements, but is also flexible enough to withstand the ongoing change that should be expected.

The second paramount result of initial development is the software team's expertise, for it is during this initial stage that its foundation is established. It is during this stage that the development team cultivates its understanding of the business domain, of the problem the system is intended to resolve, and of the manner in which the solution will be implemented (i.e. the formulation of an architecture). Referring back to the third of Lehman's Laws (the Law of Conservation of Familiarity), cultivation and retention of this expertise is a vital component in facilitating the evolution of the system.

One consideration concerning the cultivation and retention of system expertise is that it is not readily portable between team members. As Bennett, Rajlich & Wilde suggest, there have been many attempts to document this expertise with the idea that it may be conveyed to others. While definitely better than not doing so, in practice much of the experience gained during the initial development stage is tacit and difficult to document formally.

The implication of this is that, in order to sustain software evolution in a manner that is least detrimental to the system's architectural integrity, it is vital that key members of the initial development team be retained.

Section 2.2: Stage 2 - Evolution

The evolution stage is the key stage of the Staged-Model. It is characterized as an iterative process of adding, modifying, and removing significant software features, and it marks the first major difference between the Staged-Model and traditional lifecycle models.

In traditional lifecycle models, such as Waterfall, the emphasis is placed on the software's initial development. These models hold the view that the software is essentially finished after its initial development, once its fulfillment of its requirements has been verified and validated and it has been fielded into production. At that point, traditional models dictate that the software is simply passed along - if not thrown over the fence - to a maintenance team for ongoing support. Whereas in theory this might seem logical, in practice this view may be an oversimplification.

Demonstrating through case studies, Bennett, Rajlich & Wilde offer that when software is released for production use, particularly when its development is considered a success, it begins to foster enthusiasm among its user base, who invariably begin to provide feedback and requests for new features.

At this point, the maintenance team, often the original design team seeing the new system through its early days of production use, is living in an environment of success. Naturally, defects will be detected during this stage, with corrections generally scheduled for distribution in the next release. It is during this stage, while the expertise of the original design team has been retained, that continued modifications can be made with minimal erosion of the system's architectural integrity.

This stage of evolution, denoted by the continuous incorporation of significant, non-trivial modifications, will continue so long as the team of engineers has the appropriate level of expertise over the system's architecture and its business domain to ensure those changes are implemented both accurately and with minimal degradation of its architectural integrity. Bennett, Rajlich & Wilde offer that for evolution of the architecture to continue, the maintenance team must possess very high technical expertise, along with the ability and leadership to defend the project from business pressures that often seek technical shortcuts in order to deliver politically sensitive changes quickly. Without this counterweight, taking shortcuts can lead to rushed or inelegant modifications that seriously deteriorate architectural integrity. Worse yet, without the expertise to understand the threat such shortcuts pose, these modifications can be made without the team even knowing the extent of the damage being done, leaving them more easily persuaded by external pressures down the road and thereby placing the system in even greater jeopardy.

Bennett, Rajlich & Wilde maintain that there have been many attempts to establish contractually what is meant by maintainable or evolvable software; however, defining the processes that will produce software with these characteristics has proven extremely difficult. They offer that perhaps the best possible approach is to adhere to IEEE or ISO standards in managing and implementing changes, to use modern tools (e.g. CASE tools), and to document as much as possible.

There is, intuitively, no simple answer to how to make architectures easily evolvable. Inevitably, all design decisions involve trade-offs between gains made now and anticipated gains in the future. The implication is that there is no definitive way to ensure that the architecture of a system will be flexible enough to accommodate all future changes. Beyond adhering to general standards of process and making use of modern tools, the best one can do is to strive for flexibility and adhere to architectural design best practices: eliminating unnecessary coupling and maximizing logical cohesion, incorporating changes that are in line with the original architectural paradigm, keeping the code base from becoming difficult to comprehend, and, perhaps most importantly, retaining the maintenance team's expertise over the architecture and business domain. Whereas taking any or all of these measures is no guarantee that software will continue to be evolvable, disregarding them is almost a guarantee of the opposite.
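
As a small, invented sketch of the coupling concern named above, compare a caller that reaches into another component's internal storage layout with one that depends only on a narrow published interface:

```python
# Invented illustration of unnecessary coupling versus a narrow interface.

class Account:
    def __init__(self, balance_cents: int):
        self._ledger = {"balance_cents": balance_cents}  # internal layout

    def balance(self) -> float:
        """Narrow, stable interface; the internals are free to evolve."""
        return self._ledger["balance_cents"] / 100

# Tightly coupled: depends on Account's private storage layout, so any
# internal change ripples out to this (and every similar) call site.
def overdrawn_report_coupled(accounts):
    return [a._ledger["balance_cents"] / 100 for a in accounts
            if a._ledger["balance_cents"] < 0]

# Loosely coupled: depends only on the published method.
def overdrawn_report(accounts):
    return [a.balance() for a in accounts if a.balance() < 0]

print(overdrawn_report([Account(-250), Account(100000)]))  # [-2.5]
```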

There is perhaps another measure that can be taken. As discussed previously, the logical organization of a software architecture is often undermined by requirements that change during its initial inception. Traditionally, users are responsible for providing a comprehensive set of accurate requirements from which a functional implementation is derived; in reality, whether through a lack of knowledge or simple omission, this is often not the case. Moreover, requirements often change during development, for as the definition of E-type systems states, they must change because the world in which they are embedded continues to change. The implication is that it is exceptionally rare for software requirements to be complete, and when they are, they are accurate only for that point in time and will undoubtedly change as the world does.

As such, there is a current trend in software engineering to minimize the process of initial development and supplant it with the perspective that it is merely an effort to deliver a preliminary, skeletal version of the application. Under this perspective, and counter to traditional software models (i.e. Waterfall), there is no point at which the software is deemed finished; rather, its development is an ongoing, open-ended practice consisting of endless iterations to fulfill the requirements as they stand at the time. In essence, the Staged-Model advocates replacing the notion of initial development with the idea that development should be viewed as a series of small iterations of software evolution - a concept referred to as evolutionary development.

One methodology that embraces evolutionary development is the Unified Software Development Process. It advocates that development proceed in incremental iterations, each adding new functionality or features. The advantage of this model is that it minimizes the risk posed by incomplete or stale requirements by providing a feedback mechanism that allows users to observe the project as it progresses. Metaphorically speaking, it could be considered growing software into place instead of building it.

In this vein of minimizing the role of initial development, other alternative methodologies are gaining support: those traditionally described as members of the Agile family. For example, Extreme Programming (XP) advocates a process that essentially abandons the concept of initial development; instead, developers work intimately with stakeholders and subject-matter experts to develop a set of stories describing each feature of the new software. During each iteration, or sprint-cycle as some call it, the developers decompose these stories into a series of tasks, with one developer taking ultimate ownership of each. The users then determine which stories should be implemented during the next sprint-cycle.

One way in which methodologies such as XP improve the conservation of expertise is that, while only one developer is ultimately responsible for the completion of a task, all tasks are performed in pairs, with two developers working side by side and trading off between writing the implementation and reviewing and testing it as it is written - a practice referred to as pair programming. In this way, the knowledge of how each task is implemented is shared between developers, reducing the risk of that knowledge being isolated to just one person. Due to the self-checking nature of this practice, there is also little need for organized walkthroughs or inspections, freeing the team to focus on implementation. By maintaining a large set of test cases, tests can be rerun in the future, providing a feedback mechanism as advocated in Lehman's sixth law, the Law of Feedback Systems. In this regard, Agile methodologies such as XP serve to promote software evolution by placing an emphasis on retaining the expertise of project-team personnel.
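
As a minimal sketch of the kind of retained test this implies (the function, the defect number, and the values are all hypothetical), a test written when a defect is fixed stays in the suite so that future evolution cycles rerun it automatically:

```python
# Minimal pytest-style sketch of a retained regression test. Keeping such
# tests in the suite turns every fix into a permanent feedback mechanism.

def apply_late_fee(balance_cents: int, fee_applied: bool) -> int:
    """Apply a 5% late fee at most once; the original defect compounded it."""
    return balance_cents if fee_applied else balance_cents + balance_cents * 5 // 100

def test_late_fee_not_applied_twice():
    # Regression check for hypothetical defect DR-1042: a second pass over
    # an already-penalized balance must leave it unchanged.
    assert apply_late_fee(10000, fee_applied=True) == 10000

def test_late_fee_applied_once():
    assert apply_late_fee(10000, fee_applied=False) == 10500
```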

The next section discusses the end of a software system's evolution. When this occurs, whether through a deliberate decision or one made for the team by the loss of architectural integrity through software decay, the software system moves into a stage referred to as servicing.

Section 2.3: Stage 3 - Servicing

As stated previously, a software system enters the servicing stage as the result of the loss of the engineering team's expertise and/or the loss of architectural integrity due to software decay. During this stage, for either or both of these reasons, it becomes increasingly difficult and expensive to implement changes. The transition to this stage does not affect the system's importance to its users; for that matter, E-type systems that reach this stage are likely so ingrained in the real-world processes of which they are a part that they are arguably relied upon more than when they were in evolution.

Section 2.3.1: Loss of Architectural Integrity

To understand the circumstances that delineate evolution from servicing, it is important to explore the concept of software decay. What exactly is software decay? As alluded to several times previously, it is the condition in which architectural integrity has been compromised by less-than-optimal (i.e. kluge) implementations; but what exactly constitutes a less-than-optimal implementation? Software decay can be characterized by implementations that involve overly complex, bloated, inelegant, or inefficient code; orphaned code that no longer supports any feature; frequent changes to the code base; code with an unnecessarily high degree of coupling that affects more parts of the system than necessary; and code with low cohesion, which promotes duplication of functionality or the splitting of its logical organization. Generally speaking, code decay can be thought of as any implementation of functionality that makes the overall comprehension of the system more difficult to attain and increases the effort required to implement future changes. When a system has been subjected to too much decay, its architectural integrity is said to have been lost, for the architecture no longer supports further modification. It is when the state of the architecture reaches this point that the system has transitioned from being evolvable to merely serviceable.
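
As a small, invented illustration of one of these symptoms, consider a business rule duplicated at two call sites; any change to the rule must now be hunted down and made twice, and each copy makes the system harder to comprehend:

```python
# Invented sketch of low cohesion through duplication: the same discount
# rule lives at two call sites instead of one authoritative home.

def invoice_total(items):
    total = sum(price * qty for price, qty in items)
    return total * 0.9 if total > 1000 else total  # bulk-discount rule, copy 1

def quote_total(items):
    total = sum(price * qty for price, qty in items)
    return total * 0.9 if total > 1000 else total  # bulk-discount rule, copy 2

# A more cohesive form keeps the policy in one place: change it once, not N times.
def apply_bulk_discount(total: float) -> float:
    return total * 0.9 if total > 1000 else total

print(apply_bulk_discount(1200.0))  # 1080.0
```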

Section 2.3.2: Loss of System Expertise

In addition to loss of architectural integrity, the other way in which software can slip from evolution into servicing is when the development team has lost its expertise in the system's architecture and its application domain.

With regard to the system's architecture, in order to implement changes efficiently, developers require a mastery of the domain (object) model, including its objects' properties and relationships (e.g. dependencies and inheritance). They must understand the processes and data structures upon which these objects are built and the sequences in which they interact. They must have both a holistic and a decomposed understanding of the system's architecture, including its strengths and weaknesses and the areas that have diverged from the original paradigm or where the implementation differs from the ideal. To complicate matters, and as indicated previously, such a comprehensive understanding of the system's architecture is practically impossible to document fully; for that matter, it is generally attained only through the direct experience of having worked with the system. There are two primary ways in which loss of system expertise occurs.

The first and most direct way in which loss of system expertise occurs is through attrition. It is perhaps beyond reasonable expectation to assume that all key members of a development team (i.e. those with comprehensive expertise) will remain with the project throughout its lifecycle. Without citing all the reasons that account for the loss of personnel, one of note is the way software decay can indirectly promote attrition. As a software system succumbs to decay, the task of implementing changes becomes more difficult, with the failure rate increasing proportionally. Arguably, the growing frustration and dissatisfaction of developers with a job that is increasingly difficult and less rewarding can drive people to seek other opportunities. Loss of expertise can have a multiplying effect: as key personnel leave the project (i.e. expertise is lost), the difficulty of sustaining evolution is further exacerbated, causing additional personnel to become frustrated or dissatisfied, and so on.

The second way expertise can be lost is indirectly, through the loss of comprehension of the system. As discussed previously, as the architectural integrity of a system is degraded by decay, and as it succumbs to Lehman's second and fourth laws (the Law of Increasing Complexity and the Law of Continuing Growth, respectively), it eventually fulfills Lehman's fifth law (the Law of Declining Quality). The result of a code base that is ever growing in size and complexity, and declining in quality, is one that becomes ever more difficult to comprehend. This can indirectly erode system expertise, as the system's declining quality hinders its ability to be understood at the level required to sustain evolution.

As stated previously, the implication of a software system entering the servicing stage is that major modifications become increasingly difficult and expensive to implement. The result is that fewer changes, and generally only critical ones, are made.

Section 2.4: Stage 4 - Phase-Out

At some point during servicing, the architectural integrity of the system has become so damaged by decay, and/or the expertise required to support it no longer exists, that it becomes cost-prohibitive to implement any further modifications. A helpdesk may still be employed to assist users; however, requests for modifications, even minor corrective ones, are no longer entertained. Metaphorically speaking, the system is on life-support. The staged-model refers to this point in a software system's lifecycle as the phase-out stage.

As stated previously, the stage in which a system finds itself has no bearing on the degree to which it is relied upon by its users. Often a system that has reached this stage has been in use for a long time and has therefore become heavily integrated into the business processes of the real world in which it is embedded.

With this often being the case, and it always being the case that the world continues to change regardless of whether the software can, software systems in this state become increasingly out of line with the needs of their users. Because such a system is heavily relied upon yet unable to change with the world, plans are generally made to replace it with one that better matches the needs of its users and that can be more easily adapted as those needs change. This is where the name of the stage originates: phase-out is the point at which a software system is merely kept running until a replacement can be implemented.

Section 2.5: Stage 5 - Close-Down

The final stage of the staged-model, close-down, is a brief one. Little is noteworthy of this stage other than that it simply involves the retirement of the system from use and generally coincides with the fielding of its replacement. One interesting observation is that it demonstrates the cyclical nature of software systems: as one system reaches the end of the staged-model, another (i.e. its replacement) is starting at the beginning of it - and so the cycle continues.

Conclusion

Many of today's software lifecycle models focus on and prioritize the initial development of software, with many implicitly suggesting that the maintenance of software is a homogeneous, open-ended process whose termination is made at the leisure of its maintainers and for reasons other than technical necessity. While these models may hold true in theory, they require circumstances and conditions so ideal that they are rarely seen in the real world; moreover, even fewer of them address the truisms and principles of Lehman's Laws.

By way of gathered empirical evidence and the personal experience of its authors, Bennett, Rajlich & Wilde's staged-model offers a new abstraction of the software development lifecycle, providing several new perspectives that both reaffirm the principles demonstrated in Lehman's Laws and contribute to the industry's understanding of the software lifecycle.

First, the staged-model offers a perspective that accounts for the significance of a software system's architectural integrity to its ability to be maintained. Bennett, Rajlich & Wilde offer that as a system's architecture degrades over time through software decay, the ability of its maintainers to implement change likewise degrades as the code base becomes increasingly large and complex. This perspective is directly in line with Lehman's second and fourth laws, the Law of Increasing Complexity and the Law of Continuing Growth, respectively. The value of the staged-model here is the lesson it offers managers: not only is it important to implement the modifications necessary to keep a system relevant and valuable to its users, but they must be implemented in a manner that does not undermine the ability to implement further changes down the road.

The second perspective the staged-model offers over other models is that it accounts for the enormous implications that the loss of system-expertise has for a software system's ability to evolve. It shows how system-expertise can be lost directly through attrition, or indirectly through the detrimental effect that software decay has on the system's ability to be comprehended. Bennett, Rajlich & Wilde implicitly reaffirm Lehman's Law of Conservation of Familiarity by showing how a software system's ability to evolve depends directly on the system-expertise of its maintainers, and that as that expertise erodes, there is a corresponding increase in the likelihood of software decay.

Logically, the concepts of loss of architectural integrity and loss of system-expertise may be linked. For instance, as attrition directly depletes the system-expertise available on a project, the likelihood increases that less-than-optimal changes will be implemented by less knowledgeable replacement staff, thereby increasing the system's complexity and decreasing its ability to be comprehended (i.e. indirectly eroding system-expertise). This in turn makes implementing additional changes more difficult, which can stimulate further attrition as the increased difficulty leads to frustration and a decrease in job satisfaction. The value of this to managers is the lesson that preserving a system's architectural integrity, and therefore its ability to evolve, involves not only defending it from the malicious effects of software decay but also retaining the system-expertise of the team. The take-away is that fighting to retain system-expertise is just as important as fighting to preserve architectural integrity.

Beyond the obvious promotion of the importance of architectural integrity and the retention of system-expertise to a system's ability to evolve, and beyond their implicit reaffirmation of Lehman's Laws, Bennett, Rajlich & Wilde are essentially also making a statement about the importance of maintaining system comprehension. Where the concepts of architectural integrity and system-expertise meet is in how easily the system's implementation can be understood.

As described earlier, the result of software decay is essentially that it muddles and obfuscates the system's code, preventing it from being easily comprehended. When the implementation has become sufficiently mired in decay, it is said that the system's architectural integrity has been lost; what is really meant, however, is that it has grown so large and/or complex (i.e. succumbing to Lehman's Law of Increasing Complexity and Law of Continuing Growth, respectively) that the ability to comprehend it is what has been lost.

Likewise, preservation of system-expertise is really the same concept observed from the opposite end. Preservation of system-expertise is essentially about retaining those who already understand - or comprehend - the system. As discussed previously, acquiring comprehension of a system, let alone expertise in it, is not easily achieved. While many valuable practices and techniques have been developed over the years, none captures the level of understanding and knowledge that presently can be gained only through direct, hands-on experience of working with the system. This is why it is so vital to preserve system-expertise, for once it is lost, it is extremely difficult, if not impossible, to get back.

To conclude, while the staged-model is not a methodology meant to supplant traditional models such as Waterfall or newcomers such as Extreme Programming, it provides several very important perspectives to keep in mind when employing these or any software development model. Preservation of architectural integrity and system-expertise - essentially, of system comprehension - is of paramount importance. For once a system has lost either, it loses its ability to evolve with the world of which it is a part, thereby losing its relevance and usefulness to its users.

References

Bennett, K., Rajlich, V., & Wilde, N. (2002). Software Evolution and the Staged Model of the Software Lifecycle. (M. Zelkowitz, Ed.) Advances in Computers, 56, pp. 1-54.

IEEE 610.12. (1990). IEEE Standard Glossary of Software Engineering Terminology. New York: Institute of Electrical and Electronics Engineers. Retrieved August 1, 2010 from https://www.apl.jhu.edu/Notes/Hausler/web/glossary.html.

ISO/IEC 12207, & IEEE 12207. (2008). International Standard: Software Engineering -- Software Life Cycle Processes. New York: International Organization for Standardization and Institute of Electrical and Electronics Engineers.

ISO/IEC 14764, & IEEE 14764. (2006). International Standard: Software Engineering -- Software Life Cycle Processes -- Software Maintenance. New York: International Organization for Standardization and Institute of Electrical and Electronics Engineers.

Lehman, M. M. (1980). Programs, Life Cycles, and Laws of Software Evolution. Proceedings of the IEEE, 68(9), pp. 1060-1076.

Lientz, B., & Swanson, E. (1980). Software Maintenance Management: A Study of the Maintenance of Computer Application Software in 487 Data Processing Organizations. Reading, MA: Addison-Wesley.

Oman, P., & Pfleeger, S. (1997). Applying software metrics. IEEE Computer Society Press.

Pigoski, T. (1997). Practical Software Maintenance: Best Practices for Managing Your Software Investment. New York: John Wiley & Sons, Inc.

Standish Group International. (1995). CHAOS: Project failure and success report. Retrieved August 1, 2010, from ProjectSmart: https://www.projectsmart.co.uk/docs/chaos-report.pdf
