Time-to-Release – the missing System Quality Attribute
I’ve been looking at different ways to implement the ATAM method these past few weeks. Why? Because I’m exploring how to evaluate software architecture, and I’m a fan of the ATAM method pioneered at the Software Engineering Institute at Carnegie Mellon University. Along the way, I’ve realized that there is a flaw that seems difficult to address.
Different lists of criteria
The ATAM method is not a difficult thing to understand. At its core, it is quite simple: create a list of “quality attributes” and sort them, highest to lowest, in the order of priority that the business wants. Get the business stakeholders to sign off. Then evaluate the ability of the architecture to perform according to those priorities. An architecture that places a high priority on Throughput and a low priority on Robustness may look quite different from an architecture that places a high priority on Robustness and a low priority on Throughput.
So where do we get these lists of attributes?
A couple of years ago, my colleague Gabriel Morgan posted a good article on his blog called “Implementing System Quality Attributes.” I’ve referred to it from time to time myself, just to remind myself of a good core set of System Quality Attributes that we could use for evaluating system-level architecture, as the ATAM method requires. Gabriel got his list of attributes from “Software Requirements” by Karl Wiegers.
Of course, there are other possible lists of attributes. The ISO defined a set of system quality attributes in the standards ISO 25010 and ISO 25012. They use different terms: instead of System Quality Attributes, there are three high-level “quality models,” each of which presents “quality characteristics.” For each quality characteristic, there are different quality metrics.
Both the list of attributes from Wiegers and the list of “quality characteristics” from the ISO are missing a key attribute: “time to release” (or time to market).
The missing criteria
One of the old sayings from the early days of Microsoft is: “Ship date is a feature of the product.” The intent of this statement is fairly simple: you can only fit a certain number of features into a product in a specific period of time. If your time is shorter, your list of features is shorter.
I’d like to suggest that the need to ship your software on a schedule may be more important than some of the quality attributes as well. In other words, “time-to-release” needs to be on the list of system quality attributes, prioritized with the other attributes.
How is that quality?
I kind of expect to get flamed for making the suggestion that “time to release” should be on the list, prioritized with the likes of reliability, reusability, portability, and security. After all, shouldn’t we measure the quality of the product independently of the date on which it ships?
In a perfect world, perhaps. But look at the method that ATAM proposes. The method suggests that we should create a stack-ranked list of quality attributes and get the business to sign off. In other words, the business has to decide whether “Flexibility” is more, or less, important than “Maintainability.” Try explaining the difference to your business customer! I can’t.
However, if we create a list of attributes and put “Time to Release” on the list, we are empowering the development team in a critical way. We are empowering them to MISS their deadlines if there is a quality attribute higher on the list that needs attention.
For example: let’s say that your business wants you to implement an eCommerce solution. In eCommerce, security is very important. Not only can the credit card companies shut you down if you don’t meet strict PCI compliance requirements, but your reputation can be torpedoed if a hacker gets access to your customers’ credit card data and uses that information for identity theft. Security matters. In fact, I’d say that security matters more than “going live” does.
So your priority may be, in this example:
- Security,
- Usability,
- Time-to-Release,
- Flexibility,
- Reliability,
- Scalability,
- Performance,
- Maintainability,
- Testability, and
- Interoperability.
This means that the business is saying something very specific: “if you cannot get security or usability right, we’d rather you delay the release than ship something that is not secure or not usable. On the other hand, if the code is not particularly maintainable, we will ship anyway.”
Now, that’s something I can sink my teeth into. Basically, the “Time to Release” attribute is a dividing line. Everything above the line is critical to quality. Everything below the line is good practice.
As an architect sitting in the “reviewer’s chair,” I cannot imagine a more important dividing line than this one. Not only can I tell whether an architecture is any good based on the criteria that rise “above” the line, but I can also push back if the business is accepting an unacceptable sacrifice with an attribute that falls “below” the line.
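To make the dividing line concrete, here is a minimal sketch in Python. It is not part of ATAM itself; the function name and the ranking are just the eCommerce example from this post, used to show how the divider splits the list into attributes worth slipping the date for and attributes that ship as-is.

```python
# A minimal sketch (not an ATAM artifact): split a stack-ranked quality
# attribute list at the "Time-to-Release" dividing line. The ranking
# below is the example from this post, not an official list.

RANKED_ATTRIBUTES = [
    "Security",
    "Usability",
    "Time-to-Release",
    "Flexibility",
    "Reliability",
    "Scalability",
    "Performance",
    "Maintainability",
    "Testability",
    "Interoperability",
]

def split_at_release(attributes, divider="Time-to-Release"):
    """Return (release_blocking, good_practice) lists.

    Everything ranked above the divider justifies slipping the ship
    date; everything below it ships as-is when time runs out.
    """
    line = attributes.index(divider)
    return attributes[:line], attributes[line + 1:]

release_blocking, good_practice = split_at_release(RANKED_ATTRIBUTES)
print("Delay the release for:", release_blocking)    # ['Security', 'Usability']
print("Ship anyway, improve later:", good_practice)
```

Nothing clever is happening here; the point is simply that once “Time to Release” is on the list, the business’s tradeoff becomes an explicit line you can point at during a review.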
So, when you are considering the different ways to stack-rank the quality attributes, consider adding the attribute of “time to release” into the list. It may offer insight into the mind, and expectations, of your customer and improve your odds of success.
Comments
Anonymous
April 05, 2012
The SABSA methodology has the concept of "Business Attributes Profile" and has a taxonomy of business attributes. It's fairly well-rounded and extensive, even though it started from a security perspective. I have a copy of the book "Enterprise Security Architecture: A Business-Driven Approach" that discusses it, but I suspect you can find some or all of it online.

Good article. I too agree that time-to-release is important. Balancing tradeoffs is much more than trading off on technical attributes. If internal and external business-related attributes and context are not considered, then the value proposition of a proposed architecture or solution goes out the window.

On a related note, I remember taking an SEI architecture course and the instructor saying that while you can propose quality attributes to a customer/stakeholder, always accept any quality attribute they dream up as long as they can tell you how they would measure it. The challenge then is to understand the push-pull of the attribute against other attributes.

Eliciting requirements, quality attributes included, is always a tough job, because you will never get all of them identified and prioritized, and even if you do, some stakeholders will say they don't care about some of them. But as the saying goes, they don't care until they do. Process (such as architecture reviews) plus experience will take you furthest in the right direction.