There are several things architects do when building and delivering solutions, but one of the core responsibilities often seems to get lost early in the project lifecycle: ensuring system quality. For this reason, I thought it would be worth discussing how to bring system quality back to the forefront of an architect's thinking. Here are a few ways in which an architect can address system quality:
All of these are warranted in my mind and demonstrate comprehensive thinking about techniques for improving system quality. One thing that seems to be missing, and an area I've been thinking about recently, is managing system quality. Someone smart once stated, "You can't manage what you can't measure." I've experimented with measuring system quality for a couple of years now and believe it offers great benefit for building high-quality systems, but it is unfortunately not well exploited in the architect community today.

The ability to measure system quality, at least theoretically, helps predict the lifespan of a system once it is released into production. This goes beyond traditional testing. Testing systems today usually involves functional, security (or penetration), failover, integration, and code-coverage testing, which helps ensure that the system will function once released into production. What's missing is some level of assurance that the system will survive business or IT changes. I assert that if a system design optimizes system qualities such as Flexibility, Reusability, Testability, Maintainability, and Interoperability, it can withstand changes better, and therefore has a longer lifespan and potentially greater ROI.
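To make "measuring" concrete, here is a minimal sketch of what a quality-attribute scorecard might look like. The attribute names come from the list above, but the weights, the 1-to-5 scores, and the weighted-average calculation are all hypothetical illustrations of one possible scheme, not part of any formal method:

```python
# Hypothetical quality-attribute scorecard. Weights and 1-5 scores are
# illustrative values an architect might assign during a design review.
ATTRIBUTES = {
    # attribute: (weight, score on a 1-5 scale)
    "Flexibility":      (0.25, 4),
    "Reusability":      (0.15, 3),
    "Testability":      (0.20, 4),
    "Maintainability":  (0.25, 2),
    "Interoperability": (0.15, 5),
}

def weighted_quality_score(attrs):
    """Return the weighted average score, normalized back to the 1-5 scale."""
    total_weight = sum(w for w, _ in attrs.values())
    return sum(w * s for w, s in attrs.values()) / total_weight

score = weighted_quality_score(ATTRIBUTES)
print(f"Overall system quality score: {score:.2f} / 5")
```

Tracking a number like this across reviews is what would turn quality from something tested once into something managed over time; the low Maintainability score in the example is the kind of signal that would prompt a design change before release.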
The Software Engineering Institute's Architecture Tradeoff Analysis Method (ATAM) is a great approach for understanding the tradeoffs involved in system quality. I once worked with a couple of sharp individuals to create a simplified method of measuring system quality called Microsoft System Review (MSSR). Although both are useful and have been applied successfully, both require a bit more to provide the process and tools an architect needs to manage system quality. For this reason, I've been thinking of an approach I've called the System Quality Attribute Plan (SQAP). I don't know if this name will stick; the idea is in the really, really early stages of development. Anyway, SQAP has a few components:
SQAP is a rough first stab at measuring system quality. It is not fully baked but more of an idea of mine. What is yours?
A great idea - I am trying to figure out how I can measure the quality attributes for our organization's new architecture.
I am mostly interested in SQA Review - have you made any progress with this?
Some progress has been made. Because my role focuses on Enterprise Application Architecture and I work with Solution Architects to deliver solutions, I have to guide Solution Architects on best practices like this one rather than apply them myself. Having said this, I aim to implement variations of this idea in my EA responsibilities via the Architecture Review Board, System Quality Review Checkpoints, Architectural Position Papers, etc.
Anyway, I was recently part of an IASA initiative in which I published an article called "Implementing System Quality Attributes" that describes this idea in much more detail. Here's the link to the article: http://www.iasahome.org/c/portal/layout?p_l_id=PUB.1.269&p_p_id=20&p_p_action=1&p_p_state=exclusive&p_p_col_id=null&p_p_col_pos=3&p_p_col_count=4&_20_struts_action=%2Fdocument_library%2Fget_file&_20_folderId=61&_20_name=01-Implementing.pdf
Have a look and tell me what you think.
This is very interesting work and you have written a great paper. It is also good to see you calling these "System Quality Attributes" rather than putting them under the banner of "non-functional requirements".
I am keen to know what further progress you have made with MSSR.
MSSR has been adopted by a team in the Enterprise Services (formerly Microsoft Consulting Services) business called Technical Quality Assurance (TQA). Essentially, all highly visible projects of moderate to high technical complexity are evaluated by this crack specialist team to mitigate the risk of a poor system being implemented for a customer. The TQA team has been formally in existence for a couple of years now and has extended MSSR to new levels. For example, they use questionnaires to prepare and document system reviews and have accumulated a database of their research and findings. This database is becoming quite interesting: they use it to improve the productivity of system reviews as well as to drive guidance that helps teams build higher-quality systems.
If you'd like to hear more about TQA, let me know by sending me an email (firstname.lastname@example.org) and I'll connect you to the team.
I wanted to follow up on a note I mentioned in a previous blog I wrote on how to Manage System Quality and