Architecture Review Boards

Recently here at Microsoft we had a number of email threads on the purpose and value of architecture review boards (ARBs) for IT organizations. As the CTO for Microsoft IT for a number of years, I created (actually, re-created) and chaired our ARB, and we learned a lot about them: what worked, what didn't, what added value to projects, and what didn't. Here's a quick summary of our learnings -- your mileage may vary, of course.

  • Be clear on your purpose.  We used the ARB to examine projects from several perspectives: a technical overview, including key technology choices, context diagrams, and data models and flows; adherence to standards; and being a healthy, "good citizen" in the overall ecosystem of applications and data. We specifically and intentionally avoided certain areas like headcount allocation and project costing as these were covered in other governance forums.  
  • Pick your projects. A meaningful, relevant architectural review takes time, so focus on the projects with the most investment and the most risk – and any other criteria you think important. (For example, I reserved a few sessions for projects I deemed "strategic" – projects in a very early stage that over time would be transformational, such as master data projects.) Don't try to do everything. Our ARB met once a month for a half day and reviewed 2-3 projects at most.
  • Of those, only review major releases, for similar reasons.
  • We adopted the idea of an “engagement architect” who would work with project teams to prepare for the ARB. The “engagement architect” met with the project team several weeks in advance and ensured that the presentation was of high quality, was accurate (more on that later), and covered all the points that the Board wanted.
  • Do the ARB early in the project lifecycle. Prior to my tenure, ARBs were held late in the development cycle and merely ended up as "red flag" exercises, which resulted in time-consuming rework and delayed projects -- and, needless to say, the ARB was therefore not held in high regard. If you review early, you have an opportunity to make mid-course corrections and keep the project on track. There is no value in doing a review late – except as noted in the next bullet.
  • Optionally, you can do a check-in late in the lifecycle – to make sure the suggested changes made it in.
  • One of the innovations we made was to scorecard apps based on overall improvement in each of the reviewed areas. If, for example, the new version of the app was "better" from a data quality standpoint, we gave it a +1. If no changes were made in the data quality area, it got a zero. If it regressed in our view, it got a -1. (In many cases it was not a project goal to improve in certain areas, so getting a "0" was not necessarily a bad thing.) There were 8 such areas, and we could then provide to the CIO an at-a-glance view into how well the ecosystem overall was improving, or not.
  • The ARB must have the power to stop a project. This gives "teeth" to the ARB function. In our case we exercised this power very, very rarely -- in fact, only once, where we asked a project team to align with another having a similar goal.
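The scorecard described above can be sketched as a simple rollup – a minimal illustration, with hypothetical area names and review data (not the actual tool we used):

```python
# Sketch of the ARB scorecard: each reviewed area scores +1 (improved),
# 0 (unchanged), or -1 (regressed). Area names here are hypothetical.
AREAS = [
    "data quality", "security", "standards adherence", "scalability",
    "operability", "integration", "user experience", "cost efficiency",
]

def score_review(review: dict) -> int:
    """Sum the per-area scores for one application's review."""
    assert all(area in AREAS for area in review), "unknown review area"
    assert all(score in (-1, 0, 1) for score in review.values())
    return sum(review.values())

def ecosystem_view(reviews: dict) -> dict:
    """At-a-glance per-app totals, e.g. for a CIO summary."""
    return {app: score_review(r) for app, r in reviews.items()}

# Hypothetical example: App A improved in two areas, App B improved in
# one and regressed in another (a net zero, which may be fine if those
# areas were not project goals).
reviews = {
    "App A": {"data quality": 1, "security": 1, "scalability": 0},
    "App B": {"data quality": -1, "standards adherence": 1},
}
print(ecosystem_view(reviews))  # {'App A': 2, 'App B': 0}
```

A dashboard like this only summarizes the reviews; the per-area judgments themselves remain a human call by the board.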

One of the great benefits of the ARB, in my opinion, was that it gave the solution architects a chance to present in front of their peers. This led to great discussion about technologies and technology tradeoffs; it educated the broader architecture community about what was going on, what approaches were being taken, and what assets they could take advantage of; and overall it fostered a sense of community. In many cases it was also a great forcing function for the project team to be explicit about the technical decisions they were making.

Comments

  • Anonymous
    November 25, 2015
    An often-discussed item in ARB meetings is "Buy vs. Build" – an old beast getting new life every time... Given the new scenarios of Cloud/Lake, IoT, BYOD, and mobile computing converged with distributed ML, do you have any special insights? The relative lack of experience with these tools in this arena gives less confidence when evaluating them. More often, having fewer vendors and appliance-based infrastructure is riskier, as going with commodity servers is still preferred for many reasons. Can a Buy-vs.-Build split of 90:10 work in the longer term? I work as an EA designing systems with several petabytes of storage, where computation/analytics uses the entire data set, which is still growing. All the above technologies are inevitable, and a plan is required to address scalability. The only hurdle is having to forget Cloud – everything stays within the org.