TechEd Europe Report: Bill Hilf: Open Source and Increasing Operational Efficiency

Bill Hilf, "Linux and Open Source: A Technical Perspective"

In addition to being an exceptional presenter, Bill Hilf runs the "Linux Lab" in Redmond, which is dedicated to the analysis of open source systems.  With over 50 flavours of Linux and other open source systems running on different servers, the Linux Lab is, as Hilf puts it, a "funky place."  A significant consideration in their studies is interoperability between open source and Microsoft systems.  The analysis is not being performed strictly for competitive reasons, but rather to seek an understanding of open source systems at a deeper level. 

"It's not about Windows and Linux," he explained.  Businesses base operating system decisions on application needs, reliability needs, integration needs.  And there are tradeoffs around things like support.  One might choose to rely on the community for support - not necessarily a bad thing - but understanding the ramifications of choosing to do so is essential for a decision maker.

Hilf described the "pyramid meritocracy" that underlies Linux development.  He also provided hard data around issues, for example, like Linux 2.6 kernel patches.  He discussed the way in which the "pareto distribution," or the "90-10 rule," impacts Open Source development, just as it does other software development projects: 90% of the development is done by a very small core group.  So Hilf calls the bluff of the "mythology" that the thousands of people potentially working on Open Source projects all make a substantial contribution.  And, particularly relevant to the companion myth that open source is about something like freedom and liberty, the majority of that core "10%" group of developers are paid for their efforts.  Of the top 12 Linux contributors, 10 are commercial developers!

Hilf also underscored the challenges inherent in a developer's capacity to extend open source projects in ways that suit them.  He described how, in the course of his contributions to the Apache web server project, he'd once added functionality that, while useful to him, broke other core functionality.  This subsequently left him maintaining his own version of Apache, in addition to his other daily work, as Apache continued to evolve.

Hilf was quick to point out that Microsoft has learned a great deal from the open source communities.  Windows XP, for example, is the software with the widest distribution on the planet, and, in Hilf's estimation, Microsoft's understanding of the importance of minimizing the number of separate configurations of such a widely distributed complex system has been largely reinforced by Open Source.  The real innovation of the Open Source community, as he sees it, is indeed the community.  And transparency, another open source staple, is something Microsoft has deeply internalized.  The shared source initiative (and blogs like ours!) reflect this.  He cited WiX, the shared-source Windows Installer XML toolset, as an example of how an application can be developed with the community.

Finally, he showed a demo of Monad, the new command shell.  (The running joke, apparently, is that until he joined Microsoft, Bill didn't know how to use a mouse!)  Being within the .NET object model lets you do fantastic things from the command prompt, like generate charts and Excel documents. 

I was so impressed by the lucidity and content of Bill's talk that I also attended his afternoon session, Reducing IT Complexity and Increasing Operational Efficiency Using Windows Server System Products.   In that presentation, he detailed the Common Engineering Criteria, which are both a list of criteria and a process for helping reduce complexity. 

Although not formally related, the talks offered an interesting counterpoint. 

Traditionally, Microsoft offered server products like the mail server (Exchange) and the database server (SQL).  It wasn't always clear how they interoperated, how interdependent they were, or what their individual upgrade paths looked like.  Also, although not everyone's a security expert, there is commonly a desire to (1) run secure servers, (2) prevent the crash of a single service from taking down an entire server, and (3) reduce the time and headspace necessary to maintain systems.

So this led to three design goals being manifested in the Common Engineering Criteria: (1) make systems secure and reliable, (2) make systems consistent and predictable, and (3) make them integrated and interoperable.

The best way to appreciate the Common Engineering Criteria is to look them up here.  The CEC report, as published to the Internet, provides a transparent view of which server products conform to the CEC, and for those that don't, either why not, or when they will.

The second part of the talk was around WS-System, a core component in the upcoming CEC 2006.  WS-System is a series of specifications that will allow for more secure, reliable and predictable infrastructure, through features like service publication in Active Directory, the Volume Shadow Copy writer, and an automatic dependency check at install time.  The overall goal is consistent management.

The talk concluded with discussions of the Exchange Best Practice Analyser - a self-updating tool which offers prescriptive configuration guidance while taking into account the fact that best practices themselves are dynamic - and the Security Configuration Wizard, which, released with Windows Server 2003 Service Pack 1, is a policy-driven tool for helping assess the configuration of your system.  Deploying software with the best of intentions, while failing to consider all paths through a configuration (one of which may be vulnerable), is the root cause of the majority of server installation problems. 

Hilf's second presentation ended with an intriguing recommendation: Find a way to measure how much time you get back for adding a technology, such as a server product, to your environment.  Typically, when we add a technology to our lives, we (1) make a decision to do so, and then (2) fight until it works.  It certainly makes good sense to (3) perform a postmortem and ask if it was worth it in the first place!
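Hilf's recommendation amounts to simple break-even arithmetic.  The numbers below are purely hypothetical - the point is only that tracking the two quantities makes the postmortem possible:

```python
# Back-of-the-envelope version of Hilf's "measure the time you get back."
# Both figures are assumptions for illustration, not measurements.
setup_hours = 40          # the decision plus the "fight until it works" phase
hours_saved_per_week = 3  # routine admin time the new product gives back

break_even_weeks = setup_hours / hours_saved_per_week
print(f"pays for itself after about {break_even_weeks:.1f} weeks")
```

If the break-even point lands beyond the technology's useful life in your environment, step (3) - the postmortem - has its answer.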