1: The Old Way and the New Way

Today, software must meet your customers' needs in an ever-changing landscape. New features must be released continuously, updates must be timely, and bug fixes cannot wait for version 2. That's the reality of software development today; the days of a new release every few years are gone for good. It's especially true for cloud applications, but it holds for software in general.

Such agility requires lifecycle management that's built specifically to meet modern needs. But there are obstacles. Testing is one aspect of software development that can present a roadblock to agility. To publish even a one-line fix, you typically need to re-test the whole system. That can mean finding and wiring up the hardware, reassembling the team of testers, reinstalling the right operating systems and databases, and setting up the test harness. In the past, this overhead prevented the continuous update cycle that is so necessary today.

Many teams are still operating that old way, putting their competitiveness at risk. Fortunately, yours does not have to be such a team. You don't have to build and test software the way your ancestors did.

So how can you move your software testing into the 21st century? That is the subject of this book: how to streamline the testing of your software so that you can make updates and fix bugs more rapidly, and continuously deliver better software.

Because testing can be a significant obstacle, our aim in this book is to help you substantially reduce the overhead of testing while improving the reliability and repeatability of your tests. The result should be a shorter cycle of recognizing the need for a change, making the change, performing the tests, and publishing the update.

The planning and execution of this cycle is known as application lifecycle management (ALM).

[Figure: The application lifecycle]

Application lifecycle management with Visual Studio

Microsoft Visual Studio will figure prominently in this book and in your efforts to refine your own application lifecycle management. Application lifecycle management with Visual Studio is an approach that takes advantage of a number of Microsoft tools, most of which are found in Visual Studio and Team Foundation Server, and which support each part of the lifecycle management process. In these pages, we'll show you how to perform testing with Visual Studio Team Foundation Server 2012 RC. Though we will focus on that version, you can also use Visual Studio 2010. (We note the differences where necessary.) With the setup guidance we provide here, your team should be able to test complex distributed systems within a few days.

Of course, you may already be doing everything we suggest here, so we can't promise the scale of improvements you'll see. But we envisage that, like us, you're concerned with the increasing demand for rapid turnaround. If your software runs in the cloud—that is, if users access it on the web—then you will probably want no more than a few days to pass between a bug report and its fix. The same is true of many in-house applications; for example, many financial traders expect tools to be updated within an hour of making a request. Even if your software is a desktop application or a phone app, you'll want to publish regular updates.

Before we dig into lifecycle management with Visual Studio, let's take a look at the two fictitious companies that will introduce us to two rather different approaches to testing and deployment: Contoso, the traditional organization, and Fabrikam, the more modern company. They will help define the ALM problem with greater clarity.

Contoso and Fabrikam; or something old, something new

At Contoso, testing has always been a high priority. Contoso has been known for the quality of its software products for several decades. Recently, however, testing has begun to seem like a dead weight that is holding the company back. When customers report bugs to Contoso, product managers typically remark, "Well, the fix is easy to code, but we'd have to re-test the whole product. We can't do that for just one bug." So they have to politely thank the customer for her feedback and suggest a workaround.

Meanwhile, their competitor, Fabrikam, a much younger company with more up-to-date methods, frequently releases updates, often without their users noticing. Bugs are gone almost as soon as they are reported. How do they do it? How do they test their whole product in such a short time?

In our story, the problems begin when Fabrikam and Contoso merge. The two cultures have a lot to learn from each other.


The two companies take very different approaches, and yours might be somewhere between the extremes. In the chapters that follow, we show you how to get set up to do things the new way, and we explain the choices you can make based on the situation in your organization.

From Contoso to Fabrikam

Let's take a look at the pain that Contoso experiences in the beginning, and the benefits they realize as they move to a testing process more like Fabrikam's. You'll get a better understanding of the benefits of testing with Visual Studio, Team Foundation Server, and a virtual lab environment.

Here are some of the pain points that Contoso experienced by doing things the old way:

  • Updating an existing product is expensive no matter how small the change. Partly, this is because test hardware has to be assigned, private networks have to be wired up, and operating system and other platform software has to be installed. This is a lengthy process. As a result, if customers find bugs in the operational product, they don't expect to see them fixed soon.
  • During a project, the team frequently needs to test a new build. But the results are sometimes inconsistent because the previous build didn't uninstall properly. The most reliable solution would be to format the disk and install everything from scratch, but this is too costly.
  • Manual tests sometimes yield inconsistent results. The same feature is tested by slightly different procedures on different occasions, even if the same person is doing the test.
  • Developers often complain they can't reproduce a bug that was seen during a test run. Even if the tester has faithfully reported the steps that reproduce the bug, the conditions on a development machine might be different from those on the test environment.
  • When the requirements change, it can be difficult to discover which tests should be updated; and it can be difficult to find out how well the latest build of the product meets the stakeholders' needs. Different unintegrated tools mean that there are no traceable relationships between bugs, tests, customer requirements, and versions of the product. The manual steps needed to make the tools work together make the process unreliable.
  • Testing at the system level is always manual. Repeating a set of tests is costly, so it is not economical to develop the software by incremental improvements. System testing is often cut short, so bugs can go undiscovered until release.
  • Revisiting code is risky and expensive. Changing code that has already been developed means rerunning tests, or running the risk that the changes have introduced new bugs. Development managers prefer to develop all of one part of the code, and then another, integrating the parts only towards the end of the project when it is often too late to fix integration problems.

Here are the benefits Fabrikam enjoys and which Contoso will realize by moving to the new way:

  • Virtual machines are used to perform most tests, so new testing environments can be easily created and existing environments can rapidly be reset to a fresh state.
  • Configurations of machines and software can be saved for future use: no more painful rebuilding process.
  • Manual tests are guided by scripts displayed at the side of the screen while the tester works. This makes the tests more repeatable and reliable. Test plans and test scripts are stored in the Team Foundation Server database, and linked to the code and to the test configuration.
  • Tests are easy to reproduce. Testers' comments, screenshots and actions can be recorded along with the test results. When a bug is found, a snapshot of the exact state of the environment is stored along with the bug. Developers can log in to the environment to investigate.
  • Test automation is much more extensive. Manual test steps can be recorded and replayed rapidly. The recordings can also form the basis of automated system tests. Automated tests can be performed frequently, and at little cost.
  • Reports on the project website show the progress of requirements in terms of the relevant tests.
  • When requirements change, it's easy to trace which tests should be updated. An integrated set of tools manages and reports on requirements, test cases and test results, project planning, and bug management.
  • In addition to the core tools, third-party products can be integrated.
  • Changing existing code is commonplace. Automated tests are performed frequently and can be relied on to pick up bugs that were inadvertently introduced during a change. Development teams can produce a very basic version of an end-to-end working product at an early stage, and then gradually improve the features. This substantially improves the chances of delivering an effective product.

Application lifecycle management tools

We'll assume that you've met Visual Studio, the Microsoft software development environment. As well as editing code in Visual Studio, you can run unit tests, break into the code to debug it, and use the IntelliTrace feature of Visual Studio to trace calls within the running code. You can analyze existing code with dependency, sequence, and class diagrams, and create models to help design or generate code.

Team Foundation Server is a tool for tracking and reporting the progress of your project's work. It can also be the source control server where your developers keep their code. It can build and test your software frequently as it grows, and provide reports and dashboards that show progress. In particular, you can get reports that show how far you've gone towards meeting the requirements—both in terms of work completed and tests passing.

Microsoft Test Manager (MTM) is the tool for testers. With it, you can plan and execute both manual and automated tests. While you are performing tests, you can log a bug with one click; the bug report will contain a trace of your recent actions, a snapshot of the state of the system, and a copy of any notes you made while exploring the system. You can record your actions in the test case, so that they can be played back on later occasions.

MTM also includes tools for setting up and managing lab machines. You can configure a virtual lab in which to install a distributed system, and link that lab to the test plan. Whenever you need to repeat tests—for example when you want to publish a change to your system—the lab can be reconfigured automatically.

Moving to the new way: adopting Visual Studio for ALM

Now we aren't suggesting that everyone should work by the same methods. Some teams want to aim for a rapid development cycle; others are developing embedded software for which that wouldn't be appropriate.

Nor do we suggest that people in Contoso are doing it all wrong. Their attitude to testing is clearly admirable: they take a pride in releasing high-quality software.

But we do believe that, no matter which development methodology a team follows, it needs to run tests, and it can benefit from a brisker turnaround in its test runs—if only to reduce the boredom of repeating the same old tests by hand.

But such changes aren't just about adopting tools. A software team—the extended team that includes all the stakeholders who collaborate to produce the working software—consists of interacting individuals. To keep everyone in sync, new ways of doing things have to be tried out and agreed upon in measured steps.

Visual Studio provides a lot of different facilities. It therefore makes sense to adopt it one step at a time. Of course, this book is about testing, but since the test tools and other features such as work tracking and source control are closely integrated, we have to talk about them to some extent too.

The following diagram shows a typical order in which teams adopt Visual Studio for application lifecycle management. They begin by just using Visual Studio, and work gradually up through source control, server builds, system testing, and on to automated system tests. The need for gradual progress is mostly about learning. As your team starts to use each feature, you'll work out how to use it in the best way for your project.

And of course a team learns more slowly than any of its members. Learning how to open, assign, and close a bug is easy; agreeing who should do what takes longer.

[Figure: Adopting Visual Studio for ALM]

There are a number of ways you can gradually adopt Visual Studio for ALM, and the steps below represent one way. Naturally, this isn't a precise scheme. But here's what you get in each of the stages of adoption that we've shown:

  • Just Visual Studio – Visual Studio is used for development. To test the application, Contoso's testers basically just press F5 and find out whether the application works.
    • Unit tests are written by the developers to test individual classes and components.
    • Coded UI Tests are a neat way to run automated tests of the whole application through its user interface.
  • Team Foundation Server Basics – when you install Team Foundation Server, you get a host of features. The first features you'll want to take advantage of are:
    • Source Control, to avoid overwriting each other's work. After a while, you might start using:
      • Check-in rules – which remind developers to run tests and quality analysis on their code before checking it into the server.
      • Shelvesets – a way of copying a set of changes from one user to another for review before checking in.
      • Branches – which help manage work in a large project.
    • Task Management is about tracking the backlog (the list of tasks), bugs, issues, and requirements. Each item is recorded in a work item. You can assign work items to people, project iterations, and areas of work; you can organize them into hierarchies; and you can edit them in Excel, and sync with other project management tools.
  • The Project Portal is a SharePoint website, integrated with Team Foundation Server so that each project automatically gets its own site. And, even more interestingly, you can get some very nice dashboards, including graphs and charts of your project's progress. These reports are based on the work items, and on the test results.
  • The Build Service is a feature of Team Foundation Server that performs a vital function for the development team. It builds all the code that has been checked in by developers, and runs tests. Builds can run on a regular or continuous cycle, or on demand. The team gets email alerts if a compilation or test fails, and the project portal shows reports of the latest results.
    The email alert is very effective at keeping code quality high: it prominently mentions who checked in code before the failed build.
  • Microsoft Test Manager is where it gets interesting from the point of view of the professional tester. Microsoft Test Manager makes tests reliably repeatable and speeds up testing. Using it, you can:
    • Write a script for each manual test, which is displayed at the side of the screen while the test is being performed.
    • Partially automate a test by recording the test actions as you perform them. The next time you run the test, you can replay the actions.
    • Fully automate a test so that it can run in the build service. To do this, you can adapt code generated from partly automated tests.
    • Associate tests with user requirements. The project portal will include charts that display the progress of each user requirement in terms of its tests.
    • Organize your tests into suites and plans, and divide them up by functional areas and project iterations.
    • Perform one-click bug reporting, which includes snapshots of the state of the machine.
  • Lab Environments are collections of test machines—particularly virtual machines. Without a lab, you can test an application locally, running it on your own computer. During development, applications are typically debugged on the development machine, often with several tiers running on the same machine. But with lab facilities, you can:
    • Deploy a system to one or more machines and collect test data from each machine. For example, a web client, Microsoft Internet Information Services (IIS), and a database would run on separate machines.
    • Run on freshly-created virtual machines, so that there's no need to uninstall old versions, no chance of the application corrupting your own computer, and you can choose any platform configuration you like.
    • Configure an environment of virtual machines for a particular test suite, and store it for use whenever you want to run that suite again.
    • Take a snapshot of the state of an environment and save it along with a bug report.
  • Automated build, deploy, and test. The simplest setup of the build service runs unit tests in the same way the developer typically does—all on one machine. But for web and other distributed applications, this doesn't properly simulate the real operational conditions. With automated deployment, you can run tests on a lab environment as part of the continuous or regular build.
    The automation builds the system, instantiates the appropriate virtual environment for the tests, deploys each component to the correct machine in the environment, runs the tests, collects data from each machine, and logs the results for reporting on the project portal.

Now let's take a look at what you'll find in the remaining chapters.

Chapter 2: Unit Testing: Testing the Inside

Developers create and run unit tests by using Visual Studio. These tests typically validate an individual method or class. Their primary purpose is to make sure changes don't introduce bugs. An agile process involves the reworking of existing software, so you need unit tests to keep things stable.

Typically developers spend 50 percent of their time writing tests. Yes, that is a lot. The effort is repaid many times over in reduced bug counts. Ask anyone who's tried it properly. They don't go back to the old ways.

Developers run these tests on their own machines initially, but check both software and tests into the source control system. There, the build service periodically builds the checked-in software and runs the tests. Alarms are raised if any test fails. This is a very effective method of ensuring that the software remains free of bugs—or at least free of the bugs that would be caught by the tests. It's part of the procedure that when you find a bug, you start by adding new tests.
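
To make this concrete, here is a minimal sketch of such a test written with MSTest, the unit testing framework built into Visual Studio. The IceCreamCart class is hypothetical, invented for this illustration; any small class of your own would play the same role.

```csharp
using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class IceCreamCartTests
{
    // One test, one behavior: adding an item increases the count.
    // When a bug is found, a test like this is written first to
    // reproduce it; the fix then makes the test pass.
    [TestMethod]
    public void AddItem_IncreasesItemCount()
    {
        var cart = new IceCreamCart();   // hypothetical class under test
        cart.AddItem("Pistachio");
        Assert.AreEqual(1, cart.ItemCount);
    }
}

// A hypothetical class under test, included so the sketch compiles.
public class IceCreamCart
{
    private readonly List<string> items = new List<string>();
    public void AddItem(string flavor) { items.Add(flavor); }
    public int ItemCount { get { return items.Count; } }
}
```

Checked in alongside the product code, a test like this runs on every server build, so a regression is flagged within minutes of the check-in that caused it.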

Chapter 3: Lab Environments

To test a system, you must first install it on a suitable machine or set of machines. Ideally, they should be fresh installations, starting from a blank disk, because any state lingering from previous installations can invalidate the tests. In Visual Studio, lab environments take a lot of the tedium out of setting up fresh computers and configuring them for testing.

A lab environment is a group of computers that can be managed as a single entity for the purposes of deployment and testing. Typically the computers are virtual machines, so you can take snapshots of the state of the complete environment and restore it to an earlier state. Setting up a new environment can be done very quickly by replicating a template.

Chapter 4: Manual System Tests

System tests make sure that the software you are developing meets the needs of the stakeholders. System tests look at what you can do and see from outside of the system: that is, from the point of view of users and other systems that are external to yours.

In many organizations, this kind of testing is done by specialist testers who are not the same people as the developers. That's a strategy we recommend. A good tester can write software and a good developer can test it. But you don't often find the strongest skills of creating beautiful software coexisting in the same head as the passion and cunning that is needed to find ingenious ways to break it.

System testing is performed with Microsoft Test Manager. As well as planning tests and linking them to requirements, Microsoft Test Manager lets you set up lab environments—configurations of machines on which you run the tests.

While you are running tests, Microsoft Test Manager's Test Runner sits at the side of the screen, prompting you with the steps you have to perform. It lets you record the results and make notes, and will record the actions you take to help diagnose any bugs that you find. You can log a bug with one click, complete with a screenshot, a snapshot of the machine states, and a log of the actions you took leading up to the failure. 

Chapter 5: Automated System Tests

System testing starts with exploration—just probing the system to see what it does and looking for vulnerabilities ad hoc. But gradually you progress to scripted manual testing, in which each test case is described as a specific series of steps that verifies a particular requirement. This makes the tests repeatable; different people can work through the same test, without a deep understanding of the requirement, and reliably obtain the same result.

Manual tests can be made faster by recording the actions of the first tester, and then replaying them for subsequent tests. In the later tests, the tester only has to verify the results of each step (and perform some actions that are not accurately recorded).

But the most effective tests are performed entirely automatically. Although it requires a little extra effort to achieve this, the payback comes when you run the tests every night. Typically you'll automate the most crucial tests, and leave some of the others manual. You'll also continue to do exploratory manual testing of new features as they are developed. The idea is that the more mature tests get automated.
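
As a rough illustration, here is the shape of a fully automated UI-level test written with the Coded UI API that this chapter covers. In practice, most of this code is generated for you by recording a manual test run; the URL and control IDs here are hypothetical.

```csharp
using System;
using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UITesting.HtmlControls;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[CodedUITest]
public class OrderIceCreamTests
{
    [TestMethod]
    public void CustomerCanPlaceOrder()
    {
        // Launch the web client. The test URL is hypothetical.
        BrowserWindow browser = BrowserWindow.Launch(
            new Uri("http://fabrikam-test/icecream"));

        // Locate controls by their HTML id attributes (hypothetical ids).
        var flavor = new HtmlEdit(browser);
        flavor.SearchProperties[HtmlControl.PropertyNames.Id] = "flavor";
        Keyboard.SendKeys(flavor, "Pistachio");

        var order = new HtmlButton(browser);
        order.SearchProperties[HtmlControl.PropertyNames.Id] = "placeOrder";
        Mouse.Click(order);

        // Verify the visible outcome, just as a manual tester would.
        var status = new HtmlSpan(browser);
        status.SearchProperties[HtmlControl.PropertyNames.Id] = "status";
        Assert.AreEqual("Order placed", status.InnerText);
    }
}
```

Because it carries a TestMethod attribute, a test like this can run in the build service alongside the unit tests, or against a lab environment as part of an automated build-deploy-test run.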

A fully automated system test builds the system, initializes a lab environment of one or more machines, and deploys the system components onto the machines. It then runs the tests and collects diagnostic data. Bug reports can be logged automatically in the case of failures. Team members can view results on the project website in terms of the progress of each requirement's tests.

Chapter 6: A Testing Toolbox

Functional tests are just the beginning. You'll want to do load tests to see if the system can handle high volumes of work fast enough; stress tests to see if it fails when short of memory or other resources; as well as security, robustness, and a variety of other kinds of tests.

Visual Studio has specialized tools for some of these test types, and for others there are testing patterns we can recommend.
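
For example, load tests in Visual Studio are driven by web performance tests, which can be recorded or written in code. The sketch below, with hypothetical URLs, shows a coded web performance test; a load test can then run it on behalf of hundreds of simulated users.

```csharp
using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.WebTesting;

// A minimal coded web performance test. Added to a Visual Studio
// load test, it is executed repeatedly by many simulated users while
// the load test engine measures response times and error rates.
public class BrowseAndOrderWebTest : WebTest
{
    public override IEnumerator<WebTestRequest> GetRequestEnumerator()
    {
        // Each yielded request is issued and timed by the test engine.
        // The URLs are hypothetical.
        yield return new WebTestRequest("http://fabrikam-test/icecream");
        yield return new WebTestRequest("http://fabrikam-test/icecream/catalog");
    }
}
```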

Discovering a failure is just the first step to fixing the bug. We have a number of tools and techniques that help you diagnose the fault. One of the most powerful is the ability to save the state of the lab machines on which the failure occurred, so that the developer can log in and work out what happened. Diagnostic data adapters collect a variety of information while the tests are running, and IntelliTrace records where the code execution went prior to the failure.

Lab environments can be run by developers from Visual Studio while debugging—they aren't just a tool for the system tester.

Chapter 7: Testing in the Software Lifecycle

Whether you're aiming for deployment ten times a day, or whether you just want to reduce the cost of running tests, it isn't just a matter of running the tools. You have to have the right process in place.

Different processes are appropriate for different products and different teams. Continuous delivery might be appropriate for a social networking website, but less so for medical support systems.

Whether your process is rapid-cycle or very formal, you can lower the risks and costs of software development by adopting some of the principles of agile development, including rigorous testing and incremental development. In this chapter we'll highlight the testing aspects of such processes: how testing fits into iterative development, how to deal with bugs, what to monitor, and how to deal with what you see in the reports.

Appendix: Setting up the Infrastructure

If you're administering your test framework, the Appendix is for you. We walk through the complete setup and discuss your options. If you follow it through, you'll be ready to hire a team and start work. (Alternatively, a team will be ready to hire you.)

We put this material at the end because it's quite likely that someone else has already done the setting up, so that you can dig right into testing. But you'll still find it useful to understand how the bits fit together.

The bits we install include: Visual Studio Team Foundation Server and its source and build services; Microsoft SharePoint Team Services, which provides the project website on which reports and dashboards appear; Microsoft Hyper-V technology and Microsoft System Center Virtual Machine Manager (SCVMM) to provide virtual machines on which most testing will be performed; lab management to manage groups of machines on which distributed systems can be tested; a population of virtual machine templates that team members will use; and a key server to let you create new copies of Windows easily. We'll also sort out the maze of cross-references and user permissions needed to let these components work together.

The development process

To simplify our book, we'll make some assumptions about the process your team uses to develop software. Your process might use different terms or might work somewhat differently, but you'll be able to adapt what we say about testing accordingly. We'll assume:

  • Your team uses Visual Studio to develop code, and Team Foundation Server to manage source code.
  • You also use Team Foundation Server to help track your work. You create work items (that is, records in the Team Foundation Server database) to represent requirements. You might call them product backlog items, user stories, features, or requirements. We will use the generic term "requirement." When each requirement has been completed, the work item is closed.
  • You divide the schedule of your project into successive iterations, which each last a few weeks. You might call them sprints or milestones. In Team Foundation Server, you assign work items to iterations.
  • Your team monitors the progress of its work by using the charts on the project website that Team Foundation Server provides. The charts are derived from the state of the work items, and show how much work has been done, and how much remains, both on the whole project and on the current iteration.
  • You have read Agile Software Engineering with Visual Studio by Sam Guckenheimer and Neno Loje (Addison-Wesley Professional, 2011), which we strongly recommend. It explains good ways of doing all the above.

In this book, we will build on that foundation. We will recommend that you also record test cases in Team Foundation Server, to help you track not just what implementation work has been done, but also how successfully the requirements are being met.
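
Because requirements and test cases are ordinary work items in the Team Foundation Server database, they can also be queried programmatically. As a hedged illustration, the sketch below uses the TFS client object model to list open requirements; the server URL, project name, and work item type name are assumptions that depend on your process template.

```csharp
using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

class OpenRequirements
{
    static void Main()
    {
        // Connect to a team project collection (hypothetical URL).
        var collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
            new Uri("http://fabrikam-tfs:8080/tfs/DefaultCollection"));
        var store = collection.GetService<WorkItemStore>();

        // WIQL query: all requirement work items that aren't closed.
        // 'IceCream' and 'Requirement' are assumptions; your process
        // template may call them 'User Story' or 'Product Backlog Item'.
        WorkItemCollection items = store.Query(
            "SELECT [System.Id], [System.Title] FROM WorkItems " +
            "WHERE [System.TeamProject] = 'IceCream' " +
            "AND [System.WorkItemType] = 'Requirement' " +
            "AND [System.State] <> 'Closed'");

        foreach (WorkItem item in items)
            Console.WriteLine("{0}: {1}", item.Id, item.Title);
    }
}
```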

Testers vs. developers?

In some software development shops, there's a deep divide between development and test. There are often good reasons for this. If you're developing an aircraft navigation system, having a test team that thinks through its tests completely independently from the development team is very good hygiene; it reduces the chances of the same mistaken assumptions propagating all the way from initial tender to fishing bits out of the sea. Similar thinking applies to the acceptance tests at the end of a traditional development contract: when considering whether to hand over the money, your client does not want the application to be tested by the development team.

Contoso operates a separate test team. When a product's requirements are determined, the test leads work out a test plan, setting out the manual steps that more junior testers will follow when an integrated build of the product becomes available.

This divide is less appropriate for Fabrikam's rapid cycle. Testing has to happen more or less concurrently with development. The skills of the exploratory manual tester are still required, but it is useful if, when the exploration is done, that person can code up an automated version of the same tests.

Contractual acceptance tests are less important in a rapid delivery cycle. The supplier is not done with the software as soon as it is delivered. Feedback will be gathered from the operational software, and when a customer finds a bug, it can be fixed within days.

These forces all lead towards a narrower division between testers and developers. Indeed, many agile teams don't make that distinction at all. Testing of all kinds is done by the development team.

That isn't to say the separate approach is invalid. Far from it; where very high reliability is sought, there is a strong necessity for separate test teams. But there are also some companies like Contoso, in which separate test teams are maintained mostly for historical reasons. They could consider moving more towards the "developers = testers" end of the slider.

[Figure: Who finds the bugs?]

Where dev and test are separate, unit testing is the province of the developers, and whole-system testing is what the test team does. But even where there is a strong divide for good reasons, our experience is that it can be very useful to move people between the teams to some extent. Developers benefit from thinking about what's required in terms of exact tests, and testers benefit from understanding development. Knowing the code helps you find its vulnerabilities, and automating tests allows you to run tests more reliably and more often.

Agile development

We recommend that, if you aren't doing so already, you should consider using more agile principles of development. Agile development is made possible by a strong testing regime.

Please note, "agile" doesn't mean giving up a rigorous approach to project governance where that is appropriate. If you develop software for my car's braking system or my bank's accounting systems, then I would ask you to keep doing the audit trails, the careful specifications, and so on.

At the same time, it is true of any project—formal or not—that an iterative approach to project planning can minimize the risks of failed projects and increase the capacity to respond to changes in the users' needs.

To see why, consider a typical large development project. The Contoso team needs to develop a website that sells ice cream. (Okay, forget that they could just get an off-the-shelf sales system. It's an example.) A morning's debate determines all the nice features they would like the site to have, and the afternoon's discussion leads to a block diagram in which there is a product catalog database, a web server, an order fulfillment application, and various other components.

Now they come to draw up the project plan. One of the more traditionally-minded team members proposes that they should start with one of the components, develop it fully with all the bells and whistles, and then develop the next component fully, and so on. Is this a good strategy? No. Only near the end of the project, when they come to sew all the parts together, will they discover whether the whole thing works properly; and whether the business model works; and whether their prospective customers really want to buy ice cream from the internet.

A better approach is to begin by developing a very basic end-to-end functionality. A user should be able to order an ice cream; no nice user interface, no ability to choose a flavor or boast on networking sites about what he is currently eating. That way, you can demonstrate the principle at an early stage, and maybe even run it in a limited business trial. The feedback from that will almost certainly improve your ideas of what's required. Maybe you'll discover it's vital to ask the users whether they have a refrigerator.

Then you can build up the system's behavior gradually: more features, better user interface, and so on.

But wait, objects a senior member of the team. Every time you add new behavior, you'll have to revisit and rework each component. And every time you rework code, you run the risk of introducing bugs! Far better, surely, to write each component, lock it down, and move on to the next?

No. As we've said, you need to develop the functionality gradually to minimize the risk of project failure. And you'll plan it that way: each step is a new feature that the user can see. A great benefit is that everyone—business management, customers, and the team—can see how far the project has come: progress is visible with every new demonstrated feature.

But, to address that very valid objection, how do you avoid introducing bugs when you rework the code? Testing. That's what this book is about.


Summary

Testing the whole system should no longer slow down your responsiveness when your software products must be updated. Provided you adapt your development and operational processes appropriately, there are tools that can help you automate many of your tests and speed up manual testing for the rest. The integration of testing tools with requirements and bug tracking makes the process reliable and predictable. The ability to save a test plan, complete with lab configurations, and to work through the test interactively, allows you to repeat tests quickly and reliably.

Where to go for more information

There are a number of resources listed throughout the book. These resources will provide additional background, bring you up to speed on various technologies, and so forth. For your convenience, there is a bibliography online that contains all the links, so these resources are just a click away.
