FIM 2010 Performance Testing - Introduction

One of the areas I have been focused on recently is testing the performance of our product.  With the release of RC1, I thought I would start off with some insight into what you can expect from the product. 

A key problem in figuring out the performance of a given product is the number of variables that impact the results you observe.  Many customers have a simple question: can your product support my company of size X?  While on the surface this is a simple question, a large number of variables play into the answer.  For FIM 2010 there are several key pieces of information needed to evaluate that answer.

  1. Topology - In what topology do you plan to deploy the product?  Will SQL Server be on the same box as the FIMService?  Will you be using a Network Load Balancer?
  2. Hardware - What hardware are you running on each piece of your topology?  What are the CPU, memory, disk, and network specifications?  How are your drives configured?  How is SQL Server configured to store the database files?
  3. Policy Objects -  Policy objects are a key component of FIM 2010.  These include Sets, Management Policy Rules, Schema, Workflows, Sync Rules, etc.  Depending on how you configure these, there will be additional work that the product must do & that will impact your performance.
  4. Scale - Typically scale is talked about in terms of the number of users, but in the case of FIM you also need to think about the other object types in the system, depending on the solution you are deploying.  How many groups & of what type?  Do you have calculated groups?  Do you have custom object types you are managing, such as computers?
  5. Load - How do you expect your system to be used?  How often do you expect someone to create a group?  What type of load do you expect from a Password Reset deployment?  Will users rely more on the Portal or on the Outlook Add-in for Office 2007?  (A rough back-of-envelope estimate, like the sketch after this list, can help frame these questions.)
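As a purely illustrative aside (not part of the original post), here is a minimal sketch of how scale & load assumptions might be turned into a rough request-rate estimate.  Every object count and per-user rate below is a made-up assumption for illustration only, not FIM 2010 guidance or a figure from our testing.

```python
# Hypothetical back-of-envelope load model.  All numbers are assumed values
# for illustration, not measured FIM 2010 targets or test results.

USERS = 100_000             # assumed managed user population
GROUPS = 300_000            # assumed group count (security + distribution)
WORK_HOURS_PER_DAY = 10     # window over which interactive load is spread

# Assumed per-day activity across the whole population (hypothetical rates).
daily_operations = {
    "group_create":          GROUPS * 0.001,  # ~0.1% of groups created per day
    "group_membership_edit": GROUPS * 0.01,   # ~1% of groups edited per day
    "password_reset":        USERS * 0.005,   # ~0.5% of users reset per day
    "profile_edit":          USERS * 0.002,   # ~0.2% of users edit attributes
}

def per_hour(ops_per_day: float) -> float:
    """Spread a daily operation count evenly over the working day."""
    return ops_per_day / WORK_HOURS_PER_DAY

if __name__ == "__main__":
    total = 0.0
    for name, per_day in daily_operations.items():
        rate = per_hour(per_day)
        total += rate
        print(f"{name:>22}: {per_day:8.0f}/day  ~{rate:6.1f}/hour")
    print(f"{'total':>22}: {'':>12}  ~{total:6.1f}/hour")
```

An estimate like this only frames the conversation; the actual load a deployment generates still depends on the policy objects, topology, and hardware discussed above.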

How you answer each of these questions will likely impact the performance of the product.  This is a classic problem for testers: we often face a matrix of variables & then need a way to help answer some of these questions.  My goal here is not to give you a definitive answer for your specific case, but instead to share how we have approached our testing, which can then inform your deployment.

For our own internal testing we have worked to leverage feedback from our customers, & most specifically MSIT, to model how our product will be deployed and how it will perform.  From there we have been expanding this model to see how changes to some of these variables impact performance.

In my next few posts I will discuss how we have approached each of these items, both for planning our testing & our eventual deployment.

Comments

  • Anonymous
    October 05, 2009
    Welcome to the community Darryl - we are eagerly awaiting the outcome of your testing. We are especially concerned as to the added overhead of ERE/DRE processing on the sync engine and whether or not the increased memory management of the 64-bit sync engine will displace any of that. My last project involved a single MA with 2.6 million objects of which only 300k needed AD accounts, so processing in FIM would require all of them to be in the portal with at least an additional 300k ERE's. It took us 26 hours to FI/FS the single MA - will FIM scale to this level?

  • Anonymous
    October 06, 2009
    Thanks Brad.  The addition of ERE & DRE objects in the system and their impact on performance is an area that we continue to explore in our testing.  Our current targets for performance internally include 150k users + 440k groups and their respective EREs & DREs for codeless provisioning.