MIIS 2003 Capacity Planning Test Summary - Processor

Applies To: Windows Server 2003 with SP1

This section describes the tests used to evaluate processor performance: the hardware platforms used, the data set used, the test procedures, and the results of each test.

Currently, little information is available about the scalability and performance of MIIS 2003 relative to the hardware platform that hosts it. The goals of the tests presented in this section are to provide a set of recommendations for selecting a processor configuration that ensures the best performance for the MIIS 2003 server and present the resulting test data to substantiate those recommendations.

Test Description

The processor tests consisted of staging and synchronizing data representing between 10,000 and 500,000 user accounts on servers using different processor configurations. The tests gathered statistical data, such as time to complete and operations per second, to determine how processor configuration affected the performance of the servers. The goal was to determine whether the processor speed or the number of processors most significantly affected server performance.
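
The two statistics are related in the obvious way: operations per second is the number of objects processed divided by the elapsed time of the run. The following minimal Python sketch illustrates the calculation; the run_callable argument is a hypothetical placeholder for whatever starts the run profile.

    # Sketch only: the two statistics gathered for each run are time to complete
    # (in seconds) and operations per second (objects processed / elapsed time).
    import time

    def timed_run(run_callable, object_count: int):
        start = time.monotonic()
        run_callable()                        # placeholder: start the run profile
        elapsed = time.monotonic() - start
        return elapsed, object_count / elapsed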

The staging and synchronization processes were tested because they are the most processor-intensive operations performed by MIIS 2003. During staging, any connector filter rules that have been defined must be evaluated and applied to every object being processed. During synchronization, all join and projection rules are applied to all objects being processed.

To isolate processor performance during these tests, other elements in the environment remained as constant as possible. Specifically, the following conditions were maintained throughout the testing cycles:

  • Memory - All test platforms were configured with 1 GB RAM for each processor.

  • Storage - All platforms used the same storage array so disk performance would be consistent for all tests.

  • Network - The tests were performed on an isolated network to eliminate interference from external traffic.

  • Data - The same data set was used for each test to maintain a consistent number of objects and a consistent object-to-attribute ratio. Only the number of users changed between tests.

  • Sequential Processing - All tests were run sequentially so that they did not interfere with one another.

Expected Results

MIIS 2003 supports concurrent processing of management agents: multiple management agents can be run at the same time, and each management agent processes its data separately. Concurrent operations such as staging or export can take advantage of additional processors. Synchronization operations behave differently: they do not process multiple requests at the same time, and concurrent runs may encounter database record locks, which can result in retries. In terms of processor utilization, this means that all tasks benefit from higher processor speeds, while only a subset of tasks benefits from more processors.
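
The following is a minimal sketch of what concurrent staging looks like in practice. It assumes the MIIS 2003 WMI provider (the root\MicrosoftIdentityIntegrationServer namespace and the MIIS_ManagementAgent class) and the third-party Python "wmi" and "pywin32" packages; the management agent names match those used later in this section, but the run profile names are illustrative only.

    # Sketch only: run two import (staging) run profiles at the same time through
    # the MIIS 2003 WMI provider. Run profile names are assumptions. Synchronization
    # profiles should not be run this way, because synchronization does not process
    # multiple requests at the same time.
    import threading
    import pythoncom
    import wmi

    def run_profile(ma_name: str, profile_name: str) -> None:
        pythoncom.CoInitialize()  # each worker thread needs its own COM apartment
        try:
            conn = wmi.WMI(namespace=r"root\MicrosoftIdentityIntegrationServer")
            ma = conn.MIIS_ManagementAgent(Name=ma_name)[0]
            result = ma.Execute(profile_name)   # blocks until the run completes
            print(ma_name, profile_name, result)
        finally:
            pythoncom.CoUninitialize()

    # Staging (import) operations for different management agents can run
    # concurrently and make use of additional processors.
    threads = [
        threading.Thread(target=run_profile, args=("SQLMA", "Full Import (Stage Only)")),
        threading.Thread(target=run_profile, args=("TXTMA", "Full Import (Stage Only)")),
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()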

Although SQL Server supports the use of multiple processors, none of the tests indicated that additional processors significantly improved performance during MIIS 2003 transactions. In fact, even during the tests involving 500,000 user objects, the processor utilization levels on the SQL servers stayed well below maximum and the difference between the dual-processor and quad-processor platforms was negligible. Multiple processor considerations for SQL servers become more important if they are centrally located and service other clients in addition to MIIS 2003. In that type of environment, the multiprocessor support for SQL Server would provide better performance.

If you intend to run concurrent staging or export operations, your environment will benefit from multiple processors. If you do not intend to perform concurrent staging or export operations, there will be little or no advantage to purchasing a high number of processors. In this case, processor speed would be a better investment.

Test Results Summary

The majority of the test results indicated that MIIS 2003 performance is influenced more by the speed of the processors in the server than by the number of processors.

  • When making your choice regarding processor configuration, select a server based on the speed of the processors rather than the number of processors. Based on the results of these tests, a fast dual-processor server is recommended. If the future scalability of the server is a concern, purchase a quad-processor chassis but install only two processors until the workload on the server requires adding the remaining two.

Test Scenario

During this test, a variety of platform configurations were used to complete a series of MIIS 2003 staging and synchronization operations. The operations were designed to put load on the system processors. User account objects were imported from SQL Server and file-based management agents and then synchronized. Once this was complete, the user objects were exported into a clean instance of Active Directory. The same series of tests was performed on each hardware configuration. Over the course of the testing cycles, the number of users was increased from 10,000 to 500,000 in varying increments to explore the impact on performance.

Server Hardware Configuration

Two servers in various configurations were used to perform these tests. The table below summarizes the configuration of each server used in the testing process.

Table 1: Server hardware configuration

Test Server 1 (IBM xSeries 336)

  • Dual Intel EM64T processors @ 3.6 GHz
  • No hyper-threading
  • 2 GB RAM
  • Internal LSI SCSI controller
  • Dual 34.6 GB Ultra320 HDDs (10,000 RPM)
  • Windows Server 2003, Enterprise Edition (32-bit)

Test Server 2 (IBM xSeries 445)

  • Quad Intel Xeon MP processors @ 2.8 GHz
  • No hyper-threading
  • 4 GB RAM
  • Internal LSI SCSI controller
  • Dual 34.6 GB Ultra320 HDDs (10,000 RPM)
  • Windows Server 2003, Enterprise Edition (32-bit)

Storage Solution (IBM EXP400)

  • 14-disk SCSI enclosure (10 spindles used)

Storage Controller (IBM ServeRAID-6i)

  • Used in the IBM xSeries 445 for connectivity to the storage enclosure

Storage Controller (IBM ServeRAID-4)

  • Used in the IBM xSeries 336 for connectivity to the storage enclosure

All servers used the same storage solution to reduce performance variances based on disk access, although the two test servers did have different controllers.

Processor Count and Speed

Two sets of tests were run to monitor and measure CPU performance. In both cases, the MIIS configuration during the tests was identical, but the number of objects within the connected directories and the hardware configuration changed.

Three hardware platform configurations were used during these tests. For the purposes of this discussion, the configurations are referred to by a single letter that indicates the number of processors (D for dual-processor, Q for quad-processor) followed by two digits that indicate the clock speed. For example, the designation D36 indicates a dual-processor server running at 3.6 GHz, and Q28 indicates a quad-processor server running at 2.8 GHz. The three configurations used for these tests are listed in the following table.

Table 2: Processor configuration

D28 (IBM xSeries 445): Dual 2.8 GHz processors, no hyper-threading, 2 GB RAM

D36 (IBM xSeries 336): Dual 3.6 GHz processors, no hyper-threading, 2 GB RAM

Q28 (IBM xSeries 445): Quad 2.8 GHz processors, no hyper-threading, 4 GB RAM

The two IBM 445 configurations provided information about the effect of an increase in the number of processors, whereas the IBM 336 test was added to compare the effect of a higher processor speed rating. The IBM 445 platform tests were conducted on a single server.

Note

The IBM xSeries 336 has faster memory, front-side bus (FSB), and processor ratings than the IBM xSeries 445. Although its processors support a 64-bit architecture, the operating systems installed were 32-bit editions, so the 64-bit architecture provided no real benefit. None of the tests compared 32-bit performance to 64-bit performance.

MIIS Configuration

Microsoft Identity Integration Server 2003 with Service Pack 1 was used on the servers being tested. The different roles of the servers in the test environment and the software installed are summarized in the table below.

Table 3: Server configurations

MIIS Server (MIIS 2003 SP1; Microsoft SQL Server 2000 SP3a)

The different hardware platforms being tested were set up using this configuration. Each server hosted an instance of MIIS 2003 SP1 and the SQL Server database used by it. The text files for the file-based management agents were also located on each MIIS 2003 server.

SQL Data Source (Microsoft SQL Server 2000 SP3a)

This server hosted another SQL Server database that was used as a data source for some tests.

Active Directory Data Source (Microsoft Windows Server 2003, Enterprise Edition)

This server hosted the instance of Active Directory that received the exported data.

The following sections describe additional details of the installation.

Database Configuration

The MIIS 2003 database was hosted on the same server as MIIS 2003. This eliminated the need to send the data over the network.

Management Agent Configuration

Three management agents were installed on MIIS 2003. The following table summarizes the configuration of each one.

Table 4: Management agents

ADMA (Active Directory management agent)

  • Data and logs on different volumes for performance.
  • Remote data source (dedicated Active Directory server).
  • At the beginning of each test, the directory is empty except for the default administrative accounts created during setup.

SQLMA (SQL Server management agent)

  • Data and logs on different volumes for performance.
  • Remote data source (dedicated SQL server).
  • Varying number of user objects depending on the test cycle (10,000; 50,000; 100,000; 200,000; or 500,000).
  • Each user object has 25 attributes defined.
  • No multivalued or reference attributes.

TXTMA (text file management agent)

  • Local data source (the text files used as the data source are located on the MIIS 2003 server).
  • Varying number of user objects depending on the test cycle (10,000; 50,000; 100,000; 200,000; or 500,000).
  • Each user object has 25 attributes defined.
  • No multivalued or reference attributes.

Run Profile Configuration

Separate run profiles were created for the staging and synchronization operations, and both run profiles were configured to use full operations rather than deltas. This was necessary because MIIS 2003 was reset after each test in order to begin each test with an empty connector space and metaverse. Because no objects existed prior to each test, full staging and synchronization operations were required. Delta-related data is included in "Delta Operation Performance Data" later in this section.

Note

The synchronization portion of the tests included staging the user objects for export to Active Directory. Run profiles for actually exporting the staged data to Active Directory were created, and the export operations were performed. However, the statistics for the export operations are not included in the test results presented here, because export performance is skewed by the performance of the data source that receives the export. The goal of the tests presented here is to focus entirely on the performance of MIIS 2003.

No concurrent operations were tested; all run profiles were executed sequentially.
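
Putting the configuration above together, one sequential test pass looks roughly like the following sketch. It reuses the hypothetical run_profile helper from the sketch under "Expected Results"; the run profile names are assumptions, and only the step types (full import, full synchronization, export) come from the configuration described above.

    # One sequential test pass, as described above (no concurrent operations).
    # Profile names are illustrative; run_profile is the hypothetical helper
    # sketched earlier under "Expected Results".
    FULL_PASS = [
        ("SQLMA", "Full Import (Stage Only)"),  # staging, timed
        ("TXTMA", "Full Import (Stage Only)"),  # staging, timed
        ("SQLMA", "Full Synchronization"),      # synchronization, timed
        ("TXTMA", "Full Synchronization"),      # synchronization, timed
        ("ADMA",  "Export"),                    # performed, but excluded from the results
    ]

    for ma_name, profile_name in FULL_PASS:
        run_profile(ma_name, profile_name)      # one run profile at a time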

Rules Extensions

No management agent rules extensions were used. The provisioning code used was the minimum required to create the user objects in Active Directory and a Microsoft Exchange mailbox for each user.

Test Results

As the staging and synchronization tests were performed, the results were compiled and recorded. The following is a detailed presentation of the results for each test.

Staging Performance

The staging tests compared the performance of various platforms by using the SQLMA and TXTMA run profiles.

Staging Time

Figure 1: Total staging time (chart)

Table 5: Source data for Figure 1 (seconds)

Users        D36     D28     Q28
10,000        79      71      72
50,000       412     402     412
100,000      853     929     921
200,000    1,734   2,517   2,316
500,000    4,638   7,448   7,235

Observations
  • Because the staging tests were run sequentially (a common scenario for many customers) and each management agent runs as a single-threaded process, the faster server (D36) outperforms the other server configurations tested (D28 and Q28).

  • If multiple concurrent operations were processed, such as staging TXTMA and SQLMA at the same time, the additional processors in Q28 would have been better utilized and Q28 might have outperformed D36. Concurrent processing can be used as long as the concurrent operations do not need to access the same metaverse objects at the same time. If they do, object access might be locked by one process and prevent access by the other. The majority of deployments process management agents sequentially, so the tests performed for this document were based on sequential processing. Organizations that process run profiles concurrently should perform additional testing to determine the actual performance advantage of the additional processors in their environment.

  • Note that there is only a small performance increase when the additional processors are added (D28 compared to Q28). During the 500,000 user test, Q28 completes the staging process only a few minutes quicker.

  • When a smaller number of objects (100,000 or fewer) is being managed, there is almost no difference in staging performance between the platform configurations. A visible performance difference becomes apparent only when more than 100,000 objects are being processed.

Objects Staged per Second

Figure 2: Objects staged per second (chart)

Table 6: Source data for Figure 2 (objects/sec)

Users        D36      D28      Q28
10,000    126.58   140.85   138.89
50,000    121.36   124.38   121.36
100,000   117.23   107.64   108.58
200,000   115.34    79.46    86.36
500,000   107.81    67.13    69.11

Observations
  • Note that D36 maintains a more constant level of performance as the number of objects increases. While its performance is not as good in the small scenarios, D36 outperforms the other platforms in all the scenarios that involve 100,000 or more user objects.
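
The objects-per-second figures in Table 6 follow directly from the staging times in Table 5. The short sketch below recomputes Table 6 from Table 5 as a consistency check.

    # Recompute Table 6 (objects staged per second) from Table 5 (staging time,
    # in seconds). The results match the published table to within rounding.
    STAGING_SECONDS = {
        10_000:  {"D36": 79,    "D28": 71,    "Q28": 72},
        50_000:  {"D36": 412,   "D28": 402,   "Q28": 412},
        100_000: {"D36": 853,   "D28": 929,   "Q28": 921},
        200_000: {"D36": 1_734, "D28": 2_517, "Q28": 2_316},
        500_000: {"D36": 4_638, "D28": 7_448, "Q28": 7_235},
    }

    for users, seconds_by_server in STAGING_SECONDS.items():
        rates = {s: round(users / t, 2) for s, t in seconds_by_server.items()}
        print(users, rates)
    # Last line printed: 500000 {'D36': 107.81, 'D28': 67.13, 'Q28': 69.11}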

Synchronization Performance

Some variations in the data are difficult to see when both large and small data sets are described in a single chart. This section therefore separates the data into two groups: one with fewer than 100,000 objects and the other with 100,000 objects or more. For each group, it compares the synchronization performance of the various platforms.

Synchronization Time

Figure 3: Total synchronization time, fewer than 100,000 objects (chart)

Table 7: Source data for Figure 3 (seconds)

Users        D36     D28     Q28
10,000       368     551     497
50,000     2,286   4,200   4,958

Figure 4: Total synchronization time, 100,000 objects or more (chart)

Table 8: Source data for Figure 4 (seconds)

Users         D36       D28       Q28
100,000     5,537    12,685    11,708
200,000    21,188    38,957    38,067
500,000   113,442   150,852   172,798

Observations
  • The synchronization statistics show that the faster server (D36) outperforms the server with more processors. This is a clear indication that the synchronization process benefits from faster processors rather than more processors.

Objects Synchronized per Second

Figure 5: Objects synchronized per second (chart)

Table 9: Source data for Figure 5 (objects/sec)

Users       D36     D28     Q28
10,000    27.17   18.15   20.12
50,000    21.87   11.90   10.08
100,000   18.06    7.88    8.54
200,000    9.44    5.13    5.25
500,000    4.41    3.31    2.89

Observations
  • Once again, the data shows that the server with faster processors (D36) consistently performs better than the server with a higher number of processors (Q28).

  • The performance converges as the number of objects being processed increases. When synchronizing 500,000 objects, the difference is only about one to two objects per second, compared to a difference of about seven objects per second for the smaller data sets. Data sets of more than 500,000 objects were not tested.
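
For rough capacity planning between the measured population sizes, the synchronization times in Tables 7 and 8 can be interpolated. This is not part of the test procedure, and because growth between the measured points is clearly super-linear, the result should be treated only as a coarse estimate.

    # Rough planning sketch (not part of the tests): interpolate total
    # synchronization time between the two nearest measured points for one
    # platform, using the D36 column from Tables 7 and 8.
    SYNC_SECONDS_D36 = {
        10_000: 368, 50_000: 2_286, 100_000: 5_537, 200_000: 21_188, 500_000: 113_442,
    }

    def estimate_sync_seconds(users: int, measured: dict) -> float:
        points = sorted(measured.items())
        for (low_u, low_t), (high_u, high_t) in zip(points, points[1:]):
            if low_u <= users <= high_u:
                fraction = (users - low_u) / (high_u - low_u)
                return low_t + fraction * (high_t - low_t)
        raise ValueError("outside the measured range; do not extrapolate")

    print(round(estimate_sync_seconds(300_000, SYNC_SECONDS_D36)))  # ~51939 seconds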

Delta Operation Performance Data

After tests of the staging and synchronization of new user objects were completed, additional tests were run after making modifications and deletions to the original data set. The tests were designed to determine any difference in performance characteristics of the hardware configurations when performing full staging and synchronization operations as compared to delta staging and synchronization operations. The results of the tests demonstrated the same trends in performance whether full or delta-based operations were being run.

Delta Staging Statistics

Two sets of tests were performed. For the first set, changes were made to five attributes on 20% of the user objects. For the second set, 10% of the objects were deleted. New run profiles were created to perform delta operations, and the tests were run on the same hardware platforms as the earlier tests.
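
Note that the first column in Tables 10 through 13 lists the number of changed objects in each delta pass rather than the original population size. The counts follow directly from the original data set sizes, as the short sketch below shows.

    # Delta data set sizes used in Tables 10-13: 20 percent of the objects were
    # modified and 10 percent were deleted, for each original population size.
    ORIGINAL_POPULATIONS = [10_000, 50_000, 100_000, 200_000, 500_000]

    modified = [n // 5 for n in ORIGINAL_POPULATIONS]    # 20% (Tables 10 and 12)
    deleted = [n // 10 for n in ORIGINAL_POPULATIONS]    # 10% (Tables 11 and 13)

    print(modified)  # [2000, 10000, 20000, 40000, 100000]
    print(deleted)   # [1000, 5000, 10000, 20000, 50000]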

Delta Staging Time - Modified Objects

Figure 6: Delta staging time, 20% of objects modified (chart)

Table 10: Source data for Figure 6 (seconds)

Modified objects     D36     D28     Q28
2,000                  8       8       8
10,000                47      48      48
20,000               139     159     161
40,000               294     371     368
100,000            1,497   1,757   1,747

Observations
  • There is no significant performance difference between the platforms for delta staging; the curve looks similar to that of the full staging tests presented earlier. Keep in mind that the scale of the curves is different and that this test involves only 20% of the original data set. For the test involving the 500,000 object data set (100,000 modified user objects), the difference in total staging time between D36 and Q28 is roughly four minutes.

  • The performance advantage, while minimal, is still held by the faster dual-processor server. This test supports the previous recommendations.

Delta Staging Time - Deleted Objects

Figure 7: Delta staging time, 10% of objects deleted (chart)

Table 11: Source data for Figure 7 (seconds)

Deleted objects     D36     D28     Q28
1,000                 8       8       8
5,000                39      38      39
10,000               87      95      94
20,000              155     236     224
50,000            1,341   1,347   1,353

Observations
  • As with the performance statistics for Delta Staging Time - Modified Objects, there is little difference in performance between the platforms when staging so few changes.

  • The results of this test support previous recommendations.

Delta Staging Operations per Second - Modified Objects

Figure 8: Delta staging operations per second, 20% of objects modified (chart)

Table 12: Source data for Figure 8 (objects/sec)

Modified objects      D36      D28      Q28
2,000              250.00   250.00   250.00
10,000             212.77   208.33   208.33
20,000             143.88   125.79   124.22
40,000             136.05   107.82   108.70
100,000             66.80    56.92    57.24

Observations
  • Test results confirm previous recommendations.

  • There is a notable difference in the number of objects per second being processed in the larger test, which also supports the previously discussed testing trends.

Delta Staging Operations per Second - Deleted Objects

Figure 9: Delta staging operations per second, 10% of objects deleted (chart)

Table 13: Source data for Figure 9 (objects/sec)

Deleted objects      D36      D28      Q28
1,000             125.00   125.00   125.00
5,000             128.21   131.58   128.21
10,000            114.94   105.26   106.38
20,000            129.03    84.75    89.29
50,000             37.29    37.12    36.95

Observations
  • Test results confirm previous recommendations.

External Factors and Other Considerations

This document represents the first effort at providing some basic guidance around hardware selection, specifically processor configuration, based on the specific requirements of MIIS 2003. By no means is it a comprehensive study of all external factors that can affect the processor performance in all environments. Because every environment has unique characteristics that affect the overall performance of all deployed components, accurately reproducing and testing all external factors is not possible. This document presents a baseline set of tests that can be built upon to provide a complete assessment of performance considerations for components of an MIIS environment.

For more information about external factors that can affect performance, see "Other Capacity and Performance Considerations" in MIIS Capacity Planning - Additional Performance Considerations in this guide.

Testing for Additional Hardware Considerations

The following options were not tested:

  • The effect of 64-bit processors

  • The effect of hyper-threading

See Also

Concepts

Introduction to MIIS 2003 Capacity Planning
MIIS 2003 Capacity Planning Test Summary - SQL Server
MIIS 2003 Capacity Planning Test Summary - Disk Performance
MIIS 2003 Capacity Planning Test Summary - Memory
MIIS 2003 Capacity Planning Test Summary - Database Size
MIIS 2003 Capacity Planning Test Summary - Network
MIIS 2003 Capacity Planning - Additional Performance Considerations