

Estimate performance and capacity requirements for InfoPath Forms Services in SharePoint Server 2010

 

Applies to: InfoPath Forms Services

This article provides guidance on the performance and capacity effects of using InfoPath Forms Services on topologies running Microsoft SharePoint Server 2010.

The tests described in this article were designed to help develop estimates of how different farm configurations respond to changes to the following variables:

  • Front-end Web server scale out for different submit operations

  • Front-end Web server scale out for different InfoPath list operations

  • Impact of form complexity on throughput

For general information about capacity planning for SharePoint Server 2010, see Performance and capacity management (SharePoint Server 2010).

In this article:

  • Test farm characteristics

  • Test results

  • Recommendations

Test farm characteristics

The specific capacity and performance figures presented in this article will differ from the figures in real-world environments. The figures presented are intended to provide a starting point for the design of an appropriately scaled environment. After you have completed your initial system design, test the configuration to determine whether your system will support the factors in your environment.

Hardware, settings, and topology

This section describes the hardware and topology that were used to complete these tests, as well as the test scenarios, in the following sections:

  • Lab hardware

  • Topology

  • Test scenarios

Lab hardware

To provide a high level of test-result detail, several farm configurations were used for testing. Farm configurations ranged from one to six Web servers and a single database server that is running Microsoft SQL Server 2008 database software. Load testing was performed with Visual Studio Team System 2008. The tests also included two agent computers. All computers were 64-bit.

The following table lists the specific hardware that was used for testing.

| | Web server | Database server | Agent 1 and Agent 2 |
|---|---|---|---|
| Role | Front-end Web server | SQL Server | Agent |
| Processors | 2x Xeon L5420 @ 2.5 GHz (8 cores) | 4x Xeon E7330 @ 2.4 GHz (16 cores) | 2x Xeon L5420 @ 2.5 GHz (8 cores) |
| RAM | 16 GB | 32 GB | 16 GB |
| Operating system | Windows Server 2008 R2 | Windows Server 2008 R2 | Windows Server 2008 R2 |
| Storage: operating system | 4x 146 GB, 10K RPM, RAID 0 | 2x 146 GB, 15K RPM, RAID 1 | 4x 146 GB, 10K RPM, RAID 0 |
| Storage: backups | - | 3x 300 GB, 15K RPM, RAID 5 | - |
| Storage: SQL Server data | - | 9x 300 GB, 15K RPM, RAID 5 | - |
| Storage: SQL Server log | - | 6x 300 GB, 15K RPM, RAID 5 | - |
| Number of network adapters | 1 | 4 | 1 |
| Network adapter speed | 1 Gb per second | 1 Gb per second | 1 Gb per second |
| Authentication | NTLM | NTLM | NTLM |
| Software version | SharePoint Server 2010 (Pre-Release Version) | SQL Server 2008 SP1 CU6 | - |
| Number of SQL Server instances | - | 1 | - |
| Load balancer type | Windows Network Load Balancing | Windows Network Load Balancing | N/A |
| Information Rights Management (IRM) settings | Off | Off | - |
| Anti-virus settings | Not installed | Not installed | Not installed |

Topology

InfoPath capacity planning topology

Test scenarios

This section defines the test scenarios and provides an overview of the test process that was used for each scenario. Test results are given in later sections in this article.

Form templates

Testing was performed with a form template that consists of text boxes, radio buttons, and drop-down list boxes. This template will be referred to as the baseline solution. The following is a screen shot of the form template for context.

Passport application form

The baseline solution was used to create derivative form templates. Each derivative was created by making a scoped modification to the baseline solution template and saving it as a new template. This approach enabled a comparison of different operations and aspects of form design. The following table describes the form templates that were used in testing.

| Form template | Number of fields | Type of submit | Number of validation rules | First request optimized | Administrator deployed | Notes |
|---|---|---|---|---|---|---|
| Baseline solution | 44 | None | 4 | Yes | No | |
| Baseline solution with Web service submit | 44 | Web service | 4 | Yes | Yes | |
| Baseline solution with document library submit | 44 | SharePoint document library | 4 | Yes | Yes | |
| Baseline solution without first request optimization | 44 | Web service | 5 | No | Yes | The extra validation rule is Date is in the past. Because this rule uses the today() function, the first request requires state data. |
| Baseline solution with 2x fields | 88 | Web service | 4 | Yes | Yes | |
| Baseline solution with 3x fields | 132 | Web service | 4 | Yes | Yes | |
| Baseline solution with 4x fields | 176 | Web service | 4 | Yes | Yes | |
| Baseline solution with validation | 44 | Web service | 10 | No | Yes | |
| Baseline solution with 2x validation | 44 | Web service | 20 | No | Yes | |
| Baseline solution with 4x validation | 44 | Web service | 40 | No | Yes | |

InfoPath list form

A modified version of an issue tracking list was used to test the InfoPath list form operations. Two modifications were made to the list: the Assigned To column was removed, and the Related Issues column was set to not allow multiple values. The list was then prepopulated with 100 items. The following is a screen shot of the list.

InfoPath list form

Test definitions

Scale-out tests

The following table describes the tests that were used for the front-end Web server scale-out tests.

| Scenario description | Form template used | Test steps | Number of postbacks |
|---|---|---|---|
| Baseline solution new | Baseline solution | Open a new instance of the baseline solution. | 0 |
| Save new baseline solution | Baseline solution | 1. Open a new instance of the baseline solution. 2. Fill out a form and save it to a document library. | 1 |
| Baseline solution with document library submit | Baseline solution with document library submit | 1. Open a new instance of the baseline solution with document library submit. 2. Fill out a form and then click Submit. This sends the form data to a SharePoint document library. | 1 |
| Baseline solution with Web service submit | Baseline solution with Web service submit | 1. Open a new instance of the baseline solution with Web service submit. 2. Fill out a form and then click Submit. This sends the form data to a Web service. | 1 |
| Baseline solution with document library submit x5 | Five copies of the baseline solution with document library submit form template, each deployed to its own document library | For each document library: 1. Open a new instance of the baseline solution with document library submit. 2. Fill out a form and then click Submit. This sends the form data to a SharePoint document library. | 1 |
| Baseline solution open | Baseline solution with document library submit | Open a baseline solution form that has already been completed. The form is opened from a document library. | 0 |

Form complexity tests

The following table describes the form complexity tests.

| Test name | Form template used | Test steps | Number of postbacks |
|---|---|---|---|
| Baseline solution with 1x controls | Baseline solution with Web service submit | 1. Open a new instance of the baseline solution with Web service submit. 2. Fill out a form and then click Submit. This sends the form data to a Web service. | 1 |
| Baseline solution with 2x controls | Baseline solution with 2x controls | 1. Open a new instance of the baseline solution with 2x controls. 2. Fill out a form and then click Submit. This sends the form data to a Web service. | 1 |
| Baseline solution with 3x controls | Baseline solution with 3x controls | 1. Open a new instance of the baseline solution with 3x controls. 2. Fill out a form and then click Submit. This sends the form data to a Web service. | 1 |
| Baseline solution with 4x controls | Baseline solution with 4x controls | 1. Open a new instance of the baseline solution with 4x controls. 2. Fill out a form and then click Submit. This sends the form data to a Web service. | 1 |
| Baseline solution without first request optimization | Baseline solution without first request optimization | 1. Open a new instance of the baseline solution without first request optimization. 2. Fill out a form and then click Submit. This sends the form data to a Web service. | 1 |
| Baseline solution with validation | Baseline solution with validation | 1. Open a new instance of the baseline solution with validation. 2. Fill out a form and then click Submit. This sends the form data to a Web service. | 1 |
| Baseline solution with 2x validation | Baseline solution with 2x validation | 1. Open a new instance of the baseline solution with 2x validation. 2. Fill out a form and then click Submit. This sends the form data to a Web service. | 1 |
| Baseline solution with 4x validation | Baseline solution with 4x validation | 1. Open a new instance of the baseline solution with 4x validation. 2. Fill out a form and then click Submit. This sends the form data to a Web service. | 1 |

InfoPath list form tests

The following table describes the InfoPath list form tests.

| Test name | Test step | Number of postbacks |
|---|---|---|
| Issue tracking display | Open an existing issue tracking list item in display view. | 0 |
| Issue tracking edit | Open an existing issue tracking list item in edit view. | 0 |
| Issue tracking new | Open a new item for the issue tracking list. | 0 |

Test results

All the tests reported in this article were conducted without think time, the natural delay between consecutive operations. In a real-world environment, each operation is followed by a delay while the user performs the next step in the task. By contrast, in these tests each operation was immediately followed by the next, which placed a continual load on the farm. This load can cause database contention and other conditions that adversely affect performance.

For each topology, a series of three tests was run: calibration, green zone, and maximum throughput. The calibration run uses a step-load pattern, which increases the number of virtual users over time. The results of the calibration run determine the user load for the green zone and maximum throughput tests. The green zone and maximum throughput tests both use a constant load pattern for a period of five minutes. The requests per second (RPS) values reported in this article are the average RPS at the end of the five-minute constant-load test.
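The averaging step described above can be sketched in a few lines of Python; the request timestamps and window length below are illustrative values, not data from these tests.

```python
def average_rps(request_timestamps, window_seconds=300):
    """Average requests per second over a constant-load window.

    Timestamps are seconds since the start of the constant-load phase;
    only requests that complete inside the window are counted.
    """
    in_window = [t for t in request_timestamps if 0 <= t <= window_seconds]
    return len(in_window) / window_seconds

# Illustrative: 30,000 requests spread evenly across a 5-minute run.
timestamps = [i * 0.01 for i in range(30_000)]
print(average_rps(timestamps))  # → 100.0
```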

Some cells in the results tables contain a dash, which indicates that the test was not run for that topology. These tests were skipped because the results of other runs indicated that no increase in RPS was expected for that topology.

Bottlenecks in InfoPath Forms Services in SharePoint Server 2010 are described in greater detail in Common bottlenecks and their causes, later in this article.

Effect of Web front-end scale-out for different submit operations

The following table shows the green zone test results of scaling out front-end Web servers for various submit operations in SharePoint Server 2010.

| Topology | Baseline solution save | Baseline solution with Web service submit | Baseline solution with SharePoint Server 2010 submit | Baseline solution with SharePoint Server 2010 submit using five document libraries |
|---|---|---|---|---|
| 1x1 | 165 | 245 | 160 | 139 |
| 2x1 | 292 | 471 | 301 | 280 |
| 4x1 | 479 | 896 | 478 | 544 |
| 6x1 | 467 | 1395 | - | 599 |

The following graph shows the green zone throughput for different InfoPath submit operations on different Web front-end topologies. SharePoint Server 2010 submit can scale to four front-end Web servers. However, a farm running five document library submit forms in parallel can achieve more throughput with six front-end Web servers than a single document library can with six front-end Web servers. A farm will generally have more than one InfoPath solution that is deployed. This result means that one of those individual solutions will reach maximum throughput at four front-end Web servers. However, the collective throughput of all the solutions can scale beyond four front-end Web servers. Web service submit has the most throughput and scales to six front-end Web servers.

Green zone throughput for submit operations

The following table shows the maximum throughput test results of scaling out front-end Web servers for various submit operations in SharePoint Server 2010.

| Topology | Baseline solution save | Baseline solution with Web service submit | Baseline solution with SharePoint Server 2010 submit | Baseline solution with SharePoint Server 2010 submit using five document libraries |
|---|---|---|---|---|
| 1x1 | 286 | 470 | 301 | 285 |
| 2x1 | 484 | 912 | 464 | 518 |
| 4x1 | - | 1484 | 478 | 601 |
| 6x1 | - | 1483 | - | - |

The following graph shows the maximum throughput for different InfoPath submit operations on different front-end topologies. SharePoint Server 2010 submit and save scale to two front-end Web servers. However, a farm running five document library submit forms in parallel can achieve more throughput with four front-end Web servers than a single document library with four front-end Web servers. A farm will generally have more than one InfoPath solution that is deployed. This result means that one of those individual solutions will reach maximum throughput at four front-end Web servers. However, the collective throughput of all the solutions can scale beyond four front-end Web servers. Web service submit has the most throughput and scales to four front-end Web servers.

Maximum throughput for submit operations
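Reading the maximum-throughput figures as RPS per front-end Web server makes the scaling plateau explicit. A minimal sketch using the Web service submit column from the table above:

```python
# Maximum throughput (RPS) for Web service submit, keyed by number of
# front-end Web servers, taken from the table above.
web_service_submit = {1: 470, 2: 912, 4: 1484, 6: 1483}

for servers, rps in web_service_submit.items():
    print(f"{servers} WFE(s): {rps} RPS total, {rps / servers:.0f} RPS per server")
```

Per-server throughput falls from 470 RPS at one server to roughly 247 RPS at six, consistent with the observation that this operation stops gaining total throughput past four front-end Web servers.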

Effect of front-end Web server scale out for InfoPath list operations

The following table shows the green zone test results of adding front-end Web servers for InfoPath list operations in SharePoint Server 2010.

| Topology | Issue tracking display | Issue tracking new | Issue tracking edit |
|---|---|---|---|
| 1x1 | 77 | 67 | 56 |
| 2x1 | 153 | 125 | 106 |
| 4x1 | 295 | 236 | 212 |
| 6x1 | 455 | 431 | 416 |

The following graph shows the green zone throughput for InfoPath list operations. All the operations show increasing throughput from adding front-end Web servers. The results also suggest that adding more than six front-end Web servers will continue to increase throughput. This increase was observed outside the capacity planning testing. The display operation has more throughput than the new operation, which has more throughput than the edit operation.

Green zone throughput for list operations

The following table shows the maximum throughput test results of adding front-end Web servers for InfoPath list operations in SharePoint Server 2010.

| Topology | Issue tracking display | Issue tracking new | Issue tracking edit |
|---|---|---|---|
| 1x1 | 143 | 126 | 100 |
| 2x1 | 263 | 243 | 191 |
| 4x1 | 524 | 457 | 364 |
| 6x1 | 747 | 679 | 521 |

The following graph shows the maximum throughput for the InfoPath list operations. All the operations show increasing throughput from adding front-end Web servers. The results also suggest that adding more than six front-end Web servers will continue to increase throughput. This increase was observed outside the capacity planning testing. The display operation has more throughput than the new operation, which has more throughput than the edit operation.

Maximum throughput for list operations

Effect of Web front-end scale out for new and open operations

The following table shows the test results of adding front-end Web servers for new and open InfoPath operations in SharePoint Server 2010.

| Topology | Issue tracking new | Issue tracking display | Baseline solution new | Baseline solution open |
|---|---|---|---|---|
| 1x1 | 67 | 77 | 197 | 129 |
| 2x1 | 125 | 153 | 379 | 296 |
| 4x1 | 236 | 295 | 802 | 575 |
| 6x1 | 431 | 455 | 1182 | 869 |

The following graph shows the green zone throughput for new and open InfoPath operations. All the operations see increasing throughput from adding front-end Web servers. The results suggest that adding more than six front-end Web servers will continue to increase throughput. This increase was observed outside the capacity planning testing. Document library new and open operations have more throughput than InfoPath list new and display operations.

Green zone throughput for new and open operations

The following table shows the maximum throughput test results of adding front-end Web servers for new and open InfoPath operations in SharePoint Server 2010.

| Topology | Issue tracking new | Issue tracking display | Baseline solution new | Baseline solution open |
|---|---|---|---|---|
| 1x1 | 126 | 143 | 408 | 282 |
| 2x1 | 243 | 263 | 775 | 558 |
| 4x1 | 457 | 524 | 1285 | 996 |
| 6x1 | 679 | 747 | 1360 | 1104 |

The following graph shows the maximum throughput for new and open InfoPath operations. All the operations show increasing throughput from adding front-end Web servers. The results show that the document library new and open operations scale to six front-end Web servers. However, the results suggest that the InfoPath list operations could benefit from more than six front-end Web servers. Document library new and open operations have more throughput than InfoPath list new and display operations.

Maximum throughput for new and open operations

Effect of form complexity on throughput

The following table shows the test results of adding form controls to a form template. All results were collected on a farm topology that has four front-end Web servers.

| | Baseline solution 1x controls | Baseline solution 2x controls | Baseline solution 3x controls | Baseline solution 4x controls |
|---|---|---|---|---|
| Maximum throughput | 1484 | 1424 | 1310 | 1201 |
| Green zone | 896 | 834 | 760 | 608 |

The following graph shows the test results of adding form controls to a form template. The number of fields and controls in a form has a significant effect on throughput. These results show that increasing the number of controls by a factor of four can decrease the green zone throughput by more than 30 percent.

Impact of number of controls on throughput
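The "more than 30 percent" figure follows directly from the green zone row of the table above:

```python
# Green zone RPS from the table above: baseline (1x controls) vs. 4x controls.
baseline_rps = 896
four_x_controls_rps = 608

decrease = (baseline_rps - four_x_controls_rps) / baseline_rps
print(f"{decrease:.1%}")  # → 32.1%
```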

The following table shows the test results of adding validation rules to a form template. All results were collected on a farm topology that has four front-end Web servers.

| | Baseline solution | Baseline solution without first request optimization | Baseline solution with validation | Baseline solution with 2x validation | Baseline solution with 4x validation |
|---|---|---|---|---|---|
| Maximum throughput | 1484 | 1323 | 1271 | 1202 | 1074 |
| Green zone | 896 | 788 | 724 | 676 | 612 |

The following graph shows the test results of adding validation rules to a form template. The number of validation rules in a form has a measurable effect on throughput. These results show that increasing the number of validation rules by a factor of four can decrease the green zone throughput by more than 30 percent.

Impact of number of validation rules on throughput
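As with the controls comparison, the decrease can be checked against the green zone row of the table above:

```python
# Green zone RPS from the table above: baseline vs. 4x validation.
baseline_rps = 896
four_x_validation_rps = 612

decrease = (baseline_rps - four_x_validation_rps) / baseline_rps
print(f"{decrease:.1%}")  # → 31.7%
```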

Hardware cost per transaction

Maximum RPS of issue tracking display operation

| Scorecard dashboard | Scorecard metric | 1x1 | 2x1 | 4x1 | 6x1 |
|---|---|---|---|---|---|
| CPU | Average front-end Web server CPU | 91.5% | 85.8% | 85.8% | 81.1% |
| Reliability | Average page time | 0.088 | 0.093 | 0.11 | 0.098 |
| | Failure rate | 0% | 0% | 0% | 0% |

Green zone RPS of baseline solution new operation

| Scorecard dashboard | Scorecard metric | 1x1 | 2x1 | 4x1 | 6x1 |
|---|---|---|---|---|---|
| CPU | Average front-end Web server CPU | 44.1% | 43.7% | 46.5% | 46.5% |
| Reliability | Average page time | 0.024 | 0.025 | 0.027 | 0.033 |
| | Failure rate | 0% | 0% | 0% | 0% |

Maximum RPS of baseline solution new operation

| Scorecard dashboard | Scorecard metric | 1x1 | 2x1 | 4x1 | 6x1 |
|---|---|---|---|---|---|
| CPU | Average front-end Web server CPU | 93.7% | 91.1% | 77.5% | 54.0% |
| Reliability | Average page time | 0.048 | 0.050 | 0.052 | 0.056 |
| | Failure rate | 0% | 0% | 0% | 0% |

Green zone RPS of baseline solution save operation

| Scorecard dashboard | Scorecard metric | 1x1 | 2x1 | 4x1 | 6x1 |
|---|---|---|---|---|---|
| CPU | Average front-end Web server CPU | 40.8% | 41.3% | 37.3% | 24.2% |
| Reliability | Average page time | 0.059 | 0.074 | 0.099 | 0.10 |
| | Failure rate | 0% | 0.21% | 0.0014% | 0% |

Maximum RPS of baseline solution save operation

| Scorecard dashboard | Scorecard metric | 1x1 | 2x1 | 4x1 | 6x1 |
|---|---|---|---|---|---|
| CPU | Average front-end Web server CPU | 85.8% | 76.8% | - | - |
| Reliability | Average page time | 0.090 | 0.12 | - | - |
| | Failure rate | 0% | 0.18% | - | - |

Green zone RPS of baseline solution document library submit operation

| Scorecard dashboard | Scorecard metric | 1x1 | 2x1 | 4x1 | 6x1 |
|---|---|---|---|---|---|
| CPU | Average front-end Web server CPU | 40.6% | 44.9% | 35.9% | - |
| Reliability | Average page time | 0.061 | 0.079 | 0.11 | - |
| | Failure rate | 0% | 0% | 0% | - |

Maximum RPS of baseline solution document library submit operation

| Scorecard dashboard | Scorecard metric | 1x1 | 2x1 | 4x1 | 6x1 |
|---|---|---|---|---|---|
| CPU | Average front-end Web server CPU | 89.1% | 74.8% | - | - |
| Reliability | Average page time | 0.11 | 0.12 | - | - |
| | Failure rate | 0.0022% | 0% | - | - |

Green zone RPS of baseline solution with Web service submit operation

| Scorecard dashboard | Scorecard metric | 1x1 | 2x1 | 4x1 | 6x1 |
|---|---|---|---|---|---|
| CPU | Average front-end Web server CPU | 45.0% | 44.0% | 43.8% | 46.0% |
| Reliability | Average page time | 0.040 | 0.042 | 0.046 | 0.059 |
| | Failure rate | 0% | 0% | 0.00074% | 0% |

Maximum RPS of baseline solution with Web service submit operation

| Scorecard dashboard | Scorecard metric | 1x1 | 2x1 | 4x1 | 6x1 |
|---|---|---|---|---|---|
| CPU | Average front-end Web server CPU | 91.8% | 91.4% | 74.6% | 48.9% |
| Reliability | Average page time | 0.076 | 0.080 | 0.091 | 0.11 |
| | Failure rate | 0% | 0% | 0% | 0% |

Green zone RPS of baseline solution with five document library submit operations

| Scorecard dashboard | Scorecard metric | 1x1 | 2x1 | 4x1 | 6x1 |
|---|---|---|---|---|---|
| CPU | Average front-end Web server CPU | 38.4% | 39.8% | 40.8% | - |
| Reliability | Average page time | 0.070 | 0.077 | 0.10 | - |
| | Failure rate | 0% | 0% | 0% | - |

Maximum RPS of baseline solution with five document library submit operations

| Scorecard dashboard | Scorecard metric | 1x1 | 2x1 | 4x1 | 6x1 |
|---|---|---|---|---|---|
| CPU | Average front-end Web server CPU | 88.4% | 80.5% | 44.3% | 29.7% |
| Reliability | Average page time | 0.12 | 0.16 | 0.12 | 0.12 |
| | Failure rate | 0% | 0% | 0.000011% | 0% |

Green zone RPS of baseline solution open operation

| Scorecard dashboard | Scorecard metric | 1x1 | 2x1 | 4x1 | 6x1 |
|---|---|---|---|---|---|
| CPU | Average front-end Web server CPU | 39.2% | 45.8% | 45.5% | 46.2% |
| Reliability | Average page time | 0.036 | 0.038 | 0.041 | 0.049 |
| | Failure rate | 0% | 0% | 0% | 0% |

Maximum RPS of baseline solution open operation

| Scorecard dashboard | Scorecard metric | 1x1 | 2x1 | 4x1 | 6x1 |
|---|---|---|---|---|---|
| CPU | Average front-end Web server CPU | 90.6% | 90.6% | 82.1% | 60.0% |
| Reliability | Average page time | 0.063 | 0.067 | 0.069 | 0.084 |
| | Failure rate | 0% | 0% | 0% | 0% |

Green zone RPS of issue tracking display operation

| Scorecard dashboard | Scorecard metric | 1x1 | 2x1 | 4x1 | 6x1 |
|---|---|---|---|---|---|
| CPU | Average front-end Web server CPU | 44.8% | 45.4% | 44.6% | 46.4% |
| Reliability | Average page time | 0.061 | 0.067 | 0.073 | 0.072 |
| | Failure rate | 0% | 0% | 0% | 0% |


Green zone RPS of issue tracking edit operation

| Scorecard dashboard | Scorecard metric | 1x1 | 2x1 | 4x1 | 6x1 |
|---|---|---|---|---|---|
| CPU | Average front-end Web server CPU | 45.7% | 43.6% | 45.1% | 60.0% |
| Reliability | Average page time | 0.086 | 0.090 | 0.10 | 0.11 |
| | Failure rate | 0% | 0% | 0% | 0% |

Maximum RPS of issue tracking edit operation

| Scorecard dashboard | Scorecard metric | 1x1 | 2x1 | 4x1 | 6x1 |
|---|---|---|---|---|---|
| CPU | Average front-end Web server CPU | 89.8% | 87.2% | 82.9% | 79.3% |
| Reliability | Average page time | 0.12 | 0.13 | 0.13 | 0.14 |
| | Failure rate | 0% | 0% | 0.00092% | 0.012% |


Green zone RPS of issue tracking new operation

| Scorecard dashboard | Scorecard metric | 1x1 | 2x1 | 4x1 | 6x1 |
|---|---|---|---|---|---|
| CPU | Average front-end Web server CPU | 44.8% | 42.9% | 40.9% | 50.5% |
| Reliability | Average page time | 0.072 | 0.076 | 0.089 | 0.097 |
| | Failure rate | 0% | 0% | 0% | 0% |

Maximum RPS of issue tracking new operation

| Scorecard dashboard | Scorecard metric | 1x1 | 2x1 | 4x1 | 6x1 |
|---|---|---|---|---|---|
| CPU | Average front-end Web server CPU | 92.6% | 89.2% | 85.1% | 84.9% |
| Reliability | Average page time | 0.12 | 0.12 | 0.12 | 0.14 |
| | Failure rate | 0% | 0% | 0% | 0% |

Green zone RPS of baseline solution with controls

| Scorecard dashboard | Scorecard metric | 1x controls | 2x controls | 3x controls | 4x controls |
|---|---|---|---|---|---|
| CPU | Average front-end Web server CPU | - | 43.9% | 49.8% | - |
| Reliability | Average page time | - | 0.050 | 0.054 | - |
| | Failure rate | - | 0% | 0% | - |

Maximum RPS of baseline solution with controls

| Scorecard dashboard | Scorecard metric | 1x controls | 2x controls | 3x controls | 4x controls |
|---|---|---|---|---|---|
| CPU | Average front-end Web server CPU | - | 79.2% | 80.9% | 80.2% |
| Reliability | Average page time | - | 0.098 | 0.12 | 0.12 |
| | Failure rate | - | 0% | 0% | 0.00056% |

Green zone RPS of baseline solution validation operation

| Scorecard dashboard | Scorecard metric | Without first request optimization | 1x validation | 2x validation | 4x validation |
|---|---|---|---|---|---|
| CPU | Average front-end Web server CPU | 45.4% | 44.7% | 45.5% | 46.3% |
| Reliability | Average page time | 0.055 | 0.057 | 0.061 | 0.068 |
| | Failure rate | 0% | 0% | 0.19% | 0% |

Maximum RPS of baseline solution validation operation

| Scorecard dashboard | Scorecard metric | Without first request optimization | 1x validation | 2x validation | 4x validation |
|---|---|---|---|---|---|
| CPU | Average front-end Web server CPU | 80.4% | 82.4% | 86.8% | 85.2% |
| Reliability | Average page time | 0.10 | 0.11 | 0.13 | 0.11 |
| | Failure rate | 0.0015% | 0% | 0% | 0.00055% |

Recommendations

This section provides general performance and capacity recommendations. Use these recommendations to determine the capacity and performance characteristics of your starting topology and to decide whether you have to scale out or scale up that topology.

Hardware recommendations

For specific information about minimum and recommended system requirements, see Hardware and software requirements (SharePoint Server 2010).

Note

Memory requirements for Web servers and database servers depend on the size of the farm, the number of concurrent users, and the complexity of features and pages in the farm. You should carefully monitor memory use in order to determine whether more memory must be added.

Scaled-up and scaled-out topologies

To increase the capacity and performance of one of the starting-point topologies, you can either scale up by increasing the capacity of your existing server computers, or scale out by adding additional servers to the topology. This section describes the general performance characteristics of several scaled-out topologies. The sample topologies represent the following common ways to scale out a topology for an InfoPath Forms Services scenario:

  • To provide for more user load, add Web server computers.

  • To provide for more data load, add capacity to the database server role by increasing the capacity of a single (clustered or mirrored) server, by upgrading to a 64-bit server, or by adding clustered or mirrored servers.

  • Maintain a ratio of no more than eight Web server computers to one (clustered or mirrored) database server computer. Although testing in our lab yielded a specific optimum ratio of Web servers to database servers for each test scenario, deployment of more robust hardware, especially for the database server, may yield better results in your environment.

Estimating throughput targets

Many factors can affect throughput, and each can have a major effect on farm throughput. Consider each of these factors carefully when you plan your deployment. These factors include the following:

  • Number of users

  • The type, complexity, and frequency of user operations

  • The number of postbacks in an operation

  • The performance of data connections

SharePoint Server 2010 can be deployed and configured in a wide variety of ways. As a result, there is no simple way to estimate how many users can be supported by a given number of servers. Therefore, make sure that you conduct testing in your own environment before you deploy SharePoint Server 2010 in a production environment.
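Although no formula replaces testing in your own environment, a back-of-the-envelope throughput target can combine the factors above. The function and example numbers below are placeholder assumptions for illustration, not figures from this article's tests.

```python
def estimate_peak_rps(users, ops_per_user_per_hour, requests_per_op, peak_factor=2.0):
    """Rough peak requests-per-second target.

    requests_per_op should count postbacks as well (for example, opening a
    form plus one postback on submit is 2 requests). peak_factor inflates
    the hourly average to cover busy periods.
    """
    average_rps = users * ops_per_user_per_hour * requests_per_op / 3600
    return average_rps * peak_factor

# Placeholder assumptions: 5,000 users, 4 form operations per user per hour,
# 2 requests per operation.
print(round(estimate_peak_rps(5000, 4, 2), 1))  # → 22.2
```

A target like this only sets a starting farm size; validate it with load tests that reflect your actual operation mix.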

Optimizations

The following sections discuss methods for improving farm performance by optimizing form templates and the database server.

Form template design optimizations

  • Optimize the first request (the request to open the form) for form templates that do not have onLoad events or business logic. Optimize the first request by delaying the creation of the session-state entry in the database until a POST occurs. For such form templates, if the only POST is the one that closes the form after a submit operation, the SQL Server session state entry is never created. To apply this optimization, the form designer must set the Submit advanced setting to close the form after the submit operation. For more information about form template design optimizations, see the six-part blog series at Designing browser-enabled forms for performance in InfoPath Forms Services (https://go.microsoft.com/fwlink/p/?LinkId=129548).

  • If a scenario involves saving a form to a document library, it is better to submit the form to the library instead of saving it. A submit operation triggers only one POST request or round trip, whereas a save operation triggers two POST requests. The name of the form can be dynamically generated by using a rule or by using a control in the form.

  • Document library forms can achieve greater throughput than InfoPath list forms. If high throughput is needed for a solution, consider using a document library form instead of an InfoPath list form.

  • Form complexity, such as the number of controls and amount of form logic, affects throughput. As form complexity increases, the front-end Web server CPU cost also increases. Therefore, more complex forms need more front-end Web servers to achieve greater throughput.

  • To reduce user latency, we recommend that the form designer reduce the number of controls per view. For first-page view optimization, position controls that have a high resource cost, such as rich text fields, in subsequent views instead of in the default view.

Common bottlenecks and their causes

During performance testing, several different common bottlenecks were revealed. A bottleneck is a condition in which the capacity of a particular constituent of a farm is reached. This causes a plateau or decrease in farm throughput.

The following table lists some common bottlenecks and describes their causes and possible resolutions.

Troubleshooting performance and scalability

Bottleneck: Database contention (locks)

Cause: Database locks prevent multiple users from making conflicting modifications to a set of data. When a set of data is locked by a user or process, no other user or process can modify that same set of data until the first user or process finishes modifying the data and relinquishes the lock.

Resolution: To help reduce the incidence of database locks, you can:

  • Distribute submitted forms to more document libraries.

  • Scale up the database server.

  • Tune the database server hard disk for read/write.

Methods exist to circumvent the database locking system in Microsoft SQL Server 2005, such as the NOLOCK parameter. However, we do not recommend or support use of this method due to the possibility of data corruption.

Bottleneck: Database server disk I/O

Cause: When the number of I/O requests to a hard disk exceeds the disk's I/O capacity, the requests will be queued. As a result, the time to complete each request increases.

Resolution: Distributing data files across multiple physical drives allows for parallel I/O. The blog SharePoint Disk Allocation and Disk I/O (https://go.microsoft.com/fwlink/p/?LinkId=129557) contains useful information about resolving disk I/O issues.

Bottleneck: Web server CPU use

Cause: When a Web server is overloaded with user requests, average CPU use will approach 100 percent. This prevents the Web server from responding to requests quickly and can cause timeouts and error messages on client computers.

Resolution: This issue can be resolved in one of two ways. You can add more Web servers to the farm to distribute user load, or you can scale up the Web server or servers by adding higher-speed processors.

Performance monitoring

To help you determine when you have to scale up or scale out a system, use performance counters to monitor the health of the system. Use the information in the following tables to determine which performance counters to monitor, and to which process the performance counters should be applied.

Web servers

The following table shows performance counters and processes to monitor for Web servers in your farm.

| Performance counter | Apply to object | Notes |
|---|---|---|
| Processor time | Total | Shows the percentage of elapsed time that this thread used the processor to execute instructions. |
| Memory use | Application pool | Shows the average use of system memory for the application pool. You must identify the correct application pool to monitor. The basic guideline is to identify peak memory use for a given Web application, and assign that number plus 10 to the associated application pool. |

Database servers

The following table shows performance counters and processes to monitor for database servers in your farm.

| Performance counter | Apply to object | Notes |
|---|---|---|
| Average disk queue length | Hard disk that contains SharedServices.mdf | Average values greater than 1.5 per spindle indicate that the write times for that hard disk are insufficient. |
| Processor time | SQL Server process | Average values greater than 80 percent indicate that processor capacity on the database server is insufficient. |
| Processor time | Total | Shows the percentage of elapsed time that this thread used the processor to execute instructions. |
| Memory use | Total | Shows the average use of system memory. |

See Also

Other Resources

InfoPath Forms Services 2010 Web Testing Toolkit