Example Disk Subsystem Throughput Test

 

Topic Last Modified: 2011-02-10

This topic describes a sample configuration of a disk subsystem throughput test using JetstressWin.exe. The goal of this test is to identify the peak working input/output operations per second (IOPS) value that the storage subsystem can sustain while remaining within the disk latency targets established by Microsoft Exchange.

Configure Disk Subsystem Throughput Test

  1. Click Start and navigate to Exchange Jetstress 2010.

  2. On the Welcome to Microsoft Jetstress page, click Start new test to begin the test.

    Note

    Check that the status text does not ask for a restart and that the last two lines state that the ESE engine and performance libraries were detected.

  3. On the Open Configuration page, select Create a new test configuration.

  4. On the Define Test Scenario page, select Test a disk subsystem throughput and click Next.

  5. On the Select Capacity and Throughput page, select Suppress tuning and use thread count. For more information about configuring thread count, see Configuring Thread Count. Click Next.

  6. On the Select Test Type page, select Performance. If the Exchange database design will have background database maintenance (BDM) disabled (it is enabled by default), clear the Run Background Database Maintenance check box. For DAS (Direct Attached Storage) deployments, accept the defaults. Click Next.

    Note

    Don't select Multi-host test unless the test is running against a shared storage solution.

  7. On the Define Test Run page, browse to select the folder for storing the test results, and set the correct duration for the Jetstress test. Performance tests should run for a minimum of 2 hours. You can set a test shorter than 2 hours by typing a fractional number of hours directly into the duration field. For example (a short conversion sketch follows this list):

    • 0.75 = 45 minutes

    • 0.50 = 30 minutes

    • 0.25 = 15 minutes
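
    The duration value is entered as a fraction of an hour. As a quick check of the values above, this Python sketch converts fractional hours to minutes (the values used are the examples from this list):

      # Convert a Jetstress test duration entered as a fraction of an hour
      # into minutes, using the example values from the list above.
      def hours_to_minutes(hours):
          return hours * 60

      for hours in (2.0, 0.75, 0.50, 0.25):
          print(f"{hours} hours = {hours_to_minutes(hours):.0f} minutes")
      # 2.0 hours = 120 minutes
      # 0.75 hours = 45 minutes
      # 0.5 hours = 30 minutes
      # 0.25 hours = 15 minutes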

  8. On the Define Database Configuration page, configure the following information to represent your production environment:

    • Number of databases is the total number of databases on this server, including all database copies.

    • Number of copies per database represents the total number of copies that will exist for each unique database. This value simulates some LOG I/O reads to account for the log shipping between active and passive databases; it doesn't copy logs between servers. For example, if your six-server database availability group (DAG) contained 30 databases, with one active copy, two passive high availability copies, and one lagged copy per database (or 120 database copies spread across six servers, with each server hosting 20 copies), you would set the Number of Databases to 20 and the Number of copies per database to 4, as worked through in the sketch at the end of this step.

      Configure the database and log file paths appropriately. Click Next.
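
      The following Python sketch walks through the arithmetic from the DAG example above; the six-server, 30-database layout is the example from this step, not a recommendation:

        # Worked example from the step above: a 6-server DAG hosting 30 unique
        # databases with 4 copies of each (1 active + 2 passive + 1 lagged).
        servers = 6
        unique_databases = 30
        copies_per_database = 4        # value for "Number of copies per database"

        total_copies = unique_databases * copies_per_database   # 120 copies in the DAG
        databases_per_server = total_copies // servers          # 20 copies on each server

        print(f"Number of databases (per server): {databases_per_server}")  # 20
        print(f"Number of copies per database: {copies_per_database}")      # 4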

  9. On the Select Database Source page, if this is the first time the test has been run, select Create new databases; otherwise, select Attach existing databases. Click Next.

  10. On the Review & Execute Test page, verify that your selected paths are correct, and click Prepare Test.

  11. The Test in Progress page will begin database initialization. The time this takes will vary depending on your configuration, but plan on 24 hours for every 100 TB of data to be initialized. The amount of data initialized should equate to 80 percent of the available storage (a rough estimating sketch follows this step). After initialization has completed, click Execute Test to begin the Jetstress test.
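
    As a rough planning aid, the following Python sketch applies the guidance above (about 24 hours per 100 TB initialized, with the data set sized at 80 percent of available storage). The 50 TB capacity used here is an assumed example only:

      # Rough estimate of Jetstress database initialization time, based on the
      # guidance above: ~24 hours per 100 TB, with the data set sized at 80%
      # of the available storage. The 50 TB capacity below is an assumption.
      HOURS_PER_TB = 24 / 100          # ~0.24 hours per TB initialized

      available_storage_tb = 50        # assumed capacity, for illustration only
      data_to_initialize_tb = available_storage_tb * 0.80

      estimated_hours = data_to_initialize_tb * HOURS_PER_TB
      print(f"Data to initialize: {data_to_initialize_tb:.0f} TB")            # 40 TB
      print(f"Estimated initialization time: {estimated_hours:.1f} hours")    # 9.6 hours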

  12. After the test has completed, close Jetstress and copy the Jetstress report and performance data to a file location for analysis.

Jetstress Output Files

Each performance test generates the output files listed below. Ensure that you make a copy of all of these files. In addition, you may also want to make a copy of the *.evt files, which contain event log data captured during the test. The following entries describe the content and purpose of each output file.

Jetstress output files

  File: Performance_<date>.blg
  Purpose: Provides detailed binary performance data captured during the performance test for analysis. Open this file in Performance Monitor (Perfmon.exe) and examine the counters to understand reasons for failure.

  File: Performance_<date>.html
  Purpose: Provides an easy-to-read HTML status report for the test.

  File: Performance_<date>.xml
  Purpose: Provides the status report data in XML format.

  File: DBChecksum_<date>.blg
  Purpose: Provides binary performance data gathered during the checksum of the database. Useful if the checksum fails or takes a long time to complete.

  File: DBChecksum_<date>.html
  Purpose: Provides an easy-to-read HTML status report for the checksum test.

  File: DBChecksum_<date>.xml
  Purpose: Provides status report data in XML format.

  File: XMLConfig_<date>.xml
  Purpose: Provides a backup of the Jetstress XML configuration file used for the test.
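
If you want to script step 12 (copying the report and performance data off the test server), the following Python sketch gathers the output files listed above, plus the *.evt event log captures, into a single analysis folder. The results and archive paths are assumptions for illustration only.

  # Minimal sketch: copy the Jetstress output files described above (plus the
  # *.evt event log captures) to an analysis location. Both paths below are
  # assumptions; substitute your own results folder and destination share.
  import glob
  import os
  import shutil

  results_dir = r"C:\JetstressResults"          # folder chosen on the Define Test Run page
  archive_dir = r"\\fileserver\jetstress\run1"  # hypothetical analysis location

  patterns = ["Performance_*.blg", "Performance_*.html", "Performance_*.xml",
              "DBChecksum_*.blg", "DBChecksum_*.html", "DBChecksum_*.xml",
              "XMLConfig_*.xml", "*.evt"]

  os.makedirs(archive_dir, exist_ok=True)
  for pattern in patterns:
      for path in glob.glob(os.path.join(results_dir, pattern)):
          shutil.copy2(path, archive_dir)
          print(f"Copied {os.path.basename(path)}")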