Performance Tips for Azure DocumentDB – Part 2

Azure DocumentDB allows you to tune the performance of your database to best meet the needs of your application. In part 1 of this series, we looked at the networking and SDK configuration options available in DocumentDB and their impact on performance. This post continues the discussion and covers the performance impact of indexing policies, throughput optimization, and consistency levels. As with any performance tuning recommendation, not every tip will apply to your use case, but you can use this information as a guide to making the right design choices for your applications.

INDEXING POLICY

Indexing Policy Tip #1: Use lazy indexing for faster peak time ingestion rates

DocumentDB allows you to specify an indexing policy at the collection level, which lets you choose whether the documents in a collection are automatically indexed. In addition, you may choose between synchronous (Consistent) and asynchronous (Lazy) index updates. By default, the index is updated synchronously on each insert, replace, or delete of a document in the collection. This enables queries to honor the same consistency level as document reads without any delay for the index to "catch up".

Lazy indexing may be considered for scenarios in which data is written in bursts, and you want to amortize the work required to index content over a longer period of time. This allows you to use your provisioned throughput effectively and serve write requests at peak times with minimal latency. It is important to note, however, that when lazy indexing is enabled, query results will be eventually consistent regardless of the consistency level configured for the DocumentDB account.

Hence, Consistent indexing mode (IndexingPolicy.IndexingMode is set to Consistent) incurs the highest request unit charge per write, while Lazy indexing mode (IndexingPolicy.IndexingMode is set to Lazy) and no indexing (IndexingPolicy.Automatic is set to False) have zero indexing cost at the time of write.
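For example, a collection can be configured for lazy indexing, or can opt out of automatic indexing entirely, at creation time. The following is a minimal sketch using the same .NET SDK types as the snippets below; the collection ids are placeholders:

//create a collection with lazy index updates ("lazyCollection" is a placeholder id)
var lazyCollection = new DocumentCollection { Id = "lazyCollection" };
lazyCollection.IndexingPolicy.IndexingMode = IndexingMode.Lazy;
lazyCollection = await client.CreateDocumentCollectionAsync(databaseLink, lazyCollection);

//or disable automatic indexing for zero indexing cost at write time
var unindexedCollection = new DocumentCollection { Id = "unindexedCollection" };
unindexedCollection.IndexingPolicy.Automatic = false;
unindexedCollection = await client.CreateDocumentCollectionAsync(databaseLink, unindexedCollection);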

Indexing Policy Tip #2: Exclude unused paths from indexing for faster writes

DocumentDB’s indexing policy also allows you to specify which document paths to include or exclude from indexing by leveraging Indexing Paths (IndexingPolicy.IncludedPaths and IndexingPolicy.ExcludedPaths). The use of indexing paths can offer improved write performance and lower index storage for scenarios in which the query patterns are known beforehand, as indexing costs are directly correlated to the number of unique paths indexed.  For example:

//exclude index paths
collection.IndexingPolicy.ExcludedPaths.Add("/\"metaData\"/*");
collection.IndexingPolicy.ExcludedPaths.Add("/\"subDoc\"/\"subSubDoc\"/\"someProperty\"/*");
collection = await client.CreateDocumentCollectionAsync(databaseLink, collection);

Indexing Policy Tip #3: Specify range index path type for all paths used in range queries

DocumentDB currently supports two index path types: Hash and Range. Choosing an index path type of Hash enables efficient equality queries, while choosing an index path type of Range enables range queries (using >, <, >=, <=). For example:

var collection = new DocumentCollection
{
    Id = ConfigurationManager.AppSettings["CollectionId"]
};

// Hash index over the root path for efficient equality queries
collection.IndexingPolicy.IncludedPaths.Add(new IndexingPath
{
    IndexType = IndexType.Hash,
    Path = "/"
});

// Range index over shippedTimestamp for range queries (>, <, >=, <=)
collection.IndexingPolicy.IncludedPaths.Add(new IndexingPath
{
    IndexType = IndexType.Range,
    Path = @"/""shippedTimestamp""/?",
    NumericPrecision = 7
});

collection = await client.CreateDocumentCollectionAsync(databaseLink, collection);

Indexing Policy Tip #4: Vary index precision for write vs query performance vs storage tradeoffs

Finally, indexing policies allow you to change the index path precision in bytes to improve query performance. Queries against a path indexed with a higher precision are typically faster, but incur a correspondingly higher storage overhead for the index. Conversely, choosing a lower precision means that more documents might have to be processed during query execution, but the storage overhead will be lower.
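For example, the following sketch (the paths and precision values are illustrative only) trades index storage for query speed on a per-path basis:

// index a frequently-queried numeric path at maximum precision for faster range queries
collection.IndexingPolicy.IncludedPaths.Add(new IndexingPath
{
    IndexType = IndexType.Range,
    Path = @"/""orderTotal""/?",
    NumericPrecision = 7 // larger index, faster queries
});

// index a less critical path at lower precision to save index storage
collection.IndexingPolicy.IncludedPaths.Add(new IndexingPath
{
    IndexType = IndexType.Range,
    Path = @"/""quantity""/?",
    NumericPrecision = 3 // smaller index, more documents processed per query
});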

For more details, refer to the DocumentDB indexing policies documentation.

Note: Currently, in the Preview release, the indexing policy for a collection can only be specified when the collection is created.

THROUGHPUT

Throughput Tip #1: Measure and tune for lower request units/second usage

DocumentDB offers a rich set of database operations including relational and hierarchical queries with UDFs, stored procedures and triggers – all operating on the documents within a database collection. The cost associated with each of these operations will vary based on the CPU, IO and memory required to complete the operation. Instead of thinking about and managing hardware resources, you can think of a request unit (RU) as a single measure for the resources required to perform various database operations and service an application request.

Request units are provisioned for each Database Account based on the number of capacity units that you purchase. Request unit consumption is evaluated as a rate per second. Applications that exceed the provisioned request unit rate for their account will be throttled until the rate drops below the reserved level for the Account. If your application requires a higher level of throughput, you can purchase additional capacity units.

The complexity of a query impacts how many Request Units are consumed for an operation. The number of predicates, nature of the predicates, number of UDFs, and the size of the source data set all influence the cost of query operations.

To measure the overhead of any operation (create, read, update, delete, or query), inspect the x-ms-request-charge header (or the equivalent RequestCharge property in ResourceResponse<T> or FeedResponse<T> in the .NET SDK) to determine the number of request units consumed:

// Measure the performance (request units) of writes
ResourceResponse<Document> response = await client.CreateDocumentAsync(collectionSelfLink, myDocument);
Console.WriteLine("Insert of document consumed {0} request units", response.RequestCharge);

// Measure the performance (request units) of queries
IDocumentQuery<dynamic> queryable = client.CreateDocumentQuery(collectionSelfLink, queryString).AsDocumentQuery();
while (queryable.HasMoreResults)
{
    FeedResponse<dynamic> queryResponse = await queryable.ExecuteNextAsync<dynamic>();
    Console.WriteLine("Query batch consumed {0} request units", queryResponse.RequestCharge);
}

The request charge returned in this header is a fraction of your provisioned throughput (e.g. 2,000 RUs/second). For example, if the query above returns 1,000 1KB documents, the cost of the operation will be 1,000 request units. As such, within one second, the server will honor only two such requests before throttling subsequent requests.

Throughput Tip #2: Handle server throttling (request rate too large)

When a client attempts to exceed the reserved throughput for an account, there will be no performance degradation at the server and no use of throughput capacity beyond the reserved level. The server will preemptively end the request with RequestRateTooLarge (HTTP status code 429) and return the x-ms-retry-after-ms header indicating the amount of time, in milliseconds, that the user must wait before reattempting the request.

HTTP Status 429
Status Line: RequestRateTooLarge
x-ms-retry-after-ms: 100

If you are using the .NET Client SDK and LINQ queries, you rarely have to deal with this exception, as the current version of the .NET Client SDK implicitly catches this response, respects the server-specified retry-after header, and retries the request. Unless your account is being accessed concurrently by multiple clients, the next retry will succeed.

If you have more than one client cumulatively operating consistently above the request rate, the default retry count (currently set to 3 in the .NET client) may not suffice; in this case, the client will throw a DocumentClientException with status code 429 to the application. Note that with the current release of the .NET SDK, there is no way to change the default retry count.
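If you do need to handle the throttle yourself, a simple back-off loop along the following lines can work. This is a sketch, not the SDK's built-in retry logic; it assumes DocumentClientException surfaces the HTTP status code and the server-specified wait interval via its StatusCode and RetryAfter properties:

// retry a write until it succeeds, honoring the server-specified back-off
// (a sketch; consider capping the number of attempts in production code)
private static async Task<ResourceResponse<Document>> CreateDocumentWithRetriesAsync(
    DocumentClient client, string collectionSelfLink, object document)
{
    while (true)
    {
        try
        {
            return await client.CreateDocumentAsync(collectionSelfLink, document);
        }
        catch (DocumentClientException e)
        {
            // rethrow anything other than RequestRateTooLarge (429)
            if ((int?)e.StatusCode != 429) throw;

            // wait for the interval from the x-ms-retry-after-ms header, then retry
            await Task.Delay(e.RetryAfter);
        }
    }
}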

While the automated retry behavior helps to improve resiliency and usability for most applications, it can be at odds with performance benchmarks, especially when measuring latency. The client-observed latency will spike if the experiment hits the server throttle and causes the client SDK to silently retry. To avoid latency spikes during performance experiments, measure the charge returned by each operation and ensure that requests are operating below the reserved request rate.

Throughput Tip #3: Delete empty collections to utilize all provisioned throughput

Every document collection created in a DocumentDB account is allocated reserved throughput capacity based on the number of Capacity Units (CUs) provisioned and the number of collections created. A single CU makes available 2,000 request units (RUs) and supports up to 3 collections. If only one collection is created for the CU, the entire CU throughput will be available to that collection. Once a second collection is created, the throughput of the first collection will be halved and given to the second collection, and so on. By provisioning additional CUs, the throughput for an existing collection can be increased. During the DocumentDB Preview, a single collection can scale up to 10 GB and can be allocated up to the maximum throughput of a single CU, which is 2,000 request units/second. To maximize the throughput available per collection, keep the ratio of capacity units to collections at 1:1.

Throughput Tip #4: Design for smaller documents for higher throughput

The Request Charge (i.e. the request processing cost) of a given operation is directly correlated to the size of the document. Operations on large documents cost more than operations on small documents.

CONSISTENCY LEVELS

Consistency Levels Tip #1: Use weaker consistency levels for better read latencies

Another important factor to take into account while tuning the performance of a DocumentDB application is the consistency level. The choice of consistency level has performance implications for both reads and writes. You can configure the default consistency level on the database account, and the chosen level then applies to all collections (across all of the databases) within the DocumentDB account. For write operations, the impact of the consistency level is observed as request latency: as stronger consistency levels are used, write latencies increase. For read operations, the impact is observed in throughput: weaker consistency levels allow higher read throughput to be realized by the client.

By default, all reads and queries issued against user-defined resources use the default consistency level specified on the database account. You can, however, lower the consistency level of a specific read or query request by specifying the x-ms-consistency-level request header.
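In the .NET SDK, this corresponds to passing a RequestOptions object with the desired consistency level on the read, as in the following sketch (documentSelfLink is a placeholder):

// read a single document at a weaker consistency level than the account default
Document document = await client.ReadDocumentAsync(
    documentSelfLink,
    new RequestOptions { ConsistencyLevel = ConsistencyLevel.Eventual });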

WRAPPING UP

We hope that between this post and part 1 you've found performance tips that are both useful and applicable to your use of DocumentDB.

As always, we’d love to hear from you about the DocumentDB features and experiences you would find most valuable. 

Please submit your suggestions on the Microsoft Azure DocumentDB feedback forum.

If you haven’t tried DocumentDB yet, then get started here.