Real-Time Operational Analytics: Compression Delay Option for Nonclustered Columnstore Index (NCCI)

The previous blog https://blogs.msdn.microsoft.com/sqlserverstorageengine/2016/03/06/real-time-operational-analytics-filtered-nonclustered-columnstore-index-ncci/ showed how a filtered nonclustered columnstore index (NCCI) can be used to minimize the impact on a transactional workload. Many workloads have no natural column to identify ‘hot’ rows and for that reason cannot benefit from a filtered NCCI. Yes, in some cases you can add a new column to leverage a filtered NCCI, but that requires application changes, which is not an option for most customers. However, in many transactional workloads a set of rows remains active (i.e. goes through multiple changes) for a short duration, perhaps minutes or hours, and then becomes dormant (i.e. cold). The idea behind compression delay is to keep the ‘active’ rows in the delta rowgroup and only transition them to a compressed rowgroup after a specified delay. By keeping rows in delta rowgroups longer, you can reduce the overall maintenance overhead of the NCCI, as described below:

  • When a compressed row is deleted, it is not actually removed but only marked as deleted. It continues to consume disk storage, and also memory whenever one or more columns of the compressed rowgroup are read into memory. It also degrades query performance because these rows need to be filtered out before returning the query results. To take an extreme example, if the total size of the compressed rowgroups is 50 GB and 50% of the rows are marked deleted, we are wasting 25 GB of storage and a similar amount of memory. SQL Server 2016 provides the REORGANIZE command, which can be used to remove the deleted rows based on a policy, but removing deleted rows still involves reading the compressed rowgroup(s), removing the deleted rows, and then re-compressing them. If the rows to be deleted were still in a delta rowgroup, they would be removed directly without causing any fragmentation.
  • When a compressed row is updated, it is deleted and then re-inserted into a delta rowgroup. Consider a transactional workload where a row is updated 10 times, once every 15 minutes, and then becomes dormant. Also assume that the row gets compressed every 10 minutes (i.e. an additional 1 million rows are inserted after the row is updated). For this workload pattern, we will end up compressing the same row 10 times, along with the overhead of deleted rows as described in the previous point. Now, imagine if you could instruct the NCCI to keep the row(s) in the delta rowgroup for at least 150 minutes: you would be able to update the row in the delta rowgroup, which is much cheaper than updating it in a compressed rowgroup.
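To illustrate the first point, here is a sketch of removing already-deleted rows from compressed rowgroups with REORGANIZE. The table and index names (dbo.Sales, NCCI_Sales) are hypothetical, used only for illustration:

```sql
-- Hypothetical names (dbo.Sales, NCCI_Sales) for illustration only.
-- REORGANIZE re-compresses qualifying rowgroups, physically removing
-- rows that were previously only marked as deleted:
ALTER INDEX NCCI_Sales ON dbo.Sales REORGANIZE;
```

Note that this still pays the cost of reading and re-compressing the affected rowgroups, which is exactly the overhead that compression delay helps you avoid in the first place.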

The compression delay option allows you to control how long a set of rows stays in the delta rowgroup. Here is the TSQL syntax to specify compression delay; the default value of compression_delay is 0 minutes.

create nonclustered columnstore index <index-name> on <disk-based table> (<columnlist>) with (compression_delay = 150)
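As a concrete sketch of the syntax above, the following creates an NCCI with a 150-minute compression delay and then changes the delay afterwards. The table and index names (dbo.Orders, NCCI_Orders) and column list are hypothetical:

```sql
-- Hypothetical order table used only for illustration
CREATE TABLE dbo.Orders (
    order_id     INT NOT NULL,
    customer_id  INT NOT NULL,
    order_status TINYINT NOT NULL,
    order_date   DATETIME NOT NULL
);

-- Keep rows in delta rowgroups for at least 150 minutes
-- after the delta rowgroup closes
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_Orders
ON dbo.Orders (order_id, customer_id, order_status, order_date)
WITH (COMPRESSION_DELAY = 150);

-- The delay can be changed later without recreating the index
ALTER INDEX NCCI_Orders ON dbo.Orders
SET (COMPRESSION_DELAY = 60 MINUTES);
```

Because compression_delay is just an index option, tuning it is a metadata-only change and does not require rebuilding the index or touching the application.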

Link to the detailed documentation: https://msdn.microsoft.com/en-us/library/gg492153.aspx

Adding the compression delay option will lead to a larger number of delta rowgroups. For example, if the transactional workload is inserting 1 million rows every 10 minutes and the compression delay is 150 minutes, there will be four delta rowgroups of sizes 1 million, 2 million, 4 million and 8 million rows. Recall the discussion of delta rowgroup sizes in https://blogs.msdn.microsoft.com/sqlserverstorageengine/2016/03/04/real-time-operational-analytics-dml-operations-and-nonclustered-columnstore-index-ncci-in-sql-server-2016/. There can be more delta rowgroups if the thread that compresses them falls behind. One important point to note is that the compression_delay time is computed from when a delta rowgroup is closed. So, for example, if a delta rowgroup was closed at 8 AM and the compression delay was set to 150 minutes, it becomes eligible for compression at 10:30 AM.
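You can observe this behavior with the rowgroup physical stats DMV, which reports the state of each rowgroup along with when a delta rowgroup was closed. The table name (dbo.Orders) is hypothetical:

```sql
-- A CLOSED delta rowgroup becomes eligible for compression
-- COMPRESSION_DELAY minutes after its closed_time.
SELECT row_group_id, state_desc, total_rows, created_time, closed_time
FROM sys.dm_db_column_store_row_group_physical_stats
WHERE object_id = OBJECT_ID('dbo.Orders')   -- hypothetical table name
  AND state_desc IN ('OPEN', 'CLOSED')
ORDER BY row_group_id;
```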

Here are the recommended best practices:

  • Insert/Query workload: If your workload primarily inserts data and queries it, the default COMPRESSION_DELAY of 0 is the recommended option. The newly inserted rows will get compressed once 1 million rows have been inserted into a single delta rowgroup. Some examples of such workloads are (a) a traditional DW workload and (b) click-stream analysis, where you need to analyze the click pattern in a web application.
  • OLTP workload: If the workload is DML heavy (i.e. a heavy mix of updates, deletes and inserts), you may see columnstore index fragmentation by examining the DMV sys.dm_db_column_store_row_group_physical_stats. If you see that > 10% of rows are marked deleted in recently compressed rowgroups, you can use the COMPRESSION_DELAY option to add a time delay before rows become eligible for compression. For example, if for your workload the newly inserted rows stay ‘hot’ (i.e. get updated multiple times) for, say, 60 minutes, you should choose a COMPRESSION_DELAY of 60 minutes.
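The fragmentation check described for the OLTP case can be sketched as the following DMV query, which flags compressed rowgroups where more than 10% of rows are marked deleted:

```sql
-- Find compressed rowgroups with > 10% deleted rows; a high percentage
-- in recently compressed rowgroups suggests COMPRESSION_DELAY may help.
SELECT OBJECT_NAME(object_id) AS table_name,
       row_group_id,
       total_rows,
       deleted_rows,
       100.0 * deleted_rows / total_rows AS percent_deleted
FROM sys.dm_db_column_store_row_group_physical_stats
WHERE state_desc = 'COMPRESSED'
  AND 100.0 * deleted_rows / total_rows > 10.0
ORDER BY percent_deleted DESC;
```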

In summary, compression_delay is one more option in your toolkit to minimize the maintenance overhead of an NCCI. The best part is that it is just an index option and does not require any changes to your application.

In the next blog, I will share some performance numbers that we have seen in our internal testing with compression delay.

Thanks

Sunil