

SQL Updates Newsletter – June 2017

Recent Releases and Announcements

 

 

Issue Alert

  • Critical: Do NOT delete files from the Windows Installer folder. C:\windows\Installer is not a temporary folder, and files in it should not be deleted. If you delete these files on a machine where SQL Server is installed, you may have to rebuild the operating system and reinstall SQL Server.
  • Critical: Please be aware of a critical Microsoft Visual C++ 2013 runtime pre-requisite update that may be required on machines where SQL Server 2016 will be, or has been, installed.
    • https://blogs.msdn.microsoft.com/sqlcat/2016/07/28/installing-sql-server-2016-rtm-you-must-do-this/
    • If either KB3164398 or KB3138367 is installed, no further action is necessary. To check, run the following from a command prompt:
      • powershell get-hotfix KB3164398
      • powershell get-hotfix KB3138367
    • If the version of %SystemRoot%\system32\msvcr120.dll is 12.0.40649.5 or later, then no further action is necessary. To check, run the following from a command prompt:
      • powershell "get-item %systemroot%\system32\msvcr120.dll | select versioninfo | fl"
  • Important: If the Update Cache folder or some patches are removed from this folder, you can no longer uninstall an update to your SQL Server instance and then revert to an earlier update build.
    • In that situation, Add/Remove Programs entries point to non-existent binaries, so the uninstall process does not work. Microsoft therefore strongly encourages you to keep the folder and its contents intact.
    • https://support.microsoft.com/en-us/kb/3196535
  • Important: You must prefix Unicode string constants with N when you work with them in SQL Server.
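    • A minimal illustration (the literal values below are arbitrary examples):
      -- Without the N prefix, the literal is converted to the code page of the database
      -- collation, and characters it cannot represent are replaced with '?'.
      SELECT 'こんにちは' AS WithoutPrefix, N'こんにちは' AS WithPrefix;
      DECLARE @Greeting nvarchar(50) = N'こんにちは';  -- always use N'...' for nvarchar constants
      SELECT @Greeting;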
  • Important: Default auto statistics update threshold change for SQL Server 2016
  • Performance impact of memory grants on data loads into Columnstore tables
    • Problem: We found contention on memory grants (RESOURCE_SEMAPHORE waits) for a short period at the beginning of the run. Later in the process, we also saw some latch contention on regular data pages, which we did not expect, since each thread was supposed to insert into its own row group.
    • Cause: For every bulk insert, we first determine whether it can go directly into a compressed row group based on batch size. If it can, we request a memory grant with a timeout of 25 seconds. If we cannot acquire the memory grant within 25 seconds, that bulk insert reverts to the delta store instead of a compressed row group.
    • Solution: We created and used a resource governor workload group that reduced the grant percent parameter to allow greater concurrency during the data load (a sketch of this approach appears after the link below).
    • https://blogs.msdn.microsoft.com/sqlcat/2017/06/02/performance-impact-of-memory-grants-on-data-loads-into-columnstore-tables/
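    • A minimal sketch of this approach (the pool, group, and login names are hypothetical; the post describes the technique, not this exact script):
      -- Run in master. Hypothetical workload group that lowers the per-query memory grant
      -- cap so more concurrent bulk inserts can obtain grants during the columnstore load.
      CREATE WORKLOAD GROUP ColumnstoreLoadGroup
          WITH (REQUEST_MAX_MEMORY_GRANT_PERCENT = 10)  -- default is 25
          USING [default];
      GO
      -- Hypothetical classifier that routes the load login into the new group.
      CREATE FUNCTION dbo.rg_classifier() RETURNS sysname WITH SCHEMABINDING
      AS
      BEGIN
          RETURN (CASE WHEN SUSER_SNAME() = N'bulk_load_login'
                       THEN N'ColumnstoreLoadGroup' ELSE N'default' END);
      END;
      GO
      ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.rg_classifier);
      ALTER RESOURCE GOVERNOR RECONFIGURE;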
  • You may see “out of user memory quota” message in errorlog when you use In-Memory OLTP feature

 

Recent Blog Posts and Articles

  • Azure Marketplace Test Drive
    • One feature in Azure Marketplace that is especially useful for learning about products is “Test Drive.”
    • Test Drives are ready-to-go environments that allow you to experience a product for free without needing an Azure subscription. An additional benefit of a Test Drive is that it is pre-provisioned: you don't have to download, set up, or configure the product, and can instead spend your time evaluating the user experience, key features, and benefits of the product.
    • https://azure.microsoft.com/en-us/blog/azure-marketplace-test-drive/
  • Building an Azure Analysis Services Model on Top of Azure Blob Storage—Part 2 + 3
  • Use WITH clause in OPENJSON to improve parsing performance
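    • A minimal sketch of the pattern (the JSON shape and column names are made up for illustration):
      DECLARE @json nvarchar(max) = N'[{"id":1,"name":"Widget","price":9.99}]';
      -- Without WITH, OPENJSON returns generic key/value/type columns that still need conversion.
      SELECT * FROM OPENJSON(@json);
      -- With an explicit WITH schema, values are extracted and typed in a single pass.
      SELECT id, name, price
      FROM OPENJSON(@json)
           WITH (id int '$.id', name nvarchar(50) '$.name', price decimal(10,2) '$.price');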
  • Smart Transaction log backup, monitoring and diagnostics with SQL Server 2017
    • The new DMF sys.dm_db_log_stats, released in SQL Server 2017 CTP 2.1, will enable DBAs and the SQL Server community to build scripts and solutions that perform smart backups, monitoring, and diagnostics of the transaction log.
    • The log_since_last_log_backup_mb column can be used in your backup script to trigger a transaction log backup when the log generated since the last backup exceeds a threshold value (a sketch of such a check appears after the link below).
    • We have exposed the VLF monitoring columns total_vlf_count and active_vlfs, allowing you to monitor and alert if the total number of VLFs in the transaction log file exceeds a threshold value.
    • New log_truncation_holdup_reason column to understand the cause of the log truncation holdup
    • If log truncation does not happen and active_vlfs approaches total_vlf_count, the log will autogrow, causing total_vlf_count to increase.
    • The log_backup_time column in sys.dm_db_log_stats can be used to determine the time of the last transaction log backup, to alert a DBA, and to trigger a backup in response to the alert. The last log backup time can also be derived from the msdb database, but one advantage of the log_backup_time column in sys.dm_db_log_stats is that it also accounts for transaction log backups completed on a secondary replica if the database is configured in an Availability Group.
    • For a long-running transaction in the killed\rollback state, a DBA can look at recovery_vlf_count and log_recovery_size_mb to understand the number of VLFs and the amount of log that would have to be recovered if the database is restarted.
    • https://blogs.msdn.microsoft.com/sql_server_team/smart-transaction-log-backup-monitoring-and-diagnostics-with-sql-server-2017/
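    • A minimal sketch of a threshold-based backup check using this DMF (the threshold, database name, and backup path are hypothetical):
      DECLARE @threshold_mb float = 100;  -- hypothetical activity threshold
      IF EXISTS (SELECT 1
                 FROM sys.dm_db_log_stats(DB_ID(N'MyDatabase'))
                 WHERE log_since_last_log_backup_mb > @threshold_mb)
      BEGIN
          BACKUP LOG [MyDatabase] TO DISK = N'X:\Backups\MyDatabase_log.trn';  -- hypothetical path
      END;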
  • What is plan regression in SQL Server?
    • Plan regression happens when SQL Server starts using a sub-optimal plan, which increases CPU time and duration.
    • One way to mitigate this is to recompile the query with OPTION(RECOMPILE) if you find this problem. Do not clear the procedure cache on a production system, because it will affect all queries!
    • Another option is automatic plan choice correction in SQL Server 2017, which looks at the history of plans and forces SQL Server to use the last known good plan if plan regression is detected (a sketch of both options appears after the link below).
    • https://blogs.msdn.microsoft.com/sqlserverstorageengine/2017/06/09/what-is-plan-regression-in-sql-server/
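    • A minimal sketch of both mitigations (the query and table are hypothetical; the automatic tuning option is as documented for SQL Server 2017):
      -- Per-query mitigation: force a fresh plan for the regressed statement.
      SELECT COUNT(*) FROM dbo.Orders WHERE CustomerId = 42 OPTION (RECOMPILE);
      -- SQL Server 2017: enable automatic plan choice correction for the current database.
      ALTER DATABASE CURRENT SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);
      -- Inspect detected regressions and the recommended actions.
      SELECT name, reason, score FROM sys.dm_db_tuning_recommendations;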
  • Columnstore Index: How do I find tables that can benefit from Clustered Columnstore Index
  • Azure SQL databases in logical servers, elastic pools, and managed instances
    • Logical servers enable you to perform administrative tasks across multiple databases – including specifying regions, login information, firewall rules, auditing, threat detection, and failover groups. Databases cannot share resources, and each database has guaranteed and predictable performance defined by its own service tier. Some server-level features such as cross-database querying, linked servers, SQL Agent, Service Broker, or CLR are not supported for Azure SQL databases placed in a logical server.
    • Elastic Pools: Databases that need to share resources (CPU, IO, memory) can be placed in elastic pools instead of a logical server. Databases within an elastic pool cannot have different service tiers because they share the resources assigned to the entire pool.
    • Managed instances (In private preview): In May 2017, the concept of a managed instance was announced. With a managed instance, features like SQL CLR, SQL Server Agent, and cross-database querying will be fully supported. Furthermore, a managed instance will have the current capabilities of managed databases, including automatic backups, built-in high-availability, and continuous improvement and release of features in the Microsoft cloud-first development model.
    • You may sign up for the limited preview here: https://sqldatabase-migrationpreview.azurewebsites.net/
    • https://azure.microsoft.com/en-us/blog/new-options-to-modernize-your-application-with-azure-sql-database/
    • https://blogs.msdn.microsoft.com/sqlserverstorageengine/2017/06/13/azure-sql-databases-in-logical-servers-elastic-pools-and-managed-instances/
  • Indirect Checkpoint and tempdb – the good, the bad and the non-yielding scheduler
    • In SQL Server 2016, indirect checkpoint is ON by default, with target_recovery_time set to 60 seconds for the model database.
    • With indirect checkpoint, the engine maintains partitioned dirty page lists per database to track the number of dirty pages in the buffer pool for that database from each transaction.
    • DIRTY_PAGE_POLL is a new system wait type introduced to support the timed wait of the recovery writer thread. If you see it as one of the high wait events in sys.dm_os_wait_stats, you can safely ignore it.
    • One of the scenarios where skewed distribution of dirty pages in the DPList is common is tempdb.
    • If the recovery writer starts falling behind, resulting in long DPLists, the individual worker threads running on the scheduler start playing the role of the recovery writer.
    • In scenarios where the DPList has grown very long, the recovery writer may produce a non-yielding scheduler dump… If you are running SQL Server 2016, we recommend monitoring the output of the sys.dm_os_spinlock_stats DMV for the DP_LIST spinlock to establish a baseline and detect spinlock contention on DP_LIST.
    • For tempdb, … indirect checkpoint is still important to smooth the IO burst activity from automatic checkpoints and to ensure the dirty pages in tempdb do not continue to take buffer pool pages away from user database workloads. In this scenario, we recommend tuning target_recovery_time to a higher value (2-5 minutes) so the recovery writer is less aggressive and strikes a balance between large IO bursts and DPList spinlock contention (a sketch appears after the link below).
    • https://blogs.msdn.microsoft.com/sql_server_team/indirect-checkpoint-and-tempdb-the-good-the-bad-and-the-non-yielding-scheduler/
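    • A minimal sketch of the baseline and tuning steps above (300 seconds is just one point in the suggested 2-5 minute range):
      -- Baseline DP_LIST spinlock activity on SQL Server 2016.
      SELECT name, collisions, spins, spins_per_collision, backoffs
      FROM sys.dm_os_spinlock_stats
      WHERE name = N'DP_LIST';
      -- Make the tempdb recovery writer less aggressive.
      ALTER DATABASE [tempdb] SET TARGET_RECOVERY_TIME = 300 SECONDS;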
  • SQL Server: Large RAM and DB Checkpointing
    • The legacy implementation of database checkpointing, which we’ll call FlushCache, needs to scan the entirety of SQL Server’s Buffer Pool for the checkpoint of any given Database.
    • The new Indirect Checkpoint option relies […] on Dirty Page Managers (DPM), and […] doesn’t rely on the Buffer Pool scan anymore.
    • SQL Server 2016 will by default create new databases with Indirect Checkpoint enabled, and therefore uses the DPM approach [for checkpoints]. [However] there are other places in SQL Server 2016, such as Restore Log which still rely on the FlushCache implementation and will still hit the scan delay on large RAM servers.
    • To activate DPM logic in SQL 2014: ensure SQL Server is at the required build level, activate Indirect Checkpoint for the relevant databases, and enable trace flag 3449 (a sketch appears after the link below).
    • https://blogs.msdn.microsoft.com/psssql/2017/06/29/sql-server-large-ram-and-db-checkpointing/
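    • A minimal sketch of those three steps (the database name is hypothetical; the required build is listed in the post):
      -- 1. Verify the build level against the post.
      SELECT SERVERPROPERTY('ProductVersion');
      -- 2. Activate indirect checkpoint on the relevant database.
      ALTER DATABASE [MyLargeDatabase] SET TARGET_RECOVERY_TIME = 60 SECONDS;
      -- 3. Enable trace flag 3449 globally (or add -T3449 as a startup parameter).
      DBCC TRACEON (3449, -1);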

 

Recent Training and Technical Guides

 

Monthly Script and Tool Tips

 

Fany Carolina Vargas | SQL Dedicated Premier Field Engineer | Microsoft Services