Be aware of the 701 error if you use a memory-optimized table variable in a loop

In the blog post “Importance of choosing correct bucket count of hash indexes on a memory optimized table”, I talked about the performance issues you can encounter with an incorrectly sized bucket count.  I was actually investigating an out-of-memory issue with the following error.

Msg 701, Level 17, State 103, Line 11
There is insufficient system memory in resource pool 'default' to run this query.

I simplified the scenario, but the customer’s code is very similar to the loop below.  Basically, this customer tried to insert 1 million rows into a memory-optimized table variable and process them.  Then he deleted the rows from the memory-optimized table variable and inserted another 1 million.  His goal was to process 1 billion rows in total, but before he got there, he would run out of memory (the 701 error above).

DECLARE @t2 AS [SalesOrderDetailType_inmem]
insert into @t2 select * from t2

while 1 = 1  --note that this customer didn’t use 1=1. I just simplified to reproduce it
begin
       delete @t2
       insert into @t2 select * from t2
end

This customer was puzzled because he deleted the existing rows; at any given time, there should not have been more than 1 million rows, so SQL Server should not have run out of memory.

This is actually by-design behavior, documented in “Memory-Optimized Table Variables”.  Here is what it states: “Unlike memory-optimized tables, the memory consumed (including deleted rows) by table variables is freed when the table variable goes out of scope.”  With a loop like the one above, all the deleted rows are kept and consume memory until the table variable goes out of scope.
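
One workaround follows directly from that behavior: let the table variable go out of scope on every iteration, for example by moving the per-batch work into a stored procedure.  Below is a minimal sketch of the idea (usp_ProcessBatch is a hypothetical name, and the actual processing step is elided):

-- Sketch of a scoping workaround: the table variable is declared inside
-- the procedure, so it goes out of scope, and its memory is freed, at the
-- end of every call.
CREATE PROCEDURE usp_ProcessBatch
AS
BEGIN
       DECLARE @t2 AS [SalesOrderDetailType_inmem]
       insert into @t2 select * from t2
       -- ... process the rows here ...
END  -- @t2 goes out of scope here, releasing its memory (including deleted rows)
GO

while 1 = 1
begin
       exec usp_ProcessBatch
end

Each call starts with an empty table variable and ends by releasing all of its memory, so the deleted rows never accumulate across iterations.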

 

Complete Repro
=============

Step 1 Create a disk-based table and populate it with 1 million rows

CREATE TABLE t2(
  [OrderQty] [smallint] NOT NULL,
  [ProductID] [int] NOT NULL,
  [SpecialOfferID] [int] NOT NULL,
  [LocalID] [int] NOT NULL
  )
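
The population script is not shown above; a minimal sketch of one way to load 1 million rows follows (the values are arbitrary, since any data set of that size reproduces the problem):

-- Generate 1 million arbitrary rows by cross joining a large catalog view
-- against itself and numbering the rows.
insert into t2 (OrderQty, ProductID, SpecialOfferID, LocalID)
select top (1000000)
       1,
       row_number() over (order by (select null)),
       1,
       row_number() over (order by (select null))
from sys.all_columns a cross join sys.all_columns b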

Step 2 Create a table type using a memory-optimized table

CREATE TYPE [SalesOrderDetailType_inmem] AS TABLE(
  [OrderQty] [smallint] NOT NULL,
  [ProductID] [int] NOT NULL,
  [SpecialOfferID] [int] NOT NULL,
  [LocalID] [int] NOT NULL,

  INDEX [IX_ProductID] HASH ([ProductID]) WITH ( BUCKET_COUNT = 1000000),
  INDEX [IX_SpecialOfferID] NONCLUSTERED (LocalID)
)
WITH ( MEMORY_OPTIMIZED = ON )

Step 3 Run the following query; eventually you will run out of memory
DECLARE @t2 AS [SalesOrderDetailType_inmem]
insert into @t2 select * from t2

while 1 = 1
begin
       delete @t2
       insert into @t2 select * from t2
end
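
While the loop runs, one way to watch the memory accumulate is to query the in-memory OLTP memory clerk from a second session (a diagnostic sketch, not part of the original repro); its pages_kb keeps climbing even though @t2 never holds more than 1 million rows at a time:

-- Run from a second session while the loop executes.  MEMORYCLERK_XTP is
-- the memory clerk that tracks in-memory OLTP allocations.
select type, name, pages_kb / 1024 as pages_mb
from sys.dm_os_memory_clerks
where type = 'MEMORYCLERK_XTP'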

 

Jack Li | Senior Escalation Engineer | Microsoft SQL Server

twitter | pssdiag | SQL Nexus

Comments

  • Anonymous
    February 23, 2017
    I like this post, enjoyed this one; regards for posting.
  • Anonymous
    February 24, 2017
    Okay, so if you have to loop, how would you work around that? Dynamic SQL for the whole thing?
  • Anonymous
    April 18, 2017
    Hi Jack. It would be good to know what solution options exist for this scenario. Thanks
    • Anonymous
      May 08, 2017
      Alex, the solution would be to use a memory-optimized table, not a memory-optimized table variable. Please refer to Scenario B or Scenario C in this post: http://bit.ly/2pcuKU3
      • Anonymous
        December 13, 2017
        Thank you, this is exactly what I was looking for. We have just installed SQL 2016, and I immediately wanted to test all the cool features. I have a cursor which uses a #temp table and works perfectly; I have spent a lot of time optimizing it. I saw a couple of nice tests of memory-optimized table variables, and they really are faster than tempdb, so I thought it was just what I needed. I rewrote the code, and after I managed to run it, it worked perfectly, so I let it run on the server overnight, expecting to have the required data by morning. The first thing when I got up was of course to check the result, and what I got was the insufficient-memory error message. I figured out the issue is the same as here. I used a memory-optimized table variable: I inserted 16000 records into a 23-column (varchar(15)) memory-optimized table variable, and 23 times deleted and re-inserted the remaining selected records to find the single best record I needed. I ran it 60000 times using 12 instances; the server has 1 TB of memory. Some of the processes finished the work, some of them failed, and I probably used up all the resources of the server. Lesson learned.