
Load 1TB in less than 1 hour

OVERVIEW

 

This project used SQL Server 2005 Enterprise Edition (with SP1 beta) and the BULK INSERT command with 60 parallel input files, each approximately 16.67 GB.  The best time I got was 50 minutes.  Hewlett Packard was kind enough to loan me some time one night on the machine they use to do the TPC-C tests.  It is an Itanium 2 Superdome with 64 1.6GHz, 9MB-cache (Madison) CPUs.  For the insert tests the system was equipped with 256GB of RAM.  The HP SAN configuration is rated at 14GB per second throughput.  You can find the specific configurations at https://tpc.org/tpcc/results/tpcc_result_detail.asp?id=103082701.  (Note:  The configurations at this site may be updated periodically).

 

 

BEST PRACTICES AND LESSONS LEARNED

Here is a list of things we learned during this project that should be helpful in your projects.

  1. Run as many load processes as you have available CPUs.  If you have 32 CPUs, run 32 parallel loads.  If you have 8 CPUs, run 8 parallel loads. 
  2. If you have control over the creation of your input files, make them of a size that is evenly divisible by the number of load threads you want to run in parallel.  Also make sure all records belong to one partition if you want to use the switch partition strategy.
  3. Use BULK INSERT instead of BCP if you are running the process on the SQL Server machine. 
  4. Use table partitioning to gain another 8-10%, but only if your input files are GUARANTEED to match your partitioning function, meaning that all records in one file must be in the same partition.
  5. Use TABLOCK to avoid row-at-a-time locking.
  6. Use ROWS PER BATCH = 2500, or something near this, if you are importing multiple streams into one table.  (An example command is sketched after this list.)
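
Each of the parallel loads boils down to a single command along these lines.  The table name, file path, and exact option list here are placeholders for illustration, not the actual test script:

    BULK INSERT dbo.LINEITEM
    FROM 'E:\load\lineitem_01.tbl'     -- placeholder path; one of the 60 input files
    WITH
    (
        FIELDTERMINATOR = '|',         -- pipe-delimited character files
        ROWTERMINATOR   = '\n',
        TABLOCK,                       -- best practice 5: avoid row-at-a-time locking
        ROWS_PER_BATCH  = 2500         -- best practice 6; note that BATCHSIZE is the
                                       -- option that actually controls commit batch size
    );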

 

 

TEST SETUP

The files were on the same SAN as the database, but on different drive arrays.  There were 12 files on a single drive array, and it took 5 separate disk arrays to handle all 60 input files.  The database had two filegroups and 96 files.  The table used for inserts was LINEITEM, which resided in its own filegroup placed on the outer tracks of the disks.

 

The database was in the BULK_LOGGED recovery model, so there was very little transaction log activity.  Extent allocations and related metadata are logged, but individual page changes and individual record inserts are not.
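
Switching the recovery model is a one-line change (the database name here is a placeholder):

    ALTER DATABASE TPCH SET RECOVERY BULK_LOGGED;   -- minimal logging for bulk operations
    -- ... run the load ...
    ALTER DATABASE TPCH SET RECOVERY FULL;          -- switch back and take a backup afterwards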

 

The input files were character mode with the pipe | character as the column delimiter.  These flat files were generated using DBGen, a standard tool supplied by the TPC Council.  The flat files can be considered clean, i.e. no scrubbing or any other operation is needed.  Rows were variable length, approximately 130 bytes on average.

 

The final setup included creating 60 jobs, each containing a single BULK INSERT command.  This was done so that the number of running commands could be easily controlled without having a lot of query windows open.  The sp_start_job stored procedure was used to start each individual job.
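
The job definitions were not part of this write-up, but each of the 60 jobs amounts to something like the following sketch (job name, database, command, and path are placeholders):

    USE msdb;
    EXEC dbo.sp_add_job       @job_name = N'Load_LINEITEM_01';
    EXEC dbo.sp_add_jobstep   @job_name = N'Load_LINEITEM_01',
                              @step_name = N'bulk insert',
                              @subsystem = N'TSQL',
                              @database_name = N'TPCH',
                              @command = N'BULK INSERT dbo.LINEITEM FROM ''E:\load\lineitem_01.tbl'' WITH (FIELDTERMINATOR = ''|'', TABLOCK)';
    EXEC dbo.sp_add_jobserver @job_name = N'Load_LINEITEM_01';

    -- started from a single query window, one call per job
    EXEC dbo.sp_start_job     @job_name = N'Load_LINEITEM_01';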

 

The first test was done into a single table with ROWS PER BATCH = 2500.  Very little blocking was observed until 40 threads were started.  Since the batch size was 2,500, the locks were short lived.  Running one of the new dynamic management views (DMVs) showed that the blocking was coming from waits on extent allocations in the single table.  Variations of these tests were very consistently completed in 65 minutes.  In an attempt to avoid the extent allocation waits, the next tests were designed to use multiple tables.
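
The exact query was not recorded, but a DMV query along these lines shows what the bulk threads are waiting on; allocation contention typically surfaces as latch waits on allocation pages such as GAM/SGAM/PFS:

    SELECT session_id, wait_type, wait_duration_ms,
           blocking_session_id, resource_description
    FROM sys.dm_os_waiting_tasks
    WHERE session_id > 50                 -- ignore system sessions
    ORDER BY wait_duration_ms DESC;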

 

The next round of tests was done with a separate table for each input file.  Each table had one partition, and at the end the partitions were switched into a single large table that contained 60 partitions.  (Switching is a new operation in SQL Server 2005 table partitioning.)  The switching is very fast because it is a metadata-only operation, and our tests averaged 22-35 milliseconds per switch.  The difficulty here was to make sure that each record in each input file was really part of that partition.  The input files could only contain records that matched that partition range and could not contain records from other ranges.  With this test, the batch size made very little difference since there was always one BULK INSERT thread per table.  TABLOCK was important to avoid per-row lock management.
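
Each switch is a single statement once the staging table qualifies (table names and partition number are placeholders):

    -- metadata-only; this is the 22-35 millisecond operation mentioned above
    ALTER TABLE dbo.LINEITEM_01
        SWITCH TO dbo.LINEITEM PARTITION 1;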

 

In the end, going to separate tables gave us about an 8-10% improvement in overall throughput.  This was the load scenario that took 50 minutes just to load the data into the tables.  There was no time to do a full test of the index build or the constraint creation that validates the partition.  These steps are saved for a later test.

 

 

CPU BOTTLENECK

The limiting factor was CPU, not disk I/O.  The SAN has a throughput rating of 14GB per second.  SQLIO was the tool used to measure the maximum throughput of the drives.  With all 60 threads running BULK INSERT in our tests, the maximum write activity was 333MB/sec.  This gave a maximum of slightly over 2.1 million bulk copy rows per second.  PERFMON was used to watch these counters.
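
If you prefer watching from inside the server rather than PERFMON, the same counters are exposed through a DMV (this query is my addition, not part of the original test):

    -- Note: these are per-second counters, so sample twice and take the difference.
    SELECT instance_name, counter_name, cntr_value
    FROM sys.dm_os_performance_counters
    WHERE object_name LIKE '%:Databases%'
      AND counter_name IN ('Bulk Copy Rows/sec', 'Bulk Copy Throughput/sec');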

 

One BULK INSERT thread would take over an idle CPU and consume 99% to 100% of it.  Subsequent jobs were very good at choosing another CPU and did not seem to schedule themselves on an already busy CPU.

 

TEST PROCESS

The first tests were run by starting one job at a time and letting it reach a steady state.  All measurements were then recorded and another job was started.  The increase in each of the counters being watched was very predictable, but not quite linear, meaning that the second job ran at a rate just below the first.  In fact, it turns out that the curve is logarithmic and starts to become very flat after 60 concurrent jobs.

 

The first tests also used BCP, until I quickly noticed that each BCP process was taking a few percent of a CPU.  By 55 jobs the total CPU was at 100%, so I switched to BULK INSERT so that the import threads ran inside the SQL Server process.  After switching to the BULK INSERT command, throughput increased with each added job up to 60 concurrent jobs, at which point total CPU was consistently over 95%.  From 61 to 64 jobs, throughput was roughly the same as at 60.

 

During the first tests, there were 96 files of approximately 8GB to 9GB in size.  I could easily import the first 48 files in a very consistent 30 minutes and the other 48 files in 30 minutes as long as I did not run over 60 jobs at a time.  Loading 1TB in 60 minutes is a pretty good time. 

 

Once I noticed that CPU was the limiting factor, I created 60 files, each approximately 16.67GB in size.  This gave the best run time of 50 minutes, but it was somewhat of a manufactured test because not every customer will have this fine-grained control over their input files.  As mentioned earlier, these input files exactly matched the final partitioning scheme that we designed for the large table.
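
As a quick sanity check on those numbers: 60 files x 16.67GB each is roughly 1,000GB, or about 1TB, and 1TB in 50 minutes works out to roughly 1,000,000MB / 3,000 seconds, or about 333MB/sec sustained, which lines up with the peak write rate measured above.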

 

The final test was to start all 60 jobs at once to make sure the CPU balancing was at the desired level.  It was fine, meaning that no two BULK INSERT threads ever ran on the same CPU.  The developers on the SQL product team tell me that they specifically check the UMS scheduler to attempt to have only one bulk command per scheduler.  Of course, if you run more commands than you have CPUs, it will schedule multiple bulk commands per CPU.
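
One way to observe that spread from inside the server (again my addition, not how it was verified at the time) is to look at the scheduler DMV while the jobs are running:

    SELECT scheduler_id, cpu_id, current_tasks_count,
           runnable_tasks_count, active_workers_count
    FROM sys.dm_os_schedulers
    WHERE scheduler_id < 255;   -- visible (CPU-bound) schedulers only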

 

NOT FINISHED YET

At this point we had all the data in 60 separate tables.  This is a good scenario for anyone wanting to use local partitioned views, which are defined by creating a single view that does a UNION ALL over the tables.  In that case, we would be done with the load.
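
A local partitioned view over the 60 tables is just a UNION ALL.  In this sketch the view and table names are placeholders, and each member table needs a CHECK constraint on the partitioning column for the optimizer to eliminate partitions:

    CREATE VIEW dbo.LINEITEM_ALL
    AS
        SELECT * FROM dbo.LINEITEM_01
        UNION ALL
        SELECT * FROM dbo.LINEITEM_02
        UNION ALL
        -- ... one branch per staging table, 60 in total
        SELECT * FROM dbo.LINEITEM_60;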

 

Customers that would like to use table partitioning have to take several more steps to get all these partitions into one table.  The steps are as follows:

  1. Create a partition function.
  2. Create a partition scheme.
  3. Create a clustered index on each table.
  4. Create the check constraint that will be used for partitioning.
  5. Switch the partitions into the final large table.

There is a reason to create the index before the check constraint.  If we did it the other way around, a table scan would be done to validate each row against the constraint.  By creating the clustered index first, the constraint check only has to look at the first and last index values to ensure that all rows satisfy the constraint.
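
Put together, the sequence looks something like the sketch below.  Object names, the partitioning column, and boundary values are placeholders; the real table has 60 partitions, and the partitioned target LINEITEM is assumed to already exist on the partition scheme with a matching clustered index:

    CREATE PARTITION FUNCTION pf_lineitem (int)
        AS RANGE RIGHT FOR VALUES (1000000, 2000000 /* ... 59 boundaries for 60 partitions */);

    CREATE PARTITION SCHEME ps_lineitem
        AS PARTITION pf_lineitem ALL TO ([LINEITEM_FG]);

    -- clustered index first, so the constraint check below can use it
    CREATE CLUSTERED INDEX cix_lineitem_01
        ON dbo.LINEITEM_01 (l_orderkey) ON [LINEITEM_FG];

    -- the check constraint that proves every row belongs to partition 1
    ALTER TABLE dbo.LINEITEM_01 WITH CHECK
        ADD CONSTRAINT ck_lineitem_01 CHECK (l_orderkey < 1000000);

    -- metadata-only switch into the final large table
    ALTER TABLE dbo.LINEITEM_01 SWITCH TO dbo.LINEITEM PARTITION 1;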

 

NEXT TESTS

There are plenty of other ideas of what to try next, but since I was just borrowing time at night between the TPC-H runs to do this, I will have to wait until the machines are available again.  However, given system availability, Hewlett Packard has offered to continue this line of tests.  The patterns are so predictable that the tests could be run on machines with fewer CPUs to see the patterns and timings.  Of course, you won’t be able to load 1TB in less than an hour unless you have a lot of CPUs. 

 

  • Run BCP or BULK INSERT from other client machines to remove the need for SQL Agent and SQL Server to handle the reading of the files directly.
  • Move the input files to another computer to see what the load pattern looks like with the network involved.
  • Run using the SSIS bulk insert task.  Run this from other machines to lower the CPU burden on the SQL Server machine.  If we run this on the SQL Server machine, it will most likely have the same effect as using BCP. 
  • Create indexes on the table.
  • Run tests on smaller machines since we don’t expect all customers to have a 64 CPU machine.
  • Presort the input files and create the clustered index before loading the records.  If the files are presorted, it should allow us to shorten the index creation step.
  • See if NUMA settings have any impact.  This may not seem helpful given a CPU bottleneck, but if any foreign memory is assigned to the nodes, it will be slower.  There is a way to avoid getting foreign memory; see https://blogs.msdn.com/slavao

Credits:   Thanks to Sunil Agarwal and his team from the Microsoft SQL Server product group and Mike Fitzner from Hewlett Packard for their assistance.

 

enjoy -- Kevin

Comments

  • Anonymous
    May 24, 2006
    Interesting read; I look forward to your additional tests (the SSIS test in particular).  Can you elaborate on one of the best practices you mentioned above?

    “Use ROWS PER BATCH = 2500, or something near this if you are importing multiple streams into one table.”

    How did you determine that this was the best value to use?  Wouldn’t this be specific to the data that you are loading?  When I perform migrations of large sets of data, I determine the correct batch size by looking at the data types in the table.  In SQL 2000 the max log block is 60KB, so I insert my data in batches that size the insert as close to 60KB as possible without going over.

    How many extent allocations does it take to reach this 60KB limit?  

    In your tests it appears this would not have mattered much, since you state your issues related to waits on extent allocations, but on lesser systems it would be good to understand how to size BULK INSERTs around the 60KB log flush.





  • Anonymous
    May 24, 2006
    Amazing statistics.  Interesting that disk I/O on the SAN configuration did NOT turn out to be the bottleneck in this case.  What was the memory consumption during this test?

  • Anonymous
    May 24, 2006
    Response to Bert: 2500 rows per batch is used to avoid lock escalation.    

    How many extent allocations get to the 60KB limit on the log flush?  I don't know, and I'm not sure it's a significant enough delay to worry about.

    Response to Ted on the memory consumption:  Unfortunately I've deleted the perfmon logs to get more disk space.  The machine had 256GB RAM and I remember that it wasn't getting close.  I would have paid more attention to it if there had been any memory waits.

  • Anonymous
    May 29, 2006
    I am curious about the NUMA effects:
    How many log write threads can SQL Server use?
    If more than one, can affinity be controlled?
    Is there any benefit to having the log write thread on the same NUMA node, or even the same processor/core?
    What is the penalty for a context switch vs. communication between threads?
    If the system has hyper-threading, is there any benefit to having the log thread on the other logical processor from the thread that did the insert?

  • Anonymous
    May 30, 2006
    What is the maximum page size in SQL Server 2005?
    Is it possible to modify it?

  • Anonymous
    May 30, 2006
    Very nicely done... It does sound a bit like an advertisement for HP and pretty much demonstrates that you can do just about anything if the hardware is there...

    Still, you did a nice job writing this up and I appreciate your sharing your hard earned findings with us.
