Kronos: Oracle to SQL Server Migration – Performance – Part 2
Author:
This article is written by our contributing author Ken Lassesen. His bio can be found here
[Prior Post in Series] [Next Post in Series]
This post is part of the series started by this overview of Migrating from Oracle to SQL Server, and deals with experiences from Kronos migrations.
Javed Iqbal, a Kronos migration specialist, has just finished the first migration of data from Oracle to SQL Server on a client’s test system. He was not ecstatic about the performance and asked me to assemble recommended practices for improving it. I contacted the SSMA team, and Welly Lee kindly provided me with some reference material and information about changes coming this summer with the release of version 5.0 of SQL Server Migration Assistant for Oracle.
Field Experience
Javed had already done some of the items in the recommendations below, and he was running on a production-grade test system with more cores than I have fingers. The data migration took 12 hours for 120 GB of data in Oracle (excluding indexes), or about 10 GB/hour. This is better than the typical 4 GB/hour reported from developer-grade systems (illustrating the value of 15,000 RPM SCSI drives in RAID arrays).
This is respectable performance, but when you are dealing with terabyte databases the waiting gets old.
I checked with Welly whether SSMA 4.2 does the migration in a single-threaded style, and the answer was yes :-(. He also informed me that version 5.0 will exploit multiple cores :-), with initial results suggesting that the improvement may be proportional to the number of cores.
I should add a caveat that your performance with 5.0 may be considerably less depending on hardware configuration. Issues such as insufficient memory, hard-drive spindle contention, and network congestion tend to bottleneck performance.
Recommendations to improve SSMA Performance
SSMA uses bulk import operations, which tells us how the destination should be configured. I will skip over the mechanics and list the key points:
- Run the SQL Server Best Practices Analyzer on the destination instance and check for any issues that could impact performance, for example incorrect disk partitioning.
- Set the Batch Size in SSMA4O to a larger value
- Do not replicate tables during the migration
- If you need replication, set it up after the migration
- Pre-size TempDB and the database to the estimated size
- Disable all indexes
- Disable all triggers
- Disable all constraints
- Set the recovery model to simple
- Set Table Lock to true
For more information on bulk import see: Prerequisites for Minimal Logging in Bulk Import. For some numbers on performance see: Making data imports into SQL Server as fast as possible.
The Coding Solution
Doing some of the items above through SQL Server Management Studio can result in carpal tunnel syndrome, so I include code solutions for them below.
Pre-size TempDB and the database to the estimated size
In our example we had a 120 GB estimate of the Oracle database size, so we defined the new one to be the same size.
ALTER DATABASE {database_name}
MODIFY FILE (NAME = {db_file_name}, SIZE = 120000MB, FILEGROWTH = 20%);
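The same idea applies to TempDB. A minimal sketch, assuming the default logical file name tempdev (check sys.master_files on your instance) and assuming 20 GB is enough working space for the load:
USE master;
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 20000MB);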
Disable/Enable all indexes
SET NOCOUNT ON
DECLARE @CMD nvarchar(max)
DECLARE @Schema nvarchar(max)
DECLARE @Table nvarchar(max)
DECLARE @Index nvarchar(max)
DECLARE @State nvarchar(max)
SET @State='DISABLE' -- Use 'REBUILD' to Enable
-- STEP A
DECLARE PK_Cursor CURSOR FOR
SELECT S.Name, O.Name,I.Name FROM sys.indexes I
JOIN sys.objects O
ON I.Object_id=O.Object_ID
JOIN sys.schemas S
ON O.Schema_Id= S.Schema_Id
AND O.Type IN ('U','V')
WHERE I.Type in (1,2)
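-- Note: disabling a clustered index (type 1) also makes the table unavailable for inserts;
-- for a bulk load you typically disable only nonclustered indexes (type 2)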
OPEN PK_Cursor;
FETCH NEXT FROM PK_Cursor INTO @Schema,@Table, @Index
WHILE @@FETCH_STATUS = 0
BEGIN
SET @CMD='ALTER INDEX ['+@Index+'] ON ['+@Schema+'].['+@Table+'] '+@State
BEGIN TRY
EXEC (@CMD)
END TRY
BEGIN CATCH
PRINT 'Error: '+@CMD
END CATCH
FETCH NEXT FROM PK_Cursor INTO @Schema,@Table,@Index
END;
CLOSE PK_Cursor;
DEALLOCATE PK_Cursor;
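After re-enabling with REBUILD, a quick query against the catalog views confirms nothing was left disabled:
SELECT S.Name AS SchemaName, O.Name AS TableName, I.Name AS IndexName
FROM sys.indexes I
JOIN sys.objects O ON I.Object_id=O.Object_ID
JOIN sys.schemas S ON O.Schema_Id=S.Schema_Id
WHERE I.is_disabled = 1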
Disable/Enable all triggers
SET NOCOUNT ON
DECLARE @CMD nvarchar(max)
DECLARE @Schema nvarchar(max)
DECLARE @Table nvarchar(max)
DECLARE @State nvarchar(max)
SET @State='DISABLE' -- Use 'ENABLE' to Enable
-- STEP A
DECLARE TR_Cursor CURSOR FOR
SELECT DISTINCT S.Name, O.Name FROM sys.triggers T
JOIN sys.objects O
ON T.parent_id=O.Object_ID
JOIN sys.schemas S
ON O.Schema_Id= S.Schema_Id
AND O.Type IN ('U','V')
OPEN TR_Cursor;
FETCH NEXT FROM TR_Cursor INTO @Schema,@Table
WHILE @@FETCH_STATUS = 0
BEGIN
SET @CMD='ALTER TABLE ['+@Schema+'].['+@Table+'] '+@State+ ' TRIGGER ALL'
BEGIN TRY
EXEC (@CMD)
END TRY
BEGIN CATCH
PRINT 'Error: '+@CMD
END CATCH
FETCH NEXT FROM TR_Cursor INTO @Schema,@Table
END;
CLOSE TR_Cursor;
DEALLOCATE TR_Cursor;
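A similar catalog check confirms all table triggers were re-enabled after the load:
SELECT OBJECT_SCHEMA_NAME(parent_id) AS SchemaName,
OBJECT_NAME(parent_id) AS TableName, Name AS TriggerName
FROM sys.triggers
WHERE is_disabled = 1 AND parent_class = 1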
Disable/Enable all constraints
SET NOCOUNT ON
DECLARE @CMD nvarchar(max)
DECLARE @Schema nvarchar(max)
DECLARE @Table nvarchar(max)
DECLARE @State nvarchar(max)
SET @State='NOCHECK' -- Use 'WITH CHECK CHECK' to re-enable and re-validate existing data (yes, CHECK appears twice); NOCHECK/CHECK only affect FOREIGN KEY and CHECK constraints
-- STEP A
DECLARE TR_Cursor CURSOR FOR
SELECT DISTINCT
SCHEMA_NAME(schema_id) AS SchemaName,
OBJECT_NAME(parent_object_id) AS TableName
FROM sys.objects
WHERE type_desc LIKE '%CONSTRAINT'
OPEN TR_Cursor;
FETCH NEXT FROM TR_Cursor INTO @Schema,@Table
WHILE @@FETCH_STATUS = 0
BEGIN
SET @CMD='ALTER TABLE ['+@Schema+'].['+@Table+'] '+@State
+ ' CONSTRAINT ALL'
BEGIN TRY
EXEC (@CMD)
END TRY
BEGIN CATCH
PRINT 'Error: '+@CMD
END CATCH
FETCH NEXT FROM TR_Cursor INTO @Schema,@Table
END;
CLOSE TR_Cursor;
DEALLOCATE TR_Cursor;
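If you re-enable with WITH CHECK, it is worth confirming that no constraints were left disabled or untrusted (the optimizer ignores untrusted constraints). A quick check:
SELECT OBJECT_SCHEMA_NAME(parent_object_id) AS SchemaName,
OBJECT_NAME(parent_object_id) AS TableName, Name AS ConstraintName
FROM sys.foreign_keys
WHERE is_disabled = 1 OR is_not_trusted = 1
UNION ALL
SELECT OBJECT_SCHEMA_NAME(parent_object_id),
OBJECT_NAME(parent_object_id), Name
FROM sys.check_constraints
WHERE is_disabled = 1 OR is_not_trusted = 1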
Set the recovery model to simple
ALTER DATABASE {database_name} SET RECOVERY SIMPLE
You need to change the recovery model back to the setting recommended by Kronos (i.e., your ISV) after the migration.
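For example, assuming the ISV recommends the full recovery model (check your documentation):
ALTER DATABASE {database_name} SET RECOVERY FULL
-- The log chain does not restart until a full backup is taken
BACKUP DATABASE {database_name} TO DISK = '{backup_file}'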
Set Table Lock to true
We use sp_tableoption for this operation.
SET NOCOUNT ON
DECLARE @CMD nvarchar(max)
DECLARE @Schema nvarchar(max)
DECLARE @Table nvarchar(max)
DECLARE @State nvarchar(max)
SET @State='1' -- Use '0' to restore normal operation
-- STEP A
DECLARE TR_Cursor CURSOR FOR
SELECT
SCHEMA_NAME(schema_id) AS SchemaName,
OBJECT_NAME(object_id) AS TableName
FROM sys.objects
WHERE type='U'
OPEN TR_Cursor;
FETCH NEXT FROM TR_Cursor INTO @Schema,@Table
WHILE @@FETCH_STATUS = 0
BEGIN
SET @CMD='exec sp_tableoption @TableNamePattern=''['
+@Schema+'].['+@Table
+']'' ,@OptionName=''table lock on bulk load'', @OptionValue='
+@State
BEGIN TRY
EXEC (@CMD)
END TRY
BEGIN CATCH
PRINT 'Error: '+@CMD
END CATCH
FETCH NEXT FROM TR_Cursor INTO @Schema,@Table
END;
CLOSE TR_Cursor;
DEALLOCATE TR_Cursor;
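OBJECTPROPERTY exposes this setting as TableLockOnBulkLoad, so you can verify it was applied (and later confirm it was turned back off) with:
SELECT SCHEMA_NAME(schema_id) AS SchemaName, Name AS TableName,
OBJECTPROPERTY(object_id,'TableLockOnBulkLoad') AS TableLockOnBulkLoad
FROM sys.objects
WHERE type='U'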
Comments
Anonymous
April 05, 2011
Thanks, have to put them to work....
Anonymous
April 06, 2011
Nice blog with very useful scripts! I see the SQL user database size is set to the same size as the Oracle user database. Is there any guidance on potential db size growth? Should we be doing anything about the size of the trx log? Are there any duration insights from the implementer, not only for loading but also the time taken to re-enable all the objects (indexes, constraints), etc.? It's great to see these kinds of blogs. Thank you!
Anonymous
April 29, 2011
Hey Ken. Good stuff. I've done the cross post to www.thekronosguy.com but left out the code. I'm sending them here for that detail! Bryan deSilva www.improvizations.com/kronosblog