Azure SQL Database: How to Develop and Deploy

This article provides guidelines on deploying an existing on-premises SQL Server database to Azure SQL Database, along with best practices for data migration.


Developing with SQL Database

SQL Database is built on the SQL Server core engine, so developing against SQL Database is very similar to developing against an on-premises SQL Server. While certain features are not supported, most T-SQL syntax is compatible. The MSDN article http://msdn.microsoft.com/en-us/library/ee336281.aspx provides a comprehensive description of the T-SQL features that are supported, not supported, and partially supported in SQL Database.

The SQL Server 2008 R2 release adds client tools support for SQL Database, including support in SQL Server Management Studio (SSMS). SQL Server 2008 R2 and above fully support SQL Database in terms of seamless connectivity, viewing objects in Object Explorer, SMO scripting, and so on.

At this time, if you have an application that needs to be migrated to SQL Database, there is no way to test it locally against SQL Database. The only way to test is to actually deploy the database to SQL Database.

Back to top

Connecting to SQL Database Using SSMS

The SQL Server 2008 R2 release of SQL Server Management Studio (SSMS) provides full support for SQL Database.

The SQL Database July 2011 service release requires SQL Server 2008 R2 SP1. 

For more information, see Announcing: Azure SQL Database July 2011 Service Release.

If you need to connect to a specific database, click the Options button in the Connect to Server dialog box and enter the database name in the Connect to database box.

Note: USE <Database> is not supported, so if you need to connect to another database after you are logged in, right-click anywhere in the editor, click Connection, and then click Change Connection. If you are using SQL Server 2008 R2 Management Studio, you can click the database you wish to connect to and then click the New Query button. An attempt to switch databases with USE fails, as shown below.
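For illustration, switching databases in a query window fails with an error similar to the following (the exact message text may vary by service release):

USE master;
-- Msg 40508: USE statement is not supported to switch between databases.
-- Use a new connection to connect to a different database.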

When using a prior version of SSMS, the login must be in the format [Username]@[SQLAzureServerName].

Back to top

Connecting to SQL Database Using Sqlcmd

You can connect to SQL Database with the sqlcmd command prompt utility that is included with SQL Server. The sqlcmd utility lets you enter Transact-SQL statements, system procedures, and script files at the command prompt. To connect to SQL Database by using sqlcmd, append the SQL Database server name to the login in the connection string by using the <login>@<server> notation. For example, if your login is login1 and the fully qualified name of the SQL Database server is servername.database.windows.net, the username parameter of the connection string is: login1@servername. This restriction places limitations on the text you can choose for the login name. For more information, see CREATE LOGIN (Azure SQL Database).
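For example, the following command connects to a user database using the login and server names above (the password and database name are placeholders):

sqlcmd -S servername.database.windows.net -U login1@servername -P <password> -d <databasename>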

SQLCMD does not come with the base install of SQL Server or the client tools. It can be installed from the SQL Server 2008 R2 Feature Pack.

Note: SQL Database does not support the -z and -Z SQLCMD options used for changing a user's password. To change login passwords, you can use ALTER LOGIN (Azure SQL Database) after connecting to the master database.
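For example, after connecting to the master database, a login password can be changed as follows (the login name and password here are placeholders):

ALTER LOGIN login1 WITH PASSWORD = 'newStrongP@ssw0rd';
GO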

The following example shows how to create a new table from a sqlcmd session connected to a user database on a SQL Database server:

1> CREATE TABLE table1 (Col1 int primary key, Col2 varchar(20)); 
2> GO 
3> QUIT

Note: SQL Database requires all tables to have a clustered index. If you try to insert data into a table that is a heap, you will see an error message. A clustered index can be added as shown below.
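If an existing table is a heap, you can add a clustered index before inserting data; a minimal sketch (the table, index, and column names here are illustrative):

CREATE CLUSTERED INDEX IX_heapTable_Col1 ON heapTable (Col1);
GO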

Back to top

Deploying to SQL Database

Deploying a database developed on-premises to SQL Database involves two steps: schema migration and data migration. At this time, backing up an on-premises database and restoring it into SQL Database is not supported. Depending on what tools you use to generate the schema, migration can be a little tricky, because SQL Database supports only a subset of the T-SQL supported by SQL Server 2008. As new features are added to SQL Database, the tools supporting schema generation need to be updated to support them. SQL Server 2008 R2 has full support for SQL Database: you can point the Generate Scripts Wizard at a SQL Database target, and the generated scripts can be executed directly on SQL Database. For customers who do not have the SQL Server 2008 R2 (November CTP) version of SSMS, there is a workaround.

Schema Migration with SQL Server 2008 R2

The November 2010 update to SQL Server 2008 R2 includes support for SQL Database. The Generate Scripts Wizard now allows you to script for the SQL Database engine type, so the generated scripts are directly compatible with SQL Database and can be executed on it without further modification. For detailed information, see [[How to Migrate Schema to Azure SQL Database using the Generate SQL Server Scripts Wizard]].

Schema Migration with Pre-SQL Server 2008 R2

The script generated via the Generate Scripts option in previous versions of SSMS needs to be modified to make it compatible with SQL Database:

· In the Object Explorer of your on-premises SQL Server, right-click the database that you want to generate SQL Database scripts for, point to Tasks, and select Generate Scripts.

· In the Script Wizard dialog box, click Next to get to the Select Database step. Select Script all objects in the selected database, and then click Next.

· In Choose Script Options, click the Advanced button and set the following options:

  • Convert UDDTs to Base Types = True
  • Script Extended Properties = False
  • Script Logins = False
  • Script USE DATABASE = False
  • Script Data = False

SQL Database does not support user-defined data types, extended properties, Windows authentication, or the USE statement.

· Click Next until you reach Finish. The Script Wizard generates the script. Click Close when the script is complete.

· In the generated script, delete all instances of "SET ANSI_NULLS ON".

· Each CREATE TABLE statement includes a "WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]" clause. Delete all instances of that clause.

· Each CREATE TABLE statement includes an "ON [PRIMARY]" clause. Delete all instances of that clause. (See the before-and-after example following this list.)
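The following sketch shows these edits applied to a hypothetical generated CREATE TABLE statement (the table and column names are illustrative):

-- Before (as generated by a pre-2008 R2 Script Wizard):
SET ANSI_NULLS ON
CREATE TABLE [dbo].[Customers](
    [CustomerId] [int] NOT NULL,
    [Name] [varchar](50) NULL,
 CONSTRAINT [PK_Customers] PRIMARY KEY CLUSTERED ([CustomerId] ASC)
 WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

-- After (compatible with SQL Database):
CREATE TABLE [dbo].[Customers](
    [CustomerId] [int] NOT NULL,
    [Name] [varchar](50) NULL,
 CONSTRAINT [PK_Customers] PRIMARY KEY CLUSTERED ([CustomerId] ASC)
)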

In the wizard above, there is an option to script data along with the schema. This option can be set to True if you have a few tables with a few rows of data each. However, if you have several tables with tens of thousands of rows, the script can become quite large, with an INSERT statement for each row. A more efficient way to migrate your data is via BCP, SSIS, or the SqlBulkCopy API, as described in the following sections.

Data Migration

The following options are available to migrate data from an on-premises SQL Server to SQL Database:

· Setting the Script Data option to True in the database Generate Scripts Wizard

· BCP

· SQL Server Integration Services (SSIS) using ADO.NET or ODBC providers only

· Custom solution using SqlBulkCopy API

The Generate Scripts Wizard generates a T-SQL file with an INSERT statement for each row, as illustrated below. Depending on how much data you have, this may or may not be efficient: for smaller data migrations it can be useful, but if you have data on the order of gigabytes, bulk copying the data is faster and more efficient.
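For illustration, the data portion of a generated script looks like the following hypothetical fragment, one INSERT per row:

INSERT [dbo].[Customers] ([CustomerId], [Name]) VALUES (1, N'Contoso')
INSERT [dbo].[Customers] ([CustomerId], [Name]) VALUES (2, N'Fabrikam')
-- ...one INSERT statement for every row in the table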

Back to top

Data Migration Using BCP

Support for BCP.EXE and bulk copy is included in SQL Database.

(1) Create a format file. A format file provides a flexible system for writing data files that requires little or no editing to comply with other data formats, or for reading data files from other software programs. Create either an XML or a non-XML format file using one of the methods described in http://msdn.microsoft.com/en-us/library/ms191516.aspx.
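For example, a non-XML format file can be generated directly with the bcp format option (the server, database, and path names are placeholders):

BCP YourDatabase.dbo.yourTable format nul -c -f C:\Folder\Formatfile.fmt -S ServerName -U username -P Password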

(2) Export the data into a data file. After creating the format file, export the data in the database tables into data files by specifying the out option in the BCP command.

-- Exporting all data from a table:

BCP YourDatabase.dbo.yourTable OUT C:\Folder\Tabledata.dat -S ServerName -U username -P Password

Note: The queryout option is not currently supported.

(3) Import the data files into SQL Database. The data files created above can be imported into SQL Database as follows:

BCP SQLAzureDb.dbo.YourTable IN C:\Folder\Tabledata.dat -S servername.database.windows.net -U username@servername -P password -f c:\folder\Formatfile.fmt -b 1000

Migrating data into SQL Database can cause I/O stress on the SQL Database node hosting your database. This can cause throttling, resulting in the termination of your SQL Database connection. A smaller batch size can help reduce the I/O stress. Using the SqlBulkCopy API provides more flexibility in handling throttling errors: when a connection is terminated due to throttling, a specific error message with a specific error code is returned.

Following is a list of error messages related to connection constraints:

Error   Severity   Description (message text)
40197   16         The service has encountered an error processing your request. Please try again. Error code %d.
40501   20         The service is currently busy. Retry the request after 10 seconds. Code: %d.
40544   20         The database has reached its size quota. Partition or delete data, drop indexes, or consult the documentation for possible resolutions. Code: %d
40549   20         Session is terminated. Reason: Long running transaction.
40613   17         Database '%.*ls' on server '%.*ls' is not currently available. Please retry the connection later. If the problem persists, contact customer support, and provide them the session tracing id of '%.*ls'.

A graceful way to handle throttling errors in your application is to add retry logic for the errors listed above. If you receive one of these errors, retry the command. If you receive the error again, close your current, broken database connection, re-establish the connection, and then retry the command. If SQL Database terminates the connection again, the system is under I/O stress and will reject new connections; the best mitigation is to allow a wait time of 2-5 seconds before re-establishing the connection and retrying the command. A minimal sketch of this pattern follows.
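The following C# sketch illustrates the retry pattern (the retryable error numbers come from the table above; the retry count and delays are illustrative, and the helper assumes the command is safe to re-execute):

using System;
using System.Data.SqlClient;
using System.Threading;

public static class SqlRetryHelper
{
    // Connection-constraint errors from the table above that are worth retrying.
    private static readonly int[] retryableErrors = new int[] { 40197, 40501, 40613 };

    public static void ExecuteWithRetry(string connectionString, string commandText, int maxRetries)
    {
        for (int attempt = 0; ; attempt++)
        {
            try
            {
                // Open a fresh connection on every attempt so that a broken
                // connection is always discarded before the retry.
                using (SqlConnection conn = new SqlConnection(connectionString))
                using (SqlCommand cmd = new SqlCommand(commandText, conn))
                {
                    conn.Open();
                    cmd.ExecuteNonQuery();
                    return; // success
                }
            }
            catch (SqlException ex)
            {
                bool retryable = Array.IndexOf(retryableErrors, ex.Number) >= 0;
                if (!retryable || attempt >= maxRetries)
                    throw;
                // Allow the recommended 2-5 second wait before reconnecting.
                Thread.Sleep(TimeSpan.FromSeconds(2 + attempt));
            }
        }
    }
}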

Currently, SQL Database has two offerings: a 1 GB Web Edition and a 10 GB Business Edition. If the database reaches its maximum size limit, you will see error number 40544 stating that the database is full. When this happens, you cannot add any new data or objects such as indexes, tables, or stored procedures. You can, however, delete data, truncate tables, or drop tables and indexes to free space.

Data Migration Using SSIS

SSIS supports connections to SQL Database by using the ADO.NET provider. OLE DB is not supported at this time. You can build an SSIS package connecting to SQL Database and create the data flow tasks the same way as you would against a typical on-premises SQL Server.

Here are some best practices to optimize the data migrations using SSIS packages:

  • Build multiple data flow tasks so that data is uploaded in parallel. If there is a primary key/foreign key relationship between tables, serialize the loads accordingly.
  • If you have hundreds of tables to migrate, you can spread the data flow tasks across multiple SSIS packages to further parallelize the migration. Group tables into a package logically, depending upon the primary key/foreign key relationships. Disabling constraints and re-enabling them after the data loads can also provide faster data loads.
  • The DefaultBufferSize and DefaultBufferMaxRows properties can also be adjusted for better performance. The MSDN article Improving the Performance of the Data Flow has more details on how to adjust these properties.
  • The EngineThreads property can be changed to allocate more threads during execution. The default value is 5.
  • Make sure the Use Bulk Insert when possible option is checked in the ADO.NET Destination adapter. This option was added in the SQL Server 2008 R2 release to support bulk loads into SQL Database; previously, the ADO.NET component inserted rows one at a time. This matters because ADO.NET is the primary way to transfer data into SQL Database.

 

Future Schema and Data Updates

Typically, customers start by developing an application on-premises and then deploy the application code and database code to the Azure platform. As new functionality is added or existing features are modified, both application and database code changes need to be redeployed to the Azure platform.

Schema Modifications

Schema modifications typically consist of adding or modifying columns and indexes. The scripts generated for these modifications need to be validated against SQL Database, because some of the options that can be specified for an on-premises SQL Server are disabled for SQL Database, such as specifying a filegroup for tables and indexes, and certain index options (SORT_IN_TEMPDB, PAD_INDEX, FILLFACTOR, and so on). A comprehensive list of SQL Database options and their supportability is described in detail at http://msdn.microsoft.com/en-us/library/ee336281(v=MSDN.10).aspx. An illustrative example follows.
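For example, an index script generated for an on-premises server may need unsupported options removed before it can run on SQL Database (a hypothetical index; the option list will vary):

-- On-premises SQL Server:
CREATE NONCLUSTERED INDEX IX_Customers_Name ON dbo.Customers (Name)
WITH (SORT_IN_TEMPDB = ON, PAD_INDEX = ON, FILLFACTOR = 80) ON [PRIMARY];

-- SQL Database (unsupported options and filegroup removed):
CREATE NONCLUSTERED INDEX IX_Customers_Name ON dbo.Customers (Name);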

Data Updates

Data updates can be done via scripts, BCP, or SSIS, depending on the type of update. The best practices described in the Data Migration sections above can be applied for efficient data transfers in larger batch sizes.

Back to top

Sample Code Using SqlBulkCopy API

using System;
using System.Data;
using System.Data.SqlClient;

public class MySqlBulkCopy
{
    public static long ExecTableBulkCopy(string strConnectionStringSource, string strTableSource,
        string strConnectionStringDest, string strTableDest, int iBatchSize)
    {
        long lTotalRows = 0;

        // Open a database connection to the source database
        using (SqlConnection connSource = new SqlConnection(strConnectionStringSource))
        {
            connSource.Open();

            // Open a data reader on the source table
            SqlCommand cmdSource = new SqlCommand(String.Format("SELECT * FROM {0}", strTableSource), connSource);
            SqlDataReader readerSource = cmdSource.ExecuteReader();

            // We'll create a new, in-memory DataTable derived from the source table schema
            DataTable newTable = null;
            DataRow row = null;

            // Begin reading rows from the source table
            while (readerSource.Read())
            {
                // First pass only: create the DataTable and DataColumns with
                // names and data types based on the source table
                if (null == newTable)
                {
                    newTable = new DataTable(strTableDest);
                    for (int iCol = 0; iCol < readerSource.FieldCount; iCol++)
                    {
                        DataColumn colVal = new DataColumn();
                        colVal.DataType = readerSource.GetFieldType(iCol);
                        colVal.ColumnName = readerSource.GetName(iCol);
                        newTable.Columns.Add(colVal);
                    }
                }

                // Add a new row to the in-memory DataTable from the source table
                row = newTable.NewRow();
                for (int iCol = 0; iCol < readerSource.FieldCount; iCol++)
                    row[iCol] = readerSource.GetValue(iCol);
                newTable.Rows.Add(row);
            }

            // Close the data reader and accept changes to the in-memory table
            readerSource.Close();

            // Do we actually have rows to copy?
            if (newTable != null && newTable.Rows.Count > 0)
            {
                newTable.AcceptChanges();
                lTotalRows = newTable.Rows.Count;

                // Open a bulk copy connection to the destination database.
                // We assume the destination table schema is the same as the source table.
                using (SqlBulkCopy bulkCopy = new SqlBulkCopy(strConnectionStringDest, SqlBulkCopyOptions.Default))
                {
                    // Set the batch size and name the destination table
                    bulkCopy.BatchSize = iBatchSize;
                    bulkCopy.DestinationTableName = newTable.TableName;

                    // Bind the destination table columns to the in-memory source table
                    for (int i = 0; i < newTable.Columns.Count; i++)
                        bulkCopy.ColumnMappings.Add(newTable.Columns[i].Caption, newTable.Columns[i].Caption);

                    // Write the data to the destination table
                    bulkCopy.WriteToServer(newTable);
                }
            }

            return lTotalRows;
        }
    }
}
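A hypothetical call to the method above, copying a table from an on-premises server to SQL Database in batches of 1,000 rows (both connection strings are placeholders):

string srcConn = "Data Source=OnPremServer;Initial Catalog=YourDatabase;Integrated Security=True;";
string dstConn = "Data Source=servername.database.windows.net;Initial Catalog=SQLAzureDb;User ID=username@servername;Password=password;Encrypt=True;";
long rowsCopied = MySqlBulkCopy.ExecTableBulkCopy(srcConn, "dbo.yourTable", dstConn, "dbo.yourTable", 1000);
Console.WriteLine("Copied {0} rows.", rowsCopied);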

Back to top

See Also

  • [[Getting Started with Windows Azure SQL Database Development]]
  • [[Overview of Options for Migrating Data and Schema to Windows Azure SQL Database]]
  • [[Windows Azure SQL Database TechNet Wiki Articles Index]]

Credits

The authors of the original information contained in this article are Dinakar Nethi and Michael Thomassy.