Automating SQL operations with Service Management Automation (SMA) and Invoke-Sqlcmd – challenges and solutions
I’ve been working on some automation tasks with SMA 2016 lately that involved fairly heavy data transactions against a Microsoft SQL Server 2016 database in the backend. Along the way I faced some really interesting challenges that I would like to share with you, together with their respective solutions and workarounds.
Reading and writing data to a backend SQL database with PowerShell was a challenge of its own kind, but to be honest I was surprised at how much effort Microsoft has invested in making such operations smooth and easy to handle.
Let’s start with the most recent additions in this area – Read-SqlTableData and Write-SqlTableData. Everyone who is familiar with SQL-related PowerShell cmdlets knows what had to be done before to read data from or write data to a SQL table.
I have been using either SQL connection objects or, even more often, Invoke-Sqlcmd. Both techniques are summarized in multiple sources, for example here:
Connecting to SQL Server Using PowerShell
Both of them work fine, with some minor exceptions depending on the environment. But isn’t this a lot of code for pretty simple operations? I think it is, and apparently Microsoft does too. As a result they have surprised us with the new SQL Server PowerShell module (SqlServer), containing a couple of new SQL-related cmdlets and of course lots of other great stuff around SQL. You can take a look at this great review from the Microsoft Scripting Guys:
Nearly everything has changed for SQL Server PowerShell
In our case, the cmdlets concerning SQL data operations are Read-SqlTableData, Read-SqlViewData and Write-SqlTableData.
Reading data from SQL
Reading data from SQL using a connection object is a method used pretty often. You will find examples everywhere; here is one where asset data is exported to a CSV file:
#SQL parameters
[string] $SQLServer= "Server1"
[string] $SQLDatabase = "ExampleDB"
[string] $SQLQuery= $("SELECT AssetStatus, AssetID, Model, Name, Site, ManufacturerName, Type FROM Assets WHERE AssetStatus='3' AND Type='Client' AND Category = 'Hardware'")
[string]$CSVPath = "\\FileServer\FileShare\Export.csv"
#Build the connection and the command
$SQLConnection = New-Object System.Data.SqlClient.SqlConnection
$SQLConnection.ConnectionString = "Server = $SQLServer; Database = $SQLDatabase; Integrated Security = True;"
$SQLCmd = New-Object System.Data.SqlClient.SqlCommand
$SQLCmd.CommandText = $SQLQuery
$SQLCmd.Connection = $SQLConnection
#Fill a data set via a data adapter (the adapter opens and closes the connection itself)
$SQLAdapter = New-Object System.Data.SqlClient.SqlDataAdapter
$SQLAdapter.SelectCommand = $SQLCmd
$SQLDataSet = New-Object System.Data.DataSet
[void]$SQLAdapter.Fill($SQLDataSet)
#Export the first (and only) table of the data set
$SQLDataSet.Tables[0] | Export-Csv -Delimiter ';' -NoTypeInformation -LiteralPath $CSVPath -Force
Here is also a slightly different variation of the same example, this time using credentials from SMA:
#SQL parameters
[string]$SQLServer = "Server1"
[string]$SQLDatabase = "ExampleDB"
[string]$SQLQuery = $("SELECT AssetStatus, AssetID, Model, Name, Site, ManufacturerName, Type FROM Assets WHERE AssetStatus='3' AND Type='Client' AND Category = 'Hardware'")
[string]$CSVPath = "\\FileServer\FileShare\Export.csv"
#Building a credential object
$SQLCredential = Get-AutomationPSCredential -Name 'SQLreader'
$SQLPassword = $SQLCredential.GetNetworkCredential().Password
$SQLUser = $SQLCredential.GetNetworkCredential().UserName
#Build the connection from a connection string using SQL authentication
$SQLConnectionString = "Server=$SQLServer; Database=$SQLDatabase; Integrated Security=false; User ID=$SQLUser; Password=$SQLPassword;"
$SQLConnection = New-Object System.Data.SqlClient.SqlConnection $SQLConnectionString
#Create the data adapter with the SELECT command and fill a data table
$SQLAdapter = New-Object System.Data.SqlClient.SqlDataAdapter
$SQLAdapter.SelectCommand = New-Object System.Data.SqlClient.SqlCommand ($SQLQuery,$SQLConnection)
$SQLData = New-Object System.Data.DataTable
[void]$SQLAdapter.Fill($SQLData)
$SQLData | Export-Csv -Delimiter ';' -NoTypeInformation -LiteralPath $CSVPath -Force
That is a lot of lines for something that can be achieved in a much simpler way, isn’t it?
Now let us take the same example and see what it looks like when we use the Invoke-Sqlcmd cmdlet. We have again the parameter section:
#SQL parameters
[string]$SQLServer = "Server1"
[string]$SQLDatabase = "ExampleDB"
[string]$SQLQuery = $("SELECT AssetStatus, AssetID, Model, Name, Site, ManufacturerName, Type FROM Assets WHERE AssetStatus='3' AND Type='Client' AND Category = 'Hardware'")
[string]$CSVPath = "\\FileServer\FileShare\Export.csv"
And here comes the surprise:
$Assets = Invoke-Sqlcmd -Query $SQLQuery -ServerInstance $SQLServer -Database $SQLDatabase
Just one line and we have our assets, saved in the variable. Now we can export them:
$Assets | Export-Csv -Delimiter ';' -NoTypeInformation -LiteralPath $CSVPath -Force
So this example is a great illustration of how much code we can save by using the Invoke-Sqlcmd cmdlet. Of course, there is another side to the comparison – performance. I will not deal with that right now, but you can easily test how both approaches behave with real SQL data; the size of the data sets you are retrieving also makes a difference.
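If you want a rough comparison, a minimal sketch with Measure-Command (reusing the variables defined above) could look like this; the absolute numbers will depend heavily on the data set size and the environment:
#Rough timing of the Invoke-Sqlcmd approach (a sketch, not a rigorous benchmark)
Measure-Command {
    Invoke-Sqlcmd -Query $SQLQuery -ServerInstance $SQLServer -Database $SQLDatabase
} | Select-Object -ExpandProperty TotalMilliseconds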
And now let us check the newest player for reading SQL data – Read-SqlTableData – and see how the same scenario looks with it:
#SQL parameters
[string]$SQLServer = "Server1"
[string]$SQLDatabase = "ExampleDB"
[string]$SQLTableAssets = "Assets"
[string]$SQLSchemaName = "dbo" #adjust to your schema
[string]$CSVPath = "\\FileServer\FileShare\Export.csv"
$Assets = Read-SqlTableData -ServerInstance $SQLServer -DatabaseName $SQLDatabase -TableName $SQLTableAssets -SchemaName $SQLSchemaName -ColumnName "AssetStatus","AssetID","Model","Name","Site","ManufacturerName","Type" -ColumnOrder "AssetID","AssetStatus","Model","ManufacturerName","Type","Name","Site" -OutputAs DataTable
There are lots of cool things you can do with the Read-SQLTableData:
Read-SqlTableData [[-ServerInstance] <String[]> ] [-ColumnName <String[]> ] [-ColumnOrder <String[]> ] [-ColumnOrderType <OrderType[]> ] [-ConnectionTimeout <Int32> ] [-Credential <PSCredential> ] [-DatabaseName <String> ] [-IgnoreProviderContext] [-OutputAs <OutputTypeSingleTable> {DataSet | DataTable | DataRows} ] [-SchemaName <String> ] [-SuppressProviderContextWarning] [-TableName <String> ] [-TopN <Int64> ] [ <CommonParameters>]
What I personally like a lot is the ability to read only selected columns (-ColumnName), reorder them as you wish (-ColumnOrder) and output the data as different types such as DataSet, DataTable or DataRows. This gives us a lot of flexibility in how the data is formatted and how we want to use it afterwards.
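Read-SqlViewData, mentioned earlier, works along the same lines for views. Here is a minimal sketch, assuming the cmdlet exposes the same -TopN and -OutputAs parameters and that a view named vw_ActiveAssets exists (the view name is purely for illustration):
#Read the first 100 rows from a view instead of a table
$TopAssets = Read-SqlViewData -ServerInstance $SQLServer -DatabaseName $SQLDatabase -SchemaName "dbo" -ViewName "vw_ActiveAssets" -TopN 100 -OutputAs DataRows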
Writing data to SQL
When it comes to writing data to SQL, I would not recommend any approach other than SQL stored procedures. Not that you can’t do it otherwise; they are simply easier to maintain and troubleshoot, and there are other advantages as well (control, security, maintenance, etc.). For those who are not familiar with SQL stored procedures, here is a simplified explanation: you can think of a stored procedure as a predefined, reusable template for executing a given SQL query. It contains statements that perform operations on the database, it can accept parameters and it can also return values.
So let us review the options for executing a SQL stored procedure with PowerShell and take a closer look at their advantages and drawbacks. Just as with a SELECT query, we can execute a stored procedure through a SQL connection object, as an alternative to a direct INSERT or UPDATE on the table data. The code is essentially the same as when running a plain SQL query for reading data, but this time the variable $SQLQuery will contain the call to the stored procedure. To make it easier to understand, we will take the same query used previously and run it as a stored procedure. The stored procedure is very simple and you can also create it as per the following guide (there is another example using SELECT):
So, our example will take the following form:
SELECT AssetStatus, AssetID, Model, Name, Site, ManufacturerName, Type
FROM Assets
WHERE AssetStatus = @AssetStatus
AND Type = @Type
AND Category = @Category
This is what we have to put into SQL when creating the stored procedure. In addition, we have to give it a name so that we know which stored procedure we are calling; in our case the name will be "GetAssetData".
Every time we would like to execute the stored procedure we will have to provide values for the 3 parameters (defined with a ‘@’ in the statement).
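For completeness, here is a hedged sketch of how the procedure could be created with Invoke-Sqlcmd, reusing the $SQLServer and $SQLDatabase variables from the earlier examples. The parameter data types (NVARCHAR(50)) are assumptions and should be adjusted to the real Assets schema:
#Create the GetAssetData stored procedure from PowerShell
$CreateProc = @"
CREATE PROCEDURE dbo.GetAssetData
    @AssetStatus NVARCHAR(50),
    @Type        NVARCHAR(50),
    @Category    NVARCHAR(50)
AS
BEGIN
    SET NOCOUNT ON;
    SELECT AssetStatus, AssetID, Model, Name, Site, ManufacturerName, Type
    FROM Assets
    WHERE AssetStatus = @AssetStatus AND Type = @Type AND Category = @Category;
END
"@
Invoke-Sqlcmd -Query $CreateProc -ServerInstance $SQLServer -Database $SQLDatabase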
So how to run our stored procedure? Here is the example:
#SQL parameters
[string]$SQLServer = "Server1"
[string]$SQLDatabase = "ExampleDB"
[string]$SQLStoredProc = "GetAssetData"
[string]$CSVPath = "\\FileServer\FileShare\Export.csv"
#We will define our parameters for the stored procedure in a hashtable
[hashtable]$Parameters = @{AssetStatus="3"; Type="Client"; Category="Hardware"}
#Connection section
$Connection = New-Object System.Data.SQLClient.SQLConnection
$Connection.ConnectionString = "Server = $SQLServer;Database = $SQLDatabase;Integrated Security=true;"
#Command section
$Command = New-Object System.Data.SQLClient.SQLCommand
$Command.CommandType = [System.Data.CommandType]::StoredProcedure
$Command.CommandText = $SQLStoredProc
$Command.Connection = $Connection
#Parameter section
foreach ($param in $Parameters.Keys)
{
[void]$Command.Parameters.AddWithValue("@$param",$Parameters[$param])
}
#Open the connection, execute the stored procedure and read the results into a data table
$Connection.Open()
$Reader = $Command.ExecuteReader()
$Assets = New-Object System.Data.DataTable
$Assets.Load($Reader)
$Connection.Close()
$Assets | Export-Csv -Delimiter ';' -NoTypeInformation -LiteralPath $CSVPath -Force
It is interesting now to look at the alternatives to this approach, especially when you have to write data to a table. One would be to use Write-SqlTableData, but in that case you would have to define the parameterized query directly and could not leverage the stored procedure. So the option would be to use Invoke-Sqlcmd for running the stored procedure.
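A minimal sketch of that option, reusing the GetAssetData procedure and the parameter values from above (an EXEC statement is just one way to pass the parameters):
#Run the stored procedure with Invoke-Sqlcmd
$ExecQuery = "EXEC dbo.GetAssetData @AssetStatus = '3', @Type = 'Client', @Category = 'Hardware'"
$Assets = Invoke-Sqlcmd -Query $ExecQuery -ServerInstance $SQLServer -Database $SQLDatabase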
The issue
Before doing that, I would like to say that all the information up to now was a kind of introduction (a pretty long one) to the actual challenge when using Invoke-Sqlcmd in SMA runbooks – multithreading.
What my colleagues and I found out is that when we ran multiple SMA runbooks containing the Invoke-Sqlcmd cmdlet simultaneously, we intermittently got a strange error:
Runbook Error:
Unable to get jobs for runbook [Runbook_Name]:
System.Management.Automation.PSInvalidOperationException: The WriteObject and WriteError methods cannot be called from outside the overrides of the BeginProcessing, ProcessRecord, and EndProcessing methods, and they can only be called from within the same thread. Validate that the cmdlet makes these calls correctly, or contact Microsoft Customer Support Services. at Microsoft.PowerShell.Activities.PowerShellValue`1.Execute(NativeActivityContext context) at System.Activities.NativeActivity`1.InternalExecute(ActivityInstance instance, ActivityExecutor executor, BookmarkManager bookmarkManager) at System.Activities.Runtime.ActivityExecutor.ExecuteActivityWorkItem.ExecuteBody(ActivityExecutor executor, BookmarkManager bookmarkManager, Location resultLocation)
After a bit of troubleshooting and extending the logging within the runbooks, we were able to confirm that the error was indeed caused by Invoke-Sqlcmd when used in parallel pipelines.
What was causing this error? Simple: Invoke-Sqlcmd is not thread-safe, and when used in parallel pipelines it can cause issues with error handling and synchronization.
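To give an idea of the pattern, here is a deliberately simplified, hypothetical runbook sketch of the kind of construct that runs Invoke-Sqlcmd in parallel pipelines; the workflow name, queries and connection details are placeholders:
workflow Test-ParallelSqlCalls
{
    #Several Invoke-Sqlcmd calls executed in parallel pipelines of the same workflow job
    parallel
    {
        Invoke-Sqlcmd -Query "SELECT 1 AS Result" -ServerInstance "Server1" -Database "ExampleDB"
        Invoke-Sqlcmd -Query "SELECT 2 AS Result" -ServerInstance "Server1" -Database "ExampleDB"
        Invoke-Sqlcmd -Query "SELECT 3 AS Result" -ServerInstance "Server1" -Database "ExampleDB"
    }
}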
As we were already using Invoke-Sqlcmd in almost all our runbooks, we needed to either rewrite a huge part of the code to use SQL connection objects, which would eliminate the multithreading “issue”, or find another solution. Luckily we had Oliver (all credit to this mastermind), who came up with a clever plan for avoiding the multithreading problem while still using Invoke-Sqlcmd.
The solution
Basically, he “wrapped” the cmdlet and used a particular .NET class (Monitor) to synchronize its execution. Before explaining in detail what this class is used for, let us look at the code:
function Invoke-WrappedSqlCmd
{
[CmdletBinding()]
param (
[string] $ServerInstance,
[Parameter(Mandatory=$false)]
[string] $Database,
[Parameter(Mandatory=$false)]
[switch] $EncryptConnection,
[Parameter(Mandatory=$false)]
[string] $Username,
[Parameter(Mandatory=$false)]
[string] $Password,
[Parameter(Mandatory=$false)]
[string] $Query,
[Parameter(Mandatory=$false)]
[Int32] $QueryTimeout,
[Parameter(Mandatory=$false)]
[Int32] $ConnectionTimeout,
[Parameter(Mandatory=$false)]
[Int32] $ErrorLevel,
[Parameter(Mandatory=$false)]
[Int32] $SeverityLevel,
[Parameter(Mandatory=$false)]
[Int32] $MaxCharLength,
[Parameter(Mandatory=$false)]
[Int32] $MaxBinaryLength,
[Parameter(Mandatory=$false)]
[switch] $AbortOnError,
[Parameter(Mandatory=$false)]
[switch] $DedicatedAdministratorConnection,
[Parameter(Mandatory=$false)]
[switch] $DisableVariables,
[Parameter(Mandatory=$false)]
[switch] $DisableCommands,
[Parameter(Mandatory=$false)]
[string] $HostName,
[Parameter(Mandatory=$false)]
[string] $NewPassword,
[Parameter(Mandatory=$false)]
[string[]] $Variable,
[Parameter(Mandatory=$false)]
[string] $InputFile,
[Parameter(Mandatory=$false)]
[bool] $OutputSqlErrors,
[Parameter(Mandatory=$false)]
[switch] $IncludeSqlUserErrors,
[Parameter(Mandatory=$false)]
[switch] $SuppressProviderContextWarning,
[Parameter(Mandatory=$false)]
[switch] $IgnoreProviderContext,
[Parameter(Mandatory=$false)]
[Microsoft.SqlServer.Management.PowerShell.OutputType] $OutputAs,
[Parameter(Mandatory=$false)]
[string] $ConnectionString
)
try
{
# Locking on some random type that should be available everywhere..
[System.Threading.Monitor]::Enter([guid])
$ReturnValue = Invoke-Sqlcmd @PSBoundParameters
}
catch
{
$ErrorMessage = "Error during Invoke-WrappedSqlCmd:$([System.Environment]::NewLine)$_"
throw $ErrorMessage
}
finally
{
# Exiting the lock
[System.Threading.Monitor]::Exit([guid])
}
return $ReturnValue
}
As you can see, all parameters of the native cmdlet have also been made available in the “wrapped” version.
The core functionality of this wrapper is achieved by using the Monitor class (namespace System.Threading) and its Enter(Object) method. Let’s take a closer look at the Monitor class and what it actually does.
Its main purpose is to “provide a mechanism that synchronizes access to objects” (source: MSDN) and to ensure that only one thread can run our function at any point in time. It does so by acquiring an exclusive lock on an object, so that only one thread is able to run the code; no other thread can proceed until the lock is released with Exit(Object).
Typically Monitor is used to lock a specific shared object, but in our case this needed an adjustment, because we wanted to use the PowerShell module from different workflows that did not share any particular object. So Oliver decided to lock on a common type that should be globally available, hence the lock on [guid] (Enter([guid])). Of course, other common types like [string], [array], etc. could be used here with the same success.
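For comparison, here is a minimal sketch of the two locking styles – the canonical pattern with a shared object you control, and the type-literal variant used in the wrapper (the variable name is only for illustration):
#Canonical pattern: lock a shared object instance that all callers can reach
$LockObject = New-Object object
[System.Threading.Monitor]::Enter($LockObject)
try     { <# critical section #> }
finally { [System.Threading.Monitor]::Exit($LockObject) }

#Variant used in the wrapper: lock a type literal, usable when callers cannot share an instance
[System.Threading.Monitor]::Enter([guid])
try     { <# critical section #> }
finally { [System.Threading.Monitor]::Exit([guid]) }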
You can find more information regarding the class and its methods here:
The rest of the steps were more or less trivial: the function was turned into a script module which we could use within the SMA runbooks.
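A minimal sketch of that step – the module name SqlCmdWrapper and the way it is imported are assumptions, your naming will likely differ:
#Save the function above as SqlCmdWrapper.psm1, import the module into SMA,
#then call the wrapper inside a runbook exactly like the native cmdlet
Import-Module -Name SqlCmdWrapper
$Assets = Invoke-WrappedSqlCmd -ServerInstance "Server1" -Database "ExampleDB" -Query "SELECT AssetID, Name FROM Assets WHERE AssetStatus = '3'"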
Adding those few lines of code and “wrapping” the native cmdlet ensured that we could keep leveraging Invoke-Sqlcmd in multiple parallel pipelines, eliminating the multithreading “situation” we had faced.
Another big advantage of this approach is that we do not have to rewrite all the runbooks where Invoke-Sqlcmd has been used; we can simply and quickly replace it with Invoke-WrappedSqlCmd (note that this is the original name given by Oliver).
Conclusion
My primary goal was to describe a solution to a problem that anybody working with PowerShell and SQL can run into. Writing all this also reminded me how flexible PowerShell can be, especially when .NET classes and their methods are used to provide additional functionality, as in the example above.
I have also published the function on the Microsoft TechNet Gallery here.