How to unit test PowerShell scripts that call cmdlets from the SharePoint snap-in

Update: If you're reading this then be sure to have a read of the comments too - there's some good insight from nohwnd on the Pester team that is also worth considering if you are interested in this stuff.

Update 2: I've posted a follow-up post which shows an improved way to test SharePoint cmdlets.

As PowerShell scripts get more and more complex (let's face it scripting guys, you're basically writing code - we've turned you into developers without you realising it! :P), people are discovering the need for things that help us maintain complex code bases, such as unit testing. Pester is a great tool for writing unit tests for PowerShell scripts, and it even integrates into Visual Studio so you can run tests as you are writing your scripts. The way Pester lets us cover some pretty complex scenarios is through its ability to mock specific functions in a script, specifying which parameters to watch for in the input and what our mock will return. But for these mocks to work, Pester needs to be able to see the original cmdlet or function you are mocking, and that gets more complicated in a SharePoint environment. Sure, I could run all of my tests on a server that has SharePoint installed, but I've never been a fan of installing SharePoint on a build server, and the second you look at a hosted build server option you can't have SharePoint installed at all, meaning your calls to Mock will fail. (For the DSC resources, including xSharePoint, we use AppVeyor so we can connect it to GitHub and show build status and output publicly.)
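To make that concrete, here is a minimal sketch of a Pester mock (Get-FarmName is an illustrative function, not something from the module). On a SharePoint server this test runs fine; on a build server without the snap-in, the Mock call throws because Get-SPFarm can't be resolved:

 # Function under test - calls a cmdlet from the SharePoint snap-in
 function Get-FarmName() {
     return (Get-SPFarm).Name
 }

 Describe "Get-FarmName" {
     It "Returns the name of the farm" {
         # Pester must be able to resolve Get-SPFarm for this mock to register
         Mock Get-SPFarm { return @{ Name = "TestFarm" } }
         Get-FarmName | Should Be "TestFarm"
     }
 }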

Now the easy way out here is to just say "let's just not test those parts". It's part of why a lot of people avoided writing unit tests for server-side SharePoint code - mocking was hard to get your head around at first. But I really wanted to tackle this one, so that the DSC resources I'm working on could have some great automated tests and I don't need to spend as much time manually testing things when we receive pull requests. The next option was looking at how we could abstract out the calls or stub them within our scripts (there was a discussion on the topic over at GitHub), but at the end of the day, while we could automate the creation of the stubs, that is a ton of overhead and script that was too tightly bound to SharePoint for my liking, so I threw that idea out as well. It did lead me to another idea though: writing a generic stub function that would let me call any SharePoint cmdlet I wanted, which I could then validate in tests by specifying some parameters. Here is my stub function:

 # Executes an arbitrary SharePoint cmdlet by name, splatting the supplied
 # arguments - this is the single function the unit tests will mock
 function Invoke-xSharePointSPCmdlet() {
     [CmdletBinding()]
     param
     (
         [parameter(Mandatory = $true, Position = 1)]
         [string]
         $CmdletName,

         [parameter(Mandatory = $false, Position = 2)]
         [HashTable]
         $Arguments
     )

     Write-Verbose "Preparing to execute SharePoint command - $CmdletName"

     if ($null -ne $Arguments -and $Arguments.Count -gt 0) {
         $argumentsString = ""
         $Arguments.Keys | ForEach-Object {
             $argumentsString += "$($_): $($Arguments.$_); "
         }
         Write-Verbose "Arguments for $CmdletName - $argumentsString"
     }

     if ($null -eq $Arguments) {
         # No arguments - load the snap-in and run the cmdlet on its own
         $script = [scriptblock]::Create("Initialize-xSharePointPSSnapin; $CmdletName; `$params = `$null")
         $result = Invoke-Command -ScriptBlock $script -NoNewScope
     } else {
         # Splat the hashtable passed via -ArgumentList on to the cmdlet
         $script = [scriptblock]::Create("Initialize-xSharePointPSSnapin; `$params = `$args[0]; $CmdletName @params; `$params = `$null")
         $result = Invoke-Command -ScriptBlock $script -ArgumentList $Arguments -NoNewScope
     }
     return $result
 }

 # Loads the SharePoint snap-in if it is not already present in the session
 function Initialize-xSharePointPSSnapin() {
     if ($null -eq (Get-PSSnapin -Name "Microsoft.SharePoint.PowerShell" -ErrorAction SilentlyContinue)) {
         Write-Verbose "Loading SharePoint PowerShell snapin"
         Add-PSSnapin -Name "Microsoft.SharePoint.PowerShell"
     }
 }

Let's talk through what we are doing here. The main function I'm calling from my scripts is Invoke-xSharePointSPCmdlet, and I'm passing in two arguments: the first is the name of the cmdlet I want to call, and the second is a hashtable of the parameters I want to call it with. Here is an example of how we call New-SPConfigurationDatabase:

 $newFarmArgs = @{
     DatabaseServer                     = $params.DatabaseServer
     DatabaseName                       = $params.FarmConfigDatabaseName
     FarmCredentials                    = $params.FarmAccount
     AdministrationContentDatabaseName  = $params.AdminContentDatabaseName
     Passphrase                         = (ConvertTo-SecureString -String $params.Passphrase -AsPlainText -Force)
     SkipRegisterAsDistributedCacheHost = $true
 }

 Invoke-xSharePointSPCmdlet -CmdletName "New-SPConfigurationDatabase" -Arguments $newFarmArgs

Here you can see we build up a hashtable of the parameters that would normally go on the New-SPConfigurationDatabase cmdlet. You can also include things that are normally switch parameters, such as SkipRegisterAsDistributedCacheHost, by just setting the value to $true. Once I have the arguments built up, I pass them into my invoke function and it goes and runs the cmdlet. The way we execute this inside the invoke function is to dynamically generate a ScriptBlock that we then execute: [scriptblock]::Create lets us build up a string and convert it to a ScriptBlock that PowerShell can run. So in the invoke function we generate a string that first calls Initialize-xSharePointPSSnapin, a quick function I wrote that ensures the SharePoint snap-in is loaded (this saves me having to load it in a million other places in my scripts, reducing the complexity of the rest of the code base). From there we either execute the cmdlet with no parameters if none were passed, or, if they were, we refer to them through the -ArgumentList parameter of Invoke-Command. The part of the string that does this is shown again below.

 `$params = `$args[0]; $CmdletName @params;

$args[0] grabs the first argument we passed to Invoke-Command, which in this case is the hashtable of arguments we generated earlier. Then by calling the cmdlet as @params we tell it to execute with those parameters attached (note the @ instead of the $, which is what tells the engine to splat the hashtable out as individual named parameters instead of passing the whole thing as one argument). This then executes our cmdlet with the parameters that were passed.
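If splatting is new to you, here is a quick standalone illustration (using Get-ChildItem rather than a SharePoint cmdlet so you can try it anywhere):

 $params = @{
     Path   = "C:\Windows"
     Filter = "*.exe"
 }
 Get-ChildItem @params    # splats to: Get-ChildItem -Path C:\Windows -Filter *.exe
 # Get-ChildItem $params  # would bind the whole hashtable to -Path and fail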

So now that we have a way to call the SharePoint cmdlets that doesn't directly require the SharePoint PowerShell snap-in, we have something to mock for our tests. Here is an example of a test for that same script that creates the farm.

 Context "Validate set method" {
 It "Creates a new SharePoint 2016 farm" {
 Mock Invoke-xSharePointSPCmdlet { return $null } -Verifiable -ParameterFilter { $CmdletName -eq "New-SPConfigurationDatabase" -and $Arguments.ContainsKey("LocalServerRole") }
 Mock Invoke-xSharePointSPCmdlet { return $null } -Verifiable -ParameterFilter { $CmdletName -eq "Install-SPHelpCollection" }
 Mock Invoke-xSharePointSPCmdlet { return $null } -Verifiable -ParameterFilter { $CmdletName -eq "Initialize-SPResourceSecurity" }
 Mock Invoke-xSharePointSPCmdlet { return $null } -Verifiable -ParameterFilter { $CmdletName -eq "Install-SPService" }
 Mock Invoke-xSharePointSPCmdlet { return $null } -Verifiable -ParameterFilter { $CmdletName -eq "Install-SPFeature" }
 Mock Invoke-xSharePointSPCmdlet { return $null } -Verifiable -ParameterFilter { $CmdletName -eq "New-SPCentralAdministration" }
 Mock Invoke-xSharePointSPCmdlet { return $null } -Verifiable -ParameterFilter { $CmdletName -eq "Install-SPApplicationContent" }
 
 Mock Get-xSharePointInstalledProductVersion { return @{ FileMajorPart = 16 } }
 
 Set-TargetResource @testParams
 
 Assert-VerifiableMocks
 }
 }

The mocks here all run against the Invoke-xSharePointSPCmdlet function, where the ParameterFilter values let me specify a different return value for each call based on the cmdlet name that was requested (in this case I don't care about the return values, so I return null on each call). Because this mock acts as if I am creating a farm on the SharePoint 2016 preview, I can also test that when New-SPConfigurationDatabase is called, its arguments contain the key "LocalServerRole", which is a new required parameter for that cmdlet in the latest version.

The other parameter you will notice on all of my Mock calls is "Verifiable". This tells Pester that I want to be able to validate that the mock was actually called, which gives us a way to test for expected code paths through our scripts. The Assert-VerifiableMocks call at the end of the test then checks that every verifiable mock was hit - so if, for example, the script calls New-SPConfigurationDatabase but doesn't pass in the LocalServerRole option, the specific mock I marked as verifiable is never matched and the test fails. But the greatest thing about all of this is that at no point do we actually try to execute any of the SharePoint cmdlets: we wrap them in our own function - a single function, not a million stubs for everything in SharePoint - which lets us ensure that all of the right calls will be made against SharePoint when it runs for real.
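If you'd rather assert an individual call explicitly than mark every mock as verifiable, Pester's Assert-MockCalled does the same job. A minimal sketch, reusing Set-TargetResource, $testParams and the mocks from the test above:

 It "Passes LocalServerRole when creating a SharePoint 2016 farm" {
     Mock Invoke-xSharePointSPCmdlet { return $null }
     Mock Get-xSharePointInstalledProductVersion { return @{ FileMajorPart = 16 } }

     Set-TargetResource @testParams

     # Fails the test unless the farm cmdlet was called with the 2016-only key
     Assert-MockCalled Invoke-xSharePointSPCmdlet -Times 1 -ParameterFilter {
         $CmdletName -eq "New-SPConfigurationDatabase" -and $Arguments.ContainsKey("LocalServerRole")
     }
 }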

So there you have it. I'm busy updating the xSharePoint resources on GitHub to use this method and adding unit test coverage to a range of get and set methods that were previously overlooked due to the need for the snap-in to exist. Hopefully I'll be done with that shortly and the scripts that use this approach across the entire module will hit the main dev branch ahead of the release of v0.6 of the module. Hopefully this demonstrates how you can write your scripts for testability and bring better unit testing to your PowerShell scripts as well!

Comments

  • Anonymous
    September 14, 2015
    1/2 I finally got to read this article, and to be honest it seems like you chose the worst possible approach. I assume you will be writing the code on a server with all the prerequisites installed. So with your approach you would be throwing away code completion, as well as static code checking. This means that in case of any syntax error your code will fail at run-time, instead of refusing to run at all. That makes for a bad development and testing experience.

    The whole problem is that you come from a false premise: Mock fails because SharePoint cmdlets are not installed on the server. That is not true. Mock fails because there is no function of the required name. It does not care where that function is coming from. So there is no need for proxy functions or checking if we are testing and so on. We just need to set up the "context" correctly. To be more specific: Mock does not care about SharePoint cmdlets, it cares about having ANY function. Defining "function Install-SPService () {}" is no different than importing that function from a SharePoint module; in effect you can have two different Install-SPService functions, one for the tests, one for production. It all depends on the context.

    Now you might protest that defining empty functions does not get us anywhere, because we want to do mock filtering based on the parameters. And you would be right. So how do we make placeholder functions that have the same name and parameters as the original function? In the linked GitHub conversation somebody suggests using proxy functions to wrap the real cmdlets. That solves a different problem, but gets us close. Generating a proxy function for Install-SPService, we get another Install-SPService function. This new function has the correct signature (same as the original) but a body that links back to the original function. We are not interested in that body, so we simply scratch it and replace it with an empty body {}.
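    As a sketch of that stub generation (CommandMetadata and ProxyCommand are built-in PowerShell APIs; run this where the snap-in is available, and note the SPStubs.psm1 output file name is just illustrative):

     # Capture the real cmdlet's metadata (requires the SharePoint snap-in)
     $command  = Get-Command Install-SPService
     $metadata = New-Object System.Management.Automation.CommandMetadata($command)

     # Recreate the signature, then attach an empty body instead of the proxy body
     $binding = [System.Management.Automation.ProxyCommand]::GetCmdletBindingAttribute($metadata)
     $params  = [System.Management.Automation.ProxyCommand]::GetParamBlock($metadata)
     "function Install-SPService { $binding param ( $params ) }" | Out-File ".\SPStubs.psm1" -Append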

  • Anonymous
    September 14, 2015
    Hi nohwnd, a couple of the assumptions here I should have made clear when I wrote this: we aren't always writing these scripts somewhere with SharePoint installed. It's a big, beefy product that won't run well inside a VM on my laptop, and paying to run a VM hosted elsewhere just to write scripts isn't always feasible either - so yes, we are missing static code checking and code completion, but in a lot of scenarios we didn't have those anyway. To make it very clear: if you are writing and testing SharePoint scripts on a SharePoint server, then the method in this post is 100% not needed - I'm scripting for scenarios where the prerequisites don't exist. The other key place they don't exist is on our CI servers, which is the same issue a lot of SharePoint devs have had with traditional build servers. For a lot of the stuff I write, AppVeyor is used, and those servers don't have SharePoint installed, so I wanted something that could work there too so I could mock my way through different code paths and ensure that the right functions got called.

    I also accept your point of clarification around why the mock fails, which leads into the next part of your solution: stubbing out the individual functions with the script you gave us to export them. I did look into a similar approach, but we ran into a couple of hurdles around versioning of SharePoint. Between major versions (2013 and 2016, for example) there are usually changes that bring new signatures to existing cmdlets and also add or remove cmdlets. For two versions, maintaining two sets of stub functions should be fine - we just repeat the tests with each stub module loaded. But when we hit a scenario where cmdlets change between cumulative updates (which happens less often, but has happened before), I then need to maintain different versions of the stub modules for each CU. Granted, there aren't a lot of scripts I write that account for CU patch level, so this is a "way out there" edge case, but it was a consideration.

    For the DSC resources we've been working on, though, I think if I can get the stubbing approach you've provided working, that would be a much better solution, so I will take that away and see what I can come up with. I'll also update the post to make sure I draw people to read the comments, because you've made some good points here too. Thanks for taking the time to provide the feedback.

  • Anonymous
    September 14, 2015
    Thanks for the reaction Brian. Now I get the environment you are scripting in. I've been thinking about it, and I would still use my approach. TL;DR: Having multiple versions of fake SP is not a bad thing - it's a good thing. It reflects the real environment better on the CI server. Don't write the tests to have them passing on the CI server; write tests to improve the chance your code will work in the real environment.

    The more realistic the test environment is, the better the chance that it will actually work in the real environment. In an ideal world you would stage at least one server for each version of SP supported by your module. Each of those servers would be re-staged after each test, to provide a fresh environment for every test. This would of course be very costly, quite impractical, very slow, and the tests would be difficult to write, BUT it would bring the most certainty that the module would actually work in production, simply because your tests would perform the same tasks as in production. Obviously you won't do that in the real world, but you can fake having SP installed by stubbing the module. Those stubs are not perfect - they do not have the behavior of the real SP cmdlets - but they enable you to test all your logic and be sure that you call the SP cmdlets correctly. They also add code completion and static checking to your code. And because those stubs live in-memory, they don't have any conflicts: you can generate stubs for each version of SP you support, and then run your test suite against multiple versions of (fake) SP on a single server. Now instead of succeeding on the CI server and failing on a test server with real SP, you will get more failures directly on the CI server. The more failures that are the same on the CI server and in production, the more accurate the tests are, and the better the chance that once you solve the failures on the CI server the code will also work in production. So personally I would create the stubs for every supported version of SharePoint and would run my tests against each of them.
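    A sketch of that per-version loop (the stub module names here are hypothetical):

     @("SPStubs.2013.psm1", "SPStubs.2016.psm1") | ForEach-Object {
         # Load one version of the fake SP cmdlets, then run the full suite against it
         Import-Module ".\Stubs\$_" -Force
         Invoke-Pester ".\Tests"
     }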

  • Anonymous
    September 14, 2015
    Yeah, I tend to agree with you here - as per the discussion on GitHub (at github.com/.../398 for those playing along at home). I'll explore integrating that into the test suite we have and will post a new blog post (which I will link to here) with the results once I get that approach working. Thanks again for the feedback!

  • Anonymous
    October 12, 2015
    We are running SharePoint 2010. I'm trying to write some PowerShell scripts/modules using Pester for testing in Visual Studio 2013. However, I run into issues because Pester runs with PowerShell but the SharePoint cmdlets need to run under v2. Has anyone been able to work through this?

  • Anonymous
    October 15, 2015
    Hi Dev, I would recommend reading my next post (blogs.msdn.com/.../better-approaches-to-unit-testing-powershell-scripts-that-call-sharepoint-cmdlets.aspx) on how we mocked the entire SharePoint cmdlet set - this can run away from a SharePoint server and should get you around the issue you are seeing here.

  • Anonymous
    March 15, 2016
    Hi, I want to write a unit test using Pester, but my script is not a function - it is invoked like .\filename.ps1 $param1 $param2. Can you tell me how to use Pester? Thanks