Creating a Custom Post-Security Trimmer with SharePoint 2013

What You Will Learn

This blog post shows you how to write your own custom post-security trimmer for SharePoint 2013. Not only that, we will take you through the steps of deploying the trimmer and registering it against a crawl rule before putting it to work.

Please visit the official MSDN documentation for the definitive overview and reference documentation for this feature:

https://msdn.microsoft.com/en-us/library/ee819930.aspx

Why Post-Security Trimming

There are two kinds of custom security trimming. Pre-trimming refers to pre-query evaluation, where the backend rewrites the query with security information before the lookup in the search index. Post-trimming refers to post-query evaluation, where the search results are pruned before they are returned to the user.
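
As a purely conceptual sketch of the difference (the helper names here are hypothetical, not the SharePoint API):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public static class TrimmingStyles
    {
        // Pre-trimming: the query is rewritten with security filters
        // *before* the index lookup, so hit counts and refiners stay correct.
        public static string PreTrim(string userQuery, string userAclClaim)
        {
            return userQuery + " AND acl:" + userAclClaim;
        }

        // Post-trimming: the query runs unmodified and the results are
        // pruned *afterwards*, which is why counts can end up too high.
        public static IEnumerable<string> PostTrim(
            IEnumerable<string> results, Func<string, bool> hasAccess)
        {
            return results.Where(hasAccess);
        }
    }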

Post-trimming analyzes each search result after the query has been evaluated. Performance costs and potentially incorrect refiner data and hit counts aside, it is sometimes necessary to perform last-minute security evaluation "when the results are served".

One scenario could be hiding documents from the search results on machines that do not have proper anti-virus software. Another could be restricting certain documents from appearing in the search results outside a given time of day.
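
To make the second scenario concrete, here is a minimal hedged sketch of what such a time-of-day rule could look like inside a post-trimmer. The 'timerestricted' marker and the 08:00-18:00 window are assumptions for illustration only:

    using System;
    using System.Collections;
    using System.Collections.Generic;

    public static class TimeOfDayTrimming
    {
        // Documents whose ACL hint reads "timerestricted" (an assumed marker)
        // are only visible between 08:00 and 18:00.
        public static BitArray Evaluate(IList<string> documentAcls, TimeSpan now)
        {
            var withinBusinessHours =
                now >= TimeSpan.FromHours(8) && now <= TimeSpan.FromHours(18);

            // Start with everything visible, then trim the restricted documents.
            var status = new BitArray(documentAcls.Count, true);
            for (var i = 0; i < documentAcls.Count; i++)
            {
                if (!withinBusinessHours &&
                    string.Equals(documentAcls[i], "timerestricted",
                        StringComparison.InvariantCultureIgnoreCase))
                {
                    status[i] = false;
                }
            }

            return status;
        }
    }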

Requirements

  • SharePoint 2013 Server
  • Visual Studio 2012
  • A Custom Connector sending claims (see previous post on this blog).

The Trimmer Design

Let's create a simple post-security trimmer. It should remove documents from the search results when a certain hint is present on a result. We will use the string field docaclmeta for this purpose. The rules are simple: a document is removed if this field contains the text 'deny' (a case-insensitive match in the code below); if the field is empty or contains anything else (e.g. 'allow'), the document remains visible in the search results.

The Code

The MSDN article linked above offers useful starting tips on creating the post-trimmer project in Visual Studio and adding references to both Microsoft.Office.Server.Search.dll and Microsoft.IdentityModel.dll.

The following using directives are added at the beginning of the class file, SearchPostTrimmer.cs. Note that System.Collections and System.Collections.Generic are also needed for the BitArray and list types used below:

    using System.Collections;
    using System.Collections.Generic;
    using System.Collections.Specialized;
    using System.Security.Principal;
    using Microsoft.Office.Server.Search.Administration;
    using Microsoft.Office.Server.Search.Query;

We then define the class as implementing the ISecurityTrimmerPost interface in the class declaration:

    public class SearchPostTrimmer : ISecurityTrimmerPost

We do not need any additional settings for this trimmer to work. Thus, the Initialize method remains empty:

    /// <summary>
    /// Initialize the post-trimmer.
    /// </summary>
    /// <param name="staticProperties">Static properties configured for the trimmer.</param>
    /// <param name="searchApplication">Search Service Application object.</param>
    public void Initialize(NameValueCollection staticProperties, SearchServiceApplication searchApplication)
    {
        // Intentionally blank
    }
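
If the trimmer did need configuration, the Initialize method is where it would be read. As a hedged sketch, a hypothetical 'DenyKeyword' static property (passed via the -Properties parameter when registering the trimmer) could make the deny marker configurable:

    // Hypothetical: read an optional "DenyKeyword" static property,
    // falling back to "deny" when none is configured.
    private string denyKeyword = "deny";

    public void Initialize(NameValueCollection staticProperties, SearchServiceApplication searchApplication)
    {
        if (staticProperties != null && !string.IsNullOrEmpty(staticProperties["DenyKeyword"]))
        {
            this.denyKeyword = staticProperties["DenyKeyword"];
        }
    }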

The actual trimmer logic resides in the CheckAccess method. This method returns a BitArray in which true grants access to the corresponding result and false denies it.

We check each document's docaclmeta field for the term 'deny' and, if found, remove that document from the search results.

    /// <summary>
    /// Check access for the results returned from the search engine.
    /// </summary>
    /// <param name="documentCrawlUrls">List of the URLs for each document whose access is to be determined by the security trimmer implementation.</param>
    /// <param name="documentAcls">List of the document ACLs for each document whose access is to be determined by the security trimmer implementation. This list may be null or may contain String.Empty strings.</param>
    /// <param name="sessionProperties">Transient property bag valid within the scope of a single query processing component execution.</param>
    /// <param name="passedUserIdentity">Identity of the user.</param>
    /// <returns>A BitArray holding true if the respective document from documentCrawlUrls has been granted access, or false if it has not.</returns>
    public BitArray CheckAccess(
        IList<string> documentCrawlUrls,
        IList<string> documentAcls,
        IDictionary<string, object> sessionProperties,
        IIdentity passedUserIdentity)
    {
        if ((null == documentCrawlUrls) || (documentCrawlUrls.Count < 1))
        {
            throw new NullReferenceException("Error: CheckAccess method is called with invalid URL list");
        }

        if ((null == documentAcls) || (documentAcls.Count < 1))
        {
            throw new NullReferenceException("Error: CheckAccess method is called with invalid ACL list");
        }

        if (null == passedUserIdentity)
        {
            throw new NullReferenceException("Error: CheckAccess method is called with invalid user identity parameter");
        }

        // Initialize the bit array to false, meaning all results are trimmed out
        // until explicitly granted below.
        var urlStatusArray = new BitArray(documentCrawlUrls.Count, false);
        for (var x = 0; x < documentAcls.Count; x++)
        {
            if (!string.IsNullOrEmpty(documentAcls[x]))
            {
                // Grant access unless the ACL hint says 'deny'.
                urlStatusArray[x] = true;
                if (string.Compare(documentAcls[x], "deny", StringComparison.InvariantCultureIgnoreCase) == 0)
                {
                    urlStatusArray[x] = false;
                }
            }
            else
            {
                // An empty docaclmeta field means the document stays visible.
                urlStatusArray[x] = true;
            }
        }

        return urlStatusArray;
    }
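
To sanity-check the rule outside SharePoint, here is a quick hedged harness (assuming the using directives shown earlier; the xmldoc:// URLs are made up for illustration):

    var trimmer = new SearchPostTrimmer();
    var urls = new List<string> { "xmldoc://item/1", "xmldoc://item/2", "xmldoc://item/3" };
    var acls = new List<string> { "allow", "deny", string.Empty };

    var granted = trimmer.CheckAccess(
        urls, acls, new Dictionary<string, object>(), WindowsIdentity.GetCurrent());

    // granted now holds: true, false, true - only the 'deny' document is trimmed.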

Performance Considerations 

As a starting point, consider the following tips to improve overall performance with this trimmer:

  • Use pre-security trimming instead where possible, for improved performance and correctness (hit counts, refiners, etc.)
  • Cache locally any data needed to evaluate whether a result should be removed (a sketch follows this list)
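
For the second tip, here is a minimal caching sketch. It is a hypothetical helper, not part of the trimmer API: it memoizes per-document decisions with a short expiry so that repeated queries do not re-evaluate the same data.

    using System;
    using System.Collections.Concurrent;

    internal static class TrimmerDecisionCache
    {
        // Cached decision plus the time it was added, so stale entries expire.
        private static readonly ConcurrentDictionary<string, Tuple<bool, DateTime>> Cache =
            new ConcurrentDictionary<string, Tuple<bool, DateTime>>();

        private static readonly TimeSpan Ttl = TimeSpan.FromMinutes(5);

        public static bool GetOrAdd(string documentUrl, Func<bool> evaluate)
        {
            Tuple<bool, DateTime> entry;
            if (Cache.TryGetValue(documentUrl, out entry) &&
                DateTime.UtcNow - entry.Item2 < Ttl)
            {
                return entry.Item1;
            }

            var decision = evaluate();
            Cache[documentUrl] = Tuple.Create(decision, DateTime.UtcNow);
            return decision;
        }
    }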

Deploying the Trimmer

After you have built the custom security trimmer project, you must deploy it to the global assembly cache on every server hosting the query processing component.

  1. On the Start menu, choose All Programs, choose Microsoft Visual Studio 2012, and then choose Visual Studio Tools and open a Visual Studio command prompt.

  2. To install the SearchPostTrimmer.dll, type the following at the command prompt:

    gacutil /i <ExtractedFolderPath>\PostTrimmer\bin\Debug\SearchPostTrimmer.dll

  3. As the last step of deploying the trimmer, we need the public key token of the DLL. Type the following at the command prompt:

    gacutil /l SearchPostTrimmer

    Write down the token listed for the newly added DLL.

Registering the Trimmer

  1. Open the SharePoint Management Shell.

  2. At the command prompt, type the following command:

    New-SPEnterpriseSearchSecurityTrimmer -SearchApplication "Search Service Application"
    -typeName "CstSearchPostTrimmer.SearchPostTrimmer, SearchPostTrimmer,
    Version=1.0.0.0, Culture=neutral, PublicKeyToken=token" -RulePath "xmldoc://*" -Id 1

    where token is the text copied from the gacutil /l command above, and the xmldoc://* rule path refers to the crawl rule we defined in the XML Connector blog post. For example:

    New-SPEnterpriseSearchSecurityTrimmer -SearchApplication "Search Service Application" -typeName "CstSearchPostTrimmer.SearchPostTrimmer, SearchPostTrimmer, Version=1.0.0.0, Culture=neutral, PublicKeyToken=4ba2b4aceeb50e6d" -RulePath "xmldoc://*" -Id 1

  3. Restart the SharePoint Search Host Controller service by typing

    Restart-Service SPSearchHostController

Testing the Trimmer

Now you can issue queries, and the post-trimmer logic should reveal itself on each query evaluated. Try modifying the docaclmeta field contents in the Product.xml file for the custom connector, issuing a full crawl, and repeating the query. (As noted in the comments below, the initial full crawl after registering the trimmer is required for the CheckAccess method to fire.)

Acknowledgements

Author: Sveinar Rasmussen (sveinar).

Sample code: https://blogs.msdn.com/cfs-file.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-01-55-87-Trimming/8512.PostTrimmer.zip

Comments

  • Anonymous
    June 17, 2013
    Thanks for Great Post, I tried to configure the same in my On premises environment but no luck , could you please send me working code in my mail id arvindmits@yahoo.com

  • Anonymous
    October 08, 2013
    Thanks for the post. One important addition: When registering the Post-Trimmer, you must do a FULL CRAWL in order for the CheckAccess method to fire. You only have to do this the first time. Subsequent updates to the post trimmer don't require a re-crawl, as long as you are not re-registering the trimmer.

  • Anonymous
    October 09, 2013
    Eugene, you are absolutely correct, Sir! The reason for the initial full crawl is that the content objects (documents) need to be associated with the registered trimmer. It will put an integer ID on those that match the crawl rule. Once the content is "marked up" with a post-trimmer, the post-trimmer is invoked. And any subsequent changes to the trimmer logic will not require any crawls after that, as long as you aren't re-registering trimmers.

  • Anonymous
    May 08, 2014
    This is a terrific post, thanks for sharing

  • Anonymous
    January 14, 2015
    I have the post trimmer working, thanks for the article! Now... do you know of a way to fix the paging of the results? I can live with the refiners and hit count being inaccurate, but currently I have a query where 16 records should make it past the post-trimmer, but page one has one result, pages two and three are blank, one result on page 4... three on page 9. I'm afraid this is not useable as it is. Is there a way to trigger the query to re-configure paging, or add a value to the rows that I could sort on? Some options I was thinking of were:
    - a query rule to sort the results (this seems applied before trimming - didn't work)
    - CSS to hide the "hidden" rows (they are really trimmed - not in the html, so this isn't an option)
    Thanks for any help. Karl

  • Anonymous
    January 14, 2015
    Karl, thanks for your kind words. The pagination of the result set is troublesome indeed where the number of hits returned might not be sufficient to fill out the page itself. One approximation might be to consider increasing the RowLimit query parameter (msdn.microsoft.com/.../microsoft.office.server.search.query.queryproperties.rowlimit(v=office.15).aspx). The rationale of that is that the post trimmer will automatically bump the RowLimit parameter by 50% to accommodate for the potential rows trimmed away post query evaluation. Thus, the larger RowLimit, the more results are returned. Coupling the RowLimit with the StartRow query parameters might give you a sensible pagination then. Alternatively, ask for a larger set of rows in the first place (by making the search engine return e.g. 300 rows) and perform a dynamic frontend/visual-only-in-the-browser pagination in JavaScript if that makes more sense.

  • Anonymous
    February 03, 2015
    Jason, the ResultsPerPage property in the ResultScriptWebPart is capped at 50 items. You can perhaps (though it is unsupported) try to use the QueryTransform to set a new RowLimit through PowerShell, via QueryTransform.OverrideProperties. Try Bing for the Nelson Lamprecht blog on search-result-types for more details.