Microsoft Identity Lifecycle Manager "2" Policy Service: A Look Behind the Curtain

In my previous post I gave a very brief overview of the different components of the Microsoft Identity Lifecycle Manager "2" product.  I provided an especially brief description of one of those components, the Microsoft Identity Lifecycle Manager Policy Service (ILM-PS).  I would like to follow up on that post and provide a deeper description of the ILM-PS.  To do that, let me start by taking a step back and briefly discussing some of the motivations behind adding this component to Microsoft Identity Lifecycle Manager "2".

Inclusion of the ILM-PS in the Microsoft Identity Lifecycle Manager "2" product is the realization of a concept that started with Microsoft Identity Lifecycle Manager 2007.  Prior to the release of Microsoft Identity Lifecycle Manager 2007, the Synchronization Engine component was the entirety of the product, then known as Microsoft Identity Integration Server (MIIS).  At that time MIIS was, and still is, fantastic at what it does:  synchronize, provision, and deprovision data between heterogeneous data sources.  However, managing the lifecycle of this data was done externally through the external data stores.  In other words, the Synchronization Engine would only perform synchronization, provisioning, or deprovisioning actions when there was a data change in an external store to which it was connected through a Management Agent (MA).  Further, deploying and configuring the Synchronization Engine was a complex task that often required contracting experts, especially if an enterprise's deployment required authoring one or more custom MAs.

With the release of Microsoft Identity Lifecycle Manager 2007, the Synchronization Engine is joined by the Certificate Lifecycle Manager (CLM).  The addition of CLM is the first step toward bringing the ability to manage the lifecycle of data synchronized by the Synchronization Engine into the product itself.  The deployment and configuration of the Synchronization Engine remain mostly the same; however, the integration point between the Synchronization Engine and CLM is improved with the inclusion of a custom MA that sits between the Synchronization Engine and the data store that backs CLM.  As a result, enterprises can use Microsoft Identity Lifecycle Manager 2007 as a complete solution for managing certificate-related data.

With the release of Microsoft Identity Lifecycle Manager "2" the Synchronization Engine and Certificate Lifecycle Manager (CLM) are joined by Policy Service.  The Policy Service extends the initial step taken by the CLM to include the ability to manage the lifecycle of data synchronized by the Synchronization Engine into the Microsoft Identity Lifecycle Management product.  Like CLM, the data store backing the Policy Service is connected to the Synchronization Engine with a custom MA.  However, unlike CLM, the Policy Service does not manage one specific type of data.  More precisely, the Policy Service introduces a platform for managing the lifecycle of different types of data providing that data can be represented as a "Resource" within the Policy Service.

The flexibility of the Policy Service starts with the flexibility of what data can be expressed as a "Resource" and, thus, managed by the Policy Service.  As I started to explain in my previous post, a resource is just a set of related data describable in a flat XML schema.  Think of it as an object.  In fact, you could think of a "Resource" as the Microsoft Identity Lifecycle Manager "2" platform's version of System.Object.  Just as all types in the .NET Framework are built on top of System.Object, all data managed in the Policy Service is built on top of "Resource".  Within the .NET Framework, System.Object is a type that does not do much on its own, but does include some basic functionality that all types share (e.g. ToString(), GetType(), GetHashCode(), etc.).  Likewise, within the Policy Service, "Resource" represents data that does not describe much on its own but does include some basic attributes that all resources share (e.g. Creator, CreatedTime, ObjectType, ObjectID, etc.).

I mentioned that a resource is a set of data describable in a flat XML schema.  To be more precise, I should say that a resource is a set of data describable in an XML schema with a maximum depth of one.  This allows for creating an object with a series of attributes (which are represented as XML elements).  These attributes can contain data represented by one of the following supported data types:  string, integer, date time, reference, boolean, or binary.  Most of these should be self-descriptive with the exception of reference.  An attribute of type reference contains a reference to another resource within the Policy Service store; specifically, it contains the unique identifier of that resource.  This reference allows for creating a direct relationship between two resources without needing to create a nested XML schema.  I will dive further into resources and provide detailed examples in later posts.
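
To make the flat schema idea a little more concrete, here is a small sketch (written in C# with System.Xml.Linq, purely for illustration) that builds the kind of depth-one XML fragment I have been describing.  The attribute names beyond the base resource attributes are hypothetical; the actual attributes available on any resource type are defined by the schema loaded into the Policy Service.

    using System;
    using System.Xml.Linq;

    // A minimal sketch of a "Person" resource expressed as a flat XML fragment:
    // a root element and then exactly one level of attribute elements.
    // The attribute names other than the base resource attributes
    // (ObjectID, ObjectType, Creator, CreatedTime) are hypothetical.
    class ResourceSketch
    {
        static void Main()
        {
            XElement person = new XElement("Person",
                new XElement("ObjectID",    Guid.NewGuid()),      // unique identifier of this resource
                new XElement("ObjectType",  "Person"),
                new XElement("CreatedTime", DateTime.UtcNow),     // date time attribute
                new XElement("DisplayName", "Jeff Example"),      // string attribute
                new XElement("EmployeeID",  12345),               // integer attribute
                new XElement("IsFullTime",  true),                // boolean attribute
                new XElement("Manager",     Guid.NewGuid()));     // reference attribute: the ObjectID
                                                                  // of another resource
            Console.WriteLine(person);
        }
    }

Notice that the Manager attribute is just another element holding an identifier; the relationship between two resources is expressed without any nesting of the XML.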

It turns out that managing the lifecycle of resources only requires providing five operations:  Create, Read, Update, Delete, and Enumerate (search).  Any higher-level conceptual operation that needs to be done on data can be represented as one of these five lower-level operations.  For example, the operation of adding a user to a group can be modeled as an update to the group resource's attribute containing membership data.  These operations are the basis of the WS-* web service endpoints exposed by the Policy Service.  This allows web service clients to talk directly to the Policy Service and create new instances or manage existing instances of resources, as long as those resources are defined within the Policy Service's schema.
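
To illustrate that decomposition, here is a purely conceptual sketch (not the actual web service contract) of how the "add a user to a group" example reduces to an Update request against the group resource's membership attribute.  The type names and the attribute name "ExplicitMember" are my own inventions for illustration.

    using System;

    // Conceptual sketch only: a higher-level operation ("add a user to a group")
    // expressed as one of the five lower-level operations (an Update).
    enum Operation { Create, Read, Update, Delete, Enumerate }

    class Request
    {
        public Operation RequestedOperation;  // which of the five operations is requested
        public Guid      TargetResource;      // ObjectID of the resource being changed
        public string    Attribute;           // attribute being modified
        public object    ValueToAdd;          // value the update adds to that attribute
    }

    class Example
    {
        static Request AddUserToGroup(Guid groupId, Guid userId)
        {
            return new Request
            {
                RequestedOperation = Operation.Update,
                TargetResource     = groupId,
                Attribute          = "ExplicitMember",   // hypothetical membership attribute
                ValueToAdd         = userId              // reference to the user's ObjectID
            };
        }

        static void Main()
        {
            Request r = AddUserToGroup(Guid.NewGuid(), Guid.NewGuid());
            Console.WriteLine("{0} request on attribute {1}", r.RequestedOperation, r.Attribute);
        }
    }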

So far so good.  We have these things called "Resources" that can be created and managed by web service clients talking directly to the Policy Service.  There is one problem: the data that makes up these resources can have complex management policy that needs to be enforced.  This is where the Policy Service really flexes its extensibility muscles.  Within the Policy Service is a mechanism for processing requests through the system that allows for amazing customization (if I do say so myself).  First, every client attempting to create, read, update, delete, or enumerate resources actually submits a request to do so.  This request is then dispatched through the Microsoft Identity Lifecycle Manager "2" request pipeline.  This pipeline moves a request through five phases of processing:  Rights Check, Authentication, Authorization, Commit the Operation, and Action.  (Initial authentication is accomplished at the web service layer, outside of the request pipeline, so the rights check is actually done against a client that has already had some authentication performed, and the ILM authentication phase is really a second-factor authentication.)

During the Rights Check phase the Microsoft Identity Lifecycle Manager "2" Policy Service determines if the user submitting the request has been explicitly granted the rights to perform the operation described by the current request.  If so, the request moves on to Authentication; otherwise, processing is halted and permission is denied.  Within the Authentication phase, the Policy Service determines if it has been configured to require additional authentication of the user.  If additional authentication is required, processing pauses and the user is redirected to the Policy Service's custom Security Token Service (STS) for additional validation.  In the absence of additional authentication requirements, or after the user has provided the additional credentials and resubmitted the request, the current request moves into the Authorization phase.  At this time the Policy Service determines if it has been configured to require additional authorization of the request.  This could involve custom data validation (e.g. verifying that a resource's DisplayName attribute does not include illegal words or characters) or external authorization (e.g. asking a person's manager for approval of a particular operation).  Regardless of the content of the authorization, the Policy Service will execute all additional authorization.  If any of this authorization fails, processing of the request is halted and the current request is denied.  Only after all authorization processes have completed successfully is the request moved into the Commit phase, where the actual operation is performed in the data store and committed.  Finally, in the Action phase, the Policy Service determines if there are any additional processes that should occur as a consequence of the data operation and executes each of them appropriately.
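
To summarize the ordering and the halt-on-failure behavior described above, here is a simplified sketch of the pipeline.  The method names and types are my own shorthand for this post, not Policy Service APIs; each phase here simply returns true or false where the real service evaluates configured policy.

    using System;

    // Illustrative sketch of the request processing order: Rights Check,
    // Authentication, Authorization, Commit, Action.  A failure in any of the
    // first three phases denies the request; Action runs only after a commit.
    class RequestPipelineSketch
    {
        static bool RightsCheck(string request)  { /* has the right been explicitly granted? */ return true; }
        static bool Authenticate(string request) { /* additional AuthN gates, if configured  */ return true; }
        static bool Authorize(string request)    { /* validation / approval workflows        */ return true; }
        static void Commit(string request)       { Console.WriteLine("Committed: " + request); }
        static void RunActions(string request)   { Console.WriteLine("Action workflows run for: " + request); }

        static void Process(string request)
        {
            if (!RightsCheck(request))  { Console.WriteLine("Denied: no rights granted.");     return; }
            if (!Authenticate(request)) { Console.WriteLine("Denied: authentication failed."); return; }
            if (!Authorize(request))    { Console.WriteLine("Denied: authorization failed.");  return; }
            Commit(request);      // the operation is performed and committed only after the phases above succeed
            RunActions(request);  // follow-on processes run as a consequence of the committed operation
        }

        static void Main() { Process("Update Group membership"); }
    }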

Given the importance of this portion of the Policy Service, you can bet that I will follow this post up with an even deeper dive into this component.  Until then, let me provide at least a little overview of the pieces involved.  Each of these phases allows enterprises to configure who can perform which operations and what additional processes should be executed in each phase of processing a request.  Within the Policy Service there is a concept called a ManagementPolicyRule.  Additionally, the Policy Service has a concept of a Set.  A Set is a group of resources that meet shared criteria (e.g. All Persons, All Full-time Employees, All Persons with the First Name of Mark, etc.).  ManagementPolicyRules are used to configure the request processing phases for sets of objects.  They grant rights to sets of principal objects (often, but not always, persons).  They reference Authentication, Authorization, and/or Action workflow definitions.  Again, Sets and ManagementPolicyRules are important enough concepts to warrant their own blog postings.  For now, understand that they exist and are used in conjunction to provide highly configurable control over the request processing phases of the Microsoft Identity Lifecycle Manager "2" Policy Service.
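
As a rough illustration of the Set concept (the ManagementPolicyRule side is sketched a little later in this post), here is a sketch in which set membership is driven by a simple predicate over resource data.  The classes are invented for this example; inside the Policy Service the criteria would be expressed as a filter over resource attributes rather than as C# code.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Illustrative sketch only: a Set is a named group of resources whose
    // membership is determined by shared criteria.
    class Person
    {
        public string FirstName;
        public bool   IsFullTime;
    }

    class Set
    {
        public string             DisplayName;
        public Func<Person, bool> Criteria;

        public IEnumerable<Person> Members(IEnumerable<Person> allPersons)
        {
            return allPersons.Where(Criteria);
        }
    }

    class SetSketch
    {
        static void Main()
        {
            var allPersons = new List<Person>
            {
                new Person { FirstName = "Mark", IsFullTime = true  },
                new Person { FirstName = "Jeff", IsFullTime = false }
            };

            var fullTimers = new Set
            {
                DisplayName = "All Full-time Employees",
                Criteria    = p => p.IsFullTime
            };

            foreach (var p in fullTimers.Members(allPersons))
                Console.WriteLine(p.FirstName + " is a member of " + fullTimers.DisplayName);
        }
    }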

I have described quite a bit of stuff so far, and I hope I have not lost anyone.  I have talked about a few kinds of metadata that are used to configure and drive data through the Microsoft Identity Lifecycle Manager "2" Policy Service:  schema metadata that defines the different types of "Resources" in the system; request metadata that captures requests to perform data operations on those resources; ManagementPolicyRule metadata that configures the different phases of request processing for those resources within the Policy Service; Set metadata that allows the grouping of resources that share criteria; and workflow definition metadata that defines additional processing to be performed within specific phases of request processing.  There is also additional metadata within the system to help configure the Synchronization Engine (bringing us back to the goal of easing the deployment and configuration of the Synchronization Engine).

This may seem like a lot of metadata.  The good news is that all of the metadata used to drive the Policy Service (and to configure the Synchronization Engine within the Policy Service) is itself represented within the system as "Resources".  This means that you manage the Policy Service by managing the metadata resources that drive it, just as you would manage any other resource within the Policy Service.  Further, this means that the control over the request processing of resources is inherited by these system resources as well.  In other words, if an enterprise wants to control who can create new ManagementPolicyRules, they simply create a ManagementPolicyRule granting the right to create resources that would fall within the set "All ManagementPolicyRules" to the set of appropriate users.  If an enterprise wants to add additional authorization (validation and/or approval) to the creation of custom workflow definitions, they simply create a ManagementPolicyRule specifying their authorization workflow to run during the authorization phase of requests to create resources that would fall within the set "All WorkflowDefinitions".
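
As a rough illustration of what a ManagementPolicyRule ties together, here is a sketch mirroring the two examples in the paragraph above.  The property names and set names are hypothetical shorthand; they are only meant to show how a rule connects a principal set, an operation, a target set, a rights grant, and workflow definitions.

    using System;
    using System.Collections.Generic;

    // Illustrative sketch only: the general shape of a ManagementPolicyRule.
    class ManagementPolicyRuleSketch
    {
        public string       DisplayName;
        public string       PrincipalSet;                                  // who the rule applies to
        public string       Operation;                                     // Create, Read, Update, Delete, or Enumerate
        public string       TargetSet;                                     // set the affected resources must fall within
        public bool         GrantsRight;                                   // satisfies the Rights Check phase
        public List<string> AuthorizationWorkflows = new List<string>();   // run during the Authorization phase

        static void Main()
        {
            // Control who can create new ManagementPolicyRules.
            var grantRule = new ManagementPolicyRuleSketch
            {
                DisplayName  = "Policy administrators can create ManagementPolicyRules",
                PrincipalSet = "Policy Administrators",            // hypothetical set name
                Operation    = "Create",
                TargetSet    = "All ManagementPolicyRules",
                GrantsRight  = true
            };

            // Add approval to the creation of custom workflow definitions.
            var approvalRule = new ManagementPolicyRuleSketch
            {
                DisplayName  = "Approve new workflow definitions",
                PrincipalSet = "All Persons",
                Operation    = "Create",
                TargetSet    = "All WorkflowDefinitions",
                GrantsRight  = false,                              // adds authorization without granting rights
                AuthorizationWorkflows = { "Owner Approval" }      // hypothetical workflow definition name
            };

            Console.WriteLine(grantRule.DisplayName);
            Console.WriteLine(approvalRule.DisplayName);
        }
    }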

I know that I have covered a lot of information in this post; it is definitely longer than I like to make my typical blog posting.  I will be following this up with more targeted postings on the specifics covered here; however, I am hoping that this has taken another step towards providing the groundwork for greater and deeper conversations about the Microsoft Identity Lifecycle Manager "2" Policy Service.

Next week:  A Discussion of Computed, Explicit, and Temporal Sets in Microsoft Identity Lifecycle Manager "2"

A quick note about my planned blog entry for next week:  I have answered a number of questions from people regarding the types of set membership in Microsoft Identity Lifecycle Manager "2".  Sets are a key concept in building solutions on top of the Policy Service platform.  Often these questions start off as something other than set membership, but quickly come down to correcting misunderstandings about set membership.  I would like to pause my deep dive into the Policy Service to spend some time discussing the different types of set membership within Microsoft Identity Lifecycle Manager "2".

Comments

  • Anonymous
    July 07, 2008
    Mark, Great description of the ILM-PS!

  • Anonymous
    July 08, 2008
    Mark, For resource manipulation, is the authentication real time, queued, or possibly both?  Also, the same question applies to additional authentication, for example, by a manager. Very informative.  Can't wait to read more. Thanks, Jeff

  • Anonymous
    July 10, 2008
    Jeff - I will be diving deeper into the Request Processing model for the Policy Service, but let me try to answer your question.  When we say Authentication (AuthN), there are really two types of AuthN done in the Policy Service.  There is the web service layer's AuthN, which performs initial validation that a user can actually talk to the Policy Service.  Then the Policy Service adds second-factor AuthN, which can be configured using MPRs.  This second-factor AuthN is done in real time and blocks a request from being processed until all AuthN processes and gates have been passed.  However, AuthN does not generally involve anyone other than the actual user submitting the request.

    Your question about "additional authentication ... by a manager" is more likely a question about the Authorization (AuthZ) portion of request processing.  This is where we will delay processing a request until all AuthZ processes have run, which may include asking for manager approval.  If any AuthZ process fails, the request is denied; only when ALL AuthZ processes complete is the request processed.  I hope that answers your questions.

  • Anonymous
    July 16, 2008
    Thanks Mark, that makes sense.  With the various auth types (AuthN, AuthZ, others?), is there any limitation on what type of authentication can be performed?  More specifically, do specific levels of auth limit the auth methods to: natively supported Windows auth (AD user/password), smart card, digital certificate, LDAP, or external (3rd-party) auth? Jeff

  • Anonymous
    July 17, 2008
    There are only two types of auth (AuthN and AuthZ) in the ILM "2" Policy Service.  There is a third stage where workflow is run (Action).  I am hoping to post this week describing the request processing model in a bit more detail (it will take a few posts to dive into the depths).

    The auth you refer to here seems to be Authentication (verifying the user is who they claim to be) versus Authorization (verifying the request meets all validation requirements before being processed).  The short answer is that ILM "2" supports any type of authentication that can be coded in C#.  The web service uses WCF under the covers to do an initial authentication of the client, and our second-factor AuthN is driven by Windows Workflow Foundation.  If you can write a C# method to validate user authentication, you can pair that with a custom gate on the client to produce the response data.  I know this is a bit hand-wavy, and I apologize for that.  Please bear with me as I continue peeling back the onion of the Policy Service and provide the context needed to have deep dives into the ILM "2" specifics of things like custom Authentication.  Thanks for your patience.

  • Anonymous
    July 17, 2008
    Actually your answer is perfect.  I needed to understand how open the Authentication mechanism is in order to figure out how best to tie this into the architecture I work with. My applications are all .Net/C#/web services, so understanding integration options at this point helps me to look at the whole platform from the proper perspective as you move forward with educating us. All of this is very helpful, and I thank you for allowing me to stray from the main topic a bit.

  • Anonymous
    July 18, 2008
    This week my laptop decided it was time for a career change and made the transition to door stop. Unfortunately