Opalis and the Internal Microsoft Adoption Story (On Demand Webcast)
On Tuesday, June 28th, Charlie Satterfield and I delivered a webcast with a lot of content on how Microsoft is leveraging Opalis internally. Please use the link below to view the content on demand.
Details
This webcast concentrated on the following topics:
- Real world scenarios we are currently leveraging Opalis for
  - Service Request Automation (overview)
  - ConfigMgr Role Deployment (demo)
  - Patch Tuesday Automation (demo)
  - Reach Analysis for Opalis at Microsoft (overview)
- Architecture considerations
  - Single Management Server and Single Action Server
  - Single Management Server and Multiple Action Servers
- Lessons Learned
  - Moving from simple to scalable / componentized workflows
  - Leveraging paper specs to build out the initial workflow design before development efforts
This webcast was well received, and we believe it is a solid hour of great information on how you might look at implementing Opalis and Orchestrator in your own environment. To view the webcast on demand, click the following link: https://bit.ly/iuZz2h
Q & A from the presentation
Q: How far along in your process did you realize you needed to go into componentization?
A: Almost immediately. Our reach workflow showed that if we hadn’t componentized our workflows, they wouldn’t be very scalable or efficient.
(Note: see next Q/A for more information on this).
Q: What kind of requirements did you find concerning the number of action servers required to run concurrent policies? Meaning did you need to deploy a certain number of action servers to run the necessary number of simultaneous policies?
A: The answer to this question is, it depends. To put a finer point on it: the Desktop Management side within MPSD supports roughly 280,000 systems worldwide on a single Opalis Management Server and a separate Action Server (both running as guest VMs on Hyper-V). The answer is "it depends" mostly because there are many ways to design and execute runbooks to accomplish automation in your environment. Do we hit all 280,000 systems directly from Opalis? No. We manage those systems indirectly through System Center Configuration Manager (and, in one way or another, the System Center suite as a whole).
The key detail in designing your architecture really comes down to what you are doing and how you are going to do it. If you have lots of long-running tasks that take several hours to complete, you may be using your Action Servers in a way that prevents you from scaling efficiently: long-running tasks consume memory and occupy the queue for extended periods, reducing your ability to push more policies through. Building scalable workflows is an important part of understanding how to optimize your Action Server load.
A final note: mileage may (and will) vary depending on what you are doing. Analyze how you are executing and what you are executing with regard to your IT Process Automation efforts. If you can break your runbook tasks into sub-runbooks that execute quickly and return to a main orchestration workflow, you will spend less time waiting on many tasks to complete inside one large single-threaded workflow, Opalis will finish work more quickly and efficiently, and you will be able to run more workflows on less infrastructure.
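To make that componentization idea concrete, here is a minimal sketch in plain Python (not Opalis itself; every host name and function in it is hypothetical) showing a main orchestration step handing work off to short sub-tasks that return quickly, instead of one long single-threaded pass over everything:

```python
# Minimal sketch (not Opalis) of componentized workflows:
# a main orchestration step fires short sub-tasks that each do one thing
# and return quickly, rather than one long single-threaded loop that
# holds resources for hours. Names below are hypothetical.

from concurrent.futures import ThreadPoolExecutor


def patch_host(host: str) -> str:
    """Hypothetical short-running sub-task: queue a patch job for one host and exit."""
    # ... trigger the patch job, confirm it was queued, then return ...
    return f"{host}: patch job queued"


def main_orchestration(hosts: list[str]) -> list[str]:
    """Main workflow: dispatch each host to a quick sub-task instead of
    walking the whole list sequentially in one long-running policy."""
    with ThreadPoolExecutor(max_workers=10) as pool:
        return list(pool.map(patch_host, hosts))


if __name__ == "__main__":
    for result in main_orchestration(["server01", "server02", "server03"]):
        print(result)
```

In Opalis terms, this maps roughly to a parent policy triggering short child policies rather than doing all of the work in one long-running policy, so each Action Server slot is freed as soon as its short task completes.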
Q: Would you say that Action Server architecture decisions are more location-based than concurrent-policy-based? That is, would you deploy additional Action Servers in a high-volume location, and also deploy an Action Server in a location that has a high-latency link to the main datacenter/Opalis Management Server location?
A: The answer is both :). It depends on how you want to spread the load of your policies and whether different locations are sensitive to certain infrastructure needs (non-trusted domains, etc.). Action Server build-out can be useful for running more policies concurrently, but more often the driver is geographical (as long as your runbooks are built in a scalable fashion).
As always, any questions – hit me up!
And of course – Happy Automating!