Securing SCOM in a Privilege Tiered Access Model–Part 1

Disclaimer: Due to changes in the MSFT corporate blogging policy, I’m moving all of my content to the following location. Please reference all future content from that location. Thanks.

I’ve had a few discussions with some people internally on this subject. One thing that has been consistent in these conversations is that we (Microsoft) don’t have much in the way of good guidance on securing SCOM, and this really needs to be addressed. Since I’ve written quite a bit on cyber security and SCOM, have released a security monitoring solution for SCOM, and am now officially a Cyber Security Consultant at Microsoft, I figured I’d take a stab at this. It’s worth noting that this has been tossed around internally, though I wouldn’t be surprised if I have to update it at some point in the not-so-distant future, as this is unofficial guidance.

Let’s start by giving a quick explanation of the tiered access model. For more detail, I’d highly recommend reading the Securing Privileged Access Reference Material that Microsoft has published. In summary, Microsoft recommends isolating identities into tiers. Identities include user accounts, computer accounts, applications, etc.

Tier 0 represents those identities that can give you full access to the environment. These credentials should NEVER be used on Tier 1 or Tier 2 systems. They should only be used on Tier 0 systems (i.e. domain controllers).

Tier 1 represents the server tier, where your business and application data resides. Even in this tier, it’s recommended to move away from a global server admin account, which, if compromised, is almost as bad as an attacker getting a Domain Admin account. Compromising a Tier 0 account certainly makes an attacker’s job easier, but if they get enough of Tier 1, they still have your data. Servers, and the accounts managing servers, need to be isolated with restrictions in place to prevent lateral movement and collection of these credentials. Microsoft does provide an engagement to help against this called SLAM, Securing (against) Lateral Account Movement, and I highly recommend it as a way to start locking down your organization. Tier 1 credentials should never be used in Tier 0 or Tier 2.

Tier 2 is the desktop tier, with connectivity to the internet for browsing, email, and general application use. This is the assumed-breach area: no matter how hard you try, someone will click on something they shouldn’t and eventually compromise a desktop. Tier 0 and Tier 1 credentials should never be used on a Tier 2 device. This includes common things such as RDP to a Tier 1 server. RDP Restricted Admin settings can help in some ways, namely keeping a Tier 1 credential off of the Tier 2 system, but the recommendation for managing your environment is to use separate Privileged Access Workstations (PAWs) in some sort of Red Forest environment, which we call ESAE.

System Center services have high privilege to many systems in the environment, including Tier 0, which makes them a prime target for attackers looking to do bad things in your environment. As John Lambert mentions in his “How InfoSec Security Controls Create Vulnerability” article, when information security controls are implemented without visualizing the security dependency graph, individual risk management decisions fail to create a defensible system. As such, I’d highly recommend isolation of the System Center stack. This is an application that could potentially hold the credentials to powerful accounts, making it a high-value target for attackers.

Let’s start with the architecture. SCOM uses an agent to run workflows and return data to the management server for alerting, collection, etc. In and of itself, this is a fairly innocuous task, and communication between the management server and the agents is fairly benign. The management servers send configuration information to the agents (i.e. which management packs to download), and the agents send the results of those MPs back to the management server. There are a few risks to this, with the biggest being run as accounts. We’ll talk more about them in the next part, but I’ll simply note here that poor distribution of run as accounts can expose your organization to credential theft and reuse (aka pass the hash). For now, though, I want to highlight two other areas of concern.
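
As a quick sanity check of that channel, here is a minimal sketch run from an agent-managed server, assuming SCOM’s default agent communication port of TCP 5723 and a hypothetical management server name:

    # Confirm the agent can reach the management server on the default
    # SCOM agent communication port (TCP 5723). 'scom-ms01' is a placeholder name.
    Test-NetConnection -ComputerName 'scom-ms01' -Port 5723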

Agent Action Account should always be the local system account

This should not be confused with the Management Server Action Account. That account is the default account for things like agent updates, agent deployment (and I would argue that it’s probably best not to use it for those purposes, since it runs in resident memory on the management servers), and running various workflows on the management servers. The agent action account is the account that an agent uses to execute its workflows. By default, this is the Local System account, as that is what the Microsoft Monitoring Agent runs under. That said, it is configurable, and customers can have the monitoring agent run under service account credentials. This is a BAD IDEA. As mentioned in the Administrative Tools and Logon Types section of the Securing Privileged Access Reference Material, service accounts leave credentials behind on every system the service runs on. Compromising one system where a service that uses the account runs gives an attacker the ability to reuse those credentials against every other system that account can access. If you must use a service account, then restrict that account’s access to only the machines that need it. If the account has rights across the domain, you’ve opened your environment up to being compromised quickly. I’ve written about this as well, and you can find that piece here.
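
A quick way to audit this is to check which account the agent’s HealthService runs under. Here is a minimal sketch, assuming remote CIM/WMI access and placeholder server names; anything other than LocalSystem is worth a closer look:

    # List the logon account for the SCOM agent service (HealthService) on a set of servers.
    $servers = 'server01', 'server02'   # placeholder server names
    foreach ($server in $servers) {
        Get-CimInstance -ComputerName $server -ClassName Win32_Service -Filter "Name='HealthService'" |
            Select-Object @{n='Server';e={$server}}, Name, StartName
    }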

Secure who can import/change MPs and from where they can import them.

While it’s not as obvious from the SCOM console, SCOM has extensive libraries to run PowerShell, command-line, and VBScript scripts. To be fair, much of this relies on the authors of those management packs following best practices, and an attacker has no such obligation. This means that someone could write a management pack that deploys malicious software, creates a back door, or even uses SCOM as a vehicle to collect key information about an environment. I could, for instance, write a management pack that uses a PowerShell probe or task to connect to a remote share and install malware on a system, or use it to lower the security posture of a system. SCOM doesn’t have much in the way of auditing either, meaning that we cannot trace back who did something like this. Your only clue as to whether this is going on is if you regularly audit the installed management packs as well as their content (and I find that this is not done often). In this scenario you would likely see a lot of the yellow SCOM alerts (Workflow failed to run, Workflow failed to initialize, OpsManager failed to start a process, etc.), but in my experience, very few organizations spend much time looking at these alerts.
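
If you want a starting point for that kind of audit, here is a rough sketch using the OperationsManager PowerShell module; the management server name 'scom-ms01' and the export folder are placeholders:

    # Connect to a management server (placeholder name).
    Import-Module OperationsManager
    New-SCOMManagementGroupConnection -ComputerName 'scom-ms01'

    # List unsealed MPs (the easiest ones to tamper with), newest changes first.
    Get-SCOMManagementPack | Where-Object { -not $_.Sealed } |
        Sort-Object LastModified -Descending |
        Select-Object Name, Version, LastModified

    # Export unsealed MPs so their XML (scripts, probes, tasks) can be reviewed offline.
    # Assumes the folder C:\MPAudit already exists.
    Get-SCOMManagementPack | Where-Object { -not $_.Sealed } |
        Export-SCOMManagementPack -Path 'C:\MPAudit'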

Out of the box, I’d add, SCOM is very vulnerable, as the BUILTIN\Administrators group is a SCOM administrator by default. This should be removed and replaced with an Active Directory group that is limited to your SCOM engineers and the appropriate SCOM service accounts (more on that in the next post). You also need to control where this type of access can be performed. This fits into Microsoft’s PAW and Red Forest concepts, as administration of SCOM should not be allowed from your Tier 2 environment. Tier 2 is an assumed-breach environment, as it can be compromised easily. If your SCOM admin, for instance, has the SCOM console installed on his/her desktop and does a “run as” to use it, their SCOM administrative credential is now sitting in the LSA on that desktop, which means an attacker can steal it. If those credentials have more access, the attacker just got your Tier 1 environment. If they are just SCOM admins, the attacker can upload a malicious management pack to SCOM. This also means your SCOM admin could feasibly be the victim of a targeted phishing attack, as this could be a very quick way to compromise an environment.
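
To illustrate the first part, here is a rough PowerShell sketch, assuming the OperationsManager module and a hypothetical CONTOSO\SCOM-Admins AD group. The built-in Administrators role takes groups rather than individual users, and exact cmdlet behavior against it can vary by SCOM version, so treat this as a starting point rather than a recipe:

    Import-Module OperationsManager
    # Built-in Administrators role (internal name may vary slightly by version).
    $adminRole = Get-SCOMUserRole | Where-Object { $_.Name -eq 'OperationsManagerAdministrators' }

    # Show who currently holds the role (expect BUILTIN\Administrators out of the box).
    $adminRole.Users

    # Replace the membership with a dedicated, limited AD group (placeholder name).
    Set-SCOMUserRole -UserRole $adminRole -User 'CONTOSO\SCOM-Admins'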

Because of this, SCOM administration really needs to occur through a Red Forest. A Red Forest, for the record, is a non-trusted domain: it’s hardened and it does not have internet access, email, etc. You would use IPSEC and firewalls to restrict administration of your environment to only your Red Forest. Your SCOM admin should never be administering SCOM from an internet-facing machine joined to your domain; they should be doing this from the Red Forest. If they do their administration on the management server directly, they should only be allowed to RDP to the management server from the Red Forest. This makes it very difficult for an attacker to steal your credentials.
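
As one example of the firewall piece, here is a minimal sketch run on a management server, assuming a hypothetical Red Forest/PAW subnet of 10.10.50.0/24, that scopes the built-in Remote Desktop firewall rules to that subnet:

    # Limit the built-in Remote Desktop allow rules so RDP to this management server
    # is only reachable from the Red Forest / PAW subnet (placeholder address range).
    Get-NetFirewallRule -DisplayGroup 'Remote Desktop' |
        Set-NetFirewallRule -RemoteAddress 10.10.50.0/24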

That said, setting up a Red Forest will certainly take a lot of time. In the short term, consider enabling RDP Restricted Admin mode (instructions are here). This will lower the attack surface for lateral movement, as RDP credentials will not be stored in the local machine’s LSA; authentication will happen on the RDP target only. This isn’t as secure as a Red Forest, but it is an easy short-term fix that can reduce your attack surface.
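
For reference, Restricted Admin mode is enabled on the RDP target through the DisableRestrictedAdmin LSA registry value. A minimal sketch, run on the management server (or other RDP target):

    # Enable RDP Restricted Admin mode on the target server.
    New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Lsa' `
        -Name 'DisableRestrictedAdmin' -Value 0 -PropertyType DWORD -Force

    # The admin then connects from the client with:
    #   mstsc.exe /RestrictedAdmin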

This covers the first piece in this series. In the next piece, I will cover least privilege, run as accounts, and other things that can be done to protect your Operations Manager environment.

Summary

  • Securing Privileged Access (AD Security) paper.
  • Agent Action Account should be the Local System account.
  • SCOM administrators should be restricted, as should the locations from which SCOM can be administered.

Part 2 is here.

Part 3 is here.