In this article, I would like to help you be better prepared for your next penetration test (“pen test”). If you’ve never had one, this article will help you skip the typical “first pen test” experience, which almost always surfaces the basic problems described below, all of which are relatively easy to fix. Each of these issues has led to quick compromise or quick elevation of privilege, resulting in a complete takeover of the environment. In simple terms, if your organization has any of these issues, your security program investment is not providing an acceptable return, because it can all be bypassed with decades-old attack methodologies.
In today’s IT security climate, organizations are constantly pushed by vendors to adopt the latest buzzword prevention technology or a new cloud solution that protects against some edge-case scenario. As that pressure mounts, it’s easy for security professionals to get caught up in the arms race, myself included. Staying on the cutting edge is necessary to survive in the Information Security industry, but when was the last time you looked at the basics?
Penetration testing serves multiple purposes for businesses, the most common of which is arguably regulatory compliance. Many industries must follow a compliance framework that requires a third party to perform this testing at least annually, and sometimes as often as quarterly. Organizations like this usually have a remediation program in place, and findings are resolved before the next test occurs. This regulated approach increases the chances that the organization will be prepared for an attack, as it is more aware of new threats and has a process to deploy mitigations quickly after they are announced.
Another reason organizations request penetration testing is to test their defenses; this could be scoped as testing an Intrusion Prevention System (IPS), the organization’s Managed Security Service Provider (MSSP), or their entire security program as a whole. Some organizations may not have regular interval testing scheduled, or they may have never had a penetration test performed before.
The third most common reason organizations have testing performed is when a major change has occurred; this might be a web application that has been converted to a new language or new functionality that has been added or deployed to a new location. Another example might be during the merger of two separate companies so that potential risks can be identified before connecting the two networks together.
Regardless of the reason for the penetration test or the preparation level of the organization, we see the same basic issues repeatedly:
Default insecure Microsoft settings (old protocols enabled, security features not enabled, etc.)
Incorrect Group Policy Object permissions
Server out-of-band management interfaces enabled
Overly permissive Anti-Virus exclusions
Default Insecure Microsoft Settings
Microsoft has a massive catalog of software and, more than likely, you are using a lot of it. Business decisions must be made to allow all this software to interoperate out of the box, and that results in secure functionality being disabled in order to support backwards compatibility. It doesn’t help that in the current Information Security climate, pretty much all organizations are understaffed and overworked. This situation leads to deployments with these poor defaults making it to production, only to be quietly exploited by attackers later down the road; the most impactful example of this involves a trio of vulnerabilities that have been abused for at least 20 years.
Link-Local Multicast Name Resolution (LLMNR) is enabled by default on Windows operating systems to allow computers to find each other by name alone. While that may sound useful and even necessary, once you understand how it works, you realize it has no place in a modern business network; the feature was created for home networks where no Domain Name System (DNS) server is present. If you have Active Directory (AD) in your company, you have DNS servers; even without Active Directory, if you have anything more than a flat network (meaning your network is separated into multiple subnets), you have DNS servers. When LLMNR is enabled, it provides a last-resort fallback for when a name cannot be resolved in any other manner.
Consider what happens when you have autocomplete enabled in your browser (also a bad idea) and you type “goggle” instead of “google.” You hit enter and, because “goggle” does not exist in your autocomplete history, the browser does not append “.com” and instead looks in the local hosts file for a static entry for “goggle.” Naturally, this does not exist, and the next step is to query the DNS server specified in your network settings. This is likely an internal DNS server configured to check its internal zones first for the name “goggle,” quite possibly appending your domain name to the end of it. When that is not found, the DNS server queries the root servers on the Internet for the name “goggle.” When it is not found there either, the machine falls back to LLMNR, which is a broadcast to anyone listening – essentially screaming “Who is goggle?” On a healthy network, nothing will respond unless there is a computer named “goggle,” in which case you probably already have a problem. An attacker exploits this behavior by using software configured to respond to any broadcast request with “I’m goggle, and this is my IP.” A simple typo has just made you communicate with an attacker-owned computer and reveal information about yourself that allows the attacker to impersonate you. It really is that quick and that easy.
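To make the mechanics concrete, here is a minimal Python sketch of what a poisoning tool such as Responder does with that broadcast: it parses the query and crafts a reply claiming the attacker’s IP owns the name. The packet layout follows the DNS wire format that LLMNR reuses; the names, transaction ID, and spoofed address below are purely illustrative.

```python
import socket
import struct

# LLMNR reuses the DNS wire format, sent to multicast 224.0.0.252 on UDP 5355.
# This sketch only builds and parses packets; it never touches the network.

def build_llmnr_query(name: str, txid: int = 0x1234) -> bytes:
    header = struct.pack(">HHHHHH", txid, 0, 1, 0, 0, 0)  # 1 question, no answers
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
    return header + qname + struct.pack(">HH", 1, 1)      # type A, class IN

def parse_query_name(pkt: bytes) -> str:
    labels, i = [], 12            # the name starts right after the 12-byte header
    while pkt[i]:
        n = pkt[i]
        labels.append(pkt[i + 1:i + 1 + n].decode())
        i += 1 + n
    return ".".join(labels)

def build_poisoned_response(query: bytes, spoof_ip: str) -> bytes:
    # Echo the transaction ID and question, then claim the name resolves to us.
    txid = struct.unpack(">H", query[:2])[0]
    header = struct.pack(">HHHHHH", txid, 0x8000, 1, 1, 0, 0)  # QR=1: response
    answer = struct.pack(">HHHIH", 0xC00C, 1, 1, 30, 4) + socket.inet_aton(spoof_ip)
    return header + query[12:] + answer  # 0xC00C is a pointer to the echoed name
```

The victim accepts whichever response arrives first, which is exactly why disabling the protocol outright, rather than trying to out-race an attacker, is the recommended fix.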
With LLMNR being the foundation of this trio, you probably noticed that the attack only works on the local subnet and only when you “fat finger” a host name. The next building block that makes this worse is Web Proxy Auto-Discovery (WPAD). This feature is enabled by default in all major browsers, but Microsoft browsers are where the problem most often exposes itself because of how deeply Internet Explorer and Edge are ingrained in Windows network communication. WPAD is another holdover from before Active Directory was created; it allowed each individual workstation in a network to locate the web proxy server without being configured for it. On browser start, and frequently while the browser remained open, there would be name lookups for “wpad.” When this was used, a server named “wpad” would respond with a configuration file containing the IP, port, and options for the web proxy. The browser would then import this configuration, giving the user Internet access. Since the creation of Active Directory and the use of GPOs to configure machines, this is rarely done anymore, leaving an opportunity for attackers to use the aforementioned attack methodology to force you to communicate with their system. Now an attacker doesn’t need to wait for a typo, as a non-existent name is looked up multiple times per minute.
Let's Dig Deeper Into LLMNR
The LLMNR attack is still the same, which means the attack surface is still the local subnet. But what happens on your network when you plug in a new computer? Likely, when you request an address via DHCP, you provide your hostname, and it is automatically placed in DNS so that other computers may find you. What if you name your computer “wpad”? If you have Active Directory DNS, no problem; Microsoft already thought about this, and that name cannot be registered. With pretty much any other DNS solution, it will register, and now EVERY computer with a browser open will be communicating with you. The attack surface just became every computer with a browser open, and the attacker is likely able to take over the domain in less than 5 minutes.
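To see why a single rogue “wpad” registration reaches so many machines, consider the names a client may try when hunting for a proxy configuration file. The sketch below is a simplified model of DNS suffix devolution; the exact search order depends on the Windows version, the configured suffix list, and policy.

```python
def wpad_candidates(dns_suffix: str) -> list:
    """Names a client may try when looking for a WPAD server.

    Illustrative only: real resolver behavior varies by Windows version
    and devolution policy settings.
    """
    parts = dns_suffix.split(".")
    # Walk up the suffix, stopping before the bare top-level domain.
    return ["wpad." + ".".join(parts[i:]) for i in range(len(parts) - 1)]
```

A client with the suffix `corp.example.com` will try `wpad.corp.example.com` and then `wpad.example.com`, so a dynamic registration anywhere in that chain catches every browser in the organization.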
The third leg of this trio is Server Message Block (SMB) signing not being enabled by default; this is the perfect example of compatibility taking priority over security. The list of vendors who cannot support signed SMB sessions is dwindling, which shifts the burden to security professionals, who must ensure the latest software versions are in use so the feature can be enabled. A regular unsigned SMB exchange consists of authentication followed by commands. The LLMNR attack allows the attacker to relay or replay the authentication they receive when you inadvertently communicate with them; the attacker supplies the commands, and the impersonation is complete. SMB signing adds a cryptographic signature to each SMB message: a keyed hash computed with a session key established during authentication. The destination recomputes the signature and compares it to the one it received; if they match, the commands may be run, and if they do not, the request is thrown away. Because a relaying attacker never learns the session key, they cannot produce valid signatures for the commands they inject.
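Conceptually, the protection works like a keyed checksum. The Python sketch below is a simplified model, not the actual SMB algorithm (SMB 2 signs the full packet with HMAC-SHA256, and SMB 3 uses AES-CMAC or AES-GMAC), but it shows why a relay attacker who lacks the session key cannot forge valid traffic.

```python
import hashlib
import hmac

# Simplified model of SMB signing: a keyed hash over each message, using a
# session key derived during authentication. The real protocol zeroes the
# signature field before hashing; that detail is omitted here for clarity.

def sign(session_key: bytes, message: bytes) -> bytes:
    return hmac.new(session_key, message, hashlib.sha256).digest()[:16]

def verify(session_key: bytes, message: bytes, signature: bytes) -> bool:
    # Constant-time comparison avoids leaking how many bytes matched.
    return hmac.compare_digest(sign(session_key, message), signature)
```

Any tampering with the message, or any command the attacker substitutes, fails verification at the destination and is discarded.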
If you’ve been paying attention, you probably understand that SMB signing alone can prevent the attack described above; however, the reason I broke this down into three pieces is that enabling SMB signing won’t prevent other impersonation techniques that remain possible while LLMNR is enabled. Disabling LLMNR is by far the easiest and fastest way to prevent a multitude of attack types and should have zero impact on your network. Disabling WPAD in the browsers should be equally harmless to your environment. In addition, if you are using a third-party DNS solution, create a static record for “wpad” pointing to “127.0.0.1” so that no attacker device can register that name. SMB signing will require some protocol discovery or, at a minimum, a technology inventory to confirm that no systems will break once signatures are required.
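For reference, these mitigations ultimately come down to a handful of registry values that the corresponding Group Policy settings write for you. The sketch below is purely illustrative; the paths reflect what I believe are the standard policy locations, but verify them against Microsoft’s documentation for your Windows versions before deploying.

```python
def llmnr_trio_hardening():
    """Registry values the corresponding GPO settings are expected to write.

    Illustrative sketch: confirm each path against Microsoft's reference
    documentation for your OS versions. Returns (key_path, value_name,
    desired_data) tuples for use by your configuration tooling.
    """
    return [
        # "Turn off multicast name resolution" = Enabled (disables LLMNR)
        (r"HKLM\SOFTWARE\Policies\Microsoft\Windows NT\DNSClient",
         "EnableMulticast", 0),
        # Require SMB signing for inbound (server-side) connections
        (r"HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters",
         "RequireSecuritySignature", 1),
        # Require SMB signing for outbound (workstation/client) connections
        (r"HKLM\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters",
         "RequireSecuritySignature", 1),
    ]
```

Emitting the settings as data like this makes it easy to feed the same list to whatever deployment mechanism you use, and to audit machines against it later.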
Incorrect Group Policy Object Permissions
GPOs simplify computer management within Active Directory and are often used to set standards or baselines across an entire organization. They really do make a system administrator’s life easier; however, many administrators seem genuinely confused about how to apply the settings in a GPO to a group of users or computers, which results in huge holes that attackers can leverage to take over an Active Directory domain.
All GPOs must be linked to an Organizational Unit (OU) to be applicable to those users or computers. This seems to be the easy part and is well understood; after all, troubleshooting this is relatively easy. If the GPO is not linked, it just doesn’t work. However, we see the “Delegation” tab in a GPO consistently misused, and there is no warning indicator when this occurs. The GPO is applied as expected and the admin moves on.
By default, a GPO’s Delegation properties contain READ access for the Authenticated Users group and EDIT access for the Domain Admins group, the Enterprise Admins group, and SYSTEM. This should make perfect sense: Domain Admins are typically the ones who create or modify GPOs, while members of the Authenticated Users group, which contains both user and computer objects, only need to read the GPO to know which settings to apply.
It should also be understood that every setting in a GPO is essentially a command that runs at the permission level of the user or computer to which it applies. From an attacker’s perspective, this means code execution. EITS commonly sees this default configuration modified by confused admins who add the Domain Users group with FULL control; this inadvertently allows anyone to modify the GPO, which may let an attacker lower the security level of the target or add new settings that execute code. Depending on the OU to which the GPO is linked, this can have far-reaching effects; the worst case is the Default Domain Policy GPO, which applies to the entire domain.
An exacerbating problem is that logging for GPO modification is not enabled by default, so there is often no record of when a change was made or by whom; even when enabled, the events do not list what was changed within the GPO. Many of the misconfigurations discovered during penetration testing engagements have been estimated by customers to be multiple years old. This is another discovery that, when made by an attacker, can be used to take over a domain in less than 5 minutes.
Rarely should the “Delegation” tab on GPOs be changed from the default, unless your organization has decided to further restrict GPO management to a group smaller than Domain Admins. In that case, the “Delegation” tab should still look the same across all GPOs and be checked regularly for inconsistencies. If your organization follows a change management process, then enabling the events associated with GPO changes will be helpful for determining modification outside of change windows or by users who should not be doing so.
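Checking the Delegation tab across every GPO by hand does not scale, but the audit is simple enough to script. The sketch below is hypothetical tooling, not a built-in API: it assumes you have exported each GPO’s delegation entries (for example, via the Get-GPPermission PowerShell cmdlet) into (trustee, permission) pairs, and it flags any unexpected editor.

```python
# Default-safe trustees; anything else holding write rights deserves scrutiny.
EXPECTED_EDITORS = {"Domain Admins", "Enterprise Admins", "SYSTEM"}

# Permission labels as displayed on the GPMC Delegation tab that imply
# the ability to change the GPO's settings.
WRITE_LEVELS = {"Edit settings", "Edit settings, delete, modify security"}

def audit_gpo_delegation(delegation):
    """delegation: list of (trustee, permission) pairs for one GPO.

    Returns the sorted set of trustees that can modify the GPO but are
    not in the expected-editors list.
    """
    return sorted({trustee for trustee, perm in delegation
                   if perm in WRITE_LEVELS and trustee not in EXPECTED_EDITORS})
```

Run against the defaults, this returns nothing; run against the misconfiguration described above, it immediately surfaces Domain Users as a rogue editor, which is exactly the kind of drift a regular check should catch.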
Server Out-of-Band Management Interfaces Enabled
Many physical servers ship with a dedicated Network Interface Card (NIC) for out-of-band management. Connecting to a server on this interface provides a Graphical User Interface (GUI) for server management with low-level access to the physical hardware, such as power and disks, and sometimes full-featured console access. Each vendor has its own name for these interfaces, such as iLO (HPE), DRAC (Dell), and the vendor-neutral IPMI. The problem is that the firmware behind these interfaces is frequently old and outdated, containing multiple vulnerabilities that yield admin-level access. All an attacker needs to do is get on the internal network and look for these web interfaces; when they are found, the attacker exploits them to gain access to the management software. At that point, there is a good chance they can reach the console of a machine where an admin is still logged in from the last time troubleshooting occurred on the physical hardware.
The impact here is hit or miss, depending on whether your admins are trained to log out every time they leave a server. Without a console session, usually all an attacker can do is create a very annoying Denial of Service (DoS); however, what we often find is that customers are not using the interface or did not even know it was active. When servers are racked in a datacenter, it’s not always the System Admins doing the work; the deployment and datacenter teams never have to deal with the resulting vulnerabilities and merely plug cables into all interfaces as instructed. The fix here is easy: if you are not using these out-of-band interfaces, do not plug them in. Consider adding a step to your deployment process where these interfaces are taped over to ensure they are never accidentally connected.
Overly Permissive Anti-Virus Exclusions
Our final common finding on penetration tests is Anti-Virus exclusions. We all understand the importance of Anti-Virus and generally understand how it does its job; however, troubleshooting Anti-Virus problems in an enterprise setting is not fun, so exclusions are often made far too broad just to make the problem go away. The user is happy, the admin is happy, and everything resumes as normal; the problem is that inadvertent holes have been opened that do not set off alarm bells, much like the other issues in this article. Networking professionals are taught to create the tightest firewall rule possible; unfortunately, most Anti-Virus administrators have not had this training and do not follow this concept.
Most Anti-Virus solutions store the list of exclusions in clear text, readable by an unprivileged user who knows where to look. We have seen them in log files, registry keys, and sometimes the GUI itself; you should not consider the excluded locations a secret. With that understanding, there are some common rules to follow when creating exclusions. First, never create a file extension exclusion without a full path; doing so lets an attacker execute malicious code from anywhere on the machine simply by using that extension. Attackers are not restricted by file type associations, so malicious code can be loaded onto a machine and run with any file extension.
It is common to see entire folders excluded, and this is sometimes necessary; be sure to scope these exclusions down as tight as possible. In reality, the correct answer is usually to specify the filenames or file types within that directory rather than excluding the entire thing.
The final and most significant problem with Anti-Virus exclusions is excluding files or folders in user-modifiable locations. If you exclude anything within the user profile directory, an attacker can leverage it, either by placing malware in that folder or by renaming the existing excluded file and replacing it with malware. The problem is not limited to the user profile directory, so check the permissions on the destination of any excluded object.
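These rules are easy to encode in a quick audit script. The following Python sketch is illustrative, with made-up path prefixes and reason strings; adapt the lists to your environment and to however your AV product exports its exclusions.

```python
# Hypothetical audit helper: flags exclusion entries that violate the three
# rules above. Prefixes below are examples of commonly user-writable paths;
# extend the tuple for your environment.
USER_WRITABLE_PREFIXES = (r"c:\users", r"c:\programdata", r"c:\windows\temp")

def flag_risky_exclusions(exclusions):
    """exclusions: list of exclusion strings as exported from the AV console.

    Returns (entry, reason) pairs; an entry may be flagged more than once.
    """
    findings = []
    for ex in exclusions:
        e = ex.lower()
        # Rule 1: extension-only exclusions work from any path on disk.
        if e.startswith("*.") or (e.startswith(".") and "\\" not in e):
            findings.append((ex, "extension-only: runs from anywhere"))
        # Rule 2: whole-folder exclusions should name files or types instead.
        elif e.endswith("\\") or e.endswith("\\*"):
            findings.append((ex, "whole folder excluded"))
        # Rule 3: anything under a user-writable location is attacker-reachable.
        if any(e.startswith(p) for p in USER_WRITABLE_PREFIXES):
            findings.append((ex, "user-writable location"))
    return findings
```

Running a list through this during change review catches the broad entries before they reach production, rather than during your next pen test.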
Penetration testing time frames are highly compressed, so not all testers will look for these older problems. Just because you have never received a finding for these issues does not mean you do not have them. My hope is that you will take a moment to inspect these simple configurations, harden your environment, and prevent findings on your next pen test. If you’ve learned anything from this article, please pass it on to a peer or another organization; sharing information improves everyone’s security posture.
The EITS mission is “to understand each of our customers’ unique needs, guide them toward the right security investments, and provide the best possible professional services to defend their people and their data.”