6. Privilege Escalation – Privileged Attack Vectors: Building Effective Cyber-Defense Strategies to Protect Organizations

© Morey J. Haber 2020
M. J. HaberPrivileged Attack Vectorshttps://doi.org/10.1007/978-1-4842-5914-6_6


Morey J. Haber1 
Heathrow, FL, USA
Once we have established an authenticated session of any type, whether the session is legitimate or hacked via any of the attacks previously discussed, a threat actor’s typical goal is to elevate privileges and extract data. Figure 6-1 illustrates this based on the models we have been discussing. A standard user typically does not have rights to a database, sensitive files, or anything of value en masse. So, how does a threat actor navigate an environment and gain administrator or root privileges to exploit them as an attack vector? There are five primary methods:
  • Credential exploitation

  • Vulnerabilities and exploits

  • Misconfigurations

  • Malware

  • Social engineering

In addition, some security solutions designed to protect against these threats, when not properly hardened or maintained, could lead to exploitation using any of the techniques listed here too.
Figure 6-1

Privilege Hijacking and Escalation

Credential Exploitation

We have already established that valid credentials will allow you to authenticate against a resource. This is how authentication works. However, once a username is known, obtaining the account’s password becomes a hacking exercise. Often a threat actor will first target an administrator or executive, since their credentials often have privileges to directly access sensitive data and systems, enabling the cybercriminal to move laterally while arousing little or no suspicion. For a threat actor, going undetected is key to the success of their mission. They need to start the infiltration by gaining a foothold within the environment. Gaining this beachhead could be the result of anything from leveraging missing security patches through social engineering. Once the initial infiltration has been successful, threat actors will typically perform surveillance and be patient, waiting for the right opportunity to continue their mission. Threat actors will customarily pursue the path of least resistance and will take steps to cover their tracks in order to remain undetected. Whether this involves masking their source IP address or deleting logs tied to the credentials they are using, any evidence of their presence can be an indicator of compromise. Once identified, this can be used either to stop their movement or to allow the organization to ramp up forensics and monitor their intentions.

There are multiple philosophies on what to do once a breach is detected that are outside of the scope of this book. Regardless, when dealing with compromised credentials, everything privileged to that account is now fair game for the attacker. Resetting passwords is typically a priority and reimaging infected systems is a standard practice (especially if it involves servers). However, simply requesting the end user to change a password does not always resolve the incident because the method of obtaining the credentials in the first place may involve other attack vectors, like malware. Compromised credentials are the easiest privileged attack vector for a threat actor, and the accounts associated with them control almost every aspect of a modern information technology environment, from administrators to service accounts.

As previously discussed, the theft of credentials can be performed in a variety of ways, ranging from password reuse to memory-scraping malware. Stolen administrator credentials allow direct exploitation of resources. Standard user credentials could allow access to sensitive data based on a user’s role and job title. Privilege escalation from a standard user to an administrator can happen using any of the techniques described in the following sections. Therefore, credentials compromised for the most sensitive accounts (domain administrator, database administrator, etc.) can be a “game-over” event for some companies, and those accounts should always be treated with care and properly identified during a risk assessment. These credentials are a prime attack vector for privilege escalation, and their protection should be prioritized over the course of your PAM journey.
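Password reuse is one reason compromised-credential checks matter. The sketch below illustrates the k-anonymity pattern used by breached-password services such as Have I Been Pwned: only the first five characters of a password’s SHA-1 hash ever leave the machine, and the comparison against the returned suffix list happens locally. The response body here is simulated; a real check would query the service’s range endpoint over HTTPS.

```python
import hashlib

def hash_prefix_suffix(password):
    """Split the SHA-1 hash of a password for a k-anonymity range query.

    Only the 5-character prefix would ever be sent to the breach-lookup
    service; the suffix is compared locally against the returned list.
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def is_breached(suffix, range_response):
    """Check a local hash suffix against a range-query response body.

    Each line of the response has the form 'SUFFIX:COUNT'.
    """
    for line in range_response.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix and int(count) > 0:
            return True
    return False

prefix, suffix = hash_prefix_suffix("password")
# A real lookup would GET the range endpoint for <prefix>; simulated here.
simulated_response = f"{suffix}:3861493\n0123456789ABCDEF0123456789ABCDEF012:2"
print(prefix, is_breached(suffix, simulated_response))
```

Because the full hash never leaves the host, the lookup itself does not become another credential-leakage channel.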

Vulnerabilities and Exploits

A vulnerability by itself does not allow a privileged attack vector to succeed. In fact, a vulnerability in and of itself just means that a risk exists and that some type of attack could succeed. Vulnerabilities are nothing more than mistakes: mistakes in the code, design, implementation, or configuration that potentially allow malicious activity to occur via an exploit. Thus, without an exploit, a vulnerability is just a potential problem and is used in a risk assessment to gauge what could happen. Depending on the vulnerability, the available exploits, and the resources affected by the flaw, the actual risk could be limited in scope, or it could signify an impending disaster. While this is a simplification of a real risk assessment, it provides the foundation for privileges as an attack vector. Not all vulnerabilities and exploits are equal, and depending on the privileges of the user or application executing in conjunction with the vulnerability, the escalation potential and effectiveness of the attack vector can change.

For example, an operating system vulnerability executed by a standard user vs. an administrator can have two completely different sets of risks once exploited. Run as a standard user, the exploit might not work at all, could be limited to the standard user’s privileges, or could still gain full administrative access to the host. In fact, as reported by BeyondTrust in 2019,1 81% of Microsoft vulnerabilities could be mitigated by operating as a standard user vs. an administrator. And, if the user is using a domain administrator account or other elevated privileges, the exploit could have permissions to the entire environment. This is something a threat actor targets as low-hanging fruit: who is operating outside of security best practices, and how can I leverage them to infiltrate the environment?

With this in mind, vulnerabilities come in all “shapes and sizes.” They can involve the operating system, applications, web applications, infrastructure, and so on. They can also target the protocols, transports, and communications between resources, from wired networks and Wi-Fi to tone-based radio frequencies. However, not all vulnerabilities have exploits. Some have only proofs of concept, some are unreliable, and some are easily weaponized and even included in commercial penetration testing tools or free open source hacking tools. In addition, some vulnerabilities are sold on the dark web to perpetrate cybercrimes, and others are used exclusively by nation-states until they are patched or made public (intentionally or not). The point is that vulnerabilities can be in anything at any time. It is how they are leveraged that makes them important, and if a vulnerability leads to an exploit that can change privileges (privilege escalation from one user’s permissions to another), the risk is a very real privileged attack vector. To date, less than 10% of all Microsoft vulnerabilities allow for privilege escalation, yet these are the types of vulnerabilities that have been responsible for some of the worst exploits in recent years, from BlueKeep2 to WannaCry3 to NotPetya.

The security industry has multiple security standards to convey the risk, threat, and relevance of a vulnerability. The most common standards are the following:
  • Common Vulnerabilities and Exposures (CVE): A standard for information security vulnerability names and descriptions.

  • Common Vulnerability Scoring System (CVSS): A mathematical system for scoring the risk of information technology vulnerabilities.

  • The Extensible Configuration Checklist Description Format (XCCDF): A specification language for writing security checklists, benchmarks, and related kinds of documents.

  • Open Vulnerability Assessment Language (OVAL): An information security community effort to standardize how to assess and report upon the machine state of computer systems.

  • Common Configuration Enumeration (CCE): Provides unique identifiers to system configuration issues to facilitate fast and accurate correlation of configuration data across multiple information sources and tools.

  • Common Weakness Enumeration (CWE): Provides a common language of discourse for discussing, finding, and dealing with the causes of software security vulnerabilities as they are found in code.

  • Common Platform Enumeration (CPE): A structured naming scheme for information technology systems, software, and packages.

  • Common Configuration Scoring System (CCSS): A set of measures of the severity of software security configuration issues. CCSS is a derivation of CVSS.

The information from these standards allows security professionals and management teams to discuss and prioritize the risks from vulnerabilities. The vulnerabilities with privilege escalation exploits that can operate without any end-user intervention pose the highest risk. These are weaponized in the form of malware called “worms.”
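The CVSS standard listed above is concrete enough to compute. The sketch below implements the CVSS v3.1 base score for the unchanged-scope (S:U) case, using the metric weights from the FIRST specification; a full implementation would also handle changed scope and the temporal and environmental metric groups.

```python
# CVSS v3.1 metric weights (scope unchanged), per the FIRST specification.
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                        # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}             # Privileges Required
UI = {"N": 0.85, "R": 0.62}                        # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}             # C/I/A impact

def roundup(x):
    """CVSS v3.1 Roundup: smallest one-decimal value >= x (float-safe)."""
    i = round(x * 100000)
    return i / 100000 if i % 10000 == 0 else (i // 10000 + 1) / 10

def base_score(av, ac, pr, ui, c, i, a):
    """Base score for an unchanged-scope (S:U) vector."""
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H scores 9.8 (Critical)
print(base_score("N", "L", "N", "N", "H", "H", "H"))
```

Note how the Privileges Required (PR) weight directly rewards least privilege: the same flaw scores lower when exploitation demands high privileges, which is the quantitative version of the argument this chapter makes.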

In the end, information technology teams must prevent any type of exploitation, especially ones that are simple for a threat actor to perform. With a common language and structure across vendors, companies, and governments, we can better define mitigation and remediation strategies. A critical risk for one company may not exist for another simply based on their environment. Standards like CVSS allow for that to be communicated correctly to all stakeholders and help define best practices for mitigation. Figure 6-2 illustrates perimeter exploitation typically associated with vulnerabilities as it relates to privileged attack vectors.
Figure 6-2

Perimeter Exploitation and Considerations

As discussed, exploits require a vulnerability. Without a documentable flaw, an exploit cannot exist. We may just not understand the vulnerability when a new exploit appears in the wild. It can take some time for security professionals to reverse engineer an exploit to figure out what vulnerability was leveraged. This is typically a very technical forensics exercise performed by specialists in the industry. An exploit that can gain privileges, execute code, and go undetected is not only dependent on the vulnerability, but also on the privileges the exploit has when it executes. This is why vulnerability management, risk assessments, patch management, and privileged access management are so important. Exploits can only execute in the confines of the resource they compromise. If no vulnerability exists due to remediation, the exploit cannot execute. If the privileges of the user or application with the vulnerability are low (standard user), and no privilege escalation exploitation is possible, then the attack is limited in its capabilities or may not work at all. However, don’t be fooled: exploitation, even at standard user privileges, can cause devastation in the form of ransomware or other vicious attacks. Fortunately, the vast majority can be contained, or otherwise mitigated just by lowering privileges and minimizing the surface area for a privileged attack. Exploits wreak the most havoc with the highest privileges, hence the recommendation to operate with the least amount of privileges as a mitigation strategy.


Misconfigurations

Configuration flaws are just another form of vulnerability. They are, nonetheless, flaws that do not require remediation, only mitigation. The difference between remediation and mitigation is key. Remediation implies the deployment of a software or firmware patch to correct the vulnerability; this is commonly referred to as patch management. Mitigation is simply a change at some level in the existing deployment that deflects (mitigates) the risk of exploitation. It can be a simple change to a file, a group policy, a certificate, or some other setting. In the end, configuration flaws are vulnerabilities based on weak configurations or improper hardening, and they can be easily exploited as a privileged attack vector.

The most common configuration problems exploited for privileges involve accounts that have poor default security practices. This could be blank or default passwords upon initial configuration for administrator or root accounts, or insecure access that is not locked down after an initial install due to a lack of expertise or an undocumented backdoor.

Regardless, configuration flaws just require a change to the resource. And, if the flaw is severe enough, a threat actor can have root privileges with little to no effort.
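A minimal configuration audit can catch the default-credential problem described above. This sketch assumes an inventory of (service, username, password) tuples gathered by a configuration scan; the DEFAULT_CREDENTIALS entries and service names are illustrative only, not a complete list.

```python
# Hypothetical audit: flag accounts still using well-known default credentials.
DEFAULT_CREDENTIALS = {
    ("admin", "admin"),
    ("admin", ""),          # blank password after initial install
    ("admin", "password"),
    ("root", "toor"),
}

def audit_accounts(accounts):
    """Return (service, username) pairs whose credentials are known defaults.

    `accounts` is an iterable of (service, username, password) tuples,
    as might be gathered by a configuration or discovery scan.
    """
    return [
        (service, user)
        for service, user, pwd in accounts
        if (user.lower(), pwd.lower()) in DEFAULT_CREDENTIALS
    ]

findings = audit_accounts([
    ("router", "admin", "admin"),
    ("database", "app_svc", "S7rong&Unique!"),
])
print(findings)  # -> [('router', 'admin')]
```

Real scanners work from much larger vendor-specific default lists, but the mitigation is the same either way: change the setting, no patch required.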


Malware

Malware, which includes viruses, spyware, worms, adware, ransomware, and so on, refers to any class of undesirable or unauthorized software designed with malicious intent toward a resource. The intent can range from surveillance, data leakage, and disruption to command and control and extortion. Pick your favorite crime that can be translated to an information technology resource, and malware can provide the vehicle for a threat actor to commit it. Malware, like any other program, can execute with any permissions, from standard user to administrator (root). Depending on its creation, intent, and privileges, the damage it can do ranges from an annoyance to a game-over event.

Malware can be installed on a resource via a vulnerability and exploit combination, through legitimate installers, through weaknesses in the supply chain, or even via social engineering, such as phishing. Regardless of the delivery mechanism, the motive is to get unauthorized code executing on a resource. Once it is running, a battle begins between endpoint protection vendors and threat actors: the malware tries to keep executing, avoid detection, and remain persistent, including adapting itself to evade defenses and disabling them to continue proliferation. Based on its intent, malware can perform functions like pass-the-hash and keystroke logging, stealing passwords so that privilege-based attacks can be carried out by the malware itself or by other attack vectors deployed by the threat actor. Malware is just a transport vehicle for the propagation of a sustained attack and, ultimately, needs permissions to obtain the information the attacker is targeting. It is a broad category of malicious software, but when discussing privileges, the subset that scrapes memory, installs additional malicious software, or provides surveillance is the most relevant.

Social Engineering

If you grew up with siblings, you might have had the fortune of being the brunt of a practical joke: everything from “smell my finger” and “open this box” to “taste this.” While the examples are rather crude, they are no different from the hacking we all experience via social engineering and a threat actor’s desire to gain privileges. Our siblings’ main motive was to leverage our trust into doing something mischievous or embarrassing for their amusement (usually laughter). As harmless as it sounds, we hopefully learned for the next time.

Social engineering is no different. We place blind trust in the email we receive, the phone call we answer, or even the letter in the mail, believing that the stated sender is actually the one contacting us. If the message is crafted well enough, potentially even spoofing someone we already trust, then the threat actor has already taken the first step in deceiving us and carrying out a ruse. If we then act on the fake correspondence from a work colleague, friend, company, or even a sweepstakes, we may just become victims of social engineering.

Considering the modern threats in the cyber world, from ransomware to the recording of our voices on a phone call, the outcome can be much more severe than eating a dead worm presented as beef jerky by a sibling. At the risk of becoming paranoid about every email we receive and phone call we answer, we need to understand how social engineering works and how to identify it in the first place, without losing our sanity. This learned behavior is no different from figuring out whether your sibling has lied about a message from your parents. Sometimes you just need to verify the message before taking action and understand the risks of the outcome, should you engage.

From a social engineering perspective, threat actors attempt to capitalize on a few key human traits to meet their goals:
  • Trust: The belief that the correspondence, of any type, is from a trustworthy source.

  • Gullibility: The belief that the contents, as crazy or simple as they may be, are, in fact, real.

  • Sincerity: The intent of the content is in your best interest to respond or open.

  • Lack of suspicion: The contents of the correspondence do not raise any concern, despite telltale signs like misspellings, poor grammar, or a robotic-sounding voice on the phone.

  • Curiosity: The attack technique has not been identified (as part of previous training), or the person recognizes the attack vector but does not react accordingly.

  • Laziness: The correspondence initially looks good enough, but investigating the URLs and contents for malicious activity does not seem worth the effort.

If we consider each of these characteristics, we can appropriately train team members to be resistant to social engineering. The difficulty is overcoming human traits and not deviating from the education. To that end, please consider the following training parameters and potential self-awareness techniques to stop social engineering and privileged attacks:
  • Team members should only trust requests for sensitive information from known and trusted team members. An email address alone in the “From” line is not sufficient to verify the request, nor is an email reply. The sender’s account could be compromised. The best option is to learn from two-factor authentication techniques and pick up the phone or verify the email using another communications path. For example, call the party requesting the sensitive information and verify the request. If the request seems absurd, like requesting W-2 information or a wire transfer, confirm this is acceptable according to internal policies or other stakeholders, such as finance or human resources (it could be an insider attack). Simple verification of the request from an alleged trusted individual, like a superior, can go a long way to stopping social engineering. Also, all of this should occur before opening any attachments or clicking any links due to any existing vulnerabilities and exploits. If the email is malicious, the payload and exploit may have executed before you performed any verification.

  • If the request comes from an unknown but moderately trusted source, such as a bank or business you interact with, simple techniques can stop you from being gullible. First, check all the links in the email and make sure they actually point back to the proper domain; just hovering over a link in most computers and email programs will reveal the actual destination. If the request is over the phone, never give out personal information. Remember, they called you. For example, the IRS will never contact you by phone; they only use USPS for official correspondence. Don’t let yourself fall for the “sky is falling” metaphor.
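The “hover over the link” advice can also be automated. The sketch below compares a link’s actual hostname against the domain it claims to represent; a production mail filter would additionally handle punycode, URL shorteners, redirects, and public-suffix rules, none of which are covered here.

```python
from urllib.parse import urlparse

def link_matches_domain(href, expected_domain):
    """Rough check that a link actually points at the domain it claims to.

    Compares the parsed hostname against the expected registered domain,
    accepting legitimate subdomains (www.example-bank.com) but rejecting
    look-alike hosts where the brand appears as a subdomain of an
    attacker-controlled domain (example-bank.com.evil.io).
    """
    host = (urlparse(href).hostname or "").lower()
    expected = expected_domain.lower()
    return host == expected or host.endswith("." + expected)

print(link_matches_domain("https://www.example-bank.com/login", "example-bank.com"))    # True
print(link_matches_domain("http://example-bank.com.evil.io/login", "example-bank.com")) # False
```

The second case is exactly the trick phishing emails rely on: the familiar brand name is visible in the URL, but the registered domain is the attacker’s.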

  • Teaching how to distinguish between genuine, legitimate correspondence and fraudulent correspondence is rather difficult. Social engineering can take on many forms, from accounts payable requests, love letters, and resumes to human resources interventions. Just stating “if it seems too good to be true” or “nothing is ever free” only handles a very small subset of social engineering attempts. In addition, if peers receive the same correspondence, it only eliminates spear phishing as the probable attack vector. The best option is to consider whether you should be receiving the request in the first place. Is this something you normally receive, or is it out of the ordinary? If it is an unusual request, default back to caution, verify trust, and therefore verify the intent before proceeding. This is especially important with the advent of deepfake voices and photos that are nearly impossible to distinguish from real people and images.

  • Suspicious correspondence is the easiest way to detect and deflect social engineering attempts. This requires a little detective-style investigation into the correspondence by looking for spelling mistakes, poor grammar, bad formatting, or robotic voices on the phone that could be deepfakes. This is expressly true if the request is from a source with whom you have never had an interaction in the past. This could be an offer of a free cruise, or from a bank at which you have no accounts. If there is any reason to be suspicious, it is best to err on the side of caution: do not open any attachments or files, click any links, or verbally reply—just delete the correspondence. If it is real, the responsible party will call back in due course.

  • Curiosity is the worst offender from a social engineering perspective. Nothing should happen to me, since I am fully protected by my computer and my company’s information technology security resources, right? That is a false and dangerous assumption. Modern attacks can circumvent the best systems and application control solutions, even leveraging native OS commands to conduct their attacks. The best defense for a person’s curiosity is purely self-restraint. Do not reply to “Can you hear me?” on a strange phone call; do not open attachments if any of the preceding criteria have been met; and do not believe that nothing can happen to you (even if you use macOS). The fact is, it can, and your curiosity should not be the cause. Being naïve will make you a victim.

Social engineering is a huge problem, and there is no technology that is 100% effective. Spam filters can strip out malicious emails, and endpoint protection solutions can find known or behavior-based malware, but nothing can totally stop the human problem of social engineering and potential insider threats. The best defense for social engineering is education and an understanding of how these attacks leverage our own traits to be successful. If we can understand our own flaws and react accordingly, we can minimize the threat actor’s ability to compromise resources and gain privileges within the environment.

Multi-factor Authentication

While we have been focusing on passwords as the primary form of credential-based authentication, other authentication techniques should be used to strengthen the authentication model. This is especially true for privileged accounts. As a security best practice, and as mandated by many regulatory authorities, multi-factor authentication (MFA) should be used to secure access rather than a traditional username and password combination alone (single factor). MFA provides an additional layer that makes it more difficult (but not impossible) to hack and, thus, is always recommended when securing sensitive information.

The premise of MFA (two-factor authentication is a subset of MFA) is simple. In addition to a traditional username and password credential, an additional “passcode” or piece of evidence is needed to validate the user. This is more than just a PIN code; it is best implemented when there is something physical to reference. The delivery and randomization of this “proof” varies from technology to technology and from vendor to vendor. The proof typically takes the form of knowledge (something the user knows that is unique to them), possession (something they physically have that is unique to them), or inherence (something they are).

The use of multiple authentication factors provides important additional protection for an identity. An unauthorized threat actor is unlikely to be able to supply every factor required for access. During a session, if at least one of the components is missing or incorrect, the user’s identity is not verified with sufficient confidence, and access to the resource protected by multi-factor authentication is denied. The authentication factors of a multi-factor authentication model typically include the following:
  • A physical device or software like a phone app or USB key that produces a secret passcode re-randomized on a regular frequency.

  • A secret code known only to the end user, like a PIN that is typically mentally stored.

  • A physical characteristic that can be digitally analyzed for uniqueness, like a fingerprint, typing speed, or voice. These are called biometric authentication technologies.
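The rotating passcode described in the first bullet is typically generated with the HOTP and TOTP algorithms (RFC 4226 and RFC 6238): an HMAC-SHA1 over a moving counter, dynamically truncated to a short decimal code. A minimal sketch:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret, counter, digits=6):
    """HOTP (RFC 4226): HMAC-SHA1 of a moving counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

def totp(secret, step=30):
    """TOTP (RFC 6238): HOTP computed over the current 30-second time window."""
    return hotp(secret, int(time.time()) // step)

def verify(secret, submitted):
    """Constant-time comparison avoids leaking the expected code via timing."""
    return hmac.compare_digest(totp(secret), submitted)

# RFC 4226 test vector: ASCII key "12345678901234567890", counter 0 -> 755224
print(hotp(b"12345678901234567890", 0))
```

This is why the passcode “re-randomizes on a regular frequency”: the counter is derived from the clock, so each 30-second window yields a different code, and the shared secret never travels over the wire.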

MFA is an identity-specific layer for authentication. Once a user is validated, the privileges assigned as a potential attack vector are no different, unless policies explicitly require multi-factor authentication for them to be assigned. For example, if credentials are compromised in a traditional username and password model, a threat actor could authenticate against any target that will accept them, locally or remotely. With multi-factor, even though an additional variable is required, potentially including physical presence, once you are validated, lateral navigation is still possible from your initial location (barring any segmentation technology or policy). The difference is solely your starting point for authentication. Multi-factor authentication must have all of its security conditions met at the entry point, while traditional credentials do not. A hacker can leverage credentials within a network to jump from host to host, changing credentials as needed, but cannot target a multi-factor host for authentication unless they have compromised the multi-factor system itself or possess an identity’s complete multi-factor challenge and response. Hence, there always needs to be an initial entry point for a multi-factor session, and once inside, using credentials is the easiest method for a threat actor to continue a privileged attack with additional lateral movement.

Local vs. Centralized Privileges

In subsequent chapters, we will discuss the various approaches to strong and efficient privileged access management that are available to organizations. As we discuss the privileged attack vector in depth, it will become apparent that this goal may be best served by an identity governance solution that leverages a directory service foundation. However, as organizations consolidate and simplify identity infrastructures, they must be cautious: if not implemented or secured correctly, a centralized identity service can become an organization’s greatest weakness. If one privileged account is compromised, lateral movement (Figure 6-3) to other resources that rely on and trust this service for authentication may be possible.
Figure 6-3

Lateral Movement and Exfiltration

A strong, centralized IAM implementation permits authentication across layers: file systems, operating systems, users, applications, data, and even business partners. It is an age-old information technology dilemma to provide the best security while allowing for smooth and seamless business functions. Too much security, and nothing works. Too little security, and the environment can become an instrument for a threat actor to operate anywhere within it.

In the end, the best considerations for privileges are granularity and centralization using an identity governance model. This allows fine-grained control of rights and a single place for management. For modern infrastructure, this is the best security practice we can implement today.