Front Matter – Privileged Attack Vectors: Building Effective Cyber-Defense Strategies to Protect Organizations

Morey J. Haber

Privileged Attack Vectors

Building Effective Cyber-Defense Strategies to Protect Organizations

2nd ed.
Morey J. Haber
Heathrow, FL, USA
ISBN 978-1-4842-5913-9
e-ISBN 978-1-4842-5914-6
© Morey J. Haber 2017, 2020
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Distributed to the book trade worldwide by Springer Science+Business Media New York, 233 Spring Street, 6th Floor, New York, NY 10013. Phone 1-800-SPRINGER, fax (201) 348-4505. Apress Media, LLC is a California LLC and the sole member (owner) is Springer Science + Business Media Finance Inc (SSBM Finance Inc). SSBM Finance Inc is a Delaware corporation.

Having a happy and healthy life is all the privileges anyone in the world should ever need.


You almost can’t read a news article or watch the nightly news these days without seeing some reference to hacking. One company after the next falls victim to cybersecurity breach incidents and data loss, and the frequency of these data breaches has been accelerating for over 10 years. These events are now so commonplace that we scarcely pay them any attention. We simply accept that our most private information, from our financial records to our likes and dislikes, and even our genetic profiles, is open for the public to read.

The problem in our new overconnected world is everyone wants everything immediately available at their fingertips. We want to shop from the convenience of our couch and have the items show up at our door—sometimes within hours of making our purchase decision. Or, we want to pay our friend back for buying lunch and ship funds from one electronic source to another with the click of a button. We bank online, shop online, talk online, play games online, and perform myriad other activities too numerous to list. More and more we are interconnecting every aspect of our lives and giving away our most precious commodity, our private information.

The majority of the world’s users simply don’t understand the value of what is being given away. One 2020 study estimated that the average Internet user has 207 accounts, seven of which are for social media platforms alone. Unfortunately, the average user doesn’t really understand what it takes to protect those accounts either. During a recent rollout of a single sign-on (SSO) platform, I discovered how many users didn’t understand that it takes more than using their child’s name as a password to protect their information. I was even asked why any of it was necessary. Didn’t the hackers already have all of this anyway?

The problem is, how do we really know who we are giving all this information to? More importantly, who are these companies then giving our information to? I can’t tell you the number of times I’ve looked up some product or service on Google only to find it being offered at a discounted price the next time I’m shopping on Amazon. How often are the things we discuss on social media, our search histories, and buying habits used to “enhance” our online experience?

From a business perspective, it’s a great model. The customers do all the work, provide all the information free of charge, and the business gets to reap the rewards. It’s basically the business model Google was founded on—provide other businesses with targeted marketing research based on the consumers’ search habits, for a fair price of course.

We all hope that the companies we interact with will protect our information the same way we would protect it ourselves. The truth is they often don’t have the resources to accomplish this goal. It seems like every day a new attack vector is discovered. A new computer virus or malware strain is released, or a hacker publishes a new technique to bypass a company’s defensive security tools, typically the very tools the company spent its hard-earned resources installing, configuring, and deploying to prevent such an incident. This onslaught of attacks is simply draining for companies.

Unfortunately, the anonymity of the Internet offers a playground for the unscrupulous. It was recently estimated that the global cybercrime economy generates over $1.5 trillion in profits. Every technology platform available is currently being exploited somehow by someone for their own gains. I know I get at least five calls a week either telling me my car warranty has expired or my computer has been reporting malicious activity and they want to help “fix” it. Every piece of personal information I’ve shared online has somehow come back to be used against me as part of this nefarious worldwide theft.

Sadly, many organizations just don’t take the necessary precautions to prevent the theft or misuse of the information we provide. Since the information security industry evolved from information technology (IT), there is a tendency to focus on the physical systems and networks while neglecting to adequately address the users and system admins. You might think, in today’s world, information security professionals would be ten steps ahead. The inexplicable truth is many simply aren’t. I’m not sure if it’s caused by too much work, a lack of time, or a simple lack of understanding, but the impact is the same: everyone’s private information is at risk.

If you read any modern breach report, you’ll find that a large percentage of data breaches around the world involve some sort of account compromise; most reports put the figure between 80 and 95 percent, depending on the report. No real surprise there. If you want to steal information, you’re going to need access to it. The big surprise, however, is how often it’s the compromise of an IT administrator’s credentials that leads to the loss.

Long gone are the days when we can think of the “hacker” as some mischievous or disaffected teenager in their basement, using an acoustic coupling modem to dial up NORAD and start World War III. They’re not even hanging out in a wine cellar with Halle Berry, watching a green screen graphic cube “hack” into a bank to siphon $9.5 billion from government slush funds. Today (2020), it’s more likely to be some sophisticated organization gaining access through an unprotected supply chain and infecting the downstream product, or, alternatively, some company employee who ends up with more access to information than they need.

If we take a precise look at the problem, we find that administrative privilege is commonly exploited in most modern “hacking” practices. If a threat actor can find a way to give a user more access than needed, or create a new account they can use on a system, the change often goes unnoticed by system administrators. In 2019, the average time it took to identify a data breach was 206 days. That means the “hacker” has more than six months to steal any information they want before the company even detects them. And that doesn’t include the time it then takes to figure out all the access they gained and remove it from every system.

The legitimate use of admin privileges is a business necessity in today’s technologically dependent environments. However, exploits to misuse privileges tend to outpace the innovations to protect against them. For example, many legacy applications that are still used by companies can’t support modern authentication practices to ensure administrators are valid. Many systems don’t protect user accounts with encryption or other general functions used to protect administrator access. Even some administrators don’t want to be burdened with two-factor authentication or any restrictions they perceive would impede their ability to support their organization in times of need.

Today’s businesses are a complex set of technologies, people, and processes. They simply can’t update everything and everyone to the most modern solutions available all the time. There is always going to be something, an old operating system, a custom application, or a long-standing process, that will need to be maintained.

In this second edition of Privileged Attack Vectors, Morey will help you understand the current threats to privileged accounts and why properly managing them is so important. This book provides a road map for protecting against a breach, preventing lateral movement, and improving your ability to detect hacker activity or insider threats in order to mitigate their impact. There is no silver bullet that guarantees protection against all vectors and stages of an attack, but Privileged Attack Vectors will arm you with the tools and strategy necessary to stand a fighting chance.

David Tyburski

Vice President of Information Security and CISO

Wynn Resorts


In quantum mechanics, the observer effect1 theory asserts that the mere observation or measurement of a system or event unavoidably alters that which it is observing/measuring. In other words, the tools or methods used for measurement or observation interfere with the system or event they are interacting with in some way.

As one example, consider the measurement of voltage across a circuit or battery. A voltmeter must draw a very small, but measurable, amount of current in order to make its reading. This lowers the overall current (I) ultimately available to the system. If the measurement were intrusive, that is, if the meter did not have a sufficiently high resistance (R), then by Ohm’s law (voltage equals current multiplied by resistance) both the available current and the measured voltage (V) would be noticeably impacted.
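To make the loading effect concrete, here is a minimal Python sketch (my own illustration, not from the text; the component values are hypothetical) of how a voltmeter's finite input resistance skews the very voltage it reports:

```python
# Illustrative sketch: a voltmeter's input resistance forms a voltage
# divider with the source resistance, so the act of measuring changes
# the measured value (component values below are hypothetical).

def measured_voltage(v_source, r_source, r_meter):
    """Reading produced by a meter of resistance r_meter (Ohm's law)."""
    return v_source * r_meter / (r_source + r_meter)

v, r_src = 10.0, 1e6  # 10 V source behind 1 megohm of source resistance
for r_meter in (1e6, 10e6, 1e9):
    reading = measured_voltage(v, r_src, r_meter)
    print(f"meter resistance {r_meter:.0e} ohms -> reads {reading:.3f} V")
```

The higher the meter's resistance, the closer the reading gets to the true 10 V, which is exactly the "minimize the observer effect" principle this chapter goes on to apply to security tooling.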

While the effects of measurement are commonly negligible, the object under observation or utilization may still experience a change. And sometimes, these changes can alter our perception of the entire system because the measurement itself is much more intrusive than anticipated, or initially designed. This effect can be found in domains ranging everywhere from physics to electronics—and even digital marketing.

This concept is fundamentally important in the world of privileged attack vectors because the more a privileged account is used, exposed, or made readily available, just like a measurement, the higher risk it has to an environment. To that end, let’s begin by diving into the role the observer effect plays in the realm of cybersecurity and home in on the importance of keeping privileges secure.

The Cybersecurity Observer Effect

Every measurement used for a cybersecurity check impacts the overall system. This is true for everything from a simple antivirus check all the way through the resources used for logging. CPU, time to load, memory, network traffic, and more can each be altered in the course of providing a security measurement for some activity.

Ideally, a security measurement should operate with little to no impact, but how often is this really the case? Can frictionless, no-impact security actually be implemented successfully within an environment? The answer may surprise you, and the observer effect plays a big part.

As we have established, every IT security measurement does alter a system and, in doing so, consumes resources and potentially changes the risk surface of an environment. If the measurements are performed serially, stretched out over time, each one adds a piece of information to the overall observable outcome. The total resource consumption is cumulative, comprising storage, authentication time, changes or delays in workflow, transmission of data, and auditing of all data logged as part of the assessment.

However, when measurements and logical decisions are performed in parallel (provided the system has enough resources to perform them simultaneously), the time needed to perform a measurement can be reduced and the perceived impact to the user minimized, because the measurement occupies a finite timeframe and does not persistently recur. This is basically parallel processing. To achieve no-impact security (truly, we’re talking about minimal-impact or as-frictionless-as-possible security), security measurements and operational logic should occur alongside regular processes, in parallel, and only when needed. This is contrary to typical measurements that occur at fixed intervals or in batches, where anomalies or security incidents, like an unauthorized remote session, could occur between measured intervals. These could be missed unless all logs are processed for connection history (in parallel) in lieu of just checking whether a session is active at a point in time. If you had to constantly poll for a remote session, the observer effect would clearly have a resource impact; looking, in parallel, for a trigger that indicates a remote session is active, and only then beginning measurements, does not.
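As a rough illustration of the timing argument (my own sketch with simulated checks, not from the text), three independent measurements run serially take roughly the sum of their costs, while the same measurements run in parallel take roughly the cost of the longest one:

```python
# Illustrative sketch (hypothetical checks): running independent security
# measurements in parallel rather than serially shrinks the window in
# which the "observer" consumes time and resources.
import time
from concurrent.futures import ThreadPoolExecutor

def security_check(name, seconds):
    """Stand-in for one measurement (AV scan, log audit, config scan)."""
    time.sleep(seconds)  # simulated measurement cost
    return f"{name}: ok"

checks = [("antivirus", 0.2), ("log-audit", 0.2), ("config-scan", 0.2)]

start = time.perf_counter()
serial = [security_check(n, s) for n, s in checks]  # one after another
serial_time = time.perf_counter() - start

start = time.perf_counter()
with ThreadPoolExecutor() as pool:                  # all at once
    parallel = list(pool.map(lambda c: security_check(*c), checks))
parallel_time = time.perf_counter() - start

print(f"serial: {serial_time:.2f}s, parallel: {parallel_time:.2f}s")
```

Both runs produce the same observable outcome; only the elapsed wall-clock time, and therefore the perceived impact on the user, differs.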

Therefore, cybersecurity measurements are best conducted when a baseline has been established and changes occur—authorized or not. While a periodic test of the baseline is a security best practice, checking for the same thing over and over on a static resource is a waste of resources. This is true for disciplines like vulnerability management that assess for the same vulnerabilities again and again, even though nothing has changed on the asset. Detecting that a change occurred, performing a new measurement, and reviewing any historical logging for context minimize the observer effect in cybersecurity measurements. When this is applied to privileged accounts, all aspects—from discovery to session monitoring—can ensure that privilege monitoring and management is low impact to the user and does not create the resource issues we have been describing (i.e., checking for privileged activity and usage over and over again).
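The change-triggered approach can be sketched as follows (a hypothetical illustration, not from the text): fingerprint an asset's configuration as its baseline, and only run the expensive assessment when that fingerprint changes.

```python
# Illustrative sketch: measure only when a baseline changes, instead of
# rescanning an unchanged asset repeatedly (all names are hypothetical).
import hashlib

def fingerprint(asset_config: dict) -> str:
    """Stable hash of an asset's configuration, used as its baseline."""
    canonical = repr(sorted(asset_config.items()))
    return hashlib.sha256(canonical.encode()).hexdigest()

def assess_if_changed(asset_config, baseline, scans_run):
    """Run the expensive scan only when the fingerprint differs."""
    current = fingerprint(asset_config)
    if current != baseline:
        scans_run.append(current)  # the costly measurement happens only here
    return current

scans = []
config = {"os": "linux", "ssh": "enabled"}
baseline = fingerprint(config)

baseline = assess_if_changed(config, baseline, scans)  # unchanged: no scan
config["ssh"] = "disabled"                             # a change occurs
baseline = assess_if_changed(config, baseline, scans)  # changed: one scan
print(len(scans))  # 1
```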

Consider the following two real-world cybersecurity scenarios when trying to measure access.

Scenario 1

When multi-factor or two-factor authentication is required to access a resource, the security tool introduces new steps into the workflow to validate the user. In addition to traditional credentials, a second factor provides physical validation of the user. That is, I have something in my possession that helps prove I am authorized to use an account. It does not prove the user’s identity, however; that is a different discussion and another book.2 These extra steps add time and resources, as well as a level of annoyance, for end users. Single sign-on (SSO) technology mitigates some of this annoyance by requiring two-factor authentication only once for a group of resources and passing through authentication thereafter—since the user is already considered trusted for that session.

As described earlier, the first two-factor launch initiates a workflow that validates the user for subsequent applications, rather than requiring them to repeatedly relaunch two-factor authentication. Single sign-on now runs in parallel with the user’s normal operations and, in fact, has a lower impact than requiring credentials each time an application is launched—even without two-factor authentication. The user just logs in without any additional challenge and response. So, the measurement of the user’s trust was done once, intrusively, with additional steps, but subsequent access is easier because of the high confidence in that initial measurement. Logging continues in parallel with each new application launch to audit activity.

The alternative method is highly intrusive: based on a policy of multi-factor authentication for every application launch, it would require credentials and two-factor authentication for every application the end user opens, in a serial workflow. This underscores the value of parallel processing and offers a simple model for creating a secure, frictionless environment: measure only when needed, and minimize the observer effect.
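The Scenario 1 workflow can be caricatured in a few lines of Python (a hypothetical sketch; the function names and token handling are my own, not a real SSO implementation): one intrusive measurement up front, frictionless launches afterward, with auditing running throughout.

```python
# Illustrative sketch of the SSO pattern described above (hypothetical):
# validate the user intrusively once (credentials + second factor), then
# reuse that trusted session for later launches while logging in parallel.
import secrets
import time

audit_log = []  # written on every action, in parallel with the user
sessions = {}   # token -> (user, expiry)

def sso_login(user, password_ok, second_factor_ok, ttl=3600):
    """Intrusive, one-time measurement: both factors must pass."""
    if not (password_ok and second_factor_ok):
        raise PermissionError("authentication failed")
    token = secrets.token_hex(16)
    sessions[token] = (user, time.time() + ttl)
    audit_log.append((user, "sso_login"))
    return token

def launch_app(token, app):
    """Frictionless follow-on launch: trust the earlier measurement."""
    user, expiry = sessions[token]
    if time.time() > expiry:
        raise PermissionError("session expired; re-authenticate")
    audit_log.append((user, f"launch:{app}"))
    return f"{app} opened for {user}"

t = sso_login("mhaber", password_ok=True, second_factor_ok=True)
launch_app(t, "mail")
launch_app(t, "crm")   # no new challenge; logging still occurs
print(len(audit_log))  # 3
```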

Scenario 2

Consider the password storage capabilities within a password safe or password vault technology. Enterprise versions of these solutions can automatically rotate passwords (and certificates) on a schedule or based on usage, such that they are ever-changing and not a liability if known by a threat actor—whether an insider or external. The more that they are exposed, utilized, or documented externally (measured), the higher risk they represent.

If a user or administrator needs to use these credentials, the typical workflow involves authenticating into the password safe or vault (hopefully using the two-factor authentication discussed earlier) and retrieving the credentials needed to perform a specific task. From a workflow perspective, simply measuring when privileged credentials are accessed and providing the current password is intrusive to the end user because of the additional steps required to obtain it: extra mouse clicks, time, and applications to complete the task, while potentially creating additional risk by copying the password into memory via the clipboard (copy/paste) or even physically exposing it by writing it down on paper. While documenting when privileged credentials were requested is the primary use case for measuring privileged access, it provides little security if we cannot reliably determine when, where, and how the credentials are being used. This is a high-impact model that needs to change from both a resource and a risk perspective. Both are negative impacts with regard to the observer effect.
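To illustrate the rotation idea (a hypothetical sketch, not any vendor's actual product): a safe that rotates a managed credential on every check-in turns an exposed password into a short-lived liability.

```python
# Illustrative sketch of a password safe that rotates a managed credential
# every time it is checked back in, so an exposed ("measured") password
# stops being a long-term liability. All names are hypothetical.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def random_password(length=24):
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

class PasswordSafe:
    def __init__(self):
        self._store = {}  # account -> current password
        self.audit = []   # who touched what, logged in parallel

    def onboard(self, account):
        self._store[account] = random_password()

    def checkout(self, account, user):
        self.audit.append((user, "checkout", account))
        return self._store[account]

    def checkin(self, account, user):
        self._store[account] = random_password()  # rotate on check-in
        self.audit.append((user, "checkin+rotate", account))

safe = PasswordSafe()
safe.onboard("root@db01")
old = safe.checkout("root@db01", "admin1")
safe.checkin("root@db01", "admin1")
new = safe.checkout("root@db01", "admin2")
print(old != new)  # True: the previously exposed password is now useless
```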

Next, consider session monitoring and management. This will be discussed in detail later in this book, but the capability provides a gateway, or proxy technology, into a host for monitoring, and potentially documenting, all security and user activity within a session. Session management is essentially a low-impact method of monitoring what is actually happening during a privileged session, but to be effective it requires the remote connection to occur through the proxy, as opposed to a lateral connection. Essentially, there has to be a man in the middle to perform the session monitoring, even if it is dedicated software that reports its findings to a proxy or gateway based on local or remote access.

Without proper access control lists (ACLs), a password retrieved from a safe allows remote access without any session monitoring capabilities. This is an undesirable state: no measurements occur, since the session can bypass the proxy. When we combine password storage, retrieval, rotation, and session monitoring into a solution working in parallel, we can measure activity down to the keystroke and create a very low-impact session management implementation. And if this entire workflow can measure all activity and privileged access by account or user, then the observer effect becomes a moot point for a successful privileged account management model.

To that end, password storage solutions alone can be intrusive to the workflow for password retrieval. Session monitoring by itself is vulnerable to security flaws like lateral movement. When used together as a strategy for universal privilege management, the two solutions can operate in parallel to create a near no-impact security solution.
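A toy model of the combined approach (hypothetical, not a real proxy implementation) shows why the ACL matters: if the target only accepts connections from the monitoring gateway, every privileged command is measured, and a retrieved password alone is not enough for unmonitored access.

```python
# Illustrative sketch: force privileged sessions through a monitoring
# proxy so that password retrieval alone does not grant unmonitored
# remote access. All class and function names are hypothetical.
keystroke_log = []

class SessionProxy:
    """Man-in-the-middle gateway: every command in the session is recorded."""
    def __init__(self, host):
        self.host = host

    def run(self, user, command):
        keystroke_log.append((user, self.host, command))
        return f"[{self.host}] ran: {command}"

def connect(host, user, via_proxy):
    # The target host's ACL only accepts connections from the proxy, so
    # direct or lateral connections are rejected and never go unmeasured.
    if not via_proxy:
        raise PermissionError("direct access denied: use the session proxy")
    return SessionProxy(host)

session = connect("db01", "admin1", via_proxy=True)
session.run("admin1", "systemctl status postgresql")
print(len(keystroke_log))  # 1
```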

Mitigating the Observer Effect in Cybersecurity

The observer effect presents an ongoing concern for cybersecurity practitioners. Many solutions can have a high impact on the runtime within an environment and create undesirable delays, single points of failure, and changes that negatively impact users, operations, and productivity. The worst problems can cause even the best solutions to become shelfware since the end users push back and resist adoption.

Measuring and implementing security will always have some impact, but the goal is to make it as unperceivable as possible—especially to the end users. While zero impact is truly unobtainable, the concept of little to no impact after initial setup is definitely viable.

When you evaluate security solutions from a single vendor, or multiple vendors, ask how the solutions can operate in parallel or be used in tandem to create a no-impact environment. After all, if they all run serially or have a high impact, users will not only reject them, but your ability to obtain accurate cybersecurity measurements will also suffer due to the resources required to collect the necessary data. Now, let us apply this to modern cybersecurity findings.

The Observer Effect in the Real World

Each year, Verizon publishes its Data Breach Investigations Report3 (DBIR), and BeyondTrust publishes its Privileged Access Threat Report.4 Each report provides valuable data for information and security technology professionals around cybersecurity trends, perceptions, cyberattack methods, causes of breaches, and more—all observer-based. With both reports now available, security professionals can make further deductions about cyberthreats, particularly the most dangerous ones—privileged threats—along with the best strategies to mitigate them.

Top Privileged Threats

In June 2019, BeyondTrust published the Privileged Access Threat Report (PATR). In this report, the organization surveyed over 1,000 IT decision-makers across a diverse set of industries throughout the United States, EMEA, and APAC to gauge the perceived threats facing organizations and the risks of privileged attack vectors. The survey generated some noteworthy data around breaches and poor cybersecurity practices:
  • About 64% of respondents thought it likely that they’ve suffered a breach due to employee access, and 58% indicated they likely suffered a breach due to vendor access.

  • About 62% of respondents were worried about the unintentional mishandling of sensitive data by employees based on the following poor security practices:
    • Writing down passwords (60%)

    • Downloading data onto an external memory stick (60%)

    • Sending files to personal email accounts (60%)

    • Telling colleagues their passwords (58%)

    • Logging in over unsecured Wi-Fi (57%)

    • Staying logged on (56%)

  • About 71% of respondents agreed that their company would be more secure if they restricted employee device access.

But what are the attack vectors that drive these opinions—and fears?

According to the 2020 Verizon Data Breach Investigations Report (DBIR), use of stolen credentials is the second most common threat activity attackers leverage to breach an environment, just below phishing. In addition, the DBIR also reveals that over 80% of breaches classified as hacking involve brute force or the use of lost or stolen credentials.

Stolen credentials are most often used on mail servers, leading to a variety of identity-based attack vectors. Unfortunately, the actual techniques used for obtaining and applying stolen credentials are not covered in the Verizon report. But that doesn’t mean the answers are beyond our grasp and something we cannot measure.

According to the PATR, it is reasonable to conclude that well more than half of organizations have suffered a breach originating from employee or vendor access, and that poor cybersecurity hygiene for credentials and passwords is the prime cause of these breaches.

Combining the Verizon and BeyondTrust data points, we can deduce the following as the top privileged attack vector techniques used, as well as why they are an unacceptable risk for any business:

  • Password guessing

  • Dictionary attacks or rainbow tables

  • Brute force attacks

  • Pass the hash (PtH) or other memory-scraping techniques

  • Security question social engineering

  • Account hijacking based on predictable password resets

  • Privileged vulnerabilities and exploits

  • Misconfigurations

  • Malware, like keystroke loggers

  • Social engineering (phishing, etc.)

  • MFA flaws using weak 2FA, like SMS

  • Default system or application credentials

  • Anonymous or enabled Guest access

  • Predictable password patterns

  • Shared or unmanaged, stale credentials

  • Temporary passwords

  • Reused passwords or credentials

  • Shadow or obsolete (former employee) credentials

  • Various hybrid credential attacks (i.e., spray attacks) based on variations of the above

The findings of these two reports are supported by analysts like Forrester. Forrester Research5 estimates that privileged credentials are implicated in over 80% of data breaches.

Mitigating Privileged Attack Vectors

Now, the question becomes, “What can organizations and users do to resolve these privileged attack vectors?”

To begin, consider the following universal cybersecurity best practices regarding credential and password management:
  • All privileged accounts (administrator and root) should be monitored for appropriate activity and have proper certifications based on roles and ownership.

  • Users should always perform their daily computing activities as a Standard User and only use a privileged account when absolutely necessary and appropriate.

  • When possible, administrative privileges should be removed or eliminated, and end users, administrators, DevOps processes, and RPA (robotic process automation) should operate using the concept of least privilege.

  • All accounts, regardless of operating system or application, should have a unique password whenever, and wherever, possible. The credential rotation and management practices should be based on policy and guided by considerations such as regulatory compliance and other security best practices, like NIST.

  • All sessions, locally initiated or remotely started, should honor all of the best practices listed and, when possible, avoid the implementation of always-on privileged accounts. The concept of just-in-time-privileged access can help implement these best practices.
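The just-in-time idea in the last bullet can be sketched in a few lines (a hypothetical illustration, not a product implementation): privileges are granted for a bounded window and simply expire, so there is no always-on privileged account to steal.

```python
# Illustrative sketch of just-in-time privileged access: an account holds
# no standing privileges; elevation is granted for a bounded window and
# expires on its own. All names here are hypothetical.
import time

grants = {}  # user -> (role, expiry timestamp)

def elevate(user, role, duration_s):
    """Grant a privilege for a limited window (just-in-time)."""
    grants[user] = (role, time.time() + duration_s)

def has_privilege(user, role):
    """A privilege is honored only while its window is open."""
    granted = grants.get(user)
    if not granted:
        return False
    granted_role, expiry = granted
    return granted_role == role and time.time() < expiry

elevate("mhaber", "db-admin", duration_s=0.2)
print(has_privilege("mhaber", "db-admin"))  # True inside the window
time.sleep(0.25)
print(has_privilege("mhaber", "db-admin"))  # False: privilege expired
```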

While the implementation of these concepts may seem daunting and unachievable for many organizations, these goals are practical and well within your reach, but they do require adopting a formal privileged access management (PAM) program. This is often referred to as a PAM journey. When PAM is implemented correctly, it can mitigate the threats illustrated by the observer effect in the real world and, most importantly, provide a frictionless approach to securing your universe of privileges. That is the key: if any part of this journey introduces measurements that strain resources or degrade the user experience, it will fail.

Therefore, here is what a successful PAM journey within an organization encompasses, and what we will detail at great length through the remainder of this book:
  • Password management for rotation and check-in and checkout of passwords.

  • Session management for recording, indexing, filtering, and documenting of all interactive sessions.

  • Endpoint privilege management to remove administrative or root privileges on any platform including Windows, MacOS, Unix, Linux, and even network devices like routers, switches, printers, and IoT devices.

  • Secure remote access to establish sessions based on personas (i.e., vendors or help desk staff), using least privilege credentials and without the need to share credentials with approved operators.

  • Directory bridging to consolidate logon accounts across non-Windows systems, like Unix and Linux. This enables users, regardless of persona, to authenticate using their Active Directory credentials in lieu of local accounts.

  • Management of next-generation technologies from ICS to IoT and all of the automation technologies in between, from RPA to DevOps.

  • User behavior analytics and reporting to provide complete attestation reporting, certifications, and alerting on inappropriate behavior based on privileged usage.

  • The cloud, just-in-time administration, and zero trust all play a major part in the strategy for almost every modern organization. Embracing PAM as a journey with these tactical concepts will help ensure the observer effect for privilege management does not impact their deployment.

  • The complete integration of all the preceding capabilities within an organization’s established ecosystem for change management, ticketing, operational workflow, identity governance, and security information and event managers (SIEMs).

These practices ensure that privileged credentials and passwords are vigorously resistant to hacking attempts. In addition, should the credentials ever become compromised, the risk and damage from any exploit can be mitigated by keeping them unique among resources and having the least privileges necessary—and for a time-limited duration—to perform necessary, authorized actions. One key piece of least privilege involves reducing the privileges of the credentials to those of a standard user, which makes it exceedingly difficult for a threat actor to use privileged attack vectors (stolen credentials) as a method of compromise.

Finally, ask yourself one honest question: “How confident are you in your own organization’s PAM abilities?” If you have any doubts about your PAM posture, then this book is for you and will guide you along a safe and successful PAM journey.


Contributions by:

Contributing Editor: Matt Miller

Product Management: Brian Chappell

Deputy CTO: Christopher Hills

Illustrations and Graphics:
  • Angela Duggan

  • Hannah Reed

  • Greg Francendese

  • Stacy Blaiss

  • Liz Drysdale

And a special thank you to Daniel DeRosa, the Chief Product Officer at BeyondTrust, for his contributions on PAM strategy and machine learning.

About the Author
Morey J. Haber

is Chief Technology Officer and Chief Information Security Officer at BeyondTrust. He has more than 20 years of IT industry experience and has authored three Apress books: Privileged Attack Vectors, Asset Attack Vectors, and Identity Attack Vectors. In 2018, Bomgar acquired BeyondTrust and retained the BeyondTrust name. He originally joined BeyondTrust in 2012 as a part of the eEye Digital Security acquisition. Today, Morey oversees BeyondTrust strategy for privileged access management and remote access solutions. In 2004, he joined eEye as Director of Security Engineering and was responsible for strategic business discussions and vulnerability management architectures for Fortune 500 clients. Prior to eEye, he was Development Manager for Computer Associates, Inc. (CA), responsible for new product beta cycles and named customer accounts. He began his career as a Reliability and Maintainability Engineer for a government contractor building flight and training simulators. He earned a Bachelor of Science degree in Electrical Engineering from the State University of New York at Stony Brook.

About the Technical Reviewer
Derek A. Smith

is an expert in cybersecurity, cyber forensics, healthcare IT, SCADA security, physical security, investigations, organizational leadership, and training. He is currently an IT program manager with the federal government and a cybersecurity Associate Professor at the University of Maryland University College and the Virginia University of Science and Technology, and runs a small cybersecurity training company. Derek has completed three cybersecurity books and contributed a chapter for a fourth. He currently speaks at cybersecurity events throughout America and performs webinars for several companies as one of their cyber experts. Formerly, Derek worked for a number of IT companies, Computer Sciences Corporation and Booz Allen Hamilton, among them. Derek spent 18 years as a special agent for various government agencies and the military. He has also taught business and IT courses at several universities for over 25 years. Derek has served in the US Navy, Air Force, and Army for a total of 24 years. He completed an MBA, MS in IT Information Assurance, Masters in IT Project Management, MS in Digital Forensics, a BS in Education, and several associate degrees. He completed all but the dissertation for his doctorate.