
CHAPTER 15

Local Security

We frequently hear about newly discovered attacks (or vulnerabilities) against various operating systems. A sometimes overlooked detail in discussions about these new attacks is the exploit vector. Exploit vectors can be of two types: those in which the vulnerability is exploitable over a network and those in which the vulnerability is exploitable locally. Local security and network security considerations require two different approaches. In this chapter, we focus on security from a local security perspective.

Local security addresses the problem of attacks that require the attacker to be able to do something on the system itself for the purpose of gaining elevated privileges.

Systems that lack proper local security controls pose a real problem and invite attacks. Educational environments such as schools are often ripe for these types of attacks: students may need access to servers to complete assignments and other academic work, but that access can threaten the system, because bored students may test the bounds of their access and their own creativity, or they may simply not think through the consequences and impact of their actions.

Local security issues can also be triggered by network security issues. If a network security issue results in an attacker being able to invoke any program or application on the server, the attacker can use a local security–based exploit not only to give herself full access to the server, but also to escalate her own privileges to the root user. “Script kiddies”—attackers who use other people’s attack programs because they are incapable of creating their own—are known to use these kinds of methods to gain unauthorized access to systems or more colloquially to own other people’s systems.

This chapter addresses the fundamentals of keeping your system secure against common local security attacks. Keep in mind, however, that a single chapter on this topic will not make you an expert. Security is a field that is constantly evolving, and as such you should endeavor to also keep yourself abreast of latest developments and techniques in this space.

In this chapter, you will notice two recurring themes: mitigating risk and a “simpler is better” mantra. The former is another way of allocating your investment (both in time and money), given the risk you’re willing to take on and the risk that a system or server poses if compromised. And keep in mind that because you cannot prevent all attacks, you have to accept a certain level of risk—and the level of risk you are willing to accept will drive the investment in both time and money. So, for example, a web server dishing up your vacation pictures on a low-bandwidth link is a lower risk than a server handling large financial transactions for Wall Street!

The “simpler is better” mantra stems from Engineering 101—simple systems are less prone to problems, easier to fix, easier to understand, and inevitably more reliable. Keeping your servers simple is a desirable goal.

Common Sources of Risk

Security is the mitigation of risk. Along with every effort of mitigating risk comes an associated cost. Costs are not always necessarily financial; they can take the form of restricted access, loss of functionality, or loss of time. Part of your job as an administrator is to balance the costs of mitigating risk with the potential damage that an exploited risk can cause.

Consider a web server, for example. The risk of hosting a service that can be probed, poked at, and possibly exploited is inherent in running a public-facing or publicly accessible web server. However, you may find that the risk of exposure is low so long as the web server is maintained and immediately patched when security issues arise. If the benefit of running a web server is great enough to justify your cost of maintaining it, then it is a worthwhile endeavor. In this section, we look at common sources of risk and examine what you can do to mitigate those risks.

SetUID Programs

SetUID programs are executables that have a special attribute (flag) set in their permissions that allows users to run the executable in the context of the executable’s owner. This enables administrators to make selected applications, programs, or files available with higher privileges to normal users, without having to give those users any administrative rights. An example of such a program is ping. Because the creation of raw network packets is restricted to the root user (the ability to create raw packets can allow the application to inject potentially bad payload within the packet), the ping application must run with the SetUID bit enabled and the owner set to root. Thus, for example, even though user yyang may start the ping program, the program can be run in the context of the root user for the purpose of placing an Internet Control Message Protocol (ICMP) packet onto the network. The ping utility in this example is said to be “SetUID root.”

Developers of programs that need to run with root privileges have an obligation/responsibility to be extra security conscious. It should not be possible for a normal user to do something dangerous on the system by using such programs. This means many checks need to be written into the program and potential bugs must be carefully removed. Ideally, these programs should be small and single-purposed. This makes it easier to evaluate the code for potential bugs that can harm the system or allow for a user to gain privileges that he or she should not have.

From a day-to-day perspective, it is in the administrator’s best interest to keep as few SetUID root programs on the system as possible. The risk balance here is the availability of features/functions to users versus the potential for bad things to happen. For some common programs such as mount, traceroute, and su, the risk is low for the value they bring to the system. Some well-known SetUID programs, such as the X Window System, pose a low to moderate risk; however, given X Window System’s exposure, it is unlikely to be the root of any problems. If you are running a pure server environment and you do not need X Window System, it never hurts to remove it.

SetUID programs executed by web servers are almost always a bad thing. Use great caution with these types of applications and look for alternatives. The exposure is much greater, since it is possible for network input (which can come from anywhere) to trigger this application and affect its execution. If you find that you must run an application SetUID with root privileges, another alternative is to find out whether it is possible to run the application in a chroot environment (discussed later in this chapter in the section “chroot”).

TIP  An alternative to SetUID programs is a feature called capabilities provided by the kernel.

As an example, it is possible to make the popular ping program non-SetUID by instead assigning it just the precise capabilities (CAP_NET_RAW) that it needs to function. See the man pages for the following to learn more about capabilities: getcap, setcap, capabilities, and getpcaps. To view any special capabilities required by ping on any modern Fedora distro, type the following:
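# getcap $(type -P ping)
/usr/bin/ping cap_net_raw=ep

The output shown here is only illustrative: the path, the capability set, and the output format vary with the distro and libcap version, and some newer distros ship a ping binary with no file capabilities at all because it uses unprivileged ICMP sockets instead.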

Finding and Creating SetUID Programs

A SetUID program has a special file attribute that the kernel uses to determine whether it should override the default permissions granted to an application. A simple file system listing (ls -l) will show the permissions on a file and reveal this little fact. Here’s an example:
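# ls -l /bin/mount
-rwsr-xr-x. 1 root root 47584 Aug 12 10:15 /bin/mount

(The size, date, and path shown are illustrative only; on some distros the binary lives in /usr/bin/mount.)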

If the fourth character in the permissions field is s, the application is SetUID. If the file’s owner is root, then the application is SetUID root. In the case of the mount binary, we can see that it will execute with root permissions available to it.

TIP  You can use the stat utility to view the octal mode representation of file permissions. For example, to view the octal permission mode (4755) of the mount command, type the following:
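# stat -c '%a %n' /bin/mount
4755 /bin/mount

Here, %a prints the permission bits in octal and %n prints the filename; adjust the path if mount lives under /usr/bin on your system.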

Another example is the passwd utility, shown here:
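# ls -l /usr/bin/passwd
-rwsr-xr-x. 1 root root 32656 May 14 09:30 /usr/bin/passwd

(Again, the size and date are illustrative only.)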

As with mount, we see that the fourth character of the permissions is s and the owner is root. The passwd program is, therefore, SetUID root.

To determine whether a running process is SetUID, you can use the ps command to see both the actual user of a process and its effective user, like so:
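# ps -eo pid,euser,ruser,comm

The -e option selects every process, and -o specifies exactly which columns to print.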

This will output all of the running programs with their process ID (pid), effective user (euser), real user (ruser), and command name (comm). If the effective user is different from the real user, it is likely a SetUID program.

NOTE  Some applications that are (necessarily) started by the root user can give up their permissions to run as a less-privileged user to improve security. The Apache web server, for example, might be started by the root user to allow it to bind to TCP port 80 (recall that only privileged users can bind to ports lower than 1024), but it then gives up its root permissions and starts all of its threads as an unprivileged user (typically the user “nobody,” “apache,” “www-data,” or “www”).

Very rarely, you may need to make a program run as SetUID. To do this, use the chmod command. Prefix the desired permissions with a 4 to turn on the SetUID bit. (Using a prefix of 2 will enable the SetGID bit, which is similar to SetUID, but offers group permissions instead of user permissions.)

For example, if we have a program called myprogram and we want to make it SetUID root, we would do the following:
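# chown root myprogram
# chmod 4755 myprogram

The chown step is needed only if root does not already own the file; the leading 4 in the chmod mode turns on the SetUID bit on top of the standard 755 permissions.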

Ensuring that a system has only the absolute minimum and necessary SetUID programs can be a good housekeeping measure. A typical Linux distribution can easily have several files and executables that are unnecessarily SetUID. Going from directory to directory to find SetUID programs can be tiresome and error-prone. So instead of doing that manually, you can use the find command, like so:
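# find / -type f -perm -4000 -ls 2>/dev/null

The -perm -4000 test matches files with the SetUID bit set, and redirecting standard error to /dev/null suppresses the "Permission denied" noise from virtual file systems.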

To search for files that are SetGID instead, type the following:
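# find / -type f -perm -2000 -ls 2>/dev/null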

To search for both SetUID and SetGID files with one single find command and view the octal mode for the file permissions using the stat command, type this:
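# find / -type f \( -perm -4000 -o -perm -2000 \) -exec stat -c '%a %U %n' {} \; 2>/dev/null

Here, stat prints the octal mode (%a), the owner (%U), and the filename (%n) for each match.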

Unnecessary Processes

When looking through the system’s boot or startup sequence, you may have noticed that a standard-issue Linux system starts with several (familiar and unfamiliar) processes running.

The underlying security issue always goes back to the question of risk: Is the risk of running an application worth the value it brings you? If the value a particular process brings is zero because you're not using it, then no amount of risk is worth it. Looking beyond security, there is also the practical matter of stability and resource consumption: even a benign process that does nothing but sit in an idle loop uses memory, processor time, and kernel resources, and if a bug were to be found in that process, it could threaten the stability of your server. The bottom line is this: if you don't need it, don't run it!

If your system is running as a server, you should reduce the number of processes that are run. For example, if there is no reason for the server to connect to a printer, disable the print services. If there is no reason the server should accept or send e-mail, turn off the mail server component. If no services are run from xinetd, then xinetd should be turned off. No printer? Turn off Common UNIX Printing System (CUPS). Not a file server? Turn off Network File System (NFS) and Samba.

Fully thinned down, the server should be running the bare minimum it needs to provide the services required of it.

Picking the Right Runlevel

A Linux system with a GUI desktop environment provides a nice startup screen, a login menu, a mild learning curve, general familiarity, and an overall positive desktop experience. For a server, however, the trade-off is probably not worth it.

Most modern systemd-enabled Linux distros that are configured to boot and load the X Window (GUI) subsystem will boot to graphical.target (also referred to as runlevel 5 in the SysV init world). In such distros, changing the default boot target to multi-user.target (also referred to as runlevel 3 in the SysV init world) will turn off the GUI subsystem.

Modern systemd-enabled Linux distros use the systemctl utility, as well as a series of file system elements (soft links) to control and manage the system’s default boot target (runlevel). Chapters 7 and 9 cover systemd in detail as well as show how to change the default boot target.
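As a quick sketch (see Chapter 9 for the details), you can view the current default target and point a server at the non-GUI target like so:

# systemctl get-default
graphical.target
# systemctl set-default multi-user.target

The get-default output reflects whatever your system is currently configured to boot into.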

TIP  You can see what runlevel you’re in by using any of the following commands:
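# runlevel
# who -r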

On a systemd-enabled system, you can alternatively run this:
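# systemctl list-units --type=target --state=active

The active targets listed (multi-user.target, graphical.target, and so on) are the systemd equivalent of the current runlevel.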

Nonhuman User Accounts

User accounts on a server do not always correspond to actual human users. Recall that every process running on a Linux system must have an owner. Running the ps auxww command on your system will show all of the process owners in the leftmost column of the output. On your desktop system, for example, you could be the only human user, but a look at the /etc/passwd file shows that there are several other accounts on the system.
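The first few entries on a Fedora-type system might look something like this (the exact accounts and shells vary by distro):

# head -n 5 /etc/passwd
root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/sbin/nologin
daemon:x:2:2:daemon:/sbin:/sbin/nologin
adm:x:3:4:adm:/var/adm:/sbin/nologin
lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin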

For an application to drop its root privileges, it must be able to run as another user. This is where those extra users come into play: Each application that gives up its root privileges can be assigned another dedicated (and less privileged) user profile on the system. This other user typically owns all the application’s files (including executable, libraries, configuration, and data) and the application processes. By having each application that drops privileges use its own user, the risk of a compromised application having access to other application configuration files is mitigated. In essence, an attacker is limited by what files the application can access, which, depending on the application, may be quite uninteresting.

Limited Resources

To better control the resources available to processes started by the shell, the ulimit facility can be used. System-wide defaults can be configured using the /etc/security/limits.conf file. ulimit options can be used to control such things as the number of files a process may have open, how much memory it may use, how much CPU time it may use, how many processes a user may spawn, and so on. The settings are read by the PAM (Pluggable Authentication Modules) libraries when a user session starts. Incidentally, some servers that run/host mission-critical applications such as databases also use facilities like ulimit for performance-tuning purposes.

The key to choosing ulimit values is to consider the purpose of the system. For example, in the case of an application server, if the application is going to require a lot of processes to run, then the system administrator needs to ensure that ulimit caps don’t cripple the functionality of the system. Other types of single-purpose applications, such as a Domain Name System (DNS) server, should not need more than a small handful of processes.

Note a caveat here: PAM must have a chance to run to set the settings before the user does something. If the application starts as root and then drops permissions, PAM is not likely to run. From a practical point of view, this means that having individual per-user settings is not likely to do you a lot of good in most server environments. What will work are global settings that apply to both root and normal users. This detail turns out to be a good thing in the end; having root under control helps keep the system from spiraling away both from attacks and from broken applications.

TIP  A Linux kernel feature known as control groups (cgroups) also provides the ability to manage and allocate various system resources such as CPU time, network bandwidth, memory, and so on. For more on cgroups, see Chapter 11.

The format of each line in the /etc/security/limits.conf file is as follows:
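<domain>    <type>    <item>    <value>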

Any line that begins with a pound sign (#) is a comment. The domain value holds the login name of a user or the name of a group; it can also be a wildcard (*). The type field refers to the type of limit, as in soft or hard.

The item field refers to what the limit applies to. The following is a sample of some items that an administrator might find useful:
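A few of the more commonly used items are listed here (see the limits.conf(5) man page for the complete list and the units each item uses):

core        Maximum size of core dump files (KB)
fsize       Maximum file size (KB)
nofile      Maximum number of open files
cpu         Maximum CPU time (minutes)
nproc       Maximum number of processes
maxlogins   Maximum number of simultaneous logins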

A reasonable tweak on a server is to restrict the number of processes for users. The same should be done for other settings too—after duly considering their effects. Remember, ulimit is not a cure-all for restricting or managing all classes of system resources. You have to use the proper tool for the job. So, for example, if you need to control total disk usage for a user, you should use disk quotas instead.

To mitigate problems that can be caused by users’ ability to spawn too many processes that can quickly use up system resources (like in the fork bomb example), we can implement a ulimit setting for limiting the number of processes to 512 (for example) for each user. We can do this by creating an entry like the one shown here in the /etc/security/limits.conf file:
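*        hard    nproc    512

The asterisk in the domain field is a wildcard that matches regular users (note that wildcard limits are not applied to the root account; root has to be named explicitly).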

If you log out and log back in, you can verify that the limits have taken effect by running the ulimit command with the -a option, as shown next. The max user processes entry in the sample output reflects the change.
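$ ulimit -a
core file size          (blocks, -c) 0
open files                      (-n) 1024
max user processes              (-u) 512
virtual memory          (kbytes, -v) unlimited

(Listing abridged; the other entries and their values will differ on your system.)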

Mitigating Risk

Once you know what the risks are, mitigating them becomes easier. You might find that the risks you see are sufficiently low, such that no additional securing is necessary. For example, a Microsoft Windows desktop system used by a trusted, well-experienced user is a low risk for running with administrator privileges. The risk that the user downloads and executes something that can cause damage to the system is low. This well-experienced user may find that being able to run some additional tools and having raw access to the system are well worth the risk of running with administrator privileges. Like any nontrivial risk, the list of caveats is long.

chroot

The chroot() system call (pronounced “cha-root”) allows a process and all of its child processes to redefine what they perceive the root directory to be. For example, if you were to run chroot("/www") and start a shell, you would find that cd / leaves you in /www. The program would believe /www is the root directory, but in reality that would not be the case. This restriction applies to all aspects of the process’s behavior: where it loads configuration files, shared libraries, data files, and so on. The restricted environment is also sometimes referred to as a “jail.”

When the perceived root directory of the system is changed, a process has a restricted view of what is on the system. Access to other directories, libraries, and configuration files is not available. Because of this restriction, it is necessary for a target application to have all of the files necessary for it to work completely contained within the chroot environment. This includes any passwd files, libraries, binaries, and data files.

Most major applications have their own set of configuration files, libraries, and executables and thus the directions for making an application work in a chroot environment vary. However, the principle remains the same: make it all self-contained under a single directory with a faux root directory structure.

CAUTION  A chroot environment will protect against accessing files outside of the directory, but it does not protect against system utilization, memory access, kernel access, and interprocess communication. This means that if there is a security vulnerability that someone can take advantage of by sending signals to another process, it will be possible to exploit it from within a chroot environment. In other words, chroot is not a perfect cure, but more a deterrent.

An Example chroot Environment

As an example, let’s create a chroot environment for the Bash shell. We begin by creating the directory into which we want to put everything. Because this is just an example, we’ll create a directory in /tmp called myroot:
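# mkdir -p /tmp/myroot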

Let’s assume we need only two programs: bash and ls. Let’s create the bin directory under myroot and copy the binaries over there:
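# mkdir -p /tmp/myroot/bin
# cp /bin/bash /bin/ls /tmp/myroot/bin/

(On distros with a merged /usr, /bin is simply a symlink to /usr/bin, so the same commands still work.)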

With the binaries there, we now need to check whether these binaries need any libraries. We use the ldd command to determine what (if any) libraries are used by these two programs.

CAUTION  The following copy (cp) commands were based strictly on the output of the ldd $(type -P bash) and ldd $(type -P ls) commands on our sample system. You might need to modify the names and versions of the files that you are copying over to the chroot environment to match the exact filenames that are required on your system/platform.

We run ldd against /bin/bash, like so:
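# ldd /bin/bash
        linux-vdso.so.1 (0x00007ffd...)
        libtinfo.so.6 => /lib64/libtinfo.so.6 (0x00007f...)
        libc.so.6 => /lib64/libc.so.6 (0x00007f...)
        /lib64/ld-linux-x86-64.so.2 (0x00007f...)

This output is illustrative only; the libraries, version numbers, and load addresses will differ from system to system.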

We also run ldd against /bin/ls, like so:
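# ldd /bin/ls
        linux-vdso.so.1 (0x00007ffc...)
        libselinux.so.1 => /lib64/libselinux.so.1 (0x00007f...)
        libcap.so.2 => /lib64/libcap.so.2 (0x00007f...)
        libc.so.6 => /lib64/libc.so.6 (0x00007f...)
        libpcre2-8.so.0 => /lib64/libpcre2-8.so.0 (0x00007f...)
        /lib64/ld-linux-x86-64.so.2 (0x00007f...)

Again, this output is only illustrative; trust what ldd reports on your own system.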

Now that we know what libraries need to be in place, we create the lib64 directory and copy the 64-bit libraries over (because we are running a 64-bit operating system).

First, we create the /tmp/myroot/lib64 directory:
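# mkdir -p /tmp/myroot/lib64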

Next, we’ll copy over the shared libraries that /bin/bash needs:
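# cp /lib64/libtinfo.so.6 /tmp/myroot/lib64/
# cp /lib64/libc.so.6 /tmp/myroot/lib64/
# cp /lib64/ld-linux-x86-64.so.2 /tmp/myroot/lib64/

(These filenames mirror the sample ldd output shown earlier; substitute whatever your own ldd listing reported.)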

And for /bin/ls, we need to run the following commands to get the other needed library files that we don’t already have:
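# cp /lib64/libselinux.so.1 /tmp/myroot/lib64/
# cp /lib64/libcap.so.2 /tmp/myroot/lib64/
# cp /lib64/libpcre2-8.so.0 /tmp/myroot/lib64/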

Most Linux distros conveniently include a powerful little program called chroot that can invoke the chroot() system call for us. The program takes two parameters: the directory that we want to make the root directory and the command that we want to run in the chroot environment. We want to use /tmp/myroot as the directory and start /bin/bash, so we run the following:
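# chroot /tmp/myroot /bin/bash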

Because there is no /etc/profile or /etc/bashrc to change/customize our prompt, the prompt will change to something like bash-<VERSION_NUMBER>#. Now try running the /bin/ls command in your chroot environment:
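bash-*# ls /
bin  lib64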

Next, try a pwd to view the current working directory:
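bash-*# pwd
/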

NOTE  You may be wondering where the pwd command that we ran in the previous command hails from. We didn’t need to explicitly copy over the pwd command, because pwd is one of the many Bash built-in commands. It comes with the Bash program that we already copied over!

Since we don’t have an /etc/passwd or /etc/group file in the chroot-ed environment (to help map numeric user IDs to usernames), an ls -l command will show the raw user ID (UID) values for each file. Here’s an example:
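bash-*# ls -l /bin
-rwxr-xr-x 1 0 0 ... bash
-rwxr-xr-x 1 0 0 ... ls

(File sizes and timestamps are omitted here. The two 0 columns are the numeric UID and GID of the root user, which cannot be mapped back to a name because there is no /etc/passwd or /etc/group file inside the jail.)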

With limited commands/executables in our sample chroot environment, the environment isn’t terribly useful for practical work, which is what makes it great from a security perspective; we allow only the absolute minimum files necessary for an application to work, thus minimizing our exposure in the event the application gets compromised. Keep in mind that not all chroot environments need to have a shell and an ls command installed—for example, if the Berkeley Internet Name Domain (BIND) DNS server software needs only its own executable, libraries, and zone files installed, then that’s all you need!

To quit the chroot environment, use the exit command:

bash-*# exit

NOTE  A popular implementation of OS-level virtualization (or software/application containers) makes extensive use of the basic principles of chroot-ing as well as the resource isolation and partitioning facilities of Linux (cgroups). LXC (Linux Containers), Docker, CoreOS’s rkt (Rocket), FreeBSD jails (non-Linux), and so on are popular implementations of these concepts. Chapter 31 covers this topic in more detail.

SELinux

Traditional Linux security is based on a Discretionary Access Control (DAC) model. The DAC model allows the owner of a resource (objects) to control which users or groups (subjects) can access the resource. It is called “discretionary” because the access control is based on the discretion of the owner.

Another type of security model is the Mandatory Access Control (MAC) model. Unlike the DAC model, the MAC model uses predefined policies to control user and process interactions. The MAC model restricts the level of control that users have over the objects that they create. SELinux is an implementation of the MAC model in the Linux kernel.

The U.S. government’s National Security Agency (NSA) has taken an increasingly public role in information security, especially due to the growing concern over information security attacks that could pose a serious threat to the world’s ability to function. With Linux being a major component of enterprise computing, the NSA set out to create a set of patches to increase the security of Linux. The patches have all been released under the GNU General Public License (GPL) with full source code and are thus subject to the scrutiny of the world—an important consideration given Linux’s worldwide presence and developer community. The patches are collectively known as “SELinux,” short for “Security-Enhanced Linux.” The patches have been integrated into the Linux kernel using the Linux Security Modules (LSM) framework. This integration has made the patches and improvements far-reaching and an overall benefit to the Linux community.

SELinux makes use of the concepts of subjects (users, applications, processes, and so on), objects (files and sockets), labels (metadata applied to objects), and policies (which describe the matrix of access permissions for subjects and objects). Given the extreme granularity of objects, it is possible to express rich and complex rules that dictate the security model and behavior of a Linux system. Because SELinux uses labels, it requires a file system that supports extended attributes.

The full gist of SELinux is well beyond the scope of a single section in this book. To learn more about SELinux, visit the SELinux Fedora Wiki project page at http://fedoraproject.org/wiki/SELinux.

TIP  As useful as SELinux is, you may find that it is the cause of some hard-to-debug issues that prevent some applications or subsystems from working properly. In such situations, if you need to quickly eliminate SELinux as being the issue, you can temporarily disable it by running
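# setenforce 0

This switches SELinux into permissive mode until the next reboot: policy violations are logged but no longer blocked.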

When you are ready to re-enable it, run
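# setenforce 1

You can confirm the current mode at any time with the getenforce command.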

AppArmor

AppArmor is another implementation of the MAC security model on Linux-based systems. It is SUSE’s alternative to SELinux (which is used mainly in Red Hat–derived distros such as Fedora, CentOS, and RHEL). AppArmor’s backers tout it as being easier to manage and configure than SELinux. AppArmor’s implementation of the MAC model focuses more on protecting individual applications—hence the name Application Armor—instead of attempting a blanket security model that applies to the entire system, as in SELinux. AppArmor’s security goal is to protect systems from attackers exploiting vulnerabilities in specific applications that are running on the system. AppArmor is file system independent. It is integrated into and used mostly in openSUSE, SUSE Linux Enterprise (SLE), as well as some Debian-based distros. And, of course, it can also be installed and used in other Linux distributions.

Monitoring Your System

As you become familiar with Linux, your servers, and their day-to-day operation, you’ll find that you start getting a “feel” for what is normal. This might sound peculiar, but in much the same way you learn to feel when your car isn’t running quite right, you’ll know when your server is not acting quite the same.

Part of your getting a feel for the system requires basic system monitoring. For local system behavior, you need to trust your underlying system as not having been compromised in any way. If your server does get compromised and a “root kit” that bypasses monitoring systems is installed, it can be difficult to see what is happening. For this reason, a mix of on-host monitoring and remote-host–based monitoring is a good idea.

Logging

By default, most of your log files will be stored in the /var/log directory, with the logrotate program automatically rotating (archiving) the logs on a regular basis. Although it is handy to be able to log to the local disk, it is often a better idea to also have logs sent to a dedicated log server. With remote logging enabled, you can be reasonably confident that any logs sent to the log server before an attack have not been tampered with.

Because of the volume of log data that can be generated, you might find it prudent to learn some basic scripting skills so that you can easily parse through the log data and automatically highlight and e-mail or notify the administrator of anything that is peculiar or should warrant suspicion. This allows the administrator to track both normal and erroneous activity without having to read through a significant number of log messages every day. We discussed some log-filtering techniques and utilities (journalctl) in Chapter 9 that can help with this.

Using ps and netstat

You should periodically review the output of the ps auxww command. Future deviations from any established baseline output should catch your attention. As part of monitoring, you may find it useful to periodically list what processes are running and make sure that any processes you don’t expect are there for a reason. Be especially suspicious of any packet-capture programs, such as tcpdump, that you did not start yourself!

The same can be said about the output of the netstat -an command (admittedly, netstat’s focus is more from a network security standpoint). Once you have a sense of what represents normal traffic and normally open ports, any deviations from that output should trigger interest into why the deviation is there. It might help answer questions such as these: Did someone change the configuration of the server? Did the application do something that was unexpected? Is there threatening activity on the server?

Between ps and netstat, you should have a fair handle on the goings-on with your network and process list.

Watch That Space (Using df)

The df command shows the available space on each of the disk partitions that is mounted. Running df on a regular basis to see the rate at which disk space gets used is a good way to look for any questionable activity. A sudden change in disk utilization should spark your curiosity. For example, a sudden increase could be because users are using their home directories to store vast quantities of MP3 files, movies, and so on. Legal issues aside, there are also other pressing concerns and repercussions for such unofficial use, such as backups and DoS issues.

The backups might fail because the backup medium ran out of space storing someone’s music files instead of the key files necessary for the business. From a security perspective, if the sizes of the web or FTP directories grow significantly without reason, it may signal trouble looming with unauthorized use of your server. A server whose disk becomes full unexpectedly is also a potential source of a local (and/or remote) DoS attack. A full disk might prevent legitimate users from storing new data or manipulating existing data on the server. The server may also have to be temporarily taken offline to rectify the situation, thereby denying access to other services that the server should be providing.

Automated Monitoring

Most of the popular automated system-monitoring solutions specialize in monitoring network-based services and daemons. However, most of these also have extensive local resource-monitoring capabilities to monitor such things as disk usage, CPU usage, process counts, changes in file system objects, and so on. Some examples include sysinfo, Nagios, Tripwire, Munin, sysstat utilities (sar, iostat, and sadf), Beats (via Elasticsearch, Logstash, and Kibana aka ELK), Icinga, and so on.

Staying in the Loop (Mailing Lists)

As part of managing your system’s security, you should be subscribed to key security mailing lists, such as BugTraq (www.securityfocus.com). BugTraq is a moderated mailing list that generates only a small handful of e-mails a day, most of which may not pertain to the software you are running. However, this is where critical issues are likely to show up first and be dealt with in real time.

In addition to BugTraq, any security lists for software for which you are responsible are musts. Also look for announcement lists for the software you use. Most Linux distributions also maintain announcement lists for security issues that pertain to their specific distros. Major software vendors also maintain their own lists. Although this may seem like a lot of e-mail, consider that most of the lists that are announcement-based are extremely low volume. In general, you likely won’t need to deal with significantly more e-mail than you already do!

Summary

In this chapter you learned about securing your Linux system and mitigating risk, and you learned what to look for when making decisions about how to balance function with the need to be secure. We touched on local security concepts and techniques at a very high level. Specifically, we discussed SetUID programs, mitigating risk through the use of chroot environments, popular MAC security models (SELinux and AppArmor), and things that should be monitored as part of daily system housekeeping.

In the end, you will find that maintaining a reasonably secure environment is akin to maintaining good hygiene. Keep your server clean of unnecessary applications, make sure the environment for each application is minimized so as to limit exposure, and patch your software as security issues are brought to light. Try to keep as up to date as possible with security-related news for software that you run. With these basic commonsense practices, you’ll find that your servers will be quite reliable and secure.