
CHAPTER

16

Network Security

In Chapter 15, you learned that exploit vectors are of two types: those in which the vulnerability is exploitable locally and those in which the vulnerability is exploitable over a network. The latter case is covered in this chapter.

Network security addresses the problem of attackers sending malicious network traffic to your system with the intent of either making your system unavailable (denial-of-service, or DoS, attack) or exploiting weaknesses in your system to gain access or control of the system or other related systems. Network security is not a substitute for the good local security practices discussed in the previous chapter. Both local and network best security practices are necessary to keep things working in the expected manner.

This chapter covers four aspects of network security: tracking services, monitoring network services, handling attacks, and tools for testing. The following sections should be used as an accompaniment to the information in Chapters 14 and 15.

TCP/IP and Network Security

The following discussion assumes you have experience configuring a system for use on a TCP/IP network. Because the focus here is on network security and not an introduction to networking, this section discusses only those parts of TCP/IP affecting your system’s security. If you’re curious about TCP/IP’s internal workings, read Chapter 12.

The Importance of Port Numbers

Every host on an IP-based network has at least one IP address. In addition, every Linux-based host has many individual processes running. Each process has the potential to be a network client, a network server, or both. With potentially more than one process being able to act as a server on a single system, using an IP address alone to uniquely identify a network connection is not enough.

To solve this problem, TCP/IP adds a component to uniquely identify a TCP or UDP connection, called a port. Every connection from one host to another has a source port and a destination port. Each port is labeled with an integer between 0 and 65535.

To identify every unique connection possible between two hosts, the operating system keeps track of four pieces of information: the source IP address, the destination IP address, the source port number, and the destination port number. The combination of these four values is guaranteed to be unique for all host-to-host connections. (Actually, the operating system tracks a myriad of connection information, but only these four elements are needed for uniquely identifying a connection.)

The host initiating a connection specifies the destination IP address and port number. Obviously, the source IP address is already known. But the source port number, the value that will make the connection unique, is assigned by the source operating system. It searches through its list of already open connections and assigns the next available port number.

By convention, this number is always greater than 1024 (port numbers from 0 to 1023 are reserved for system uses and well-known services). Technically, the source host can also select its own source port number; to do so, however, it must choose a port that another process has not already taken. Generally, most applications let the operating system pick the source port number for them.

Given this arrangement, we can see how source host A can open multiple connections to a single service on destination host B. Host B’s IP address and port number will always be constant, but host A’s port number will be different for every connection. The combination of source and destination IPs and port numbers is, therefore, unique, and both systems can have multiple independent data streams (connections) between each other.

To offer services, a typical server runs programs that listen on specific port numbers. Many of these port numbers are used for well-known services and are collectively referred to as well-known ports, because the port number associated with a known service is an approved standard. For example, port 80 is the well-known service port for HTTP.

In the upcoming “Using the netstat Command” section, we’ll look at the netstat command as an important part of your network security arsenal. When you have a firm understanding of what port numbers represent, you’ll be able to identify and interpret the network security statistics provided by tools like the netstat (and similar) commands.

Tracking Services

The services provided by a server are what make it a server. The ability to provide the service is accomplished by processes that bind to network ports and listen to the requests coming in. For example, a web server might start a process that binds to port 80 and listens for requests to download the pages of a site it hosts. Unless a process exists to listen on a specific port, Linux will simply ignore packets sent to that port.

This section discusses the usage of the netstat command, a tool for tracking and debugging network connections (among other things) in your system.

Using the netstat Command

To track what ports are open and what ports have processes listening to them, we use the netstat command. Here’s an example:
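A typical invocation, with a few illustrative lines of output (your addresses, ports, and states will differ), might look like this:

netstat -natu

Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN
tcp        0      0 192.168.1.4:22          192.168.1.50:51122      ESTABLISHED
udp        0      0 0.0.0.0:68              0.0.0.0:*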

By default (with no parameters), netstat will provide all established connections for both network and domain sockets. That means we’ll see not only the connections that are actually working over the network, but also the interprocess communications (which, from a security monitoring standpoint, might not be immediately useful). So in the command just illustrated, we have asked netstat to show us all ports (-a), regardless of whether they are listening or actually connected, for TCP (-t) and UDP (-u). We have told netstat not to spend any time resolving IP addresses to hostnames (-n).

In the netstat output, each line represents either a TCP or UDP network port, as indicated by the first column of the output. The Recv-Q (receive queue) column lists the number of bytes received by the kernel but not read by the process. Next, the Send-Q (send queue) column tells us the number of bytes sent to the other side of the connection but not acknowledged.

The fourth, fifth, and sixth columns are the most interesting in terms of system security. The Local Address column tells us our server’s IP address and port number. Remember that our server recognizes itself as 127.0.0.1 and 0.0.0.0, as well as its normal IP address. In the case of multiple interfaces, each port being listened to will usually show up on all interfaces and, thus, as separate IP addresses. The port number is separated from the IP address by a colon (:). In the output, the Ethernet device has the IP address 192.168.1.4.

The fifth column, Foreign Address, identifies the other side of the connection. In the case of a port that is being listened to for new connections, the default value will be 0.0.0.0:*. This IP address initially means nothing, since we’re still waiting for a remote host to connect to us!

The sixth column tells us the state of the connection. The man page for netstat lists all of the states, but the two you’ll see most often are LISTEN and ESTABLISHED. The LISTEN state means that a process on your server is listening on the port number and ready to accept new connections. The ESTABLISHED state means just that—a connection is established between a client and server.

Security Implications of netstat’s Output

By listing all of the available connections, you can get a snapshot of what the system is doing. You should be able to explain and account for all ports listed. If your system is listening to a port that you cannot explain, this should raise suspicions.

Just in case you haven’t yet memorized all the well-known services and their associated port numbers (all 25 zillion of them!), you can look up the matching information you need in the /etc/services file. However, some services (most notably those that use the portmapper) don’t have set port numbers but are valid services. To see which process is associated with a port, use the -p option with netstat. Be on the lookout for odd or unusual processes using the network sockets. For example, if the Bourne Again Shell (Bash) is listening to a network port, you can be fairly certain that something odd is going on!

Finally, remember that you are mostly interested in the destination port of a connection; this tells you which service is being connected to and whether it is legitimate. The source address and source port are, of course, important, too—especially if somebody or something has opened up an unauthorized back door into your system. Unfortunately, netstat doesn’t explicitly tell you who originated a connection, but you can usually figure it out if you give it a little thought. Of course, becoming familiar with the applications that you do run and their use of network ports is the best way to determine who originated a connection to where. In general, you’ll find that the rule of thumb is that the side whose port number is greater than 1024 is the side that originated the connection. Obviously, this general rule doesn’t apply to absolutely all services. Some oddball services run on ports higher than 1024, such as the X Window System (port 6000).

Binding to an Interface

A common approach to improving the security of a service running on your server is to have it bind only to a specific network interface. By default, applications will bind to all interfaces (seen as 0.0.0.0 in the netstat output). This allows connections to that service from any interface, so long as the connection makes it past any Netfilter rules (the Linux kernel's built-in firewall stack) you may have configured. However, if you need a service to be available only on a particular interface, you should configure that service to bind to that specific interface.

For example, let’s assume that there are three interfaces on your server:

•   eno1, with the IP address 192.168.1.4

•   eno2, with the IP address 172.16.1.1

•   lo, with the IP address 127.0.0.1

Let’s also assume that your server does not have IP forwarding (/proc/sys/net/ipv4/ip_forward) enabled. In other words, machines on the 192.168.1.0/24 (eno1) side cannot communicate with machines on the 172.16/16 side. The 172.16/16 (eno2) network represents the “safe” or “inside” network, and, of course, 127.0.0.1 (lo or loopback) represents the host itself.

If the application binds itself to 172.16.1.1, then only those hosts on the 172.16/16 network will be able to reach the application and connect to it. If you do not trust the hosts on the 192.168.1/24 side (for example, because it is designated as a demilitarized zone, or DMZ), this is a safe way to provide services to one segment without exposing yourself to another. For even less exposure, you can bind an application to 127.0.0.1. By doing so, you can almost guarantee that connections will have to originate from the server itself to communicate with the service. For example, if you need to run the MySQL database for a web-based application and the application runs on the server, then configuring MySQL to accept only connections from 127.0.0.1 means that any risk associated with remotely connecting to and exploiting the MySQL service is significantly mitigated. The attacker would have to compromise your web-based application and somehow make it query the database on the attacker’s behalf (perhaps via a SQL injection attack) in order to circumvent this setup.
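As a concrete illustration (the configuration file's location varies by distribution and by MySQL/MariaDB version), binding the MySQL daemon to the loopback interface is typically a one-line setting:

# In /etc/my.cnf or a file under /etc/mysql/, within the [mysqld] section:
[mysqld]
bind-address = 127.0.0.1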

Shutting Down Services

Striking a reasonable balance between ease of installation and manageability of Linux servers on the one hand and providing a secure out-of-the-box experience on the other can be a delicate process. One of the side effects of this can be seen in some distributions that oversimplify things by adopting unsafe default settings for the sake of end-user convenience. The task of seeking out these unsafe defaults is thus left to you as the system administrator.

While evaluating which services should stay or go, answer the following questions:

•   Do we need the service? The answer to this question is important. In most situations, you should be able to disable a great number of services that start up by default.

•   If we do need the service, is the default setting secure? This question can also help you eliminate some services—if they aren’t secure and they can’t be made secure, then chances are they should be removed. For example, if remote login is a requirement and Telnet is the service enabled to provide that function, then an alternative such as SSH should be used instead, due to Telnet’s inability to encrypt login information over a network.

•   Are the developers of the software providing the service still actively maintaining the software with security patches? All software needs updates from time to time. This is partly because as features get added, new security problems and bugs creep in. So be sure to track the server software’s development and get updates as necessary.

Shutting Down xinetd and inetd Services

To shut down a service that is started via the xinetd program, edit the service’s configuration file under the /etc/xinetd.d/ directory and set the value of the disable directive to Yes.
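The relevant portion of such a file would then look something like this (the echo service shown here is purely an example):

# /etc/xinetd.d/echo (excerpt)
service echo
{
        socket_type     = stream
        protocol        = tcp
        wait            = no
        disable         = yes
}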

For traditional System V–based services, you can also use the chkconfig command to disable the service managed by xinetd. For example, to disable the echo service, type the following:
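A representative form of the command (run as root) is:

chkconfig echo off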

On modern Linux distributions running systemd, you can alternatively disable a service using the systemctl command. For example, to disable the xinetd service, use the following:
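Something along these lines (the .service suffix is optional) should do it:

systemctl disable xinetd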

On legacy Debian-based systems such as Ubuntu, you can use the sysv-rc-conf command (install it with the apt-get command if you don’t have it installed) to achieve the same effect. For example, to disable the echo service in Ubuntu, you could run the following:
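A representative invocation (run as root) would be:

sysv-rc-conf echo off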

Shutting Down Non-xinetd Services

If a service is not managed by xinetd, then it is run by a separate process or script that is started at boot time. If the service in question was installed by your distribution, and your distribution offers a nice tool for disabling a service, you may find that to be the easiest approach.

On modern Linux distributions running systemd, you can stop a service using the systemctl command. For example, to stop the rpcbind service, type the following:
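A command along these lines will stop it:

systemctl stop rpcbind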

Similarly, to disable the rpcbind service unit so that it does not start up at the next (or any subsequent) system boot, use the systemctl utility like this:
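The corresponding disable command would be:

systemctl disable rpcbind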

Alternatively, on legacy (non-systemd) Linux distros, the chkconfig program provides an easy way to enable and disable individual services. For example, to disable the rpcbind service from starting in runlevels 3 and 5 on such systems, simply run the following:
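A typical invocation on such a system looks like this:

chkconfig --level 35 rpcbind off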

Note that using chkconfig doesn’t actually turn an already running service on or off; instead, it defines what will happen at the next startup time. To stop the running process, use the control script in the /etc/init.d/ directory or the service command. In the case of rpcbind, we would stop it with the following:
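Either of the following should do it (assuming the init script is named rpcbind on your system):

/etc/init.d/rpcbind stop
service rpcbind stop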

Monitoring Your System

Several free, open source, commercial-grade applications exist that perform monitoring and are well worth checking out. Here, we’ll take a look at a variety of excellent tools that help with system monitoring. Some of these tools come preinstalled with your Linux distribution; others don’t. All are free and easily acquired.

Making the Best Use of syslog

In Chapter 9, we explored rsyslogd, the system logger, as well as the systemd-journald service (journald), both of which help manage and collect log messages from various programs. By now, you’ve probably seen the types of log messages you get with rsyslogd and have played with the journalctl utility as well. These include security-related messages, such as who has logged into the system, when they logged in, and so forth.

As you can imagine, it’s possible to analyze these logs to build a time-lapse image of the utilization of your system services. This data can also point out questionable activity. For example, why was the host crackerboy.nothing-better-to-do.net sending so many web requests in such a short period of time? Has he found a hole (vulnerability) in the system?

Log Parsing

Doing periodic checks on the system’s log files is an important part of maintaining a good security posture. Unfortunately, scrolling through an entire day’s worth of logs is a time-consuming task that might reveal few meaningful events. To ease the drudgery, pick up a text on a scripting language (such as Python) and write small scripts to parse out the logs. A well-designed script should ignore what it recognizes as normal behavior and show everything else. This can reduce thousands of log entries for a day’s worth of activities down to a manageable few dozen. This is an effective way to detect attempted break-ins and possible security gaps.
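As a minimal sketch of the idea (the unit name, account names, and network prefix below are placeholders; adapt them to whatever “normal” means on your systems), a filter that discards expected SSH logins and shows everything else might look like this:

journalctl -u sshd --since yesterday | \
    grep -v -E 'Accepted publickey for (admin|backup) from 172\.16\.'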

Hopefully, it’ll become entertaining to watch the script kiddies trying and failing to break down your walls. Several canned solutions exist that can also help make parsing through log files easier. Examples of such programs that you might want to try out are journalctl, logwatch, gnome-system-log, ksystemlog, ELK (www.elastic.co), and Splunk.

Storing Log Entries

Unfortunately, log parsing may not be enough. If someone breaks into your system, it’s likely that your log files will be promptly erased—which means all those wonderful scripts won’t be able to tell you a thing. To get around this, consider dedicating a single host on your network to storing log entries. Configure your local logging daemon to send all of its messages to a separate/central loghost, and configure the central host appropriately to accept logs from trusted or known hosts. In most instances, this should be enough to gather, in a centralized place, the evidence of any bad things happening.
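With rsyslog, for instance, the client side of such a setup can be a single forwarding rule, and the loghost side just needs a network input enabled (the hostname below is a placeholder; remember to restrict which hosts may send, for example with firewall rules):

# On each client, in /etc/rsyslog.conf or a file under /etc/rsyslog.d/:
*.*    @@loghost.example.com:514    # @@ forwards over TCP; a single @ would use UDP

# On the central loghost, enable the matching TCP listener:
module(load="imtcp")
input(type="imtcp" port="514")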

If you’re really feeling paranoid, consider attaching another Linux host to the loghost over a serial port, running a terminal emulation package such as minicom in log mode on that second host, and feeding all of the loghost’s log messages to it over the serial line. Using a serial connection between the hosts means that the second host does not need any network connectivity. The logging software on the loghost can be configured to send all messages to /dev/ttyS0 if you’re using COM1, or to /dev/ttyS1 if you’re using COM2. And, of course, do not connect the other system to the network! This way, in the event the loghost also gets attacked, the log files won’t be destroyed. The log files will be safely residing on the serially attached system, which is impossible to log into without physical access.

For an even higher (if somewhat ridiculous) degree of assurance about the sanctity of your logs, you can connect a printer to another system and have the terminal emulation package echo everything it receives on the serial port to the printer. Thus, if the serial host system fails or is damaged in some way by an attack, you’ll have a hard copy of the logs!

Monitoring Bandwidth with MRTG

Monitoring the amount of bandwidth being used on your servers produces some useful information. A common use for this is to justify the need for hardware upgrades, by being able to consistently demonstrate high system utilization levels. Your data can be easily turned into a graph, too—and everyone knows how much upper management folks like graphs and pretty pictures! Another useful benefit of monitoring bandwidth is to identify bottlenecks in the system, thus helping you balance the system load. But relative to the topic of this chapter, a useful aspect of graphing your bandwidth is to identify when things go wrong.

Once you’ve installed a package such as MRTG (Multi-Router Traffic Grapher, available at http://oss.oetiker.ch/mrtg/) to monitor bandwidth, you will quickly be able to establish a criterion for what “normal” looks like on your site. Investigate any inexplicable and substantial drop or increase in utilization, as it may indicate a failure or a type of attack. Other things to do are to check your logs, check the modification timestamps on configuration files to ensure that the modification times correspond to legitimate changes, look for configuration files with odd or unusual entries, and so on.

Handling Attacks

Part of security includes planning for the worst case: What happens if/when a break-in succeeds? At that point, knowing the details of how and when it happened is important, but possibly even more important is dealing with the aftermath of the event. Servers are doing things they shouldn’t, information that shouldn’t leak is leaking, or other mayhem is discovered by you or your team, and stakeholders are asking why the mayhem is spreading and why you aren’t doing your job. Your pretty graphs won’t be of much use to you in these situations!

Just as a facilities director plans for fires and a backup administrator plans for backing up and restoring data when needed, an IT security officer needs to plan for how to handle an attack. This section covers key points to consider with respect to Linux. For an excellent overview on handling attacks, visit the CERT web site at www.cert.org.

Trust Nothing (and No One)

The first thing you should do in the event of an attack is to fire everyone in the IT department. Absolutely no one is to be trusted. Everyone is guilty until proven innocent. Just kidding!

But, seriously, if an attacker has successfully broken into your systems, there is nothing that your servers can tell you about the situation that is completely trustworthy. Root kits (tool kits that attackers use to invade systems and then cover their tracks) can make detection difficult. With binaries replaced, you may find that there is nothing you can do to the server itself that helps. In other words, every server that has been successfully hacked may need to be completely rebuilt with a fresh installation. Before doing the reinstall, you should make an effort to look back at how far the attacker went so as to determine the point in the backup cycle when the data is certain to be trustworthy. Any data backed up after that should be closely examined to ensure that compromised data does not make it back into the system.

Change Your Passwords

If the attacker has gotten your root password or may have taken a copy of the password file (or equivalent), it is crucial that all of your passwords be changed. This is an incredible hassle; however, it is necessary to make sure that the attacker doesn’t waltz back into your rebuilt server using the password without any resistance.

NOTE  It is also a good idea to change the root password(s) as well as any other shared privileged account credentials following any staff changes. It may seem like everyone is leaving on good terms; however, later finding out that someone on your team had issues with the company can spell trouble.

Pull the Plug

Once you’re ready to start cleaning up, you will need to stop any remote access to the system. You may find it necessary to stop all network traffic to the server until it has been completely rebuilt with the latest patches and can safely be reconnected to the network.

This can be done by simply pulling the plug on whatever connects the box to the network. Putting a server back onto the network before it is fully patched is an almost certain way to find yourself dealing with another attack.

Network Security Tools

Lots of tools exist to help monitor your systems, including Nagios (www.nagios.org), Icinga (www.icinga.org) and, of course, the various tools already mentioned in this chapter. But what do you use to poke at your system for basic sanity checks?

In this section, we review a few tools that you can use for testing your system. Note that no one single tool is enough, and no combination of tools is perfect—there is no secret “Hackers Testing Tool Kit” that security professionals use. The key to the effectiveness of most tools is how you use them and how you interpret the data gathered by the tools.

Some of the tools discussed here were originally created to aid in basic diagnostics and system management and later on evolved to also become useful as security tools. What makes these tools work well for Linux from a security perspective is that they offer deeper insight into what your system is doing. That extra insight often proves to be incredibly helpful.

nmap

The nmap program can be used to scan a host or a group of hosts to look for open TCP and UDP ports. nmap can go beyond scanning and can actually attempt to connect to the remote listening applications or ports so that it can better identify the remote application. This is a powerful and simple way for an administrator to take a look at what the system exposes to the network and is frequently used by both attackers and administrators to get a sense of what is possible against a host.

What makes nmap powerful is its ability to apply multiple scanning techniques. This is especially useful because each scanning technique has its pros and cons with respect to how well it traverses firewalls and the level of stealth desired.
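A few representative invocations (scan only hosts you are authorized to test; the target address below reuses the example server from earlier in this chapter, and the SYN and UDP scans require root privileges):

nmap -sT 192.168.1.4                   # full TCP connect scan
nmap -sS -sV 192.168.1.4               # SYN (half-open) scan with service/version detection
nmap -sU --top-ports 50 192.168.1.4    # probe the most commonly used UDP ports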

Snort

An intrusion detection system (IDS) provides a way to monitor a point in the network surreptitiously and report on questionable activity based on packet traces. The Snort program (www.snort.org) is an open source IDS and intrusion prevention system (IPS) that provides extensive rule sets that are frequently updated with new attack vectors. Any questionable activity can be sent to a logging host, and several open source log-processing tools are available to help make sense of the information gathered (for example, the Basic Analysis and Security Engine, or BASE).

Running Snort on a Linux system that is located at a key entry/exit point in your network is a great way to track the activity without your having to set up a proxy for each protocol that you want to support.

Nessus and OpenVAS

The Nessus and OpenVAS applications take the idea behind nmap and extend it with deep application-level probes and a rich reporting infrastructure. Nessus is owned and managed by a commercial company, Tenable Network Security (www.tenable.com). The OpenVAS project (www.openvas.org) is a free and open source alternative to Nessus.

Running Nessus or OpenVAS against a server is a quick way to perform a sanity check on the server’s exposure. Your key to understanding these systems is in understanding their output. The report will log numerous comments, from an informational level all the way up to a high level. Depending on how your application is written and what other services you offer on your Linux system, they may log false positives or seemingly scary informational notes. Take the time to read through each one of them and understand what the output is, as not all of the messages necessarily reflect your situation. For example, if the scanner detects that your system is at risk due to a hole in Oracle 25c but your server does not even run Oracle, more than likely, you have hit upon a false positive!

Wireshark/tcpdump

You learned about Wireshark and tcpdump in Chapter 12, where we used them to study the ins and outs of TCP/IP. Although that chapter used these tools only for troubleshooting, they are just as valuable for performing network security functions.

Raw network traces are the food devoured by all the tools listed in the preceding sections to gain insight into what your server is doing. However, these tools don’t have quite the insight that you do into what your server is supposed to do. Thus, you’ll find it useful to be able to take network traces yourself and read through them to look for any questionable activity. You may be surprised by what you see!

For example, if you are looking at a possible break-in, you may want to start a raw network trace from another Linux system that can see all of the network traffic of your questionable host. By capturing all the traffic over a 24-hour period, you can go back and start applying filters to look for anything that shouldn’t be there. Extending the example, if the server is supposed to handle only web operations and SSH, you can take the packet trace with Domain Name System (DNS) resolution turned off and then apply the filter “not port 80 and not port 22 and not icmp and not arp.” Any packets that show up in the output are suspect.
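A sketch of that workflow with tcpdump might look like the following (the interface name, capture filename, and the questionable host’s address are assumptions; adjust them to your environment):

tcpdump -n -i eth0 -w suspect-host.pcap host 192.168.1.4
tcpdump -n -r suspect-host.pcap 'not port 80 and not port 22 and not icmp and not arp'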

Summary

Using the information presented in this chapter, you should have the basic high-level knowledge you need to make an informed decision about the state of health of your server and decide what, if any, action is necessary to secure it.

It is important to take the time to know what constitutes normal or baseline behavior for the systems/services that you manage. Once you know what normal behavior is, unusual behavior will stick out like a sore thumb. For example, if you know that the Telnet service should not be running (or even installed) on your system normally, then seeing a system log entry for access via Telnet would mean that something is terribly wrong!

Security as a field is constantly evolving and requires keeping a watchful/careful eye toward new developments. Be sure to subscribe to the relevant mailing lists, keep an eye on relevant web sites, educate yourself with additional reading materials/books, and, most important, always apply common sense.