Network File System (NFS)
Network File System (NFS) is one of the native ways of sharing files and applications across the network in the Linux/UNIX world. NFS is somewhat similar to Microsoft Windows File Sharing, in that it allows you to attach to a remote file system (or disk) and work with it as if it were a local drive—a handy tool for sharing files and large storage space among users.
NFS and Windows File Sharing are solutions to the same problem; however, the solutions are very different beasts. NFS requires different configurations, management strategies, tools, and underlying protocols. We will explore NFS and show how to deploy it in this chapter.
The Mechanics of NFS
As with most network-based services, NFS follows the usual client and server paradigms—that is, it has its client-side components and its server-side components.
Chapter 8 covered the concept of mounting and unmounting file systems. The same concepts apply to NFS, except you also need to specify the server hosting the share in addition to the other items (mount options) you would normally define. Of course, you also need to make sure the server is actually configured to permit access to the share!
Let’s look at an example. Assume there exists an NFS server named serverA that wants to share its local /home file system over the network. In NFS parlance, we say that the NFS server is “exporting its /home file system.” Assume there also exists a client system on the network named clientA that needs access to the contents of the /home file system being exported by the NFS server. Finally, assume all other requirements are met (permissions, security, compatibility, and so on).
For clientA to access the /home share being exported by serverA, clientA needs to make an NFS mount request for /home so that it can mount it locally, such that the remote share appears locally as the /home directory. Here is a simple command to trigger this:
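Assuming the /home mount point already exists on clientA and serverA's name is resolvable, the command (run as root on clientA) might look like this:

```shell
# Mount serverA's exported /home file system at the local /home directory
mount serverA:/home /home
```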
After executing the previous command on clientA, all the users on clientA would be able to view the contents of /home as if it were just another directory or local file system. Linux would take care of making all of the network requests to the server.
Remote procedure calls (RPCs) are responsible for handling the requests between the client and the server. RPC technology provides a standard mechanism for any RPC client to contact the server and find out to which service the calls should be directed. Thus, whenever a service wants to make itself available on a server, it needs to register itself with the RPC service manager, portmap, which tells the client where the actual service is located on the server.
Versions of NFS
The protocol behind NFS has evolved and changed a lot over the years. Standards committees have helped NFS evolve to take advantage of new technologies, as well as changes in usage patterns. At the time of this writing, three well-known versions of the protocol exist: NFS version 2 (NFSv2), NFS version 3 (NFSv3), and NFS version 4 (NFSv4).
NFSv2 is the oldest of the three. NFSv3 is the standard with perhaps the widest use. NFSv4 has been in development for a while and is the newest standard. NFSv2 should probably be avoided if possible and should be considered only for legacy reasons. NFSv3 should be considered if stability and widest range of client support are desired. NFSv4 should be considered if its bleeding-edge features are needed and probably for very new deployments where backward compatibility is not an issue. Perhaps the most important factor in deciding which version of NFS to consider would be the version that your NFS clients will support.
Here are some of the features of each NFS version:
• NFSv2 Mount requests are granted on a per-host basis and not on a per-user basis. This version uses Transmission Control Protocol (TCP) or User Datagram Protocol (UDP) as its transport protocol. Version 2 clients can access files of no more than 2GB in size.
• NFSv3 This version includes a lot of fixes for the bugs in NFSv2. It has more features than version 2, has performance gains over version 2, and can use either TCP or UDP as its transport protocol. Depending on the local file system limits of the NFS server itself, clients can access files larger than 2GB in size. Mount requests are also granted on a per-host basis and not on a per-user basis.
• NFSv4 This version of the protocol uses a stateful protocol such as TCP or Stream Control Transmission Protocol (SCTP) as its transport. It has improved security features thanks to its support for Kerberos; for example, client authentication can be conducted on a per-user basis or on a principal basis. It was designed with the Internet in mind, and as a result, this version of the protocol is firewall-friendly and listens on the well-known port 2049. The services of the RPC binding protocols (such as rpc.mountd, rpc.lockd, and rpc.statd) are no longer required in this version of NFS because their functionality has been built into the server; in other words, NFSv4 combines these previously disparate NFS protocols into a single protocol specification. (The portmap service is no longer necessary.) It includes support for file access control list (ACL) attributes and can support both version 2 and version 3 clients. NFSv4 introduces the concept of the pseudo-file system, which allows NFSv4 clients to see and access the file systems exported on the NFSv4 server as a single file system. NFSv4 is currently at minor revision 2 (NFSv4.2).
The version of NFS used can be specified at mount time by the client via the use of mount options. For a Linux client to use a specific NFS version, the
nfsvers mount option has to be specified with the desired version (for example,
nfsvers=3). Otherwise, the client will negotiate a suitable version with the server.
The rest of this chapter concentrates mostly on NFSv3 and NFSv4, because they are considered quite stable, they are well known, and have the widest cross-platform support.
Security Considerations for NFS
In its default state, NFS is not a secure method for sharing disks. The same commonsense rules that you would apply to securing other network services also apply to securing NFS. You should be able to trust the users on the client systems accessing your server, but if you can’t guarantee that this trust exists, you should have measures in place to mitigate the obvious security issues. So, for example, if you’re the root user on both the client and the server, there is a little less to worry about. The important thing in this case is to make sure non-root users don’t become root—which is something you should be doing anyway! You should also strongly consider using NFS mount flags, such as the
root_squash flag discussed later on.
If you cannot fully trust the person with whom you need to share a resource, it will be worth your time and effort to seek alternative methods of sharing resources (such as read-only sharing of the resources).
As always, stay up to date on the latest security bulletins from the Computer Emergency Response Team (www.cert.org), and keep up with all the patches from your distribution vendor.
Mount and Access a Partition
Several steps are involved in a client’s making a request to mount a server’s exported file system or resource (these steps pertain mostly to NFSv2 and NFSv3):
1. The client contacts the server’s portmapper to find out which network port is assigned as the NFS mount service.
2. The client contacts the mount service and requests to mount a file system. The mount service checks to see if the client has permission to mount the requested partition. (Permission for a client to mount a resource is based on directives or options in the /etc/exports file.) If all is well, the mount service returns an affirmative.
3. The client contacts the portmapper again—this time to determine on which port the NFS server is located. (Typically, this is port 2049.)
4. Whenever the client wants to make a request to the NFS server (for example, to read a directory), an RPC is sent to the NFS server.
5. When the client is done, it updates its own mount tables but doesn’t inform the server.
Notification to the server is unnecessary, because the server doesn’t keep track of all clients that have mounted its file systems. Because the server doesn’t maintain state information about clients and the clients don’t maintain state information about the server, clients and servers can’t tell the difference between a crashed system and a really slow system. Thus, if an NFS server is rebooted, ideally all clients should automatically resume their operations with the server as soon as the server is back online.
Enabling NFS in Fedora, RHEL, and CentOS
Almost all the major Linux distributions ship with support for NFS in one form or another. The only task left for the administrator is to configure it and enable it. On our sample Fedora system, enabling NFS is easy.
Because NFS version 3 (and lower) and its ancillary programs are RPC based, you first need to make sure that the system rpcbind service is installed and running.
To make sure that the rpcbind package is installed on the system, on a RPM-based distro (such as Fedora), type the following:
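On a stock Fedora system, the query looks like this:

```shell
# Query the RPM database for the rpcbind package
rpm -q rpcbind
```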
If the output is empty, you can use dnf to install it by running this:
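For example:

```shell
# Install rpcbind and its dependencies
dnf -y install rpcbind
```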
To check the status of the rpcbind service on systemd-enabled Linux distros, type this:
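Like so:

```shell
systemctl status rpcbind
```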
If the rpcbind service is stopped, start it like so:
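For example:

```shell
systemctl start rpcbind
```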
Before going any further, use the rpcinfo command to view the status of any RPC-based services that might have registered with portmap:
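A typical invocation:

```shell
# List all registered RPC programs, along with their ports and protocols
rpcinfo -p
```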
Because we don’t yet have an NFS server running on the sample system, the output may not show too many RPC services.
To start the NFS service, you can use the systemctl command.
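On recent Fedora, RHEL, and CentOS releases the systemd unit is named nfs-server (older releases used the unit name nfs):

```shell
systemctl start nfs-server
```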
NOTE systemd will automatically start rpcbind (as a dependency) whenever the nfs server is started, and so you don’t need to explicitly start rpcbind separately.
Using the rpcinfo command again to view the status of RPC programs registered with the portmapper shows that various RPC programs (mountd, nfs, nlockmgr, and so on) are now running.
To stop the NFS service, enter this command:
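On Fedora-type distros, the command would be:

```shell
systemctl stop nfs-server
```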
To have the NFS service automatically start up with the system at the next reboot, use the systemctl enable command.
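For example:

```shell
systemctl enable nfs-server
```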
Enabling NFS in Ubuntu and Debian
Installing and enabling an NFS server in Ubuntu or Debian is as easy as installing the following components: nfs-common, nfs-kernel-server, and rpcbind.
To install these using Advanced Packaging Tool (APT), run the following command:
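For example:

```shell
# Install the NFS client utilities, the NFS server, and the RPC port mapper
sudo apt install nfs-common nfs-kernel-server rpcbind
```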
The install process will also automatically start up the NFS server, as well as all its attendant services, for you. You can check this by running the following:
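Like so:

```shell
sudo systemctl status nfs-kernel-server
```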
To stop the NFS server in Ubuntu, type this:
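For example:

```shell
sudo systemctl stop nfs-kernel-server
```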
The Components of NFS
Versions 2 and 3 of the NFS protocol rely heavily on RPCs to handle communications between clients and servers. RPC services in Linux are managed by the portmap service. As mentioned, this ancillary service is no longer needed in NFSv4 and higher.
The following list shows the various RPC processes that facilitate the NFS service under Linux. The RPC processes are mostly relevant only in NFS versions 2 and 3, but mention is made wherever NFSv4 applies.
• rpc.statd This process is responsible for sending notifications to NFS clients whenever the NFS server is restarted without being gracefully shut down. It provides status information about the server to rpc.lockd when queried. This is done via the Network Status Monitor (NSM) RPC protocol. It is an optional service that is started automatically by the nfslock service. It is not required in NFSv4.
• rpc.rquotad As its name suggests, rpc.rquotad supplies the interface between NFS and the quota manager. NFS users/clients will be held to the same quota restrictions that would apply to them if they were working on the local file system instead of via NFS. It is not required in NFSv4.
• rpc.mountd When a request to mount a partition is made, the rpc.mountd daemon takes care of verifying that the client has the appropriate permission to make the request. This permission is stored in the /etc/exports file. (The upcoming section “The /etc/exports Configuration File” tells you more about the /etc/exports file.) It is automatically started by the NFS server init scripts. It is not required in NFSv4.
• rpc.nfsd The main component to the NFS system, this is the NFS server/daemon. It works in conjunction with the Linux kernel either to load or unload the kernel module as necessary. It is, of course, still relevant in NFSv4.
• rpc.lockd The rpc.statd daemon uses this daemon to handle lock recovery on crashed systems. It also allows NFS clients to lock files on the server. The nfslock service is no longer used in NFSv4.
• rpc.idmapd This is the NFSv4 ID name-mapping daemon. It provides this functionality to the NFSv4 kernel client and server by translating user and group IDs to names, and vice versa.
• rpc.svcgssd This is the server-side rpcsec_gss daemon. The rpcsec_gss protocol allows the use of the gss-api generic security API to provide advanced security in NFSv4.
• rpc.gssd This provides the client-side transport mechanism for the authentication mechanism in NFSv4 and higher.
NOTE You should understand that NFS itself is an RPC-based service, regardless of the version of the protocol. Therefore, even NFSv4 is inherently RPC based. The fine point here lies in the fact that most of the previously used ancillary and stand-alone RPC-based services (such as mountd and statd) are no longer necessary, because their individual functions (or functionality) have now been folded into the NFSv4 daemon.
Kernel Support for NFS
NFS is implemented in two forms among the various Linux distributions. Most distributions ship with NFS support enabled in the kernel. A few Linux distributions also ship with support for NFS in the form of a stand-alone daemon that can be installed via a package.
Although not mandatory, kernel-based NFS server support is considered the de facto standard. However, if you choose to run NFS as a stand-alone daemon, you can rest assured that the nfsd program that handles NFS server services is completely self-contained and provides everything necessary to serve NFS.
NOTE On the other hand, clients must have support for NFS in the kernel. This support in the kernel has been around for a long time and is thus stable. Almost all present-day Linux distributions ship with kernel support for NFS enabled.
Configuring an NFS Server
Setting up an NFS server is a two-step process. The first step is to create the /etc/exports file, which defines which parts of your server’s file system or disk are shared with the rest of your network and the rules by which they get shared (for example, is a client allowed read-only access to the file system, or also allowed to write to the file system?). After defining the /etc/exports file, the second step is to start the NFS server processes that read the /etc/exports file.
The /etc/exports Configuration File
This primary configuration file for the NFS server lists the file systems that are sharable, the hosts with which they can be shared, and with what permissions (as well as other parameters). The file specifies remote mount points for the NFS mount protocol.
The format for the file is simple. Each line in the file specifies the mount point(s) and export flags within one local server file system for one or more hosts.
Here is the format of each entry/line in the /etc/exports file:
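In schematic form, each entry looks like this:

```
/directory/to/export  client|ip_network(permissions)  client|ip_network(permissions)  ...
```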
The different fields are explained here:
• /directory/to/export This is the directory you want to share with other users—for example, /home.
• client This refers to the hostname(s) of the NFS client(s).
• ip_network This allows the matching of hosts by IP addresses (for example, 172.16.1.1) or network addresses with a netmask combination (for example, 172.16.0.0/16). Wildcard characters (such as * and ?) are also supported in this field.
• permission These are the corresponding permissions for each client. Table 24-1 describes the valid permissions for each client.
Table 24-1 NFS Permissions
Following is an example of a complete NFS /etc/exports file. (Note that line numbers have been added to the listing to aid readability.)
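Consistent with the description that follows, such a file might look like this (the leading numbers are for reference only and would not appear in the actual file):

```
1  # /etc/exports file for serverA
2  # Shares the /home and /usr/local file systems
3  /home       hostA(rw) hostB(rw) clientA(rw,no_root_squash)
4  /usr/local  172.16.0.0/16(ro)
```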
Lines 1 and 2 are comments and are ignored when the file is read.
Line 3 exports the /home file system to the machines named hostA and hostB, and gives them read/write (rw) permissions. And to the machine named clientA, it gives read/write (rw) access and allows the remote root user to have root privileges on the exported file system (/home)—this last bit is indicated by the no_root_squash option.
Line 4 exports the /usr/local/ directory to all hosts on the 172.16.0.0/16 network. Hosts in the network range are allowed read-only access.
Telling the NFS Server Process About /etc/exports
Once you have an /etc/exports file set up, use the exportfs command to tell the NFS server processes to reread the configuration. Commonly used options for exportfs include -a (export or unexport all directories), -r (re-export all entries in /etc/exports, synchronizing the server's export table with the file), -u (unexport one or more directories), -o (specify a comma-separated list of export options), and -v (operate verbosely).
Following are sample usages of exportfs from the command line.
To export all file systems specified in the /etc/exports file, type this:
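For example:

```shell
exportfs -a
```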
To export the directory /usr/local to the host clientA with the no_root_squash permission, type this:
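Like so:

```shell
exportfs -o no_root_squash clientA:/usr/local
```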
In most instances, you will simply want to use exportfs with the -r option (meaning re-export all directories and complete other housekeeping tasks).
The showmount Command
When you're configuring NFS, you'll find it helpful to use the showmount command to see if everything is working correctly. The command shows mount information for an NFS server, and by using it, you can quickly determine whether you have configured nfsd correctly.
After you have configured your /etc/exports file and exported all your file systems using exportfs, you can run showmount -e to see a list of exported file systems on the local NFS server. The -e option tells showmount to show the NFS server's export list. Here's an example:
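Run on the server itself:

```shell
showmount -e
```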
Running the showmount command without any options will list clients connected to the server:
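For example:

```shell
showmount
```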
You can also run this command on clients by passing the server hostname as the last argument. To show the exported file systems on a remote NFSv3 server (serverA) from an NFS client (clientA), you can issue this command while logged into clientA:
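For example:

```shell
# Query serverA's export list from the client
showmount -e serverA
```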
Troubleshooting Server-Side NFS Issues
When exporting file systems, you may sometimes find that the server appears to be refusing the client access, even though the client is listed in the /etc/exports file. Typically, this happens because the server takes the IP address of the client connecting to it and resolves that address to the fully qualified domain name (FQDN), and the hostname listed in the /etc/exports file isn’t qualified. For example, this might happen if the server thinks the client hostname is clientA.example.com, but the /etc/exports file lists just clientA.
Another common problem is that the server’s perception of the hostname/IP pairing is not correct. This can occur because of an error in the /etc/hosts file or in the Domain Name System (DNS) tables. You’ll need to verify that the pairing is correct.
For NFSv2 and NFSv3, the NFS service may fail to start correctly if the other required services, such as the portmap service, are not already running.
Even when everything seems to be set up correctly on the client side and the server side, you may find that the firewall on the server side is preventing the mount process from completing. In such situations, you will notice that the mount command seems to hang without any obvious errors. On Red Hat–like systems that use firewalld for firewall rule management, you can permanently open the ports for the NFS service on the server by using the firewall-cmd command, like this:
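A minimal sketch (run as root on the server; for NFSv3 clients you would also need to add the rpc-bind and mountd services):

```shell
# Permanently allow NFS traffic through firewalld, then reload the rules
firewall-cmd --permanent --add-service=nfs
firewall-cmd --reload
```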
Configuring NFS Clients
NFS clients are remarkably easy to configure under Linux, because they don't require any new or additional software to be loaded. The only requirement is that the kernel be compiled to support the NFS file system. Virtually all Linux distributions come with this feature enabled by default in their stock kernel. Aside from the kernel support, the only other important factor is the options used with the mount command.
The mount Command
The mount command was originally discussed in Chapter 8. The important parameters to use with the mount command are the NFS server name or IP address, the local mount point, and the options specified after -o on the mount command line.
The following is an example of an NFS mount command invocation:
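A plausible invocation might look like this (the /mnt/home mount point and the particular -o options are illustrative choices, not requirements):

```shell
# Mount serverA's exported /home read/write at the local /mnt/home mount point
mount -o rw,bg,intr,soft -t nfs serverA:/home /mnt/home
```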
Here, serverA is the NFS server name. Make sure that the name is resolvable via either DNS or the /etc/hosts file. The various -o options available are explained in Table 24-2.
Table 24-2 Mount Options for NFS
These mount options can also be used (hardcoded) in the /etc/fstab file. The same mount expressed as an entry in the /etc/fstab file would look like this:
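An equivalent /etc/fstab entry might read (mount point and options are illustrative):

```
serverA:/home   /mnt/home   nfs   rw,bg,intr,soft   0 0
```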
Soft vs. Hard Mounts
By default, NFS operations are hard, which means the clients continue their attempts to contact the server indefinitely. However, this behavior is not always desirable! It causes a problem if an emergency shutdown of all systems is performed. If the servers happen to get shut down before the clients, the clients' shutdowns will stall while they wait for the servers to come back up. Enabling a soft mount allows the client to time out the connection after a number of retries (specified with the retrans mount option) and return an error to the calling application.
NOTE There is one exception to the preferred approach of using soft mounts: Don’t use this arrangement when you have data that must be committed to disk no matter what and you don’t want to return control to the application until the data has been committed. (NFS-mounted mail directories are typically mounted this way.)
Cross-Mounting Disks
Cross-mounting can best be described as the process of having serverA NFS-mounting serverB's disks and serverB NFS-mounting serverA's disks. Although this may appear innocuous at first, there is a subtle danger in doing this. If both servers crash, and if each server requires mounting the other's disk in order to boot correctly, you've got a chicken-and-egg problem. ServerA won't boot until serverB is done booting, but serverB won't boot because serverA isn't done booting!
To avoid this problem, avoid situations that require these interdependencies. Ideally, servers should be able to boot completely without needing to mount anyone else's disks for anything critical. However, this doesn't mean you can't cross-mount at all. There are legitimate reasons for cross-mounting, such as needing to make home directories available across all servers. In these situations, make sure you set your /etc/fstab entries to use the bg mount option. By doing so, you will allow each server to background the mount process for any failed mounts, thus giving all of the servers a chance to boot completely and then properly make their NFS-mountable file systems available.
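A hypothetical /etc/fstab entry on serverA using the bg option might look like this (serverB and the paths are placeholders):

```
serverB:/export/home   /home   nfs   rw,bg,intr,soft   0 0
```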
The Importance of the intr Option
When a process makes a system call, the kernel takes over the action. During the time that the kernel is handling the system call, the process may not have control over itself. In the event of a kernel access error, the process must continue to wait until the kernel request returns; the process can't give up and quit. In normal cases, the kernel's control isn't a problem, because typically, kernel requests get resolved quickly. When there's an error, however, it can be quite a nuisance. Because of this, NFS has an option to mount file systems with the interruptible flag (the intr option), which allows a process that is waiting on an NFS request to give up and move on. In general, unless you have a reason not to use the intr option, it is usually a good idea to do so.
Performance Tuning
The default block size that is transmitted with NFS version 3 is 8192 bytes (for NFSv4, it is 32,768 bytes). There might be situations where you need to tune these default values to take advantage of faster network stacks or faster equipment available to you. This is where the wsize (write size) and rsize (read size) options come in handy. You will usually want to tune these values up, or down in cases where you have older hardware, to suit your environment.
Here is a sample entry in an NFS client's /etc/fstab file that doubles the default rsize and wsize values to 65,536 bytes for NFSv4:
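Such an entry might look like this (server name, mount point, and the other options are illustrative):

```
serverA:/home   /mnt/home   nfs4   rw,bg,intr,soft,rsize=65536,wsize=65536   0 0
```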
Troubleshooting Client-Side NFS Issues
Like any major service, NFS has mechanisms to help it cope with error conditions. In this section, we discuss some common error cases and how NFS handles them.
Stale File Handles
If a file or directory is in use by one process when another process removes the file or directory, the first process gets an error message from the server. Typically, this error states something to the following effect: “Stale NFS file handle.”
Stale file handle errors most often occur, for example, when you're using the graphical environment on a system and you have two GUI terminal windows open. If the first terminal window is in a particular directory—say, /mnt/usr/local/mydir/—and that directory gets deleted from the second terminal window, the next time you press ENTER in the first terminal window, you'll see the error message.
To fix this problem, simply change your directory to one that you know exists, using an absolute rather than a relative path (for example, cd /tmp).
Permission Denied
You're likely to see the "Permission denied" message if you're logged in as root and are trying to access a file that is NFS-mounted. Typically, this means that the server on which the file system is mounted is not acknowledging root's permissions.
This is usually the result of forgetting that the /etc/exports file will, by default, enable the
root_squash option. So if you are experimenting from a permitted NFS client as the root user, you might wonder why you are getting access-denied errors even though the remote NFS share seems to be mounted properly.
The quick way around this problem is to become the user who owns the file you're trying to control. For example, if you're root and you're trying to access a file owned by the user yyang, use the su command to become yyang:
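Like so:

```shell
# Start a login shell as the user yyang
su - yyang
```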
When you’re done working with the file, you can exit out of yyang’s shell and return to root. Note that this workaround assumes that yyang exists as a user on the system and has the same UID on both the client and the server.
A similar problem occurs when users clearly have the same usernames on the client and the server but still get permission-denied errors. This might happen because the actual UIDs associated with the usernames on both systems are different. For example, suppose the user mmellow has a UID of 1003 on the host clientA, but a user with the same name on serverA has a UID of 6000. The simple workaround to this can be to create/maintain users with the same UIDs and GIDs across all systems. The scalable workaround to this may be to implement a central user database infrastructure, such as LDAP or NIS, so that all users have the same UIDs and GIDs, independent of their local client systems.
TIP Keep those UIDs in sync! Every NFS client request to an NFS server includes the UID of the user making the request. This UID is used by the server to verify that the user has permissions to access the requested file. However, in order for NFS permission-checking to work correctly, the UIDs of the users must be synchronized between the client and server. (The all_squash option can circumvent this when used in the /etc/exports file.) Having the same username on both systems is not enough, however. The numerical equivalent of the usernames (UID) should also be the same. A Network Information Service (NIS) database or a Lightweight Directory Access Protocol (LDAP; see Chapter 27) database can help in this situation. These directory systems help to ensure that UIDs, GIDs, and other information are in sync by keeping all the information in a central database.
Sample NFS Client and NFS Server Configuration
In this section, you’ll put everything you’ve learned thus far together by walking through the actual setup of an NFS environment. We will set up and configure the NFS server. Once that is accomplished, we will set up an NFS client and make sure that the directories get mounted when the system boots.
In particular, we want to export the /usr/local file system on the host serverA to a particular host on the network named clientA. We want clientA to have read/write access to the shared volume and the rest of the world to have read-only access to the share. Our clientA will mount the NFS share at its /mnt/usr/local mount point. The procedure involves these steps:
TIP For a quick-and-dirty way to make sure that both server(s) and client(s) can resolve the other system’s hostname to the correct IP address, you can create quick entries in the /etc/hosts file using these sample commands on the relevant systems. Here, clientA’s IP is 172.16.0.113 and serverA’s IP is 172.16.0.2:
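A sketch of those commands, using the addresses given above (run the first on serverA and the second on clientA, as root):

```shell
# On serverA: map clientA's name to its IP address
echo "172.16.0.113  clientA" >> /etc/hosts
# On clientA: map serverA's name to its IP address
echo "172.16.0.2    serverA" >> /etc/hosts
```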
1. On the server, serverA, edit the /etc/exports configuration file. We want to share /usr/local, so input the following into the /etc/exports file:
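Given the stated goals (read/write for clientA, read-only for everyone else), the entry might look like this:

```
# Share /usr/local: read/write for clientA, read-only for all other hosts
/usr/local   clientA(rw) *(ro)
```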
2. Save your changes to the file when you are done editing and then exit the text editor.
3. On the NFS server, first check whether the rpcbind service is running:
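For example:

```shell
systemctl status rpcbind
```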
If it is stopped or inactive, you can start it with this command:
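Like so:

```shell
systemctl start rpcbind
```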
TIP On an openSUSE system, the equivalents of the preceding commands are rcrpcbind status and rcrpcbind start.
4. Start the NFS service, which will start all the other attendant services it needs. Use the systemctl command on systemd-enabled Linux distros to start the service:
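On Fedora-type distros, where the unit is named nfs-server, this would be:

```shell
systemctl start nfs-server
```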
5. Use the exportfs command to re-export the directories in /etc/exports:
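For example:

```shell
exportfs -r
```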
6. To check whether your exports are configured correctly, run the showmount command:
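Like so:

```shell
showmount -e
```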
7. If you don't see the file systems that you put into /etc/exports, check /var/log/messages for any output that mountd might have logged. For journald-enabled systems, you can also use the journalctl -f -xe command to monitor the logs in real time.
If you need to make changes to /etc/exports, don't forget to reload or restart the nfsd service and run exportfs -r when you are done making the changes. And, finally, run showmount -e again to make sure that the changes took effect.
8. Now that you have the server configured, it is time to set up the client. First, see if the RPC mechanism is working between the client and the server. You will again use the showmount command to verify that the client can see the shares. If the client cannot see the shares, you might have a network problem or a permissions problem with the server. From clientA, issue the following command:
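For example:

```shell
showmount -e serverA
```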
TIP If the showmount command returns an error similar to "clnt_create: RPC: Port mapper failure - Unable to receive: errno 113 (No route to host)" or "clnt_create: RPC: Port mapper failure..." then you should ensure that a firewall running on the NFS server or between the NFS server and the client is not blocking the communications. Actual mounting might still work in spite of this error! You need to open up ports for the following services: nfs, rpc-bind, and mountd.
9. Once you have verified that you can view shares from the client, it is time to see if you can successfully mount a file system. First, create the local /mnt/usr/local/ mount point and then use the mount command as follows:
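A sketch of both steps (run as root on clientA):

```shell
# Create the local mount point, then mount serverA's export onto it
mkdir -p /mnt/usr/local
mount -t nfs serverA:/usr/local /mnt/usr/local
```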
10. Use the mount command to view only the NFS-type file systems that are mounted on clientA:
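Like so:

```shell
# With no device argument, mount lists mounted file systems of the given type
mount -t nfs
```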
Or for NFSv4 mounts, you would run this:
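For example:

```shell
mount -t nfs4
```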
11. If these commands succeed, you can add the mount entry with its options into the /etc/fstab file so that the remote file system will get mounted upon reboot:
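The entry might look like this (the mount options shown are illustrative choices):

```
serverA:/usr/local   /mnt/usr/local   nfs   rw,bg,intr,soft   0 0
```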
Common Uses for NFS
The following ideas are, of course, just ideas. You are likely to have your own reasons for sharing file systems via NFS.
• To host popular programs If you are accustomed to Windows, you’ve probably worked with applications that refuse to be installed on network shares. For one reason or another, these programs want each system to have its own copy of the software—a nuisance, especially if a lot of machines need the software. Linux rarely has such conditions prohibiting the installation of software on network disks. Thus, many sites install heavily used software on a special file system that is exported to all hosts.
• To hold home directories Another common use for NFS partitions is to hold home directories. By placing home directories on NFS-mountable partitions, it’s possible to configure the Automounter (and a directory service) so that users can log into any machine in the network and have their home directory available to them. Heterogeneous sites typically use this configuration so that users can seamlessly move from one variant of Linux to another without worrying about the location of their personal data.
• For shared mail spools A directory residing on the mail server can be used to store all of the user mailboxes, and the directory can then be exported via NFS to all hosts on the network. In this setup, traditional UNIX mail readers can read a user’s e-mail straight from the spool file stored on the NFS share. In the case of large sites with heavy e-mail traffic, multiple servers might be used for providing Post Office Protocol version 3 (POP3) mailboxes, and all the mailboxes can easily reside on a common NFS share that is accessible to all the servers.
Summary
In this chapter, we discussed the process of setting up an NFS server and client. This requires little configuration on the server side. The client side requires a wee bit more configuration. But, in general, the process of getting NFS up and running is relatively painless. Here are some key points to remember:
• NFS has been around for a long time now, and as such, it has gone through several revisions of the protocol specifications. The revisions are mostly backward-compatible, and each succeeding revision can support clients using the older versions.
• NFS version 4.* is the newest revision and is loaded with a lot of improvements and features that were not previously available. As of this writing, NFSv4.* might still not be the most widely deployed version of the protocol in the wild. However, it is the stock version implemented and shipped with the mainstream Linux distros. As such, it is quickly becoming the norm/standard in comparison to the aging NFSv3.
• The older NFS protocols (versions 2 and 3) are implemented as a stateless protocol. Clients can’t tell the difference between a crashed server and a slow server; thus, recovery is automatic when the server comes back up. In the reverse situation, when the client crashes and the server stays up, recovery is also automatic.
• The key server processes in NFSv2 and NFSv3 are rpc.statd, rpc.rquotad, rpc.mountd, and rpc.nfsd. Most of these functions have been integrated into NFSv4.
NFS is a powerful tool for sharing storage volumes/file systems across network clients. Be sure to spend some time experimenting with it before using it to try to meet your environment’s resource-sharing needs.