File systems provide a means of organizing data on a storage medium. They serve as a nice abstraction layer above the nitty-gritty details of sectors, cylinders, and integrated circuits (ICs) of physical disks. This chapter discusses the composition and management of these abstraction layers supported by Linux. We’ll pay particular attention to the native Linux file systems—the extended file system family.
This chapter will also cover the many aspects of managing disks. This includes creating partitions and volumes, establishing file systems, automating the process by which the file systems are mounted at boot time, and dealing with them when things go wrong. We will also touch on Logical Volume Management (LVM) concepts.
NOTE Before beginning your study of this chapter, you should be familiar with files, directories, permissions, and ownership in the Linux environment. If you haven’t yet read Chapter 4, you should read that chapter before continuing.
The Makeup of File Systems
Let’s begin by going over the structure of file systems under Linux to build a proper foundation for the various concepts discussed later on.
The most fundamental building block of many Linux/UNIX file systems is the i-node. An i-node is a control structure that points either to other i-nodes or to data blocks.
The control information in the i-node includes the file’s owner, permissions, size, time of last access, creation time, group ID, and other information. The i-node does not provide the file’s name, however. Directories themselves are special instances of files. This means each directory gets an i-node, and the i-node points to data blocks containing information (filenames and i-nodes) about the files in the directory. Figure 8-1 illustrates the organization of i-nodes and data blocks in the older ext2 file system.
Figure 8-1 The i-nodes and data blocks in the ext2 file system
As you can see in Figure 8-1, the i-nodes are used to provide indirection so that more data blocks can be pointed to—which is why each i-node does not contain the filename. Only one i-node works as a representative for the entire file; thus, it would be a waste of space if every i-node contained filename information. Take, for example, a 6GB disk that contains 1,079,304 i-nodes. If every i-node also required 256 bytes to store the filename, a total of about 263MB would be wasted in storing filenames, even if they weren’t being used!
Each indirect block, in turn, can point to other indirect blocks if necessary. With up to three layers of indirection, it is possible to store very large files on a Linux file system.
Data on an ext* file system is organized into blocks. A block is a sequence of bits or bytes, and it is the smallest addressable unit in a storage device. Depending on the block size, a block might contain only a part of a single file or an entire file. Blocks are in turn grouped into block groups. Among other things, the block group contains a copy of the superblock, the block group descriptor table, the block bitmap, an i-node table, and of course the actual data blocks. The relationship among the different structures in an ext2 file system is shown in Figure 8-2.
Figure 8-2 Data structure on ext2 file systems
The first piece of information read from a disk is its superblock. This small data structure reveals several key pieces of information, including the disk’s geometry, the amount of available space, and, most importantly, the location of the first i-node. Without a superblock, an on-disk file system is useless.
Something as important as the superblock is not left to chance. Multiple copies of this data structure are scattered all over the disk to provide backup in case the first one is damaged. Under Linux’s ext2 file system, a superblock is placed after every group of blocks, and it contains i-nodes and data. One group consists of 8192 blocks; thus, the first redundant superblock is at 8193, the second at 16,385, and so on.
The fourth extended file system (ext4) is the successor of the ext2/ext3 file system. It is the default file system used in most Linux distributions. The ext4 file system offers several improvements and features, discussed next.
Journaling file systems work by first creating an entry of sorts in a log (or journal) of changes that are about to be made before actually committing the changes to disk. Once this transaction has been committed to disk, the file system goes ahead and modifies the actual data or metadata. This results in an all-or-nothing situation—that is, either all or none of the file system changes get done. Traditional file systems (such as ext2) must instead search through the directory structure, find the right place on disk to lay out the data, and then lay out the data. (Linux can also cache the whole process, including the directory updates, thereby making the process appear faster to the user.)
One of the benefits of using a journaling-type file system is the greater assurance that data integrity will be preserved, and in the unavoidable situations where problems arise, speed, ease of recovery, and likelihood of success are vastly increased. One such unavoidable situation is a system crash. In this case, you might not need to run the file system consistency checker (fsck). Other benefits of using journaling-type file systems are that system reboots are simplified, disk fragmentation is reduced, and I/O operations can be accelerated (depending on the journaling method used).
Btrfs, XFS, and ext4 are popular Linux file systems that implement journaling.
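For the ext family, one quick way to confirm that a journal is present is to inspect the file system's feature flags with tune2fs. The sketch below is hedged: the tune2fs step requires root, and the device name is only an assumption for illustration.

```shell
# List the feature flags of an ext4 file system and look for
# "has_journal" among them (device name is an assumption):
#   tune2fs -l /dev/sda3 | grep -i features
# The file system types the running kernel currently supports can
# be listed without any privileges:
cat /proc/filesystems
```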
Unlike ext3/ext2, the ext4 file system does not use the indirect block mapping approach. Instead, it uses the concept of extents. An extent is a way of representing contiguous physical blocks of storage on a file system. An extent provides information about the range or magnitude over which a data file extends on the physical storage. So instead of each block carrying a marker to indicate the data file to which it belongs, a single extent (or a few extents) can be used to state that the next X number of blocks belong to a specific data file.
As data grows, shrinks, and is moved around, it can become fragmented with time. Fragmentation can cause the mechanical components of a physical storage device to work harder than necessary, which in turn leads to increased wear and tear on the device.
Traditionally, the process of undoing file fragmentation is to defragment the file system offline. “Offline” in this instance means to run the defragmenting when no possibility exists that the files are being accessed or used. ext4 supports online defragmentation of individual files or an entire file system.
Larger File System and File Size
The older ext3 file system is able to support maximum file system sizes of 16TB as well as maximum individual file sizes of up to 2TB. The ext4 system, on the other hand, is able to support maximum file system sizes of 1EB (exabyte) as well as maximum individual file sizes of up to 16TB each.
The B-tree file system (Btrfs) is a next-generation Linux file system aimed at solving any enterprise scalability issues that the current Linux file systems may have. Btrfs is fondly pronounced “Butter FS.” As of this writing, Btrfs is available for use in different Linux distributions. In addition to all the advanced features supported by ext4, Btrfs supports (or plans to support) several additional features, including the following:
• Dynamic i-node allocation and transparent compression
• Online file system checking
• Built-in RAID functions such as mirroring and striping
• Online defragmentation, support for snapshots, and support for sub-volumes
• Support for the online addition and removal of block devices
• Improved storage utilization via support for data deduplication
XFS is a journaled 64-bit file system that’s been around for a while. It has since been ported to the Linux kernel. XFS was recently introduced as the default file system on some Red Hat–based distros, like RHEL and CentOS. XFS is considered a Big Iron file system, meaning it is enterprise grade, high performance, reliable, scalable, well tested, and so on. Its features include:
• Quick recovery and support for extended attributes
• Support for file systems as large as 8 exbibytes (8EiB or ~8 million terabytes)
• Support for a maximum file size of 8 exbibytes (8EiB or ~8 million terabytes)
• Support for input/output performance that is as close to raw as the underlying hardware can provide
• Online defragmentation and online resizing
TIP With so many choices for file systems, it might be a little daunting figuring out which one to go with for which use case or workload! No need to fear, though: during the server OS installation process, you will find that the default file system supplied by the distribution vendor will suffice for most general use cases, so you can go about your merry business without giving it another thought.
Managing File Systems
Once the file systems have been created, deployed, and added to the backup cycle, they do tend to take care of themselves for the most part. What makes them tricky to manage are the administrative issues, such as users who refuse to do housekeeping on their personal home directories and other cumbersome nontechnical issues.
In the following sections, we’ll go over the technical issues involved in managing file systems—that is, the process of mounting and unmounting partitions, dealing with the /etc/fstab file, and performing file system recovery with the fsck utility.
Mounting and Unmounting Local Disks
Partitions or volumes need to be mounted so that their contents can be accessed. In actuality, the file system on a partition or volume is mounted so that it appears as just another subdirectory on the system. This helps to promote the illusion of one large directory tree structure, even though several different file systems might be in use. This characteristic is especially helpful to the administrator, who can relocate data stored on a physical partition to a new location (possibly a different partition) under the directory tree, with the system users being none the wiser.
The file system management process begins with the root directory. This partition is also fondly called slash and likewise symbolized by a forward slash character (/). The partition containing the kernel and core directory structure is mounted at boot time. It is possible, and usual, for the Linux kernel and its supporting boot files to be housed on a separate file system, such as /boot. It is also possible for the root file system (/) to house both the kernel and the other required utilities and configuration files needed to bring the system up to single-user mode.
As the boot scripts run, additional file systems are mounted, adding to the structure of the root file system. The mount process overlays a single subdirectory with the directory tree of the partition it is trying to mount. For example, let’s say that /dev/sda2 is the root partition. It includes the directory /usr, which contains no files. The partition /dev/sda3 contains all the files that you want in /usr, so you mount /dev/sda3 to the directory /usr. Users can now simply change directories to /usr to see all the files from that partition. The user doesn’t need to know that /usr is actually a separate partition.
NOTE In this and other chapters, we might inadvertently say that a partition or volume is being mounted at such and such a directory. Please note that it is actually the file system on the partition that is being mounted.
Keep in mind that when a file system is mounted over a directory, the mount process hides all the contents of the previously mounted directory. So in our /usr example, if the root partition did have files in /usr before mounting /dev/sda3, those /usr files would no longer be visible. (They’re not erased, of course, because once /dev/sda3 is unmounted, the /usr files would become visible again.)
Using the mount Command
Like many command-line tools, the mount command has a plethora of options, most of which you won’t be using in daily work. You can get full details on these options from the mount man page. In this section, we’ll explore the most common uses of the command.
The structure of the mount command is as follows:
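Per the mount man page, the general form is:

```
mount [-t fstype] [-o options] device directory
```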
The mount options can be any of those shown in Table 8-1.
Table 8-1 Options Available for the mount Command
Issuing the mount command without any options will list all the currently mounted file systems:
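A sketch of such an invocation (the exact listing varies from system to system):

```shell
# With no arguments, mount prints every mounted file system; the
# same information is exposed by the kernel in /proc/mounts.
mount | head -n 5
```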
Assuming that a directory named /bogus-directory exists, the following mount command will mount the /dev/sda3 partition onto the /bogus-directory directory in read-only mode:
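A sketch of that command (requires root, and assumes the device and directory from the text actually exist on your system):

```shell
# Mount /dev/sda3 read-only on /bogus-directory:
#   mount -o ro /dev/sda3 /bogus-directory
# Verify the mount and the ro option afterward:
#   findmnt /bogus-directory
```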
Unmounting File Systems
To unmount a file system, use the umount command (note that the command is umount, not unmount). Here’s the syntax for the command:

umount directory

Here, directory is the directory to be unmounted. Here’s an example:
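The example would look something like this (a sketch; requires root and an existing mount on the directory):

```shell
# Unmount whatever file system is mounted on /bogus-directory:
#   umount /bogus-directory
```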
This unmounts the partition mounted on the /bogus-directory directory.
When the File System Is in Use

There’s a catch to using umount: If the file system is in use (that is, someone or something is currently accessing the contents of the file system via reading or writing), you won’t be able to unmount that file system. To get around this, you can do any of the following:
• You can use the fuser program to determine which processes are keeping the files open and then kill them off or ask the process owners to stop what they’re doing. If you choose to kill the processes, make sure you understand the repercussions of doing so—in other words, be extra careful before killing unfamiliar processes.
• You can use the -f option with umount to force the unmount process. It is especially useful for Network File System–type file systems that are no longer available.
• You can use the lazy unmount, specified with the -l option. This option almost always works, even when others fail. It detaches the file system from the file system hierarchy immediately, and it cleans up all references to the file system as soon as the file system stops being busy.
• The safest and most proper alternative is to bring the system down to single-user mode and then unmount the file system or initiate a simple reboot. In reality, of course, you don’t always have the luxury of being able to do this on production systems.
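The first three options can be sketched as follows (requires root; /bogus-directory is the example mount point used earlier):

```shell
# Show the processes holding the mounted file system open:
#   fuser -vm /bogus-directory
# Force the unmount (handy for unreachable NFS mounts):
#   umount -f /bogus-directory
# Lazy unmount: detach immediately, clean up once no longer busy:
#   umount -l /bogus-directory
```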
The /etc/fstab File
The /etc/fstab file is a configuration file that mount can use. This file contains a list of all partitions known to the system. During the boot process, this list is read and the items in it are automatically mounted with the options specified therein.
Here’s the format of entries in a sample /etc/fstab file:
Following are sample entries from an /etc/fstab file (line numbers have been added to the output to aid readability):
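A reconstructed sample in the spirit of the discussion that follows (the UUID is a placeholder, and the leading numbers are not part of the file):

```
#    <device>                  <mount point>  <type>  <options>  <dump>  <fsck order>
1.   /dev/mapper/fedora-root   /              xfs     defaults   0       0
2.   UUID=<uuid-of-partition>  /boot          xfs     defaults   0       0
3.   /dev/mapper/fedora-swap   swap           swap    defaults   0       0
```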
Let’s take a look at some of the entries in the /etc/fstab file that haven’t yet been discussed.
Line 1 The first entry in our sample /etc/fstab file is the entry for the root volume. The first column shows the device that houses the file system—the /dev/mapper/fedora-root logical volume (more on volumes later in the section “Volume Management”).
The second column shows the mount point—the / (slash or root) directory.
The third column shows the file system type—XFS in this case.
The fourth column shows the options with which the file system should be mounted—only the default options are required in this case.
The fifth field is used by the dump utility (a simple backup tool) to determine which file systems need to be backed up. And the sixth and final field is used by the fsck program to determine whether the file system needs to be checked and also to determine the order in which the checks are done.
Line 2 The next entry in our sample file is the /boot mount point. The first field of this entry shows the device—in this case, it points to the device identified by its Universally Unique Identifier (UUID). In the case of the /boot mount point, you might notice that the field for the device looks a little different from the usual /dev/<path-to-device> convention. The use of a UUID to identify devices/partitions helps to ensure that they are correctly and uniquely identified under any circumstances—such as when a new disk is added or an existing disk is removed or when changing the drive controller or bus to which the drive is attached, and so on.
Some Linux distributions may instead opt to use labels to identify the physical device in the first field of the /etc/fstab file. The use of labels helps to hide the actual device (partition) from which the file system is being mounted. When labels are used, the device field is replaced with a token that looks like LABEL=/boot. During the initial installation, the partitioning program of the installer automatically sets the label on the partition. Upon bootup, the system scans the partition tables, looks for these labels, and does the right thing. Labels are also useful for transient external media such as flash drives, USB hard drives, and so on.
The other fields mean basically the same thing as the fields for the root mount point discussed previously.
TIP The command-line utility blkid can be used to display different attributes of the storage devices attached to a system. One such attribute is the UUID of the volumes. For example, running blkid without any options will print a variety of information, including the UUID of each block device on the system:
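A sketch of the idea (blkid generally requires root to probe devices; the UUID values below are placeholders, not real output):

```shell
# Typical shape of blkid output:
#   blkid
#   /dev/sda2: UUID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" TYPE="xfs"
#   /dev/mapper/fedora-root: UUID="xxxxxxxx-..." TYPE="xfs"
# An unprivileged glimpse of the block devices the kernel knows
# about is available in /proc/partitions:
cat /proc/partitions
```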
Line 3 This is the entry for the system swap partition, where virtual memory resides. In Linux, the virtual memory can be kept on a separate partition from the root partition. Keeping the swap space on a separate partition helps to improve performance. Also, because the swap partition doesn’t need to be backed up or checked with fsck at boot time, the last two parameters in its entry are zeros. See the man page for mkswap for additional information.
Using fsck

The fsck tool (short for File System Check) is used to diagnose and repair file systems that might have become damaged in the course of daily operations. Such repairs may be necessary after a system crash in which the system did not get a chance to fully flush all of its internal buffers to disk. (The fact that this tool’s name bears a striking resemblance to an expression often uttered by system administrators after a system crash, coupled with the fact that the tool can be used as a part of the recovery process, is strictly coincidental!)
Usually, the system runs the fsck tool automatically during the boot process as it deems necessary. If it detects a file system that was not cleanly unmounted, it runs the utility. A file system check will also be run once the system detects that a check has not been performed after a predetermined threshold, such as a number of mounts or an amount of time passed between mounts. Linux will do its best to automatically repair any problems it runs across! The robust nature of the Linux file systems helps in dire situations. However, when things get out of hand, you might get this message:
At this point, you need to run fsck by hand and answer its prompts yourself.
If you do find that a file system is not behaving as it should (spurious errors in log messages are an excellent hint of this type of anomaly), you may want to run fsck yourself on a running system. The only downside is that the file system in question must be unmounted in order for this to work, which might sometimes require taking the system offline.
fsck isn’t the actual repair tool; it’s actually just a wrapper. The fsck wrapper tries to determine what kind of file system needs to be repaired and then runs the appropriate repair tool, passing along any parameters that were passed to fsck. For the ext4 file system, the actual tool is fsck.ext4; for the VFAT file system, the tool is fsck.vfat; and for an XFS file system, the utility is called fsck.xfs. So, for example, when a system crash occurs on an ext4-formatted partition, you might choose to call fsck.ext4 directly rather than relying on the wrapper tool fsck to call it for you automatically.
To run fsck on the /dev/mapper/fedora-home file system mounted at the /home directory, you would carry out the following steps.
Assuming that the /home file system is not currently being used or accessed by any process or user, first unmount the file system:
Since we know that this particular file system type is ext4, we can call the correct utility (fsck.ext4) directly or simply use the fsck wrapper:
This output shows that the file system is marked clean.
To forcefully check the file system and answer yes to all questions in spite of what your OS thinks, type this:
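The whole procedure can be sketched end to end (requires root; ext4 and the LV path are taken from the text):

```shell
# 1. Unmount the file system (it must not be in use):
#   umount /home
# 2. Check it, via the wrapper or the type-specific tool:
#   fsck /dev/mapper/fedora-home
#   fsck.ext4 /dev/mapper/fedora-home
# 3. Force a full check, answering yes to every prompt:
#   fsck.ext4 -f -y /dev/mapper/fedora-home
# 4. Remount when done:
#   mount /home
```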
What If I Still Get Errors?
The fsck utility rarely finds problems that it cannot correct by itself. When it does ask for human intervention, telling fsck to execute its default suggestion is often enough. Very rarely does a single pass of fsck fail to clear up all problems.
On the rare occasions when a second run is needed, it should not turn up any more errors. If it does, you are most likely facing a hardware failure. Remember to start with the obvious: Check for reliable power and well-connected and good quality cables; for mechanical drives, make sure that there are no clicking sounds; and so on.
And when all else fails and fsck doesn’t want to fix the issue, it will often give you a hint as to what’s wrong. You can then use this hint to perform a search on the Internet and see what other people have done to resolve the same issue.
The lost+found Directory
Another rare situation occurs when fsck finds file segments that it cannot rejoin with the original file. In those cases, it will place the fragment in the partition’s lost+found directory. This directory is located where the partition is mounted, so if /dev/mapper/fedora-home is mounted on /home, for example, then /home/lost+found is the lost+found directory for that particular file system. Anything can go into a lost+found directory—file fragments, directories, and even special files. At the very least, lost+found tells you whether anything became dislocated. Again, such errors are extraordinarily rare.
Adding a New Disk
On systems sporting a PC hardware architecture, the process of adding a disk under Linux is relatively easy. Assuming you are adding a disk that is of similar type to your existing disks—for example, adding a SATA disk to a system that already has SATA drives—the system should automatically detect the new disk at boot time. All that remains is partitioning it and creating file system(s) on it.
If you are adding a new type of disk (such as a SAS [Serial Attached SCSI] disk on a system that has only SATA drives), you may need to ensure that your kernel supports the new hardware. This support can either be built directly into the kernel or be available as a loadable module (driver). Note that the kernels of most Linux distributions come with support for many popular disk/storage controllers.
Once the disk is in place, simply boot the system, and you’re ready to go. If you aren’t sure whether the system can see the new disk, run the dmesg command and view what the kernel has detected. Here’s an example:
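A sketch of the idea (the message below is illustrative, not real output; on many systems dmesg requires elevated privileges):

```shell
# Filter the kernel ring buffer for disk-related messages:
#   dmesg | grep -i -e sd -e ata
# The kind of line to look for resembles:
#   sd 2:0:0:0: [sdb] 209715200 512-byte logical blocks: (107 GB/100 GiB)
```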
Overview of Partitions
For the sake of clarity, and in case you need to know what a partition is and how it works, let’s briefly review this subject. Disks typically need to be partitioned before use. Partitions divide the disk into segments, and each segment acts as a complete disk by itself. Once a partition is filled with data, the data cannot automatically overflow onto another partition.
Various things can be done with a partitioned disk, such as installing an OS into a single partition that spans the entire disk, installing several different OSs into their own separate partitions in what is commonly called a “dual-boot” configuration, and using the different partitions to separate and restrict certain system functions into their own work areas.
This last example is especially relevant on a multiuser system, where the content of users’ home directories should not be allowed to overgrow and disrupt important OS functions.
Traditional Disk and Partition Naming Conventions
Modern Linux distributions use the libATA library to provide support within the Linux kernel for various storage devices as well as host controllers. Under Linux, each disk is given its own device name. The device files are stored under the /dev directory.
Hard disks start with the name sdX, where X can range from a through z, with each letter representing a physical block device. For example, in a system with two hard disks, the first hard disk would be /dev/sda and the second hard disk would be /dev/sdb. Depending on the implementation/driver, virtual block devices start with names like vdX.
When partitions are created, corresponding device files are created. They take the form of /dev/sdXY (or /dev/vdXY), where X is the device letter (as described in the preceding paragraph) and Y is the partition number.
Thus, the first partition on the /dev/sda disk is /dev/sda1, the second partition would be /dev/sda2, the second partition on the third disk would be /dev/sdc2, and so on.
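The lsblk utility offers a quick way to see this naming in practice (the tree below is illustrative only; your devices will differ):

```shell
# Tree view of block devices, their partitions, and mount points:
#   lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
# Example shape:
#   sda     100G disk
#   |-sda1    1G part /boot
#   `-sda2   99G part /
```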
Some standard devices are automatically created during system installation, and others are created as they are connected to the system.
You may have noticed earlier that we use the terms “partition” and “volume” interchangeably in parts of the text. Although they are not exactly the same, they are similar in a conceptual way.
Volume management is a new approach to dealing with disks and partitions: Instead of viewing a disk or storage entity along partition boundaries, the boundaries are no longer present and everything is now seen as volumes. (That made perfect sense, didn’t it? Don’t worry if it didn’t; this is a tricky concept. Let’s try this again with more detail.)
This new approach to dealing with partitions is called Logical Volume Management (LVM) in Linux. It offers several benefits and removes the restrictions, constraints, and limitations that the concept of partitions imposes. Following are some of the benefits:
• Greater flexibility for disk partitioning
• Easier online resizing of volumes
• Easier to increase storage space by simply adding new disks to the storage pool
• Use of snapshots
Following are some important volume management terms:
• Physical volume (PV) This typically refers to the physical hard disk(s) or another physical storage entity, such as a Redundant Array of Inexpensive Disks (RAID) array or iSCSI LUN. Only a single storage entity (for example, one partition) can exist in a PV.
• Volume group (VG) Volume groups are used to house one or more physical volumes and logical volumes into a single administrative unit. A volume group is created out of physical volumes. VGs are simply a collection of PVs; however, VGs are not mountable. They are more like virtual raw disks.
• Logical volume (LV) This is perhaps the trickiest LVM concept to grasp, because logical volumes (LVs) are the equivalent of disk partitions in a non-LVM world. The LV appears as a standard block device. We put file systems on the LV, and the LV gets mounted. The LV gets
fsck-ed if necessary.
LVs are created out of the space available in VGs. To the administrator, an LV appears as one contiguous partition, independent of the actual PVs from which it is derived.
• Extents Two kinds of extents can be used: physical extents and logical extents. Physical volumes (PVs) are said to be divided into chunks, or units of data, called “physical extents.” Logical volumes (LVs) are said to be divided into chunks, or units of data, called “logical extents.”
The following illustration shows the relationship between disks, physical volumes (PVs), volume groups (VGs), and logical volumes (LVs) in LVM:
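That relationship also dictates the order in which the pieces are created. Here is a hedged sketch (requires root; /dev/sdb1 is a hypothetical partition tagged for LVM, and datavg/datalv are made-up names):

```shell
#   pvcreate /dev/sdb1                  # initialize the partition as a PV
#   vgcreate datavg /dev/sdb1           # pool the PV into a VG named datavg
#   lvcreate -L 10G -n datalv datavg    # carve a 10GB LV out of the VG
#   mkfs.ext4 /dev/datavg/datalv        # the LV behaves like any block device
#   mount /dev/datavg/datalv /mnt
```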
Creating Partitions and Logical Volumes
During the OS installation process, you probably used a “pretty” tool with a nice GUI front-end to create partitions. The GUI tools available across the various Linux distributions vary greatly in looks and ease of use. Two command-line tools that can be used to perform most partitioning tasks, and that have a unified look and feel regardless of the Linux flavor, are the venerable fdisk and the newer parted. Although fdisk is small and somewhat awkward, it’s a reliable command-line partitioning tool. parted, on the other hand, is much more user-friendly and has a lot more built-in functionality than other tools have. In fact, a lot of the GUI partitioning tools call the parted program in their back-end. You should nonetheless be familiar with foundational tools such as fdisk. Other powerful command-line utilities for managing partitions include gdisk, cfdisk, and sfdisk.
During the installation of the OS, as covered in Chapter 2, you would have probably ended up with some free unallocated space on the disk if you accepted the default partitioning scheme. We will now use that free space to demonstrate some LVM concepts by walking through the steps required to create a logical volume.
In particular, we will create a 20GB-sized logical volume that will house the contents of our current /var directory. Because a separate /var volume was not created during the OS installation, the contents of the /var directory are currently stored under the volume that holds the root (/) tree. The general idea is that because the /var directory is typically used to hold frequently changing and growing data (such as log files), it is prudent to put its content on its own separate file system.
The steps we’ll follow can be summarized this way:
1. Examine the current disk partition layout using the parted utility.
2. Examine the current LVM layout using the LVM utilities (pvdisplay, vgdisplay, and lvdisplay).
3. Determine how much unallocated space we have on our existing volume group(s).
4. Finally, create a new logical volume within the volume group, format the volume, and assign mount points to the logical volume.
CAUTION The process of creating partitions is potentially irrevocably destructive to the data already on the disk. Before creating, changing, or removing partitions on any disk, you must be sure of what you are doing and the consequences.
Beyond the bother of having to physically insert a new disk into the server chassis, the actual steps of making the disk available/usable to the OS comprise a simple but methodical process from start to finish. We’ve interspersed the process in the following sections with some extra steps, along with some notes and explanations.
Some common and handy LVM utilities used are listed and described in Table 8-2.
Table 8-2 LVM Utilities
Examining Disk/Partition Layout
Let’s examine the current partition or disk layout of the main system disk, /dev/sda, by following these steps:
1. Begin by running the parted utility with the device name as a parameter:
You will be presented with a simple parted prompt: (parted).
2. Print the partition table again while at the parted shell by typing print at the prompt:
A few facts are worthy of note regarding this output:
• The total disk size is approximately 107GB.
• The partition table type is the GUID Partition Table (GPT) type. Three partitions are currently defined on our sample system: 1, 2, and 3 (/dev/sda1, /dev/sda2, and /dev/sda3, respectively).
• Partition 1 (/dev/sda1) is marked with the boot flag, meaning it is a bootable partition. Specifically, it is the special EFI System Partition required on UEFI-based systems.
• Partition 2 (/dev/sda2) is our traditional xfs-formatted /boot partition.
• Partition 3 (/dev/sda3) is the last partition, spanning the rest of the disk, and is marked with the lvm flag.
• From the partitioning scheme we chose during the OS installation, we can deduce that partition 1 (/dev/sda1) houses the /boot/efi file system and partition 2 is /boot, while partition 3 (/dev/sda3) houses everything else (see the output of the df command for reference).
• The last partition (3, or /dev/sda3) ends smack dab on the 107GB boundary, which is the end of the disk. Therefore, there isn’t any more room to create additional partitions!
3. We are done admiring the disk layout, so type quit at the (parted) prompt and press ENTER:
You will be returned back to your regular command shell (Bash in our case).
TIP Assuming you had free usable space on the disk or even extra/additional disks connected to the system, you can use a combination of various parted subcommands, such as mkpart, set, and so on, to create new partitions.
Keep in mind that in some very rare cases, you may need to reboot the system or unplug and reinsert the newly partitioned block device in order to allow the Linux kernel to recognize or use newly created partitions. Running utilities like hdparm with the right options can help inform the OS of partition table changes without rebooting.
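As a sketch of that tip: the commented commands below ask the kernel to re-read a disk's partition table (both require root; /dev/sda is from our running example), while /proc/partitions shows the kernel's current view and is readable by any user:

```shell
# Force a kernel re-read of the partition table (requires root); either:
#   hdparm -z /dev/sda
#   partprobe /dev/sda
#
# The kernel's current view of block devices and partitions is
# world-readable, so you can check it without special privileges:
cat /proc/partitions
```

If a newly created partition does not appear in /proc/partitions after a re-read, that is when the reboot or unplug/reinsert fallback comes into play.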
Exploring Physical Volume(s)
We will use the pvdisplay command in the following procedures to examine the current physical volume on the system.
1. Make sure you are still logged into the system as a user with superuser privileges.
2. Let’s view the current physical volumes defined on the system. Type pvdisplay at the prompt, like so:
Take note of the physical volume name field (PV Name). Our sample output shows that the /dev/sda3 partition is currently initialized as a physical volume.
Exploring Volume Group(s)
We will examine/explore any volume group (VG) defined on the system. Use the vgdisplay command to view the current volume groups that might exist on your system:
From the preceding output, we can see the following:
• The volume group name (VG Name) is fedora.
• The current size of the VG is 98.80 GiB.
• The physical extent size is 4.00 MiB, and there are a total of 25,293 PEs.
• There are 19,428 physical extents (or ~75.89 GiB) free in the VG.
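The vgdisplay numbers above can be sanity-checked with a little shell arithmetic: free extents times extent size should match the free space reported. The PE counts below are from our sample system's output:

```shell
# Values from our sample vgdisplay output:
pe_size_mib=4        # PE Size: 4.00 MiB
free_pe=19428        # Free PE
total_pe=25293       # Total PE

free_mib=$(( free_pe * pe_size_mib ))
total_mib=$(( total_pe * pe_size_mib ))

# Convert MiB to GiB (1 GiB = 1024 MiB) and print with two decimals:
awk -v f="$free_mib" -v t="$total_mib" \
    'BEGIN { printf "Free: %.2f GiB of %.2f GiB total\n", f/1024, t/1024 }'
# prints: Free: 75.89 GiB of 98.80 GiB total
```

Doing the multiplication yourself is a quick way to confirm you are reading the PE fields correctly before carving space out of a VG.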
With all the free, usable extents/space that we’ve uncovered, we’ll next go on to carving out a logical volume (LV) in the following section.
Creating a Logical Volume
Now that we’ve unraveled the free space on the VG, we can go ahead and create the logical volume (LV) for our future /var file system.
1. First view the current LVs on the system:
The preceding output shows the current LVs:
2. With the background information that we now have, we will create an LV using the same naming convention (that is, name of mount point) currently used on the system. We will create a third LV of size 20GB called var on the fedora VG.
The full path to the LV will be /dev/fedora/var. Type the following:
NOTE You can actually name your LV any way you want. We named ours var for consistency only. We could have replaced var with another name, such as “my-volume” or “LogVol03” if we wanted to. The value passed to the name (-n) option determines the name of the LV. The -L option specifies the size in human-readable (gigabyte or megabyte) units. We could have also specified the size in megabytes by using an option such as -L 20000M.
3. View the LV you created by typing the following:
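As a sketch of this step, the commented commands below create and then inspect the LV (they require root; the fedora VG name and var LV name are from our example), and the runnable lines show the two predictable device paths LVM gives every logical volume:

```shell
# Create the 20GB LV (requires root):
#   lvcreate -L 20G -n var fedora
# View it afterward (either works):
#   lvs fedora
#   lvdisplay /dev/fedora/var
#
# LVM exposes each LV at two predictable paths, built from the
# VG and LV names:
vg=fedora
lv=var
echo "/dev/$vg/$lv"              # classic path:        /dev/fedora/var
echo "/dev/mapper/${vg}-${lv}"   # device-mapper path:  /dev/mapper/fedora-var
```

Both paths refer to the same block device; scripts and /etc/fstab entries commonly use the /dev/mapper form.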
TIP You can install a GUI tool named blivet-gui on Fedora, RHEL, and CentOS Linux distributions that can greatly simplify the entire management of an LVM system.
The openSUSE Linux distribution also includes a very capable GUI tool for managing disks, partitions, and the LVM. Issue the command yast2 disk to launch the utility. GNOME desktop environments usually have a nicely integrated GUI tool named gnome-disks that you can also use for storage/disk management.
Creating File Systems
After creating any volume(s), you next need to put file system(s) on them to make them actually useful. (If you’re accustomed to Microsoft Windows, this is akin to formatting the disk once you’ve partitioned it.)
The type of file system you want to create will determine the particular utility you should use. In this project, we want to create an XFS-type file system; therefore, we’ll use the mkfs.xfs utility. As indicated earlier in this chapter, XFS is considered a highly performant and production-ready file system; therefore, you should be able to use it for most production-type workloads. Many command-line parameters are available for the mkfs.xfs tool, but we’ll use it in its simplest form here.
Following are the steps for creating a file system:
1. The only command-line parameter you’ll usually have to specify is the name of the partition (or volume) onto which the file system should go. To create a file system on /dev/fedora/var, issue the following command:
Once the preceding command runs to completion, the file system will be created.
We will next begin the process of trying to relocate the contents of the current /var directory to its own separate (and new) file system.
2. Create a temporary folder that will be used as the mount point for the new file system. Create it under the root folder:
3. Mount the new var logical volume at the /new_var directory:
4. Copy the content of the current /var directory to the /new_var directory:
5. Now you can rename the current /var directory to /old_var:
6. Create a new and empty /var directory:
7. To avoid taking the system down to single-user mode to perform the following sensitive steps, we will type the following:
This step temporarily mounts the new file system (currently at /new_var) over the /var directory, where the system actually expects it to be. This is done by using the bind option with the mount utility, and it holds us over until we are good and ready to reboot the system.
The bind option can also be useful on systems running the NFS service, because the rpc_pipefs pseudo-file system is often automatically mounted under a subfolder in the /var directory (/var/lib/nfs/rpc_pipefs). To get around this, you can use the mount utility with the bind option to mount the rpc_pipefs pseudo-file system temporarily in a new location so that the NFS service can continue working uninterrupted. The command to do this in our sample scenario would be as follows:
8. This step usually is optional, but it may be necessary in certain Linux distros (such as Fedora, RHEL, and CentOS) that have SELinux enabled to restore the security contexts for the new /var folder so that the daemons that need it can use it:
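Gathering steps 1 through 8 into one place, the function below is a hedged sketch of the whole migration. It assumes the /dev/fedora/var logical volume from our example and must be run as root; defining the function costs nothing, so you can read it, adapt it to your system, and only then invoke it:

```shell
# Sketch of steps 1-8; review carefully and run as root only.
# Assumes the /dev/fedora/var LV created earlier in this chapter.
migrate_var() {
    set -e                            # stop on the first failure
    mkfs.xfs /dev/fedora/var          # 1. create the file system
    mkdir /new_var                    # 2. temporary mount point
    mount /dev/fedora/var /new_var    # 3. mount the new volume
    cp -a /var/. /new_var/            # 4. copy contents, preserving attrs
    mv /var /old_var                  # 5. keep the old copy for safety
    mkdir /var                        # 6. fresh, empty /var
    mount --bind /new_var /var        # 7. bind-mount until reboot
    restorecon -R /var || true        # 8. SELinux contexts (if enabled)
}
# To run for real (as root): migrate_var
```

Note the use of cp -a with /var/. rather than /var/*, so that hidden files and all ownership, permission, and timestamp attributes come along for the ride.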
9. We need to create an entry for the new file system in the /etc/fstab file. To do so, we must edit the /etc/fstab file so that our changes can take effect the next time the system is rebooted. Open the file for editing with any text editor of your choice and add the following entry into the file:
TIP You can also use the echo and tee commands to append the preceding text to the end of the file. The command is:
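Here is a sketch of that echo/tee approach, building the entry in a variable first so you can inspect it before touching /etc/fstab. The /dev/mapper/fedora-var device path assumes our example LV; your device, mount options, and dump/fsck fields may differ:

```shell
# fstab fields: device, mount point, fs type, options, dump, fsck order
entry='/dev/mapper/fedora-var  /var  xfs  defaults  0 0'
echo "$entry"

# As root, append it to the real file:
#   echo "$entry" | tee -a /etc/fstab
```

Using tee -a (append) rather than a plain shell redirect also works when the echo runs unprivileged but tee runs under sudo.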
10. This is a good time to reboot the system:
11. Hopefully, the system came back up fine. After the system boots, delete the /old_var and /new_var folders using the rm command:
TIP While still on the subject of file systems, this might be a good time to increase or expand the size of the root logical volume to fill up any remaining free space in the volume group. The current size was chosen by default during our initial OS install in Chapter 2. We’ll use the lvresize command to do this on our demo system, as follows:
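A hedged sketch of that resize follows. It requires root, and /dev/fedora/root is an assumed name for the root LV on our demo system (check yours with lvs first); it is wrapped in a function so you can review it before running anything:

```shell
# Grow the root LV into all remaining free extents in its VG and grow
# the file system on it in the same step.
grow_root_lv() {
    # -l +100%FREE : claim every remaining free extent in the VG
    # -r           : also resize the file system on the LV
    lvresize -r -l +100%FREE /dev/fedora/root
}
# To run for real (as root): grow_root_lv
```

Because XFS file systems can be grown but not shrunk, this kind of expansion is safe in one direction only; leaving some extents free in the VG for future needs is a common alternative to claiming 100%FREE.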
In this chapter, we discussed some de facto Linux file systems such as the extended file system family (ext2, ext3, ext4), XFS, and Btrfs. We covered the process of administering your file systems, and we touched on various storage administrative tasks.
We also went through the process of moving a sensitive system directory (/var) onto its own separate file system (XFS). The exercise detailed what you might need to do while managing a Linux server in the real world. With this information, you’re armed with what you need to manage basic file system issues on a production-grade Linux-based server in a variety of environments.