
CHAPTER 1

Introduction to Linux, Distributions, and FOSS

In this chapter we’ll look at some of the core server-side technologies as they are implemented in the Linux (open source) world. Where applicable, we compare and contrast them with the Microsoft Windows Server world (possibly the platform you are more familiar with). But before delving into any technicalities, we briefly discuss some important underlying concepts, ideas, and ideologies that form the genetic makeup of Linux and Free and Open Source Software (FOSS).

Linux: The Operating System

Some people (mis)understand Linux to be an entire software suite of developer tools, editors, graphical user interfaces (GUIs), networking tools, and so forth. More formally and correctly, such a software collection is called a distribution, or distro. The distro is the entire software suite, including Linux, bundled and shipped as a whole.

So if we consider a distribution to be everything plus Linux, what then is Linux exactly? Linux itself is the core of the operating system: the kernel or, more colloquially, the heart. The kernel is the program acting as chief of operations. It is responsible for starting and stopping other programs (such as text editors, web browsers, services, and so on), handling requests for memory, accessing disks, and managing network connections. The complete list of kernel activities could easily fill a book in itself, and, in fact, several books documenting the kernel’s internal functions have been written.

The kernel is a nontrivial program. It is also what puts the Linux badge on the numerous Linux distributions. Every distribution ships some version of the Linux kernel, so the fundamental behavior of all Linux distributions is the same.
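
Whichever distribution you are running, you can confirm the kernel underneath and its version with a single command (the exact version string will, of course, vary from system to system):

    # Print the version of the currently running Linux kernel
    uname -r
    # Print more detail: kernel name, hostname, release, architecture, and so on
    uname -a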

You’ve probably heard of the Linux distributions named Red Hat Enterprise Linux (RHEL), Fedora, Debian, Amazon Linux, Ubuntu, Mint, openSUSE, CentOS, Arch, Chrome OS, Slackware, Oracle Linux, and so on, which have received a great deal of press.

Depending on whom you are speaking with, Linux distributions can be categorized along a variety of lines, including software management style, culture, commercial or noncommercial status, philosophy, and function. One popular taxonomy for categorizing Linux distros is commercial versus noncommercial.

The vendors of commercial distros generally offer support for their distributions—at a cost. Commercial distros also tend to have a longer release life cycle. To meet certain regulatory requirements, some commercial distros implement specific security features that the broader FOSS community might not care about but that some institutions and corporations do. Examples of commercial flavors of Linux-based distros are RHEL and SUSE Linux Enterprise (SLE).

The noncommercial distros, on the other hand, are free. These distros try to adhere to the original spirit of open source software. They are mostly community supported and maintained—the community consists of the users and developers. The community support and enthusiasm can sometimes surpass that provided by the commercial offerings!

Several of the so-called noncommercial distros also have the backing and support of their commercial counterparts. Very often, the companies that offer the purely commercial flavors have vested interests in making sure that free distros exist. Some of the companies use the free distros as the proving and testing ground for software that ends up in the commercial spins. This is a sort of freemium model. Examples of noncommercial flavors of Linux-based distros are Fedora, openSUSE, Ubuntu, Linux Mint, Gentoo, Raspbian, and Debian.

An interesting thing to note about the commercial Linux distributions is that most of the programs with which they ship were not written by the companies themselves! Rather, other people have (freely) released their programs under licenses that allow redistribution with source code. The distribution vendors simply bundle them into one convenient and cohesive package that’s easy to install. In addition to bundling existing software, several distribution vendors also develop value-added tools that make their distributions easier to administer or use, but the software that they ship is generally written by others.

Open Source Software and GNU: Overview

In the early 1980s, Richard Matthew Stallman began a movement within the software industry. He preached that software should be free. Note that by free he didn’t mean free in terms of price but rather free in the same sense as freedom, or libre. This means shipping not just a product but the entire source code as well. To clarify the meaning of free software, Stallman was once famously quoted as saying:

“Free software” is a matter of liberty, not price. To understand the concept, you should think of “free” as in “free speech,” not as in “free beer.”

Stallman’s position was, somewhat ironically, a return to classic computing, when software was freely shared among hobbyists!

The premise behind giving away source code is simple: A user of the software should never be forced to deal with a developer who might not support that user’s intentions for the software. The user should never have to wait for bug fixes to be published. More important, code developed under the scrutiny of other programmers is typically of higher quality than code written behind locked doors. One of the great benefits of open source software comes from the users themselves: If they have the know-how, they can add new features to the original program and then contribute these features to the source so that everyone else can benefit from them.

This basic desire led to the release of a complete UNIX-like system (aka Linux) to the public, free of license restrictions. Of course, before you can build any operating system, you need to build tools, and this is how the GNU project and its namesake license were born. The tight symbiotic relationship between the GNU project and the Linux kernel project is one of the reasons why you will often see the complete stack written as GNU/Linux.

NOTE  GNU stands for GNU’s Not UNIX. This is an example of a recursive acronym, which is a type of hacker humor. If you don’t understand why it’s funny, don’t worry. You’re still in the majority.

The GNU General Public License

One important thing to emerge from the GNU project is the GNU General Public License (GPL). This license explicitly states that the software being released is free and that no one can ever take away these freedoms. It is acceptable to take the software and resell it, even for a profit; however, in this resale, the seller must release the full source code, including any changes. Because the resold package remains under the GPL, the package can be distributed for free and resold yet again by anyone else for a profit. Of primary importance is the liability clause: The programmers are not liable for any damages caused by their software.

It should be noted that the GPL is not the only license used by open source software developers (although it is arguably the most popular). Other licenses, such as BSD and Apache, have similar liability clauses but differ in their redistribution terms. For instance, the BSD license allows people to make changes to the code and ship those changes without having to disclose the added code (whereas the GPL requires that the added code be shipped). For more information about other open source licenses, check out www.opensource.org.

Upstream and Downstream

Upstream [developer, code, project] and downstream [developer, code, project] are terms you might come across frequently in the FOSS world. To help you understand the concept of upstream and downstream components, let’s start with an analogy. Picture, if you will, a pizza with all your favorite toppings.

The pizza is put together and baked by a local pizza shop. Several things go into making a great pizza—cheeses, vegetables, flour (dough), herbs, meats (or meat substitutes), and sauces, to mention a few. The pizza shop will often make some of these ingredients in-house and rely on other businesses to supply other ingredients. The pizza shop is also tasked with assembling the ingredients into a complete, finished pizza.

Let’s consider one of the most common pizza ingredients—cheese. The cheese is made by a cheesemaker who makes her cheese for many other industries or applications, including the pizza shop. The cheesemaker is pretty set in her ways and has very strong opinions about how her product should be paired with other foodstuffs (wine, crackers, bread, vegetables, and so on). The pizza shop owners, on the other hand, do not care about other foodstuffs—they care only about making a great pizza. Sometimes the cheesemaker and the pizza shop owners will bump heads due to differences in opinion and objectives. And at other times they will be in agreement and cooperate beautifully. Ultimately (and sometimes unbeknownst to them), the pizza shop owners and the cheesemaker care about the same thing: producing the best product they can.

The pizza shop in our analogy here represents the Linux distributions’ vendors/projects (Fedora, Debian, RHEL, openSUSE, and so on). The cheesemaker represents the different software project maintainers that provide the important programs and tools, such as the Bourne Again Shell (Bash), GNU Image Manipulation Program (GIMP), GNOME, KDE, Nmap, LibreOffice, and GNU Compiler Collection (GCC), that are packaged together to make a complete distribution (the pizza). The Linux distribution vendors are referred to as the downstream component of the open source food chain; the maintainers of the accompanying different software projects are referred to as the upstream component.

The Advantages of Open Source Software

If the GPL seems like a bad idea from a commercial standpoint, consider the surge in adoption of successful open source software projects—this is indicative of a system that does indeed work! This success stems from two main factors. First, as mentioned earlier, errors in the code itself are far more likely to be caught and quickly fixed under the watchful eyes of peers. Second, under the GPL system, programmers can release code without the fear of being sued. Without that protection, people might not feel as comfortable releasing their code for public consumption.

NOTE  The concept of free software, of course, often raises the question of why anyone would release his or her work for free. As hard as it might be to believe, some people do it purely for altruistic reasons and the love of it.

Most projects don’t start out as full-featured, polished pieces of work. They often begin life as a quick hack to solve a specific problem bothering the programmer at the time. As a quick-and-dirty hack, the code might not have a sales value. But when this code is shared and consequently improved upon by others who have similar problems and needs, it becomes a useful tool. Other program users begin to enhance the code with features they need, and these additions travel back to the original program. The project thus evolves as the result of a group effort and eventually reaches full refinement. This polished program can contain contributions from possibly hundreds, if not thousands, of programmers who have added little pieces here and there. In fact, there may be little evidence remaining of the original author’s code.

There’s another reason for the success of generously licensed software. Any project manager who has worked on commercial software knows that the real cost of developing software isn’t only in the development phase—it’s also in the cost of selling, marketing, supporting, documenting, packaging, and shipping that software. A programmer carrying out a weekend hack to fix a problem with a tiny, kludged-together program might lack the interest, time, and money to turn that hack into a profitable product.

When Linus Torvalds released Linux in 1991, he released it under the GPL. As a result of its open charter, Linux has attracted a notable number of contributors and reviewers. This participation has made Linux strong and rich in features. It is estimated that since the v2.2.0 kernel, Torvalds’s contributions represent less than 2 percent of the total code base!

NOTE  This might sound strange, but it is true: Contributors to the Linux kernel code include companies with competing operating system platforms. For example, Microsoft was one of the top code contributors to the Linux version 3.0 kernel code base (as measured by the number of changes or patches relative to the previous kernel version). Even though this might have been for self-promoting reasons on Microsoft’s part, the fact remains that the open source licensing model that Linux adopts permits this sort of thing to happen. Anyone who knows how can contribute code. The code is subjected to a peer review process, which in turn helps the code benefit from the “many eyeballs” axiom. In the end, everyone (end users, companies, developers, and so on) benefits.

Because Linux is free (as in speech), anyone can take the Linux kernel and other supporting programs, repackage them, and resell them. A lot of people and corporations have made money with Linux doing just this! As long as these folks release the kernel’s full source code along with their individual packages, and as long as the packages are protected under the GPL, everything is legal. Of course, this also means that packages released under the GPL can be resold by other people under other names for a profit.

In the end, what makes a package from one person more valuable than a package from another person is the value-added features, support channels, and documentation. The money isn’t necessarily in the product alone; it can also be in the services that go with it.

Understanding the Differences Between Windows and Linux

As you might imagine, the differences between Microsoft Windows and the Linux operating system cannot be completely discussed in the confines of this section. Throughout this book, topic by topic, you’ll read about the specific contrasts between the two systems. In some chapters, you’ll find no comparisons because a major difference doesn’t really exist.

But before we attack the details, let’s take a moment to discuss the primary architectural differences between the two operating systems.

Single Users vs. Multiple Users vs. Network Users

Windows was originally designed according to the “one computer, one desk, one user” vision of Microsoft co-founder Bill Gates. For the sake of discussion, we’ll call this philosophy “single user.” In this arrangement, two people cannot work in parallel running (for example) Microsoft Word on the same machine at the same time. You can buy Windows and run what is known as Terminal Services or thin clients, but this requires extra computing power/hardware and extra licensing costs. With Linux you don’t run into the licensing problem, and Linux runs fairly well on modest hardware. Linux easily supports multiuser environments, where multiple users doing different things can be concurrently logged onto a central machine. The operating system (Linux) on the central machine takes care of the resource-sharing details.
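
As a quick sketch of what this means in practice, any logged-in user on a multiuser Linux machine can see who else is sharing it (the names in your output will, of course, differ):

    # List everyone currently logged in to this machine
    who
    # Similar, but also shows what each session is currently doing
    w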

“But, hey! Windows can allow people to offload computationally intensive work to a single machine!” you may argue. “Just look at SQL Server!” Well, that position is only half correct. Both Linux and Windows are indeed capable of providing services such as databases over the network. We can call users of this arrangement network users, since they are never actually logged into the server but rather send requests to the server. The server does the work and then sends the results back to the user via the network. The catch in this case is that an application must be specifically written to perform such server/client duties. Under Linux, a user can run any program allowed by the system administrator on the server without having to redesign that program. Most users find the ability to run arbitrary programs on other machines to be of significant benefit.
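
For example (a minimal sketch in which the host name and account are invented for illustration), a user can run an ordinary, unmodified program on a remote Linux server over SSH and have the results sent back over the network:

    # Run the standard df utility on the remote server; its output comes back to your terminal
    ssh admin@bigserver.example.com df -h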

The Monolithic Kernel and the Micro-Kernel

Three popular forms of kernels are used in operating systems. The first, a monolithic kernel, provides all the services the user applications need. The second, a micro-kernel, is much more minimal in scope and provides only the bare minimum core set of services needed to implement the operating system. And the third is a hybrid of the first two.

Linux, for the most part, adopts the monolithic kernel architecture: It handles everything dealing with the hardware and system calls. Windows, on the other hand, has traditionally worked off a micro-kernel design, with the latest Windows Server versions using the hybrid kernel approach. The Windows kernel provides a small set of services and then interfaces with other executive services that provide process management, input/output (I/O) management, and so on. It has yet to be proven which methodology is truly the better approach.

Separation of the GUI and the Kernel

Taking a cue from the original Macintosh design concept, Windows developers integrated the GUI with the core operating system. One simply does not exist without the other. The benefit of this tight coupling of the operating system and user interface is consistency in the appearance of the system.

Although Microsoft does not impose rules as strict as Apple’s with respect to the appearance of applications, most developers tend to stick with a basic look and feel among applications. One reason this integration is risky, however, is that the video card driver is allowed to run at what is known as “Ring 0” on a typical x86 architecture. Ring 0 is a privilege level—only privileged processes run at this level, while typical user processes run at Ring 3. Because the video card driver is allowed to run at Ring 0, it can misbehave (and it does!), and when it does, it can bring down the whole system.

On the other hand, Linux (like UNIX in general) has kept the two elements—user interface and operating system—separate. The windowing or graphical stack (X11, Xorg, Wayland, and so on) is run as a user-level application, which makes the overall system more stable. If the GUI (which is complex for both Windows and Linux) fails, Linux’s core does not go down with it. The GUI process simply crashes, and you get a terminal window. The graphical stack also differs from the Windows GUI in that it isn’t a complete user interface. It defines only how basic objects should be drawn and manipulated on the screen.

One of the most significant features of the X Window System is its ability to display windows across a network and onto another workstation’s screen. This allows a user sitting on host A to log into host B, run an application on host B, and have all of the output routed back to host A. It is possible, for example, for several users to be logged into the same machine and simultaneously use an open source equivalent of Microsoft Word (such as LibreOffice).
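
Here is a minimal illustration (the host names are hypothetical, and it assumes X11 forwarding is enabled on host B and that you are sitting at a graphical session on host A):

    # From host A, log in to host B with X forwarding and launch LibreOffice there;
    # the program runs on host B, but its windows appear on host A’s screen
    ssh -X user@hostB libreoffice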

In addition to the core graphical stack, a window manager is needed to create a useful environment. Linux distributions come with a choice of window managers and full desktop environments, including the heavyweight and popular GNOME and KDE. Both GNOME and KDE offer an environment that is friendly, even to the casual Windows user. If you’re concerned with speed and a small footprint, you can look into the F Virtual Window Manager (FVWM), the Lightweight X11 Desktop Environment (LXDE), and Xfce.

So which approach is better—Windows or Linux—and why? That depends on what you are trying to do. The integrated environment provided by Windows is convenient and less complex than Linux, but out of the box, Windows lacks the X Window System feature that allows applications to display their windows across the network on another workstation. The Windows GUI is consistent, but it cannot be easily turned off, whereas the X Window System doesn’t have to be running (and consuming valuable hardware resources) on a server.

NOTE  With its latest server family of operating systems, Microsoft has somewhat decoupled the GUI from the base operating system (OS). You can now install and run the server in a so-called Server Core mode. Managing the server in this mode is done via the command line or remotely from a regular system, with full GUI capabilities.

My Network Places

The native mechanism for Windows users to share disks on servers or with each other is through My Network Places (the former Network Neighborhood). In a typical scenario, users attach to a share and have the system assign it a drive letter. As a result, the separation between client and server is clear. The only problem with this method of sharing data is more people-oriented than technology-oriented: People have to know which servers contain which data.

Windows also has a feature borrowed from UNIX: mounting. In Windows terminology, it is implemented with reparse points (mounted folders). This is the ability to mount a volume, such as an optical drive, into a directory on your C: drive.

Right from its inception, Linux was built with support for the concept of mounting, and as a result, different types of file systems can be mounted using different protocols and methods. For example, the popular Network File System (NFS) protocol can be used to mount remote shares/folders and make them appear local. In fact, the Linux Automounter can dynamically mount and unmount different file systems on an as-needed basis. The concept of mounting resources (optical media, network shares, and so on) in Linux/UNIX might seem a little strange, but as you get used to Linux, you’ll understand and appreciate the beauty in this design. To get anything close to this functionality in Windows, you have to map a network share to a drive letter.
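
A brief example of a manual NFS mount (the server name and export path are invented for illustration, and the NFS client utilities must be installed):

    # Create a local mount point and make the server’s exported directory appear there
    sudo mkdir -p /mnt/projects
    sudo mount -t nfs fileserver.example.com:/export/projects /mnt/projects
    # Detach it again when you are finished
    sudo umount /mnt/projects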

A common example of mounting resources under Linux involves mounted home directories. The user’s home directories can reside on a remote server, and the client systems can automatically mount the directories at boot time. So the /home (pronounced slash home) directory exists on the client, but the /home/username directory (and its contents) can reside on the remote server.
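
One simple way to arrange this (a sketch with an invented server name; many sites use the automounter for the same job) is an /etc/fstab entry on each client:

    # /etc/fstab: mount the server’s exported home directories onto /home at boot
    fileserver.example.com:/export/home   /home   nfs   defaults,_netdev   0 0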

With Linux, NFS, and other network file systems, users never have to know server names or directory paths, and their ignorance is your bliss. No more questions about which server to connect to. Even better, users need not know when the server configuration must change. Under Linux, you can change the names of servers and adjust this information on client-side systems without making any announcements or having to reeducate users. Anyone who has ever had to reorient users to new server arrangements or major infrastructure changes will appreciate the benefits and convenience of this.

The Registry vs. Text Files

Think of the Windows Registry as the ultimate configuration database—thousands upon thousands of entries, only a few of which are completely documented.

“What? Did you say your Registry got corrupted?” <insert maniacal laughter> “Well, yes, we can try to restore it from last night’s backups, but then Excel starts acting funny and the technician (who charges $130 just to answer the phone) said to reinstall .…”

In other words, the Windows Registry system can be, at best, difficult to manage. Although it’s a good idea in theory, most people who have serious dealings with it don’t emerge from battling it without a scar or two.

Linux does not have a registry, and this is both a blessing and a curse. The blessing is that configuration files are most often kept as a series of text files (think of the Windows .ini files). This setup means you’re able to edit configuration files using the text editor of your choice rather than tools such as regedit. In many cases, it also means you can liberally add comments to those configuration files so that six months from now you won’t forget why you set up something in a particular way. Most software programs that are used on Linux platforms store their configuration files under the /etc (pronounced slash etc) directory or one of its subdirectories. This convention is widely understood and accepted in the FOSS world.
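
For instance, the SSH server keeps its settings in a plain, commented text file under /etc (the path below is standard, although the available options vary slightly by distribution):

    # View the SSH daemon’s configuration file
    less /etc/ssh/sshd_config
    # Edit it with any text editor you like, adding your own comments to explain changes
    sudo nano /etc/ssh/sshd_config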

The curse of a no-registry arrangement is that there is no standard way of writing configuration files. Each application can have its own format. Many applications are now coming bundled with GUI-based configuration tools to alleviate some of these problems. So you can do a basic setup easily via the GUI tool and then manually edit the configuration file when you need to do more complex adjustments.

In reality, having text files hold configuration information usually turns out to be an efficient method and makes automation much easier too. Once set, these files rarely need to be changed; even so, they are straight text files and therefore easy to view and edit when needed. Even more helpful is that it’s easy to write scripts to read the same configuration files and modify their behavior accordingly. This is especially helpful when automating server maintenance operations, which is crucial in a large site with many servers.
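
As a small, hedged sketch (the configuration file path and the BACKUP_DIR variable are invented for illustration), a maintenance script can read a plain-text settings file and act on it:

    #!/bin/bash
    # Load a hypothetical key=value settings file that defines, say, BACKUP_DIR=/srv/backups
    source /etc/myapp/backup.conf
    # Archive /etc into the configured backup location, stamped with today’s date
    tar czf "${BACKUP_DIR}/etc-$(date +%F).tar.gz" /etc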

Domains and Active Directory

The idea behind Microsoft’s Active Directory (AD) is simple: Provide a repository for any kind of administrative data, whether it is user logins, group information, or even just telephone numbers. In addition, AD provides a central place to manage authentication and authorization (using Kerberos and LDAP) for a domain. The domain synchronization model also follows a reliable and well-understood Domain Name System (DNS)–style hierarchy. As tedious as it may be, AD works pretty well when properly set up and maintained.

Out of the box, Linux does not use a tightly coupled authentication/authorization and data store model the way that Windows does with AD. Instead, Linux uses an abstraction model that allows for multiple types of stores and authentication schemes to work without any modification to other applications. This is accomplished through the Pluggable Authentication Modules (PAM) infrastructure, which handles authentication itself, and the name resolution libraries (the Name Service Switch, or NSS), which give applications a standard means of looking up user and group information. Between them, they allow user and group information to be stored in a variety of back ends.

For administrators looking to Linux, this abstraction layer can seem peculiar at first. However, consider that you can use anything from flat files to Network Information Service (NIS), Lightweight Directory Access Protocol (LDAP), or Kerberos for authentication. This means you can pick the system that works best for you. For example, if you have an existing infrastructure built around AD, your Linux systems can use PAM with Samba or LDAP to authenticate against the Windows domain model. And, of course, you can choose to make your Linux system not interact with any external authentication system. In addition to being able to tie into multiple authentication systems, Linux can easily use a variety of tools, such as OpenLDAP, to keep directory information centrally available as well.
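
Two brief examples of where this abstraction lives (the paths are standard, but the exact entries vary by distribution and site): the name service switch file tells the system libraries where to look up users and groups, and the PAM stacks under /etc/pam.d/ decide how authentication is performed:

    # /etc/nsswitch.conf: consult local files first, then an LDAP directory
    passwd: files ldap
    group:  files ldap

    # A typical line from a service’s PAM stack in /etc/pam.d/:
    # authenticate against the local password database
    auth    required    pam_unix.so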

Summary

This chapter offered an overview of what Linux is and what it isn’t. We discussed a few of the guiding principles, ideas, and concepts that govern open source software and Linux by extension. We ended the chapter by covering some of the similarities and differences between core technologies in the Linux and Microsoft Windows Server worlds. Most of these technologies and their practical uses are dealt with in greater detail in the rest of this book.

If you are so inclined and would like to get more detailed information on the internal workings of Linux itself, you might want to start with the source code. The source code can be found at www.kernel.org. It is, after all, open source!
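
If you prefer to browse a local copy of that source, you can clone the mainline kernel tree with git (the repository is several gigabytes, so the first clone takes a while):

    # Fetch Linus Torvalds’s mainline Linux kernel source tree
    git clone https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git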