From Unix to Linux: Key Trends in the Evolution of Operating Systems (Part 3)

The previous article in this series examining the roots of Linux and Unix was something of a paean to the BSD operating system. BSD remains significant in computer history, and important installations of BSD can still be found. Marshall Kirk McKusick, when commenting on the article, said that today, FreeBSD can be found in the Sony PlayStation, Netflix servers, Juniper routers, and elsewhere. Indeed, BSD is important enough for the Linux Professional Institute to offer certification as a BSD Specialist. But somehow BSD got passed by in the 1980s as Unix became the most important operating system in the world. The attention went to other variants, even though some—such as SunOS, the flagship software of Sun Microsystems—were based on BSD.

Fundamental Challenges

Some proponents of BSD blame its relative lack of progress on fear, uncertainty, and doubt surrounding the project during a lawsuit in the early 1990s. Citing unfair competition, AT&T sued BSDi, the company that BSD developers had formed in 1991 to commercialize BSD. AT&T then expanded the complaint to a much more fundamental challenge: that BSDi included AT&T’s Unix source code in BSD. This complaint rested on a lot of chutzpah, considering that the Unix developers at AT&T had benefited from incorporating innumerable tools and features created by the BSD developers. The suit was settled in less than two years, and BSD eventually stood on its own with no AT&T code, but BSD proponents like to point out that GNU/Linux was burgeoning at that very time. They intimate that BSD could have done what GNU/Linux did, were it not saddled by the AT&T lawsuit.

There's some appeal to this argument, because the Linux code and community of 1992 didn't look poised to produce big things—or even aspire to do so. But I have trouble buying the argument that a two-year blip in the long history of BSD could be so destructive. Linux itself faced a much more fundamental, dragged-out legal challenge (the SCO lawsuit, discussed in the next article in this series) without much harm. The fact is that BSD developers were creating their own problems during and after the lawsuit. So I throw my lot in with another set of observers who attribute the eclipse of BSD to its own development and organizational practices.

BSD Fragmentation

The developers and intense users of the BSD distributions I’ve talked to paint a complex portrait of BSD's dilemma, with as many angles as a cubist still life. Warner Losh, a former member of the FreeBSD core team, said in his comments on this article that he believes BSD had a healthy environment under its original developers, the Computer Systems Research Group (CSRG). That team finished its work and disbanded in 1995 with the intention that further development would take place in the BSDi company. Fragmentation started after that.

The leadership started making decisions that other contributors found arbitrary. Team members formed cliques and could not always recognize which contributions from outsiders were worth including. 386BSD, FreeBSD, NetBSD, OpenBSD—one by one, a small team of discontented developers would split off and create their own fork. The Linux community was immature by comparison, but kernel development stayed relatively united and the participants found their way forward to stability.

One might accept the proliferation of different BSD variants as a gift to users. Each variant had its own strengths—so the argument goes—and users could choose what was right for them. But the forks left none of the variants, except possibly FreeBSD, with a large enough critical mass to thrive. Anyone who wanted to develop for BSD needed to choose one of the variants or do a lot of porting. From the standpoint of the publishing industry, I can attest that putting out a book about BSD was nearly impossible. We couldn't cover all variants, and covering a single variant left us with too small an audience to make a profit.

McKusick points out that three separate distributions is a fairly small number for a historic operating system and seems like nothing compared to the fecund proliferation of GNU/Linux distributions. Not only do the utilities in the GNU/Linux distributions differ in important ways—such as the tools used to build and install software packages—but their underlying kernels are different.

This is all valid and worthy of discussion. But it's natural for distributions to build different kernels; the Linux development repository itself has managed to remain unitary. And GNU/Linux enthusiasts will back me up in saying that one can reasonably learn enough utilities to expertly manage all the well-known distributions. Mick Bauer, who wrote Building Secure Servers with Linux for O'Reilly in 2002 (Linux Server Security in a later edition), confirms my point in his review of this article. He writes, "I was surprised at how easy it was to cover Red Hat, Debian/Ubuntu, and SuSE for all my topics. Knowing just a few utilities (mainly package managers) and config-file locations was all it took."

Bauer also attributes the burgeoning of GNU/Linux to two distinguishing traits: the strength of its distributions and the license under which it was developed. Regarding distributions, he says: "From very early on users could choose between militantly free distributions like Slackware and Debian, commercial distributions with structured training and support programs like Red Hat and SuSE, and all points between. But this diversity hasn't (yet) led to any disruptive schisms in Linux kernel development. Early in Linux's evolution, this combination of commercial support contracts and kernel-development stability helped make Linux a viable choice for hosting network services for large corporations."

Bauer's other point concerns the GNU General Public License (GPL), which requires anyone distributing the software to release their changes under the same terms. BSD's license falls into the permissive camp, which allows users to build on the software without opening up their changes. Although it makes sense that the more restrictive license would increase contributions, I am not persuaded that it makes a big difference. Companies that use free software have many incentives to get their changes back into the "core" regardless of legal constraints.

Several reviewers of this article report that the GPL's legal pressure increases the efforts made by companies to contribute back their code. McKusick counters, however, that FreeBSD has more committers than Linux, which makes its process for accepting contributions easier to navigate.

But, as with most laws and contracts, GPL-style licenses must be enforced if they are to be meaningful. Enforcement doesn't require a personal phone call from Richard Stallman (although a company I worked for received one), but various organizations in the open source world do police violations of licenses.

Unix Marches On

As the various BSD projects were sidelined, the 1980s witnessed an unprecedented turn to a single culture in operating systems. Stung by the cold water that Sun Microsystems' success threw in their faces, the other computer vendors (IBM, Digital Equipment Corporation, and more) all turned to variants of either BSD or AT&T Unix.

Fans of other operating systems sometimes looked at Unix's idiosyncrasies and were bewildered by its seemingly irresistible spread. In commenting on this article, Unix historian Warren Toomey attributes the success of Unix to the convenience it offered programmers and other technically adept users. Foremost here is the famous innovation of pipes, where "many small programs collaborate." Some observers say that pipes encouraged programmers to use plain text input and output instead of binary formats, which in turn made both programming and debugging easier.
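To make the mechanism concrete, here is a minimal sketch in C of the pipe facility underlying a shell pipeline such as "ls | wc -l": a parent process streams a line of plain text to a child through a pipe, just as one small program feeds another. The message and buffer size are arbitrary choices for illustration, and error handling is trimmed for brevity.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        int fds[2];                   /* fds[0] is the read end, fds[1] the write end */

        if (pipe(fds) == -1)
            return 1;

        if (fork() == 0) {            /* child: plays the downstream program */
            char buf[64];
            ssize_t n;

            close(fds[1]);            /* close the unused write end */
            n = read(fds[0], buf, sizeof(buf) - 1);
            if (n > 0) {
                buf[n] = '\0';
                printf("child received: %s", buf);
            }
            close(fds[0]);
            return 0;
        }

        close(fds[0]);                /* parent: plays the upstream program */
        const char *msg = "one line of plain text\n";
        write(fds[1], msg, strlen(msg));
        close(fds[1]);                /* closing signals end-of-stream to the reader */
        wait(NULL);
        return 0;
    }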

In addition to minicomputers such as those championed by Digital, companies such as Sun produced workstations meant to support a single engineer or knowledge worker. Although quite costly in comparison to the personal computers being offered during the 1980s, workstations were priced so that companies could provide each professional employee his or her own computer.

Commercial adoption of Unix was greatly aided by the creation of a standard, portable graphical interface. This interface by no means replaced the command line but allowed for word processors, CAD/CAM, visualization tools, and other important applications that professional users were hankering for. The graphical interface was created by MIT's historic Project Athena and released under the MIT license, which, like BSD's license, was very permissive. The developers christened their interface the X Window System, a name that's unbearably boring to anyone but a network engineer. Still, the software was cleverly designed and was a godsend to Unix computer vendors.

But commercialization by numerous vendors increased the proliferation of different variants, making portability harder. A partial solution was POSIX, a specification that purported to standardize Unix system calls, library calls, and core utilities such as the shell (command line).

POSIX was valuable to some extent. Every Unix implementation—including Linux, when it came along later—felt compelled to support the standard. And it introduced a welcome consistency to some important areas such as threads (lightweight processes), which many developers urgently needed in an age of multicore processors.
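As a small taste of that consistency, the sketch below uses the POSIX threads interface exactly as the standard defines it, so it should compile unchanged on any compliant system (build with the -pthread flag). The worker function and the thread count of four are arbitrary choices for illustration.

    #include <pthread.h>
    #include <stdio.h>

    /* Each thread runs this function with its own argument. */
    static void *worker(void *arg)
    {
        long id = (long)arg;
        printf("thread %ld running\n", id);
        return NULL;
    }

    int main(void)
    {
        pthread_t threads[4];
        long i;

        for (i = 0; i < 4; i++)
            pthread_create(&threads[i], NULL, worker, (void *)i);

        for (i = 0; i < 4; i++)
            pthread_join(threads[i], NULL);   /* wait for each thread to finish */

        return 0;
    }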

But like many standards, POSIX was incomplete and slow to respond to its environment. For instance, it couldn't handle timing in fractions of a second well, a capability needed by many applications. Time after time, some operating system would create a better interface than POSIX for some function, and whether other vendors would adopt the improvement was hit-or-miss.
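Sub-second timing did arrive in the standard eventually: the POSIX.1b realtime extensions added calls such as clock_gettime and nanosleep. The sketch below, which assumes a system supporting those later additions (older systems may need linking with -lrt), measures a quarter-second pause at nanosecond granularity.

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        struct timespec start, end;
        struct timespec pause = { 0, 250000000 };   /* 0 s + 250 million ns = 250 ms */

        clock_gettime(CLOCK_MONOTONIC, &start);
        nanosleep(&pause, NULL);                    /* sleep for a fraction of a second */
        clock_gettime(CLOCK_MONOTONIC, &end);

        double elapsed = (end.tv_sec - start.tv_sec)
                       + (end.tv_nsec - start.tv_nsec) / 1e9;
        printf("slept for roughly %.3f seconds\n", elapsed);
        return 0;
    }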

Once, in the late 1980s, Unix vendors tried to combine their efforts into a truly portable operating system. In the next section, we’ll quickly review this attempt.

The Open Software Foundation

History shows that people often come together in the face of a perceived external threat. This was the position in which most of the computer industry found itself in 1988, as AT&T—the owner of Unix—and Sun Microsystems announced a somewhat nebulous partnership. Although it wasn't clear how much AT&T would boost Sun's already accelerating success, some kind of threat to other vendors was present. Sun jettisoned the SunOS that had propelled it into commercial computing and adopted the latest version of AT&T's Unix. Showing no more creativity than MIT, AT&T called this version System V Release 4.

In response to the prospect of losing business to an AT&T/Sun partnership founded on total control over Unix, a half dozen major vendors formed a consortium called the Open Software Foundation (OSF). The group quickly showed promise by creating a toolkit for the X Window System called Motif. Most developers found Motif more appealing than the corresponding toolkit offered by Sun, called OPEN LOOK. After the successful release of this toolkit, OSF took on several more ambitious projects, including a version of Unix called OSF/1, a compiler back end called the Architecturally Neutral Distribution Format (ANDF), and the project's Waterloo, the Distributed Computing Environment (DCE).

The problems with the OSF are further explained in a section of my memoir that contrasts the OSF with Linux development. OSF, like BSD, could not coordinate independent and sometimes competing contributors. The participating companies couldn't put aside their clashing interests in favor of true collaboration. “Beggar thy neighbor” seemed to be their competitive strategy.

Microsoft: The Outlier

Before proceeding with this history of Unix variants, I should acknowledge the single important operating system of the past 30 years that owes almost nothing to Unix: Microsoft Windows. Ever since the release of its first major product—the DOS operating system that drove the IBM personal computer—Microsoft has consistently taken a path that rarely intersects with Unix.

In a comment on this article, Internet service provider Brett Glass pointed out that when Windows incorporated TCP/IP, the system depended almost entirely on BSD commands for network management (which formed the groundwork for Linux network administration, too) and that Windows even contains some BSD code to handle networking. This makes sense because BSD led the pack in developing the Internet protocol stack, whereas Microsoft was slow to recognize the Internet and had to catch up fast. 

Microsoft’s first Windows system was poised precariously on top of DOS. An example of DOS's limitations was a continuing legacy of memory management based on a totally obsolete Intel concept called "segmented memory." Microsoft's de facto monopoly on PC operating systems served it well for a decade, but the growing demands that applications made on computers in the late 1980s forced the company to look for a totally new base for its operating system. Microsoft designed its revamped operating system, called Windows NT, on several precedents—but decidedly not on Unix.

One precedent was VMS, the operating system that Digital Equipment Corporation had created for its VAX line. As Digital declined, a leading VMS developer named Dave Cutler joined Microsoft to design the new Windows.

Another precedent was Mach, an intriguing research project led by Rick Rashid at Carnegie Mellon University. Mach was a radically new design in operating systems that rejected monolithic kernels for a concept called a microkernel. This sort of kernel has minimal responsibilities and draws on various outside services for most of the tasks normally performed by an operating system. The microkernel concept is theoretically appealing, and reflects the trend toward modularization that has pervaded computing during the past four decades, such as today's microservices. Microkernels haven't proven successful in general-purpose operating systems, though, and have faded away along with Mach. Although Windows NT started with a microkernel design, it eventually ended up without much of Mach’s microkernel legacy. But interestingly, a kind of microkernel concept has popped up again as a way to super-containerize applications, as in MirageOS.

The Mach/Microsoft relationship might have been more important for its ancillary effects. Rashid left academia to join Microsoft—initially at their celebrated research center—and took on several high-level roles at the company.

While excluding Unix from influence on its operating system design, Microsoft has acknowledged the operating system's importance in other ways. The company developed a rudimentary but compliant POSIX interface, licensed a Unix variant called Xenix for several years, contributed a lot to the Linux kernel, and now offers a compatibility layer called Windows Subsystem for Linux. So even the company once seen as the archenemy of Unix has joined with Unix in the modern era.

The Promise of Collaboration

In this article, we saw the failure of two important attempts to unify and drive forward Unix. BSD and the OSF both withered even as Unix became critical to computing.

Perhaps part of their problem was that both BSD and the OSF depended on fairly traditional development processes. Version control was primitive (CVS came into use only during the late 1980s) and testing was seen as a task for a separate QA team. The management of people and personalities was even less understood. Under such conditions, an orderly and convivial development model for Unix seemed impossible.

But the Internet was growing and with it new opportunities for collaborative production. The final article in this series starts with that elusive promise. Programmers were exploring new models for distributed development in the early 1990s, among them a 21-year-old computer science student named Linus Torvalds.
