The kernel column #102 with Jon Masters – celebrating 20 years of kernel history (and a look ahead to Linux 3.0)
Jon Masters marks the 20th anniversary of the Linux kernel with a reflection on 20 years of Linux kernel history and a look ahead to Linux 3.0…
Twenty years ago, in April of 1991, a young Finnish computer science student at the University of Helsinki began work on a piece of software that would fundamentally change the computer industry. Linus Torvalds had recently acquired a PC powered by an Intel 80386 microprocessor and wanted to exploit its support for ‘Protected Mode’ paged virtual memory (the ability of the processor to isolate individual programs from one another and give each an entire memory address space of its own), and in the process learn how such features worked. At the time, contemporary consumer operating systems such as Microsoft’s Windows 3.0 had only very limited support for the advanced features of the new Intel processor, and commercial UNIX-based alternatives were extremely expensive propositions, while open source operating systems such as Minix relied on older techniques, such as the memory segmentation found in earlier Intel CPUs.
At the time Linus first began work on what would become Linux, it had become somewhat fashionable in academic circles to work on ‘microkernels’ (operating systems formed from many very simple parts whose extremely complex interactions would ultimately prove their downfall, but which in theory were more robust to the failure of any single software component). Microsoft had begun work on what would later be known as Windows NT (which did initially use a microkernel design), and Carnegie Mellon University (CMU) in the US state of Pennsylvania had been working on the Mach microkernel for many years (since the mid-Eighties). The Mach kernel would later be used in derivative form in Apple’s Mac OS X. But in the early Nineties, microkernels still suffered from terrible performance, a consequence of their purist-driven design (great theory, but not necessarily all that practical). Later kernels such as NT and OS X introduced compromises that meant they were really only microkernels in concept rather than in implementation.
In the early Nineties, computer science students such as Linus were increasingly taught using toy operating systems built around microkernels, such as Minix, a creation of Professor Andrew Tanenbaum at VU University Amsterdam. Minix had been created to allow students to study and improve a real operating system that was nonetheless not so overly complicated that it could not be understood over the course of a few semesters’ worth of study. Linus did not subscribe to the philosophy that drove much of Minix’s popularity, but he did use it, since it was essentially the only practical open source alternative available. This was another reason why Linux came about: as a means to have a more conventional UNIX-like operating system whose source code was freely available and which supported the kinds of hardware available to students (rather than the higher-end kind typically required for traditional UNIX).
Linus implemented his kernel on his existing Minix system. This meant that the first file system supported by Linux was the Minix file system, and that Linux initially used many of the tools and utilities that formed part of Minix for building and booting the system. By August of 1991, Linus had reached a point where he was ready to share his kernel (the guts of the operating system) with the world. And so, that summer he sent the following now famous announcement to the Usenet newsgroup comp.os.minix (Usenet being an older newsgroup system widely used at the time in place of today’s mailing lists):
“Hello everybody out there using minix – I’m doing a (free) operating system (just a hobby, won’t be big and professional like gnu) for 386(486) AT clones. This has been brewing since april, and is starting to get ready. I’d like any feedback on things people like/dislike in minix, as my OS resembles it somewhat (same physical layout of the file-system (due to practical reasons) among other things)” – Linus Torvalds, famously understating the impact Linux would have on the world.
What’s in a name?
Interest quickly grew in Linux, and within a year it had grown in size from 10,000 lines of source code to over 175,000 (compare that with nearly 15 million in the latest release). Linux 1.2 added support for the first non-Intel processor architectures in the form of ports to Alpha, SPARC and MIPS systems, as well as the first support for the ELF program executable file format that has since become ubiquitous on all (non-Apple) UNIX and UNIX-like systems (though it caused a painful transition from the older a.out file format along the way).
Early versions of what would later become known as Linux actually had another name. Linus wanted to avoid the egotistical connotation of using a name like Linux and so originally named the kernel ‘Freax’, a portmanteau of ‘Freak’, ‘Free’ (as in open source) and ‘x’ (as in UNIX-like). Early on, the name Linux became popular and eventually stuck after Ari Lemmke, who helped to administer the University of Helsinki FTP server, decided to use the name Linux in place of Freax in the name of files made available for download. Beyond the name, the software licence for early releases specifically excluded commercial distribution of Linux. The licence was changed to the GNU GPL (version 2) in time for the 0.99 release in 1992, something Linus would later describe as “the best thing I ever did”.
Linux 2.x series
Linux 2.0 came in 1996, and brought with it support for more than one CPU through a technology known as SMP (symmetric multiprocessing). The early implementation was crude in that it used a single giant lock (the ‘Big Kernel Lock’) to ensure only one processor could be running kernel code at any one time. Other processors could run applications concurrently, but every time one needed something from the kernel, those requests had to be globally serialised. The BKL was so deeply ingrained that it was only finally removed from the kernel in 2.6.39 (in a patch entitled ‘That’s all, folks!’), although its impact had long since been mitigated by a much more fine-grained approach to locking within the kernel. The release of 2.2 brought the first support for Apple’s PowerPC systems (prior to native support in 2.4) through MkLinux, in which a modified Mach microkernel ran on the bare hardware with Linux layered on top. Apple even gave away copies of MkLinux at MacWorld!
Linux 2.4 was released in 2001 and brought with it the first truly viable kernel for enterprise use. 2.4 supported now very antiquated technologies such as ISA Plug-and-Play and PCMCIA (later renamed PC Card), but it also had the first support for USB. Later, it gained the Logical Volume Manager, RAID and the ext3 journaling file system. This was the first release where Linus experimented with pulling major new features into an already released kernel series. It worked for features like LVM, but it worked far less well when the entire virtual memory management subsystem was replaced in 2.4.10, leading to a series of very unreliable kernels and ever larger alternative ‘-ac’ (Alan Cox) kernel releases that undid many of the changes in the interest of having a kernel that distributions could ship to customers. This was one of the motivators behind the start of the 2.5 development series kernels that introduced many changes ahead of 2.6.
Linux 2.6 was released in December 2003. It really pushed the boundaries of enterprise scalability, featuring support for thousands of CPUs, the new NPTL (Native POSIX Threading Library) and a security subsystem known as SELinux originally created by the National Security Agency (NSA). There was an interruption to kernel development in mid-2005 when BitKeeper, the proprietary (closed source) source code management system Linus had switched to in frustration back in 2002, and had been using under a special licence, was abruptly withdrawn by its authors following an attempt by Andrew Tridgell to reverse-engineer it. This led to Linus writing a completely new source code management system known as ‘git’, which is today used by thousands of other projects. This also led indirectly to a change in workflow for future kernel development: instead of having separate development and stable kernel series, all development would happen in one evolving (previously ‘stable’) series. There would be no Linux 2.7. Lessons from the earlier problems in 2.4 were learned and new processes around defined release cycles were introduced.
Linux 3.0
Development in the 2.6 series has been going well in recent times. Linus has evolved a process of using ‘merge windows’, periods during a development cycle when intrusive changes can be accepted, followed by several rounds of ‘release candidate’ stabilisation.
Read More...
http://www.linuxuser.co.uk/opinion/the-kernel-column-102-with-jon-masters-%E2%80%93-celebrating-20-years-of-kernel-history-and-a-look-ahead-to-linux-3-0/