Berkeley Software Distribution

Berkeley Software Distribution (BSD, sometimes called Berkeley Unix) is a Unix operating system derivative developed and distributed by the Computer Systems Research Group (CSRG) of the University of California, Berkeley, from 1977 to 1995. Today the term “BSD” is often used non-specifically to refer to any of the BSD descendants, which together form a branch of the family of Unix-like operating systems. Operating systems derived from the original BSD code remain actively developed and widely used.

Historically, BSD has been considered a branch of UNIX—“BSD UNIX”—because it shared the initial codebase and design with the original AT&T UNIX operating system. In the 1980s, BSD was widely adopted by vendors of workstation-class systems in the form of proprietary UNIX variants such as DEC ULTRIX and Sun Microsystems SunOS. This can be attributed to the ease with which it could be licensed, and to the familiarity that the founders of many technology companies of the time had with it.

Although these proprietary BSD derivatives were largely superseded by the UNIX System V Release 4 and OSF/1 systems in the 1990s (both of which incorporated BSD code and are the basis of other modern Unix systems), later BSD releases provided a basis for several ongoing open source development projects, e.g. FreeBSD, NetBSD, OpenBSD, and DragonFly BSD. These, in turn, have been incorporated in whole or in part into modern proprietary operating systems, e.g. the TCP/IP (IPv4 only) networking code in Microsoft Windows and a part of the foundation of Apple’s OS X.

Berkeley’s Unix was the first Unix to include libraries supporting the Internet Protocol stacks: Berkeley sockets. (A Unix implementation of IP’s predecessor, the ARPAnet’s NCP, with FTP and Telnet clients, had been produced at U. Illinois in 1975, and was available at Berkeley.[11][12] However, memory scarcity on the PDP-11 forced a complicated design and led to performance problems.[4])

By integrating sockets with the Unix operating system’s file descriptors, it became almost as easy to read and write data across a network as it was to access a disk. The AT&T laboratory eventually released its own STREAMS library, which incorporated much of the same functionality in a software stack with a different architecture, but the wide distribution of the existing sockets library reduced the impact of the new API. Early versions of BSD were used to form Sun Microsystems’ SunOS, founding the first wave of popular Unix workstations.

Today, BSD continues to serve as a technological testbed for academic organizations. It can be found in use in numerous commercial and free products, and, increasingly, in embedded devices. The general quality of its source code, as well as of its documentation (especially the reference manual pages, commonly referred to as man pages), makes it well-suited for many purposes.

The permissive nature of the BSD license allows companies to distribute derived products as proprietary software without exposing source code, and sometimes intellectual property, to competitors.

BSD operating systems can run much native software of several other operating systems on the same architecture, using a binary compatibility layer. Binary compatibility is much simpler and faster than emulation; it allows, for instance, applications intended for Linux to be run at effectively full speed. This makes BSDs suitable not only for server environments but also for workstations, given the increasing availability of commercial or closed-source software for Linux only.

A number of commercial operating systems are also partly or wholly based on BSD or its descendants, including Sun’s SunOS and Apple Inc.’s Mac OS X.

Most of the current BSD operating systems are open source and available for download, free of charge, under the BSD License, the most notable exception being Mac OS X. They also generally use a monolithic kernel architecture, apart from Mac OS X and DragonFly BSD, which feature hybrid kernels.



History of operating systems

The earliest computers were mainframes that lacked any form of operating system. Each user had sole use of the machine for a scheduled period of time and would arrive at the computer with program and data, often on punched paper cards and magnetic or paper tape. The program would be loaded into the machine, and the machine would be set to work until the program completed or crashed. Programs could generally be debugged via a control panel using toggle switches and panel lights. It is said that Alan Turing was a master of this on the early Manchester Mark 1 machine, and he was already deriving the primitive conception of an operating system from the principles of the Universal Turing machine.

Symbolic languages, assemblers, and compilers were developed for programmers to translate symbolic program-code into machine code that previously would have been hand-encoded. Later machines came with libraries of support code on punched cards or magnetic tape, which would be linked to the user’s program to assist in operations such as input and output. This was the genesis of the modern-day operating system. However, machines still ran a single job at a time. At Cambridge University in England the job queue was at one time a washing line from which tapes were hung with different colored clothes-pegs to indicate job-priority.

As machines became more powerful, the time to run programs diminished and the time to hand off the equipment to the next user became very large by comparison. Accounting for and paying for machine usage moved on from checking the wall clock to automatic logging by the computer. Run queues evolved from a literal queue of people at the door, to a heap of media on a jobs-waiting table, or batches of punch-cards stacked one on top of the other in the reader, until the machine itself was able to select and sequence which magnetic tape drives processed which tapes. Where program developers had originally had access to run their own jobs on the machine, they were supplanted by dedicated machine operators who looked after the well-being and maintenance of the machine and were less and less concerned with implementing tasks manually. When commercially available computer centers were faced with the implications of data lost through tampering or operational errors, equipment vendors were put under pressure to enhance the runtime libraries to prevent misuse of system resources. Automated monitoring was needed not just for CPU usage but for counting pages printed, cards punched, cards read, disk storage used and for signaling when operator intervention was required by jobs such as changing magnetic tapes and paper forms. Security features were added to operating systems to record audit trails of which programs were accessing which files and to prevent access to a production payroll file by an engineering program, for example.

All these features were building up towards the repertoire of a fully capable operating system. Eventually the runtime libraries became an amalgamated program that was started before the first customer job and could read in the customer job, control its execution, record its usage, reassign hardware resources after the job ended, and immediately go on to process the next job. These resident background programs, capable of managing multistep processes, were often called monitors or monitor-programs before the term OS established itself.

An underlying program offering basic hardware management, software scheduling, and resource monitoring may seem a remote ancestor to the user-oriented OSes of the personal computing era. But there has been a shift in the meaning of OS. Just as early automobiles lacked speedometers, radios, and air conditioners that later became standard, more and more optional software features became standard features in every OS package, although some applications such as database management systems and spreadsheets remain optional and separately priced. This has led to the perception of an OS as a complete user system with an integrated graphical user interface, utilities, some applications such as text editors and file managers, and configuration tools.

The true descendant of the early operating systems is what is now called the “kernel”. In technical and development circles the old restricted sense of an OS persists because of the continued active development of embedded operating systems for all kinds of devices with a data-processing component, from hand-held gadgets up to industrial robots and real-time control systems, which do not run user applications at the front-end. An embedded OS in a device today is not so far removed as one might think from its ancestor of the 1950s.

The broader categories of systems and application software are discussed in the computer software article.


Bootstrapping using the BIOS

Motherboards contain some non-volatile memory to initialize the system and load some startup software, usually an operating system, from an external peripheral device. Microcomputers such as the Apple II and IBM PC used ROM chips mounted in sockets on the motherboard. At power-up, the central processor would load its program counter with the address of the boot ROM and start executing instructions from the ROM. These instructions initialized and tested the system hardware, displayed system information on the screen, performed RAM checks, and then loaded an initial program from an external or peripheral device (disk drive). If none was available, then the computer would perform tasks from other memory stores or display an error message, depending on the model and design of the computer and the ROM version. For example, both the Apple II and the original IBM PC had Microsoft Cassette BASIC in ROM and would start that if no program could be loaded from disk.

Most modern motherboard designs use a BIOS, stored in an EEPROM chip soldered to or socketed on the motherboard, to bootstrap an operating system. Non-operating system boot programs are still supported on modern IBM PC-descended machines, but nowadays it is assumed that the boot program will be a complex operating system such as Microsoft Windows NT or Linux. When power is first supplied to the motherboard, the BIOS firmware tests and configures memory, circuitry, and peripherals; this process is known as the Power-On Self Test (POST).

On recent motherboards, the BIOS may also patch the central processor microcode if the BIOS detects that the installed CPU is one for which errata have been published.

In IBM PC compatible computers, the Basic Input/Output System (BIOS), also known as the System BIOS, ROM BIOS, or PC BIOS (/ˈbaɪ.oʊs/), is a de facto standard defining a firmware interface.[1] The name originated from the Basic Input/Output System used in the CP/M operating system in 1975.[2][3] The BIOS software is built into the PC, and is the first software run by a PC when powered on.

The fundamental purposes of the BIOS are to initialize and test the system hardware components, and to load a bootloader or an operating system from a mass storage device. The BIOS additionally provides an abstraction layer for the hardware, i.e. a consistent way for application programs and operating systems to interact with the keyboard, display, and other input/output devices. Variations in the system hardware are hidden by the BIOS from programs that use BIOS services instead of directly accessing the hardware. Modern operating systems ignore the abstraction layer provided by the BIOS and access the hardware components directly.