
Operating Systems (OS)

a history and introduction

 


 

This part of THOCP has not been given much attention yet and is, to be honest, waiting for someone to write it. That someone needs a general grasp of what an OS is and some affinity with the history of computing.

In the meantime, all suggestions from our readers concerning operating systems are collected here as we piece this part together.

So this section is really under construction; any suggestion or information is welcome.

 

General

An OS takes care of all input and output in a computer system. It manages users, processes, memory, printing, telecommunication, networking, and so on.

It sends data to the disk, the printer, the screen, and other peripherals connected to the computer.

And because every machine is built differently, commands for input or output have to be treated differently too. In almost all cases an operating system is not one large behemoth but consists of many small system programs governed by the core, or kernel, of the OS. Because these supporting programs are compact, it is easier to rewrite parts or packages of the OS than to redesign an entire program.

In general, programmers only have to make a "call" to the system to make things happen.

This not only makes their lives less miserable, it also shortens production time. In addition, programs can run on different types of machines with the same family of CPUs without any changes to the program. This is what makes a standard operating system so important.
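As a concrete illustration, here is a minimal sketch in C, assuming a POSIX-style system: the program simply asks the OS to put some text on the screen, and the same source runs unchanged on any machine whose OS provides the write() call.

    #include <unistd.h>

    int main(void)
    {
        const char msg[] = "hello from user space\n";
        /* write() is the "call" to the system: the kernel, not the
           program, drives whatever device file descriptor 1 refers to */
        write(1, msg, sizeof msg - 1);   /* 1 = standard output */
        return 0;
    }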

In fact, any form of standardization is important for production and compatibility.

 

Functions and Structure

Introduction

At first operating systems were designed to help applications interact with the computer hardware. While this is still the case, the importance of the operating system has grown to the point where (at least in the minds of many users) the operating system defines the machine. Most users engaged in the Mac - PC - Unix battle are arguing about the operating systems on these machines, not the hardware platform itself.

The operating system provides a layer of abstraction between the user and the bare machine. Users and applications do not see the hardware directly, but view it through the operating system.

This abstraction can be used to hide certain hardware details from users and applications. Thus, changes in the hardware are not seen by the user (even though the OS must accommodate them).

This is particularly advantageous for vendors that want to offer a consistent OS interface across an entire line of hardware platforms. For example, certain operations such as interaction with 3D graphics hardware can be controlled by the operating system. When an instruction pertaining to that hardware is executed and the hardware is present, all is fine. However, if the hardware is not present, the illegal instruction generates a trap, and the OS can emulate the desired instruction in software.

Another way that abstraction can be used is to make related devices appear the same from the user point of view. For example, hard disks, floppy disks, CD-ROMs, and even tape are all very different media, but in many operating systems they appear the same to the user.

Unix, and increasingly Windows NT, take this abstraction even further. From a user and application programmer standpoint, Unix is Unix regardless of the CPU make and model. As previously mentioned, it is this feature of Unix more than any other that is responsible for Unix's popularity.

We can view an operating system as providing four basic interfaces:

interface to the underlying hardware

interface to application programs

interface to the user

interface to the system manager

Each of these interfaces provides the appropriate view for different groups of individuals:

hardware developers who want their hardware to be supported by a particular operating system are primarily interested in the OS-hardware interface.

application programmers are primarily interested in the OS-application interface.

ordinary users are interested in the user interface. Many books that purport to be about a particular operating system in fact mainly discuss the user interface.

system managers are obviously interested in the system management interface.

Most operating systems in use today are composed of two distinct parts: the kernel and the system programs. The kernel is primarily responsible for the first two of the interfaces described above, and the system programs are primarily responsible for the last two.

 

Functionality of the Operating System Kernel

 

Processes.

A key abstraction utilized in the design of an operating system is the notion of process.

A process is a program in execution.

the status of a process includes:

the code that is executing

the values of its variables

the contents of the CPU registers, especially the program counter (PC)

the state of the process (running, ready, waiting, etc.)

At any given time, the system kernel is managing a collection of processes.

some are user processes (shells, applications, etc.)

some are system processes (print spooler, accounting process, etc.)

An important kernel function is the management of processes. The kernel is responsible for creating, scheduling and deleting processes and often for inter-process communication.
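Sketched in C, the per-process bookkeeping described above might look roughly like the structure below (often called a process control block); the field names are illustrative, not taken from any particular kernel.

    enum proc_state { RUNNING, READY, WAITING };

    struct process {
        int              pid;        /* process identifier                  */
        enum proc_state  state;      /* running, ready, waiting, ...        */
        unsigned long    pc;         /* saved program counter               */
        unsigned long    regs[16];   /* saved contents of the CPU registers */
        void            *code;       /* the code that is executing          */
        void            *variables;  /* the values of its variables         */
        struct process  *next;       /* link in the kernel's process list   */
    };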

Resource Allocation.

Modern operating systems often provide users and applications with a virtual machine, an interface to the underlying hardware that makes it appear as though the user is the only user of the machine and its hardware.

CPU.

Whether the computer has one CPU or several CPUs, it is usually the case that there are more processes than CPUs. Thus, the operating system is responsible for scheduling the processes on the CPU(s).
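One common way to share the CPU is round-robin scheduling: each ready process gets one time slice in turn. The toy C program below only simulates the idea in user space; a real kernel switches hardware contexts instead of counting down integers.

    #include <stdio.h>

    int main(void)
    {
        int slices_left[3] = { 3, 1, 2 };   /* work remaining per job */
        int done = 0;

        while (done < 3) {
            for (int job = 0; job < 3; job++) {
                if (slices_left[job] > 0) {
                    printf("running job %d for one time slice\n", job);
                    if (--slices_left[job] == 0) {
                        printf("job %d finished\n", job);
                        done++;
                    }
                }
            }
        }
        return 0;
    }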

Memory.

There is a finite amount of memory that must be shared among the processes. The way this is done varies between different operating systems, but a commonly used mechanism is that of virtual memory.
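The core trick behind virtual memory is translating the addresses a process uses into physical addresses. A minimal C sketch, assuming 4 KB pages and a single-level page table (real MMUs, multi-level tables and page faults are left out):

    #include <stdint.h>

    #define PAGE_SIZE 4096u

    /* hypothetical page table: index = virtual page number,
       entry = physical frame number assigned by the OS */
    static uint32_t page_table[1024];

    uint32_t translate(uint32_t virtual_address)
    {
        uint32_t page   = virtual_address / PAGE_SIZE;  /* which page       */
        uint32_t offset = virtual_address % PAGE_SIZE;  /* where inside it  */
        return page_table[page] * PAGE_SIZE + offset;   /* physical address */
    }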

IO devices.

Several different processes may be trying to access a single IO device, and the operating system must manage these accesses. Note that this is a different issue than process scheduling, since IO is often being performed for processes that are not currently executing.

Some devices (e.g. disks) have resources that can be shared among users and/or user processes. The operating system is responsible for managing and protecting these resources.

Support Services. Another important operating system task is providing support services for processes. These include:

Support for IO operations. We've already discussed how the operating system controls IO to enforce a protection scheme.

File system management.

Networking.

Protection.

Interrupts and Traps. A great deal of the kernel consists of code that is invoked as the result of an interrupt or a trap.

While the words "interrupt" and "trap" are often used interchangeably in the context of operating systems, there is a distinct difference between the two.

An interrupt is a CPU event that is triggered by some external device.

A trap is a CPU event that is triggered by a program. Traps are sometimes called software interrupts. They can be deliberately triggered by a special instruction, or they may be triggered by an illegal instruction or an attempt to access a restricted resource.

When an interrupt is triggered by an external device, the hardware will save the status of the currently executing process, switch to kernel mode, and enter a routine in the kernel.

This routine is a first level interrupt handler. It can either service the interrupt itself or wake up a process that has been waiting for the interrupt to occur.

When the handler finishes, it usually causes the CPU to resume the process that was interrupted. However, the operating system may schedule another process instead.

When an executing process requests a service from the kernel using a trap, the process's status information is saved, the CPU is placed in kernel mode, and control passes to code in the kernel.

This kernel code is called the system service dispatcher. It examines parameters set before the trap was triggered, often information in specific CPU registers, to determine what action is required. Control then passes to the code that performs the desired action.

When the service is finished, control is returned to either the process that triggered the trap or some other process.
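A rough C sketch of such a dispatcher follows; the service numbers and the do_* routines are invented for illustration, and on real hardware the service number and arguments would be read from registers saved at the trap.

    #include <stdio.h>

    /* stand-ins for the real kernel service routines */
    static long do_getpid(void)             { return 42; }
    static long do_write(long fd, long len) { printf("write %ld bytes to fd %ld\n", len, fd); return len; }

    /* the system service dispatcher: branch on the requested service */
    long dispatch(int service, long arg1, long arg2)
    {
        switch (service) {
        case 1:  return do_getpid();
        case 2:  return do_write(arg1, arg2);
        default: return -1;            /* unknown service: fail the request */
        }
    }

    int main(void)
    {
        printf("service 1 returned %ld\n", dispatch(1, 0, 0));
        dispatch(2, 1, 128);
        return 0;
    }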

Traps can also be triggered by a fault. In this case the usual action is to terminate the offending process. It is possible on some systems for applications to register handlers that will be invoked when certain conditions occur -- such as a division by zero.
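On POSIX systems, for instance, a program can register such a handler with signal(). The sketch below catches SIGFPE, the signal raised for arithmetic faults such as integer division by zero, instead of letting the fault terminate the process; raise() is used to simulate the fault.

    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>

    static void on_arithmetic_fault(int sig)
    {
        (void)sig;
        puts("caught an arithmetic fault");
        exit(1);            /* returning from a real SIGFPE would usually re-fault */
    }

    int main(void)
    {
        signal(SIGFPE, on_arithmetic_fault);   /* register the handler          */
        raise(SIGFPE);                         /* simulate a division-by-zero trap */
        return 0;
    }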

 

Operating System Design Principles

Operating system design is a complex task. One of the driving forces behind software engineering was the complexity of OS design. (See, for example, The Mythical Man-Month by Frederick Brooks).

System design goals:

User interface: should the interface be easy to learn by a novice user, or should it be designed for the convenience of an experienced user? (multiple user interfaces?)

Efficient system resource management. Unfortunately, the more complete the resource management, the more overhead.

Security. Once again, the more secure a system is the less efficient it is.

Flexibility. Most operating systems come preconfigured for many different devices. Part of the process of setting up a particular machine is to construct a version of the operating system that is tuned for the local installation. This tuning often involves setting certain limits, such as the maximum number of processes. It also involves specifying the attached hardware so that only the necessary drivers will be loaded. Some operating systems can load and unload drivers automatically at run-time.

Portability. Will the operating system be portable to widely varying types of hardware, or just different models of a particular class of hardware?

Backwards compatibility and emulation. Is it important that software that ran under previous operating system versions or under different operating systems be supported?

Layered design:

The operating system consists of multiple layers. Each layer depends only on the layer(s) beneath it.

Advantages: improved security, since only the layers close to the hardware need to operate in kernel mode;
improved portability, since only a small part of the operating system interfaces with the hardware;
easier maintenance of the operating system code.

Disadvantages: deciding what functionality to put in each layer can be difficult, because some interdependencies would violate the layering model;
decreased efficiency.


Distinction between mechanisms and policies:

a mechanism is a facility the system provides to the system manager. For example, VMS allows the manager to control whether or not a given account can be logged in to over a network connection.

a policy is a decision made by the manager(s) about how to accomplish some goal. For example, a company may decide that it will not allow privileged accounts to be logged in to over a network connection.

mechanisms are the tools used to implement policies.
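In code the distinction might look like the C sketch below (all names invented for illustration): the per-account flag and the check that enforces it are the mechanism; the rule about privileged accounts is the policy the manager chooses.

    struct account {
        char name[32];
        int  privileged;            /* 1 = privileged account                */
        int  allow_network_login;   /* mechanism: a flag the manager can set */
    };

    /* policy: this site decides that privileged accounts may never
       log in over a network connection */
    void apply_site_policy(struct account *a)
    {
        a->allow_network_login = a->privileged ? 0 : 1;
    }

    /* mechanism: the system simply enforces whatever the flag says */
    int may_login_over_network(const struct account *a)
    {
        return a->allow_network_login;
    }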

Virtual Machines

The concept of virtual machines is closely related to layering.

 

In a typical multi-user system, users are expected to know that the machine is shared by other users, and that resources such as devices are shared between all the users.

In virtual machine operating systems an additional layer of abstraction is placed between the users and the system, so that it appears to each user that they are using a machine dedicated to them.

Usually it is the case that a more powerful machine is used to host several virtual machines. For example, the 80386 and later Intel CPUs supported virtual 8086 machines. Thus, an operating system designed for the 80386 could actually run several copies of MS-DOS and appear to the user to be several different PCs at the same time.

Another example of a virtual machine system is the IBM 370 running the VM operating system. This allowed users to work as if they had a dedicated (but smaller, less powerful) 370 completely at their disposal.

 

Stacked Job Batch Systems (mid 1950s - mid 1960s) (2)

A batch system is one in which jobs are bundled together with the instructions necessary to allow them to be processed without intervention.

The basic physical layout of the memory of a batch job computer is shown below:

+------------------------------------------+
|  Monitor (permanently resident)          |
+------------------------------------------+
|  User space                              |
|  (compilers, programs, data, etc.)       |
+------------------------------------------+

 

The monitor is system software that is responsible for interpreting and carrying out the instructions in the batch jobs. When the monitor starts a job, the entire computer is dedicated to the job, which then controls the computer until it finishes.

A sample of several batch jobs might look like this:

$JOB user_spec ; identify the user for accounting purposes
$FORTRAN ; load the FORTRAN compiler
source program cards
$LOAD ; load the compiled program
$RUN ; run the program
data cards
$EOJ ; end of job

$JOB user_spec ; identify a new user
$LOAD application
$RUN
data
$EOJ

 

Often magnetic tapes and drums were used to store data and compiled programs, temporarily or permanently.

1. Advantages of batch systems:
* much of the work of the operator moves to the computer
* increased performance, since a job could start as soon as the previous job finished

2. Disadvantages:
* turn-around time can be large from the user's standpoint
* it is more difficult to debug a program
* due to the lack of a protection scheme, one batch job can affect pending jobs (read too many cards, etc.)
* a job could corrupt the monitor, thus affecting pending jobs
* a job could enter an infinite loop

 

 

One of the major shortcomings of early batch systems is that there's no protection scheme to prevent one job from adversely affecting other jobs.

The solution to this brought a simple protection scheme, in which certain memory areas (e.g. where the monitor resides) were made off-limits to user programs. This prevented user programs from corrupting the monitor.

To keep user programs from reading too many (or not enough) cards, the hardware is changed to allow the computer to operate in one of two modes: one for the monitor and one for the user programs. IO can only be performed in monitor mode, so that IO requests from the user programs are passed to the monitor. In this way, the monitor can keep a job from reading past its own $EOJ card.

To prevent an infinite loop, a timer is added to the system and the $JOB card is modified so that a maximum execution time for the job is passed to the monitor. The computer will interrupt the job and return control to the monitor when this time is exceeded.


Spooling Batch Systems (mid 1960s - late 1970s)

One difficulty with simple batch systems is that the computer still needs to read the deck of cards before it can begin to execute the job. This means that the CPU is idle (or nearly so) during these relatively slow operations.

Since it is faster to read from a magnetic tape than from a deck of cards, it became common for computer centers to have one or more less powerful computers in addition to their main computer. The smaller computers were used to read decks of cards onto a tape, so that the tape would contain many batch jobs. This tape was then loaded on the main computer and the jobs on the tape were executed. The output from the jobs would be written to another tape, which would then be removed and loaded on a less powerful computer to produce any hardcopy or other desired output.

It was a logical extension of the timer idea described above to have a timer that would only let jobs execute for a short time before interrupting them so that the monitor could start an IO operation. Since the IO operation could proceed while the CPU was crunching on a user program, little degradation in performance was noticed.

Since the computer can now perform IO in parallel with computation, it became possible to have the computer read a deck of cards to a tape, drum or disk, and to write printer output to a tape, while it was computing. This process is called SPOOLing: Simultaneous Peripheral Operation OnLine.

Spooling batch systems were the first and are the simplest of the multiprogramming systems.

One advantage of spooling batch systems was that the output from jobs was available as soon as the job completed, rather than only after all jobs in the current cycle were finished.


Multiprogramming Systems (1960s - present)

As machines with more and more memory became available, it was possible to extend the idea of multiprogramming (or multiprocessing) as used in spooling batch systems to create systems that would load several jobs into memory at once and cycle through them in some order, working on each one for a specified period of time.

 

+------------------------------------------+
|  Monitor (more like an operating system) |
+------------------------------------------+
|  User program 1                          |
|  User program 2                          |
|  User program 3                          |
|  User program 4                          |
+------------------------------------------+

 

At this point the monitor is growing to the point where it begins to resemble a modern operating system. It is responsible for:

* starting user jobs
* spooling operations
* IO for user jobs
* switching between user jobs
* ensuring proper protection while doing the above

As a simple, yet common example, consider a machine that can run two jobs at once. Further, suppose that one job is IO intensive and that the other is CPU intensive. One way for the monitor to allocate CPU time between these jobs would be to divide time equally between them. However, the CPU would be idle much of the time the IO bound process was executing.

A good solution in this case is to allow the CPU bound process (the background job) to execute until the IO bound process (the foreground job) needs some CPU time, at which point the monitor permits it to run. Presumably it will soon need to do some IO and the monitor can return the CPU to the background job.
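Sketched in C, that arrangement amounts to the loop below; foreground_ready() and the run_* helpers are hypothetical stand-ins for the monitor's real bookkeeping.

    extern int  foreground_ready(void);  /* has its IO completed and it wants the CPU? */
    extern void run_foreground(void);    /* runs briefly, then blocks on IO again      */
    extern void run_background(void);    /* CPU-bound job soaks up the remaining time  */

    void allocate_cpu(void)
    {
        for (;;) {
            if (foreground_ready())
                run_foreground();
            else
                run_background();
        }
    }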


Timesharing Systems (1970s - present)

Back in the days of the "bare" computers without any operating system to speak of, the programmer had complete access to the machine. As hardware and software was developed to create monitors, simple and spooling batch systems and finally multiprogrammed systems, the separation between the user and the computer became more and more pronounced.

Users, and programmers in particular, longed to be able to "get to the machine" without having to go through the batch process. In the 1970s and especially in the 1980s this became possible in two different ways.

The first involved timesharing or timeslicing. The idea of multiprogramming was extended to allow multiple terminals to be connected to the computer, with each in-use terminal being associated with one or more jobs on the computer. The operating system was responsible for switching between the jobs, now often called processes, in a way that favored user interaction. If the context switches occurred quickly enough, the user had the impression of having direct access to the computer.

Interactive processes are given a higher priority so that when IO is requested (e.g. a key is pressed), the associated process is quickly given control of the CPU so that it can process it. This is usually done through the use of an interrupt that causes the computer to realize that an IO event has occurred.

It should be mentioned that there are several different types of time sharing systems. One type is represented by computers like our VAX/VMS computers and UNIX workstations. In these computers entire processes are in memory (albeit virtual memory) and the computer switches between executing code in each of them. In other types of systems, such as airline reservation systems, a single application may actually do much of the timesharing between terminals. This way there does not need to be a different running program associated with each terminal.

Operating Systems for minis and mainframes

Major operating systems:

UNIX

The B language was further developed by Ken Thompson, who used it to write a series of programs to operate the machines he worked with. As a logical result the UNIX operating system was designed; it was released in 1973 by Bell Laboratories.

S/390

VMS

XENIX

 

 

Operating Systems for Microcomputers or Personal computers

 

The second way that programmers and users got back to the machine was the advent of personal computers around 1980. Finally computers became small enough and inexpensive enough that an individual could own one, and hence have complete access to it.

Major operating systems:

Apple OS

BEOS

CP/M

Control Program for Microcomputers

DOS - Disk Operating System

Most used names: MS-DOS - PC compatibles, PC-DOS - IBM and compatibles, TOS - Atari, DOS - Amiga and many others

MS = Microsoft; T stands for Tramiel, Atari's company president; PC stands for Personal Computer

MS-DOS and PC-DOS were the most frequently used operating systems for PCs until Windows 95 came along, at which point DOS became integrated with a WIMP environment.

Both MS-DOS and PC-DOS have the same origin, namely QDOS.

In mid-1980 Seattle Computer Products asked Tim Paterson to develop an operating system capable of running on its 8086 CPU card. Under time pressure Tim developed an operating system whose structure did not exactly shine through its clarity. He named it QDOS - Quick and Dirty Operating System. On the basis of this system he built another operating system, 86-DOS, which was published at the end of 1980. This new version had very few bugs and was liked for its enhanced efficiency. Sales of this DOS version were favorable to the acceptance of the system, also because the author had structured the system functions so that they matched those of the popular CP/M system. For this reason manufacturers could adapt machines that had been intended for CP/M to 86-DOS without any problems. Also, the limitations on programs running under CP/M were eliminated.

In 1981 IBM decided to build the Intel 8086 CPU into their PCs and looked around for an efficient operating system. IBM chose Microsoft, which by then had bought the rights to 86-DOS. Under the leadership of Tim Paterson, who had moved to Microsoft together with his brainchild, Microsoft's programmers developed version 1 of MS-DOS on prototypes of the IBM PC. IBM liked the system, bought a license, and introduced it together with its new machine in 1981. IBM called it PC-DOS.

This first version soon no longer satisfied users, partly because of the improved equipment that came onto the market. Programmers agreed that, should DOS be developed further, an important property of the operating system should be that it remained compatible with its predecessors.

The widespread use of the MS-DOS operating system is largely due to this so-called compatibility. This line of thought also guided future DOS developments.

 

GEM,

Introduced by Digital Research.

The Graphical Environment Manager (GEM) from Digital Research was really nice and fast. It's a pity that this firm lost the second battle (the first was CP/M versus DOS) against the boys from Redmond.

GEOS,

introduced in 1986 by Berkeley Softworks [1]

MS Windows

Microsoft. In its early versions, Windows was a "look-something-like" graphical interface combined with DOS; later versions consisted of an integrated (D)OS with a graphical interface.

OS/2

IBM's version of a graphic interface

WIMP - Windows, Icons, Menus, Pointer

 

Common features

 

 

 

Operating Systems for Supercomputers

[we need some help here!]

 

Major operating systems:

LINUX

UNIX

 

 

Common features

*

 


Real-Time, Multiprocessor, and Distributed/Networked Systems

A real-time computer is one that executes programs that are guaranteed an upper bound on the time in which they carry out their tasks. Usually it is desired that the upper bound be very small. Examples include guided missile systems and medical monitoring equipment. The operating system on real-time computers is severely constrained by these timing requirements.

Dedicated computers are special-purpose computers that are used to perform only one or a few specific tasks. Often these are real-time computers and include applications such as the guided missile mentioned above and the computer in modern cars that controls the fuel injection system.

A multiprocessor computer is one with more than one CPU. The category of multiprocessor computers can be divided into the following sub-categories:

 

shared memory multiprocessors have multiple CPUs, all with access to the same memory. Communication between the processors is easy to implement, but care must be taken so that memory accesses are synchronized.

distributed memory multiprocessors also have multiple CPUs, but each CPU has its own associated memory. Here, memory access synchronization is not a problem, but communication between the processors is often slow and complicated.
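For the shared-memory case above, the synchronization problem can be shown with a small POSIX-threads program in C (threads stand in for CPUs here): without the mutex, the two workers' updates to the shared counter would interleave unpredictably.

    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);    /* serialize access to shared memory */
            counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);  /* 200000 with the lock; unpredictable without it */
        return 0;
    }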

Related to multiprocessors are the following:

networked systems consist of multiple computers that are networked together, usually with a common operating system and shared resources. Users, however, are aware of the different computers that make up the system.

distributed systems also consist of multiple computers but differ from networked systems in that the multiple computers are transparent to the user. Often there are redundant resources and a sharing of the workload among the different computers, but this is all transparent to the user.

 

 

 

 

Last Updated on 6 June, 2004. For suggestions please mail the editors.


Footnotes & References