Computer multitasking
In computing, multitasking is a method by which multiple tasks, also known as processes, share common processing resources such as a CPU. In the case of a computer with a single CPU, only one task is said to be running at any point in time, meaning that the CPU is actively executing instructions for that task. Multitasking solves this problem by scheduling which task may be the one running at any given time, and when another waiting task gets a turn. The act of reassigning a CPU from one task to another one is called a context switch. When context switches occur frequently enough, the illusion of concurrency is achieved. Even on computers with more than one CPU (called multiprocessor machines), multitasking allows many more tasks to be run than there are CPUs.
Operating systems may adopt one of many different scheduling strategies, which generally fall into the following categories:
- In multiprogramming systems, the running task keeps running until it performs an operation that requires waiting for an external event (e.g. reading from a tape). Multiprogramming systems are designed to maximize CPU usage.
- In time-sharing systems, the running task is required to relinquish the CPU, either voluntarily or by an external event such as a hardware interrupt. Time-sharing systems are designed to allow several programs to execute apparently simultaneously.
- In real-time systems, some waiting tasks are guaranteed to be given the CPU when an external event occurs. Real-time systems are designed to control mechanical devices such as industrial robots, which require timely processing.
Nowadays, the term time-sharing is seldom used, being replaced by simply multitasking.
Multiprogramming
In the early days of computing, CPU time was expensive and peripherals were very slow. When the computer ran a program that needed access to a peripheral, the CPU would have to stop executing program instructions while the peripheral processed the data. This was deemed very inefficient.
The first efforts to create multiprogramming systems took place in the 1960s. Several different programs were loaded in the computer memory, and the first one began to run. When the first program reached an instruction waiting for a peripheral, the context of this program was stored away, and the second program in memory was given a chance to run. The process continued until all programs finished running.
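The switching policy can be sketched as a toy simulation in C. The job names, lengths, and I/O intervals below are invented purely for illustration; each job runs until it would have to wait for a peripheral, at which point the dispatcher saves its position (its "context") and gives the CPU to the next job in memory:

    #include <stdio.h>

    /* Toy model of a multiprogramming dispatcher: run each job until
       it requests I/O, save its context, and switch to the next job. */
    typedef struct {
        const char *name;
        int io_every;  /* compute steps between I/O requests (illustrative) */
        int total;     /* steps needed to finish */
        int pc;        /* saved context: where the job left off */
    } job;

    int main(void) {
        job jobs[] = { {"payroll", 3, 6, 0}, {"inventory", 2, 4, 0} };
        int njobs = 2, finished = 0;

        while (finished < njobs) {
            for (int i = 0; i < njobs; i++) {
                job *j = &jobs[i];
                if (j->pc >= j->total)
                    continue;                       /* already finished */
                while (j->pc < j->total) {          /* run until I/O or done */
                    printf("%s: step %d\n", j->name, ++j->pc);
                    if (j->pc < j->total && j->pc % j->io_every == 0) {
                        printf("%s: waiting for peripheral; switching\n", j->name);
                        break;                      /* context stays in the struct */
                    }
                }
                if (j->pc >= j->total) {
                    printf("%s: finished\n", j->name);
                    finished++;
                }
            }
        }
        return 0;
    }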
Multiprogramming doesn't give any guarantee that a program will run in a timely manner. Indeed, the very first program may very well run for hours without needing access to a peripheral. As there were no users waiting at an interactive terminal, this was no problem: users handed a deck of punched cards to an operator, and came back a few hours later for printed results. Multiprogramming greatly reduced the waiting.
Cooperative multitasking
When computer usage evolved from batch mode to interactive mode, multiprogramming was no longer a suitable approach. Each user wanted to see his program running as if it were the only program in the computer. Time-sharing had to be used.
Early multitasking systems consisted of suites of related applications that voluntarily ceded time to each other. This approach, which was eventually supported by many computer operating systems, is today known as cooperative multitasking. Although it is rarely used in larger systems, both Microsoft Windows prior to Windows 95 and Mac OS prior to Mac OS X used cooperative multitasking to enable the running of multiple applications simultaneously.
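The essence of the approach can be shown with a small C sketch (the task names and step counts are hypothetical). Each task does a little work and then returns, voluntarily handing the CPU back to a central loop, much as applications on those systems returned control to the system's event loop:

    #include <stdio.h>

    /* Each task performs one small unit of work and then returns,
       voluntarily yielding the CPU to the dispatcher below. A task
       that looped forever without returning would hang everything. */
    typedef int (*task_fn)(void);      /* returns 0 when finished */

    static int count_a = 0, count_b = 0;

    static int task_a(void) { printf("A: step %d\n", ++count_a); return count_a < 3; }
    static int task_b(void) { printf("B: step %d\n", ++count_b); return count_b < 2; }

    int main(void) {
        task_fn tasks[] = { task_a, task_b };
        int alive[] = { 1, 1 };
        int remaining = 2;

        while (remaining > 0)
            for (int i = 0; i < 2; i++)
                if (alive[i] && !tasks[i]()) {  /* one step; the task yields by returning */
                    alive[i] = 0;
                    remaining--;
                }
        return 0;
    }

Note that nothing in the dispatch loop can force a task to return; the whole system depends on every task behaving well, which is exactly the weakness described next.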
Cooperative multitasking has many shortcomings. For one, a cooperatively multitasked system must rely on each process to regularly give time to other processes on the system. A poorly designed program, or a "hung" process, can effectively bring the system to a halt. The design requirements of a cooperatively multitasked program can also be onerous for some purposes, and may result in irregular (or inefficient) use of system resources.
Preemptive multitasking
To remedy this situation, most time-sharing systems quickly evolved a more advanced approach known as preemptive multitasking. On such a system, a hardware interrupt mechanism (not present on many early machines) can interrupt a running process and direct the processor to execute a different piece of code. A system designed to take advantage of this feature need not rely on the voluntary ceding of processor time by individual processes. Instead, the interrupt hardware can be set to "preempt" a running process and give control back to the operating system software, which can later restore the preempted process at exactly the point where it was interrupted. Programs preempted in such a manner need not explicitly give time to other processes; as far as a programmer is concerned, software programs can be written as though granted uninterrupted access to the CPU, except for some uncertainty about exactly when the program will complete.
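A user-space analogue of the mechanism can be written with a POSIX interval timer; the tick interval and counts below are arbitrary. The loop never yields voluntarily, yet the timer "interrupt" suspends it, runs the handler, and resumes it exactly where it stopped:

    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/time.h>

    static volatile sig_atomic_t ticks = 0;

    /* Like a timer interrupt: the running code is suspended, this
       handler runs, and the code resumes exactly where it stopped. */
    static void on_tick(int sig) {
        (void)sig;
        ticks++;
    }

    int main(void) {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_tick;
        sigaction(SIGALRM, &sa, NULL);

        struct itimerval tv = {
            .it_interval = { 0, 100000 },  /* re-fire every 100 ms */
            .it_value    = { 0, 100000 }   /* first fire after 100 ms */
        };
        setitimer(ITIMER_REAL, &tv, NULL);

        unsigned long work = 0;
        while (ticks < 10)                 /* busy loop; never yields */
            work++;
        printf("interrupted %d times during %lu units of work\n",
               (int)ticks, work);
        return 0;
    }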
Preemptive multitasking allows the computer system to more reliably guarantee each process a regular "slice" of operating time. It also allows the system to rapidly deal with important external events like incoming data, which might require the immediate attention of one or another process.
At any specific time, processes can be grouped into two categories: those that are waiting for input or output (called "I/O bound"), and those that are fully utilizing the CPU ("CPU bound"). In early systems, processes would often "poll", or "busy-wait", while waiting for requested input (such as disk, keyboard or network input). During this time, the process was not performing useful work, but still maintained complete control of the CPU. With the advent of interrupts and preemptive multitasking, these I/O-bound processes could be "blocked", or put on hold, pending the arrival of the necessary data, allowing other processes to utilize the CPU. As the arrival of the requested data would generate an interrupt, blocked processes could be guaranteed a timely return to execution.
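The difference is easy to see on a POSIX system. A busy-waiting process would spin in a loop checking for input and burn CPU the whole time; the sketch below instead blocks in poll(), so the kernel can run other processes until data arrives on standard input:

    #include <poll.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        /* Busy-waiting would look like: while (no input yet) { }
           Blocking hands the CPU back to the scheduler instead. */
        struct pollfd pfd = { .fd = STDIN_FILENO, .events = POLLIN };

        puts("blocked waiting for input; this process consumes no CPU...");
        poll(&pfd, 1, -1);          /* kernel marks the process blocked */

        char buf[128];
        ssize_t n = read(STDIN_FILENO, buf, sizeof buf - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("woken by arriving data: %s", buf);
        }
        return 0;
    }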
Although multitasking techniques were originally developed to allow multiple users to share a single machine, it soon became apparent that multitasking was useful regardless of the number of users. Many operating systems, from mainframes down to single-user personal computers, have recognized the usefulness of multitasking support for a variety of reasons. Multitasking makes it possible for a single user to run multiple applications at the same time, or to run "background" processes while retaining control of the computer.
In simple terms: preemptive multitasking involves the use of a scheduler that hands out CPU time to the various processes so that they appear to execute simultaneously. Over any given interval, every runnable process therefore receives some share of CPU time.
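A toy round-robin scheduler makes the idea concrete (the process names and run lengths are made up). Each runnable process receives a fixed slice of "ticks" before the next one gets its turn, so every process makes steady progress:

    #include <stdio.h>

    /* Toy round-robin scheduler: each process gets a fixed time slice
       (here, 2 "ticks") before it is preempted in favour of the next. */
    typedef struct { const char *name; int remaining; } proc;

    int main(void) {
        proc procs[] = { {"editor", 5}, {"compiler", 3}, {"player", 4} };
        int n = 3, alive = 3;
        const int slice = 2;

        while (alive > 0) {
            for (int i = 0; i < n; i++) {
                if (procs[i].remaining <= 0)
                    continue;
                int run = procs[i].remaining < slice ? procs[i].remaining : slice;
                procs[i].remaining -= run;
                printf("%s runs for %d tick(s), %d left\n",
                       procs[i].name, run, procs[i].remaining);
                if (procs[i].remaining == 0)
                    alive--;
            }
        }
        return 0;
    }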
Real time
Another reason for multitasking was in the design of real-time computing systems, where a number of possibly unrelated external activities needed to be controlled by a single processor system. In such systems a hierarchical interrupt system was coupled with process prioritization to ensure that key activities were given a greater share of available processor time.
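Prioritization can be sketched in the same toy style (the task names and priority values are invented). Whenever several tasks are ready, the dispatcher always picks the highest-priority one, so a time-critical task is never stuck behind routine work:

    #include <stdio.h>

    /* Toy priority dispatch: the highest-priority ready task always
       gets the CPU first, so a time-critical controller is never
       delayed by a low-priority housekeeping job. */
    typedef struct { const char *name; int priority; int ready; } task;

    int main(void) {
        task tasks[] = { {"log-writer", 1, 1}, {"motor-control", 9, 1}, {"ui", 3, 1} };
        int n = 3, remaining = 3;

        while (remaining > 0) {
            int best = -1;
            for (int i = 0; i < n; i++)   /* pick highest-priority ready task */
                if (tasks[i].ready &&
                    (best < 0 || tasks[i].priority > tasks[best].priority))
                    best = i;
            printf("running %s (priority %d)\n",
                   tasks[best].name, tasks[best].priority);
            tasks[best].ready = 0;        /* task completes its event */
            remaining--;
        }
        return 0;
    }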
Multithreading
As multitasking greatly improved the throughput of computers, programmers started to implement applications as sets of cooperating processes (e.g. one process gathering input data, another processing it, a third writing the results to disk). This, however, required some tools to allow processes to efficiently exchange data.
Threads were born from the idea that the most efficient way for cooperating processes to exchange data would be to share their entire memory space. Thus, threads are basically processes that run in the same memory context. Threads are described as lightweight because switching between threads does not involve changing the memory context.
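A minimal pthreads sketch (compile with -pthread) shows the shared memory context: a buffer written by one thread is directly visible to another, with no copying or message passing between separate address spaces:

    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>

    /* Both threads run in the same address space, so a plain global
       variable is enough to share data between them. */
    static char shared[64];

    static void *producer(void *arg) {
        (void)arg;
        strcpy(shared, "data produced by one thread");
        return NULL;
    }

    int main(void) {
        pthread_t t;
        pthread_create(&t, NULL, producer, NULL);
        pthread_join(t, NULL);   /* wait, so the write is complete and visible */
        printf("other thread reads: %s\n", shared);
        return 0;
    }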
While threads are scheduled preemptively, some operating systems provide a variant of threads, named fibers, that are scheduled cooperatively. Fibers are even more lightweight than threads, and somewhat easier to program with.
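On POSIX systems the effect of fibers can be approximated with the (now old-fashioned) ucontext API; Windows exposes dedicated fiber calls, which are not shown here. In this sketch the fiber runs only when it is explicitly switched to, and yields by switching back; no timer ever preempts it:

    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, fiber_ctx;
    static char fiber_stack[64 * 1024];

    static void fiber_body(void) {
        puts("fiber: first slice");
        swapcontext(&fiber_ctx, &main_ctx);   /* explicit, voluntary yield */
        puts("fiber: second slice");
        /* falling off the end resumes uc_link (main_ctx) */
    }

    int main(void) {
        getcontext(&fiber_ctx);
        fiber_ctx.uc_stack.ss_sp = fiber_stack;
        fiber_ctx.uc_stack.ss_size = sizeof fiber_stack;
        fiber_ctx.uc_link = &main_ctx;
        makecontext(&fiber_ctx, fiber_body, 0);

        puts("main: starting fiber");
        swapcontext(&main_ctx, &fiber_ctx);   /* run fiber until it yields */
        puts("main: fiber yielded, resuming it");
        swapcontext(&main_ctx, &fiber_ctx);
        puts("main: fiber finished");
        return 0;
    }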
Memory protection
When multiple programs are present in memory, an ill-behaved program may (inadvertently or deliberately) overwrite memory belonging to another program, or even to the operating system itself.
The operating system therefore restricts the memory accessible to the running program. A program trying to access memory outside its allowed range is immediately stopped before it can modify memory belonging to another process.
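On Linux the hardware side of this can be demonstrated directly: the sketch below maps a read-only page, and the attempted write is stopped by the memory-management unit before the page changes. The SIGSEGV handler and recovery jump exist only so the demonstration can print a message instead of being killed:

    #include <setjmp.h>
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    static sigjmp_buf recover;

    static void on_segv(int sig) {
        (void)sig;
        siglongjmp(recover, 1);           /* escape instead of being killed */
    }

    int main(void) {
        /* One page the process may read but not write. */
        char *page = mmap(NULL, 4096, PROT_READ,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (page == MAP_FAILED)
            return 1;

        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_segv;
        sigaction(SIGSEGV, &sa, NULL);

        if (sigsetjmp(recover, 1) == 0) {
            page[0] = 'x';                /* the MMU raises a fault here */
            puts("write succeeded (never printed)");
        } else {
            puts("trapped: the write was stopped before memory changed");
        }
        return 0;
    }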
Another key innovation was the idea of privilege levels. Low-privilege tasks are not allowed some kinds of memory access and are not allowed to perform certain instructions. When a task attempts an operation for which it lacks privilege, a trap occurs and a supervisory program running at a higher level is allowed to decide how to respond. This created the possibility of virtualizing the entire system, including virtual peripheral devices. Such a simulation is called a virtual machine operating system. Early virtual machine systems did not have virtual memory, but both are common today.
Virtual memory
Virtual memory is a way for the operating system to provide more memory than is physically available (technically, it provides only the impression that there is more memory, hence the name virtual) by keeping portions of memory on a hard disk. While multitasking and virtual memory are conceptually independent techniques, they are very often used together: virtual memory allows more tasks to be loaded at the same time, and multitasking allows another process to run when the running process hits a point where some portion of memory has to be reloaded from disk.
Programming in a multitasking environment
Processes that are entirely independent are not much trouble to program. Most of the complexity in multitasking systems comes from the need to share computer resources between tasks and to synchronize the operation of co-operating tasks.
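A classic example of the synchronization problem is two threads updating one counter. Without the mutex in the sketch below (a minimal pthreads example, compile with -pthread), the two increments could interleave and updates would be lost; with it, the final value is always exactly 2,000,000:

    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&lock);    /* serialize access to shared state */
            counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("counter = %ld\n", counter);   /* always 2000000 with the lock */
        return 0;
    }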
Large computer systems were sometimes built with a central processor and some number of I/O processors, a kind of asymmetric multiprocessing. One use for interrupts is to allow a simpler processor to emulate the dedicated I/O processors that it lacks.
Over the years, multitasking systems have been refined. Modern operating systems generally include detailed mechanisms for prioritizing processes, while symmetric multiprocessing has introduced new complexities and capabilities.