The growth in the availability of big data, together with the growth in the number of simultaneous users on the Internet, places particular pressure on the need to carry out computing tasks “in parallel,” or simultaneously. Parallel and distributed computing occurs across many different topic areas in computer science, including algorithms, computer architecture, networks, operating systems, and software engineering. During the early 21st century there was explosive growth in multiprocessor design and other strategies for complex applications to run faster. Parallel and distributed computing builds on fundamental systems concepts, such as concurrency, mutual exclusion, consistency in state/memory manipulation, message-passing, and shared-memory models.

Creating a multiprocessor from a number of single CPUs requires physical links and a mechanism for communication among the processors so that they may operate in parallel. Tightly coupled multiprocessors share memory and hence may communicate by storing information in memory accessible by all processors. Loosely coupled multiprocessors, including computer networks, communicate by sending messages to each other across the physical links. Computer scientists have investigated various multiprocessor architectures. For example, the possible configurations in which hundreds or even thousands of processors may be linked together are examined to find the geometry that supports the most efficient system throughput. A much-studied topology is the hypercube, in which each processor is connected directly to some fixed number of neighbours: two for the two-dimensional square, three for the three-dimensional cube, and similarly for the higher-dimensional hypercubes. Computer scientists also investigate methods for carrying out computations on such multiprocessor machines (e.g., algorithms to make optimal use of the architecture and techniques to avoid conflicts in data transmission). The machine-resident software that makes possible the use of a particular machine, in particular its operating system, is an integral part of this investigation.
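
For instance, if each of the 2^n processors in an n-dimensional hypercube is labeled with an n-bit binary number, its neighbours are exactly the processors whose labels differ from its own in a single bit. A minimal sketch in Java of that labeling scheme (the class and method names are illustrative):

public class Hypercube {
    // Neighbours of a node in an n-dimensional hypercube, assuming processors
    // are labeled 0 .. 2^n - 1 and two processors are linked exactly when
    // their labels differ in one bit.
    static int[] neighbours(int node, int dimensions) {
        int[] result = new int[dimensions];
        for (int bit = 0; bit < dimensions; bit++) {
            result[bit] = node ^ (1 << bit);   // flip one bit per dimension
        }
        return result;
    }

    public static void main(String[] args) {
        // In a three-dimensional cube, node 0 (binary 000) is linked to 001, 010, and 100.
        for (int n : neighbours(0, 3)) {
            System.out.println(Integer.toBinaryString(n));
        }
    }
}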

Concurrency refers to the execution of more than one procedure at the same time (perhaps with access to shared data), either truly simultaneously (as on a multiprocessor) or in an unpredictably interleaved order. Modern programming languages such as Java include both encapsulation and features called “threads” that allow the programmer to define the synchronization that occurs among concurrent procedures or tasks.
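
A minimal sketch in Java of two threads sharing data, assuming a simple counter (the class name is illustrative); the synchronized keyword supplies the mutual exclusion mentioned above, so the two threads’ increments do not interfere:

public class Counter {
    private int value = 0;

    // synchronized ensures only one thread updates the counter at a time
    public synchronized void increment() { value++; }
    public synchronized int get() { return value; }

    public static void main(String[] args) throws InterruptedException {
        Counter counter = new Counter();
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) counter.increment();
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(counter.get()); // 20000, because the increments are synchronized
    }
}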

Two important issues in concurrency control are known as deadlocks and race conditions. Deadlock occurs when two or more processes each hold a resource that another needs while waiting for a resource that the other holds. As a result, none of the processes can continue; they are deadlocked, waiting for resources that will never be freed. An operating system can handle this situation with various prevention or detection and recovery techniques. A race condition, on the other hand, occurs when two or more concurrent processes assign different values to the same variable, and the result depends on which process writes the variable first (or last).
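
A minimal sketch in Java of how a deadlock can arise, assuming two threads that acquire the same two locks in opposite orders; if each thread obtains its first lock before the other releases, both wait forever:

public class DeadlockDemo {
    private static final Object lockA = new Object();
    private static final Object lockB = new Object();

    public static void main(String[] args) {
        // Thread 1 takes lockA then lockB; thread 2 takes lockB then lockA.
        new Thread(() -> {
            synchronized (lockA) {
                pause();                       // widen the window in which the deadlock occurs
                synchronized (lockB) { System.out.println("thread 1 done"); }
            }
        }).start();
        new Thread(() -> {
            synchronized (lockB) {
                pause();
                synchronized (lockA) { System.out.println("thread 2 done"); }
            }
        }).start();
    }

    private static void pause() {
        try { Thread.sleep(100); } catch (InterruptedException ignored) { }
    }
}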

Preventing deadlocks and race conditions is fundamentally important, since it ensures the integrity of the underlying application. A general prevention strategy is called process synchronization. Synchronization requires that one process wait for another to complete some operation before proceeding. For example, one process (a writer) may be writing data to a certain main memory area, while another process (a reader) may want to read data from that area. The reader and writer must be synchronized so that the writer does not overwrite existing data until the reader has processed it. Similarly, the reader should not start to read until data has been written in the area.
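
A minimal sketch in Java of this reader-writer synchronization, assuming a single shared buffer (the class name is illustrative); wait and notifyAll make the writer pause until the previous value has been read, and the reader pause until a new value is available:

public class SharedBuffer {
    private int data;
    private boolean available = false;      // true when unread data is in the buffer

    public synchronized void write(int value) throws InterruptedException {
        while (available) wait();           // wait until the reader has consumed the data
        data = value;
        available = true;
        notifyAll();                        // wake the waiting reader
    }

    public synchronized int read() throws InterruptedException {
        while (!available) wait();          // wait until the writer has produced data
        available = false;
        notifyAll();                        // wake the waiting writer
        return data;
    }

    public static void main(String[] args) {
        SharedBuffer buffer = new SharedBuffer();
        new Thread(() -> {
            try { for (int i = 1; i <= 3; i++) buffer.write(i); }
            catch (InterruptedException ignored) { }
        }).start();
        new Thread(() -> {
            try { for (int i = 1; i <= 3; i++) System.out.println(buffer.read()); }
            catch (InterruptedException ignored) { }
        }).start();
    }
}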

With the advent of networks, distributed computing became feasible. A distributed computation is one that is carried out by a group of linked computers working cooperatively. Such computing usually requires a distributed operating system to manage the distributed resources. Important concerns are workload sharing, which attempts to take advantage of access to multiple computers to complete jobs faster; task migration, which supports workload sharing by efficiently distributing jobs among machines; and automatic task replication, in which copies of a task run at different sites for greater reliability.
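
A minimal sketch in Java of the message passing that underlies such systems, assuming a worker process that listens on a TCP port and a coordinator on another machine that sends it a task (the port number, host name, and task format are illustrative):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// Worker: runs on one machine and waits for a task message.
class Worker {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(5000);        // port is illustrative
             Socket link = server.accept();
             BufferedReader in = new BufferedReader(new InputStreamReader(link.getInputStream()));
             PrintWriter out = new PrintWriter(link.getOutputStream(), true)) {
            String task = in.readLine();                          // receive the task message
            out.println("completed: " + task);                    // reply with a result message
        }
    }
}

// Coordinator: runs on another machine and hands a job to the worker.
class Coordinator {
    public static void main(String[] args) throws IOException {
        try (Socket link = new Socket("worker.example.org", 5000); // host name is illustrative
             PrintWriter out = new PrintWriter(link.getOutputStream(), true);
             BufferedReader in = new BufferedReader(new InputStreamReader(link.getInputStream()))) {
            out.println("render frame 42");                        // send the task
            System.out.println(in.readLine());                     // read the worker's reply
        }
    }
}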

Platform-based development

Platform-based development is concerned with the design and development of applications for specific types of computers and operating systems (“platforms”). Platform-based development takes into account system-specific characteristics, such as those found in Web programming, multimedia development, mobile application development, and robotics. Platforms such as the Internet or an Android tablet enable students to learn within and about environments constrained by specific hardware, application programming interfaces (APIs), and special services. These environments are sufficiently different from “general purpose” programming to warrant separate research and development efforts.

For example, consider the development of an application for an Android tablet. The Android programming platform is called the Dalvik Virtual Machine (DVM), and the language is a variant of Java. However, an Android application is defined not just as a collection of objects and methods but, moreover, as a collection of “activities,” which correspond roughly to the GUI screens that the user sees when operating the application, and “intents,” the messages that start activities and tie them together. XML programming is needed as well, since XML defines the layout of the application’s user interface. Finally, I/O synchronization in Android application development is more demanding than that found on conventional platforms, though some principles of Java file management carry over.
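
A minimal sketch in Java of a single Android activity, assuming an XML layout file named activity_main.xml and a button identifier open_button (both illustrative); the button fires an intent that asks the platform to open a Web page:

import android.app.Activity;
import android.content.Intent;
import android.net.Uri;
import android.os.Bundle;
import android.widget.Button;

public class MainActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // The screen's layout is defined separately in XML (res/layout/activity_main.xml).
        setContentView(R.layout.activity_main);

        // An intent asks the platform to start another activity, here the Web browser.
        Button button = (Button) findViewById(R.id.open_button);  // identifier is illustrative
        button.setOnClickListener(view ->
            startActivity(new Intent(Intent.ACTION_VIEW, Uri.parse("https://www.example.org"))));
    }
}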

Real-time systems provide a broader setting in which platform-based development takes place. The term real-time systems refers to computers embedded into cars, aircraft, manufacturing assembly lines, and other devices to control processes in real time. Frequently, real-time tasks repeat at fixed time intervals. For example, sensor data are gathered every second, and a control signal is generated. In such cases, scheduling theory is used to determine how the tasks should be scheduled on a given processor. A good example of a system that requires real-time action is the antilock braking system (ABS) on an automobile; because it is critical that the ABS react instantly to brake-pedal pressure and begin a program of pumping the brakes, such an application is said to have a hard deadline. Other real-time systems are said to have soft deadlines, in that no disaster will happen if the system’s response is slightly delayed; an example is an order shipping and tracking system. The concept of “best effort” arises in real-time system design, because soft deadlines sometimes slip and hard deadlines are sometimes met by computing a less than optimal result. For example, most details on an air traffic controller’s screen are approximations (e.g., altitude) that need not be computed more precisely (e.g., to the nearest inch) in order to be effective.
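
A minimal sketch in Java of a periodic task of this kind, assuming a sensor read once per second and a control signal computed from each reading; readSensor and adjustControl are illustrative placeholders, and ordinary Java offers no hard timing guarantees, so the sketch shows only the structure of the cycle:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PeriodicController {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Run the sensing/control cycle once per second, as in the example above.
        scheduler.scheduleAtFixedRate(() -> {
            double reading = readSensor();      // illustrative stand-in for real sensor input
            adjustControl(reading);             // illustrative stand-in for the control output
        }, 0, 1, TimeUnit.SECONDS);
    }

    private static double readSensor() { return Math.random(); }          // placeholder
    private static void adjustControl(double reading) {
        System.out.println("control signal for reading " + reading);      // placeholder
    }
}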