Printers are a common example of output devices. New multifunction peripherals that integrate printing, scanning, and copying into a single device are also popular. Computer monitors are sometimes treated as peripherals. High-fidelity sound systems are another example of output devices often classified as computer peripherals. Manufacturers have announced devices that provide tactile feedback to the user—“force feedback” joysticks, for example. This highlights the complexity of classifying peripherals—a joystick with force feedback is truly both an input and an output peripheral.

Early printers often used a process known as impact printing, in which a small number of pins were driven into a desired pattern by an electromagnetic printhead. As each pin was driven forward, it struck an inked ribbon and transferred a single dot the size of the pinhead to the paper. Multiple dots combined into a matrix to form characters and graphics, hence the name dot matrix. Another early print technology, daisy-wheel printers, made impressions of whole characters with a single blow of an electromagnetic printhead, similar to an electric typewriter.
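
As a rough illustration of the dot matrix principle, the following Python sketch stores a character as a small grid of dots and prints it row by row; the 5 × 7 bitmap for the letter "A" is invented for the example rather than taken from any particular printer.

```python
# Minimal sketch: a character encoded as a 5 x 7 dot matrix, printed row by row.
# The bitmap for "A" is illustrative, not taken from any real printer's ROM.

LETTER_A = [
    "01110",
    "10001",
    "10001",
    "11111",
    "10001",
    "10001",
    "10001",
]

def print_glyph(rows):
    """Render each '1' as a struck dot and each '0' as blank paper."""
    for row in rows:
        print("".join("#" if bit == "1" else " " for bit in row))

print_glyph(LETTER_A)
```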

Laser printers have replaced such printers in most commercial settings. Laser printers employ a focused beam of light to etch patterns of positively charged particles on the surface of a cylindrical drum made of negatively charged organic, photosensitive material. As the drum rotates, negatively charged toner particles adhere to the patterns etched by the laser and are transferred to the paper. Another, less expensive printing technology developed for the home and small businesses is inkjet printing. The majority of inkjet printers operate by ejecting extremely tiny droplets of ink to form characters in a matrix of dots—much like dot matrix printers.

Computer display devices have been in use almost as long as computers themselves. Early computer displays employed the same cathode-ray tubes (CRTs) used in television and radar systems. The fundamental principle behind CRT displays is the emission of a controlled stream of electrons that strike light-emitting phosphors coating the inside of the screen. The screen itself is divided into multiple scan lines, each of which contains a number of pixels—the rough equivalent of dots in a dot matrix printer. The resolution of a monitor is determined by its pixel size. More recent liquid crystal displays (LCDs) rely on liquid crystal cells that realign incoming polarized light. The realigned beams pass through a filter that permits only those beams with a particular alignment to pass. By controlling the liquid crystal cells with electrical charges, various colors or shades are made to appear on the screen.

Communication devices

One communication device is the common telephone modem (from modulator/demodulator). Modems modulate, or transform, a computer’s digital message into an analog signal for transmission over standard telephone networks, and they demodulate the analog signal back into a digital message on reception. In practice, telephone network components limit analog data transmission to about 48 kilobits per second. Standard cable modems operate in a similar manner over cable television networks, which have a total transmission capacity of 30 to 40 megabits per second over each local neighborhood “loop.” (Like Ethernet cards, cable modems are actually local area network devices, rather than true modems, and transmission performance deteriorates as more users share the loop.)
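
The following Python sketch illustrates the modulate/demodulate round trip in miniature, using simple amplitude-shift keying; real telephone modems use far more sophisticated modulation schemes, and every constant here is an assumption chosen for clarity.

```python
import math

# Toy modulator/demodulator using amplitude-shift keying (ASK): a full-amplitude
# carrier burst represents 1, silence represents 0. This only demonstrates the
# digital -> analog -> digital round trip described in the text.

SAMPLES_PER_BIT = 32   # illustrative: each bit becomes a short burst of samples
CYCLES_PER_BIT = 4     # illustrative: carrier cycles within one bit period

def modulate(bits):
    """Turn a digital message into a list of analog samples."""
    samples = []
    for bit in bits:
        amplitude = 1.0 if bit else 0.0
        for n in range(SAMPLES_PER_BIT):
            phase = 2 * math.pi * CYCLES_PER_BIT * n / SAMPLES_PER_BIT
            samples.append(amplitude * math.sin(phase))
    return samples

def demodulate(samples):
    """Recover bits by measuring the energy in each bit-sized block of samples."""
    bits = []
    for i in range(0, len(samples), SAMPLES_PER_BIT):
        block = samples[i:i + SAMPLES_PER_BIT]
        energy = sum(s * s for s in block)
        bits.append(1 if energy > SAMPLES_PER_BIT / 4 else 0)
    return bits

message = [1, 0, 1, 1, 0, 0, 1]
assert demodulate(modulate(message)) == message
```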

Asymmetric digital subscriber line (ADSL) modems can be used for transmitting digital signals over a local dedicated telephone line, provided there is a telephone office nearby—in theory, within 5,500 meters (18,000 feet) but in practice about a third of that distance. ADSL is asymmetric because transmission rates differ to and from the subscriber: 8 megabits per second “downstream” to the subscriber and 1.5 megabits per second “upstream” from the subscriber to the service provider. In addition to devices for transmitting over telephone and cable wires, wireless communication devices exist for transmitting infrared, radio-wave, and microwave signals.
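
A short calculation makes the asymmetry concrete: at the rates quoted above, the same file takes several times longer to send upstream than to receive downstream. The 100-megabyte file size in this sketch is an arbitrary illustration.

```python
# Time to move the same file downstream (8 Mbps) versus upstream (1.5 Mbps).
# The 100-megabyte file size is illustrative; 1 megabyte = 8,000,000 bits here.

DOWNSTREAM_BPS = 8_000_000    # 8 megabits per second to the subscriber
UPSTREAM_BPS = 1_500_000      # 1.5 megabits per second from the subscriber
FILE_BITS = 100 * 8_000_000   # a 100-megabyte file, in bits

print(f"download: {FILE_BITS / DOWNSTREAM_BPS:.0f} seconds")   # 100 s
print(f"upload:   {FILE_BITS / UPSTREAM_BPS:.0f} seconds")     # about 533 s
```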

Peripheral interfaces

A variety of techniques have been employed in the design of interfaces to link computers and peripherals. An interface of this nature is often termed a bus. This nomenclature derives from the presence of many paths of electrical communication (e.g., wires) bundled or joined together in a single device. Multiple peripherals can be attached to a single bus—the peripherals need not be homogeneous. An example is the small computer system interface (SCSI; pronounced “scuzzy”). This popular standard allows heterogeneous devices to communicate with a computer by sharing a single bus. Under the auspices of various national and international organizations, many such standards have been established by manufacturers and users of computers and peripherals.

Buses can be loosely classified as serial or parallel. Parallel buses have a relatively large number of wires bundled together that enable data to be transferred in parallel. This increases the throughput, or rate of data transfer, between the peripheral and computer. SCSI buses are parallel buses. Examples of serial buses include the universal serial bus (USB). USB has an interesting feature in that the bus carries not only data to and from the peripheral but also electrical power. Examples of other peripheral integration schemes include integrated drive electronics (IDE) and enhanced integrated drive electronics (EIDE). Predating USB, these two schemes were designed initially to give manufacturers greater flexibility in adapting hard disk drives to a variety of different computers.
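
The throughput advantage of a parallel bus can be sketched with simple arithmetic: at the same signalling rate, eight data wires move eight times as many bits per second as one. The clock rate and payload below are assumptions for illustration, not figures from any bus standard.

```python
# Illustrative comparison of serial versus parallel transfer at the same
# signalling rate. The 10 MHz clock is assumed for the example only.

CLOCK_HZ = 10_000_000          # bits transferred per wire per second (assumed)
PAYLOAD_BYTES = 1_000_000      # one megabyte of data

def transfer_seconds(data_wires):
    """Time to move the payload when `data_wires` bits move on each clock tick."""
    bits_per_second = CLOCK_HZ * data_wires
    return PAYLOAD_BYTES * 8 / bits_per_second

print(f"serial (1 wire):    {transfer_seconds(1):.2f} s")   # 0.80 s
print(f"parallel (8 wires): {transfer_seconds(8):.2f} s")   # 0.10 s
```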

Microprocessor integrated circuits

Before integrated circuits (ICs) were invented, computers used circuits of individual transistors and other electrical components—resistors, capacitors, and diodes—soldered to a circuit board. In 1959 Jack Kilby at Texas Instruments Incorporated and Robert Noyce at Fairchild Semiconductor Corporation filed patents for integrated circuits. Kilby found a way to make all the circuit components out of germanium, the semiconductor material then commonly used for transistors. Noyce used silicon, which is now almost universal, and found a way to build the interconnecting wires as well as the components on a single silicon chip, thus eliminating all soldered connections except for those joining the IC to other components. Brief discussions of IC circuit design, fabrication, and some design issues follow. For a more extensive discussion, see semiconductor and integrated circuit.

Design

Today IC design starts with a circuit description written in a hardware-specification language (like a programming language) or specified graphically with a digital design program. Computer simulation programs then test the design before it is approved. Another program translates the basic circuit layout into a multilayer network of electronic elements and wires.
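
The sketch below suggests what the simulation step accomplishes: a small logic network (a half adder) is evaluated exhaustively to confirm that it behaves as intended before any layout work begins. Real tools operate on hardware-description-language netlists; plain Python functions stand in for the gates here.

```python
from itertools import product

# Exhaustively test a tiny logic network (a half adder), the way a
# design-verification program might, before any layout is generated.

def half_adder(a, b):
    """Return (sum, carry) for one-bit inputs, built from XOR and AND gates."""
    return a ^ b, a & b

for a, b in product((0, 1), repeat=2):
    s, carry = half_adder(a, b)
    print(f"a={a} b={b} -> sum={s} carry={carry}")
```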

Fabrication

The IC itself is formed on a silicon wafer cut from a cylinder of pure silicon—now commonly 200–300 mm (8–12 inches) in diameter. Since more chips can be cut from a larger wafer, the material unit cost of a chip goes down with increasing wafer size. A photographic image of each layer of the circuit design is made, and photolithography is used to expose a corresponding circuit of “resist” that has been put on the wafer. The unwanted resist is washed off and the exposed material then etched. This process is repeated to form various layers, with silicon dioxide (glass) used as electrical insulation between layers.

Between these production stages, the silicon is doped with carefully controlled amounts of impurities such as arsenic and boron. These create an excess and a deficiency, respectively, of electrons, thus creating regions with extra available negative charges (n-type) and positive “holes” (p-type). These adjacent doped regions form p-n junction transistors, with electrons (in the n-type regions) and holes (in the p-type regions) migrating through the silicon to conduct electricity.

Layers of metal or conducting polycrystalline silicon are also placed on the chip to provide interconnections between its transistors. When the fabrication is complete, a final layer of insulating glass is added, and the wafer is sawed into individual chips. Each chip is tested, and those that pass are mounted in a protective package with external contacts.

Transistor size

The size of transistor elements continually decreases in order to pack more on a chip. In 2001 a transistor commonly had dimensions of 0.25 μm (or micrometer; 1 μm = 10⁻⁶ meter), and 0.1 μm was common in 2006. This latter size allowed 200 million transistors to be placed on a chip (rather than about 40 million in 2001). Because the wavelength of visible light is too great for adequate resolution at such a small scale, ultraviolet photolithography techniques are being developed. As sizes decrease further, electron beam or X-ray techniques will become necessary. Each such advance requires new fabrication plants, costing several billion dollars apiece.
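
The quoted transistor counts follow roughly from the square of the feature-size ratio, as this back-of-the-envelope calculation shows.

```python
# Shrinking features from 0.25 um to 0.1 um scales area per transistor by
# (0.25 / 0.1) ** 2 = 6.25, so about 40 million transistors becomes roughly
# 250 million, consistent with the text's "about 200 million".

feature_2001_um = 0.25
feature_2006_um = 0.10
transistors_2001 = 40_000_000

density_gain = (feature_2001_um / feature_2006_um) ** 2
print(f"area scaling factor: {density_gain:.2f}")                      # 6.25
print(f"implied 2006 count:  {transistors_2001 * density_gain:,.0f}")  # ~250,000,000
```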

Power consumption

The increasing speed and density of elements on chips have led to problems of power consumption and dissipation. Central processing units now typically dissipate about 50 watts of power—as much heat per square inch as an electric stove element generates—and require “heat sinks” and cooling fans or even water cooling systems. As CPU speeds increase, cryogenic cooling systems may become necessary. Because storage battery technologies have not kept pace with power consumption in portable devices, there has been renewed interest in gallium arsenide (GaAs) chips. GaAs chips can run at higher speeds and consume less power than silicon chips. (GaAs chips are also more resistant to radiation, a factor in military and space applications.) Although GaAs chips have been used in supercomputers for their speed, the brittleness of GaAs has made it too costly for most ordinary applications. One promising idea is to bond a GaAs layer to a silicon substrate for easier handling. Nevertheless, GaAs is not yet in common use except in some high-frequency communication systems.

Future CPU designs

Since the early 1990s, researchers have discussed two speculative but intriguing new approaches to computation—quantum computing and molecular (DNA) computing. Each offers the prospect of highly parallel computation and a way around the approaching physical constraints to Moore’s law.

Quantum computing

According to quantum mechanics, an electron has a binary (two-valued) property known as “spin.” This suggests another way of representing a bit of information. While single-particle information storage is attractive, it would be difficult to manipulate. The fundamental idea of quantum computing, however, depends on another feature of quantum mechanics: that atomic-scale particles are in a “superposition” of all their possible states until an observation, or measurement, “collapses” their various possible states into one actual state. This means that if a system of particles—known as quantum bits, or qubits—can be “entangled” together, all the possible combinations of their states can be simultaneously used to perform a computation, at least in theory.
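
The power of entanglement can be suggested with a little arithmetic: describing an n-qubit register classically requires 2^n amplitudes, one for every combination of qubit states. The sketch below prints that growth and writes out a simple two-qubit entangled (Bell) state; it is an illustration of the bookkeeping, not a simulation of any real device.

```python
from math import sqrt

# An n-qubit register is described by 2**n complex amplitudes, one per
# combination of individual qubit states.
for n_qubits in (1, 2, 10, 50):
    print(f"{n_qubits:>2} qubits -> {2 ** n_qubits:,} amplitudes")

# A two-qubit Bell state puts equal weight on |00> and |11> and none on the
# other two combinations; measuring either qubit fixes the other.
bell_state = {"00": 1 / sqrt(2), "01": 0.0, "10": 0.0, "11": 1 / sqrt(2)}
total_probability = sum(amplitude ** 2 for amplitude in bell_state.values())
print(bell_state)
print(f"probabilities sum to {total_probability:.1f}")  # 1.0
```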

Indeed, while a few algorithms have been devised for quantum computing, building useful quantum computers has been more difficult. This is because the qubits must maintain their coherence (quantum entanglement) with one another while preventing decoherence (interaction with the external environment). As of 2024, the largest such processor, IBM Condor, contains a little more than 1,000 qubits.

Molecular computing

In 1994 Leonard Adleman, a mathematician at the University of Southern California, demonstrated the first DNA computer by solving a simple example of what is known as the traveling salesman problem. A traveling salesman problem—or, more generally, certain types of network problems in graph theory—asks for a route (or the shortest route) that begins at a certain city, or “node,” and travels to each of the other nodes exactly once. Digital computers, and sufficiently persistent humans, can solve for small networks by simply listing all the possible routes and comparing them, but as the number of nodes increases, the number of possible routes grows exponentially and soon (beyond about 50 nodes) overwhelms the fastest supercomputer. While digital computers are generally constrained to performing calculations serially, Adleman realized that he could take advantage of DNA molecules to perform a “massively parallel” calculation. He began by selecting different nucleotide sequences to represent each city and every direct route between two cities. He then made trillions of copies of each of these nucleotide strands and mixed them in a test tube. In less than a second he had the answer, albeit along with some hundred trillion spurious answers. Using basic recombinant DNA laboratory techniques, Adleman then took one week to isolate the answer—culling first molecules that did not start and end with the proper cities (nucleotide sequences), then those that did not contain the proper number of cities, and finally those that did not contain each city exactly once.
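
The brute-force approach that Adleman's chemistry parallelizes can be sketched in a few lines: enumerate every ordering of the intermediate cities and keep only those that follow existing one-way roads. The four-city graph below is invented for illustration and is much smaller than Adleman's seven-node instance; the factorial tally at the end shows why enumeration quickly becomes hopeless.

```python
from itertools import permutations
from math import factorial

# Brute-force route search: list every ordering of the intermediate cities and
# keep those that follow existing one-way roads. The graph is invented for
# illustration; it is not Adleman's seven-node instance.

edges = {("A", "B"), ("B", "C"), ("C", "D"), ("A", "C"), ("B", "D")}
cities = ["A", "B", "C", "D"]
start, end = "A", "D"

valid_routes = []
middle = [c for c in cities if c not in (start, end)]
for ordering in permutations(middle):
    route = [start, *ordering, end]
    if all((route[i], route[i + 1]) in edges for i in range(len(route) - 1)):
        valid_routes.append(route)

print(valid_routes)   # [['A', 'B', 'C', 'D']]

# The trouble is growth: with n cities there are (n - 2)! orderings to test.
for n in (7, 20, 50):
    print(f"{n} cities -> {factorial(n - 2):.3e} candidate routes")
```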

Although Adleman’s network contained only seven nodes—an extremely trivial problem for digital computers—it was the first demonstration of the feasibility of DNA computing. Since then Erik Winfree, a computer scientist at the California Institute of Technology, has demonstrated that nonbiologic DNA variants (such as branched DNA) can be adapted to store and process information. DNA and quantum computing remain intriguing possibilities that, even if they prove impractical, may lead to further advances in the hardware of future computers.

Operating systems

Role of operating systems

Operating systems manage a computer’s resources—memory, peripheral devices, and even CPU access—and provide a battery of services to the user’s programs. UNIX, first developed for minicomputers and now widely used on both PCs and mainframes, is one example; Linux (a version of UNIX), Microsoft Corporation’s Windows, and Apple’s macOS are others.

One may think of an operating system as a set of concentric shells. At the center is the bare processor, surrounded by layers of operating system routines to manage input/output (I/O), memory access, multiple processes, and communication among processes. User programs are located in the outermost layers. Each layer insulates its inner layer from direct access, while providing services to its outer layer. This architecture frees outer layers from having to know all the details of lower-level operations, while protecting inner layers and their essential services from interference.

Early computers had no operating system. A user loaded a program from paper tape by employing switches to specify its memory address, to start loading, and to run the program. When the program finished, the computer halted. The programmer had to have knowledge of every computer detail, such as how much memory it had and the characteristics of I/O devices used by the program.

It was quickly realized that this was an inefficient use of resources, particularly as the CPU was largely idle while waiting for relatively slow I/O devices to finish tasks such as reading and writing data. If instead several programs could be loaded at once and coordinated to interleave their steps of computation and I/O, more work could be done. The earliest operating systems were small supervisor programs that did just that: they coordinated several programs, accepted commands from the operator, and provided them all with basic I/O operations. These were known as multiprogrammed systems.

A multiprogrammed system must schedule its programs according to some priority rule, such as “shortest jobs first.” It must protect them from mutual interference to prevent an addressing error in a program from corrupting the data or code of another. It must ensure noninterference during I/O so that output from several programs does not get commingled or input misdirected. It might also have to record the CPU time of each job for billing purposes.
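
A minimal sketch of the shortest-jobs-first rule: the queued job with the smallest estimated running time is started next. The job names and times are invented for illustration.

```python
# Shortest-jobs-first scheduling: run the queued job with the smallest
# estimated CPU time next. Names and times are invented for illustration.

jobs = [("payroll", 12), ("report", 3), ("backup", 7), ("compile", 1)]  # (name, minutes)

clock = 0
for name, minutes in sorted(jobs, key=lambda job: job[1]):
    print(f"t={clock:>2} min: start {name} ({minutes} min)")
    clock += minutes
print(f"t={clock:>2} min: all jobs finished")
```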

Modern types of operating systems

Multiuser systems

An extension of multiprogramming systems was developed in the 1960s, known variously as multiuser or time-sharing systems. (For a history of this development, see the section Time-sharing from Project MAC to UNIX.) Time-sharing allows many people to interact with a computer at once, each getting a small portion of the CPU’s time. If the CPU is fast enough, it will appear to be dedicated to each user, particularly as a computer can perform many functions while waiting for each user to finish typing the latest commands.
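
Time-sharing can be sketched as a round-robin loop in which each user's program receives a fixed slice of CPU time before the next is served; the slice length and workloads below are assumptions for illustration.

```python
from collections import deque

# Round-robin time-sharing: each user's program gets a fixed slice of CPU time
# in turn, so every user sees steady progress. Slice length and workloads are
# invented for illustration.

SLICE = 2  # CPU time units given to a program before moving to the next user

ready = deque([("alice", 5), ("bob", 3), ("carol", 4)])  # (user, time still needed)
clock = 0
while ready:
    user, remaining = ready.popleft()
    run = min(SLICE, remaining)
    clock += run
    if remaining - run > 0:
        ready.append((user, remaining - run))     # not finished: back of the queue
    else:
        print(f"t={clock}: {user}'s program finished")
```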

Multiuser operating systems employ a technique known as multiprocessing, or multitasking (as do most single-user systems today), in which even a single program may consist of many separate computational activities, called processes. The system must keep track of active and queued processes, of when each process must access secondary memory to retrieve and store its code and data, and of the allocation of other resources, such as peripheral devices.

Since main memory was very limited, early operating systems had to be as small as possible to leave room for other programs. To overcome some of this limitation, operating systems use virtual memory, one of many computing techniques developed during the late 1950s under the direction of Tom Kilburn at the University of Manchester, England. Virtual memory gives each process a large address space (memory that it may use), often much larger than the actual main memory. This address space resides in secondary memory (such as tape or disks), from which portions are copied into main memory as needed, updated as necessary, and returned when a process is no longer active. Even with virtual memory, however, some “kernel” of the operating system has to remain in main memory. Early UNIX kernels occupied tens of kilobytes; today they occupy more than a megabyte, and PC operating systems are comparable, largely because of the declining cost of main memory.
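
The following sketch shows the bookkeeping behind virtual memory in miniature: a page table maps a process's large address space onto a handful of physical frames, and a reference to a page that is not resident triggers a load from secondary storage. The page size, frame count, and simple first-in-first-out replacement rule are assumptions chosen for brevity, not how any particular kernel behaves.

```python
from collections import OrderedDict

# Minimal demand-paging sketch: only a few pages of a process's large virtual
# address space fit in main memory at once; the rest live in secondary storage
# and are loaded on a page fault. FIFO eviction keeps the example short.

PAGE_SIZE = 4096
FRAMES = 3                      # main memory holds only three pages here
resident = OrderedDict()        # virtual page number -> physical frame (insertion order = load order)
faults = 0

def access(virtual_address):
    """Translate a virtual address, loading its page from 'disk' on a fault."""
    global faults
    page = virtual_address // PAGE_SIZE
    if page not in resident:
        faults += 1
        if len(resident) == FRAMES:
            evicted_page, frame = resident.popitem(last=False)  # FIFO eviction
            print(f"  evict page {evicted_page} from frame {frame}")
        else:
            frame = len(resident)                               # a still-free frame
        resident[page] = frame
        print(f"  load page {page} into frame {frame}")
    return resident[page] * PAGE_SIZE + virtual_address % PAGE_SIZE

for addr in (0, 100, 5000, 9000, 13000, 200, 5100):
    access(addr)
print(f"page faults: {faults}")
```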

Operating systems have to maintain virtual memory tables to keep track of where each process’s address space resides, and modern CPUs provide special registers to make this more efficient. Indeed, much of an operating system consists of tables: tables of processes, of files and their locations (directories), of resources used by each process, and so on. There are also tables of user accounts and passwords that help control access to the user’s files and protect them against accidental or malicious interference.

Thin systems

While minimizing the memory requirements of operating systems for standard computers has been important, it has been absolutely essential for small, inexpensive, specialized devices such as personal digital assistants (PDAs), “smart” cellular telephones, portable devices for listening to compressed music files, and Internet kiosks. Such devices must be highly reliable, fast, and secure against break-ins or corruption—a cellular telephone that “freezes” in the middle of calls would not be tolerated. One might argue that these traits should characterize any operating system, but PC users seem to have become quite tolerant of frequent operating system failures that require restarts.

Reactive systems

Still more limited are embedded, or real-time, systems. These are small systems that run the control processors embedded in machinery from factory production lines to home appliances. They interact with their environment, taking in data from sensors and making appropriate responses. Embedded systems are known as “hard” real-time systems if they must guarantee schedules that handle all events even in a worst case and “soft” if missed deadlines are not fatal. An aircraft control system is a hard real-time system, as a single flight error might be fatal. An airline reservation system, on the other hand, is a soft real-time system, since a missed booking is rarely catastrophic.

Many of the features of modern CPUs and operating systems are inappropriate for hard real-time systems. For example, pipelines and superscalar multiple execution units give high performance at the expense of occasional delays when a branch prediction fails and a pipeline is filled with unneeded instructions. Likewise, virtual memory and caches give good memory-access times on the average, but sometimes they are slow. Such variability is inimical to meeting demanding real-time schedules, and so embedded processors and their operating systems must generally be relatively simple.

Operating system design approaches

Operating systems may be proprietary or open. Mainframe systems have largely been proprietary, supplied by the computer manufacturer. In the PC domain, Microsoft offers its proprietary Windows systems, Apple has supplied Mac OS for its line of Macintosh computers, and there are few other choices. The best-known open system has been UNIX, originally developed by Bell Laboratories and supplied freely to universities. In its Linux variant it is available for a wide range of PCs, workstations, and, most recently, IBM mainframes.

Open-source software is copyrighted, but its author grants free use, often including the right to modify it provided that use of the new version is not restricted. Linux is protected by the Free Software Foundation’s “GNU General Public License,” the same license that covers the software of the extensive GNU project, and this protection permits users to modify Linux and even to sell copies, provided that this right of free use is preserved in the copies.

One consequence of the right of free use is that numerous authors have contributed to the GNU-Linux work, adding many valuable components to the basic system. Although quality control is managed voluntarily and some have predicted that Linux would not survive heavy commercial use, it has been remarkably successful and seems well on its way to becoming the dominant version of UNIX on mainframes and on PCs used as Internet servers.

There are other variants of the UNIX system; some are proprietary, though most are now freely used, at least noncommercially. They all provide some type of graphical user interface. Apple’s operating system, macOS, exemplifies a proprietary UNIX-based system.

Proprietary operating systems such as Microsoft’s Windows 8, 10, and 11 are highly integrated. All operating systems provide file directory services, for example, but a Microsoft system might use the same window display for a directory as it would for a browser. Such an integrated approach makes it more difficult for nonproprietary software to use Windows capabilities, a feature that has been an issue in antitrust lawsuits against Microsoft.