graphical user interface (GUI), a computer program that enables a person to communicate with a computer through the use of symbols, visual metaphors, and pointing devices. Best known for its implementation in Apple Inc.’s Macintosh and Microsoft Corporation’s Windows operating system, the GUI has replaced the arcane and difficult textual interfaces of earlier computing with a relatively intuitive system that has made computer operation not only easier to learn but more pleasant and natural. The GUI is now the standard computer interface, and its components have themselves become unmistakable cultural artifacts.

Early ideas

There was no one inventor of the GUI; it evolved with the help of a series of innovators, each improving on a predecessor’s work. The first theorist was Vannevar Bush, director of the U.S. Office of Scientific Research and Development, who in an influential essay, “As We May Think,” published in the July 1945 issue of The Atlantic Monthly, envisioned how future information gatherers would use a computer-like device, which he called a “memex,” outfitted with buttons and levers that could access vast amounts of linked data—an idea that anticipated hyperlinking. Bush’s essay enchanted Douglas Engelbart, a young naval technician, who embarked on a lifelong quest to realize some of those ideas. While at the Stanford Research Institute (now known as SRI International), working on a U.S. Department of Defense grant, Engelbart formed the Augmentation Research Center. By the mid-1960s it had devised a set of innovations, including a way of segmenting the monitor screen so that it appeared to be a viewpoint into a document. (The use of multiple tiles, or windows, on the screen could easily accommodate different documents, something that Bush thought crucial.) Engelbart’s team also invented a pointing device known as a “mouse,” then a palm-sized wooden block on wheels whose movement controlled a cursor on the computer screen. These innovations allowed information to be manipulated in a more flexible, natural manner than the prevalent method of typing one of a limited set of commands.

PARC

The next wave of GUI innovation occurred at the Xerox Corporation’s Palo Alto (California) Research Center (PARC), to which several of Engelbart’s team moved in the 1970s. The new interface ideas found their way to a computer workstation called the Xerox Star, which was introduced in 1981. Though the process was expensive, the Star (and its prototype predecessor, the Alto) used a technique called “bit mapping” in which everything on the computer screen was, in effect, a picture. Bit mapping not only welcomed the use of graphics but allowed the computer screen to display exactly what would be output from a printer—a feature that became known as “what you see is what you get,” or WYSIWYG. The computer scientists at PARC, notably Alan Kay, also designed the Star interface to embody a metaphor: a set of small pictures, or “icons,” were arranged on the screen, which was to be thought of as a virtual desktop. The icons represented officelike activities such as retrieving files from folders and printing documents. By using the mouse to position the computer’s cursor over an icon and then clicking a button on the mouse, a command would be instantly implemented—an intuitively simpler, and generally quicker, process than typing commands.
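The bit-mapping technique described above, in which every pixel on the screen is backed by memory that a program can set directly, can be sketched in a few lines. This is a minimal illustration only, not code from the Star or any historical system; the framebuffer size and helper names are invented for the example.

```python
# A minimal sketch of bit mapping: the screen is a grid in which every
# pixel is individually addressable, so arbitrary graphics (not just
# fixed character cells) can be drawn. All names here are illustrative.

WIDTH, HEIGHT = 16, 8

def new_framebuffer():
    """One cell per pixel: 0 = background, 1 = foreground."""
    return [[0] * WIDTH for _ in range(HEIGHT)]

def set_pixel(fb, x, y, value=1):
    if 0 <= x < WIDTH and 0 <= y < HEIGHT:
        fb[y][x] = value

def render(fb):
    """Turn the pixel grid into text: '#' for on, '.' for off."""
    return "\n".join("".join("#" if p else "." for p in row) for row in fb)

# Draw the outline of a small "window" on the virtual screen.
fb = new_framebuffer()
for x in range(4, 12):      # top and bottom edges
    set_pixel(fb, x, 1)
    set_pixel(fb, x, 6)
for y in range(1, 7):       # left and right edges
    set_pixel(fb, 4, y)
    set_pixel(fb, 11, y)
print(render(fb))
```

A character-cell terminal, by contrast, could place only a fixed set of glyphs at fixed positions; with a bit map, the same memory can hold text, icons, and window borders alike, which is what made the WYSIWYG display possible.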


Macintosh to Windows

In late 1979 a group of engineers from Apple, led by cofounder Steven P. Jobs, saw the GUI during a visit to PARC and were sufficiently impressed to integrate the ideas into two new computers, Lisa and Macintosh, then in the design stage. Each product came to have a bit-mapped screen and a sleek, palm-sized mouse (though for simplicity this used a single command button in contrast to the multiple buttons on the SRI and PARC versions). The software interface utilized overlapping windows, rather than tiling the screen, and featured icons that fit the Xerox desktop metaphor. Moreover, the Apple engineers added their own innovations, including a “menu bar” that, with the click of a mouse, would lower a “pull-down” list of commands. Other touches included scroll bars on the sides of windows and animation when windows opened and closed. Apple even employed a visual artist to create an attractive on-screen “look and feel.”

Whereas the Lisa first brought the principles of the GUI into a wider marketplace, it was the lower-cost Macintosh, shipped in 1984, that won millions of converts to the interface. Nonetheless, some critics charged that, because of the higher costs and slower speeds, the GUI was more appropriate for children than for professionals and that the latter would continue to use the old command-line interface of Microsoft’s DOS (disk operating system). It was only after 1990, when Microsoft released Windows 3.0 OS, with the first acceptable GUI for International Business Machines Corporation (IBM) PC-compatible computers, that the GUI became the standard interface for personal computers. This in turn led to the development of various graphical interfaces for UNIX and other workstation operating systems. By 1995, when Microsoft released its even more intuitive Windows 95 OS, not only had components of the GUI become synonymous with computing but its images had found their way into other media, including print design and even television commercials. It was even argued that, with the advent of the GUI, engineering had merged with art to create a new medium of the interface.

Speech recognition

Although the GUI continued to evolve through the 1990s, particularly as features of Internet software began to appear in more general applications, software designers actively researched its replacement. In particular, the advent of “computer appliances” (devices such as personal digital assistants, automobile control systems, television sets, videocassette recorders, microwave ovens, telephones, and even refrigerators—all endowed with the computational powers of the embedded microprocessor) made it apparent that new means of navigation and control were in order. By making use of powerful advances in speech recognition and natural language processing, these new interfaces might be more intuitive and effective than ever. Nevertheless, as a medium of communication with machines, they would only build upon the revolutionary changes introduced by the graphical user interface.

Steven Levy

app, also called mobile application, application software developed for use on a mobile device such as a smartphone or tablet. Mobile apps are distinct from Web applications, which run in Web browsers, and from desktop applications, which are used on desktop computers.

Mobile apps were introduced in the 1980s with the release of the first personal digital assistants (PDAs). However, such apps did not evolve far past the most basic and utilitarian functions (e.g., clocks and calculators) until the 21st century, when smartphones became powerful enough to run larger programs. Additionally, third-generation (3G) mobile networks made it possible to download files much larger than e-mails and text messages. Once smartphone manufacturers began allowing downloads of mobile apps created by third parties in the 2000s, a new industry was born. The resulting explosion in mobile app options for consumers revolutionized how people work, play, shop, and travel.

As with desktop computers, smartphones are sold with many basic apps preloaded: an e-mail client, a calendar, a Web browser, a weather forecaster, and so on. Additional apps are generally downloaded from online distribution platforms commonly referred to as app stores. The most popular app stores are operated by smartphone makers, such as Samsung or Huawei, or by companies that design their own operating systems (OS), such as Apple or Google. Apple's App Store, introduced in 2008, offered about 500 apps; by 2022 that total had leapt to more than 1.7 million. Companies that design their own OS have the advantage of preinstalling their store's apps on new devices. Some apps are free, while others require payment. Any revenue generated by an app (whether through an upfront transaction, a monthly subscription, microtransactions, or advertising) is usually shared between the app's creator and the app store. Consequently, an app's cost may vary based on where it is purchased.
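The revenue-sharing arrangement can be illustrated with simple arithmetic. The 70/30 split used below is a commonly cited historical arrangement, adopted here purely as an assumption; actual rates vary by store, revenue type, and developer size.

```python
# Hypothetical split of app revenue between a developer and an app
# store. The 30% store share is an illustrative assumption, not the
# policy of any particular store.

def split_revenue(gross, store_share=0.30):
    """Return (developer_take, store_take) for a given gross amount."""
    store_take = round(gross * store_share, 2)
    developer_take = round(gross - store_take, 2)
    return developer_take, store_take

# One $4.99 purchase under a 30% store share:
dev, store = split_revenue(4.99)
print(dev, store)  # 3.49 1.5

# A year of a $9.99/month subscription under a reduced 15% share:
sub_dev, sub_store = split_revenue(9.99 * 12, store_share=0.15)
```

Because the store's cut differs from platform to platform, a developer charging the same list price everywhere keeps a different net amount on each store, which is one reason prices can vary by storefront.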

Developers tend to group apps into three types: native, Web-based, and hybrid. Native apps are designed to run on a single operating system or platform. Because native apps are customized to a specific operating system, they are faster and more secure than the alternatives, and they can interact directly with a mobile device's hardware (e.g., its microphone or camera). Web-based mobile apps, or Web apps, rely on a mobile device's browser to access their features. As a result, they are slower and do not work offline, but they will work with any operating system. Finally, hybrid apps look and function much like native apps but resemble Web-based apps in that they work on multiple operating systems. Their drawback is that they lack true native apps' power and speed.

Mobile apps are traditionally characterized by their narrow functionality; they rarely offer as many features and options as their desktop and browser counterparts. One reason for this is that mobile apps have access to less processing power, memory, and storage. Another limiting factor is connectivity: mobile apps are meant to be accessible over mobile networks, which have their own data and speed issues, particularly in rural areas or developing countries. Finally, there is the intrinsic difficulty of offering more options on smaller screens while also providing a pleasant and intuitive interface. However, since each app offers only a few features, users can tailor the functionality of their mobile devices to their preferences by being selective with the apps they choose.

Some countries—particularly in Asia—have seen the rise of “super-apps,” or widely adopted mobile apps that provide services as varied as messaging, food delivery, gaming, and paying rent. Such apps run on top of the smartphone’s operating system as a secondary platform. While they do possess core features, most of their functionality is obtained by downloading third-party “mini-apps.” Super-apps are advantageous in that users create only one account to enjoy numerous services. However, the mini-apps are slower, since they are not directly running on the device’s operating system, and the user interfaces of super-apps with many mini-apps can be harder to navigate.

Moreover, privacy advocates note that a user’s data from a super-app can give one company—or a government—an exceptionally detailed picture of that user’s life. The China-based super-app WeChat, considered to be the oldest super-app, is the primary example, as the service is known to be used by the Chinese government to conduct mass surveillance.

Adam Volle