The denotational semantics of programming languages was originally developed by the American logician Dana Scott and the British computer scientist Christopher Strachey. It can be described as an application to computer languages of the semantics that Scott had developed for the logical systems known as the lambda calculus. The characteristic feature of this calculus is that in it one can highlight a variable, say x, in an expression, say M, and understand the result as a function of x. This function is expressed by (λx.M), and it can be applied to other functions.
The semantics for the lambda calculus does not postulate any individuals to which the functions it deals with are applied. Everything is a function, and, when one function is applied to another function, the result is again a function.
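A rough sense of this "everything is a function" idea can be conveyed in a modern programming language. The following Python sketch is only an illustrative analogy, not Scott's model: it encodes truth values in the Church style, so that even logical constants are functions that are applied to other functions and yield functions.

```python
# Illustrative sketch: Church-style encodings in Python, where every value
# is a function and applying one function to another yields a function.

TRUE = lambda x: lambda y: x      # (λx.(λy.x)) selects its first argument
FALSE = lambda x: lambda y: y     # (λx.(λy.y)) selects its second argument

# Logical "and" defined purely by applying functions to functions.
AND = lambda p: lambda q: p(q)(FALSE)

# Applying AND to TRUE and FALSE yields another function, here FALSE.
result = AND(TRUE)(FALSE)
print(result is FALSE)  # True: the result is again a function
```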
Hypothetical reasoning is often presented as an extension and application of logic. One of the starting points of the study of such reasoning is the observation that the conditional sentences of natural languages do not have a truth-conditional semantics. In classical logic, the conditional "If A, then B" (the material conditional) is true unless A is true and B is false. In ordinary discourse, however, counterfactual conditionals (conditionals whose antecedent is false) are not automatically counted as true.
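A minimal sketch of this truth-conditional (material) account, written here in Python for illustration, makes the difficulty visible: every conditional with a false antecedent comes out true.

```python
# The material conditional of classical logic: "If A, then B" is false
# only when A is true and B is false.
def material_conditional(a: bool, b: bool) -> bool:
    return (not a) or b

# Any conditional with a false antecedent is true on this account,
# which is why it cannot capture counterfactuals.
for a in (True, False):
    for b in (True, False):
        print(a, b, material_conditional(a, b))
```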
The study of conditionals faces two interrelated problems: stating the conditions in which counterfactual conditionals are true and representing the conditional connection between the antecedent and the consequent. The difficulty of the first problem is illustrated by the following pair of counterfactual conditionals:
If Los Angeles were in Massachusetts, it would not be on the Pacific Ocean.
If Los Angeles were in Massachusetts, Massachusetts would extend all the way to the Pacific Ocean.
These two conditionals cannot both be true, yet it is not clear how to decide between them. The example nevertheless suggests a perspective on counterfactuals: the counterfactual situation is ordinarily allowed to differ from the actual one only in certain respects. Thus, the first example would be true if state boundaries were kept fixed and Los Angeles were allowed to change its location, whereas the second would be true if the locations of cities were kept fixed but state boundaries were allowed to change. It is not obvious how this relativity to certain implicit constancy assumptions can be represented formally.
Other criteria for the truth of counterfactuals have been suggested, often within the framework of possible-worlds semantics. For example, the American philosopher David Lewis suggested that a counterfactual is true if and only if its consequent is true in the possible world (or worlds) in which the antecedent holds and which is otherwise maximally similar to the actual world.
The idea of conditionality suggests that the way in which the antecedent is made true must somehow also make the consequent true. This idea is most naturally implemented in game-theoretic semantics. In this approach, the verification game with a conditional "If A, then B" can be divided into two subgames, played with A and B, respectively. If A turns out to be true, this means that there exists a verifying strategy in the game with A. The conditionality of B on A is then implemented by assuming that this winning strategy is available to the verifier in the game with the consequent B. This interpretation agrees with evidence from natural languages in the form of the behaviour of anaphoric pronouns. The availability of the winning strategy in the game with B means that the names of certain objects imported by the strategy from the first subgame are available as heads of anaphoric pronouns in the second subgame. For example, consider the sentence "If you give a gift to each child for her birthday, some child will open it today." Here a verifying strategy in the game with "you give a gift to each child for her birthday" involves a function that assigns a gift to each child. Since this function is known when the consequent is dealt with, it assigns to some child her own gift as the value of "it." In the usual logics of conditional reasoning, by contrast, the two problems noted above are answered only indirectly, by postulating logical laws that conditionals are supposed to obey.
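The mechanism can be sketched schematically. In the following Python fragment the children, gifts, and names are invented purely for illustration; the point is that the verifier's strategy from the antecedent subgame (a function from children to gifts) is carried over and used to resolve "it" in the consequent subgame.

```python
# Hypothetical sketch of the gift example: the verifying strategy in the
# antecedent subgame is a function assigning a gift to each child.
children = ["Ann", "Ben", "Cara"]                             # assumed example data
gift_for = {"Ann": "kite", "Ben": "puzzle", "Cara": "drum"}   # the verifying strategy

# Antecedent subgame: "you give a gift to each child" is verified if the
# strategy supplies a gift for every child.
antecedent_verified = all(child in gift_for for child in children)

# Consequent subgame: the same strategy is now available, so "it" in
# "some child will open it today" can pick out that child's own gift.
if antecedent_verified:
    child = "Ben"            # some child
    it = gift_for[child]     # the pronoun "it" resolves via the imported strategy
    print(f"{child} opens the {it} today.")
```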
Certain computational methods for dealing with concepts that are not precisely defined are known as fuzzy logics. They were originally developed by the American computer scientist Lotfi Zadeh, and they are widely discussed and used by computer scientists. Fuzzy logic is less a rival to classical logic than to the classical probability calculus, which also deals with imprecise attributions of properties to objects. The largely unacknowledged reason for the popularity of fuzzy logic is that, unlike probabilistic methods, fuzzy logic relies on compositional methods—i.e., methods in which the logical status of a complex expression depends only on the status of its component expressions. This facilitates computational applications, but it deprives fuzzy logic of most of its theoretical interest.
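A minimal sketch of what compositionality means in this setting, using the common min/max/complement connectives (one standard choice among several), is shown below in Python.

```python
# Fuzzy truth values are degrees in [0, 1]; the value of a compound
# expression is computed only from the values of its parts (compositionality).
def f_not(a: float) -> float:
    return 1.0 - a

def f_and(a: float, b: float) -> float:
    return min(a, b)

def f_or(a: float, b: float) -> float:
    return max(a, b)

# The degree of "x is tall and not old" depends only on the degrees
# assigned to "tall" and "old".
tall, old = 0.8, 0.3
print(f_and(tall, f_not(old)))  # 0.7
```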
On the philosophical level, fuzzy logic does not make the logical problems of vagueness more tractable. Some of these problems are among the oldest conceptual puzzles. Among them is the sorites paradox, sometimes formulated in the form known as the paradox of the bald man. The paradox is this: a man with no hairs is bald, and adding a single hair to a bald man's head never makes a difference to his baldness. Therefore, by mathematical induction, a man with any number of hairs is bald; in other words, everybody is bald. One natural attempt to solve this paradox is to assume that the predicate "bald" is not always applicable, so that it leaves what are known as truth-value gaps. But the boundaries of these gaps must themselves be sharp, which reproduces the paradox. However, the sorites paradox can be solved if the assumption of truth-value gaps is combined with the use of a suitable noncompositional logic.
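The inductive structure of the paradox can be displayed schematically (a standard formalization, with B(n) read as "a man with n hairs is bald"):

```latex
% Sorites induction schema: from the base case and the inductive step,
% mathematical induction yields the absurd conclusion that everyone is bald.
\[
  B(0), \qquad
  \forall n\,\bigl(B(n) \rightarrow B(n+1)\bigr)
  \;\;\vdash\;\;
  \forall n\, B(n)
\]
```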
The most influential computer scientists include Alan Turing, the World War II code breaker commonly regarded as the "father of modern computing"; Tim Berners-Lee, inventor of the World Wide Web; John McCarthy, inventor of the programming language LISP and a pioneer of artificial intelligence; and Grace Hopper, a U.S. Navy officer who was a key figure in the development of early computers such as the UNIVAC I as well as in the development of early compilers for programming languages.
What can you do with computer science?
Computer science is applied to a wide range of disciplines. Applications include modeling and simulation (for example, of the impacts of climate change or of the spread of the Ebola virus), creating art and visualization through graphics rendering, and simulating intelligent behaviour through artificial intelligence and machine learning.
Is computer science used in video games?
Video game development is grounded in the principles of computer science and programming. Modern graphics rendering in video games often employs advanced techniques such as ray tracing to provide realistic effects. The development of augmented reality and virtual reality has also expanded the range of possibilities of video game development.
How do I learn computer science?
Many universities across the world offer degrees that teach students the basics of computer science theory and the applications of computer programming. Additionally, the prevalence of online resources and courses makes it possible for many people to self-learn the more practical aspects of computer science (such as coding, video game development, and app design).
computer science, the study of computers and computing, including their theoretical and algorithmic foundations, hardware and software, and their uses for processing information. The discipline of computer science includes the study of algorithms and data structures, computer and network design, modeling data and information processes, and artificial intelligence. Computer science draws some of its foundations from mathematics and engineering and therefore incorporates techniques from areas such as queueing theory, probability and statistics, and electronic circuit design. Computer science also makes heavy use of hypothesis testing and experimentation during the conceptualization, design, measurement, and refinement of new algorithms, information structures, and computer architectures.
Computer science is considered part of a family of five separate yet interrelated disciplines: computer engineering, computer science, information systems, information technology, and software engineering. This family has come to be known collectively as the discipline of computing. These five disciplines are interrelated in the sense that computing is their object of study, but they are separate since each has its own research perspective and curricular focus. (Since 1991 the Association for Computing Machinery [ACM], the IEEE Computer Society [IEEE-CS], and the Association for Information Systems [AIS] have collaborated to develop and update the taxonomy of these five interrelated disciplines and the guidelines that educational institutions worldwide use for their undergraduate, graduate, and research programs.)
The major subfields of computer science include the traditional study of computer architecture, programming languages, and software development. However, they also include computational science (the use of algorithmic techniques for modeling scientific data), graphics and visualization, human-computer interaction, databases and information systems, networks, and the social and professional issues that are unique to the practice of computer science. As may be evident, some of these subfields overlap in their activities with other modern fields, such as bioinformatics and computational chemistry. These overlaps are the consequence of a tendency among computer scientists to recognize and act upon their field’s many interdisciplinary connections.
Development of computer science
Computer science emerged as an independent discipline in the early 1960s, although the electronic digital computer that is the object of its study was invented some two decades earlier. The roots of computer science lie primarily in the related fields of mathematics, electrical engineering, physics, and management information systems.
Mathematics is the source of two key concepts in the development of the computer—the idea that all information can be represented as sequences of zeros and ones and the abstract notion of a “stored program.” In the binary number system, numbers are represented by a sequence of the binary digits 0 and 1 in the same way that numbers in the familiar decimal system are represented using the digits 0 through 9. The relative ease with which two states (e.g., high and low voltage) can be realized in electrical and electronic devices led naturally to the binary digit, or bit, becoming the basic unit of data storage and transmission in a computer system.
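A brief illustration of the positional idea, sketched in Python with 13 as an example value:

```python
# The decimal numeral 13 means 1*10 + 3; the binary numeral 1101 means
# 1*8 + 1*4 + 0*2 + 1*1, using only the digits 0 and 1.
n = 13
print(bin(n))          # '0b1101'
print(int("1101", 2))  # 13
```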
Electrical engineering provides the basics of circuit design—namely, the idea that electrical impulses input to a circuit can be combined using Boolean algebra to produce arbitrary outputs. (The Boolean algebra developed in the 19th century supplied a formalism for designing a circuit with binary input values of zeros and ones [false or true, respectively, in the terminology of logic] to yield any desired combination of zeros and ones as output.) The invention of the transistor and the miniaturization of circuits, along with the invention of electronic, magnetic, and optical media for the storage and transmission of information, resulted from advances in electrical engineering and physics.
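A small Python sketch of the idea that Boolean combinations of binary inputs can yield any desired binary output is the textbook half adder, built from an XOR and an AND operation:

```python
# A half adder: two one-bit inputs combined with Boolean operations
# to produce a sum bit and a carry bit.
def half_adder(a: int, b: int) -> tuple[int, int]:
    total = a ^ b   # XOR gate gives the sum bit
    carry = a & b   # AND gate gives the carry bit
    return total, carry

for a in (0, 1):
    for b in (0, 1):
        print(a, b, half_adder(a, b))
```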
Management information systems, originally called data processing systems, provided early ideas from which various computer science concepts such as sorting, searching, databases, information retrieval, and graphical user interfaces evolved. Large corporations housed computers that stored information that was central to the activities of running a business—payroll, accounting, inventory management, production control, shipping, and receiving.
British mathematician Alan Turing conceptualized the Turing machine.
Theoretical work on computability, which began in the 1930s, provided the needed extension of these advances to the design of whole machines; a milestone was the 1936 specification of the Turing machine (a theoretical computational model that carries out instructions represented as a series of zeros and ones) by the British mathematician Alan Turing and his proof of the model’s computational power. Another breakthrough was the concept of the stored-program computer, usually credited to Hungarian American mathematician John von Neumann. These are the origins of the computer science field that later became known as architecture and organization.
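The flavour of the model can be conveyed by a minimal simulator. The following Python sketch is an illustrative toy, not Turing's 1936 formulation: a finite table of instructions reads and writes symbols on a tape, one cell at a time.

```python
# A toy Turing machine that flips the bits of its input and then halts.
# The transition table maps (state, symbol) to (write, move, next_state).
table = {
    ("flip", "0"): ("1", +1, "flip"),
    ("flip", "1"): ("0", +1, "flip"),
    ("flip", "_"): ("_", 0, "halt"),   # a blank symbol ends the computation
}

def run(tape_str: str) -> str:
    tape = list(tape_str) + ["_"]
    state, head = "flip", 0
    while state != "halt":
        write, move, state = table[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape).rstrip("_")

print(run("100110"))  # 011001
```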
In the 1950s, most computer users worked either in scientific research labs or in large corporations. The former group used computers to help them make complex mathematical calculations (e.g., missile trajectories), while the latter group used computers to manage large amounts of corporate data (e.g., payrolls and inventories). Both groups quickly learned that writing programs in the machine language of zeros and ones was not practical or reliable. This discovery led to the development of assembly language in the early 1950s, which allows programmers to use symbols for instructions (e.g., ADD for addition) and variables (e.g., X). Another program, known as an assembler, translated these symbolic programs into an equivalent binary program whose steps the computer could carry out, or “execute.”
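A hypothetical sketch of what an assembler does, namely translate symbolic instructions such as ADD into numeric operation codes, is given below; the mnemonics, codes, and addresses are invented purely for illustration and do not correspond to any real machine.

```python
# Invented instruction set, for illustration only.
OPCODES = {"LOAD": 0b0001, "ADD": 0b0010, "STORE": 0b0011}
VARIABLES = {"X": 0b1000, "Y": 0b1001}   # symbolic names for memory addresses

def assemble(line: str) -> str:
    """Translate one symbolic instruction, e.g. 'ADD X', into a binary word."""
    mnemonic, operand = line.split()
    word = (OPCODES[mnemonic] << 4) | VARIABLES[operand]
    return format(word, "08b")

program = ["LOAD X", "ADD Y", "STORE X"]
print([assemble(line) for line in program])
# ['00011000', '00101001', '00111000']
```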
Other system software elements known as linking loaders were developed to combine pieces of assembled code and load them into the computer’s memory, where they could be executed. The concept of linking separate pieces of code was important, since it allowed “libraries” of programs for carrying out common tasks to be reused. This was a first step in the development of the computer science field called software engineering.
Later in the 1950s, assembly language was found to be so cumbersome that the development of high-level languages (closer to natural languages) began to support easier, faster programming. FORTRAN emerged as the main high-level language for scientific programming, while COBOL became the main language for business programming. These languages carried with them the need for different software, called compilers, that translate high-level language programs into machine code. As programming languages became more powerful and abstract, building compilers that create high-quality machine code and that are efficient in terms of execution speed and storage consumption became a challenging computer science problem. The design and implementation of high-level languages is at the heart of the computer science field called programming languages.
Increasing use of computers in the early 1960s provided the impetus for the development of the first operating systems, which consisted of system-resident software that automatically handled input and output and the execution of programs called “jobs.” The demand for better computational techniques led to a resurgence of interest in numerical methods and their analysis, an activity that expanded so widely that it became known as computational science.
The 1970s and ’80s saw the emergence of powerful computer graphics devices, both for scientific modeling and other visual activities. (Computerized graphical devices were introduced in the early 1950s with the display of crude images on paper plots and cathode-ray tube [CRT] screens.) Expensive hardware and the limited availability of software kept the field from growing until the early 1980s, when the computer memory required for bitmap graphics (in which an image is made up of small rectangular pixels) became more affordable. Bitmap technology, together with high-resolution display screens and the development of graphics standards that make software less machine-dependent, has led to the explosive growth of the field. Support for all these activities evolved into the field of computer science known as graphics and visual computing.
The Xerox Alto was the first computer to use graphical icons and a mouse to control the system—the first graphical user interface (GUI).
Closely related to this field is the design and analysis of systems that interact directly with users who are carrying out various computational tasks. These systems came into wide use during the 1980s and ’90s, when line-edited interactions with users were replaced by graphical user interfaces (GUIs). GUI design, which was pioneered by Xerox and was later picked up by Apple (Macintosh) and finally by Microsoft (Windows), is important because it constitutes what people see and do when they interact with a computing device. The design of appropriate user interfaces for all types of users has evolved into the computer science field known as human-computer interaction (HCI).
The field of computer architecture and organization has also evolved dramatically since the first stored-program computers were developed in the 1950s. So-called time-sharing systems emerged in the 1960s to allow several users to run programs at the same time from different terminals that were hard-wired to the computer. The 1970s saw the development of the first wide-area computer networks (WANs) and protocols for transferring information at high speeds between computers separated by large distances. As these activities evolved, they coalesced into the computer science field called networking and communications. A major accomplishment of this field was the development of the Internet.
The idea that instructions, as well as data, could be stored in a computer’s memory was critical to fundamental discoveries about the theoretical behavior of algorithms. That is, questions such as, “What can/cannot be computed?” have been formally addressed using these abstract ideas. These discoveries were the origin of the computer science field known as algorithms and complexity. A key part of this field is the study and application of data structures that are appropriate to different applications. Data structures, along with the development of optimal algorithms for inserting, deleting, and locating data in such structures, are a major concern of computer scientists because they are so heavily used in computer software, most notably in compilers, operating systems, file systems, and search engines.
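As a small illustration of a data structure with efficient insertion and lookup, here is a minimal binary search tree in Python; it is one textbook example among many and is not tied to any particular system named above.

```python
# Minimal binary search tree: inserting and locating keys takes time
# proportional to the tree's height rather than to the number of items.
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root

def contains(root, key):
    while root is not None:
        if key == root.key:
            return True
        root = root.left if key < root.key else root.right
    return False

root = None
for k in (8, 3, 10, 1, 6):
    root = insert(root, k)
print(contains(root, 6), contains(root, 7))  # True False
```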
In the 1960s the invention of magnetic disk storage provided rapid access to data located at an arbitrary place on the disk. This invention led not only to more cleverly designed file systems but also to the development of database and information retrieval systems, which later became essential for storing, retrieving, and transmitting large amounts and wide varieties of data across the Internet. This field of computer science is known as information management.
Another long-term goal of computer science research is the creation of computing machines and robotic devices that can carry out tasks that are typically thought of as requiring human intelligence. Such tasks include moving, seeing, hearing, speaking, understanding natural language, thinking, and even exhibiting human emotions. The computer science field of intelligent systems, originally known as artificial intelligence (AI), actually predates the first electronic computers in the 1940s, although the term artificial intelligence was not coined until 1956.
Three developments in computing in the early part of the 21st century—mobile computing, client-server computing, and computer hacking—contributed to the emergence of three new fields in computer science: platform-based development, parallel and distributed computing, and security and information assurance. Platform-based development is the study of the special needs of mobile devices, their operating systems, and their applications. Parallel and distributed computing concerns the development of architectures and programming languages that support the development of algorithms whose components can run simultaneously and asynchronously (rather than sequentially), in order to make better use of time and space. Security and information assurance deals with the design of computing systems and software that protects the integrity and security of data, as well as the privacy of individuals who are characterized by that data.
Finally, a particular concern of computer science throughout its history is the unique societal impact that accompanies computer science research and technological advancements. With the emergence of the Internet in the 1980s, for example, software developers needed to address important issues related to information security, personal privacy, and system reliability. In addition, the question of whether computer software constitutes intellectual property and the related question “Who owns it?” gave rise to a whole new legal area of licensing and licensing standards that applied to software and related artifacts. These concerns and others form the basis of social and professional issues of computer science, and they appear in almost all the other fields identified above.
So, to summarize, the discipline of computer science has evolved into the following 15 distinct fields: algorithms and complexity; architecture and organization; computational science; graphics and visual computing; human-computer interaction; information management; intelligent systems; networking and communications; operating systems; parallel and distributed computing; platform-based development; programming languages; security and information assurance; social and professional issues; and software engineering.
Computer science continues to have strong mathematical and engineering roots. Computer science bachelor’s, master’s, and doctoral degree programs are routinely offered by postsecondary academic institutions, and these programs require students to complete appropriate mathematics and engineering courses, depending on their area of focus. For example, all undergraduate computer science majors must study discrete mathematics (logic, combinatorics, and elementary graph theory). Many programs also require students to complete courses in calculus, statistics, numerical analysis, physics, and principles of engineering early in their studies.