- Also called: operational research
- Key People: Charles Babbage, George Dantzig
Three essential characteristics of operations research are a systems orientation, the use of interdisciplinary teams, and the application of scientific method to the conditions under which the research is conducted.
Systems orientation
The systems approach to problems recognizes that the behaviour of any part of a system has some effect on the behaviour of the system as a whole. Even if the individual components are performing well, however, the system as a whole is not necessarily performing equally well. For example, assembling the best of each type of automobile part, regardless of make, does not necessarily result in a good automobile or even one that will run, because the parts may not fit together. It is the interaction between parts, and not the actions of any single part, that determines how well a system performs.
Thus, operations research attempts to evaluate the effect of changes in any part of a system on the performance of the system as a whole and to search for causes of a problem that arises in one part of a system in other parts or in the interrelationships between parts. In industry, a production problem may be approached by a change in marketing policy. For example, if a factory fabricates a few profitable products in large quantities and many less profitable items in small quantities, long efficient production runs of high-volume, high-profit items may have to be interrupted for short runs of low-volume, low-profit items. An operations researcher might propose reducing the sales of the less profitable items and increasing those of the profitable items by placing salesmen on an incentive system that especially compensates them for selling particular items.
The interdisciplinary team
Scientific and technological disciplines have proliferated rapidly in the last 100 years. The proliferation, resulting from the enormous increase in scientific knowledge, has provided science with a filing system that permits a systematic classification of knowledge. This classification system is helpful in solving many problems by identifying the proper discipline to appeal to for a solution. Difficulties arise when more complex problems, such as those arising in large organized systems, are encountered. It is then necessary to find a means of bringing together diverse disciplinary points of view. Furthermore, since methods differ among disciplines, the use of interdisciplinary teams makes available a much larger arsenal of research techniques and tools than would otherwise be available. Hence, operations research may be characterized by rather unusual combinations of disciplines on research teams and by the use of varied research procedures.
Methodology
Until the 20th century, laboratory experiments were the principal and almost the only method of conducting scientific research. But large systems such as are studied in operations research cannot be brought into laboratories. Furthermore, even if systems could be brought into the laboratory, what would be learned would not necessarily apply to their behaviour in their natural environment, as shown by early experience with radar. Experiments on systems and subsystems conducted in their natural environment (“operational experiments”) are possible as a result of the experimental methods developed by the British statistician R.A. Fisher in 1923–24. For practical or even ethical reasons, however, it is seldom possible to experiment on large organized systems as a whole in their natural environments. This results in an apparent dilemma: to gain understanding of complex systems experimentation seems to be necessary, but it cannot usually be carried out. This difficulty is solved by the use of models, representations of the system under study. Provided the model is good, experiments (called “simulations”) can be conducted on it, or other methods can be used to obtain useful results.
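An operational experiment on a model can be sketched in a few lines. The example below is a hypothetical single-server queue (the system, arrival and service rates, and customer count are all assumptions, not from the text): instead of disturbing a real operation, two service rates are compared on the simulated system.

```python
import random

def simulate_queue(arrival_rate, service_rate, n_customers, seed=1):
    """Monte Carlo simulation of a single-server queue.

    Customers arrive with exponential interarrival times and are served
    one at a time; the function estimates the average waiting time.
    """
    rng = random.Random(seed)
    clock = 0.0            # arrival time of the current customer
    server_free_at = 0.0   # time at which the server next becomes idle
    total_wait = 0.0
    for _ in range(n_customers):
        clock += rng.expovariate(arrival_rate)   # next arrival
        start = max(clock, server_free_at)       # wait if the server is busy
        total_wait += start - clock
        server_free_at = start + rng.expovariate(service_rate)
    return total_wait / n_customers

# Experimenting on the model rather than the real system: compare two
# hypothetical service rates without interrupting actual operations.
slow = simulate_queue(arrival_rate=1.0, service_rate=1.2, n_customers=10000)
fast = simulate_queue(arrival_rate=1.0, service_rate=2.0, n_customers=10000)
```

Runs like these are exactly the "simulations" the text describes: the model, not the operation itself, absorbs the experimentation.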
Phases of operations research
Problem formulation
To formulate an operations research problem, a suitable measure of performance must be devised, various possible courses of action defined (that is, controlled variables and the constraints upon them), and relevant uncontrolled variables identified. To devise a measure of performance, objectives are identified and defined, and then quantified. If objectives cannot be quantified or expressed in rigorous (usually mathematical) terms, most operations research techniques cannot be applied. For example, a business manager may have the acquisitive objective of introducing a new product and making it profitable within one year. The identified objective is profit in one year, which is defined as receipts less costs, and would probably be quantified in terms of sales. In the real world, conditions may change with time. Thus, though a given objective is identified at the beginning of the period, change and reformulation are frequently necessary.
Detailed knowledge of how the system under study actually operates and of its environment is essential. Such knowledge is normally acquired through an analysis of the system, a four-step process that involves determining whose needs or desires the organization tries to satisfy; how these are communicated to the organization; how information on needs and desires penetrates the organization; and what action is taken, how it is controlled, and what the time and resource requirements of these actions are. This information can usually be represented graphically in a flowchart, which enables researchers to identify the variables that affect system performance.
Once the objectives, the decision makers, their courses of action, and the uncontrolled variables have been identified and defined, a measure of performance can be developed and selection can be made of a quantitative function of this measure to be used as a criterion for the best solution.
The type of decision criterion that is appropriate to a problem depends on the state of knowledge regarding possible outcomes. Certainty describes a situation in which each course of action is believed to result in one particular outcome. Risk is a situation in which, for each course of action, alternative outcomes are possible, the probabilities of which are known or can be estimated. Uncertainty describes a situation in which, for each course of action, probabilities cannot be assigned to the possible outcomes.
In risk situations, which are the most common in practice, the objective normally is to maximize expected (long-run average) net gain or gross gain for specified costs, or to minimize costs for specified benefits. A business, for example, seeks to maximize expected profits or minimize expected costs. Other objectives, not necessarily related, may be sought; for example, an economic planner may wish to maintain full employment without inflation; or different groups within an organization may have to compromise their differing objectives, as when an army and a navy, for example, must cooperate in matters of defense.
In approaching uncertain situations one may attempt either to maximize the minimum gain or minimize the maximum loss that results from a choice; this is the “minimax” approach. Alternatively, one may weigh the possible outcomes to reflect one’s optimism or pessimism and then apply the minimax principle. A third approach, “minimax regret,” attempts to minimize the maximum deviation from the outcome that would have been selected if a state of certainty had existed before the choice had been made.
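The three criteria can be compared on a single payoff matrix. In the sketch below the actions, states, payoffs, and probabilities are all hypothetical; the three functions implement, respectively, expected-gain maximization under risk, the maximin ("minimax") rule, and minimax regret:

```python
# Hypothetical payoff matrix: payoffs[action][state] is the gain from
# choosing that action when that state of the world occurs.
payoffs = {
    "expand":   [80, 20, -30],
    "maintain": [40, 30,  10],
    "contract": [15, 15,  15],
}

def expected_value_choice(payoffs, probs):
    """Risk: state probabilities are known; maximize expected gain."""
    return max(payoffs, key=lambda a: sum(p * g for p, g in zip(probs, payoffs[a])))

def maximin_choice(payoffs):
    """Uncertainty: maximize the minimum (worst-case) gain."""
    return max(payoffs, key=lambda a: min(payoffs[a]))

def minimax_regret_choice(payoffs):
    """Uncertainty: minimize the maximum regret, regret being the
    shortfall from the best payoff achievable in each state."""
    n_states = len(next(iter(payoffs.values())))
    best = [max(payoffs[a][s] for a in payoffs) for s in range(n_states)]
    return min(payoffs, key=lambda a: max(best[s] - payoffs[a][s] for s in range(n_states)))
```

On this matrix the three criteria recommend three different actions, which is precisely why the state of knowledge about outcomes matters when choosing a criterion.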
Each identified variable should be defined in terms of the conditions under which, and research operations by which, questions concerning its value ought to be answered; this includes identifying the scale used in measuring the variable.
Model construction
A model is a simplified representation of the real world and, as such, includes only those variables relevant to the problem at hand. A model of freely falling bodies, for example, does not refer to the colour, texture, or shape of the body involved. Furthermore, a model may not include all relevant variables because a small percentage of these may account for most of the phenomenon to be explained. Many of the simplifications used produce some error in predictions derived from the model, but these can often be kept small compared to the magnitude of the improvement in operations that can be extracted from them. Most operations research models are symbolic models because symbols represent properties of the system. The earliest models were physical representations such as model ships, airplanes, tow tanks, and wind tunnels. Physical models are usually fairly easy to construct, but only for relatively simple objects or systems, and are usually difficult to change.
The next step beyond the physical model is the graph, easier to construct and manipulate but more abstract. Since graphic representation of more than three variables is difficult, symbolic models came into use. There is no limit to the number of variables that can be included in a symbolic model, and such models are easier to construct and manipulate than physical models.
Symbolic models are completely abstract. When the symbols in a model are defined, the model is given content or meaning. This has important consequences. Symbolic models of systems of very different content often reveal similar structure. Hence, most systems and problems arising in them can be fruitfully classified in terms of relatively few structures. Furthermore, since methods of extracting solutions from models depend only on their structure, some methods can be used to solve a wide variety of contextually different problems. Finally, a system that has the same structure as another, however different the two may be in content, can be used as a model of the other. Such a model is called an analogue. By use of such models much of what is known about the first system can be applied to the second.
Despite the obvious advantages of symbolic models there are many cases in which physical models are still useful, as in testing physical structures and mechanisms; the same is true for graphic models. Physical and graphic models are frequently used in the preliminary phases of constructing symbolic models of systems.
Operations research models represent the causal relationship between the controlled and uncontrolled variables and system performance; they must therefore be explanatory, not merely descriptive. Only explanatory models can provide the requisite means to manipulate the system to produce desired changes in performance.
Operations research analysis is directed toward establishing cause-and-effect relations. Though experiments with actual operations of all or part of a system are often useful, these are not the only way to analyze cause and effect. There are four patterns of model construction, only two of which involve experimentation: inspection, use of analogues, operational analysis, and operational experiments. They are considered here in order of increasing complexity.
In some cases the system and its problem are relatively simple and can be grasped either by inspection or from discussion with persons familiar with it. In general, only low-level and repetitive operating problems, those in which human behaviour plays a minor role, can be so treated.
When the researcher finds it difficult to represent the structure of a system symbolically, it is sometimes possible to establish a similarity, if not an identity, with another system whose structure is better known and easier to manipulate. It may then be possible to use either the analogous system itself or a symbolic model of it as a model of the problem system. For example, an equation derived from the kinetic theory of gases has been used as a model of the movement of trains between two classification yards. Hydraulic analogues of economies and electronic analogues of automotive traffic have been constructed with which experimentation could be carried out to determine the effects of manipulation of controllable variables. Thus, analogues may be constructed as well as found in existing systems.
In some cases analysis of actual operations of a system may reveal its causal structure. Data on operations are analyzed to yield an explanatory hypothesis, which is tested by analysis of operating data. Such testing may lead to revision of the hypothesis. The cycle is continued until a satisfactory explanatory model is developed.
For example, an analysis of the cars stopping at urban automotive service stations located at intersections of two streets revealed that almost all came from four of the 16 possible routes through the intersection (four ways of entering times four ways of leaving). Examination of the percentage of cars in each route that stopped for service suggested that this percentage was related to the amount of time lost by stopping. Data were then collected on time lost by cars in each route. This revealed a close inverse relationship between the percentage stopping and time lost. But the relationship was not linear; that is, the increases in one were not proportional to increases in the other. It was then found that perceived lost time exceeded actual lost time, and the relationship between the percentage of cars stopping and perceived lost time was close and linear. The hypothesis was systematically tested and verified and a model constructed that related the number of cars stopping at service stations to the amount of traffic in each route through its intersection and to characteristics of the station that affect the time required to get service.
In situations where it is not possible to isolate the effects of individual variables by analysis of operating data, it may be necessary to resort to operational experiments to determine which variables are relevant and how they affect system performance.
Such is the case, for example, in attempts to quantify the effects of advertising (amount, timing, and media used) upon sales of a consumer product. Advertising by the producer is only one of many controlled and uncontrolled variables affecting sales. Hence, in many cases its effect can only be isolated and measured by controlled experiments in the field.
The same is true in determining how the size, shape, weight, and price of a food product affect its sales. In this case laboratory experiments on samples of consumers can be used in preliminary stages, but field experiments are eventually necessary. Experiments do not yield explanatory theories, however. They can only be used to test explanatory hypotheses formulated before designing the experiment and to suggest additional hypotheses to be tested.
It is sometimes necessary to modify an otherwise acceptable model because it is not possible or practical to find the numerical values of the variables that appear in it. For example, a model to be used in guiding the selection of research projects may contain such variables as “the probability of success of the project,” “expected cost of the project,” and its “expected yield.” But none of these may be calculable with any reliability.
Models not only assist in solving problems but also are useful in formulating them; that is, models can be used as guides to explore the structure of a problem and to reveal possible courses of action that might otherwise be missed. In many cases the course of action revealed by such application of a model is so obviously superior to previously considered possibilities that justification of its choice is hardly required.
In some cases the model of a problem may be either too complicated or too large to solve. It is frequently possible to divide the model into individually solvable parts and to take the output of one model as an input to another. Since the models are likely to be interdependent, several repetitions of this process may be necessary.
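Passing outputs between submodels until the joint solution stabilizes is a fixed-point iteration. A toy sketch, with both submodels hypothetical (a production level that responds to an inventory target, and vice versa):

```python
def solve_by_decomposition(tol=1e-6, max_iters=100):
    """Split a model into two solvable parts and iterate: the output of
    each part is fed to the other until the values stop changing."""
    production, inventory = 0.0, 0.0
    for _ in range(max_iters):
        new_production = 100.0 - 0.5 * inventory      # production submodel
        new_inventory = 20.0 + 0.2 * new_production   # inventory submodel
        if (abs(new_production - production) < tol
                and abs(new_inventory - inventory) < tol):
            return new_production, new_inventory
        production, inventory = new_production, new_inventory
    return production, inventory

p, i = solve_by_decomposition()
```

When the submodels are interdependent, as here, convergence is what licenses treating the separately solved parts as a solution of the whole.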
Deriving solutions from models
Procedures for deriving solutions from models are either deductive or inductive. With deduction one moves directly from the model to a solution in either symbolic or numerical form. Such procedures are supplied by mathematics; for example, the calculus. An explicit analytical procedure for finding the solution is called an algorithm.
Even if a model cannot be solved, and many are too complex for solution, it can be used to compare alternative solutions. It is sometimes possible to conduct a sequence of comparisons, each suggested by the previous one and each likely to contain a better alternative than was contained in any previous comparison. Such a solution-seeking procedure is called heuristic.
Inductive procedures involve trying and comparing different values of the controlled variables. Such procedures are said to be iterative (repetitive) if they proceed through successively improved solutions until either an optimal solution is reached or further calculation cannot be justified. A rational basis for terminating such a process—known as “stopping rules”—involves the determination of the point at which the expected improvement of the solution on the next trial is less than the cost of the trial.
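The stopping rule just described can be sketched directly. In this toy version (gain curve, step size, and trial cost are all hypothetical), the improvement observed on the last trial stands in for the expected improvement of the next one:

```python
def iterative_search(evaluate, step, start, trial_cost):
    """Improve a solution trial by trial; stop when the improvement
    from the latest trial falls below the cost of a trial (a simple
    proxy for the expected improvement of the next one)."""
    x, value = start, evaluate(start)
    while True:
        candidate = step(x)
        new_value = evaluate(candidate)
        if new_value - value < trial_cost:   # stopping rule
            return x, value
        x, value = candidate, new_value

# Hypothetical example: gains diminish as x grows; each trial costs 0.5.
best_x, best_value = iterative_search(
    evaluate=lambda x: 10 * x - x * x,  # concave gain curve, peak at x = 5
    step=lambda x: x + 1,
    start=0,
    trial_cost=0.5,
)
```

Because the gain curve is concave, improvements shrink with each trial, and the search halts once a further trial can no longer pay for itself.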
Such well-known algorithms as linear, nonlinear, and dynamic programming are iterative procedures based on mathematical theory. Simulation and experimental optimization are iterative procedures based primarily on statistics.
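For linear programming, a key structural fact is that an optimum lies at a vertex of the feasible region; the simplex method moves iteratively from vertex to vertex. The sketch below is not the simplex algorithm but a brute-force illustration of the same fact for a hypothetical two-variable product-mix problem (all coefficients assumed):

```python
from itertools import combinations

def solve_lp(c, A, b):
    """Maximize c·x subject to A x <= b and x >= 0, for two variables.

    Brute-force illustration: enumerate intersections of constraint
    boundaries (candidate vertices) and keep the best feasible one.
    Practical solvers reach the same vertex iteratively.
    """
    rows = A + [[-1, 0], [0, -1]]   # include x >= 0 as -x <= 0
    rhs = b + [0, 0]
    best = None
    for i, j in combinations(range(len(rows)), 2):
        (a1, b1), (a2, b2) = rows[i], rows[j]
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            continue  # parallel boundaries, no vertex
        x = (rhs[i] * b2 - rhs[j] * b1) / det
        y = (a1 * rhs[j] - a2 * rhs[i]) / det
        if all(r[0] * x + r[1] * y <= rr + 1e-9 for r, rr in zip(rows, rhs)):
            value = c[0] * x + c[1] * y
            if best is None or value > best[0]:
                best = (value, x, y)
    return best

# Maximize 3x + 5y subject to 2x + y <= 10 and x + 3y <= 15.
value, x, y = solve_lp([3, 5], [[2, 1], [1, 3]], [10, 15])
```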
Testing the model and the solution
A model may be deficient because it includes irrelevant variables, excludes relevant variables, contains inaccurately evaluated variables, is incorrectly structured, or contains incorrectly formulated constraints. Tests for deficiencies of a model are statistical in nature; their use requires knowledge of sampling and estimation theory, experimental designs, and the theory of hypothesis testing (see also statistics).
Sampling-estimation theory is concerned with selecting a sample of items from a large group and using their observed properties to characterize the group as a whole. To save time and money, the sample taken is as small as possible. Several theories of sampling design and estimation are available, each yielding estimates with different properties.
The structure of a model consists of a function relating the measure of performance to the controlled and uncontrolled variables; for example, a business may attempt to show the functional relationship between profit levels (the measure of performance) and controlled variables (prices, amount spent on advertising) and uncontrolled variables (economic conditions, competition). In order to test the model, values of the measure of performance computed from the model are compared with actual values under different sets of conditions. If there is a significant difference between these values, or if the variability of these differences is large, the model requires repair. Such tests do not use data that have been used in constructing the model, because to do so would determine how well the model fits performance data from which it has been derived, not how well it predicts performance.
The solution derived from a model is tested to find whether it yields better performance than some alternative, usually the one in current use. The test may be prospective, against future performance, or retrospective, comparing solutions that would have been obtained had the model been used in the past with what actually did happen. If neither prospective nor retrospective testing is feasible, it may be possible to evaluate the solution by “sensitivity analysis,” a measurement of the extent to which estimates used in the solution would have to be in error before the proposed solution performs less satisfactorily than the alternative decision procedure.
The cost of implementing a solution should be subtracted from the gain expected from applying it, thus obtaining an estimate of net improvement. Where errors or inefficiencies in applying the solution are possible, these should also be taken into account in estimating the net improvement.
Implementing and controlling the solution
The acceptance of a recommended solution by the responsible manager depends on the extent to which he believes the solution to be superior to alternatives. This in turn depends on his faith in the researchers involved and their methods. Hence, participation by managers in the research process is essential for success.
Operations researchers are normally expected to oversee implementation of an accepted solution. This provides them with an ultimate test of their work and an opportunity to make adjustments if any deficiencies should appear in application. The operations research team prepares detailed instructions for those who will carry out the solution and trains them in following these instructions. The cooperation of those who carry out the solution and those who will be affected by it should be sought in the course of the research process, not after everything is done. Implementation plans and schedules are pretested and deficiencies corrected. Actual performance of the solution is compared with expectations and, where divergence is significant, the reasons for it are determined and appropriate adjustments made.
The solution may fail to yield expected performance for one or a combination of reasons: the model may be wrongly constructed or used; the data used in making the model may be incorrect; the solution may be incorrectly carried out; the system or its environment may have changed in unexpected ways after the solution was applied. Corrective action is required in each case.
Controlling a solution requires deciding what constitutes a significant deviation in performance from expectations; determining the frequency of control checks, the size and type of sample of observations to be made, and the types of analyses of the resulting data that should be carried out; and taking appropriate corrective action. The second step should be designed to minimize the sum of the costs of carrying out the control procedures and the errors that might be involved.
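A simple version of the first step, deciding what counts as a significant deviation, is a control-chart-style rule (an illustration chosen here, not the text's prescription): flag any observation more than k standard deviations from the performance the model predicts. The expected level, standard deviation, and weekly figures below are hypothetical.

```python
def significant_deviations(observations, expected, sd, k=3.0):
    """Flag observations more than k standard deviations away from the
    performance the model predicts (a control-chart-style rule)."""
    return [i for i, x in enumerate(observations)
            if abs(x - expected) > k * sd]

# Hypothetical weekly performance figures against an expected level of
# 100 with an estimated standard deviation of 2 (from model testing).
weekly = [101, 99, 100, 102, 98, 100, 100, 85]
flagged = significant_deviations(weekly, expected=100, sd=2.0, k=3.0)
```

Choosing k trades off the two costs the text mentions: a small k triggers frequent, expensive checks; a large k lets real deviations go uncorrected for longer.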
Since most models involve a variety of assumptions, these are checked systematically. Such checking requires explicit formulation of the assumptions made during construction of the model.
Effective controls not only keep a solution performing as intended but often lead to a better understanding of the dynamics of the system involved. Through controls the problem-solving system of which operations research is a part learns from its own experience and adapts more effectively to changing conditions.