linear regression


linear regression, in statistics, a process for determining a line that best represents the general trend of a data set.

The simplest form of linear regression involves two variables: y, the dependent variable, and x, the independent variable. The equation developed is of the form y = mx + b, where m is the slope of the regression line (also called the regression coefficient) and b is the y-intercept, the point where the line crosses the y-axis. The equation for the regression line can be found using the least squares method, where m = (nΣxy − ΣxΣy)/(nΣx² − (Σx)²) and b = (Σy − mΣx)/n. The symbol Σ indicates a summation over all data points, and n is the number of data points.
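The summation formulas above translate directly into a short function. This is a minimal sketch; the sample points below are made up for illustration and were chosen to lie exactly on the line y = 2x + 1.

```python
# Least squares slope m and intercept b, computed from the summation
# formulas: m = (nΣxy − ΣxΣy)/(nΣx² − (Σx)²) and b = (Σy − mΣx)/n.
def least_squares(xs, ys):
    n = len(xs)
    sum_x = sum(xs)
    sum_y = sum(ys)
    sum_xy = sum(x * y for x, y in zip(xs, ys))
    sum_x2 = sum(x * x for x in xs)
    m = (n * sum_xy - sum_x * sum_y) / (n * sum_x2 - sum_x ** 2)
    b = (sum_y - m * sum_x) / n
    return m, b

# Illustrative data: points lying exactly on y = 2x + 1
m, b = least_squares([1, 2, 3, 4], [3, 5, 7, 9])
print(m, b)  # 2.0 1.0
```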

When a linear correlation exists in the data, a regression line can be found that will represent the line of best fit for the data. This equation can then be used to predict values not collected in the original data set.

It is often useful to create a graph of the collected data to see if there is a likely correlation within the data set before finding the equation of the regression line. If the data points are scattered and show no sign of a relationship, any equation found using linear regression most likely will not yield useful information. The Pearson’s correlation coefficient for a data set can be calculated to assist with this process, and the resulting coefficient can be used to determine if it makes sense to find a regression line equation. A Pearson’s correlation coefficient that is close to +1 for a positive correlation or −1 for a negative correlation indicates that it makes sense to use linear regression.

As an illustration of regression analysis and the least squares method, suppose a university medical centre is investigating the relationship between stress and blood pressure. Assume that both a stress test score and a blood pressure reading have been recorded for a sample of 20 patients. The data are shown graphically in the figure, called a scatter diagram. Values of the independent variable, stress test score, are given on the horizontal axis, and values of the dependent variable, blood pressure, are shown on the vertical axis. The line passing through the data points is the graph of the estimated regression equation: y = 0.49x + 42.3. The parameter estimates, m = 0.49 and b = 42.3, were obtained using the least squares method.

A primary use of the estimated regression equation is to predict the value of the dependent variable when values for the independent variables are given. For instance, given a patient with a stress test score of 60, the predicted blood pressure is 0.49(60) + 42.3 = 71.7. The values predicted by the estimated linear regression equation are the points on the line in the figure, and the actual blood pressure readings are represented by the points scattered about the line. The difference between the observed value of y and the value of y predicted by the estimated regression equation is called a residual. The least squares method chooses the parameter estimates such that the sum of the squared residuals is minimized.
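The prediction and residual computation can be sketched with the estimated equation from the example, y = 0.49x + 42.3. The observed reading of 75 below is a hypothetical value for illustration, not a figure from the study.

```python
# Prediction from the estimated regression equation y = 0.49x + 42.3
# given in the text (m = 0.49, b = 42.3 from the least squares method).
def predict(x, m=0.49, b=42.3):
    return m * x + b

predicted = predict(60)   # 0.49 * 60 + 42.3 = 71.7
observed = 75.0           # hypothetical observed blood pressure reading
residual = observed - predicted  # observed minus predicted value of y
print(predicted, residual)
```

The least squares method selects m and b so that the sum of these squared residuals, taken over all data points, is as small as possible.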

Ken Stewart