F-score

Also called:
F-measure
Related Topics:
precision
F1-score
recall

F-score, a metric for evaluating the accuracy of a binary classification model. It combines the precision and recall of an algorithm into a single value.

A binary classification model classifies items as one of two values—for example, “yes” or “no.” Precision is the fraction of a model’s returned “yes” values that are actual “yes” values. Recall is the fraction of a dataset’s actual “yes” values that the model includes in its classified group of “yes” values. The F-score is considered helpful for determining how well an algorithm classifies a dataset into two categories, and it is widely used despite criticisms.
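Precision and recall can be stated directly in terms of the counts of correct and incorrect “yes” predictions. The following is a minimal sketch in Python; the count names (true positives, false positives, false negatives) are standard terminology rather than terms used in this article.

```python
def precision(true_pos: int, false_pos: int) -> float:
    # Fraction of the model's "yes" predictions that are actually "yes".
    return true_pos / (true_pos + false_pos)

def recall(true_pos: int, false_neg: int) -> float:
    # Fraction of the dataset's actual "yes" items that the model labels "yes".
    return true_pos / (true_pos + false_neg)
```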

The F-score describes an algorithm’s performance on a scale of 0 to 1. An F-score of 1 indicates a perfect algorithm, and an F-score of 0 indicates an algorithm that has failed completely in either recall, precision, or both. An algorithm’s F-score is calculated using F-score = 2 (precision × recall)/(precision + recall).
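The formula translates directly into a small Python function. The sketch below is illustrative; the guard against a zero denominator is an added convention for the degenerate case in which both precision and recall are 0.

```python
def f_score(precision: float, recall: float) -> float:
    # Harmonic mean of precision and recall.
    if precision + recall == 0:
        return 0.0  # complete failure in both precision and recall
    return 2 * (precision * recall) / (precision + recall)
```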

Suppose a model looks at pictures of apples and oranges and identifies the apples in each picture. In one case, the model is given a picture with eight apples and four oranges but states that the picture contains four apples, one of which is actually an orange. The precision of the model is 3/4, because three of the four items the model identified as apples are actually apples. The recall of the model is 3/8, because the model identified only three of the eight apples in the picture. Therefore the F-score is 2(3/4 × 3/8)/(3/4 + 3/8), or 8/16 = 0.5.
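The same arithmetic can be checked in a few lines of Python, using the counts implied by the example (three correctly identified apples, one orange mistaken for an apple, and five apples missed):

```python
true_pos = 3   # apples the model correctly identified
false_pos = 1  # the orange the model mistook for an apple
false_neg = 5  # apples in the picture that the model missed

p = true_pos / (true_pos + false_pos)   # precision = 3/4 = 0.75
r = true_pos / (true_pos + false_neg)   # recall = 3/8 = 0.375
f = 2 * (p * r) / (p + r)
print(p, r, f)  # 0.75 0.375 0.5
```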

When an F-score is calculated as a harmonic mean of precision and recall, it is sometimes called an F1-score. This is a balanced F-score that weights precision and recall equally. However, the formula can be modified to give more weight to either precision or recall.
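The usual weighted generalization is the Fβ-score, in which a parameter β controls the balance: β greater than 1 favors recall, β less than 1 favors precision, and β = 1 recovers the balanced F1-score. The sketch below uses the standard Fβ formula, which is not given in this article:

```python
def f_beta(precision: float, recall: float, beta: float) -> float:
    # beta > 1 weights recall more heavily; beta < 1 weights precision more heavily.
    b2 = beta ** 2
    denom = b2 * precision + recall
    if denom == 0:
        return 0.0
    return (1 + b2) * (precision * recall) / denom
```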

Some statisticians note that because the F-score does not indicate whether recall or precision played the greater role in determining the final score, it can be misleading when assessing an algorithm or comparing multiple algorithms. For example, a model with poor recall and a model with poor precision could have the same F-score even though they struggle with different aspects of classification.
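A brief numerical illustration (the numbers are hypothetical, not drawn from this article): a model with precision 0.9 and recall 0.3 and a model with precision 0.3 and recall 0.9 receive the same F-score of about 0.45.

```python
def f1(p: float, r: float) -> float:
    return 2 * p * r / (p + r)

print(f1(0.9, 0.3))  # ≈ 0.45 (high precision, poor recall)
print(f1(0.3, 0.9))  # ≈ 0.45 (poor precision, high recall)
```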

Another problem is that the F-score tracks only “yes” cases, so it gives no credit for the proportion of “no” cases that an algorithm identifies correctly. This omission matters little in applications such as retrieving relevant information from documents, but it can be critical in applications such as medical diagnosis.
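This limitation can be illustrated by reusing the apple-and-orange counts and varying the number of correctly rejected “no” cases (true negatives); the specific true-negative counts below are hypothetical. The F-score is unchanged, while overall accuracy changes dramatically.

```python
tp, fp, fn = 3, 1, 5  # counts from the apple example

def f1_from_counts(tp: int, fp: int, fn: int) -> float:
    # The F-score written directly in terms of counts; true negatives never appear.
    return 2 * tp / (2 * tp + fp + fn)

for tn in (0, 1000):  # true negatives: correctly identified "no" cases
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    print(tn, f1_from_counts(tp, fp, fn), round(accuracy, 3))
# F-score stays 0.5 in both cases; accuracy rises from about 0.333 to about 0.994
```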

Statisticians point out that using this metric requires understanding its limitations, and some have suggested modifications to make the F-score more useful. The F-score has also been adapted for use beyond binary classification problems, with the emergence of multi-class models that classify data into more than two possible outcomes.
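One common multi-class adaptation is the macro-averaged F1-score, in which a one-versus-rest F1-score is computed for each class and the results are averaged. The following is a minimal sketch under that assumption; the function name and example labels are illustrative, and libraries such as scikit-learn provide equivalent, better-tested implementations.

```python
def macro_f1(y_true, y_pred):
    # One-vs-rest F1 per class, then the unweighted (macro) average.
    classes = sorted(set(y_true) | set(y_pred))
    scores = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        scores.append(0.0 if tp == 0 else 2 * tp / (2 * tp + fp + fn))
    return sum(scores) / len(scores)

print(macro_f1(["apple", "orange", "pear"], ["apple", "pear", "pear"]))  # ≈ 0.556
```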

Calculation of an F-score to assess recall and precision was originally used to evaluate information retrieval from documents but is now used for evaluating classification algorithms in areas such as machine learning, natural-language processing, computer vision, record linking, and data analytics. One common use is in medical data analysis, for which an F-score can be used to design models that can predict medical outcomes or that can help to determine what factors influence patient safety.

Karin Akre