A tradeoff means that increasing one quantity leads to a decrease in the other. Let us explain this in the context of binary classification, and first define precision and recall. Call one class positive and the other negative. Then,

- TP represents the true positives, the number of instances predicted as positive that are actually positive.
- FP represents the false positives, the number of negative instances incorrectly classified as positive, i.e. they were predicted as positive though they belong to the negative class.
- TN represents the true negatives, the number of negative instances correctly classified as negative.
- FN represents the false negatives, the number of positive instances incorrectly classified as negative, i.e. they were predicted as negative though they belong to the positive class.
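The four counts above can be computed directly from a list of actual and predicted labels. The following sketch uses a small made-up set of labels purely for illustration:

```python
# Toy example (hypothetical data): counting TP, FP, TN, FN for a binary classifier.
# y_true holds the actual labels, y_pred the predicted labels (1 = positive, 0 = negative).
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # true negatives
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives

print(tp, fp, tn, fn)  # 3 1 3 1
```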

**Precision** is the fraction of correct positive predictions among all predicted positives, TP / (TP + FP). It is also called the accuracy of the positive predictions.

**Recall** is the fraction of correctly identified positives among all actual positives in the dataset, TP / (TP + FN). It indicates how many of the dataset's actual positives were covered (classified correctly) by the predictions.
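Both metrics follow directly from the four counts. A minimal sketch, using assumed toy counts (TP = 3, FP = 1, FN = 1) for illustration:

```python
# Precision and recall from the confusion counts (toy numbers, assumed for illustration).
tp, fp, fn = 3, 1, 1

precision = tp / (tp + fp)  # fraction of predicted positives that are correct
recall = tp / (tp + fn)     # fraction of actual positives that were found

print(precision)  # 0.75
print(recall)     # 0.75
```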

In an ideal scenario with perfectly separable data, both precision and recall can reach the maximum value of 1.0. In most practical situations, however, the dataset is noisy and not perfectly separable: some points of the positive class lie close to the negative class and vice versa. In such cases, shifting the decision boundary can increase either precision or recall, but not both; improving one degrades the other. In other words, a binary classifier will always misclassify some points, labeling some negative points as positive or some positive points as negative, and this miss rate compromises either the precision or the recall score.

The precision-recall tradeoff arises when one of the two metrics is increased while the model itself stays the same. This is possible, for instance, by changing the decision threshold of the classifier. Fig. 1 plots precision and recall for a binary classifier as a function of this threshold, which determines the decision boundary. When the threshold is 0, precision and recall are both around 0.8. When the threshold is increased to around 200,000, precision reaches close to 0.95 but recall drops drastically to around 0.4. When the threshold is decreased to -200,000, recall increases to 0.95 but precision falls to 0.4. Note that increasing or decreasing the threshold is equivalent to shifting the decision boundary.
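The threshold mechanism above can be sketched in a few lines: sweep a threshold over the classifier's decision scores and recompute precision and recall at each setting. The scores and labels below are made up for illustration; with one noisy point on each side of the boundary, raising the threshold trades recall away for precision:

```python
# Sketch of the precision-recall tradeoff on toy data (scores and labels are
# hypothetical). A prediction is positive when the score meets the threshold.
scores = [-3.2, -1.5, -0.4, 0.1, 0.8, 1.9, 2.5, 3.1]  # assumed decision scores
labels = [0, 0, 1, 0, 1, 1, 1, 1]                     # actual classes

def precision_recall(threshold):
    preds = [1 if s >= threshold else 0 for s in scores]
    tp = sum(1 for p, t in zip(preds, labels) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(preds, labels) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(preds, labels) if p == 0 and t == 1)
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Raising the threshold increases precision but lowers recall.
for thr in (-1.0, 0.5, 2.0):
    p, r = precision_recall(thr)
    print(f"threshold={thr:+.1f}  precision={p:.2f}  recall={r:.2f}")
```

At threshold -1.0 every actual positive is caught (recall 1.0) at the cost of a false positive; at threshold 2.0 precision is perfect but most positives are missed.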

If you’re unsure which metric to choose, read here about the best strategy for choosing a metric.