Beyond Accuracy: Precision and Recall

After training a model, several metrics can measure its performance. Accuracy is the most common one, but other metrics capture aspects of performance that accuracy misses, especially on imbalanced datasets.

Given the four possible outcomes of a binary prediction:
True positive (TP): actually positive, predicted positive (correct)
False positive (FP): actually negative, predicted positive (type 1 error)
True negative (TN): actually negative, predicted negative (correct)
False negative (FN): actually positive, predicted negative (type 2 error)

                     Actually positive   Actually negative
Predicted positive   TP                  FP
Predicted negative   FN                  TN
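The four counts can be tallied directly from labels and predictions. A minimal sketch in plain Python, using made-up example data:

```python
# Hypothetical labels: 1 = positive class, 0 = negative class.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]  # ground truth
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]  # model predictions

# Tally each of the four outcomes by comparing truth and prediction.
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

print(tp, fp, tn, fn)  # 3 1 3 1
```

The four counts always sum to the total number of samples, which is a quick sanity check.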

$ Precision = \frac{TP}{TP+FP} $ (of all predicted positives, the fraction that are actually positive)
$ Recall = \frac{TP}{TP+FN} $ (of all actual positives, the fraction the model finds)
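The two formulas translate directly into code. A small sketch, assuming the hypothetical counts TP = 3, FP = 1, FN = 1:

```python
# Assumed counts from a hypothetical confusion matrix.
tp, fp, fn = 3, 1, 1

precision = tp / (tp + fp)  # correct positives among all positive predictions
recall = tp / (tp + fn)     # correct positives among all actual positives

print(precision, recall)  # 0.75 0.75
```

Note that precision and recall usually trade off against each other: lowering the decision threshold catches more positives (higher recall) but also produces more false positives (lower precision).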