
lcrcp3 · September 15, 2023

As titled.

NO.PZ2023040502000080

Question:

After running his model on the test set, Quinn produces a confusion matrix for evaluating the performance of the model (Exhibit 3). He reminds Wu that since the number of defaults in the dataset is likely much smaller than the number of non-defaults, this needs to be considered in evaluating model performance.


Using Exhibit 3 and Quinn’s reminder, the most appropriate measure of the accuracy of the model is:

Options:

A.

0.79

B.

0.86

C.

0.92

Explanation:

B is correct. Quinn reminds Wu that there are likely unequal class distributions in the dataset, making F1, the harmonic mean of precision and recall, a better measure of accuracy.

- Precision, P, measures what proportion of positive identifications were actually correct:
  - P = TP/(TP + FP), where TP = true positives and FP = false positives.
  - P = TP/(TP + FP) = 118/(118 + 32) = 0.7867 ≈ 0.79.

- Recall, R, measures what proportion of actual positives were identified correctly:
  - R = TP/(TP + FN), where FN = false negatives.
  - R = TP/(TP + FN) = 118/(118 + 8) = 0.9365 ≈ 0.94.

- F1 is the harmonic mean of precision and recall:
  - F1 = (2 × P × R)/(P + R) = (2 × 0.79 × 0.94)/(0.79 + 0.94) = 0.86.

A is incorrect. Calculating precision results in 0.79: P = (TP)/(TP + FP) = 118/(118 + 32) = 0.79.

C is incorrect. Accuracy is the percentage of correctly predicted classes out of all predictions:

- A = (TP + TN)/(TP + FP + TN + FN) = (118 + 320)/(118 + 32 + 320 + 8) = 0.92.
- When the class distributions in the dataset are unequal, as Quinn indicates, F1 is a better measure of the accuracy of the model.
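The arithmetic above can be checked with a short Python sketch. It uses the confusion-matrix counts stated in the explanation (TP = 118, FP = 32, FN = 8, TN = 320); the variable names are ours, not from the exhibit.

```python
# Worked check of the precision/recall/F1/accuracy figures.
# Counts taken from the explanation of Exhibit 3.
TP, FP, FN, TN = 118, 32, 8, 320

precision = TP / (TP + FP)                                 # 118/150 ≈ 0.79
recall = TP / (TP + FN)                                    # 118/126 ≈ 0.94
f1 = 2 * precision * recall / (precision + recall)         # harmonic mean ≈ 0.86
accuracy = (TP + TN) / (TP + FP + FN + TN)                 # 438/478 ≈ 0.92

print(f"Precision = {precision:.2f}")
print(f"Recall    = {recall:.2f}")
print(f"F1        = {f1:.2f}")
print(f"Accuracy  = {accuracy:.2f}")
```

Note that plain accuracy (0.92) is the highest of the four numbers even though the model misses defaults are rare, which is exactly why F1 (0.86) is the preferred measure here.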

How can we tell the question is not asking for (simple) accuracy?

1 answer

星星_品职助教 · September 18, 2023

Hi,

The question stem hints that "the number of defaults in the dataset is likely much smaller than the number of non-defaults." For this kind of imbalanced class distribution, the F1 score should be used.

This is exactly what the answer's explanation states.
