
dazttle · July 31, 2024

P value > critical value = positive?

NO.PZ2023040502000082

Question:

Azarov would like to incorporate machine learning (ML) models into the company’s analytical process. Azarov applies the ML model to the test dataset for Dataset XYZ, assuming a threshold p-value of 0.65. Exhibit 1 contains a sample of results from the test dataset corpus.


Based on Exhibit 1, the accuracy metric for Dataset XYZ’s test set sample is closest to:

Options:

A. 0.67

B. 0.70

C. 0.75

Explanation:

B is correct. Accuracy is the percentage of correctly predicted classes out of total predictions and is calculated as (TP + TN)/(TP + FP + TN + FN).

In order to obtain the values for true positive (TP), true negative (TN), false positive (FP), and false negative (FN), the predicted sentiment for the positive (Class “1”) and negative (Class “0”) classes is determined by comparing each individual target p-value with the threshold p-value of 0.65. If an individual target p-value is greater than the threshold p-value of 0.65, the predicted sentiment for that instance is positive (Class “1”); if it is less than the threshold, the predicted sentiment is negative (Class “0”). Actual sentiment and predicted sentiment are then classified as follows:
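This classification rule can be sketched in code. The p-values and actual labels below are hypothetical (Exhibit 1 is not reproduced in this thread); they are chosen only so that the resulting counts match the solution's confusion matrix (TP = 3, FP = 1, TN = 4, FN = 2).

```python
from collections import Counter

THRESHOLD = 0.65

# Hypothetical (target p-value, actual sentiment) pairs -- NOT the real
# Exhibit 1 values, just an illustrative sample matching the stated counts.
samples = [
    (0.75, 1), (0.80, 1), (0.90, 1),            # predicted 1, actual 1 -> TP
    (0.70, 0),                                   # predicted 1, actual 0 -> FP
    (0.10, 0), (0.20, 0), (0.30, 0), (0.40, 0),  # predicted 0, actual 0 -> TN
    (0.50, 1), (0.60, 1),                        # predicted 0, actual 1 -> FN
]

def classify(p_value, actual, threshold=THRESHOLD):
    """Apply the threshold rule, then label the instance TP/FP/TN/FN."""
    predicted = 1 if p_value > threshold else 0
    if predicted == 1:
        return "TP" if actual == 1 else "FP"
    return "TN" if actual == 0 else "FN"

counts = Counter(classify(p, a) for p, a in samples)
print(dict(counts))
```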


Exhibit 1, with added “Predicted Sentiment” and “Classification” columns, is presented below:


Based on the classification data obtained from Exhibit 1, a confusion matrix can be generated:


Using the data in the confusion matrix above, the accuracy metric is computed as follows:

Accuracy = (TP + TN)/(TP + FP + TN + FN).

Accuracy = (3 + 4)/(3 + 1 + 4 + 2) = 0.70.
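The same arithmetic, plugging the confusion-matrix counts from the solution into the formula:

```python
# Confusion-matrix counts from the solution for Dataset XYZ's test set sample.
TP, FP, TN, FN = 3, 1, 4, 2

accuracy = (TP + TN) / (TP + FP + TN + FN)
print(accuracy)  # 0.7
```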

A is incorrect because 0.67 is the F1 score, not the accuracy metric, for the sample of the test set for Dataset XYZ, based on Exhibit 1. To calculate the F1 score, the precision (P) and recall (R) ratios must first be calculated:

Precision (P) = TP/(TP + FP) = 3/(3 + 1) = 0.75.

Recall (R) = TP/(TP + FN) = 3/(3 + 2) = 0.60.

The F1 score is calculated as follows:

F1 score = (2 × P × R)/(P + R) = (2 × 0.75 × 0.60)/(0.75 + 0.60) = 0.667, or 0.67.
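As a quick check, the precision, recall, and F1 calculations above can be reproduced from the same confusion-matrix counts, showing why 0.67 (answer A) is the F1 score and 0.75 (answer C) is the precision ratio rather than the accuracy metric:

```python
# Confusion-matrix counts from the solution.
TP, FP, FN = 3, 1, 2

precision = TP / (TP + FP)   # 3/4 = 0.75  (answer C)
recall = TP / (TP + FN)      # 3/5 = 0.60
f1 = (2 * precision * recall) / (precision + recall)
print(round(f1, 2))  # 0.67  (answer A)
```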

C is incorrect because 0.75 is the precision ratio, not the accuracy metric, for the sample of the test set for Dataset XYZ, based on Exhibit 1. The precision ratio is calculated as follows:

Precision (P) = TP/(TP + FP) = 3/(3 + 1) = 0.75.



How should the p-value be understood here?

2 answers

品职助教_七七 · October 4, 2024

Hi, hard-working PZer:


@费尔南多 This question has already been explained above. Since you say you have "the same question", please point out specifically which part of the explanation is still unclear.

----------------------------------------------
It may be hard right now, but the feeling of having worked hard is truly rewarding. Keep going!

品职助教_七七 · July 31, 2024

Hi, thoughtful PZer:


1) If a given target p-value is greater than the threshold p-value of 0.65, the predicted result is 1; otherwise, the predicted result is 0.

2) Compare each predicted result with the actual result to obtain its classification. The comparison rules are shown in the table below:

3) Once all comparisons are done, count the TP, FP, FN, and TN instances, as in the table below:

4) From these counts, compute (TP + TN)/(TP + FP + TN + FN) to obtain "the accuracy metric" the question asks for.
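The four steps above can be sketched end to end as a single function. The `demo` data is hypothetical (Exhibit 1 is not reproduced in this thread), chosen only to match the solution's confusion-matrix counts:

```python
def accuracy_metric(pairs, threshold=0.65):
    """Compute accuracy from (target p-value, actual label) pairs."""
    tp = fp = tn = fn = 0
    for p_value, actual in pairs:
        predicted = 1 if p_value > threshold else 0   # step 1: threshold rule
        if predicted == 1 and actual == 1:            # steps 2-3: classify and count
            tp += 1
        elif predicted == 1 and actual == 0:
            fp += 1
        elif predicted == 0 and actual == 0:
            tn += 1
        else:
            fn += 1
    return (tp + tn) / (tp + fp + tn + fn)            # step 4: accuracy

# Ten hypothetical instances reproducing TP=3, FP=1, TN=4, FN=2:
demo = [(0.75, 1), (0.80, 1), (0.90, 1), (0.70, 0),
        (0.10, 0), (0.20, 0), (0.30, 0), (0.40, 0),
        (0.50, 1), (0.60, 1)]
print(accuracy_metric(demo))  # 0.7
```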

----------------------------------------------
Every hard-working moment is a limited edition. Keep going!

费尔南多 · September 30, 2024

I have the same question: why can't p &lt; critical value count as positive? Is there a rule for deciding which side should be treated as positive?
