
Evelynislost · 2022-02-04

What does this question have to do with regularization?

NO.PZ2021083101000018

Question:

Rivera suggests adjusting the model’s hyperparameters to improve performance.

Achler runs a grid search that compares the difference between the prediction error on both the training and the cross-validation datasets for various combinations of hyperparameter values. For the current values of hyperparameters, Achler observes that the prediction error on the training dataset is small, whereas the prediction error on the cross-validation dataset is significantly larger.

Based on Achler’s grid search analysis, the current model can be characterized as:

Options:

A.

underfitted

B.

having low variance

C.

exhibiting slight regularization

Explanation:

C is correct.

Slight regularization occurs when the prediction error on the training dataset is small, while the prediction error on the cross-validation dataset is significantly larger. This difference in error is variance. High variance error, which typically is due to too many features and model complexity, results in model overfitting.

A is incorrect. The current model has high variance, which results in model overfitting, not underfitting.

B is incorrect. The difference between the prediction error on the training dataset and the prediction error on the cross-validation dataset is high, which means that the current model has high variance, not low.
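To make the grid-search comparison in the vignette concrete, here is a minimal Python sketch. The dataset, the decision-tree model, and the hyperparameter grid are hypothetical stand-ins (the vignette does not specify any of them); the point is only how training error and cross-validation error are compared for each hyperparameter combination, and how a large gap between the two signals high variance.

```python
# Hypothetical illustration of the grid search described in the vignette:
# compare training error with cross-validation error for each hyperparameter
# combination; a small training error paired with a much larger CV error
# indicates high variance, i.e. overfitting.
from sklearn.datasets import make_regression
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeRegressor

# Synthetic data standing in for the (unspecified) dataset in the vignette.
X, y = make_regression(n_samples=300, n_features=20, noise=10.0, random_state=0)

# Hypothetical hyperparameter grid; the vignette does not name the model.
param_grid = {"max_depth": [2, 4, 8, 16], "min_samples_leaf": [1, 5, 20]}

search = GridSearchCV(
    DecisionTreeRegressor(random_state=0),
    param_grid,
    scoring="neg_mean_squared_error",
    cv=5,
    return_train_score=True,
)
search.fit(X, y)

for params, train_mse, cv_mse in zip(
    search.cv_results_["params"],
    -search.cv_results_["mean_train_score"],
    -search.cv_results_["mean_test_score"],
):
    gap = cv_mse - train_mse  # large gap -> high variance / overfitting
    print(f"{params}: train MSE={train_mse:.1f}, CV MSE={cv_mse:.1f}, gap={gap:.1f}")
```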


2 Answers

星星_品职助教 · 2023-08-15

@Amy pro

From the statement in the question, "the prediction error on the training dataset is small, whereas the prediction error on the cross-validation dataset is significantly larger," we can see that the model performs very well on the training set but poorly on the validation set. This means the model has fit the training data too closely, so it only works on the training set itself and fails to generalize to any other dataset. This is the classic case of overfitting.
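As a small self-contained illustration of the pattern described above (near-zero error on the training set but much larger error on held-out data), the sketch below fits a deliberately over-complex model to synthetic data. The data and the degree-15 polynomial are assumptions made purely for the demonstration and do not come from the original question.

```python
# Hypothetical demonstration of overfitting: a very flexible model fits the
# training sample almost perfectly but predicts held-out data poorly.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(60, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=60)

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.5, random_state=0)

# Deliberately over-complex model: degree-15 polynomial on 30 training points.
model = make_pipeline(PolynomialFeatures(degree=15), LinearRegression())
model.fit(X_train, y_train)

train_mse = mean_squared_error(y_train, model.predict(X_train))
val_mse = mean_squared_error(y_val, model.predict(X_val))
print(f"train MSE = {train_mse:.3f}, validation MSE = {val_mse:.3f}")
# Typically the validation MSE comes out far larger than the training MSE,
# which is exactly the high-variance / overfitting signature.
```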

星星_品职助教 · 2022-02-04

Hello,

From the statement in the question, "the prediction error on the training dataset is small, whereas the prediction error on the cross-validation dataset is significantly larger," we can conclude that the model suffers from overfitting.

Regularization is a technique that addresses overfitting by reducing the model's complexity. Since overfitting is present here, the model can only have a slight degree of regularization.

Put another way, if regularization had been applied extensively, the model would no longer have an overfitting problem.
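A brief sketch of that idea, using ridge regression on synthetic data as a stand-in (neither appears in the original question): increasing the penalty strength alpha reduces effective model complexity, which typically shrinks the gap between training error and cross-validation error, i.e. lowers variance.

```python
# Hypothetical example: regularization (here, a ridge penalty) counteracts
# overfitting by shrinking coefficients and reducing effective complexity.
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_validate
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

X, y = make_regression(n_samples=100, n_features=5, noise=20.0, random_state=1)

for alpha in [1e-6, 0.1, 1.0, 10.0, 100.0]:
    # Degree-3 polynomial expansion creates many spurious features on purpose.
    model = make_pipeline(PolynomialFeatures(degree=3), Ridge(alpha=alpha))
    scores = cross_validate(
        model, X, y,
        scoring="neg_mean_squared_error",
        cv=5,
        return_train_score=True,
    )
    train_mse = -scores["train_score"].mean()
    cv_mse = -scores["test_score"].mean()
    print(f"alpha={alpha:>8}: train MSE={train_mse:10.1f}, CV MSE={cv_mse:10.1f}")
# With virtually no regularization (alpha ~ 0) the training error is tiny while
# the CV error is much larger; heavier regularization trades a little training
# error for a smaller gap, i.e. lower variance.
```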

Amy pro · 2023-08-14

Could you explain in more detail why this is overfitting?

Related Questions

NO.PZ2021083101000018 (same question as above): What does regularization mean, and can regularization reduce overfitting?

2024-03-03 23:09 · 1 answer

NO.PZ2021083101000018 (same question as above): Is this covered in the lecture notes? How should I understand it?

2023-11-08 16:20 · 1 answer