梦梦 · August 19, 2024

About neural networks

NO.PZ2024030508000071

Question:

A team of data scientists at a global asset management firm is working on a neural network (NN) model to assess how the choice of metrics used by the firm to select equity investments relates to stock performance. In particular, the team designs the NN model to find the combination of weights for these metrics that generates the highest risk-adjusted return in the firm’s portfolios. The team optimizes the model during the training process by varying multiple parameters and observes that, during multiple runs with identical parameters, the loss function converges to different, yet still stable, values. Which of the following is the most appropriate action for the team to take to address this issue?

Options:

A. Construct a confusion matrix to train the model.

B. Apply adaptive boosting when training the model.

C. Decrease the learning rate of the model.

D. Increase the learning rate of the model.

Explanation:

C is correct. The size of the step taken in the gradient descent algorithm is known as the learning rate. If it is too large, the algorithm may oscillate from one side of the optimization valley to the other. Decreasing the learning rate therefore keeps the algorithm from bouncing between different local minima and from overshooting the global minimum of the loss function.

A is incorrect. The confusion matrix is a table that is constructed for model evaluation, specifically to present the possible outcomes and results of a predictive model wherein the output variable is binary categorical.

B is incorrect. Adaptive boosting, or AdaBoost, is an ensemble technique that improves a model’s performance by training it on the errors of previous iterations. AdaBoost operates differently from a neural network and is not directly concerned with finding local minima via gradient descent.

D is incorrect. As explained in C, increasing the learning rate could result in the gradient descent algorithm oscillating from one side of the optimization valley to the other, which is exactly the problem that needs to be addressed.
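The learning-rate effect described above can be illustrated with a minimal one-dimensional gradient descent sketch (a hypothetical quadratic loss for illustration only, not the firm’s actual model):

```python
def gradient_descent(grad, x0, lr, steps):
    """Plain gradient descent: repeatedly step against the gradient."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Hypothetical loss f(x) = x^2, whose gradient is 2x. The update rule
# becomes x <- x * (1 - 2*lr), so the learning rate fully determines behavior.
grad = lambda x: 2 * x

# Small learning rate: each step shrinks x by a factor of 0.8 -> stable convergence.
print(gradient_descent(grad, x0=1.0, lr=0.1, steps=100))   # ~0, converged

# Learning rate near the stability limit: x flips sign on every step,
# oscillating from one side of the valley to the other.
print(gradient_descent(grad, x0=1.0, lr=0.95, steps=5))    # ~ -0.59, still oscillating

# Learning rate too large: the iterates overshoot more each step and diverge.
print(abs(gradient_descent(grad, x0=1.0, lr=1.05, steps=50)))  # large, diverging
```

For this quadratic, any learning rate below 1.0 converges, anything above diverges; real loss surfaces have no such clean threshold, but the qualitative behavior is the same.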

Learning Objective: Understand how neural networks are constructed and how their weights are determined.

Reference: Global Association of Risk Professionals. Quantitative Analysis. New York, NY: Pearson, 2023, Chapter 15, Machine Learning and Prediction [QA-15].

Teacher, is C the correct answer because of the phrase "still stable"?

1 answer
Accepted answer

李坏_品职助教 · August 19, 2024

Hi, inquisitive PZer:


The key to this question is "during multiple runs with identical parameters, the loss function converges to different [values]". That is, with identical input parameters, the loss function converges to different outputs across runs. This happens when the learning rate is too large: the algorithm bounces back and forth between different local optima.


To fix this back-and-forth behavior, the learning rate should be decreased.
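The "identical parameters, different converged values" symptom can be reproduced with a toy loss that has two local minima (a hypothetical function chosen for illustration, not from the GARP reading): two runs with the same hyperparameters but slightly different starting points each settle into a stable value, yet not the same one.

```python
def minimize(grad, x0, lr=0.01, steps=500):
    """Gradient descent from a given starting point; returns the final iterate."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Hypothetical loss f(x) = x^4 - 3x^2 + x, which has two local minima
# (near x ≈ -1.30 and x ≈ +1.13) separated by a ridge near x ≈ 0.17.
grad = lambda x: 4 * x**3 - 6 * x + 1

# Two "runs" with identical hyperparameters but starting points on either
# side of the ridge: each converges to a stable value, but a different one.
run_a = minimize(grad, x0=0.10)   # settles near x ≈ -1.30
run_b = minimize(grad, x0=0.25)   # settles near x ≈ +1.13
print(round(run_a, 3), round(run_b, 3))
```

With a smaller learning rate (and, in practice, fixed random initialization), each run descends steadily into one basin instead of hopping between them, so repeated runs agree.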

----------------------------------------------
Even if the sun isn't coming toward us, we are heading toward it. Keep going!

梦梦 · August 19, 2024

Got it, thank you!
