[PyTorch] Deep Learning: How the Linear Regression Parameters w and b Are Updated via Backpropagation (with Derivations, Proofs, and Code)



2024-01-20 14:40 | Source: web compilation

Start from the per-sample loss $loss$:

$$loss = \frac{1}{2}(\hat{y}-y)^{2} = \frac{1}{2}\left[(Xw+b)-y\right]^{2}$$

Summing over the mini-batch $\mathcal{B}$ gives the batch loss $l$:

$$l = \sum_{i \in \mathcal{B}} loss = \frac{1}{2}\sum_{i \in \mathcal{B}} \left[(X_{i}w+b)-y_{i}\right]^{2}$$
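These two definitions can be written out directly in PyTorch. The sketch below uses a made-up mini-batch of 3 samples with 2 features; the values of `X`, `y`, `w`, and `b` are illustrative, not from the article:

```python
import torch

# Toy mini-batch: 3 samples, 2 features (illustrative values)
X = torch.tensor([[1.0, 2.0],
                  [3.0, 4.0],
                  [5.0, 6.0]])
y = torch.tensor([[5.0], [11.0], [17.0]])

w = torch.tensor([[1.0], [2.0]])  # weights, shape (2, 1)
b = torch.tensor(0.5)             # bias (scalar)

y_hat = X @ w + b               # predictions, shape (3, 1)
loss = 0.5 * (y_hat - y) ** 2   # per-sample loss, shape (3, 1)
l = loss.sum()                  # batch loss: sum of per-sample losses
print(l.item())
```

Each prediction here is off by 0.5, so each per-sample loss is $\frac{1}{2}(0.5)^2 = 0.125$ and the batch loss is 0.375.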

Using the chain rule, compute the partial derivative of $loss$ for sample $i$ with respect to $w_{1}$:

$$\frac{\partial loss}{\partial w_{1}} = \frac{\partial loss}{\partial \hat{y}} \frac{\partial \hat{y}}{\partial w_{1}} = \left[(X^{(i)}w+b)-y^{(i)}\right] x_{1}^{(i)}$$

Then the partial derivative of $l$ with respect to $w_{1}$:

$$\frac{\partial l}{\partial w_{1}} = \sum \frac{\partial loss}{\partial w_{1}} = \sum_{i \in \mathcal{B}} \left(x_{1}^{(i)} w_{1} + x_{2}^{(i)} w_{2} + b - y^{(i)}\right) x_{1}^{(i)}$$

The partial derivative with respect to $w_{2}$ follows the same pattern:

$$\frac{\partial l}{\partial w_{2}} = \sum \frac{\partial loss}{\partial w_{2}} = \sum_{i \in \mathcal{B}} \left(x_{1}^{(i)} w_{1} + x_{2}^{(i)} w_{2} + b - y^{(i)}\right) x_{2}^{(i)}$$
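The closed-form gradients derived above can be checked against PyTorch's autograd. In vector form they are $\nabla_{w} l = X^{\top}\big[(Xw+b)-y\big]$ and $\partial l/\partial b = \sum_{i}\big[(X_i w+b)-y^{(i)}\big]$. The sketch below compares both against `w.grad` and `b.grad`; the data values are illustrative:

```python
import torch

# Same toy mini-batch (illustrative values)
X = torch.tensor([[1.0, 2.0],
                  [3.0, 4.0],
                  [5.0, 6.0]])
y = torch.tensor([[5.0], [11.0], [17.0]])

w = torch.tensor([[1.0], [2.0]], requires_grad=True)
b = torch.tensor(0.5, requires_grad=True)

# Batch loss l, as defined above
l = (0.5 * ((X @ w + b) - y) ** 2).sum()
l.backward()  # autograd populates w.grad and b.grad

# Manual gradients from the derivation:
#   dl/dw_j = sum_i (X_i w + b - y_i) * x_j^(i)  ->  X^T @ err
#   dl/db   = sum_i (X_i w + b - y_i)            ->  err.sum()
with torch.no_grad():
    err = (X @ w + b) - y      # prediction error, shape (3, 1)
    grad_w = X.t() @ err       # shape (2, 1), should equal w.grad
    grad_b = err.sum()         # scalar, should equal b.grad

print(torch.allclose(grad_w, w.grad), torch.allclose(grad_b, b.grad))
```

With these values every error term is 0.5, so `grad_w` is `[[4.5], [6.0]]` and `grad_b` is 1.5, matching what autograd computes.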


