
PyTorch Tutorial 2.5: Automatic Differentiation


Recall from Section 2.4 that calculating derivatives is the crucial step in all of the optimization algorithms that we will use to train deep networks. While the calculations are straightforward, working them out by hand can be tedious and error-prone, and this problem only grows as our models become more complex.

Fortunately, all modern deep learning frameworks take this work off our plates by offering automatic differentiation (often shortened to autograd). As we pass data through each successive function, the framework builds a computational graph that tracks how each value depends on the others. To calculate derivatives, automatic differentiation works backwards through this graph by applying the chain rule. The computational algorithm for applying the chain rule in this fashion is called backpropagation.
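To make the chain-rule idea concrete, here is a minimal sketch (added for illustration; it is not part of the original page). We compose two functions in PyTorch, let autograd work backwards through the graph, and compare the result with the chain rule applied by hand.

import torch

x = torch.tensor(1.5, requires_grad=True)
u = x ** 2          # inner function
z = torch.sin(u)    # outer function
z.backward()        # backpropagation: apply the chain rule backwards through the graph

with torch.no_grad():
    manual = torch.cos(x ** 2) * 2 * x  # chain rule by hand: dz/dx = cos(x^2) * 2x
print(x.grad, manual)                   # the two values agree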

While autograd libraries have become a hot topic over the past decade, they have a long history. In fact, the earliest references to autograd date back over half a century (Wengert, 1964). The core ideas behind modern backpropagation date to a PhD thesis from 1980 (Speelpenning, 1980) and were further developed in the late 1980s (Griewank, 1989). While backpropagation has become the default method for computing gradients, it is not the only option. For instance, the Julia programming language employs forward propagation (Revels et al., 2016). Before exploring methods, let's first master the autograd package.
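The difference between the two modes can be sketched in a few lines of JAX, which provides both transforms. This example is an added illustration: the function f and the all-ones tangent vector are arbitrary choices, not taken from the original text.

from jax import grad, jvp, numpy as jnp

def f(x):
    return 2 * jnp.dot(x, x)

x = jnp.arange(4.0)

# Reverse mode (backpropagation): one backward pass yields the whole gradient.
grad(f)(x)                                 # 4 * x = [0., 4., 8., 12.]

# Forward mode: one forward pass yields a single directional derivative.
_, tangent = jvp(f, (x,), (jnp.ones(4),))  # gradient dotted with the tangent vector
tangent                                    # 0 + 4 + 8 + 12 = 24.0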

# PyTorch
import torch

# MXNet
from mxnet import autograd, np, npx
npx.set_np()

# JAX
from jax import numpy as jnp

# TensorFlow
import tensorflow as tf

2.5.1. A Simple Function

Suppose we are interested in differentiating the function y = 2x⊤x with respect to the column vector x. To start, we assign x an initial value.
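As a quick point of reference (this derivation is added here and is not part of the original page): since y = 2x⊤x is a quadratic form, its gradient is ∂y/∂x = 4x. For the initial value x = (0, 1, 2, 3) we should therefore expect the gradient (0, 4, 8, 12), which is exactly what each framework below reports.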

# PyTorch
x = torch.arange(4.0)
x
tensor([0., 1., 2., 3.])

Before we calculate the gradient of y with respect to x, we need a place to store it. In general, we avoid allocating new memory every time we take a derivative because deep learning requires successively computing derivatives with respect to the same parameters thousands or millions of times, and we might risk running out of memory. Note that the gradient of a scalar-valued function with respect to a vector x is vector-valued and has the same shape as x.

# PyTorch
# Can also create x = torch.arange(4.0, requires_grad=True)
x.requires_grad_(True)
x.grad # The gradient is None by default
# MXNet
x = np.arange(4.0)
x
array([0., 1., 2., 3.])


# We allocate memory for a tensor's gradient by invoking `attach_grad`
x.attach_grad()
# After we calculate a gradient taken with respect to `x`, we will be able to
# access it via the `grad` attribute, whose values are initialized with 0s
x.grad
array([0., 0., 0., 0.])
# JAX
x = jnp.arange(4.0)
x
No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)
Array([0., 1., 2., 3.], dtype=float32)
# TensorFlow
x = tf.range(4, dtype=tf.float32)
x
<tf.Tensor: shape=(4,), dtype=float32, numpy=array([0., 1., 2., 3.], dtype=float32)>


# TensorFlow records gradients for tf.Variable objects, so wrap x in a variable
x = tf.Variable(x)

We now calculate our function of x and assign the result to y.

# PyTorch
y = 2 * torch.dot(x, x)
y
tensor(28., grad_fn=<MulBackward0>)

We can now take the gradient of y with respect to x by calling its backward method. Next, we can access the gradient via x's grad attribute.

y.backward()
x.grad
tensor([ 0., 4., 8., 12.])
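As a quick check (added here, mirroring the derivation above), the computed gradient should equal 4 * x elementwise:

x.grad == 4 * x  # expected: tensor([True, True, True, True])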
# MXNet
# Our code is inside an `autograd.record` scope to build the computational
# graph
with autograd.record():
  y = 2 * np.dot(x, x)
y
array(28.)


y.backward()
x.grad
[09:38:36] src/base.cc:49: GPU context requested, but no GPUs found.
array([ 0., 4., 8., 12.])
# JAX
y = lambda x: 2 * jnp.dot(x, x)
y(x)
Array(28., dtype=float32)

We can now take the gradient of y with respect to x by passing it through the grad transform.

from jax import grad

# The `grad` transform returns a Python function that
# computes the gradient of the original function
x_grad = grad(y)(x)
x_grad
Array([ 0., 4., 8., 12.], dtype=float32)
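Note that grad(y) is itself an ordinary Python function, so the same transformed gradient function can be reused at other inputs (a small illustrative addition; the all-ones input is arbitrary):

x_grad_fn = grad(y)      # reusable gradient function
x_grad_fn(jnp.ones(4))   # Array([4., 4., 4., 4.], dtype=float32)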
# TensorFlow
# Record all computations onto a tape
with tf.GradientTape() as t:
  y = 2 * tf.tensordot(x, x, axes=1)
y
<tf.Tensor: shape=(), dtype=float32, numpy=28.0>

We can now calculate the gradient of y with respect to x by calling the gradient method.

x_grad = t.gradient(y, x)
x_grad
<tf.Tensor: shape=(4,), dtype=float32, numpy=array([ 0., 4., 8., 12.], dtype=float32)>
