
PyTorch Tutorial 6.4: Lazy Initialization


So far, it might seem that we got away with being sloppy in setting up our networks. Specifically, we did the following unintuitive things, which might not seem like they should work:

  • We defined the network architectures without specifying the input dimensionality.

  • We added layers without specifying the output dimension of the previous layer.

  • We even "initialized" these parameters before providing enough information to determine how many parameters our models should contain.

You might be surprised that our code runs at all. After all, there is no way the deep learning framework could tell what the input dimensionality of a network would be. The trick here is that the framework defers initialization, waiting until the first time we pass data through the model to infer the size of each layer on the fly.

Later on, when working with convolutional neural networks, this technique will become even more convenient, since the input dimensionality (i.e., the resolution of an image) will affect the dimensionality of each subsequent layer. Hence, the ability to set parameters without needing to know, at the time of writing the code, what the dimensionality is can greatly simplify the task of specifying and subsequently modifying our models. Next, we go deeper into the mechanics of initialization.
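As a preview, here is a minimal sketch (using PyTorch's lazy layer API, which the code below introduces) of how deferred initialization adapts a convolutional network to the input resolution; the flattened feature size is inferred rather than computed by hand:

import torch
from torch import nn

# A tiny CNN built entirely from lazy layers: neither the number of input
# channels nor the flattened feature size needs to be specified up front.
cnn = nn.Sequential(
    nn.LazyConv2d(8, kernel_size=3), nn.ReLU(),
    nn.Flatten(),
    nn.LazyLinear(10))

# The first forward pass fixes every shape: a 28x28 grayscale image yields
# 8 x 26 x 26 = 5408 flattened features feeding the final layer.
cnn(torch.rand(1, 1, 28, 28))
print(cnn[3].weight.shape)  # torch.Size([10, 5408])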

(Each framework variant is shown in turn: PyTorch, MXNet, JAX/Flax, and TensorFlow.)

# PyTorch
import torch
from torch import nn
from d2l import torch as d2l

# MXNet
from mxnet import np, npx
from mxnet.gluon import nn

npx.set_np()

# JAX/Flax
import jax
from flax import linen as nn
from jax import numpy as jnp
from d2l import jax as d2l

# TensorFlow
import tensorflow as tf

To begin, let's instantiate an MLP.

# PyTorch
net = nn.Sequential(nn.LazyLinear(256), nn.ReLU(), nn.LazyLinear(10))

# MXNet
net = nn.Sequential()
net.add(nn.Dense(256, activation='relu'))
net.add(nn.Dense(10))

# JAX/Flax
net = nn.Sequential([nn.Dense(256), nn.relu, nn.Dense(10)])

# TensorFlow
net = tf.keras.models.Sequential([
  tf.keras.layers.Dense(256, activation=tf.nn.relu),
  tf.keras.layers.Dense(10),
])

At this point, the network cannot possibly know the dimensions of the input layer's weights because the input dimension remains unknown. Consequently, the framework has not yet initialized any parameters. We confirm by attempting to access the parameters below.

# PyTorch
net[0].weight
<UninitializedParameter>
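What happens if we probe it anyway? As a quick PyTorch sketch (the exact error message may differ across versions), even asking for the shape of an uninitialized parameter raises a runtime error:

import torch
from torch import nn

net = nn.Sequential(nn.LazyLinear(256), nn.ReLU(), nn.LazyLinear(10))
try:
    net[0].weight.shape  # an UninitializedParameter has no shape yet
except RuntimeError as e:
    print(e)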


# MXNet
print(net.collect_params)
print(net.collect_params())
<bound method Block.collect_params of Sequential(
 (0): Dense(-1 -> 256, Activation(relu))
 (1): Dense(-1 -> 10, linear)
)>
sequential0_ (
 Parameter dense0_weight (shape=(256, -1), dtype=float32)
 Parameter dense0_bias (shape=(256,), dtype=float32)
 Parameter dense1_weight (shape=(10, -1), dtype=float32)
 Parameter dense1_bias (shape=(10,), dtype=float32)
)

Note that while the parameter objects exist, the input dimension to each layer is listed as -1. MXNet uses the special value -1 to indicate that the parameter dimension remains unknown. At this point, attempts to access net[0].weight.data() would trigger a runtime error stating that the network must be initialized before the parameters can be accessed. Now let’s see what happens when we attempt to initialize parameters via the initialize method.

net.initialize()
net.collect_params()
sequential0_ (
 Parameter dense0_weight (shape=(256, -1), dtype=float32)
 Parameter dense0_bias (shape=(256,), dtype=float32)
 Parameter dense1_weight (shape=(10, -1), dtype=float32)
 Parameter dense1_bias (shape=(10,), dtype=float32)
)

As we can see, nothing has changed. When input dimensions are unknown, calls to initialize do not truly initialize the parameters. Instead, this call registers with MXNet our wish to initialize the parameters (and, optionally, the distribution from which to draw them).

As mentioned in Section 6.2.1, parameters and the network definition are decoupled in JAX and Flax, and the user handles both manually. Flax models are stateless, hence there is no parameters attribute.

The situation is the same in TensorFlow: the framework has not yet initialized any parameters, which we confirm by attempting to access them below.

# TensorFlow
[net.layers[i].get_weights() for i in range(len(net.layers))]
[[], []]

Note that while the layer objects exist, the weights are empty. Using net.get_weights() would throw an error since the weights have not been initialized yet.

Next let's pass data through the network to make the framework finally initialize the parameters.

# PyTorch
X = torch.rand(2, 20)
net(X)

net[0].weight.shape
torch.Size([256, 20])

# MXNet
X = np.random.uniform(size=(2, 20))
net(X)

net.collect_params()
sequential0_ (
 Parameter dense0_weight (shape=(256, 20), dtype=float32)
 Parameter dense0_bias (shape=(256,), dtype=float32)
 Parameter dense1_weight (shape=(10, 256), dtype=float32)
 Parameter dense1_bias (shape=(10,), dtype=float32)
)

# JAX/Flax
params = net.init(d2l.get_key(), jnp.zeros((2, 20)))
jax.tree_util.tree_map(lambda x: x.shape, params).tree_flatten()
(({'params': {'layers_0': {'bias': (256,), 'kernel': (20, 256)},
  'layers_2': {'bias': (10,), 'kernel': (256, 10)}}},),
 ())

# TensorFlow
X = tf.random.uniform((2, 20))
net(X)
[w.shape for w in net.get_weights()]
[(20, 256), (256,), (256, 10), (10,)]

As soon as we know the input dimensionality, 20, the framework can identify the shape of the first layer's weight matrix by plugging in the value 20. Having recognized the first layer's shape, the framework proceeds to the second layer, and so on through the computational graph until all shapes are known. Note that in this case only the first layer requires lazy initialization, but the framework initializes sequentially. Once all parameter shapes are known, the framework can finally initialize the parameters.
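To see this sequential inference concretely, here is a short PyTorch sketch confirming that the second layer's shape was derived from the first layer's output:

import torch
from torch import nn

net = nn.Sequential(nn.LazyLinear(256), nn.ReLU(), nn.LazyLinear(10))
net(torch.rand(2, 20))        # the first forward pass triggers initialization
print(net[0].weight.shape)    # torch.Size([256, 20]): inferred from the input
print(net[2].weight.shape)    # torch.Size([10, 256]): inferred from layer 0's output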

The following method passes in dummy inputs to run a forward pass through the network, inferring all parameter shapes and subsequently initializing the parameters. It will be used later when default random initializations are not desired.

@d2l.add_to_class(d2l.Module) #@save
def apply_init(self, inputs, init=None):
  self.forward(*inputs)
  if init is not None:
    self.net.apply(init)
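As a usage sketch, the MLP class and init_constant function below are hypothetical illustrations (not part of the book's code) of how apply_init combines the dry run with a custom initializer:

class MLP(d2l.Module):  # hypothetical minimal d2l.Module subclass
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.LazyLinear(256), nn.ReLU(), nn.LazyLinear(10))

def init_constant(module):  # hypothetical initializer for illustration
    if isinstance(module, nn.Linear):
        nn.init.constant_(module.weight, 1)
        nn.init.zeros_(module.bias)

model = MLP()
model.apply_init([torch.rand(2, 20)], init_constant)  # dry run, then initialize
print(model.net[0].weight.shape)  # torch.Size([256, 20])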

Parameter initialization in Flax is always done manually and handled by the user. The following method takes a dummy input and a key dictionary as arguments. This key dictionary has the rngs for initializing the model parameters and a dropout rng for generating the dropout mask for models with dropout layers. More about dropout will be covered later in Section 5.6. Ultimately, the method initializes the model, returning the parameters. We have been using it under the hood in the previous sections as well.

@d2l.add_to_class(d2l.Module) #@save
def apply_init(self, dummy_input, key):
  params = self.init(key, *dummy_input) # dummy_input tuple unpacked
  return params

6.4.1. Summary

Lazy initialization can be convenient, allowing the framework to infer parameter shapes automatically, making it easy to modify architectures and eliminating one common source of errors. We can pass data through the model to make the framework finally initialize the parameters.

6.4.2. Exercises

  1. What happens if you specify the input dimensions to the first layer but not to subsequent layers? Do you get immediate initialization?

  2. What happens if you specify mismatching dimensions?

  3. What would you need to do if you have input of varying dimensionality? Hint: look at parameter tying.
