PyTorch Tutorial - 14.4. Anchor Boxes


Object detection algorithms usually sample a large number of regions in the input image, determine whether these regions contain objects of interest, and adjust the region boundaries so as to predict the ground-truth bounding boxes of the objects more accurately. Different models may adopt different region sampling schemes. Here we introduce one of these methods: it generates multiple bounding boxes with varying scales and aspect ratios centered on each pixel. These bounding boxes are called anchor boxes. We will design an object detection model based on anchor boxes in Section 14.7.

First, let's modify the printing precision for more concise outputs.

 

%matplotlib inline
import torch
from d2l import torch as d2l

torch.set_printoptions(2) # Simplify printing accuracy

 

 

%matplotlib inline
from mxnet import gluon, image, np, npx
from d2l import mxnet as d2l

np.set_printoptions(2) # Simplify printing accuracy
npx.set_np()

 

14.4.1. Generating Multiple Anchor Boxes

Suppose that the input image has height $h$ and width $w$. We generate anchor boxes with different shapes centered on each pixel of the image. Let the scale be $s \in (0, 1]$ and the aspect ratio (ratio of width to height) be $r > 0$. Then the width and height of the anchor box are $hs\sqrt{r}$ and $hs/\sqrt{r}$, respectively (so the box has width-to-height ratio $r$ and area $s^2h^2$). Note that when the center position is given, an anchor box with known width and height is determined.

To generate multiple anchor boxes with different shapes, let's set a series of scales $s_1, \ldots, s_n$ and a series of aspect ratios $r_1, \ldots, r_m$. When using all combinations of these scales and aspect ratios with each pixel as the center, the input image will have a total of $whnm$ anchor boxes. Although these anchor boxes may cover all the ground-truth bounding boxes, the computational complexity is easily too high. In practice, we only consider those combinations containing $s_1$ or $r_1$:

(14.4.1) $(s_1, r_1), (s_1, r_2), \ldots, (s_1, r_m), (s_2, r_1), (s_3, r_1), \ldots, (s_n, r_1).$

That is, the number of anchor boxes centered on the same pixel is $n + m - 1$. For the entire input image, we will generate a total of $wh(n + m - 1)$ anchor boxes.
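
As a quick sanity check of this count, the 561×728 image used in the example below, with three scales and three aspect ratios ($n = m = 3$), gives:

h, w, n, m = 561, 728, 3, 3
print(h * w * (n + m - 1)) # -> 2042040, matching the number of anchor boxes below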

The above method of generating anchor boxes is implemented in the following multibox_prior function. We specify the input image, a list of scales, and a list of aspect ratios, and this function will return all the anchor boxes.

 

#@save
def multibox_prior(data, sizes, ratios):
  """Generate anchor boxes with different shapes centered on each pixel."""
  in_height, in_width = data.shape[-2:]
  device, num_sizes, num_ratios = data.device, len(sizes), len(ratios)
  boxes_per_pixel = (num_sizes + num_ratios - 1)
  size_tensor = torch.tensor(sizes, device=device)
  ratio_tensor = torch.tensor(ratios, device=device)
  # Offsets are required to move the anchor to the center of a pixel. Since
  # a pixel has height=1 and width=1, we choose to offset our centers by 0.5
  offset_h, offset_w = 0.5, 0.5
  steps_h = 1.0 / in_height # Scaled steps in y axis
  steps_w = 1.0 / in_width # Scaled steps in x axis

  # Generate all center points for the anchor boxes
  center_h = (torch.arange(in_height, device=device) + offset_h) * steps_h
  center_w = (torch.arange(in_width, device=device) + offset_w) * steps_w
  shift_y, shift_x = torch.meshgrid(center_h, center_w, indexing='ij')
  shift_y, shift_x = shift_y.reshape(-1), shift_x.reshape(-1)

  # Generate `boxes_per_pixel` number of heights and widths that are later
  # used to create anchor box corner coordinates (xmin, xmax, ymin, ymax)
  w = torch.cat((size_tensor * torch.sqrt(ratio_tensor[0]),
          sizes[0] * torch.sqrt(ratio_tensor[1:]))) \
          * in_height / in_width # Handle rectangular inputs
  h = torch.cat((size_tensor / torch.sqrt(ratio_tensor[0]),
          sizes[0] / torch.sqrt(ratio_tensor[1:])))
  # Divide by 2 to get half height and half width
  anchor_manipulations = torch.stack((-w, -h, w, h)).T.repeat(
                    in_height * in_width, 1) / 2

  # Each center point will have `boxes_per_pixel` number of anchor boxes, so
  # generate a grid of all anchor box centers with `boxes_per_pixel` repeats
  out_grid = torch.stack([shift_x, shift_y, shift_x, shift_y],
        dim=1).repeat_interleave(boxes_per_pixel, dim=0)
  output = out_grid + anchor_manipulations
  return output.unsqueeze(0)

 

 

#@save
def multibox_prior(data, sizes, ratios):
  """Generate anchor boxes with different shapes centered on each pixel."""
  in_height, in_width = data.shape[-2:]
  device, num_sizes, num_ratios = data.ctx, len(sizes), len(ratios)
  boxes_per_pixel = (num_sizes + num_ratios - 1)
  size_tensor = np.array(sizes, ctx=device)
  ratio_tensor = np.array(ratios, ctx=device)
  # Offsets are required to move the anchor to the center of a pixel. Since
  # a pixel has height=1 and width=1, we choose to offset our centers by 0.5
  offset_h, offset_w = 0.5, 0.5
  steps_h = 1.0 / in_height # Scaled steps in y-axis
  steps_w = 1.0 / in_width # Scaled steps in x-axis

  # Generate all center points for the anchor boxes
  center_h = (np.arange(in_height, ctx=device) + offset_h) * steps_h
  center_w = (np.arange(in_width, ctx=device) + offset_w) * steps_w
  shift_x, shift_y = np.meshgrid(center_w, center_h)
  shift_x, shift_y = shift_x.reshape(-1), shift_y.reshape(-1)

  # Generate `boxes_per_pixel` number of heights and widths that are later
  # used to create anchor box corner coordinates (xmin, xmax, ymin, ymax)
  w = np.concatenate((size_tensor * np.sqrt(ratio_tensor[0]),
            sizes[0] * np.sqrt(ratio_tensor[1:]))) \
            * in_height / in_width # Handle rectangular inputs
  h = np.concatenate((size_tensor / np.sqrt(ratio_tensor[0]),
            sizes[0] / np.sqrt(ratio_tensor[1:])))
  # Divide by 2 to get half height and half width
  anchor_manipulations = np.tile(np.stack((-w, -h, w, h)).T,
                  (in_height * in_width, 1)) / 2

  # Each center point will have `boxes_per_pixel` number of anchor boxes, so
  # generate a grid of all anchor box centers with `boxes_per_pixel` repeats
  out_grid = np.stack([shift_x, shift_y, shift_x, shift_y],
             axis=1).repeat(boxes_per_pixel, axis=0)
  output = out_grid + anchor_manipulations
  return np.expand_dims(output, axis=0)

 

We can see that the shape of the returned anchor box variable Y is (batch size, number of anchor boxes, 4).

 

img = d2l.plt.imread('../img/catdog.jpg')
h, w = img.shape[:2]

print(h, w)
X = torch.rand(size=(1, 3, h, w)) # Construct input data
Y = multibox_prior(X, sizes=[0.75, 0.5, 0.25], ratios=[1, 2, 0.5])
Y.shape

 

 

561 728

 

 

torch.Size([1, 2042040, 4])

 

 

img = image.imread('../img/catdog.jpg').asnumpy()
h, w = img.shape[:2]

print(h, w)
X = np.random.uniform(size=(1, 3, h, w)) # Construct input data
Y = multibox_prior(X, sizes=[0.75, 0.5, 0.25], ratios=[1, 2, 0.5])
Y.shape

 

 

561 728

 

 

(1, 2042040, 4)

 

After changing the shape of the anchor box variable Y to (image height, image width, number of anchor boxes centered on the same pixel, 4), we can obtain all the anchor boxes centered on a specified pixel position. In the following, we access the first anchor box centered on (250, 250). It has four elements: the $(x, y)$-axis coordinates of the upper-left corner and the $(x, y)$-axis coordinates of the lower-right corner of the anchor box. The coordinate values of both axes are divided by the width and height of the image, respectively.

 

boxes = Y.reshape(h, w, 5, 4)
boxes[250, 250, 0, :]

 

 

tensor([0.06, 0.07, 0.63, 0.82])

 

 

boxes = Y.reshape(h, w, 5, 4)
boxes[250, 250, 0, :]

 

 

array([0.06, 0.07, 0.63, 0.82])

 

In order to show all the anchor boxes centered on one pixel in the image, we define the following show_bboxes function to draw multiple bounding boxes on the image.

 

#@save
def show_bboxes(axes, bboxes, labels=None, colors=None):
  """Show bounding boxes."""

  def make_list(obj, default_values=None):
    if obj is None:
      obj = default_values
    elif not isinstance(obj, (list, tuple)):
      obj = [obj]
    return obj

  labels = make_list(labels)
  colors = make_list(colors, ['b', 'g', 'r', 'm', 'c'])
  for i, bbox in enumerate(bboxes):
    color = colors[i % len(colors)]
    rect = d2l.bbox_to_rect(bbox.detach().numpy(), color)
    axes.add_patch(rect)
    if labels and len(labels) > i:
      text_color = 'k' if color == 'w' else 'w'
      axes.text(rect.xy[0], rect.xy[1], labels[i],
           va='center', ha='center', fontsize=9, color=text_color,
           bbox=dict(facecolor=color, lw=0))

 

 

#@save
def show_bboxes(axes, bboxes, labels=None, colors=None):
  """Show bounding boxes."""

  def make_list(obj, default_values=None):
    if obj is None:
      obj = default_values
    elif not isinstance(obj, (list, tuple)):
      obj = [obj]
    return obj

  labels = make_list(labels)
  colors = make_list(colors, ['b', 'g', 'r', 'm', 'c'])
  for i, bbox in enumerate(bboxes):
    color = colors[i % len(colors)]
    rect = d2l.bbox_to_rect(bbox.asnumpy(), color)
    axes.add_patch(rect)
    if labels and len(labels) > i:
      text_color = 'k' if color == 'w' else 'w'
      axes.text(rect.xy[0], rect.xy[1], labels[i],
           va='center', ha='center', fontsize=9, color=text_color,
           bbox=dict(facecolor=color, lw=0))

 

As we just saw, the coordinate values of the $x$ and $y$ axes in the variable boxes have been divided by the width and height of the image, respectively. When drawing anchor boxes, we need to restore their original coordinate values; thus, we define the variable bbox_scale below. Now we can draw all the anchor boxes centered on (250, 250) in the image. As you can see, the blue anchor box with a scale of 0.75 and an aspect ratio of 1 surrounds the dog in the image well.

 

d2l.set_figsize()
bbox_scale = torch.tensor((w, h, w, h))
fig = d2l.plt.imshow(img)
show_bboxes(fig.axes, boxes[250, 250, :, :] * bbox_scale,
      ['s=0.75, r=1', 's=0.5, r=1', 's=0.25, r=1', 's=0.75, r=2',
       's=0.75, r=0.5'])

 


 

d2l.set_figsize()
bbox_scale = np.array((w, h, w, h))
fig = d2l.plt.imshow(img)
show_bboxes(fig.axes, boxes[250, 250, :, :] * bbox_scale,
      ['s=0.75, r=1', 's=0.5, r=1', 's=0.25, r=1', 's=0.75, r=2',
       's=0.75, r=0.5'])

 


14.4.2. Intersection over Union (IoU)

We just mentioned that an anchor box surrounds the dog in the image "well". If the ground-truth bounding box of the object is known, how can "well" here be quantified? Intuitively, we can measure the similarity between the anchor box and the ground-truth bounding box. We know that the Jaccard index can measure the similarity between two sets. Given sets $A$ and $B$, their Jaccard index is the size of their intersection divided by the size of their union:

(14.4.2) $J(A, B) = \frac{|A \cap B|}{|A \cup B|}.$

In fact, we can consider the pixel area of any bounding box as a set of pixels. In this way, we can measure the similarity of two bounding boxes by the Jaccard index of their pixel sets. For two bounding boxes, we usually refer to their Jaccard index as intersection over union (IoU), which is the ratio of their intersection area to their union area, as shown in Fig. 14.4.1. The range of an IoU is between 0 and 1: 0 means that two bounding boxes do not overlap at all, while 1 indicates that the two bounding boxes are equal.


Fig. 14.4.1 IoU is the ratio of the intersection area to the union area of two bounding boxes.

For the remainder of this section, we will use IoU to measure the similarity between anchor boxes and ground-truth bounding boxes, and between different anchor boxes. Given two lists of anchor or bounding boxes, the following box_iou computes their pairwise IoU across these two lists.

 

#@save
def box_iou(boxes1, boxes2):
  """Compute pairwise IoU across two lists of anchor or bounding boxes."""
  box_area = lambda boxes: ((boxes[:, 2] - boxes[:, 0]) *
               (boxes[:, 3] - boxes[:, 1]))
  # Shape of `boxes1`, `boxes2`, `areas1`, `areas2`: (no. of boxes1, 4),
  # (no. of boxes2, 4), (no. of boxes1,), (no. of boxes2,)
  areas1 = box_area(boxes1)
  areas2 = box_area(boxes2)
  # Shape of `inter_upperlefts`, `inter_lowerrights`, `inters`: (no. of
  # boxes1, no. of boxes2, 2)
  inter_upperlefts = torch.max(boxes1[:, None, :2], boxes2[:, :2])
  inter_lowerrights = torch.min(boxes1[:, None, 2:], boxes2[:, 2:])
  inters = (inter_lowerrights - inter_upperlefts).clamp(min=0)
  # Shape of `inter_areas` and `union_areas`: (no. of boxes1, no. of boxes2)
  inter_areas = inters[:, :, 0] * inters[:, :, 1]
  union_areas = areas1[:, None] + areas2 - inter_areas
  return inter_areas / union_areas

 

 

#@save
def box_iou(boxes1, boxes2):
  """Compute pairwise IoU across two lists of anchor or bounding boxes."""
  box_area = lambda boxes: ((boxes[:, 2] - boxes[:, 0]) *
               (boxes[:, 3] - boxes[:, 1]))
  # Shape of `boxes1`, `boxes2`, `areas1`, `areas2`: (no. of boxes1, 4),
  # (no. of boxes2, 4), (no. of boxes1,), (no. of boxes2,)
  areas1 = box_area(boxes1)
  areas2 = box_area(boxes2)
  # Shape of `inter_upperlefts`, `inter_lowerrights`, `inters`: (no. of
  # boxes1, no. of boxes2, 2)
  inter_upperlefts = np.maximum(boxes1[:, None, :2], boxes2[:, :2])
  inter_lowerrights = np.minimum(boxes1[:, None, 2:], boxes2[:, 2:])
  inters = (inter_lowerrights - inter_upperlefts).clip(min=0)
  # Shape of `inter_areas` and `union_areas`: (no. of boxes1, no. of boxes2)
  inter_areas = inters[:, :, 0] * inters[:, :, 1]
  union_areas = areas1[:, None] + areas2 - inter_areas
  return inter_areas / union_areas
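
As a quick worked example (PyTorch version shown), consider two 2×2 boxes whose overlapping region is a unit square: the intersection area is 1 and the union area is 4 + 4 - 1 = 7, so the IoU should be about 1/7.

b1 = torch.tensor([[0.0, 0.0, 2.0, 2.0]])
b2 = torch.tensor([[1.0, 1.0, 3.0, 3.0]])
box_iou(b1, b2) # -> about 0.14, i.e., 1/7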

 

14.4.3. Labeling Anchor Boxes in Training Data

In a training dataset, we consider each anchor box as a training example. In order to train an object detection model, we need class and offset labels for each anchor box, where the former is the class of the object relevant to the anchor box and the latter is the offset of the ground-truth bounding box relative to the anchor box. During prediction, for each image we generate multiple anchor boxes, predict classes and offsets for all the anchor boxes, adjust their positions according to the predicted offsets to obtain the predicted bounding boxes, and finally only output those predicted bounding boxes that satisfy certain criteria.

As we know, an object detection training set comes with labels for the locations of ground-truth bounding boxes and the classes of their surrounded objects. To label any generated anchor box, we refer to the labeled location and class of its assigned ground-truth bounding box that is closest to the anchor box. In the following, we describe an algorithm for assigning the closest ground-truth bounding boxes to anchor boxes.

14.4.3.1. Assigning Ground-Truth Bounding Boxes to Anchor Boxes

Given an image, suppose that the anchor boxes are $A_1, A_2, \ldots, A_{n_a}$ and the ground-truth bounding boxes are $B_1, B_2, \ldots, B_{n_b}$, where $n_a \geq n_b$. Let's define a matrix $\mathbf{X} \in \mathbb{R}^{n_a \times n_b}$, whose element $x_{ij}$ in the $i$-th row and $j$-th column is the IoU of the anchor box $A_i$ and the ground-truth bounding box $B_j$. The algorithm consists of the following steps:

Find the largest element in matrix $\mathbf{X}$ and denote its row and column indices as $i_1$ and $j_1$, respectively. Then the ground-truth bounding box $B_{j_1}$ is assigned to the anchor box $A_{i_1}$. This is quite intuitive because $A_{i_1}$ and $B_{j_1}$ are the closest among all the pairs of anchor boxes and ground-truth bounding boxes. After the first assignment, discard all the elements in the $i_1$-th row and the $j_1$-th column in matrix $\mathbf{X}$.

Find the largest of the remaining elements in matrix $\mathbf{X}$ and denote its row and column indices as $i_2$ and $j_2$, respectively. We assign the ground-truth bounding box $B_{j_2}$ to the anchor box $A_{i_2}$ and discard all the elements in the $i_2$-th row and the $j_2$-th column in matrix $\mathbf{X}$.

At this point, elements in two rows and two columns of matrix $\mathbf{X}$ have been discarded. We proceed until all the elements in all $n_b$ columns of matrix $\mathbf{X}$ are discarded. At this time, we have assigned a ground-truth bounding box to each of $n_b$ anchor boxes.

It only remains to traverse the remaining $n_a - n_b$ anchor boxes. For example, given any anchor box $A_i$, find the ground-truth bounding box $B_j$ with the largest IoU with $A_i$ throughout the $i$-th row of matrix $\mathbf{X}$, and assign $B_j$ to $A_i$ only if this IoU is greater than a predefined threshold.

Let's illustrate the above algorithm with a concrete example. As shown in Fig. 14.4.2 (left), assuming that the maximum value in matrix $\mathbf{X}$ is $x_{23}$, we assign the ground-truth bounding box $B_3$ to the anchor box $A_2$. Then we discard all the elements in row 2 and column 3 of the matrix, find the largest $x_{71}$ among the remaining elements (shaded area), and assign the ground-truth bounding box $B_1$ to the anchor box $A_7$. Next, as shown in Fig. 14.4.2 (middle), discard all the elements in row 7 and column 1 of the matrix, find the largest $x_{54}$ among the remaining elements (shaded area), and assign the ground-truth bounding box $B_4$ to the anchor box $A_5$. Finally, as shown in Fig. 14.4.2 (right), discard all the elements in row 5 and column 4 of the matrix, find the largest $x_{92}$ among the remaining elements (shaded area), and assign the ground-truth bounding box $B_2$ to the anchor box $A_9$. After that, we only need to traverse the remaining anchor boxes $A_1, A_3, A_4, A_6, A_8$ and determine whether to assign ground-truth bounding boxes to them according to the threshold.


Fig. 14.4.2 Assigning ground-truth bounding boxes to anchor boxes.

This algorithm is implemented in the following assign_anchor_to_bbox function.

 

#@save
def assign_anchor_to_bbox(ground_truth, anchors, device, iou_threshold=0.5):
  """Assign closest ground-truth bounding boxes to anchor boxes."""
  num_anchors, num_gt_boxes = anchors.shape[0], ground_truth.shape[0]
  # Element x_ij in the i-th row and j-th column is the IoU of the anchor
  # box i and the ground-truth bounding box j
  jaccard = box_iou(anchors, ground_truth)
  # Initialize the tensor to hold the assigned ground-truth bounding box for
  # each anchor
  anchors_bbox_map = torch.full((num_anchors,), -1, dtype=torch.long,
                 device=device)
  # Assign ground-truth bounding boxes according to the threshold
  max_ious, indices = torch.max(jaccard, dim=1)
  anc_i = torch.nonzero(max_ious >= iou_threshold).reshape(-1)
  box_j = indices[max_ious >= iou_threshold]
  anchors_bbox_map[anc_i] = box_j
  col_discard = torch.full((num_anchors,), -1)
  row_discard = torch.full((num_gt_boxes,), -1)
  for _ in range(num_gt_boxes):
    max_idx = torch.argmax(jaccard) # Find the largest IoU
    box_idx = (max_idx % num_gt_boxes).long()
    anc_idx = (max_idx / num_gt_boxes).long()
    anchors_bbox_map[anc_idx] = box_idx
    jaccard[:, box_idx] = col_discard
    jaccard[anc_idx, :] = row_discard
  return anchors_bbox_map

 

 

#@save
def assign_anchor_to_bbox(ground_truth, anchors, device, iou_threshold=0.5):
  """Assign closest ground-truth bounding boxes to anchor boxes."""
  num_anchors, num_gt_boxes = anchors.shape[0], ground_truth.shape[0]
  # Element x_ij in the i-th row and j-th column is the IoU of the anchor
  # box i and the ground-truth bounding box j
  jaccard = box_iou(anchors, ground_truth)
  # Initialize the tensor to hold the assigned ground-truth bounding box for
  # each anchor
  anchors_bbox_map = np.full((num_anchors,), -1, dtype=np.int32, ctx=device)
  # Assign ground-truth bounding boxes according to the threshold
  max_ious, indices = np.max(jaccard, axis=1), np.argmax(jaccard, axis=1)
  anc_i = np.nonzero(max_ious >= iou_threshold)[0]
  box_j = indices[max_ious >= iou_threshold]
  anchors_bbox_map[anc_i] = box_j
  col_discard = np.full((num_anchors,), -1)
  row_discard = np.full((num_gt_boxes,), -1)
  for _ in range(num_gt_boxes):
    max_idx = np.argmax(jaccard) # Find the largest IoU
    box_idx = (max_idx % num_gt_boxes).astype('int32')
    anc_idx = (max_idx / num_gt_boxes).astype('int32')
    anchors_bbox_map[anc_idx] = box_idx
    jaccard[:, box_idx] = col_discard
    jaccard[anc_idx, :] = row_discard
  return anchors_bbox_map
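
To see the assignment in action, here is a minimal sketch (PyTorch version shown) with two hypothetical ground-truth boxes and three hypothetical anchor boxes: the first two anchors overlap one ground-truth box each with IoU above the default 0.5 threshold, while the third overlaps neither.

gt = torch.tensor([[0.1, 0.1, 0.5, 0.5], [0.6, 0.6, 0.9, 0.9]])
anc = torch.tensor([[0.1, 0.1, 0.45, 0.45],
          [0.6, 0.55, 0.95, 0.9],
          [0.0, 0.6, 0.3, 0.9]])
assign_anchor_to_bbox(gt, anc, device=torch.device('cpu'))
# -> assignments [0, 1, -1]; the third anchor box stays unassigned (background)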

 

14.4.3.2. Labeling Classes and Offsets

Now we can label the class and offset for each anchor box. Suppose that an anchor box $A$ is assigned a ground-truth bounding box $B$. On the one hand, the class of the anchor box $A$ will be labeled as that of $B$. On the other hand, the offset of the anchor box $A$ will be labeled according to the relative position between the central coordinates of $B$ and $A$, together with the relative size between these two boxes. Given varying positions and sizes of different boxes in the dataset, we can apply transformations to those relative positions and sizes that may lead to more uniformly distributed offsets that are easier to fit. Here we describe a common transformation. Given the central coordinates of $A$ and $B$ as $(x_a, y_a)$ and $(x_b, y_b)$, their widths as $w_a$ and $w_b$, and their heights as $h_a$ and $h_b$, respectively, we can label the offset of $A$ as

(14.4.3) $\left( \frac{\frac{x_b - x_a}{w_a} - \mu_x}{\sigma_x},\ \frac{\frac{y_b - y_a}{h_a} - \mu_y}{\sigma_y},\ \frac{\log \frac{w_b}{w_a} - \mu_w}{\sigma_w},\ \frac{\log \frac{h_b}{h_a} - \mu_h}{\sigma_h} \right),$

where the default values of the constants are $\mu_x = \mu_y = \mu_w = \mu_h = 0$, $\sigma_x = \sigma_y = 0.1$, and $\sigma_w = \sigma_h = 0.2$. This transformation is implemented below in the offset_boxes function.

 

#@save
def offset_boxes(anchors, assigned_bb, eps=1e-6):
  """Transform for anchor box offsets."""
  c_anc = d2l.box_corner_to_center(anchors)
  c_assigned_bb = d2l.box_corner_to_center(assigned_bb)
  offset_xy = 10 * (c_assigned_bb[:, :2] - c_anc[:, :2]) / c_anc[:, 2:]
  offset_wh = 5 * torch.log(eps + c_assigned_bb[:, 2:] / c_anc[:, 2:])
  offset = torch.cat([offset_xy, offset_wh], axis=1)
  return offset

 

 

#@save
def offset_boxes(anchors, assigned_bb, eps=1e-6):
  """Transform for anchor box offsets."""
  c_anc = d2l.box_corner_to_center(anchors)
  c_assigned_bb = d2l.box_corner_to_center(assigned_bb)
  offset_xy = 10 * (c_assigned_bb[:, :2] - c_anc[:, :2]) / c_anc[:, 2:]
  offset_wh = 5 * np.log(eps + c_assigned_bb[:, 2:] / c_anc[:, 2:])
  offset = np.concatenate([offset_xy, offset_wh], axis=1)
  return offset
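
As a quick check (PyTorch version shown), when an anchor box coincides with its assigned bounding box, all four offsets should be zero (up to the small eps inside the log):

anc = torch.tensor([[0.1, 0.1, 0.5, 0.5]])
offset_boxes(anc, anc) # -> approximately [[0., 0., 0., 0.]]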

 

If an anchor box is not assigned a ground-truth bounding box, we simply label the class of the anchor box as "background". Anchor boxes whose classes are background are often referred to as negative anchor boxes, and the rest are called positive anchor boxes. We implement the following multibox_target function to label classes and offsets for anchor boxes (the anchors argument) using ground-truth bounding boxes (the labels argument). This function sets the background class to zero and increments the integer index of a new class by one.

 

#@save
def multibox_target(anchors, labels):
  """Label anchor boxes using ground-truth bounding boxes."""
  batch_size, anchors = labels.shape[0], anchors.squeeze(0)
  batch_offset, batch_mask, batch_class_labels = [], [], []
  device, num_anchors = anchors.device, anchors.shape[0]
  for i in range(batch_size):
    label = labels[i, :, :]
    anchors_bbox_map = assign_anchor_to_bbox(
      label[:, 1:], anchors, device)
    bbox_mask = ((anchors_bbox_map >= 0).float().unsqueeze(-1)).repeat(
      1, 4)
    # Initialize class labels and assigned bounding box coordinates with
    # zeros
    class_labels = torch.zeros(num_anchors, dtype=torch.long,
                  device=device)
    assigned_bb = torch.zeros((num_anchors, 4), dtype=torch.float32,
                 device=device)
    # Label classes of anchor boxes using their assigned ground-truth
    # bounding boxes. If an anchor box is not assigned any, we label its
    # class as background (the value remains zero)
    indices_true = torch.nonzero(anchors_bbox_map >= 0)
    bb_idx = anchors_bbox_map[indices_true]
    class_labels[indices_true] = label[bb_idx, 0].long() + 1
    assigned_bb[indices_true] = label[bb_idx, 1:]
    # Offset transformation
    offset = offset_boxes(anchors, assigned_bb) * bbox_mask
    batch_offset.append(offset.reshape(-1))
    batch_mask.append(bbox_mask.reshape(-1))
    batch_class_labels.append(class_labels)
  bbox_offset = torch.stack(batch_offset)
  bbox_mask = torch.stack(batch_mask)
  class_labels = torch.stack(batch_class_labels)
  return (bbox_offset, bbox_mask, class_labels)

 

 

#@save
def multibox_target(anchors, labels):
  """Label anchor boxes using ground-truth bounding boxes."""
  batch_size, anchors = labels.shape[0], anchors.squeeze(0)
  batch_offset, batch_mask, batch_class_labels = [], [], []
  device, num_anchors = anchors.ctx, anchors.shape[0]
  for i in range(batch_size):
    label = labels[i, :, :]
    anchors_bbox_map = assign_anchor_to_bbox(
      label[:, 1:], anchors, device)
    bbox_mask = np.tile((np.expand_dims((anchors_bbox_map >= 0),
                      axis=-1)), (1, 4)).astype('int32')
    # Initialize class labels and assigned bounding box coordinates with
    # zeros
    class_labels = np.zeros(num_anchors, dtype=np.int32, ctx=device)
    assigned_bb = np.zeros((num_anchors, 4), dtype=np.float32,
                ctx=device)
    # Label classes of anchor boxes using their assigned ground-truth
    # bounding boxes. If an anchor box is not assigned any, we label its
    # class as background (the value remains zero)
    indices_true = np.nonzero(anchors_bbox_map >= 0)[0]
    bb_idx = anchors_bbox_map[indices_true]
    class_labels[indices_true] = label[bb_idx, 0].astype('int32') + 1
    assigned_bb[indices_true] = label[bb_idx, 1:]
    # Offset transformation
    offset = offset_boxes(anchors, assigned_bb) * bbox_mask
    batch_offset.append(offset.reshape(-1))
    batch_mask.append(bbox_mask.reshape(-1))
    batch_class_labels.append(class_labels)
  bbox_offset = np.stack(batch_offset)
  bbox_mask = np.stack(batch_mask)
  class_labels = np.stack(batch_class_labels)
  return (bbox_offset, bbox_mask, class_labels)

 

14.4.3.3. An Example

Let's illustrate anchor box labeling via a concrete example. We define ground-truth bounding boxes for the dog and cat in the loaded image, where the first element is the class (0 for dog and 1 for cat) and the remaining four elements are the $(x, y)$-axis coordinates of the upper-left corner and the lower-right corner (range is between 0 and 1). We also construct five anchor boxes to be labeled using the coordinates of the upper-left corner and the lower-right corner: $A_0, \ldots, A_4$ (the index starts from 0). Then we plot these ground-truth bounding boxes and anchor boxes in the image.

 

ground_truth = torch.tensor([[0, 0.1, 0.08, 0.52, 0.92],
             [1, 0.55, 0.2, 0.9, 0.88]])
anchors = torch.tensor([[0, 0.1, 0.2, 0.3], [0.15, 0.2, 0.4, 0.4],
          [0.63, 0.05, 0.88, 0.98], [0.66, 0.45, 0.8, 0.8],
          [0.57, 0.3, 0.92, 0.9]])

fig = d2l.plt.imshow(img)
show_bboxes(fig.axes, ground_truth[:, 1:] * bbox_scale, ['dog', 'cat'], 'k')
show_bboxes(fig.axes, anchors * bbox_scale, ['0', '1', '2', '3', '4']);

 


 

ground_truth = np.array([[0, 0.1, 0.08, 0.52, 0.92],
             [1, 0.55, 0.2, 0.9, 0.88]])
anchors = np.array([[0, 0.1, 0.2, 0.3], [0.15, 0.2, 0.4, 0.4],
          [0.63, 0.05, 0.88, 0.98], [0.66, 0.45, 0.8, 0.8],
          [0.57, 0.3, 0.92, 0.9]])

fig = d2l.plt.imshow(img)
show_bboxes(fig.axes, ground_truth[:, 1:] * bbox_scale, ['dog', 'cat'], 'k')
show_bboxes(fig.axes, anchors * bbox_scale, ['0', '1', '2', '3', '4']);

 


Using the multibox_target function defined above, we can label the classes and offsets of these anchor boxes based on the ground-truth bounding boxes of the dog and cat. In this example, the indices of the background, dog, and cat classes are 0, 1, and 2, respectively. Below we add a dimension for the examples of anchor boxes and ground-truth bounding boxes.

 

labels = multibox_target(anchors.unsqueeze(dim=0),
             ground_truth.unsqueeze(dim=0))

 

 

labels = multibox_target(np.expand_dims(anchors, axis=0),
             np.expand_dims(ground_truth, axis=0))

 

There are three items in the returned result, all of which are in tensor format. The third item contains the labeled classes of the input anchor boxes.

Let's analyze the returned class labels below based on the positions of the anchor boxes and ground-truth bounding boxes in the image. First, among all the pairs of anchor boxes and ground-truth bounding boxes, the IoU of the anchor box $A_4$ and the ground-truth bounding box of the cat is the largest. Thus, the class of $A_4$ is labeled as the cat. Taking out pairs containing $A_4$ or the ground-truth bounding box of the cat, among the rest the pair of the anchor box $A_1$ and the ground-truth bounding box of the dog has the largest IoU. So the class of $A_1$ is labeled as the dog. Next, we need to traverse the remaining three unlabeled anchor boxes: $A_0$, $A_2$, and $A_3$. For $A_0$, the class of the ground-truth bounding box with the largest IoU is the dog, but the IoU is below the predefined threshold (0.5), so the class is labeled as background; for $A_2$, the class of the ground-truth bounding box with the largest IoU is the cat and the IoU exceeds the threshold, so the class is labeled as the cat; for $A_3$, the class of the ground-truth bounding box with the largest IoU is the cat, but the value is below the threshold, so the class is labeled as background.

 

labels[2]

 

 

tensor([[0, 1, 2, 0, 2]])

 

 

labels[2]

 

 

array([[0, 1, 2, 0, 2]], dtype=int32)

 

The second returned item is a mask variable of shape (batch size, four times the number of anchor boxes). Every four elements in the mask variable correspond to the four offset values of each anchor box. Since we do not care about background detection, offsets of this negative class should not affect the objective function. Through elementwise multiplication, zeros in the mask variable will filter out negative class offsets before calculating the objective function.

 

labels[1]

 

 

tensor([[0., 0., 0., 0., 1., 1., 1., 1., 1., 1., 1., 1., 0., 0., 0., 0., 1., 1.,
     1., 1.]])

 

 

labels[1]

 

 

array([[0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1]],
   dtype=int32)

 

The first returned item contains the four offset values labeled for each anchor box. Note that the offsets of negative-class anchor boxes are labeled as zeros.

 

labels[0]

 

 

tensor([[-0.00e+00, -0.00e+00, -0.00e+00, -0.00e+00, 1.40e+00, 1.00e+01,
     2.59e+00, 7.18e+00, -1.20e+00, 2.69e-01, 1.68e+00, -1.57e+00,
     -0.00e+00, -0.00e+00, -0.00e+00, -0.00e+00, -5.71e-01, -1.00e+00,
     4.17e-06, 6.26e-01]])

 

 

labels[0]

 

 

array([[-0.00e+00, -0.00e+00, -0.00e+00, -0.00e+00, 1.40e+00, 1.00e+01,
     2.59e+00, 7.18e+00, -1.20e+00, 2.69e-01, 1.68e+00, -1.57e+00,
    -0.00e+00, -0.00e+00, -0.00e+00, -0.00e+00, -5.71e-01, -1.00e+00,
     4.17e-06, 6.26e-01]])

 

14.4.4. Predicting Bounding Boxes with Non-Maximum Suppression

During prediction, we generate multiple anchor boxes for the image and predict classes and offsets for each of them. A predicted bounding box is thus obtained according to an anchor box with its predicted offset. Below we implement the offset_inverse function that takes anchors and offset predictions as inputs and applies inverse offset transformations to return the predicted bounding box coordinates.

 

#@save
def offset_inverse(anchors, offset_preds):
  """Predict bounding boxes based on anchor boxes with predicted offsets."""
  anc = d2l.box_corner_to_center(anchors)
  pred_bbox_xy = (offset_preds[:, :2] * anc[:, 2:] / 10) + anc[:, :2]
  pred_bbox_wh = torch.exp(offset_preds[:, 2:] / 5) * anc[:, 2:]
  pred_bbox = torch.cat((pred_bbox_xy, pred_bbox_wh), axis=1)
  predicted_bbox = d2l.box_center_to_corner(pred_bbox)
  return predicted_bbox

 

 

#@save
def offset_inverse(anchors, offset_preds):
  """Predict bounding boxes based on anchor boxes with predicted offsets."""
  anc = d2l.box_corner_to_center(anchors)
  pred_bbox_xy = (offset_preds[:, :2] * anc[:, 2:] / 10) + anc[:, :2]
  pred_bbox_wh = np.exp(offset_preds[:, 2:] / 5) * anc[:, 2:]
  pred_bbox = np.concatenate((pred_bbox_xy, pred_bbox_wh), axis=1)
  predicted_bbox = d2l.box_center_to_corner(pred_bbox)
  return predicted_bbox
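
As a round-trip check (PyTorch version shown), applying offset_inverse to the offsets produced by offset_boxes should recover the assigned box, up to the small eps used there:

anc = torch.tensor([[0.1, 0.1, 0.5, 0.5]])
bb = torch.tensor([[0.2, 0.15, 0.6, 0.5]])
offset_inverse(anc, offset_boxes(anc, bb)) # -> approximately [[0.20, 0.15, 0.60, 0.50]]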

 

When there are many anchor boxes, many similar (with significant overlap) predicted bounding boxes can potentially be output for surrounding the same object. To simplify the output, we can merge similar predicted bounding boxes that belong to the same object by using non-maximum suppression (NMS).

Here is how non-maximum suppression works. For a predicted bounding box $B$, the object detection model calculates the predicted likelihood for each class. Denoting by $p$ the largest predicted likelihood, the class corresponding to this probability is the predicted class for $B$. Specifically, we refer to $p$ as the confidence (score) of the predicted bounding box $B$. On the same image, all the predicted non-background bounding boxes are sorted by confidence in descending order to generate a list $L$. Then we manipulate the sorted list $L$ in the following steps:

Select the predicted bounding box $B_1$ with the highest confidence from $L$ as a basis and remove all non-basis predicted bounding boxes whose IoU with $B_1$ exceeds a predefined threshold $\epsilon$ from $L$. At this point, $L$ keeps the predicted bounding box with the highest confidence but drops others that are too similar to it. In a nutshell, those with non-maximum confidence scores are suppressed.

Select the predicted bounding box $B_2$ with the second highest confidence from $L$ as another basis and remove all non-basis predicted bounding boxes whose IoU with $B_2$ exceeds $\epsilon$ from $L$.

Repeat the above process until all the predicted bounding boxes in $L$ have been used as a basis. At this time, the IoU of any pair of predicted bounding boxes in $L$ is below the threshold $\epsilon$; thus, no pair is too similar to each other.

Output all the predicted bounding boxes in the list $L$.

The following nms function sorts confidence scores in descending order and returns their indices.

 

#@save
def nms(boxes, scores, iou_threshold):
  """Sort confidence scores of predicted bounding boxes."""
  B = torch.argsort(scores, dim=-1, descending=True)
  keep = [] # Indices of predicted bounding boxes that will be kept
  while B.numel() > 0:
    i = B[0]
    keep.append(i)
    if B.numel() == 1: break
    iou = box_iou(boxes[i, :].reshape(-1, 4),
           boxes[B[1:], :].reshape(-1, 4)).reshape(-1)
    inds = torch.nonzero(iou <= iou_threshold).reshape(-1)
    B = B[inds + 1]
  return torch.tensor(keep, device=boxes.device)

 

 

#@save
def nms(boxes, scores, iou_threshold):
  """Sort confidence scores of predicted bounding boxes."""
  B = scores.argsort()[::-1]
  keep = [] # Indices of predicted bounding boxes that will be kept
  while B.size > 0:
    i = B[0]
    keep.append(i)
    if B.size == 1: break
    iou = box_iou(boxes[i, :].reshape(-1, 4),
           boxes[B[1:], :].reshape(-1, 4)).reshape(-1)
    inds = np.nonzero(iou <= iou_threshold)[0]
    B = B[inds + 1]
  return np.array(keep, dtype=np.int32, ctx=boxes.ctx)
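
Here is a minimal sketch (PyTorch version shown) of nms on three hypothetical boxes: the second box overlaps the highest-scoring first box with IoU above the threshold and is suppressed, while the disjoint third box is kept.

boxes = torch.tensor([[0.1, 0.1, 0.5, 0.5],
           [0.12, 0.1, 0.52, 0.5],
           [0.6, 0.6, 0.9, 0.9]])
scores = torch.tensor([0.9, 0.8, 0.7])
nms(boxes, scores, iou_threshold=0.5) # -> indices [0, 2]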

 

We define the following multibox_detection to apply non-maximum suppression to predicted bounding boxes. Do not worry if you find the implementation a bit complicated: we will show how it works with a concrete example right after the implementation.

 

#@save
def multibox_detection(cls_probs, offset_preds, anchors, nms_threshold=0.5,
            pos_threshold=0.009999999):
  """Predict bounding boxes using non-maximum suppression."""
  device, batch_size = cls_probs.device, cls_probs.shape[0]
  anchors = anchors.squeeze(0)
  num_classes, num_anchors = cls_probs.shape[1], cls_probs.shape[2]
  out = []
  for i in range(batch_size):
    cls_prob, offset_pred = cls_probs[i], offset_preds[i].reshape(-1, 4)
    conf, class_id = torch.max(cls_prob[1:], 0)
    predicted_bb = offset_inverse(anchors, offset_pred)
    keep = nms(predicted_bb, conf, nms_threshold)
    # Find all non-`keep` indices and set the class to background
    all_idx = torch.arange(num_anchors, dtype=torch.long, device=device)
    combined = torch.cat((keep, all_idx))
    uniques, counts = combined.unique(return_counts=True)
    non_keep = uniques[counts == 1]
    all_id_sorted = torch.cat((keep, non_keep))
    class_id[non_keep] = -1
    class_id = class_id[all_id_sorted]
    conf, predicted_bb = conf[all_id_sorted], predicted_bb[all_id_sorted]
    # Here `pos_threshold` is a threshold for positive (non-background)
    # predictions
    below_min_idx = (conf < pos_threshold)
    class_id[below_min_idx] = -1
    conf[below_min_idx] = 1 - conf[below_min_idx]
    pred_info = torch.cat((class_id.unsqueeze(1),
                conf.unsqueeze(1),
                predicted_bb), dim=1)
    out.append(pred_info)
  return torch.stack(out)

 

 

#@save
def multibox_detection(cls_probs, offset_preds, anchors, nms_threshold=0.5,
            pos_threshold=0.009999999):
  """Predict bounding boxes using non-maximum suppression."""
  device, batch_size = cls_probs.ctx, cls_probs.shape[0]
  anchors = np.squeeze(anchors, axis=0)
  num_classes, num_anchors = cls_probs.shape[1], cls_probs.shape[2]
  out = []
  for i in range(batch_size):
    cls_prob, offset_pred = cls_probs[i], offset_preds[i].reshape(-1, 4)
    conf, class_id = np.max(cls_prob[1:], 0), np.argmax(cls_prob[1:], 0)
    predicted_bb = offset_inverse(anchors, offset_pred)
    keep = nms(predicted_bb, conf, nms_threshold)
    # Find all non-`keep` indices and set the class to background
    all_idx = np.arange(num_anchors, dtype=np.int32, ctx=device)
    combined = np.concatenate((keep, all_idx))
    unique, counts = np.unique(combined, return_counts=True)
    non_keep = unique[counts == 1]
    all_id_sorted = np.concatenate((keep, non_keep))
    class_id[non_keep] = -1
    class_id = class_id[all_id_sorted].astype('float32')
    conf, predicted_bb = conf[all_id_sorted], predicted_bb[all_id_sorted]
    # Here `pos_threshold` is a threshold for positive (non-background)
    # predictions
    below_min_idx = (conf < pos_threshold)
    class_id[below_min_idx] = -1
    conf[below_min_idx] = 1 - conf[below_min_idx]
    pred_info = np.concatenate((np.expand_dims(class_id, axis=1),
                np.expand_dims(conf, axis=1),
                predicted_bb), axis=1)
    out.append(pred_info)
  return np.stack(out)

 

Now let's apply the above implementations to a concrete example with four anchor boxes. For simplicity, we assume that the predicted offsets are all zeros. This means that the predicted bounding boxes are the anchor boxes. For each class among the background, dog, and cat, we also define its predicted likelihood.

 

anchors = torch.tensor([[0.1, 0.08, 0.52, 0.92], [0.08, 0.2, 0.56, 0.95],
           [0.15, 0.3, 0.62, 0.91], [0.55, 0.2, 0.9, 0.88]])
offset_preds = torch.tensor([0] * anchors.numel())
cls_probs = torch.tensor([[0] * 4, # Predicted background likelihood
           [0.9, 0.8, 0.7, 0.1], # Predicted dog likelihood
           [0.1, 0.2, 0.3, 0.9]]) # Predicted cat likelihood

 

 

anchors = np.array([[0.1, 0.08, 0.52, 0.92], [0.08, 0.2, 0.56, 0.95],
           [0.15, 0.3, 0.62, 0.91], [0.55, 0.2, 0.9, 0.88]])
offset_preds = np.array([0] * d2l.size(anchors))
cls_probs = np.array([[0] * 4, # Predicted background likelihood
           [0.9, 0.8, 0.7, 0.1], # Predicted dog likelihood
           [0.1, 0.2, 0.3, 0.9]]) # Predicted cat likelihood

 

We can plot these predicted bounding boxes with their confidence on the image.

 

fig = d2l.plt.imshow(img)
show_bboxes(fig.axes, anchors * bbox_scale,
      ['dog=0.9', 'dog=0.8', 'dog=0.7', 'cat=0.9'])

 


 

fig = d2l.plt.imshow(img)
show_bboxes(fig.axes, anchors * bbox_scale,
      ['dog=0.9', 'dog=0.8', 'dog=0.7', 'cat=0.9'])

 


Now we can invoke the multibox_detection function to perform non-maximum suppression, where the threshold is set to 0.5. Note that we add a dimension for the examples in the tensor input.

We can see that the shape of the returned result is (batch size, number of anchor boxes, 6). The six elements in the innermost dimension give the output information for the same predicted bounding box. The first element is the predicted class index, which starts from 0 (0 is dog and 1 is cat). The value -1 indicates background or removal in non-maximum suppression. The second element is the confidence of the predicted bounding box. The remaining four elements are the $(x, y)$-axis coordinates of the upper-left corner and the lower-right corner of the predicted bounding box, respectively (range is between 0 and 1).

 

output = multibox_detection(cls_probs.unsqueeze(dim=0),
              offset_preds.unsqueeze(dim=0),
              anchors.unsqueeze(dim=0),
              nms_threshold=0.5)
output

 

 

tensor([[[ 0.00, 0.90, 0.10, 0.08, 0.52, 0.92],
     [ 1.00, 0.90, 0.55, 0.20, 0.90, 0.88],
     [-1.00, 0.80, 0.08, 0.20, 0.56, 0.95],
     [-1.00, 0.70, 0.15, 0.30, 0.62, 0.91]]])

 

 

output = multibox_detection(np.expand_dims(cls_probs, axis=0),
              np.expand_dims(offset_preds, axis=0),
              np.expand_dims(anchors, axis=0),
              nms_threshold=0.5)
output

 

 

array([[[ 1. , 0.9 , 0.55, 0.2 , 0.9 , 0.88],
    [ 0. , 0.9 , 0.1 , 0.08, 0.52, 0.92],
    [-1. , 0.8 , 0.08, 0.2 , 0.56, 0.95],
    [-1. , 0.7 , 0.15, 0.3 , 0.62, 0.91]]])

 

After removing those predicted bounding boxes of class -1, we can output the final predicted bounding boxes kept by non-maximum suppression.

 

fig = d2l.plt.imshow(img)
for i in output[0].detach().numpy():
  if i[0] == -1:
    continue
  label = ('dog=', 'cat=')[int(i[0])] + str(i[1])
  show_bboxes(fig.axes, [torch.tensor(i[2:]) * bbox_scale], label)

 


 

fig = d2l.plt.imshow(img)
for i in output[0].asnumpy():
  if i[0] == -1:
    continue
  label = ('dog=', 'cat=')[int(i[0])] + str(i[1])
  show_bboxes(fig.axes, [np.array(i[2:]) * bbox_scale], label)

 


In practice, we can remove predicted bounding boxes with lower confidence even before performing non-maximum suppression, thereby reducing computation in this algorithm. We may also post-process the output of non-maximum suppression, for example, by only keeping results with higher confidence in the final output.
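
For instance, a minimal sketch (PyTorch, with a hypothetical cutoff min_conf) of such pre-filtering could drop low-confidence predictions before calling nms; note that the returned indices then refer to the filtered subset. Reusing boxes and scores from the nms example above:

min_conf = 0.75 # hypothetical confidence cutoff
mask = scores >= min_conf
nms(boxes[mask], scores[mask], iou_threshold=0.5) # -> index [0] within the filtered boxes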

14.4.5. Summary

We generate anchor boxes with different shapes centered on each pixel of the image.

Intersection over union (IoU), also known as the Jaccard index, measures the similarity of two bounding boxes. It is the ratio of their intersection area to their union area.

In a training set, we need two types of labels for each anchor box. One is the class of the object relevant to the anchor box, and the other is the offset of the ground-truth bounding box relative to the anchor box.

During prediction, we can use non-maximum suppression (NMS) to remove similar predicted bounding boxes, thereby simplifying the output.

14.4.6. Exercises

Change the values of sizes and ratios in the multibox_prior function. What are the changes to the generated anchor boxes?

Construct and visualize two bounding boxes with an IoU of 0.5. How do they overlap with each other?

Modify the anchors variable in Section 14.4.3 and Section 14.4.4. How do the results change?

Non-maximum suppression is a greedy algorithm that suppresses predicted bounding boxes by removing them. Is it possible that some of these removed ones are actually useful? How can this algorithm be modified to suppress softly? You may refer to Soft-NMS (Bodla et al., 2017).

Rather than being hand-crafted, can non-maximum suppression be learned?
