To pretrain the BERT model as implemented in Section 15.8, we need to generate the dataset in the ideal format to facilitate the two pretraining tasks: masked language modeling and next sentence prediction. On the one hand, the original BERT model is pretrained on the concatenation of two huge corpora, BookCorpus and English Wikipedia (see Section 15.8.5), making it hard to run for most readers of this book. On the other hand, the off-the-shelf pretrained BERT model may not fit well for applications from specific domains such as medicine. Thus, it is getting popular to pretrain BERT on a customized dataset. To facilitate the demonstration of BERT pretraining, we use a smaller corpus, WikiText-2 (Merity et al., 2016).
Compared with the PTB dataset used for pretraining word2vec in Section 15.3, WikiText-2 (i) retains the original punctuation, making it suitable for next sentence prediction; (ii) retains the original case and numbers; and (iii) is over twice as large.
In the WikiText-2 dataset, each line represents a paragraph where a space is inserted between any punctuation and its preceding token. Paragraphs with at least two sentences are retained. To keep things simple, we only use the period as the delimiter for splitting sentences. We leave discussions of more complex sentence-splitting techniques to the exercises at the end of this section.
import os
import random

#@save
d2l.DATA_HUB['wikitext-2'] = (
    'https://s3.amazonaws.com/research.metamind.io/wikitext/'
    'wikitext-2-v1.zip', '3c914d17d80b1459be871a5039ac23e752a53cbe')

#@save
def _read_wiki(data_dir):
    file_name = os.path.join(data_dir, 'wiki.train.tokens')
    with open(file_name, 'r') as f:
        lines = f.readlines()
    # Uppercase letters are converted to lowercase ones
    paragraphs = [line.strip().lower().split(' . ')
                  for line in lines if len(line.split(' . ')) >= 2]
    random.shuffle(paragraphs)
    return paragraphs
15.9.1. Defining Helper Functions for Pretraining Tasks
In the following, we begin by implementing helper functions for the two BERT pretraining tasks: next sentence prediction and masked language modeling. These helper functions will be invoked later when transforming the raw text corpus into a dataset of the ideal format to pretrain BERT.
15.9.1.1. Generating the Next Sentence Prediction Task
Following the descriptions of Section 15.8.5.2, the _get_next_sentence function generates a training example for the binary classification task.
The following function generates training examples for next sentence prediction from the input paragraph by invoking the _get_next_sentence function. Here paragraph is a list of sentences, where each sentence is a list of tokens. The argument max_len specifies the maximum length of a BERT input sequence during pretraining.
#@save
def _get_nsp_data_from_paragraph(paragraph, paragraphs, vocab, max_len):
    nsp_data_from_paragraph = []
    for i in range(len(paragraph) - 1):
        tokens_a, tokens_b, is_next = _get_next_sentence(
            paragraph[i], paragraph[i + 1], paragraphs)
        # Consider 1 '<cls>' token and 2 '<sep>' tokens
        if len(tokens_a) + len(tokens_b) + 3 > max_len:
            continue
        tokens, segments = d2l.get_tokens_and_segments(tokens_a, tokens_b)
        nsp_data_from_paragraph.append((tokens, segments, is_next))
    return nsp_data_from_paragraph
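The d2l.get_tokens_and_segments helper invoked above comes from Section 15.8: it concatenates the two sentences with the special tokens and builds the matching segment IDs. A standalone sketch of its behavior, for illustration:

```python
def get_tokens_and_segments(tokens_a, tokens_b=None):
    # Prepend '<cls>' and append '<sep>' after each sentence; segment IDs
    # are 0 for the first sentence (and its special tokens) and 1 for the
    # second sentence (and its trailing '<sep>')
    tokens = ['<cls>'] + tokens_a + ['<sep>']
    segments = [0] * (len(tokens_a) + 2)
    if tokens_b is not None:
        tokens += tokens_b + ['<sep>']
        segments += [1] * (len(tokens_b) + 1)
    return tokens, segments
```

This also explains the length check above: a sentence pair consumes len(tokens_a) + len(tokens_b) + 3 positions because of the one '<cls>' and two '<sep>' tokens.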
15.9.1.2. Generating the Masked Language Modeling Task
To generate training examples for the masked language modeling task from a BERT input sequence, we define the following _replace_mlm_tokens function. In its inputs, tokens is a list of tokens representing a BERT input sequence, candidate_pred_positions is a list of token indices of the BERT input sequence excluding those of special tokens (special tokens are not predicted in the masked language modeling task), and num_mlm_preds indicates the number of predictions (recall 15% random tokens to predict). Following the definition of the masked language modeling task in Section 15.8.5.1, at each prediction position, the input may be replaced by a special '<mask>' token or a random token, or remain unchanged. In the end, the function returns the input tokens after possible replacement, the token indices where predictions take place, and the labels for these predictions.
#@save
def _replace_mlm_tokens(tokens, candidate_pred_positions, num_mlm_preds,
                        vocab):
    # For the input of a masked language model, make a new copy of tokens and
    # replace some of them by '<mask>' or random tokens
    mlm_input_tokens = [token for token in tokens]
    pred_positions_and_labels = []
    # Shuffle for getting 15% random tokens for prediction in the masked
    # language modeling task
    random.shuffle(candidate_pred_positions)
    for mlm_pred_position in candidate_pred_positions:
        if len(pred_positions_and_labels) >= num_mlm_preds:
            break
        masked_token = None
        # 80% of the time: replace the word with the '<mask>' token
        if random.random() < 0.8:
            masked_token = '<mask>'
        else:
            # 10% of the time: keep the word unchanged
            if random.random() < 0.5:
                masked_token = tokens[mlm_pred_position]
            # 10% of the time: replace the word with a random word
            else:
                masked_token = random.choice(vocab.idx_to_token)
        mlm_input_tokens[mlm_pred_position] = masked_token
        pred_positions_and_labels.append(
            (mlm_pred_position, tokens[mlm_pred_position]))
    return mlm_input_tokens, pred_positions_and_labels
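The nested conditionals above implement the 80%/10%/10% replacement scheme from Section 15.8.5.1: a second coin flip splits the remaining 20% evenly between keeping the token and replacing it with a random one. A quick standalone simulation (not using the function above) confirms the marginal probabilities:

```python
import random

random.seed(42)
counts = {'mask': 0, 'keep': 0, 'random': 0}
n = 100_000
for _ in range(n):
    if random.random() < 0.8:
        counts['mask'] += 1    # replace with '<mask>'
    elif random.random() < 0.5:
        counts['keep'] += 1    # keep the original token
    else:
        counts['random'] += 1  # replace with a random vocabulary token
# The three fractions should be close to 0.8, 0.1, and 0.1 respectively
print({k: round(v / n, 2) for k, v in counts.items()})
```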
By invoking the aforementioned _replace_mlm_tokens function, the following function takes a BERT input sequence (tokens) as an input and returns indices of the input tokens (after possible token replacement as described in Section 15.8.5.1), the token indices where predictions take place, and label indices for these predictions.
#@save
def _get_mlm_data_from_tokens(tokens, vocab):
    candidate_pred_positions = []
    # `tokens` is a list of strings
    for i, token in enumerate(tokens):
        # Special tokens are not predicted in the masked language modeling
        # task
        if token in ['<cls>', '<sep>']:
            continue
        candidate_pred_positions.append(i)
    # 15% of random tokens are predicted in the masked language modeling task
    num_mlm_preds = max(1, round(len(tokens) * 0.15))
    mlm_input_tokens, pred_positions_and_labels = _replace_mlm_tokens(
        tokens, candidate_pred_positions, num_mlm_preds, vocab)
    pred_positions_and_labels = sorted(pred_positions_and_labels,
                                       key=lambda x: x[0])
    pred_positions = [v[0] for v in pred_positions_and_labels]
    mlm_pred_labels = [v[1] for v in pred_positions_and_labels]
    return vocab[mlm_input_tokens], pred_positions, vocab[mlm_pred_labels]