Author: Liu Jianjian
Source: ChallengeHub
Tweets have several distinctive characteristics. First, unlike Facebook content, tweets are text-based and can be downloaded through the Twitter API after registering, which makes them a convenient corpus for natural language processing. Second, Twitter limits each tweet to 140 characters, so tweets vary in length but are generally short; some consist of a single sentence or even a single phrase, which makes sentiment annotation and classification difficult. Third, tweets are often written casually: they carry a lot of emotional content, are highly colloquial, full of abbreviations, and peppered with internet slang, emoticons, and newly coined words. They therefore differ greatly from formal text, and sentiment classification methods designed for formal text tend to perform poorly on tweets.
Public sentiment has a growing influence in many areas, including movie reviews, consumer confidence, political elections, and stock price prediction. Sentiment analysis of public media content is a fundamental step in analyzing public sentiment.
2. Dataset Overview
The dataset consists of tweets posted by Twitter users, with some fields adjusted; the field definitions provided by this practice competition are authoritative.
The fields are as follows:
tweet_id string unique ID of the tweet, e.g. test_0, train_1024
content string text of the tweet
label int sentiment class of the tweet; 13 classes in total
The training set train.csv contains 30,000 rows with fields tweet_id, content, and label; the test set test.csv contains 10,000 rows with fields tweet_id and content.
!head /home/mw/input/Twitter4903/train.csv
tweet_id,content,label
tweet_0,@tiffanylue i know i was listenin to bad habit earlier and i started freakin at his part =[,0
tweet_1,Layin n bed with a headache ughhhh...waitin on your call...,1
tweet_2,Funeral ceremony...gloomy friday...,1
tweet_3,wants to hang out with friends SOON!,2
tweet_4,"@dannycastillo We want to trade with someone who has Houston tickets, but no one will.",3
tweet_5,"I should be sleep, but im not! thinking about an old friend who I want. but he's married now. damn, & he wants me 2! scandalous!",1
tweet_6,Hmmm. http://www.djhero.com/ is down,4
tweet_7,@charviray Charlene my love. I miss you,1
tweet_8,cant fall asleep,3
!head /home/mw/input/Twitter4903/test.csv
tweet_id,content
tweet_0,Re-pinging @ghostridah14: why didn't you go to prom? BC my bf didn't like my friends
tweet_1,@kelcouch I'm sorry at least it's Friday?
tweet_2,The storm is here and the electricity is gone
tweet_3,So sleepy again and it's not even that late. I fail once again.
tweet_4,"Wondering why I'm awake at 7am,writing a new song,plotting my evil secret plots muahahaha...oh damn it,not secret anymore"
tweet_5,I ate Something I don't know what it is... Why do I keep Telling things about food
tweet_6,so tired and i think i'm definitely going to get an ear infection. going to bed "early" for once.
tweet_7,It is so annoying when she starts typing on her computer in the middle of the night!
tweet_8,Screw you @davidbrussee! I only have 3 weeks...
!head /home/mw/input/Twitter4903/submission.csv
tweet_id,label
tweet_0,0
tweet_1,0
tweet_2,0
tweet_3,0
tweet_4,0
tweet_5,0
tweet_6,0
tweet_7,0
tweet_8,0
Environment setup (a GPU environment is recommended for speed: pip install paddlepaddle-gpu)
!pip install paddlepaddle
!pip install -U paddlenlp
import pandas as pd
train = pd.read_csv('/home/mw/input/Twitter4903/train.csv')
test = pd.read_csv('/home/mw/input/Twitter4903/test.csv')
sub = pd.read_csv('/home/mw/input/Twitter4903/submission.csv')
print('Max content length %d' % (max(train['content'].str.len())))
Max content length 166
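It is also worth a quick look at the label distribution, to confirm that all 13 classes appear and to gauge how imbalanced the training data is (a minimal sketch using the train DataFrame loaded above):
# Sanity check: number of training examples per sentiment class
print(train['label'].value_counts().sort_index())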
def read(pd_data):
    for index, item in pd_data.iterrows():
        # strip('tweet_') removes the characters 't', 'w', 'e', '_' from both ends,
        # leaving only the numeric part of tweet_id
        yield {'text': item['content'], 'label': item['label'], 'qid': item['tweet_id'].strip('tweet_')}
from paddle.io import Dataset, Subset
from paddlenlp.datasets import MapDataset
from paddlenlp.datasets import load_dataset

dataset = load_dataset(read, pd_data=train, lazy=False)
# Hold out every example with index i % 5 == 1 as the dev set, i.e. an 80/20 split
dev_ds = Subset(dataset=dataset, indices=[i for i in range(len(dataset)) if i % 5 == 1])
train_ds = Subset(dataset=dataset, indices=[i for i in range(len(dataset)) if i % 5 != 1])
for i in range(5):
    print(train_ds[i])
{'text': '@tiffanylue i know i was listenin to bad habit earlier and i started freakin at his part =[', 'label': 0, 'qid': '0'}
{'text': 'Funeral ceremony...gloomy friday...', 'label': 1, 'qid': '2'}
{'text': 'wants to hang out with friends SOON!', 'label': 2, 'qid': '3'}
{'text': '@dannycastillo We want to trade with someone who has Houston tickets, but no one will.', 'label': 3, 'qid': '4'}
{'text': "I should be sleep, but im not! thinking about an old friend who I want. but he's married now. damn, & he wants me 2! scandalous!", 'label': 1, 'qid': '5'}
train_ds = MapDataset(train_ds)
dev_ds = MapDataset(dev_ds)
print(len(train_ds))
print(len(dev_ds))
24000
6000
In recent years, a large body of research has shown that pretrained models (PTMs) trained on large corpora learn general-purpose language representations that benefit downstream NLP tasks and remove the need to train models from scratch. With growing compute, deeper architectures (notably the Transformer) and better training techniques have steadily pushed PTMs from shallow to deep.
SKEP (Sentiment Knowledge Enhanced Pre-training for Sentiment Analysis) is a sentiment-oriented pretrained model. SKEP enhances pretraining with sentiment knowledge and surpasses the previous SOTA on 14 typical Chinese and English sentiment analysis tasks; the work was accepted at ACL 2020. Proposed by Baidu's research team, SKEP uses an unsupervised method to automatically mine sentiment knowledge and then builds pretraining objectives from that knowledge, so that the model learns to understand sentiment semantics. SKEP provides a unified and powerful sentiment representation for a wide range of sentiment analysis tasks.
Baidu's research team further validated SKEP on three typical sentiment analysis tasks, sentence-level sentiment classification, aspect-level sentiment classification, and opinion role labeling, across 14 Chinese and English datasets in total.
For detailed experimental results, see: https://github.com/baidu/Senta#skep
PaddleNLP already implements the SKEP pretrained model, which can be loaded with a single line of code.
For sentence-level sentiment analysis we fine-tune SKEP with SkepForSequenceClassification, the standard text classification model: it first extracts sentence-level semantic features with SKEP and then classifies those features.
!pip install regex
Looking in indexes: https://mirror.baidu.com/pypi/simple/
Requirement already satisfied: regex in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (2021.8.28)
SkepForSequenceClassification can be used for both sentence-level and aspect-level sentiment analysis. It obtains a representation of the input text from the pretrained SKEP model and then classifies that representation.
pretrained_model_name_or_path: model name. Supported values are "skep_ernie_1.0_large_ch" and "skep_ernie_2.0_large_en".
"skep_ernie_1.0_large_ch": a Chinese pretrained model obtained by continuing SKEP pretraining on massive Chinese data on top of ernie_1.0_large_ch;
"skep_ernie_2.0_large_en": an English pretrained model obtained by continuing SKEP pretraining on massive English data on top of ernie_2.0_large_en;
num_classes: number of classes in the dataset.
For details of the SKEP implementation, see: https://github.com/PaddlePaddle/PaddleNLP/tree/develop/paddlenlp/transformers/skep
from paddlenlp.transformers import SkepForSequenceClassification, SkepTokenizer
Specify the model name and load the model in one line:
model = SkepForSequenceClassification.from_pretrained(pretrained_model_name_or_path="skep_ernie_2.0_large_en", num_classes=13)
Similarly, load the corresponding Tokenizer by model name; it handles text preprocessing such as splitting text into tokens and converting tokens to ids.
tokenizer = SkepTokenizer.from_pretrained(pretrained_model_name_or_path="skep_ernie_2.0_large_en")
[2021-09-16 1058,665] [ INFO] - Already cached /home/aistudio/.paddlenlp/models/skep_ernie_2.0_large_en/skep_ernie_2.0_large_en.pdparams
[2021-09-16 1010,133] [ INFO] - Found /home/aistudio/.paddlenlp/models/skep_ernie_2.0_large_en/skep_ernie_2.0_large_en.vocab.txt
from visualdl import LogWriter
writer = LogWriter("./log")
SKEP processes text at a fine (character/token) granularity; PaddleNLP's built-in SkepTokenizer handles this in a single call.
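To see concretely what the tokenizer returns, here is a quick check on one of the tweets shown earlier (a minimal sketch, assuming the model and tokenizer loaded above):
# Encode one tweet and inspect the fields that convert_example below relies on.
sample = tokenizer(text="cant fall asleep", max_seq_len=166)
print(sample["input_ids"])                                   # token ids, wrapped in [CLS] ... [SEP]
print(tokenizer.convert_ids_to_tokens(sample["input_ids"]))  # the corresponding tokens
print(sample["token_type_ids"])                              # all zeros for a single-sentence input

# A quick forward pass: SKEP encodes the sentence and the classification head
# produces one logit per sentiment class.
import paddle
ids = paddle.to_tensor([sample["input_ids"]])
segs = paddle.to_tensor([sample["token_type_ids"]])
print(model(ids, segs).shape)  # [1, 13]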
def convert_example(example,
                    tokenizer,
                    max_seq_length=512,
                    is_test=False):
    # Convert a raw example into model inputs; encoded_inputs is a dict that
    # contains input_ids, token_type_ids, etc.
    encoded_inputs = tokenizer(
        text=example["text"], max_seq_len=max_seq_length)
    # input_ids: vocabulary ids of the tokens after tokenizing the text
    input_ids = encoded_inputs["input_ids"]
    # token_type_ids: whether a token belongs to sentence 1 or sentence 2 (segment ids)
    token_type_ids = encoded_inputs["token_type_ids"]
    if not is_test:
        # label: sentiment class
        label = np.array([example["label"]], dtype="int64")
        return input_ids, token_type_ids, label
    else:
        # qid: id of the example
        qid = np.array([example["qid"]], dtype="int64")
        return input_ids, token_type_ids, qid
def create_dataloader(dataset,
                      trans_fn=None,
                      mode='train',
                      batch_size=1,
                      batchify_fn=None):
    if trans_fn:
        dataset = dataset.map(trans_fn)
    shuffle = True if mode == 'train' else False
    if mode == "train":
        sampler = paddle.io.DistributedBatchSampler(
            dataset=dataset, batch_size=batch_size, shuffle=shuffle)
    else:
        sampler = paddle.io.BatchSampler(
            dataset=dataset, batch_size=batch_size, shuffle=shuffle)
    dataloader = paddle.io.DataLoader(
        dataset, batch_sampler=sampler, collate_fn=batchify_fn)
    return dataloader
import numpy as np
import paddle


@paddle.no_grad()
def evaluate(model, criterion, metric, data_loader):
    model.eval()
    metric.reset()
    losses = []
    for batch in data_loader:
        input_ids, token_type_ids, labels = batch
        logits = model(input_ids, token_type_ids)
        loss = criterion(logits, labels)
        losses.append(loss.numpy())
        correct = metric.compute(logits, labels)
        metric.update(correct)
    accu = metric.accumulate()
    # print("eval loss: %.5f, accu: %.5f" % (np.mean(losses), accu))
    model.train()
    metric.reset()
    return np.mean(losses), accu
After defining the loss function, optimizer, and evaluation metric, training can begin.
Recommended hyperparameters:
batch_size = 100
max_seq_length = 166
learning_rate = 4e-5
epochs = 32
warmup_proportion = 0.1
weight_decay = 0.01
In practice, adjust batch_size and max_seq_length according to the available GPU memory.
import os
from functools import partial

import numpy as np
import paddle
import paddle.nn.functional as F
from paddlenlp.data import Stack, Tuple, Pad

# batch size
batch_size = 100
# maximum length of a text sequence
max_seq_length = 166
# peak learning rate during training
learning_rate = 4e-5
# number of training epochs
epochs = 32
# proportion of steps used for learning rate warmup
warmup_proportion = 0.1
# weight decay coefficient, a regularization strategy to reduce overfitting
weight_decay = 0.01
Convert the data into a format the model can read:
trans_func = partial(
    convert_example,
    tokenizer=tokenizer,
    max_seq_length=max_seq_length)
Group examples into batches: pad text sequences of different lengths to the maximum length within the batch, and stack the per-example labels together.
batchify_fn = lambda samples, fn=Tuple(
    Pad(axis=0, pad_val=tokenizer.pad_token_id),       # input_ids
    Pad(axis=0, pad_val=tokenizer.pad_token_type_id),  # token_type_ids
    Stack()                                            # labels
): [data for data in fn(samples)]
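To illustrate what batchify_fn does, here is a toy example with two made-up, already-converted samples of different lengths (the ids and labels below are hypothetical, not real tokenizer output):
# Two fake (input_ids, token_type_ids, label) samples of unequal length.
toy_samples = [([1, 2, 3], [0, 0, 0], [5]),
               ([4, 5], [0, 0], [7])]
toy_ids, toy_types, toy_labels = batchify_fn(toy_samples)
print(toy_ids)     # both rows padded to length 3 with tokenizer.pad_token_id
print(toy_labels)  # labels stacked into one array: [[5], [7]]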
train_data_loader = create_dataloader(
    train_ds,
    mode='train',
    batch_size=batch_size,
    batchify_fn=batchify_fn,
    trans_fn=trans_func)
dev_data_loader = create_dataloader(
    dev_ds,
    mode='dev',
    batch_size=batch_size,
    batchify_fn=batchify_fn,
    trans_fn=trans_func)
Define the hyperparameters, loss, optimizer, etc.
from paddlenlp.transformers import LinearDecayWithWarmup
import time
num_training_steps = len(train_data_loader) * epochs
lr_scheduler = LinearDecayWithWarmup(learning_rate, num_training_steps, warmup_proportion)
AdamW optimizer:
optimizer = paddle.optimizer.AdamW(
    learning_rate=lr_scheduler,
    parameters=model.parameters(),
    weight_decay=weight_decay,
    apply_decay_param_fun=lambda x: x in [
        p.name for n, p in model.named_parameters()
        if not any(nd in n for nd in ["bias", "norm"])
    ])
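As a quick sanity check of the apply_decay_param_fun filter above, you can list the parameters that are excluded from weight decay (a sketch, reusing the model loaded earlier):
# Parameters whose names contain "bias" or "norm" do not receive weight decay.
no_decay = [n for n, p in model.named_parameters()
            if any(nd in n for nd in ["bias", "norm"])]
print(len(no_decay), no_decay[:3])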
criterion = paddle.nn.loss.CrossEntropyLoss()  # cross-entropy loss
metric = paddle.metric.Accuracy()              # accuracy metric
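Optionally, evaluate the untrained classifier once before fine-tuning (a sketch using the evaluate function and dev_data_loader defined above). With 13 classes, dev accuracy should sit near random chance (about 1/13), which gives a reference point for the training curves below.
# Baseline evaluation before any fine-tuning.
base_loss, base_acc = evaluate(model, criterion, metric, dev_data_loader)
print("baseline dev loss: %.5f, acc: %.5f" % (base_loss, base_acc))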
Train and save the best checkpoint.
Start training:
global_step = 0
best_val_acc = 0
tic_train = time.time()
best_accu = 0
for epoch in range(1, epochs + 1):
    for step, batch in enumerate(train_data_loader, start=1):
        input_ids, token_type_ids, labels = batch
        # feed the batch to the model
        logits = model(input_ids, token_type_ids)
        # compute the loss
        loss = criterion(logits, labels)
        # predicted class probabilities
        probs = F.softmax(logits, axis=1)
        # compute accuracy
        correct = metric.compute(probs, labels)
        metric.update(correct)
        acc = metric.accumulate()

        global_step += 1
        if global_step % 10 == 0:
            print(
                "global step %d, epoch: %d, batch: %d, loss: %.5f, accu: %.5f, speed: %.2f step/s"
                % (global_step, epoch, step, loss, acc,
                   10 / (time.time() - tic_train)))
            tic_train = time.time()

        # back-propagate gradients and update parameters
        loss.backward()
        optimizer.step()
        lr_scheduler.step()
        optimizer.clear_grad()

        if global_step % 100 == 0:
            # evaluate the current model
            eval_loss, eval_accu = evaluate(model, criterion, metric, dev_data_loader)
            print("eval on dev loss: {:.8}, accu: {:.8}".format(eval_loss, eval_accu))
            # log eval metrics
            writer.add_scalar(tag="eval/loss", step=global_step, value=eval_loss)
            writer.add_scalar(tag="eval/acc", step=global_step, value=eval_accu)
            # log train metrics
            writer.add_scalar(tag="train/loss", step=global_step, value=loss)
            writer.add_scalar(tag="train/acc", step=global_step, value=acc)
            save_dir = "best_checkpoint"
            # save the best checkpoint so far
            if eval_accu > best_val_acc:
                if not os.path.exists(save_dir):
                    os.mkdir(save_dir)
                best_val_acc = eval_accu
                print(f"Model saved at step {global_step}, best eval accuracy {best_val_acc:.8f}!")
                save_param_path = os.path.join(save_dir, 'best_model.pdparams')
                paddle.save(model.state_dict(), save_param_path)
                fh = open('best_checkpoint/best_model.txt', 'w', encoding='utf-8')
                fh.write(f"Model saved at step {global_step}, best eval accuracy {best_val_acc:.8f}!")
                fh.close()
global step 10, epoch: 1, batch: 10, loss: 2.64415, accu: 0.08400, speed: 0.96 step/s
global step 20, epoch: 1, batch: 20, loss: 2.48083, accu: 0.09050, speed: 0.98 step/s
global step 30, epoch: 1, batch: 30, loss: 2.36845, accu: 0.10933, speed: 0.98 step/s
global step 40, epoch: 1, batch: 40, loss: 2.24933, accu: 0.13750, speed: 1.00 step/s
global step 50, epoch: 1, batch: 50, loss: 2.14947, accu: 0.15380, speed: 0.97 step/s
global step 60, epoch: 1, batch: 60, loss: 2.03459, accu: 0.17100, speed: 0.96 step/s
global step 70, epoch: 1, batch: 70, loss: 2.23222, accu: 0.18414, speed: 1.01 step/s
Use VisualDL to visualize training so you can monitor the training curves in real time and avoid wasting compute.
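To view the curves written by the LogWriter above, one common way is to start the VisualDL service from a terminal and open its web page in a browser (adjust the log directory if you changed it):
visualdl --logdir ./log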
6. Prediction
After training finishes, restart the environment to free GPU memory, then start prediction.
Data loading:
import pandas as pd
from paddlenlp.datasets import load_dataset
from paddle.io import Dataset, Subset
from paddlenlp.datasets import MapDataset
test = pd.read_csv('/home/mw/input/Twitter4903/test.csv')
Read the test data:
def read_test(pd_data):
    for index, item in pd_data.iterrows():
        # label is a placeholder (the test set has no labels); qid keeps the numeric part of tweet_id
        yield {'text': item['content'], 'label': 0, 'qid': item['tweet_id'].strip('tweet_')}
test_ds = load_dataset(read_test, pd_data=test, lazy=False)
# then convert to MapDataset
test_ds = MapDataset(test_ds)
print(len(test_ds))
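A quick peek at one converted test example (a sketch): the label field is a dummy 0 because the test set has no labels, and qid holds the numeric part of tweet_id, which is used later to rebuild the ids in the submission file.
print(test_ds[0])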
def convert_example(example,
                    tokenizer,
                    max_seq_length=512,
                    is_test=False):
    # Convert a raw example into model inputs; encoded_inputs is a dict that
    # contains input_ids, token_type_ids, etc.
    encoded_inputs = tokenizer(
        text=example["text"], max_seq_len=max_seq_length)
    # input_ids: vocabulary ids of the tokens after tokenizing the text
    input_ids = encoded_inputs["input_ids"]
    # token_type_ids: whether a token belongs to sentence 1 or sentence 2 (segment ids)
    token_type_ids = encoded_inputs["token_type_ids"]
    if not is_test:
        # label: sentiment class
        label = np.array([example["label"]], dtype="int64")
        return input_ids, token_type_ids, label
    else:
        # qid: id of the example
        qid = np.array([example["qid"]], dtype="int64")
        return input_ids, token_type_ids, qid
def create_dataloader(dataset,
                      trans_fn=None,
                      mode='train',
                      batch_size=1,
                      batchify_fn=None):
    if trans_fn:
        dataset = dataset.map(trans_fn)
    shuffle = True if mode == 'train' else False
    if mode == "train":
        sampler = paddle.io.DistributedBatchSampler(
            dataset=dataset, batch_size=batch_size, shuffle=shuffle)
    else:
        sampler = paddle.io.BatchSampler(
            dataset=dataset, batch_size=batch_size, shuffle=shuffle)
    dataloader = paddle.io.DataLoader(
        dataset, batch_sampler=sampler, collate_fn=batchify_fn)
    return dataloader
from paddlenlp.transformers import SkepForSequenceClassification, SkepTokenizer
Specify the model name and load the model in one line:
model = SkepForSequenceClassification.from_pretrained(pretrained_model_name_or_path="skep_ernie_2.0_large_en", num_classes=13)
Similarly, load the corresponding Tokenizer by model name; it handles text preprocessing such as splitting text into tokens and converting tokens to ids.
tokenizer = SkepTokenizer.from_pretrained(pretrained_model_name_or_path="skep_ernie_2.0_large_en")
from functools import partial
import numpy as np
import paddle
import paddle.nn.functional as F
from paddlenlp.data import Stack, Tuple, Pad
batch_size=16
max_seq_length=166
# preprocess the test set
trans_func = partial(
    convert_example,
    tokenizer=tokenizer,
    max_seq_length=max_seq_length,
    is_test=True)
batchify_fn = lambda samples, fn=Tuple(
    Pad(axis=0, pad_val=tokenizer.pad_token_id),       # input_ids
    Pad(axis=0, pad_val=tokenizer.pad_token_type_id),  # token_type_ids
    Stack()                                            # qid
): [data for data in fn(samples)]
test_data_loader = create_dataloader(
    test_ds,
    mode='test',
    batch_size=batch_size,
    batchify_fn=batchify_fn,
    trans_fn=trans_func)
Load the trained model:
import os

# change the parameter path according to your actual run
params_path = 'best_checkpoint/best_model.pdparams'
if params_path and os.path.isfile(params_path):
    # load the model parameters
    state_dict = paddle.load(params_path)
    model.set_dict(state_dict)
    print("Loaded parameters from %s" % params_path)
results = []
# switch the model to eval mode to disable dropout and other stochastic behavior
model.eval()
for batch in test_data_loader:
    input_ids, token_type_ids, qids = batch
    # feed the batch to the model
    logits = model(input_ids, token_type_ids)
    # predicted class probabilities
    probs = F.softmax(logits, axis=-1)
    idx = paddle.argmax(probs, axis=1).numpy()
    idx = idx.tolist()
    qids = qids.numpy().tolist()
    results.extend(zip(qids, idx))
# write out the predictions for submission
with open("submission.csv", 'w', encoding="utf-8") as f:
    # f.write("数据ID,评分\n")
    f.write("tweet_id,label\n")
    for (idx, label) in results:
        f.write('tweet_' + str(idx[0]) + "," + str(label) + "\n")
https://gitee.com/paddlepaddle/PaddleNLP/blob/develop/README.md
PaddleNLP 2.0 is the core NLP library of the PaddlePaddle ecosystem. It offers easy-to-use text-domain APIs, application examples for many scenarios, and high-performance distributed training, aiming to improve developers' productivity on text tasks and to provide best practices for NLP tasks on the PaddlePaddle 2.0 core framework.
Built on the framework's automatic mixed precision optimization and the distributed Fleet API, it supports 4D hybrid parallelism and can efficiently train models with extremely large numbers of parameters.