Task03: Text Classification Based on Machine Learning

1. One-Hot
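
One-hot encoding represents each character/word as a vector whose length equals the vocabulary size, all zeros except for a single 1 at that word's index. A minimal sketch (the toy word list and helper below are illustrative only):

words = ['this', 'is', 'the', 'first', 'document']
vocab = {w: i for i, w in enumerate(sorted(set(words)))}  # word -> index

def one_hot(word):
    # all zeros except a single 1 at the word's vocabulary index
    vec = [0] * len(vocab)
    vec[vocab[word]] = 1
    return vec

print(one_hot('first'))  # e.g. [0, 1, 0, 0, 0]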

2. Bag of Words

Bag of Words (also known as Count Vectors): each document is represented by the number of times each character/word occurs in it.

from sklearn.feature_extraction.text import CountVectorizer

corpus = [
    'This is the first document.',
    'This document is the second document.',
    'And this is the third one.',
    'Is this the first document?',
]

vectorizer = CountVectorizer()
vectorizer.fit_transform(corpus).toarray()  # learn the vocabulary and return the dense document-term count matrix

Output:

array([[0, 1, 1, 1, 0, 0, 1, 0, 1],
       [0, 2, 0, 1, 0, 1, 1, 0, 1],
       [1, 0, 0, 1, 1, 0, 1, 1, 1],
       [0, 1, 1, 1, 0, 0, 1, 0, 1]], dtype=int64)
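
Each column of the matrix corresponds to one vocabulary term; the mapping can be inspected with vocabulary_ (an extra illustrative line, not part of the output above):

print(vectorizer.vocabulary_)
# e.g. {'and': 0, 'document': 1, 'first': 2, 'is': 3, 'one': 4, 'second': 5, 'the': 6, 'third': 7, 'this': 8}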

3. N-gram
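
An N-gram model counts contiguous sequences of N characters/words instead of single tokens, so some local word order is preserved. A minimal sketch reusing the corpus above (the ngram_range value here is illustrative):

from sklearn.feature_extraction.text import CountVectorizer

# count bigrams (pairs of adjacent words) instead of single words
bigram_vectorizer = CountVectorizer(ngram_range=(2, 2))
bigram_vectorizer.fit_transform(corpus).toarray()
print(bigram_vectorizer.vocabulary_)  # keys like 'this is', 'is the', 'the first', ...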

4. TF-IDF

TF-IDF consists of two parts:
    the first part is Term Frequency (TF),
    the second part is Inverse Document Frequency (IDF).
IDF is obtained by dividing the total number of documents in the corpus by the number of documents containing the term, and then taking the logarithm.

TF(t) = (number of times term t occurs in the current document) / (total number of terms in the current document)
IDF(t) = log_e(total number of documents / number of documents containing term t)
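
As a quick numerical check of these formulas (the numbers are a made-up example): if a term occurs 2 times in a document of 10 terms, TF = 2/10 = 0.2; if it appears in 3 out of 4 documents, IDF = log_e(4/3) ≈ 0.29, so TF-IDF ≈ 0.2 * 0.29 ≈ 0.058. In scikit-learn this is handled by TfidfVectorizer; a minimal sketch on the toy corpus from the Bag of Words section (sklearn's default uses a smoothed IDF and L2-normalizes each row, so the values differ slightly from the plain formulas):

from sklearn.feature_extraction.text import TfidfVectorizer

tfidf_demo = TfidfVectorizer()
print(tfidf_demo.fit_transform(corpus).toarray().round(2))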

To compare the accuracy of the different text-representation methods, build a local validation set and compute the F1 score.

Plan A: Count Vectors + RidgeClassifier

import pandas as pd


from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import RidgeClassifier
from sklearn.metrics import f1_score


train_df = pd.read_csv('/Users/summer/Desktop/xul_data/learning/DataWhale/20200719NLP/task01_preparing_20200719/input/train_set.csv', sep='\t', nrows=15000)


vectorizer = CountVectorizer(max_features=3000)
train_test = vectorizer.fit_transform(train_df['text'])


# https://blog.csdn.net/LOLUN9/article/details/106012418/
# https://blog.csdn.net/fantacy10000/article/details/90647686


'''RidgeClassifier() builds a classifier from a Ridge() regression model as follows:
    For simplicity, consider binary classification with a target of +1 or -1.
    Fit a Ridge() regression model (a regression model) to predict the target; the loss is the squared error plus an L2 penalty.
    If the Ridge() prediction (computed via decision_function()) is greater than 0, predict the positive class, otherwise the negative class.
'''


# L2 (ridge) regularization shrinks the coefficients of the solution; it is computationally efficient and gives a stable model. L1 regularization instead reduces the number of nonzero terms.

clf = RidgeClassifier() 
clf.fit(train_test[:10000], train_df['label'].values[:10000])

val_pred = clf.predict(train_test[10000:])
print(f1_score(train_df['label'].values[10000:], val_pred, average='macro'))

Output:

>>> 0.65441877581244
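
To make the RidgeClassifier notes above concrete: for a multiclass problem it produces one decision score per class and predicts the class with the largest score. A small illustrative check (these lines are an addition, not part of the original run):

import numpy as np

scores = clf.decision_function(train_test[10000:10005])   # one score per class for each sample
print(scores.shape)                                        # (5, n_classes)
print(clf.classes_[np.argmax(scores, axis=1)])             # matches clf.predict(train_test[10000:10005])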

Plan B: TF-IDF + RidgeClassifier

import pandas as pd


from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import RidgeClassifier
from sklearn.metrics import f1_score


# train_df = pd.read_csv('../data/train_set.csv', sep='\t', nrows=15000)


tfidf = TfidfVectorizer(ngram_range=(1,3), max_features=3000)
train_test = tfidf.fit_transform(train_df['text'])


clf = RidgeClassifier()
clf.fit(train_test[:10000], train_df['label'].values[:10000])


val_pred = clf.predict(train_test[10000:])
print(f1_score(train_df['label'].values[10000:], val_pred, average='macro'))

Output:

>>> 0.8719372173702

Chapter Homework

Q1: Try changing the TF-IDF parameters and validate the accuracy

A1: TfidfVectorizer

Reference documentation - https://github.com/scikit-learn/scikit-learn/blob/f0ab589f1541b1ca4570177d93fd7979613497e3/sklearn/feature_extraction/text.py

train_df_hw = pd.read_csv('/Users/summer/Desktop/xul_data/learning/DataWhale/20200719NLP/task01_preparing_20200719/input/train_set.csv', sep='\t', nrows=10000)

tfidf_hw = TfidfVectorizer(ngram_range=(1,5), max_features=3000)
train_hw_test = tfidf_hw.fit_transform(train_df_hw['text'])

clf = RidgeClassifier() 
clf.fit(train_hw_test[:7000], train_df_hw['label'].values[:7000])

val_pred_hw = clf.predict(train_hw_test[7000:10000]) # [:N] means take elements from the first up to the N-th
print(f1_score(train_df_hw['label'].values[7000:10000], val_pred_hw, average='macro'))

Output:

# Test1-ngram_range=(1,3)
>>> 0.9317302315325816

# Test2-ngram_range=(1,5)
>>> 0.9326016109802603

# Test3 - add stop words: stop_words='world'
>>> 0.9322304435086377

# Test4-norm='l2'
>>> 0.9322304435086377

# Test5-norm='l1'
>>> 0.5073894598279685
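
This kind of parameter comparison can also be automated in a single loop; a sketch (the parameter grid below is illustrative and reuses train_df_hw and the earlier imports, not the exact set of runs reported above):

for params in [{'ngram_range': (1, 3)}, {'ngram_range': (1, 5)}, {'ngram_range': (1, 5), 'norm': 'l1'}]:
    tfidf_try = TfidfVectorizer(max_features=3000, **params)
    feats = tfidf_try.fit_transform(train_df_hw['text'])
    clf_try = RidgeClassifier()
    clf_try.fit(feats[:7000], train_df_hw['label'].values[:7000])
    pred = clf_try.predict(feats[7000:10000])
    print(params, f1_score(train_df_hw['label'].values[7000:10000], pred, average='macro'))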

Q2: Try other machine-learning models and complete training and validation

Commonly used classifiers. Linear: LR, SVM. Non-linear: DT (decision tree), RF, GBDT, XGBoost. (A linear-SVM sketch follows the reference links below.)

Principle: https://www.cnblogs.com/andy-0212/p/10630608.html
Comparison: https://www.cnblogs.com/wkang/p/9657032.html
https://blog.csdn.net/twt520ly/article/details/79769705
https://www.jianshu.com/p/96173f2c2fb4
GBDT: https://blog.csdn.net/weixin_40924580/article/details/85043801?utm_medium=distribute.pc_relevant.none-task-blog-baidujs-2&spm=1001.2101.3001.4242
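
Test 1 below uses GBDT; for the linear models in the list above, a minimal sketch of swapping in a linear SVM on the same TF-IDF features and 7000/3000 split from A1 (LinearSVC with default parameters is an illustrative choice; no score from this run is reported here):

from sklearn.svm import LinearSVC

svm_clf = LinearSVC()
svm_clf.fit(train_hw_test[:7000], train_df_hw['label'].values[:7000])
svm_pred = svm_clf.predict(train_hw_test[7000:10000])
print(f1_score(train_df_hw['label'].values[7000:10000], svm_pred, average='macro'))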

Test 1: GBDT

from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder
from xgboost.sklearn import XGBClassifier


train_df_hw = pd.read_csv('/Users/summer/Desktop/xul_data/learning/DataWhale/20200719NLP/task01_preparing_20200719/input/train_set.csv', sep='\t', nrows=10000)
tfidf_hw = TfidfVectorizer(ngram_range=(1,5), max_features=3000)
train_hw_test = tfidf_hw.fit_transform(train_df_hw['text'])


x_train = train_hw_test[:7000]
y_train = train_df_hw['label'].values[:7000]

x_test = train_hw_test[7000:10000]
y_test = train_df_hw['label'].values[7000:10000]

gbm1 = GradientBoostingClassifier(n_estimators=50, random_state=10, subsample=0.6, max_depth=4,
                                  min_samples_split=400)
gbm1.fit(x_train, y_train)

gbm1_pred_hw = gbm1.predict(x_test) 
print(f1_score(y_test, gbm1_pred_hw, average='macro'))

Output:

>>> 0.8165503231061779

Test 2: TF-IDF + GBDT + LR, building on Test 1

import numpy as np

## Feature transformation
## model.apply(x_train) returns, for each training sample in x_train, the index of the leaf node it falls into in every tree of the fitted model
y_pred = gbm1.apply(x_train)
y_pred = y_pred.reshape(7000, -1) # one row per sample; the training set has 7000 samples, hence reshape(7000, -1)

## Print the shape of the result: for a multiclass problem apply() returns one leaf index per tree per class, so after the reshape it is (7000, number of trees x number of classes), here (7000, 700)
print(np.array(y_pred).shape)
print(y_pred[0])

# One-hot encode the leaf indices so they can be fed to a linear model
enc = OneHotEncoder()
enc.fit(y_pred)
y_pred2 = np.array(enc.transform(y_pred).toarray())


### Apply the same transformation to the test set
y_pred_test = gbm1.apply(x_test)
y_pred_test = y_pred_test.reshape(3000, -1)
print(np.array(y_pred_test).shape) #(3000, 700)
print(y_pred_test[0])

y_pred_test2 = np.array(enc.transform(y_pred_test).toarray()) 

## Prediction
LR = LogisticRegression(penalty='l2')
LR.fit(y_pred2, y_train)

lr_pred_hw = LR.predict(y_pred_test2) 
print(f1_score(y_test, lr_pred_hw, average='macro'))

Output:

>>> 0.8515161019132009
    Adding the LR stage improves the F1 score by about 3.5 percentage points over GBDT alone (from 0.8166 to 0.8515).