Machine Learning: Adversarial Validation

What is adversarial validation? When do you need it, and how do you implement it?

Adversarial validation typically comes into play when you notice that your model looks very strong on the training data (very high AUC), yet performs terribly once it meets the test set, even though k-fold cross-validation raised no alarms. One likely explanation is that the training and test sets are substantially different. Many data science competitions suffer from exactly this: the test set differs visibly from the training set (violating the "same distribution" assumption), which makes it very hard to build a representative validation set. Adversarial validation tackles the problem by selecting the training samples most similar to the test samples and using them as the validation set. The core idea is to train a classifier to distinguish training samples from test samples. Conversely, in the ideal case where training and test samples come from the same distribution, the classifier cannot tell them apart, the validation error is a good estimate of the test error, and the model generalizes well to unseen test samples.
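Before the full pipeline, here is a minimal sketch of that core idea (an illustration, not the author's code): label training rows 1 and test rows 0, fit any binary classifier, and look at its cross-validated AUC. The feature matrices X_train and X_test are hypothetical stand-ins, and logistic regression is just one convenient choice of classifier.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical numeric feature matrices for the two sets
X_train = np.random.rand(1000, 10)
X_test = np.random.rand(500, 10)

# Label train as 1 and test as 0, then try to tell them apart
X = np.vstack([X_train, X_test])
y = np.concatenate([np.ones(len(X_train)), np.zeros(len(X_test))])

# AUC ~ 0.5: the sets are indistinguishable, a random validation split is fine.
# AUC near 1.0: the distributions differ, and adversarial validation helps.
auc = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                      scoring='roc_auc', cv=5).mean()
print("Adversarial AUC: %.3f" % auc)

With random data like this the AUC will hover around 0.5; on a real dataset with distribution drift it can be far higher.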

At this point you probably have questions, for instance: if we choose the validation set this way, aren't we just overfitting to the test set?

When the training and test distributions differ, adversarial validation makes a deliberate trade-off: it would rather risk fitting to the training samples that most resemble the test set (by reserving them for validation) than fit to samples that have little in common with the test set. The model's over-optimistic confidence is reined in, and the validation AUC drops to a more honest level.
What does this buy you? The model's training performance now tracks its test performance, which reduces the all-too-common situation where the model looks great in training and falls apart at test time.
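The walkthrough below implements this end to end with LightGBM: concatenate the train and test sets, label which rows came from which, fit the train-vs-test classifier under k-fold cross-validation, and finally rank the training rows by how test-like they look.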

import numpy as np
import pandas as pd
import gc
import datetime

from sklearn.model_selection import KFold
import lightgbm as lgb

# Params
NFOLD = 5
DATA_PATH = '../input/'

# Load data
train = pd.read_csv(DATA_PATH + "train.csv")
test = pd.read_csv(DATA_PATH + "test.csv")

# Mark train as 1, test as 0
train['target'] = 1
test['target'] = 0

# Concat dataframes
n_train = train.shape[0]
df = pd.concat([train, test], axis = 0)
del train, test
gc.collect()

# Remove columns with only one value in our training set
predictors = list(df.columns.difference(['ID', 'target']))
df_train = df.iloc[:n_train].copy()
cols_to_remove = [c for c in predictors if df_train[c].nunique() == 1]
df.drop(cols_to_remove, axis=1, inplace=True)

# Refresh the predictor list after dropping columns
predictors = list(df.columns.difference(['ID', 'target']))

# Add some basic meta features (zeros treated as missing)
df['cols_mean'] = df[predictors].replace(0, np.nan).mean(axis=1)
df['cols_count'] = df[predictors].replace(0, np.nan).count(axis=1)
df['cols_sum'] = df[predictors].replace(0, np.nan).sum(axis=1)
df['cols_std'] = df[predictors].replace(0, np.nan).std(axis=1)

# Include the new meta features among the predictors
predictors = list(df.columns.difference(['ID', 'target']))

# Prepare for training

# Shuffle dataset
df = df.iloc[np.random.permutation(len(df))]
df.reset_index(drop = True, inplace = True)

# Get target column name
target = 'target'

# lgb params
lgb_params = {
    'boosting': 'gbdt',
    'application': 'binary',
    'metric': 'auc',
    'learning_rate': 0.1,
    'num_leaves': 32,
    'max_depth': 8,
    'bagging_fraction': 0.7,
    'bagging_freq': 5,
    'feature_fraction': 0.7,
}

# Get folds for k-fold CV
folds = KFold(n_splits = NFOLD, shuffle = True, random_state = 0)
fold = folds.split(df)
    
eval_score = 0
n_estimators = 0
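# eval_preds will collect out-of-fold predictions: for every row (train and
# test alike), the predicted probability that it belongs to the training set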
eval_preds = np.zeros(df.shape[0])

# Run LightGBM for each fold
for i, (train_index, test_index) in enumerate(fold):
    print( "\n[{}] Fold {} of {}".format(datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"), i+1, NFOLD))
    train_X, valid_X = df[predictors].values[train_index], df[predictors].values[test_index]
    train_y, valid_y = df[target].values[train_index], df[target].values[test_index]

    dtrain = lgb.Dataset(train_X, label = train_y,
                          feature_name = list(predictors)
                          )
    dvalid = lgb.Dataset(valid_X, label = valid_y,
                          feature_name = list(predictors)
                          )
        
    eval_results = {}
    
    # LightGBM >= 4.0 moved early stopping, logging and eval recording
    # from keyword arguments to callbacks
    bst = lgb.train(lgb_params,
                         dtrain,
                         num_boost_round = 5000,
                         valid_sets = [dtrain, dvalid],
                         valid_names = ['train', 'valid'],
                         callbacks = [lgb.early_stopping(stopping_rounds = 100),
                                      lgb.log_evaluation(period = 100),
                                      lgb.record_evaluation(eval_results)])
    
    print("\nRounds:", bst.best_iteration)
    print("AUC: ", eval_results['valid']['auc'][bst.best_iteration-1])

    n_estimators += bst.best_iteration
    eval_score += eval_results['valid']['auc'][bst.best_iteration-1]
   
    eval_preds[test_index] += bst.predict(valid_X, num_iteration = bst.best_iteration)
    
n_estimators = int(round(n_estimators/NFOLD,0))
eval_score = round(eval_score/NFOLD,6)

print("\nModel Report")
print("Rounds: ", n_estimators)
print("AUC: ", eval_score)    

# Feature importance
lgb.plot_importance(bst, max_num_features = 20)
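# Note: bst here is the model from the last fold only. The most important
# features are the ones that best separate train from test, i.e. the features
# whose distributions drift the most; they are natural candidates to inspect
# or drop.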


# Get the training rows that look most like the test set
df_av = df[['ID', 'target']].copy()
df_av['preds'] = eval_preds
df_av_train = df_av[df_av.target == 1]
# Ascending sort: lowest P(train) first, i.e. the most test-like rows on top
df_av_train = df_av_train.sort_values(by=['preds']).reset_index(drop=True)

# Check distribution
df_av_train.preds.plot()

# Store to feather
df_av_train[['ID', 'preds']].reset_index(drop=True).to_feather('adversarial_validation.ft')


# Check first 20 rows
df_av_train.head(20)
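Finally, a hedged sketch of how the saved ranking might be consumed to build an explicit validation split. Everything beyond the feather file's schema is an assumption: N_VALID is an arbitrary hypothetical size, and the original labels are assumed to still live in train.csv on disk (the script above only overwrote 'target' in memory).

import pandas as pd

# Hypothetical follow-up: turn the ranking into a train/validation split
ranking = pd.read_feather('adversarial_validation.ft')  # most test-like rows first
train = pd.read_csv(DATA_PATH + "train.csv")  # original file, untouched on disk

N_VALID = 2000  # hypothetical validation size; tune to your dataset

# The ranking is sorted ascending by P(train), so the head holds the rows
# that look most like the test set
valid_ids = set(ranking['ID'].head(N_VALID))

valid_df = train[train['ID'].isin(valid_ids)]
train_df = train[~train['ID'].isin(valid_ids)]
print(train_df.shape, valid_df.shape)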