As a variant of the RNN, the LSTM is widely used for time-series forecasting. This article combines the EMD (empirical mode decomposition) algorithm with an LSTM and proposes an EMD-LSTM approach to air-quality prediction. The results show that the predictions of a plain LSTM lag behind the observations; compared with the LSTM alone, EMD-LSTM not only reduces the RMSE (root mean square error) substantially but also alleviates the lag of the predicted values behind the true values.
Introduction to LSTM
The LSTM is a variant of the RNN that mitigates the vanishing-gradient problem encountered when training RNNs and can therefore memorize long-range dependencies. For details, see this link (https://zybuluo.com/hanbingtao/note/581764).
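For reference, a standard formulation of the LSTM cell update (the usual textbook gate equations; the notation is not taken from the linked article) is:

\begin{aligned}
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) \\
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) \\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) \\
\tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c) \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \\
h_t &= o_t \odot \tanh(c_t)
\end{aligned}

Here x_t is the input at step t, h_t the hidden state, c_t the cell state, and f_t, i_t, o_t the forget, input, and output gates; the largely additive cell-state update is what allows gradients to survive over long ranges.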
LSTM Prediction
To make predictions with an LSTM, the time-series data must first be converted into supervised-learning samples; the conversion procedure is described at this link (https://machinelearningmastery.com/multivariate-time-series-forecasting-lstms-keras/). A sketch of the conversion helper is given first, followed by the prediction code:
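The prediction code below imports prepared arrays from a project module (Air_Pollution_Forcast_Beijing.model.data_tranform), which is assumed to perform this conversion with a series_to_supervised helper. A minimal sketch of such a helper, following the shift-based framing in the linked tutorial (the project's actual implementation may differ), could look like this:

import pandas as pd

def series_to_supervised(data, n_in=1, n_out=1, dropnan=True):
    # Frame a (scaled) multivariate series as a supervised-learning table:
    # columns for t-n_in ... t-1 become the inputs, columns for t ... t+n_out-1 the targets.
    df = pd.DataFrame(data)
    n_vars = df.shape[1]
    cols, names = [], []
    for i in range(n_in, 0, -1):  # lagged inputs
        cols.append(df.shift(i))
        names += ['var%d(t-%d)' % (j + 1, i) for j in range(n_vars)]
    for i in range(n_out):        # current / future targets
        cols.append(df.shift(-i))
        names += [('var%d(t)' % (j + 1)) if i == 0 else ('var%d(t+%d)' % (j + 1, i))
                  for j in range(n_vars)]
    agg = pd.concat(cols, axis=1)
    agg.columns = names
    if dropnan:                   # shifting introduces NaN rows at the edges
        agg.dropna(inplace=True)
    return agg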
from keras import Sequential
from keras.layers import LSTM, Dense, Dropout
from keras.regularizers import l2
from Air_Pollution_Forcast_Beijing.model.data_tranform import scaler, test_x, train_X, test_X, train_y, test_y, n_hours, \
n_features
import matplotlib.pyplot as plt
from numpy import concatenate  # array concatenation
from math import sqrt
from sklearn.metrics import mean_squared_error
import numpy as np
model = Sequential()
model.add(LSTM(20, input_shape=(train_X.shape[1], train_X.shape[2]), return_sequences=True, kernel_regularizer=l2(0.005),
recurrent_regularizer=l2(0.005)))
model.add(LSTM(20, kernel_regularizer=l2(0.005), recurrent_regularizer=l2(0.005)))
model.add(Dense(1))
model.compile(loss='mae', optimizer='adam')
history = model.fit(train_X, train_y, epochs=100, batch_size=2**8, validation_data=(test_X, test_y))
'''
Plot the training and validation loss curves
'''
plt.plot(history.history['loss'], label='train')
plt.plot(history.history['val_loss'], label='test')
plt.legend()
plt.show()
# make the prediction; to evaluate the error on the original scale, transform the scaled data back to its original range first
yHat = model.predict(test_X)
y = model.predict(train_X)
test_X = test_X.reshape((test_X.shape[0], n_hours*n_features))
'''
Note: the concatenated array must keep the same number of columns as the array the scaler was originally fitted on
'''
inv_yHat = concatenate((yHat, test_X[:, -7:]), axis=1)  # array concatenation: predictions + the remaining 7 feature columns
inv_yHat = scaler.inverse_transform(inv_yHat)
inv_yHat = inv_yHat[:, 0]
test_y = test_y.reshape((len(test_y), 1))
inv_y = concatenate((test_y, test_X[:, -7:]), axis=1)
inv_y = scaler.inverse_transform(inv_y)  # transform the scaled data back to the original range
inv_y = inv_y[:, 0]
model.summary()
rmse = sqrt(mean_squared_error(inv_yHat, inv_y))
print('Test RMSE: %.3f' % rmse)
raw = inv_y.size
inv_y = inv_y[-24*3:]
inv_yHat = inv_yHat[-24*3:]
plt.plot(inv_yHat, label='forecast')
plt.plot(inv_y, label='observation')
plt.ylabel('pm2.5')
plt.legend()
plt.show()
The RMSE of this prediction is 22.9. As the figure shows, the predicted values lag behind the true values, and the prediction at the current time step is almost equal to the observation at the previous time step. This behavior may be caused by the non-stationarity of the time series, so the series needs to be made stationary first. Common approaches include the log transform, smoothing, differencing, and decomposition (EMD, wavelet transform, etc.); a small differencing sketch is shown below for illustration. This article uses the EMD algorithm to decompose the series into its IMFs (Intrinsic Mode Functions) and a residual, runs an LSTM prediction on each component separately, and finally sums the component predictions to obtain the final forecast.
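As a simple illustration of the differencing option only (it is not the approach adopted below, and the toy series here is purely hypothetical), first-order differencing and its inversion with pandas might look like this:

import numpy as np
import pandas as pd

# toy stand-in for the pm2.5 column; in the real data this would be the pollution series
x = pd.Series(np.random.rand(200).cumsum())
x_diff = x.diff(1).dropna()                # first-order difference: x_t - x_{t-1}
x_restored = x_diff.cumsum() + x.iloc[0]   # invert the differencing after forecasting
assert np.allclose(x_restored.values, x.values[1:])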
EMD-LSTM Prediction
The component series obtained by applying EMD to the time-series signal are shown below:
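The code in the next block imports an emd helper from the project (Air_Pollution_Forcast_Beijing.model.emd). A hedged sketch of how such a decomposition could be produced with the pyhht library (assuming the helper simply wraps pyhht's EMD class, which may not match the project's actual implementation) is:

import numpy as np
from pyhht import EMD
from pyhht.visualization import plot_imfs

def emd(signal):
    # decompose a 1-D signal into its IMFs; pyhht returns the residue as the last row
    decomposer = EMD(np.asarray(signal, dtype=float))
    return decomposer.decompose()

# usage on the pm2.5 series (pollution = values[:, 0] in the code below):
# imfs = emd(pollution)
# plot_imfs(pollution, imfs)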
An LSTM prediction is then run on each component separately; the code is as follows:
import pandas as pd
import numpy as np
from Air_Pollution_Forcast_Beijing.util import PROCESS_LEVEL1
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import MinMaxScaler
from Air_Pollution_Forcast_Beijing.model.emd import emd
from pyhht.visualization import plot_imfs
from Air_Pollution_Forcast_Beijing.model.series_to_supervised_learning import series_to_supervised
pd.options.display.expand_frame_repr = False
from keras import Sequential
from keras.layers import LSTM, Dense, Dropout
from keras.regularizers import l2
from numpy import concatenate
from sklearn.metrics import mean_squared_error
from math import sqrt
import matplotlib.pyplot as plt
dataset = pd.read_csv(PROCESS_LEVEL1, header=0, index_col=0)
dataset_columns = dataset.columns
values = dataset.values
# values[:, 0] = pd.Series(values[:, 0]).diff(1)
# values = pd.DataFrame(values).dropna().values
'''
Compute the autocorrelation (ACF) and partial autocorrelation (PACF) plots of the time series (exploratory, left commented out)
'''
# fig = plt.figure(figsize=(12,8))
# ax1=fig.add_subplot(211)
# fig = plot_acf(values[:, 0], lags=10, ax=ax1)
# ax2 = fig.add_subplot(212)
# fig = plot_pacf(values[:, 0], lags=10, ax=ax2)
# encode the wind-direction column (index 4) as integers; dummy (one-hot) encoding could also be used
encoder = LabelEncoder()
values[:, 4] = encoder.fit_transform(values[:, 4])
values = values.astype('float32')
# normalize the data; values.shape = (n, 8), so inverse_transform later also needs 8 columns
scaler = MinMaxScaler(feature_range=(0, 1))
scaled = scaler.fit_transform(values)
# scaleds = []
'''
Perform the EMD decomposition
'''
pollution = values[:, 0]
imfs = emd(pollution)
plot_imfs(pollution, np.array(imfs))
imfsValues = []
for imf in imfs:
    # substitute the pollution column with the current IMF and keep a copy of the full feature matrix
    values[:, 0] = imf
    imfsValues.append(values.copy())
inv_yHats = []
inv_ys = []
for imf in imfsValues:
    scaler = MinMaxScaler(feature_range=(0, 1))
    scaled = scaler.fit_transform(imf)
    # scaleds.append(scaled)
    n_hours = 4
    n_features = 8
    reframed = series_to_supervised(scaled, n_hours, 1)
    values = reframed.values
    n_train_hours = 365 * 24 * 4
    train = values[:n_train_hours, :]
    test = values[n_train_hours:, :]
    # split the supervised-learning table into inputs and targets; test_x.shape = (n, n_hours * n_features)
    n_obs = n_hours * n_features
    train_x, train_y = train[:, :n_obs], train[:, -n_features]
    test_x, test_y = test[:, :n_obs], test[:, -n_features]
    # reshape the inputs into the 3D format expected by the LSTM: [samples, timesteps, features]
    train_X = train_x.reshape((train_x.shape[0], n_hours, n_features))
    test_X = test_x.reshape((test_x.shape[0], n_hours, n_features))
    model = Sequential()
    model.add(LSTM(20, input_shape=(train_X.shape[1], train_X.shape[2]), return_sequences=True, kernel_regularizer=l2(0.005),
                   recurrent_regularizer=l2(0.005)))
    model.add(LSTM(20, kernel_regularizer=l2(0.005), recurrent_regularizer=l2(0.005)))
    model.add(Dense(1))
    model.compile(loss='mae', optimizer='adam')
    history = model.fit(train_X, train_y, epochs=500, batch_size=2 ** 10, validation_data=(test_X, test_y))
    # make the prediction; transform the scaled data back to the original range so the error is computed on that scale
    yHat = model.predict(test_X)
    y = model.predict(train_X)
    test_X = test_X.reshape((test_X.shape[0], n_hours * n_features))
    '''
    Note: the concatenated array must keep the same number of columns as the array the scaler was fitted on
    '''
    inv_yHat = concatenate((yHat, test_X[:, -7:]), axis=1)  # array concatenation
    inv_yHat = scaler.inverse_transform(inv_yHat)
    inv_yHat = inv_yHat[:, 0]
    inv_yHats.append(inv_yHat)
    test_y = test_y.reshape((len(test_y), 1))
    inv_y = concatenate((test_y, test_X[:, -7:]), axis=1)
    inv_y = scaler.inverse_transform(inv_y)  # transform the scaled data back to the original range
    inv_y = inv_y[:, 0]
    inv_ys.append(inv_y)
inv_yHats = np.array(inv_yHats)
inv_yHats = np.sum(inv_yHats, axis=0)
inv_ys = np.array(inv_ys)
inv_ys = np.sum(inv_ys, axis=0)
rmse = sqrt(mean_squared_error(inv_yHats, inv_ys))
print('Test RMSE: %.3f' % rmse)
inv_y = inv_ys[-24*3:]
inv_yHat = inv_yHats[-24*3:]
plt.plot(inv_yHat, label='forecast')
plt.plot(inv_y, label='observation')
plt.ylabel('pm2.5')
plt.legend()
plt.show()
The RMSE of this prediction is 18.7, lower than that of the LSTM-only model, and the lag between predictions and observations is greatly alleviated.