I tried using a convolutional AE and a convolutional VAE for unsupervised anomaly detection. The idea is as follows:

1. Train the AE or VAE on normal samples only.

2. Feed the test set to the trained AE or VAE to obtain reconstructed test data.

3. Compute the error between the reconstructed data and the original data; if the error exceeds a chosen threshold, the test sample is flagged as anomalous.
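The detection rule in step 3 can be sketched with plain NumPy (the `anomaly_flags` helper and the toy arrays below are illustrative, not part of the original code):

```python
import numpy as np

def anomaly_flags(original, reconstructed, threshold):
    """Flag samples whose per-sample reconstruction MSE exceeds the threshold."""
    errors = np.mean((original - reconstructed) ** 2, axis=1)  # per-sample MSE
    return errors > threshold  # True marks a suspected anomaly

# Toy usage: the second sample is reconstructed poorly, so it is flagged.
orig = np.array([[0.1, 0.2], [0.5, 0.9]])
recon = np.array([[0.1, 0.2], [0.0, 0.0]])
print(anomaly_flags(orig, recon, threshold=0.1))  # [False  True]
```

The same function works on the flattened reconstructions that the scripts below save to `ReconstructedData.csv`.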

The dataset is described as follows:

The dataset contains 10,100 samples, each a 1×48 vector. To turn each vector into a matrix, I appended a zero at the end (48 → 49 values) and reshaped it into a 7×7 matrix. The first 8,000 samples are normal. Of the remaining 2,100, the first 300 are normal; the following 1,800 contain 6 types of anomalous time series, with 300 samples per type.
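Under these assumptions the preprocessing and splits can be sketched as follows (`data` is a random stand-in for the real CSV contents):

```python
import numpy as np

# Each sample is a 1x48 vector; appending one zero gives 49 values,
# which reshape into a 7x7 matrix (one "image" channel per sample).
data = np.random.rand(10100, 48)                          # stand-in for the real dataset
padded = np.hstack([data, np.zeros((data.shape[0], 1))])  # 48 -> 49 columns
images = padded.reshape(-1, 7, 7, 1)                      # one 7x7 image per sample

x_train = images[:7000]      # normal samples used for training
x_valid = images[7000:8000]  # normal samples used for validation
x_test = images[8000:]       # 300 normal + 6 x 300 anomalous samples
print(x_train.shape, x_test.shape)  # (7000, 7, 7, 1) (2100, 7, 7, 1)
```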

The VAE code is as follows:

# https://blog.csdn.net/wyx100/article/details/80647379
'''This script demonstrates how to build a variational autoencoder
with Keras and deconvolution layers.
# Reference
- Auto-Encoding Variational Bayes
  https://arxiv.org/abs/1312.6114
'''
from __future__ import print_function

import numpy as np
import matplotlib.pyplot as plt
from pandas import read_csv
from keras.layers import Input, Dense, Lambda, Flatten, Reshape
from keras.layers import Conv2D, Conv2DTranspose
from keras.models import Model
from keras import backend as K
from keras import metrics

# input image dimensions
img_rows, img_cols, img_chns = 7, 7, 1
dimension_image = 7
# number of convolutional filters to use
filters = 64
# convolution kernel size
num_conv = 3
batch_size = 50

if K.image_data_format() == 'channels_first':
    original_img_size = (img_chns, img_rows, img_cols)
else:
    original_img_size = (img_rows, img_cols, img_chns)
latent_dim = 2
intermediate_dim = 128
epsilon_std = 1.0
epochs = 100

# encoder
x = Input(shape=original_img_size)
conv_1 = Conv2D(img_chns,
                kernel_size=(2, 2),
                padding='same', activation='relu')(x)
conv_2 = Conv2D(filters,
                kernel_size=(2, 2),
                padding='same', activation='relu',
                strides=(2, 2))(conv_1)
conv_3 = Conv2D(filters,
                kernel_size=num_conv,
                padding='same', activation='relu',
                strides=1)(conv_2)
conv_4 = Conv2D(filters,
                kernel_size=num_conv,
                padding='same', activation='relu',
                strides=1)(conv_3)
flat = Flatten()(conv_4)
hidden = Dense(intermediate_dim, activation='relu')(flat)

z_mean = Dense(latent_dim)(hidden)
z_log_var = Dense(latent_dim)(hidden)


def sampling(args):
    z_mean, z_log_var = args
    epsilon = K.random_normal(shape=(K.shape(z_mean)[0], latent_dim),
                              mean=0., stddev=epsilon_std)
    # the standard deviation is exp(z_log_var / 2); the original post used
    # K.exp(z_log_var), which scales epsilon by the variance instead
    return z_mean + K.exp(z_log_var / 2) * epsilon


# note that "output_shape" isn't necessary with the TensorFlow backend,
# so you could write `Lambda(sampling)([z_mean, z_log_var])`
z = Lambda(sampling, output_shape=(latent_dim,))([z_mean, z_log_var])

# we instantiate the decoder layers separately so as to reuse them later
number = 4
decoder_hid = Dense(intermediate_dim, activation='relu')
decoder_upsample = Dense(filters * number * number, activation='relu')

if K.image_data_format() == 'channels_first':
    output_shape = (batch_size, filters, number, number)
else:
    output_shape = (batch_size, number, number, filters)
decoder_reshape = Reshape(output_shape[1:])
decoder_deconv_1 = Conv2DTranspose(filters,
                                   kernel_size=num_conv,
                                   padding='same',
                                   strides=1,
                                   activation='relu')
decoder_deconv_2 = Conv2DTranspose(filters,
                                   kernel_size=num_conv,
                                   padding='same',
                                   strides=1,
                                   activation='relu')
# 4x4 -> 9x9 via the strided deconvolution, then 9x9 -> 7x7 via the
# final 'valid' 3x3 convolution, matching the 7x7 input
decoder_deconv_3_upsamp = Conv2DTranspose(filters,
                                          kernel_size=(3, 3),
                                          strides=(2, 2),
                                          padding='valid',
                                          activation='relu')
decoder_mean_squash = Conv2D(img_chns,
                             kernel_size=3,
                             padding='valid',
                             activation='sigmoid')

hid_decoded = decoder_hid(z)
up_decoded = decoder_upsample(hid_decoded)
reshape_decoded = decoder_reshape(up_decoded)
deconv_1_decoded = decoder_deconv_1(reshape_decoded)
deconv_2_decoded = decoder_deconv_2(deconv_1_decoded)
x_decoded_relu = decoder_deconv_3_upsamp(deconv_2_decoded)
x_decoded_mean_squash = decoder_mean_squash(x_decoded_relu)

# instantiate the VAE model
vae = Model(x, x_decoded_mean_squash)

# compute the VAE loss: reconstruction cross-entropy + KL divergence
xent_loss = img_rows * img_cols * metrics.binary_crossentropy(
    K.flatten(x),
    K.flatten(x_decoded_mean_squash))
kl_loss = - 0.5 * K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
vae_loss = K.mean(xent_loss + kl_loss)
vae.add_loss(vae_loss)
vae.compile(optimizer='adam')
vae.summary()

# load the dataset and split it: 7000 train / 1000 valid / 2100 test
dataset = read_csv('randperm_zerone_Dataset.csv')  # pass header=None if the CSV has no header row
XY = dataset.values
n_train_hours1 = 7000
n_train_hours3 = 8000
x_train = XY[:n_train_hours1, :]
x_valid = XY[n_train_hours1:n_train_hours3, :]
x_test = XY[n_train_hours3:, :]
x_train = x_train.reshape(-1, dimension_image, dimension_image, 1)
x_valid = x_valid.reshape(-1, dimension_image, dimension_image, 1)
x_test = x_test.reshape(-1, dimension_image, dimension_image, 1)

history = vae.fit(x_train,
                  shuffle=True,
                  epochs=epochs,
                  batch_size=batch_size,
                  validation_data=(x_valid, None))
plt.plot(history.history['loss'], label='train')
plt.plot(history.history['val_loss'], label='valid')
plt.legend()
plt.show()

# build a model that maps inputs to the latent space
encoder = Model(x, z_mean)
# show a 2D plot of the test samples in the latent space
x_test_encoded = encoder.predict(x_test, batch_size=batch_size)
plt.figure(figsize=(6, 6))
plt.scatter(x_test_encoded[:, 0], x_test_encoded[:, 1])
plt.show()

# reconstruct all splits and save them for the error computation
Reconstructed_train = vae.predict(x_train)
Reconstructed_valid = vae.predict(x_valid)
Reconstructed_test = vae.predict(x_test)
ReconstructedData1 = np.vstack((Reconstructed_train, Reconstructed_valid))
ReconstructedData2 = np.vstack((ReconstructedData1, Reconstructed_test))
ReconstructedData3 = ReconstructedData2.reshape((ReconstructedData2.shape[0], -1))
np.savetxt("ReconstructedData.csv", ReconstructedData3, delimiter=',')
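The script above saves the reconstructions but never picks the threshold needed in step 3. One common choice (my suggestion, not from the original post) is a high percentile of the reconstruction errors on the normal validation split, so that almost all normal samples fall below it:

```python
import numpy as np

def percentile_threshold(valid_errors, q=99.0):
    # valid_errors are per-sample errors on *normal* validation data;
    # q controls the expected false-alarm rate on normal samples
    return np.percentile(valid_errors, q)

# Toy usage with synthetic errors: normal errors are small, anomalies large.
rng = np.random.default_rng(0)
valid_errors = rng.random(1000) * 0.05               # stand-in normal errors
test_errors = np.concatenate([rng.random(300) * 0.05,  # normal part
                              0.5 + rng.random(100)])  # anomalous part
thr = percentile_threshold(valid_errors)
flags = test_errors > thr
print(flags[-100:].all())  # all large-error samples are flagged -> True
```

In practice, `valid_errors` would be the per-row MSE between `x_valid` and `Reconstructed_valid`.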

The AE code is as follows:

from keras.layers import Input, Dense, Conv2D, MaxPooling2D, UpSampling2D
from keras.models import Model
import numpy as np
from pandas import read_csv
from matplotlib import pyplot

dimension_image = 7

input_img = Input(shape=(dimension_image, dimension_image, 1))  # adapt this if using `channels_first` image data format
x = Conv2D(16, (3, 3), activation='relu', padding='same')(input_img)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
encoded = MaxPooling2D((2, 2), padding='same')(x)
# at this point the representation is (1, 1, 8) for 7x7 inputs

x = Conv2D(8, (3, 3), activation='relu', padding='same')(encoded)
x = UpSampling2D((2, 2))(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
x = Conv2D(16, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
# the final 'valid' 2x2 convolution brings the 8x8 map back to 7x7
decoded = Conv2D(1, (2, 2), activation='sigmoid')(x)

autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
autoencoder.summary()

# load the dataset and split it: 7000 train / 1000 valid / 2100 test
dataset = read_csv('randperm_zerone_Dataset.csv')  # pass header=None if the CSV has no header row
XY = dataset.values
n_train_hours1 = 7000
n_train_hours3 = 8000
x_train = XY[:n_train_hours1, :]
x_valid = XY[n_train_hours1:n_train_hours3, :]
x_test = XY[n_train_hours3:, :]
x_train = x_train.reshape(-1, dimension_image, dimension_image, 1)
x_valid = x_valid.reshape(-1, dimension_image, dimension_image, 1)
x_test = x_test.reshape(-1, dimension_image, dimension_image, 1)

history = autoencoder.fit(x_train, x_train,
                          epochs=200,
                          batch_size=32,
                          shuffle=True,
                          validation_data=(x_valid, x_valid))
pyplot.plot(history.history['loss'], label='train')
pyplot.plot(history.history['val_loss'], label='valid')
pyplot.legend()
pyplot.show()

# reconstruct all splits and save them for the error computation
Reconstructed_train = autoencoder.predict(x_train)
Reconstructed_valid = autoencoder.predict(x_valid)
Reconstructed_test = autoencoder.predict(x_test)
ReconstructedData1 = np.vstack((Reconstructed_train, Reconstructed_valid))
ReconstructedData2 = np.vstack((ReconstructedData1, Reconstructed_test))
ReconstructedData3 = ReconstructedData2.reshape((ReconstructedData2.shape[0], -1))
np.savetxt("ReconstructedData.csv", ReconstructedData3, delimiter=',')
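With the labeled test layout described earlier (first 300 samples normal, then 6 × 300 anomalous), the detector can be scored as follows. The `detection_rates` helper is illustrative; `flags` is simulated here, whereas in practice it would come from thresholding the per-sample reconstruction error on the test split:

```python
import numpy as np

def detection_rates(flags):
    """Score boolean anomaly flags against the known test-set layout."""
    labels = np.concatenate([np.zeros(300), np.ones(1800)])  # 0=normal, 1=anomaly
    tpr = flags[labels == 1].mean()  # fraction of anomalies caught
    fpr = flags[labels == 0].mean()  # fraction of normals falsely flagged
    return float(tpr), float(fpr)

# Toy usage: a perfect detector flags exactly the 1800 anomalous samples.
flags = np.concatenate([np.zeros(300, dtype=bool), np.ones(1800, dtype=bool)])
print(detection_rates(flags))  # (1.0, 0.0)
```

Reporting the rate per anomaly type (six slices of 300 within the last 1800 samples) would show which of the 6 anomaly classes the AE or VAE struggles with.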

As for the dataset: it is being uploaded to Baidu Wenku and will be added in a future update.
