A sample network anomaly detection project

Suppose we want to detect network anomalies, with the understanding that an anomaly might point to hardware failure, application failure, or an intrusion.

What our model will show us

The RNN will train on a numeric representation of network activity logs: feature vectors that translate the raw mix of text and numerical data in the logs into a form the network can consume.

By feeding the RNN a large volume of network activity logs, with each log line treated as a time step, the network learns what normal, expected network activity looks like. When the trained network is fed new activity, it can classify that activity as normal and expected, or anomalous.
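
To make the idea of a feature vector concrete, here is a minimal sketch (not from the original article) of how one log line might be turned into numbers; the log format and the choice of features are hypothetical, and each resulting vector becomes one time step in the sequence fed to the RNN:

// Hypothetical vectorizer: the log format and selected features are illustrative only.
public class LogVectorizer {

    // Example line: "2017-05-01T02:13:44Z GET /api/status 200 1532 10"
    public static double[] vectorize(String logLine) {
        String[] parts = logLine.split("\\s+");

        double hourOfDay = Integer.parseInt(parts[0].substring(11, 13)); // 0-23
        double isPost    = parts[1].equals("POST") ? 1.0 : 0.0;          // request method flag
        double isError   = parts[3].startsWith("5") ? 1.0 : 0.0;         // 5xx status flag
        double bytesSent = Double.parseDouble(parts[4]);                 // response size
        double latencyMs = Double.parseDouble(parts[5]);                 // response time in ms

        return new double[] { hourOfDay, isPost, isError, bytesSent, latencyMs };
    }
}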

Training a neural net to recognize expected behavior has an advantage: it is rare to have a large volume of abnormal data, and certainly not enough to accurately classify all possible abnormal behavior. So we train the network on the normal data we do have, and let it alert us to non-normal activity in the future. (The opposite approach, training the network to recognize attacks directly, only makes sense where we have enough data about attacks.)

As an aside, the trained network does not necessarily note that certain activities happen at certain times (it does not know that a particular day is Sunday), but it does notice those more obvious temporal patterns we would be aware of, along with other connections between events that might not be apparent.

We’ll outline how to approach this problem using Deeplearning4j, a widely used open-source library for deep learning on the JVM. Deeplearning4j comes with a variety of tools that are useful throughout the model development process: DataVec is a collection of tools to assist with the extract-transform-load (ETL) tasks used to prepare data for model training. Just as Sqoop helps load data into Hadoop, DataVec helps load data into neural nets by cleaning, preprocessing, normalizing and standardizing data. It’s similar to Trifacta’s Wrangler but focused a bit more on binary data.

Getting started

The first stage includes typical big data tasks and ETL: we need to gather, move, store, prepare, normalize, and vectorize the logs. We also need to decide on the size of the time steps. Data transformation may require significant effort, since JSON logs, text logs, and logs with inconsistent labeling patterns will have to be read and converted into a numeric array. DataVec can help transform and normalize that data. As is the norm when developing machine learning models, the data must be split into a training set and a test (or evaluation) set.
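
As a rough sketch of what that ETL might look like with DataVec (the file layout, column positions, and batch size are assumptions, not details from the original article; numLabelClasses is the label-count variable that also appears in the sample configuration later in this article), per-sequence CSV files produced from the vectorized logs can be read, standardized, and wrapped in an iterator for training:

// Read CSV sequence files (one file per sequence); paths and file count are hypothetical.
SequenceRecordReader trainReader = new CSVSequenceRecordReader(0, ",");
trainReader.initialize(new NumberedFileInputSplit("/data/train/logs_%d.csv", 0, 449));

// Batch size 32; the last column (index 5) holds the label; false = classification, not regression.
DataSetIterator trainIter = new SequenceRecordReaderDataSetIterator(
        trainReader, 32, numLabelClasses, 5, false);

// Fit a standardizer (zero mean, unit variance) on the training split, then apply it to every batch.
NormalizerStandardize normalizer = new NormalizerStandardize();
normalizer.fit(trainIter);
trainIter.reset();
trainIter.setPreProcessor(normalizer);

The test (evaluation) split would be read the same way into a separate iterator, here called testIter.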

Training the network

The net’s initial training will run on the training split of the input data.

For the first training runs, you may need to adjust some hyperparameters (hyperparameters are parameters that control the configuration of the model and how it trains) so that the model actually learns from the data, and does so in a reasonable amount of time. We discuss a few hyperparameters below. As the model trains, you should look for a steady decrease in error.
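
As a minimal sketch of that training loop (assuming a configured MultiLayerNetwork named net, like the one shown in the Sample code section below, and the trainIter iterator sketched earlier), you can attach a score listener so the error is printed as training proceeds, then fit for a fixed number of epochs and watch for that steady decrease:

// Print the training score (error) every 10 iterations so its decrease is visible.
net.setListeners(new ScoreIterationListener(10));

int nEpochs = 30;                    // hypothetical epoch count
for (int i = 0; i < nEpochs; i++) {
    net.fit(trainIter);              // one full pass over the training data
    trainIter.reset();
}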

There is a risk that a neural network model will “overfit” the training data. A model that has been trained to the point of overfitting will get good scores on the training data, but will not make accurate decisions about data it has never seen before; in machine-learning parlance, it doesn’t “generalize.” Deeplearning4j provides regularization tools and “early stopping” that help prevent overfitting while training.
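
For example, L2 weight decay or dropout can be added to the network configuration, and Deeplearning4j’s early-stopping trainer can halt training once the score on a held-out set stops improving. A minimal sketch, assuming the conf, trainIter, and testIter from the other snippets in this article; the termination conditions and save directory are arbitrary choices:

// Stop after at most 30 epochs, or after 5 epochs with no improvement on the test-set score;
// the best model seen so far is saved to the (hypothetical) directory below.
EarlyStoppingConfiguration<MultiLayerNetwork> esConf =
        new EarlyStoppingConfiguration.Builder<MultiLayerNetwork>()
                .epochTerminationConditions(
                        new MaxEpochsTerminationCondition(30),
                        new ScoreImprovementEpochTerminationCondition(5))
                .scoreCalculator(new DataSetLossCalculator(testIter, true))
                .evaluateEveryNEpochs(1)
                .modelSaver(new LocalFileModelSaver("/models/anomaly-rnn"))
                .build();

EarlyStoppingTrainer trainer = new EarlyStoppingTrainer(esConf, conf, trainIter);
MultiLayerNetwork bestModel = trainer.fit().getBestModel();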

Training the neural net is the step that will take the most time and hardware. Running training on GPUs will lead to a significant decrease in training time, especially for image recognition, but additional hardware comes with additional cost, so it’s important that your deep-learning framework use hardware as efficiently as possible. Cloud services such as Azure and Amazon provide access to GPU-based instances, and neural nets can be trained on heterogeneous clusters with scalable commodity servers as well as purpose-built machines.

Productionizing the model

Deeplearning4j provides a ModelSerializer class to save a trained model. The saved model can either be used as-is (i.e., deployed to production) or updated later with further training.
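
A minimal sketch of saving and restoring a trained network (the file path is arbitrary; the boolean argument also saves the updater state so that training can be resumed later):

// Save the trained network; "true" also writes the updater state needed to resume training.
File modelFile = new File("anomaly-detector.zip");          // hypothetical path
ModelSerializer.writeModel(net, modelFile, true);

// Later, in production or before further training:
MultiLayerNetwork restored = ModelSerializer.restoreMultiLayerNetwork(modelFile);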

When performing network anomaly detection in production, incoming log data needs to be transformed (vectorized) into the same format the model was trained on. Based on the output of the neural network, you would then get reports on whether the current activity is within the range of normal, expected network behavior.
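
Concretely, the production path might look like the following sketch, reusing the hypothetical vectorizer and restored network from the earlier snippets; the class index used for “anomalous” and the alert threshold are assumptions:

// Vectorize an incoming log line exactly as the training data was vectorized, shape it as a
// single-time-step sequence [miniBatch=1, featureCount, timeSteps=1], and ask for class probabilities.
// (The nIn value of the first layer in the network configuration must match features.length.)
double[] features = LogVectorizer.vectorize(incomingLogLine);
INDArray input = Nd4j.create(features, new int[]{1, features.length, 1});

INDArray probabilities = restored.rnnTimeStep(input);          // shape [1, numLabelClasses, 1]
double anomalyProbability = probabilities.getDouble(0, 1, 0);  // class 1 = "anomalous" (assumed)

if (anomalyProbability > 0.5) {                                // hypothetical alert threshold
    System.out.println("Anomalous activity: " + incomingLogLine);
}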

Sample code

The configuration of a recurrent neural network might look something like this:

MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
        .seed(123)
        .optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT).iterations(1)
        .weightInit(WeightInit.XAVIER)
        .updater(Updater.NESTEROVS).momentum(0.9)
        .learningRate(0.005)
        // Clip individual gradient elements to keep recurrent training stable.
        .gradientNormalization(GradientNormalization.ClipElementWiseAbsoluteValue)
        .gradientNormalizationThreshold(0.5)
        .list()
        // Layer 0: an LSTM layer reading one feature per time step, with 10 hidden units.
        .layer(0, new GravesLSTM.Builder().activation("tanh").nIn(1).nOut(10).build())
        // Layer 1: a softmax output layer producing one probability per label class.
        .layer(1, new RnnOutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                .activation("softmax").nIn(10).nOut(numLabelClasses).build())
        .pretrain(false).backprop(true).build();

MultiLayerNetwork net = new MultiLayerNetwork(conf);
net.init();

Let’s describe a few important lines of this code:

  • .seed(123)

sets a random seed to initialize the neural net’s weights, in order to obtain reproducible results. Coefficients are typically initialized randomly, so to obtain consistent results while adjusting other hyperparameters we set a seed, which lets us reuse the same initial weights over and over as we tune and test.

  • .optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT).iterations(1)

selects the optimization algorithm (in this case, stochastic gradient descent) used to decide how to modify the weights to improve the error score. You probably won’t have to modify this.

  • .learningRate(0.005)

When using stochastic gradient descent, the error gradient (that is, the relation of a change in coefficients to a change in the net’s error) is calculated, and the weights are moved along this gradient in an attempt to move the error towards a minimum. SGD gives us the direction of less error, and the learning rate determines how big a step is taken in that direction. If the learning rate is too high, you may overshoot the error minimum; if it is too low, training will take forever. This is a hyperparameter you may need to adjust.
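
In code, a single stochastic gradient descent update of one weight amounts to the following schematic step (an illustration of the formula, not Deeplearning4j internals):

// Schematic SGD step: nudge a weight downhill along the error gradient.
double learningRate = 0.005;
double weight = 0.8;                  // current value of one weight
double gradient = 0.3;                // dError/dWeight at the current point
weight -= learningRate * gradient;    // new weight = 0.8 - 0.005 * 0.3 = 0.7985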

Getting help

There is an active community of Deeplearning4j users who can be found on several support channels on Gitter.

About the author

Tom Hanlon is currently at Skymind.IO, where he is developing a training program for Deeplearning4j. The consistent thread in Tom’s career has been data, from MySQL to Hadoop and now neural networks.

Source: https://www.infoq.com/articles/deep-learning-time-series-anomaly-detection
