Spark MLlib Deep Learning: Deep Belief Network 2.2
Chapter 2: Deep Belief Network
Fundamentals and Source Code Walkthrough
2.1 Deep Belief Network Fundamentals
General background material:
http://tieba.baidu.com/p/2895759455
Original references:
《Learning Deep Architectures for AI》
《A Practical Guide to Training Restricted Boltzmann Machines》
2.2 Deep Learning DBN Source Code Walkthrough
2.2.1 DBN Code Structure
The DBN source code consists of two classes, DBN and DBNModel, organized as follows:
DBN structure: (class structure diagram omitted)
DBNModel structure: (class structure diagram omitted)
2.2.2 DBN Training Process
The DBN is pretrained greedily, layer by layer: the first RBM is trained on the raw input, and each subsequent RBM is trained on the sigmoid activations produced by the layer below. This procedure is implemented by DBNtrain and RBMtrain, analyzed in 2.2.3.
2.2.3 DBN Code Analysis
(1) DBNweight
/**
 * W:  weight matrix
 * vW: weight momentum
 * b:  visible-layer bias
 * vb: visible-layer bias momentum
 * c:  hidden-layer bias
 * vc: hidden-layer bias momentum
 */
case class DBNweight(
  W: BDM[Double],
  vW: BDM[Double],
  b: BDM[Double],
  vb: BDM[Double],
  c: BDM[Double],
  vc: BDM[Double]) extends Serializable
DBNweight: a custom data type that stores the weight matrix, bias vectors, and their momentum terms for one RBM layer.
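As an illustration only (not from the original source), a DBNweight for a single RBM with 3 visible units and 2 hidden units can be built from zero matrices; the shapes follow the InitialW/Initialb/Initialc conventions below, and BDM is assumed to be the breeze.linalg.DenseMatrix alias used throughout this code.
import breeze.linalg.{DenseMatrix => BDM}
// Hypothetical example: zero-initialized parameters for one RBM
// with 3 visible units and 2 hidden units (W is hidden x visible).
val w0 = DBNweight(
  W = BDM.zeros[Double](2, 3), vW = BDM.zeros[Double](2, 3),
  b = BDM.zeros[Double](3, 1), vb = BDM.zeros[Double](3, 1),
  c = BDM.zeros[Double](2, 1), vc = BDM.zeros[Double](2, 1))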
(2) DBNConfig
/**
 * Configuration parameters
 */
case class DBNConfig(
  size: Array[Int],
  layer: Int,
  momentum: Double,
  alpha: Double) extends Serializable
DBNConfig: defines and stores the network configuration. Parameters:
size: network architecture (number of units in each layer)
layer: number of layers
momentum: momentum coefficient
alpha: learning rate
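For example (illustrative values only, not from the original post), a network with 784 inputs, 500 hidden units, and 100 top-level units could be configured as follows:
// Hypothetical configuration: a 784 -> 500 -> 100 network,
// momentum 0.5 and learning rate (alpha) 0.1; layer equals size.length.
val conf = DBNConfig(
  size = Array(784, 500, 100),
  layer = 3,
  momentum = 0.5,
  alpha = 0.1)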
(3) InitialW
Initializes the weight matrices W.
/**
 * Initialize the weights W
 *
 */
def InitialW(size: Array[Int]): Array[BDM[Double]] = {
  // initialize the weight parameters
  // weights and weight momentum
  // dbn.rbm{u}.W = zeros(dbn.sizes(u + 1), dbn.sizes(u));
  val n = size.length
  val rbm_W = ArrayBuffer[BDM[Double]]()
  for (i <- 1 to n - 1) {
    val d1 = BDM.zeros[Double](size(i), size(i - 1))
    rbm_W += d1
  }
  rbm_W.toArray
}
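For instance, with size = Array(784, 500, 100) the function returns two zero matrices of shape 500×784 and 100×500, matching the quoted MATLAB convention zeros(sizes(u + 1), sizes(u)). This is an illustrative check only, assuming InitialW lives on the DBN companion object as the later calls suggest:
val ws = DBN.InitialW(Array(784, 500, 100))
println((ws(0).rows, ws(0).cols))  // (500, 784)
println((ws(1).rows, ws(1).cols))  // (100, 500)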
(4) InitialvW
Initializes the weight momentum matrices vW.
/**
 * Initialize the weight momentum vW
 *
 */
def InitialvW(size: Array[Int]): Array[BDM[Double]] = {
  // initialize the weight parameters
  // weights and weight momentum
  // dbn.rbm{u}.vW = zeros(dbn.sizes(u + 1), dbn.sizes(u));
  val n = size.length
  val rbm_vW = ArrayBuffer[BDM[Double]]()
  for (i <- 1 to n - 1) {
    val d1 = BDM.zeros[Double](size(i), size(i - 1))
    rbm_vW += d1
  }
  rbm_vW.toArray
}
(5) Initialb
Initializes the visible-layer bias vectors b.
/**
 * Initialize the bias vectors b
 *
 */
def Initialb(size: Array[Int]): Array[BDM[Double]] = {
  // initialize the bias vectors b
  // weights and weight momentum
  // dbn.rbm{u}.b = zeros(dbn.sizes(u), 1);
  val n = size.length
  val rbm_b = ArrayBuffer[BDM[Double]]()
  for (i <- 1 to n - 1) {
    val d1 = BDM.zeros[Double](size(i - 1), 1)
    rbm_b += d1
  }
  rbm_b.toArray
}
(6) Initialvb
Initializes the visible-layer bias momentum vectors vb.
/**
 * Initialize the bias momentum vectors vb
 *
 */
def Initialvb(size: Array[Int]): Array[BDM[Double]] = {
  // initialize the bias momentum vectors vb
  // weights and weight momentum
  // dbn.rbm{u}.vb = zeros(dbn.sizes(u), 1);
  val n = size.length
  val rbm_vb = ArrayBuffer[BDM[Double]]()
  for (i <- 1 to n - 1) {
    val d1 = BDM.zeros[Double](size(i - 1), 1)
    rbm_vb += d1
  }
  rbm_vb.toArray
}
(7) Initialc
Initializes the hidden-layer bias vectors c.
/**
 * Initialize the bias vectors c
 *
 */
def Initialc(size: Array[Int]): Array[BDM[Double]] = {
  // initialize the bias vectors c
  // weights and weight momentum
  // dbn.rbm{u}.c = zeros(dbn.sizes(u + 1), 1);
  val n = size.length
  val rbm_c = ArrayBuffer[BDM[Double]]()
  for (i <- 1 to n - 1) {
    val d1 = BDM.zeros[Double](size(i), 1)
    rbm_c += d1
  }
  rbm_c.toArray
}
(8) Initialvc
Initializes the hidden-layer bias momentum vectors vc.
/**
 * Initialize the bias momentum vectors vc
 *
 */
def Initialvc(size: Array[Int]): Array[BDM[Double]] = {
  // initialize the bias momentum vectors vc
  // weights and weight momentum
  // dbn.rbm{u}.vc = zeros(dbn.sizes(u + 1), 1);
  val n = size.length
  val rbm_vc = ArrayBuffer[BDM[Double]]()
  for (i <- 1 to n - 1) {
    val d1 = BDM.zeros[Double](size(i), 1)
    rbm_vc += d1
  }
  rbm_vc.toArray
}
(9) sigmrnd
Gibbs sampling.
/**
 * Gibbs sampling
 * X = double(1./(1+exp(-P)) > rand(size(P)));
 */
def sigmrnd(P: BDM[Double]): BDM[Double] = {
  val s1 = 1.0 / (Bexp(P * (-1.0)) + 1.0)
  val r1 = BDM.rand[Double](s1.rows, s1.cols)
  val a1 = s1 :> r1
  val a2 = a1.data.map { f => if (f == true) 1.0 else 0.0 }
  val a3 = new BDM(s1.rows, s1.cols, a2)
  a3
}
/**
 * Gibbs sampling (additive-noise variant)
 * X = double(1./(1+exp(-P)))+1*randn(size(P));
 */
def sigmrnd2(P: BDM[Double]): BDM[Double] = {
  val s1 = 1.0 / (Bexp(P * (-1.0)) + 1.0)
  val r1 = BDM.rand[Double](s1.rows, s1.cols)
  val a3 = s1 + (r1 * 1.0)
  a3
}
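A small usage sketch (assuming Bexp is the Breeze element-wise exp alias imported elsewhere in this class): sigmrnd turns a matrix of pre-activations into a 0/1 sample drawn from the corresponding sigmoid probabilities, while sigmrnd2 returns the probabilities plus random noise.
// Illustrative only: binary Gibbs samples from sigmoid probabilities
val pre = BDM((0.5, -1.2), (2.0, 0.0))   // pre-activations
val sample = DBN.sigmrnd(pre)            // each entry is 0.0 or 1.0
val noisy  = DBN.sigmrnd2(pre)           // sigmoid probabilities plus noise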
(10) DBNtrain
Trains the network one layer at a time.
/**
 * Deep Belief Network
 * Run the DBN training: DBNtrain
 */
def DBNtrain(train_d: RDD[(BDM[Double], BDM[Double])], opts: Array[Double]): DBNModel = {
  // broadcast the configuration parameters
  val sc = train_d.sparkContext
  val dbnconfig = DBNConfig(size, layer, momentum, alpha)
  // initialize the weights
  var dbn_W = DBN.InitialW(size)
  var dbn_vW = DBN.InitialvW(size)
  var dbn_b = DBN.Initialb(size)
  var dbn_vb = DBN.Initialvb(size)
  var dbn_c = DBN.Initialc(size)
  var dbn_vc = DBN.Initialvc(size)
  // train layer 1
  printf("Training Level: 1.\n")
  val weight0 = new DBNweight(dbn_W(0), dbn_vW(0), dbn_b(0), dbn_vb(0), dbn_c(0), dbn_vc(0))
  val weight1 = RBMtrain(train_d, opts, dbnconfig, weight0)
  dbn_W(0) = weight1.W
  dbn_vW(0) = weight1.vW
  dbn_b(0) = weight1.b
  dbn_vb(0) = weight1.vb
  dbn_c(0) = weight1.c
  dbn_vc(0) = weight1.vc
  // train layers 2 through n
  for (i <- 2 to dbnconfig.layer - 1) {
    // forward computation of x
    // x = sigm(repmat(rbm.c', size(x, 1), 1) + x * rbm.W');
    printf("Training Level: %d.\n", i)
    val tmp_bc_w = sc.broadcast(dbn_W(i - 2))
    val tmp_bc_c = sc.broadcast(dbn_c(i - 2))
    val train_d2 = train_d.map { f =>
      val lable = f._1
      val x = f._2
      val x2 = DBN.sigm(x * tmp_bc_w.value.t + tmp_bc_c.value.t)
      (lable, x2)
    }
    // train layer i
    val weighti = new DBNweight(dbn_W(i - 1), dbn_vW(i - 1), dbn_b(i - 1),
      dbn_vb(i - 1), dbn_c(i - 1), dbn_vc(i - 1))
    val weight2 = RBMtrain(train_d2, opts, dbnconfig, weighti)
    dbn_W(i - 1) = weight2.W
    dbn_vW(i - 1) = weight2.vW
    dbn_b(i - 1) = weight2.b
    dbn_vb(i - 1) = weight2.vb
    dbn_c(i - 1) = weight2.c
    dbn_vc(i - 1) = weight2.vc
  }
  new DBNModel(dbnconfig, dbn_W, dbn_b, dbn_c)
}
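A minimal driver sketch, with its assumptions flagged: train_d is an RDD of (label, features) pairs of Breeze matrices, dbn is an already-configured DBN instance, and the meaning of the opts entries (batch size, number of epochs) follows the RBMtrain code below rather than any documented API.
// Hypothetical usage: opts(0) = batch size, opts(1) = number of epochs (assumed semantics)
val opts = Array(100.0, 20.0)
val model: DBNModel = dbn.DBNtrain(train_d, opts)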
(11) RBMtrain
Trains a single RBM layer with contrastive divergence (CD-1).
/**
 * Deep Belief Network
 * Train each layer's RBM: rbmtrain
 */
def RBMtrain(train_t: RDD[(BDM[Double], BDM[Double])],
  opts: Array[Double],
  dbnconfig: DBNConfig,
  weight: DBNweight): DBNweight = {
  val sc = train_t.sparkContext
  var StartTime = System.currentTimeMillis()
  var EndTime = System.currentTimeMillis()
  // weight parameter variables
  var rbm_W = weight.W
  var rbm_vW = weight.vW
  var rbm_b = weight.b
  var rbm_vb = weight.vb
  var rbm_c = weight.c
  var rbm_vc = weight.vc
  // broadcast the configuration
  val bc_config = sc.broadcast(dbnconfig)
  // number of training samples
  val m = train_t.count
  // compute the number of batches
  // opts(0): batch size, opts(1): number of epochs
  val batchsize = opts(0).toInt
  val numepochs = opts(1).toInt
  val numbatches = (m / batchsize).toInt
  // numepochs is the number of training epochs
  for (i <- 1 to numepochs) {
    StartTime = System.currentTimeMillis()
    val splitW2 = Array.fill(numbatches)(1.0 / numbatches)
    var err = 0.0
    // randomly split the samples into batches according to the split weights
    for (l <- 1 to numbatches) {
      // 1 broadcast the weight parameters
      val bc_rbm_W = sc.broadcast(rbm_W)
      val bc_rbm_vW = sc.broadcast(rbm_vW)
      val bc_rbm_b = sc.broadcast(rbm_b)
      val bc_rbm_vb = sc.broadcast(rbm_vb)
      val bc_rbm_c = sc.broadcast(rbm_c)
      val bc_rbm_vc = sc.broadcast(rbm_vc)
      // 2 select this batch of samples
      val train_split2 = train_t.randomSplit(splitW2, System.nanoTime())
      val batch_xy1 = train_split2(l - 1)
      // 3 forward computation
      // v1 = batch;
      // h1 = sigmrnd(repmat(rbm.c', opts.batchsize, 1) + v1 * rbm.W');
      // v2 = sigmrnd(repmat(rbm.b', opts.batchsize, 1) + h1 * rbm.W);
      // h2 = sigm(repmat(rbm.c', opts.batchsize, 1) + v2 * rbm.W');
      // c1 = h1' * v1;
      // c2 = h2' * v2;
      val batch_vh1 = batch_xy1.map { f =>
        val lable = f._1
        val v1 = f._2
        val h1 = DBN.sigmrnd((v1 * bc_rbm_W.value.t + bc_rbm_c.value.t))
        val v2 = DBN.sigmrnd((h1 * bc_rbm_W.value + bc_rbm_b.value.t))
        val h2 = DBN.sigm(v2 * bc_rbm_W.value.t + bc_rbm_c.value.t)
        val c1 = h1.t * v1
        val c2 = h2.t * v2
        (lable, v1, h1, v2, h2, c1, c2)
      }
      // 4 compute the update directions
      // rbm.vW = rbm.momentum * rbm.vW + rbm.alpha * (c1 - c2) / opts.batchsize;
      // rbm.vb = rbm.momentum * rbm.vb + rbm.alpha * sum(v1 - v2)' / opts.batchsize;
      // rbm.vc = rbm.momentum * rbm.vc + rbm.alpha * sum(h1 - h2)' / opts.batchsize;
      // update direction for W
      val vw1 = batch_vh1.map {
        case (lable, v1, h1, v2, h2, c1, c2) =>
          c1 - c2
      }
      val initw = BDM.zeros[Double](bc_rbm_W.value.rows, bc_rbm_W.value.cols)
      val (vw2, countw2) = vw1.treeAggregate((initw, 0L))(
        seqOp = (c, v) => {
          // c: (m, count), v: (m)
          val m1 = c._1
          val m2 = m1 + v
          (m2, c._2 + 1)
        },
        combOp = (c1, c2) => {
          // c: (m, count)
          val m1 = c1._1
          val m2 = c2._1
          val m3 = m1 + m2
          (m3, c1._2 + c2._2)
        })
      val vw3 = vw2 / countw2.toDouble
      rbm_vW = bc_config.value.momentum * bc_rbm_vW.value + bc_config.value.alpha * vw3
      // update direction for b
      val vb1 = batch_vh1.map {
        case (lable, v1, h1, v2, h2, c1, c2) =>
          (v1 - v2)
      }
      val initb = BDM.zeros[Double](bc_rbm_vb.value.cols, bc_rbm_vb.value.rows)
      val (vb2, countb2) = vb1.treeAggregate((initb, 0L))(
        seqOp = (c, v) => {
          // c: (m, count), v: (m)
          val m1 = c._1
          val m2 = m1 + v
          (m2, c._2 + 1)
        },
        combOp = (c1, c2) => {
          // c: (m, count)
          val m1 = c1._1
          val m2 = c2._1
          val m3 = m1 + m2
          (m3, c1._2 + c2._2)
        })
      val vb3 = vb2 / countb2.toDouble
      rbm_vb = bc_config.value.momentum * bc_rbm_vb.value + bc_config.value.alpha * vb3.t
      // update direction for c
      val vc1 = batch_vh1.map {
        case (lable, v1, h1, v2, h2, c1, c2) =>
          (h1 - h2)
      }
      val initc = BDM.zeros[Double](bc_rbm_vc.value.cols, bc_rbm_vc.value.rows)
      val (vc2, countc2) = vc1.treeAggregate((initc, 0L))(
        seqOp = (c, v) => {
          // c: (m, count), v: (m)
          val m1 = c._1
          val m2 = m1 + v
          (m2, c._2 + 1)
        },
        combOp = (c1, c2) => {
          // c: (m, count)
          val m1 = c1._1
          val m2 = c2._1
          val m3 = m1 + m2
          (m3, c1._2 + c2._2)
        })
      val vc3 = vc2 / countc2.toDouble
      rbm_vc = bc_config.value.momentum * bc_rbm_vc.value + bc_config.value.alpha * vc3.t
      // 5 update the weights
      // rbm.W = rbm.W + rbm.vW;
      // rbm.b = rbm.b + rbm.vb;
      // rbm.c = rbm.c + rbm.vc;
      rbm_W = bc_rbm_W.value + rbm_vW
      rbm_b = bc_rbm_b.value + rbm_vb
      rbm_c = bc_rbm_c.value + rbm_vc
      // 6 compute the reconstruction error
      val dbne1 = batch_vh1.map {
        case (lable, v1, h1, v2, h2, c1, c2) =>
          (v1 - v2)
      }
      val (dbne2, counte) = dbne1.treeAggregate((0.0, 0L))(
        seqOp = (c, v) => {
          // c: (e, count), v: (m)
          val e1 = c._1
          val e2 = (v :* v).sum
          val esum = e1 + e2
          (esum, c._2 + 1)
        },
        combOp = (c1, c2) => {
          // c: (e, count)
          val e1 = c1._1
          val e2 = c2._1
          val esum = e1 + e2
          (esum, c1._2 + c2._2)
        })
      val dbne = dbne2 / counte.toDouble
      err += dbne
    }
    EndTime = System.currentTimeMillis()
    // print the elapsed time and reconstruction error for this epoch
    printf("epoch: numepochs = %d , Took = %d seconds; Average reconstruction error is: %f.\n", i,
      scala.math.ceil((EndTime - StartTime).toDouble / 1000.0).toLong,
      err / numbatches.toDouble)
  }
  new DBNweight(rbm_W, rbm_vW, rbm_b,
    rbm_vb, rbm_c, rbm_vc)
}
2.2.4 DBNModel Analysis
(1) DBNModel
DBNModel: stores the parameters of a trained DBN, including the config settings, the weights dbn_W, and the bias vectors dbn_b and dbn_c.
class DBNModel(
  val config: DBNConfig,
  val dbn_W: Array[BDM[Double]],
  val dbn_b: Array[BDM[Double]],
  val dbn_c: Array[BDM[Double]]) extends Serializable {
}
(2) dbnunfoldtonn
dbnunfoldtonn: converts the DBN parameters into the parameters of a feed-forward neural network (NN); a usage sketch follows the listing.
/**
 * Convert a DBN model into an NN model
 * (weight conversion)
 */
def dbnunfoldtonn(outputsize: Int): (Array[Int], Int, Array[BDM[Double]]) = {
  // 1 convert the size and layer parameters
  val size = if (outputsize > 0) {
    val size1 = config.size
    val size2 = ArrayBuffer[Int]()
    size2 ++= size1
    size2 += outputsize
    size2.toArray
  } else config.size
  val layer = if (outputsize > 0) config.layer + 1 else config.layer
  // 2 convert the dbn_W parameters
  var initW = ArrayBuffer[BDM[Double]]()
  for (i <- 0 to dbn_W.length - 1) {
    initW += BDM.horzcat(dbn_c(i), dbn_W(i))
  }
  (size, layer, initW.toArray)
}
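For example (illustrative only), unfolding a trained model with a 10-unit output layer appended yields the sizes, layer count, and weight matrices that a feed-forward NN trainer expects; each returned matrix has the hidden bias column prepended to the weights.
// Hypothetical usage on a trained DBNModel
val (nnSize, nnLayer, nnW) = model.dbnunfoldtonn(10)
// nnSize  = config.size with 10 appended
// nnLayer = config.layer + 1
// nnW(i)  = [ dbn_c(i) | dbn_W(i) ]  (bias column prepended)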
Please credit the original source when reposting: http://blog.csdn.net/sunbow0