ELU:
 
Gradient descent optimization methods:
 
  1. GradientDescentOptimizer
    This one is sensitive to the problem at hand, and you can run into many issues with it, from getting stuck at saddle points to oscillating around the minimum and slow convergence. I found it useful for Word2Vec, CBOW, and feed-forward architectures in general, though Momentum also works well.
  2. AdadeltaOptimizer 
    Adadelta addresses the issues with using a constant or linearly decaying learning rate. For recurrent networks it is among the fastest.
  3. MomentumOptimizer
    If you are training a regression model and find the loss oscillating, switching from SGD to Momentum may be the right fix.
  4. AdamOptimizer
    Adds adaptive momentum on top of the Adadelta-style features.
  5. FtrlOptimizer
    I haven’t used it myself, but from the paper I see that it’s better suited for online learning on large sparse datasets, like recommendation systems.
  6. RMSPropOptimizer
    This is a variant of Adadelta that serves the same purpose: dynamic decay of the per-parameter learning-rate multiplier. All of these optimizers share the same interface; see the sketch after this list.
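A minimal sketch of how these are swapped in practice, assuming TensorFlow 1.x and a toy linear-regression loss (shapes and learning rates are illustrative, not tuned); all of the optimizers above expose the same minimize() interface, so trying another one is a one-line change:

    import tensorflow as tf

    # Toy linear-regression loss, just so the optimizers have something to minimize.
    x = tf.placeholder(tf.float32, [None, 10])
    y = tf.placeholder(tf.float32, [None, 1])
    w = tf.Variable(tf.random_normal([10, 1], stddev=0.01))
    loss = tf.reduce_mean(tf.square(tf.matmul(x, w) - y))

    # Pick one; the learning rates here are placeholders.
    train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
    # train_op = tf.train.MomentumOptimizer(0.01, momentum=0.9).minimize(loss)
    # train_op = tf.train.AdadeltaOptimizer().minimize(loss)
    # train_op = tf.train.AdamOptimizer(0.001).minimize(loss)
    # train_op = tf.train.RMSPropOptimizer(0.001).minimize(loss)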
 
Some tricky aspects of CNNs:
Summary:
1. Weight initialization suited to ReLU: w = np.random.randn(n) * sqrt(2.0/n)  # current recommendation (see the sketch after this list)
2. LR: In practice, if you see that you have stopped making progress on the validation set, divide the LR by 2 (or by 5) and keep going, which might give you a surprise. Verified effective in my own experiments; also sketched below.
3. On the learning rate:
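A small numpy sketch of points 1 and 2 above, with hypothetical layer sizes and a stand-in train_one_epoch that merely simulates a plateauing validation loss so the loop runs as written:

    import numpy as np

    # 1. He initialization for a ReLU layer (n = fan-in; sizes are hypothetical).
    n_in, n_out = 256, 128
    w = np.random.randn(n_in, n_out) * np.sqrt(2.0 / n_in)

    # 2. Halve the LR whenever the validation loss stops improving.
    def train_one_epoch(lr, epoch):
        # Stand-in: a noisy, slowly plateauing validation loss.
        return 1.0 / (1 + epoch) + np.random.uniform(0, 0.05)

    lr, best_val = 0.1, float('inf')
    for epoch in range(100):
        val_loss = train_one_epoch(lr, epoch)
        if val_loss < best_val - 1e-4:   # still making progress
            best_val = val_loss
        else:
            lr /= 2.0                    # or divide by 5, as suggested above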
RNN learning:
 
FCN: http://blog.csdn.net/happyer88/article/details/47205839 (notes on Fully Convolutional Networks for Semantic Segmentation)
Advantages:
1. Training an end-to-end FCN model exploits the strong learning capacity of convolutional networks and yields fairly accurate results; earlier CNN-based methods all had to process the input or the output in some way before arriving at the final result.

2. Existing CNNs such as AlexNet, VGG16, or GoogLeNet can be used directly; you only need to append upsampling at the end, and the parameters are still learned through the CNN's usual backpropagation: "whole image training is effective and efficient."

3. The input image size is unconstrained, and the images in the dataset need not all share the same size; at the final upsampling step the feature map is simply scaled back up by the same factor the original image was subsampled by, so the network always outputs a dense prediction map of the same size as the input (see the sketch below).
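A minimal sketch of the idea in TensorFlow 1.x; a toy two-layer backbone stands in for AlexNet/VGG16, and all sizes are illustrative. The key points are the 1x1 scoring convolution and a transposed convolution that upsamples by the same factor the backbone subsampled by (for input sizes divisible by that factor, the output exactly matches the input size):

    import tensorflow as tf

    num_classes = 21  # e.g. PASCAL VOC

    # Input of arbitrary spatial size.
    images = tf.placeholder(tf.float32, [None, None, None, 3])

    # Toy "backbone" that subsamples by a factor of 4.
    x = tf.layers.conv2d(images, 32, 3, strides=2, padding='same',
                         activation=tf.nn.relu)   # /2
    x = tf.layers.conv2d(x, 64, 3, strides=2, padding='same',
                         activation=tf.nn.relu)   # /4

    # 1x1 convolution gives coarse per-class scores.
    score = tf.layers.conv2d(x, num_classes, 1)

    # Transposed convolution upsamples by the same factor (4); its weights
    # are learned by ordinary backpropagation, yielding a dense prediction
    # map the size of the original image.
    logits = tf.layers.conv2d_transpose(score, num_classes, kernel_size=8,
                                        strides=4, padding='same')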
 
Good articles for understanding DL details:
 
 
If you find that all the final output values are identical, possible fixes include the following:
Hey, I had a similar issue with my own (hand-coded) CNN trying to get some results with the CIFAR-10 dataset. What I found was that I had forgotten to normalize the input images to some range that made sense with my weight scales. Try something like X = X / max(abs(X)) to put values between -1 and 1.
Another possibility is your weight initialization is causing many ReLU units to die. I usually initialize all weights with a small number times a normal Gaussian distribution. For wx + b, b being the biases, you can try that plus a small positive constant, e.g. b = weight_scale*random.randn(num, 1) + 0.1
Another idea — your sigmoid unit might be squashing your responses too much. They’re fairly uncommon in CNNs from what I understand, maybe just stick to ReLUs.
Last point — try testing on a small training batch (say 10–20 images) and just train until you overfit with 100% accuracy. That’s one way of knowing that your network is capable of doing something. I think these smaller tests are very important before investing hours or days into proper training, which is what these networks often require.
My eventual fix was adding batch normalization, though I never pinned down the exact cause.
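A numpy sketch of the first two suggestions quoted above (CIFAR-10-like shapes are assumed; weight_scale and the layer width are hypothetical):

    import numpy as np

    # Fake batch of raw CIFAR-10-style pixels in [0, 255].
    X = np.random.uniform(0, 255, size=(128, 3072)).astype(np.float32)

    # Zero-center, then scale so values fall in [-1, 1],
    # matching the scale of the weights.
    X -= X.mean()
    X /= np.max(np.abs(X))

    # Small Gaussian weights plus a small positive bias so ReLU units
    # start in their active region instead of dying at initialization.
    weight_scale, num = 1e-2, 100       # num = layer width
    W = weight_scale * np.random.randn(num, 3072)
    b = weight_scale * np.random.randn(num, 1) + 0.1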
 
 
GAN resources:
