Building a Neural Network with TFLearn
Building the network
TFLearn lets you build the network by defining the layers.
Input layer
For the input layer, you just need to tell it how many units you have. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None leaves the batch size unspecified, so it can be determined when you train the network.
The number of inputs to your network needs to match the size of your data. For this example, we're using 10,000-element vectors to encode our input data, so we need 10,000 input units.
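For instance, with the 10,000-element vectors used in this example, the input layer would be created like this (a minimal sketch; the variable name net just carries the network through the rest of the calls):

import tflearn

# 10,000 input units to match the 10,000-element encoded vectors;
# None leaves the batch size unspecified.
net = tflearn.input_data([None, 10000])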
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='relu')
This adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeatedly calling net = tflearn.fully_connected(net, n_units).
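As a sketch, stacking two hidden layers might look like the following (the unit counts 200 and 25 are arbitrary choices for illustration, not values from this example):

# First hidden layer: 200 ReLU units fed by the input layer
net = tflearn.fully_connected(net, 200, activation='relu')
# Second hidden layer: 25 ReLU units fed by the first hidden layer
net = tflearn.fully_connected(net, 25, activation='relu')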
Output layer
The last layer you add is used as the output layer. Therefore, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax.
net = tflearn.fully_connected(net, 2, activation='softmax')
Training
To set how you train the network, use
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords:
- optimizer sets the training method, here stochastic gradient descent
- learning_rate is the learning rate
- loss determines how the network error is calculated; in this example, categorical cross-entropy
Finally, you put all this together to create the model with tflearn.DNN(net). So it ends up looking something like:
net = tflearn.input_data([None, 10]) # Input
net = tflearn.fully_connected(net, 5, activation='relu') # Hidden
net = tflearn.fully_connected(net, 2, activation='softmax') # Output
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
model = tflearn.DNN(net)
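Once the model is created, training and prediction go through the DNN object. A minimal sketch, assuming trainX/trainY and testX are placeholders for your encoded inputs, one-hot targets, and test data (they are not defined anywhere above):

# Train for 10 epochs, holding out 10% of the data for validation
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=10)

# Predict class probabilities for new inputs
predictions = model.predict(testX)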