Building a 1D-CNN model with Keras
Anyone who has touched deep learning has surely heard of Keras. To make learning easier, this post walks through in detail how to build a 1D-CNN deep-learning model with the Keras library.
from keras.datasets import imdb
from keras.models import Sequential
from keras.layers import Embedding, Conv1D, MaxPooling1D, GlobalMaxPooling1D, Dense, Reshape
from keras.optimizers import RMSprop
import warnings
warnings.filterwarnings("ignore")
Approach 1: the Sequential API
import numpy as np

max_features = 10000  # vocabulary size for the Embedding layer (matches the 5,000,000 embedding parameters in the summary below)
x_train = np.random.randint(100, size=(1200, 100))  # 1200 random "sequences" of length 100
y_train = np.random.randint(100, size=(1200, 1))    # random integer labels (demo data only)

model = Sequential()
model.add(Embedding(max_features, 500, input_length=len(x_train[1])))  # input: (1200, 100); output for a batch of 10: (10, 100, 500)
model.add(Conv1D(32, 7, activation='relu'))
model.add(MaxPooling1D(5))
model.add(Conv1D(32, 7, activation='relu'))
model.add(GlobalMaxPooling1D())
model.add(Dense(1))
model.summary()
model.compile(optimizer=RMSprop(lr=1e-4),
              loss='binary_crossentropy',
              metrics=['acc'])
history = model.fit(x_train, y_train,
                    epochs=10,
                    batch_size=10,
                    validation_split=0.2)
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_1 (Embedding) (None, 100, 500) 5000000
_________________________________________________________________
conv1d_1 (Conv1D) (None, 94, 32) 112032
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 (None, 18, 32) 0
_________________________________________________________________
conv1d_2 (Conv1D) (None, 12, 32) 7200
_________________________________________________________________
global_max_pooling1d_1 (Glob (None, 32) 0
_________________________________________________________________
dense_1 (Dense) (None, 1) 33
=================================================================
Total params: 5,119,265
Trainable params: 5,119,265
Non-trainable params: 0
_________________________________________________________________
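The shapes and parameter counts in this summary can be checked by hand. A quick sketch of the arithmetic, using only the standard formulas for Embedding and Conv1D layers:

# Embedding: vocabulary_size * embedding_dim
embedding_params = 10000 * 500     # 5,000,000
# Conv1D: kernel_size * input_channels * filters + filters (bias)
conv1_params = 7 * 500 * 32 + 32   # 112,032
conv2_params = 7 * 32 * 32 + 32    # 7,200
# Dense: inputs * units + bias
dense_params = 32 * 1 + 1          # 33

# Sequence lengths: a Conv1D with kernel size 7 and no padding shortens the
# sequence by 6; MaxPooling1D(5) divides the length by 5 (floor).
len_after_conv1 = 100 - 7 + 1      # 94
len_after_pool  = 94 // 5          # 18
len_after_conv2 = 18 - 7 + 1       # 12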
Train on 960 samples, validate on 240 samples
Epoch 1/10
960/960 [==============================] - 17s 17ms/step - loss: -108.8848 - acc: 0.0063 - val_loss: -762.5254 - val_acc: 0.0042
Epoch 2/10
960/960 [==============================] - 2s 2ms/step - loss: -748.9009 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
Epoch 3/10
960/960 [==============================] - 2s 2ms/step - loss: -748.9009 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
Epoch 4/10
960/960 [==============================] - 2s 2ms/step - loss: -748.9009 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
Epoch 5/10
960/960 [==============================] - 2s 2ms/step - loss: -748.9009 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
Epoch 6/10
960/960 [==============================] - 2s 2ms/step - loss: -748.9009 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
Epoch 7/10
960/960 [==============================] - 2s 2ms/step - loss: -748.9009 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
Epoch 8/10
960/960 [==============================] - 2s 2ms/step - loss: -748.9009 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
Epoch 9/10
960/960 [==============================] - 2s 2ms/step - loss: -748.9009 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
Epoch 10/10
960/960 [==============================] - 2s 2ms/step - loss: -748.9009 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
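The negative, collapsing loss above is expected: binary_crossentropy assumes targets in [0, 1], but y_train holds random integers up to 99 and the final Dense(1) has no sigmoid, so the reported numbers are meaningless. A minimal sketch of how the same architecture would be set up for a genuine binary task (assuming binary labels and a sigmoid output; this is not part of the original run):

y_train_binary = np.random.randint(2, size=(1200, 1))   # labels in {0, 1}

binary_model = Sequential()
binary_model.add(Embedding(max_features, 500, input_length=100))
binary_model.add(Conv1D(32, 7, activation='relu'))
binary_model.add(MaxPooling1D(5))
binary_model.add(Conv1D(32, 7, activation='relu'))
binary_model.add(GlobalMaxPooling1D())
binary_model.add(Dense(1, activation='sigmoid'))          # sigmoid keeps predictions in [0, 1]
binary_model.compile(optimizer=RMSprop(lr=1e-4),
                     loss='binary_crossentropy',
                     metrics=['acc'])
binary_model.fit(x_train, y_train_binary, epochs=2, batch_size=10, validation_split=0.2)

With binary targets and a sigmoid output the loss stays positive and decreases as expected.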
Approach 2: the functional API, wrapped in a class
from keras import Input, Model
from keras.layers import Dense, Conv1D, Embedding, MaxPool1D

class oneDCNN:
    def __init__(self, maxlen, max_features, embedding_dims,
                 last_activation='softmax'):
        self.maxlen = maxlen
        self.max_features = max_features
        self.embedding_dims = embedding_dims
        # self.class_num = class_num
        self.last_activation = last_activation

    def get_model(self):
        input = Input((self.maxlen,))
        embedding = Embedding(self.max_features, self.embedding_dims, input_length=self.maxlen)(input)
        c1 = Conv1D(32, 7, activation='relu')(embedding)
        MP1 = MaxPool1D(5)(c1)
        c2 = Conv1D(32, 7, activation='relu')(MP1)
        x = GlobalMaxPooling1D()(c2)
        output = Dense(1)(x)
        model = Model(inputs=input, outputs=output)
        return model

model = oneDCNN(maxlen=100, max_features=100, embedding_dims=500).get_model()
model.summary()
model.compile(optimizer=RMSprop(lr=1e-4),
              loss='binary_crossentropy',
              metrics=['acc'])
history = model.fit(x_train, y_train,
                    epochs=10,
                    batch_size=10,
                    validation_split=0.2)
Model: "model_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_5 (InputLayer) (None, 100) 0
_________________________________________________________________
embedding_3 (Embedding) (None, 100, 500) 50000
_________________________________________________________________
conv1d_5 (Conv1D) (None, 94, 32) 112032
_________________________________________________________________
max_pooling1d_3 (MaxPooling1 (None, 18, 32) 0
_________________________________________________________________
conv1d_6 (Conv1D) (None, 12, 32) 7200
_________________________________________________________________
global_max_pooling1d_4 (Glob (None, 32) 0
_________________________________________________________________
dense_3 (Dense) (None, 1) 33
=================================================================
Total params: 169,265
Trainable params: 169,265
Non-trainable params: 0
_________________________________________________________________
Train on 960 samples, validate on 240 samples
Epoch 1/10
960/960 [==============================] - 1s 964us/step - loss: 89.8610 - acc: 0.0094 - val_loss: -54.5870 - val_acc: 0.0042
Epoch 2/10
960/960 [==============================] - 1s 732us/step - loss: -682.0644 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
Epoch 3/10
960/960 [==============================] - 1s 706us/step - loss: -748.9009 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
Epoch 4/10
960/960 [==============================] - 1s 676us/step - loss: -748.9009 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
Epoch 5/10
960/960 [==============================] - 1s 666us/step - loss: -748.9009 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
Epoch 6/10
960/960 [==============================] - 1s 677us/step - loss: -748.9009 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
Epoch 7/10
960/960 [==============================] - 1s 728us/step - loss: -748.9009 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
Epoch 8/10
960/960 [==============================] - 1s 694us/step - loss: -748.9009 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
Epoch 9/10
960/960 [==============================] - 1s 721us/step - loss: -748.9009 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
Epoch 10/10
960/960 [==============================] - 1s 729us/step - loss: -748.9009 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
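Once fit() returns, either version of the model can be used in the same way. A short usage sketch for inference and persistence (the file name is only an example):

# Predict on a few new sequences of the same length (100 integer tokens each)
x_new = np.random.randint(100, size=(5, 100))
preds = model.predict(x_new)
print(preds.shape)   # (5, 1) -- one raw score per sequence

# Save the trained model (architecture + weights) and reload it later
model.save('one_d_cnn.h5')
from keras.models import load_model
restored = load_model('one_d_cnn.h5')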