Building a 1D-CNN Model with Keras
Anyone who has dipped into deep learning has heard of Keras. To make it easier to learn, this post walks through in detail how to build a 1D-CNN deep-learning model with the Keras library.
from keras.datasets import imdb
from keras.models import Sequential
from keras.layers import Embedding, Conv1D, MaxPooling1D, GlobalMaxPooling1D, Dense,Reshape
from keras.optimizers import RMSprop
import warnings
warnings.filterwarnings("ignore")
Pattern 1: the Sequential API
import numpy as np

max_features = 10000  # vocabulary size for the Embedding layer (implied by the 5,000,000 embedding params below)
x_train = np.random.randint(100, size=(1200, 100))
y_train = np.random.randint(100, size=(1200, 1))  # note: labels range over 0..99, not {0, 1} — see the loss values below

model = Sequential()
model.add(Embedding(max_features, 500, input_length=len(x_train[1])))  # input (1200, 100), output (1200, 100, 500)
model.add(Conv1D(32, 7, activation = 'relu'))
model.add(MaxPooling1D(5))
model.add(Conv1D(32, 7, activation = 'relu'))
model.add(GlobalMaxPooling1D())
model.add(Dense(1))
model.summary()
model.compile(optimizer=RMSprop(lr=1e-4),
              loss='binary_crossentropy',
              metrics=['acc'])
history = model.fit(x_train, y_train,
                    epochs=10,
                    batch_size=10,
                    validation_split=0.2)
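Before looking at `model.summary()`, note that the timestep dimension shrinks layer by layer: with `'valid'` padding and stride 1, each output length can be computed by hand. A minimal sketch of that arithmetic (plain Python, matching the layer sizes used above):

```python
def conv1d_len(n, kernel):
    """Output length of Conv1D with 'valid' padding and stride 1."""
    return n - kernel + 1

def maxpool1d_len(n, pool):
    """Output length of MaxPooling1D with the default stride (= pool size)."""
    return n // pool

n = 100                   # input_length of the Embedding layer
n = conv1d_len(n, 7)      # Conv1D(32, 7)   -> 94
n = maxpool1d_len(n, 5)   # MaxPooling1D(5) -> 18
n = conv1d_len(n, 7)      # Conv1D(32, 7)   -> 12
print(n)  # 12
```

These lengths (94, 18, 12) are exactly the middle dimensions reported in the summary below.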
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_1 (Embedding) (None, 100, 500) 5000000
_________________________________________________________________
conv1d_1 (Conv1D) (None, 94, 32) 112032
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 (None, 18, 32) 0
_________________________________________________________________
conv1d_2 (Conv1D) (None, 12, 32) 7200
_________________________________________________________________
global_max_pooling1d_1 (Glob (None, 32) 0
_________________________________________________________________
dense_1 (Dense) (None, 1) 33
=================================================================
Total params: 5,119,265
Trainable params: 5,119,265
Non-trainable params: 0
_________________________________________________________________
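The parameter counts in the summary can likewise be reproduced by hand: an Embedding layer has vocab_size × embedding_dim weights, a Conv1D layer has kernel_size × in_channels × filters weights plus one bias per filter, and a Dense layer has in_features + 1 per unit. A quick check against the numbers above:

```python
embedding = 10000 * 500      # max_features * embedding_dims
conv1 = 7 * 500 * 32 + 32    # kernel * in_channels * filters + biases
conv2 = 7 * 32 * 32 + 32
dense = 32 * 1 + 1
total = embedding + conv1 + conv2 + dense
print(conv1, conv2, dense, total)  # 112032 7200 33 5119265
```

All four match the summary, including the total of 5,119,265.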
Train on 960 samples, validate on 240 samples
Epoch 1/10
960/960 [==============================] - 17s 17ms/step - loss: -108.8848 - acc: 0.0063 - val_loss: -762.5254 - val_acc: 0.0042
Epoch 2/10
960/960 [==============================] - 2s 2ms/step - loss: -748.9009 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
Epoch 3/10
960/960 [==============================] - 2s 2ms/step - loss: -748.9009 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
Epoch 4/10
960/960 [==============================] - 2s 2ms/step - loss: -748.9009 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
Epoch 5/10
960/960 [==============================] - 2s 2ms/step - loss: -748.9009 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
Epoch 6/10
960/960 [==============================] - 2s 2ms/step - loss: -748.9009 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
Epoch 7/10
960/960 [==============================] - 2s 2ms/step - loss: -748.9009 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
Epoch 8/10
960/960 [==============================] - 2s 2ms/step - loss: -748.9009 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
Epoch 9/10
960/960 [==============================] - 2s 2ms/step - loss: -748.9009 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
Epoch 10/10
960/960 [==============================] - 2s 2ms/step - loss: -748.9009 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
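The loss going negative (and accuracy staying near zero) is expected with this setup: `binary_crossentropy` assumes targets in [0, 1], but `y_train` was drawn from 0..99, and the final `Dense(1)` has no sigmoid. A sketch of choices that keep the loss well-defined — this is a hypothetical fix for illustration, not part of the original post:

```python
import numpy as np

# Binary targets in {0, 1} keep binary_crossentropy non-negative.
y_train_binary = np.random.randint(2, size=(1200, 1))

# The final layer should also squash its output into (0, 1), e.g.:
#   model.add(Dense(1, activation='sigmoid'))

print(y_train_binary.min(), y_train_binary.max())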
Pattern 2: the functional API, wrapped in a class
from keras import Input, Model
from keras.layers import Dense, Conv1D, Embedding, MaxPool1D

class oneDCNN:
    def __init__(self, maxlen, max_features, embedding_dims,
                 last_activation='softmax'):
        self.maxlen = maxlen
        self.max_features = max_features
        self.embedding_dims = embedding_dims
        # self.class_num = class_num
        self.last_activation = last_activation  # note: stored but not used by get_model() below

    def get_model(self):
        input = Input((self.maxlen,))
        embedding = Embedding(self.max_features, self.embedding_dims, input_length=self.maxlen)(input)
        c1 = Conv1D(32, 7, activation='relu')(embedding)
        MP1 = MaxPool1D(5)(c1)
        c2 = Conv1D(32, 7, activation='relu')(MP1)
        x = GlobalMaxPooling1D()(c2)
        output = Dense(1)(x)
        model = Model(inputs=input, outputs=output)
        return model
model = oneDCNN(maxlen=100, max_features=100, embedding_dims=500).get_model()
model.summary()
model.compile(optimizer=RMSprop(lr=1e-4),
              loss='binary_crossentropy',
              metrics=['acc'])
history = model.fit(x_train, y_train,
                    epochs=10,
                    batch_size=10,
                    validation_split=0.2)
Model: "model_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_5 (InputLayer) (None, 100) 0
_________________________________________________________________
embedding_3 (Embedding) (None, 100, 500) 50000
_________________________________________________________________
conv1d_5 (Conv1D) (None, 94, 32) 112032
_________________________________________________________________
max_pooling1d_3 (MaxPooling1 (None, 18, 32) 0
_________________________________________________________________
conv1d_6 (Conv1D) (None, 12, 32) 7200
_________________________________________________________________
global_max_pooling1d_4 (Glob (None, 32) 0
_________________________________________________________________
dense_3 (Dense) (None, 1) 33
=================================================================
Total params: 169,265
Trainable params: 169,265
Non-trainable params: 0
_________________________________________________________________
Train on 960 samples, validate on 240 samples
Epoch 1/10
960/960 [==============================] - 1s 964us/step - loss: 89.8610 - acc: 0.0094 - val_loss: -54.5870 - val_acc: 0.0042
Epoch 2/10
960/960 [==============================] - 1s 732us/step - loss: -682.0644 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
Epoch 3/10
960/960 [==============================] - 1s 706us/step - loss: -748.9009 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
Epoch 4/10
960/960 [==============================] - 1s 676us/step - loss: -748.9009 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
Epoch 5/10
960/960 [==============================] - 1s 666us/step - loss: -748.9009 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
Epoch 6/10
960/960 [==============================] - 1s 677us/step - loss: -748.9009 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
Epoch 7/10
960/960 [==============================] - 1s 728us/step - loss: -748.9009 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
Epoch 8/10
960/960 [==============================] - 1s 694us/step - loss: -748.9009 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
Epoch 9/10
960/960 [==============================] - 1s 721us/step - loss: -748.9009 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
Epoch 10/10
960/960 [==============================] - 1s 729us/step - loss: -748.9009 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
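The two summaries differ only in the Embedding layer: the first implies max_features was 10000 (5,000,000 / 500), while Pattern 2 passes max_features=100; every other layer contributes the same parameters. The totals check out:

```python
shared = 112032 + 7200 + 33         # Conv1D + Conv1D + Dense, identical in both models
pattern1_total = 10000 * 500 + shared
pattern2_total = 100 * 500 + shared
print(pattern1_total, pattern2_total)  # 5119265 169265
```

This is why Pattern 2's model is some thirty times smaller despite an identical architecture: the vocabulary size dominates the parameter count.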