Anyone who has worked with deep learning has surely heard of Keras. To make it easier to learn, this post walks through in detail how to build a 1D-CNN deep-learning model with the Keras library.

from keras.datasets import imdb
from keras.models import Sequential
from keras.layers import Embedding, Conv1D, MaxPooling1D, GlobalMaxPooling1D, Dense, Reshape
from keras.optimizers import RMSprop
import warnings
warnings.filterwarnings("ignore")

Pattern 1: the Sequential API

import numpy as np

max_features = 10000  # vocabulary size; the Embedding param count below (5,000,000 = 10000 * 500) implies this value
x_train = np.random.randint(100, size=(1200, 100))
y_train = np.random.randint(100, size=(1200, 1))  # note: binary_crossentropy expects 0/1 labels, hence the negative losses in the log below

model = Sequential()
model.add(Embedding(max_features, 500, input_length=len(x_train[1])))  # input (1200, 100), output (1200, 100, 500)
model.add(Conv1D(32, 7, activation='relu'))
model.add(MaxPooling1D(5))
model.add(Conv1D(32, 7, activation='relu'))
model.add(GlobalMaxPooling1D())
model.add(Dense(1))
model.summary()
model.compile(optimizer=RMSprop(lr=1e-4),
              loss='binary_crossentropy',
              metrics=['acc'])
history = model.fit(x_train, y_train,
                    epochs=10,
                    batch_size=10,
                    validation_split=0.2)
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_1 (Embedding) (None, 100, 500) 5000000
_________________________________________________________________
conv1d_1 (Conv1D) (None, 94, 32) 112032
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 (None, 18, 32) 0
_________________________________________________________________
conv1d_2 (Conv1D) (None, 12, 32) 7200
_________________________________________________________________
global_max_pooling1d_1 (Glob (None, 32) 0
_________________________________________________________________
dense_1 (Dense) (None, 1) 33
=================================================================
Total params: 5,119,265
Trainable params: 5,119,265
Non-trainable params: 0
_________________________________________________________________
Train on 960 samples, validate on 240 samples
Epoch 1/10
960/960 [==============================] - 17s 17ms/step - loss: -108.8848 - acc: 0.0063 - val_loss: -762.5254 - val_acc: 0.0042
Epoch 2/10
960/960 [==============================] - 2s 2ms/step - loss: -748.9009 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
Epoch 3/10
960/960 [==============================] - 2s 2ms/step - loss: -748.9009 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
Epoch 4/10
960/960 [==============================] - 2s 2ms/step - loss: -748.9009 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
Epoch 5/10
960/960 [==============================] - 2s 2ms/step - loss: -748.9009 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
Epoch 6/10
960/960 [==============================] - 2s 2ms/step - loss: -748.9009 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
Epoch 7/10
960/960 [==============================] - 2s 2ms/step - loss: -748.9009 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
Epoch 8/10
960/960 [==============================] - 2s 2ms/step - loss: -748.9009 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
Epoch 9/10
960/960 [==============================] - 2s 2ms/step - loss: -748.9009 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
Epoch 10/10
960/960 [==============================] - 2s 2ms/step - loss: -748.9009 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
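The output shapes and parameter counts in the summary above can be verified by hand: a Conv1D with kernel size k and 'valid' padding shortens a length-L sequence to L - k + 1, and MaxPooling1D with pool size 5 floor-divides the length by 5. A quick sketch of that arithmetic (plain Python, no Keras required):

```python
# Trace the sequence length through the model and check parameter counts
# against the summary printed above.

def conv1d_len(L, kernel):   # 'valid' padding: output length = L - k + 1
    return L - kernel + 1

def pool1d_len(L, pool):     # non-overlapping pooling: floor division
    return L // pool

L = 100                      # input_length of the Embedding layer
L = conv1d_len(L, 7)         # Conv1D(32, 7)    -> 94
L = pool1d_len(L, 5)         # MaxPooling1D(5)  -> 18
L = conv1d_len(L, 7)         # Conv1D(32, 7)    -> 12
print(L)                     # 12, matching conv1d_2 in the summary

# Parameter counts: Embedding = vocab * dim; Conv1D = filters*(k*in_channels) + filters
emb_params   = 10000 * 500           # 5,000,000
conv1_params = 32 * (7 * 500) + 32   # 112,032
conv2_params = 32 * (7 * 32) + 32    # 7,200
dense_params = 32 * 1 + 1            # 33
print(emb_params + conv1_params + conv2_params + dense_params)  # 5,119,265
```

The same two formulas account for every row of the summary, so they are a handy sanity check before training anything.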

Pattern 2: the functional API

from keras import Input, Model
from keras.layers import Dense, Conv1D, Embedding, MaxPool1D, GlobalMaxPooling1D

class oneDCNN:
    def __init__(self, maxlen, max_features, embedding_dims,
                 last_activation='softmax'):
        self.maxlen = maxlen
        self.max_features = max_features
        self.embedding_dims = embedding_dims
        # self.class_num = class_num
        self.last_activation = last_activation

    def get_model(self):
        input = Input((self.maxlen,))
        embedding = Embedding(self.max_features, self.embedding_dims, input_length=self.maxlen)(input)
        c1 = Conv1D(32, 7, activation='relu')(embedding)
        MP1 = MaxPool1D(5)(c1)
        c2 = Conv1D(32, 7, activation='relu')(MP1)
        x = GlobalMaxPooling1D()(c2)
        output = Dense(1)(x)
        model = Model(inputs=input, outputs=output)
        return model

model = oneDCNN(maxlen=100, max_features=100, embedding_dims=500).get_model()

model.summary()
model.compile(optimizer=RMSprop(lr=1e-4),
              loss='binary_crossentropy',
              metrics=['acc'])
history = model.fit(x_train, y_train,
                    epochs=10,
                    batch_size=10,
                    validation_split=0.2)
Model: "model_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_5 (InputLayer) (None, 100) 0
_________________________________________________________________
embedding_3 (Embedding) (None, 100, 500) 50000
_________________________________________________________________
conv1d_5 (Conv1D) (None, 94, 32) 112032
_________________________________________________________________
max_pooling1d_3 (MaxPooling1 (None, 18, 32) 0
_________________________________________________________________
conv1d_6 (Conv1D) (None, 12, 32) 7200
_________________________________________________________________
global_max_pooling1d_4 (Glob (None, 32) 0
_________________________________________________________________
dense_3 (Dense) (None, 1) 33
=================================================================
Total params: 169,265
Trainable params: 169,265
Non-trainable params: 0
_________________________________________________________________
Train on 960 samples, validate on 240 samples
Epoch 1/10
960/960 [==============================] - 1s 964us/step - loss: 89.8610 - acc: 0.0094 - val_loss: -54.5870 - val_acc: 0.0042
Epoch 2/10
960/960 [==============================] - 1s 732us/step - loss: -682.0644 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
Epoch 3/10
960/960 [==============================] - 1s 706us/step - loss: -748.9009 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
Epoch 4/10
960/960 [==============================] - 1s 676us/step - loss: -748.9009 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
Epoch 5/10
960/960 [==============================] - 1s 666us/step - loss: -748.9009 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
Epoch 6/10
960/960 [==============================] - 1s 677us/step - loss: -748.9009 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
Epoch 7/10
960/960 [==============================] - 1s 728us/step - loss: -748.9009 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
Epoch 8/10
960/960 [==============================] - 1s 694us/step - loss: -748.9009 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
Epoch 9/10
960/960 [==============================] - 1s 721us/step - loss: -748.9009 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
Epoch 10/10
960/960 [==============================] - 1s 729us/step - loss: -748.9009 - acc: 0.0052 - val_loss: -762.5254 - val_acc: 0.0042
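One thing worth flagging in both training logs above: the loss goes negative and accuracy stays near zero. That happens because binary_crossentropy assumes labels in {0, 1}, while y_train here holds random integers in [0, 100). For a label y > 1 the cross-entropy can be made arbitrarily negative by pushing the prediction toward 1, which is exactly what the optimizer does. A minimal NumPy sketch (clipping predictions the way Keras does internally):

```python
import numpy as np

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    # Same formula Keras uses, with predictions clipped away from 0 and 1.
    p = np.clip(y_pred, eps, 1 - eps)
    return -(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

# A valid 0/1 label: loss is non-negative, small when the prediction is right.
print(binary_crossentropy(1.0, 0.99))        # ~0.01

# An out-of-range label like the random integers above: driving the
# prediction toward 1 makes the loss strongly negative, matching the logs.
print(binary_crossentropy(57.0, 1 - 1e-7))   # large negative value
```

Replacing y_train with np.random.randint(2, size=(1200, 1)) and giving the final Dense layer activation='sigmoid' would make the toy problem well posed, with labels and predictions both in [0, 1] and a loss that behaves normally.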
