1. Code Implementation

 # -*- coding: utf-8 -*-
"""
Created on Sat Feb 9 15:33:39 2019

@author: zhen
"""
from keras.applications.vgg16 import VGG16
from keras.layers import Flatten
from keras.layers import Dense
from keras.layers import Dropout
from keras.models import Model
from keras.optimizers import SGD
from keras.datasets import mnist
import cv2
import numpy as np

# The default 224x224 input requires a lot of memory (at least 24 GB),
# so use the minimum resolution here (48x48) to lower the requirement
model_vgg = VGG16(include_top=False, weights='imagenet', input_shape=(48, 48, 3))
for layer in model_vgg.layers:
    layer.trainable = False  # freeze the pretrained convolutional base
model = Flatten(name='flatten')(model_vgg.output)  # flatten
model = Dense(4096, activation='relu', name='fc1')(model)
model = Dense(4096, activation='relu', name='fc2')(model)
model = Dropout(0.5)(model)
model = Dense(10, activation='softmax')(model)
model_vgg_mnist = Model(inputs=model_vgg.input, outputs=model, name='vgg16')
model_vgg_mnist.summary()

# The configuration originally recommended for VGGNet (224x224 input)
model_vgg = VGG16(include_top=False, weights='imagenet', input_shape=(224, 224, 3))
for layer in model_vgg.layers:
    layer.trainable = False
model = Flatten()(model_vgg.output)
model = Dense(4096, activation='relu', name='fc1')(model)
model = Dense(4096, activation='relu', name='fc2')(model)
model = Dropout(0.5)(model)
model = Dense(10, activation='softmax', name='prediction')(model)
model_vgg_mnist_pretrain = Model(model_vgg.input, model, name='vgg16_pretrain')
model_vgg_mnist_pretrain.summary()

sgd = SGD(lr=0.05, decay=1e-5)  # stochastic gradient descent
model_vgg_mnist.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])

(x_train, y_train), (x_test, y_test) = mnist.load_data("../test_data_home")
x_train, y_train = x_train[:1000], y_train[:1000]
x_test, y_test = x_test[:1000], y_test[:1000]

# Resize to 48x48 and convert single-channel grayscale to 3-channel RGB
x_train = [cv2.cvtColor(cv2.resize(i, (48, 48)), cv2.COLOR_GRAY2RGB) for i in x_train]
x_train = np.concatenate([arr[np.newaxis] for arr in x_train]).astype('float32')
x_test = [cv2.cvtColor(cv2.resize(i, (48, 48)), cv2.COLOR_GRAY2RGB) for i in x_test]
x_test = np.concatenate([arr[np.newaxis] for arr in x_test]).astype('float32')
print(x_train.shape)
print(x_test.shape)

x_train = x_train / 255
x_test = x_test / 255

def tran_y(y):
    # One-hot encode a single label
    y_ohe = np.zeros(10)
    y_ohe[y] = 1
    return y_ohe

y_train_ohe = np.array([tran_y(y_train[i]) for i in range(len(y_train))])
y_test_ohe = np.array([tran_y(y_test[i]) for i in range(len(y_test))])

model_vgg_mnist.fit(x_train, y_train_ohe, validation_data=(x_test, y_test_ohe),
                    epochs=20, batch_size=100)
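The manual `tran_y` loop used above can also be expressed as a single NumPy operation. A minimal sketch (the helper name `one_hot` is ours, not part of the original code): indexing the identity matrix with the label array produces one one-hot row per label.

```python
import numpy as np

def one_hot(labels, num_classes=10):
    # Row i of the identity matrix is the one-hot vector for class i,
    # so fancy-indexing with the label array encodes every label at once.
    return np.eye(num_classes, dtype='float32')[labels]

y = np.array([3, 0, 9])
print(one_hot(y).shape)  # (3, 10)
```

This gives the same result as `np.array([tran_y(y[i]) for i in range(len(y))])` but avoids the Python-level loop.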

2. Results

_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_9 (InputLayer) (None, 48, 48, 3) 0
_________________________________________________________________
block1_conv1 (Conv2D) (None, 48, 48, 64) 1792
_________________________________________________________________
block1_conv2 (Conv2D) (None, 48, 48, 64) 36928
_________________________________________________________________
block1_pool (MaxPooling2D) (None, 24, 24, 64) 0
_________________________________________________________________
block2_conv1 (Conv2D) (None, 24, 24, 128) 73856
_________________________________________________________________
block2_conv2 (Conv2D) (None, 24, 24, 128) 147584
_________________________________________________________________
block2_pool (MaxPooling2D) (None, 12, 12, 128) 0
_________________________________________________________________
block3_conv1 (Conv2D) (None, 12, 12, 256) 295168
_________________________________________________________________
block3_conv2 (Conv2D) (None, 12, 12, 256) 590080
_________________________________________________________________
block3_conv3 (Conv2D) (None, 12, 12, 256) 590080
_________________________________________________________________
block3_pool (MaxPooling2D) (None, 6, 6, 256) 0
_________________________________________________________________
block4_conv1 (Conv2D) (None, 6, 6, 512) 1180160
_________________________________________________________________
block4_conv2 (Conv2D) (None, 6, 6, 512) 2359808
_________________________________________________________________
block4_conv3 (Conv2D) (None, 6, 6, 512) 2359808
_________________________________________________________________
block4_pool (MaxPooling2D) (None, 3, 3, 512) 0
_________________________________________________________________
block5_conv1 (Conv2D) (None, 3, 3, 512) 2359808
_________________________________________________________________
block5_conv2 (Conv2D) (None, 3, 3, 512) 2359808
_________________________________________________________________
block5_conv3 (Conv2D) (None, 3, 3, 512) 2359808
_________________________________________________________________
block5_pool (MaxPooling2D) (None, 1, 1, 512) 0
_________________________________________________________________
flatten (Flatten) (None, 512) 0
_________________________________________________________________
fc1 (Dense) (None, 4096) 2101248
_________________________________________________________________
fc2 (Dense) (None, 4096) 16781312
_________________________________________________________________
dropout_9 (Dropout) (None, 4096) 0
_________________________________________________________________
dense_5 (Dense) (None, 10) 40970
=================================================================
Total params: 33,638,218
Trainable params: 18,923,530
Non-trainable params: 14,714,688
_________________________________________________________________
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_10 (InputLayer) (None, 224, 224, 3) 0
_________________________________________________________________
block1_conv1 (Conv2D) (None, 224, 224, 64) 1792
_________________________________________________________________
block1_conv2 (Conv2D) (None, 224, 224, 64) 36928
_________________________________________________________________
block1_pool (MaxPooling2D) (None, 112, 112, 64) 0
_________________________________________________________________
block2_conv1 (Conv2D) (None, 112, 112, 128) 73856
_________________________________________________________________
block2_conv2 (Conv2D) (None, 112, 112, 128) 147584
_________________________________________________________________
block2_pool (MaxPooling2D) (None, 56, 56, 128) 0
_________________________________________________________________
block3_conv1 (Conv2D) (None, 56, 56, 256) 295168
_________________________________________________________________
block3_conv2 (Conv2D) (None, 56, 56, 256) 590080
_________________________________________________________________
block3_conv3 (Conv2D) (None, 56, 56, 256) 590080
_________________________________________________________________
block3_pool (MaxPooling2D) (None, 28, 28, 256) 0
_________________________________________________________________
block4_conv1 (Conv2D) (None, 28, 28, 512) 1180160
_________________________________________________________________
block4_conv2 (Conv2D) (None, 28, 28, 512) 2359808
_________________________________________________________________
block4_conv3 (Conv2D) (None, 28, 28, 512) 2359808
_________________________________________________________________
block4_pool (MaxPooling2D) (None, 14, 14, 512) 0
_________________________________________________________________
block5_conv1 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
block5_conv2 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
block5_conv3 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
block5_pool (MaxPooling2D) (None, 7, 7, 512) 0
_________________________________________________________________
flatten_5 (Flatten) (None, 25088) 0
_________________________________________________________________
fc1 (Dense) (None, 4096) 102764544
_________________________________________________________________
fc2 (Dense) (None, 4096) 16781312
_________________________________________________________________
dropout_10 (Dropout) (None, 4096) 0
_________________________________________________________________
prediction (Dense) (None, 10) 40970
=================================================================
Total params: 134,301,514
Trainable params: 119,586,826
Non-trainable params: 14,714,688
_________________________________________________________________
(1000, 48, 48, 3)
(1000, 48, 48, 3)
Train on 1000 samples, validate on 1000 samples
Epoch 1/20
1000/1000 [==============================] - 175s 175ms/step - loss: 2.1289 - acc: 0.2350 - val_loss: 1.9100 - val_acc: 0.4230
Epoch 2/20
1000/1000 [==============================] - 190s 190ms/step - loss: 1.7685 - acc: 0.4420 - val_loss: 1.6503 - val_acc: 0.4930
Epoch 3/20
1000/1000 [==============================] - 265s 265ms/step - loss: 1.5582 - acc: 0.5140 - val_loss: 1.5005 - val_acc: 0.5440
Epoch 4/20
1000/1000 [==============================] - 373s 373ms/step - loss: 1.4210 - acc: 0.5710 - val_loss: 1.3019 - val_acc: 0.6160
Epoch 5/20
1000/1000 [==============================] - 295s 295ms/step - loss: 1.1946 - acc: 0.6490 - val_loss: 1.1182 - val_acc: 0.7280
Epoch 6/20
1000/1000 [==============================] - 277s 277ms/step - loss: 1.0291 - acc: 0.7330 - val_loss: 1.0279 - val_acc: 0.7430
Epoch 7/20
1000/1000 [==============================] - 177s 177ms/step - loss: 1.0065 - acc: 0.7060 - val_loss: 0.9229 - val_acc: 0.7690
Epoch 8/20
1000/1000 [==============================] - 169s 169ms/step - loss: 0.8438 - acc: 0.7810 - val_loss: 0.9716 - val_acc: 0.6670
Epoch 9/20
1000/1000 [==============================] - 169s 169ms/step - loss: 0.8898 - acc: 0.7230 - val_loss: 0.9710 - val_acc: 0.6660
Epoch 10/20
1000/1000 [==============================] - 166s 166ms/step - loss: 0.8258 - acc: 0.7460 - val_loss: 0.9026 - val_acc: 0.7130
Epoch 11/20
1000/1000 [==============================] - 169s 169ms/step - loss: 0.7592 - acc: 0.7640 - val_loss: 0.9691 - val_acc: 0.6730
Epoch 12/20
1000/1000 [==============================] - 165s 165ms/step - loss: 0.7793 - acc: 0.7520 - val_loss: 0.8350 - val_acc: 0.6800
Epoch 13/20
1000/1000 [==============================] - 164s 164ms/step - loss: 0.6677 - acc: 0.7780 - val_loss: 0.7203 - val_acc: 0.7730
Epoch 14/20
1000/1000 [==============================] - 164s 164ms/step - loss: 0.7018 - acc: 0.7630 - val_loss: 0.6947 - val_acc: 0.7760
Epoch 15/20
1000/1000 [==============================] - 163s 163ms/step - loss: 0.6129 - acc: 0.8100 - val_loss: 0.7025 - val_acc: 0.7610
Epoch 16/20
1000/1000 [==============================] - 163s 163ms/step - loss: 0.6104 - acc: 0.8190 - val_loss: 0.6385 - val_acc: 0.8220
Epoch 17/20
1000/1000 [==============================] - 163s 163ms/step - loss: 0.5507 - acc: 0.8320 - val_loss: 0.6273 - val_acc: 0.8290
Epoch 18/20
1000/1000 [==============================] - 164s 164ms/step - loss: 0.5205 - acc: 0.8360 - val_loss: 0.8740 - val_acc: 0.6750
Epoch 19/20
1000/1000 [==============================] - 163s 163ms/step - loss: 0.5852 - acc: 0.8150 - val_loss: 0.6614 - val_acc: 0.7890
Epoch 20/20
1000/1000 [==============================] - 166s 166ms/step - loss: 0.5310 - acc: 0.8340 - val_loss: 0.5718 - val_acc: 0.8250

3. Analysis

  VGGNet is a deep convolutional neural network developed jointly by the Visual Geometry Group at the University of Oxford and researchers at Google DeepMind. VGG explored the relationship between the depth of a convolutional network and its performance: by repeatedly stacking small 3*3 convolution kernels and 2*2 max-pooling layers, it successfully built networks 16 to 19 layers deep.
  VGG took second place in the classification task and first place in the localization task of the 2014 competition. It also transfers well, generalizing very effectively to other image datasets. Its architecture is simple: the entire network uses the same 3*3 convolution kernel size and 2*2 pooling size. VGG is still frequently used to extract image features, and its pretrained weights serve as a very good initialization when retraining on new image-classification tasks.
  VGG improves performance by going deeper. It consists of five convolutional blocks, each containing two to three convolutional layers, with a max-pooling layer at the end of each block to shrink the feature map. All layers within a block use the same number of kernels, and later blocks use more: 64-128-256-512-512.
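The design rationale behind stacking small 3*3 kernels can be checked with a little arithmetic: two stacked 3*3 convolutions cover the same 5*5 receptive field as one 5*5 kernel, and three cover 7*7, while using fewer parameters and adding extra non-linearities. A sketch (the helper names `stacked_receptive_field` and `conv_params` are ours, and `conv_params` counts weights only, assuming equal input and output channels, no bias):

```python
def stacked_receptive_field(n_layers, k=3):
    # Each stride-1 conv layer grows the receptive field by k - 1 pixels.
    return 1 + n_layers * (k - 1)

def conv_params(n_layers, k, channels):
    # Weight count for n_layers convs of size k x k, channels -> channels.
    return n_layers * k * k * channels * channels

# Three 3x3 convs span the same 7x7 field as a single 7x7 conv:
print(stacked_receptive_field(3, 3))  # 7
# ...with roughly half the parameters (at 256 channels):
print(conv_params(3, 3, 256))  # 1769472
print(conv_params(1, 7, 256))  # 3211264
```

This is the trade-off the architecture exploits at every one of its five blocks.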
