Implementing VGG16 in Keras
1. Code Implementation
# -*- coding: utf-8 -*-
"""
Created on Sat Feb 9 15:33:39 2019

@author: zhen
"""
from keras.applications.vgg16 import VGG16
from keras.layers import Flatten
from keras.layers import Dense
from keras.layers import Dropout
from keras.models import Model
from keras.optimizers import SGD
from keras.datasets import mnist
import cv2
import numpy as np

# The default 224x224 input requires a lot of memory (at least 24 GB),
# so use a minimal resolution (48x48) to lower the memory requirement.
model_vgg = VGG16(include_top=False, weights='imagenet', input_shape=(48, 48, 3))
for layer in model_vgg.layers:
    layer.trainable = False  # freeze the pretrained convolutional base
model = Flatten(name='flatten')(model_vgg.output)  # flatten the feature map
model = Dense(4096, activation='relu', name='fc1')(model)
model = Dense(4096, activation='relu', name='fc2')(model)
model = Dropout(0.5)(model)
model = Dense(10, activation='softmax')(model)
model_vgg_mnist = Model(inputs=model_vgg.input, outputs=model, name='vgg16')
model_vgg_mnist.summary()

# For comparison: the input size originally recommended by VGGNet (224x224).
# This model is only built and summarized; training below uses the 48x48 model.
model_vgg = VGG16(include_top=False, weights='imagenet', input_shape=(224, 224, 3))
for layer in model_vgg.layers:
    layer.trainable = False
model = Flatten()(model_vgg.output)
model = Dense(4096, activation='relu', name='fc1')(model)
model = Dense(4096, activation='relu', name='fc2')(model)
model = Dropout(0.5)(model)
model = Dense(10, activation='softmax', name='prediction')(model)
model_vgg_mnist_pretrain = Model(model_vgg.input, model, name='vgg16_pretrain')
model_vgg_mnist_pretrain.summary()

sgd = SGD(lr=0.05, decay=1e-5)  # stochastic gradient descent
model_vgg_mnist.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])

(x_train, y_train), (x_test, y_test) = mnist.load_data("../test_data_home")
# Use only 1000 samples each for training and validation to keep runtime manageable
x_train, y_train = x_train[:1000], y_train[:1000]
x_test, y_test = x_test[:1000], y_test[:1000]

# Resize to 48x48 and convert single-channel grayscale to 3-channel RGB
x_train = [cv2.cvtColor(cv2.resize(i, (48, 48)), cv2.COLOR_GRAY2RGB) for i in x_train]
x_train = np.concatenate([arr[np.newaxis] for arr in x_train]).astype('float32')
x_test = [cv2.cvtColor(cv2.resize(i, (48, 48)), cv2.COLOR_GRAY2RGB) for i in x_test]
x_test = np.concatenate([arr[np.newaxis] for arr in x_test]).astype('float32')

print(x_train.shape)
print(x_test.shape)

# Scale pixel values to [0, 1]
x_train = x_train / 255
x_test = x_test / 255

def tran_y(y):
    # One-hot encode a digit label (equivalently: keras.utils.to_categorical)
    y_ohe = np.zeros(10)
    y_ohe[y] = 1
    return y_ohe

y_train_ohe = np.array([tran_y(y_train[i]) for i in range(len(y_train))])
y_test_ohe = np.array([tran_y(y_test[i]) for i in range(len(y_test))])

model_vgg_mnist.fit(x_train, y_train_ohe, validation_data=(x_test, y_test_ohe), epochs=20, batch_size=100)
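After the final epoch, the fine-tuned model can be checked directly. A minimal sketch, assuming the variables from the script above are still in scope (model.evaluate and model.predict are standard Keras calls; the print format is illustrative):

# Re-evaluate on the 1000-sample test subset prepared above
loss, acc = model_vgg_mnist.evaluate(x_test, y_test_ohe, batch_size=100)
print('test loss: %.4f, test accuracy: %.4f' % (loss, acc))

# Predict the digit in the first test image
probs = model_vgg_mnist.predict(x_test[:1])  # shape (1, 10), softmax probabilities
print('predicted digit:', np.argmax(probs, axis=1)[0])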
2. Results
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_9 (InputLayer) (None, 48, 48, 3) 0
_________________________________________________________________
block1_conv1 (Conv2D) (None, 48, 48, 64) 1792
_________________________________________________________________
block1_conv2 (Conv2D) (None, 48, 48, 64) 36928
_________________________________________________________________
block1_pool (MaxPooling2D) (None, 24, 24, 64) 0
_________________________________________________________________
block2_conv1 (Conv2D) (None, 24, 24, 128) 73856
_________________________________________________________________
block2_conv2 (Conv2D) (None, 24, 24, 128) 147584
_________________________________________________________________
block2_pool (MaxPooling2D) (None, 12, 12, 128) 0
_________________________________________________________________
block3_conv1 (Conv2D) (None, 12, 12, 256) 295168
_________________________________________________________________
block3_conv2 (Conv2D) (None, 12, 12, 256) 590080
_________________________________________________________________
block3_conv3 (Conv2D) (None, 12, 12, 256) 590080
_________________________________________________________________
block3_pool (MaxPooling2D) (None, 6, 6, 256) 0
_________________________________________________________________
block4_conv1 (Conv2D) (None, 6, 6, 512) 1180160
_________________________________________________________________
block4_conv2 (Conv2D) (None, 6, 6, 512) 2359808
_________________________________________________________________
block4_conv3 (Conv2D) (None, 6, 6, 512) 2359808
_________________________________________________________________
block4_pool (MaxPooling2D) (None, 3, 3, 512) 0
_________________________________________________________________
block5_conv1 (Conv2D) (None, 3, 3, 512) 2359808
_________________________________________________________________
block5_conv2 (Conv2D) (None, 3, 3, 512) 2359808
_________________________________________________________________
block5_conv3 (Conv2D) (None, 3, 3, 512) 2359808
_________________________________________________________________
block5_pool (MaxPooling2D) (None, 1, 1, 512) 0
_________________________________________________________________
flatten (Flatten) (None, 512) 0
_________________________________________________________________
fc1 (Dense) (None, 4096) 2101248
_________________________________________________________________
fc2 (Dense) (None, 4096) 16781312
_________________________________________________________________
dropout_9 (Dropout) (None, 4096) 0
_________________________________________________________________
dense_5 (Dense) (None, 10) 40970
=================================================================
Total params: 33,638,218
Trainable params: 18,923,530
Non-trainable params: 14,714,688
_________________________________________________________________
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_10 (InputLayer) (None, 224, 224, 3) 0
_________________________________________________________________
block1_conv1 (Conv2D) (None, 224, 224, 64) 1792
_________________________________________________________________
block1_conv2 (Conv2D) (None, 224, 224, 64) 36928
_________________________________________________________________
block1_pool (MaxPooling2D) (None, 112, 112, 64) 0
_________________________________________________________________
block2_conv1 (Conv2D) (None, 112, 112, 128) 73856
_________________________________________________________________
block2_conv2 (Conv2D) (None, 112, 112, 128) 147584
_________________________________________________________________
block2_pool (MaxPooling2D) (None, 56, 56, 128) 0
_________________________________________________________________
block3_conv1 (Conv2D) (None, 56, 56, 256) 295168
_________________________________________________________________
block3_conv2 (Conv2D) (None, 56, 56, 256) 590080
_________________________________________________________________
block3_conv3 (Conv2D) (None, 56, 56, 256) 590080
_________________________________________________________________
block3_pool (MaxPooling2D) (None, 28, 28, 256) 0
_________________________________________________________________
block4_conv1 (Conv2D) (None, 28, 28, 512) 1180160
_________________________________________________________________
block4_conv2 (Conv2D) (None, 28, 28, 512) 2359808
_________________________________________________________________
block4_conv3 (Conv2D) (None, 28, 28, 512) 2359808
_________________________________________________________________
block4_pool (MaxPooling2D) (None, 14, 14, 512) 0
_________________________________________________________________
block5_conv1 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
block5_conv2 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
block5_conv3 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
block5_pool (MaxPooling2D) (None, 7, 7, 512) 0
_________________________________________________________________
flatten_5 (Flatten) (None, 25088) 0
_________________________________________________________________
fc1 (Dense) (None, 4096) 102764544
_________________________________________________________________
fc2 (Dense) (None, 4096) 16781312
_________________________________________________________________
dropout_10 (Dropout) (None, 4096) 0
_________________________________________________________________
prediction (Dense) (None, 10) 40970
=================================================================
Total params: 134,301,514
Trainable params: 119,586,826
Non-trainable params: 14,714,688
_________________________________________________________________
(1000, 48, 48, 3)
(1000, 48, 48, 3)
Train on 1000 samples, validate on 1000 samples
Epoch 1/20
1000/1000 [==============================] - 175s 175ms/step - loss: 2.1289 - acc: 0.2350 - val_loss: 1.9100 - val_acc: 0.4230
Epoch 2/20
1000/1000 [==============================] - 190s 190ms/step - loss: 1.7685 - acc: 0.4420 - val_loss: 1.6503 - val_acc: 0.4930
Epoch 3/20
1000/1000 [==============================] - 265s 265ms/step - loss: 1.5582 - acc: 0.5140 - val_loss: 1.5005 - val_acc: 0.5440
Epoch 4/20
1000/1000 [==============================] - 373s 373ms/step - loss: 1.4210 - acc: 0.5710 - val_loss: 1.3019 - val_acc: 0.6160
Epoch 5/20
1000/1000 [==============================] - 295s 295ms/step - loss: 1.1946 - acc: 0.6490 - val_loss: 1.1182 - val_acc: 0.7280
Epoch 6/20
1000/1000 [==============================] - 277s 277ms/step - loss: 1.0291 - acc: 0.7330 - val_loss: 1.0279 - val_acc: 0.7430
Epoch 7/20
1000/1000 [==============================] - 177s 177ms/step - loss: 1.0065 - acc: 0.7060 - val_loss: 0.9229 - val_acc: 0.7690
Epoch 8/20
1000/1000 [==============================] - 169s 169ms/step - loss: 0.8438 - acc: 0.7810 - val_loss: 0.9716 - val_acc: 0.6670
Epoch 9/20
1000/1000 [==============================] - 169s 169ms/step - loss: 0.8898 - acc: 0.7230 - val_loss: 0.9710 - val_acc: 0.6660
Epoch 10/20
1000/1000 [==============================] - 166s 166ms/step - loss: 0.8258 - acc: 0.7460 - val_loss: 0.9026 - val_acc: 0.7130
Epoch 11/20
1000/1000 [==============================] - 169s 169ms/step - loss: 0.7592 - acc: 0.7640 - val_loss: 0.9691 - val_acc: 0.6730
Epoch 12/20
1000/1000 [==============================] - 165s 165ms/step - loss: 0.7793 - acc: 0.7520 - val_loss: 0.8350 - val_acc: 0.6800
Epoch 13/20
1000/1000 [==============================] - 164s 164ms/step - loss: 0.6677 - acc: 0.7780 - val_loss: 0.7203 - val_acc: 0.7730
Epoch 14/20
1000/1000 [==============================] - 164s 164ms/step - loss: 0.7018 - acc: 0.7630 - val_loss: 0.6947 - val_acc: 0.7760
Epoch 15/20
1000/1000 [==============================] - 163s 163ms/step - loss: 0.6129 - acc: 0.8100 - val_loss: 0.7025 - val_acc: 0.7610
Epoch 16/20
1000/1000 [==============================] - 163s 163ms/step - loss: 0.6104 - acc: 0.8190 - val_loss: 0.6385 - val_acc: 0.8220
Epoch 17/20
1000/1000 [==============================] - 163s 163ms/step - loss: 0.5507 - acc: 0.8320 - val_loss: 0.6273 - val_acc: 0.8290
Epoch 18/20
1000/1000 [==============================] - 164s 164ms/step - loss: 0.5205 - acc: 0.8360 - val_loss: 0.8740 - val_acc: 0.6750
Epoch 19/20
1000/1000 [==============================] - 163s 163ms/step - loss: 0.5852 - acc: 0.8150 - val_loss: 0.6614 - val_acc: 0.7890
Epoch 20/20
1000/1000 [==============================] - 166s 166ms/step - loss: 0.5310 - acc: 0.8340 - val_loss: 0.5718 - val_acc: 0.8250
3. Analysis
VGGNet is a deep convolutional neural network developed jointly by the Visual Geometry Group at the University of Oxford and researchers at Google DeepMind. VGG explored the relationship between the depth of a convolutional network and its performance: by repeatedly stacking small 3×3 convolution kernels and 2×2 max-pooling layers, it successfully built networks 16 to 19 layers deep. (Two stacked 3×3 convolutions cover the same receptive field as a single 5×5 convolution while using fewer parameters and adding an extra nonlinearity, which is why the small-kernel stacks pay off.)
VGG took second place in the classification task and first place in the localization task at the 2014 competition (ILSVRC 2014). It also transfers very well, generalizing strongly to other image datasets. The architecture is simple: the entire network uses the same 3×3 convolution size and 2×2 pooling size. VGG is still frequently used to extract image features, and its pretrained weights serve as an excellent initialization when retraining on a new image-classification task.
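As a concrete illustration of the feature-extraction use just mentioned, here is a minimal sketch using keras.applications (pooling='avg' and preprocess_input are part of the real API; the random batch simply stands in for real images):

from keras.applications.vgg16 import VGG16, preprocess_input
import numpy as np

# Convolutional base only; global average pooling yields one 512-d vector per image
extractor = VGG16(include_top=False, weights='imagenet', pooling='avg',
                  input_shape=(224, 224, 3))

batch = (np.random.rand(2, 224, 224, 3) * 255).astype('float32')  # stand-in for real images
features = extractor.predict(preprocess_input(batch))
print(features.shape)  # (2, 512)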
VGG improves performance by adding depth. It has five convolutional blocks, each containing two or three convolutional layers, and each block ends with a max-pooling layer that shrinks the feature map. All convolutions within a block use the same number of filters, and later blocks use more: 64-128-256-512-512, as the sketch below shows.
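The five-block pattern can be written out directly. The following is a hand-rolled sketch of the convolutional stack for illustration (not the keras.applications implementation, though the layer shapes match the summaries above):

from keras.layers import Input, Conv2D, MaxPooling2D
from keras.models import Model

def vgg_block(x, filters, n_convs):
    # n_convs 3x3 convolutions, then one 2x2 max pooling that halves the size
    for _ in range(n_convs):
        x = Conv2D(filters, (3, 3), activation='relu', padding='same')(x)
    return MaxPooling2D((2, 2), strides=(2, 2))(x)

inputs = Input(shape=(224, 224, 3))
x = inputs
# Five blocks with filter counts 64-128-256-512-512 and 2-2-3-3-3 conv layers
for filters, n_convs in [(64, 2), (128, 2), (256, 3), (512, 3), (512, 3)]:
    x = vgg_block(x, filters, n_convs)
conv_base = Model(inputs, x, name='vgg16_conv_base')
conv_base.summary()  # final feature map: (7, 7, 512), matching the 224x224 summary above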