Facial Keypoint Detection with Keras

Improved version: http://www.cnblogs.com/ansang/p/8583122.html

Step 1: Prepare the required libraries

  • tensorflow 1.4.0
  • h5py 2.7.0
  • hdf5 1.8.15.1
  • Keras 2.0.8
  • opencv-python 3.3.0
  • numpy 1.13.3+mkl
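
To double-check that the environment matches, a quick version printout can help (a minimal sketch; the packages are exactly the ones listed above):

 import tensorflow, keras, cv2, numpy, h5py
 # print the installed version of each library used in this post
 for m in (tensorflow, keras, cv2, numpy, h5py):
     print(m.__name__, m.__version__)
 print("hdf5", h5py.version.hdf5_version)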

Step 2: Prepare the dataset:

data.7z

As shown in the figure, the archive contains both the label file and the image data.
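
Judging from the parsing code in the next step, each line of lable.txt holds an image file name followed by ten comma-separated keypoint coordinates (five x, y pairs). A hypothetical line might look like this (the numbers are invented purely for illustration):

 000001.jpg,69,111,108,111,88,136,72,152,105,152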

Step 3: Convert the images and labels into NumPy arrays:

 def __data_label__(path):
     f = open(path + "lable.txt", "r")
     datalist = []
     labellist = []
     for line in f.readlines():
         a = line.replace("\n", "")
         b = a.split(",")
         labellist.append(b[1:])                 # the ten keypoint coordinates
         imgname = path + b[0]                   # the first field is the image file name
         image = load_img(imgname, target_size=(218, 178))
         datalist.append(img_to_array(image))
     f.close()

     img_data = np.array(datalist)
     img_data = img_data.astype('float32')
     img_data /= 255                             # scale pixel values to [0, 1]
     label = np.array(labellist).astype('float32')  # convert the coordinate strings to floats
     return img_data, label
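
For reference, calling the loader should give arrays shaped as follows (assuming trainpath points at the training folder, as in the full listing further down):

 traindata, trainlabel = __data_label__(trainpath)
 print(traindata.shape)   # (N, 218, 178, 3) -- N images, RGB, scaled to [0, 1]
 print(trainlabel.shape)  # (N, 10) -- five (x, y) keypoints per image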

Step 4: Build the network:

A very simple network is used here.

 def __CNN__():
     model = Sequential()#218*178*3
     model.add(Conv2D(32, (3, 3), input_shape=(218, 178, 3)))
     model.add(Activation('relu'))
     model.add(MaxPooling2D(pool_size=(2, 2)))

     model.add(Conv2D(32, (3, 3)))
     model.add(Activation('relu'))
     model.add(MaxPooling2D(pool_size=(2, 2)))

     model.add(Conv2D(64, (3, 3)))
     model.add(Activation('relu'))
     model.add(MaxPooling2D(pool_size=(2, 2)))

     model.add(Flatten())
     model.add(Dense(64))
     model.add(Activation('relu'))
     model.add(Dropout(0.5))
     model.add(Dense(10))
     model.add(Activation('softmax'))
     model.summary()
     return model

_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_1 (Conv2D) (None, 216, 176, 32) 896
_________________________________________________________________
activation_1 (Activation) (None, 216, 176, 32) 0
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 108, 88, 32) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 106, 86, 32) 9248
_________________________________________________________________
activation_2 (Activation) (None, 106, 86, 32) 0
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 53, 43, 32) 0
_________________________________________________________________
conv2d_3 (Conv2D) (None, 51, 41, 64) 18496
_________________________________________________________________
activation_3 (Activation) (None, 51, 41, 64) 0
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 25, 20, 64) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 32000) 0
_________________________________________________________________
dense_1 (Dense) (None, 64) 2048064
_________________________________________________________________
activation_4 (Activation) (None, 64) 0
_________________________________________________________________
dropout_1 (Dropout) (None, 64) 0
_________________________________________________________________
dense_2 (Dense) (None, 10) 650
_________________________________________________________________
activation_5 (Activation) (None, 10) 0
=================================================================
Total params: 2,077,354
Trainable params: 2,077,354
Non-trainable params: 0
_________________________________________________________________
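
As a sanity check on the summary, the parameter counts work out as follows: conv2d_1 has 3×3×3×32 + 32 = 896 parameters, conv2d_2 has 3×3×32×32 + 32 = 9,248, conv2d_3 has 3×3×32×64 + 64 = 18,496, dense_1 has 32,000×64 + 64 = 2,048,064, and dense_2 has 64×10 + 10 = 650, giving the total of 2,077,354.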

Step 5: Train, save, and predict:

 def train(model, testdata, testlabel, traindata, trainlabel):

     # The loss argument of model.compile is the loss (objective) function
     model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
     # Start training: batch_size is the number of samples per update, epochs is the number of passes over the training data
     model.fit(traindata, trainlabel, batch_size=16, epochs=20,
               validation_data=(testdata, testlabel))
     # Evaluate on the test set
     model.evaluate(testdata, testlabel, batch_size=16, verbose=1)

 def save(model, file_path=FILE_PATH):
     print('Model Saved.')
     model.save_weights(file_path)

 # def load(model, file_path=FILE_PATH):
 #     print('Model Loaded.')
 #     model.load_weights(file_path)

 def predict(model, image):

     # Reshape to a single-sample batch and normalize to [0, 1]
     img = image.reshape((1, 218, 178, 3))
     img = img.astype('float32')
     img /= 255

     result = model.predict(img)
     result = result * 1000 + 10   # scale the raw output back up to pixel coordinates

     print(result)
     return result

Step 6: The main module:

 ############
 # Main module
 ############
 if __name__ == '__main__':
     model = __CNN__()
     testdata, testlabel = __data_label__(testpath)
     traindata, trainlabel = __data_label__(trainpath)
     # print(testlabel)
     # train(model,testdata, testlabel, traindata, trainlabel)
     # model.save(FILE_PATH)
     model.load_weights(FILE_PATH)
     img = []
     path = "D:/pycode/facial-keypoints-master/data/train/000096.jpg"
     # path = "D:/pycode/Abel_Aguilar_0001.jpg"
     image = load_img(path)
     img.append(img_to_array(image))
     img_data = np.array(img)
     rects = predict(model,img_data)
     img = cv2.imread(path)
     for x, y, w, h, a,b,c,d,e,f in rects:
         point(x,y)
         point(w, h)
         point(a,b)
         point(c,d)
         point(e,f)

     cv2.imshow('img', img)
     cv2.waitKey(0)
     cv2.destroyAllWindows()

When training, uncomment the call to train; when predicting, comment it out again and load the saved weights instead (a sketch of the two modes follows).
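
A minimal sketch of switching between the two modes; the TRAINING flag below is a convenience that is not in the original code, which toggles by commenting lines in and out:

 TRAINING = False   # hypothetical flag: set True to train, False to load weights and predict

 if TRAINING:
     train(model, testdata, testlabel, traindata, trainlabel)
     save(model)                    # writes FILE_PATH via model.save_weights
 else:
     model.load_weights(FILE_PATH)  # reuse previously trained weights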

The full code follows:

 from tensorflow.contrib.keras.api.keras.preprocessing.image import ImageDataGenerator,img_to_array
 from keras.models import Sequential
 from keras.layers.core import Dense, Dropout, Activation, Flatten
 from keras.layers.advanced_activations import PReLU
 from keras.layers.convolutional import Conv2D, MaxPooling2D, ZeroPadding2D
 from keras.optimizers import SGD, Adadelta, Adagrad
 from keras.preprocessing.image import load_img, img_to_array
 from keras.utils import np_utils, generic_utils
 import numpy as np
 import cv2

 FILE_PATH = 'face_landmark.h5'
 trainpath = 'D:/pycode/facial-keypoints-master/data/train/'
 testpath = 'D:/pycode/facial-keypoints-master/data/test/'

 def __data_label__(path):
     f = open(path + "lable.txt", "r")
     datalist = []
     labellist = []
     for line in f.readlines():
         a = line.replace("\n", "")
         b = a.split(",")
         labellist.append(b[1:])                 # the ten keypoint coordinates
         imgname = path + b[0]                   # the first field is the image file name
         image = load_img(imgname, target_size=(218, 178))
         datalist.append(img_to_array(image))
     f.close()

     img_data = np.array(datalist)
     img_data = img_data.astype('float32')
     img_data /= 255                             # scale pixel values to [0, 1]
     label = np.array(labellist).astype('float32')  # convert the coordinate strings to floats
     return img_data, label

 ###############
 # Build the CNN model
 ###############

 # Create the model

 def __CNN__():
     model = Sequential()#218*178*3
     model.add(Conv2D(32, (3, 3), input_shape=(218, 178, 3)))
     model.add(Activation('relu'))
     model.add(MaxPooling2D(pool_size=(2, 2)))

     model.add(Conv2D(32, (3, 3)))
     model.add(Activation('relu'))
     model.add(MaxPooling2D(pool_size=(2, 2)))

     model.add(Conv2D(64, (3, 3)))
     model.add(Activation('relu'))
     model.add(MaxPooling2D(pool_size=(2, 2)))

     model.add(Flatten())
     model.add(Dense(64))
     model.add(Activation('relu'))
     model.add(Dropout(0.5))
     model.add(Dense(10))
     model.add(Activation('softmax'))
     model.summary()
     return model

 def train(model, testdata, testlabel, traindata, trainlabel):

     # The loss argument of model.compile is the loss (objective) function
     model.compile(loss='categorical_crossentropy', optimizer='adam')
     # Start training: batch_size is the number of samples per update, epochs is the number of passes over the training data
     model.fit(traindata, trainlabel, batch_size=16, epochs=20,
               validation_data=(testdata, testlabel))
     # Evaluate on the test set
     model.evaluate(testdata, testlabel, batch_size=16, verbose=1)

 def save(model, file_path=FILE_PATH):
     print('Model Saved.')
     model.save_weights(file_path)

 # def load(model, file_path=FILE_PATH):
 #     print('Model Loaded.')
 #     model.load_weights(file_path)

 def predict(model, image):

     # Reshape to a single-sample batch and normalize to [0, 1]
     img = image.reshape((1, 218, 178, 3))
     img = img.astype('float32')
     img /= 255

     result = model.predict(img)
     result = result * 1000 + 10   # scale the raw output back up to pixel coordinates

     print(result)
     return result
 def point(x, y):
     # draw a keypoint on the global image; cast to int because cv2.circle expects integer pixel coordinates
     cv2.circle(img, (int(x), int(y)), 1, (0, 0, 255), 10)

 ############
 # Main module
 ############
 if __name__ == '__main__':
     model = __CNN__()
     testdata, testlabel = __data_label__(testpath)
     traindata, trainlabel = __data_label__(trainpath)
     # print(testlabel)
     # train(model,testdata, testlabel, traindata, trainlabel)
     # model.save(FILE_PATH)
     model.load_weights(FILE_PATH)
     img = []
     path = "D:/pycode/facial-keypoints-master/data/train/000096.jpg"
     # path = "D:/pycode/Abel_Aguilar_0001.jpg"
     image = load_img(path)
     img.append(img_to_array(image))
     img_data = np.array(img)
     rects = predict(model,img_data)
     img = cv2.imread(path)
     for x, y, w, h, a,b,c,d,e,f in rects:
         point(x,y)
         point(w, h)
         point(a,b)
         point(c,d)
         point(e,f)

     cv2.imshow('img', img)
     cv2.waitKey(0)
     cv2.destroyAllWindows()

The results are as follows:

(Figure: the predicted keypoints drawn on the test image.)

Future plans:

The model was trained with tensorflow-cpu on very little data and with a very simple network, so increasing the amount of data and the depth of the network should leave considerable room for improvement.

Also, the network can currently only predict on images of size (218, 178) pixels; making it more generally applicable is a goal for the future.

Improvement plan:

Resize all images to squares, padding the shorter side with black borders, as in the helper below.

 # Resize the image to the specified size
 def resize_image(image, height=IMAGE_SIZE, width=IMAGE_SIZE):
     top, bottom, left, right = (0, 0, 0, 0)

     # Get the image dimensions
     h, w, _ = image.shape

     # For images whose height and width differ, find the longer edge
     longest_edge = max(h, w)

     # Work out how many pixels the shorter edge needs to match the longer one
     if h < longest_edge:
         dh = longest_edge - h
         top = dh // 2
         bottom = dh - top
     elif w < longest_edge:
         dw = longest_edge - w
         left = dw // 2
         right = dw - left
     else:
         pass

     # RGB colour of the padding
     BLACK = [0, 0, 0]

     # Add a border so the image becomes square; cv2.BORDER_CONSTANT fills it with the colour given by value
     constant = cv2.copyMakeBorder(image, top, bottom, left, right, cv2.BORDER_CONSTANT, value=BLACK)

     # Resize to the target size and return
     return cv2.resize(constant, (height, width))
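
A quick usage sketch; IMAGE_SIZE is not defined in the snippet above, so the value here is only an assumption:

 IMAGE_SIZE = 128   # assumed target edge length

 img = cv2.imread("D:/pycode/facial-keypoints-master/data/train/000096.jpg")
 square = resize_image(img, IMAGE_SIZE, IMAGE_SIZE)
 print(square.shape)  # (128, 128, 3): padded to a square with black borders, then resized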
