[TensorFlow] Object Detection API: Training Your Own Gesture Recognition Model

1. Install TensorFlow and download the Object Detection API

1. Install TensorFlow:

CPU version: pip install tensorflow
GPU version: pip install tensorflow-gpu
Upgrade TensorFlow to the latest release (1.4.0 at the time of writing): pip install --upgrade tensorflow-gpu

2. Install the required libraries:

sudo pip install pillow
sudo pip install lxml
sudo pip install jupyter
sudo pip install matplotlib
3. Download the Object Detection API:
git clone https://github.com/tensorflow/models.git
4. Compile the protobufs (run in the tensorflow/models/research/ directory):
protoc object_detection/protos/*.proto --python_out=.
5. Add the libraries to PYTHONPATH (run in the tensorflow/models/research/ directory):
export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
6. Test the installation:
python object_detection/builders/model_builder_test.py

2. Preparing the training dataset

1. Under the models directory, create the folder dataset/VOCdevkit/VOC2007. The VOC2007 directory follows the standard PASCAL VOC structure shown below:
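The original screenshot is not included; this is a minimal sketch of the standard PASCAL VOC layout that the scripts below assume (your own images and XML annotation files go into JPEGImages and Annotations):

VOC2007/
├── Annotations/      # one PASCAL VOC .xml annotation file per image
├── ImageSets/
│   └── Main/         # train.txt / val.txt are generated here in the next step
└── JPEGImages/       # the gesture images (.jpg)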

2. Run the generate_txt.py script in the VOC2007 directory to split the dataset. I used a train/validation ratio of 7:3 over 1557 images in total.

import os
import random

train_percent = 0.70
xmlfilepath = 'Annotations'
txtsavepath = 'ImageSets/Main'

total_xml = os.listdir(xmlfilepath)
num = len(total_xml)
indices = range(num)
tr = int(num * train_percent)
train = random.sample(indices, tr)

ftrain = open('ImageSets/Main/train.txt', 'w')
fval = open('ImageSets/Main/val.txt', 'w')
for i in indices:
    name = total_xml[i][:-4] + '\n'  # strip the .xml extension
    if i in train:
        ftrain.write(name)
    else:
        fval.write(name)
ftrain.close()
fval.close()
print("finished")
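This writes ImageSets/Main/train.txt and ImageSets/Main/val.txt, each listing the image basenames (without extension) one per line; create_pascal_tf_record.py reads these lists in the next step.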

3. Copy create_pascal_tf_record.py from models/research/object_detection/dataset_tools into the dataset folder and make the following changes:
(1) Change line 85: img_path = os.path.join(data['folder'], image_subdirectory, data['filename'])
to: img_path = os.path.join("VOC2007", image_subdirectory, data['filename'])
Reason: the folder field in my annotation XML files is "hand_2", but that directory does not exist locally, so I hard-code "VOC2007" instead.
(2) Change line 163: examples_path = os.path.join(data_dir, year, 'ImageSets', 'Main', 'aeroplane_' + FLAGS.set + '.txt')
to: examples_path = os.path.join(data_dir, year, 'ImageSets', 'Main', FLAGS.set + '.txt')
Reason: the txt files in my Main directory have no aeroplane_ prefix.
(3) Create a pascal_label_map.pbtxt file for your own labels; its content looks like the sketch below:
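The original screenshot is not included; this is a minimal sketch of the label map format with hypothetical gesture names (replace them with the six class names used in your XML annotations; ids start at 1):

item {
  id: 1
  name: 'fist'   # hypothetical label; use your own class names
}
item {
  id: 2
  name: 'palm'
}
# ... items 3-5 follow the same pattern ...
item {
  id: 6
  name: 'ok'
}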

(4) Run the following commands to generate the tf_record files for training and validation:
python create_pascal_tf_record.py \
    --data_dir=./VOCdevkit \
    --label_map_path=./pascal_label_map.pbtxt \
    --year=VOC2007 \
    --set=train \
    --output_path=./pascal_train.record

python create_pascal_tf_record.py \
    --data_dir=./VOCdevkit \
    --label_map_path=./pascal_label_map.pbtxt \
    --year=VOC2007 \
    --set=val \
    --output_path=./pascal_val.record
Relative paths are used here; change them to absolute paths if needed. When the commands finish, you will find pascal_train.record and pascal_val.record in the working directory (dataset): the binary TFRecord files for the training and validation sets.

3. Extract the SSD MobileNet model

Extract the archive with tar -xvf ssd_mobilenet_v1_coco_2018_01_28.tar, which yields the following files:
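The original screenshot is not included; the extracted ssd_mobilenet_v1_coco_2018_01_28 folder typically contains something like:

checkpoint
frozen_inference_graph.pb
model.ckpt.data-00000-of-00001
model.ckpt.index
model.ckpt.meta
pipeline.config
saved_model/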

Copy the three model.ckpt.* files from that folder into the dataset folder.

4. Modify the config file

Copy models/research/object_detection/samples/configs/ssd_mobilenet_v1_pets.config into dataset and modify it as follows:
(1) Change num_classes to your own number of classes (6 in my case).
(2) Update the paths (5 places):
  fine_tune_checkpoint: "./models/dataset/model.ckpt"
  input_path: "./models/dataset/pascal_train.record"
  label_map_path: "./models/dataset/pascal_label_map.pbtxt"
  input_path: "./models/dataset/pascal_val.record"
  label_map_path: "./models/dataset/pascal_label_map.pbtxt"
Absolute paths are recommended here.
Save the config file and rename it ssd_mobilenet_v1_pascal.config. The sketch below shows where these fields sit in the config.
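For orientation, a rough sketch of the relevant blocks in the pets config after the edits (everything else stays unchanged; substitute absolute paths as needed):

model {
  ssd {
    num_classes: 6
    # (other model settings unchanged)
  }
}
train_config: {
  fine_tune_checkpoint: "./models/dataset/model.ckpt"
  # (other training settings unchanged)
}
train_input_reader: {
  tf_record_input_reader {
    input_path: "./models/dataset/pascal_train.record"
  }
  label_map_path: "./models/dataset/pascal_label_map.pbtxt"
}
eval_input_reader: {
  tf_record_input_reader {
    input_path: "./models/dataset/pascal_val.record"
  }
  label_map_path: "./models/dataset/pascal_label_map.pbtxt"
  # (other eval settings unchanged)
}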

5. Start training

Copy models/research/object_detection/model_main.py into the dataset directory.
Add the following three lines to model_main.py:

import os
os.environ['CUDA_VISIBLE_DEVICES'] = '' # set to the id of an available GPU (an empty string forces CPU)
tf.logging.set_verbosity(tf.logging.INFO) # print training logs

Run the training command:
python model_main.py \
    --pipeline_config_path=./ssd_mobilenet_v1_pascal.config \
    --model_dir=./output \
    --num_train_steps=50000 \
    --sample_1_of_n_eval_examples=1 \
    --alsologtostderr

6. Evaluating the model

I have not yet found a separate evaluation script. Note, however, that model_main.py interleaves evaluation on the validation set during training (controlled by --sample_1_of_n_eval_examples), which is where the mAP curves in the next section come from.

7. Viewing the results

tensorboard --logdir=./models/dataset/output --port=6006
Open http://<server IP>:6006/ in a browser to follow the training process.

The main curves to watch are the loss and mAP@.50IOU.

8. Exporting a model for inference

Copy export_inference_graph.py from the models/research/object_detection directory into the dataset directory and run:
python export_inference_graph.py \
    --input_type=image_tensor \
    --pipeline_config_path=./ssd_mobilenet_v1_pascal.config \
    --trained_checkpoint_prefix=./output/model.ckpt-10000 \
    --output_directory=./savedModelcd

The exported model files are shown below:
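The original screenshot is not included; export_inference_graph.py typically writes the following into the output directory:

checkpoint
frozen_inference_graph.pb
model.ckpt.data-00000-of-00001
model.ckpt.index
model.ckpt.meta
pipeline.config
saved_model/
    saved_model.pb
    variables/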

9. Using the exported model

Create object_detection_test.py in the dataset directory and copy it into models/research/object_detection, because it imports the utils modules (label_map_util, visualization_utils) that live there.
You can create your own test-image folder under dataset and then adjust the corresponding paths in object_detection_test.py.

import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image

# This is needed to display the images (only in a notebook).
# %matplotlib inline

# This is needed since the script is stored in the object_detection folder.
sys.path.append("..")
from utils import label_map_util
from utils import visualization_utils as vis_util

# What model to download (not needed here; a locally exported model is used instead).
# MODEL_NAME = 'ssd_mobilenet_v1_coco_2017_11_17'
# MODEL_FILE = MODEL_NAME + '.tar.gz'
# DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/'
MODEL_NAME = '/home/minelab/chenqingyun/models/dataset/savedModelcd'

# Path to the frozen detection graph. This is the actual model used for object detection.
PATH_TO_CKPT = MODEL_NAME + '/frozen_inference_graph.pb'

# Label map used to attach the correct label to each box.
PATH_TO_LABELS = "/home/minelab/chenqingyun/models/dataset/pascal_label_map.pbtxt"
NUM_CLASSES = 6

# Download model (kept from the original tutorial; not needed for a local model).
# opener = urllib.request.URLopener()
# opener.retrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE)
# tar_file = tarfile.open(MODEL_FILE)
# for file in tar_file.getmembers():
#     file_name = os.path.basename(file.name)
#     if 'frozen_inference_graph.pb' in file_name:
#         tar_file.extract(file, os.getcwd())

# Load a (frozen) Tensorflow model into memory.
detection_graph = tf.Graph()
with detection_graph.as_default():
    od_graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
        serialized_graph = fid.read()
        od_graph_def.ParseFromString(serialized_graph)
        tf.import_graph_def(od_graph_def, name='')

# Load the label map.
label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
categories = label_map_util.convert_label_map_to_categories(
    label_map, max_num_classes=NUM_CLASSES, use_display_name=True)
category_index = label_map_util.create_category_index(categories)

# Helper code.
def load_image_into_numpy_array(image):
    (im_width, im_height) = image.size
    return np.array(image.getdata()).reshape(
        (im_height, im_width, 3)).astype(np.uint8)

# Directory containing the test images.
PATH_TO_TEST_IMAGES_DIR = '/home/minelab/chenqingyun/models/dataset/1'
# TEST_IMAGE_PATHS = [os.path.join(PATH_TO_TEST_IMAGES_DIR, 'image{}.jpg'.format(i)) for i in range(1, 3)]
# Single-image option (superseded by the loop over IMGES_LIST below):
# TEST_IMAGE = os.path.join(PATH_TO_TEST_IMAGES_DIR, "2.jpeg")

# Size, in inches, of the output images.
IMAGE_SIZE = (224, 224)

IMGES_LIST = os.listdir(PATH_TO_TEST_IMAGES_DIR)

with detection_graph.as_default():
    with tf.Session(graph=detection_graph) as sess:
        for IMG_NAME in IMGES_LIST:
            TEST_IMAGE = os.path.join(PATH_TO_TEST_IMAGES_DIR, IMG_NAME)
            image = Image.open(TEST_IMAGE)
            # The array-based representation of the image is used to prepare the
            # result image with boxes and labels on it.
            image_np = load_image_into_numpy_array(image)
            # Expand dimensions since the model expects images to have shape: [1, None, None, 3]
            image_np_expanded = np.expand_dims(image_np, axis=0)
            # Input and output tensors of detection_graph.
            image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
            # Each box represents a part of the image where a particular object was detected.
            boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
            # Each score represents the confidence for each of the objects.
            # The score is shown on the result image, together with the class label.
            scores = detection_graph.get_tensor_by_name('detection_scores:0')
            classes = detection_graph.get_tensor_by_name('detection_classes:0')
            num_detections = detection_graph.get_tensor_by_name('num_detections:0')
            # Actual detection.
            (boxes, scores, classes, num_detections) = sess.run(
                [boxes, scores, classes, num_detections],
                feed_dict={image_tensor: image_np_expanded})
            # Visualization of the results of a detection.
            vis_util.visualize_boxes_and_labels_on_image_array(
                image_np,
                np.squeeze(boxes),
                np.squeeze(classes).astype(np.int32),
                np.squeeze(scores),
                category_index,
                use_normalized_coordinates=True,
                line_thickness=8)
            # print(scores)
            # print(classes)
            # print(category_index)
            final_score = np.squeeze(scores)
            count = 0
            for i in range(100):  # the exported SSD graph returns up to 100 detections per image
                if final_score[i] > 0.5:
                    count = count + 1
                    print(IMG_NAME, classes[0][i], scores[0][i])
            # print('the count of objects is: ', count)
            # plt.figure(figsize=IMAGE_SIZE)
            # plt.imshow(image_np)
            # plt.show()
