Please set up your environment by following installation.md under the models/blob/master/research/object_detection/g3doc/ directory.

For environment setup, see also: "Deploying the GPU-based TensorFlow Object Detection API on Windows 10 and running object detection with a USB camera".
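Once the environment is set up, it is worth checking that TensorFlow can see the GPU and that the Object Detection API is importable before going further. A minimal sanity-check sketch, assuming a TensorFlow 1.x GPU build and that the object_detection package has been put on PYTHONPATH as described in installation.md (nothing in this snippet comes from the original post):

import tensorflow as tf
from object_detection.utils import label_map_util  # import fails if the API is not on PYTHONPATH

print('tensorflow version:', tf.__version__)
print('gpu available:', tf.test.is_gpu_available())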

1. Test OpenCV capture from a USB camera, in both C++ and Python

Reference: installing OpenCV 3.1 on Ubuntu 16.04 and capturing images from a USB camera.

import cv2

# Open a preview window and the default USB camera (device 0).
cv2.namedWindow('testcamera', cv2.WINDOW_NORMAL)
capture = cv2.VideoCapture(0)
print(capture.isOpened())

num = 0
while True:
    ret, img = capture.read()
    cv2.imshow('testcamera', img)
    key = cv2.waitKey(1)
    num += 1
    if key & 0xFF == 27:  # <ESC> quits (the raw key code differs between OpenCV builds)
        break

capture.release()
cv2.destroyAllWindows()
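If the capture opens but the preview stays black or the frame rate looks wrong, it can help to print what the camera actually reports. A small follow-up sketch, assuming the OpenCV 3.x Python bindings (the CAP_PROP_* constants):

import cv2

capture = cv2.VideoCapture(0)
print('opened:', capture.isOpened())
print('width :', capture.get(cv2.CAP_PROP_FRAME_WIDTH))
print('height:', capture.get(cv2.CAP_PROP_FRAME_HEIGHT))
print('fps   :', capture.get(cv2.CAP_PROP_FPS))
capture.release()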
// C++ version of the same camera test, written against the legacy OpenCV C API.
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

using namespace cv;

int main(int argc, char** argv)
{
    cvNamedWindow("视频");
    CvCapture* capture = cvCreateCameraCapture(-1);  // -1: use the first available camera
    IplImage* frame;
    while (1) {
        frame = cvQueryFrame(capture);
        if (!frame) break;
        cvShowImage("视频", frame);
        char c = cvWaitKey(33);   // ~30 fps refresh
        if (c == 27) break;       // <ESC> quits
    }
    cvReleaseCapture(&capture);
    cvDestroyWindow("视频");
    return 0;
}

2. Deploy the GPU-based TensorFlow Object Detection API and run object detection on a USB camera

import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
import cv2
import time
from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image

# This is needed since the notebook is stored in the object_detection folder.
sys.path.append("..")
from utils import label_map_util
from utils import visualization_utils as vis_util

# What model to download.
MODEL_NAME = 'ssd_mobilenet_v1_coco_2017_11_17'
#MODEL_NAME = 'faster_rcnn_resnet101_coco_11_06_2017'
#MODEL_NAME = 'ssd_inception_v2_coco_11_06_2017'
MODEL_FILE = MODEL_NAME + '.tar.gz'

# Path to frozen detection graph. This is the actual model that is used for the object detection.
PATH_TO_CKPT = MODEL_NAME + '/frozen_inference_graph.pb'

# List of the strings that is used to add the correct label for each box.
PATH_TO_LABELS = os.path.join('/home/dsp/ranjiewen/tensorflow_models/models/research/object_detection/data', 'mscoco_label_map.pbtxt')

NUM_CLASSES = 90

# Extract the frozen ssd_mobilenet graph from the local tarball
# (the download line is commented out; the archive is assumed to be present already).
start = time.clock()
opener = urllib.request.URLopener()
#opener.retrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE)
tar_file = tarfile.open(MODEL_FILE)
for file in tar_file.getmembers():
    file_name = os.path.basename(file.name)
    if 'frozen_inference_graph.pb' in file_name:
        tar_file.extract(file, os.getcwd())
end = time.clock()
print('load the model', (end - start))

# Load the frozen graph into memory.
detection_graph = tf.Graph()
with detection_graph.as_default():
    od_graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
        serialized_graph = fid.read()
        od_graph_def.ParseFromString(serialized_graph)
        tf.import_graph_def(od_graph_def, name='')

# Load the label map and build the category index used for display.
label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True)
category_index = label_map_util.create_category_index(categories)

cap = cv2.VideoCapture(0)
print(cap.isOpened())

with detection_graph.as_default():
    with tf.Session(graph=detection_graph) as sess:
        writer = tf.summary.FileWriter("logs/", sess.graph)
        sess.run(tf.global_variables_initializer())
        while True:
            print("-------")
            ret, frame = cap.read()
            start = time.clock()
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break
            # The array based representation of the image will be used later in order to prepare
            # the result image with boxes and labels on it.
            image_np = frame
            # Expand dimensions since the model expects images to have shape: [1, None, None, 3]
            image_np_expanded = np.expand_dims(image_np, axis=0)
            image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
            # Each box represents a part of the image where a particular object was detected.
            boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
            # Each score represents the level of confidence for each of the objects.
            # The score is shown on the result image, together with the class label.
            scores = detection_graph.get_tensor_by_name('detection_scores:0')
            classes = detection_graph.get_tensor_by_name('detection_classes:0')
            num_detections = detection_graph.get_tensor_by_name('num_detections:0')
            # Actual detection.
            (boxes, scores, classes, num_detections) = sess.run(
                [boxes, scores, classes, num_detections],
                feed_dict={image_tensor: image_np_expanded})
            # Visualization of the results of a detection.
            vis_util.visualize_boxes_and_labels_on_image_array(
                image_np,
                np.squeeze(boxes),
                np.squeeze(classes).astype(np.int32),
                np.squeeze(scores),
                category_index,
                use_normalized_coordinates=True,
                line_thickness=6)
            end = time.clock()
            print('frame fps:', 1.0 / (end - start))
            #print 'frame:', time.time() - start
            cv2.imshow("capture", image_np)
            cv2.waitKey(1)

cap.release()
cv2.destroyAllWindows()
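Besides drawing the boxes, the raw outputs of sess.run can be used directly, for example to list the objects that clear a confidence threshold in the current frame. A minimal sketch meant to run inside the loop right after the sess.run call; it reuses the boxes/scores/classes arrays and category_index from above, and the 0.5 threshold is an arbitrary choice, not something from the original post:

# Keep only detections scoring above 0.5 (arbitrary threshold) for the current frame.
final_boxes = np.squeeze(boxes)
final_scores = np.squeeze(scores)
final_classes = np.squeeze(classes).astype(np.int32)
keep = final_scores > 0.5
for cls, score, box in zip(final_classes[keep], final_scores[keep], final_boxes[keep]):
    name = category_index[cls]['name'] if cls in category_index else str(cls)
    # box is [ymin, xmin, ymax, xmax] in normalized coordinates.
    print(name, round(float(score), 2), box)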

- The speed feels acceptable.
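To put a rough but steadier number on that, the inference-only throughput can be averaged over many frames instead of timing a single frame with time.clock(). A sketch meant to run inside the same tf.Session block as the loop above; the tensors are re-fetched by name because the loop rebinds boxes/scores/etc. to numpy arrays, and the 100-frame count is an arbitrary choice:

# Average inference-only FPS over up to 100 frames (excludes drawing and imshow overhead).
fetch_names = ['detection_boxes:0', 'detection_scores:0', 'detection_classes:0', 'num_detections:0']
fetches = [detection_graph.get_tensor_by_name(n) for n in fetch_names]
input_tensor = detection_graph.get_tensor_by_name('image_tensor:0')

frames, elapsed = 0, 0.0
for _ in range(100):
    ret, frame = cap.read()
    if not ret:
        break
    t0 = time.time()
    sess.run(fetches, feed_dict={input_tensor: np.expand_dims(frame, axis=0)})
    elapsed += time.time() - t0
    frames += 1
if frames:
    print('average inference fps:', frames / elapsed)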
