OpenCV 3.3 
Aug 3, 2017

OpenCV 3.3 has been released with greatly improved Deep Learning module and lots of optimizations.

Adrian Rosebrock: http://www.pyimagesearch.com/author/adrian/ [nice]

Ref: Real-time object detection with deep learning and OpenCV

Running the deep-learning detector on every frame is expensive, so tracking is essential for real-time use.

Multi-threading is another consideration; see: Efficient, threaded video streams with OpenCV
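
For reference, the core of the threaded-streaming idea is simply to grab frames on a background thread so the detection loop never blocks on camera I/O. A minimal sketch (the class name and camera index are illustrative; imutils.video.VideoStream provides a ready-made implementation of the same pattern):

import threading
import cv2

class ThreadedStream:
    def __init__(self, src=0):
        # open the camera / video source and read the first frame
        self.cap = cv2.VideoCapture(src)
        self.grabbed, self.frame = self.cap.read()
        self.stopped = False

    def start(self):
        # grab frames on a daemon thread so the caller never blocks on I/O
        threading.Thread(target=self._update, daemon=True).start()
        return self

    def _update(self):
        while not self.stopped:
            self.grabbed, self.frame = self.cap.read()
        self.cap.release()

    def read(self):
        # return the most recently grabbed frame
        return self.frame

    def stop(self):
        self.stopped = True

The detection loop then calls read() on the stream each iteration instead of waiting on VideoCapture.read().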

 
 

The following networks have been tested and known to work:

    • AlexNet
    • GoogLeNet v1 (also referred to as Inception-5h)
    • ResNet-34/50/...
    • SqueezeNet v1.1
    • VGG-based FCN (semantic segmentation network)
    • ENet (lightweight semantic segmentation network)
    • VGG-based SSD (object detection network)
    • MobileNet-based SSD (lightweight object detection network)

Below are some of the functions we will use.

Loading images from disk into the dnn module:

    • cv2.dnn.blobFromImage
    • cv2.dnn.blobFromImages

Importing models directly from various frameworks via the "create" methods:

    • cv2.dnn.createCaffeImporter
    • cv2.dnn.createTensorFlowImporter
    • cv2.dnn.createTorchImporter

Loading serialized models directly from disk via the "read" methods:

    • cv2.dnn.readNetFromCaffe
    • cv2.dnn.readNetFromTensorFlow
    • cv2.dnn.readNetFromTorch
    • cv2.dnn.readTorchBlob

After the model has been loaded from disk, we can use the .forward method to forward-propagate our image through the network and obtain the classification results.
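
Put together, a minimal sketch of that load / blob / forward flow looks like the following (the file names and the scale/mean values here are placeholders; the actual values for MobileNet SSD appear in the full script further down):

import cv2

# load a serialized Caffe model from disk (placeholder file names)
net = cv2.dnn.readNetFromCaffe("deploy.prototxt", "model.caffemodel")

# turn a BGR image into a 4D NCHW blob: resize, scale and mean-subtract
image = cv2.imread("input.jpg")
blob = cv2.dnn.blobFromImage(image, 1.0, (300, 300), (104.0, 117.0, 123.0))

# forward-propagate the blob and read back the network's output
net.setInput(blob)
output = net.forward()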

This looks like good stuff, so let's install it: Installing OpenCV 3.3.0 on Ubuntu 16.04 LTS

You may run into a conflict with the Python bundled in Anaconda3. Solve it as follows:

lolo@lolo-UX303UB$ mv /usr/bin/python3
python3     python3.-config    python3.4m-config   python3m
python3.    python3.4m         python3-config      python3m-config

Move these /usr/bin/python3* binaries out of the way so they do not shadow the Anaconda Python.
cmake -D CMAKE_BUILD_TYPE=RELEASE \
      -D CMAKE_INSTALL_PREFIX=/usr/local/anaconda3 \
      -D INSTALL_PYTHON_EXAMPLES=ON \
      -D INSTALL_C_EXAMPLES=OFF \
      -D OPENCV_EXTRA_MODULES_PATH=/home/unsw/Android/opencv-3.3./opencv_contrib-3.3./modules \
      -D PYTHON_EXECUTABLE=/usr/local/anaconda3/bin/python3. \
      -D BUILD_EXAMPLES=ON ..

Done :-)
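
After the usual make and sudo make install steps, a quick sanity check (not part of the original notes) confirms that the Anaconda Python picks up the new build and its dnn module:

import cv2
print(cv2.__version__)        # expected to report 3.3.0
print(hasattr(cv2, "dnn"))    # True if the dnn module was built in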

Installation references:

https://hackmd.io/s/S1gWq7BwW

http://www.linuxfromscratch.org/blfs/view/cvs/general/opencv.html

https://medium.com/@debugvn/installing-opencv-3-3-0-on-ubuntu-16-04-lts-7db376f93961


Now you have everything you need. Let's practice.

From: http://www.pyimagesearch.com/2017/09/11/object-detection-with-deep-learning-and-opencv/

In the first part of today’s post on object detection using deep learning we’ll discuss Single Shot Detectors and MobileNets.

SSD paper: http://lib.csdn.net/article/deeplearning/53059

SSD paper (original): https://arxiv.org/abs/1512.02325

When it comes to deep learning-based object detection, there are three primary object detection methods that you'll likely encounter: Faster R-CNNs, You Only Look Once (YOLO), and Single Shot Detectors (SSDs).

If we combine both the MobileNet architecture and the Single Shot Detector (SSD) framework, we arrive at a fast, efficient deep learning-based method for object detection.

The model we’ll be using in this blog post is a Caffe version of the original TensorFlow implementation by Howard et al. and was trained by chuanqi305 (see GitHub).

In this section we will use the MobileNet SSD + deep neural network (dnn) module in OpenCV to build our object detector.

Code analysis: 

# USAGE
# python deep_learning_object_detection.py --image images/example_01.jpg \
#     --prototxt MobileNetSSD_deploy.prototxt.txt --model MobileNetSSD_deploy.caffemodel

# import the necessary packages
import numpy as np
import argparse
import cv2

# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True,
    help="path to input image")
ap.add_argument("-p", "--prototxt", required=True,
    help="path to Caffe 'deploy' prototxt file")
ap.add_argument("-m", "--model", required=True,
    help="path to Caffe pre-trained model")
ap.add_argument("-c", "--confidence", type=float, default=0.2,
    help="minimum probability to filter weak detections")
args = vars(ap.parse_args())

# initialize the list of class labels MobileNet SSD was trained to
# detect, then generate a set of bounding box colors for each class
CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat",
    "bottle", "bus", "car", "cat", "chair", "cow", "diningtable",
    "dog", "horse", "motorbike", "person", "pottedplant", "sheep",
    "sofa", "train", "tvmonitor"]
COLORS = np.random.uniform(0, 255, size=(len(CLASSES), 3))

# load our serialized model from disk
print("[INFO] loading model...")
net = cv2.dnn.readNetFromCaffe(args["prototxt"], args["model"])

# load the input image and construct an input blob for the image
# by resizing to a fixed 300x300 pixels and then normalizing it
# (note: normalization is done via the authors of the MobileNet SSD
# implementation)
image = cv2.imread(args["image"])
(h, w) = image.shape[:2]
blob = cv2.dnn.blobFromImage(image, 0.007843, (300, 300), 127.5)  # --> NCHW

# pass the blob through the network and obtain the detections and
# predictions
print("[INFO] computing object detections...")
net.setInput(blob)
detections = net.forward()  # --> net.forward

# loop over the detections
for i in np.arange(0, detections.shape[2]):
    # extract the confidence (i.e., probability) associated with the
    # prediction
    confidence = detections[0, 0, i, 2]

    # filter out weak detections by ensuring the `confidence` is
    # greater than the minimum confidence
    if confidence > args["confidence"]:
        # extract the index of the class label from the `detections`,
        # then compute the (x, y)-coordinates of the bounding box for
        # the object
        idx = int(detections[0, 0, i, 1])
        box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
        (startX, startY, endX, endY) = box.astype("int")

        # display the prediction
        label = "{}: {:.2f}%".format(CLASSES[idx], confidence * 100)
        print("[INFO] {}".format(label))
        cv2.rectangle(image, (startX, startY), (endX, endY),
            COLORS[idx], 2)
        y = startY - 15 if startY - 15 > 15 else startY + 15
        cv2.putText(image, label, (startX, y),
            cv2.FONT_HERSHEY_SIMPLEX, 0.5, COLORS[idx], 2)

# show the output image
cv2.imshow("Output", image)
cv2.waitKey(0)

NCHW

There is a comment that explains this in a different source file, CNTK's ConvolutionalNodes.h, quoted below.

Note that the NVIDIA abbreviations refer to row-major layout, so to map them to the column-major tensor indices used by CNTK, you need to reverse their order. E.g. cuDNN stores images in "NCHW", which is a [W x H x C x N] tensor in CNTK notation (W being the fastest-changing dimension; there are N objects of dimension [W x H x C] concatenated).

Note that the "legacy" (non-cuDNN) memory layout is old code written before NCHW became the standard, so the old representation will likely be phased out eventually.
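
In this script's terms, blobFromImage is what produces the NCHW tensor: it takes the HWC image that cv2.imread returns and hands back a 4D array ordered N x C x H x W. A small illustrative check (not part of the original code):

import numpy as np
import cv2

image = np.zeros((480, 640, 3), dtype=np.uint8)                   # HWC, like cv2.imread output
blob = cv2.dnn.blobFromImage(image, 0.007843, (300, 300), 127.5)
print(blob.shape)                                                 # (1, 3, 300, 300) -> N, C, H, W

# to get an HWC image back (e.g. for display), reverse the reordering
hwc = blob[0].transpose(1, 2, 0)                                  # H x W x C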

net.forward

[INFO] loading model...
[INFO] computing object detections...
(1, 1, 2, 7)
[[[[ 0.          12.          0.95878285  0.49966827  0.6235761   0.69597626  0.87614471]
   [ 0.          15.          0.99952459  0.04266162  0.20033446  0.45632178  0.84977102]]]]
[INFO] dog: 95.88%
[INFO] person: 99.95%
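
To connect the printed array to the parsing loop above: detections has shape (1, 1, N, 7), and each row holds [batch_id, class_id, confidence, startX, startY, endX, endY], with the box corners given as fractions of the image size (which is why the script multiplies by [w, h, w, h]). Decoding the first printed row by hand:

import numpy as np

# the two rows printed above, shape (1, 1, 2, 7)
detections = np.array([[[[0., 12., 0.95878285, 0.49966827, 0.6235761, 0.69597626, 0.87614471],
                         [0., 15., 0.99952459, 0.04266162, 0.20033446, 0.45632178, 0.84977102]]]])

row = detections[0, 0, 0]
idx = int(row[1])            # 12 -> CLASSES[12] == "dog"
confidence = row[2]          # 0.9588, printed as "dog: 95.88%"
# row[3:7] holds the relative (startX, startY, endX, endY) box corners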
