Preface

The FDDB face detection benchmark documents the database and how to use it in detail. But once a model has been trained, how do we evaluate it? That is the topic of this post. To be honest, I consulted many blog posts and none of them made the process entirely clear to me (this post will have its flaws too), so I am recording it here.

Test Environment

1. Install Perl;

2. Install Gnuplot;

Steps

1. Run the trained model over the benchmark images and write out the face detection results in the required format, i.e. as out-fold-**.txt files and results.txt;

The detection results must be formatted as follows:

...
<image name i>
<number of faces in this image =im>
<face i1>
<face i2>
...
<face im>
...

shapes format:

   a. Rectangular regions
      Each face region is represented as:
      <left_x top_y width height detection_score>

   b. Elliptical regions
      Each face region is represented as:
      <major_axis_radius minor_axis_radius angle center_x center_y detection_score>
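For instance, one entry of an out-fold-**.txt file in the rectangular format could look like the following (the image name and all numbers here are made-up placeholders, shown purely to illustrate the layout):

2002/08/11/big/img_591
2
78 39 61 61 18.42
190 82 57 57 12.07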

The detection_score field is required here. How to obtain it is a good question: it can be taken from OpenCV's built-in detection function, or obtained some other way (see the fddb_faq excerpt below);

cascade.detectMultiScale(img, objs, reject_levels, level_weights, scale_factor, min_neighbors, 0, cv::Size(), cv::Size(), true);

fddb_faq:

Q. How do you compute the detection score that is to be included in the face detection output file?
The score included in the output file of a face detection system should be obtained by the system itself. The evaluation code included in the benchmark varies the threshold on this score to obtain different points on the desired ROC curve.

Q. What range should the detection score be?
The range of scores can lie anywhere on the real line (-infinity to infinity). In other words, the scores are used to order the detections, and their absolute values do not matter.
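As a condensed illustration of where such a score can come from with OpenCV's cascade detector, here is a minimal standalone sketch. It assumes an OpenCV 2.4-style API; the cascade file name, image name, and parameter values are placeholders, not the settings used in the full program later in this post.

// Minimal sketch: get a per-detection score from OpenCV's cascade detector.
// Assumptions: OpenCV 2.4-style API; model and image file names are placeholders.
#include "opencv2/objdetect/objdetect.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <iostream>
#include <vector>

int main()
{
    cv::CascadeClassifier cascade;
    if (!cascade.load("haarcascade_frontalface_alt2.xml")) return 1;   // placeholder model
    cv::Mat gray = cv::imread("test.jpg", CV_LOAD_IMAGE_GRAYSCALE);    // placeholder image
    if (gray.empty()) return 1;

    std::vector<cv::Rect> faces;
    std::vector<int> reject_levels;
    std::vector<double> level_weights;
    // With outputRejectLevels=true the detector also reports, for each returned
    // window, the stage it reached and the stage weight; that weight can serve
    // as the detection_score in the FDDB sense (only its ordering matters).
    cascade.detectMultiScale(gray, faces, reject_levels, level_weights,
                             1.2, 3, 0, cv::Size(), cv::Size(), true);
    for (size_t i = 0; i < faces.size(); ++i)
        std::cout << faces[i].x << " " << faces[i].y << " "
                  << faces[i].width << " " << faces[i].height << " "
                  << level_weights[i] << std::endl;
    return 0;
}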

This post obtains the score by combining OpenCV's built-in function with an IoU check, as in the program below.

//re:https://blog.csdn.net/xukaiwen_2016/article/details/52318476?locationNum=6
/************************************************************************
* File: genResult.cpp
* Coder: AMY
* Email:happyamyhope@163.com
* Date: 2018/10/15
* ChLog: score max.
* Re: http://haoxiang.org/2013/11/opencv-detectmultiscale-output-detection-score/
* Note: several numeric literals were lost when this post was archived;
*       values marked "assumed" below are reconstructions, not necessarily
*       the original author's settings.
************************************************************************/
#include "opencv2/objdetect/objdetect.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <cctype>
#include <iostream>
#include <fstream>
#include <iterator>
#include <stdio.h>

std::vector<cv::Rect> detectAndScore(cv::Mat& img, cv::CascadeClassifier& cascade, double scale, double* score);
std::vector<cv::Rect> detectAndScoreMax(cv::Mat& img, cv::CascadeClassifier& cascade, double scale, double* score);

std::string cascadeName = "..//src//haar_roboman_ff_alt2.xml";

//compute iou.
float compute_iou(cv::Rect boxA, cv::Rect boxB)
{
    int xA = std::max(boxA.x, boxB.x);
    int yA = std::max(boxA.y, boxB.y);
    int xB = std::min(boxA.x + boxA.width,  boxB.x + boxB.width);
    int yB = std::min(boxA.y + boxA.height, boxB.y + boxB.height);
    float inter_area = std::max(0, xB - xA + 1) * std::max(0, yB - yA + 1);
    float boxA_area = boxA.width * boxA.height;
    float boxB_area = boxB.width * boxB.height;
    float iou = inter_area / (boxA_area + boxB_area - inter_area);
    return iou;
}

int main(int argc, const char** argv)
{
    cv::Mat frame, frameCopy, image;
    std::string inputName;
    std::string dir;
    cv::CascadeClassifier cascade;
    double scale = 1;                          // assumed: no down-scaling
    cascade.load(cascadeName);
    std::ofstream out_all_txt("result_all.txt");
    for (unsigned int file_num = 1; file_num <= 10; file_num++)   // FDDB folds 01..10
    {
        std::string str = std::to_string(file_num);
        if (str.size() < 2) str = "0" + str;   // zero-pad the fold number to two digits
        std::string out_name = "out-fold-" + str + ".txt";
        std::cout << "start file " << out_name << std::endl;
        std::ifstream in_txt("..//FDDB-folds//FDDB-fold-" + str + ".txt");
        std::ofstream out_txt(out_name);
        std::string dir1 = "..//FDDB-originalPics//";
        while (!in_txt.eof())
        {
            getline(in_txt, inputName);
            if (in_txt.eof()) break;
            dir = dir1 + inputName;
            dir += ".jpg";
            //dir = "..//FDDB-originalPics////////big//img_674.jpg";
            //std::cout << dir << std::endl;
            image = cv::imread(dir, CV_LOAD_IMAGE_COLOR);
            if (!image.empty())
            {
                std::ofstream out_txt(out_name, std::ios::app);
                double scoreBuffer[100];       // assumed buffer size; must hold the per-face scores of one image
                std::vector<cv::Rect> faces = detectAndScoreMax(image, cascade, scale, scoreBuffer);
                out_txt << inputName << std::endl << faces.size() << std::endl;
                out_all_txt << inputName << std::endl << faces.size() << std::endl;
                //std::cout << faces.size() << std::endl;
                for (unsigned int i = 0; i < faces.size(); i++)
                {
                    // drawing color/thickness are assumed values
                    cv::rectangle(image, faces[i], cv::Scalar(0, 255, 0), 2, 8, 0);
                    out_txt << faces[i].x << " " << faces[i].y << " " << faces[i].width
                        << " " << faces[i].height << " " << scoreBuffer[i] << std::endl;
                    out_all_txt << faces[i].x << " " << faces[i].y << " " << faces[i].width
                        << " " << faces[i].height << " " << scoreBuffer[i] << std::endl;
                }
                faces.clear();
            }
            //cv::imshow("src", image);
            //cv::waitKey(100);
        }
        //if (in_txt.eof()) std::cout << "[EOF reached]" << std::endl;
        //else std::cout << "[EOF reading]" << std::endl;
        in_txt.close();
        out_txt.close();
    }
    out_all_txt.close();
    cv::waitKey(0);
    return 0;
}

std::vector<cv::Rect> detectAndScoreMax(cv::Mat& color, cv::CascadeClassifier& cascade, double scale, double* scoreBuffer)
{
    cv::Mat gray;
    cv::Mat img(cvRound(color.rows / scale), cvRound(color.cols / scale), CV_8UC1);
    cv::cvtColor(color, gray, CV_BGR2GRAY);
    cv::resize(gray, img, img.size(), 0, 0, CV_INTER_LINEAR);
    cv::equalizeHist(img, img);
    const float scale_factor(1.2f);
    const int min_neighbors(3);                // assumed; the original value was lost
    std::vector<cv::Rect> faces;
    std::vector<int> reject_levels;
    std::vector<double> level_weights;
    cascade.detectMultiScale(img, faces, reject_levels, level_weights, scale_factor, min_neighbors, 0, cv::Size(), cv::Size(), true);
    //std::cout << "faces.size(): " << faces.size() << "---level_weights.size(): " << level_weights.size() << std::endl;
    for (unsigned int n = 0; n < faces.size(); n++)
    {
        scoreBuffer[n] = level_weights[n];
        //std::cout << "level_weight: " << level_weights[n] << std::endl;
    }
    return faces;
}

std::vector<cv::Rect> detectAndScore(cv::Mat& color, cv::CascadeClassifier& cascade, double scale, double* scoreBuffer)
{
    cv::Mat gray;
    cv::Mat img(cvRound(color.rows / scale), cvRound(color.cols / scale), CV_8UC1);
    cv::cvtColor(color, gray, CV_BGR2GRAY);
    cv::resize(gray, img, img.size(), 0, 0, CV_INTER_LINEAR);
    cv::equalizeHist(img, img);
    const float scale_factor(1.2f);
    const int min_neighbors(3);                // assumed; the original value was lost
    //long t0 = cv::getTickCount();
    std::vector<cv::Rect> faces;
    cascade.detectMultiScale(img, faces, scale_factor, min_neighbors, 0, cv::Size(), cv::Size());
    //long t1 = cv::getTickCount();
    //double secs = (t1 - t0)/cv::getTickFrequency();
    //std::cout << "Detections takes " << secs << " seconds " << std::endl;
    std::vector<cv::Rect> objs;
    std::vector<int> reject_levels;
    std::vector<double> level_weights;
    cascade.detectMultiScale(img, objs, reject_levels, level_weights, scale_factor, min_neighbors, 0, cv::Size(), cv::Size(), true);
    //std::cout << "faces.size(): " << faces.size() << "---objs.size(): " << objs.size() << std::endl;
    for (unsigned int n = 0; n < faces.size(); n++)
    {
        int iou_max_idx = 0;
        float iou_max = 0.0;
        for (unsigned int k = 0; k < objs.size(); k++)
        {
            float iou = compute_iou(faces[n], objs[k]);
            // the reject-level threshold used by the author was lost; >= 0 keeps every candidate
            if ( (iou > 0.5) && (reject_levels[k] >= 0) && (iou > iou_max) )
            {
                iou_max = iou;
                iou_max_idx = k;
                //std::cout << "iou: " << iou << "---reject_levels[k]: " << reject_levels[k] << std::endl;
            }
        }
        // guard against the case where the score-producing pass returned no candidates
        scoreBuffer[n] = level_weights.empty() ? 0.0 : level_weights[iou_max_idx];
    }
    return faces;
}

2. Prepare the image database, its ground-truth files (ellipseList.txt, imList.txt) and the corresponding detection output file (results.txt). In the evaluation code downloaded from FDDB, adjust evaluate.cpp and the driver script runEvaluate.pl accordingly, then run it to obtain the detector's performance;

evaluate.cpp

#ifdef _WIN32
string baseDir = "..//..//FDDB-originalPics//";
string listFile = "..//..//imList.txt";
string detFile = "..//..//results.txt";
string annotFile = "..//..//ellipseList.txt";
#else
string baseDir = "..//FDDB-originalPics//";
string listFile = "..//imList.txt";
string detFile = "..//results.txt";
string annotFile = "..//ellipseList.txt";
#endif

runEvaluate.pl

#!/usr/bin/perl -w

use strict;

#### VARIABLES TO EDIT ####
# where gnuplot is
my $GNUPLOT = "/usr/bin/gnuplot";
# where the binary is
my $evaluateBin = "./evaluate";
# where the images are
my $imDir = "../FDDB-originalPics/";
# where the folds are
my $fddbDir = "../FDDB-folds/";
# where the detections are
my $detDir = "../out-folds/";
########################### my $detFormat = ; # 0: rectangle, 1: ellipse 2: pixels sub makeGNUplotFile
{
my $rocFile = shift;
my $gnuplotFile = shift;
my $title = shift;
my $pngFile = shift; open(GF, ">$gnuplotFile") or die "Can not open $gnuplotFile for writing\n";
#print GF "$GNUPLOT\n";
print GF "set term png\n";
print GF "set size .75,1\n";
print GF "set output \"$pngFile\"\n";
#print GF "set xtics 500\n";
#print GF "set logscale x\n";
print GF "set ytics .1\n";
print GF "set grid\n";
#print GF "set size ratio -1\n";
print GF "set ylabel \"True positive rate\"\n";
print GF "set xlabel \"False positives\"\n";
#print GF "set xr [0:50000]\n";
print GF "set yr [0:1]\n";
print GF "set key right bottom\n";
print GF "plot \"$rocFile\" using 2:1 with linespoints title \"$title\"\n";
close(GF);
} my $annotFile = "ellipseList.txt";
my $listFile = "imList.txt";
my $gpFile = "createROC.p"; # read all the folds into a single file for evaluation
my $detFile = $detDir;
$detFile =~ s/\//_/g;
$detFile = $detFile."Dets.txt"; if(-e $detFile){
system("rm", $detFile);
} if(-e $listFile){
system("rm", $listFile);
} if(-e $annotFile){
system("rm", $annotFile);
} foreach my $fi (..){
my $foldFile = sprintf("%s/out-fold-%02d.txt", $detDir, $fi);
system("cat $foldFile >> $detFile");
$foldFile = sprintf("%s/FDDB-fold-%02d.txt", $fddbDir, $fi);
system("cat $foldFile >> $listFile");
$foldFile = sprintf("%s/FDDB-fold-%02d-ellipseList.txt", $fddbDir, $fi);
system("cat $foldFile >> $annotFile");
} #die;
# run the actual evaluation code to obtain different points on the ROC curves
#system($evaluateBin, "-a", $annotFile, "-d", $detFile, "-f", $detFormat, "-i", $imDir, "-l", $listFile, "-r", $detDir, "-s");
#system($evaluateBin, "-a", $annotFile, "-d", $detFile, "-f", $detFormat, "-i", $imDir, "-l", $listFile, "-r", $detDir);
system($evaluateBin, "-a", $annotFile, "-d", $detFile, "-f", $detFormat, "-i", $imDir, "-l", $listFile, "-r", $detDir, "-z", ".jpg"); # plot the two ROC curves using GNUplot
makeGNUplotFile($detDir."ContROC.txt", $gpFile, $detDir, $detDir."ContROC.png");
system("echo \"load '$gpFile'\" | $GNUPLOT"); makeGNUplotFile($detDir."DiscROC.txt", $gpFile, $detDir, $detDir."DiscROC.png");
system("echo \"load '$gpFile'\" | $GNUPLOT"); # remove intermediate files
system("rm", $annotFile, $listFile, $gpFile, $detFile);

Make sure the relative paths between the various directories are correct.

Files produced: ContROC.txt and DiscROC.txt, plus ContROC.png and DiscROC.png. In the two ROC text files, column 1 is the true positive rate and column 2 is the number of false positives, which is why the gnuplot commands above plot them with "using 2:1";

3. Compare against the results of other detection algorithms;

Place the two generated *.txt files in the compareROC directory, then add one corresponding line to ContROC.p and to DiscROC.p (or to ContROC_unpub.p and DiscROC_unpub.p), keeping the same format as the existing entries, and run them. A template line is shown below; the "using 2:1" column spec matches the ROC file layout, while the line width after lw is your choice:

#"rocCurves/filename_DiscROC.txt" using : title 'filename' with lines lw  , \

Run

gnuplot contROC.p
or
gnuplot discROC.p

to generate the comparison plot of the detection results of the different algorithms;

Issues

Q1:

Incompatible annotation and detection files. See output specifications

Note that copying the txt files generated on Windows directly to Ubuntu 16 makes the evaluation fail with "Incompatible annotation and detection files. See output specifications". This is caused by the different line endings used on Windows (CRLF) and Linux (LF). Creating a new txt file under Ubuntu and pasting the contents into it solves it (converting the file with dos2unix also works). It could also be that the code generating the txt file has some small problems, which is worth double-checking.

Q2:

Why does the same evaluation program sometimes produce a normal curve in ContROC.txt and DiscROC.txt (and the corresponding ROC images), and sometimes only a straight line? If anyone reading this knows the answer, please share it.

Notes

1. When writing the program that generates the detector's result files, pay attention to the file format; refer to FDDB;

2. Download the evaluation program from the FDDB website;

3. Pay attention to the directory structure expected by the evaluation program;

4. Adapt the contents of the *.p and *.pl files to your own setup;

To summarize the evaluation steps: prepare the detector; output its detections to txt files in the required format; modify the evaluation program and run it to generate the detector's ROC plots; modify the compareROC scripts and run them to compare the ROC curves of several detectors.

References

1. FDDB evaluation

2. FDDB evaluation on Windows

3. Evaluation methods for face detection

4. fddb-eval

5. github-fddb-evaluate

6. ubuntu-fddb-evaluate

7. windows-fddb-evaluate-good

8. stackoverflow

9. windows - 努力奔跑的小白 blog

10. fddb
