I recently helped someone with a project that mainly uses the camera to do face recognition.

GitHub repo: https://github.com/qugang/AVCaptureVideoTemplate

To use the iOS camera you need the AVFoundation framework; I won't go through everything in it here.

Starting the camera is done with the AVCaptureSession class.

Then, to receive each frame of data the camera delivers, you implement the AVCaptureVideoDataOutputSampleBufferDelegate protocol.

First, in viewDidLoad, add the code that looks for the camera device; once it is found, start the camera:

captureSession.sessionPreset = AVCaptureSessionPresetLow
let devices = AVCaptureDevice.devices()
for device in devices {
  // Look for a video device, specifically the front-facing camera
  if (device.hasMediaType(AVMediaTypeVideo)) {
    if (device.position == AVCaptureDevicePosition.Front) {
      captureDevice = device as? AVCaptureDevice
      if captureDevice != nil {
        println("Capture Device found")
        beginSession()
      }
    }
  }
}

beginSession starts the camera:

func beginSession() {
  var err : NSError? = nil
  captureSession.addInput(AVCaptureDeviceInput(device: captureDevice, error: &err))
  // Deliver 32BGRA frames to the delegate on a serial background queue
  let output = AVCaptureVideoDataOutput()
  let cameraQueue = dispatch_queue_create("cameraQueue", DISPATCH_QUEUE_SERIAL)
  output.setSampleBufferDelegate(self, queue: cameraQueue)
  output.videoSettings = [kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_32BGRA]
  captureSession.addOutput(output)
  if err != nil {
    println("error: \(err?.localizedDescription)")
  }
  // Show the live camera feed full screen behind the UI
  previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
  previewLayer?.videoGravity = AVLayerVideoGravityResizeAspect
  previewLayer?.frame = self.view.bounds
  self.view.layer.addSublayer(previewLayer!)
  captureSession.startRunning()
}

Once the session is running, implement the captureOutput delegate method:

func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
  if(self.isStart)
  {
    // Convert the frame to a UIImage, then run the Core Image face detector on it
    let resultImage = sampleBufferToImage(sampleBuffer)
    let context = CIContext(options: [kCIContextUseSoftwareRenderer: true])
    let detector = CIDetector(ofType: CIDetectorTypeFace, context: context, options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    let ciImage = CIImage(image: resultImage)
    let results: NSArray = detector.featuresInImage(ciImage, options: [CIDetectorImageOrientation: 6])
    for r in results {
      let face: CIFaceFeature = r as! CIFaceFeature
      // Crop the detected face rectangle out of the frame
      let faceImage = UIImage(CGImage: context.createCGImage(ciImage, fromRect: face.bounds), scale: 1.0, orientation: .Right)
      NSLog("Face found at (%f,%f) of dimensions %fx%f", face.bounds.origin.x, face.bounds.origin.y, face.bounds.size.width, face.bounds.size.height)
      dispatch_async(dispatch_get_main_queue()) {
        if (self.isStart)
        {
          // Dismiss the controller, stop the session (via the overridden
          // didReceiveMemoryWarning below), and hand the face image to the caller
          self.dismissViewControllerAnimated(true, completion: nil)
          self.didReceiveMemoryWarning()
          self.callBack!(face: faceImage!)
        }
        self.isStart = false
      }
    }
  }
}

A CIDetector runs on each frame to find faces. CIDetector can also report eye blinks and smiles; for the details, check the official API documentation.
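For example, blink and smile detection is requested through the CIDetectorEyeBlink and CIDetectorSmile option keys and read back from CIFaceFeature properties. A minimal sketch, reusing the ciImage and context names from the snippet above:

let detector = CIDetector(ofType: CIDetectorTypeFace, context: context, options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
// Ask the detector to also evaluate eye blinks and smiles for each face
let features = detector.featuresInImage(ciImage, options: [
  CIDetectorImageOrientation: 6,
  CIDetectorEyeBlink: true,
  CIDetectorSmile: true
])
for f in features {
  if let face = f as? CIFaceFeature {
    println("smiling: \(face.hasSmile), eyes closed: \(face.leftEyeClosed)/\(face.rightEyeClosed)")
  }
}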

That is the key code. A 2-second delay is built in; face detection only starts after those 2 seconds.
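Concretely, the delay is just a one-shot NSTimer scheduled in viewDidLoad (see the full listing below) that flips an isStart flag; captureOutput skips every frame until the flag is set:

NSTimer.scheduledTimerWithTimeInterval(2, target: self, selector: "isStartTrue", userInfo: nil, repeats: false)

func isStartTrue() {
  self.isStart = true // from this point on, captureOutput runs face detection
}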

The full code:

//
//  ViewController.swift
//  AVSessionTest
//
//  Created by qugang on 15/7/8.
//  Copyright (c) 2015 qugang. All rights reserved.
//
 
import UIKit
import AVFoundation
class AVCaptireVideoPicController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {
  var callBack :((face: UIImage) ->())?
  let captureSession = AVCaptureSession()
  var captureDevice : AVCaptureDevice?
  var previewLayer : AVCaptureVideoPreviewLayer?
  var pickUIImager : UIImageView = UIImageView(image: UIImage(named: "pick_bg"))
  var line : UIImageView = UIImageView(image: UIImage(named: "line"))
  var timer : NSTimer!
  var upOrdown = true
  var isStart = false
  override func viewDidLoad() {
    super.viewDidLoad()
    captureSession.sessionPreset = AVCaptureSessionPresetLow
    let devices = AVCaptureDevice.devices()
    for device in devices {
      if (device.hasMediaType(AVMediaTypeVideo)) {
        if (device.position == AVCaptureDevicePosition.Front) {
          captureDevice = device as? AVCaptureDevice
          if captureDevice != nil {
            println("Capture Device found")
            beginSession()
          }
        }
      }
    }
    pickUIImager.frame = CGRect(x: self.view.bounds.width / 2 - 100, y: self.view.bounds.height / 2 - 100,width: 200,height: 200)
    line.frame = CGRect(x: self.view.bounds.width / 2 - 100, y: self.view.bounds.height / 2 - 100, width: 200, height: 2)
    self.view.addSubview(pickUIImager)
    self.view.addSubview(line)
    timer =  NSTimer.scheduledTimerWithTimeInterval(0.01, target: self, selector: "animationSate", userInfo: nil, repeats: true)
     
    NSTimer.scheduledTimerWithTimeInterval(2, target: self, selector: "isStartTrue", userInfo: nil, repeats: false)
  }
  func isStartTrue(){
    self.isStart = true
  }
  override func didReceiveMemoryWarning(){
    super.didReceiveMemoryWarning()
    captureSession.stopRunning()
  }
   
  func animationSate(){
    if upOrdown {
      if (line.frame.origin.y >= pickUIImager.frame.origin.y + 200)
      {
        upOrdown = false
      }
      else
      {
        line.frame.origin.y += 2
      }
    } else {
      if (line.frame.origin.y <= pickUIImager.frame.origin.y)
      {
        upOrdown = true
      }
      else
      {
        line.frame.origin.y -= 2
      }
    }
  }
  func beginSession() {
    var err : NSError? = nil
    captureSession.addInput(AVCaptureDeviceInput(device: captureDevice, error: &err))
    let output = AVCaptureVideoDataOutput()
    let cameraQueue = dispatch_queue_create("cameraQueue", DISPATCH_QUEUE_SERIAL)
    output.setSampleBufferDelegate(self, queue: cameraQueue)
    output.videoSettings = [kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_32BGRA]
    captureSession.addOutput(output)
    if err != nil {
      println("error: \(err?.localizedDescription)")
    }
    previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    previewLayer?.videoGravity = AVLayerVideoGravityResizeAspect
    previewLayer?.frame = self.view.bounds
    self.view.layer.addSublayer(previewLayer!)
    captureSession.startRunning()
  }
  func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
    if(self.isStart)
    {
      let resultImage = sampleBufferToImage(sampleBuffer)
      let context = CIContext(options:[kCIContextUseSoftwareRenderer:true])
      let detector = CIDetector(ofType: CIDetectorTypeFace, context: context, options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
      let ciImage = CIImage(image: resultImage)
      let results: NSArray = detector.featuresInImage(ciImage, options: [CIDetectorImageOrientation: 6])
      for r in results {
        let face: CIFaceFeature = r as! CIFaceFeature
        let faceImage = UIImage(CGImage: context.createCGImage(ciImage, fromRect: face.bounds), scale: 1.0, orientation: .Right)
        NSLog("Face found at (%f,%f) of dimensions %fx%f", face.bounds.origin.x, face.bounds.origin.y, face.bounds.size.width, face.bounds.size.height)
        dispatch_async(dispatch_get_main_queue()) {
          if (self.isStart)
          {
            self.dismissViewControllerAnimated(true, completion: nil)
            self.didReceiveMemoryWarning()
            self.callBack!(face: faceImage!)
          }
          self.isStart = false
        }
      }
    }
  }
  private func sampleBufferToImage(sampleBuffer: CMSampleBuffer!) -> UIImage {
    // Wrap the BGRA pixel buffer in a bitmap context and turn it into a UIImage
    let imageBuffer: CVImageBufferRef = CMSampleBufferGetImageBuffer(sampleBuffer)
    CVPixelBufferLockBaseAddress(imageBuffer, 0)
    let baseAddress = CVPixelBufferGetBaseAddress(imageBuffer)
    let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer)
    let width = CVPixelBufferGetWidth(imageBuffer)
    let height = CVPixelBufferGetHeight(imageBuffer)
    let colorSpace: CGColorSpaceRef = CGColorSpaceCreateDeviceRGB()
    let bitsPerComponent = 8
    let bitmapInfo = CGBitmapInfo((CGBitmapInfo.ByteOrder32Little.rawValue | CGImageAlphaInfo.PremultipliedFirst.rawValue) as UInt32)
    let newContext = CGBitmapContextCreate(baseAddress, width, height, bitsPerComponent, bytesPerRow, colorSpace, bitmapInfo) as CGContextRef
    let imageRef: CGImageRef = CGBitmapContextCreateImage(newContext)
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0) // balance the Lock call above
    let resultImage = UIImage(CGImage: imageRef, scale: 1.0, orientation: UIImageOrientation.Right)!
    return resultImage
  }
  func imageResize(imageObj: UIImage, sizeChange: CGSize) -> UIImage {
    let hasAlpha = false
    let scale: CGFloat = 0.0 // 0.0 means use the scale of the device's main screen

    UIGraphicsBeginImageContextWithOptions(sizeChange, !hasAlpha, scale)
    imageObj.drawInRect(CGRect(origin: CGPointZero, size: sizeChange))
    let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext() // balance the Begin call so the context is not leaked
    return scaledImage
  }
}
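
For reference, a minimal usage sketch (my own, not from the repo): present the controller modally and receive the cropped face through the callBack closure.

let picker = AVCaptireVideoPicController()
picker.callBack = { (face: UIImage) in
  // The controller dismisses itself before invoking the callback,
  // so all that is left to do here is consume the detected face image
  println("got face image: \(face.size)")
}
self.presentViewController(picker, animated: true, completion: nil)

Here self is assumed to be another view controller from which the scan is triggered.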
