I recently helped someone with a project whose main job was face recognition through the camera.

GitHub repo: https://github.com/qugang/AVCaptureVideoTemplate

Using the camera on iOS means using the AVFoundation framework; I won't walk through everything in it here.

Starting the camera requires the AVCaptureSession class.

To receive each frame the camera delivers, you implement the AVCaptureVideoDataOutputSampleBufferDelegate protocol.
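For orientation, here is the skeleton the snippets below hang off of (condensed from the full listing at the end of the post): a view controller that owns the session, the device, and the preview layer, and conforms to the delegate protocol.

import UIKit
import AVFoundation

class AVCaptireVideoPicController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {
  let captureSession = AVCaptureSession()
  var captureDevice: AVCaptureDevice?
  var previewLayer: AVCaptureVideoPreviewLayer?
  // ... full listing below
}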

First, in viewDidLoad, add the code that searches for the camera device; once the front camera is found, start the session:

captureSession.sessionPreset = AVCaptureSessionPresetLow
let devices = AVCaptureDevice.devices()
for device in devices {
  if (device.hasMediaType(AVMediaTypeVideo)) {
    if (device.position == AVCaptureDevicePosition.Front) {
      captureDevice = device as? AVCaptureDevice
      if captureDevice != nil {
        println("Capture Device found")
        beginSession()
      }
    }
  }
}
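If you only want video devices to begin with, the same lookup can be written a bit more tightly with devicesWithMediaType (a sketch on my part; the project itself loops over all devices as above):

// Sketch: same front-camera search, filtered at the source
for obj in AVCaptureDevice.devicesWithMediaType(AVMediaTypeVideo) {
  if let device = obj as? AVCaptureDevice {
    if device.position == .Front {
      captureDevice = device
      beginSession()
      break
    }
  }
}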

beginSession starts the camera:

func beginSession() {
  var err: NSError? = nil
  let input = AVCaptureDeviceInput(device: captureDevice, error: &err)
  if err != nil {
    println("error: \(err?.localizedDescription)")
  }
  captureSession.addInput(input)
  // Deliver 32BGRA frames to the delegate on a background serial queue
  let output = AVCaptureVideoDataOutput()
  let cameraQueue = dispatch_queue_create("cameraQueue", DISPATCH_QUEUE_SERIAL)
  output.setSampleBufferDelegate(self, queue: cameraQueue)
  output.videoSettings = [kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_32BGRA]
  captureSession.addOutput(output)
  // Full-screen live preview behind the overlay views
  previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
  previewLayer?.videoGravity = AVLayerVideoGravityResizeAspect
  previewLayer?.frame = self.view.bounds
  self.view.layer.addSublayer(previewLayer)
  captureSession.startRunning()
}
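One related knob worth knowing about (an addition of mine, not something the project sets): AVCaptureVideoDataOutput can drop frames that arrive while the delegate is still busy, which helps when a slow software CIDetector pass runs on every frame.

// Assumption, not in the original: drop frames that arrive while the
// delegate is still busy instead of queueing them behind face detection
output.alwaysDiscardsLateVideoFrames = true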

Once the session is running, implement the captureOutput delegate method:

func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
  if (self.isStart) {
    let resultImage = sampleBufferToImage(sampleBuffer)
    // Software renderer keeps the GPU out of the capture queue
    let context = CIContext(options: [kCIContextUseSoftwareRenderer: true])
    let detector = CIDetector(ofType: CIDetectorTypeFace, context: context, options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    let ciImage = CIImage(image: resultImage)
    // Orientation 6 matches portrait frames from the front camera
    let results: NSArray = detector.featuresInImage(ciImage, options: [CIDetectorImageOrientation: 6])
    for r in results {
      let face: CIFaceFeature = r as! CIFaceFeature
      // Crop the detected face region out of the frame
      let faceImage = UIImage(CGImage: context.createCGImage(ciImage, fromRect: face.bounds), scale: 1.0, orientation: .Right)
      NSLog("Face found at (%f,%f) of dimensions %fx%f", face.bounds.origin.x, face.bounds.origin.y, face.bounds.size.width, face.bounds.size.height)
      dispatch_async(dispatch_get_main_queue()) {
        if (self.isStart) {
          self.dismissViewControllerAnimated(true, completion: nil)
          // didReceiveMemoryWarning is overridden to stop the capture session
          self.didReceiveMemoryWarning()
          self.callBack!(face: faceImage!)
        }
        self.isStart = false
      }
    }
  }
}

CIDetector runs on each frame to pick out faces. It can also report eye blinks and smiling faces; see the official API documentation for the details.
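As a sketch of those extra options (assuming the same detector and ciImage as in captureOutput above; CIDetectorSmile and CIDetectorEyeBlink are the stock Core Image keys):

// Ask the detector to also classify smiles and eye blinks per face
let results = detector.featuresInImage(ciImage, options: [
  CIDetectorImageOrientation: 6,
  CIDetectorSmile: true,
  CIDetectorEyeBlink: true
])
for r in results {
  if let face = r as? CIFaceFeature {
    if face.hasSmile { println("smiling") }
    if face.leftEyeClosed && face.rightEyeClosed { println("eyes closed") }
  }
}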

That is all of the key code. A 2-second delay is set up, and face detection only starts once it has elapsed.
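The delay itself is just a one-shot NSTimer that flips the isStart flag checked by captureOutput (these lines appear in the full listing below):

NSTimer.scheduledTimerWithTimeInterval(2, target: self, selector: "isStartTrue", userInfo: nil, repeats: false)

func isStartTrue() {
  self.isStart = true
}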

The full code:

//
//  ViewController.swift
//  AVSessionTest
//
//  Created by qugang on 15/7/8.
//  Copyright (c) 2015 qugang. All rights reserved.
//
 
import UIKit
import AVFoundation
class AVCaptireVideoPicController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {
  var callBack :((face: UIImage) ->())?
  let captureSession = AVCaptureSession()
  var captureDevice : AVCaptureDevice?
  var previewLayer : AVCaptureVideoPreviewLayer?
  var pickUIImager : UIImageView = UIImageView(image: UIImage(named: "pick_bg"))
  var line : UIImageView = UIImageView(image: UIImage(named: "line"))
  var timer : NSTimer!
  var upOrdown = true
  var isStart = false
  override func viewDidLoad() {
    super.viewDidLoad()
    captureSession.sessionPreset = AVCaptureSessionPresetLow
    let devices = AVCaptureDevice.devices()
    for device in devices {
      if (device.hasMediaType(AVMediaTypeVideo)) {
        if (device.position == AVCaptureDevicePosition.Front) {
          captureDevice = device as? AVCaptureDevice
          if captureDevice != nil {
            println("Capture Device found")
            beginSession()
          }
        }
      }
    }
    pickUIImager.frame = CGRect(x: self.view.bounds.width / 2 - 100, y: self.view.bounds.height / 2 - 100,width: 200,height: 200)
    line.frame = CGRect(x: self.view.bounds.width / 2 - 100, y: self.view.bounds.height / 2 - 100, width: 200, height: 2)
    self.view.addSubview(pickUIImager)
    self.view.addSubview(line)
    // Drive the scanning-line animation at roughly 100 Hz
    timer = NSTimer.scheduledTimerWithTimeInterval(0.01, target: self, selector: "animationState", userInfo: nil, repeats: true)
     
    // Give the user 2 seconds before face detection starts
    NSTimer.scheduledTimerWithTimeInterval(2, target: self, selector: "isStartTrue", userInfo: nil, repeats: false)
  }
  func isStartTrue(){
    self.isStart = true
  }
  // Doubles as a "stop capture" hook: captureOutput calls this once a face is found
  override func didReceiveMemoryWarning() {
    super.didReceiveMemoryWarning()
    captureSession.stopRunning()
  }
   
  // Bounce the scanning line up and down inside the pick frame
  func animationState() {
    if upOrdown {
      if (line.frame.origin.y >= pickUIImager.frame.origin.y + 200)
      {
        upOrdown = false
      }
      else
      {
        line.frame.origin.y += 2
      }
    } else {
      if (line.frame.origin.y <= pickUIImager.frame.origin.y)
      {
        upOrdown = true
      }
      else
      {
        line.frame.origin.y -= 2
      }
    }
  }
  func beginSession() {
    var err: NSError? = nil
    let input = AVCaptureDeviceInput(device: captureDevice, error: &err)
    if err != nil {
      println("error: \(err?.localizedDescription)")
    }
    captureSession.addInput(input)
    // Deliver 32BGRA frames to the delegate on a background serial queue
    let output = AVCaptureVideoDataOutput()
    let cameraQueue = dispatch_queue_create("cameraQueue", DISPATCH_QUEUE_SERIAL)
    output.setSampleBufferDelegate(self, queue: cameraQueue)
    output.videoSettings = [kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_32BGRA]
    captureSession.addOutput(output)
    // Full-screen live preview behind the overlay views
    previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    previewLayer?.videoGravity = AVLayerVideoGravityResizeAspect
    previewLayer?.frame = self.view.bounds
    self.view.layer.addSublayer(previewLayer)
    captureSession.startRunning()
  }
  func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
    if (self.isStart) {
      let resultImage = sampleBufferToImage(sampleBuffer)
      // Software renderer keeps the GPU out of the capture queue
      let context = CIContext(options: [kCIContextUseSoftwareRenderer: true])
      let detector = CIDetector(ofType: CIDetectorTypeFace, context: context, options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
      let ciImage = CIImage(image: resultImage)
      // Orientation 6 matches portrait frames from the front camera
      let results: NSArray = detector.featuresInImage(ciImage, options: [CIDetectorImageOrientation: 6])
      for r in results {
        let face: CIFaceFeature = r as! CIFaceFeature
        // Crop the detected face region out of the frame
        let faceImage = UIImage(CGImage: context.createCGImage(ciImage, fromRect: face.bounds), scale: 1.0, orientation: .Right)
        NSLog("Face found at (%f,%f) of dimensions %fx%f", face.bounds.origin.x, face.bounds.origin.y, face.bounds.size.width, face.bounds.size.height)
        dispatch_async(dispatch_get_main_queue()) {
          if (self.isStart) {
            self.dismissViewControllerAnimated(true, completion: nil)
            // didReceiveMemoryWarning is overridden above to stop the session
            self.didReceiveMemoryWarning()
            self.callBack!(face: faceImage!)
          }
          self.isStart = false
        }
      }
    }
  }
  private func sampleBufferToImage(sampleBuffer: CMSampleBuffer!) -> UIImage {
    let imageBuffer: CVImageBufferRef = CMSampleBufferGetImageBuffer(sampleBuffer)
    CVPixelBufferLockBaseAddress(imageBuffer, 0)
    // 32BGRA is single-plane, so read the buffer's base address directly
    let baseAddress = CVPixelBufferGetBaseAddress(imageBuffer)
    let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer)
    let width = CVPixelBufferGetWidth(imageBuffer)
    let height = CVPixelBufferGetHeight(imageBuffer)
    let colorSpace: CGColorSpaceRef = CGColorSpaceCreateDeviceRGB()
    let bitsPerComponent = 8
    let bitmapInfo = CGBitmapInfo((CGBitmapInfo.ByteOrder32Little.rawValue | CGImageAlphaInfo.PremultipliedFirst.rawValue) as UInt32)
    let newContext = CGBitmapContextCreate(baseAddress, width, height, bitsPerComponent, bytesPerRow, colorSpace, bitmapInfo) as CGContextRef
    let imageRef: CGImageRef = CGBitmapContextCreateImage(newContext)
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0)
    // Rotate to match the portrait front-camera orientation
    let resultImage = UIImage(CGImage: imageRef, scale: 1.0, orientation: UIImageOrientation.Right)!
    return resultImage
  }
  func imageResize(imageObj: UIImage, sizeChange: CGSize) -> UIImage {
    let hasAlpha = false
    let scale: CGFloat = 0.0 // 0 means "use the device's screen scale"

    UIGraphicsBeginImageContextWithOptions(sizeChange, !hasAlpha, scale)
    imageObj.drawInRect(CGRect(origin: CGPointZero, size: sizeChange))
    let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
    // Balance the Begin call, or the context leaks
    UIGraphicsEndImageContext()
    return scaledImage
  }
}
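Finally, a minimal usage sketch (assumed on my part, not taken from the repo): present the controller and receive the cropped face through callBack.

// Sketch: present the capture controller and handle the detected face
let picker = AVCaptireVideoPicController()
picker.callBack = { (face: UIImage) in
  // The controller dismisses itself once a face is found
  println("got face \(face.size.width)x\(face.size.height)")
}
presentViewController(picker, animated: true, completion: nil)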
