This article describes how to detect and track faces in a locally recorded video using the Vision framework.

Problem Description

I am trying to detect faces in a locally recorded video using the Vision framework. Most of the samples provided detect faces in live camera video.

  • How can I use the Vision/CoreML framework to detect faces in a local video and place a rectangle over each detected face at runtime?

Recommended Answer

  • Wait for your videoItem to be ready to play
  • Add an output to it
  • Add a periodic time observer that fires on every frame
  • Grab the new pixel buffer and process it with Vision / CoreML as needed
  • If you use the Vision framework, use VNSequenceRequestHandler rather than VNImageRequestHandler (see the sketch right after this list)
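
For that last point, a minimal sketch of the sequence-handler pattern, assuming VNDetectFaceRectanglesRequest as the request type (any per-frame Vision request would work the same way):

    import Vision

    // A VNSequenceRequestHandler is created once and reused, so it can carry
    // state across the frames of one video; VNImageRequestHandler is meant
    // for a single standalone image instead.
    let sequenceHandler = VNSequenceRequestHandler()

    func detectFaces(in pixelBuffer: CVPixelBuffer) {
      let request = VNDetectFaceRectanglesRequest()
      try? sequenceHandler.perform([request], on: pixelBuffer)
      let faces = request.results as? [VNFaceObservation] ?? []
      print("Detected \(faces.count) face(s)")
    }

The full pipeline, wired into a view controller: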

    import UIKit
    import AVFoundation
    import CoreML
    import Vision
    
    class ViewController: UIViewController {
      var player: AVPlayer!
      var videoOutput: AVPlayerItemVideoOutput?
    
      override func viewDidLoad() {
        super.viewDidLoad()
    
        // localURL is assumed to point to the locally recorded video file.
        let player = AVPlayer(url: localURL)
        player.play()
    
        player.currentItem?.addObserver(
          self,
          forKeyPath: #keyPath(AVPlayerItem.status),
          options: [.initial, .old, .new],
          context: nil)
        player.addPeriodicTimeObserver(
          forInterval: CMTime(value: 1, timescale: 30), // roughly once per frame at 30 fps
          queue: DispatchQueue(label: "videoProcessing", qos: .background),
          using: { [weak self] _ in
            self?.doThingsWithFaces()
        })
        self.player = player
      }
    
      override func observeValue(forKeyPath keyPath: String?, of object: Any?, change: [NSKeyValueChangeKey : Any]?, context: UnsafeMutableRawPointer?) {
        guard keyPath == #keyPath(AVPlayerItem.status), let item = object as? AVPlayerItem else {
          // Forward observations we do not handle, as KVO requires.
          super.observeValue(forKeyPath: keyPath, of: object, change: change, context: context)
          return
        }

        if item.status == .readyToPlay {
          self.setUpOutput()
        }
      }
    
      func setUpOutput() {
        guard self.videoOutput == nil else { return }
        guard let videoItem = player.currentItem else { return }
        if videoItem.status != .readyToPlay {
          // see https://forums.developer.apple.com/thread/27589#128476
          return
        }
    
        let pixelBuffAttributes = [
          kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
          ] as [String: Any]
    
        let videoOutput = AVPlayerItemVideoOutput(pixelBufferAttributes: pixelBuffAttributes)
        videoItem.add(videoOutput)
        self.videoOutput = videoOutput
      }
    
      func getNewFrame() -> CVPixelBuffer? {
        guard let videoOutput = videoOutput, let currentItem = player.currentItem else { return nil }
    
        let time = currentItem.currentTime()
        if !videoOutput.hasNewPixelBuffer(forItemTime: time) { return nil }
        guard let buffer = videoOutput.copyPixelBuffer(forItemTime: time, itemTimeForDisplay: nil)
          else { return nil }
        return buffer
      }
    
      func doThingsWithFaces() {
        guard let buffer = getNewFrame() else { return }
        // Run Vision / CoreML on the buffer here; a hedged sketch of one
        // possible implementation follows after this class.
      }
    }
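
The Vision / CoreML step is left open above. Below is a hedged sketch of one way to fill in doThingsWithFaces() and also place a rectangle over each detected face, as the question asks; sequenceHandler, boxLayer, and the view-size mapping are illustrative assumptions, not part of the original answer:

    // Hypothetical additions to ViewController.
    let sequenceHandler = VNSequenceRequestHandler()
    let boxLayer = CAShapeLayer()

    func doThingsWithFaces() {
      guard let buffer = getNewFrame() else { return }

      let request = VNDetectFaceRectanglesRequest { [weak self] request, _ in
        guard let self = self,
          let faces = request.results as? [VNFaceObservation] else { return }
        DispatchQueue.main.async {
          let size = self.view.bounds.size
          let path = CGMutablePath()
          for face in faces {
            // Vision bounding boxes are normalized (0...1) with a bottom-left
            // origin; flip them into UIKit's top-left coordinate space.
            let bb = face.boundingBox
            path.addRect(CGRect(x: bb.minX * size.width,
                                y: (1 - bb.maxY) * size.height,
                                width: bb.width * size.width,
                                height: bb.height * size.height))
          }
          self.boxLayer.path = path
        }
      }
      try? self.sequenceHandler.perform([request], on: buffer)
    }

boxLayer would still need one-time setup (a stroke color, no fill, added as a sublayer over the player's view), and the mapping above assumes the video fills the whole view; a real implementation would map through AVPlayerLayer's videoRect instead.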
    

That concludes this post on tracking faces in a local video with the Vision framework; hopefully the recommended answer is of some help.
