I have an app that records video, but I need it to show the user, in real time, the level of the sound the microphone is picking up. I have been able to record audio and video to an MP4 successfully using AVCaptureSession. However, when I add an AVCaptureAudioDataOutput to the session and assign the AVCaptureAudioDataOutputSampleBufferDelegate, I get no errors, yet the captureOutput function is never called once the session starts.
The code is as follows:

import UIKit
import AVFoundation
import CoreLocation


class ViewController: UIViewController,
AVCaptureVideoDataOutputSampleBufferDelegate,
AVCaptureFileOutputRecordingDelegate, CLLocationManagerDelegate,
AVCaptureAudioDataOutputSampleBufferDelegate {

var videoFileOutput: AVCaptureMovieFileOutput!
let session = AVCaptureSession()
var outputURL: URL!
var timer:Timer!
var locationManager:CLLocationManager!
var currentMagnitudeValue:CGFloat!
var defaultMagnitudeValue:CGFloat!
var visualMagnitudeValue:CGFloat!
var soundLiveOutput: AVCaptureAudioDataOutput!


override func viewDidLoad() {
    super.viewDidLoad()
    self.setupAVCapture()
}


func setupAVCapture(){

    session.beginConfiguration()

    //Add the camera INPUT to the session
    let videoDevice = AVCaptureDevice.default(.builtInWideAngleCamera,
                                              for: .video, position: .front)
    guard
        let videoDeviceInput = try? AVCaptureDeviceInput(device: videoDevice!),
        session.canAddInput(videoDeviceInput)
        else { return }
    session.addInput(videoDeviceInput)

    //Add the microphone INPUT to the session
    let microphoneDevice = AVCaptureDevice.default(.builtInMicrophone, for: .audio, position: .unspecified)
    guard
        let audioDeviceInput = try? AVCaptureDeviceInput(device: microphoneDevice!),
        session.canAddInput(audioDeviceInput)
        else { return }
    session.addInput(audioDeviceInput)

    //Add the video file OUTPUT to the session
    videoFileOutput = AVCaptureMovieFileOutput()
    guard session.canAddOutput(videoFileOutput) else { return }
    session.addOutput(videoFileOutput)

    //Add the audio output so we can get PITCH of the sounds
    //AND assign the SampleBufferDelegate
    soundLiveOutput = AVCaptureAudioDataOutput()
    soundLiveOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "test"))
    if (session.canAddOutput(soundLiveOutput)) {
        session.addOutput(soundLiveOutput)
        print ("Live AudioDataOutput added")
    } else {
        print("Could not add AudioDataOutput")
    }



    //Preview Layer
    let previewLayer = AVCaptureVideoPreviewLayer(session: session)
    let rootLayer: CALayer = self.cameraView.layer
    rootLayer.masksToBounds = true
    previewLayer.frame = rootLayer.bounds
    rootLayer.addSublayer(previewLayer)
    previewLayer.videoGravity = AVLayerVideoGravity.resizeAspectFill

    //Finalize the session
    session.commitConfiguration()

   //Begin the session
    session.startRunning()


}

func captureOutput(_: AVCaptureOutput, didOutput: CMSampleBuffer, from: AVCaptureConnection) {
    print("Bingo")
}

}

Expected output:
Bingo
Bingo
Bingo
...

What I have read:
StackOverflow: captureOutput not being called - the user had not declared the captureOutput method correctly.
StackOverflow: AVCaptureVideoDataOutput captureOutput not being called - the user had not declared the captureOutput method at all.
Apple - AVCaptureAudioDataOutputSampleBufferDelegate - Apple's documentation on the delegate and its method; the method matches the one I have declared.
Other common mistakes I have come across online:
Using the method declaration from an older version of Swift (I am using v4.1).
Apparently, according to one post, AVCaptureMetadataOutput replaced AVCaptureAudioDataOutput after Swift 4.0. I could not find this in Apple's documentation, but I tried it anyway, and likewise the metadataOutput function was never called.
I am simply out of ideas. Am I missing something obvious?

Best Answer

The delegate method you are using has been replaced by the one below. This single method is called for both AVCaptureAudioDataOutput and AVCaptureVideoDataOutput, so make sure you check which output the sample buffer came from before writing it to your asset writer.

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {

    //Make sure you check the output before using the sample buffer
    if output == audioDataOutput {
        //Use the sample buffer for audio
    }
}
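
In the question's code the audio data output is stored in soundLiveOutput, so that is the instance to compare against. Below is a minimal sketch, not part of the accepted answer, of how the live level the asker wants could be derived once the callback fires: updateMeter(level:) is a hypothetical method that drives the UI, and reading connection.audioChannels is just one convenient way to get a running level without decoding the sample buffer yourself.

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {

    // The same callback fires for every data output in the session,
    // so branch on the output instance before touching the buffer.
    guard output == soundLiveOutput else { return }

    // One way to get a running level: each AVCaptureAudioChannel on the
    // connection reports its average power in decibels (0 dB is full scale,
    // more negative means quieter).
    let channels = connection.audioChannels
    let averagePowerDb = channels
        .map { $0.averagePowerLevel }
        .reduce(0, +) / Float(max(channels.count, 1))

    // The delegate runs on the queue passed to setSampleBufferDelegate,
    // so hop back to the main queue before updating the UI.
    DispatchQueue.main.async {
        self.updateMeter(level: averagePowerDb)   // hypothetical UI update
    }
}

Since averagePowerLevel is already a decibel value, it can be mapped to a 0...1 range for a simple meter view; for a true pitch estimate you would instead need to pull the PCM samples out of the CMSampleBuffer and run an FFT or autocorrelation on them.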

A similar question, "ios - captureOutput not being called by AVCaptureAudioDataOutputSampleBufferDelegate", can be found on Stack Overflow: https://stackoverflow.com/questions/51573039/
