Problem Description
I have an AudioTapProcessor attached to an AVPlayerItem. During processing it calls

static void tap_ProcessCallback(MTAudioProcessingTapRef tap, CMItemCount numberFrames, MTAudioProcessingTapFlags flags, AudioBufferList *bufferListInOut, CMItemCount *numberFramesOut, MTAudioProcessingTapFlags *flagsOut)
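For context, the tap is typically created and attached to the player item through an AVAudioMix. The sketch below (modern Swift syntax) shows one common way to wire this up; the function name attachTap(to:) is illustrative and not from the original post, and the closure bodies stand in for the tap_PrepareCallback/tap_ProcessCallback functions mentioned above:

```swift
import AVFoundation
import MediaToolbox

// Hypothetical helper, not from the original post: create an
// MTAudioProcessingTap and attach it to an AVPlayerItem via an audio mix.
func attachTap(to item: AVPlayerItem) {
    var callbacks = MTAudioProcessingTapCallbacks(
        version: kMTAudioProcessingTapCallbacksVersion_0,
        clientInfo: nil,
        init: nil,
        finalize: nil,
        prepare: { tap, maxFrames, processingFormat in
            // Swift counterpart of tap_PrepareCallback: this is where the
            // canonical AudioStreamBasicDescription arrives.
        },
        unprepare: nil,
        process: { tap, numberFrames, flags, bufferListInOut, numberFramesOut, flagsOut in
            // Swift counterpart of tap_ProcessCallback: pull the source
            // audio into bufferListInOut, then hand it off for conversion.
            _ = MTAudioProcessingTapGetSourceAudio(tap, numberFrames,
                                                   bufferListInOut, flagsOut,
                                                   nil, numberFramesOut)
        })

    var tap: Unmanaged<MTAudioProcessingTap>?
    guard MTAudioProcessingTapCreate(kCFAllocatorDefault, &callbacks,
                                     kMTAudioProcessingTapCreationFlag_PostEffects,
                                     &tap) == noErr,
          let tap = tap,
          let track = item.asset.tracks(withMediaType: .audio).first else { return }

    let params = AVMutableAudioMixInputParameters(track: track)
    params.audioTapProcessor = tap.takeRetainedValue()
    let mix = AVMutableAudioMix()
    mix.inputParameters = [params]
    item.audioMix = mix
}
```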
I need to convert the AudioBufferList to a CMSampleBuffer so that I can use AVAssetWriterAudioInput.appendSampleBuffer to write it into a movie file.
So how can I convert an AudioBufferList to a CMSampleBuffer? I tried the code below, but it fails with a -12731 error: Error CMSampleBufferSetDataBufferFromAudioBufferList: Optional("-12731")
func processAudioData(audioData: UnsafeMutablePointer<AudioBufferList>, framesNumber: UInt32) {
    var sbuf: Unmanaged<CMSampleBuffer>?
    var status: OSStatus?
    var format: Unmanaged<CMFormatDescription>?

    // These hand-rolled values do not match the tap's actual processing
    // format, which is what triggers the -12731 further down.
    var formatId = UInt32(kAudioFormatLinearPCM)
    var formatFlags = UInt32(kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked)
    var audioFormat = AudioStreamBasicDescription(mSampleRate: 44100.00, mFormatID: formatId, mFormatFlags: formatFlags, mBytesPerPacket: 1, mFramesPerPacket: 1, mBytesPerFrame: 16, mChannelsPerFrame: 2, mBitsPerChannel: 2, mReserved: 0)

    status = CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &audioFormat, 0, nil, 0, nil, nil, &format)
    if status != noErr {
        println("Error CMAudioFormatDescriptionCreate: \(status?.description)")
        return
    }
    let formatDescription = format!.takeRetainedValue()

    var timing = CMSampleTimingInfo(duration: CMTimeMake(1, 44100), presentationTimeStamp: kCMTimeZero, decodeTimeStamp: kCMTimeInvalid)

    status = CMSampleBufferCreate(kCFAllocatorDefault, nil, Boolean(0), nil, nil, formatDescription, CMItemCount(framesNumber), 1, &timing, 0, nil, &sbuf)
    if status != noErr {
        println("Error CMSampleBufferCreate: \(status?.description)")
        return
    }
    // Take ownership exactly once; calling takeRetainedValue() repeatedly
    // on the same Unmanaged reference would over-release the buffer.
    let sampleBuffer = sbuf!.takeRetainedValue()

    status = CMSampleBufferSetDataBufferFromAudioBufferList(sampleBuffer, kCFAllocatorDefault, kCFAllocatorDefault, 0, audioData)
    if status != noErr {
        println("Error CMSampleBufferSetDataBufferFromAudioBufferList: \(status?.description)")
        return
    }

    let currentSampleTime = CMSampleBufferGetOutputPresentationTimeStamp(sampleBuffer)
    println("audio buffer at time: \(CMTimeCopyDescription(kCFAllocatorDefault, currentSampleTime))")

    if !assetWriterAudioInput!.readyForMoreMediaData {
        return
    } else if assetWriter.status == .Writing {
        if !assetWriterAudioInput!.appendSampleBuffer(sampleBuffer) {
            println("Problem appending audio buffer at time: \(CMTimeCopyDescription(kCFAllocatorDefault, currentSampleTime))")
        }
    } else {
        println("assetWriterStatus: \(assetWriter.status.rawValue), Error: \(assetWriter.error.localizedDescription)")
        println("Could not write a frame")
    }
}
Recommended Answer
OK, I've successfully resolved this problem.

The problem is that I should not construct the AudioStreamBasicDescription struct myself, but instead use the one provided by the prepare callback of the AudioProcessorTap:
static void tap_PrepareCallback(MTAudioProcessingTapRef tap, CMItemCount maxFrames, const AudioStreamBasicDescription *processingFormat) // retain this one
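Putting the fix together, the idea is to cache the AudioStreamBasicDescription handed to the prepare callback and build the CMFormatDescription from it, rather than from hand-rolled values. The sketch below (modern Swift syntax) assumes a global cachedASBD and an illustrative makeSampleBuffer(from:frameCount:) helper, neither of which is from the original post:

```swift
import CoreMedia
import MediaToolbox

// Assumed storage for the format the tap's prepare callback delivers.
var cachedASBD = AudioStreamBasicDescription()

// Inside the tap's prepare callback, retain the canonical format; it
// describes exactly the buffers the process callback will receive:
//     cachedASBD = processingFormat.pointee

func makeSampleBuffer(from bufferList: UnsafeMutablePointer<AudioBufferList>,
                      frameCount: CMItemCount) -> CMSampleBuffer? {
    var asbd = cachedASBD
    var format: CMFormatDescription?
    guard CMAudioFormatDescriptionCreate(allocator: kCFAllocatorDefault,
                                         asbd: &asbd,
                                         layoutSize: 0, layout: nil,
                                         magicCookieSize: 0, magicCookie: nil,
                                         extensions: nil,
                                         formatDescriptionOut: &format) == noErr,
          let format = format else { return nil }

    var timing = CMSampleTimingInfo(
        duration: CMTime(value: 1, timescale: CMTimeScale(asbd.mSampleRate)),
        presentationTimeStamp: .zero,
        decodeTimeStamp: .invalid)

    var sampleBuffer: CMSampleBuffer?
    guard CMSampleBufferCreate(allocator: kCFAllocatorDefault,
                               dataBuffer: nil, dataReady: false,
                               makeDataReadyCallback: nil, refcon: nil,
                               formatDescription: format,
                               sampleCount: frameCount,
                               sampleTimingEntryCount: 1, sampleTimingArray: &timing,
                               sampleSizeEntryCount: 0, sampleSizeArray: nil,
                               sampleBufferOut: &sampleBuffer) == noErr,
          let sampleBuffer = sampleBuffer else { return nil }

    // With a format that actually matches the buffers, this call no
    // longer fails with -12731.
    guard CMSampleBufferSetDataBufferFromAudioBufferList(
            sampleBuffer,
            blockBufferAllocator: kCFAllocatorDefault,
            blockBufferMemoryAllocator: kCFAllocatorDefault,
            flags: 0,
            bufferList: bufferList) == noErr else { return nil }
    return sampleBuffer
}
```

The returned buffer can then be passed to AVAssetWriterAudioInput's appendSampleBuffer as in the question's code.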