This article describes how to get microphone input using Audio Queues in Swift 3. It may be a useful reference for anyone solving the same problem.

Problem description
I am developing an app that records voice via the built-in microphone and sends it to a server live. So I need to get the byte stream from the microphone while recording.
After googling and stack-overflowing for quite a while, I think I figured out how it should work, but it does not. I think using Audio Queues might be the way to go.
Here is what I have tried so far:
Use AudioQueueNewInput(...) (or output) to initialize your audio queue before using it:
```swift
let sampleRate = 16000
let numChannels = 2

var inFormat = AudioStreamBasicDescription(
    mSampleRate: Double(sampleRate),
    mFormatID: kAudioFormatLinearPCM,
    mFormatFlags: kAudioFormatFlagsNativeFloatPacked,
    // Float32 matches the native-float format flag (same 4-byte size as the original UInt32)
    mBytesPerPacket: UInt32(numChannels * MemoryLayout<Float32>.size),
    mFramesPerPacket: 1,
    mBytesPerFrame: UInt32(numChannels * MemoryLayout<Float32>.size),
    mChannelsPerFrame: UInt32(numChannels),
    mBitsPerChannel: UInt32(8 * MemoryLayout<Float32>.size),
    mReserved: 0)

var inQueue: AudioQueueRef? = nil
AudioQueueNewInput(&inFormat, callback, nil, nil, nil, 0, &inQueue)

var aqData = AQRecorderState(
    mDataFormat: inFormat,
    mQueue: inQueue!, // inQueue is initialized now and can be unwrapped
    mBuffers: [AudioQueueBufferRef](),
    bufferByteSize: 32,
    mCurrentPacket: 0,
    mIsRunning: true)
```

(The AQRecorderState struct follows the recorder state described in Apple's Audio Queue Services documentation.)
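The snippet above never defines `callback`, never enqueues any buffers, and never starts the queue, so no audio would ever arrive. A minimal sketch of those missing pieces might look like the following; `sendToServer` is a hypothetical stand-in for whatever networking code the app uses, and the buffer size is an assumption (the 32 bytes used above is very small for real audio):

```swift
import AudioToolbox
import Foundation

// Hypothetical upload hook; replace with the app's real networking code.
func sendToServer(_ data: Data) {
    // e.g. write to a socket or append to an upload stream
}

// The input callback fires each time the queue fills a buffer with
// captured audio. Copy the bytes out, then re-enqueue the buffer so
// the queue can keep recording into it.
let callback: AudioQueueInputCallback = { _, inAQ, inBuffer, _, _, _ in
    let buffer = inBuffer.pointee
    let data = Data(bytes: buffer.mAudioData,
                    count: Int(buffer.mAudioDataByteSize))
    sendToServer(data)
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, nil)
}

// After AudioQueueNewInput succeeds, allocate a few buffers,
// enqueue them, and start the queue.
func startRecording(queue: AudioQueueRef) {
    let bufferByteSize: UInt32 = 4096 // assumed size, not from the original
    for _ in 0..<3 {
        var bufferRef: AudioQueueBufferRef?
        AudioQueueAllocateBuffer(queue, bufferByteSize, &bufferRef)
        if let bufferRef = bufferRef {
            AudioQueueEnqueueBuffer(queue, bufferRef, 0, nil)
        }
    }
    AudioQueueStart(queue, nil)
}
```

With this in place, the queue cycles through the three buffers, and each filled buffer is handed to the callback as a byte stream, which is exactly what a live-upload app needs.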
That concludes this article on getting microphone input using Audio Queues in Swift 3. I hope the suggested answer is helpful.