Problem Description

I'm writing an iPhone app which should record the users voice, and feed the audio data into a library for modifications such as changing tempo and pitch. I started off with the SpeakHere example code from Apple:

That project lays the groundwork for recording the user's voice and playing it back. It works well.

Now I'm diving into the code and I need to figure out how to feed the audio data into the SoundTouch library (http://www.surina.net/soundtouch/) to change the pitch. I became familiar with the Audio Queue framework while going through the code, and I found the place where I receive the audio data from the recording.

Essentially, you call AudioQueueNewInput to create a new input queue. You pass a callback function which is called every time a chunk of audio data is available. It is within this callback that I need to pass the chunks of data into SoundTouch.

I have it all set up, but the audio I play back from the SoundTouch library is very staticky (it barely resembles the original). If I don't pass it through SoundTouch and play the original audio, it works fine.

Basically, I'm missing something about what the actual data I'm getting represents. I was assuming that I am getting a stream of shorts which are samples, 1 sample for each channel. That's how SoundTouch is expecting it, so it must not be right somehow.

Here is the code which sets up the audio queue so you can see how it is configured.

void AQRecorder::SetupAudioFormat(UInt32 inFormatID)
{
    memset(&mRecordFormat, 0, sizeof(mRecordFormat));

    UInt32 size = sizeof(mRecordFormat.mSampleRate);
    XThrowIfError(AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareSampleRate,
                                          &size,
                                          &mRecordFormat.mSampleRate), "couldn't get hardware sample rate");

    size = sizeof(mRecordFormat.mChannelsPerFrame);
    XThrowIfError(AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareInputNumberChannels,
                                          &size,
                                          &mRecordFormat.mChannelsPerFrame), "couldn't get input channel count");

    mRecordFormat.mFormatID = inFormatID;
    if (inFormatID == kAudioFormatLinearPCM)
    {
        // if we want pcm, default to signed 16-bit little-endian
        mRecordFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
        mRecordFormat.mBitsPerChannel = 16;
        mRecordFormat.mBytesPerPacket = mRecordFormat.mBytesPerFrame = (mRecordFormat.mBitsPerChannel / 8) * mRecordFormat.mChannelsPerFrame;
        mRecordFormat.mFramesPerPacket = 1;
    }
}

And here's part of the code which actually sets it up:

    SetupAudioFormat(kAudioFormatLinearPCM);

    // create the queue
    XThrowIfError(AudioQueueNewInput(
                                  &mRecordFormat,
                                  MyInputBufferHandler,
                                  this /* userData */,
                                  NULL /* run loop */, NULL /* run loop mode */,
                                  0 /* flags */, &mQueue), "AudioQueueNewInput failed");

And finally, here is the callback which handles new audio data:

void AQRecorder::MyInputBufferHandler(void *inUserData,
                                      AudioQueueRef inAQ,
                                      AudioQueueBufferRef inBuffer,
                                      const AudioTimeStamp *inStartTime,
                                      UInt32 inNumPackets,
                                      const AudioStreamPacketDescription *inPacketDesc) {
    AQRecorder *aqr = (AQRecorder *)inUserData;
    try {
        if (inNumPackets > 0) {
            CAStreamBasicDescription queueFormat = aqr->DataFormat();
            SoundTouch *soundTouch = aqr->getSoundTouch();

            soundTouch->putSamples((const SAMPLETYPE *)inBuffer->mAudioData,
                                   inBuffer->mAudioDataByteSize / 2 / queueFormat.NumberChannels());

            SAMPLETYPE *samples = (SAMPLETYPE *)malloc(sizeof(SAMPLETYPE) * 10000 * queueFormat.NumberChannels());
            UInt32 numSamples;
            while ((numSamples = soundTouch->receiveSamples((SAMPLETYPE *)samples, 10000))) {
                // write packets to file
                XThrowIfError(AudioFileWritePackets(aqr->mRecordFile,
                                                    FALSE,
                                                    numSamples * 2 * queueFormat.NumberChannels(),
                                                    NULL,
                                                    aqr->mRecordPacket,
                                                    &numSamples,
                                                    samples),
                              "AudioFileWritePackets failed");
                aqr->mRecordPacket += numSamples;
            }
            free(samples);
        }

        // if we're not stopping, re-enqueue the buffer so that it gets filled again
        if (aqr->IsRunning())
            XThrowIfError(AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL), "AudioQueueEnqueueBuffer failed");
    } catch (CAXException e) {
        char buf[256];
        fprintf(stderr, "Error: %s (%s)\n", e.mOperation, e.FormatError(buf));
    }
}

You can see that I'm passing the data in inBuffer->mAudioData to SoundTouch. In my callback, what exactly are the bytes representing, i.e. how do I extract samples from mAudioData?

Answer

You have to check that the endianness, the signedness, etc. of what you are getting match what the library expects. Use the mFormatFlags field of the AudioStreamBasicDescription to determine the source format. Then you might have to convert the samples (e.g. newSample = sample + 0x8000).
