This will slow your app down so much that it will probably prevent any audio from actually playing, but you should be able to run it long enough to see what the samples look like. If the callback is receiving 16-bit audio, the samples should be signed integers between -32768 and 32767. If the samples alternate between a normal-looking number and a much smaller number, try this code in your callback instead:

```
SInt32 *dataLeftChannel = (SInt32 *)ioData->mBuffers[0].mData;
for (UInt32 frameNumber = 0; frameNumber < inNumberFrames; ++frameNumber) {
    NSLog(@"sample %lu: %ld", (unsigned long)frameNumber, (long)dataLeftChannel[frameNumber]);
}
```

This should show you the complete 8.24 samples.

If you can save the data in the format the callback is receiving, then you should have what you need. If you need to save it in a different format, you should be able to convert the format in the Remote I/O audio unit ... but I haven't been able to figure out how to do that when it's connected to a Multichannel Mixer unit. As an alternative, you can convert the data using Audio Converter Services.
First, define the input and output formats:

```
AudioStreamBasicDescription monoCanonicalFormat;
size_t bytesPerSample = sizeof(AudioUnitSampleType);
monoCanonicalFormat.mFormatID         = kAudioFormatLinearPCM;
monoCanonicalFormat.mFormatFlags      = kAudioFormatFlagsAudioUnitCanonical;
monoCanonicalFormat.mBytesPerPacket   = bytesPerSample;
monoCanonicalFormat.mFramesPerPacket  = 1;
monoCanonicalFormat.mBytesPerFrame    = bytesPerSample;
monoCanonicalFormat.mChannelsPerFrame = 1;
monoCanonicalFormat.mBitsPerChannel   = 8 * bytesPerSample;
monoCanonicalFormat.mSampleRate       = graphSampleRate;

AudioStreamBasicDescription mono16Format;
bytesPerSample = sizeof(SInt16);
mono16Format.mFormatID         = kAudioFormatLinearPCM;
mono16Format.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
mono16Format.mChannelsPerFrame = 1;
mono16Format.mSampleRate       = graphSampleRate;
mono16Format.mBitsPerChannel   = 16;
mono16Format.mFramesPerPacket  = 1;
mono16Format.mBytesPerPacket   = 2;
mono16Format.mBytesPerFrame    = 2;
```

Then define a converter somewhere outside your callback, and create a temporary buffer for handling the data during conversion:

```
AudioConverterRef formatConverterCanonicalTo16;
// in the interface:
// @property AudioConverterRef formatConverterCanonicalTo16;
// in the implementation:
// @synthesize formatConverterCanonicalTo16;

AudioConverterNew(&monoCanonicalFormat,
                  &mono16Format,
                  &formatConverterCanonicalTo16);

SInt16 *data16;
// @property (readwrite) SInt16 *data16;
// @synthesize data16;
data16 = malloc(sizeof(SInt16) * 4096);
```

Then add this to your callback, before you save your data:

```
UInt32 dataSizeCanonical = ioData->mBuffers[0].mDataByteSize;
SInt32 *dataCanonical = (SInt32 *)ioData->mBuffers[0].mData;
UInt32 dataSize16 = dataSizeCanonical;

AudioConverterConvertBuffer(effectState->formatConverterCanonicalTo16,
                            dataSizeCanonical,
                            dataCanonical,
                            &dataSize16,
                            effectState->data16);
```

Then you can save data16, which is in 16-bit format and might be what you want
saved in your file. It will be more compatible and half as large as the canonical data.

When you're done, you can clean up a couple of things:

```
AudioConverterDispose(formatConverterCanonicalTo16);
free(data16);
```

This concludes this article on how to record the sound produced by a mixer unit's output (iOS Core Audio & Audio Graph); hopefully the answer above is of some help.