Problem Description

Consider these two scenarios for reading/writing data from audio files (for the purpose of sending it over a network):
Scenario 1: Audio File Services:

Using AudioFileReadPackets from Audio File Services. This produces audio packets that you can easily send over the network. On the receiving side you use AudioFileStreamOpen and AudioFileStreamParseBytes to parse the data.
AudioFileStreamParseBytes then has two callback functions: AudioFileStream_PropertyListenerProc and AudioFileStream_PacketsProc. These are called when a new property is discovered in the stream and when packets are received from the stream, respectively. Once you receive the packets, you can feed them to an audio queue using Audio Queue Services, which plays the file just fine.
Note: This method does NOT work with music files stored in the iPod library, which brings us to the 2nd scenario:
Scenario 2: AVAssetReader:

With AVAssetReader you can read from the iPod music library and send packets over the network. Normally you would load the packets directly onto an audio queue, similar to the above. In this case, however, you have to create a thread to make sure you block receiving packets while the queue is full and unblock when queue buffers become available (see this example).
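The blocking behavior described above is a classic bounded producer/consumer pattern. As a framework-free illustration (a hypothetical sketch in plain C with POSIX threads, not code from the original post; the real implementation would move `CMSampleBuffer` data into `AudioQueueBuffer`s), the reader thread blocks in a push when the queue is full and is woken when the playback side frees a slot:

```c
#include <pthread.h>

/* Minimal bounded packet queue illustrating the blocking described
 * above: the reader thread blocks when the queue is full and is woken
 * when the playback side drains a slot. Hypothetical sketch only;
 * payloads are ints standing in for audio packets. */
#define QCAP 4

typedef struct {
    int slots[QCAP];
    int head, tail, count;
    pthread_mutex_t lock;
    pthread_cond_t not_full, not_empty;
} PacketQueue;

void pq_init(PacketQueue *q) {
    q->head = q->tail = q->count = 0;
    pthread_mutex_init(&q->lock, NULL);
    pthread_cond_init(&q->not_full, NULL);
    pthread_cond_init(&q->not_empty, NULL);
}

void pq_push(PacketQueue *q, int pkt) {
    pthread_mutex_lock(&q->lock);
    while (q->count == QCAP)            /* reader thread blocks here when full */
        pthread_cond_wait(&q->not_full, &q->lock);
    q->slots[q->tail] = pkt;
    q->tail = (q->tail + 1) % QCAP;
    q->count++;
    pthread_cond_signal(&q->not_empty);
    pthread_mutex_unlock(&q->lock);
}

int pq_pop(PacketQueue *q) {
    pthread_mutex_lock(&q->lock);
    while (q->count == 0)               /* playback side waits for data */
        pthread_cond_wait(&q->not_empty, &q->lock);
    int pkt = q->slots[q->head];
    q->head = (q->head + 1) % QCAP;
    q->count--;
    pthread_cond_signal(&q->not_full);  /* wake a blocked reader */
    pthread_mutex_unlock(&q->lock);
    return pkt;
}
```

In an audio app, `pq_pop` would be driven by the audio queue's output callback (i.e. when a buffer is returned for reuse), which is exactly the "unblock when queue buffers are available" step.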
Question:
Is it possible to use AVAssetReader to send packets over, only to have them read by AudioFileStreamParseBytes? (The motive is that AudioFileStreamParseBytes's callbacks will handle the threading/blocking business and save you that pain.) I tried doing it like so:

1. First read the audio file using AVAssetReader:
//NSURL *assetURL = [NSURL URLWithString:@"ipod-library://item/item.m4a?id=1053020204400037178"];
AVURLAsset *songAsset = [AVURLAsset URLAssetWithURL:assetURL options:nil];
NSError * error = nil;
AVAssetReader* reader = [[AVAssetReader alloc] initWithAsset:songAsset error:&error];
AVAssetTrack* track = [songAsset.tracks objectAtIndex:0];
// Note: I don't supply an audio format description here, rather I pass on nil to keep the original
// file format. In another piece of code (see here: http://stackoverflow.com/questions/12264799/why-is-audio-coming-up-garbled-when-using-avassetreader-with-audio-queue?answertab=active#tab-top) I can extract the audio format from the track, let's say it's an AAC format.
AVAssetReaderTrackOutput* readerOutput =
    [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track
                                               outputSettings:nil];
[reader addOutput:readerOutput];
[reader startReading];
2. Set up the streamer:

// Notice how I manually supply the audio file type (the file-hint parameter)
// using the info from step one. If I leave it as 0, this call fails and returns
// the typ? error, which is: "The specified file type is not supported."
streamer->err = AudioFileStreamOpen((__bridge void*)streamer,
ASPropertyListenerProc, ASPacketsProc,
kAudioFileAAC_ADTSType, &(streamer->audioFileStream));
3. Once I receive the data, I parse the bytes:
streamer->err = AudioFileStreamParseBytes(streamer->audioFileStream, inDataByteSize, inData, 0);
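The documentation's advice to hand the parser at least a packet's worth of data at a time can be sketched framework-free as a small batching buffer that accumulates incoming network chunks and reports when enough bytes are ready to pass to AudioFileStreamParseBytes in one call (all names here are hypothetical helpers, not from the original post):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Accumulate small network chunks until we hold a few packets' worth
 * of bytes, then hand them to the parser in one call. Hypothetical
 * sketch; the threshold is an arbitrary illustration value. */
#define BATCH_THRESHOLD 2048

typedef struct {
    uint8_t buf[8192];
    size_t used;
} ByteBatcher;

/* Returns the number of bytes ready to parse (0 while still batching).
 * When nonzero, the caller parses buf[0..n) -- e.g. by calling
 * AudioFileStreamParseBytes -- and the batch resets. */
size_t batcher_feed(ByteBatcher *b, const uint8_t *data, size_t len) {
    if (len > sizeof b->buf - b->used)
        len = sizeof b->buf - b->used;  /* clamp; real code would loop */
    memcpy(b->buf + b->used, data, len);
    b->used += len;
    if (b->used >= BATCH_THRESHOLD) {
        size_t ready = b->used;
        b->used = 0;
        return ready;
    }
    return 0;
}
```

With 900-byte GKSession chunks and this threshold, every third chunk would trigger one 2700-byte parse call instead of three small ones.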
Problem: When I do it this way, I send the bytes and AudioFileStreamParseBytes does not fail. However, the callbacks *AudioFileStream_PropertyListenerProc* and *AudioFileStream_PacketsProc* are never called, which makes me think the parser has failed to parse the bytes and extract any useful information out of them. The documentation for AudioFileStreamParseBytes states: *You should provide at least more than a single packet's worth of audio file data, but it is better to provide a few packets to a few seconds of data at a time.* I'm sending over 900 bytes, which is just below GKSession's data limit. I'm pretty sure 900 bytes is enough (when testing this under scenario 1, the total bytes was 417 each time and it worked fine).
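One plausible explanation (my own observation, not stated in the original post): iPod-library tracks are .m4a files, and AVAssetReaderTrackOutput with nil outputSettings hands back the raw MPEG-4 AAC packets, which carry no ADTS frame headers. A parser opened with the kAudioFileAAC_ADTSType hint scans for ADTS sync words it will never find, so it silently consumes bytes without ever firing a callback. A framework-free sanity check for the ADTS sync pattern (12 set bits, 0xFFF, at the start of each frame header) might look like:

```c
#include <stddef.h>
#include <stdint.h>

/* Scan a byte buffer for an ADTS sync word: 0xFF followed by a byte
 * whose top four bits are set. Returns the offset of the first match,
 * or -1 if the buffer contains no ADTS framing. Hypothetical helper,
 * not part of the original post. */
int adts_sync_offset(const uint8_t *buf, size_t len) {
    for (size_t i = 0; i + 1 < len; i++) {
        if (buf[i] == 0xFF && (buf[i + 1] & 0xF0) == 0xF0)
            return (int)i;
    }
    return -1;
}
```

Running this over the bytes coming out of AVAssetReader would show whether the stream actually contains the framing the file-type hint promises; raw MP4-contained AAC would return -1 on every chunk.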
Any ideas?

Answer
The short answer is that it simply doesn't make sense to have packets of audio data parsed by AudioFileStreamParseBytes. In the docs, AudioFileStreamParseBytes is a function dependent on the existence of an audio file (hence the parameter inAudioFileStream, which is defined as "the ID of the parser to which you wish to pass data. The parser ID is returned by the AudioFileStreamOpen function.")
So, lesson learned: don't try to pigeonhole iOS functions to fit your situation; it should be the other way around.
What I ended up doing was feeding the data directly to an Audio Queue, without going through all those unnecessary intermediary functions. A more in-depth way would be feeding the data to audio units, but my application didn't need that level of control.