I'm trying to play the audio received from an RTMP stream (I've already managed to play the video part). The audio is in .aac format. It arrives as NSData, which I then put into a CMSampleBuffer and hand to an AVSampleBufferAudioRenderer (basically the same thing I do with the video packets).

Everything seems to work, except that I can't hear any sound. I'm quite new to Objective-C and iOS programming, so the problem could well be somewhere else entirely; all ideas are welcome.

This is the code I use to create the format description:

-(void)createFormatDescription:(NSData*)payload
{
    OSStatus status;
    // Skip the 2-byte FLV/RTMP audio tag header; what is left is the AudioSpecificConfig.
    NSData* data = [NSData dataWithData:[payload subdataWithRange:NSMakeRange(2, [payload length]-2)]];
    const uint8_t* bytesBuffer = [data bytes];
    _type = bytesBuffer[0] >> 3;                                                                   // 5-bit audio object type
    _frequency = [self getSampleRate:(bytesBuffer[0] & 0b00000111) << 1 | (bytesBuffer[1] >> 7)];  // 4-bit sampling-frequency index
    _channel = (bytesBuffer[1] & 0b01111000) >> 3;                                                 // 4-bit channel configuration
    AudioStreamBasicDescription audioFormat;
    audioFormat.mFormatID = kAudioFormatMPEG4AAC;
    audioFormat.mSampleRate = _frequency;
    audioFormat.mFormatFlags = _type;
    audioFormat.mBytesPerPacket = 0;      // 0 = variable (compressed format)
    audioFormat.mFramesPerPacket = 1024;  // AAC-LC frames per packet
    audioFormat.mBytesPerFrame = 0;
    audioFormat.mChannelsPerFrame = _channel;
    audioFormat.mBitsPerChannel = 0;
    audioFormat.mReserved = 0;
    status = CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &audioFormat, 0, nil, 0, nil, nil, &_formatDesc);
}
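
The getSampleRate: helper isn't shown in the post. A minimal sketch of what it presumably does, assuming it simply looks up the 4-bit sampling-frequency index from the AudioSpecificConfig in the standard MPEG-4 table (the return type here is a guess), could be:

// Hypothetical helper, not taken from the original code: maps the 4-bit
// sampling-frequency index from the AudioSpecificConfig to a rate in Hz.
- (int)getSampleRate:(uint8_t)index
{
    static const int rates[] = {
        96000, 88200, 64000, 48000, 44100, 32000, 24000,
        22050, 16000, 12000, 11025, 8000, 7350
    };
    if (index > 12) {
        return 0; // 13 and 14 are reserved; 15 means the rate is written explicitly
    }
    return rates[index];
}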

And this is the code I use to prepend the ADTS header to each packet and create the buffer:
- (NSData*) adts:(int)length
{
    int size = 7;
    int fullSize = length + size;
    uint8_t adts[size];
    adts[0] = 0xFF;
    adts[1] = 0xF9;
    adts[2] = (_type - 1) << 6 | (_frequency << 2) | (_channel >> 2);
    adts[3] = (_channel & 3) << 6 | (fullSize >> 11);
    adts[4] = (fullSize & 0x7FF) >> 3;
    adts[5] = ((fullSize & 7) << 5) + 0x1F;
    adts[6] = 0xFC;
    NSData* result = [NSData dataWithBytes:adts length:size];
    return result;
}

-(void)enqueueBuffer:(RTMPMessage*)message {
    OSStatus status;
    // Strip the 2-byte FLV/RTMP audio tag header, then prepend the ADTS header.
    NSData* payloadData = [NSData dataWithData:[message.payloadData subdataWithRange:NSMakeRange(2, [message.payloadData length]-2)]];
    NSData* adts = [NSData dataWithData:[self adts:(int)[payloadData length]]];
    NSMutableData* data = [NSMutableData dataWithData:adts];
    [data appendData:payloadData];
    uint8_t bytesBuffer[[data length]];
    [data getBytes:bytesBuffer length:[data length]];
    const size_t sampleSize = [data length];
    AudioStreamPacketDescription packetDescription;
    packetDescription.mDataByteSize = (int)sampleSize;
    packetDescription.mStartOffset = 0;
    packetDescription.mVariableFramesInPacket = 0;
    CMBlockBufferRef blockBuffer = NULL;
    CMSampleBufferRef sampleBuffer = NULL;
    CMTime time = CMTimeMake(5, _frequency);
    // kCFAllocatorNull: the block buffer references bytesBuffer directly, it does not copy it.
    status = CMBlockBufferCreateWithMemoryBlock(NULL, bytesBuffer, [data length], kCFAllocatorNull, NULL, 0, [data length], 0, &blockBuffer);
    status = CMAudioSampleBufferCreateWithPacketDescriptions(kCFAllocatorDefault, blockBuffer, true, NULL, NULL, _formatDesc, 1, time, &packetDescription, &sampleBuffer);
    CFArrayRef attachments = CMSampleBufferGetSampleAttachmentsArray(sampleBuffer, YES);
    CFMutableDictionaryRef dict = (CFMutableDictionaryRef)CFArrayGetValueAtIndex(attachments, 0);
    CFDictionarySetValue(dict, kCMSampleAttachmentKey_DisplayImmediately, kCFBooleanTrue);
    [_audioRenderer enqueueSampleBuffer:sampleBuffer];
}
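
This isn't part of the original post, but since the symptom is complete silence, it can help to check the OSStatus results and the renderer's own state after enqueueing. The helper below is only an illustration (the method name and the _synchronizer ivar are made up); it also notes that an AVSampleBufferAudioRenderer normally plays through an AVSampleBufferRenderSynchronizer whose rate has been set:

// Hypothetical debugging helper, not from the original code.
- (void)logRendererState:(OSStatus)lastStatus
{
    if (lastStatus != noErr) {
        NSLog(@"CMBlockBuffer/CMSampleBuffer creation failed: %d", (int)lastStatus);
    }
    if (_audioRenderer.status == AVQueuedSampleBufferRenderingStatusFailed) {
        NSLog(@"Renderer failed: %@", _audioRenderer.error);
    }
    if (![_audioRenderer isReadyForMoreMediaData]) {
        NSLog(@"Renderer is not asking for more data right now");
    }
    // The renderer only produces sound once it has been attached to an
    // AVSampleBufferRenderSynchronizer and the synchronizer's rate is non-zero:
    //     [_synchronizer addRenderer:_audioRenderer];
    //     [_synchronizer setRate:1.0 time:kCMTimeZero];
    // (_synchronizer is a hypothetical ivar, not in the original post.)
}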

Thanks in advance for any help.

Best answer

You don't need the ADTS header. AVSampleBufferAudioRenderer only needs the bare compressed AAC packets to play them, but only if the format description is set correctly, along with the right parameters when creating the sample buffers.

You need to know that HE-AAC (LC + SBR) is packaged like AAC-LC but with a sample rate of 22050, and HE-AACv2 (LC + SBR + PS) is packaged like AAC-LC but with a sample rate of 22050 and one channel per sample.
Also, for all HE-AAC variants (v1 and v2), samplesPerFrame is always 2048, not the 1024 used by LC.
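
To make that concrete, here is a minimal sketch of enqueueing a bare AAC packet without an ADTS header, assuming the _formatDesc and _audioRenderer ivars from the question (with mFramesPerPacket set to 2048 instead of 1024 when the stream is HE-AAC, per the answer); the method name enqueueRawAACPacket: and the pts parameter are made up for the example:

// Sketch only: enqueue one bare AAC packet (no ADTS header).
- (void)enqueueRawAACPacket:(NSData*)packet presentationTime:(CMTime)pts
{
    OSStatus status;
    CMBlockBufferRef blockBuffer = NULL;

    // Let Core Media allocate and own a copy of the packet bytes, so the
    // block buffer is not left pointing at memory that may go away.
    status = CMBlockBufferCreateWithMemoryBlock(kCFAllocatorDefault, NULL, [packet length],
                                                kCFAllocatorDefault, NULL, 0, [packet length],
                                                kCMBlockBufferAssureMemoryNowFlag, &blockBuffer);
    if (status != kCMBlockBufferNoErr) return;
    status = CMBlockBufferReplaceDataBytes([packet bytes], blockBuffer, 0, [packet length]);
    if (status != kCMBlockBufferNoErr) { CFRelease(blockBuffer); return; }

    // One compressed AAC packet per sample buffer.
    AudioStreamPacketDescription packetDescription = {
        .mStartOffset = 0,
        .mVariableFramesInPacket = 0,
        .mDataByteSize = (UInt32)[packet length],
    };

    CMSampleBufferRef sampleBuffer = NULL;
    status = CMAudioSampleBufferCreateWithPacketDescriptions(kCFAllocatorDefault, blockBuffer, true,
                                                             NULL, NULL, _formatDesc, 1, pts,
                                                             &packetDescription, &sampleBuffer);
    if (status == noErr && sampleBuffer != NULL) {
        [_audioRenderer enqueueSampleBuffer:sampleBuffer];
        CFRelease(sampleBuffer);
    }
    CFRelease(blockBuffer);
}

If the stream's own timestamps aren't used directly, a common choice is to advance pts by one packet's worth of frames each time, e.g. CMTimeMake(packetIndex * framesPerPacket, sampleRate) with framesPerPacket of 1024 for LC and 2048 for HE-AAC, as described in the answer.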

That's how, as far as I know, you play an AAC stream correctly with AVSampleBufferAudioRenderer. It was a long road getting there.

Regarding "ios - Play AAC audio from an RTMP stream on iOS", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/50957320/
