The main goal of the app I'm trying to build is peer-to-peer video streaming (something like FaceTime, but over Bluetooth/WiFi).

Using AVFoundation, I'm able to capture video/audio sample buffers. I then send the video/audio sample buffer data over the connection. The problem now is processing the sample buffer data on the receiving end.

As for the video sample buffers, I'm able to get a UIImage from the sample buffer. But for the audio sample buffers, I don't know how to process them so that the audio can be played back.
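For reference, the way I get the UIImage from a video sample buffer is roughly what Apple's Technical Q&A QA1702 shows (a sketch, assuming the video data output is configured for kCVPixelFormatType_32BGRA):

// Sketch based on Apple's QA1702 "imageFromSampleBuffer".
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    void  *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width       = CVPixelBufferGetWidth(imageBuffer);
    size_t height      = CVPixelBufferGetHeight(imageBuffer);

    // Wrap the BGRA pixel data in a CGBitmapContext and snapshot it.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
        bytesPerRow, colorSpace,
        kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    UIImage *image = [UIImage imageWithCGImage:quartzImage];
    CGImageRelease(quartzImage);
    return image;
}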

So the question is: how do I process/play the audio sample buffers?

Right now I'm just plotting the waveform, as in Apple's Wavy sample code:

CMSampleBufferRef sampleBuffer; // received from the capture callback

CMItemCount numSamples = CMSampleBufferGetNumSamples(sampleBuffer);
NSUInteger channelIndex = 0;

CMBlockBufferRef audioBlockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
size_t audioBlockBufferOffset = (channelIndex * numSamples * sizeof(SInt16));
size_t lengthAtOffset = 0;
size_t totalLength = 0;
SInt16 *samples = NULL;
CMBlockBufferGetDataPointer(audioBlockBuffer, audioBlockBufferOffset, &lengthAtOffset, &totalLength, (char **)(&samples));

int numSamplesToRead = 1;
for (int i = 0; i < numSamplesToRead; i++) {

    SInt16 subSet[numSamples / numSamplesToRead];
    for (int j = 0; j < numSamples / numSamplesToRead; j++)
        subSet[j] = samples[(i * (numSamples / numSamplesToRead)) + j];

    SInt16 audioSample = [Util maxValueInArray:subSet ofSize:(numSamples / numSamplesToRead)];
    // Cast before dividing so the division happens in floating point;
    // the constant is INT16_MAX from <stdint.h> (SINT16_MAX does not exist).
    double scaledSample = (double)audioSample / INT16_MAX;

    // plot waveform using scaledSample
    [self updateUI:scaledSample];
}

Best Answer

To display the video, you can use the following (here it grabs the ARGB picture and converts it to a Qt (Nokia Qt) QImage; you can replace that with any other image type).

Put this in the delegate class:



 - (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // Lock the pixel buffer before touching its base address.
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    SVideoSample sample;

    sample.pImage      = (char *)CVPixelBufferGetBaseAddress(imageBuffer);
    sample.bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    sample.width       = CVPixelBufferGetWidth(imageBuffer);
    sample.height      = CVPixelBufferGetHeight(imageBuffer);

    // Wrap the raw ARGB bytes in a QImage (swap in another image type as needed).
    QImage img((unsigned char *)sample.pImage, sample.width, sample.height,
               sample.bytesPerRow, QImage::Format_ARGB32);

    self->m_receiver->eventReceived(img);

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    [pool drain];
}
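The code above only covers the video side. For the audio half of the question, one common approach is to feed the received PCM into Audio Queue Services. A minimal sketch, assuming mono 16-bit 44.1 kHz linear PCM (match the AudioStreamBasicDescription to whatever format the sender's CMSampleBufferGetFormatDescription reports; the function names SetUpPlaybackQueue/EnqueuePCM are mine):

#import <AudioToolbox/AudioToolbox.h>
#import <string.h>

static AudioQueueRef playbackQueue;

// Called when a buffer has finished playing; free it (a real app might recycle it).
static void PlaybackCallback(void *inUserData, AudioQueueRef inAQ,
                             AudioQueueBufferRef inBuffer)
{
    AudioQueueFreeBuffer(inAQ, inBuffer);
}

void SetUpPlaybackQueue(void)
{
    AudioStreamBasicDescription format = {0};
    format.mSampleRate       = 44100.0;
    format.mFormatID         = kAudioFormatLinearPCM;
    format.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger |
                               kLinearPCMFormatFlagIsPacked;
    format.mChannelsPerFrame = 1;
    format.mBitsPerChannel   = 16;
    format.mBytesPerFrame    = 2;   // 16-bit mono
    format.mFramesPerPacket  = 1;
    format.mBytesPerPacket   = 2;

    AudioQueueNewOutput(&format, PlaybackCallback, NULL, NULL, NULL, 0,
                        &playbackQueue);
    AudioQueueStart(playbackQueue, NULL);
}

// Enqueue one chunk of received PCM data for playback.
void EnqueuePCM(const void *data, UInt32 length)
{
    AudioQueueBufferRef buffer;
    AudioQueueAllocateBuffer(playbackQueue, length, &buffer);
    memcpy(buffer->mAudioData, data, length);
    buffer->mAudioDataByteSize = length;
    AudioQueueEnqueueBuffer(playbackQueue, buffer, 0, NULL);
}

You would call EnqueuePCM with the samples pointer and totalLength obtained from CMBlockBufferGetDataPointer, exactly the data the question's waveform code already extracts.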

Regarding "ios4 - How to play the audio sample buffer from AVCaptureAudioDataOutput", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/5598934/

10-09 01:25