Question
I've been able to use AVFoundation's AVAssetReader class to upload video frames into an OpenGL ES texture. It has a caveat, however, in that it fails when used with an AVURLAsset that points to remote media. This failure isn't well documented, and I'm wondering if there's any way around the shortcoming.
Answer
There's some API that was released with iOS 6 that I've been able to use to make the process a breeze. It doesn't use AVAssetReader at all, and instead relies on a class called AVPlayerItemVideoOutput. An instance of this class can be added to any AVPlayerItem instance via the new -addOutput: method.
Unlike AVAssetReader, this class works fine with AVPlayerItems that are backed by a remote AVURLAsset, and also has the benefit of allowing for a more sophisticated playback interface that supports non-linear playback via -copyPixelBufferForItemTime:itemTimeForDisplay: (instead of AVAssetReader's severely limiting -copyNextSampleBuffer method).
#import <AVFoundation/AVFoundation.h>

// Initialize the AVFoundation state. (This snippet assumes manual reference
// counting, hence the -autorelease; drop it under ARC.)
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:someUrl options:nil];
[asset loadValuesAsynchronouslyForKeys:[NSArray arrayWithObject:@"tracks"] completionHandler:^{
    NSError *error = nil;
    AVKeyValueStatus status = [asset statusOfValueForKey:@"tracks" error:&error];
    if (status == AVKeyValueStatusLoaded)
    {
        // Ask the video output for BGRA pixel buffers, which map directly
        // onto an OpenGL ES texture format.
        NSDictionary *settings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
        AVPlayerItemVideoOutput *output = [[[AVPlayerItemVideoOutput alloc] initWithPixelBufferAttributes:settings] autorelease];
        AVPlayerItem *playerItem = [AVPlayerItem playerItemWithAsset:asset];
        [playerItem addOutput:output];
        AVPlayer *player = [AVPlayer playerWithPlayerItem:playerItem];
        // Assume some instance variables exist here. You'll need them to control
        // the playback of the video (via the AVPlayer), and to copy sample
        // buffers (via the AVPlayerItemVideoOutput).
        [self setPlayer:player];
        [self setPlayerItem:playerItem];
        [self setOutput:output];
    }
    else
    {
        NSLog(@"%@ Failed to load the tracks.", self);
    }
}];
// Now at any later point in time, you can get a pixel buffer
// that corresponds to the current AVPlayer state like this:
CVPixelBufferRef buffer = [[self output] copyPixelBufferForItemTime:[[self playerItem] currentTime] itemTimeForDisplay:nil];
// The "copy" in the method name means you own the returned buffer;
// release it with CVBufferRelease() once you've finished with it.
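In practice you'll want to pull buffers from your render loop rather than at arbitrary times. Here's a minimal sketch of that pattern, assuming a CADisplayLink drives the loop and that the player/playerItem/output properties from the snippet above are in place; the -startRendering and -render: method names are hypothetical, and both methods belong to the same class that holds those properties:

#import <AVFoundation/AVFoundation.h>
#import <QuartzCore/QuartzCore.h>

- (void)startRendering
{
    // Fire once per screen refresh on the main run loop.
    CADisplayLink *displayLink = [CADisplayLink displayLinkWithTarget:self selector:@selector(render:)];
    [displayLink addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
}

- (void)render:(CADisplayLink *)sender
{
    // Map the current host time onto the item's timeline, and only copy
    // a buffer when a new frame is actually ready for display.
    CMTime itemTime = [[self output] itemTimeForHostTime:CACurrentMediaTime()];
    if ([[self output] hasNewPixelBufferForItemTime:itemTime])
    {
        CVPixelBufferRef buffer = [[self output] copyPixelBufferForItemTime:itemTime itemTimeForDisplay:NULL];
        // ... upload `buffer` to an OpenGL ES texture here ...
        CVBufferRelease(buffer);
    }
}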
Once you've got your buffer, you can upload it to OpenGL however you want. I recommend the horribly documented CVOpenGLESTextureCacheCreateTextureFromImage() function, because you'll get hardware acceleration on all the newer devices, which is much faster than glTexSubImage2D(). See Apple's GLCameraRipple and RosyWriter demos for examples.
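For reference, here's a minimal sketch of that texture-cache path for the BGRA buffers produced above. It assumes an EAGLContext (named eaglContext here) is already current on this thread, that `buffer` is the CVPixelBufferRef from the earlier snippet, and that the cache is created once and reused across frames; error checking is trimmed:

#import <CoreVideo/CoreVideo.h>
#import <OpenGLES/ES2/gl.h>
#import <OpenGLES/ES2/glext.h>

// Created once, alongside your EAGLContext, and reused for every frame.
CVOpenGLESTextureCacheRef textureCache = NULL;
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, eaglContext, NULL, &textureCache);

// Per frame: wrap the pixel buffer in a GL texture without copying its bytes.
CVOpenGLESTextureRef texture = NULL;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                             textureCache,
                                             buffer,       // CVPixelBufferRef from above
                                             NULL,
                                             GL_TEXTURE_2D,
                                             GL_RGBA,
                                             (GLsizei)CVPixelBufferGetWidth(buffer),
                                             (GLsizei)CVPixelBufferGetHeight(buffer),
                                             GL_BGRA,
                                             GL_UNSIGNED_BYTE,
                                             0,
                                             &texture);

glBindTexture(CVOpenGLESTextureGetTarget(texture), CVOpenGLESTextureGetName(texture));
// Video frames are usually non-power-of-two, so clamp to edge under ES 2.0.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

// ... draw with the texture ...

// When you're done with the frame, release everything and flush the cache.
CFRelease(texture);
CVBufferRelease(buffer);
CVOpenGLESTextureCacheFlush(textureCache, 0);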