This article describes how to record video in RealityKit. It should serve as a useful reference for anyone tackling the same problem; follow along below to learn how.

Problem Description


I have a RealityKit project in Xcode and I want to record the ARView. I considered ReplayKit, but that is for screen recording, I want to record only the ARView with its camera feed. I considered the open source project ARVideoKit by AFathi but that doesn't support RealityKit... something about different rendering paths. I have found a Medium article which describes how to implement a recording feature in an ARKit app, but the problem is that it requires the method: func renderer(_ renderer: SCNSceneRenderer) which is not available in RealityKit because it is specifically a SceneKit method.

Solution

My answer assumes you are familiar with recording video and audio using AVAssetWriter.

There is a captured frame provided as part of the ARKit session(_:didUpdate:) delegate method. The ARFrame object it returns has a CVPixelBuffer named capturedImage. Handle that frame as you would in a regular video recording session, except that instead of arriving in the captureOutput(_:didOutput:from:) method, it arrives here. You may still need a captureOutput(_:didOutput:from:) method for audio if you also intend to record audio from the microphone.
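Below is a minimal sketch, not the answer author's actual code, of what that delegate can look like. The type name FrameRecorder and the properties writerInput, pixelBufferAdaptor, and isRecording are my own placeholders; the writer setup they rely on is sketched further down after the AVAssetWriter link.

import ARKit
import AVFoundation

// Sketch: capture camera frames from the ARKit session delegate and append them
// to an AVAssetWriter via a pixel-buffer adaptor. All property names here are
// placeholders; the writer/input/adaptor setup is sketched later in this answer.
final class FrameRecorder: NSObject, ARSessionDelegate {
    var isRecording = false
    var writerInput: AVAssetWriterInput?
    var pixelBufferAdaptor: AVAssetWriterInputPixelBufferAdaptor?
    private var startTime: TimeInterval?

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard isRecording,
              let input = writerInput, input.isReadyForMoreMediaData,
              let adaptor = pixelBufferAdaptor else { return }

        // ARFrame.capturedImage is the camera image as a (biplanar YCbCr) CVPixelBuffer.
        let pixelBuffer = frame.capturedImage

        // Derive a zero-based presentation timestamp from the frame's capture time.
        if startTime == nil { startTime = frame.timestamp }
        let elapsed = frame.timestamp - (startTime ?? frame.timestamp)
        let presentationTime = CMTime(seconds: elapsed, preferredTimescale: 600)

        if !adaptor.append(pixelBuffer, withPresentationTime: presentationTime) {
            print("Dropped a frame: the writer input was not ready")
        }
    }
}

With RealityKit you would assign an instance of this object as the view's session delegate (arView.session.delegate = recorder) and flip isRecording once the writer has started.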

In my case, I converted my captured frame into an MTLTexture and used Metal to process my video frames before passing them to an AVAssetWriter. I wanted to draw on top of my camera frames before recording. Unfortunately, doing this is quite involved and, I'm afraid, not a quick copy-and-paste answer. Hopefully pointing you to the capturedImage buffer of the ARFrame returned by ARKit gives you a good place to start.
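For the Metal step, the usual route is to wrap the CVPixelBuffer in an MTLTexture through a CVMetalTextureCache. The sketch below, again my own illustration rather than the answer's code, wraps only the luma plane; a complete pipeline would also wrap the chroma plane or convert the frame to BGRA first.

import Metal
import CoreVideo

// Sketch: expose plane 0 (luma) of the captured YCbCr pixel buffer as an MTLTexture
// so it can be processed with Metal before being handed to the AVAssetWriter.
final class MetalFrameConverter {
    private let device: MTLDevice
    private var textureCache: CVMetalTextureCache?

    init?(device: MTLDevice) {
        self.device = device
        guard CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, device, nil, &textureCache) == kCVReturnSuccess else {
            return nil
        }
    }

    func lumaTexture(from pixelBuffer: CVPixelBuffer) -> MTLTexture? {
        guard let cache = textureCache else { return nil }
        let width = CVPixelBufferGetWidthOfPlane(pixelBuffer, 0)
        let height = CVPixelBufferGetHeightOfPlane(pixelBuffer, 0)

        var cvTexture: CVMetalTexture?
        let status = CVMetalTextureCacheCreateTextureFromImage(
            kCFAllocatorDefault, cache, pixelBuffer, nil,
            .r8Unorm,   // plane 0 of the YCbCr camera image is a single-channel luma plane
            width, height,
            0,          // plane index
            &cvTexture)
        guard status == kCVReturnSuccess, let cvTexture = cvTexture else { return nil }
        return CVMetalTextureGetTexture(cvTexture)
    }
}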

Example of how to record videos using AVAssetWriter: https://programmersought.com/article/80131041234/
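For completeness, here is a minimal sketch of the writer side under my own assumptions (H.264 output, dimensions matching ARFrame.capturedImage, a hypothetical makeWriter helper); the linked article goes into more detail.

import AVFoundation

// Sketch: create an AVAssetWriter, a real-time video input, and a pixel-buffer
// adaptor that the session delegate above can append captured frames to.
func makeWriter(outputURL: URL, width: Int, height: Int) throws
    -> (writer: AVAssetWriter, input: AVAssetWriterInput, adaptor: AVAssetWriterInputPixelBufferAdaptor) {

    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mp4)

    let settings: [String: Any] = [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: width,    // should match the capturedImage dimensions
        AVVideoHeightKey: height
    ]
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: settings)
    input.expectsMediaDataInRealTime = true   // frames arrive live from the ARSession

    let adaptor = AVAssetWriterInputPixelBufferAdaptor(
        assetWriterInput: input,
        sourcePixelBufferAttributes: nil)     // accept the buffers ARKit provides

    writer.add(input)
    guard writer.startWriting() else {
        throw writer.error ?? NSError(domain: "FrameRecorderSketch", code: -1, userInfo: nil)
    }
    writer.startSession(atSourceTime: .zero)  // matches the zero-based timestamps above

    return (writer, input, adaptor)
}

When you stop recording, call markAsFinished() on the input and then finishWriting(completionHandler:) on the writer to produce the finished file.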

You will also need to familiarize yourself with Metal if you want to draw your 3D models into the capture feed before it is encoded to video: https://developer.apple.com/documentation/metalkit/

That concludes this article on how to record video in RealityKit. We hope the answer above is helpful, and thank you for your continued support!

