
Problem Description

I'm trying to understand and use ARKit, but there is one thing that I cannot fully understand.

Apple said about ARAnchor:

But that's not enough. So my questions are:

  • What is ARAnchor exactly?
  • What are the differences between anchors and feature points?
  • Is ARAnchor just part of feature points?
  • And how does ARKit determine its anchors?

Recommended Answer

Updated: June 24, 2020.

Simply put, an ARAnchor is an invisible null object that can hold 3D content (at the anchor's position) in world space. Think of an ARAnchor as a local axis for your 3D object. Every 3D object has a pivot point, right? So this pivot point must meet the ARAnchor.

If you do not use anchors in an ARKit/RealityKit app, your virtual objects can drift away from where they were placed, which can hurt your app's realism and user experience. Anchors are therefore crucial elements in an AR scene.
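As a minimal sketch of this idea (the `sceneView` name is an assumption for an ARSCNView with a running session), you can create an anchor yourself and add it to the session:

```swift
import ARKit

// A minimal sketch: place an ARAnchor one meter in front of the world origin.
// `sceneView` is assumed to be an ARSCNView whose ARSession is already running.
var transform = matrix_identity_float4x4
transform.columns.3.z = -1.0                      // 1 m along -Z (forward)

let anchor = ARAnchor(name: "myAnchor", transform: transform)
sceneView.session.add(anchor: anchor)

// The session then calls renderer(_:didAdd:for:) / session(_:didAdd:),
// where you attach your 3D content to the node created for this anchor.
```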

According to the ARKit documentation (2017):

ARAnchor is the parent class of all the other anchor types existing in the ARKit framework, hence all these subclasses inherit from the ARAnchor class but cannot use it directly in your code. I should also say that ARAnchor and Feature Points have nothing in common; Feature Points are rather for debugging.

ARAnchor doesn't automatically track a real-world target. If you need automation, you have to use the renderer(_:didAdd:for:) or session(_:didAdd:) instance methods, which you can implement if you conform to the ARSCNViewDelegate or ARSessionDelegate protocol, respectively.

Here is an image with a visual representation of a plane anchor. But keep in mind: by default, you can see neither a detected plane nor its corresponding ARPlaneAnchor.

In ARKit you can automatically add ARAnchors to your scene using different scenarios:

  • ARPlaneAnchor

  • If the horizontal and/or vertical planeDetection instance property is ON, ARKit is able to add ARPlaneAnchors to the session. Sometimes an enabled planeDetection considerably increases the time required for the scene-understanding stage.
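A minimal sketch of enabling plane detection (the `sceneView` name is an assumption):

```swift
import ARKit

// Enable detection of both horizontal and vertical planes.
// ARKit will then add an ARPlaneAnchor for every detected surface.
let config = ARWorldTrackingConfiguration()
config.planeDetection = [.horizontal, .vertical]
sceneView.session.run(config)
```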

ARImageAnchor (conforms to ARTrackable protocol)

  • This kind of anchor contains information about the position and orientation of an image detected in a world-tracking session (the anchor is placed at the image's center). To activate detection, use the detectionImages instance property. In ARKit 2.0 you can track up to 25 images, and in ARKit 3.0 up to 100 images. In both cases, however, no more than 4 images can be tracked simultaneously.
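A minimal sketch of activating image detection (the resource-group name "AR Resources" and the `sceneView` name are assumptions; the group must exist in your asset catalog):

```swift
import ARKit

// Load reference images from the asset catalog and enable image detection.
let config = ARWorldTrackingConfiguration()
if let refImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources",
                                                    bundle: nil) {
    config.detectionImages = refImages
    config.maximumNumberOfTrackedImages = 4   // simultaneous tracking limit
}
sceneView.session.run(config)
// ARKit adds an ARImageAnchor for every recognized image.
```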

ARBodyAnchor (conforms to ARTrackable protocol)

  • In the latest versions of ARKit you can enable body tracking by running a session with ARBodyTrackingConfiguration(). You'll get an ARBodyAnchor at the Root Joint of the 3D skeleton.
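A minimal sketch of running a body-tracking session (the `arView` name is an assumption):

```swift
import ARKit
import RealityKit

func runBodyTracking(in arView: ARView) {
    // Body tracking requires a device with an A12 chip or later
    guard ARBodyTrackingConfiguration.isSupported else { return }

    arView.session.run(ARBodyTrackingConfiguration())
    // ARKit delivers an ARBodyAnchor via the session delegate methods.
}
```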

ARFaceAnchor (conforms to ARTrackable protocol)

  • A Face Anchor stores information about topology, pose, and facial expression that can be detected with a front-facing TrueDepth camera or a regular RGB camera. When a face is detected, the Face Anchor is attached at the center of the face, slightly behind the nose. In ARKit 2.0 you can track only one face; in ARKit 3.0, up to 3 faces simultaneously. In ARKit 4.0 the number of tracked faces also depends on the sensor: TrueDepth can track up to 3 faces, an RGB camera just one.
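A minimal sketch of starting face tracking (the `sceneView` name is an assumption):

```swift
import ARKit

func runFaceTracking(in sceneView: ARSCNView) {
    // Face tracking needs a supported camera (TrueDepth, or RGB on A12+)
    guard ARFaceTrackingConfiguration.isSupported else { return }

    let config = ARFaceTrackingConfiguration()
    // Track as many faces as the current sensor allows (1 or 3)
    config.maximumNumberOfTrackedFaces =
        ARFaceTrackingConfiguration.supportedNumberOfTrackedFaces
    sceneView.session.run(config)
}
```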

ARObjectAnchor

  • This kind of anchor contains information about the 6 degrees of freedom (position and orientation) of a real-world 3D object detected in a world-tracking session.
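A minimal sketch of enabling object detection (the resource-group name "AR Objects" and the `sceneView` name are assumptions; the group must contain scanned reference objects in your asset catalog):

```swift
import ARKit

// Enable detection of previously scanned real-world objects.
let config = ARWorldTrackingConfiguration()
if let refObjects = ARReferenceObject.referenceObjects(inGroupNamed: "AR Objects",
                                                       bundle: nil) {
    config.detectionObjects = refObjects
}
sceneView.session.run(config)
// ARKit adds an ARObjectAnchor when a scanned object is recognized.
```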

AREnvironmentProbeAnchor

  • A Probe Anchor provides environmental lighting information for a specific area of space in a world-tracking session. ARKit's artificial intelligence uses it to supply environmental reflections for metallic shaders.
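A minimal sketch of enabling environment texturing (the `sceneView` name is an assumption):

```swift
import ARKit

let config = ARWorldTrackingConfiguration()

// .automatic – ARKit creates AREnvironmentProbeAnchors by itself;
// .manual    – you add AREnvironmentProbeAnchor(transform:extent:) yourself.
config.environmentTexturing = .automatic
sceneView.session.run(config)
```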

ARParticipantAnchor

  • This is an indispensable anchor type for multiuser AR experiences. If you want to employ it, set the isCollaborationEnabled instance property to true and use the MultipeerConnectivity framework.
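A minimal sketch of enabling a collaborative session (the `arView` name is an assumption; the peer networking itself is done with MultipeerConnectivity):

```swift
import ARKit

// Collaborative session sketch: ARKit emits ARSession.CollaborationData
// that you forward to peers via the MultipeerConnectivity framework.
let config = ARWorldTrackingConfiguration()
config.isCollaborationEnabled = true
arView.session.run(config)

// Implement session(_:didOutputCollaborationData:) to send the data to peers,
// and call session.update(with:) when data arrives from a peer.
```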

ARMeshAnchor

  • ARKit 3.5+ subdivides the reconstructed real-world scene around the user into mesh anchors. Mesh anchors constantly update their data as ARKit refines its understanding of the real world. Although ARKit updates the mesh to reflect changes in the physical environment, the mesh's subsequent changes are not intended to reflect in real time.
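A minimal sketch of enabling scene reconstruction (the `arView` name is an assumption; this requires a LiDAR-equipped device):

```swift
import ARKit

let config = ARWorldTrackingConfiguration()

// Scene reconstruction is only supported on LiDAR-equipped devices
if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
    config.sceneReconstruction = .mesh        // or .meshWithClassification
}
arView.session.run(config)
// ARKit then delivers and continuously updates ARMeshAnchors.
```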

ARGeoAnchor (conforms to ARTrackable protocol)

  • In ARKit 4.0+ there is a geo anchor (a.k.a. location anchor) that tracks a geographic location using GPS. This type of anchor identifies a specific area in the world that the app can refer to. As the user moves around the scene, the session updates the location anchor's transform based on the anchor's coordinates and the device's compass heading.
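A minimal sketch of geo tracking (the `arView` name is an assumption, and the coordinate below is a made-up example value; geo tracking is only available in supported regions on supported devices):

```swift
import ARKit
import CoreLocation

// Geo-tracking sketch: check availability first, then add an ARGeoAnchor.
ARGeoTrackingConfiguration.checkAvailability { available, error in
    guard available else { return }

    arView.session.run(ARGeoTrackingConfiguration())

    // Example coordinate (hypothetical value)
    let coordinate = CLLocationCoordinate2D(latitude: 37.3349,
                                            longitude: -122.0090)
    let geoAnchor = ARGeoAnchor(coordinate: coordinate)
    arView.session.add(anchor: geoAnchor)
}
```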

  • Hit-Testing approach

    • Tapping on the screen projects a point onto an invisible detected plane, placing an ARAnchor where an imaginary ray intersects this plane.

    Ray-Casting approach

    • Tapping on the screen also projects a point onto an invisible detected plane, but you can additionally perform ray-casting between positions A and B in a 3D scene. The main difference between ray-casting and hit-testing is that, when using the former, ARKit keeps refining the ray-cast as it learns more about detected surfaces, and ray-casting can be 2D-to-3D or 3D-to-3D.
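A minimal sketch of placing an anchor with a ray-cast (the `arView` name and the `tapLocation` parameter, a CGPoint from a tap gesture, are assumptions):

```swift
import ARKit
import RealityKit

func placeAnchor(at tapLocation: CGPoint, in arView: ARView) {
    // Build a ray-cast query from the 2D screen point
    guard let query = arView.makeRaycastQuery(from: tapLocation,
                                              allowing: .estimatedPlane,
                                              alignment: .any),
          let result = arView.session.raycast(query).first else { return }

    // Put an anchor exactly where the ray hit the detected surface
    let anchor = ARAnchor(transform: result.worldTransform)
    arView.session.add(anchor: anchor)
}
```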

    Feature Points

    • Special yellow points that ARKit automatically generates on high-contrast margins of real-world objects; they can give you a position for placing an ARAnchor. This approach can also be implemented via the hit-testing method.

    ARCamera's transform

    • The iPhone camera's position and orientation (a 4x4 matrix) can easily be used as a position for an ARAnchor.
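A minimal sketch of using the camera's transform (the `sceneView` name is an assumption) to place an anchor half a meter in front of the device:

```swift
import ARKit

func placeAnchorInFrontOfCamera(in sceneView: ARSCNView) {
    // The camera transform is a 4x4 matrix with position and orientation
    guard let cameraTransform = sceneView.session.currentFrame?.camera.transform
    else { return }

    var offset = matrix_identity_float4x4
    offset.columns.3.z = -0.5                  // 0.5 m along camera's -Z axis

    let anchor = ARAnchor(transform: simd_mul(cameraTransform, offset))
    sceneView.session.add(anchor: anchor)
}
```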

    Any arbitrary World Position

    • Place an ARAnchor anywhere you like in your scene.

    And below is a code snippet showing how to implement an anchor inside the renderer(_:didAdd:for:) method:

    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    
        // Proceed only when a plane anchor was added
        guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
        // Grid is a custom SCNNode subclass that visualizes the detected plane
        let grid = Grid(anchor: planeAnchor)
        node.addChildNode(grid)
    }
    


    The RealityKit framework was released in 2019. It has a new class named AnchorEntity. You can use an AnchorEntity as the root point of an entity hierarchy and add it to a scene instance's anchors collection. This enables ARKit to place the anchor entity, along with all of its hierarchical descendants, into the real world. An AnchorEntity automatically tracks a real-world target.

    According to the RealityKit documentation (2019):

    Let's look at a code snippet:

    func makeUIView(context: Context) -> ARView {
    
        let arView = ARView(frame: .zero)
        // Experience is the Swift code auto-generated by Reality Composer
        let modelAnchor = try! Experience.loadModel()
        arView.scene.anchors.append(modelAnchor)
        return arView
    }
    

    AnchorEntity stores three components:

    • Anchoring component
    • Transform component
    • Synchronization component

    Here are the AnchorEntity cases available in the current version of RealityKit:

    // Fixed position in the AR scene
    AnchorEntity(.world(transform: mtx))
    
    // For body tracking (Motion Capture)
    AnchorEntity(.body)
    
    // Pinned to the tracking camera
    AnchorEntity(.camera)
    
    // For face tracking (front camera)
    AnchorEntity(.face)
    
    // For image tracking config
    AnchorEntity(.image(group: "Group", name: "model"))
    
    // For object tracking config
    AnchorEntity(.object(group: "Group", name: "object"))
    
    // For plane detection (here: a seat at least 1 m × 1 m)
    AnchorEntity(.plane([.any], classification: [.seat], minimumBounds: [1.0, 1.0]))
    
    // When you use ray-casting
    AnchorEntity(raycastResult: myRaycastResult)     /* no dot notation */
    
    // When you use ARAnchor with a given identifier
    AnchorEntity(.anchor(identifier: uuid))
    
    // Creates anchor entity on a basis of ARAnchor
    AnchorEntity(anchor: arAnchor)                   /* no dot notation */
    
    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
    
        guard let faceAnchor = anchors.first as? ARFaceAnchor else { return }
        // `model` and `arView` are assumed to be defined elsewhere in your class
        let anchor = AnchorEntity(anchor: faceAnchor)
        anchor.addChild(model)
        arView.scene.anchors.append(anchor)
    }
    

