How to safely decouple rendering from updating the model


Question



    Talking with some game developers, they suggested that a performant OpenGL ES based game engine does not handle everything on the main thread. This allows the game engine to perform better on devices with multiple CPU cores.

    They said that I could decouple updates from rendering. So if I understood this correct, a game engine run loop can work like this:

    1. Setup a CADisplayLink which calls a render method.

    2. render method renders current world model in background.

    3. render method then calls update method on main thread.

    So while it renders in background, it can concurrently already update world model for next iteration.

    To me this all feels a lot wonky. Can someone explain or link to how this concurrent rendering + updating of model is done in reality? It boggles my mind how this would not lead to problems because what if model update takes longer than rendering or other way around. Who waits for what and when.

    What I try to understand is how this is implemented both theoretically from a high level viewpoint but also in detail.

    Solution

    In "reality" there are lots of different approaches. There's not "one true way." What's right for you really depends a lot on factors you've not discussed in your question, but I'll take a shot anyway. I'm also not sure that CADisplayLink is what you want here. I would typically think of that being useful for things that require frame synchronization (i.e. lip-syncing audio and video), which it doesn't sound like you need, but let's look at a couple of different ways you might do this. I think the crux of your question is whether or not there's a need for a second "layer" between the model and the view.

    Background: Single-Threaded (i.e. Main thread only) Example

    Let's first consider how a normal, single-threaded app might work:

    1. User events come in on the main thread
    2. Event handlers trigger calls to controller methods.
    3. Controller methods update model state.
    4. Changes to model state invalidate view state. (i.e. -setNeedsDisplay)
    5. When the next frame comes around, the window server will trigger a re-rendering of the view state from the current model state and display the results

    Note that steps 1-4 can happen many times between occurrences of step 5, however, since this is a single-threaded app, while step 5 is happening, steps 1-4 are not happening, and user events are getting queued up waiting for step 5 to complete. This will typically drop frames in an expected way, assuming steps 1-4 are "very fast".
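
    To make that flow concrete, here is a minimal single-threaded sketch in AppKit terms. It is only an illustration of steps 1-5 above: for brevity the controller is collapsed into the view, and ToyModel and the drawing code are placeholders of my own, not anything prescribed by the answer.

    #import <Cocoa/Cocoa.h>

    // A toy mutable model, standing in for real game state.
    @interface ToyModel : NSObject
    @property (nonatomic, assign) CGPoint lastMouseLocation;
    @end
    @implementation ToyModel
    @end

    @interface ToyGameView : NSView
    @property (nonatomic, strong) ToyModel *model;
    @end

    @implementation ToyGameView

    - (void)mouseMoved:(NSEvent *)theEvent    // steps 1-3: the event handler updates model state
    {
        self.model.lastMouseLocation = [self convertPoint:theEvent.locationInWindow fromView:nil];
        [self setNeedsDisplay:YES];           // step 4: the model change invalidates the view
    }

    - (void)drawRect:(NSRect)dirtyRect        // step 5: the window server triggers a re-render
    {
        // Render directly from the *current* model state, entirely on the main thread.
        NSString *text = NSStringFromPoint(self.model.lastMouseLocation);
        [text drawAtPoint:NSMakePoint(10.0, 10.0) withAttributes:nil];
    }

    @end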

    Decoupling Rendering From the Main Thread

    Now, let's consider the case where you want to offload the rendering to a background thread. In that case, the sequence should look something like this:

    1. User events come in on the main thread
    2. Event handlers trigger calls to controller methods.
    3. Controller methods update model state.
    4. Changes to model state enqueues an asynchronous rendering task for background execution.
    5. If the asynchronous rendering task completes, it puts the resulting bitmap somewhere known to the view, and calls -setNeedsDisplay on the view.
    6. When the next frame comes around, the window server will trigger a call to -drawRect on the view, which is now implemented as taking the most recently completed bitmap from the "known shared place" and copying it into the view.

    There are a few nuances here. Let's first consider the case where you're merely trying to decouple rendering from the main thread (and ignore, for the moment, utilization of multiple cores -- more later):

    You almost certainly never want more than one rendering task running at once. Once you start rendering a frame, you probably don't want to cancel/stop rendering it. You probably want to queue up future, un-started rendering operations into a single slot queue which always contains the last enqueued un-started render operation. This should give you reasonable frame dropping behavior so you don't get "behind" rendering frames that you should just drop instead.
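
    One concrete way to get that behaviour is sketched below. The scheduler class and its method names are inventions of mine for illustration (the answer doesn't prescribe an API), and MyModel stands in for whatever read-only snapshot type you render from. The idea is exactly the one described above: at most one render in flight, plus a single pending slot that newer requests simply overwrite.

    #import <Foundation/Foundation.h>

    @class MyModel;   // whatever immutable snapshot type you render from

    @interface RenderScheduler : NSObject
    - (void)requestRenderWithSnapshot:(MyModel *)snapshot;
    @end

    @implementation RenderScheduler
    {
        dispatch_queue_t _stateQueue;   // serializes access to the two ivars below
        MyModel *_pendingSnapshot;      // the "single slot": last enqueued, un-started request
        BOOL _renderInFlight;
    }

    - (instancetype)init
    {
        if ((self = [super init]))
        {
            _stateQueue = dispatch_queue_create("render.scheduler.state", DISPATCH_QUEUE_SERIAL);
        }
        return self;
    }

    - (void)requestRenderWithSnapshot:(MyModel *)snapshot
    {
        dispatch_async(_stateQueue, ^{
            // A newer request silently replaces any older, not-yet-started one.
            self->_pendingSnapshot = snapshot;
            [self startNextRenderIfIdle];
        });
    }

    - (void)startNextRenderIfIdle   // must be called on _stateQueue
    {
        if (_renderInFlight || _pendingSnapshot == nil) return;

        MyModel *snapshot = _pendingSnapshot;
        _pendingSnapshot = nil;
        _renderInFlight = YES;

        dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
            // ...render `snapshot` into a bitmap and hand it off to the view on the main thread...
            (void)snapshot;
            dispatch_async(self->_stateQueue, ^{
                self->_renderInFlight = NO;
                [self startNextRenderIfIdle];   // pick up the newest pending request, if any
            });
        });
    }

    @end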

    If there exists a fully rendered, but not yet displayed, frame, I think you always want to display that frame. With that in mind, you don't want to call -setNeedsDisplay on the view until the bitmap is complete and in the known place.

    You will need to synchronize your access across the threads. For instance, when you enqueue the rendering operation, the simplest approach would be to take a read-only snapshot of the model state, and pass that to the render operation, which will only read from the snapshot. This frees you from having to synchronize with the "live" game model (which might be being mutated on the main thread by your controller methods in response to future user events.) The other synchronization challenge is the passing of the completed bitmaps to the view and the calling of -setNeedsDisplay. The easiest approach will likely be to have the image be a property on the view, and to dispatch the setting of that property (with the completed image) and the calling of -setNeedsDisplay over to the main thread.
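
    If you don't need a full scheduler, the minimal version of the two synchronization points described here (take an immutable snapshot on the way in, deliver the finished bitmap and the -setNeedsDisplay call back to the main thread on the way out) can be as small as the method below. It is a sketch assuming a controller that owns the model and the view; renderBitmapFromSnapshot:, renderView and latestBitmap are placeholder names of mine.

    - (void)modelDidChange
    {
        // 1. Read-only snapshot: the render task never touches the live model.
        MyModel *snapshot = [self.model copy];

        dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
            // Hypothetical renderer; it only reads from the snapshot.
            NSImage *bitmap = [self renderBitmapFromSnapshot:snapshot];

            // 2. Hand the finished bitmap and the invalidation back to the main thread.
            dispatch_async(dispatch_get_main_queue(), ^{
                self.renderView.latestBitmap = bitmap;
                [self.renderView setNeedsDisplay:YES];
            });
        });
    }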

    There is a little hitch here: if user events are coming in at a high rate, and you're capable of rendering multiple frames in the duration of a single display frame (1/60s), you could end up rendering bitmaps that get dropped on the floor. This approach has the advantage of always providing the most up-to-date frame to the view at display time (reduced perceived latency), but it has the *dis*advantage that it incurs all the computational costs of rendering the frames that never get shown (i.e. power). The right trade off here will be different for every situation, and may include more fine-grained adjustments.

    Utilizing Multiple Cores -- Inherently Parallel Rendering

    Assuming that you've decoupled rendering from the main thread as discussed above, and your rendering operation itself is inherently parallelizable, then just parallelize your one rendering operation while continuing to interact with the view the same way, and you should get multi-core parallelism for free. Perhaps you could divide each frame into N tiles where N is the number of cores, and then once all N tiles finish rendering, you can cobble them together and deliver them to the view as if the rendering operation had been monolithic. If you're working with a read-only snapshot of the model, the setup costs of the N tile tasks should be minimal (since they can all use the same source model.)
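
    Here is roughly what that could look like with dispatch_apply, splitting the frame into horizontal strips, one per core. newImageForTile:fromSnapshot: is a hypothetical per-tile renderer (returning a +1 CGImageRef), and MyModel is again the read-only snapshot; neither comes from the answer itself.

    - (CGImageRef)newFrameImageFromSnapshot:(MyModel *)snapshot size:(CGSize)size
    {
        const size_t tileCount = (size_t)NSProcessInfo.processInfo.activeProcessorCount;
        const CGFloat tileHeight = size.height / tileCount;

        // Plain C array: each dispatch_apply iteration writes a distinct slot, so no locking is needed.
        CGImageRef *tiles = calloc(tileCount, sizeof(CGImageRef));

        dispatch_apply(tileCount, dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^(size_t i) {
            CGRect tileRect = CGRectMake(0, i * tileHeight, size.width, tileHeight);
            // Hypothetical per-tile renderer; all tiles read the same immutable snapshot.
            tiles[i] = [self newImageForTile:tileRect fromSnapshot:snapshot];
        });

        // Composite the strips; dispatch_apply does not return until every tile has finished.
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef ctx = CGBitmapContextCreate(NULL, (size_t)size.width, (size_t)size.height,
                                                 8, 0, colorSpace,
                                                 (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
        for (size_t i = 0; i < tileCount; i++)
        {
            CGContextDrawImage(ctx, CGRectMake(0, i * tileHeight, size.width, tileHeight), tiles[i]);
            CGImageRelease(tiles[i]);
        }
        CGImageRef frame = CGBitmapContextCreateImage(ctx);

        CGContextRelease(ctx);
        CGColorSpaceRelease(colorSpace);
        free(tiles);
        return frame;   // +1 reference; the caller is responsible for releasing it
    }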

    Utilizing Multiple Cores -- Inherently Serial Rendering

    In the case where your rendering operation is inherently serial (most cases, in my experience), your option for utilizing multiple cores is to have as many rendering operations in flight as there are cores. When one frame completes, it would signal any prior render operations (whether enqueued or still in flight) that they may give up and cancel, and then it would set itself up to be displayed by the view, just as in the decoupling-only example.

    As mentioned in the decoupling only case, this always provides the most up-to-date frame to the view at display time, but it incurs all the computational (i.e. power) costs of rendering the frames that never get shown.

    When the Model is Slow...

    I haven't addressed cases where it's actually the update of the model based on user events that is too slow, because in a sense, if that's the case, in many ways, you no longer care about rendering. How can rendering possibly keep up if the model can't even keep up? Furthermore, assuming you find a way to interlock the rendering and the model computations, the rendering is always robbing cycles from the model computations which are, by definition, always behind. Put differently, you can't hope to render something N times per second when the something itself can't be updated N times per second.

    I can conceive of cases where you might be able to offload something like a continuous running physics simulation to a background thread. Such a system would have to manage its real-time performance on its own, and assuming it does that, then you're stuck with the challenge of synchronizing the results from that system with the incoming user event stream. It's a mess.

    In the common case, you really want the event handling and model mutation to be way faster than real-time, and have rendering be the "hard part." I struggle to envision a meaningful case where the model updating is the limiting factor, but yet you still care about decoupling rendering for performance.

    Put differently: If your model can only update at 10Hz, it will never make sense to update your view faster than 10Hz. The principal challenge of that situation comes when user events are coming much faster than 10Hz. That challenge would be to meaningfully discard, sample or coalesce the incoming events so as to remain meaningful and provide a good user experience.
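
    As a sketch of that last point (the class, the selector names and the fixed 10Hz rate are all assumptions of mine, purely for illustration): keep only the newest input sample, and fold it into the slow model once per model step, so the event rate and the model rate are decoupled.

    #import <Cocoa/Cocoa.h>

    @interface InputCoalescer : NSObject
    @property (nonatomic, assign) CGPoint latestPointerLocation;   // only touched on the main thread
    @end

    @implementation InputCoalescer
    {
        NSTimer *_modelTimer;
    }

    - (instancetype)init
    {
        if ((self = [super init]))
        {
            // The model can only step at 10Hz, so there is no point in driving it any faster.
            _modelTimer = [NSTimer scheduledTimerWithTimeInterval:0.1
                                                           target:self
                                                         selector:@selector(stepModel:)
                                                         userInfo:nil
                                                          repeats:YES];
        }
        return self;
    }

    - (void)handlePointerEvent:(NSEvent *)event
    {
        // Called at whatever rate events arrive; only the most recent sample is kept.
        self.latestPointerLocation = event.locationInWindow;
    }

    - (void)stepModel:(NSTimer *)timer
    {
        // Fold the coalesced input into the (slow) model, once per 100ms model step.
        [self updateModelWithPointerLocation:self.latestPointerLocation];
    }

    - (void)updateModelWithPointerLocation:(CGPoint)location
    {
        // Placeholder for the expensive model update the answer is talking about.
    }

    - (void)invalidate
    {
        [_modelTimer invalidate];   // NSTimer retains its target; break the cycle explicitly
        _modelTimer = nil;
    }

    @end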

    Some code

    Here is a trivial example of how decoupled background rendering might look, based on the Cocoa Application template in Xcode. (I realized after coding up this OS X-based example, that the question was tagged with ios, so I guess this is "for whatever it's worth")

    @class MyModel;
    
    @interface NSAppDelegate : NSObject <NSApplicationDelegate>
    @property (assign) IBOutlet NSWindow *window;
    @property (nonatomic, readwrite, copy) MyModel* model;
    @end
    
    @interface MyModel : NSObject <NSMutableCopying>
    @property (nonatomic, readonly, assign) CGPoint lastMouseLocation;
    @end
    
    @interface MyMutableModel : MyModel
    @property (nonatomic, readwrite, assign) CGPoint lastMouseLocation;
    @end
    
    @interface MyBackgroundRenderingView : NSView
    @property (nonatomic, readwrite, assign) CGPoint coordinates;
    @end
    
    @interface MyViewController : NSViewController
    @end
    
    @implementation NSAppDelegate
    {
        MyViewController* _vc;
        NSTrackingArea* _trackingArea;
    }
    
    - (void)applicationDidFinishLaunching:(NSNotification *)aNotification
    {
        // Insert code here to initialize your application
        self.window.acceptsMouseMovedEvents = YES;
    
        int opts = (NSTrackingActiveAlways | NSTrackingInVisibleRect | NSTrackingMouseMoved);
        _trackingArea = [[NSTrackingArea alloc] initWithRect: [self.window.contentView bounds]
                                                            options:opts
                                                              owner:self
                                                           userInfo:nil];
        [self.window.contentView addTrackingArea: _trackingArea];
    
    
        _vc = [[MyViewController alloc] initWithNibName: NSStringFromClass([MyViewController class]) bundle: [NSBundle mainBundle]];
        _vc.representedObject = self;
    
        _vc.view.frame = [self.window.contentView bounds];
        [self.window.contentView addSubview: _vc.view];
    }
    
    - (void)mouseEntered:(NSEvent *)theEvent
    {
    }
    
    - (void)mouseExited:(NSEvent *)theEvent
    {
    }
    
    - (void)mouseMoved:(NSEvent *)theEvent
    {
        // Update the model for mouse movement.
        MyMutableModel* mutableModel = self.model.mutableCopy ?: [[MyMutableModel alloc] init];
        mutableModel.lastMouseLocation = theEvent.locationInWindow;
        self.model = mutableModel;
    }
    
    @end
    
    @interface MyModel ()
    // Re-declare privately so the setter exists for the mutable subclass to use
    @property (nonatomic, readwrite, assign) CGPoint lastMouseLocation;
    @end
    
    @implementation MyModel
    
    @synthesize lastMouseLocation;
    
    - (id)copyWithZone:(NSZone *)zone
    {
        if ([self isMemberOfClass: [MyModel class]])
        {
            return self;
        }
    
        MyModel* copy = [[MyModel alloc] init];
        copy.lastMouseLocation = self.lastMouseLocation;
        return copy;
    }
    
    - (id)mutableCopyWithZone:(NSZone *)zone
    {
        MyMutableModel* copy = [[MyMutableModel alloc] init];
        copy.lastMouseLocation = self.lastMouseLocation;
        return copy;
    }
    
    @end
    
    @implementation MyMutableModel
    @end
    
    @interface MyViewController (Downcast)
    - (MyBackgroundRenderingView*)view; // downcast
    @end
    
    @implementation MyViewController
    
    static void * const MyViewControllerKVOContext = (void*)&MyViewControllerKVOContext;
    
    - (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil
    {
        if (self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil])
        {
            [self addObserver: self forKeyPath: @"representedObject.model.lastMouseLocation" options: NSKeyValueObservingOptionOld | NSKeyValueObservingOptionNew | NSKeyValueObservingOptionInitial context: MyViewControllerKVOContext];
        }
        return self;
    }
    
    - (void)dealloc
    {
        [self removeObserver: self forKeyPath: @"representedObject.model.lastMouseLocation" context: MyViewControllerKVOContext];
    }
    
    - (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context
    {
        if (MyViewControllerKVOContext == context)
        {
            // update the view...
            NSValue* oldCoordinates = change[NSKeyValueChangeOldKey];
            oldCoordinates = [oldCoordinates isKindOfClass: [NSValue class]] ? oldCoordinates : nil;
            NSValue* newCoordinates = change[NSKeyValueChangeNewKey];
            newCoordinates = [newCoordinates isKindOfClass: [NSValue class]] ? newCoordinates : nil;
            CGPoint old = CGPointZero, new = CGPointZero;
            [oldCoordinates getValue: &old];
            [newCoordinates getValue: &new];
    
            if (!CGPointEqualToPoint(old, new))
            {
                self.view.coordinates = new;
            }
        }
        else
        {
            [super observeValueForKeyPath:keyPath ofObject:object change:change context:context];
        }
    }
    
    @end
    
    @interface MyBackgroundRenderingView ()
    @property (nonatomic, readwrite, retain) id toDisplay; // doesn't need to be atomic because it should only ever be used on the main thread.
    @end
    
    @implementation MyBackgroundRenderingView
    {
        // Pointer-sized reads/writes don't tear on these platforms, so these frame counters are shared across threads without a lock.
        intptr_t _lastFrameStarted;
        intptr_t _lastFrameDisplayed;
        CGPoint _coordinates;
    }
    
    @synthesize coordinates = _coordinates;
    
    - (void)setCoordinates:(CGPoint)coordinates
    {
        _coordinates = coordinates;
    
        // instead of setNeedDisplay...
        [self doBackgroundRenderingForPoint: coordinates];
    }
    
    - (void)setNeedsDisplay:(BOOL)flag
    {
        if (flag)
        {
            [self doBackgroundRenderingForPoint: self.coordinates];
        }
    }
    
    - (void)doBackgroundRenderingForPoint: (CGPoint)value
    {
        NSAssert(NSThread.isMainThread, @"main thread only...");
    
        const intptr_t thisFrame = _lastFrameStarted++;
        const NSSize imageSize = self.bounds.size;
        const NSRect imageRect = NSMakeRect(0, 0, imageSize.width, imageSize.height);
    
        dispatch_async(dispatch_get_global_queue(0, 0), ^{
    
            // If another frame is already queued up, don't bother starting this one
            if (_lastFrameStarted - 1 > thisFrame)
            {
                dispatch_async(dispatch_get_global_queue(0, 0), ^{ NSLog(@"Not rendering a frame because there's a more recent one queued up already."); });
                return;
            }
    
            // introduce an arbitrary fake delay between 1ms and 1/15th of a second)
            const uint32_t delays = arc4random_uniform(65);
            for (NSUInteger i = 1; i < delays; i++)
            {
                // A later frame has been displayed. Give up on rendering this old frame.
                if (_lastFrameDisplayed > thisFrame)
                {
                    dispatch_async(dispatch_get_global_queue(0, 0), ^{ NSLog(@"Aborting rendering a frame that wasn't ready in time"); });
                    return;
                }
                usleep(1000);
            }
    
            // render image...
            NSImage* image = [[NSImage alloc] initWithSize: imageSize];
            [image lockFocus];
            NSString* coordsString = [NSString stringWithFormat: @"%g,%g", value.x, value.y];
            [coordsString drawInRect: imageRect withAttributes: nil];
            [image unlockFocus];
    
            NSArray* toDisplay = @[ image, @(thisFrame) ];
            dispatch_async(dispatch_get_main_queue(), ^{
                self.toDisplay = toDisplay;
                [super setNeedsDisplay: YES];
            });
        });
    }
    
    - (void)drawRect:(NSRect)dirtyRect
    {
        NSArray* toDisplay = self.toDisplay;
        if (!toDisplay)
            return;
        NSImage* img = toDisplay[0];
        const int64_t frameOrdinal = [toDisplay[1] longLongValue];
    
        if (frameOrdinal < _lastFrameDisplayed)
            return;
    
        [img drawInRect: self.bounds];
        _lastFrameDisplayed = frameOrdinal;
    
        dispatch_async(dispatch_get_global_queue(0, 0), ^{ NSLog(@"Displayed a frame"); });
    }
    
    @end
    

    Conclusion

    In the abstract, just decoupling rendering from the main thread, but not necessarily parallelizing it (i.e. the first case), may be enough. To go further from there, you probably want to investigate ways to parallelize your per-frame render operation. Parallelizing the drawing of multiple frames confers some advantages, but in a battery powered environment like iOS it's likely to turn your app/game into a battery hog.

    For any situation in which model updates, and not rendering, are the limiting reagent, the right approach is going to depend heavily on the specific details of the situation, and is much harder to generalize, compared to rendering.

