Recently I read a blog post by mikeash that explains the implementation of dispatch_once in detail. I also obtained its source code from macosforge.

I understand most of the code except for this one line:

dispatch_atomic_maximally_synchronizing_barrier();

It is a macro, defined as follows:

#define dispatch_atomic_maximally_synchronizing_barrier() \
    do { unsigned long _clbr; __asm__ __volatile__( \
    "cpuid" \
    : "=a" (_clbr) : "0" (0) : "rbx", "rcx", "rdx", "cc", "memory" \
    ); } while(0)


I know it is used to "defeat the speculative read-ahead of peer CPUs", but I don't understand cpuid or the words that follow it. I know nothing about assembly language.

Can anyone explain it to me in detail? Thanks a lot.

Best Answer

The libdispatch source code pretty much explains it.

http://opensource.apple.com/source/libdispatch/libdispatch-442.1.4/src/shims/atomic.h

// see comment in dispatch_once.c
#define dispatch_atomic_maximally_synchronizing_barrier() \


http://opensource.apple.com/source/libdispatch/libdispatch-442.1.4/src/once.c

// The next barrier must be long and strong.
//
// The scenario: SMP systems with weakly ordered memory models
// and aggressive out-of-order instruction execution.
//
// The problem:
//
// The dispatch_once*() wrapper macro causes the callee's
// instruction stream to look like this (pseudo-RISC):
//
//      load r5, pred-addr
//      cmpi r5, -1
//      beq  1f
//      call dispatch_once*()
//      1f:
//      load r6, data-addr
//
// May be re-ordered like so:
//
//      load r6, data-addr
//      load r5, pred-addr
//      cmpi r5, -1
//      beq  1f
//      call dispatch_once*()
//      1f:
//
// Normally, a barrier on the read side is used to workaround
// the weakly ordered memory model. But barriers are expensive
// and we only need to synchronize once! After func(ctxt)
// completes, the predicate will be marked as "done" and the
// branch predictor will correctly skip the call to
// dispatch_once*().
//
// A far faster alternative solution: Defeat the speculative
// read-ahead of peer CPUs.
//
// Modern architectures will throw away speculative results
// once a branch mis-prediction occurs. Therefore, if we can
// ensure that the predicate is not marked as being complete
// until long after the last store by func(ctxt), then we have
// defeated the read-ahead of peer CPUs.
//
// In other words, the last "store" by func(ctxt) must complete
// and then N cycles must elapse before ~0l is stored to *val.
// The value of N is whatever is sufficient to defeat the
// read-ahead mechanism of peer CPUs.
//
// On some CPUs, the most fully synchronizing instruction might
// need to be issued.

dispatch_atomic_maximally_synchronizing_barrier();


For the x86_64 and i386 architectures, it uses the cpuid instruction to flush the instruction pipeline, as @Michael mentioned. cpuid is a serializing instruction, which prevents memory reordering. __sync_synchronize is used for the other architectures.

https://gcc.gnu.org/onlinedocs/gcc-4.6.2/gcc/Atomic-Builtins.html

__sync_synchronize (...)
This builtin issues a full memory barrier.



  These builtins are considered a full barrier. That is, no memory operand will be moved across the operation, either forward or backward. Further, instructions will be issued as necessary to prevent the processor from speculating loads across the operation and from queuing stores after the operation.

A similar question, "ios - What does dispatch_atomic_maximally_synchronizing_barrier() mean?", can be found on Stack Overflow: https://stackoverflow.com/questions/27562334/
