Problem description
Say I have two threads A and B writing to global Boolean variables fA and fB respectively, which are initially set to false and are protected by std::mutex objects mA and mB respectively:
// Thread A
mA.lock();
assert( fA == false );
fA = true;
mA.unlock();
// Thread B
mB.lock();
assert( fB == false );
fB = true;
mB.unlock();
Is it possible to observe the modifications of fA and fB in different orders in different threads C and D? In other words, can the following program
#include <atomic>
#include <cassert>
#include <iostream>
#include <mutex>
#include <thread>
using namespace std;
mutex mA, mB, coutMutex;
bool fA = false, fB = false;
int main()
{
thread A{ []{
lock_guard<mutex> lock{mA};
fA = true;
} };
thread B{ [] {
lock_guard<mutex> lock{mB};
fB = true;
} };
thread C{ [] { // reads fA, then fB
mA.lock();
const auto _1 = fA;
mA.unlock();
mB.lock();
const auto _2 = fB;
mB.unlock();
lock_guard<mutex> lock{coutMutex};
cout << "Thread C: fA = " << _1 << ", fB = " << _2 << endl;
} };
thread D{ [] { // reads fB, then fA (i. e. vice versa)
mB.lock();
const auto _3 = fB;
mB.unlock();
mA.lock();
const auto _4 = fA;
mA.unlock();
lock_guard<mutex> lock{coutMutex};
cout << "Thread D: fA = " << _4 << ", fB = " << _3 << endl;
} };
A.join(); B.join(); C.join(); D.join();
}
legally print
Thread C: fA = 1, fB = 0
Thread D: fA = 0, fB = 1
according to the C++ standard?
Note: A spin-lock can be implemented using std::atomic<bool> variables with either sequentially consistent memory order or acquire/release memory order. So the question is whether a std::mutex behaves like a sequentially consistent spin-lock or like an acquire/release spin-lock.
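For concreteness, here is a minimal sketch of the kind of acquire/release spin-lock the note has in mind (the class name AcqRelSpinLock and the surrounding main() are my own illustration, not part of the question). It acquires on lock() and releases on unlock(), so it also satisfies the BasicLockable requirements and can be used with std::lock_guard just like std::mutex; a sequentially consistent variant would simply use std::memory_order_seq_cst in both places.

#include <atomic>
#include <mutex>   // std::lock_guard

// Hypothetical acquire/release spin-lock built on std::atomic<bool>.
class AcqRelSpinLock {
    std::atomic<bool> locked{false};
public:
    void lock() {
        // Acquire: after this succeeds we see everything published
        // by the previous unlock().
        while (locked.exchange(true, std::memory_order_acquire)) {
            // spin until the previous holder releases the lock
        }
    }
    void unlock() {
        // Release: everything written before this becomes visible
        // to the next thread that locks the spin-lock.
        locked.store(false, std::memory_order_release);
    }
};

int main() {
    AcqRelSpinLock s;
    bool data = false;
    {
        // Works because AcqRelSpinLock is BasicLockable (lock()/unlock()).
        std::lock_guard<AcqRelSpinLock> lock{s};
        data = true;
    }
    return data ? 0 : 1;
}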
That output isn't possible, but std::mutex is not necessarily sequentially consistent. Acquire/release is enough to rule out that behaviour.
std::mutex is not defined in the standard to be sequentially consistent; the standard only says that a prior unlock() synchronises with a later lock() on the same mutex. Synchronises-with seems to be defined in the same way as std::memory_order::release/acquire (see this question). As far as I can see, an acquire/release spinlock would satisfy the standard's requirements for std::mutex.
Big edit:
However, I don't think that means what you think (or what I thought). The output is still not possible, since acquire/release semantics are enough to rule it out. This is a kind of subtle point that is better explained here. It seems obviously impossible at first but I think it's right to be cautious with stuff like this.
From the standard, unlock() synchronises with lock(). That means anything that happens before the unlock() is visible after the lock(). Happens before (henceforth ->) is a slightly weird relation, explained better in the above link, but because there are mutexes around everything in this example, everything works the way you would expect: const auto _1 = fA; happens before const auto _2 = fB;, and any changes visible to a thread when it unlock()s a mutex are visible to the next thread that lock()s that mutex. The relation also has the expected properties, e.g. if X happens before Y and Y happens before Z, then X -> Z; and if X happens before Y, then Y does not happen before X.
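As a small, self-contained illustration of that guarantee (the variable names below are invented for the example), the consumer here is guaranteed to read data == 42 whenever it sees ready == true, because the producer's unlock() synchronises with the consumer's later lock() on the same mutex:

#include <iostream>
#include <mutex>
#include <thread>

std::mutex m;
int data = 0;        // plain, non-atomic data
bool ready = false;  // both variables protected by m

int main() {
    std::thread producer{ [] {
        std::lock_guard<std::mutex> lock{m};
        data = 42;      // happens before the unlock at the end of the scope
        ready = true;
    } };
    std::thread consumer{ [] {
        for (;;) {
            std::lock_guard<std::mutex> lock{m};
            if (ready) {
                // The producer's unlock() synchronises with our lock(),
                // so the write data = 42 is guaranteed to be visible here.
                std::cout << data << '\n';
                return;
            }
        }
    } };
    producer.join();
    consumer.join();
}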
From here it's not hard to see the contradiction that confirms the intuitive answer.
In short, there's a well-defined order of operations for each mutex - e.g. for mutex mA, threads A, C and D hold the lock in some sequence. For thread D to print fA = 0, it must lock mA before thread A does; conversely, for thread C to print fA = 1, it must lock mA after thread A. So the lock sequence for mA is D(mA) -> A(mA) -> C(mA). For mutex mB the sequence must be C(mB) -> B(mB) -> D(mB).
But from the program we know C(mA) -> C(mB), so that lets us put both together to get D(mA) -> A(mA) -> C(mA) -> C(mB) -> B(mB) -> D(mB), which means D(mA) -> D(mB). But the code also gives us D(mB) -> D(mA), which is a contradiction, meaning your observed output is not possible.
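If you want to convince yourself mechanically, you can encode the six lock acquisitions as nodes of a directed graph, add one edge per "->" constraint that the hypothetical output would force, and look for a cycle. The little checker below (entirely my own sketch, with invented names) finds exactly the cycle described above:

#include <array>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

// The six lock acquisitions from the argument above.
enum Node { D_mA, A_mA, C_mA, C_mB, B_mB, D_mB, NODE_COUNT };
const std::array<std::string, NODE_COUNT> names{
    "D(mA)", "A(mA)", "C(mA)", "C(mB)", "B(mB)", "D(mB)"};

// Happens-before edges forced by the hypothetical output
// "C: fA = 1, fB = 0" / "D: fA = 0, fB = 1" plus program order in C and D.
const std::vector<std::pair<Node, Node>> edges{
    {D_mA, A_mA}, {A_mA, C_mA},   // lock order on mA
    {C_mB, B_mB}, {B_mB, D_mB},   // lock order on mB
    {C_mA, C_mB},                 // program order in thread C
    {D_mB, D_mA}};                // program order in thread D

// Depth-first search: is `to` reachable from `from` by following edges?
bool reachable(Node from, Node to) {
    std::vector<bool> seen(NODE_COUNT, false);
    std::vector<Node> stack{from};
    while (!stack.empty()) {
        const Node n = stack.back();
        stack.pop_back();
        if (n == to) return true;
        if (seen[n]) continue;
        seen[n] = true;
        for (const auto& e : edges)
            if (e.first == n) stack.push_back(e.second);
    }
    return false;
}

int main() {
    for (const auto& e : edges)
        if (reachable(e.second, e.first)) {   // this edge closes a cycle
            std::cout << "Cycle through " << names[e.first] << " -> "
                      << names[e.second] << ": the output is impossible.\n";
            return 0;
        }
    std::cout << "No cycle: the output would be allowed.\n";
}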
This outcome is no different for an acquire/release spinlock; I think everyone was confusing regular acquire/release memory accesses to a variable with accesses to a variable protected by a spinlock. The difference is that with a spinlock, the reading threads also perform a compare/exchange and a release write, which is a completely different scenario from a single release write and a single acquire read.
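To make that difference concrete, here is the same experiment written with bare std::atomic<bool> release stores and acquire loads instead of mutexes (again my own illustration, not code from the question). Without the read-modify-write that every lock() performs, the standard really does allow threads C and D to disagree about the order of the two writes; promoting every access to memory_order_seq_cst forbids the disagreement again.

#include <atomic>
#include <iostream>
#include <mutex>
#include <thread>

std::atomic<bool> fA{false}, fB{false};
std::mutex coutMutex;  // only to keep the two output lines from interleaving

int main() {
    std::thread A{ [] { fA.store(true, std::memory_order_release); } };
    std::thread B{ [] { fB.store(true, std::memory_order_release); } };
    std::thread C{ [] { // reads fA, then fB
        const bool a = fA.load(std::memory_order_acquire);
        const bool b = fB.load(std::memory_order_acquire);
        std::lock_guard<std::mutex> lock{coutMutex};
        std::cout << "Thread C: fA = " << a << ", fB = " << b << '\n';
    } };
    std::thread D{ [] { // reads fB, then fA
        const bool b = fB.load(std::memory_order_acquire);
        const bool a = fA.load(std::memory_order_acquire);
        std::lock_guard<std::mutex> lock{coutMutex};
        std::cout << "Thread D: fA = " << a << ", fB = " << b << '\n';
    } };
    A.join(); B.join(); C.join(); D.join();
    // With acquire/release only, "C: fA = 1, fB = 0" together with
    // "D: fA = 0, fB = 1" is a permitted outcome (a variant of the classic
    // IRIW litmus test); it is not permitted if every access is seq_cst.
}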
If you used a sequentially consistent spinlock then this wouldn't affect the output. The only difference is that you could always categorically answer questions like "mutex A was locked before mutex B" from a separate thread that didn't acquire either lock. But for this example and most others, that kind of statement isn't useful, hence acquire/release being the standard.