I have the following situation: I want to start a somewhat long running asynchronous thread in response to some event, but only want to start one thread at a time in the face of multiple events coming in. Basically, I have the following:
std::atomic<bool> in_progress{false};

void foo()
{
    // Do some work that's fairly intensive
    in_progress.store(false, /*???*/); // op 1
}

void bar()
{
    if (in_progress.load(/*???*/)) // op 2
        return;

    in_progress.store(true, /*???*/); // op 3

    // calling thread is running an event loop
    // t finishes whenever it's done
    std::thread t(foo);
    t.detach();
}
Now, obviously in_progress needs to be synchronized, but my question is what memory ordering, if any, should be used in these operations?
My initial thought was that all three operations could use std::memory_order_relaxed, since they're not indexing into an array or preparing any memory; I only need atomicity to protect against concurrent reads/writes. But then I thought this isn't correct, since the construction of the thread object itself is a "write" that needs some memory-order guarantees around it. In other words, the construction of t cannot be observed to occur before operation 3.
My problem, then, is that I'm not sure what memory order to use here, or even whether an atomic flag is sufficient in this case. If operation 3 happened after the construction of t in program order, the answer would be clear: release & acquire semantics are appropriate (prepare the thread object, then "publish" the results through a release store). However, I can't move operation 3 after the construction of t, because then I wouldn't actually have a preventative guard here. What I'm really looking for is a memory order that prevents writes from being reordered before the atomic write, which seems to be the flip side of what std::memory_order_release guarantees.
It may also be that atomics can't provide such an ordering and some other synchronization primitive is required; I guess a semaphore to signal when the thread has been constructed.
asked Feb 4 at 14:59 by Pacopenguin
- You'd maybe need a mechanism to see if the thread is still running - this could be useful: stackoverflow/questions/9094422/… – Sven Nilsson, Feb 4 at 15:07
- Side note: `t.detach();` is an immediate red flag. You should ALWAYS synchronize with running threads at shutdown, either through join or a condition_variable-based mechanism. And why not start a thread at startup and use a queue of events to handle (again using std::condition_variable to synchronize producer and consumer)? – Pepijn Kramer, Feb 4 at 16:14
1 Answer
A std::atomic<T>::load followed by a std::atomic<T>::store is usually entirely wrong. Are you absolutely certain that you did not require a std::atomic<T>::exchange instead, to actually combine those two into a single operation?
You are risking that somebody else has already seen the same flag, and now both are storing the same value.
If so, it's:
void foo()
{
    // Do some work that's fairly intensive

    // Ensure that nothing gets reordered past the point of release.
    in_progress.store(false, std::memory_order_release); // op 1
}

void bar()
{
    // Nothing can be reordered prior to this point - though when starting a
    // thread like this it effectively won't matter, as there are several
    // memory barriers within std::thread as well.
    if (in_progress.exchange(true, std::memory_order_acquire))
        return;

    // calling thread is running an event loop
    // t finishes whenever it's done
    std::thread t(foo);
    t.detach();
}
On an unrelated side note - you want to avoid std::thread::detach() at all costs. It's a really bad idea that will very likely push you into undefined behavior at application shutdown, as the deinitialization of static variables and the unloading of modules start racing and clashing with the potentially still running thread.
Join ALL the threads you ever spawn. Likewise, if you are using std::async with std::launch::async, make sure that you definitely wait for the future to be ready, without exceptions.
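A sketch of that std::async variant, under the question's assumption that bar() is only ever called from the single event-loop thread (so the future itself can serve as the "in progress" guard, no atomic needed). The `shutdown()` function and `work_ran` flag are my own illustrative additions:

```cpp
#include <atomic>
#include <chrono>
#include <future>

std::atomic<bool> work_ran{false};

void foo()
{
    // Do some work that's fairly intensive.
    work_ran.store(true, std::memory_order_relaxed);
}

std::future<void> task;

void bar()
{
    // Only the event-loop thread touches 'task'. Skip this event if the
    // previous run hasn't finished yet; a zero-timeout wait_for is a
    // non-blocking "is it done?" poll.
    if (task.valid() &&
        task.wait_for(std::chrono::seconds(0)) != std::future_status::ready)
        return;

    // std::launch::async guarantees a real thread, never deferred execution.
    task = std::async(std::launch::async, foo);
}

void shutdown()
{
    // Wait for the work instead of detaching from it.
    if (task.valid())
        task.get();
}
```

Because shutdown() blocks on the future, the worker can never outlive static deinitialization the way a detached thread can.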