
c++ - Preventing reordering of writes before atomic store - Stack Overflow


I have the following situation: I want to start a somewhat long running asynchronous thread in response to some event, but only want to start one thread at a time in the face of multiple events coming in. Basically, I have the following:

void foo() 
{

   // Do some work that's fairly intensive

   in_progress.store(false, /*???*/); // op 1
}

void bar()
{
    if(in_progress.load(/*???*/)) // op 2
        return;
    
    in_progress.store(true, /*???*/); // op 3

    // calling thread is running an event loop
    // t finishes whenever it's done
    std::thread t(foo);
    t.detach();
}

Now, obviously in_progress needs to be synchronized, but my question is what memory ordering, if any, should be used in these operations?

My initial thought was that all three operations could use std::memory_order_relaxed, since they're not indexing into an array or preparing any memory, I only need atomicity to protect against concurrent reads / writes. But then I thought that this isn't correct, since the construction of the thread object itself is a "write" that needs to have some memory order guarantees surrounding it. In other words, the construction of t cannot occur before operation 3 in observed order.

My problem, then, is that I'm not sure what memory order to use here, or even whether an atomic flag is sufficient in this case. If operation 3 happened after the construction of t in program order, the answer would be clear: release/acquire semantics are appropriate (prepare the thread object, then "publish" it through a release store). However, I can't move operation 3 after the construction of t, because then it wouldn't actually act as a guard. What I'm really looking for is a memory order that prevents writes from being reordered before the atomic write, which seems like the flip side of what std::memory_order_release guarantees.

It may also be that atomics can't provide such a memory ordering, and some other synchronization primitive is required, I guess a semaphore to signal when the thread has been constructed.


asked Feb 4 at 14:59 by Pacopenguin
  • You'd maybe need a mechanism to see if the thread is still running - this could be useful: stackoverflow/questions/9094422/… – Sven Nilsson Commented Feb 4 at 15:07
  • Side note: `t.detach();` is an immediate red flag; you should ALWAYS synchronize with running threads at shutdown, either through join or a condition_variable-based mechanism. And why not start a thread at startup and use a queue of events to handle (again using std::condition_variable to synchronize producer and consumer)? – Pepijn Kramer Commented Feb 4 at 16:14
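The worker-thread-plus-event-queue design suggested in the second comment can be sketched roughly like this (the `EventLoop` class and the `int` event payload are illustrative assumptions, not part of the question):

```cpp
#include <atomic>
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

std::atomic<int> g_processed{0}; // stand-in for observable work, for demonstration

// One long-lived worker drains a queue of events, so no per-event
// thread is ever spawned or detached.
class EventLoop {
public:
    EventLoop() : worker_([this] { run(); }) {}

    ~EventLoop() {
        {
            std::lock_guard<std::mutex> lk(m_);
            stop_ = true;
        }
        cv_.notify_one();
        worker_.join(); // always join at shutdown
    }

    void post(int event) {
        {
            std::lock_guard<std::mutex> lk(m_);
            events_.push(event);
        }
        cv_.notify_one();
    }

private:
    void run() {
        std::unique_lock<std::mutex> lk(m_);
        for (;;) {
            cv_.wait(lk, [this] { return stop_ || !events_.empty(); });
            if (stop_ && events_.empty())
                return; // drained and told to stop
            int ev = events_.front();
            events_.pop();
            lk.unlock();
            (void)ev;       // the "fairly intensive" work would happen here
            ++g_processed;
            lk.lock();
        }
    }

    std::mutex m_;
    std::condition_variable cv_;
    std::queue<int> events_;
    bool stop_ = false;
    std::thread worker_; // declared last: started after the members above exist
};
```

The destructor drains any remaining events before returning, so posted work is never silently dropped at shutdown.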

1 Answer


A std::atomic<T>::load followed by a std::atomic<T>::store is usually entirely wrong. Are you absolutely certain you don't need a std::atomic<T>::exchange instead, to combine those two into a single atomic operation?

You are risking that another thread has already seen the flag clear, and now both threads store true and both spawn a worker.

If so, it's:

void foo() 
{

   // Do some work that's fairly intensive

   // Ensure that nothing gets reordered past the point of release.
   in_progress.store(false, std::memory_order_release); // op 1
}

void bar()
{
    // Nothing can be reordered prior to this point - though with a std::thread
    // started right below, this has little extra effect in practice, since
    // std::thread construction already involves several memory barriers.
    if(in_progress.exchange(true, std::memory_order_acquire))
        return;

    // calling thread is running an event loop
    // t finishes whenever it's done
    std::thread t(foo);
    t.detach();
}
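The single-caller guarantee of exchange can be demonstrated even without spawning threads. A minimal sketch (the helper names try_begin and end are mine, not from the question):

```cpp
#include <atomic>

std::atomic<bool> in_progress{false};

// Returns true if this caller won the flag and may spawn the worker.
bool try_begin() {
    // exchange sets the flag and atomically reports its previous value,
    // so at most one concurrent caller can ever see "false" here.
    return !in_progress.exchange(true, std::memory_order_acquire);
}

// Called by the worker when it finishes; the release store pairs with
// the acquire exchange above, publishing the worker's writes.
void end() {
    in_progress.store(false, std::memory_order_release);
}
```

With separate load and store there is a window between the check and the set in which a second caller can also see false; exchange closes that window by making check-and-set one indivisible operation.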

On an unrelated side note - you want to avoid std::thread::detach() at all costs. It's a really bad idea that will very likely push you into undefined behavior at application shutdown, as the deinitialization of static variables and the unloading of modules start racing with the potentially still-running thread.

Join ALL the threads you ever spawn. Likewise, if you are using std::async with std::launch::async, make sure you definitely wait for the future to become ready without exceptions.
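A minimal sketch of that std::async alternative, assuming only one event-loop thread ever calls bar (so the future itself needs no synchronization); the work_done flag is added purely for demonstration:

```cpp
#include <atomic>
#include <chrono>
#include <future>

std::atomic<bool> work_done{false}; // for demonstration only
std::future<void> task;             // owned by the event-loop thread

void foo() {
    // Do some work that's fairly intensive.
    work_done.store(true, std::memory_order_release);
}

void bar() {
    // A previous run is still going: drop this event.
    if (task.valid() &&
        task.wait_for(std::chrono::seconds(0)) != std::future_status::ready)
        return;

    if (task.valid())
        task.get(); // reap the finished run (rethrows any stored exception)

    task = std::async(std::launch::async, foo);
}
```

Note that a future obtained from std::async blocks in its destructor, so letting task go out of scope at shutdown is itself a join; there is no detached thread to race with static deinitialization.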
