From b39acfc3c618034a5ceb36a38be09f293a5a9147 Mon Sep 17 00:00:00 2001
From: Paolo Bonzini
Date: Fri, 2 Mar 2018 08:01:00 -0500
Subject: [PATCH] cpu-exec: tighten barrier on TCG_EXIT_REQUESTED

This seems to have worked just fine so far on weakly-ordered
architectures, but I don't see anything that prevents the
reordering from:

    store 1 to exit_request
    store 1 to tcg_exit_req
                                 load tcg_exit_req
                                 store 0 to tcg_exit_req
                                 load exit_request
                                 store 0 to exit_request
    store 1 to exit_request
    store 1 to tcg_exit_req

to this:

    store 1 to exit_request
    store 1 to tcg_exit_req
                                 load tcg_exit_req
                                 load exit_request
    store 1 to exit_request
    store 1 to tcg_exit_req
                                 store 0 to tcg_exit_req
                                 store 0 to exit_request

therefore losing a request.  It's possible that other memory barriers
(e.g. in rcu_read_unlock) are hiding it, but better safe than sorry.

Backports commit a70fe14b7dddcb944fbd6c9f3739cd3a22089af5 from qemu
---
 qemu/cpu-exec.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/qemu/cpu-exec.c b/qemu/cpu-exec.c
index 50df1414..731e9549 100644
--- a/qemu/cpu-exec.c
+++ b/qemu/cpu-exec.c
@@ -405,11 +405,11 @@ static inline void cpu_loop_exec_tb(CPUState *cpu, TranslationBlock *tb,
              * have set something else (eg exit_request or
              * interrupt_request) which we will handle
              * next time around the loop.  But we need to
-             * ensure the tcg_exit_req read in generated code
+             * ensure the zeroing of tcg_exit_req (see cpu_tb_exec)
              * comes before the next read of cpu->exit_request
              * or cpu->interrupt_request.
              */
-            smp_rmb();
+            smp_mb();
             *last_tb = NULL;
             break;
         case TB_EXIT_ICOUNT_EXPIRED:
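
To make the ordering argument concrete, here is a minimal, self-contained
sketch of the two-flag handshake the commit message describes.  It is not
Unicorn/QEMU code: C11 atomics and fences stand in for QEMU's smp_*()
macros, and the function names (cpu_exit_sketch, exec_loop_iteration_sketch)
are hypothetical.  What it illustrates is that smp_rmb() only orders the two
LOADS against each other; the bug requires keeping the STORE of zero to
tcg_exit_req ahead of the subsequent load of exit_request, and store->load
ordering on a weakly-ordered CPU needs a full barrier, i.e. smp_mb().

    /* Sketch only, under the assumptions stated above. */
    #include <stdatomic.h>
    #include <stdbool.h>

    static atomic_bool exit_request;   /* set by the requesting thread   */
    static atomic_bool tcg_exit_req;   /* polled by generated code       */

    /* Requester: publish the request on both flags (plain stores in the
     * real code, modelled here as relaxed atomics). */
    static void cpu_exit_sketch(void)
    {
        atomic_store_explicit(&exit_request, true, memory_order_relaxed);
        atomic_store_explicit(&tcg_exit_req, true, memory_order_relaxed);
    }

    /* Consumer, one iteration of the exec loop:
     *
     *   load  tcg_exit_req          (in generated code)
     *   store 0 to tcg_exit_req     (acknowledge, see cpu_tb_exec)
     *   <barrier>
     *   load  exit_request
     *   store 0 to exit_request     (consume the request)
     *
     * A read barrier at <barrier> does not stop the store of 0 to
     * tcg_exit_req from draining after both loads; if the requester
     * re-sets both flags in that window, the late zeroing stores wipe
     * out the fresh request, which is exactly the lost-request
     * interleaving in the commit message.  A seq_cst fence (the C11
     * analogue of smp_mb()) forbids that movement. */
    static void exec_loop_iteration_sketch(void)
    {
        if (atomic_load_explicit(&tcg_exit_req, memory_order_relaxed)) {
            atomic_store_explicit(&tcg_exit_req, false,
                                  memory_order_relaxed);

            /* The fix: full barrier, not just a read barrier. */
            atomic_thread_fence(memory_order_seq_cst);

            if (atomic_load_explicit(&exit_request, memory_order_relaxed)) {
                atomic_store_explicit(&exit_request, false,
                                      memory_order_relaxed);
                /* ... handle the exit request ... */
            }
        }
    }

    int main(void)
    {
        /* Single-threaded smoke test; the interesting behaviour only
         * shows up with a concurrent requester on weak hardware. */
        cpu_exit_sketch();
        exec_loop_iteration_sketch();
        return 0;
    }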