cpu-exec: tighten barrier on TCG_EXIT_REQUESTED

This seems to have worked just fine so far on weakly-ordered
architectures, but I don't see anything that prevents the
reordering (requesting thread in the left column, vCPU thread
in the right) from:

    store 1 to exit_request
    store 1 to tcg_exit_req
                             load tcg_exit_req
                             store 0 to tcg_exit_req
                             load exit_request
                             store 0 to exit_request
    store 1 to exit_request
    store 1 to tcg_exit_req

to this:

    store 1 to exit_request
    store 1 to tcg_exit_req
                             load tcg_exit_req
                             load exit_request
    store 1 to exit_request
    store 1 to tcg_exit_req
                             store 0 to tcg_exit_req
                             store 0 to exit_request

therefore losing a request. It's possible that other memory barriers
(e.g. in rcu_read_unlock) are hiding it, but better safe than
sorry.
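
A minimal C11 sketch of the problem, assuming illustrative names
(request_exit and tb_exit_requested are not the Unicorn/QEMU
functions, and the real code uses smp_mb() rather than a C11
fence): the vCPU side stores 0 to tcg_exit_req and then loads
exit_request, and only a full barrier orders a store against a
later load:

    #include <stdatomic.h>

    atomic_int tcg_exit_req;  /* doorbell polled by generated code */
    atomic_int exit_request;  /* flag polled by the execution loop */

    /* Requesting thread: raise both flags to kick the vCPU. */
    void request_exit(void)
    {
        atomic_store_explicit(&exit_request, 1, memory_order_relaxed);
        atomic_store_explicit(&tcg_exit_req, 1, memory_order_relaxed);
    }

    /* vCPU thread, after generated code saw tcg_exit_req != 0:
     * acknowledge the doorbell, then re-check the exit flag. */
    int tb_exit_requested(void)
    {
        atomic_store_explicit(&tcg_exit_req, 0, memory_order_relaxed);

        /* A read barrier (smp_rmb) orders loads against later loads
         * only; it does not keep the store above from being delayed
         * past the load below, which is exactly the reordering shown
         * above.  A full barrier (smp_mb, or a seq_cst fence in C11)
         * does. */
        atomic_thread_fence(memory_order_seq_cst);

        return atomic_load_explicit(&exit_request, memory_order_relaxed);
    }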

Backports commit a70fe14b7dddcb944fbd6c9f3739cd3a22089af5 from qemu
Paolo Bonzini 2018-03-02 08:01:00 -05:00, committed by Lioncash
commit b39acfc3c6 (parent c9bdf5e6c7)

@@ -405,11 +405,11 @@ static inline void cpu_loop_exec_tb(CPUState *cpu, TranslationBlock *tb,
          * have set something else (eg exit_request or
          * interrupt_request) which we will handle
          * next time around the loop. But we need to
-         * ensure the tcg_exit_req read in generated code
+         * ensure the zeroing of tcg_exit_req (see cpu_tb_exec)
          * comes before the next read of cpu->exit_request
          * or cpu->interrupt_request.
          */
-        smp_rmb();
+        smp_mb();
         *last_tb = NULL;
         break;
     case TB_EXIT_ICOUNT_EXPIRED: