tcg: Rework tb_invalidated_flag

'tb_invalidated_flag' was meant to catch two events:
* some TB has been invalidated by tb_phys_invalidate();
* the whole translation buffer has been flushed by tb_flush().

Then it was checked:
* in cpu_exec() to ensure that the last executed TB can be safely
linked to directly call the next one;
* in cpu_exec_nocache() to decide if the original TB should be provided
for further possible invalidation along with the temporarily
generated TB.

It is always safe to patch an invalidated TB since it is not going to be
used anyway. It is also safe to call tb_phys_invalidate() for an already
invalidated TB. Thus, setting this flag in tb_phys_invalidate() is
simply unnecessary. Moreover, it can prevent perfectly valid TBs from
being linked whenever any arbitrary TB has been invalidated. So just
don't touch it in tb_phys_invalidate().

Since this flag is now only used to catch whether tb_flush() has been
called, rename it to 'tb_flushed'. Declare it as 'bool' and stick to
using only 'true' and 'false' to set its value. Also, instead of
setting it in tb_gen_code() just after tb_flush() has been called, do
it right inside tb_flush().

In cpu_exec(), this flag is used to track whether tb_flush() has been
called and has made 'next_tb' (a reference to the last executed TB)
invalid for linking it to directly call the next TB. tb_flush() can be
called during the CPU execution loop from tb_gen_code(), during TB
execution, or by another thread while 'tb_lock' is released. Catch
translation buffer flushes reliably by resetting this flag once before
the first TB lookup and each time we find it set before trying to add a
direct jump. Don't touch it in tb_find_physical().

Each vCPU has its own execution loop in multithreaded mode and thus
should have its own copy of the flag, so that it can reset it with its
own 'next_tb' without affecting any other vCPU's execution thread. So
make this flag per-vCPU and move it to CPUState.

In cpu_exec_nocache(), we only need to check whether tb_flush() has
been called from the tb_gen_code() invoked by cpu_exec_nocache()
itself. To do this reliably, preserve the old value of the flag, reset
it before calling tb_gen_code(), check it afterwards, and combine the
saved value back into the flag.
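
The unicorn diff below does not carry a cpu_exec_nocache() hunk, so for
reference only, the save/reset/check/combine sequence described above
would look roughly like the following sketch, modeled on the upstream
QEMU change; the CF_NOCACHE/CF_IGNORE_ICOUNT details and 'orig_tb'
naming follow upstream cpu-exec.c and are assumptions here, not part of
this diff:

    /* Sketch of the pattern in cpu_exec_nocache(); not part of the
     * unicorn diff below. */
    bool old_tb_flushed = cpu->tb_flushed;   /* preserve the old value */

    cpu->tb_flushed = false;                 /* reset before tb_gen_code() */
    tb = tb_gen_code(cpu, orig_tb->pc, orig_tb->cs_base, orig_tb->flags,
                     max_cycles | CF_NOCACHE
                     | (ignore_icount ? CF_IGNORE_ICOUNT : 0));
    /* If tb_gen_code() had to flush the translation buffer, 'orig_tb'
     * is gone and must not be handed on for later invalidation. */
    tb->orig_tb = cpu->tb_flushed ? NULL : orig_tb;
    cpu->tb_flushed |= old_tb_flushed;       /* combine the saved value back */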

This patch is based on the patch "tcg: move tb_invalidated_flag to
CPUState" from Paolo Bonzini <pbonzini@redhat.com>.

Backports commit 6f789be56d3f38e9214dafcfab3bf9be7191f370 from qemu
Sergey Fedorov 2018-02-23 23:32:10 -05:00 committed by Lioncash
parent c9700af2bd
commit ba9a237586
4 changed files with 8 additions and 16 deletions


@@ -38,7 +38,6 @@ static void cpu_handle_debug_exception(CPUState *cpu);
 int cpu_exec(struct uc_struct *uc, CPUState *cpu)
 {
     CPUArchState *env = cpu->env_ptr;
-    TCGContext *tcg_ctx = env->uc->tcg_ctx;
     CPUClass *cc = CPU_GET_CLASS(uc, cpu);
 #ifdef TARGET_I386
     X86CPU *x86_cpu = X86_CPU(uc, cpu);
@@ -130,6 +129,7 @@ int cpu_exec(struct uc_struct *uc, CPUState *cpu)
             }
             last_tb = NULL; /* forget the last executed TB after exception */
+            cpu->tb_flushed = false; /* reset before first TB lookup */
             for(;;) {
                 interrupt_request = cpu->interrupt_request;
@@ -188,14 +188,12 @@ int cpu_exec(struct uc_struct *uc, CPUState *cpu)
                 ret = EXCP_HLT;
                 break;
             }
-            /* Note: we do it here to avoid a gcc bug on Mac OS X when
-               doing it in tb_find_slow */
-            if (tcg_ctx->tb_ctx.tb_invalidated_flag) {
-                /* as some TB could have been invalidated because
-                   of memory exceptions while generating the code, we
-                   must recompute the hash index here */
+            if (cpu->tb_flushed) {
+                /* Ensure that no TB jump will be modified as the
+                 * translation buffer has been flushed.
+                 */
                 last_tb = NULL;
-                tcg_ctx->tb_ctx.tb_invalidated_flag = 0;
+                cpu->tb_flushed = false;
             }
             /* See if we can patch the calling TB. */
             if (last_tb && !qemu_loglevel_mask(CPU_LOG_TB_NOCHAIN)) {
@@ -337,8 +335,6 @@ static TranslationBlock *tb_find_slow(CPUState *cpu,
     tb_page_addr_t phys_pc, phys_page1;
     target_ulong virt_page2;

-    tcg_ctx->tb_ctx.tb_invalidated_flag = 0;
-
     /* find translated block using physical mappings */
     phys_pc = get_page_addr_code(env, pc); // qq
     if (phys_pc == -1) { // invalid code?


@@ -297,8 +297,6 @@ struct TBContext {
     /* statistics */
     int tb_flush_count;
     int tb_phys_invalidate_count;
-
-    int tb_invalidated_flag;
 };

 void tb_free(struct uc_struct *uc, TranslationBlock *tb);


@@ -256,6 +256,7 @@ struct CPUState {
     bool stop;
     bool stopped;
     bool crash_occurred;
+    bool tb_flushed;
     volatile sig_atomic_t exit_request;
     uint32_t interrupt_request;
     int singlestep_enabled;


@@ -919,6 +919,7 @@ void tb_flush(CPUState *cpu)
     tcg_ctx->tb_ctx.nb_tbs = 0;

     memset(cpu->tb_jmp_cache, 0, sizeof(cpu->tb_jmp_cache));
+    cpu->tb_flushed = true;

     memset(tcg_ctx->tb_ctx.tb_phys_hash, 0, sizeof(tcg_ctx->tb_ctx.tb_phys_hash));
     page_flush_tb(uc);
@@ -1089,8 +1090,6 @@ void tb_phys_invalidate(struct uc_struct *uc,
         invalidate_page_bitmap(p);
     }

-    tcg_ctx->tb_ctx.tb_invalidated_flag = 1;
-
     /* remove the TB from the hash list */
     h = tb_jmp_cache_hash_func(tb->pc);
     if (cpu->tb_jmp_cache[h] == tb) {
@@ -1279,8 +1278,6 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
         /* cannot fail at this point */
         tb = tb_alloc(env->uc, pc);
         assert(tb != NULL);
-        /* Don't forget to invalidate previous TB info. */
-        tcg_ctx->tb_ctx.tb_invalidated_flag = 1;
     }
     gen_code_buf = tcg_ctx->code_gen_ptr;
     tb->tc_ptr = gen_code_buf;