mirror of
https://github.com/yuzu-emu/unicorn.git
synced 2024-12-24 19:25:42 +00:00
d3ada2feb5
Allocating an arbitrarily-sized array of TBs results in either (a) a lot of wasted memory or (b) unnecessary flushes of the code cache when we run out of TB structs in the array. An obvious solution would be to just malloc a TB struct when needed and keep the TB array as an array of pointers (recall that tb_find_pc() needs the TB array to run in O(log n)).

Perhaps a better solution, which is implemented in this patch, is to allocate TBs right before the translated code they describe. This results in some memory waste due to padding, since code and TBs are kept in separate cache lines; for instance, I measured 4.7% of padding in the used portion of code_gen_buffer when booting aarch64 Linux on a host with 64-byte cache lines. However, it can allow for optimizations in some host architectures, since TCG backends can safely assume that the TB and the corresponding translated code are very close to each other in memory.

See this message by rth for a detailed explanation:
https://lists.gnu.org/archive/html/qemu-devel/2017-03/msg05172.html
Subject: Re: GSoC 2017 Proposal: TCG performance enhancements

Backports commit 6e3b2bfd6af488a896f7936e99ef160f8f37e6f2 from qemu
address-spaces.h
cpu-all.h
cpu-common.h
cpu-defs.h
cpu_ldst.h
cpu_ldst_template.h
cputlb.h
exec-all.h
gen-icount.h
helper-gen.h
helper-head.h
helper-proto.h
helper-tcg.h
hwaddr.h
ioport.h
memattrs.h
memory-internal.h
memory.h
ram_addr.h
ramlist.h
semihost.h
tb-context.h
tb-hash-xx.h
tb-hash.h