Use UBFX to avoid the limitation on CPU_TLB_BITS. Since we're dropping
the initial shift, we need to replace the page masking. We can use
MOVW+BIC to do this without shifting. The result is the same size
as the armv6 path, with one fewer conditional instruction.
Backports commit 647ab96aaf5defeb138e48d610f7f633c587b40d from qemu
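As a rough illustration, here is the arithmetic the UBFX and MOVW+BIC
sequence computes, written in plain C; the bit counts are my own
illustrative values, not what any particular target uses:

    #include <stdint.h>
    #include <stdio.h>

    #define TARGET_PAGE_BITS 12   /* illustrative */
    #define CPU_TLB_BITS      8   /* illustrative */

    int main(void)
    {
        uint32_t addr = 0x12345678;

        /* UBFX dst, addr, #TARGET_PAGE_BITS, #CPU_TLB_BITS: extract the
         * TLB index field in one insn, with no limit on CPU_TLB_BITS
         * imposed by an immediate-shifted AND. */
        uint32_t index = (addr >> TARGET_PAGE_BITS)
                       & ((1u << CPU_TLB_BITS) - 1);

        /* MOVW tmp, #mask; BIC dst, addr, tmp: clear the in-page offset
         * bits without shifting addr first (BIC is AND NOT). */
        uint32_t page = addr & ~((1u << TARGET_PAGE_BITS) - 1);

        printf("index=%u page=0x%08x\n", index, page);
        return 0;
    }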
Split out maybe_out_small_movi for use with other operations
that want to add to the constant pool.
Backports commit 28eef8aaece5e83df4568d9842ab9611ec130b2c from qemu
We were passing in -2 instead of +2, but then ignoring
the actual contents of addend in the calculation.
Backports commit e692a3492d04500355bcf23575eed7cf137b38d5 from qemu
This already saves 2 bytes per call, and the constant pool entry
may well be shared across multiple calls.
Backports commit 4e45f23943c0bb91588627de3801826546155ad8 from qemu
A new shared header tcg-pool.inc.c adds new_pool_label,
for registering a tcg_target_ulong to be emitted after
the generated code, plus relocation data to install a
pointer to the data.
A new pointer is added to the TCGContext, so that we
dump the constant pool as data, not code.
Backports commit 57a269469dbf70013dab3a176e1735636010a772 from qemu
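A toy model of the mechanism, not the tcg-pool.inc.c API itself
(pool_add and the array layout here are invented for illustration):

    #include <stdint.h>
    #include <stdio.h>

    #define MAX_POOL 16

    static uintptr_t pool_value[MAX_POOL];
    static size_t    pool_use[MAX_POOL];  /* code slot needing the address */
    static int       pool_count;

    /* Analogue of new_pool_label: record a constant plus the use site
     * that must be relocated to point at the pool slot. */
    static void pool_add(uintptr_t value, size_t use_site)
    {
        pool_value[pool_count] = value;
        pool_use[pool_count] = use_site;
        pool_count++;
    }

    int main(void)
    {
        /* slots stand in for emitted instruction/data words */
        uintptr_t code[8] = { 0 };
        size_t code_end = 4;            /* pretend 4 insns were emitted */

        pool_add(0xdeadbeef, 1);        /* insn 1 wants this constant */
        pool_add(0x1234, 3);            /* insn 3 wants this one */

        /* Flush: the pool is dumped as data after the code, and each
         * use site gets a pointer to its slot (the relocation). */
        for (int i = 0; i < pool_count; i++) {
            size_t slot = code_end + i;
            code[slot] = pool_value[i];
            code[pool_use[i]] = (uintptr_t)&code[slot];
        }

        printf("insn1 loads %#lx\n",
               (unsigned long)*(uintptr_t *)code[1]);
        return 0;
    }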
Dispense with TCGBackendData, as it has never been used for more than
holding a single pointer. Use a define in the cpu/tcg-target.h to
signal requirement for TCGLabelQemuLdst, so that we can drop the no-op
tcg-be-null.h stubs. Rename tcg-be-ldst.h to tcg-ldst.inc.c.
Backports commit 659ef5cbb893872d25e9d95191cc23b16546c8a1 from qemu
Replace the USE_DIRECT_JUMP ifdef with a TCG_TARGET_HAS_direct_jump
boolean test. Replace the tb_set_jmp_target1 ifdef with an unconditional
function tb_target_set_jmp_target.
While we're touching all backends, add a parameter for tb->tc_ptr;
we're going to need it shortly for some backends.
Move tb_set_jmp_target and tb_add_jump from exec-all.h to cpu-exec.c.
Backports commit a85833933628384d74ec412024d55cf012640287 from qemu
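A hedged sketch of the shape of the change; the names mirror the commit
message, but the capability test and the (empty) bodies are illustrative
only, not QEMU's actual code:

    #include <stdint.h>

    /* per-backend boolean in tcg-target.h replacing the ifdef */
    #define TCG_TARGET_HAS_direct_jump 1

    /* the unconditional hook replacing tb_set_jmp_target1; tc_ptr is the
     * new parameter carrying tb->tc_ptr for backends that need it */
    static void tb_target_set_jmp_target(uintptr_t tc_ptr,
                                         uintptr_t jmp_addr, uintptr_t addr)
    {
        if (TCG_TARGET_HAS_direct_jump) {
            /* rewrite the branch at jmp_addr to go straight to addr */
        } else {
            /* indirect: store addr where the generated code reloads it */
        }
        (void)tc_ptr; (void)jmp_addr; (void)addr;
    }

    int main(void)
    {
        tb_target_set_jmp_target(0, 0, 0);
        return 0;
    }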
This allows LOAD HALFWORD IMMEDIATE ON CONDITION,
eliminating one insn in some common cases.
Backports commit 7af525af01b9615c4f4df5da2e8a50f2fe00b023 from qemu
Currently, we cannot use mttcg for running strong memory model guests
on weak memory model hosts due to missing ordering semantics.
We implicitly generate fence instructions for stronger guests if an
ordering mismatch is detected. We generate fences only for the
orderings that actually require them; for example, no fence is needed
between a store and a subsequent load on x86, since the absence of
such a fence in the guest binary indicates that ordering need not be
enforced. Also note that if we find multiple subsequent fence
instructions in the generated IR, we combine them in the TCG
optimization pass.
This patch allows us to boot an x86 guest on ARM64 hosts using mttcg.
Backports commit b32dc3370a666e237b2099c22166b15e58cb6df8 from qemu
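The decision reduces to a mask computation. A toy sketch, with mask
names modelled on TCG's TCG_MO_* constants but values of my own:

    #include <stdio.h>

    enum {
        MO_LD_LD = 1 << 0,   /* load  must stay before later load  */
        MO_LD_ST = 1 << 1,   /* load  must stay before later store */
        MO_ST_LD = 1 << 2,   /* store must stay before later load  */
        MO_ST_ST = 1 << 3,   /* store must stay before later store */
    };

    int main(void)
    {
        /* x86 guest: everything is ordered except store->load */
        int guest = MO_LD_LD | MO_LD_ST | MO_ST_ST;
        /* a weak host (e.g. ARM64) guarantees none of these by default */
        int host = 0;

        /* fences are needed only where the guest is stronger */
        int need = guest & ~host;
        printf("fence mask %#x; ST->LD fence needed: %s\n",
               need, (need & MO_ST_LD) ? "yes" : "no");
        return 0;
    }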
For a 64-bit ILP32 host, aligning to sizeof(long) is not enough.
Guess the minimum for any host is 8, as that covers uint64_t.
Qemu doesn't use a host long double or host vectors, except in
extremely limited circumstances.
Fixes a bus error for a sparc v8plus host.
Backports commit 13aaef678ed377b12b76dc7fb9e615b2f2f9047b from qemu
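A minimal sketch of the resulting rule, assuming the pool offset is
rounded up to an 8-byte boundary as the commit suggests:

    #include <stdint.h>
    #include <stdio.h>

    /* minimum guessed in the commit: 8 covers uint64_t on any host */
    #define POOL_ALIGN 8

    static uintptr_t align_up(uintptr_t p)
    {
        return (p + POOL_ALIGN - 1) & ~(uintptr_t)(POOL_ALIGN - 1);
    }

    int main(void)
    {
        /* On an ILP32 host sizeof(long) == 4, so rounding to it can
         * leave a uint64_t pool entry only 4-aligned: a SIGBUS on
         * a sparc v8plus host. */
        printf("sizeof(long)=%zu  align_up(0x1004)=%#lx\n",
               sizeof(long), (unsigned long)align_up(0x1004));
        return 0;
    }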
Patch 85aa80813dd changed the IF emitting the TST instruction,
but failed to change the ?: converting CMP to CMPEQ, so the
result of the TST is ignored.
Backports commit ca671de8af96798e0f493378240034620a3a04ee from qemu
Reserve a register for the guest_base using ppc code for reference.
By doing so, we do not have to recompute it for every memory load.
Backports commit 4df9cac57f5220c17d856292e90fce455f708421 from qemu
When running a helloworld program with qemu-i386 in linux-user
mode on Loongson 3A3000, it crashes. This patch fixes the bug.
Backports commit 8b8d768f19037a825a0bc81654492caa7c8fab8b from qemu
This patch enables the indirect jump path using an LDR (literal)
instruction. It will be interesting to test and see which of the
two paths performs better.
Backports commit 2acee8b2b5e6bba2935bb6ce5be92d0f0f9799cb from qemu
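A toy model of the indirect path, with plain C atomics standing in for
the generated LDR (literal) and for the patching store:

    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>

    /* stands in for the literal word the LDR (literal) insn reads */
    static _Atomic uintptr_t jmp_target;

    int main(void)
    {
        /* retargeting the jump is one atomic store to the literal */
        atomic_store(&jmp_target, (uintptr_t)0x400000);

        uintptr_t dest = atomic_load(&jmp_target);   /* LDR (literal) */
        printf("BR to %#lx\n", (unsigned long)dest); /* then BR dest */
        return 0;
    }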
We use ADRP+ADD to compute the target address for goto_tb. This patch
introduces the NOP instruction, which is used to align the above
instruction pair so that the destination offsets can be patched with
a single atomic instruction.
Backports commit b68686bd4bfeb70040b4099df993dfa0b4f37b03 from qemu
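A sketch of the alignment check; the NOP encoding is the real aarch64
one, everything else here is illustrative:

    #include <stdint.h>
    #include <stdio.h>

    #define NOP 0xd503201fu   /* aarch64 NOP encoding */

    int main(void)
    {
        uint32_t buf[4];
        uintptr_t p = (uintptr_t)&buf[1];   /* pretend code ends here */

        if (p & 7) {
            printf("emit NOP 0x%08x first\n", NOP);  /* align the pair */
            p += 4;
        }
        /* ADRP at p and ADD at p+4 now share one aligned 8-byte word,
         * so a single 64-bit atomic store can rewrite both. */
        printf("pair at %#lx (8-byte aligned: %d)\n",
               (unsigned long)p, (int)((p & 7) == 0));
        return 0;
    }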
We can use a branch-to-register instruction for exit_tb for offsets
greater than 128MB.
Backports commit 23b7aa1d2af04ba57cc94f74d9f0ab25dce72fa0 from qemu
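A sketch of the range check involved; the +/-128MB limit is the
architectural reach of the aarch64 B instruction (26-bit word offset),
the rest is illustrative:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        intptr_t disp = (intptr_t)1 << 28;   /* 256MB: out of range */
        int fits = disp >= -((intptr_t)1 << 27)
                && disp <   ((intptr_t)1 << 27);
        puts(fits ? "emit B (direct branch)"
                  : "emit MOV+BR (branch to register)");
        return 0;
    }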
Coldfire uses float64, but 680x0 uses floatx80.
This patch introduces the use of floatx80 internally
and enables the 680x0 80-bit FPU.
Backports commit f83311e4764f1f25a8abdec2b32c64483be1759b from qemu
The new placement of the TB means that we can use one insn
to load the goto_tb destination directly from the TB.
Backports commit 308714e6bc945389c64faf1b9213e2c0d3f03391 from qemu
The new placement of the TB means that we can use one insn
to load the return value for exit_tb returning the TB pointer.
Backports commit cc74d332ff9a78684374847375ef63fc4bd10436 from qemu
Allocating an arbitrarily-sized array of tbs results in either
(a) a lot of memory wasted or (b) unnecessary flushes of the code
cache when we run out of TB structs in the array.
An obvious solution would be to just malloc a TB struct when needed,
and keep the TB array as an array of pointers (recall that tb_find_pc()
needs the TB array to run in O(log n)).
Perhaps a better solution, which is implemented in this patch, is to
allocate TB's right before the translated code they describe. This
results in some memory waste due to the padding needed to keep code
and TBs in separate cache lines; for instance, I measured 4.7% of
padding in the
used portion of code_gen_buffer when booting aarch64 Linux on a
host with 64-byte cache lines. However, it can allow for optimizations
in some host architectures, since TCG backends could safely assume that
the TB and the corresponding translated code are very close to each
other in memory. See this message by rth for a detailed explanation:
https://lists.gnu.org/archive/html/qemu-devel/2017-03/msg05172.html
Subject: Re: GSoC 2017 Proposal: TCG performance enhancements
Backports commit 6e3b2bfd6af488a896f7936e99ef160f8f37e6f2 from qemu
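A minimal sketch of the layout under this scheme (the struct and field
names are illustrative stand-ins, not QEMU's):

    #include <stdint.h>
    #include <stdio.h>

    #define CACHE_LINE 64

    struct TB { void *tc_ptr; /* ...other fields elided... */ };

    /* stands in for code_gen_buffer */
    static _Alignas(CACHE_LINE) uint8_t buffer[4096];
    static size_t buf_off;

    static struct TB *tb_alloc(size_t code_size)
    {
        /* pad so the TB (and the code after it) starts on a cache line */
        buf_off = (buf_off + CACHE_LINE - 1) & ~(size_t)(CACHE_LINE - 1);
        struct TB *tb = (struct TB *)&buffer[buf_off];
        tb->tc_ptr = &buffer[buf_off + sizeof(struct TB)];
        buf_off += sizeof(struct TB) + code_size;
        return tb;
    }

    int main(void)
    {
        struct TB *tb = tb_alloc(100);
        /* the TB sits immediately before its code, so a backend can
         * reach it from the code with a small negative offset */
        printf("tb=%p code=%p delta=%d\n", (void *)tb, tb->tc_ptr,
               (int)((uint8_t *)tb->tc_ptr - (uint8_t *)tb));
        return 0;
    }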
In theory this would re-enable usage of QEMU on an armv4 host.
Whether this is worthwhile is debatable -- we've been unconditionally
issuing the armv5t BX instruction in the prologue since 2011 without
complaint. Possibly we should simply require an armv6 host.
Backports commit 702a947484eb3e615183dafc93de590ab0679f60 from qemu
Instead of exporting goto_ptr directly to TCG frontends, export
tcg_gen_lookup_and_goto_ptr(), which calls goto_ptr with the pointer
returned by the lookup_tb_ptr() helper. This is the only use case
we have for goto_ptr and lookup_tb_ptr, so having this function is
very convenient. Furthermore, it trivially allows us to avoid calling
the lookup helper if goto_ptr is not implemented by the backend.
Backports commit cedbcb01529cb6cf9a2289cdbebbc63f6149fc18 from qemu
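A toy model of the wrapper's logic; note it conflates translate time
and run time (in QEMU the lookup helper is invoked from the generated
code), and all names below are stand-ins:

    #include <stdio.h>

    #define TCG_TARGET_HAS_goto_ptr 1   /* backend capability flag */

    static void *helper_lookup_tb_ptr(void) { return 0; }   /* stub */
    static void gen_goto_ptr(void *p) { printf("goto_ptr %p\n", p); }
    static void gen_exit_tb(void)     { printf("exit_tb 0\n"); }

    /* in the spirit of tcg_gen_lookup_and_goto_ptr(): only do the
     * (possibly costly) lookup when the backend can consume it */
    static void lookup_and_goto_ptr(void)
    {
        if (TCG_TARGET_HAS_goto_ptr) {
            gen_goto_ptr(helper_lookup_tb_ptr());
        } else {
            gen_exit_tb();
        }
    }

    int main(void) { lookup_and_goto_ptr(); return 0; }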
Users of tcg_gen_atomic_cmpxchg and do_atomic_op rightfully utilize
the output. Even though this code is dead, it gets translated, and
without the initialization we encounter a tcg_error.
Backports commit 79b1af906245558c30e0a5faf26cb52b63f83cce from qemu
We already require gcc 4.1 or newer (for the atomic
support), so the fallback codepaths for gcc versions
older than that are now dead code and we can
just delete them.
NB: clang reports itself as gcc 4.2 (regardless of
clang version), so clang won't be using the fallbacks
either.
Backports commit fa54abb8c298f892639ffc4bc2f61448ac3be4a1 from qemu