Notice the new attribute, byte swap, and force the transaction through the
memory slow path.
Required by architectures that can invert the endianness of a memory
transaction, e.g. SPARC64 with its Invert Endian TTE bit.
Backports commit a26fc6f5152b47f1d7ed928f9c9d462d01ff1624 from qemu
Now that MemOp has been pushed down into the memory API, and
callers are encoding endianness, we can collapse byte swaps
along the I/O path into the accelerator- and target-independent
adjust_endianness.
Collapsing byte swaps along the I/O path enables additional endian
inversion logic, e.g. the SPARC64 Invert Endian TTE bit, with redundant
byte swaps cancelling out.
Backports commit 9bf825bf3df4ebae3af51566c8088e3f1249a910 from qemu
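The cancellation argument above is just the fact that byte swapping is an
involution; a minimal, self-contained illustration in plain C (not QEMU
code):

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t v = 0x11223344u;
        /* Swapping twice restores the original value, so a swap added by an
         * endianness-inversion feature cancels against the one already done
         * along the I/O path. */
        assert(__builtin_bswap32(__builtin_bswap32(v)) == v);
        return 0;
    }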
Preparation for collapsing the two byte swaps, adjust_endianness and
handle_bswap, into the former.
Backports commit be5c4787e9a6eed12fd765d9e890f7cc6cd63220 from qemu
Preparation for collapsing the two byte swaps, adjust_endianness and
handle_bswap, into the former.
Call memory_region_dispatch_{read|write} with endianness encoded into
the "MemOp op" operand.
This patch does not change any behaviour, as
memory_region_dispatch_{read|write} does not yet handle the endianness.
Once it does handle endianness, callers with byte swaps can collapse
them into adjust_endianness.
Backports commit d5d680cacc66ef7e3c02c81dc8f3a34eabce6dfe from qemu
The memory_region_dispatch_{read|write} operand "unsigned size" is
being converted into a "MemOp op".
Convert interfaces by using no-op size_memop.
After all interfaces are converted, size_memop will be implemented
and the memory_region_dispatch_{read|write} operand "unsigned size"
will be converted into a "MemOp op".
As size_memop is a no-op, this patch does not change any behaviour.
Backports commit 4cbb198eefef41bbca703605c78875fd4fec6ef6 from qemu
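As a sketch of where this conversion is heading (the name mirrors QEMU's
size_memop, but the body here is only illustrative): once implemented, the
function maps a byte count onto the MemOp size field, which encodes the
access size as a power-of-two exponent.

    #include <stdint.h>

    /* Illustrative only: 1 -> 0, 2 -> 1, 4 -> 2, 8 -> 3. */
    static inline unsigned size_memop_sketch(unsigned size)
    {
        return (unsigned)__builtin_ctz(size);
    }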
Preparation for collapsing the two byte swaps, adjust_endianness and
handle_bswap, along the I/O path.
Target-dependent attributes are conditionalized upon NEED_CPU_H.
Backports commit 14776ab5a12972ea439c7fb2203a4c15a09094b4 from qemu
The loop is written with scalars, not vectors.
Use the correct type when incrementing.
Fixes: 5ee5c14cacd
Backports commit 899f08ad1d1231dbbfa67298413f05ed2679fb02 from qemu
While size_t is guaranteed to be wide enough to span the biggest host
object, that is not enough when generating masks for 64-bit guests on
32-bit hosts: we end up truncating the address when we fall back to
our unaligned helper.
Fixes: https://bugs.launchpad.net/qemu/+bug/1831545
Backports commit ab7a2009df66241a3742cbdfe8f9a1f66c6af21f from qemu
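A self-contained demonstration of the truncation described above
(illustrative, not the QEMU code): when the mask is built from a 32-bit
size_t, its complement is only 32 bits wide and wipes out the top half of a
64-bit guest address.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t addr = UINT64_C(0x123456789abcdef9);
        uint32_t size = 8;   /* stands in for size_t on a 32-bit host */

        uint64_t truncated = addr & ~(size - 1);            /* 32-bit mask */
        uint64_t kept      = addr & ~((uint64_t)size - 1);  /* 64-bit mask */

        printf("truncated = 0x%016" PRIx64 "\n", truncated); /* 0x...9abcdef8 */
        printf("kept      = 0x%016" PRIx64 "\n", kept);      /* full address */
        return 0;
    }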
When running on 32-bit TCG backends, a wide unaligned load ends up
truncating data before returning to the guest. We specifically have
the return type as uint64_t to avoid any premature truncation, so we
should use the same for the interim types.
Fixes: https://bugs.launchpad.net/qemu/+bug/1830872
Fixes: eed5664238e
Backports commit 8c79b288513587e960b6b7257a9d955d5592f209 from qemu
Amusingly, we had already ignored the comment to keep this value
at the end of CPUState. This restores the minimum negative offset
from TCG_AREG0 for code generation.
For the couple of uses within qom/cpu.c, without NEED_CPU_H, add
a pointer from the CPUState object to the IcountDecr object within
CPUNegativeOffsetState.
Backports commit 5e1401969b25f676fee6b1c564441759cf967a43 from qemu
Now that we have both ArchCPU and CPUArchState, we can define
this generically instead of via macro in each target's cpu.h.
Backports commit 29a0af618ddd21f55df5753c3e16b0625f534b3c from qemu
Especially for guests with large numbers of tlbs, like ARM or PPC,
we may well not use all of them in between flush operations.
Remember which tlbs have been used since the last flush, and
avoid any useless flushing.
Backports much of 3d1523ced6060cdfe9e768a814d064067ccabfe5 from qemu,
along with a number of related updates.
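The bookkeeping for the change above boils down to a per-CPU bitmap of MMU
indexes touched since the last flush; a hedged, self-contained sketch of
that idea (names and types are illustrative, not the QEMU fields):

    #include <stdint.h>

    typedef struct {
        uint16_t dirty;   /* bit n set: mmu_idx n used since the last flush */
    } TLBTracker;

    static void note_tlb_used(TLBTracker *t, int mmu_idx)
    {
        t->dirty |= (uint16_t)(1u << mmu_idx);
    }

    static void flush_used(TLBTracker *t, void (*flush_one)(int mmu_idx))
    {
        uint16_t work = t->dirty;
        t->dirty = 0;
        while (work) {                 /* skip TLBs that were never touched */
            flush_one(__builtin_ctz(work));
            work &= work - 1;
        }
    }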
This operation performs d = (b & a) | (c & ~a), and is present
on a majority of host vector units. Include gvec expanders.
Backports commit 38dc12947ec9106237f9cdbd428792c985cd86ae from qemu
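For reference, a scalar, self-contained version of the operation the new
expanders emit (element width chosen arbitrarily):

    #include <stdint.h>

    /* d = (b & a) | (c & ~a): each result bit comes from b where the
     * corresponding bit of a is set, and from c where it is clear. */
    static inline uint64_t bitsel64(uint64_t a, uint64_t b, uint64_t c)
    {
        return (b & a) | (c & ~a);
    }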
Most of the existing users would continue around a loop which
would fault the tlb entry in via a normal load/store.
But for AArch64 SVE we have an existing emulation bug wherein we
would mark the first element of a no-fault vector load as faulted
(within the FFR, not via exception) just because we did not have
its address in the TLB. Now we can properly only mark it as faulted
if there really is no valid, readable translation, while still not
raising an exception. (Note that beyond the first element of the
vector, the hardware may report a fault for any reason whatsoever;
with at least one element loaded, forward progress is guaranteed.)
Backports commit 4811e9095c0491bc6f5450e5012c9c4796b9e59d from qemu
We can now use the CPUClass hook instead of a named function.
Create a static tlb_fill function to avoid other changes within
cputlb.c. This also isolates the asserts within. Remove the
named tlb_fill function from all of the targets.
Backports commit c319dc13579a92937bffe02ad2c9f1a550e73973 from qemu
The gvec expanders perform a modulo on the shift count. If the target
requires alternate behaviour, then it cannot use the generic gvec
expanders anyway, and will have to have its own custom code.
Backports commit 5ee5c14cacda27e904cd6b0d9e7ffe1acff42838 from qemu
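A self-contained scalar sketch of the behaviour being specified above (not
the expander code itself): the count is reduced modulo the element width
before shifting.

    #include <stdint.h>

    /* Shifting a 32-bit element by 35 therefore shifts by 35 & 31 == 3. */
    static inline uint32_t gvec_shl32_sketch(uint32_t x, unsigned count)
    {
        return x << (count & 31);
    }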
This is less tricky than for loads, because we always fall
back to single byte stores to implement unaligned stores.
Backports commit 4601f8d10d7628bcaf2a8179af36e04b42879e91 from qemu
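A hedged sketch of why stores are the easy case (illustrative, not the
helper itself): the unaligned fallback just emits one byte at a time in
guest order, so no interim wide value has to be reassembled.

    #include <stdbool.h>
    #include <stdint.h>

    static void store_bytes(uint8_t *dst, uint64_t val, unsigned size,
                            bool big_endian)
    {
        for (unsigned i = 0; i < size; i++) {
            unsigned shift = big_endian ? (size - 1 - i) * 8 : i * 8;
            dst[i] = (uint8_t)(val >> shift);
        }
    }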
If we attempt to recurse from load_helper back to load_helper,
even via an intermediary, we do not get all of the constants
expanded away as desired.
But if we recurse back to the original helper (or a shim that
has a consistent function signature), the operands are folded
away as desired.
Backports commit 2dd926067867c2dd19e66d31a7990e8eea7258f6 from qemu
We are going to approach this problem via __attribute__((always_inline))
instead, but the full conversion will take several steps.
Backports commit fc1bc777910dc14a3db4e2ad66f3e536effc297d from qemu
Having this in io_readx/io_writex meant that we forgot to re-compute
the index after tlb_fill. It also meant we had cached a use of the
index across a tlb_fill, which was a bug. Moving it also lets us use
the normal aligned memory load path.
Backports commit f1be36969de2fb9b6b64397db1098f115210fcd9 from qemu
Instead of expanding a series of macros to generate the load/store
helpers we move stuff into common functions and rely on the compiler
to eliminate the dead code for each variant.
Backports commit eed5664238ea5317689cf32426d9318686b2b75c from qemu
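A hedged, self-contained sketch of the pattern described above (not the
actual helpers): one common routine takes the access size as a parameter,
and each public entry point calls it with a constant, so after inlining the
compiler discards the branches that cannot be taken.

    #include <stdint.h>
    #include <string.h>

    static inline uint64_t load_common(const uint8_t *p, unsigned size)
    {
        switch (size) {
        case 1:  return p[0];
        case 2: { uint16_t v; memcpy(&v, p, 2); return v; }
        case 4: { uint32_t v; memcpy(&v, p, 4); return v; }
        default: { uint64_t v; memcpy(&v, p, 8); return v; }
        }
    }

    /* With a constant size, only one case survives in the generated code. */
    uint32_t ldl_sketch(const uint8_t *p) { return (uint32_t)load_common(p, 4); }
    uint64_t ldq_sketch(const uint8_t *p) { return load_common(p, 8); }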
Now that we have curr_cflags, we can include CF_USE_ICOUNT
early and then remove it as necessary.
Backports commit 416986d3f97329655e30da7271a2d11c6d707b06 from qemu
Now that all code generation has been converted to check CF_PARALLEL, we can
generate !CF_PARALLEL code without having yet set !parallel_cpus --
and therefore without having to be in the exclusive region during
cpu_exec_step_atomic.
While at it, merge cpu_exec_step into cpu_exec_step_atomic.
Backports commit ac03ee5331612e44beb393df2b578c951d27dc0d from qemu
Thereby decoupling the resulting translated code from the current state
of the system.
The tb->cflags field is not passed to TCG generation functions, so
we add a field to TCGContext, storing there a copy of tb->cflags.
Most architectures have <= 32 registers, which results in a 4-byte hole
in TCGContext. Use this hole for the new field.
Backports commit e82d5a2460b0e176128027651ff9b104e4bdf5cc from qemu
We were generating code during tb_invalidate_phys_page_range,
check_watchpoint, cpu_io_recompile, and (seemingly) discarding
the TB, assuming that it would magically be picked up during
the next iteration through the cpu_exec loop.
Instead, record the desired cflags in CPUState so that we request
the proper TB and there is no more magic.
Backports commit 9b990ee5a3cc6aa38f81266fb0c6ef37a36c45b9 from qemu
This will enable us to decouple code translation from the value
of parallel_cpus at any given time. It will also help us minimize
TB flushes when generating code via EXCP_ATOMIC.
Note that the declaration of parallel_cpus is brought to exec-all.h
to be able to define there the "curr_cflags" inline.
Backports commit 4e2ca83e71b51577b06b1468e836556912bd5b6e from qemu
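A hedged sketch of the shape of that inline (simplified, not the exact
QEMU code; a later change, see the CF_USE_ICOUNT entry earlier in this
section, also folds in the icount flag):

    /* Generate CF_PARALLEL code only while parallel_cpus is set, so the
     * cflags captured at translation time describe the code that was
     * actually generated. */
    static inline uint32_t curr_cflags_sketch(void)
    {
        return parallel_cpus ? CF_PARALLEL : 0;
    }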
This change adapts io_readx() to its input access_type. Currently
io_readx() treats any memory access as a read, although it has an
input argument "MMUAccessType access_type". This results in:
1) Calling tlb_fill() only with MMU_DATA_LOAD
2) Considering only entry->addr_read as the tlb_addr
Buglink: https://bugs.launchpad.net/qemu/+bug/1825359
Backports commit ef5dae6805cce7b59d129d801bdc5db71bcbd60d from qemu
If a TB generates too much code, try again with fewer insns.
Fixes: https://bugs.launchpad.net/bugs/1824853
Backports commit 6e6c4efed995d9eca6ae0cfdb2252df830262f50 from qemu
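A hedged sketch of the retry policy (names are illustrative, not QEMU's):
on an out-of-space report, halve the instruction budget and translate
again.

    /* generate_tb() is a stand-in that returns the generated code size,
     * or a negative value when the code buffer was exhausted. */
    int max_insns = 512;
    for (;;) {
        int gen_size = generate_tb(tb, max_insns);
        if (gen_size >= 0) {
            break;
        }
        /* A single instruction is assumed to always fit. */
        max_insns = max_insns > 1 ? max_insns / 2 : 1;
    }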
In order to handle TB's that translate to too much code, we
need to place the control of the length of the translation
in the hands of the code gen master loop.
Backports commit 8b86d6d25807e13a63ab6ea879f976b9f18cc45a from qemu
We are failing to take into account that tlb_fill() can cause a
TLB resize, which renders prior TLB entry pointers/indices stale.
Fix it by re-doing the TLB entry lookups immediately after tlb_fill.
Fixes: 86e1eff8bc ("tcg: introduce dynamic TLB sizing", 2019-01-28)
Backports commit 6d967cb86d5b4a60ba15b497126b621ce9ca6609 from qemu
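The fix amounts to re-deriving the pointers after any call that can resize
the TLB; a hedged fragment using the tlb_index/tlb_entry helper names from
the refactoring mentioned near the end of this section:

    /* Anything computed from the TLB before tlb_fill() must be looked up
     * again afterwards, because the fill may have resized and reallocated
     * the table. */
    tlb_fill(cpu, addr, size, access_type, mmu_idx, retaddr);
    index = tlb_index(env, mmu_idx, addr);
    entry = tlb_entry(env, mmu_idx, addr);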
It's either "GNU *Library* General Public version 2" or "GNU Lesser
General Public version *2.1*", but there was no "version 2.0" of the
"Lesser" library. So assume that version 2.1 is meant here.
Backports commit fb0343d5b4dd4b9b9e96e563d913a3e0c709fe4e from qemu
Strictly speaking, as far as the standard is concerned, performing pointer
arithmetic on a void* type is ill-formed; it is a GNU extension that
allows it. Instead, just use unsigned char*, which preserves the same
behavior.
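A minimal example of the standard-conforming form:

    #include <stddef.h>

    /* Arithmetic on void * is a GNU extension; unsigned char * is the
     * portable spelling and addresses the same bytes. */
    static void *advance(void *p, size_t n)
    {
        return (unsigned char *)p + n;
    }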
Add support for MMU protection regions that are smaller than
TARGET_PAGE_SIZE. We do this by marking the TLB entry for those
pages with a flag TLB_RECHECK. This flag causes us to always
take the slow-path for accesses. In the slow path we can then
special case them to always call tlb_fill() again, so we have
the correct information for the exact address being accessed.
This change allows us to handle reading and writing from small
regions; we cannot deal with execution from the small region.
Backports commit 55df6fcf5476b44bc1b95554e686ab3e91d725c5 from qemu
GCC7+ will no longer advertise support for 16-byte __atomic operations
if only cmpxchg is supported, as for x86_64. Fortunately, x86_64 still
has support for __sync_compare_and_swap_16 and we can make use of that.
AArch64 does not have, nor has it ever had, such support, so open-code it.
Backports commit e6cd4bb59b8154fa00da611200beef7eb4e8ec56 from qemu
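A hedged sketch of the x86_64 fallback (assumes a cmpxchg16b-capable build,
e.g. compiled with -mcx16; not the exact QEMU wrapper):

    #include <stdbool.h>

    typedef unsigned __int128 u128;

    /* The legacy __sync builtin still accepts 16-byte operands here even
     * though GCC 7+ stops advertising 16-byte __atomic support. */
    static bool cmpxchg16_sketch(u128 *ptr, u128 *expected, u128 desired)
    {
        u128 old = __sync_val_compare_and_swap(ptr, *expected, desired);
        bool success = (old == *expected);
        *expected = old;
        return success;
    }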
Isolate the computation of an index from an address into a
helper before we change that function.
Backports commit 383beda9cf32f795616c3b93f7d6154d70372d4b from qemu
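The helper itself is little more than a shift-and-mask; a self-contained
sketch (constants and names illustrative, not QEMU's):

    #include <stddef.h>
    #include <stdint.h>

    /* Centralising this means a later change to the indexing scheme (for
     * example, dynamic TLB sizing) only has to touch one function. */
    static inline size_t tlb_index_sketch(uint64_t addr,
                                          unsigned page_bits, size_t n_entries)
    {
        return (size_t)(addr >> page_bits) & (n_entries - 1);
    }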
Rather than test NOCHAIN before linking, do not emit the
goto_tb opcode at all. We already do this for goto_ptr.
Backports commit d7f425fdea991f052241c6479acd9feae834063b from qemu