Clean up includes so that osdep.h is included first, and headers
that it already pulls in are not included manually.
This commit was created with scripts/clean-includes.
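As a hedged illustration (the file and header names here are generic,
not taken from the commit), the cleaned-up files start like this:

    #include "qemu/osdep.h"  /* always first: it already pulls in common
                                system headers such as <stdint.h> */
    #include "cpu.h"         /* other headers follow, minus anything
                                osdep.h already provides */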
Backports commit 757e725b58c57d3ebb66a31fd2210df977a12154 from qemu
Softmmu unaligned loads/stores currently go through the slow
path for two reasons:
- to support unaligned accesses on hosts with strict alignment
- to correctly handle accesses crossing pages
x86 is only concerned with the second reason. Unaligned accesses are
avoided by compilers, but are not uncommon. We would therefore like
them to go through the fast path if they don't cross pages.
For that we can use the fact that two adjacent TLB entries can't contain
the same page. Therefore accessing the TLB entry corresponding to the
first byte, but comparing its content to the page address of the last
byte, ensures that we don't cross pages. We can do this check without
adding more instructions to the TLB code (though it grows by one byte)
by using the LEA instruction to combine the existing move with the size
addition.
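A minimal C sketch of the idea; tlb_entry_page() and the variable names
are illustrative stand-ins, not the actual tcg/i386 backend, which emits
this logic as host instructions:

    #include <stdbool.h>
    #include <stdint.h>

    #define TARGET_PAGE_BITS 12
    #define TARGET_PAGE_MASK (~(((uint64_t)1 << TARGET_PAGE_BITS) - 1))

    /* Hypothetical helper: page address recorded in the TLB entry
       selected by the FIRST byte of the access. */
    uint64_t tlb_entry_page(uint64_t addr);

    bool access_stays_on_page(uint64_t addr, unsigned size)
    {
        /* The LEA folds this "+ (size - 1)" into the existing move. */
        uint64_t last = addr + (size - 1);
        return (last & TARGET_PAGE_MASK) == tlb_entry_page(addr);
    }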
On an x86-64 host, this gives a 3% boot time improvement for a powerpc
guest and 4% for an x86-64 guest.
Backports commit 8cc580f6a0d8c0e2f590c1472cf5cd8e51761760 from qemu
This will be used to size the TLB when more than 8 MMU modes are
used by the target. Limitations come from the limited size of
the immediate fields (which sometimes, as in the case of AArch64,
extend to the instructions that shift the immediate).
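For illustration, a hedged sketch of how such a constant could bound the
TLB size; the exact expression (and NB_MMU_MODES_LOG2) is an assumption
paraphrasing a later change, not something defined by this commit:

    /* Keep the byte offset of the last TLB entry, across all MMU modes,
       within the host's immediate displacement range. */
    #define CPU_TLB_BITS                                        \
        MIN(8, TCG_TARGET_TLB_DISPLACEMENT_BITS                 \
               - CPU_TLB_ENTRY_BITS - NB_MMU_MODES_LOG2)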
Backports commit 006f8638c62bca2b0caf609485f47fa5e14d8a3c from qemu
Rather than allow arbitrary shift+trunc, only concern ourselves
with low and high parts. This is all that was being used anyway.
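In generator terms, the two remaining cases look like this (a sketch;
the temporaries and val64 are illustrative):

    TCGv_i32 lo = tcg_temp_new_i32();
    TCGv_i32 hi = tcg_temp_new_i32();

    tcg_gen_extrl_i64_i32(lo, val64);   /* lo = (uint32_t)val64         */
    tcg_gen_extrh_i64_i32(hi, val64);   /* hi = (uint32_t)(val64 >> 32) */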
Backports commit 609ad70562793937257c89d07bf7c1370b9fc9aa from qemu
Implement real ext_i32_i64 and extu_i32_i64 ops. They ensure that a
32-bit value is always converted to a 64-bit value and not propagated
through the register allocator or the optimizer.
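A short usage sketch (dst64 and src32 are illustrative names):

    tcg_gen_ext_i32_i64(dst64, src32);   /* sign-extend 32 -> 64 */
    tcg_gen_extu_i32_i64(dst64, src32);  /* zero-extend 32 -> 64 */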
Backports commit 4f2331e5b67af8172419eb1c8db510b497b30a7b from qemu
The op is sometimes named trunc_shr_i32 and sometimes trunc_shr_i64_i32,
and the name in the README doesn't match the name offered to the
frontends.
Always use the long name to make it clear it is a size-changing op.
Backports commit 0632e555fc4d281d69cb08d98d500d96185b041f from qemu
The addition of MO_AMASK means that places that used inverted masks
need to be changed to use positive masks, and places that failed to
mask the intended bits need updating.
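A hedged before/after illustration (the concrete call sites vary across
the backends):

    /* before: inverted mask -- with MO_AMASK now part of the memop,
       ~MO_BSWAP also keeps the alignment bits, so the switch can see
       values it never expects */
    switch (memop & ~MO_BSWAP) { /* ... */ }

    /* after: positive mask naming exactly the bits of interest */
    switch (memop & MO_SSIZE) { /* ... */ }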
Backports commit 2b7ec66f025263a5331f37d5ad78a625496fd7bd from qemu
The extra information is not yet used, but it is now available.
This requires minor changes through all of the tcg backends.
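For example, the softmmu load helpers now take the merged value in place
of a bare mmu_idx (one prototype shown, as a sketch):

    tcg_target_ulong helper_ret_ldub_mmu(CPUArchState *env, target_ulong addr,
                                         TCGMemOpIdx oi, uintptr_t retaddr);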
Backports commit 3972ef6f830d65e9bacbd31257abedc055fd6dc8 from qemu
At the tcg opcode level, not at the tcg-op.h generator level.
This requires minor changes through all of the tcg backends,
but none of the cpu translators.
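The merged operand is packed and unpacked with small helpers; a usage
sketch (mmu_idx is an illustrative variable):

    TCGMemOpIdx oi = make_memop_idx(MO_LEUL, mmu_idx); /* pack memop + index */
    TCGMemOp    op = get_memop(oi);                    /* recover the memop  */
    unsigned   idx = get_mmuidx(oi);                   /* recover the index  */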
Backports commit 59227d5d45bb3c31dc2118011691c35b3c00879c from qemu
This is less about improved type checking than about enabling a
subsequent change to the representation of labels.
Backports commit bec1631100323fac0900aea71043d5c4e22fc2fa from qemu
This is improved type checking for the translators -- it's no longer
possible to accidentally swap arguments to the branch functions.
Note that the code-generating backends still manipulate labels as int.
With notable exceptions, the scope of the change is just a few lines
for each target, so it's not worth building extra machinery to do this
change in per-target increments.
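With labels as pointers, a typical translator sequence reads (a sketch;
val is an illustrative temporary):

    TCGLabel *done = gen_new_label();   /* now returns TCGLabel *, not int */
    tcg_gen_brcondi_i32(TCG_COND_EQ, val, 0, done);
    /* ... code emitted here runs only when val != 0 ... */
    gen_set_label(done);

Passing the arguments in the wrong order now fails to compile, which is
the type-checking benefit described above.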
Backports commit 42a268c241183877192c376d03bd9b6d527407c7 from qemu