Commit graph

Peter Maydell 021da28bfd
target/arm: Fix short-vector increment behaviour
For VFP short vectors, the VFP registers are divided into a
series of banks: for single-precision these are s0-s7, s8-s15,
s16-s23 and s24-s31; for double-precision they are d0-d3,
d4-d7, ... d28-d31. Some banks are "scalar" meaning that
use of a register within them triggers a pure-scalar or
mixed vector-scalar operation rather than a full vector
operation. The scalar banks are s0-s7, d0-d3 and d16-d19.
When using a bank as part of a vector operation, we
iterate through it, increasing the register number by
the specified stride each time, and wrapping around to
the beginning of the bank.

Unfortunately our calculation of the "increment" part of this
was incorrect:
vd = ((vd + delta_d) & (bank_mask - 1)) | (vd & bank_mask)
will only do the intended thing if bank_mask has exactly
one set high bit. For instance for doubles (bank_mask = 0xc),
if we start with vd = 6 and delta_d = 2 then vd is updated
to 12 rather than the intended 4.

This only causes problems in the unlikely case that the
starting register is not the first in its bank: if the
register number doesn't have to wrap around then the
expression happens to give the right answer.

Fix this bug by abstracting out the "check whether register
is in a scalar bank" and "advance register within bank"
operations to utility functions which use the right
bit masking operations.
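
A minimal standalone sketch of the arithmetic (illustrative, not the
patch itself; the fixed helper name follows the commit description, for
doubles the banks are 4 registers wide, so the in-bank bits are masked
with 3 and the bank bits kept with ~3):

    #include <stdio.h>

    /* fixed: wrap the in-bank bits, keep the bank bits */
    static int vfp_advance_dreg(int reg, int delta)
    {
        return ((reg + delta) & 3) | (reg & ~3);
    }

    /* old, broken expression with bank_mask = 0xc */
    static int broken_advance_dreg(int reg, int delta)
    {
        int bank_mask = 0xc;
        return ((reg + delta) & (bank_mask - 1)) | (reg & bank_mask);
    }

    int main(void)
    {
        /* the commit's example: vd = 6, delta_d = 2 */
        printf("broken: %d, fixed: %d\n",
               broken_advance_dreg(6, 2), vfp_advance_dreg(6, 2));
        /* prints "broken: 12, fixed: 4" */
        return 0;
    }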

Backports commit 18cf951af9a27ae573a6fa17f9d0c103f7b7679b from qemu
2019-06-13 19:44:27 -04:00
Peter Maydell 1a0d31c05e
target/arm: Convert float-to-integer VCVT insns to decodetree
Convert the float-to-integer VCVT instructions to decodetree.
Since these are the last unconverted instructions, we can
delete the old decoder structure entirely now.

Backports commit 3111bfc2da6ba0c8396dc97ca479942d711c6146 from qemu
2019-06-13 19:40:02 -04:00
Peter Maydell f6c67559d4
target/arm: Convert VCVT fp/fixed-point conversion insns to decodetree
Convert the VCVT (between floating-point and fixed-point) instructions
to decodetree.

Backports commit e3d6f4290c788e850c64815f0b3e331600a4bcc0 from qemu
2019-06-13 19:35:51 -04:00
Peter Maydell c66d477359
target/arm: Convert VJCVT to decodetree
Convert the VJCVT instruction to decodetree.

Backports commit 92073e947487e2109f3dfebfeaa48d6323cbd981 from qemu
2019-06-13 19:31:35 -04:00
Peter Maydell 7be9e6f9b4
target/arm: Convert integer-to-float insns to decodetree
Convert the VCVT integer-to-float instructions to decodetree.

Backports commit 8fc9d8918cde342c71923e361b9f2193e36ed18b from qemu
2019-06-13 19:20:41 -04:00
Peter Maydell e0e4f99103
target/arm: Convert double-single precision conversion insns to decodetree
Convert the VCVT double/single precision conversion insns to decodetree.

Backports commit 6ed7e49c3693ed8411773c4880f42b2932beb12d from qemu
2019-06-13 19:18:01 -04:00
Peter Maydell ab9d0235ed
target/arm: Convert VFP round insns to decodetree
Convert the VFP round-to-integer instructions VRINTR, VRINTZ and
VRINTX to decodetree.

These instructions were only introduced as part of the "VFP misc"
additions in v8A, so we check this. The old decoder's implementation
was incorrectly providing them even for v7A CPUs.

Backports commit e25155f55dc4abb427a88dfe58bbbc550fe7d643 from qemu
2019-06-13 19:15:05 -04:00
Peter Maydell 9e842a0f2a
target/arm: Convert the VCVT-to-f16 insns to decodetree
Convert the VCVTT and VCVTB instructions which convert from
f32 and f64 to f16 to decodetree.

Since we're no longer constrained to the old decoder's style
using cpu_F0s and cpu_F0d, we can perform a direct 16-bit
store of the right half of the input single-precision register
rather than doing a load/modify/store sequence on the full
32 bits.

Backports commit cdfd14e86ab0b1ca29a702d13a8e4af2e902a9bf from qemu
2019-06-13 19:03:59 -04:00
Peter Maydell 7d927b2d0e
target/arm: Convert the VCVT-from-f16 insns to decodetree
Convert the VCVTT, VCVTB instructions that deal with conversion
from half-precision floats to f32 or f64 to decodetree.

Since we're no longer constrained to the old decoder's style
using cpu_F0s and cpu_F0d, we can perform a direct 16-bit
load of the right half of the input single-precision register
rather than loading the full 32 bits and then doing a
separate shift or sign-extension.
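
A sketch of what the direct load looks like (illustrative; it assumes
the vfp_f16_offset() helper this patch introduces, which picks the
correct half of the 32-bit register for the host endianness, and the
a->vm/a->t decodetree arguments):

    TCGv_i32 tmp = tcg_temp_new_i32();
    /* load just the 16 bits holding the f16 value, zero-extended */
    tcg_gen_ld16u_i32(tmp, cpu_env, vfp_f16_offset(a->vm, a->t));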

Backports commit b623d803dda805f07aadcbf098961fde27315c19 from qemu
2019-06-13 19:00:23 -04:00
Peter Maydell e6cc2616d2
target/arm: Convert VFP comparison insns to decodetree
Convert the VFP comparison instructions to decodetree.

Note that comparison instructions should not honour the VFP
short-vector length and stride information: they are scalar-only
operations. This applies to all the 2-operand instructions except
for VMOV, VABS, VNEG and VSQRT. (In the old decoder this is
implemented via the "if (op == 15 && rn > 3) { veclen = 0; }" check.)

Backports commit 386bba2368842fc74388a3c1651c6c0c0c70adbd from qemu
2019-06-13 18:55:53 -04:00
Peter Maydell a75a3e321f
target/arm: Convert VMOV (register) to decodetree
Backports commit 17552b979ebb9848a534c25ebed18a1072710058 from qemu
2019-06-13 18:49:49 -04:00
Peter Maydell ee30962891
target/arm: Convert VSQRT to decodetree
Convert the VSQRT instruction to decodetree.

Backports commit b8474540cbce4e2fa45010416375d1bcbe86dc15 from qemu
2019-06-13 18:47:32 -04:00
Peter Maydell 7aea3da6b7
target/arm: Convert VNEG to decodetree
Convert the VNEG instruction to decodetree.

Backports commit 1882651afdb0ca44f0631192fbe65a71c660d809 from qemu
2019-06-13 18:43:50 -04:00
Peter Maydell 1032d86ad3
target/arm: Convert VABS to decodetree
Convert the VFP VABS instruction to decodetree.

Unlike the 3-op versions, we don't pass fpst to the VFPGen2OpSPFn or
VFPGen2OpDPFn because none of the operations which use this format
and support short vectors will need it.

Backports commit 90287e22c987e9840704345ed33d237cbe759dd9 from qemu
2019-06-13 18:41:43 -04:00
Peter Maydell 7a16bc6876
target/arm: Convert VMOV (imm) to decodetree
Convert the VFP VMOV (immediate) instruction to decodetree.

Backports commit b518c753f0b94e14e01e97b4ec42c100dafc0cc2 from qemu
2019-06-13 18:37:58 -04:00
Peter Maydell 0ebb6b8b90
target/arm: Convert VFP fused multiply-add insns to decodetree
Convert the VFP fused multiply-add instructions (VFNMA, VFNMS,
VFMA, VFMS) to decodetree.

Note that in the old decode structure we were implementing
these to honour the VFP vector stride/length. These instructions
were introduced in VFPv4, and in the v7A architecture they
are UNPREDICTABLE if the vector stride or length are non-zero.
In v8A they must UNDEF if stride or length are non-zero, like
all VFP instructions; we choose to UNDEF always.

Backports commit d4893b01d23060845ee3855bc96626e16aad9ab5 from qemu
2019-06-13 18:24:36 -04:00
Peter Maydell 321bcc822b
target/arm: Convert VDIV to decodetree
Convert the VDIV instruction to decodetree.

Backports commit 519ee7ae31e050eb0ff9ad35c213f0bd7ab1c03e from qemu
2019-06-13 18:19:47 -04:00
Peter Maydell 76c74bc657
target/arm: Convert VSUB to decodetree
Convert the VSUB instruction to decodetree.

Backports commit 8fec9a119264b7936503abce3c106fad7e3ccb76 from qemu
2019-06-13 18:18:00 -04:00
Peter Maydell f56f0342ad
target/arm: Convert VADD to decodetree
Convert the VADD instruction to decodetree.

Backports commit ce28b303716e7eca3f3765bf6776d722ebbe1122 from qemu
2019-06-13 18:15:52 -04:00
Peter Maydell 06584edf61
target/arm: Convert VNMUL to decodetree
Convert the VNMUL instruction to decodetree.

Backports commit 43c4be1236c105090d134540da1036073d157cd4 from qemu
2019-06-13 18:14:16 -04:00
Peter Maydell 2c5e102017
target/arm: Convert VMUL to decodetree
Convert the VMUL instruction to decodetree.

Backports commit 88c5188ced60e9f2b8cc3af3b9bc4a8031c8c996 from qemu
2019-06-13 18:12:03 -04:00
Peter Maydell b26b6a12a2
target/arm: Convert VFP VNMLA to decodetree
Convert the VFP VNMLA instruction to decodetree.

Backports commit 8a483533adc1bdc2decb8f456dbe930a2d245a8b from qemu
2019-06-13 18:09:57 -04:00
Peter Maydell 638b90de31
target/arm: Convert VFP VNMLS to decodetree
Convert the VFP VNMLS instruction to decodetree.

Backports commit c54a416cc6d60efbc79dd37aaf0c8918c05b5815 from qemu
2019-06-13 18:06:59 -04:00
Peter Maydell 67ad40ffa4
target/arm: Convert VFP VMLS to decodetree
Convert the VFP VMLS instruction to decodetree.

Backports commit e7258280d46af4ab6a0cc93ccfe8f6614defb4b7 from qemu
2019-06-13 18:02:37 -04:00
Peter Maydell edf81eb214
target/arm: Convert VFP VMLA to decodetree
Convert the VFP VMLA instruction to decodetree.

This is the first of the VFP 3-operand data processing instructions,
so we include in this patch the code which loops over the elements
for an old-style VFP vector operation. The existing code to do this
looping uses the deprecated cpu_F0s/F0d/F1s/F1d TCG globals; since
we are going to be converting instructions one at a time anyway
we can take the opportunity to make the new loop use TCG temporaries,
which means we can do that conversion one operation at a time
rather than needing to do it all in one go.

We include an UNDEF check which was missing in the old code:
short-vector operations (with stride or length non-zero) were
deprecated in v7A and must UNDEF in v8A, so if the MVFR0 FPShVec
field does not indicate that support for short vectors is present
we UNDEF the operations that would use them. (This is a change
of behaviour for Cortex-A7, Cortex-A15 and the v8 CPUs, which
previously were all incorrectly allowing short-vector operations.)
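
The shape of that check, as a sketch (assuming the aa32_fpshvec feature
test this series adds; not necessarily the exact code):

    if (s->vec_len != 0 || s->vec_stride != 0) {
        if (!dc_isar_feature(aa32_fpshvec, s)) {
            return false;   /* short vectors not implemented: UNDEF */
        }
    }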

Note that the conversion fixes a bug in the old code for the
case of VFP short-vector "mixed scalar/vector operations". These
happen where the destination register is in a vector bank but
the second operand is in a scalar bank. For example
vmla.f64 d10, d1, d16 with length 2 stride 2
is equivalent to the pair of scalar operations
vmla.f64 d10, d1, d16
vmla.f64 d8, d3, d16
where the destination and first input register cycle through
their vector but the second input is scalar (d16). In the
old decoder the gen_vfp_F1_mul() operation uses cpu_F1{s,d}
as a temporary output for the multiply, which trashes the
second input operand. For the fully-scalar case (where we
never do a second iteration) and the fully-vector case
(where the loop loads the new second input operand) this
doesn't matter, but for the mixed scalar/vector case we
will end up using the wrong value for later loop iterations.
In the new code we use TCG temporaries and so avoid the bug.
This bug is present for all the multiply-accumulate insns
that operate on short vectors: VMLA, VMLS, VNMLA, VNMLS.
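
A standalone C illustration of the aliasing (not QEMU code; plain
doubles stand in for the D registers and for the cpu_F1d scratch
global), using the commit's d10/d1/d16 example:

    #include <stdio.h>

    int main(void)
    {
        double d[32] = { 0 };
        d[1] = 2.0; d[3] = 3.0; d[16] = 10.0;
        double f0, f1 = d[16];      /* f1 caches the scalar operand once */
        int vd = 10, vn = 1, stride = 2;
        for (int i = 0; i < 2; i++) {           /* length-2 short vector */
            f0 = d[vn];
            f1 = f0 * f1;   /* bug: product overwrites the cached scalar */
            d[vd] += f1;
            vd = (vd & ~3) | ((vd + stride) & 3);   /* advance in bank */
            vn = (vn & ~3) | ((vn + stride) & 3);
            /* f1 is not reloaded (the operand is scalar), so iteration 2
               multiplies by the previous product instead of d[16] */
        }
        printf("d[8] = %.1f (should be 30.0)\n", d[8]);  /* prints 60.0 */
        return 0;
    }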

Note 2: the expression used to calculate the next register
number in the vector bank is not in fact correct; we leave
this behaviour unchanged from the old decoder and will
fix this bug later in the series.

Backports commit 266bd25c485597c94209bfdb3891c1d0c573c164 from qemu
2019-06-13 17:59:16 -04:00
Peter Maydell 93fe4cbe9e
target/arm: Remove VLDR/VSTR/VLDM/VSTM use of cpu_F0s and cpu_F0d
Expand out the sequences in the new decoder VLDR/VSTR/VLDM/VSTM trans
functions which perform the memory accesses by going via the TCG
globals cpu_F0s and cpu_F0d, to use local TCG temps instead.

Backports commit 3993d0407dff7233e42f2251db971e126a0497e9 from qemu
2019-06-13 17:31:28 -04:00
Peter Maydell ff7042567e
target/arm: Convert the VFP load/store multiple insns to decodetree
Convert the VFP load/store multiple insns to decodetree.
This includes tightening up the UNDEF checking for pre-VFPv3
CPUs which only have D0-D15 : they now UNDEF for any access
to D16-D31, not merely when the smallest register in the
transfer list is in D16-D31.

This conversion does not try to share code between the single
precision and the double precision versions; this duplicates some
code, but it leaves the door open for a future
refactoring which gets rid of the use of the "F0" registers
by inlining the various functions like gen_vfp_ld() and
gen_mov_F0_reg() which are hiding "if (dp) { ... } else { ... }"
conditionalisation.

Backports commit fa288de272c5c8a66d5eb683b123706a52bc7ad6 from qemu
2019-06-13 17:26:52 -04:00
Peter Maydell 6f0633ce80
target/arm: Convert VFP VLDR and VSTR to decodetree
Convert the VFP single load/store insns VLDR and VSTR to decodetree.

Backports commit 79b02a3b5231c5b8cd31e50cd549968dd0a05c49 from qemu
2019-06-13 17:22:48 -04:00
Peter Maydell fe98885ff2
target/arm: Convert VFP two-register transfer insns to decodetree
Convert the VFP two-register transfer instructions to decodetree
(in the v8 Arm ARM these are the "Advanced SIMD and floating-point
64-bit move" encoding group).

Again, we expand out the sequences involving gen_vfp_msr() and
gen_vfp_mrs().

Backports commit 81f681106eabe21c55118a5a41999fb7387fb714 from qemu
2019-06-13 17:20:00 -04:00
Peter Maydell 3fb3403b82
target/arm: Convert single-precision register moves to decodetree
Convert the "single-precision" register moves to decodetree:
* VMSR
* VMRS
* VMOV between general purpose register and single precision

Note that the VMSR/VMRS conversions make our handling of
the "should this UNDEF?" checks consistent between the two
instructions:
* VMSR to MVFR0, MVFR1, MVFR2 now UNDEF from EL0
  (previously was a nop)
* VMSR to FPSID now UNDEFs from EL0 or if VFPv3 or better
  (previously was a nop)
* VMSR to FPINST and FPINST2 now UNDEF if VFPv3 or better
  (previously would write to the register, which had no
  guest-visible effect because we always UNDEF reads)

We also tighten up the decode: we were previously underdecoding
some SBZ or SBO bits.

The conversion of VMOV_single includes the expansion out of the
gen_mov_F0_vreg()/gen_vfp_mrs() and gen_mov_vreg_F0()/gen_vfp_msr()
sequences into the simpler direct load/store of the TCG temp via
neon_{load,store}_reg32(): we know in the new function that we're
always single-precision, we don't need to use the old-and-deprecated
cpu_F0* TCG globals, and we don't happen to have the declaration of
gen_vfp_msr() and gen_vfp_mrs() at the point in the file where the
new function is.

Backports commit a9ab50011aeda2dd012da99069e078379315ea18 from qemu
2019-06-13 17:16:38 -04:00
Peter Maydell 694058da94
target/arm: Convert double-precision register moves to decodetree
Convert the "double-precision" register moves to decodetree:
this covers VMOV scalar-to-gpreg, VMOV gpreg-to-scalar and VDUP.

Note that the conversion process has tightened up a few of the
UNDEF encoding checks: we now correctly forbid:
* VMOV-to-gpr with U:opc1:opc2 == 10x00 or x0x10
* VMOV-from-gpr with opc1:opc2 == 0x10
* VDUP with B:E == 11
* VDUP with Q == 1 and Vn<0> == 1

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
The accesses of elements < 32 bits could be improved by doing
direct ld/st of the right size rather than 32-bit read-and-shift
or read-modify-write, but we leave this for later cleanup,
since this series is generally trying to stick to fixing
the decode.

Backports commit 9851ed9269d214c0c6feba960dd14ff09e6c34b4 from qemu
2019-06-13 17:11:56 -04:00
Peter Maydell 7265161108
target/arm: Add helpers for VFP register loads and stores
The current VFP code has two different idioms for
loading and storing from the VFP register file:
 1. using the gen_mov_F0_vreg() and similar functions,
    which load and store to a fixed set of TCG globals
    cpu_F0s, cpu_F0d, etc.
 2. by direct calls to tcg_gen_ld_f64() and friends

We want to phase out idiom 1 (because the use of the
fixed globals is a relic of a much older version of TCG),
but idiom 2 is quite longwinded:
  tcg_gen_ld_f64(tmp, cpu_env, vfp_reg_offset(true, reg))
requires us to specify the 64-bitness twice, once in
the function name and once by passing 'true' to
vfp_reg_offset(). There's no guard against accidentally
passing the wrong flag.

Instead, let's move to a convention of accessing 64-bit
registers via the existing neon_load_reg64() and
neon_store_reg64(), and provide new neon_load_reg32()
and neon_store_reg32() for the 32-bit equivalents.

Implement the new functions and use them in the code in
translate-vfp.inc.c. We will convert the rest of the VFP
code as we do the decodetree conversion in subsequent
commits.
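
A sketch of the new 32-bit pair (following the commit description; the
64-bit versions already exist, and vfp_reg_offset() is the existing
offset helper):

    static inline void neon_load_reg32(TCGv_i32 var, int reg)
    {
        tcg_gen_ld_i32(var, cpu_env, vfp_reg_offset(false, reg));
    }

    static inline void neon_store_reg32(TCGv_i32 var, int reg)
    {
        tcg_gen_st_i32(var, cpu_env, vfp_reg_offset(false, reg));
    }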

Backports commit 160f3b64c5cc4c8a09a1859edc764882ce6ad6bf from qemu
2019-06-13 17:01:59 -04:00
Peter Maydell 033a386ffb
target/arm: Move the VFP trans_* functions to translate-vfp.inc.c
Move the trans_*() functions we've just created from translate.c
to translate-vfp.inc.c. This is pure code motion with no textual
changes (this can be checked with 'git show --color-moved').

Backports commit f7bbb8f31f0761edbf0c64b7ab3c3f49c13612ea from qemu
2019-06-13 16:56:24 -04:00
Peter Maydell e55d31a5ac
target/arm: Convert VCVTA/VCVTN/VCVTP/VCVTM to decodetree
Convert the VCVTA/VCVTN/VCVTP/VCVTM instructions to decodetree.
trans_VCVT() is temporarily left in translate.c.

Backports commit c2a46a914cd5c38fd0ee57ff0befc1c5bde27bcf from qemu
2019-06-13 16:54:42 -04:00
Peter Maydell 9fb01cb526
target/arm: Convert VRINTA/VRINTN/VRINTP/VRINTM to decodetree
Convert the VRINTA/VRINTN/VRINTP/VRINTM instructions to decodetree.
Again, trans_VRINT() is temporarily left in translate.c.

Backports commit e3bb599d16e4678b228d80194cee328f894b1ceb from qemu
2019-06-13 16:50:36 -04:00
Peter Maydell 4501daf010
target/arm: Convert VMINNM, VMAXNM to decodetree
Convert the VMINNM and VMAXNM instructions to decodetree.
As with VSEL, we leave the trans_VMINMAXNM() function
in translate.c for the moment.

Backports commit f65988a1efdb42f9058db44297591491842e697c from qemu
2019-06-13 16:43:50 -04:00
Peter Maydell 3994dfd079
target/arm: Convert the VSEL instructions to decodetree
Convert the VSEL instructions to decodetree.
We leave trans_VSEL() in translate.c for now as this allows
the patch to show just the changes from the old handle_vsel().

In the old code the check for "do D16-D31 exist" was hidden in
the VFP_DREG macro, and assumed that VFPv3 always implied that
D16-D31 exist. In the new code we do the correct ID register test.
This gives identical behaviour for most of our CPUs, and fixes
previously incorrect handling for Cortex-R5F, Cortex-M4 and
Cortex-M33, which all implement VFPv3 or better with only 16
double-precision registers.

Backports commit b3ff4b87b4ae08120a51fe12592725e1dca8a085 from qemu
2019-06-13 16:41:22 -04:00
Peter Maydell 93adaa7de2
target/arm: Explicitly enable VFP short-vectors for aarch32 -cpu max
At the moment our -cpu max for AArch32 supports VFP short-vectors
because we always implement them, even for CPUs which should
not have them. The following commits are going to switch to
using the correct ID-register-check to enable or disable short
vector support, so we need to turn it on explicitly for -cpu max,
because Cortex-A15 doesn't implement it.

We don't enable this for the AArch64 -cpu max, because the v8A
architecture never supports short vectors.

Backports commit 973751fd798d41402d34f9f705c0c6d1633d0cda from qemu
2019-06-13 16:38:01 -04:00
Peter Maydell 808d929d7c
target/arm: Fix Cortex-R5F MVFR values
The Cortex-R5F initfn was not correctly setting up the MVFR
ID register values. Fill these in, since some subsequent patches
will use ID register checks rather than CPU feature bit checks.

Backports commit 3de79d335c9aa7d726865e3933d9b21781032183 from qemu
2019-06-13 16:36:48 -04:00
Lioncash b3cfede44f
target/arm: Make load_cpu_offset() take a DisasContext* instead of uc_struct*
Keeps it consistent with store_cpu_offset().
2019-06-13 16:35:31 -04:00
Peter Maydell 78997058e4
target/arm: Factor out VFP access checking code
Factor out the VFP access checking code so that we can use it in the
leaf functions of the decodetree decoder.

We call the function full_vfp_access_check() so we can keep
the more natural vfp_access_check() for a version which doesn't
have the 'ignore_vfp_enabled' flag -- that way almost all VFP
insns will be able to use vfp_access_check(s) and only the
special-register access function will have to use
full_vfp_access_check(s, ignore_vfp_enabled).

Backports commit 06db8196bba34776829020192ed623a0b22e6557 from qemu
2019-06-13 16:33:38 -04:00
Peter Maydell 9732ebba5c
target/arm: Add stubs for AArch32 VFP decodetree
Add the infrastructure for building and invoking a decodetree decoder
for the AArch32 VFP encodings. At the moment the new decoder covers
nothing, so we always fall back to the existing hand-written decode.

We need to have one decoder for the unconditional insns and one for
the conditional insns, as otherwise the patterns for conditional
insns would incorrectly match against the unconditional ones too.

Since translate.c is over 14,000 lines long and we're going to be
touching pretty much every line of the VFP code as part of the
decodetree conversion, we create a new translate-vfp.inc.c to hold
the code which deals with VFP in the new scheme. It should be
possible to convert this into a standalone translation unit
eventually, but the conversion process will be much simpler if we
simply #include it midway through translate.c to start with.

Backports commit 78e138bc1f672c145ef6ace74617db00eebaa2ba from qemu
2019-06-13 16:24:37 -04:00
Richard Henderson 7c03c8eb04
decodetree: Fix comparison of Field
A typo caused us to compare the sign of the field twice, instead of
comparing both the sign and the mask of the field (the mask itself
encodes both position and length).

Backports commit 2c7d442743854d2c1f5475446e088bd523f4bb20 from qemu
2019-06-13 16:17:56 -04:00
Richard Henderson afaea6a291
target/arm: Fix output of PAuth Auth
The ARM pseudocode installs the error_code into the original
pointer, not the encrypted pointer. The difference applies
within the 7 bits of pac data; the result should be the sign
extension of bit 55.

Add a testcase to that effect.
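
A sketch of the fix's shape (illustrative names, not the exact QEMU
code; the deposit position also depends on whether TBI is in use): on
failure the PAC bits are stripped back to the original pointer first,
and only then is the error code deposited, so the top bits remain the
sign extension of bit 55:

    /* was: deposit64(ptr, 53, 2, error_code) on the still-encrypted ptr */
    uint64_t result = pauth_original_ptr(ptr, param);
    return deposit64(result, 53, 2, error_code);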

Backports commit d67ebada159148bfdfde84871338738e4465e985 from qemu
2019-06-13 16:17:00 -04:00
Peter Maydell 230f8a091a
target/arm: Implement NSACR gating of floating point
The NSACR register allows secure code to configure the FPU
to be inaccessible to non-secure code. If the NSACR.CP10
bit is clear then:
* NS accesses to the FPU trap as UNDEF (i.e. to NS EL1 or EL2)
* CPACR.{CP10,CP11} behave as if RAZ/WI
* HCPTR.{TCP11,TCP10} behave as if RAO/WI

Note that we do not implement the NSACR.NSASEDIS bit which
gates only access to Advanced SIMD, in the same way that
we don't implement the equivalent CPACR.ASEDIS and HCPTR.TASE.

Backports commit fc1120a7f5f2d4b601003205c598077d3eb11ad2 from qemu
2019-06-13 16:15:28 -04:00
Richard Henderson 7c32498b7f
target/arm: Use tcg_gen_gvec_bitsel
This replaces 3 target-specific implementations for BIT, BIF, and BSL.
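
All three are forms of one bitwise select; a one-line C statement of
the semantics (the operands just play different roles in each insn):

    /* for each bit: take b where selector a is 1, else c */
    static inline uint64_t bitsel(uint64_t a, uint64_t b, uint64_t c)
    {
        return (b & a) | (c & ~a);
    }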

Backports commit 3a7a2b4e5cf0d49cd8b14e8225af0310068b7d20 from qemu
2019-06-13 16:12:56 -04:00
Richard Henderson a1396b12f6
tcg: Fix typos in helper_gvec_sar{8,32,64}v
The loop is written with scalars, not vectors.
Use the correct type when incrementing.
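
The shape of the fix for the 8-bit case (a sketch based on the commit
description; vec8 is the host vector type, while the loop walks scalar
elements):

    for (i = 0; i < oprsz; i += sizeof(uint8_t)) {  /* was: sizeof(vec8) */
        uint8_t sh = *(uint8_t *)(b + i) & 7;
        *(int8_t *)(d + i) = *(int8_t *)(a + i) >> sh;
    }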

Fixes: 5ee5c14cacd

Backports commit 899f08ad1d1231dbbfa67298413f05ed2679fb02 from qemu
2019-06-13 16:09:16 -04:00
Alex Bennée 938f8465a0
cputlb: cast size_t to target_ulong before using for address masks
While size_t is defined to be wide enough to address the biggest
host object, that is not wide enough when generating masks for 64 bit
guests on 32 bit hosts: we end up truncating the address when we fall
back to our unaligned helper.
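
A standalone illustration (uint32_t stands in for a 32-bit host's
size_t, uint64_t for the 64-bit guest address):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t addr = 0x1234567880ULL;   /* 64-bit guest address */
        uint32_t size = 8;
        uint64_t bad  = addr & ~(size - 1);           /* 32-bit mask */
        uint64_t good = addr & ~((uint64_t)size - 1); /* cast, as in fix */
        printf("bad  = 0x%llx\ngood = 0x%llx\n",
               (unsigned long long)bad, (unsigned long long)good);
        return 0;   /* bad = 0x34567880, good = 0x1234567880 */
    }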

Fixes: https://bugs.launchpad.net/qemu/+bug/1831545

Backports commit ab7a2009df66241a3742cbdfe8f9a1f66c6af21f from qemu
2019-06-13 16:07:01 -04:00
Alex Bennée 9aef73f5fb
cputlb: use uint64_t for interim values for unaligned load
When running on 32 bit TCG backends a wide unaligned load ends up
truncating data before returning to the guest. We specifically have
the return type as uint64_t to avoid any premature truncation, so
we should use the same type for the interim values.
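
The mechanism, in a standalone form (uint32_t stands in for the
too-narrow interim type):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t hi = 0x01234567u, lo = 0x89abcdefu;
        unsigned shift = 8;
        /* combining through a 32-bit interim drops hi's top bits */
        uint64_t bad  = (hi << shift) | (lo >> (32 - shift));
        uint64_t good = ((uint64_t)hi << shift) | (lo >> (32 - shift));
        printf("bad  = 0x%llx\ngood = 0x%llx\n",
               (unsigned long long)bad, (unsigned long long)good);
        return 0;   /* bad = 0x23456789, good = 0x123456789 */
    }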

Fixes: https://bugs.launchpad.net/qemu/+bug/1830872
Fixes: eed5664238e

Backports commit 8c79b288513587e960b6b7257a9d955d5592f209 from qemu
2019-06-13 16:06:22 -04:00
Richard Henderson d7ea41c3a3
cpu: Move icount_decr to CPUNegativeOffsetState
Amusingly, we had already ignored the comment to keep this value
at the end of CPUState. This restores the minimum negative offset
from TCG_AREG0 for code generation.

For the couple of uses within qom/cpu.c, without NEED_CPU_H, add
a pointer from the CPUState object to the IcountDecr object within
CPUNegativeOffsetState.

Backports commit 5e1401969b25f676fee6b1c564441759cf967a43 from qemu
2019-06-13 15:34:28 -04:00