Commit graph

2414 commits

Peter Maydell 362379a9e1 target/arm: Factor out preserve-fp-state from full_vfp_access_check()
Factor out the code which handles M-profile lazy FP state preservation
from full_vfp_access_check(); accesses to the FPCXT_NS register are
a special case which needs to do just this part (corresponding in the
pseudocode to the PreserveFPState() function), and not the full
set of actions matching the pseudocode ExecuteFPCheck() which
normal FP instructions need to do.

Backports 96dfae686628fc14ba4f993824322b93395e221b
2021-03-03 18:48:47 -05:00
Peter Maydell 2de945ba4d target/arm: Use new FPCR_NZCV_MASK constant
We defined a constant name for the mask of NZCV bits in the FPCR/FPSCR
in the previous commit; use it in a couple of places in existing code,
where we're masking out everything except NZCV for the "load to Rt=15
sets CPSR.NZCV" special case.

Backports 6a017acdf83e3bb6bd5e85289ca90b2ea3282b7e
2021-03-03 18:47:30 -05:00
Peter Maydell 2c6e54d1cd target/arm: Implement M-profile FPSCR_nzcvqc
v8.1M defines a new FP system register FPSCR_nzcvqc; this behaves
like the existing FPSCR, except that it reads and writes only bits
[31:27] of the FPSCR (the N, Z, C, V and QC flag bits). (Unlike the
FPSCR, the special case for Rt=15 of writing the CPSR.NZCV is not
permitted.)

Implement the register. Since we don't yet implement MVE, we handle
the QC bit as RES0, with todo comments for where we will need to add
support later.

Backports 9542c30bcf13c495400d63616dd8dfa825b04685
2021-03-03 18:45:38 -05:00
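
As a rough illustration of the access rule above (not the actual QEMU helper): reads and writes of FPSCR_nzcvqc touch only bits [31:27] of the FPSCR, and with QC handled as RES0 only NZCV shows through. The constant and function names are made up for the sketch.

    #include <stdint.h>

    /* Illustrative bit positions: N..V are FPSCR[31:28], QC is FPSCR[27]. */
    #define FPSCR_NZCV_MASK  0xf0000000u
    #define FPSCR_QC         (1u << 27)

    /* Read the FPSCR_nzcvqc view: bits [31:27] of the FPSCR. Without MVE
     * the QC bit is handled as RES0, so only NZCV shows through. */
    static uint32_t fpscr_nzcvqc_read(uint32_t fpscr)
    {
        return fpscr & FPSCR_NZCV_MASK;
    }

    /* Write bits [31:27] only; the rest of the FPSCR is untouched, and the
     * RES0 QC bit ignores writes on a core without MVE. */
    static uint32_t fpscr_nzcvqc_write(uint32_t fpscr, uint32_t val)
    {
        return (fpscr & ~FPSCR_NZCV_MASK) | (val & FPSCR_NZCV_MASK);
    }
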
Peter Maydell 56532aa94c target/arm: Implement VLDR/VSTR system register
Implement the new-in-v8.1M VLDR/VSTR variants which directly
read or write FP system registers to memory.

Backports 0bf0dd4dcbd9fab324700ac6e0cd061cd043de0d
2021-03-03 18:42:05 -05:00
Peter Maydell edae732810 target/arm: Move general-use constant expanders up in translate.c
The constant-expander functions like negate, plus_2, etc, are
generally useful; move them up in translate.c so we can use them in
the VFP/Neon decoders as well as in the A32/T32/T16 decoders.

Backports f7ed0c9433e7c5c157d2e6235eb5c8b93234a71a
2021-03-03 18:29:32 -05:00
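
For readers unfamiliar with decodetree, the constant expanders mentioned here are tiny helpers of roughly this shape, which the generated decoder calls to turn an encoded field into the immediate a trans_* function wants. This is a sketch of the idea, not a copy of the QEMU definitions.

    /* Stand-in for QEMU's DisasContext, which expanders receive but
     * usually ignore. */
    typedef struct DisasContext DisasContext;

    static int negate(DisasContext *s, int x)
    {
        return -x;
    }

    static int plus_2(DisasContext *s, int x)
    {
        return x + 2;
    }
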
Peter Maydell a72c744370 target/arm: Refactor M-profile VMSR/VMRS handling
Currently M-profile borrows the A-profile code for VMSR and VMRS
(access to the FP system registers), because all it needs to support
is the FPSCR. In v8.1M things become significantly more complicated
in two ways:

* there are several new FP system registers; some have side effects
on read, and one (FPCXT_NS) needs to avoid the usual
vfp_access_check() and the "only if FPU implemented" check

* all sysregs are now accessible both by VMRS/VMSR (which
reads/writes a general purpose register) and also by VLDR/VSTR
(which reads/writes them directly to memory)

Refactor the structure of how we handle VMSR/VMRS to cope with this:

* keep the M-profile code entirely separate from the A-profile code

* abstract out the "read or write the general purpose register" part
of the code into a loadfn or storefn function pointer, so we can
reuse it for VLDR/VSTR.

Backports 32a290b8c3c2dc85cd88bd8983baf900d575cab
2021-03-03 18:13:17 -05:00
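
The shape of that refactor - pushing "where the value comes from / goes to" behind a function pointer so the same sysreg read/write core can serve both VMRS/VMSR and the new VLDR/VSTR forms - might look roughly like this. All names are invented for the sketch; the real translate-time code works on TCG temporaries rather than plain integers.

    #include <stdint.h>
    #include <stdbool.h>

    typedef struct Ctx Ctx;   /* stand-in for the translator's DisasContext */

    /* storefn: consume a value read from an FP sysreg (into a GP reg or memory).
     * loadfn:  produce the value to write to an FP sysreg (from a GP reg or memory). */
    typedef void fp_sysreg_storefn(Ctx *s, void *opaque, uint32_t value);
    typedef uint32_t fp_sysreg_loadfn(Ctx *s, void *opaque);

    /* One common core for "read the sysreg, hand the result to the caller"... */
    static bool fp_sysreg_read(Ctx *s, int regno,
                               fp_sysreg_storefn *storefn, void *opaque)
    {
        uint32_t value = 0;   /* ...read sysreg 'regno' here, with side effects... */
        storefn(s, opaque, value);
        return true;
    }

    /* ...and one for "get the value from the caller, then write the sysreg". */
    static bool fp_sysreg_write(Ctx *s, int regno,
                                fp_sysreg_loadfn *loadfn, void *opaque)
    {
        uint32_t value = loadfn(s, opaque);
        (void)value;          /* ...write 'value' to sysreg 'regno' here... */
        return true;
    }

VMRS/VMSR would then pass a loadfn/storefn that moves the value to or from a general-purpose register, while VLDR/VSTR pass ones that perform the memory access.
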
Peter Maydell 4eafe42d67 target/arm: Enforce M-profile VMRS/VMSR register restrictions
For M-profile before v8.1M, the only valid register for VMSR/VMRS is
the FPSCR. We have a comment that states this, but the actual logic
to forbid accesses for any other register value is missing, so we
would end up with A-profile style behaviour. Add the missing check.

Backports ede97c9d71110821738a48f88ff9f10d6bec017f
2021-03-03 18:06:23 -05:00
Peter Maydell 2e3bd010a8 target/arm: Implement CLRM instruction
In v8.1M the new CLRM instruction allows zeroing an arbitrary set of
the general-purpose registers and APSR. Implement this.

The encoding is a subset of the LDMIA T2 encoding, using what would
be Rn=0b1111 (which UNDEFs for LDMIA).

Backports 6e21a013fbdf54960a079dccc90772bb622e28e8
2021-03-03 18:00:28 -05:00
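
Behaviourally (ignoring the decode details), CLRM just walks a register-list bitmask and clears each selected general-purpose register, plus APSR. A loose sketch, assuming bit i selects Ri and bit 15 stands for APSR; the exact register-list restrictions of the encoding are not modelled.

    #include <stdint.h>

    typedef struct {
        uint32_t regs[16];   /* R0..R15 */
        uint32_t apsr;
    } CpuSketch;

    static void clrm_sketch(CpuSketch *env, uint16_t list)
    {
        for (int i = 0; i < 15; i++) {
            if (list & (1u << i)) {
                env->regs[i] = 0;      /* zero each listed GP register */
            }
        }
        if (list & (1u << 15)) {
            env->apsr = 0;             /* bit 15 stands for APSR here */
        }
    }
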
Peter Maydell 43d8441881 target/arm: Implement VSCCLRM insn
Implement the v8.1M VSCCLRM insn, which zeros floating point
registers if there is an active floating point context.
This requires support in write_neon_element32() for the MO_32
element size, so add it.

Because we want to use arm_gen_condlabel(), we need to move
the definition of that function up in translate.c so it is
before the #include of translate-vfp.c.inc.

Backports 83ff3d6add965c9752324de11eac5687121ea826
2021-03-03 17:57:30 -05:00
Peter Maydell 952ebdc207 target/arm: Don't clobber ID_PFR1.Security on M-profile cores
In arm_cpu_realizefn() we check whether the board code disabled EL3
via the has_el3 CPU object property, which we create if the CPU
starts with the ARM_FEATURE_EL3 feature bit. If it is disabled, then
we turn off ARM_FEATURE_EL3 and also zero out the relevant fields in
the ID_PFR1 and ID_AA64PFR0 registers.

This codepath was incorrectly being taken for M-profile CPUs, which
do not have an EL3 and don't set ARM_FEATURE_EL3, but which may have
the M-profile Security extension and so should have non-zero values
in the ID_PFR1.Security field.

Restrict the handling of the feature flag to A/R-profile cores.

Backports 4018818840f499d0a478508aedbb6802c8eae928
2021-03-03 17:52:30 -05:00
Peter Maydell cfefada296 target/arm: Implement v8.1M PXN extension
In v8.1M the PXN architecture extension adds a new PXN bit to the
MPU_RLAR registers, which forbids execution of code in the region
from a privileged mode.

This is another feature which is just in the generic "in v8.1M" set
and has no ID register field indicating its presence.

Backports cad8e2e3160dd10371552fce6cd8c6e171503e13
2021-03-03 17:50:26 -05:00
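
In effect the new bit strips execute permission from privileged accesses to the region; schematically (illustrative names, not the QEMU MPU code):

    #include <stdbool.h>

    enum { PERM_READ = 1, PERM_WRITE = 2, PERM_EXEC = 4 };

    /* Apply a v8.1M PXN region attribute: if PXN is set, privileged code
     * may still read/write the region but may not execute from it. */
    static int apply_region_pxn(int prot, bool pxn, bool is_priv)
    {
        if (pxn && is_priv) {
            prot &= ~PERM_EXEC;
        }
        return prot;
    }
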
Rémi Denis-Courmont d9592046ef target/arm: fix stage 2 page-walks in 32-bit emulation
Using a target unsigned long would limit the Input Address of an LPAE
page-walk to 32 bits on AArch32 and 64 bits on AArch64. This is okay
for stage 1 or on AArch64, but it is insufficient for stage 2 on
AArch32. In that latter case, the Input Address can have up to 40 bits.

Backports commit 98e8779770c40901ed585745aacc9a8e2b934a28
2021-03-02 13:37:02 -05:00
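
The type problem is easy to demonstrate in isolation: a 32-bit address variable silently truncates a 40-bit stage-2 Input Address, so the walk has to carry the address in a 64-bit type even when emulating AArch32. A self-contained sketch:

    #include <stdint.h>
    #include <stdio.h>
    #include <inttypes.h>

    int main(void)
    {
        uint64_t ipa = 0x80ffff0000ull;       /* a 40-bit stage-2 Input Address */
        uint32_t truncated = (uint32_t)ipa;   /* what a 32-bit address type keeps */

        printf("full IPA:      0x%010" PRIx64 "\n", ipa);
        printf("truncated IPA: 0x%010" PRIx32 "\n", truncated);
        return 0;
    }
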
Chetan Pant 3e25486110 x86 tcg cpus: Fix Lesser GPL version number
There is no "version 2" of the "Lesser" General Public License.
It is either "GPL version 2.0" or "Lesser GPL version 2.1".
This patch replaces all occurrences of "Lesser GPL version 2" with
"Lesser GPL version 2.1" in comment section.

Backports d9ff33ada7f32ca59f99b270a2d0eb223b3c9c8f
2021-03-02 13:33:10 -05:00
Chetan Pant c7f6786089 arm tcg cpus: Fix Lesser GPL version number
There is no "version 2" of the "Lesser" General Public License.
It is either "GPL version 2.0" or "Lesser GPL version 2.1".
This patch replaces all occurrences of "Lesser GPL version 2" with
"Lesser GPL version 2.1" in comment section.

Backports 50f57e09fda4b7ffbc5ba62aad6cebf660824023
2021-03-02 13:30:35 -05:00
Peter Maydell f991d945d3 target/arm/translate-neon.c: Handle VTBL UNDEF case before VFP access check
Checks for UNDEF cases should go before the "is VFP enabled?" access
check, except in special cases. Move a stray UNDEF check in the VTBL
trans function up above the access check.

Backports b6c56c8a9a4064ea783f352f43c5df6231a110fa
2021-03-02 13:24:51 -05:00
Richard Henderson 9623047097 target/arm: Fix neon VTBL/VTBX for len > 1
The helper function did not get updated when we reorganized
the vector register file for SVE. Since then, the neon dregs
are non-sequential and cannot be simply indexed.

At the same time, make the helper function operate on 64-bit
quantities so that we do not have to call it twice.

Backports 604cef3e57eaeeef77074d78f6cf2eca1be11c62
2021-03-02 13:23:13 -05:00
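
Setting aside the register-file indexing that the fix is actually about, the lookup itself on a 64-bit result quantity goes roughly like this (a sketch of VTBL/VTBX semantics, not the QEMU helper; the caller is assumed to have gathered the 'len' D registers of table bytes, which is where the non-sequential dreg layout matters):

    #include <stdint.h>
    #include <stdbool.h>

    static uint64_t vtbl_sketch(const uint8_t *table, int len,
                                uint64_t indexes, uint64_t dest, bool is_vtbx)
    {
        uint64_t result = 0;

        for (int i = 0; i < 8; i++) {
            uint8_t idx = (indexes >> (i * 8)) & 0xff;
            uint8_t byte;

            if (idx < 8 * len) {
                byte = table[idx];                /* in range: take the table byte */
            } else if (is_vtbx) {
                byte = (dest >> (i * 8)) & 0xff;  /* VTBX: keep the old dest byte */
            } else {
                byte = 0;                         /* VTBL: out of range reads 0 */
            }
            result |= (uint64_t)byte << (i * 8);
        }
        return result;
    }

Producing the whole 64-bit destination in one call is the "operate on 64-bit quantities" point of the commit: the previous helper had to be called twice per D register.
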
Xinhao Zhang b3f63b72a2 target/arm: add space before the open parenthesis '('
Fix code style. Space required before the open parenthesis '('.

Backports 7f350a87e3a85e8a260ce4b133d549a7b2789213
2021-03-02 13:17:48 -05:00
Xinhao Zhang 71d4aced5d target/arm: Don't use '#' flag of printf format
Fix code style. Don't use the '#' flag of the printf format ('%#') in
format strings; use a '0x' prefix instead.

Backports 6eb55edbabb9eed1e4c7dfb233e7d738e8b5fa89
2021-03-02 13:16:09 -05:00
Xinhao Zhang 492fbc4d2c target/arm: add spaces around operator
Fix code style. Operators need spaces on both sides.

Backports bdc3b6f570e8bd219aa6a24a149b35a691e6986c
2021-03-02 13:15:12 -05:00
Peter Maydell e528c8229e target/arm: Get correct MMU index for other-security-state
In arm_v7m_mmu_idx_for_secstate() we get the 'priv' level to pass to
armv7m_mmu_idx_for_secstate_and_priv() by calling arm_current_el().
This is incorrect when the security state being queried is not the
current one, because arm_current_el() uses the current security state
to determine which of the banked CONTROL.nPRIV bits to look at.
The effect was that if (for instance) Secure state was in privileged
mode but Non-Secure was not then we would return the wrong MMU index.

The only places where we are using this function in a way that could
trigger this bug are for the stack loads during a v8M function-return
and for the instruction fetch of a v8M SG insn.

Fix the bug by expanding out the M-profile version of the
arm_current_el() logic inline so it can use the passed in secstate
rather than env->v7m.secure.

Backports 7142eb9e24b4aa5118cd67038057f15694d782aa
2021-03-02 13:08:44 -05:00
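
The essence of the fix is that "privileged?" must come from the banked CONTROL.nPRIV of the security state being asked about, not from the current one. Very roughly, with a made-up state structure standing in for the real CPUARMState fields:

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        uint32_t control[2];   /* CONTROL, banked by security state (0 = NS, 1 = S) */
        bool handler_mode;
    } V7mSketch;

    #define CONTROL_NPRIV 1u   /* CONTROL.nPRIV is bit 0 */

    /* Privilege level for a *given* security state: Handler mode is always
     * privileged; otherwise consult that state's banked CONTROL.nPRIV. */
    static bool v7m_priv_for_secstate(const V7mSketch *s, bool secstate)
    {
        return s->handler_mode || !(s->control[secstate] & CONTROL_NPRIV);
    }
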
Rémi Denis-Courmont a4053565d6 target/arm: fix LORID_EL1 access check
Secure mode is not exempted from checking SCR_EL3.TLOR, and in the
future HCR_EL2.TLOR when S-EL2 is enabled.

Backports 9bd268bae5c4760870522292fb1d46e7da7e372a
2021-03-02 13:06:50 -05:00
Rémi Denis-Courmont df4413edc7 target/arm: fix handling of HCR.FB
HCR should be applied when NS is set, not when it is cleared.

Backports 373e7ffde9bae90a20fb5db21b053f23091689f4
2021-03-02 13:05:01 -05:00
Peter Maydell 6b8096d9fc target/arm: Fix VUDOT/VSDOT (scalar) on big-endian hosts
The helper functions for performing the udot/sdot operations against
a scalar were not using an address-swizzling macro when converting
the index of the scalar element into a pointer into the vm array.
This had no effect on little-endian hosts but meant we generated
incorrect results on big-endian hosts.

For these insns, the index is indexing over groups of four 8-bit values,
so 32 bits per indexed entity, and H4() is therefore what we want.
(For Neon the only possible input indexes are 0 and 1.)

Backports d1a9254be5cc93afb15be19f7543da6ff4806256
2021-03-02 13:03:51 -05:00
Peter Maydell 5c6730a432 target/arm: Fix float16 pairwise Neon ops on big-endian hosts
In the neon_padd/pmax/pmin helpers for float16, a cut-and-paste error
meant we were using the H4() address swizzler macro rather than the
H2() which is required for 2-byte data. This had no effect on
little-endian hosts but meant we put the result data into the
destination Dreg in the wrong order on big-endian hosts.

Backports 552714c0812a10e5cff239bd29928e5fcb8d8b3b
2021-03-02 13:02:31 -05:00
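
For both of these endianness fixes, the Hn() macros are just index swizzles that compensate for the host-endian layout of the vector file's 64-bit chunks: identity on a little-endian host, and an XOR that flips the index within each 8-byte unit on a big-endian one. A sketch of the idea (the guard macro and XOR constants follow my reading of QEMU's definitions, so treat them as assumptions):

    #include <stdint.h>

    #ifdef HOST_WORDS_BIGENDIAN        /* QEMU's host-endianness guard of this era */
    #define H2(x)  ((x) ^ 3)           /* 2-byte elements within a 64-bit chunk */
    #define H4(x)  ((x) ^ 1)           /* 4-byte elements within a 64-bit chunk */
    #else
    #define H2(x)  (x)
    #define H4(x)  (x)
    #endif

    /* E.g. fetching the 32-bit scalar operand for VSDOT/VUDOT out of a
     * D register viewed as an array of words: */
    static uint32_t neon_word(const uint32_t *dreg_as_words, int index)
    {
        return dreg_as_words[H4(index)];
    }
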
Richard Henderson d473f66177 target/arm: Improve do_prewiden_3d
We can use proper widening loads to extend 32-bit inputs,
and skip the "widenfn" step.

Backports 8aab18a2c5209e4e48998a61fbc2d89f374331ed
2021-03-02 13:00:25 -05:00
Richard Henderson 9263117d47 target/arm: Simplify do_long_3d and do_2scalar_long
In both cases, we can sink the write-back and perform
the accumulation into the normal destination temps.

Backports 9f1a5f93c2dd345dc6c8fe86ed14bf1485056f6e
2021-03-02 12:46:53 -05:00
Richard Henderson 07c2b70234 target/arm: Rename neon_load_reg64 to vfp_load_reg64
The only uses of this function are for loading VFP
double-precision values, and nothing to do with NEON.

Backports b38b96ca90827012ab8eb045c1337cea83a54c4b
2021-03-02 12:43:25 -05:00
Richard Henderson 9d87b62578 target/arm: Add read/write_neon_element64
Replace all uses of neon_load/store_reg64 within translate-neon.c.inc.

Backports 0aa8e700a53b0aa7275ed747b8fa3acb61d35f2d
2021-03-02 12:40:33 -05:00
Richard Henderson 89b1f62878 target/arm: Rename neon_load_reg32 to vfp_load_reg32
The only uses of this function are for loading VFP
single-precision values, and nothing to do with NEON.

Backports 21c1c0e50b73c580c6bfc8f2314d1b6a14793561
2021-03-02 12:30:20 -05:00
Richard Henderson 011d9ab061 target/arm: Expand read/write_neon_element32 to all MemOp
We can then use this to improve VMOV (scalar to gp) and
VMOV (gp to scalar) so that we simply perform the memory
operation that we wanted, rather than inserting or
extracting from a 32-bit quantity.

These were the last uses of neon_load/store_reg, so remove them.

Backports 4d5fa5a80ac28f34b8497be1e85371272413a12e
2021-03-02 12:26:41 -05:00
Richard Henderson d21316d639 target/arm: Add read/write_neon_element32
Model these off the aa64 read/write_vec_element functions.
Use it within translate-neon.c.inc. The new functions do
not allocate or free temps, so this rearranges the calling
code a bit.

Backports a712266f5d5a36d04b22fe69fa15592d62bed019
2021-03-02 12:18:31 -05:00
Richard Henderson e390c1ec7f target/arm: Use neon_element_offset in vfp_reg_offset
This seems a bit more readable than using offsetof CPU_DoubleU.

Backports d8719785fde2f5041986853a314c05c6f567d3cb
2021-03-02 11:55:49 -05:00
Richard Henderson c1ca9e53da target/arm: Use neon_element_offset in neon_load/store_reg
These are the only users of neon_reg_offset, so remove that.

Backports 0f2cdc82276a723ee58562b56b9d537a4bd7bfef
2021-03-02 11:54:56 -05:00
Richard Henderson 1b09d0d96f target/arm: Move neon_element_offset to translate.c
This will shortly have users outside of translate-neon.c.inc.

Backports 7ec85c02833f4264840c6ed78b749443a7b4ffe0
2021-03-02 11:52:59 -05:00
Richard Henderson 8a20537e7f target/arm: Introduce neon_full_reg_offset
This function makes it clear that we're talking about the whole
register, and not the 32-bit piece at index 0. This fixes a bug
when running on a big-endian host.

Backports 015ee81a4c06b644969f621fd9965cc6372b879e
2021-03-02 11:50:36 -05:00
Peter Maydell 2f0940677e target/arm: Implement FPSCR.LTPSIZE for M-profile LOB extension
If the M-profile low-overhead-branch extension is implemented, FPSCR
bits [18:16] are a new field LTPSIZE. If MVE is not implemented
(currently always true for us) then this field always reads as 4 and
ignores writes.

These bits used to be the vector-length field for the old
short-vector extension, so we need to take care that they are not
misinterpreted as setting vec_len. We do this with a rearrangement
of the vfp_set_fpscr() code that deals with vec_len, vec_stride
and also the QC bit; this obviates the need for the M-profile
only masking step that we used to have at the start of the function.

We provide a new field in CPUState for LTPSIZE, even though this
will always be 4, in preparation for MVE, so we don't have to
come back later and split it out of the vfp.xregs[FPSCR] value.
(This state struct field will be saved and restored as part of
the FPSCR value via the vmstate_fpscr in machine.c.)

Backports 8128c8e8cc9489a8387c74075974f86dc0222e7f
2021-03-01 20:36:02 -05:00
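
The "reads as 4, ignores writes" behaviour without MVE can be pictured like this; the field position [18:16] is from the commit text and the names are invented for the sketch:

    #include <stdint.h>

    #define FPSCR_LTPSIZE_SHIFT  16
    #define FPSCR_LTPSIZE_MASK   (7u << FPSCR_LTPSIZE_SHIFT)   /* bits [18:16] */

    /* Without MVE, LTPSIZE carries no state: drop bits [18:16] on a write so
     * they can never be misread as the old short-vector length field... */
    static uint32_t fpscr_value_after_write_no_mve(uint32_t val)
    {
        return val & ~FPSCR_LTPSIZE_MASK;
    }

    /* ...and make the field read back as the constant 4. */
    static uint32_t fpscr_read_no_mve(uint32_t stored)
    {
        return (stored & ~FPSCR_LTPSIZE_MASK) | (4u << FPSCR_LTPSIZE_SHIFT);
    }
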
Peter Maydell 8a6e118a17 target/arm: Allow M-profile CPUs with FP16 to set FPSCR.FP16
M-profile CPUs with half-precision floating point support should
be able to write to FPSCR.FZ16, but an M-profile specific masking
of the value at the top of vfp_set_fpscr() currently prevents that.
This is not yet an active bug because we have no M-profile
FP16 CPUs, but needs to be fixed before we can add any.

The bits that the masking is effectively preventing from being
set are the A-profile only short-vector Len and Stride fields,
plus the Neon QC bit. Rearrange the order of the function so
that those fields are handled earlier and only under a suitable
guard; this allows us to drop the M-profile specific masking,
making FZ16 writeable.

This change also makes the QC bit correctly RAZ/WI for older
no-Neon A-profile cores.

This refactoring also paves the way for the low-overhead-branch
LTPSIZE field, which uses some of the bits that are used for
A-profile Stride and Len.

Backports commit d31e2ce68d56f5bcc83831497e5fe4b8a7e18e85
2021-03-01 20:33:22 -05:00
Peter Maydell 3ae5543825 target/arm: Implement v8.1M low-overhead-loop instructions
v8.1M's "low-overhead-loop" extension has three instructions
for looping:
* DLS (start of a do-loop)
* WLS (start of a while-loop)
* LE (end of a loop)

The loop-start instructions are both simple operations to start a
loop whose iteration count (if any) is in LR. The loop-end
instruction handles "decrement iteration count and jump back to loop
start"; it also caches the information about the branch back to the
start of the loop to improve performance of the branch on subsequent
iterations.

As with the branch-future instructions, the architecture permits an
implementation to discard the LO_BRANCH_INFO cache at any time, and
QEMU takes the IMPDEF option to never set it in the first place
(equivalent to discarding it immediately), because for us a "real"
implementation would be unnecessary complexity.

(This implementation only provides the simple looping constructs; the
vector extension MVE (Helium) adds some extra variants to handle
looping across vectors. We'll add those later when we implement
MVE.)

Backports commit b7226369721896ab9ef71544e4fe95b40710e05a
2021-03-01 20:29:04 -05:00
Peter Maydell be197f9857 target/arm: Implement v8.1M branch-future insns (as NOPs)
v8.1M implements a new 'branch future' feature, which is a
set of instructions that request the CPU to perform a branch
"in the future", when it reaches a particular execution address.
In hardware, the expected implementation is that the information
about the branch location and destination is cached and then
acted upon when execution reaches the specified address.
However the architecture permits an implementation to discard
this cached information at any point, and so guest code must
always include a normal branch insn at the branch point as
a fallback. In particular, an implementation is specifically
permitted to treat all BF insns as NOPs (which is equivalent
to discarding the cached information immediately).

For QEMU, implementing this caching of branch information
would be complicated and would not improve the speed of
execution at all, so we make the IMPDEF choice to implement
all BF insns as NOPs.

Backports commit 05903f036edba8e3ed940cc215b8e27fb49265b9
2021-03-01 20:25:15 -05:00
Peter Maydell 966246d991 target/arm: Don't allow BLX imm for M-profile
The BLX immediate insn in the Thumb encoding always performs
a switch from Thumb to Arm state. This would be totally useless
in M-profile which has no Arm decoder, and so the instruction
does not exist at all there. Make the encoding UNDEF for M-profile.

(This part of the encoding space is used for the branch-future
and low-overhead-loop insns in v8.1M.)

Backports 920f04fa3ea789f8f85a52cee5395b8887b56cf7
2021-03-01 20:23:59 -05:00
Peter Maydell 5680bc701b target/arm: Make the t32 insn[25:23]=111 group non-overlapping
The t32 decode has a group which represents a set of insns
which overlap with B_cond_thumb because they have [25:23]=111
(which is an invalid condition code field for the branch insn).
This group is currently defined using the {} overlap-OK syntax,
but it is almost entirely non-overlapping patterns. Switch
it over to use a non-overlapping group.

For this to be valid syntactically, CPS must move into the same
overlapping-group as the hint insns (CPS vs hints was the
only actual use of the overlap facility for the group).

The non-overlapping subgroup for CLREX/DSB/DMB/ISB/SB is no longer
necessary and so we can remove it (promoting those insns to
be members of the parent group).

Backports 45f11876ae86128bdee27e0b089045de43cc88e4
2021-03-01 20:22:11 -05:00
Peter Maydell 666fe17025 target/arm: Implement v8.1M conditional-select insns
v8.1M brings four new insns to M-profile:
* CSEL : Rd = cond ? Rn : Rm
* CSINC : Rd = cond ? Rn : Rm+1
* CSINV : Rd = cond ? Rn : ~Rm
* CSNEG : Rd = cond ? Rn : -Rm

Implement these.

Backports cc73bbded0dfb5612b0e416f7eda13a66950542a
2021-03-01 20:19:33 -05:00
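
Since the entry spells out the data flow exactly, the four operations can be stated as plain C (a behavioural sketch, not the TCG lowering):

    #include <stdint.h>
    #include <stdbool.h>

    /* v8.1M conditional select: if the condition passes the result is Rn,
     * otherwise a variant of Rm (unchanged, incremented, inverted, negated). */
    static uint32_t csel (bool c, uint32_t rn, uint32_t rm) { return c ? rn : rm; }
    static uint32_t csinc(bool c, uint32_t rn, uint32_t rm) { return c ? rn : rm + 1; }
    static uint32_t csinv(bool c, uint32_t rn, uint32_t rm) { return c ? rn : ~rm; }
    static uint32_t csneg(bool c, uint32_t rn, uint32_t rm) { return c ? rn : -rm; }
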
Peter Maydell 2dae268fcb target/arm: Implement v8.1M NOCP handling
From v8.1M, disabled-coprocessor handling changes slightly:
* coprocessors 8, 9, 14 and 15 are also governed by the
cp10 enable bit, like cp11
* an extra range of instruction patterns is considered
to be inside the coprocessor space

We previously marked these up with TODO comments; implement the
correct behaviour.

Unfortunately there is no ID register field which indicates this
behaviour. We could in theory test an unrelated ID register which
indicates guaranteed-to-be-in-v8.1M behaviour like ID_ISAR0.CmpBranch
>= 3 (low-overhead-loops), but it seems better to simply define a new
ARM_FEATURE_V8_1M feature flag and use it for this and other
new-in-v8.1M behaviour that isn't identifiable from the ID registers.

Backports commit 5d2555a1fe7370feeb1efbbf276a653040910017
2021-03-01 20:16:09 -05:00
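
One way to read the first bullet is as a simple folding of coprocessor numbers onto the existing cp10 enable check (a sketch; the enable check itself and the extra v8.1M instruction-pattern range are elided):

    /* v8.1M disabled-coprocessor rule from the entry above: cp8, cp9, cp14
     * and cp15 are gated by the same enable bit as cp10/cp11, so map them
     * all onto cp10 before the usual "is cp10 enabled?" test. */
    static int fold_v8_1m_coproc(int cp)
    {
        switch (cp) {
        case 8:
        case 9:
        case 11:
        case 14:
        case 15:
            return 10;
        default:
            return cp;
        }
    }
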
Richard Henderson f7e831a7e4 target/arm: Ignore HCR_EL2.ATA when {E2H,TGE} != 11
Unlike many other bits in HCR_EL2, the description for this
bit does not contain the phrase "if ... this field behaves
as 0 for all purposes other than", so do not squash the bit
in arm_hcr_el2_eff.

Instead, replicate the E2H+TGE test in the two places that
require it.

Backports 4301acd7d7d455792ea873ced75c0b5d653618b1
2021-03-01 20:12:36 -05:00
Richard Henderson 4f00eacb11 target/arm: Fix reported EL for mte_check_fail
The reporting in AArch64.TagCheckFail only depends on PSTATE.EL,
and not the AccType of the operation. There are two guest
visible problems that affect LDTR and STTR because of this:

(1) Selecting TCF0 vs TCF1 to decide on reporting,
(2) Reporting "data abort same el" rather than "data abort lower el".

Backports 50244cc76abcac3296cff3d84826f5ff71808c80
2021-03-01 20:10:44 -05:00
Richard Henderson 511636a3f4 target/arm: Remove redundant mmu_idx lookup
We already have the full ARMMMUIdx as computed from the
function parameter.

For the purpose of regime_has_2_ranges, we can ignore any
difference between AccType_Normal and AccType_Unpriv, which
would be the only difference between the passed mmu_idx
and arm_mmu_idx_el.

Backports 4aedfc0f633fd06dd2a5dc8ffa93f4c56117e37f
2021-03-01 20:09:51 -05:00
Peter Maydell d350644817 target/arm: AArch32 VCVT fixed-point to float is always round-to-nearest
For AArch32, unlike the VCVT of integer to float, which honours the
rounding mode specified by the FPSCR, VCVT of fixed-point to float is
always round-to-nearest. (AArch64 fixed-point-to-float conversions
always honour the FPCR rounding mode.)

Implement this by providing _round_to_nearest versions of the
relevant helpers which set the rounding mode temporarily when making
the call to the underlying softfloat function.

We only need to change the VFP VCVT instructions, because the
standard-FPSCR value used by the Neon VCVT is always set to
round-to-nearest, so we don't need to do the extra work of saving
and restoring the rounding mode.

Backports commit 61db12d9f9eb36761edba4d9a414cd8dd34c512b
2021-03-01 20:04:31 -05:00
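
The "_round_to_nearest" wrappers amount to forcing the rounding mode for the duration of the underlying conversion and then restoring it. A self-contained sketch of that pattern with stand-in types (the real helpers use softfloat's float_status and its rounding-mode accessors):

    #include <stdint.h>

    /* Stand-in for softfloat's float_status, just enough for the pattern. */
    typedef struct {
        int rounding_mode;
    } FpStatusSketch;

    enum { ROUND_NEAREST_EVEN = 0 };

    /* Hypothetical fixed-point-to-float conversion honouring fpst's mode
     * (rounding is ignored in this toy version). */
    static float fixed_to_float_sketch(int32_t val, int fracbits, FpStatusSketch *fpst)
    {
        (void)fpst;
        return (float)val / (float)(1u << fracbits);   /* fracbits in 1..31 */
    }

    /* The wrapper: temporarily force round-to-nearest, convert, restore. */
    static float vcvt_fixed_to_float_rtn(int32_t val, int fracbits, FpStatusSketch *fpst)
    {
        int old = fpst->rounding_mode;
        float r;

        fpst->rounding_mode = ROUND_NEAREST_EVEN;
        r = fixed_to_float_sketch(val, fracbits, fpst);
        fpst->rounding_mode = old;
        return r;
    }
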
Peter Maydell 31013d5a8f target/arm: Fix SMLAD incorrect setting of Q bit
The SMLAD instruction is supposed to:
* signed multiply Rn[15:0] * Rm[15:0]
* signed multiply Rn[31:16] * Rm[31:16]
* perform a signed addition of the products and Ra
* set Rd to the low 32 bits of the theoretical
infinite-precision result
* set the Q flag if the sign-extension of Rd
would differ from the infinite-precision result
(ie on overflow)

Our current implementation doesn't quite do this, though: it performs
an addition of the products setting Q on overflow, and then it adds
Ra, again possibly setting Q. This sometimes incorrectly sets Q when
the architecturally mandated only-check-for-overflow-once algorithm
does not. For instance:
r1 = 0x80008000; r2 = 0x80008000; r3 = 0xffffffff
smlad r0, r1, r2, r3
This is (-32768 * -32768) + (-32768 * -32768) - 1

The products are both 0x4000_0000, so when added together as 32-bit
signed numbers they overflow (and QEMU sets Q), but because the
addition of Ra == -1 brings the total back down to 0x7fff_ffff
there is no overflow for the complete operation and setting Q is
incorrect.

Fix this edge case by resorting to 64-bit arithmetic for the
case where we need to add three values together.

Backports commit 5288145d716338ace0f83e3ff05c4d07715bb4f4
2021-03-01 19:58:39 -05:00
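
The corrected overflow rule is easy to show directly: form both 16x16 products, add everything in 64-bit arithmetic, and set Q only if the 32-bit truncation differs from the wide sum. A sketch (not the QEMU helper), using the example from the commit message:

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <inttypes.h>

    static uint32_t smlad_sketch(uint32_t rn, uint32_t rm, uint32_t ra, bool *q)
    {
        int64_t p1 = (int64_t)(int16_t)(rn & 0xffff) * (int16_t)(rm & 0xffff);
        int64_t p2 = (int64_t)(int16_t)(rn >> 16)    * (int16_t)(rm >> 16);
        int64_t sum = p1 + p2 + (int32_t)ra;

        if (sum != (int32_t)sum) {
            *q = true;             /* a single overflow check on the whole sum */
        }
        return (uint32_t)sum;
    }

    int main(void)
    {
        bool q = false;
        /* The commit's example: no overall overflow, so Q must stay clear. */
        uint32_t r0 = smlad_sketch(0x80008000u, 0x80008000u, 0xffffffffu, &q);
        printf("r0 = 0x%08" PRIx32 ", Q = %d\n", r0, q);
        return 0;
    }
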
Peter Maydell 6cd06169ee target/arm: Make '-cpu max' have a 48-bit PA
QEMU supports a 48-bit physical address range, but we don't currently
expose it in the '-cpu max' ID registers (you get the same range as
Cortex-A57, which is 44 bits).

Set the ID_AA64MMFR0.PARange field to indicate 48 bits.

Backports d1b6b7017572e8d82f26eb827a1dba0e8cf3cae6
2021-03-01 19:50:28 -05:00
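
PARange is the low nibble of ID_AA64MMFR0, and my reading of the architecture is that the encoding 0b0101 means a 48-bit physical address range (treat that value as an assumption; the commit message itself only names the field). The change is then essentially a field update of this shape:

    #include <stdint.h>

    /* Sketch: set ID_AA64MMFR0.PARange (bits [3:0]) to the 48-bit encoding. */
    static uint64_t set_parange_48bit(uint64_t id_aa64mmfr0)
    {
        return (id_aa64mmfr0 & ~(uint64_t)0xf) | 0x5;
    }
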
Richard Henderson 567fa21c65 target/arm: Fix SVE splice
While converting to gen_gvec_ool_zzzp, we lost passing
a->esz as the data argument to the function.

Backports commit dd701fafe55a78e655d4823d29226d92250a6b56
2021-03-01 19:20:44 -05:00