Commit graph

6467 commits

Richard Henderson c06fd38b57 target/arm: Rename isar_feature_aa32_simd_r32
The old name, isar_feature_aa32_fp_d32, does not reflect
the MVFR0 field name, SIMDReg.

Backports commit 0e13ba7889432c5e2f1bdb1b25e7076ca1b1dcba from qemu
2020-03-21 19:37:33 -04:00
Richard Henderson fcce8d4aa1 target/arm: Convert PMULL.8 to gvec
We still need two different helpers, since NEON and SVE2 get the
inputs from different locations within the source vector. However,
we can convert both to the same internal form for computation.

The sve2 helper is not used yet, but adding it with this patch
helps illustrate why the neon changes are helpful.
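
As a rough reference (not the actual helper), the shared internal
computation per byte pair is an 8x8 -> 16-bit carry-less (polynomial)
multiply:

    /* 8x8 -> 16-bit polynomial multiply of one byte pair */
    static uint16_t pmull_byte(uint8_t a, uint8_t b)
    {
        uint16_t result = 0;
        int i;

        for (i = 0; i < 8; i++) {
            if (b & (1u << i)) {
                result ^= (uint16_t)a << i;
            }
        }
        return result;
    }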

Backports commit e7e96fc5ec8c79dc77fef522d5226ac09f684ba5 from qemu
2020-03-21 19:35:46 -04:00
Richard Henderson c00f72f74f target/arm: Convert PMULL.64 to gvec
The gvec form will be needed for implementing SVE2.

Backports commit b9ed510e46f2f9e31e5e8adb4661d5d1cbe9a459 from qemu
2020-03-21 19:27:38 -04:00
Richard Henderson db8a935b44 target/arm: Convert PMUL.8 to gvec
The gvec form will be needed for implementing SVE2.

Extend the implementation to operate on uint64_t instead of uint32_t.
Use a counted inner loop instead of terminating when op1 goes to zero,
looking toward the required implementation for ARMv8.4-DIT.
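
A minimal sketch of that shape (illustrative, not the exact helper):
eight byte lanes packed into a uint64_t, with a counted eight-iteration
loop rather than an early exit when op1 reaches zero:

    /* low 8 bits of the per-byte carry-less product, eight lanes at once */
    static uint64_t pmul_bytes(uint64_t op1, uint64_t op2)
    {
        uint64_t result = 0;
        int i;

        for (i = 0; i < 8; i++) {
            /* replicate bit 0 of every byte of op1 across that byte */
            uint64_t mask = (op1 & 0x0101010101010101ull) * 0xff;

            result ^= op2 & mask;
            op2 = (op2 << 1) & 0xfefefefefefefefeull;  /* shift within lanes */
            op1 >>= 1;
        }
        return result;
    }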

Backports commit a21bb78e5817be3f494922e1dadd6455fe5d6318 from qemu
2020-03-21 19:22:18 -04:00
Richard Henderson d3139f2f0a target/arm: Vectorize USHL and SSHL
These instructions shift left or right depending on the sign
of the input, and 7 bits are significant to the shift. This
requires several masks and selects in addition to the actual
shifts to form the complete answer.

That said, the operation is still a small improvement even for
two 64-bit elements -- 13 vector operations instead of 2 * 7
integer operations.
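
For reference, the per-element semantics being vectorized look roughly
like this for an unsigned 8-bit lane (a sketch of the architectural
behaviour, not the gvec expansion):

    /* USHL on one byte lane: the low byte of the second operand is a
     * signed shift count; positive shifts left, negative shifts right,
     * and a magnitude of 8 or more shifts the value out entirely. */
    static uint8_t ushl_byte(uint8_t val, uint8_t shift_byte)
    {
        int8_t sh = (int8_t)shift_byte;

        if (sh >= 8 || sh <= -8) {
            return 0;
        }
        return sh >= 0 ? val << sh : val >> -sh;
    }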

Backports commit 87b74e8b6edd287ea2160caa0ebea725fa8f1ca1 from qemu
2020-03-21 19:14:17 -04:00
Peter Maydell 61cf5abc9e target/arm: Correctly implement ACTLR2, HACTLR2
The ACTLR2 and HACTLR2 AArch32 system registers didn't exist in ARMv7
or the original ARMv8. They were later added as optional registers,
whose presence is signaled by the ID_MMFR4.AC2 field. From ARMv8.2
they are mandatory (ie ID_MMFR4.AC2 must be non-zero).

We implemented HACTLR2 in commit 0e0456ab8895a5e85, but we
incorrectly made it exist for all v8 CPUs, and we didn't implement
ACTLR2 at all.

Sort this out by implementing both registers only when they are
supposed to exist, and setting the ID_MMFR4 bit for -cpu max.

Note that this removes HACTLR2 from our Cortex-A53, -A57 and -A72
CPU models; this is correct, because those CPUs do not implement
this register.
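
The presence check boils down to an isar_feature test on ID_MMFR4.AC2,
along these lines (a sketch; the exact function and field macro names
are assumed):

    /* ACTLR2/HACTLR2 exist only when ID_MMFR4.AC2 is non-zero */
    static inline bool isar_feature_aa32_ac2(const ARMISARegisters *id)
    {
        return FIELD_EX32(id->id_mmfr4, ID_MMFR4, AC2) != 0;
    }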

Fixes: 0e0456ab8895a5e85

Backports commit f6287c24c66d6b9187c1c2887e1c7cfa4d304b0c from qemu
2020-03-21 18:52:30 -04:00
Peter Maydell 1876feeede target/arm: Use FIELD_EX32 for testing 32-bit fields
Cut-and-paste errors mean we're using FIELD_EX64() to extract fields from
some 32-bit ID register fields. Use FIELD_EX32() instead. (This makes
no difference in behaviour, it's just more consistent.)
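
For example, for a 32-bit ID register field such as ID_DFR0.PerfMon
(illustrative only):

    FIELD_EX64(id->id_dfr0, ID_DFR0, PERFMON)  /* works, but 64-bit extract */
    FIELD_EX32(id->id_dfr0, ID_DFR0, PERFMON)  /* matching 32-bit extract */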

Backports commit b3a816f6ce1ec184ab6072f50bbe4479fc5116c3 from qemu
2020-03-21 18:50:14 -04:00
Peter Maydell 2ce106df33 target/arm: Use isar_feature function for testing AA32HPD feature
Now we have moved ID_MMFR4 into the ARMISARegisters struct, we
can define and use an isar_feature for the presence of the
ARMv8.2-AA32HPD feature, rather than open-coding the test.

While we're here, correct a comment typo which missed an 'A'
from the feature name.

Backports commit 4036b7d1cd9fb1097a5f4bc24d7d31744256260f from qemu
2020-03-21 18:48:57 -04:00
Peter Maydell 4693b2c011 target/arm: Test correct register in aa32_pan and aa32_ats1e1 checks
The isar_feature_aa32_pan and isar_feature_aa32_ats1e1 functions
are supposed to be testing fields in ID_MMFR3; but a cut-and-paste
error meant we were looking at MVFR0 instead.

Fix the functions to look at the right register; this requires
us to move at least id_mmfr3 to the ARMISARegisters struct; we
choose to move all the ID_MMFRn registers for consistency.
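
After the fix, the checks read the ID_MMFR3.PAN field, roughly
(a sketch; the field macro names are assumed):

    static inline bool isar_feature_aa32_pan(const ARMISARegisters *id)
    {
        return FIELD_EX32(id->id_mmfr3, ID_MMFR3, PAN) != 0;
    }

    static inline bool isar_feature_aa32_ats1e1(const ARMISARegisters *id)
    {
        /* PAN >= 2 additionally indicates the AT S1E1RP/S1E1WP insns */
        return FIELD_EX32(id->id_mmfr3, ID_MMFR3, PAN) >= 2;
    }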

Backports commit 10054016eda1b13bdd8340d100fd029cc8b58f36 from qemu
2020-03-21 18:47:12 -04:00
Peter Maydell e72fa1cb33 target/arm: Correct handling of PMCR_EL0.LC bit
The LC bit in the PMCR_EL0 register is supposed to be:
* read/write
* RES1 on an AArch64-only implementation
* an architecturally UNKNOWN value on reset
(and use of LC==0 by software is deprecated).

We were implementing it incorrectly as read-only always zero,
though we do have all the code needed to test it and behave
accordingly.

Instead make it a read-write bit which resets to 1 always, which
satisfies all the architectural requirements above.

Backports commit 62d96ff48510f4bf648ad12f5d3a5507227b026f from qemu
2020-03-21 18:40:26 -04:00
Peter Maydell de428e4b45 target/arm: Correct definition of PMCRDP
The PMCR_EL0.DP bit is bit 5, which is 0x20, not 0x10. 0x10 is 'X'.
Correct our #define of PMCRDP and add the missing PMCRX.
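
In other words, the corrected definitions are (values as stated above):

    #define PMCRX   0x10   /* bit 4: export enable */
    #define PMCRDP  0x20   /* bit 5: disable cycle counter when prohibited */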

We do have the correct behaviour for handling the DP bit being
set, so this fixes a guest-visible bug.

Fixes: 033614c47de

Backports commit a1ed04dd79aabb9dbeeb5fa7d49f1a3de0357553 from qemu
2020-03-21 18:39:37 -04:00
Peter Maydell 28b239adb9 target/arm: Provide ARMv8.4-PMU in '-cpu max'
Set the ID register bits to provide ARMv8.4-PMU (and implicitly
also ARMv8.1-PMU) in the 'max' CPU.

Backports commit 3bec78447a958d4819911252e056f29740ac25e4 from qemu
2020-03-21 18:38:53 -04:00
Peter Maydell 4dd57f7acc target/arm: Implement ARMv8.4-PMU extension
The ARMv8.4-PMU extension adds:
* one new required event, STALL
* one new system register PMMIR_EL1

(There are also some more L1-cache related events, but since
we don't implement any cache we don't provide these, in the
same way we don't provide the base-PMUv3 cache events.)

The STALL event "counts every attributable cycle on which no
attributable instruction or operation was sent for execution on this
PE". QEMU doesn't stall in this sense, so this is another
always-reads-zero event.

The PMMIR_EL1 register is a read-only register providing
implementation-specific information about the PMU; currently it has
only one field, SLOTS, which defines behaviour of the STALL_SLOT PMU
event. Since QEMU doesn't implement the STALL_SLOT event, we can
validly make the register read zero.

Backports commit 15dd1ebda4a6ef928d484c5a4f48b8ccb7438bb2 from qemu
2020-03-21 18:37:50 -04:00
Peter Maydell 5c93f43eb9 target/arm: Implement ARMv8.1-PMU extension
The ARMv8.1-PMU extension requires:
* the evtCount field in PMEVTYPER<n>_EL0 is 16 bits, not 10
* MDCR_EL2.HPMD allows event counting to be disabled at EL2
* two new required events, STALL_FRONTEND and STALL_BACKEND
* ID register bits in ID_AA64DFR0_EL1 and ID_DFR0

We already implement the 16-bit evtCount field and the
HPMD bit, so all that is missing is the two new events:
STALL_FRONTEND
"counts every cycle counted by the CPU_CYCLES event on which no
operation was issued because there are no operations available
to issue to this PE from the frontend"
STALL_BACKEND
"counts every cycle counted by the CPU_CYCLES event on which no
operation was issued because the backend is unable to accept
any available operations from the frontend"

QEMU never stalls in this sense, so our implementation is trivial:
always return a zero count.
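
A sketch of that trivial implementation, assuming the PMU event table
takes a per-event count callback (the names here are illustrative):

    /* count callback for events that can never fire under QEMU */
    static uint64_t zero_event_get_count(CPUARMState *env)
    {
        return 0;
    }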

Backports commit 0727f63b1ecf765ebc48266f616f8fc362dc7fbc from qemu
2020-03-21 18:34:33 -04:00
Peter Maydell 7dfc30b754 target/arm: Read debug-related ID registers from KVM
Backports commit 1548a7b2ad621a31b4216ed703b6d658a2ecf0d0 from qemu
2020-03-21 18:30:20 -04:00
Peter Maydell cef6f3e72c target/arm: Move DBGDIDR into ARMISARegisters
We're going to want to read the DBGDIDR register from KVM in
a subsequent commit, which means it needs to be in the
ARMISARegisters sub-struct. Move it.

Backports commit 4426d3617d64922d97b74ed22e67e33b6fb7de0a from qemu
2020-03-21 18:29:01 -04:00
Peter Maydell a6c9c87a5d target/arm: Stop assuming DBGDIDR always exists
The AArch32 DBGDIDR defines properties like the number of
breakpoints, watchpoints and context-matching comparators. On an
AArch64 CPU, the register may not even exist if AArch32 is not
supported at EL1.

Currently we hard-code use of DBGDIDR to identify the number of
breakpoints etc; this works for all our TCG CPUs, but will break if
we ever add an AArch64-only CPU. We also have an assert() that the
AArch32 and AArch64 registers match, which currently works only by
luck for KVM because we don't populate either of these ID registers
from the KVM vCPU and so they are both zero.

Clean this up so we have functions for finding the number
of breakpoints, watchpoints and context comparators which look
in the appropriate ID register.

This allows us to drop the "check that AArch64 and AArch32 agree
on the number of breakpoints etc" asserts:
* we no longer look at the AArch32 versions unless that's the
right place to be looking
* it's valid to have a CPU (eg AArch64-only) where they don't match
* we shouldn't have been asserting the validity of ID registers
in a codepath used with KVM anyway
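
One of those accessors might look like this (a sketch; the FIELD names
are assumed to follow the definitions added earlier in this series):

    /* Number of hardware breakpoints, from whichever ID register applies */
    static int arm_num_brps(ARMCPU *cpu)
    {
        if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
            return FIELD_EX64(cpu->isar.id_aa64dfr0, ID_AA64DFR0, BRPS) + 1;
        } else {
            return FIELD_EX32(cpu->isar.dbgdidr, DBGDIDR, BRPS) + 1;
        }
    }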

Backports commit 88ce6c6ee85d902f59dc65afc3ca86b34f02b9ed from qemu
2020-03-21 18:26:24 -04:00
Peter Maydell afc28d9b2c target/arm: Add _aa64_ and _any_ versions of pmu_8_1 isar checks
Add the 64-bit version of the "is this a v8.1 PMUv3?"
ID register check function, and the _any_ version that
checks for either AArch32 or AArch64 support. We'll use
this in a later commit.

We don't (yet) do any isar_feature checks on ID_AA64DFR1_EL1,
but we move id_aa64dfr1 into the ARMISARegisters struct with
id_aa64dfr0, for consistency.

Backports commit 2a609df87d9b886fd38a190a754dbc241ff707e8 from qemu
2020-03-21 18:24:00 -04:00
Peter Maydell e64143966a target/arm: Define an aa32_pmu_8_1 isar feature test function
Instead of open-coding a check on the ID_DFR0 PerfMon ID register
field, create a standardly-named isar_feature for "does AArch32 have
a v8.1 PMUv3" and use it.

This entails moving the id_dfr0 field into the ARMISARegisters struct.
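
A sketch of what such a check looks like (0xf is the IMPDEF-PMU
encoding, which does not count as PMUv3):

    static inline bool isar_feature_aa32_pmu_8_1(const ARMISARegisters *id)
    {
        /* PerfMon >= 4 means v8.1 PMUv3; 0xf means a non-standard IMPDEF PMU */
        return FIELD_EX32(id->id_dfr0, ID_DFR0, PERFMON) >= 4 &&
               FIELD_EX32(id->id_dfr0, ID_DFR0, PERFMON) != 0xf;
    }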

Backports commit a617953855b65a602d36364b9643f7e5bc31288e from qemu
2020-03-21 18:21:26 -04:00
Peter Maydell fd537585d7 target/arm: Use FIELD macros for clearing ID_DFR0 PERFMON field
We already define FIELD macros for ID_DFR0, so use them in the
one place where we're doing direct bit value manipulation.

Backports commit d52c061e541982a3663ad5c65bd3b518dbe85b87 from qemu
2020-03-21 18:17:55 -04:00
Peter Maydell fd6c635e03 target/arm: Add and use FIELD definitions for ID_AA64DFR0_EL1
Add FIELD() definitions for the ID_AA64DFR0_EL1 and use them
where we currently have hard-coded bit values.
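
The fields sit at the architecturally defined offsets, for example
(a partial sketch; macro spellings assumed):

    FIELD(ID_AA64DFR0, DEBUGVER, 0, 4)
    FIELD(ID_AA64DFR0, PMUVER, 8, 4)
    FIELD(ID_AA64DFR0, BRPS, 12, 4)
    FIELD(ID_AA64DFR0, WRPS, 20, 4)
    FIELD(ID_AA64DFR0, CTX_CMPS, 28, 4)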

Backports commit ceb2744b47a1ef4184dca56a158eb3156b6eba36 from qemu
2020-03-21 18:16:55 -04:00
Peter Maydell ebd7131c16 target/arm: Factor out PMU register definitions
Pull the code that defines the various PMU registers out
into its own function, matching the pattern we have
already for the debug registers.

Apart from one style fix to a multi-line comment, this
is purely movement of code with no changes to it.

Backports commit 24183fb6f00ecca8b508e245c95ff50ddde3f18b from qemu
2020-03-21 18:15:09 -04:00
Peter Maydell b1c088e2f2 target/arm: Define and use any_predinv isar_feature test
Instead of open-coding "ARM_FEATURE_AARCH64 ? aa64_predinv: aa32_predinv",
define and use an any_predinv isar_feature test function.

Backports commit 22e570730d15374453baa73ff2a699e01ef4e950 from qemu
2020-03-21 18:13:25 -04:00
Peter Maydell 62178626e4 target/arm: Add isar_feature_any_fp16 and document naming/usage conventions
Our current usage of the isar_feature feature tests almost always
uses an _aa32_ test when the code path is known to be AArch32
specific and an _aa64_ test when the code path is known to be
AArch64 specific. There is just one exception: in the vfp_set_fpscr
helper we check aa64_fp16 to determine whether the FZ16 bit in
the FP(S)CR exists, but this code is also used for AArch32.
There are other places in future where we're likely to want
a general "does this feature exist for either AArch32 or
AArch64" check (typically where architecturally the feature exists
for both CPU states if it exists at all, but the CPU might be
AArch32-only or AArch64-only, and so only have one set of ID
registers).

Introduce a new category of isar_feature_* functions:
isar_feature_any_foo() should be tested when what we want to
know is "does this feature exist for either AArch32 or AArch64",
and always returns the logical OR of isar_feature_aa32_foo()
and isar_feature_aa64_foo().
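
For the fp16 case that is simply:

    static inline bool isar_feature_any_fp16(const ARMISARegisters *id)
    {
        return isar_feature_aa64_fp16(id) || isar_feature_aa32_fp16_arith(id);
    }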

Backports commit 6e61f8391cc6cb0846d4bf078dbd935c2aeebff5 from qemu
2020-03-21 18:12:02 -04:00
Peter Maydell 778fcd9562 target/arm: Check aa32_pan in take_aarch32_exception(), not aa64_pan
In take_aarch32_exception(), we know we are dealing with a CPU that
has AArch32, so the right isar_feature test is aa32_pan, not aa64_pan.

Backports commit f8af1143ef93954e77cf59e09b5e004dafbd64fd from qemu
2020-03-21 18:09:27 -04:00
Peter Maydell e63f70f980 target/arm: Add _aa32_ to isar_feature functions testing 32-bit ID registers
Enforce a convention that an isar_feature function that tests a
32-bit ID register always has _aa32_ in its name, and one that
tests a 64-bit ID register always has _aa64_ in its name.
We already follow this except for three cases: thumb_div,
arm_div and jazelle, which all need _aa32_ adding.

(As noted in the comment, isar_feature_aa32_fp16_arith()
is an exception in that it currently tests ID_AA64PFR0_EL1,
but will switch to MVFR1 once we've properly implemented
FP16 for AArch32.)

Backports commit 873b73c0c891ec20adacc7bd1ae789294334d675 from qemu
2020-03-21 18:08:23 -04:00
Richard Henderson 0131e804fb target/arm: Split out aa64_va_parameter_tbi, aa64_va_parameter_tbid
For the purpose of rebuild_hflags_a64, we do not need to compute
all of the va parameters, only tbi. Moreover, we can compute them
in a form that is more useful to storing in hflags.

This eliminates the need for aa64_va_parameter_both, so fold that
in to aa64_va_parameter. The remaining calls to aa64_va_parameter
are in get_phys_addr_lpae and in pauth_helper.c.

This reduces the total cpu consumption of aa64_va_parameter in a
kernel boot plus a kvm guest kernel boot from 3% to 0.5%.
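
A sketch of the tbi-only computation (helper and enum names assumed):

    static int aa64_va_parameter_tbi(uint64_t tcr, ARMMMUIdx mmu_idx)
    {
        if (regime_has_2_ranges(mmu_idx)) {
            return extract64(tcr, 37, 2);           /* TBI0 and TBI1 */
        } else if (mmu_idx == ARMMMUIdx_Stage2) {
            return 0;                               /* no TBI at stage 2 */
        } else {
            /* replicate the single TBI bit so callers always see 2 bits */
            return extract64(tcr, 20, 1) * 3;
        }
    }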

Backports commit b830a5ee82e66f54697dcc6450fe9239b7412d13 from qemu
2020-03-21 18:04:39 -04:00
Richard Henderson 2cce7e0dd0 target/arm: Remove ttbr1_valid check from get_phys_addr_lpae
Now that aa64_va_parameters_both sets select based on the number
of ranges in the regime, the ttbr1_valid check is redundant.

Backports commit 03f27724dff15633911e68a3906c30f57938ea45 from qemu
2020-03-21 18:01:24 -04:00
Richard Henderson f3fa39829d target/arm: Fix select for aa64_va_parameters_both
Select should always be 0 for a regime with one range.

Backports commit 71d181640a1a9470f074fa28600ca85587e2ca6b from qemu
2020-03-21 18:00:15 -04:00
Richard Henderson 3183349f1c target/arm: Use bit 55 explicitly for pauth
The pseudocode in aarch64/functions/pac/auth/Auth and
aarch64/functions/pac/strip/Strip always uses bit 55 for
extfield and does not consider whether the current regime has 2 ranges.
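
So the extension field is taken straight from bit 55 of the pointer,
e.g. using the generic bit-field helpers:

    /* all-ones if bit 55 of the pointer is set, all-zeroes otherwise;
     * the PAC bits are then filled from this value */
    uint64_t extfield = sextract64(ptr, 55, 1);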

Backports commit 7eeb4c2ce8dc0a5655526f3f39bd5d6cc02efb39 from qemu
2020-03-21 17:59:06 -04:00
Richard Henderson 51b6064ba4 target/arm: Flush high bits of sve register after AdvSIMD INS
Writes to AdvSIMD registers flush the bits above 128.
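
The shape of the fix is presumably a call to the helper that zeroes
everything above the 128-bit Q register after the last element write
(local names here are assumed):

    write_vec_element(s, tmp, rd, idx, size);   /* 128-bit result written */
    clear_vec_high(s, true, rd);                /* zero bits above bit 127 */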

Backports commit 528dc354b6f3aa82d65141cc60bc0e725e6cae98 from qemu
2020-03-21 17:58:09 -04:00
Richard Henderson 74cbfceb56 target/arm: Flush high bits of sve register after AdvSIMD ZIP/UZP/TRN
Writes to AdvSIMD registers flush the bits above 128.

Backports commit 33649de62e40df0060a1c514574e4ef25c4e52e1 from qemu
2020-03-21 17:56:40 -04:00
Richard Henderson 6eb8472344 target/arm: Flush high bits of sve register after AdvSIMD TBL/TBX
Writes to AdvSIMD registers flush the bits above 128.

Backports commit 263273bc988e677ebadeaf7d0e49f6792a112db5 from qemu
2020-03-21 17:56:08 -04:00
Richard Henderson 18e9c4805f target/arm: Flush high bits of sve register after AdvSIMD EXT
Writes to AdvSIMD registers flush the bits above 128.

Backports commit 78cedfabd53b6f64e7e64fc84878d848e5df1d08 from qemu
2020-03-21 17:55:12 -04:00
Peter Maydell 96a96565db target/arm: Implement ARMv8.1-VMID16 extension
The ARMv8.1-VMID16 extension extends the VMID from 8 bits to 16 bits:

* the ID_AA64MMFR1_EL1.VMIDBits field specifies whether the VMID is
8 or 16 bits
* the VMID field in VTTBR_EL2 is extended to 16 bits
* VTCR_EL2.VS lets the guest specify whether to use the full 16 bits,
or use the backwards-compatible 8 bits

For QEMU implementing this is trivial:
* we do not track VMIDs in TLB entries, so we never use the VMID field
* we treat any write to VTTBR_EL2, not just a change to the VMID field
bits, as a "possible VMID change" that causes us to throw away TLB
entries, so that code doesn't need changing
* we allow the guest to read/write the VTCR_EL2.VS bit already

So all that's missing is the ID register part: report that we support
VMID16 in our 'max' CPU.
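
A sketch of that ID-register-only change for the 'max' CPU (field macro
name assumed):

    t = cpu->isar.id_aa64mmfr1;
    t = FIELD_DP64(t, ID_AA64MMFR1, VMIDBITS, 2);  /* 2 encodes a 16-bit VMID */
    cpu->isar.id_aa64mmfr1 = t;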

Backports commit dc7a88d0810ad272bdcd2e0869359af78fdd9114 from qemu
2020-03-21 17:52:43 -04:00
Richard Henderson 57f0aa3044 target/arm: Enable ARMv8.2-UAO in -cpu max
Backports commit e11f0eb6724571adb812a3ce5269c41586e0262b from qemu
2020-03-21 17:51:44 -04:00
Richard Henderson 18a86780ee target/arm: Implement UAO semantics
We need only override the current condition under which
TBFLAG_A64.UNPRIV is set.

Backports commit 7a8014ab871d5320effd737dfe88b2e80f16a509 from qemu
2020-03-21 17:50:29 -04:00
Richard Henderson 5b5050c6ca target/arm: Update MSR access to UAO
Backports commit 9eeb7a1c9531cb3574bfe2c36eb7624802c3ec00 from qemu
2020-03-21 17:48:01 -04:00
Richard Henderson 0630e66b5a target/arm: Add ID_AA64MMFR2_EL1
Add definitions for all of the fields, up to ARMv8.5.
Convert the existing RESERVED register to a full register.
Query KVM for the value of the register for the host.

Backports commit 64761e10af2742a916c08271828890274137b9e8 from qemu
2020-03-21 17:45:27 -04:00
Richard Henderson 7287bf16b8 target/arm: Enable ARMv8.2-ATS1E1 in -cpu max
This includes enablement of ARMv8.1-PAN.

Backports commit e0fe7309a7c21ef2386de50d37c86aea0d671c08 from qemu
2020-03-21 17:43:54 -04:00
Richard Henderson d196288b4f target/arm: Implement ATS1E1 system registers
This is a minor enhancement over ARMv8.1-PAN.
The *_PAN mmu_idx are used with the existing do_ats_write.

Backports commit 04b07d29722192926f467ea5fedf2c3b0996a2a5 from qemu
2020-03-21 17:42:01 -04:00
Richard Henderson 6576864930 target/arm: Set PAN bit as required on exception entry
The PAN bit is preserved, or set as per SCTLR_ELx.SPAN,
plus several other conditions listed in the ARM ARM.

Backports commit 4a2696c0d4d80e14a192b28148c6167bc5056f94 from qemu
2020-03-21 17:40:11 -04:00
Richard Henderson aad0621f96 target/arm: Enforce PAN semantics in get_S1prot
If we have a PAN-enforcing mmu_idx, set prot == 0 if user_rw != 0.
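
Roughly (helper names assumed), the privileged read/write permission
becomes:

    if (is_user) {
        prot_rw = user_rw;
    } else if (user_rw && regime_is_pan(env, mmu_idx)) {
        /* PAN forbids privileged data access to any EL0-accessible page */
        prot_rw = 0;
    } else {
        prot_rw = simple_ap_to_rw_prot_is_user(ap, false);
    }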

Backports commit 81636b70c226dc27d7ebc8dedbcec26166d23085 from qemu
2020-03-21 17:35:55 -04:00
Richard Henderson 41d03da852 target/arm: Update arm_mmu_idx_el for PAN
Examine the PAN bit for EL1, EL2, and Secure EL1 to
determine if it applies.

Backports commit 66412260cc1bee60a22d96e4ad8569b85745fea4 from qemu
2020-03-21 17:34:12 -04:00
Richard Henderson 35fab80c57 target/arm: Update MSR access for PAN
For aarch64, there's a dedicated msr (imm, reg) insn.
For aarch32, this is done via msr to cpsr. Writes from el0
are ignored, which is already handled by the CPSR_USER mask.

Backports commit 220f508f49c5f49fb771d5105f991c19ffede3f7 from qemu
2020-03-21 17:33:16 -04:00
Richard Henderson 50bb867a6f target/arm: Introduce aarch64_pstate_valid_mask
Use this along the exception return path, where we previously
accepted any value.

Backports commit 140845111809cd6fd57ccde93867b48cc56ffc74 from qemu
2020-03-21 17:26:00 -04:00
Richard Henderson b6b69d7ac5 target/arm: Remove CPSR_RESERVED
The only remaining use was in op_helper.c. Use PSTATE_SS
directly, and move the commentary so that it is more obvious
what is going on.

Backports commit 70dae0d069c45250bbefd9424089383a8ac239de from qemu
2020-03-21 17:24:21 -04:00
Richard Henderson 2d3239d0a1 target/arm: Use aarch32_cpsr_valid_mask in helper_exception_return
Using ~0 as the mask on the aarch64->aarch32 exception return
was not even as correct as the CPSR_ERET_MASK that we had used
on the aarch32->aarch32 exception return.

Backports commit d203cabd1bd12f31c9df0b5737421ba67b96857b from qemu
2020-03-21 17:20:53 -04:00
Richard Henderson c450694f1a target/arm: Replace CPSR_ERET_MASK with aarch32_cpsr_valid_mask
CPSR_ERET_MASK was a useless renaming of CPSR_RESERVED.
The replacement, aarch32_cpsr_valid_mask, also takes into
account bits that the CPU does not support.

Backports commit 437864216d63f052f3cd06ec8861d0e432496424 from qemu
2020-03-21 17:19:17 -04:00
Richard Henderson e4a7a089f0 target/arm: Mask CPSR_J when Jazelle is not enabled
The J bit signals Jazelle mode, and so of course is RES0
when the feature is not enabled.
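
In mask-building terms that is just (a sketch, assuming an
aarch32_cpsr_valid_mask-style builder that accumulates a 'valid' bit
mask):

    if (isar_feature_aa32_jazelle(id)) {
        valid |= CPSR_J;
    }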

Backports commit f062d1447f2a80e7a5f593b8cb5ac7cab5e16eb0 from qemu
2020-03-21 17:17:50 -04:00