Because the elements are sequential, we can eliminate many tests all
at once when the tag hits TCMA, or if the page(s) are not Tagged.
Backports commit 206adacfb8d35e671e3619591608c475aa046b63 from qemu
Cache the composite ATA setting.
Cache when MTE is fully enabled, i.e. accesses to tags are enabled
and tag checks affect the PE. Do this for both the normal context
and the UNPRIV context.
Backports commit 81ae05fa2d21ac1a0054935b74342aa38a5ecef7 from qemu
This is TFSRE0_EL1, TFSR_EL1, TFSR_EL2, TFSR_EL3,
RGSR_EL1, GCR_EL1, GMID_EL1, and PSTATE.TCO.
Backports commit 4b779cebb3e5ab30b945181f1ba3932f5f8a1cb5 from qemu
Using the MSR instruction to write to CPSR.E is deprecated, but it is
required to work from any mode including unprivileged code. We were
incorrectly forbidding usermode code from writing it because
CPSR_USER did not include the CPSR_E bit.
We use CPSR_USER in only three places:
* as the mask of what to allow userspace MSR to write to CPSR
* when deciding what bits a linux-user signal-return should be
able to write from the sigcontext structure
* in target_user_copy_regs() when we set up the initial
registers for the linux-user process
In the first two cases not being able to update CPSR.E is a bug, and
in the third case it doesn't matter because CPSR.E is always 0 there.
So we can fix both bugs by adding CPSR_E to CPSR_USER.
Because the cpsr_write() in restore_sigcontext() is now changing
a CPSR bit which is cached in hflags, we need to add an
arm_rebuild_hflags() call there; the callsite in
target_user_copy_regs() was already rebuilding hflags for other
reasons.
(The recommended way to change CPSR.E is to use the 'SETEND'
instruction, which we do correctly allow from usermode code.)
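A minimal sketch of the CPSR_USER change, assuming QEMU's usual CPSR bit
definitions in target/arm/cpu.h:

    /* CPSR bit masks (positions per the Arm ARM) */
    #define CPSR_NZCV (0xfU << 28)
    #define CPSR_Q    (1U << 27)
    #define CPSR_GE   (0xfU << 16)
    #define CPSR_E    (1U << 9)

    /* Adding CPSR_E means userspace MSR writes and the linux-user
     * sigreturn path are allowed to update CPSR.E. */
    #define CPSR_USER (CPSR_NZCV | CPSR_Q | CPSR_GE | CPSR_E)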
Backports commit 268b1b3dfbb92a9348406f728a33f39e3d8dcd8a from qemu
Move the common set_feature() and unset_feature() functions
from cpu.c and cpu64.c to cpu.h.
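For reference, the helpers are small enough to live in cpu.h; a sketch along
these lines:

    static inline void set_feature(CPUARMState *env, int feature)
    {
        env->features |= 1ULL << feature;
    }

    static inline void unset_feature(CPUARMState *env, int feature)
    {
        env->features &= ~(1ULL << feature);
    }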
Backports commit 5fda95041d7237ab35733ceb66e0cb89f6107169 from qemu
MIDR_EL1 is a 64-bit system register with the top 32 bits being RES0.
Represent it in QEMU's ARMCPU struct with a uint64_t, not a
uint32_t.
This fixes an error when compiling with -Werror=conversion
because we were manipulating the register value using a
local uint64_t variable:
target/arm/cpu64.c: In function ‘aarch64_max_initfn’:
target/arm/cpu64.c:628:21: error: conversion from ‘uint64_t’ {aka ‘long unsigned int’} to ‘uint32_t’ {aka ‘unsigned int’} may change value [-Werror=conversion]
628 | cpu->midr = t;
| ^
and future-proofs us against a possible future architecture
change using some of the top 32 bits.
Backports commit e544f80030121040c8932ff1bd4006f390266c0f from qemu
The ARMv8.2-TTS2UXN feature extends the XN field in stage 2
translation table descriptors from just bit [54] to bits [54:53],
allowing stage 2 to control execution permissions separately for EL0
and EL1. Implement the new semantics of the XN field and enable
the feature for our 'max' CPU.
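A sketch of the new XN[1:0] semantics (function and argument names here are
illustrative, not QEMU's exact code; s1_is_el0 says whether the stage 1
access being translated is an EL0 access):

    static int s2_exec_prot(unsigned xn, bool s1_is_el0)
    {
        switch (xn & 3) {
        case 0:
            return PAGE_EXEC;                  /* executable at EL1 and EL0 */
        case 1:
            return s1_is_el0 ? PAGE_EXEC : 0;  /* executable at EL0 only */
        case 2:
            return 0;                          /* not executable at all */
        default: /* 3 */
            return s1_is_el0 ? 0 : PAGE_EXEC;  /* executable at EL1 only */
        }
    }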
Backports commit ce3125bed935a12e619a8253c19340ecaa899347 from qemu
We define ARMMMUIdx_Stage2 as being an MMU index which uses a QEMU
TLB. However we never actually use the TLB -- all stage 2 lookups
are done by direct calls to get_phys_addr_lpae() followed by a
physical address load via address_space_ld*().
Remove Stage2 from the list of ARM MMU indexes which correspond to
real core MMU indexes, and instead put it in the set of "NOTLB" ARM
MMU indexes.
This allows us to drop NB_MMU_MODES to 11. It also means we can
safely add support for the ARMv8.2-TTS2UXN extension, which adds
permission bits to the stage 2 descriptors which define execute
permission separately for EL0 and EL1; supporting that while keeping
Stage2 in a QEMU TLB would require us to use separate TLBs for
"Stage2 for an EL0 access" and "Stage2 for an EL1 access", which is a
lot of extra complication given we aren't even using the QEMU TLB.
In the process of updating the comment on our MMU index use,
fix a couple of other minor errors:
* NS EL2 EL2&0 was missing from the list in the comment
* some text hadn't been updated from when we bumped NB_MMU_MODES
above 8
Backports commit bf05340cb655637451162c02dadcd6581a05c02c from qemu
The ARMv8.3-CCIDX extension makes the CCSIDR_EL1 system ID registers
have a format that uses the full 64 bit width of the register, and
adds a new CCSIDR2 register so AArch32 can get at the high 32 bits.
QEMU doesn't implement caches, so we just treat these ID registers as
opaque values that are set to the correct constant values for each
CPU. The only thing we need to do is allow 64-bit values in our
ccsidr[] array and provide the CCSIDR2 accessors.
We don't set the CCIDX field in our 'max' CPU because the CCSIDR
constant values we use are the same as the ones used by the
Cortex-A57 and they are in the old 32-bit format. This means
that the extra regdef added here is currently unused, but whenever
we add a CPU in the future that does need the new 64-bit format
it will just work when we set the ccsidr values and the ID
registers for it.
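As an illustrative sketch (helper name hypothetical), the CCSIDR2 accessor
only needs to expose the top half of the selected 64-bit ccsidr[] entry:

    /* Hypothetical helper: AArch32 CCSIDR2 is bits [63:32] of the 64-bit
     * CCSIDR value selected by CSSELR. */
    static uint32_t ccsidr2_of(const ARMCPU *cpu, unsigned csselr_index)
    {
        return (uint32_t)(cpu->ccsidr[csselr_index] >> 32);
    }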
Backports commit 957e615503bd0de22393fd8dbcb22a5064fd2b5c from qemu
The v8.4-RCPC extension implements some new instructions:
* LDAPUR, LDAPURB, LDAPURH, LDAPURSB, LDAPURSH, LDAPURSW
* STLUR, STLURB, STLURH
These are all in a new subgroup of encodings that sits below the
top-level "Loads and Stores" group in the Arm ARM.
The STLUR* instructions have standard store-release semantics; the
LDAPUR* have Load-AcquirePC semantics, but (as with LDAPR*) we choose
to implement them as the slightly stronger Load-Acquire.
Backports commit a1229109dec4375259d3fff99f362405aab7917a from qemu
The v8.3-RCPC extension implements three new load instructions
which provide slightly weaker consistency guarantees than the
existing load-acquire operations. For QEMU we choose to simply
implement them with a full LDAQ barrier.
Backports commit 2677cf9f92a5319bb995927f9225940414ce879d from qemu
We missed an instance of using FIELD_EX32 on a 64-bit ID
register, in isar_feature_aa64_pmu_8_4(). Fix it.
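The corrected test looks roughly like this (a PMUVER value of 0xf means an
IMPDEF PMU):

    static inline bool isar_feature_aa64_pmu_8_4(const ARMISARegisters *id)
    {
        return FIELD_EX64(id->id_aa64dfr0, ID_AA64DFR0, PMUVER) >= 5 &&
               FIELD_EX64(id->id_aa64dfr0, ID_AA64DFR0, PMUVER) != 0xf;
    }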
Backports commit 54117b90ffd8a3977917971c3bd99bb5242710d9 from qemu
All remaining tests for VFP4 are for fused multiply-add insns.
Since the MVFR1 field is used for both VFP and NEON, move its adjustment
from the !has_neon block to the (!has_vfp && !has_neon) block.
Test for vfp of the appropriate width alongside the test for simdfmac
within translate-vfp.inc.c. Within disas_neon_data_insn, we have
already tested for ARM_FEATURE_NEON.
Backports commit c52881bbc22b50db99a6c37171ad3eea7d959ae6 from qemu
We cannot easily create "any" functions for these, because the
ID_AA64PFR0 fields for FP and SIMD signal "enabled" with zero,
which means that an aarch32-only cpu will return incorrect results
when testing the aarch64 registers.
To use these, we must either have context or additionally test
vs ARM_FEATURE_AARCH64.
Backports commit 7d63183ff1a61b3f7934dc9b40b10e4fd5e100cd from qemu
The old name, isar_feature_aa32_fpdp, does not reflect
that the test includes VFPv2. We will introduce another
feature test for VFPv3.
Backports commit c4ff873583834c8275586914fff714e3ae65dee4 from qemu
Use this in the places that were checking ARM_FEATURE_VFP, and
are obviously testing for the existence of the register set
as opposed to testing for some particular instruction extension.
Backports commit 7fbc6a403a0aab834e764fa61d81ed8586cfe352 from qemu
The old name, isar_feature_aa32_fp_d32, does not reflect
the MVFR0 field name, SIMDReg.
Backports commit 0e13ba7889432c5e2f1bdb1b25e7076ca1b1dcba from qemu
The ACTLR2 and HACTLR2 AArch32 system registers didn't exist in ARMv7
or the original ARMv8. They were later added as optional registers,
whose presence is signaled by the ID_MMFR4.AC2 field. From ARMv8.2
they are mandatory (i.e. ID_MMFR4.AC2 must be non-zero).
We implemented HACTLR2 in commit 0e0456ab8895a5e85, but we
incorrectly made it exist for all v8 CPUs, and we didn't implement
ACTLR2 at all.
Sort this out by implementing both registers only when they are
supposed to exist, and setting the ID_MMFR4 bit for -cpu max.
Note that this removes HACTLR2 from our Cortex-A53, -A57 and -A72
CPU models; this is correct, because those CPUs do not implement
this register.
Fixes: 0e0456ab8895a5e85
Backports commit f6287c24c66d6b9187c1c2887e1c7cfa4d304b0c from qemu
Cut-and-paste errors mean we're using FIELD_EX64() to extract fields from
some 32-bit ID registers. Use FIELD_EX32() instead. (This makes
no difference in behaviour, it's just more consistent.)
Backports commit b3a816f6ce1ec184ab6072f50bbe4479fc5116c3 from qemu
Now we have moved ID_MMFR4 into the ARMISARegisters struct, we
can define and use an isar_feature for the presence of the
ARMv8.2-AA32HPD feature, rather than open-coding the test.
While we're here, correct a comment typo which missed an 'A'
from the feature name.
Backports commit 4036b7d1cd9fb1097a5f4bc24d7d31744256260f from qemu
The isar_feature_aa32_pan and isar_feature_aa32_ats1e1 functions
are supposed to be testing fields in ID_MMFR3; but a cut-and-paste
error meant we were looking at MVFR0 instead.
Fix the functions to look at the right register; this requires
us to move at least id_mmfr3 to the ARMISARegisters struct; we
choose to move all the ID_MMFRn registers for consistency.
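After the fix the tests look roughly like this:

    static inline bool isar_feature_aa32_pan(const ARMISARegisters *id)
    {
        return FIELD_EX32(id->id_mmfr3, ID_MMFR3, PAN) != 0;
    }

    static inline bool isar_feature_aa32_ats1e1(const ARMISARegisters *id)
    {
        return FIELD_EX32(id->id_mmfr3, ID_MMFR3, PAN) >= 2;
    }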
Backports commit 10054016eda1b13bdd8340d100fd029cc8b58f36 from qemu
The ARMv8.4-PMU extension adds:
* one new required event, STALL
* one new system register PMMIR_EL1
(There are also some more L1-cache related events, but since
we don't implement any cache we don't provide these, in the
same way we don't provide the base-PMUv3 cache events.)
The STALL event "counts every attributable cycle on which no
attributable instruction or operation was sent for execution on this
PE". QEMU doesn't stall in this sense, so this is another
always-reads-zero event.
The PMMIR_EL1 register is a read-only register providing
implementation-specific information about the PMU; currently it has
only one field, SLOTS, which defines behaviour of the STALL_SLOT PMU
event. Since QEMU doesn't implement the STALL_SLOT event, we can
validly make the register read zero.
Backports commit 15dd1ebda4a6ef928d484c5a4f48b8ccb7438bb2 from qemu
The ARMv8.1-PMU extension requires:
* the evtCount field in PMEVTYPER<n>_EL0 is 16 bits, not 10
* MDCR_EL2.HPMD allows event counting to be disabled at EL2
* two new required events, STALL_FRONTEND and STALL_BACKEND
* ID register bits in ID_AA64DFR0_EL1 and ID_DFR0
We already implement the 16-bit evtCount field and the
HPMD bit, so all that is missing is the two new events:
STALL_FRONTEND
"counts every cycle counted by the CPU_CYCLES event on which no
operation was issued because there are no operations available
to issue to this PE from the frontend"
STALL_BACKEND
"counts every cycle counted by the CPU_CYCLES event on which no
operation was issued because the backend is unable to accept
any available operations from the frontend"
QEMU never stalls in this sense, so our implementation is trivial:
always return a zero count.
Backports commit 0727f63b1ecf765ebc48266f616f8fc362dc7fbc from qemu
We're going to want to read the DBGDIDR register from KVM in
a subsequent commit, which means it needs to be in the
ARMISARegisters sub-struct. Move it.
Backports commit 4426d3617d64922d97b74ed22e67e33b6fb7de0a from qemu
The AArch32 DBGDIDR defines properties like the number of
breakpoints, watchpoints and context-matching comparators. On an
AArch64 CPU, the register may not even exist if AArch32 is not
supported at EL1.
Currently we hard-code use of DBGDIDR to identify the number of
breakpoints etc; this works for all our TCG CPUs, but will break if
we ever add an AArch64-only CPU. We also have an assert() that the
AArch32 and AArch64 registers match, which currently works only by
luck for KVM because we don't populate either of these ID registers
from the KVM vCPU and so they are both zero.
Clean this up so we have functions for finding the number
of breakpoints, watchpoints and context comparators which look
in the appropriate ID register.
This allows us to drop the "check that AArch64 and AArch32 agree
on the number of breakpoints etc" asserts:
* we no longer look at the AArch32 versions unless that's the
right place to be looking
* it's valid to have a CPU (eg AArch64-only) where they don't match
* we shouldn't have been asserting the validity of ID registers
in a codepath used with KVM anyway
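For example, the breakpoint count helper picks the ID register that matches
the CPU's state, roughly:

    static inline int arm_num_brps(ARMCPU *cpu)
    {
        if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
            return FIELD_EX64(cpu->isar.id_aa64dfr0, ID_AA64DFR0, BRPS) + 1;
        } else {
            return FIELD_EX32(cpu->isar.dbgdidr, DBGDIDR, BRPS) + 1;
        }
    }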
Backports commit 88ce6c6ee85d902f59dc65afc3ca86b34f02b9ed from qemu
Add the 64-bit version of the "is this a v8.1 PMUv3?"
ID register check function, and the _any_ version that
checks for either AArch32 or AArch64 support. We'll use
this in a later commit.
We don't (yet) do any isar_feature checks on ID_AA64DFR1_EL1,
but we move id_aa64dfr1 into the ARMISARegisters struct with
id_aa64dfr0, for consistency.
Backports commit 2a609df87d9b886fd38a190a754dbc241ff707e8 from qemu
Instead of open-coding a check on the ID_DFR0 PerfMon ID register
field, create a standardly-named isar_feature for "does AArch32 have
a v8.1 PMUv3" and use it.
This entails moving the id_dfr0 field into the ARMISARegisters struct.
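The new test reads the PerfMon field, roughly:

    static inline bool isar_feature_aa32_pmu_8_1(const ARMISARegisters *id)
    {
        /* 0xf means "non-standard IMPDEF PMU" */
        return FIELD_EX32(id->id_dfr0, ID_DFR0, PERFMON) >= 4 &&
               FIELD_EX32(id->id_dfr0, ID_DFR0, PERFMON) != 0xf;
    }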
Backports commit a617953855b65a602d36364b9643f7e5bc31288e from qemu
Add FIELD() definitions for the ID_AA64DFR0_EL1 and use them
where we currently have hard-coded bit values.
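The definitions follow the architectural field layout; a sketch (field names
approximate QEMU's spelling):

    FIELD(ID_AA64DFR0, DEBUGVER, 0, 4)
    FIELD(ID_AA64DFR0, TRACEVER, 4, 4)
    FIELD(ID_AA64DFR0, PMUVER, 8, 4)
    FIELD(ID_AA64DFR0, BRPS, 12, 4)
    FIELD(ID_AA64DFR0, WRPS, 20, 4)
    FIELD(ID_AA64DFR0, CTX_CMPS, 28, 4)
    FIELD(ID_AA64DFR0, PMSVER, 32, 4)
    FIELD(ID_AA64DFR0, DOUBLELOCK, 36, 4)
    FIELD(ID_AA64DFR0, TRACEFILT, 40, 4)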
Backports commit ceb2744b47a1ef4184dca56a158eb3156b6eba36 from qemu
Instead of open-coding "ARM_FEATURE_AARCH64 ? aa64_predinv: aa32_predinv",
define and use an any_predinv isar_feature test function.
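The helper is just the OR of the two per-state tests, roughly:

    static inline bool isar_feature_any_predinv(const ARMISARegisters *id)
    {
        return isar_feature_aa64_predinv(id) || isar_feature_aa32_predinv(id);
    }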
Backports commit 22e570730d15374453baa73ff2a699e01ef4e950 from qemu
Our current usage of the isar_feature feature tests almost always
uses an _aa32_ test when the code path is known to be AArch32
specific and an _aa64_ test when the code path is known to be
AArch64 specific. There is just one exception: in the vfp_set_fpscr
helper we check aa64_fp16 to determine whether the FZ16 bit in
the FP(S)CR exists, but this code is also used for AArch32.
There are other places in future where we're likely to want
a general "does this feature exist for either AArch32 or
AArch64" check (typically where architecturally the feature exists
for both CPU states if it exists at all, but the CPU might be
AArch32-only or AArch64-only, and so only have one set of ID
registers).
Introduce a new category of isar_feature_* functions:
isar_feature_any_foo() should be tested when what we want to
know is "does this feature exist for either AArch32 or AArch64",
and always returns the logical OR of isar_feature_aa32_foo()
and isar_feature_aa64_foo().
Backports commit 6e61f8391cc6cb0846d4bf078dbd935c2aeebff5 from qemu
Enforce a convention that an isar_feature function that tests a
32-bit ID register always has _aa32_ in its name, and one that
tests a 64-bit ID register always has _aa64_ in its name.
We already follow this except for three cases: thumb_div,
arm_div and jazelle, which all need _aa32_ adding.
(As noted in the comment, isar_feature_aa32_fp16_arith()
is an exception in that it currently tests ID_AA64PFR0_EL1,
but will switch to MVFR1 once we've properly implemented
FP16 for AArch32.)
Backports commit 873b73c0c891ec20adacc7bd1ae789294334d675 from qemu
Add definitions for all of the fields, up to ARMv8.5.
Convert the existing RESERVED register to a full register.
Query KVM for the value of the register for the host.
Backports commit 64761e10af2742a916c08271828890274137b9e8 from qemu
For aarch64, there's a dedicated msr (imm, reg) insn.
For aarch32, this is done via msr to cpsr. Writes from el0
are ignored, which is already handled by the CPSR_USER mask.
Backports commit 220f508f49c5f49fb771d5105f991c19ffede3f7 from qemu
The only remaining use was in op_helper.c. Use PSTATE_SS
directly, and move the commentary so that it is more obvious
what is going on.
Backports commit 70dae0d069c45250bbefd9424089383a8ac239de from qemu
CPSR_ERET_MASK was a useless renaming of CPSR_RESERVED.
The function also takes into account bits that the cpu
does not support.
Backports commit 437864216d63f052f3cd06ec8861d0e432496424 from qemu
Include definitions for all of the bits in ID_MMFR3.
We already have a definition for ID_AA64MMFR1.PAN.
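A sketch of the field layout (names approximate QEMU's spelling; positions
per the Arm ARM):

    FIELD(ID_MMFR3, CMAINTVA, 0, 4)
    FIELD(ID_MMFR3, CMAINTSW, 4, 4)
    FIELD(ID_MMFR3, BPMAINT, 8, 4)
    FIELD(ID_MMFR3, MAINTBCST, 12, 4)
    FIELD(ID_MMFR3, PAN, 16, 4)
    FIELD(ID_MMFR3, COHWALK, 20, 4)
    FIELD(ID_MMFR3, CMEMSZ, 24, 4)
    FIELD(ID_MMFR3, SUPERSEC, 28, 4)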
Backports commit 3d6ad6bb466f487bcc861f99e2c9054230df1076 from qemu
To implement PAN, we will want to swap, for short periods
of time, to a different privileged mmu_idx. In addition,
we cannot do this with flushing alone, because the AT*
instructions have both PAN and PAN-less versions.
Add the ARMMMUIdx*_PAN constants where necessary next to
the corresponding ARMMMUIdx* constant.
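A sketch of the naming pattern (the numeric values are placeholders, not
QEMU's actual index encoding):

    typedef enum ARMMMUIdxSketch {
        ARMMMUIdx_E10_1      = 0,   /* EL1&0, PSTATE.PAN clear */
        ARMMMUIdx_E10_1_PAN  = 1,   /* EL1&0 with PAN in effect */
        ARMMMUIdx_E20_2      = 2,
        ARMMMUIdx_E20_2_PAN  = 3,
        ARMMMUIdx_SE10_1     = 4,
        ARMMMUIdx_SE10_1_PAN = 5,
    } ARMMMUIdxSketch;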
Backports commit 452ef8cb8c7b06f44a30a3c3a54d3be82c4aef59 from qemu
This inline function has one user in cpu.c, and need not be exposed
otherwise. Code movement only, with fixups for checkpatch.
Backports commit 310cedf39dea240a89f90729fd99481ff6158e90 from qemu
The EL2&0 translation regime is affected by Load Register (unpriv).
The code structure used here will facilitate later changes in this
area for implementing UAO and NV.
Backports commit cc28fc30e333dc2f20ebfde54444697e26cd8f6d from qemu
Several of the EL1/0 registers are redirected to the EL2 version when in
EL2 and HCR_EL2.E2H is set. Many of these registers have side effects.
Link together the two ARMCPRegInfo structures after they have been
properly instantiated. Install common dispatch routines to all of the
relevant registers.
The same set of registers that are redirected also have additional
EL12/EL02 aliases created to access the original register that was
redirected.
Omit the generic timer registers from redirection here, because we'll
need multiple kinds of redirection from both EL0 and EL2.
Backports commit e2cce18f5c1d0d55328c585c8372cdb096bbf528 from qemu