We are going to want to determine whether SVE is enabled
for an EL other than the current one.
Backports commit 2de7ace292cf7846b0cda0e940272d2cb0e06859 from qemu
Check for EL3 before testing CPTR_EL3.EZ. Return 0 when the exception
should be routed via AdvSIMDFPAccessTrap. Mirror the structure of
CheckSVEEnabled more closely.
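A minimal sketch of the intended shape, assuming QEMU's existing
arm_feature() and cp15.cptr_el[] names (the full function also handles
the EL1/EL2 CPACR/CPTR checks, omitted here):

    /* Return the EL that traps SVE accesses, or 0 if no SVE-specific
     * trap is taken (0 also covers the case routed via
     * AdvSIMDFPAccessTrap).
     */
    static int sve_exception_el_sketch(CPUARMState *env)
    {
        /* Only consult CPTR_EL3.EZ if EL3 is actually implemented. */
        if (arm_feature(env, ARM_FEATURE_EL3)
            && !(env->cp15.cptr_el[3] & (1 << 8) /* CPTR_EL3.EZ */)) {
            return 3;
        }
        return 0;
    }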
Fixes: 5be5e8eda78
Backports commit 60eed0869d68b91eff71cc0a0facb01983726a5d from qemu
Given that the only field defined for this new register may only
be 0, we don't actually need to change anything except the name.
Backports commit 9516d7725ec1deaa6ef5ccc5a26d005650d6c524 from qemu
A cut-and-paste error meant we were reading r4 from the v8M
callee-saves exception stack frame twice. This is harmless
since it just meant we did two memory accesses to the same
location, but it's unnecessary. Delete it.
Backports commit e5ae4d0c063fbcca4cbbd26bcefbf1760cfac2aa from qemu
In v7m_exception_taken() we were incorrectly using a
"LR bit EXCRET.ES is 1" check when it should be 0
(compare the pseudocode ExceptionTaken() function).
This meant we didn't stack the callee-saved registers
when tailchaining from a NonSecure to a Secure exception.
Backports commit 7b73a1ca05b33d42278ce29cea4652e22d408165 from qemu
Not only are the sve-related tb_flags fields unused when SVE is
disabled, but not all of the cpu registers are initialized properly
for computing them. This can corrupt other fields by ORing in -1,
which might result in QEMU crashing.
This bug was not present in 3.0, but this patch is cc'd to
stable because adf92eab90e3f5f34c285 where the bug was
introduced was marked for stable.
Backports commit e79b445d896deb61909be52b61b87c98a9ed96f7 from qemu
On 32-bit exception entry, CPSR.J must always be set to 0
(see v7A Arm ARM DDI0406C.c B1.8.5). CPSR.IL must also
be cleared on 32-bit exception entry (see v8A Arm ARM
DDI0487C.a G1.10).
Clear these bits. (This fixes a bug which will never be noticed
by non-buggy guests.)
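In bit terms (CPSR.J is bit 24, CPSR.IL is bit 20) the entry path just
has to mask both out of the new CPSR; a self-contained illustration,
with macro names mirroring QEMU's CPSR_* defines:

    #include <stdint.h>

    #define CPSR_IL (1u << 20)   /* Illegal Execution state */
    #define CPSR_J  (1u << 24)   /* Jazelle state (always 0 in v7/v8) */

    /* Sketch: the CPSR written on 32-bit exception entry must have
     * both bits forced to zero, whatever the old value was.
     */
    static uint32_t cpsr_for_exception_entry(uint32_t old_cpsr)
    {
        return old_cpsr & ~(CPSR_IL | CPSR_J);
    }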
Backports commit 829f9fd394ab082753308cbda165c13eaf8fae49 from qemu
Factor out the code which changes the CPU state so as to
actually take an exception to AArch32. We're going to want
to use this for handling exception entry to Hyp mode.
Backports commit dea8378bb3e86f2c6bd05afb3927619f7c51bb47 from qemu
The AArch32 HCR and HCR2 registers alias HCR_EL2
bits [31:0] and [63:32]; implement them.
Since HCR2 exists in ARMv8 but not ARMv7, we need new
regdef arrays for "we have EL3, not EL2, we're ARMv8"
and "we have EL2, we're ARMv8" to hold the definitions.
Backports commit ce4afed8396aabaf87cd42fbe8a4c14f7a9d5c10 from qemu
The v8 AArch32 HACTLR2 register maps to bits [63:32] of ACTLR_EL2.
We implement ACTLR_EL2 as RAZ/WI, so make HACTLR2 also RAZ/WI.
(We put the regdef next to ACTLR_EL2 as a reminder in case we
ever make ACTLR_EL2 something other than RAZ/WI).
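RAZ/WI in regdef terms is ARM_CP_CONST with a zero resetvalue; roughly
(the encoding shown, opc1=4/CRn=1/CRm=0/opc2=3, is the architectural
HACTLR2 one and the exact regdef layout is a sketch):

    { .name = "HACTLR2", .state = ARM_CP_STATE_AA32,
      .cp = 15, .opc1 = 4, .crn = 1, .crm = 0, .opc2 = 3,
      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },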
Backports commit 0e0456ab8895a5e85998904549e331d36c2692a5 from qemu
The AArch32 HSR is the equivalent of AArch64 ESR_EL2;
we can implement it by marking our existing ESR_EL2 regdef
as STATE_BOTH. It also needs to be "RES0 from EL3 if
EL2 not implemented", so add the missing stanza to
el3_no_el2_cp_reginfo.
Backports commit 68e78e332cb1c3f8b0317a0443acb2b5e190f0dd from qemu
The AArch32 virtualization extensions support these fault address
registers:
* HDFAR: aliased with AArch64 FAR_EL2[31:0] and AArch32 DFAR(S)
* HIFAR: aliased with AArch64 FAR_EL2[63:32] and AArch32 IFAR(S)
Implement the accessors for these. This fixes in passing a bug
where we weren't implementing the "RES0 from EL3 if EL2 not
implemented" behaviour for AArch64 FAR_EL2.
Backports commit cba517c31e7df8932c4473c477a0f01d8a0adc48 from qemu
Implement the AArch32 HVBAR register; we can do this just by
making the existing VBAR_EL2 regdefs be STATE_BOTH.
Backports commit d79e0c0608899428281a17c414ccf1a82d86ab85 from qemu
ARMCPRegInfo structs will default to .cp = 15 if they
are ARM_CP_STATE_BOTH, but not if they are ARM_CP_STATE_AA32
(because a coprocessor number of 0 is valid for AArch32).
We forgot to explicitly set .cp = 15 for the HMAIR1 and
HAMAIR1 regdefs, which meant they would UNDEF when the guest
tried to access them under cp15.
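The fix is just the explicit coprocessor number in the AA32 regdefs,
e.g. (encoding per MAIR1 with opc1=4; treat the exact field values as a
sketch):

    { .name = "HMAIR1", .state = ARM_CP_STATE_AA32,
      .cp = 15,  /* previously omitted, so the regdef defaulted to cp 0 */
      .opc1 = 4, .crn = 10, .crm = 2, .opc2 = 1,
      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },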
Backports commit b5ede85bfb7ba1a8f6086494c82f400b29969f65 from qemu
We implement the HAMAIR1 register as RAZ/WI; we had a typo in the
regdef, though, and were incorrectly naming it HMAIR1 (which is
a different register which we also implement as RAZ/WI).
Backports commit 55b53c718b2f684793eeefcf1c1a548ee97e23aa from qemu
When FZ is set, input_denormal exceptions are recognized, but this does
not happen with FZ16. The softfloat code has no way to distinguish
these bits and will raise such exceptions into fp_status_f16.flags,
so ignore them when computing the accumulated flags.
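Accumulation then looks something like this (fp_status_f16 and
float_flag_input_denormal are the existing QEMU/softfloat names; the
surrounding FPSCR code is abbreviated):

    /* FZ16 never raises Input Denormal, so drop that bit when folding
     * the half-precision status flags into the accumulated FPSCR flags.
     */
    flags |= get_float_exception_flags(&env->vfp.fp_status_f16)
             & ~float_flag_input_denormal;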
Backports commit 19062c169e5bcdda3d60df9161228e107bf0f96e from qemu
When support for FZ16 was added, we failed to include the bit
within FPCR_MASK, which means that it could never be set.
Continue to zero FZ16 when ARMv8.2-FP16 is not enabled.
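A self-contained illustration of the two requirements (FPCR.FZ16 is bit
19; fpcr_mask and have_fp16 are illustrative names, not QEMU's):

    #include <stdint.h>
    #include <stdbool.h>

    #define FPCR_FZ16 (1u << 19)   /* ARMv8.2-FP16 flush-to-zero for fp16 */

    /* Sketch: the writable-bits mask must include FZ16 so the guest can
     * set it, and the bit is cleared again when FP16 is not present.
     */
    static uint32_t masked_fpcr_write(uint32_t value, uint32_t fpcr_mask,
                                      bool have_fp16)
    {
        value &= fpcr_mask;          /* fpcr_mask must contain FPCR_FZ16 */
        if (!have_fp16) {
            value &= ~FPCR_FZ16;     /* keep FZ16 RES0 without the extension */
        }
        return value;
    }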
Fixes: d81ce0ef2c4
Backports commit 0b62159be33d45d00dfa34a317c6d3da30ffb480 from qemu
Tailchaining is an optimization in handling of exception return
for M-profile cores: if we are about to pop the exception stack
for an exception return, but there is a pending exception which
is higher priority than the priority we are returning to, then
instead of unstacking and then immediately taking the exception
and stacking registers again, we can chain to the pending
exception without unstacking and stacking.
For v6M and v7M it is IMPDEF whether tailchaining happens for pending
exceptions; for v8M this is architecturally required. Implement it
in QEMU for all M-profile cores, since in practice v6M and v7M
hardware implementations generally do have it.
(We were already doing tailchaining for derived exceptions which
happened during exception return, like the validity checks and
stack access failures; these have always been required to be
tailchained for all versions of the architecture.)
Backports commit 5f62d3b9e67bfc3deb970e3c7fb7df7e57d46fc3 from qemu
On exception return for M-profile, we must restore the CONTROL.SPSEL
bit from the EXCRET value before we do any kind of tailchaining,
including for the derived exceptions on integrity check failures.
Otherwise we will give the guest an incorrect EXCRET.SPSEL value on
exception entry for the tailchained exception.
Backports commit 89b1fec193b81b6ad0bd2975f2fa179980cc722e from qemu
In do_v7m_exception_exit(), we use the exc_secure variable to track
whether the exception we're returning from is secure or non-secure.
Unfortunately the statement initializing this was accidentally
inside an "if (env->v7m.exception != ARMV7M_EXCP_NMI)" conditional,
which meant that we were using the wrong value for NMI handlers.
Move the initialization out to the right place.
Backports commit b8109608bc6f3337298d44ac4369bf0bc8c3a1e4 from qemu
One of the required effects of setting HCR_EL2.TGE is that when
SCR_EL3.NS is 1 then SCTLR_EL1.M must behave as if it is zero for
all purposes except direct reads. That is, it effectively disables
the MMU for the NS EL0/EL1 translation regime.
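In regime_translation_disabled() terms this is roughly (HCR_TGE,
regime_is_secure() and regime_el() are existing QEMU names; the
structure here is a sketch):

    if (env->cp15.hcr_el2 & HCR_TGE) {
        /* TGE means that NS EL0/EL1 behave as if SCTLR_EL1.M is 0 */
        if (!regime_is_secure(env, mmu_idx) && regime_el(env, mmu_idx) == 1) {
            return true;    /* translation disabled for this regime */
        }
    }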
Backports commit 3d0e3080d8b7abcddc038d18e8401861c369c4c1 from qemu
The IMO, FMO and AMO bits in HCR_EL2 are defined to "behave as
1 for all purposes other than direct reads" if HCR_EL2.TGE
is set and HCR_EL2.E2H is 0, and to "behave as 0 for all
purposes other than direct reads" if HCR_EL2.TGE is set
and HCR_EL2.E2H is 1.
To avoid having to check E2H and TGE everywhere where we test IMO and
FMO, provide accessors arm_hcr_el2_imo(), arm_hcr_el2_fmo() and
arm_hcr_el2_amo(). We don't implement ARMv8.1-VHE yet, so the E2H
case will never be true, but we include the logic to save effort when
we eventually do get to that.
(Note that in several of these callsites the change doesn't
actually make a difference as either the callsite is handling
TGE specially anyway, or the CPU can't get into that situation
with TGE set; we change everywhere for consistency.)
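The IMO accessor, for example, ends up looking roughly like this
(HCR_TGE/HCR_E2H/HCR_IMO are the existing bit defines; FMO and AMO are
analogous):

    static inline bool arm_hcr_el2_imo(CPUARMState *env)
    {
        switch (env->cp15.hcr_el2 & (HCR_TGE | HCR_E2H)) {
        case HCR_TGE:
            return true;               /* TGE=1, E2H=0: behaves as 1 */
        case HCR_TGE | HCR_E2H:
            return false;              /* TGE=1, E2H=1: behaves as 0 */
        default:
            return env->cp15.hcr_el2 & HCR_IMO;
        }
    }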
Backports commit ac656b166b57332ee397e9781810c956f4f5fde5 from qemu
Some debug registers can be trapped via MDCR_EL2 bits TDRA, TDOSA,
and TDA, which we implement in the functions access_tdra(),
access_tdosa() and access_tda(). If MDCR_EL2.TDE or HCR_EL2.TGE
are 1, the TDRA, TDOSA and TDA bits should behave as if they were 1.
Implement this by having the access functions check MDCR_EL2.TDE
and HCR_EL2.TGE.
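A sketch of the access-function shape (the MDCR_*/HCR_TGE bit names and
CP_ACCESS_* results follow QEMU's headers; the EL3 and secure-state
conditions are abbreviated):

    static CPAccessResult access_tda(CPUARMState *env, const ARMCPRegInfo *ri,
                                     bool isread)
    {
        /* Trap to EL2 if TDA is set, or if TDE/TGE make it behave as set. */
        if (arm_current_el(env) < 2
            && ((env->cp15.mdcr_el2 & (MDCR_TDA | MDCR_TDE))
                || (env->cp15.hcr_el2 & HCR_TGE))) {
            return CP_ACCESS_TRAP_EL2;
        }
        return CP_ACCESS_OK;
    }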
Backports commit 30ac6339dca3fe0d05a611f12eedd5af20af585a from qemu
Forbid stack alignment change. (CCR)
Reserve FAULTMASK, BASEPRI registers.
Report any fault as a HardFault. Disable MemManage, BusFault and
UsageFault, so they are always escalated to HardFault. (SHCSR)
Backports commit 22ab3460017cfcfb6b50f05838ad142e08becce5 from qemu
To correctly handle small (less than TARGET_PAGE_SIZE) MPU regions,
we must correctly handle the case where the address being looked
up hits in an MPU region that is not small but the address is
in the same page as a small region. For instance if MPU region
1 covers an entire page from 0x2000 to 0x2400 and MPU region
2 is small and covers only 0x2200 to 0x2280, then for an access
to 0x2000 we must not return a result covering the full page
even though we hit the page-sized region 1. Otherwise we will
then cache that result in the TLB and accesses that should
hit region 2 will incorrectly find the region 1 information.
Check for the case where we miss an MPU region but it is still
within the same page, and in that case narrow the size we will
pass to tlb_set_page_with_attrs() for whatever the final
outcome is of the MPU lookup.
Backports commit 9d2b5a58f85be2d8e129c4b53d6708ecf8796e54 from qemu
For M-profile exception returns, the mmu index to use for exception
return unstacking is supposed to be that of wherever we are returning to:
* if returning to handler mode, privileged
* if returning to thread mode, privileged or unprivileged depending on
CONTROL.nPRIV for the destination security state
We were passing the wrong thing as the 'priv' argument to
arm_v7m_mmu_idx_for_secstate_and_priv(). The effect was that guests
which programmed the MPU to behave differently for privileged and
unprivileged code could get spurious MemManage Unstack exceptions.
Backports commit 2b83714d4ea659899069a4b94aa2dfadc847a013 from qemu
Leave ARM_CP_SVE, removing ARM_CP_FPU; the sve_access_check
produced by the flag already includes fp_access_check. If
we also check ARM_CP_FPU the double fp_access_check asserts.
Backports commit 11d7870b1b4d038d7beb827f3afa72e284701351 from qemu
Since kernel commit a86bd139f2 (arm64: arch_timer: Enable CNTVCT_EL0
trap..), released in kernel version v4.12, user-space has been able
to read these system registers. As we can't use QEMUTimer's in
linux-user mode we just directly call cpu_get_clock().
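The user-mode read ends up being little more than this
(cpu_get_clock() and GTIMER_SCALE are existing QEMU names; the regdef
wiring around it is omitted):

    static uint64_t gt_virt_cnt_read(CPUARMState *env, const ARMCPRegInfo *ri)
    {
        /* No QEMUTimer in linux-user mode, so derive the count directly
         * from the host clock scaled to the generic timer frequency.
         */
        return cpu_get_clock() / GTIMER_SCALE;
    }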
Backports commit 26c4a83bd4707797868174332a540f7d61288d15 from qemu
Allow ARMv8M to handle small MPU and SAU region sizes, by making
get_phys_addr_pmsav8() set the page size to 1 if the MPU or
SAU region covers less than TARGET_PAGE_SIZE.
We choose to use a size of 1 because it makes no difference to
the core code, and avoids having to track both the base and
limit for SAU and MPU and then convert into an artificially
restricted "page size" that the core code will then ignore.
Since the core TCG code can't handle execution from small
MPU regions, we strip the exec permission from them so that
any execution attempts will cause an MPU exception, rather
than allowing it to end up with a cpu_abort() in
get_page_addr_code().
(The previous code's intention was to make any small page be
treated as having no permissions, but unfortunately errors
in the implementation meant that it didn't behave that way.
It's possible that some binaries using small regions were
accidentally working with our old behaviour and won't now.)
We also retain an existing bug, where we ignored the possibility
that the SAU region might not cover the entire page, in the
case of executable regions. This is necessary because some
currently-working guest code images rely on being able to
execute from addresses which are covered by a page-sized
MPU region but a smaller SAU region. We can remove this
workaround if we ever support execution from small regions.
Backports commit 720424359917887c926a33d248131fbff84c9c28 from qemu
We want to handle small MPU region sizes for ARMv7M. To do this,
make get_phys_addr_pmsav7() set the page size to the region
size if it is less than TARGET_PAGE_SIZE, rather than working
only in TARGET_PAGE_SIZE chunks.
Since the core TCG code can't handle execution from small
MPU regions, we strip the exec permission from them so that
any execution attempts will cause an MPU exception, rather
than allowing it to end up with a cpu_abort() in
get_page_addr_code().
(The previous code's intention was to make any small page be
treated as having no permissions, but unfortunately errors
in the implementation meant that it didn't behave that way.
It's possible that some binaries using small regions were
accidentally working with our old behaviour and won't now.)
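In outline the change amounts to something like this inside
get_phys_addr_pmsav7() (region_size_bytes is an illustrative name; the
real code derives it from the encoded region size field):

    if (region_size_bytes < TARGET_PAGE_SIZE) {
        /* Report the true (sub-page) size so the TLB entry covers only
         * this region, and refuse execution: TCG cannot run code from a
         * sub-page mapping, so any fetch will fault as an MPU exception.
         */
        *page_size = region_size_bytes;
        *prot &= ~PAGE_EXEC;
    }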
Backports commit e5e40999b5e03567ef654546e3d448431643f8f3 from qemu
Depending on the host abi, float16, aka uint16_t, values are
passed and returned either zero-extended in the host register
or with garbage at the top of the host register.
The tcg code generator has so far been assuming garbage, as that
matches the x86 abi, but this is incorrect for other host abis.
Further, target/arm has so far been assuming zero-extended results,
so that it may store the 16-bit value into a 32-bit slot with the
high 16-bits already clear.
Rectify both problems by mapping "f16" in the helper definition
to uint32_t instead of (a typedef for) uint16_t. This forces
the host compiler to assume garbage in the upper 16 bits on input
and to zero-extend the result on output.
Backports commit 6c2be133a7478e443c99757b833d0f265c48e0a6 from qemu
This is a preparation for the coming feature of creating dynamically an XML
description for the ARM sysregs.
Add "_S" suffix to the secure version of sysregs that have both S and NS views
Replace (S) and (NS) by _S and _NS for the register that are manually defined,
so all the registers follow the same convention.
Backports commit 9c513e786d85cc58b8ba56a482566f759e0835b6 from qemu
This is a preparation for the coming feature of creating dynamically an XML
description for the ARM sysregs.
A register that has ARM_CP_NO_GDB set will not be shown in the dynamic XML.
This bit is enabled automatically when creating CP_ANY wildcard aliases.
This bit could be enabled manually for any register we want to remove from the
dynamic XML description.
Backports commit 1f16378718fa87d63f70d0797f4546a88d8e3dd7 from qemu
The ARM ARM specifies FZ16 is suppressed for conversions. Rather than
pushing this logic into the softfloat code we can simply save the FZ
state and temporarily disable it for the softfloat call.
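The pattern in the conversion helpers is roughly this
(get_flush_inputs_to_zero()/set_flush_inputs_to_zero() are existing
softfloat accessors; a, ahp_mode and fpstp are the helper's arguments
and the rest of the body is abbreviated):

    float_status *fpst = fpstp;
    bool save = get_flush_inputs_to_zero(fpst);

    /* FZ16 must not apply to conversions, so disable input flushing
     * around the softfloat call and then restore the saved state.
     */
    set_flush_inputs_to_zero(false, fpst);
    float32 r = float16_to_float32(a, !ahp_mode, fpst);
    set_flush_inputs_to_zero(save, fpst);
    return r;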
Backports commit 0acb9e7cb341cd767e39ec0875c8706eb2f1c359 from qemu
Instead of passing env and leaving it up to the helper to get the
right fpstatus we pass it explicitly. There was already a get_fpstatus
helper for neon for the 32 bit code. We also add a get_ahp_flag() for
passing the state of the alternative FP16 format flag. This leaves
scope for later tracking the AHP state in translation flags.
Backports commit 486624fcd3eaca6165ab8401d73bbae6c0fb81c1 from qemu
The instruction "ucvtf v0.4h, v04h, #2", with input 0x8000u,
overflows the intermediate float16 to infinity before we have a
chance to scale the output. Use float64 as the intermediate type
so that no input argument (uint32_t in this case) can overflow
or round before scaling. Given the declared argument, the signed
int32_t function has the same problem.
When converting from float16 to integer, using u/int32_t instead
of u/int16_t means that the bounding is incorrect.
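A sketch of the fix for the unsigned case (uint32_to_float64(),
float64_scalbn() and float64_to_float16() are existing softfloat
routines; the helper name here is illustrative):

    static float16 uint32_to_float16_scaled(uint32_t x, int shift,
                                            float_status *fpst)
    {
        /* Convert via float64: a 32-bit input can neither overflow nor
         * round in float64, so scaling is applied to an exact value.
         */
        float64 tmp = uint32_to_float64(x, fpst);
        return float64_to_float16(float64_scalbn(tmp, -shift, fpst),
                                  true, fpst);
    }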
Backports commit 88808a022c06f98d81cd3f2d105a5734c5614839 from qemu
The duplication of id_tlbtr_reginfo was unintentionally added in
3281af8114c6b8ead02f08b58e3c36895c1ea047; the second entry should have
been id_mpuir_reginfo.
The effect was that for OMAP and StrongARM CPUs we would
incorrectly UNDEF writes to MPUIR rather than NOPing them.
Backports commit 100061121c1f69a672ce7bb3e9e3781f8018f9f6 from qemu