Rename arm_ccnt_enabled to pmu_counter_enabled, and add logic to only
return 'true' if the specified counter is enabled and neither prohibited
nor filtered.
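A minimal sketch of the intended predicate (the prohibition/filter helpers here
are illustrative placeholders, not the exact QEMU internals):

    static bool pmu_counter_enabled(CPUARMState *env, uint8_t counter)
    {
        /* The counter must be globally enabled (PMCR.E) and individually
         * enabled in PMCNTENSET, and must be neither prohibited nor
         * filtered for the current EL and security state.
         */
        if (!(env->cp15.c9_pmcr & PMCRE) ||
            !(env->cp15.c9_pmcnten & (1u << counter))) {
            return false;
        }
        return !pmu_counter_prohibited(env, counter) &&  /* illustrative */
               !pmu_counter_filtered(env, counter);      /* illustrative */
    }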
Backports commit 033614c47de78409ad3fb39bb7bd1483b71c6789 from qemu
Because of the PMU's design, many register accesses have side effects
which are inter-related, meaning that the normal method of saving CP
registers can result in inconsistent state. These side-effects are
largely handled in pmu_op_start/finish functions which can be called
before and after the state is saved/restored. By doing this and adding
raw read/write functions for the affected registers, we avoid
migration-related inconsistencies.
Backports relevant parts of commit
980ebe87053792a5bdefaa87777c40914fd4f673 from qemu
pmccntr_read and pmccntr_write contained duplicate code that was already
being handled by pmccntr_sync. Consolidate the duplicated code into two
functions: pmccntr_op_start and pmccntr_op_finish. Add a companion to
c15_ccnt in CPUARMState so that we can simultaneously save both the
architectural register value and the last underlying cycle count - this
ensures time isn't lost and will also allow us to access the 'old'
architectural register value in order to detect overflows in later
patches.
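A rough sketch of the delta bookkeeping described above (the cycle-count source
is illustrative and the PMCR.D divide-by-64 handling is elided):

    /* While the counter runs, c15_ccnt holds a delta such that
     *   guest_value = host_cycles - c15_ccnt_delta.
     * op_start materialises the architectural value; op_finish re-derives
     * the delta so no time is lost across the save/restore window.
     */
    void pmccntr_op_start(CPUARMState *env)
    {
        uint64_t cycles = cycles_get_count(env);   /* illustrative source */
        if (arm_ccnt_enabled(env)) {
            env->cp15.c15_ccnt = cycles - env->cp15.c15_ccnt_delta;
        }
        env->cp15.c15_ccnt_delta = cycles;
    }

    void pmccntr_op_finish(CPUARMState *env)
    {
        if (arm_ccnt_enabled(env)) {
            env->cp15.c15_ccnt_delta -= env->cp15.c15_ccnt;
        }
    }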
Backports commit 5d05b9d462666ed21b7fef61aa45dec9aaa9f0ff from qemu
The arm_regime_tbi{0,1} functions are replaceable with the new function
by giving the lowest and highest address.
Backports commit 5d8634f5a3a8474525edcfd581a659830e9e97c0 from qemu
Use TBID in aa64_va_parameters depending on the data parameter.
This automatically updates all existing users of the function.
Backports commit 8220af7e4d34c858898fbfe55943aeea8f4e875f from qemu
We need to reuse this from helper-a64.c. Provide a stub
definition for CONFIG_USER_ONLY. This matches the stub
definitions that we removed for arm_regime_tbi{0,1} before.
Backports commit bf0be433878935e824479e8ae890493e1fb646ed from qemu
We will shortly want to talk about TBI as it relates to data.
Passing around a pair of variables is less convenient than a
single variable.
Backports commit 476a4692f06e381117fb7ad0d04d37c9c2612198 from qemu
Split out functions to extract the virtual address parameters.
Let the functions choose T0 or T1 address space half, if present.
Extract (most of) the control bits that vary between EL or Tx.
Backports commit ba97be9f4a4ecaf16a1454dc669e5f3d935d3b63 from qemu
While we could expose stage_1_mmu_idx, the combination is
probably going to be more useful.
Backports commit 64be86ab1b5ef10b660a4230ee7f27c0da499043 from qemu
The pattern
ARMMMUIdx mmu_idx = core_to_arm_mmu_idx(env, cpu_mmu_index(env, false));
is computing the full ARMMMUIdx, stripping off the ARM bits,
and then putting them back.
Avoid the extra two steps with the appropriate helper function.
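With the helper (arm_mmu_idx() in the corresponding QEMU change), the pattern
collapses to a single call, e.g.:

    ARMMMUIdx mmu_idx = arm_mmu_idx(env);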
Backports commit 50494a279dab22a015aba9501a94fcc3cd52140e from qemu
There are 5 bits of state that could be added, but to save
space within tbflags, add only a single enable bit.
Helpers will determine the rest of the state at runtime.
Backports commit 0816ef1bfcd3ac53e7454b62ca436727887f6056 from qemu
In U-boot, we switch from S-SVC -> Mon -> Hyp mode when we want to
enter Hyp mode. The change into Hyp mode is done by doing an
exception return from Mon. This doesn't work with current QEMU.
The problem is that in bad_mode_switch() we refuse to allow
the change of mode.
Note that bad_mode_switch() is used to do validation for two situations:
(1) changes to mode by instructions writing to CPSR.M
(ie not exception take/return) -- this corresponds to the
Armv8 Arm ARM pseudocode AArch32.WriteModeByInstr
(2) changes to mode by exception return
Attempting to enter or leave Hyp mode via case (1) is forbidden in
v8 and UNPREDICTABLE in v7, and QEMU is correct to disallow it
there. However, we're already doing that check at the top of the
bad_mode_switch() function, so if that passes then we should allow
the case (2) exception return mode changes to switch into Hyp mode.
We want to test whether we're trying to return to the nonexistent
"secure Hyp" mode, so we need to look at arm_is_secure_below_el3()
rather than arm_is_secure(), since the latter is always true if
we're in Mon (EL3).
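The resulting check for case (2) is roughly the following switch arm in
bad_mode_switch(), which returns true when the switch must be rejected (a
sketch based on the description above):

    case ARM_CPU_MODE_HYP:
        /* Returning to Hyp is only valid if EL2 exists, we are not
         * Secure (there is no "secure Hyp"), and we are currently
         * at EL2 or EL3.
         */
        return !arm_feature(env, ARM_FEATURE_EL2)
            || arm_current_el(env) < 2
            || arm_is_secure_below_el3(env);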
Backports commit 2d2a4549cc29850aab891495685a7b31f5254b12 from qemu
Use "register" TBFLAG_ANY to indicate shared state between
A32 and A64, and "registers" TBFLAG_A32 & TBFLAG_A64 for
fields that are specific to the given cpu state.
Move ARM_TBFLAG_BE_DATA to shared state, instead of its current
placement within "Bit usage when in AArch32 state".
Backports commit aad821ac4faad369fad8941d25e59edf2514246b from qemu
Provide a trivial implementation with zero limited ordering regions,
which causes the LDLAR and STLLR instructions to devolve into the
LDAR and STLR instructions from the base ARMv8.0 instruction set.
Backports commit 2d7137c10fafefe40a0a049ff8a7bd78b66e661f from qemu
Since arm_hcr_el2_eff includes a check against
arm_is_secure_below_el3, we can often remove a
nearby check against secure state.
In some cases, sort the call to arm_hcr_el2_eff
to the end of a short-circuit logical sequence.
Backports commit 7c208e0f4171c9e2cc35efc12e1bf264a45c229f from qemu
Replace arm_hcr_el2_{fmo,imo,amo} with a more general routine
that also takes SCR_EL3.NS (aka arm_is_secure_below_el3) into
account, as documented for the plethora of bits in HCR_EL2.
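Callers then test the effective value directly; for example (HCR_IMO is the
existing bit definition, the surrounding condition is illustrative):

    uint64_t hcr_el2 = arm_hcr_el2_eff(env);
    if (hcr_el2 & HCR_IMO) {
        /* IRQs are routed/masked as if IMO were set */
    }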
Backports commit f77784446045231f7dfa46c9b872091241fa1557 from qemu
The bulk of the work here, beyond base HPD, is defining the
TTBCR2 register. In addition we must check TTBCR.T2E, which
is not present (RES0) for AArch64.
Backports commit ab638a328fd099ba0b23c8c818eb39f2c35414f3 from qemu
Since the TCR_*.HPD bits were RES0 in ARMv8.0, we can simply
interpret the bits as if ARMv8.1-HPD is present without checking.
We will need a slightly different check for hpd for aarch32.
Backports commit 037c13c5904f5fc67bb0ab7dd91ae07347aedee9 from qemu
Because EL3 has a fixed execution mode, we can properly decide
which of the bits are RES{0,1}.
Backports commit ea22747c63c9a894777aa41a7af85c3d08e39f81 from qemu
The enable for TGE has already occurred within arm_hcr_el2_amo
and friends. Moreover, when E2H is also set, the sense is
supposed to be reversed, which has also already occurred within
the helpers.
Backports commit 619959c3583dad325c36f09ce670e7d091382cae from qemu
At the same time, define the fields for these registers,
and use those defines in arm_pamax().
Backports commit 3dc91ddbc68391f934bf6945853e99cf6810fc00 from qemu
Hyp mode is an exception to the general rule that each AArch32
mode has its own r13, r14 and SPSR -- it has a banked r13 and
SPSR but shares its r14 with User and System mode. We were
incorrectly implementing it as banked, which meant that on
entry to Hyp mode r14 was 0 rather than the USR/SYS r14.
We provide a new function r14_bank_number() which is like
the existing bank_number() but provides the index into
env->banked_r14[]; bank_number() provides the index to use
for env->banked_r13[] and env->banked_cpsr[].
All the points in the code that were using bank_number()
to index into env->banked_r14[] are updated for consistency:
* switch_mode() -- this is the only place where we fix
an actual bug
* aarch64_sync_32_to_64() and aarch64_sync_64_to_32():
no behavioural change as we already special-cased Hyp R14
* kvm32.c: no behavioural change since the guest can't ever
be in Hyp mode, but conceptually the right thing to do
* msr_banked()/mrs_banked(): we can never get to the case
that accesses banked_r14[] with tgtmode == ARM_CPU_MODE_HYP,
so no behavioural change
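The new helper is essentially (BANK_USRSYS being the existing index of the
shared User/System bank):

    static inline int r14_bank_number(int mode)
    {
        return (mode == ARM_CPU_MODE_HYP) ? BANK_USRSYS : bank_number(mode);
    }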
Backports commit 593cfa2b637b92d37eef949653840dc065cdb960 from qemu
In commit 8a0fc3a29fc2315325400 we tried to implement HCR_EL2.{VI,VF},
but we got it wrong and had to revert it.
In that commit we implemented them as simply tracking whether there
is a pending virtual IRQ or virtual FIQ. This is not correct -- these
bits cause a software-generated VIRQ/VFIQ, which is distinct from
whether there is a hardware-generated VIRQ/VFIQ caused by the
external interrupt controller. So we need to track separately
the HCR_EL2 bit state and the external virq/vfiq line state, and
OR the two together to get the actual pending VIRQ/VFIQ state.
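Conceptually the pending state is computed as (a sketch, not the literal code):

    /* VIRQ is pending if either the hypervisor set HCR_EL2.VI or the
     * external (GIC) virtual IRQ line is asserted; VFIQ is analogous
     * with HCR_VF and the VFIQ line.
     */
    bool virq = (env->cp15.hcr_el2 & HCR_VI) ||
                (env->irq_line_state & CPU_INTERRUPT_VIRQ);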
Fixes: 8a0fc3a29fc2315325400c738f807d0d4ae0ab7f
Backports commit 89430fc6f80a5aef1d4cbd6fc26b40c30793786c from qemu
This reverts commit 8a0fc3a29fc2315325400c738f807d0d4ae0ab7f.
The implementation of HCR.VI and VF in that commit is not
correct -- they do not track the overall "is there a pending
VIRQ or VFIQ" status, but whether there is a pending interrupt
due to "this mechanism", ie the hypervisor having set the VI/VF
bits. The overall pending state for VIRQ and VFIQ is effectively
the logical OR of the inbound lines from the GIC with the
VI and VF bits. Commit 8a0fc3a29fc231 would result in pending
VIRQ/VFIQ possibly being lost when the hypervisor wrote to HCR.
As a preliminary to implementing the HCR.VI/VF feature properly,
revert the broken one entirely.
Backports commit c624ea0fa7ffc9e2cc3e2b36c92b5c960954489f from qemu
Before we supported direct execution from MMIO regions, we
implemented workarounds in commit 720424359917887c926a33d2
which let us avoid doing so, even if the SAU or MPU region
was less than page-sized.
Once we implemented execute-from-MMIO, we removed part
of those workarounds in commit d4b6275df320cee76; but
we forgot the one in get_phys_addr_pmsav8() which
suppressed use of small SAU regions in executable regions.
Remove that workaround now.
Backports commit 521ed6b4015ba39a2e39c65a94643f3e6412edc4 from qemu
Now that we have full support for small regions, including execution,
we can remove the workarounds where we marked all small regions as
non-executable for the M-profile MPU and SAU.
Backports commit d4b6275df320cee764d56b194b1898547f545857 from qemu
Remove a TODO comment about implementing the vectored interrupt
controller. We have had an implementation of that for a decade;
it's in hw/intc/pl190.c.
Backports commit e24ad484909e7a00ca4f6332f3698facf0ba3394 from qemu
ATS1HR and ATS1HW (which allow AArch32 EL2 to do address translations
on the EL2 translation regime) were implemented in commit 14db7fe09a2c8.
However, we got them wrong: these should do stage 1 address translations
as defined for NS-EL2, which is ARMMMUIdx_S1E2. We were incorrectly
making them perform stage 2 translations.
A few years later in commit 1313e2d7e2cd we forgot entirely that
we'd implemented ATS1Hx, and added a comment that ATS1Hx were
"not supported yet". Remove the comment; there is no extra code
needed to handle these operations in do_ats_write(), because
arm_s1_regime_using_lpae_format() returns true for ARMMMUIdx_S1E2,
which forces 64-bit PAR format.
Backports commit 23463e0e4aeb2f0a9c60549a2c163f4adc0b8512 from qemu
In do_ats_write() we construct a PAR value based on the result
of the translation. A comment says "S2WLK and FSTAGE are always
zero, because we don't implement virtualization".
Since we do in fact now implement virtualization, add the missing
code that sets these bits based on the reported ARMMMUFaultInfo.
(These bits are named PTW and S in ARMv8, so we follow that
convention in the new comments in this patch.)
Backports commit 0f7b791b35f24cb1333f779705a3f6472e6935de from qemu
Since QEMU does not implement ASIDs, changes to the ASID must flush the
tlb. However, if the ASID does not change there is no reason to flush.
In testing a boot of the Ubuntu installer to the first menu, this reduces
the number of flushes by 30%, or nearly 600k instances.
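The write handler therefore only flushes when the ASID field (bits [63:48] of a
64-bit TTBR write) actually changes, roughly as in upstream QEMU (the backported
version may differ in detail):

    static void vmsa_ttbr_write(CPUARMState *env, const ARMCPRegInfo *ri,
                                uint64_t value)
    {
        /* If the ASID changes (with a 64-bit write), we must flush the TLB. */
        if (cpreg_field_is_64bit(ri) &&
            extract64(raw_read(env, ri) ^ value, 48, 16) != 0) {
            tlb_flush(CPU(arm_env_get_cpu(env)));
        }
        raw_write(env, ri, value);
    }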
Backports commit 93f379b0c43617b1361f742f261479eaed4959cb from qemu
The EL3 version of this register does not include an ASID,
and so the tlb_flush performed by vmsa_ttbr_write is not needed.
Backports commit f478847f1ee0df9397f561025ab2f687fd923571 from qemu
For traps of FP/SIMD instructions to AArch32 Hyp mode, the syndrome
provided in HSR has more information than is reported to AArch64.
Specifically, there are extra fields TA and coproc which indicate
whether the trapped instruction was FP or SIMD. Add this extra
information to the syndromes we construct, and mask it out when
taking the exception to AArch64.
Backports commit 4be42f4013fa1a9df47b48aae5148767bed8e80c from qemu
For the v7 version of the Arm architecture, the IL bit in
syndrome register values where the field is not valid was
defined to be UNK/SBZP. In v8 this is RES1, which is what
QEMU currently implements. Handle the desired v7 behaviour
by squashing the IL bit for the affected cases:
* EC == EC_UNCATEGORIZED
* prefetch aborts
* data aborts where ISV is 0
(The fourth case listed in the v8 Arm ARM DDI 0487C.a in
section G7.2.70, "illegal state exception", can't happen
on a v7 CPU.)
This deals with a corner case noted in a comment.
Backports commit 2ed08180db096ea5e44573529b85e09b1ed10b08 from qemu
Create and use a utility function to extract the EC field
from a syndrome, rather than open-coding the shift.
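The helper is trivial (ARM_EL_EC_SHIFT is the existing EC field offset):

    static inline uint32_t syn_get_ec(uint32_t syn)
    {
        return syn >> ARM_EL_EC_SHIFT;
    }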
Backports commit 64b91e3f890a8c221b65c6820a5ee39107ee40f5 from qemu
If the HCR_EL2 PTW virtualization configuration register bit
is set, then this means that a stage 2 Permission fault must
be generated if a stage 1 translation table access is made
to an address that is mapped as Device memory in stage 2.
Implement this.
Backports commit eadb2febf05452bd8062c4c7823d7d789142500c from qemu
The HCR_EL2 VI and VF bits are supposed to track whether there is
a pending virtual IRQ or virtual FIQ. For QEMU we store the
pending VIRQ/VFIQ status in cs->interrupt_request, so this means:
* if the register is read we must get these bit values from
cs->interrupt_request
* if the register is written then we must write the bit
values back into cs->interrupt_request
Backports commit 8a0fc3a29fc2315325400c738f807d0d4ae0ab7f from qemu
The A/I/F bits in ISR_EL1 should track the virtual interrupt
status, not the physical interrupt status, if the associated
HCR_EL2.AMO/IMO/FMO bit is set. Implement this, rather than
always showing the physical interrupt status.
We don't currently implement anything to do with external
aborts, so this applies only to the I and F bits (though it
ought to be possible for the outer guest to present a virtual
external abort to the inner guest, even if QEMU doesn't
emulate physical external aborts, so there is missing
functionality in this area).
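For the I bit the read handler logic becomes roughly the following (the F bit
is handled the same way with FMO/VFIQ, and the exact effective-HCR accessor in
use at the time may differ):

    if (hcr_el2 & HCR_IMO) {
        if (cs->interrupt_request & CPU_INTERRUPT_VIRQ) {
            ret |= CPSR_I;
        }
    } else {
        if (cs->interrupt_request & CPU_INTERRUPT_HARD) {
            ret |= CPSR_I;
        }
    }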
Backports commit 636540e9c40bd0931ef3022cb953bb7dbecd74ed from qemu
The HCR.DC virtualization configuration register bit has the
following effects:
* SCTLR.M behaves as if it is 0 for all purposes except
direct reads of the bit
* HCR.VM behaves as if it is 1 for all purposes except
direct reads of the bit
* the memory type produced by the first stage of the EL1&EL0
translation regime is Normal Non-Shareable,
Inner Write-Back Read-Allocate Write-Allocate,
Outer Write-Back Read-Allocate Write-Allocate.
Implement this behaviour.
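Most of this falls out of the stage 1 "is translation disabled" check; a sketch
of the relevant fragment (other cases such as M-profile and TGE are elided):

    if (mmu_idx == ARMMMUIdx_S2NS) {
        /* HCR.DC means HCR.VM behaves as 1 */
        return (env->cp15.hcr_el2 & (HCR_DC | HCR_VM)) == 0;
    }
    if (env->cp15.hcr_el2 & HCR_DC) {
        /* HCR.DC means SCTLR_EL1.M behaves as 0 */
        return true;
    }
    return (regime_sctlr(env, mmu_idx) & SCTLR_M) == 0;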
Backports commit 9d1bab337caf2324a233e5937f415fad4ce1641b from qemu
The HCR.FB virtualization configuration register bit requests that
TLB maintenance, branch predictor invalidate-all and icache
invalidate-all operations performed in NS EL1 should be upgraded
from "local CPU only to "broadcast within Inner Shareable domain".
For QEMU we NOP the branch predictor and icache operations, so
we only need to upgrade the TLB invalidates:
AArch32 TLBIALL, TLBIMVA, TLBIASID, DTLBIALL, DTLBIMVA, DTLBIASID,
ITLBIALL, ITLBIMVA, ITLBIASID, TLBIMVAA, TLBIMVAL, TLBIMVAAL
AArch64 TLBI VMALLE1, TLBI VAE1, TLBI ASIDE1, TLBI VAAE1,
TLBI VALE1, TLBI VAALE1
Backports commit b4ab8ce98b8c482c8986785800f238d32a1578a9 from qemu
For AArch32, exception return happens through certain kinds
of CPSR write. We don't currently have any CPU_LOG_INT logging
of these events (unlike AArch64, where we log in the ERET
instruction). Add some suitable logging.
This will log exception returns like this:
Exception return from AArch32 hyp to usr PC 0x80100374
paralleling the existing logging in the exception_return
helper for AArch64 exception returns:
Exception return from AArch64 EL2 to AArch64 EL0 PC 0x8003045c
Exception return from AArch64 EL2 to AArch32 EL0 PC 0x8003045c
(Note that an AArch32 exception return can only be
AArch32->AArch32, never to AArch64.)
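The logging itself is a single qemu_log_mask() call in the CPSR-write
exception-return path, along these lines (the mode-name lookup helper is
illustrative):

    qemu_log_mask(CPU_LOG_INT,
                  "Exception return from AArch32 %s to %s PC 0x%x\n",
                  aarch32_mode_name(env->uncached_cpsr),  /* illustrative */
                  aarch32_mode_name(val),
                  env->regs[15]);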
Backports commit 81e3728407bf4a12f83e14fd410d5f0a7d29b5b4 from qemu
Instantiating mps2-an505 (cortex-m33) will fail make check when
V7VE asserts that ID_ISAR0.Divide includes ARM division. It is
also wrong to include ARM_FEATURE_LPAE.
Backports commit 5256df880d1312a58472af3fb0a3c51e708f2161 from qemu
The get_phys_addr() functions take a pointer to an ARMMMUFaultInfo
struct, which they fill in only if a fault occurs. This means that
the caller must always zero-initialize the struct before passing
it in. We forgot to do this in v7m_stack_read() and v7m_stack_write().
Correct the error.
Backports commit ab44c7b71fa683b9402bea0d367b87c881704188 from qemu
This is an amendment to my earlier patch:
commit 7ece99b17e832065236c07a158dfac62619ef99b
Backports commit 599b71e277ac7e92807191b20b7163a28c5450ad from qemu
At present we assert:
arm_el_is_aa64: Assertion `el >= 1 && el <= 3' failed.
The comment in arm_el_is_aa64 explains why asking about EL0 without
extra information is impossible. Add an extra argument to provide
it from the surrounding context.
Fixes: 0ab5953b00b3
Backports commit 9a05f7b67436abdc52bce899f56acfde2e831454 from qemu
Updating the NS stack pointer via MSR to SP_NS should include
a check whether the new SP value is below the stack limit.
No other kinds of update to the various stack pointer and
limit registers via MSR should perform a check.
Backports commit 167765f0739e4a108e8c2e2ff2f37917df5658f9 from qemu
Add checks for breaches of the v8M stack limit when the
stack pointer is decremented to push the exception frame
for exception entry.
Note that the exception-entry case is unique in that the
stack pointer is updated to be the limit value if the limit
is hit (per rule R_ZLZG).
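A sketch of that special case in the frame-pushing code (helper and field names
follow the idea, not necessarily the exact code):

    uint32_t limit = v7m_sp_limit(env);     /* limit for the SP in use */
    if (frameptr < limit) {
        /* Exception entry is special: clamp the SP to the limit
         * (rule R_ZLZG), record CFSR.STKOF and pend the UsageFault,
         * but still stack the frame at the clamped address.
         */
        frameptr = limit;
        env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_STKOF_MASK;
        /* ... pend ARMV7M_EXCP_USAGE ... */
    }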
Backports commit c32da7aa6205a5ff62ae8d5062f7cad0eae4c1fd from qemu
We're going to want v7m_using_psp() in op_helper.c in the
next patch, so move it from helper.c to internals.h.
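For reference, the helper being moved is essentially:

    static inline bool v7m_using_psp(CPUARMState *env)
    {
        /* Handler mode always uses the main stack; for thread mode
         * the CONTROL.SPSEL bit determines the answer.
         */
        return !arm_v7m_is_handler_mode(env) &&
            env->v7m.control[env->v7m.secure] & R_V7M_CONTROL_SPSEL_MASK;
    }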
Backports commit 5529bf188d996391ff52a0e1801daf9c6a6bfcb0 from qemu
Define EXCP_STKOF, and arrange for it to cause us to take
a UsageFault with CFSR.STKOF set.
Backports commit 86f026de22d8854eecc004af44895de74225794f from qemu
The Arm v8M architecture includes hardware stack limit checking.
When certain instructions update the stack pointer, if the new
value of SP is below the limit set in the associated limit register
then an exception is taken. Add a TB flag that tracks whether
the limit-checking code needs to be emitted.
Backports commit 4730fb85035e99c909db7d14ef76cd17f28f4423 from qemu
Use the existing helpers to determine if (1) the fpu is enabled,
(2) sve state is enabled, and (3) the current sve vector length.
Backports commit ced3155141755ba244c988c72c4bde32cc819670 from qemu
SVE vector length can change when changing EL, or when writing
to one of the ZCR_ELn registers.
For correctness, our implementation requires that predicate bits
that are inaccessible are never set. Which means noticing length
changes and zeroing the appropriate register bits.
Backports commit 0ab5953b00b3165877d00cf75de628c51670b550 from qemu
We are going to want to determine whether sve is enabled
for EL other than current.
Backports commit 2de7ace292cf7846b0cda0e940272d2cb0e06859 from qemu
Check for EL3 before testing CPTR_EL3.EZ. Return 0 when the exception
should be routed via AdvSIMDFPAccessTrap. Mirror the structure of
CheckSVEEnabled more closely.
Fixes: 5be5e8eda78
Backports commit 60eed0869d68b91eff71cc0a0facb01983726a5d from qemu
Given that the only field defined for this new register may only
be 0, we don't actually need to change anything except the name.
Backports commit 9516d7725ec1deaa6ef5ccc5a26d005650d6c524 from qemu
A cut-and-paste error meant we were reading r4 from the v8M
callee-saves exception stack frame twice. This is harmless
since it just meant we did two memory accesses to the same
location, but it's unnecessary. Delete it.
Backports commit e5ae4d0c063fbcca4cbbd26bcefbf1760cfac2aa from qemu
In v7m_exception_taken() we were incorrectly using a
"LR bit EXCRET.ES is 1" check when it should be 0
(compare the pseudocode ExceptionTaken() function).
This meant we didn't stack the callee-saved registers
when tailchaining from a NonSecure to a Secure exception.
Backports commit 7b73a1ca05b33d42278ce29cea4652e22d408165 from qemu
Not only are the sve-related tb_flags fields unused when SVE is
disabled, but not all of the cpu registers are initialized properly
for computing them. This can corrupt other fields by ORing in -1,
which might result in QEMU crashing.
This bug was not present in 3.0, but this patch is cc'd to
stable because commit adf92eab90e3f5f34c285, where the bug was
introduced, was marked for stable.
Backports commit e79b445d896deb61909be52b61b87c98a9ed96f7 from qemu
On 32-bit exception entry, CPSR.J must always be set to 0
(see v7A Arm ARM DDI0406C.c B1.8.5). CPSR.IL must also
be cleared on 32-bit exception entry (see v8A Arm ARM
DDI0487C.a G1.10).
Clear these bits. (This fixes a bug which will never be noticed
by non-buggy guests.)
Backports commit 829f9fd394ab082753308cbda165c13eaf8fae49 from qemu
Factor out the code which changes the CPU state so as to
actually take an exception to AArch32. We're going to want
to use this for handling exception entry to Hyp mode.
Backports commit dea8378bb3e86f2c6bd05afb3927619f7c51bb47 from qemu
The AArch32 HCR and HCR2 registers alias HCR_EL2
bits [31:0] and [63:32]; implement them.
Since HCR2 exists in ARMv8 but not ARMv7, we need new
regdef arrays for "we have EL3, not EL2, we're ARMv8"
and "we have EL2, we're ARMv8" to hold the definitions.
Backports commit ce4afed8396aabaf87cd42fbe8a4c14f7a9d5c10 from qemu
The v8 AArch32 HACTLR2 register maps to bits [63:32] of ACTLR_EL2.
We implement ACTLR_EL2 as RAZ/WI, so make HACTLR2 also RAZ/WI.
(We put the regdef next to ACTLR_EL2 as a reminder in case we
ever make ACTLR_EL2 something other than RAZ/WI).
Backports commit 0e0456ab8895a5e85998904549e331d36c2692a5 from qemu
The AArch32 HSR is the equivalent of AArch64 ESR_EL2;
we can implement it by marking our existing ESR_EL2 regdef
as STATE_BOTH. It also needs to be "RES0 from EL3 if
EL2 not implemented", so add the missing stanza to
el3_no_el2_cp_reginfo.
Backports commit 68e78e332cb1c3f8b0317a0443acb2b5e190f0dd from qemu
The AArch32 virtualization extensions support these fault address
registers:
* HDFAR: aliased with AArch64 FAR_EL2[31:0] and AArch32 DFAR(S)
* HIFAR: aliased with AArch64 FAR_EL2[63:32] and AArch32 IFAR(S)
Implement the accessors for these. This fixes in passing a bug
where we weren't implementing the "RES0 from EL3 if EL2 not
implemented" behaviour for AArch64 FAR_EL2.
Backports commit cba517c31e7df8932c4473c477a0f01d8a0adc48 from qemu
Implement the AArch32 HVBAR register; we can do this just by
making the existing VBAR_EL2 regdefs be STATE_BOTH.
Backports commit d79e0c0608899428281a17c414ccf1a82d86ab85 from qemu
ARMCPRegInfo structs will default to .cp = 15 if they
are ARM_CP_STATE_BOTH, but not if they are ARM_CP_STATE_AA32
(because a coprocessor number of 0 is valid for AArch32).
We forgot to explicitly set .cp = 15 for the HMAIR1 and
HAMAIR1 regdefs, which meant they would UNDEF when the guest
tried to access them under cp15.
Backports commit b5ede85bfb7ba1a8f6086494c82f400b29969f65 from qemu
We implement the HAMAIR1 register as RAZ/WI; we had a typo in the
regdef, though, and were incorrectly naming it HMAIR1 (which is
a different register which we also implement as RAZ/WI).
Backports commit 55b53c718b2f684793eeefcf1c1a548ee97e23aa from qemu
When FZ is set, input_denormal exceptions are recognized, but this does
not happen with FZ16. The softfloat code has no way to distinguish
these bits and will raise such exceptions into fp_status_f16.flags,
so ignore them when computing the accumulated flags.
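In the cumulative-flags read this amounts to masking the half-precision status
before merging it, roughly:

    /* FZ16 does not generate input_denormal: drop that flag when folding
     * the fp16 status into the architecturally visible cumulative bits.
     */
    i |= (get_float_exception_flags(&env->vfp.fp_status_f16)
          & ~float_flag_input_denormal);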
Backports commit 19062c169e5bcdda3d60df9161228e107bf0f96e from qemu
When support for FZ16 was added, we failed to include the bit
within FPCR_MASK, which means that it could never be set.
Continue to zero FZ16 when ARMv8.2-FP16 is not enabled.
Fixes: d81ce0ef2c4
Backports commit 0b62159be33d45d00dfa34a317c6d3da30ffb480 from qemu
Tailchaining is an optimization in handling of exception return
for M-profile cores: if we are about to pop the exception stack
for an exception return, but there is a pending exception which
is higher priority than the priority we are returning to, then
instead of unstacking and then immediately taking the exception
and stacking registers again, we can chain to the pending
exception without unstacking and stacking.
For v6M and v7M it is IMPDEF whether tailchaining happens for pending
exceptions; for v8M this is architecturally required. Implement it
in QEMU for all M-profile cores, since in practice v6M and v7M
hardware implementations generally do have it.
(We were already doing tailchaining for derived exceptions which
happened during exception return, like the validity checks and
stack access failures; these have always been required to be
tailchained for all versions of the architecture.)
Backports commit 5f62d3b9e67bfc3deb970e3c7fb7df7e57d46fc3 from qemu
On exception return for M-profile, we must restore the CONTROL.SPSEL
bit from the EXCRET value before we do any kind of tailchaining,
including for the derived exceptions on integrity check failures.
Otherwise we will give the guest an incorrect EXCRET.SPSEL value on
exception entry for the tailchained exception.
Backports commit 89b1fec193b81b6ad0bd2975f2fa179980cc722e from qemu
In do_v7m_exception_exit(), we use the exc_secure variable to track
whether the exception we're returning from is secure or non-secure.
Unfortunately the statement initializing this was accidentally
inside an "if (env->v7m.exception != ARMV7M_EXCP_NMI)" conditional,
which meant that we were using the wrong value for NMI handlers.
Move the initialization out to the right place.
Backports commit b8109608bc6f3337298d44ac4369bf0bc8c3a1e4 from qemu
One of the required effects of setting HCR_EL2.TGE is that when
SCR_EL3.NS is 1 then SCTLR_EL1.M must behave as if it is zero for
all purposes except direct reads. That is, it effectively disables
the MMU for the NS EL0/EL1 translation regime.
Backports commit 3d0e3080d8b7abcddc038d18e8401861c369c4c1 from qemu
The IMO, FMO and AMO bits in HCR_EL2 are defined to "behave as
1 for all purposes other than direct reads" if HCR_EL2.TGE
is set and HCR_EL2.E2H is 0, and to "behave as 0 for all
purposes other than direct reads" if HCR_EL2.TGE is set
and HRC_EL2.E2H is 1.
To avoid having to check E2H and TGE everywhere where we test IMO and
FMO, provide accessors arm_hcr_el2_imo(), arm_hcr_el2_fmo() and
arm_hcr_el2_amo(). We don't implement ARMv8.1-VHE yet, so the E2H
case will never be true, but we include the logic to save effort when
we eventually do get to that.
(Note that in several of these callsites the change doesn't
actually make a difference as either the callsite is handling
TGE specially anyway, or the CPU can't get into that situation
with TGE set; we change everywhere for consistency.)
Backports commit ac656b166b57332ee397e9781810c956f4f5fde5 from qemu
Some debug registers can be trapped via MDCR_EL2 bits TDRA, TDOSA,
and TDA, which we implement in the functions access_tdra(),
access_tdosa() and access_tda(). If MDCR_EL2.TDE or HCR_EL2.TGE
are 1, the TDRA, TDOSA and TDA bits should behave as if they were 1.
Implement this by having the access functions check MDCR_EL2.TDE
and HCR_EL2.TGE.
Backports commit 30ac6339dca3fe0d05a611f12eedd5af20af585a from qemu
Forbid stack alignment change. (CCR)
Reserve FAULTMASK, BASEPRI registers.
Report any fault as a HardFault. Disable MemManage, BusFault and
UsageFault, so they are always escalated to HardFault. (SHCSR)
Backports commit 22ab3460017cfcfb6b50f05838ad142e08becce5 from qemu
To correctly handle small (less than TARGET_PAGE_SIZE) MPU regions,
we must correctly handle the case where the address being looked
up hits in an MPU region that is not small but the address is
in the same page as a small region. For instance if MPU region
1 covers an entire page from 0x2000 to 0x2400 and MPU region
2 is small and covers only 0x2200 to 0x2280, then for an access
to 0x2000 we must not return a result covering the full page
even though we hit the page-sized region 1. Otherwise we will
then cache that result in the TLB and accesses that should
hit region 2 will incorrectly find the region 1 information.
Check for the case where we miss an MPU region but it is still
within the same page, and in that case narrow the size we will
pass to tlb_set_page_with_attrs() for whatever the final
outcome is of the MPU lookup.
Backports commit 9d2b5a58f85be2d8e129c4b53d6708ecf8796e54 from qemu
For M-profile exception returns, the mmu index to use for exception
return unstacking is supposed to be that of wherever we are returning to:
* if returning to handler mode, privileged
* if returning to thread mode, privileged or unprivileged depending on
CONTROL.nPRIV for the destination security state
We were passing the wrong thing as the 'priv' argument to
arm_v7m_mmu_idx_for_secstate_and_priv(). The effect was that guests
which programmed the MPU to behave differently for privileged and
unprivileged code could get spurious MemManage Unstack exceptions.
Backports commit 2b83714d4ea659899069a4b94aa2dfadc847a013 from qemu
Leave ARM_CP_SVE, removing ARM_CP_FPU; the sve_access_check
produced by the flag already includes fp_access_check. If
we also check ARM_CP_FPU the double fp_access_check asserts.
Backports commit 11d7870b1b4d038d7beb827f3afa72e284701351 from qemu
Since kernel commit a86bd139f2 (arm64: arch_timer: Enable CNTVCT_EL0
trap..), released in kernel version v4.12, user-space has been able
to read these system registers. As we can't use QEMUTimer's in
linux-user mode we just directly call cpu_get_clock().
Backports commit 26c4a83bd4707797868174332a540f7d61288d15 from qemu
Allow ARMv8M to handle small MPU and SAU region sizes, by making
get_phys_addr_pmsav8() set the page size to 1 if the MPU or
SAU region covers less than TARGET_PAGE_SIZE.
We choose to use a size of 1 because it makes no difference to
the core code, and avoids having to track both the base and
limit for SAU and MPU and then convert into an artificially
restricted "page size" that the core code will then ignore.
Since the core TCG code can't handle execution from small
MPU regions, we strip the exec permission from them so that
any execution attempts will cause an MPU exception, rather
than allowing it to end up with a cpu_abort() in
get_page_addr_code().
(The previous code's intention was to make any small page be
treated as having no permissions, but unfortunately errors
in the implementation meant that it didn't behave that way.
It's possible that some binaries using small regions were
accidentally working with our old behaviour and won't now.)
We also retain an existing bug, where we ignored the possibility
that the SAU region might not cover the entire page, in the
case of executable regions. This is necessary because some
currently-working guest code images rely on being able to
execute from addresses which are covered by a page-sized
MPU region but a smaller SAU region. We can remove this
workaround if we ever support execution from small regions.
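The core of the workaround is small; a sketch of the idea (the condition name
is a placeholder for the actual region-size test):

    if (region_smaller_than_page) {          /* MPU or SAU hit */
        *page_size = 1;                      /* any value < TARGET_PAGE_SIZE */
        if (*prot & PAGE_EXEC) {
            /* TCG cannot execute from a sub-page region: strip exec so an
             * execution attempt takes an MPU fault instead of cpu_abort().
             */
            *prot &= ~PAGE_EXEC;
        }
    }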
Backports commit 720424359917887c926a33d248131fbff84c9c28 from qemu
We want to handle small MPU region sizes for ARMv7M. To do this,
make get_phys_addr_pmsav7() set the page size to the region
size if it is less that TARGET_PAGE_SIZE, rather than working
only in TARGET_PAGE_SIZE chunks.
Since the core TCG code can't handle execution from small
MPU regions, we strip the exec permission from them so that
any execution attempts will cause an MPU exception, rather
than allowing it to end up with a cpu_abort() in
get_page_addr_code().
(The previous code's intention was to make any small page be
treated as having no permissions, but unfortunately errors
in the implementation meant that it didn't behave that way.
It's possible that some binaries using small regions were
accidentally working with our old behaviour and won't now.)
Backports commit e5e40999b5e03567ef654546e3d448431643f8f3 from qemu
Depending on the host abi, float16, aka uint16_t, values are
passed and returned either zero-extended in the host register
or with garbage at the top of the host register.
The tcg code generator has so far been assuming garbage, as that
matches the x86 abi, but this is incorrect for other host abis.
Further, target/arm has so far been assuming zero-extended results,
so that it may store the 16-bit value into a 32-bit slot with the
high 16-bits already clear.
Rectify both problems by mapping "f16" in the helper definition
to uint32_t instead of (a typedef for) uint16_t. This forces
the host compiler to assume garbage in the upper 16 bits on input
and to zero-extend the result on output.
Backports commit 6c2be133a7478e443c99757b833d0f265c48e0a6 from qemu
This is preparation for the coming feature of dynamically creating an XML
description for the ARM sysregs.
Add "_S" suffix to the secure version of sysregs that have both S and NS views
Replace (S) and (NS) by _S and _NS for the register that are manually defined,
so all the registers follow the same convention.
Backports commit 9c513e786d85cc58b8ba56a482566f759e0835b6 from qemu
This is preparation for the coming feature of dynamically creating an XML
description for the ARM sysregs.
A register that has ARM_CP_NO_GDB enabled will not be shown in the dynamic XML.
This bit is enabled automatically when creating CP_ANY wildcard aliases.
This bit could be enabled manually for any register we want to remove from the
dynamic XML description.
Backports commit 1f16378718fa87d63f70d0797f4546a88d8e3dd7 from qemu
The ARM ARM specifies FZ16 is suppressed for conversions. Rather than
pushing this logic into the softfloat code we can simply save the FZ
state and temporarily disable it for the softfloat call.
Backports commit 0acb9e7cb341cd767e39ec0875c8706eb2f1c359 from qemu
Instead of passing env and leaving it up to the helper to get the
right fpstatus we pass it explicitly. There was already a get_fpstatus
helper for neon for the 32 bit code. We also add a get_ahp_flag() for
passing the state of the alternative FP16 format flag. This leaves
scope for later tracking the AHP state in translation flags.
Backports commit 486624fcd3eaca6165ab8401d73bbae6c0fb81c1 from qemu
The instruction "ucvtf v0.4h, v04h, #2", with input 0x8000u,
overflows the intermediate float16 to infinity before we have a
chance to scale the output. Use float64 as the intermediate type
so that no input argument (uint32_t in this case) can overflow
or round before scaling. Given the declared argument, the signed
int32_t function has the same problem.
When converting from float16 to integer, using u/int32_t instead
of u/int16_t means that the bounding is incorrect.
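The fix widens through float64 so that neither the scaling nor the conversion
can overflow or round prematurely; a minimal sketch using the standard
softfloat conversions:

    #include "fpu/softfloat.h"

    /* Fixed-point uint32 -> float16 with 'fbits' fraction bits. */
    static float16 ucvtf_f16_sketch(uint32_t x, int fbits, float_status *fpst)
    {
        float64 tmp = uint32_to_float64(x, fpst);
        tmp = float64_scalbn(tmp, -fbits, fpst);     /* apply the scale */
        return float64_to_float16(tmp, true, fpst);  /* round once, at the end */
    }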
Backports commit 88808a022c06f98d81cd3f2d105a5734c5614839 from qemu
The duplication of id_tlbtr_reginfo was unintentionally added within
3281af8114c6b8ead02f08b58e3c36895c1ea047 which should have been
id_mpuir_reginfo.
The effect was that for OMAP and StrongARM CPUs we would
incorrectly UNDEF writes to MPUIR rather than NOPing them.
Backports commit 100061121c1f69a672ce7bb3e9e3781f8018f9f6 from qemu
This is a bug fix to ensure 64-bit reads of these registers don't read
adjacent data.
Backports commit e4e91a217c17fff4045dd4b423cdcb471b3d6a0e from qemu
Because the design of the PMU requires that the counter values be
converted between their delta and guest-visible forms for mode
filtering, an additional hook which occurs before the EL is changed is
necessary.
Backports commit b5c53d1b3886387874f8c8582b205aeb3e4c3df6 from qemu
In commit 95695effe8caa552b8f2 we changed the v7M/v8M stack
pop code to use a new v7m_stack_read() function that checks
whether the read should fail due to an MPU or bus abort.
We missed one call though, the one which reads the signature
word for the callee-saved register part of the frame.
Correct the omission.
Backports commit 4818bad98c8212fbbb0525d10761b6b65279ab92 from qemu
Remove a stale TODO comment -- we have now made the arm_ldl_ptw()
and arm_ldq_ptw() functions propagate physical memory read errors
out to their callers.
Backports commit 145772707fe80395b87c244ccf5699a756f1946b from qemu
Currently our PMSAv7 and ARMv7M MPU implementation cannot handle
MPU region sizes smaller than our TARGET_PAGE_SIZE. However we
report that in a slightly confusing way:
DRSR[3]: No support for MPU (sub)region alignment of 9 bits. Minimum is 10
The problem is not the alignment of the region, but its size;
tweak the error message to say so:
DRSR[3]: No support for MPU (sub)region size of 512 bytes. Minimum is 1024.
Backports commit 8aec759b45fa6986c0b159cb27353d6abb0d5d73 from qemu
Now that we have a helper function specifically for the BRK and
BKPT instructions, we can set the exception.fsr there rather
than in arm_cpu_do_interrupt_aarch32(). This allows us to
use our new arm_debug_exception_fsr() helper.
In particular this fixes a bug where we were hardcoding the
short-form IFSR value, which is wrong if the target exception
level has LPAE enabled.
Fixes: https://bugs.launchpad.net/qemu/+bug/1756927
Backports commit 62b94f31d0df75187bb00684fc29e8639eacc0c5 from qemu
Backports commits 2994fd96d986578a342f2342501b4ad30f6d0a85,
701e3c78ce45fa630ffc6826c4b9a4218954bc7f, and
d1853231c60d16af78cf4d1608d043614bfbac0b from qemu
Much like recpe, the ARM ARM has simplified the pseudo code for the
calculation, which is done with fixed-point 9-bit integer maths. So
while adding f16 we can also clean this up to be a little less heavy
on the floating point and just return the fractional part, leaving
the callers to do the final packing of the result.
Backports commit d719cbc7641991d16b891ffbbfc3a16a04e37b9a from qemu
Also removes a load of symbols that seem unnecessary from the header_gen script
It looks like the ARM ARM has simplified the pseudo code for the
calculation, which is done with fixed-point 9-bit integer maths. So
while adding f16 we can also clean this up to be a little less heavy
on the floating point and just return the fractional part, leaving
the callers to do the final packing of the result.
Backports commit 5eb70735af1c0b607bf2671a53aff3710cc1672f from qemu
As the rounding mode is now split between FP16 and the rest of
floating point we need to be explicit when tweaking it. Instead of
passing the CPU env we now pass the appropriate fpst pointer directly.
Backports commit 9b04991686785e18b18a36d193b68f08f7c91648 from qemu
Half-precision flush to zero behaviour is controlled by a separate
FZ16 bit in the FPCR. To handle this we pass a pointer to
fp_status_fp16 when working on half-precision operations. The value of
the presented FPCR is calculated from an amalgam of the two when read.
Backports commit d81ce0ef2c4f1052fcdef891a12499eca3084db7 from qemu
The register definitions for VMIDR and VMPIDR have separate
reginfo structs for the AArch32 and AArch64 registers. However
the 32-bit versions are wrong:
* they use offsetof instead of offsetoflow32 to mark where
the 32-bit value lives in the uint64_t CPU state field
* they don't mark themselves as ARM_CP_ALIAS
In particular this means that if you try to use an Arm guest CPU
which enables EL2 on a big-endian host it will assert at reset:
target/arm/cpu.c:114: cp_reg_check_reset: Assertion `oldvalue == newvalue' failed.
because the reset of the 32-bit register writes to the top
half of the uint64_t.
Correct the errors in the structures.
Backports commit 36476562d57a3b64bbe86db26e63677dd21907c5 from qemu
As cpu.h is another typically widely included file which doesn't need
full access to the softfloat API we can remove the includes from here
as well. Where they do need types it's typically for float_status and
the rounding modes so we move that to softfloat-types.h as well.
As a result of not having softfloat in every cpu.h call we now need to
add it to various helpers that do need the full softfloat.h
definitions.
Backports commit 24f91e81b65fcdd0552d1f0fcb0ea7cfe3829c19 from qemu
The v8M architecture includes hardware support for enforcing
stack pointer limits. We don't implement this behaviour yet,
but provide the MSPLIM and PSPLIM stack pointer limit registers
as reads-as-written, so that when we do implement the checks
in future this won't break guest migration.
Backports commit 57bb31568114023f67680d6fe478ceb13c51aa7d from qemu
In commit 50f11062d4c896 we added support for MSR/MRS access
to the NS banked special registers, but we forgot to implement
the support for writing to CONTROL_NS. Correct the omission.
Backports commit 6eb3a64e2a96f5ced1f7896042b01f002bf0a91f from qemu
Handle possible MPU faults, SAU faults or bus errors when
popping register state off the stack during exception return.
Backports commit 95695effe8caa552b8f243bceb3a08de4003c882 from qemu
Make the load of the exception vector from the vector table honour
the SAU and any bus error on the load (possibly provoking a derived
exception), rather than simply aborting if the load fails.
Backports commit 600c33f24752a00e81e9372261e35c2befea612b from qemu
Make v7m_push_callee_stack() honour the MPU by using the
new v7m_stack_write() function. We return a flag to indicate
whether the pushes failed, which we can then use in
v7m_exception_taken() to cause us to handle the derived
exception correctly.
Backports commit 65b4234ff73a4d4865438ce30bdfaaa499464efa from qemu
The memory writes done to push registers on the stack
on exception entry in M profile CPUs are supposed to
go via MPU permissions checks, which may cause us to
take a derived exception instead of the original one of
the MPU lookup fails. We were implementing these as
always-succeeds direct writes to physical memory.
Rewrite v7m_push_stack() to do the necessary checks.
Backports commit fd592d890ec40e3686760de84044230a8ebb1eb3 from qemu
In the v8M architecture, if the process of taking an exception
results in a further exception this is called a derived exception
(for example, an MPU exception when writing the exception frame to
memory). If the derived exception happens while pushing the initial
stack frame, we must ignore any subsequent possible exception
pushing the callee-saves registers.
In preparation for making the stack writes check for exceptions,
add a return value from v7m_push_stack() and a new parameter to
v7m_exception_taken(), so that the former can tell the latter that
it needs to ignore failures to write to the stack. We also plumb
the argument through to v7m_push_callee_stack(), which is where
the code to ignore the failures will be.
(Note that the v8M ARM pseudocode structures this slightly differently:
derived exceptions cause the attempt to process the original
exception to be abandoned; then at the top level it calls
DerivedLateArrival to prioritize the derived exception and call
TakeException from there. We choose to let the NVIC do the prioritization
and continue forward with a call to TakeException which will then
take either the original or the derived exception. The effect is
the same, but this structure works better for QEMU because we don't
have a convenient top level place to do the abandon-and-retry logic.)
Backports commit 0094ca70e165cfb69882fa2e100d935d45f1c983 from qemu
Currently armv7m_nvic_acknowledge_irq() does three things:
* make the current highest priority pending interrupt active
* return a bool indicating whether that interrupt is targeting
Secure or NonSecure state
* implicitly tell the caller which is the highest priority
pending interrupt by setting env->v7m.exception
We need to split these jobs, because v7m_exception_taken()
needs to know whether the pending interrupt targets Secure so
it can choose to stack callee-saves registers or not, but it
must not make the interrupt active until after it has done
that stacking, in case the stacking causes a derived exception.
Similarly, it needs to know the number of the pending interrupt
so it can read the correct vector table entry before the
interrupt is made active, because vector table reads might
also cause a derived exception.
Create a new armv7m_nvic_get_pending_irq_info() function which simply
returns information about the highest priority pending interrupt, and
use it to rearrange the v7m_exception_taken() code so we don't
acknowledge the exception until we've done all the things which could
possibly cause a derived exception.
Backports part of commit 6c9485188170e11ad31ce477c8ce200b8e8ce59d from qemu
Commit ("3b39d734141a target/arm: Handle page table walk load failures
correctly") modified both versions of the page table walking code (i.e.,
arm_ldl_ptw and arm_ldq_ptw) to record the result of the translation in
a temporary 'data' variable so that it can be inspected before being
returned. However, arm_ldq_ptw() returns an uint64_t, and using a
temporary uint32_t variable truncates the upper bits, corrupting the
result. This causes problems when using more than 4 GB of memory in
a TCG guest. So use a uint64_t instead.
Backports commit 9aea1ea31af25fe344a88da086ff913cca09c667 from qemu
Instead of ignoring the response from address_space_ld*()
(indicating an attempt to read a page table descriptor from
an invalid physical address), use it to report the failure
correctly.
Since this is another couple of locations where we need to
decide the value of the ARMMMUFaultInfo ea bit based on a
MemTxResult, we factor out that operation into a helper
function.
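The factored-out helper boils down to classifying the MemTxResult, roughly:

    /* The EA bit in syndromes and fault status registers is an IMPDEF
     * classification of external aborts; decode errors (bad address) are
     * reported with EA == 0 and other (slave) errors with EA == 1.
     */
    static bool arm_extabort_type(MemTxResult result)
    {
        return result != MEMTX_DECODE_ERROR;
    }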
Backports commit 3b39d734141a71296d08af3d4c32f872fafd782e from qemu
For PMSAv7, the v7A/R Arm ARM defines that setting AP to 0b111
is an UNPREDICTABLE reserved combination. However, for v7M
this value is documented as having the same behaviour as 0b110:
read-only for both privileged and unprivileged. Accept this
value on an M profile core rather than treating it as a guest
error and a no-access page.
Backports commit 8638f1ad7403b63db880dadce38e6690b5d82b64 from qemu
Now that do_ats_write() is entirely in control of whether to
generate a 32-bit PAR or a 64-bit PAR, we can make it use the
correct (complicated) condition for doing so.
Backports commit 1313e2d7e2cd8b21741e0cf542eb09dfc4188f79 from qemu
All of the callers of get_phys_addr() and arm_tlb_fill() now ignore
the FSR values they return, so we can just remove the argument
entirely.
Backports commit bc52bfeb3be2052942b7dac8ba284f342ac9605b from qemu
In do_ats_write(), rather than using the FSR value from get_phys_addr(),
construct the PAR values using the information in the ARMMMUFaultInfo
struct. This allows us to create a PAR of the correct format regardless
of what the translation table format is.
For the moment we leave the condition for "when should this be a
64 bit PAR" as it was previously; this will need to be fixed to
properly support AArch32 Hyp mode.
Backports commit 5efe9ed45dec775ebe91ce72bd805ee780d16064 from qemu
Make get_phys_addr_pmsav8() return a fault type in the ARMMMUFaultInfo
structure, which we convert to the FSC at the callsite.
Backports commit 3f551b5b7380ff131fe22944aa6f5b166aa13caf from qemu
Make get_phys_addr_pmsav7() return a fault type in the ARMMMUFaultInfo
structure, which we convert to the FSC at the callsite.
Backports commit 9375ad15338b24e06109071ac3a85df48a2cc2e6 from qemu
Make get_phys_addr_pmsav5() return a fault type in the ARMMMUFaultInfo
structure, which we convert to the FSC at the callsite.
Note that PMSAv5 does not define any guest-visible fault status
register, so the different "fsr" values we were previously
returning are entirely arbitrary. So we can just switch to using
the most appropriate fi->type values without worrying that we
need to special-case FaultInfo->FSC conversion for PMSAv5.
Backports commit 53a4e5c5b07b2f50c538511b74b0d3d4964695ea from qemu
Make get_phys_addr_v6() return a fault type in the ARMMMUFaultInfo
structure, which we convert to the FSC at the callsite.
Backports commit da909b2c23a68e57bbcb6be98229e40df606f0c8 from qemu
Make get_phys_addr_lpae() return a fault type in the ARMMMUFaultInfo
structure, which we convert to the FSC at the callsite.
Backports commit f06cf243945ccb24cb9578304306ae7fcb4cf3fd from qemu
Make get_phys_addr_v5() return a fault type in the ARMMMUFaultInfo
structure, which we convert to the FSC at the callsite.
Backports commit f989983e8dc9be6bc3468c6dbe46fcb1501a740c from qemu
All the callers of arm_ldq_ptw() and arm_ldl_ptw() ignore the value
that those functions store in the fsr argument on failure: if they
return failure to their callers they will always overwrite the fsr
value with something else.
Remove the argument from these functions and S1_ptw_translate().
This will simplify removing fsr from the calling functions.
Backports commit 3795a6de9f7ec4a7e3dcb8bf02a88a014147b0b0 from qemu
Implement the TT instruction which queries the security
state and access permissions of a memory location.
Backports commit 5158de241b0fb344a6c948dfcbc4e611ab5fafbe from qemu
For the TT instruction we're going to need to do an MPU lookup that
also tells us which MPU region the access hit. This requires us
to do the MPU lookup without first doing the SAU security access
check, so pull the MPU lookup parts of get_phys_addr_pmsav8()
out into their own function.
The TT instruction also needs to know the MPU region number which
the lookup hit, so provide this information to the caller of the
MPU lookup code, even though get_phys_addr_pmsav8() doesn't
need to know it.
Backports commit 54317c0ff3a3c0f6b2c3a1d3c8b5d93686a86d24 from qemu
For M profile, we currently have an mmu index MNegPri for
"requested execution priority negative". This fails to
distinguish "requested execution priority negative, privileged"
from "requested execution priority negative, usermode", but
the two can return different results for MPU lookups. Fix this
by splitting MNegPri into MNegPriPriv and MNegPriUser, and
similarly for the Secure equivalent MSNegPri.
This takes us from 6 M profile MMU modes to 8, which means
we need to bump NB_MMU_MODES; this is OK since the point
where we are forced to reduce TLB sizes is 9 MMU modes.
(It would in theory be possible to stick with 6 MMU indexes:
{mpu-disabled,user,privileged} x {secure,nonsecure} since
in the MPU-disabled case the result of an MPU lookup is
always the same for both user and privileged code. However
we would then need to rework the TB flags handling to put
user/priv into the TB flags separately from the mmuidx.
Adding an extra couple of mmu indexes is simpler.)
Backports commit 62593718d77c06ad2b5e942727cead40775d2395 from qemu
When we added the ARMMMUIdx_MSUser MMU index we forgot to
add it to the case statement in regime_is_user(), so we
weren't treating it as unprivileged when doing MPU lookups.
Correct the omission.
Backports commit 871bec7c44a453d9cab972ce1b5d12e1af0545ab from qemu
In ARMv7M the CPU ignores explicit writes to CONTROL.SPSEL
in Handler mode. In v8M the behaviour is slightly different:
writes to the bit are permitted but will have no effect.
We've already done the hard work to handle the value in
CONTROL.SPSEL being out of sync with what stack pointer is
actually in use, so all we need to do to fix this last loose
end is to update the condition we use to guard whether we
call write_v7m_control_spsel() on the register write.
Backports commit 83d7f86d3d27473c0aac79c1baaa5c2ab01b02d9 from qemu
For v8M it is possible for the CONTROL.SPSEL bit value and the
current stack to be out of sync. This means we need to update
the checks used in reads and writes of the PSP and MSP special
registers to use v7m_using_psp() rather than directly checking
the SPSEL bit in the control register.
Backports commit 1169d3aa5b19adca9384d954d80e1f48da388284 from qemu
In do_ats_write(), rather than using extended_addresses_enabled() to
decide whether the value we get back from get_phys_addr() is a 64-bit
format PAR or a 32-bit one, use arm_s1_regime_using_lpae_format().
This is not really the correct answer, because the PAR format
depends on the AT instruction being used, not just on the
translation regime. However getting this correct requires a
significant refactoring, so that get_phys_addr() returns raw
information about the fault which the caller can then assemble
into a suitable FSR/PAR/syndrome for its purposes, rather than
get_phys_addr() returning a pre-formatted FSR.
However this change at least improves the situation by making
the PAR work correctly for address translation operations done
at AArch64 EL2 on the EL2 translation regime. In particular,
this is necessary for Xen to be able to run in our emulation,
so this seems like a safer interim fix given that we are in freeze.
Backports commit 50cd71b0d347c74517dcb7da447fe657fca57d9c from qemu
The CPU ID registers ID_AA64PFR0_EL1, ID_PFR1_EL1 and ID_PFR1
have a field for reporting presence of GICv3 system registers.
We need to report this field correctly in order for Xen to
work as a guest inside QEMU emulation. We mustn't incorrectly
claim the sysregs exist when they don't, though, or Linux will
crash.
Unfortunately the way we've designed the GICv3 emulation in QEMU
puts the system registers as part of the GICv3 device, which
may be created after the CPU proper has been realized. This
means that we don't know at the point when we define the ID
registers what the correct value is. Handle this by switching
them to calling a function at runtime to read the value, where
we can fill in the GIC field appropriately.
Backports commit 96a8b92ed8f02d5e86ad380d3299d9f41f99b072 from qemu
On a successful address translation instruction, PAR is supposed to
contain cacheability and shareability attributes determined by the
translation. We previously returned 0 for these bits (in line with the
general strategy of ignoring caches and memory attributes), but some
guest OSes may depend on them.
This patch collects the attribute bits in the page-table walk, and
updates PAR with the correct attributes for all LPAE translations.
Short descriptor formats still return 0 for these bits, as in the
prior implementation.
Backports commit 5b2d261d60caf9d988d91ca1e02392d6fc8ea104 from qemu
Secure function return happens when a non-secure function has been
called using BLXNS and so has a particular magic LR value (either
0xfefffffe or 0xfeffffff). The function return via BX behaves
specially when the new PC value is this magic value, in the same
way that exception returns are handled.
Adjust our BX excret guards so that they recognize the function
return magic number as well, and perform the function-return
unstacking in do_v7m_exception_exit().
Backports commit d02a8698d7ae2bfed3b11fe5b064cb0aa406863b from qemu
Implement the SG instruction, which we emulate 'by hand' in the
exception handling code path.
Backports commit 333e10c51ef5876ced26f77b61b69ce0f83161a9 from qemu
Implement the security attribute lookups for memory accesses
in the get_phys_addr() functions, causing these to generate
various kinds of SecureFault for bad accesses.
The major subtlety in this code relates to handling of the
case when the security attributes the SAU assigns to the
address don't match the current security state of the CPU.
In the ARM ARM pseudocode for validating instruction
accesses, the security attributes of the address determine
whether the Secure or NonSecure MPU state is used. At face
value, handling this would require us to encode the relevant
bits of state into mmu_idx for both S and NS at once, which
would result in our needing 16 mmu indexes. Fortunately we
don't actually need to do this because a mismatch between
address attributes and CPU state means either:
* some kind of fault (usually a SecureFault, but in theory
perhaps a UserFault for unaligned access to Device memory)
* execution of the SG instruction in NS state from a
Secure & NonSecure code region
The purpose of SG is simply to flip the CPU into Secure
state, so we can handle it by emulating execution of that
instruction directly in arm_v7m_cpu_do_interrupt(), which
means we can treat all the mismatch cases as "throw an
exception" and we don't need to encode the state of the
other MPU bank into our mmu_idx values.
This commit doesn't include the actual emulation of SG;
it also doesn't include implementation of the IDAU, which
is a per-board way to specify hard-coded memory attributes
for addresses, which override the CPU-internal SAU if they
specify a more secure setting than the SAU is programmed to.
Backports commit 35337cc391245f251bfb9134f181c33e6375d6c1 from qemu
Add support for v8M and in particular the security extension
to the exception entry code. This requires changes to:
* calculation of the exception-return magic LR value
* push the callee-saves registers in certain cases
* clear registers when taking non-secure exceptions to avoid
leaking information from the interrupted secure code
* switch to the correct security state on entry
* use the vector table for the security state we're targeting
Backports commit d3392718e1fcf0859fb7c0774a8e946bacb8419c from qemu
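As a sketch of the first bullet only, here is one way the v8M EXC_RETURN value could be composed on entry; the bit positions follow my reading of the v8M layout (and QEMU's V7M_EXCRET_* field names), so treat them as assumptions to verify against the ARM ARM:

    #include <stdbool.h>
    #include <stdint.h>

    #define EXCRET_ES     (1u << 0)    /* exception targets Secure state */
    #define EXCRET_SPSEL  (1u << 2)    /* background code used the process SP */
    #define EXCRET_MODE   (1u << 3)    /* background code was in Thread mode */
    #define EXCRET_FTYPE  (1u << 4)    /* 1: no FP state in the frame */
    #define EXCRET_DCRS   (1u << 5)    /* callee-save stacking rule flag */
    #define EXCRET_S      (1u << 6)    /* frame belongs to Secure state */
    #define EXCRET_RES1   0xffffff80u  /* bits [31:7] read as one */

    uint32_t make_excret(bool to_secure, bool frame_is_secure,
                         bool was_thread_mode, bool used_psp, bool dcrs)
    {
        uint32_t lr = EXCRET_RES1 | EXCRET_FTYPE;   /* no FP context here */
        if (to_secure)        lr |= EXCRET_ES;
        if (frame_is_secure)  lr |= EXCRET_S;
        if (was_thread_mode)  lr |= EXCRET_MODE;
        if (used_psp)         lr |= EXCRET_SPSEL;
        if (dcrs)             lr |= EXCRET_DCRS;
        return lr;
    }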
For v8M, exceptions from Secure to Non-Secure state will save
callee-saved registers to the exception frame as well as the
caller-saved registers. Add support for unstacking these
registers in exception exit when necessary.
Backports commit 907bedb3f3ce134c149599bd9cb61856d811b8ca from qemu
In v8M, more bits are defined in the exception-return magic
values; update the code that checks these so we accept
the v8M values when the CPU permits them.
Backports commit bfb2eb52788b9605ef2fc9bc72683d4299117fde from qemu
In the v8M architecture, return from an exception to a PC which
has bit 0 set is not UNPREDICTABLE; it is defined that bit 0
is discarded [R_HRJH]. Restrict our complaint about this to v7M.
Backports commit 4e4259d3c574a8e89c3af27bcb84bc19a442efb1 from qemu
Attempting to do an exception return with an exception frame that
is not 8-byte aligned is UNPREDICTABLE in v8M; warn about this.
(It is not UNPREDICTABLE in v7M, and our implementation can
handle the merely-4-byte-aligned case fine, so we don't need to
do anything except warn.)
Backports commit cb484f9a6e790205e69d9a444c3e353a3a1cfd84 from qemu
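The check itself is tiny; roughly (a sketch, not the QEMU code):

    #include <stdint.h>
    #include <stdio.h>

    /* Warn rather than fault: the merely-4-byte-aligned case still works. */
    void check_frame_alignment(uint32_t frameptr)
    {
        if (frameptr & 7) {
            fprintf(stderr, "M profile exception return with non-8-byte-aligned "
                    "frame pointer 0x%08x is UNPREDICTABLE in v8M\n",
                    (unsigned)frameptr);
        }
    }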
ARM v8M specifies that the INVPC usage fault for mismatched
xPSR exception field and handler mode bit should be checked
before updating the PSR and SP, so that the fault is taken
with the existing stack frame rather than by pushing a new one.
Perform this check in the right place for v8M.
Since v7M specifies in its pseudocode that this usage fault
check should happen later, we have to retain the original
code for that check rather than being able to merge the two.
(The distinction is architecturally visible but only in
very obscure corner cases like attempting an invalid exception
return with an exception frame in read only memory.)
Backports commit 224e0c300a0098fb577a03bd29d774d0769f632a from qemu
On exception return for v8M, the SPSEL bit in the EXC_RETURN magic
value should be restored to the SPSEL bit of the CONTROL register
bank specified by the EXC_RETURN.ES bit.
Add write_v7m_control_spsel_for_secstate() which behaves like
write_v7m_control_spsel() but allows the caller to specify which
CONTROL bank to use, reimplement write_v7m_control_spsel() in
terms of it, and use it in exception return.
Backports commit 3f0cddeee1f266d43c956581f3050058360a810d from qemu
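The shape of that refactoring, with an invented state layout standing in for the real CPU state (CONTROL.SPSEL is bit 1 of CONTROL):

    #include <stdbool.h>
    #include <stdint.h>

    enum { BANK_NS = 0, BANK_S = 1 };
    #define CONTROL_SPSEL (1u << 1)

    typedef struct {
        uint32_t control[2];   /* banked CONTROL, indexed by security state */
        bool secure;           /* current security state */
    } DemoMState;

    void write_control_spsel_for_secstate(DemoMState *s, bool new_spsel,
                                          int secstate)
    {
        if (new_spsel) {
            s->control[secstate] |= CONTROL_SPSEL;
        } else {
            s->control[secstate] &= ~CONTROL_SPSEL;
        }
        /* A real implementation also switches the active SP here when
         * secstate is the current state and the CPU is in Thread mode. */
    }

    /* The old entry point becomes a wrapper using the current state. */
    void write_control_spsel(DemoMState *s, bool new_spsel)
    {
        write_control_spsel_for_secstate(s, new_spsel,
                                         s->secure ? BANK_S : BANK_NS);
    }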
Now that we can handle the CONTROL.SPSEL bit not necessarily being
in sync with the current stack pointer, we can restore the correct
security state on exception return. This happens before we start
to read registers off the stack frame, but after we have taken
possible usage faults for bad exception return magic values and
updated CONTROL.SPSEL.
Backports commit 3919e60b6efd9a86a0e6ba637aa584222855ac3a from qemu
In the v7M architecture, there is an invariant that if the CPU is
in Handler mode then the CONTROL.SPSEL bit cannot be nonzero.
This in turn means that the current stack pointer is always
indicated by CONTROL.SPSEL, even though Handler mode always uses
the Main stack pointer.
In v8M, this invariant is removed, and CONTROL.SPSEL may now
be nonzero in Handler mode (though Handler mode still always
uses the Main stack pointer). In preparation for this change,
change how we handle this bit: rename switch_v7m_sp() to
the now more accurate write_v7m_control_spsel(), and make it
check both the handler mode state and the SPSEL bit.
Note that this implicitly changes the point at which we switch
active SP on exception exit from before we pop the exception
frame to after it.
Backports commit de2db7ec894f11931932ca78cd14a8d2b1389d5b from qemu
Currently our M profile exception return code switches to the
target stack pointer relatively early in the process, before
it tries to pop the exception frame off the stack. This is
awkward for v8M for two reasons:
* in v8M the process vs main stack pointer is not selected
purely by the value of CONTROL.SPSEL, so updating SPSEL
and relying on that to switch to the right stack pointer
won't work
* the stack we should be reading the stack frame from and
the stack we will eventually switch to might not be the
same if the guest is doing strange things
Change our exception return code to use a 'frame pointer'
to read the exception frame rather than assuming that we
can switch the live stack pointer this early.
Backports commit 5b5223997c04b769bb362767cecb5f7ec382c5f0 from qemu
This properly forwards SMC events to EL2 when PSCI is provided by QEMU
itself and, thus, ARM_FEATURE_EL3 is off.
Found and tested with the Jailhouse hypervisor. Solution based on
suggestions by Peter Maydell.
Backports commit 77077a83006c3c9bdca496727f1735a3c5c5355d from qemu
In v8M the MSR and MRS instructions have extra register value
encodings to allow secure code to access the non-secure banked
version of various special registers.
(We don't implement the MSPLIM_NS or PSPLIM_NS aliases, because
we don't currently implement the stack limit registers at all.)
Backports commit 50f11062d4c896408731d6a286bcd116d1e08465 from qemu
In the v7M and v8M ARM ARM, the magic exception return values are
referred to as EXC_RETURN values, and in QEMU we use V7M_EXCRET_*
constants to define bits within them. Rename the 'type' variable
which holds the exception return value in do_v7m_exception_exit()
to excret, making it clearer that it does hold an EXC_RETURN value.
Backports commit 351e527a613147aa2a2e6910f92923deef27ee48 from qemu
The exception-return magic values get some new bits in v8M, which
makes some bit definitions for them worthwhile.
We don't use the bit definitions for the switch on the low bits
which checks the return type for v7M, because this is defined
in the v7M ARM ARM as a set of valid values rather than via
per-bit checks.
Backports commit 4d1e7a4745c050f7ccac49a1c01437526b5130b5 from qemu
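The low-bits check referred to here is a switch over the architected values rather than per-bit tests; roughly, using the well-known v7M EXC_RETURN encodings (sketch only):

    #include <stdint.h>

    typedef enum {
        RET_HANDLER_MSP, RET_THREAD_MSP, RET_THREAD_PSP, RET_INVALID
    } V7MReturnType;

    V7MReturnType classify_v7m_excret(uint32_t excret)
    {
        switch (excret & 0xf) {
        case 0x1: return RET_HANDLER_MSP;  /* return to Handler mode, main SP */
        case 0x9: return RET_THREAD_MSP;   /* return to Thread mode, main SP */
        case 0xd: return RET_THREAD_PSP;   /* return to Thread mode, process SP */
        default:  return RET_INVALID;      /* other encodings fault */
        }
    }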
In do_v7m_exception_exit(), there's no need to force the high 4
bits of 'type' to 1 when calling v7m_exception_taken(), because
we know that they're always 1 or we could not have got to this
"handle return to magic exception return address" code. Remove
the unnecessary ORs.
Backports commit 7115cdf5782922611bcc44c89eec5990db7f6466 from qemu
For a bus fault, the M profile BFSR bit PRECISERR means a bus
fault on a data access, and IBUSERR means a bus fault on an
instruction access. We had these the wrong way around; fix this.
Backports commit c6158878650c01b2c753b2ea7d0967c8fe5ca59e from qemu
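The corrected selection is essentially a one-liner; in sketch form, with the CFSR bit positions as I read them (IBUSERR at bit 8, PRECISERR at bit 9):

    #include <stdbool.h>
    #include <stdint.h>

    #define CFSR_IBUSERR   (1u << 8)   /* bus fault on an instruction fetch */
    #define CFSR_PRECISERR (1u << 9)   /* precise bus fault on a data access */

    uint32_t busfault_status_bit(bool is_instruction_fetch)
    {
        return is_instruction_fetch ? CFSR_IBUSERR : CFSR_PRECISERR;
    }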
For M profile we must clear the exclusive monitor on reset, exception
entry and exception exit. We weren't doing any of these things; fix
this bug.
Backports commit dc3c4c14f0f12854dbd967be3486f4db4e66d25b from qemu
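Conceptually the fix is just "open the local monitor" at those three points; a sketch with an invented monitor representation:

    #include <stdint.h>

    typedef struct {
        uint64_t addr;   /* address tagged by the last LDREX, or -1 if open */
    } ExclusiveMonitor;

    /* Called from reset, exception entry and exception exit so that a
     * later STREX cannot succeed against a stale LDREX address. */
    void clear_exclusive(ExclusiveMonitor *mon)
    {
        mon->addr = (uint64_t)-1;   /* no address can match: monitor is open */
    }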
Implement the BXNS v8M instruction, which is like BX but will do a
jump-and-switch-to-NonSecure if the branch target address has bit 0
clear.
This is the first piece of code which implements "switch to the
other security state", so the commit also includes the code to
switch the stack pointers around, which is the only complicated
part of switching security state.
BLXNS is more complicated than just "BXNS but set the link register",
so we leave it for a separate commit.
Backports commit fb602cb726b3ebdd01ef3b1732d74baf9fee7ec9 from qemu
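A rough sketch of the BXNS decision with invented types; only the Secure-caller case is modelled, and the magic FNC_RETURN destinations and the stack-pointer bank switch are omitted:

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        bool secure;
        uint32_t pc;
    } DemoMiniCPU;

    void demo_bxns(DemoMiniCPU *cpu, uint32_t dest)
    {
        if (cpu->secure && !(dest & 1)) {
            /* Bit 0 clear: jump-and-switch to Non-secure state.  The full
             * implementation also swaps the banked stack pointers here. */
            cpu->secure = false;
        }
        cpu->pc = dest & ~1u;   /* interworking bit stripped from the PC */
    }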
Move the regime_is_secure() utility function to internals.h;
we are going to want to call it from translate.c.
Backports commit 61fcd69b0db268e7612b07fadc436b93def91768 from qemu
Make the CFSR register banked if v8M security extensions are enabled.
Not all the bits in this register are banked: the BFSR
bits [15:8] are shared between S and NS, and we store them
in the NS copy of the register.
Backports commit 334e8dad7a109d15cb20b090131374ae98682a50 from qemu
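The read side of that arrangement, sketched with invented storage (a banked CFSR array, with the BFSR byte in bits [15:8] held only in the NS copy):

    #include <stdint.h>

    enum { BANK_NS = 0, BANK_S = 1 };
    #define CFSR_BFSR_MASK 0x0000ff00u

    typedef struct {
        uint32_t cfsr[2];   /* banked CFSR, indexed by security state */
    } DemoFaultRegs;

    uint32_t read_cfsr(const DemoFaultRegs *r, int secstate)
    {
        uint32_t val = r->cfsr[secstate];
        if (secstate == BANK_S) {
            /* Fold in the shared BusFault status bits, which live in NS. */
            val = (val & ~CFSR_BFSR_MASK) |
                  (r->cfsr[BANK_NS] & CFSR_BFSR_MASK);
        }
        return val;
    }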
Make the CCR register banked if v8M security extensions are enabled.
This is slightly more complicated than the other "add banking"
patches because there is one bit in the register which is not
banked. We keep the live data in the NS copy of the register,
and adjust it on register reads and writes. (Since we don't
currently implement the behaviour that the bit controls, there
is nowhere else that needs to care.)
This patch includes the enforcement of the bits which are newly
RES1 in ARMv8M.
Backports commit 9d40cd8a68cfc7606f4548cc9e812bab15c6dc28 from qemu
Make the MPU registers MPU_MAIR0 and MPU_MAIR1 banked if v8M security
extensions are enabled.
We can freely add more items to vmstate_m_security without
breaking migration compatibility, because no CPU currently
has the ARM_FEATURE_M_SECURITY bit enabled and so this
subsection is not yet used by anything.
Backports commit 62c58ee0b24eafb44c06402fe059fbd7972eb409 from qemu