On 32-bit exception entry, CPSR.J must always be set to 0
(see v7A Arm ARM DDI0406C.c B1.8.5). CPSR.IL must also
be cleared on 32-bit exception entry (see v8A Arm ARM
DDI0487C.a G1.10).
Clear these bits. (This fixes a bug which will never be noticed
by non-buggy guests.)
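As a rough illustration, the fix amounts to masking two bits out of the saved CPSR on exception entry; the sketch below is standalone, with the bit positions (J = bit 24, IL = bit 20) taken from the AArch32 CPSR layout and the helper name purely hypothetical:

    #include <stdint.h>

    #define CPSR_J  (1u << 24)   /* Jazelle state bit */
    #define CPSR_IL (1u << 20)   /* Illegal Execution state bit */

    /* Clear J and IL in a CPSR image on 32-bit exception entry. */
    static uint32_t cpsr_for_exception_entry(uint32_t cpsr)
    {
        return cpsr & ~(CPSR_J | CPSR_IL);
    }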
Backports commit 829f9fd394ab082753308cbda165c13eaf8fae49 from qemu
Factor out the code which changes the CPU state so as to
actually take an exception to AArch32. We're going to want
to use this for handling exception entry to Hyp mode.
Backports commit dea8378bb3e86f2c6bd05afb3927619f7c51bb47 from qemu
The AArch32 HCR and HCR2 registers alias HCR_EL2
bits [31:0] and [63:32]; implement them.
Since HCR2 exists in ARMv8 but not ARMv7, we need new
regdef arrays for "we have EL3, not EL2, we're ARMv8"
and "we have EL2, we're ARMv8" to hold the definitions.
Backports commit ce4afed8396aabaf87cd42fbe8a4c14f7a9d5c10 from qemu
The v8 AArch32 HACTLR2 register maps to bits [63:32] of ACTLR_EL2.
We implement ACTLR_EL2 as RAZ/WI, so make HACTLR2 also RAZ/WI.
(We put the regdef next to ACTLR_EL2 as a reminder in case we
ever make ACTLR_EL2 something other than RAZ/WI).
Backports commit 0e0456ab8895a5e85998904549e331d36c2692a5 from qemu
ARMv7VE introduced the ERET instruction, which is necessary to
return from an exception taken to Hyp mode. Implement this.
In the A32 encoding it is a completely new encoding; in T32 it
is an adjustment of the behaviour of the existing
"SUBS PC, LR, #<imm8>" instruction.
Backports commit 55c544ed2709bd202e71e77ddfe3ea0327852211 from qemu
The MSR (banked) and MRS (banked) instructions allow accesses to ELR_Hyp
from either Monitor or Hyp mode. Our translate time check
was overly strict and only permitted access from Monitor mode.
The runtime check we do in msr_mrs_banked_exc_checks() had the
correct code in it, but never got there because of the earlier
"currmode == tgtmode" check. Special case ELR_Hyp.
Backports commit aec4dd09f172ee64c19222b78269d5952fd9c1dc from qemu
The AArch32 HSR is the equivalent of AArch64 ESR_EL2;
we can implement it by marking our existing ESR_EL2 regdef
as STATE_BOTH. It also needs to be "RES0 from EL3 if
EL2 not implemented", so add the missing stanza to
el3_no_el2_cp_reginfo.
Backports commit 68e78e332cb1c3f8b0317a0443acb2b5e190f0dd from qemu
The AArch32 virtualization extensions support these fault address
registers:
* HDFAR: aliased with AArch64 FAR_EL2[31:0] and AArch32 DFAR(S)
* HIFAR: aliased with AArch64 FAR_EL2[63:32] and AArch32 IFAR(S)
Implement the accessors for these. This fixes in passing a bug
where we weren't implementing the "RES0 from EL3 if EL2 not
implemented" behaviour for AArch64 FAR_EL2.
Backports commit cba517c31e7df8932c4473c477a0f01d8a0adc48 from qemu
Implement the AArch32 HVBAR register; we can do this just by
making the existing VBAR_EL2 regdefs be STATE_BOTH.
Backports commit d79e0c0608899428281a17c414ccf1a82d86ab85 from qemu
ARMCPRegInfo structs will default to .cp = 15 if they
are ARM_CP_STATE_BOTH, but not if they are ARM_CP_STATE_AA32
(because a coprocessor number of 0 is valid for AArch32).
We forgot to explicitly set .cp = 15 for the HMAIR1 and
HAMAIR1 regdefs, which meant they would UNDEF when the guest
tried to access them under cp15.
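A sketch of the shape of the fix, assuming QEMU's ARMCPRegInfo fields; the encoding values are shown for illustration rather than copied from the patch:

    /* An ARM_CP_STATE_AA32 regdef must spell out .cp = 15 explicitly;
     * only ARM_CP_STATE_BOTH entries get that default. */
    { .name = "HMAIR1", .state = ARM_CP_STATE_AA32,
      .cp = 15, .opc1 = 4, .crn = 10, .crm = 2, .opc2 = 1,
      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },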
Backports commit b5ede85bfb7ba1a8f6086494c82f400b29969f65 from qemu
We implement the HAMAIR1 register as RAZ/WI; we had a typo in the
regdef, though, and were incorrectly naming it HMAIR1 (which is
a different register which we also implement as RAZ/WI).
Backports commit 55b53c718b2f684793eeefcf1c1a548ee97e23aa from qemu
If an instruction is conditional (like CBZ) and it is executed
conditionally (using the ITx instruction), a jump to an undefined
label is generated, and QEMU crashes.
CBZ in an IT block is UNPREDICTABLE behavior, but we should not
crash. Honouring the condition code is allowed by the spec in this
case (constrained unpredictable, ARMv8, section K1.1.7), and matches
what we do for other "UNPREDICTABLE inside an IT block" instructions.
Fix the 'skip on condition' code to create a new label only if it
does not already exist. Previously multiple labels were created, but
only the last one of them was set.
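A sketch of the lazy-label pattern, assuming the condjmp/condlabel fields of QEMU's DisasContext; this is illustrative, not the literal patch:

    /* Only allocate the skip label if IT handling has not already done
     * so; otherwise the earlier label would never be set. */
    if (!s->condjmp) {
        s->condlabel = gen_new_label();
        s->condjmp = 1;
    }
    /* ...conditionally branch over the insn to s->condlabel... */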
Backports commit c2d9644e6d517170bf6520f633628259a8460d48 from qemu
These insns require u=1; we failed to include that in the switch
cases. This probably happened during one of the rebases just
before final commit.
Fixes: d17b7cdcf4e
Backports commit b8a4a96db3639e17ab5e5cdc14fca4b19fbf5b3b from qemu
We were using the wrong flush-to-zero bit for the non-half input.
Fixes: 46d33d1e3c9
Backports commit e4ab5124a5c2e2291006b24bdc21c3dd8d087ff4 from qemu
When FZ is set, input_denormal exceptions are recognized, but this does
not happen with FZ16. The softfloat code has no way to distinguish
these bits and will raise such exceptions into fp_status_f16.flags,
so ignore them when computing the accumulated flags.
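A sketch of the accumulation step, assuming QEMU's softfloat accessors (get_float_exception_flags(), float_flag_input_denormal) and the fp_status/fp_status_f16 fields; the surrounding function is elided:

    /* Merge the flags from both float_status structs, but drop
     * input_denormal from the FP16 status: FZ16 must not report it. */
    int flags = get_float_exception_flags(&env->vfp.fp_status);
    flags |= get_float_exception_flags(&env->vfp.fp_status_f16)
             & ~float_flag_input_denormal;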
Backports commit 19062c169e5bcdda3d60df9161228e107bf0f96e from qemu
When support for FZ16 was added, we failed to include the bit
within FPCR_MASK, which means that it could never be set.
Continue to zero FZ16 when ARMv8.2-FP16 is not enabled.
Fixes: d81ce0ef2c4
Backports commit 0b62159be33d45d00dfa34a317c6d3da30ffb480 from qemu
Define a "cortex-m0" ARMv6-M CPU model.
Most of the register reset values set by other CPU models are not
relevant for the cut-down ARMv6-M architecture.
Backports commit 191776b96a381b5d2b8d3f90c1c02b3e4779e5f7 from qemu
This allows the default (and maximum) vector length to be set
from the command line, which is extraordinarily helpful in
debugging problems that depend on vector length without having to
bake knowledge of PR_SET_SVE_VL into every guest binary.
Backports relevant parts of commit
adf92eab90e3f5f34c285da6d14d48952b7a8e72 from qemu
Also fold the FPCR/FPSR state onto the same line as PSTATE,
and mention but do not dump disabled FPU state.
Backports commit 2bf5f3f91bb4e3faa2a19aec042138a938afbf6a from qemu
The scaling should be solely on the memory operation size; the number
of registers being loaded does not come in to the initial computation.
Backports commit 50ef1cbf31caad21019ae6fa8036ed6f29244ba5 from qemu
The immediate should be scaled by the size of the memory reference,
not the size of the elements into which it is loaded.
Backports commit d0e372b0298f897993f831dbff7ad4f1c70f138e from qemu
The expression (int) imm + (uint32_t) len_align turns into uint32_t
and thus with negative imm produces a memory operation at the wrong
offset. None of the numbers involved are particularly large, so
change everything to use int.
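A standalone illustration of the conversion pitfall (the values are arbitrary):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int imm = -16;
        uint32_t len_align = 8;

        /* The usual arithmetic conversions make the sum uint32_t, so a
         * negative imm wraps to a huge offset instead of -8. */
        uint32_t broken = imm + len_align;
        int fixed = imm + (int)len_align;

        printf("broken = %u, fixed = %d\n", broken, fixed);
        return 0;
    }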
Backports commit 19f2acc915a0f8f443a959844540a6f09133cc96 from qemu
The pseudocode for this operation is an increment + compare loop,
so comparing <= the maximum integer produces an all-true predicate.
Rather than bound in both the inline code and the helper, pass the
helper the number of predicate bits to set instead of the number
of predicate elements to set.
Backports commit bbd0968c458d48e34a08b8694fa3309a9fe1c9e7 from qemu
The normal vector element is sign-extended before
comparing with the wide vector element.
Backports commit df4e001093988544d09887122ae824f18ba55c68 from qemu
Tailchaining is an optimization in handling of exception return
for M-profile cores: if we are about to pop the exception stack
for an exception return, but there is a pending exception which
is higher priority than the priority we are returning to, then
instead of unstacking and then immediately taking the exception
and stacking registers again, we can chain to the pending
exception without unstacking and stacking.
For v6M and v7M it is IMPDEF whether tailchaining happens for pending
exceptions; for v8M this is architecturally required. Implement it
in QEMU for all M-profile cores, since in practice v6M and v7M
hardware implementations generally do have it.
(We were already doing tailchaining for derived exceptions which
happened during exception return, like the validity checks and
stack access failures; these have always been required to be
tailchained for all versions of the architecture.)
Backports commit 5f62d3b9e67bfc3deb970e3c7fb7df7e57d46fc3 from qemu
On exception return for M-profile, we must restore the CONTROL.SPSEL
bit from the EXCRET value before we do any kind of tailchaining,
including for the derived exceptions on integrity check failures.
Otherwise we will give the guest an incorrect EXCRET.SPSEL value on
exception entry for the tailchained exception.
Backports commit 89b1fec193b81b6ad0bd2975f2fa179980cc722e from qemu
In do_v7m_exception_exit(), we use the exc_secure variable to track
whether the exception we're returning from is secure or non-secure.
Unfortunately the statement initializing this was accidentally
inside an "if (env->v7m.exception != ARMV7M_EXCP_NMI)" conditional,
which meant that we were using the wrong value for NMI handlers.
Move the initialization out to the right place.
Backports commit b8109608bc6f3337298d44ac4369bf0bc8c3a1e4 from qemu
One of the required effects of setting HCR_EL2.TGE is that when
SCR_EL3.NS is 1 then SCTLR_EL1.M must behave as if it is zero for
all purposes except direct reads. That is, it effectively disables
the MMU for the NS EL0/EL1 translation regime.
Backports commit 3d0e3080d8b7abcddc038d18e8401861c369c4c1 from qemu
The IMO, FMO and AMO bits in HCR_EL2 are defined to "behave as
1 for all purposes other than direct reads" if HCR_EL2.TGE
is set and HCR_EL2.E2H is 0, and to "behave as 0 for all
purposes other than direct reads" if HCR_EL2.TGE is set
and HCR_EL2.E2H is 1.
To avoid having to check E2H and TGE everywhere where we test IMO and
FMO, provide accessors arm_hcr_el2_imo(), arm_hcr_el2_fmo() and
arm_hcr_el2_amo(). We don't implement ARMv8.1-VHE yet, so the E2H
case will never be true, but we include the logic to save effort when
we eventually do get to that.
(Note that in several of these callsites the change doesn't
actually make a difference as either the callsite is handling
TGE specially anyway, or the CPU can't get into that situation
with TGE set; we change everywhere for consistency.)
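A standalone sketch of the behaves-as-1 / behaves-as-0 rule for one of these bits; the bit positions are placeholders for illustration (QEMU defines its own HCR_* masks):

    #include <stdbool.h>
    #include <stdint.h>

    #define HCR_IMO (1ULL << 4)    /* illustrative bit positions */
    #define HCR_TGE (1ULL << 27)
    #define HCR_E2H (1ULL << 34)

    /* Effective value of HCR_EL2.IMO for everything except direct reads. */
    static bool hcr_el2_imo_effective(uint64_t hcr_el2)
    {
        if (hcr_el2 & HCR_TGE) {
            /* TGE set: behaves as 1, unless E2H is also set (then 0). */
            return !(hcr_el2 & HCR_E2H);
        }
        return hcr_el2 & HCR_IMO;
    }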
Backports commit ac656b166b57332ee397e9781810c956f4f5fde5 from qemu
When we raise a synchronous exception, if HCR_EL2.TGE is set then
exceptions targeting NS EL1 must be redirected to EL2. Implement
this in raise_exception() -- all synchronous exceptions go through
this function.
(Asynchronous exceptions go via arm_cpu_exec_interrupt(), which
already honours HCR_EL2.TGE when it determines the target EL
in arm_phys_excp_target_el().)
Backports commit 7556edfb4d7bf0583c852c8cfc49ef494c41dd8a from qemu
Some debug registers can be trapped via MDCR_EL2 bits TDRA, TDOSA,
and TDA, which we implement in the functions access_tdra(),
access_tdosa() and access_tda(). If MDCR_EL2.TDE or HCR_EL2.TGE
are 1, the TDRA, TDOSA and TDA bits should behave as if they were 1.
Implement this by having the access functions check MDCR_EL2.TDE
and HCR_EL2.TGE.
Backports commit 30ac6339dca3fe0d05a611f12eedd5af20af585a from qemu
If the "trap general exceptions" bit HCR_EL2.TGE is set, we
must mask all virtual interrupts (as per DDI0487C.a D1.14.3).
Implement this in arm_excp_unmasked().
Backports commit 2ccf0fef632f3d54b2cc9ea08f1e6904ff1f8df4 from qemu
Forbid stack alignment change. (CCR)
Reserve FAULTMASK, BASEPRI registers.
Report any fault as a HardFault. Disable MemManage, BusFault and
UsageFault, so they are always escalated to HardFault. (SHCSR)
Backports commit 22ab3460017cfcfb6b50f05838ad142e08becce5 from qemu
To correctly handle small (less than TARGET_PAGE_SIZE) MPU regions,
we must correctly handle the case where the address being looked
up hits in an MPU region that is not small but the address is
in the same page as a small region. For instance if MPU region
1 covers an entire page from 0x2000 to 0x2400 and MPU region
2 is small and covers only 0x2200 to 0x2280, then for an access
to 0x2000 we must not return a result covering the full page
even though we hit the page-sized region 1. Otherwise we will
then cache that result in the TLB and accesses that should
hit region 2 will incorrectly find the region 1 information.
Check for the case where we miss an MPU region but it is still
within the same page, and in that case narrow the size we will
pass to tlb_set_page_with_attrs() for whatever the final
outcome is of the MPU lookup.
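A standalone sketch of the narrowing check, assuming the lookup also inspects regions that miss; the names and the 1KB page size are illustrative:

    #include <stdint.h>

    #define PAGE_SIZE 1024u                       /* illustrative TARGET_PAGE_SIZE */
    #define PAGE_MASK (~(uint32_t)(PAGE_SIZE - 1))

    /* If a region we did NOT hit still overlaps the page containing the
     * address, the result must not be cached for the whole page: report
     * a 1-byte "page" so the TLB entry covers only this access. */
    static uint32_t narrow_page_size(uint32_t addr, uint32_t miss_base,
                                     uint32_t miss_limit, uint32_t page_size)
    {
        uint32_t page_start = addr & PAGE_MASK;
        uint32_t page_end = page_start + PAGE_SIZE - 1;

        if (miss_base <= page_end && miss_limit >= page_start) {
            return 1;
        }
        return page_size;
    }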
Backports commit 9d2b5a58f85be2d8e129c4b53d6708ecf8796e54 from qemu
'I' was being double-incremented; correctly within the inner loop
and incorrectly within the outer loop.
Backports commit 628fc75f3a3bb115de3b445c1a18547c44613cfe from qemu
For M-profile exception returns, the mmu index to use for exception
return unstacking is supposed to be that of wherever we are returning to:
* if returning to handler mode, privileged
* if returning to thread mode, privileged or unprivileged depending on
CONTROL.nPRIV for the destination security state
We were passing the wrong thing as the 'priv' argument to
arm_v7m_mmu_idx_for_secstate_and_priv(). The effect was that guests
which programmed the MPU to behave differently for privileged and
unprivileged code could get spurious MemManage Unstack exceptions.
Backports commit 2b83714d4ea659899069a4b94aa2dfadc847a013 from qemu
Use MAKE_64BIT_MASK instead of open-coding. Remove an odd
vector size check that is unlikely to be more profitable
than 3 64-bit integer stores. Correct the iteration for WORD
to avoid writing too much data.
Fixes RISU tests of PTRUE for VL 256.
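For reference, MAKE_64BIT_MASK(shift, length) expands to 'length' set bits starting at bit 'shift'; a standalone equivalent (mirroring the definition in QEMU's bitops header):

    #include <stdint.h>
    #include <stdio.h>

    /* 'length' ones starting at bit 'shift'; valid for 1 <= length <= 64. */
    #define MAKE_64BIT_MASK(shift, length) \
        (((~0ULL) >> (64 - (length))) << (shift))

    int main(void)
    {
        printf("%016llx\n", (unsigned long long)MAKE_64BIT_MASK(0, 32));
        printf("%016llx\n", (unsigned long long)MAKE_64BIT_MASK(8, 4));
        return 0;
    }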
Backports commit 973558a3f869e591d2406dd8226ec0c4e32a3c3e from qemu
These instructions must perform the sve_access_check, but
since they are implemented as NOPs there is no generated
code to elide when the access check fails.
Backports commit 2f95a3b09aebdcb5c9152a7ac434a5d57441fe82 from qemu
There is no need to re-set these 3 features already
implied by the call to aarch64_a15_initfn.
Backports commit 0b33968e7f4cf998f678b2d1a5be3d6f3f3513d8 from qemu
There is no need to re-set these 9 features already
implied by the call to aarch64_a57_initfn.
Backports commit 156a7065365578deb3d63c2b5b69a4b5999a8fcc from qemu
Leave ARM_CP_SVE, removing ARM_CP_FPU; the sve_access_check
produced by the flag already includes fp_access_check. If
we also check ARM_CP_FPU the double fp_access_check asserts.
Backports commit 11d7870b1b4d038d7beb827f3afa72e284701351 from qemu
We already check for the same condition within the normal integer
sdiv and sdiv64 helpers. Use a slightly different formation that
does not require deducing the expression type.
Backports commit 7e8fafbfd0537937ba8fb366a90ea6548cc31576 from qemu
Since kernel commit a86bd139f2 (arm64: arch_timer: Enable CNTVCT_EL0
trap..), released in kernel version v4.12, user-space has been able
to read these system registers. As we can't use QEMUTimer's in
linux-user mode we just directly call cpu_get_clock().
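A standalone sketch of the idea: derive the counter directly from a host nanosecond clock, with the 62.5 MHz generic-timer rate (16 ns per tick) assumed here:

    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    /* Stand-in for QEMU's cpu_get_clock(): a monotonic clock in ns. */
    static int64_t host_clock_ns(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (int64_t)ts.tv_sec * 1000000000LL + ts.tv_nsec;
    }

    /* CNTVCT_EL0 read for linux-user: one tick per 16 ns (assumed). */
    static uint64_t cntvct_read(void)
    {
        return (uint64_t)host_clock_ns() / 16;
    }

    int main(void)
    {
        printf("%llu\n", (unsigned long long)cntvct_read());
        return 0;
    }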
Backports commit 26c4a83bd4707797868174332a540f7d61288d15 from qemu
We've already added the helpers with an SVE patch, all that remains
is to wire up the aa64 and aa32 translators. Enable the feature
within -cpu max for CONFIG_USER_ONLY.
Backports commit 26c470a7bb4233454137de1062341ad48947f252 from qemu
Enhance the existing helpers to support SVE, which takes the
index from each 128-bit segment. The change has no effect
for AdvSIMD, since there is only one such segment.
Backports commit 18fc24057815bf3d956cfab892a2bc2344bd1dcb from qemu
For aa64 advsimd, we had been passing the pre-indexed vector.
However, sve applies the index to each 128-bit segment, so we
need to pass in the index separately.
For aa32 advsimd, the fp32 operation always has index 0, but
we failed to interpret the fp16 index correctly.
Backports commit 2cc99919a81a62589a4a6b0f365eabfead1db1a7 from qemu
Allow ARMv8M to handle small MPU and SAU region sizes, by making
get_phys_addr_pmsav8() set the page size to 1 if the MPU or
SAU region covers less than TARGET_PAGE_SIZE.
We choose to use a size of 1 because it makes no difference to
the core code, and avoids having to track both the base and
limit for SAU and MPU and then convert into an artificially
restricted "page size" that the core code will then ignore.
Since the core TCG code can't handle execution from small
MPU regions, we strip the exec permission from them so that
any execution attempts will cause an MPU exception, rather
than allowing it to end up with a cpu_abort() in
get_page_addr_code().
(The previous code's intention was to make any small page be
treated as having no permissions, but unfortunately errors
in the implementation meant that it didn't behave that way.
It's possible that some binaries using small regions were
accidentally working with our old behaviour and won't now.)
We also retain an existing bug, where we ignored the possibility
that the SAU region might not cover the entire page, in the
case of executable regions. This is necessary because some
currently-working guest code images rely on being able to
execute from addresses which are covered by a page-sized
MPU region but a smaller SAU region. We can remove this
workaround if we ever support execution from small regions.
Backports commit 720424359917887c926a33d248131fbff84c9c28 from qemu
We want to handle small MPU region sizes for ARMv7M. To do this,
make get_phys_addr_pmsav7() set the page size to the region
size if it is less than TARGET_PAGE_SIZE, rather than working
only in TARGET_PAGE_SIZE chunks.
Since the core TCG code can't handle execution from small
MPU regions, we strip the exec permission from them so that
any execution attempts will cause an MPU exception, rather
than allowing it to end up with a cpu_abort() in
get_page_addr_code().
(The previous code's intention was to make any small page be
treated as having no permissions, but unfortunately errors
in the implementation meant that it didn't behave that way.
It's possible that some binaries using small regions were
accidentally working with our old behaviour and won't now.)
Backports commit e5e40999b5e03567ef654546e3d448431643f8f3 from qemu
Unlike ARMv7-M, ARMv6-M and ARMv8-M Baseline only support naturally
aligned memory accesses for load/store instructions.
Backports commit 2aeba0d007d33efa12a6339bb140aa634e0d52eb from qemu
This feature is intended to distinguish ARMv8-M variants: Baseline and
Mainline. ARMv7-M compatibility requires the Main Extension. ARMv6-M
compatibility is provided by all ARMv8-M implementations.
Backports commit cc2ae7c9de14efd72c6205825eb7cd980ac09c11 from qemu
The arrays were made static, and the "if" was simplified because V7M
and V8M imply the V6 feature.
Backports commit 8297cb13e407db8a96cc7ed6b6a6c318a150759a from qemu
ARMv6-M supports 6 Thumb2 instructions. This patch checks for these
instructions and allows their execution.
Like Thumb2 cores, ARMv6-M always interprets the BL instruction as 32-bit.
This patch is required for future Cortex-M0 support.
Backports commit 14120108f87b3f9e1beacdf0a6096e464e62bb65 from qemu
Rearrange the arithmetic so that we are agnostic about the total size
of the vector and the size of the element. This will allow us to index
up to the 32nd byte and to handle 16-byte elements.
Backports commit 66f2dbd783d0b6172043e3679171421b2d0bac11 from qemu
Do the cast to uintptr_t within the helper, so that the compiler
can type check the pointer argument. We can also do some more
sanity checking of the index argument.
Backports commit 07ea28b41830f946de3841b0ac61a3413679feb9 from qemu
Depending on the host abi, float16, aka uint16_t, values are
passed and returned either zero-extended in the host register
or with garbage at the top of the host register.
The tcg code generator has so far been assuming garbage, as that
matches the x86 abi, but this is incorrect for other host abis.
Further, target/arm has so far been assuming zero-extended results,
so that it may store the 16-bit value into a 32-bit slot with the
high 16-bits already clear.
Rectify both problems by mapping "f16" in the helper definition
to uint32_t instead of (a typedef for) uint16_t. This forces
the host compiler to assume garbage in the upper 16 bits on input
and to zero-extend the result on output.
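A standalone illustration of why widening the declared type helps: with uint32_t in the prototype the callee must zero-extend its 16-bit result, so callers can store it straight into a 32-bit lane. The helper below is a trivial stand-in, not a real float16 operation:

    #include <stdint.h>
    #include <stdio.h>

    /* uint32_t parameters/return: the compiler assumes garbage above
     * bit 15 on input and must produce a zero-extended result. */
    static uint32_t helper_fp16_op(uint32_t a, uint32_t b)
    {
        uint16_t r = (uint16_t)(a ^ b);   /* pretend computation */
        return r;                         /* zero-extended to 32 bits */
    }

    int main(void)
    {
        uint32_t lane = helper_fp16_op(0xdead1234u, 0xbeef5678u);
        printf("0x%08x\n", lane);         /* upper 16 bits guaranteed clear */
        return 0;
    }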
Backports commit 6c2be133a7478e443c99757b833d0f265c48e0a6 from qemu
The FRECPX instructions should (like most other floating point operations)
honour the FPCR.FZ bit which specifies whether input denormals should
be flushed to zero (or FZ16 for the half-precision version).
We forgot to implement this, which doesn't affect the results (since
the calculation doesn't actually care about the mantissa bits) but did
mean we were failing to set the FPSR.IDC bit.
Backports commit 2cfbf36ec07f7cac1aabb3b86f1c95c8a55424ba from qemu
Excepting MOVPRFX, which isn't a reduction. Presumably it is
placed within the group because of its encoding.
Backports commit 047cec971d2791b206677b954227ea92ff7ee3db from qemu
These were the instructions that were stubbed out when
introducing the decode skeleton.
Backports commit 39eea56172e668cc4cca611ed9166779df54ac63 from qemu
Including only four as-yet-unimplemented instruction patterns
so that the whole thing compiles.
Backports commit 38388f7ee3adc04a7e7246c04352451c4f8d00fb from qemu
This is a preparation for the coming feature of creating dynamically an XML
description for the ARM sysregs.
Add "_S" suffix to the secure version of sysregs that have both S and NS views
Replace (S) and (NS) by _S and _NS for the register that are manually defined,
so all the registers follow the same convention.
Backports commit 9c513e786d85cc58b8ba56a482566f759e0835b6 from qemu
This is a preparation for the coming feature of creating dynamically an XML
description for the ARM sysregs.
A register that has ARM_CP_NO_GDB set will not be shown in the dynamic XML.
This bit is enabled automatically when creating CP_ANY wildcard aliases.
This bit could be enabled manually for any register we want to remove from the
dynamic XML description.
Backports commit 1f16378718fa87d63f70d0797f4546a88d8e3dd7 from qemu
The ARM ARM specifies FZ16 is suppressed for conversions. Rather than
pushing this logic into the softfloat code we can simply save the FZ
state and temporarily disable it for the softfloat call.
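A fragment-level sketch of the save/disable/restore pattern, assuming QEMU's softfloat accessors (get_flush_inputs_to_zero(), set_flush_inputs_to_zero(), float16_to_float32()); variable names are illustrative:

    /* FZ16 must not flush input denormals during the conversion, so
     * park the flag, convert, then put it back. */
    bool save = get_flush_inputs_to_zero(fpst);
    set_flush_inputs_to_zero(false, fpst);
    float32 r = float16_to_float32(a, !ahp_mode, fpst);
    set_flush_inputs_to_zero(save, fpst);
    return r;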
Backports commit 0acb9e7cb341cd767e39ec0875c8706eb2f1c359 from qemu
Instead of passing env and leaving it up to the helper to get the
right fpstatus we pass it explicitly. There was already a get_fpstatus
helper for neon for the 32 bit code. We also add a get_ahp_flag() for
passing the state of the alternative FP16 format flag. This leaves
scope for later tracking the AHP state in translation flags.
Backports commit 486624fcd3eaca6165ab8401d73bbae6c0fb81c1 from qemu
All the hard work is already done by vfp_expand_imm; we just need to
make sure we pick up the correct size.
Backports commit 6ba28ddb9be37bdb67e3e38007a53ccbdcd010df from qemu
In commit d81ce0ef2c4f105 we added an extra float_status field
fp_status_fp16 for Arm, but forgot to initialize it correctly
by setting it to float_tininess_before_rounding. This currently
will only cause problems for the new V8_FP16 feature, since the
float-to-float conversion code doesn't use it yet. The effect
would be that we failed to set the Underflow IEEE exception flag
in all the cases where we should.
Add the missing initialization.
Backports commit bcc531f0364796104df4443d17f99b5fb494eca2 from qemu
Use write_fp_dreg and clear_vec_high to zero the bits
that need zeroing for these cases.
Backports commit 9a9f1f59521f46e8ff4527d9a2b52f83577e2aa3 from qemu
The instruction "ucvtf v0.4h, v04h, #2", with input 0x8000u,
overflows the intermediate float16 to infinity before we have a
chance to scale the output. Use float64 as the intermediate type
so that no input argument (uint32_t in this case) can overflow
or round before scaling. Given the declared argument, the signed
int32_t function has the same problem.
When converting from float16 to integer, using u/int32_t instead
of u/int16_t means that the bounding is incorrect.
Backports commit 88808a022c06f98d81cd3f2d105a5734c5614839 from qemu
While we have some of the scalar paths for FCVT for fp16,
we failed to decode the fp16 version of these instructions.
Backports commit d0ba8e74acd299b092786ffc30b306638d395a9e from qemu
While we have some of the scalar paths for *CVF for fp16,
we failed to decode the fp16 version of these instructions.
Backports commit a6117fae4576edfe7a5a5b802a742c33112c0993 from qemu
This implements all of the v8.1-Atomics instructions except
for compare-and-swap, which is decoded elsewhere.
Backports commit 74608ea45434c9b07055b21885e093528c5ed98c from qemu
The insns in the ARMv8.1-Atomics are added to the existing
load/store exclusive and load/store reg opcode spaces.
Rearrange the top-level decoders for these to accommodate.
The Atomics insns themselves still generate Unallocated.
Backports commit 68412d2ecedbab5a43b0d346cddb27e00d724aff from qemu
While at it, use int for both num_insns and max_insns to make
sure we have same-type comparisons.
Backports commit b542683d77b4f56cef0221b267c341616d87bce9 from qemu
If the PC is in the last page of the address space, next_page_start
overflows to 0. Fix it.
Backports commit bfe7ad5be77a6a8925a7ab1628452c8942222102 from qemu
For v8M the instructions VLLDM and VLSTM support lazy saving
and restoring of the secure floating-point registers. Even
if the floating point extension is not implemented, these
instructions must act as NOPs in Secure state, so they can
be used as part of the secure-to-nonsecure call sequence.
Fixes: https://bugs.launchpad.net/qemu/+bug/1768295
Backports commit b1e5336a9899016c53d59eba53ebf6abcc21995c from qemu
Commit 3281af8114c6b8ead02f08b58e3c36895c1ea047 unintentionally added
a duplicate of id_tlbtr_reginfo where id_mpuir_reginfo was intended.
The effect was that for OMAP and StrongARM CPUs we would
incorrectly UNDEF writes to MPUIR rather than NOPing them.
Backports commit 100061121c1f69a672ce7bb3e9e3781f8018f9f6 from qemu
Path analysis shows that size == 3 && !is_q has been eliminated.
Fixes: Coverity CID1385853
Backports commit a8766e3172c1671cab297c1ef4566a3c5d094822 from qemu
The (size > 3 && !is_q) condition is identical to the preceding test
of bit 3 in immh; eliminate it. For the benefit of Coverity, assert
that size is within the bounds we expect.
Fixes: Coverity CID1385846
Fixes: Coverity CID1385849
Fixes: Coverity CID1385852
Fixes: Coverity CID1385857
Backports commit 8dae46970532afcf93470b00e83ca9921980efc3 from qemu
This is a bug fix to ensure 64-bit reads of these registers don't read
adjacent data.
Backports commit e4e91a217c17fff4045dd4b423cdcb471b3d6a0e from qemu
Because the design of the PMU requires that the counter values be
converted between their delta and guest-visible forms for mode
filtering, an additional hook which occurs before the EL is changed is
necessary.
Backports commit b5c53d1b3886387874f8c8582b205aeb3e4c3df6 from qemu
This eliminates the need for fetching it from el_change_hook_opaque, and
allows for supporting multiple el_change_hooks without having to hack
something together to find the registered opaque belonging to GICv3.
Backports commit d5a5e4c93dae0dc3feb402cf7ee78d846da1a7e1 from qemu
In commit 95695effe8caa552b8f2 we changed the v7M/v8M stack
pop code to use a new v7m_stack_read() function that checks
whether the read should fail due to an MPU or bus abort.
We missed one call though, the one which reads the signature
word for the callee-saved register part of the frame.
Correct the omission.
Backports commit 4818bad98c8212fbbb0525d10761b6b65279ab92 from qemu
Remove a stale TODO comment -- we have now made the arm_ldl_ptw()
and arm_ldq_ptw() functions propagate physical memory read errors
out to their callers.
Backports commit 145772707fe80395b87c244ccf5699a756f1946b from qemu
In icount mode, instructions that access io memory spaces in the middle
of the translation block invoke TB recompilation. After recompilation,
such instructions become last in the TB and are allowed to access io
memory spaces.
When the code includes an instruction like the i386 'xchg eax, 0xffffd080',
which accesses the APIC, QEMU goes into an infinite recompilation loop.
This instruction includes two memory accesses - one read and one write.
After the first access, APIC calls cpu_report_tpr_access, which restores
the CPU state to get the current eip. But cpu_restore_state_from_tb
resets the cpu->can_do_io flag which makes the second memory access invalid.
Therefore the second memory access causes a recompilation of the block.
Then these operations repeat again and again.
This patch moves resetting cpu->can_do_io flag from
cpu_restore_state_from_tb to cpu_loop_exit* functions.
It also adds a parameter for cpu_restore_state which controls restoring
icount. There is no need to restore icount when we only query CPU state
without breaking the TB. Restoring it in such cases leads to the
incorrect flow of the virtual time.
In most cases the new parameter is true (icount should be recalculated).
But there are two cases in i386 and openrisc when the CPU state is only
queried without the need to break the TB. This patch fixes both of
these cases.
Backports commit afd46fcad2dceffda35c0586f5723c127b6e09d8 from qemu
The parameters for tcg_gen_insn_start are target_ulong, which may be split
into two TCGArg parameters for storage in the opcode on 32-bit hosts.
Fixes the ARM target and its direct use of tcg_set_insn_param, which would
set the wrong argument in the 64-on-32 case.
Backports commit 9743cd5736263e90d312b2c33bd739ffe1eae70d from qemu
Currently our PMSAv7 and ARMv7M MPU implementation cannot handle
MPU region sizes smaller than our TARGET_PAGE_SIZE. However we
report that in a slightly confusing way:
DRSR[3]: No support for MPU (sub)region alignment of 9 bits. Minimum is 10
The problem is not the alignment of the region, but its size;
tweak the error message to say so:
DRSR[3]: No support for MPU (sub)region size of 512 bytes. Minimum is 1024.
Backports commit 8aec759b45fa6986c0b159cb27353d6abb0d5d73 from qemu
Make sure we are not treating architecturally Undefined instructions
as a SWP, by verifying the opcodes as per section A8.8.229 of ARMv7-A
specification. Bits [21:20] must be zero for this to be a SWP or SWPB.
We also choose to UNDEF for the architecturally UNPREDICTABLE case of
bits [11:8] not being zero.
Backports commit c4869ca630a57f4269bb932ec7f719cef5bc79b8 from qemu
For debug exceptions due to breakpoints or the BKPT instruction which
are taken to AArch32, the Fault Address Register is architecturally
UNKNOWN. We were using that as license to simply not set
env->exception.vaddress, but this isn't correct, because it will
expose to the guest whatever old value was in that field when
arm_cpu_do_interrupt_aarch32() writes it to the guest IFSR. That old
value might be a FAR for a previous guest EL2 or secure exception, in
which case we shouldn't show it to an EL1 or non-secure exception
handler. It might also be a non-deterministic value, which is bad
for record-and-replay.
Clear env->exception.vaddress before taking breakpoint debug
exceptions, to avoid this minor information leak.
Backports commit 548f514cf89dd9ab39c0cb4c063097bccf141fdd from qemu
Now that we have a helper function specifically for the BRK and
BKPT instructions, we can set the exception.fsr there rather
than in arm_cpu_do_interrupt_aarch32(). This allows us to
use our new arm_debug_exception_fsr() helper.
In particular this fixes a bug where we were hardcoding the
short-form IFSR value, which is wrong if the target exception
level has LPAE enabled.
Fixes: https://bugs.launchpad.net/qemu/+bug/1756927
Backports commit 62b94f31d0df75187bb00684fc29e8639eacc0c5 from qemu
When a debug exception is taken to AArch32, it appears as a Prefetch
Abort, and the Instruction Fault Status Register (IFSR) must be set.
The IFSR has two possible formats, depending on whether LPAE is in
use. Factor out the code in arm_debug_excp_handler() which picks
an FSR value into its own utility function, update it to use
arm_fi_to_lfsc() and arm_fi_to_sfsc() rather than hard-coded constants,
and use the correct condition to select long or short format.
In particular this fixes a bug where we could select the short
format because we're at EL0 and the EL1 translation regime is
not using LPAE, but then route the debug exception to EL2 because
of MDCR_EL2.TDE and hand EL2 the wrong format FSR.
Backports commit 81621d9ab8a0f07956e67850b15eebf6d6992eec from qemu
The MDCR_EL2.TDE bit allows the exception level targeted by debug
exceptions to be set to EL2 for code executing at EL0. We handle
this in the arm_debug_target_el() function, but this is only used for
hardware breakpoint and watchpoint exceptions, not for the exception
generated when the guest executes an AArch32 BKPT or AArch64 BRK
instruction. We don't have enough information for a translate-time
equivalent of arm_debug_target_el(), so instead make BKPT and BRK
call a special purpose helper which can do the routing, rather than
the generic exception_with_syndrome helper.
Backports commit c900a2e62dd6dde11c8f5249b638caad05bb15be from qemu
In the OE project, a 4.15 Linux kernel boot hang was observed under
single-CPU AArch64 QEMU. The kernel was spinning in TCG-generated
blocks, waiting for a vtimer interrupt to arrive, while the interrupt
stayed pending and unprocessed. This happened because when QEMU tried
to handle the vtimer interrupt the target had interrupts disabled; as
a result the flag indicating a TCG exit, cpu->icount_decr.u16.high,
was cleared, but arm_cpu_exec_interrupt did not call
arm_cpu_do_interrupt to process the interrupt. Later, when the target
re-enabled interrupts, it did so without exiting to the main loop, so
the code that waited for the result of the interrupt execution ran in
an infinite loop.
To solve the problem, instructions that operate on CPU system state
(i.e. enabling/disabling interrupts) and are marked as DISAS_UPDATE
should be treated as a DISAS_EXIT variant and forced to exit back to
the main loop, so QEMU has a chance to process pending CPU state
updates, including pending interrupts.
This change brings consistency with how DISAS_UPDATE is treated
in the AArch32 case.
Backports commit a75a52d62418dafe462be4fe30485501d1010bb9 from qemu
Add an Error argument to cpu_exec_init() to let users collect the
error. This is in preparation to change the CPU enumeration logic
in cpu_exec_init(). With the new enumeration logic, cpu_exec_init()
can fail if cpu_index values corresponding to max_cpus have already
been handed out.
Since all current callers of cpu_exec_init() are from instance_init,
use error_abort Error argument to abort in case of an error.
Backports commit 5a790cc4b942e651fec7edc597c19b637fad5a76 from qemu
cpu_init(cpu_model) was replaced by cpu_create(cpu_type), so
no users are left; remove it.
Backports commit 3f71e724e283233753f1b5b3d6a30948d3084636 from qemu
It will be used to provide the CPU-name-resolving class for
parsing the CPU model in both system and user emulation code.
Along with the change, add a target to the null-machine tests, so
that when the switch to CPU_RESOLVING_TYPE happens, it ensures
that the null-machine use case still works.
Backports commit 0dacec874fa3b3fd34b0d0670fa257efdcbbebd0 from qemu
Backports commits 2994fd96d986578a342f2342501b4ad30f6d0a85,
701e3c78ce45fa630ffc6826c4b9a4218954bc7f, and
d1853231c60d16af78cf4d1608d043614bfbac0b from qemu
This function needs to be converted to a QOM hook and virtualised for
multi-arch. This rename interferes, as cpu-qom will not have access
to the renaming, causing name divergence. This rename doesn't really
do anything anyway, so just delete it.
Backports commit 8642c1b81e0418df066a7960a7426d85a923a253 from qemu
Move vcpu's associated numa_node field out of generic CPUState
into inherited classes that actually care about cpu<->numa mapping,
i.e: ARMCPU, PowerPCCPU, X86CPU.
Backports relevant parts of commit 15f8b14228b856850df3fa5ba999ad96521f2208 from qemu
Now we have a working '-cpu max', the linux-user-only
'any' CPU is pretty much the same thing, so implement it
that way.
For the moment we don't add any of the extra feature bits
to the system-emulation "max", because we don't set the
ID register bits we would need in order to advertise those
features as present.
Backports commit a0032cc5427d0d396aa0a9383ad9980533448ea4 from qemu
Add support for "-cpu max" for ARM guests. This CPU type behaves
like "-cpu host" when KVM is enabled, and like a system CPU with
the maximum possible feature set otherwise. (Note that this means
it won't be migratable across versions, as we will likely add
features to it in future.)
Backports commit bab52d4bba3f22921a690a887b4bd0342f2754cd from qemu
The Cortex-A53 TRM specifies that bits 24 and 25 of the L2CTLR register
specify the number of cores in the processor, not the total number of
cores in the system. To report this correctly on machines with multiple
CPU clusters (ARM's big.LITTLE or Xilinx's ZynqMP) we need to allow
the machine to overwrite this value. To do this let's add an optional
property.
Backports commit f9a697112ee64180354f98309a5d6b691cc8699d from qemu
Convert all machines to use DEFINE_MACHINE() instead of QEMUMachine
automatically using a script.
Backports commit e264d29de28c5b0be3d063307ce9fb613b427cc3 from qemu
The integer size check was already outside of the opcode switch;
move the floating-point size check outside as well. Unify the
size vs index adjustment between fp and integer paths.
Backports commit 449f264b1749ac0e59c58bbc2eacdb3dc302c2bf from qemu
Add a Cortex-M33 definition. The M33 is an M profile CPU
which implements the ARM v8M architecture, including the
M profile Security Extension.
Backports commit c7b26382fee8b745c6e903c85281babf30c2cb7c from qemu
The Cortex-M33 allows the system to specify the reset value of the
secure Vector Table Offset Register (VTOR) by asserting config
signals. In particular, guest images for the MPS2 AN505 board rely
on the MPS2's initial VTOR being correct for that board.
Implement a QEMU property so board and SoC code can set the reset
value to the correct value.
Backports commit 38e2a77c9d6876e58f45cabb1dd9a6a60c22b39e from qemu
This includes FMOV, FABS, FNEG, FSQRT and FRINT[NPMZAXI]. We re-use
existing helpers to achieve this.
Backports commit c2c08713a6a5846bbe601d4d1b4f9708ba77efdc from qemu
This covers the encoding group:
Advanced SIMD scalar three same FP16
As all the helpers are already there it is simply a case of calling the
existing helpers in the scalar context.
Backports commit 7c93b7741b29b3ffda81a6e9525771b4409db99f from qemu
I only needed to do a little light re-factoring to support the
half-precision helpers.
Backports commit 5c36d89567cfd049a7c59ff219639f788225068f from qemu
Much like recpe, the ARM ARM has simplified the pseudocode for the
calculation, which is done with fixed-point 9-bit integer maths. So
while adding f16 we can also clean this up to be a little less heavy
on the floating point and just return the fractional part, leaving
the callers to do the final packing of the result.
Backports commit d719cbc7641991d16b891ffbbfc3a16a04e37b9a from qemu
Also removes a load of symbols that seem unnecessary from the header_gen script.
It looks like the ARM ARM has simplified the pseudocode for the
calculation, which is done with fixed-point 9-bit integer maths. So
while adding f16 we can also clean this up to be a little less heavy
on the floating point and just return the fractional part, leaving
the callers to do the final packing of the result.
Backports commit 5eb70735af1c0b607bf2671a53aff3710cc1672f from qemu
Neither of these operations alter the floating point status registers
so we can do a pure bitwise operation, either squashing any sign
bit (ABS) or inverting it (NEG).
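A standalone sketch: on the raw half-precision encoding, ABS clears the sign bit and NEG flips it, so no float_status is consulted and no exception flags can be set:

    #include <stdint.h>
    #include <stdio.h>

    static uint16_t f16_abs(uint16_t h) { return h & 0x7fff; }  /* clear sign */
    static uint16_t f16_neg(uint16_t h) { return h ^ 0x8000; }  /* flip sign */

    int main(void)
    {
        uint16_t minus_two = 0xc000;                 /* -2.0 in IEEE half */
        printf("abs: 0x%04x\n", f16_abs(minus_two)); /* 0x4000 == 2.0 */
        printf("neg: 0x%04x\n", f16_neg(minus_two)); /* 0x4000 == 2.0 */
        return 0;
    }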
Backports commit 15f8a233c8c023dbc77b6fe6cd7c79eac9bee263 from qemu
I re-use the existing handle_2misc_fcmp_zero handler and tweak it
slightly to deal with the half-precision case.
Backports commit 7d4dd1a73a023f75c893623710e43743501b318e from qemu
This adds the full range of half-precision floating point to integral
instructions.
Backports commit 6109aea2d954891027acba64a13f1f1c7463cfac from qemu
This actually covers two different sections of the encoding table:
Advanced SIMD scalar two-register miscellaneous FP16
Advanced SIMD two-register miscellaneous (FP16)
The difference between the two is covered by a combination of Q (bit
30) and S (bit 28). Notably the FRINTx instructions are only
available in the vector form.
This is just the decode skeleton which will be filled out by later
patches.
Backports commit 5d432be6fd6efe37833ac82623c3abd35117b421 from qemu
A bunch of the vectorised bitwise operations just operate on larger
chunks at a time. We can do the same for the new half-precision
operations by introducing some TWOHALFOP helpers which work on each
half of a pair of half-precision operations at once.
Hopefully all this hoop jumping will get simpler once we have
generically vectorised helpers here.
Backports commit 6089030c7322d8f96b54fb9904e53b0f464bb8fe from qemu
The helpers use the new re-factored muladd support in SoftFloat for
the float16 work.
Backports commit 5d265064cf30daaacce5a4ce9945fc573015fb5f from qemu