ARMCPRegInfo structs will default to .cp = 15 if they
are ARM_CP_STATE_BOTH, but not if they are ARM_CP_STATE_AA32
(because a coprocessor number of 0 is valid for AArch32).
We forgot to explicitly set .cp = 15 for the HMAIR1 and
HAMAIR1 regdefs, which meant they would UNDEF when the guest
tried to access them under cp15.
Backports commit b5ede85bfb7ba1a8f6086494c82f400b29969f65 from qemu
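For illustration, a minimal sketch of such regdefs with the coprocessor
number made explicit; the encodings shown are the architectural
HMAIR1/HAMAIR1 ones, and the remaining field values are illustrative:

    { .name = "HMAIR1", .state = ARM_CP_STATE_AA32, .cp = 15,
      .opc1 = 4, .crn = 10, .crm = 2, .opc2 = 1,
      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
    { .name = "HAMAIR1", .state = ARM_CP_STATE_AA32, .cp = 15,
      .opc1 = 4, .crn = 10, .crm = 3, .opc2 = 1,
      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },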
We implement the HAMAIR1 register as RAZ/WI; we had a typo in the
regdef, though, and were incorrectly naming it HMAIR1 (which is
a different register which we also implement as RAZ/WI).
Backports commit 55b53c718b2f684793eeefcf1c1a548ee97e23aa from qemu
If an instruction is conditional (like CBZ) and it is executed
conditionally (using the ITx instruction), a jump to an undefined
label is generated, and QEMU crashes.
A CBZ inside an IT block is UNPREDICTABLE, but we should not crash.
Honouring the condition code is allowed by the spec in this
case (constrained unpredictable, ARMv8, section K1.1.7), and matches
what we do for other "UNPREDICTABLE inside an IT block" instructions.
Fix the 'skip on condition' code to create a new label only if it
does not already exist. Previously multiple labels were created, but
only the last one of them was set.
Backports commit c2d9644e6d517170bf6520f633628259a8460d48 from qemu
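A minimal sketch of the "create the label only if needed" logic, reusing
the DisasContext fields from QEMU's ARM translate.c (the helper name is
illustrative):

    static void arm_gen_condlabel(DisasContext *s)
    {
        /* Allocate the condition-skip label only once, so a conditional
         * insn inside an IT block reuses the label set up for the IT
         * condition instead of creating a second, never-set label. */
        if (!s->condjmp) {
            s->condlabel = gen_new_label();
            s->condjmp = 1;
        }
    }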
Enabling TOPOEXT is always allowed, but it can't be enabled
blindly by "-cpu host" because it may make guests crash if the
rest of the cache topology information isn't provided or isn't
consistent.
This addresses the bug reported at:
https://bugzilla.redhat.com/show_bug.cgi?id=1613277
Backports commit 7210a02c58572b2686a3a8d610c6628f87864aed from qemu
New CPU models mostly inherit features from the ancestor Skylake, while
adding new features: UMIP, new instructions (PCONFIG (server only),
WBNOINVD, AVX512_VBMI2, GFNI, AVX512_VNNI, VPCLMULQDQ, VAES,
AVX512_BITALG), Intel PT, and 5-level paging (server only), as well as
IA32_PRED_CMD and SSBD support for speculative-execution side-channel
mitigations.
Note:
For 5-level paging, the guest physical address width can be configured
with the "phys-bits" parameter. Unless explicitly specified, we still use
its default value, even for the Icelake-Server CPU model.
At present, hold off on exposing IA32_ARCH_CAPABILITIES to the guest,
because 1) this MSR actually presents more than one 'feature', and
maintainers are considering expanding the current presentation of
features from CPUID bits only to MSR bits as well; 2) a reasonable
default value for MSR_IA32_ARCH_CAPABILITIES needs to be settled first.
These two items go beyond the Icelake CPU models themselves but are
fundamental, so split that work out and do it later.
https://lists.gnu.org/archive/html/qemu-devel/2018-07/msg00774.html
https://lists.gnu.org/archive/html/qemu-devel/2018-07/msg00796.html
Backports commit 8a11c62da9146dd89aee98947e6bd831e65a970d from qemu
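An illustrative invocation for the phys-bits note above (the value 48 is
only an example, not a recommendation):

    qemu-system-x86_64 -machine q35 -cpu Icelake-Server,phys-bits=48 ...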
WBNOINVD: Write back and do not invalidate cache, enumerated by
CPUID.(EAX=80000008H, ECX=0):EBX[bit 9].
Backports commit 59a80a19ca31a6fff9fdbb6b4cf55a5a0767c3bc from qemu
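The corresponding feature bit, written out as a sketch (the macro name
follows QEMU's CPUID naming convention and is an assumption here):

    #define CPUID_8000_0008_EBX_WBNOINVD  (1U << 9) /* EBX[bit 9] of leaf 80000008H */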
Support for the IA32_PRED_CMD MSR is already enumerated by the same CPUID
bit as SPEC_CTRL.
At present, mark CPUID_7_0_EDX_ARCH_CAPABILITIES unmigratable, per Paolo's
comment.
Backports commit 3fc7c73139d2d38ae80c3b0bc963b1ac1555924c from qemu
The IA32_PRED_CMD MSR gives software a way to issue commands that affect
the state of indirect branch predictors. It is enumerated by
CPUID.(EAX=07H, ECX=0):EDX[26].
The IA32_ARCH_CAPABILITIES MSR enumerates the RDCL_NO and IBRS_ALL
architectural features. It is enumerated by CPUID.(EAX=07H, ECX=0):EDX[29].
https://software.intel.com/sites/default/files/managed/c5/63/336996-Speculative-Execution-Side-Channel-Mitigations.pdf
Backports commit 8c80c99fcceabd0708a5a83f08577e778c9419f5 from qemu
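As a sketch, the two CPUID bits quoted above (macro names follow QEMU's
convention and should be treated as assumptions):

    #define CPUID_7_0_EDX_SPEC_CTRL          (1U << 26) /* also enumerates IA32_PRED_CMD */
    #define CPUID_7_0_EDX_ARCH_CAPABILITIES  (1U << 29) /* RDCL_NO, IBRS_ALL */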
MFHC0 and MTHC0 used to handle the EntryLo0 and EntryLo1 registers only,
so placing the ELPA flag checks before the switch statement was
technically correct. However, now that more registers are handled, these
checks should be moved so that they apply only to the EntryLo0 and
EntryLo1 cases.
Backports commit 59488dda1f16c0259bc2610d8d71686ef436c649 from qemu
Update CP0 registers Config0, Config1, Config2, Config3,
Config4, and Config5 bit definitions.
Some of these bits will be utilized by upcoming nanoMIPS changes.
Backports commit 0413d7a55a8161ebd33541ba1df4285bf180c583 from qemu
Fix two instances of shadow variables. This removes the last shadow
variables from the entire file translate.c.
Backports commit e1555d7ddf2c86fb92165e47eb092f1f5fa9e8bd from qemu
Mark switch fallthroughs with comments in the cases where the fallthrough
is intentional.
The comments "/* fall through */" are interpreted by compilers and
other tools, which will then not issue warnings in such cases. For gcc,
the warning is turned on by -Wimplicit-fallthrough. With this patch,
there will be no such warnings in the target/mips directory. If such a
warning appears in the future, it should be checked whether the
fallthrough is intentional and, if so, marked with a comment similar to
those from this patch.
The comment must be placed just before the next "case", otherwise gcc
won't understand it.
Backports commit 146dd620db815558938433eb9f57a571d424d2c6 from qemu
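A minimal illustration of the required comment placement (opcode and
helper names are hypothetical):

    switch (opc) {
    case OPC_FOO:
        gen_foo(ctx);
        /* fall through */
    case OPC_BAR:
        gen_bar(ctx);
        break;
    }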
Remove "range style" case statements to make code analysis easier.
This patch handles cases when the values in the range in question
were not properly defined.
Backports commit c38a1d52233c85976eeed99c9015e881de8cd68e from qemu
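For illustration, this refactoring replaces GCC's case-range extension
with explicit labels (names hypothetical):

    /* before (GCC extension): */
    case OPC_FOO_0 ... OPC_FOO_3:
        gen_foo(ctx);
        break;

    /* after: */
    case OPC_FOO_0:
    case OPC_FOO_1:
    case OPC_FOO_2:
    case OPC_FOO_3:
        gen_foo(ctx);
        break;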
Remove "range style" case statements to make code analysis easier.
This is needed also for some upcoming nanoMIPS-related refactorings.
Backports commit c2e19f3c2b1a1bb5f4fc3c55ee8cfa28dde9b810 from qemu
For 0x1.0000000000003p+0 + 0x1.ffffffep+14 = 0x1.0001fffp+15
we dropped the sticky bit and so failed to raise inexact.
Backports commit 64d450a0eaad5f02f9d6bba1dd451446297bb4dc from qemu
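The case above can be reproduced independently of QEMU on a host with a
conforming FPU; a small standalone check (build with e.g.
cc -std=c11 test.c -lm):

    #include <fenv.h>
    #include <stdio.h>

    int main(void)
    {
        /* volatile keeps the compiler from folding the sum at build time */
        volatile double a = 0x1.0000000000003p+0;
        volatile double b = 0x1.ffffffep+14;

        feclearexcept(FE_ALL_EXCEPT);
        volatile double r = a + b;  /* the low bits of 'a' only reach the sticky bit */
        printf("%a inexact=%d\n", r, fetestexcept(FE_INEXACT) != 0);
        return 0;
    }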
These insns require u=1; we failed to include that in the switch
cases. This probably happened during one of the rebases just
before the final commit.
Fixes: d17b7cdcf4e
Backports commit b8a4a96db3639e17ab5e5cdc14fca4b19fbf5b3b from qemu
We were using the wrong flush-to-zero bit for the non-half input.
Fixes: 46d33d1e3c9
Backports commit e4ab5124a5c2e2291006b24bdc21c3dd8d087ff4 from qemu
When FZ is set, input_denormal exceptions are recognized, but this does
not happen with FZ16. The softfloat code has no way to distinguish
these bits and will raise such exceptions into fp_status_f16.flags,
so ignore them when computing the accumulated flags.
Backports commit 19062c169e5bcdda3d60df9161228e107bf0f96e from qemu
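A sketch of the accumulation described above, simplified from the FPSCR
read path (field and helper names are taken from QEMU's VFP code but
should be treated as assumptions):

    /* half-precision status: FZ16 must not surface input_denormal,
     * so mask it before folding the fp16 flags into FPSCR */
    int i = get_float_exception_flags(&env->vfp.fp_status_f16);
    i &= ~float_flag_input_denormal;
    fpscr |= vfp_exceptbits_from_host(i);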
When support for FZ16 was added, we failed to include the bit
within FPCR_MASK, which means that it could never be set.
Continue to zero FZ16 when ARMv8.2-FP16 is not enabled.
Fixes: d81ce0ef2c4
Backports commit 0b62159be33d45d00dfa34a317c6d3da30ffb480 from qemu
Define a "cortex-m0" ARMv6-M CPU model.
Most of the register reset values set by other CPU models are not
relevant for the cut-down ARMv6-M architecture.
Backports commit 191776b96a381b5d2b8d3f90c1c02b3e4779e5f7 from qemu
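A sketch of what such a cut-down model definition looks like (the MIDR
value and exact feature set are illustrative):

    static void cortex_m0_initfn(Object *obj)
    {
        ARMCPU *cpu = ARM_CPU(obj);

        set_feature(&cpu->env, ARM_FEATURE_V6);
        set_feature(&cpu->env, ARM_FEATURE_M);

        cpu->midr = 0x410cc200;  /* illustrative */
    }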
This allows the default (and maximum) vector length to be set
from the command line, which is extraordinarily helpful in
debugging problems that depend on vector length, without having to
bake knowledge of PR_SET_SVE_VL into every guest binary.
Backports relevant parts of commit
adf92eab90e3f5f34c285da6d14d48952b7a8e72 from qemu
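Assuming the upstream property is sve-max-vq on -cpu max (an assumption
here), a qemu linux-user invocation would look like:

    qemu-aarch64 -cpu max,sve-max-vq=4 ./sve-test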
Also fold the FPCR/FPSR state onto the same line as PSTATE,
and mention but do not dump disabled FPU state.
Backports commit 2bf5f3f91bb4e3faa2a19aec042138a938afbf6a from qemu
The scaling should be based solely on the memory operation size; the
number of registers being loaded does not come into the initial
computation.
Backports commit 50ef1cbf31caad21019ae6fa8036ed6f29244ba5 from qemu
The immediate should be scaled by the size of the memory reference,
not the size of the elements into which it is loaded.
Backports commit d0e372b0298f897993f831dbff7ad4f1c70f138e from qemu
The expression (int) imm + (uint32_t) len_align turns into uint32_t
and thus with negative imm produces a memory operation at the wrong
offset. None of the numbers involved are particularly large, so
change everything to use int.
Backports commit 19f2acc915a0f8f443a959844540a6f09133cc96 from qemu
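The promotion rule at work can be seen in isolation (standalone example,
assuming a 32-bit int):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int imm = -16;
        uint32_t len_align = 8;

        /* int + uint32_t is evaluated as uint32_t: the sum wraps to a
         * huge unsigned value instead of the intended -8 */
        printf("mixed: %lld\n", (long long)(imm + len_align));
        /* with everything as int, the arithmetic behaves as intended */
        printf("int:   %d\n", imm + (int)len_align);
        return 0;
    }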
Fix the following issues:
common.py:873:13: E129 visually indented line with same indent as next logical line
common.py:1766:5: E741 ambiguous variable name 'l'
common.py:1784:1: E305 expected 2 blank lines after class or function definition, found 1
common.py:1833:1: E305 expected 2 blank lines after class or function definition, found 1
common.py:1843:1: E305 expected 2 blank lines after class or function definition, found 1
visit.py:181:18: E127 continuation line over-indented for visual indent
Backports commit b736e25a1820c63f7d69baa03e624cef80c4de90 from qemu
The pseudocode for this operation is an increment + compare loop,
so comparing <= the maximum integer produces an all-true predicate.
Rather than bound in both the inline code and the helper, pass the
helper the number of predicate bits to set instead of the number
of predicate elements to set.
Backports commit bbd0968c458d48e34a08b8694fa3309a9fe1c9e7 from qemu
The normal vector element is sign-extended before
comparing with the wide vector element.
Backports commit df4e001093988544d09887122ae824f18ba55c68 from qemu
Tailchaining is an optimization in handling of exception return
for M-profile cores: if we are about to pop the exception stack
for an exception return, but there is a pending exception which
is higher priority than the priority we are returning to, then
instead of unstacking and then immediately taking the exception
and stacking registers again, we can chain to the pending
exception without unstacking and stacking.
For v6M and v7M it is IMPDEF whether tailchaining happens for pending
exceptions; for v8M this is architecturally required. Implement it
in QEMU for all M-profile cores, since in practice v6M and v7M
hardware implementations generally do have it.
(We were already doing tailchaining for derived exceptions which
happened during exception return, like the validity checks and
stack access failures; these have always been required to be
tailchained for all versions of the architecture.)
Backports commit 5f62d3b9e67bfc3deb970e3c7fb7df7e57d46fc3 from qemu
On exception return for M-profile, we must restore the CONTROL.SPSEL
bit from the EXCRET value before we do any kind of tailchaining,
including for the derived exceptions on integrity check failures.
Otherwise we will give the guest an incorrect EXCRET.SPSEL value on
exception entry for the tailchained exception.
Backports commit 89b1fec193b81b6ad0bd2975f2fa179980cc722e from qemu
In do_v7m_exception_exit(), we use the exc_secure variable to track
whether the exception we're returning from is secure or non-secure.
Unfortunately the statement initializing this was accidentally
inside an "if (env->v7m.exception != ARMV7M_EXCP_NMI)" conditional,
which meant that we were using the wrong value for NMI handlers.
Move the initialization out to the right place.
Backports commit b8109608bc6f3337298d44ac4369bf0bc8c3a1e4 from qemu
One of the required effects of setting HCR_EL2.TGE is that when
SCR_EL3.NS is 1 then SCTLR_EL1.M must behave as if it is zero for
all purposes except direct reads. That is, it effectively disables
the MMU for the NS EL0/EL1 translation regime.
Backports commit 3d0e3080d8b7abcddc038d18e8401861c369c4c1 from qemu
The IMO, FMO and AMO bits in HCR_EL2 are defined to "behave as
1 for all purposes other than direct reads" if HCR_EL2.TGE
is set and HCR_EL2.E2H is 0, and to "behave as 0 for all
purposes other than direct reads" if HCR_EL2.TGE is set
and HCR_EL2.E2H is 1.
To avoid having to check E2H and TGE everywhere where we test IMO and
FMO, provide accessors arm_hcr_el2_imo(), arm_hcr_el2_fmo() and
arm_hcr_el2_amo(). We don't implement ARMv8.1-VHE yet, so the E2H
case will never be true, but we include the logic to save effort when
we eventually do get to that.
(Note that in several of these callsites the change doesn't
actually make a difference as either the callsite is handling
TGE specially anyway, or the CPU can't get into that situation
with TGE set; we change everywhere for consistency.)
Backports commit ac656b166b57332ee397e9781810c956f4f5fde5 from qemu
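A sketch of one of the accessors, following the behaviour described
above (simplified; treat it as an illustration rather than the exact
upstream code):

    static inline bool arm_hcr_el2_imo(CPUARMState *env)
    {
        switch (env->cp15.hcr_el2 & (HCR_TGE | HCR_E2H)) {
        case HCR_TGE:
            /* TGE set, E2H clear: behaves as 1 except for direct reads */
            return true;
        case HCR_TGE | HCR_E2H:
            /* TGE and E2H set: behaves as 0 except for direct reads */
            return false;
        default:
            return env->cp15.hcr_el2 & HCR_IMO;
        }
    }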
When we raise a synchronous exception, if HCR_EL2.TGE is set then
exceptions targeting NS EL1 must be redirected to EL2. Implement
this in raise_exception() -- all synchronous exceptions go through
this function.
(Asynchronous exceptions go via arm_cpu_exec_interrupt(), which
already honours HCR_EL2.TGE when it determines the target EL
in arm_phys_excp_target_el().)
Backports commit 7556edfb4d7bf0583c852c8cfc49ef494c41dd8a from qemu
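A sketch of the redirect in raise_exception(), simplified (helper and
field names are assumptions):

    if ((env->cp15.hcr_el2 & HCR_TGE) &&
        target_el == 1 && !arm_is_secure(env)) {
        /* Redirect exceptions targeting NS EL1 to EL2 */
        target_el = 2;
    }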
Some debug registers can be trapped via MDCR_EL2 bits TDRA, TDOSA,
and TDA, which we implement in the functions access_tdra(),
access_tdosa() and access_tda(). If MDCR_EL2.TDE or HCR_EL2.TGE
are 1, the TDRA, TDOSA and TDA bits should behave as if they were 1.
Implement this by having the access functions check MDCR_EL2.TDE
and HCR_EL2.TGE.
Backports commit 30ac6339dca3fe0d05a611f12eedd5af20af585a from qemu
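A sketch of the check in one of the access functions (simplified from
access_tda(); the MDCR_* and HCR_* names are assumptions):

    if (arm_current_el(env) < 2 && !arm_is_secure_below_el3(env) &&
        ((env->cp15.mdcr_el2 & (MDCR_TDA | MDCR_TDE)) ||
         (env->cp15.hcr_el2 & HCR_TGE))) {
        return CP_ACCESS_TRAP_EL2;
    }
    return CP_ACCESS_OK;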
If the "trap general exceptions" bit HCR_EL2.TGE is set, we
must mask all virtual interrupts (as per DDI0487C.a D1.14.3).
Implement this in arm_excp_unmasked().
Backports commit 2ccf0fef632f3d54b2cc9ea08f1e6904ff1f8df4 from qemu
Forbid stack alignment change. (CCR)
Reserve FAULTMASK, BASEPRI registers.
Report any fault as a HardFault. Disable MemManage, BusFault and
UsageFault, so they are always escalated to HardFault. (SHCSR)
Backports commit 22ab3460017cfcfb6b50f05838ad142e08becce5 from qemu