Bits 6, 7 and 8 of MSR_IA32_ARCH_CAPABILITIES were recently disclosed
in connection with some security issues. Add definitions for them so
they can be used by named CPU models.
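A minimal sketch of the corresponding definitions, assuming QEMU's usual
MSR_ARCH_CAP_* naming (the macro names are inferred from the bit layout
of IA32_ARCH_CAPABILITIES):

    #define MSR_ARCH_CAP_PSCHANGE_MC_NO  (1U << 6)
    #define MSR_ARCH_CAP_TSX_CTRL_MSR    (1U << 7)
    #define MSR_ARCH_CAP_TAA_NO          (1U << 8)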
Backports commit 6c997b4adb300788d61d72e2b8bc67c03a584956 from qemu
Before we introduce blocking semihosting calls we need to ensure we
can restart the system on a semihosting exception. To be able to do
this the EXCP_SEMIHOST operation should be idempotent until it finally
completes. Practically this means ensuring we only update the PC
after the semihosting call has completed.
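Roughly, the exception path ends up looking like this (AArch32 shown;
the function and register handling are illustrative), with the PC only
advanced once do_arm_semihosting() has returned:

    /* Sketch: handle a semihosting exception. */
    static void handle_semihosting(CPUARMState *env)
    {
        /* Only advance the PC once the call has completed, so the
         * EXCP_SEMIHOST operation stays restartable until then.
         */
        env->regs[0] = do_arm_semihosting(env);
        env->regs[15] += env->thumb ? 2 : 4;
    }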
Backports commit 4ff5ef9e911c670ca10cdd36dd27c5395ec2c753 from qemu
All semihosting exceptions are dealt with earlier in the common code
so we should never get here.
Backports commit b906acbb3aceed5b1eca30d9d365d5bd7431400b from qemu
Cooper Lake is Intel's successor to Cascade Lake. The new
CPU model inherits features from Cascadelake-Server, while
adding one new platform-associated feature: AVX512_BF16. Meanwhile,
add STIBP for speculative execution mitigation.
Backports commit 22a866b6166db5caa4abaa6e656c2a431fa60726 from qemu
The STIBP feature was already added through commit 0e89165829.
Add a macro for it to allow CPU models to report it when the host
supports it.
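A sketch of the macro, assuming STIBP's documented position at
CPUID.(EAX=7,ECX=0):EDX bit 27:

    #define CPUID_7_0_EDX_STIBP  (1U << 27)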
Backports commit 5af514d0cb314f43bc53f2aefb437f6451d64d0c from qemu
Define MSR_ARCH_CAP_MDS_NO in the IA32_ARCH_CAPABILITIES MSR to allow
CPU models to report the feature when the host supports it.
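MDS_NO sits at bit 5 of IA32_ARCH_CAPABILITIES, so the definition is
roughly:

    #define MSR_ARCH_CAP_MDS_NO  (1U << 5)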
Backports commit 77b168d221191156c47fcd8d1c47329dfdb9439e from qemu
A write to the SCR can change the effective EL by dropping the system
from secure to non-secure mode. However if we use a cached current_el
from before the change we'll rebuild the flags incorrectly. To fix
this we introduce the ARM_CP_NEWEL CP flag to indicate that the new EL
should be used when recomputing the flags.
Backports part of commit f80741d107673f162e3b097fc76a1590036cc9d1 from qemu
ARMv8.2 introduced support for Data Cache Clean instructions
to PoP (point-of-persistence) - DC CVAP - and to PoDP
(point-of-deep-persistence) - DC CVADP. Both specify conceptual points
in a memory system where all writes that reach them are considered
persistent.
The support provided considers both to be actually the same, so there
is no distinction between the two. If neither is available (there is no
backing store for the given memory), both will result in a Data Cache
Clean up to the point of coherency. Otherwise a sync for the specified
range shall be performed.
Backports commit 0d57b49992200a926c4436eead97ecfc8cc710be from qemu
This bit configures the endianness of PCI MMIO devices. It is used by
the Solaris and OpenBSD sunhme drivers.
Tested working on OpenBSD.
Unfortunately Solaris 10 had an unrelated keyboard issue blocking
testing... another inch towards Solaris 10 on SPARC64 =)
Backports commit ccdb4c5535f41ee4da2ef158f58fca0327e50dab from qemu
Append MemTxAttrs to interfaces so we can pass along the upcoming
Invert Endian TTE bit on SPARC64.
Backports commit 9bed46e67e2ee54bc596ba58063ee71a5ca40923 from qemu
The temporarily no-op size_memop was introduced to aid the conversion
of the memory_region_dispatch_{read|write} operand "unsigned size" into
"MemOp op".
Now that size_memop is implemented, use the hard-coded sizes again but
expressed as MO_{8|16|32|64}. This is more expressive and avoids
size_memop calls.
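Since the access size is always a power of two, size_memop is
essentially a log2 conversion; a minimal sketch (the debug assertion is
illustrative):

    static inline MemOp size_memop(unsigned size)
    {
    #ifdef CONFIG_DEBUG_TCG
        /* Power of 2 up to 8.  */
        assert((size & (size - 1)) == 0 && size >= 1 && size <= 8);
    #endif
        return ctz32(size);
    }

With this in place a caller that knows it is doing a 4-byte access can
simply pass MO_32 instead of calling size_memop(4).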
Backports commit 4574664677116dedb29b12150137f3888374a857 from qemu
The memory_region_dispatch_{read|write} operand "unsigned size" is
being converted into a "MemOp op".
Convert interfaces by using no-op size_memop.
After all interfaces are converted, size_memop will be implemented
and the memory_region_dispatch_{read|write} operand "unsigned size"
will be converted into a "MemOp op".
As size_memop is a no-op, this patch does not change any behaviour.
Backports commit e501824b3f3b3650e7cb8a509064cac01bc27c82 from qemu
This change ensures that the FPU can be accessed in Non-Secure mode
when the CPU core is reset using the arm_set_cpu_on() function call.
The NSACR.{CP11,CP10} bits define the exception level required to
access the FPU in Non-Secure mode. Without these bits set, the CPU
will give an undefined exception trap on the first FPU access for the
secondary cores under Linux.
This is necessary because in this power-control codepath QEMU
is effectively emulating a bit of EL3 firmware, and has to set
the CPU up as the EL3 firmware would.
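In practice this amounts to the power-on path setting NSACR.{CP11,CP10}
on the target CPU, something along these lines (the exact location and
variable names are assumptions):

    /* The core starts in non-secure mode, so let the Non-Secure world
     * access the FPU: set NSACR.CP11 and NSACR.CP10.
     */
    target_cpu->env.cp15.nsacr |= 3 << 10;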
Fixes: fc1120a7f5
Backports commit 0c7f8c43daf6556078e51de98aa13f069e505985 from qemu
QEMU lacks the minimum Jazelle implementation that is required
by the architecture (everything is RAZ or RAZ/WI). Add it
together with the HCR_EL2.TID0 trapping that goes with it.
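The minimal implementation boils down to registering constant-zero
(RAZ/WI) registers; a sketch of the shape (the encoding and access
function shown are illustrative):

    static const ARMCPRegInfo jazelle_regs[] = {
        { .name = "JIDR",
          .cp = 14, .crn = 0, .crm = 0, .opc1 = 7, .opc2 = 0,
          .access = PL1_R, .accessfn = access_jazelle,
          .type = ARM_CP_CONST, .resetvalue = 0 },
        /* JOSCR and JMCR follow the same RAZ/WI pattern */
        REGINFO_SENTINEL
    };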
Backports commit f96f3d5f09973ef40f164cf2d5fd98ce5498b82a from qemu
HSTR_EL2 offers a way to trap ranges of CP15 system register
accesses to EL2, and it looks like this register is completely
ignored by QEMU.
To avoid adding extra .accessfn filters all over the place (which
would have a direct performance impact), let's add a new TB flag that
gets set whenever HSTR_EL2 is non-zero and QEMU is translating a
context where this trap has a chance to apply, and only generate the
extra access check if the hypervisor is actively using this feature.
Tested with a hand-crafted KVM guest accessing CBAR.
Backports commit 5bb0a20b74ad17dee5dae38e3b8b70b383ee7c2d from qemu
HCR_EL2.TID3 requires that AArch32 reads of MVFR[012] are trapped to
EL2, and HCR_EL2.TID0 does the same for reads of FPSID.
In order to handle this, introduce a new TCG helper function that
checks for these control bits before executing the VMRS instruction.
Tested with a hacked-up version of KVM/arm64 that sets the control
bits for 32bit guests.
Backports commit 9ca1d776cb49c09b09579d9edd0447542970c834 from qemu
HCR_EL2.TID1 mandates that access from EL1 to REVIDR_EL1, AIDR_EL1
(and their 32bit equivalents) as well as TCMTR, TLBTR are trapped
to EL2. QEMU ignores it, making it harder for a hypervisor to
virtualize the HW (though to be fair, no known hypervisor actually
cares).
Do the right thing by trapping to EL2 if HCR_EL2.TID1 is set.
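The shape of the fix is an access function attached to the affected
registers that redirects EL1 accesses to EL2 when the trap bit is set;
a minimal sketch:

    static CPAccessResult access_aa64_tid1(CPUARMState *env,
                                           const ARMCPRegInfo *ri,
                                           bool isread)
    {
        if (arm_current_el(env) == 1 &&
            (arm_hcr_el2_eff(env) & HCR_TID1)) {
            return CP_ACCESS_TRAP_EL2;
        }
        return CP_ACCESS_OK;
    }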
Backports commit 93fbc983b29a2eb84e2f6065929caf14f99c3681 from qemu
HCR_EL2.TID2 mandates that access from EL1 to CTR_EL0, CCSIDR_EL1,
CCSIDR2_EL1, CLIDR_EL1, CSSELR_EL1 are trapped to EL2, and QEMU
completely ignores it, making it impossible for hypervisors to
virtualize the cache hierarchy.
Do the right thing by trapping to EL2 if HCR_EL2.TID2 is set.
Backports commit 630fcd4d2ba37050329e0adafdc552d656ebe2f3 from qemu
This is derived from the cortex-m4 description, adding DP support and
FPv5 instructions, with the corresponding flags in isar and mvfr2.
Checked that it could successfully execute
vrinta.f32 s15, s15
while cortex-m4 emulation rejects it with "illegal instruction".
Backports commit cf7beda5072e106ddce875c1996446540c5fe239 from qemu
HCR_EL2.TID3 mandates that access from EL1 to a long list of id
registers traps to EL2, and QEMU has so far ignored this requirement.
This breaks (among other things) KVM guests that have PtrAuth enabled,
while the hypervisor doesn't want to expose the feature to its guest.
To achieve this, KVM traps the ID registers (ID_AA64ISAR1_EL1 in this
case), and masks out the unsupported feature.
QEMU not honoring the trap request means that the guest observes
that the feature is present in the HW, starts using it, and dies
a horrible death when KVM injects an UNDEF, because the feature
*really* isn't supported.
Do the right thing by trapping to EL2 if HCR_EL2.TID3 is set.
Note that this change does not include trapping of the MVFR
registers from AArch32 (they are accessed via the VMRS
instruction and need to be handled in a different way).
Backports commit 6a4ef4e5d1084ce41fafa7d470a644b0fd3d9317 from qemu
The ARMv8 ARM states that when executing at EL2, EL3 or Secure EL1,
ISR_EL1 shows the pending status of the physical IRQ, FIQ, or
SError interrupts.
Unfortunately, QEMU's implementation only considers the HCR_EL2
bits, and ignores the current exception level. This means a hypervisor
trying to look at its own interrupt state actually sees the guest
state, which is unexpected and breaks KVM as of Linux 5.3.
Instead, check for the running EL and return the physical bits
if not running in a virtualized context.
Backports commit 7cf95aed53c8770a338617ef40d5f37d2c197853 from qemu
According to the PushStack() pseudocode in the ARMv7-M reference
manual, bit 4 of the LR should be set to NOT(CONTROL.FPCA) when
an FPU is present. The current implementation does this for
ARMv8, but not for ARMv7. This patch makes the existing
logic applicable to both code paths.
Backports commit f900b1e5b087a02199fbb6de7038828008e9e419 from qemu
Simply moving the non-stub helper_v7m_mrs/msr outside of
!CONFIG_USER_ONLY is not an option, because of all of the
other system-mode helpers that are called.
But we can split out a few subroutines to handle the few
EL0 accessible registers without duplicating code.
Backports commit 04c9c81b8fa2ee33f59a26265700fae6fc646062 from qemu
There was too much cut and paste between ldrexd and strexd,
since ldrexd does prohibit using the same register for both outputs.
Fixes: af288228995
Backports commit 655b02646dc175dc10666459b0a1e4346fc8d46a from qemu
Preparation for collapsing the two byte swaps, adjust_endianness and
handle_bswap, along the I/O path.
Target-dependent attributes are conditionalized upon NEED_CPU_H.
Backports commit 14776ab5a12972ea439c7fb2203a4c15a09094b4 from qemu
Switch the SPARC target from the old unassigned_access hook to the
new do_transaction_failed hook.
This will cause the "if transaction failed" code paths added in
the previous commits to become active if the access is to an
unassigned address. In particular we'll now handle bus errors
during page table walks correctly (generating a translation
error with the right kind of fault status).
Backports commit f8c3db33a5e863291182f8862ddf81618a7c6194 from qemu
The dump_mmu() function does a ldl_phys() at the start, but
then never uses the value it loads at all. Remove the
unused code.
Backports commit 9dffeec2e003a482ca858a887d3454c6bebed91e from qemu
Convert the mmu_probe() function to using address_space_ldl()
rather than ldl_phys(), so we can explicitly detect memory
transaction failures.
This makes no practical difference at the moment, because
ldl_phys() will return 0 on a transaction failure, and we
treat transaction failures and 0 PDEs identically. However
the spec says that MMU probe operations are supposed to
update the fault status registers, and if we ever implement
that we'll want to distinguish between the two. For the
moment, just add a TODO comment about the bug.
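The conversion itself is mechanical: the load goes through
address_space_ldl() and returns a MemTxResult that can be inspected,
roughly (variable names assumed from context):

    MemTxResult result;
    uint32_t pde = address_space_ldl(cs->as, pde_ptr,
                                     MEMTXATTRS_UNSPECIFIED, &result);
    if (result != MEMTX_OK) {
        /* TODO: the MMU probe should update the fault status
         * registers here instead of silently returning 0.
         */
        return 0;
    }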
Backports commit d86a9ad33c75ed795f09fb43243d0acecd583f24 from qemu
Currently we use the ldl_phys() function to read page table entries.
With the unassigned_access hook in place, if these hit an unassigned
area of memory then the hook will cause us to wrongly generate
an exception with a fault address matching the address of the
page table entry.
Change to using address_space_ldl() so we can detect and correctly
handle bus errors and give them their correct behaviour of
causing a translation error with a suitable fault status register.
Note that this won't actually take effect until we switch over
to using the do_transaction_failed hook.
Backports commit 3c818dfcc271f5ba298b06f33466ab30f9a28349 from qemu
Currently the ld/st_asi helper functions make calls to the
ld*_phys() and st*_phys() functions for those ASIs which
imply direct accesses to physical addresses. These implicitly
rely on the unassigned_access hook to cause them to generate
an MMU fault if the access fails.
Switch to using the address_space_* functions instead, which
return a MemTxResult that we can check. This means that when
we switch SPARC over to using the do_transaction_failed hook
we'll still get the same MMU faults we did before.
This commit converts the ASIs which do MXCC stream source
and destination accesses.
It's not clear to me whether raising an MMU fault like this
is the correct behaviour if we encounter a bus error, but
we retain the same behaviour that the old unassigned_access
hook would implement.
Backports commit 776095d3cd751a58469b68f652c1ab6785f63652 from qemu
Currently the ld/st_asi helper functions make calls to the
ld*_phys() and st*_phys() functions for those ASIs which
imply direct accesses to physical addresses. These implicitly
rely on the unassigned_access hook to cause them to generate
an MMU fault if the access fails.
Switch to using the address_space_* functions instead, which
return a MemTxResult that we can check. This means that when
we switch SPARC over to using the do_transaction_failed hook
we'll still get the same MMU faults we did before.
This commit converts the ASIs which do "MMU passthrough".
Backports commit b9f5fdad49c74583dcf9fcba0805b148e3992e13 from qemu
Currently the SPARC target uses the old-style do_unassigned_access
hook. We want to switch it over to do_transaction_failed, but to do
this we must first remove all the direct calls in ldst_helper.c to
cpu_unassigned_access(). Factor out the body of the hook function's
code into a new sparc_raise_mmu_fault() and call it from the hook and
from the various places that used to call cpu_unassigned_access().
In passing, this fixes a bug where the code that raised the
MMU exception was directly calling GETPC() from a function that
was several levels deep in the callstack from the original
helper function: the new sparc_raise_mmu_fault() instead takes
the return address as an argument.
Other than the use of retaddr rather than GETPC() and a comment
format fixup, the body of the new function has no changes from
that of the old hook function.
Backports commit c9d793f44620a4793239da73f67758ce5f5ba5d0 from qemu
Switch the MIPS target from the old unassigned_access hook to the new
do_transaction_failed hook.
Unlike the old hook, do_transaction_failed is only ever called from
the TCG memory access paths, so there is no need for the "ignore this
if we're using KVM" hack that we were previously using to work around
the way unassigned_access was called for all kinds of memory accesses
to unassigned physical addresses.
The MIPS target does not ever do direct memory reads by physical
address (via either ldl_phys etc or address_space_ldl etc), so the
only memory accesses this affects are the 'normal' guest loads and
stores, which will be handled by the new hook; their behaviour is
unchanged.
Backports commit 4f02a06d50ef0081089ed8cb3ec7c7986e3c95f8 from qemu
There are only two remaining uses of gen_bx_im. In each case, we
know the destination mode -- not changing in the case of gen_jmp
or changing in the case of trans_BLX_i. Use this to simplify the
surrounding code.
For trans_BLX_i, use gen_jmp for the actual branch. For gen_jmp,
use gen_set_pc_im to set up the single-step.
Backports commit eac2f39602e0423adf56be410c9a22c31fec9a81 from qemu
Now that all callers pass a constant value, split the switch
statement into the individual trans_* functions.
Backports commit 279de61a21a1622cb875ead82d6e78c989ba2966 from qemu
Add a check for ARMv6 in trans_CPS. We had this correct in
the T16 path, but had previously forgotten the check on the
A32 and T32 paths.
Backports commit 20556e7bd6111266fbf1d81e4ff7a89bfa5795a7 from qemu
Fold away all of the cases that now just goto illegal_op,
because all of their internal bits are now in decodetree.
Backports commit 590057d969a54de5d97261701c5702b3bebc9c07 from qemu
Fold away all of the cases that now just goto illegal_op,
because all of their internal bits are now in decodetree.
Backports commit f843e77144c9334e244a422848177f2fbef5eb05 from qemu
We have been using store_reg and not store_reg_for_load when writing
back a loaded value into the base register. At first glance this is
incorrect when base == pc, however that case is UNPREDICTABLE.
Backports commit b0e382b8cf365fed8b8c43482029ac7655961a85 from qemu
This has been a TODO item for quite a while. The minimum bit
count for A32 and T16 is 1, and for T32 is 2.
Backports commit 4b222545dbf30b60c033e1cd6eddda612575fd8c from qemu
Prior to v7, for the A32 encoding, this operation wrote an UNKNOWN
value back to the base register. Starting in v7 this is UNPREDICTABLE.
Backports commit 3949f4675d13c587078f8f423845a3a537a22595 from qemu
This includes a minor bug fix to LDM (user), which requires
bit 21 to be 0, which means no writeback.
Backports commit c5c426d4c680f908a1e262091a17b088b5709200 from qemu
In op_bfx, note that tcg_gen_{,s}extract_i32 already checks
for width == 32, so we don't need to special case that here.
Backports commit 86d21e4b509a2835ed79f234f476a4c5191d435b from qemu
Pass the T5 encoding of SUBS PC, LR, #IMM through the normal SUBS path
to make it clear exactly what's happening -- we hit ALUExceptionReturn
along that path.
Backports commit ef11bc3c461e2c650e8bef552146a4b08f81884e from qemu
Document our choice about the T32 CONSTRAINED UNPREDICTABLE behaviour.
This matches the undocumented choice made by the legacy decoder.
Backports commit 4c97f5b2f0fa9b37f9ff497f15411d809e6fd098 from qemu
The m-profile and a-profile decodings overlap. Only return false
for the case of wrong profile; handle UNDEFINED for permission failure
directly. This ensures that we don't accidentally pass an insn that
applies to the wrong profile.
Backports commit d0b26644502103ca97093ef67749812dc1df7eea from qemu
By shifting the 16-bit input left by 16, we can align the desired
portion of the 48-bit product and use tcg_gen_muls2_i32.
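Sketched in TCG ops (operand names illustrative): shifting the 16-bit
operand up by 16 turns the 32x16 multiply into a full 32x32 one whose
high half is exactly bits [47:16] of the original 48-bit product, which
is the part SMULW/SMLAW want:

    tcg_gen_shli_i32(t1, t1, 16);        /* t1 = Rm[15:0] << 16     */
    tcg_gen_muls2_i32(lo, hi, t0, t1);   /* hi:lo = t0 * t1, signed */
    /* 'hi' now holds bits [47:16] of the 48-bit product */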
Backports commit 485b607d4f393e0de92c922806a68aef22340c98 from qemu
Since all of the inputs and outputs are i32, dispense with
the intermediate promotion to i64 and use tcg_gen_add2_i32.
Backports commit ea96b374641bc429269096d88d4e91ee544273e9 from qemu
Since all of the inputs and outputs are i32, dispense with
the intermediate promotion to i64 and use tcg_gen_mulu2_i32
and tcg_gen_add2_i32.
Backports commit 2409d56454f0d028619fb1002eda86bf240906dd from qemu
Convert the modified immediate form of the data processing insns.
For A32, we can finally remove any code that was intertwined with
the register and register-shifted-register forms.
Backports commit 581c6ebd17c8f56ad52772216e6c6d8cc8997e8b from qemu
Convert the register shifted by register form of the data
processing insns. For A32, we cannot yet remove any code
because the legacy decoder intertwines the immediate form.
Backports commit 5be2c12337f4cbdbda4efe6ab485350f730faaad from qemu
Convert the register shifted by immediate form of the data
processing insns. For A32, we cannot yet remove any code
because the legacy decoder intertwines the reg-shifted-reg
and immediate forms.
Backports commit 25ae32c558182c07fc6ad01b936e9151cbf00c44 from qemu
Add the infrastructure that will become the new decoder.
No instructions adjusted so far.
Backports commit 51409b9e8cfe997b1ac3365df7400e0c6e844437 from qemu
This function already includes the test for an interworking write
to PC from a load. Change the T32 LDM implementation to match the
A32 LDM implementation.
For LDM, the reordering of the tests does not change valid
behaviour because the only case that differs is the one with rn == 15,
which is UNPREDICTABLE.
Backports commit 69be3e13764111737e1a7a13bb0c231e4d5be756 from qemu
The previous simplification got the order of operands to the
subtraction wrong. Since the 64-bit product is the subtrahend,
we must use a 64-bit subtract to properly compute the borrow
from the low-part of the product.
Fixes: 5f8cd06ebcf5 ("target/arm: Simplify SMMLA, SMMLAR, SMMLS, SMMLSR")
Backports commit e0a0c8322b8ebcdad674f443a3e86db8708d6738 from qemu
The translation table walk for an ATS instruction can result in
various faults. In general these are just reported back via the
PAR_EL1 fault status fields, but in some cases the architecture
requires that the fault is turned into an exception:
* synchronous stage 2 faults of any kind during AT S1E0* and
AT S1E1* instructions executed from NS EL1 fault to EL2 or EL3
* synchronous external aborts are taken as Data Abort exceptions
(This is documented in the v8A Arm ARM DDI0487A.e D5.2.11 and
G5.13.4.)
Backports commit 0710b2fa84a4aeb925422e1e88edac49ed407c79 from qemu
Currently the only part of an ARMCPRegInfo which is allowed to cause
a CPU exception is the access function, which returns a value indicating
that some flavour of UNDEF should be generated.
For the ATS system instructions, we would like to conditionally
generate exceptions as part of the writefn, because some faults
during the page table walk (like external aborts) should cause
an exception to be raised rather than returning a value.
There are several ways we could do this:
* plumb the GETPC() value from the top level set_cp_reg/get_cp_reg
helper functions through into the readfn and writefn hooks
* add extra readfn_with_ra/writefn_with_ra hooks that take the GETPC()
value
* require the ATS instructions to provide a dummy accessfn,
which serves no purpose except to cause the code generation
to emit TCG ops to sync the CPU state
* add an ARM_CP_ flag to mark the ARMCPRegInfo as possibly
throwing an exception in its read/write hooks, and make the
codegen sync the CPU state before calling the hooks if the
flag is set
This patch opts for the last of these, as it is fairly simple
to implement and doesn't require invasive changes like updating
the readfn/writefn hook function prototype signature.
Backports commit 37ff584c15bc3e1dd2c26b1998f00ff87189538c from qemu
Make this a static function private to translate.c.
Thus we can use the same idiom between aarch64 and aarch32
without actually sharing function implementations.
Backports commit 1ce21ba1eaf08b22da5925f3e37fc0b4322da858 from qemu
Despite the fact that the text for the call to gen_exception_insn
is identical for aarch64 and aarch32, the implementation inside
gen_exception_insn is totally different.
This fixes exceptions raised from aarch64.
This reverts commit fb2d3c9a9a.
The order of arguments in helper_ret_stl_mmu() invocations was wrong,
apparently caused by a misplaced multi-line copy-and-paste.
Fixes: 6decc57 ("target/mips: Fix MSA instructions ST.<B|H|W|D> on big endian host")
Backports commit abd4393d769d9fe2333b2e83e00f911a78475943 from qemu
The Intel Cooper Lake CPU adds the AVX512_BF16 instruction, defined as
CPUID.(EAX=7,ECX=1):EAX[bit 05].
The patch adds a property for setting the subleaf of CPUID leaf 7 in
case people would like to specify it.
The release spec link is as follows:
https://software.intel.com/sites/default/files/managed/c5/15/\
architecture-instruction-set-extensions-programming-reference.pdf
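The feature bit definition is roughly the following (macro name
assumed, following the existing CPUID leaf/subleaf naming):

    /* CPUID.(EAX=7,ECX=1):EAX bit 5 */
    #define CPUID_7_1_EAX_AVX512_BF16  (1U << 5)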
Backports commit 80db491da4ce8b199e0e8d1e23943b20aab82f69 from qemu
The x86 architecture requires that all conversions from floating
point to integer which raise the 'invalid' exception (infinities of
both signs, NaN, and all values which don't fit in the destination
integer) return what the x86 spec calls the "indefinite integer
value", which is 0x8000_0000 for 32-bits or 0x8000_0000_0000_0000 for
64-bits. The softfloat functions return the more usual behaviour of
positive overflows returning the maximum value that fits in the
destination integer format and negative overflows returning the
minimum value that fits.
Wrap the softfloat functions in x86-specific versions which
detect the 'invalid' condition and return the indefinite integer.
Note that we don't use these wrappers for the 3DNow! pf2id and pf2iw
instructions, which do return the minimum value that fits in
an int32 if the input float is a large negative number.
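A sketch of one such wrapper, assuming the usual softfloat helpers (the
wrapper name is illustrative):

    static int32_t x86_float32_to_int32(float32 a, float_status *s)
    {
        int old_flags = get_float_exception_flags(s);
        int32_t r = float32_to_int32(a, s);

        /* If this conversion newly raised 'invalid', x86 wants the
         * "indefinite integer value" rather than a saturated result.
         */
        if (get_float_exception_flags(s) & ~old_flags & float_flag_invalid) {
            r = INT32_MIN;
        }
        return r;
    }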
Fixes: https://bugs.launchpad.net/qemu/+bug/1815423
Backports commit 1e8a98b53867f61da9ca09f411288e2085d323c4 from qemu
This patch moves the define of target access alignment earlier, from
target/foo/cpu.h to configure.
Suggested in Richard Henderson's reply to "[PATCH 1/4] tcg: TCGMemOp is now
accelerator independent MemOp"
Backports commit 52bf9771fdfce98e98cea36a17a18915be6f6b7f from qemu