The memory_region_dispatch_{read|write} operand "unsigned size" is
being converted into a "MemOp op".
Convert interfaces by using no-op size_memop.
After all interfaces are converted, size_memop will be implemented
and the memory_region_dispatch_{read|write} operand "unsigned size"
will be converted into a "MemOp op".
As size_memop is a no-op, this patch does not change any behaviour.
Backports commit 4cbb198eefef41bbca703605c78875fd4fec6ef6 from qemu
The memory_region_dispatch_{read|write} operand "unsigned size" is
being converted into a "MemOp op".
Convert interfaces by using no-op size_memop.
After all interfaces are converted, size_memop will be implemented
and the memory_region_dispatch_{read|write} operand "unsigned size"
will be converted into a "MemOp op".
As size_memop is a no-op, this patch does not change any behaviour.
Backports commit 3d9e7c3e7bf11962e1100d077e46f93f780b7310 from qemu
The memory_region_dispatch_{read|write} operand "unsigned size" is
being converted into a "MemOp op".
Convert interfaces by using no-op size_memop.
After all interfaces are converted, size_memop will be implemented
and the memory_region_dispatch_{read|write} operand "unsigned size"
will be converted into a "MemOp op".
As size_memop is a no-op, this patch does not change any behaviour.
Backports commit e501824b3f3b3650e7cb8a509064cac01bc27c82 from qemu
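As a rough, hedged sketch of the mechanical call-site change the three
conversion patches above describe (the memory_region_dispatch_read()
prototype shown is the pre-conversion one; treat this as illustrative,
not a literal diff from the backported commits):

    /* before: the dispatch helper still takes a size in bytes */
    r = memory_region_dispatch_read(mr, addr, &val, size, attrs);

    /* after: the byte count is wrapped in the (for now no-op) size_memop(),
     * so switching the parameter type to MemOp later needs no further
     * call-site edits */
    r = memory_region_dispatch_read(mr, addr, &val, size_memop(size), attrs);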
Introduce no-op size_memop to aid preparatory conversion of
interfaces.
Once interfaces are converted, size_memop will be implemented to
return a MemOp from size in bytes.
Backports commit 66b9b24375ac215cdcbdf9e14d665395360abff4 from qemu
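A minimal sketch of the transitional helper described above, assuming the
interim definition is simply an identity function (the comment and the
eventual ctz32-style conversion are assumptions, not quotes from the
backported code):

    /* Interim, no-op definition: callers still pass a size in bytes. */
    static inline MemOp size_memop(unsigned size)
    {
        return size;
    }

    /* Once every interface takes a MemOp, size_memop() is expected to map a
     * byte count onto a MemOp instead, e.g. along the lines of ctz32(size),
     * since MO_8..MO_64 encode log2 of the access size. */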
This change ensures that the FPU can be accessed in Non-Secure mode
when the CPU core is reset using the arm_set_cpu_on() function call.
The NSACR.{CP11,CP10} bits define the exception level required to
access the FPU in Non-Secure mode. Without these bits set, the CPU
will give an undefined exception trap on the first FPU access for the
secondary cores under Linux.
This is necessary because in this power-control codepath QEMU
is effectively emulating a bit of EL3 firmware, and has to set
the CPU up as the EL3 firmware would.
Fixes: fc1120a7f5
Backports commit 0c7f8c43daf6556078e51de98aa13f069e505985 from qemu
QEMU lacks the minimum Jazelle implementation that is required
by the architecture (everything is RAZ or RAZ/WI). Add it
together with the HCR_EL2.TID0 trapping that goes with it.
Backports commit f96f3d5f09973ef40f164cf2d5fd98ce5498b82a from qemu
HSTR_EL2 offers a way to trap ranges of CP15 system register
accesses to EL2, and it looks like this register is completely
ignored by QEMU.
To avoid adding extra .accessfn filters all over the place (which
would have a direct performance impact), let's add a new TB flag
that gets set whenever HSTR_EL2 is non-zero and QEMU is translating
a context where this trap has a chance to apply, and only generate
the extra access check if the hypervisor is actively using this feature.
Tested with a hand-crafted KVM guest accessing CBAR.
Backports commit 5bb0a20b74ad17dee5dae38e3b8b70b383ee7c2d from qemu
HCR_EL2.TID3 requires that AArch32 reads of MVFR[012] are trapped to
EL2, and HCR_EL2.TID0 does the same for reads of FPSID.
In order to handle this, introduce a new TCG helper function that
checks for these control bits before executing the VMRS instruction.
Tested with a hacked-up version of KVM/arm64 that sets the control
bits for 32bit guests.
Backports commit 9ca1d776cb49c09b09579d9edd0447542970c834 from qemu
HCR_EL2.TID1 mandates that access from EL1 to REVIDR_EL1, AIDR_EL1
(and their 32bit equivalents) as well as TCMTR, TLBTR are trapped
to EL2. QEMU ignores it, making it harder for a hypervisor to
virtualize the HW (though to be fair, no known hypervisor actually
cares).
Do the right thing by trapping to EL2 if HCR_EL2.TID1 is set.
Backports commit 93fbc983b29a2eb84e2f6065929caf14f99c3681 from qemu
HCR_EL2.TID2 mandates that access from EL1 to CTR_EL0, CCSIDR_EL1,
CCSIDR2_EL1, CLIDR_EL1, CSSELR_EL1 are trapped to EL2, and QEMU
completely ignores it, making it impossible for hypervisors to
virtualize the cache hierarchy.
Do the right thing by trapping to EL2 if HCR_EL2.TID2 is set.
Backports commit 630fcd4d2ba37050329e0adafdc552d656ebe2f3 from qemu
This is derived from the cortex-m4 description, adding DP support and FPv5
instructions with the corresponding flags in isar and mvfr2.
Checked that it could successfully execute
vrinta.f32 s15, s15
while cortex-m4 emulation rejects it with "illegal instruction".
Backports commit cf7beda5072e106ddce875c1996446540c5fe239 from qemu
HCR_EL2.TID3 mandates that access from EL1 to a long list of id
registers traps to EL2, and QEMU has so far ignored this requirement.
This breaks (among other things) KVM guests that have PtrAuth enabled,
while the hypervisor doesn't want to expose the feature to its guest.
To achieve this, KVM traps the ID registers (ID_AA64ISAR1_EL1 in this
case), and masks out the unsupported feature.
QEMU not honoring the trap request means that the guest observes
that the feature is present in the HW, starts using it, and dies
a horrible death when KVM injects an UNDEF, because the feature
*really* isn't supported.
Do the right thing by trapping to EL2 if HCR_EL2.TID3 is set.
Note that this change does not include trapping of the MVFR
registers from AArch32 (they are accessed via the VMRS
instruction and need to be handled in a different way).
Backports commit 6a4ef4e5d1084ce41fafa7d470a644b0fd3d9317 from qemu
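A hypothetical, self-contained sketch of the trapping rule being described
(names, types and the helper itself are invented for illustration; QEMU's
real access functions differ in detail):

    #include <stdint.h>

    #define HCR_TID3 (1ULL << 18)   /* HCR_EL2.TID3 bit position */

    typedef enum { ACCESS_OK, ACCESS_TRAP_EL2 } IdAccessResult;

    /* Reads of the feature ID registers from EL1 are redirected to EL2
     * whenever HCR_EL2.TID3 is set, so the hypervisor can present a
     * filtered view of the ID space to its guest. */
    static IdAccessResult check_id_reg_read(int current_el, uint64_t hcr_el2)
    {
        if (current_el == 1 && (hcr_el2 & HCR_TID3)) {
            return ACCESS_TRAP_EL2;
        }
        return ACCESS_OK;
    }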
The ARMv8 ARM states that when executing at EL2, EL3 or Secure EL1,
ISR_EL1 shows the pending status of the physical IRQ, FIQ, or
SError interrupts.
Unfortunately, QEMU's implementation only considers the HCR_EL2
bits, and ignores the current exception level. This means a hypervisor
trying to look at its own interrupt state actually sees the guest
state, which is unexpected and breaks KVM as of Linux 5.3.
Instead, check for the running EL and return the physical bits
if not running in a virtualized context.
Backports commit 7cf95aed53c8770a338617ef40d5f37d2c197853 from qemu
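A simplified sketch of the corrected logic described above (the helper name
and parameters are mine, not QEMU's):

    #include <stdbool.h>

    /* Only a Non-secure EL0/EL1 context under a hypervisor should see the
     * virtual interrupt state; EL2, EL3 and Secure EL1 must observe the
     * physical IRQ/FIQ/SError pending bits. */
    static bool isr_shows_virtual_bits(int current_el, bool secure,
                                       bool el2_present)
    {
        if (current_el >= 2 || secure || !el2_present) {
            return false;
        }
        return true;
    }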
According to the PushStack() pseudocode in the ARMv7-M Architecture
Reference Manual, bit 4 of the LR should be set to NOT(CONTROL.FPCA)
when an FPU is present. The current implementation does this for
ARMv8-M but not for ARMv7-M. This patch makes the existing logic
applicable to both code paths.
Backports commit f900b1e5b087a02199fbb6de7038828008e9e419 from qemu
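A small illustrative helper for the rule being applied (names are assumed;
CONTROL.FPCA is bit 2 of CONTROL, and bit 4 of the EXC_RETURN value in LR is
the bit being computed):

    #include <stdbool.h>
    #include <stdint.h>

    /* On exception entry with an FPU present, bit 4 of the EXC_RETURN value
     * placed in LR is set to NOT(CONTROL.FPCA). */
    static uint32_t excret_with_fp_bit(uint32_t lr, uint32_t control,
                                       bool have_fpu)
    {
        if (have_fpu) {
            uint32_t fpca = (control >> 2) & 1u;
            lr = (lr & ~(1u << 4)) | ((fpca ^ 1u) << 4);
        }
        return lr;
    }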
Simply moving the non-stub helper_v7m_mrs/msr outside of
!CONFIG_USER_ONLY is not an option, because of all of the
other system-mode helpers that are called.
But we can split out a few subroutines to handle the few
EL0 accessible registers without duplicating code.
Backports commit 04c9c81b8fa2ee33f59a26265700fae6fc646062 from qemu
There was too much cut and paste between ldrexd and strexd;
ldrexd does prohibit the two output registers from being the same.
Fixes: af288228995
Backports commit 655b02646dc175dc10666459b0a1e4346fc8d46a from qemu
Preparation for collapsing the two byte swaps, adjust_endianness and
handle_bswap, along the I/O path.
Target-dependent attributes are conditionalized upon NEED_CPU_H.
Backports commit 14776ab5a12972ea439c7fb2203a4c15a09094b4 from qemu
Switch the SPARC target from the old unassigned_access hook to the
new do_transaction_failed hook.
This will cause the "if transaction failed" code paths added in
the previous commits to become active if the access is to an
unassigned address. In particular we'll now handle bus errors
during page table walks correctly (generating a translation
error with the right kind of fault status).
Backports commit f8c3db33a5e863291182f8862ddf81618a7c6194 from qemu
The dump_mmu() function does a ldl_phys() at the start, but
then never uses the value it loads at all. Remove the
unused code.
Backports commit 9dffeec2e003a482ca858a887d3454c6bebed91e from qemu
Convert the mmu_probe() function to using address_space_ldl()
rather than ldl_phys(), so we can explicitly detect memory
transaction failures.
This makes no practical difference at the moment, because
ldl_phys() will return 0 on a transaction failure, and we
treat transaction failures and 0 PDEs identically. However
the spec says that MMU probe operations are supposed to
update the fault status registers, and if we ever implement
that we'll want to distinguish the difference. For the
moment, just add a TODO comment about the bug.
Backports commit d86a9ad33c75ed795f09fb43243d0acecd583f24 from qemu
Currently we use the ldl_phys() function to read page table entries.
With the unassigned_access hook in place, if these hit an unassigned
area of memory then the hook will cause us to wrongly generate
an exception with a fault address matching the address of the
page table entry.
Change to using address_space_ldl() so we can detect and correctly
handle bus errors and give them their correct behaviour of
causing a translation error with a suitable fault status register.
Note that this won't actually take effect until we switch SPARC over
to using the do_transaction_failed hook.
Backports commit 3c818dfcc271f5ba298b06f33466ab30f9a28349 from qemu
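A fragment-level sketch of the pattern this and the following patches adopt
(QEMU context assumed; the fault handling is shown only as a comment, not as
the real code):

    MemTxResult result;
    uint32_t pde = address_space_ldl(cs->as, pde_ptr,
                                     MEMTXATTRS_UNSPECIFIED, &result);
    if (result != MEMTX_OK) {
        /* Bus error during the page table walk: raise a translation
         * error with a suitable fault status register value instead of
         * faulting on the address of the PDE itself. */
    }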
Currently the ld/st_asi helper functions make calls to the
ld*_phys() and st*_phys() functions for those ASIs which
imply direct accesses to physical addresses. These implicitly
rely on the unassigned_access hook to cause them to generate
an MMU fault if the access fails.
Switch to using the address_space_* functions instead, which
return a MemTxResult that we can check. This means that when
we switch SPARC over to using the do_transaction_failed hook
we'll still get the same MMU faults we did before.
This commit converts the ASIs which do MXCC stream source
and destination accesses.
It's not clear to me whether raising an MMU fault like this
is the correct behaviour if we encounter a bus error, but
we retain the same behaviour that the old unassigned_access
hook would implement.
Backports commit 776095d3cd751a58469b68f652c1ab6785f63652 from qemu
Currently the ld/st_asi helper functions make calls to the
ld*_phys() and st*_phys() functions for those ASIs which
imply direct accesses to physical addresses. These implicitly
rely on the unassigned_access hook to cause them to generate
an MMU fault if the access fails.
Switch to using the address_space_* functions instead, which
return a MemTxResult that we can check. This means that when
we switch SPARC over to using the do_transaction_failed hook
we'll still get the same MMU faults we did before.
This commit converts the ASIs which do "MMU passthrough".
Backports commit b9f5fdad49c74583dcf9fcba0805b148e3992e13 from qemu
Currently the SPARC target uses the old-style do_unassigned_access
hook. We want to switch it over to do_transaction_failed, but to do
this we must first remove all the direct calls in ldst_helper.c to
cpu_unassigned_access(). Factor out the body of the hook function's
code into a new sparc_raise_mmu_fault() and call it from the hook and
from the various places that used to call cpu_unassigned_access().
In passing, this fixes a bug where the code that raised the
MMU exception was directly calling GETPC() from a function that
was several levels deep in the callstack from the original
helper function: the new sparc_raise_mmu_fault() instead takes
the return address as an argument.
Other than the use of retaddr rather than GETPC() and a comment
format fixup, the body of the new function has no changes from
that of the old hook function.
Backports commit c9d793f44620a4793239da73f67758ce5f5ba5d0 from qemu
The maximum level is defined as P_L2_LEVELS and skip is defined with 6
bits, which means that if P_L2_LEVELS < (1 << 6), skip never exceeds the
boundary.
Since this check compares two constants, the compiler can optimize the
code at build time for each configuration.
Backports commit 526ca2360ea1cd947f74c8c6c38b91b9d6fcfdb5 from qemu
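A standalone illustration of the reasoning, with a made-up constant (the
real P_L2_LEVELS is derived from the address-space and page-size
configuration):

    /* 'skip' is stored in a 6-bit field, so as long as the deepest level
     * fits in 6 bits no runtime clamping is needed; the comparison below
     * involves only compile-time constants. */
    #define P_L2_LEVELS 6   /* example value only */

    _Static_assert(P_L2_LEVELS < (1 << 6),
                   "a 6-bit skip field can hold any level");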
In subpage_init(), subpage->sub_section is set to
PHYS_SECTION_UNASSIGNED by subpage_register(). Since
PHYS_SECTION_UNASSIGNED is defined to be 0, and subpage is allocated with
g_malloc0, subpage->sub_section is already initialized to 0.
This patch removes the redundant setup for a new subpage and also fixes
the code style.
Backports commit b797ab1a15ba8d2b2fc4ec3e1f24d755f6855d05 from qemu
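A small sketch of why the explicit assignment is redundant (the structure is
shrunk to the one relevant field and the array size is arbitrary;
PHYS_SECTION_UNASSIGNED is 0 as stated above):

    #include <glib.h>
    #include <stdint.h>

    #define PHYS_SECTION_UNASSIGNED 0

    typedef struct {
        uint16_t sub_section[512];
    } SubPageDemo;

    static SubPageDemo *subpage_demo_init(void)
    {
        /* g_malloc0() returns zero-filled memory, so every entry already
         * equals PHYS_SECTION_UNASSIGNED; no extra loop or memset needed. */
        return g_malloc0(sizeof(SubPageDemo));
    }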
The purpose of these two MAX here is to get the maximum of these three
variables:
A: map->nodes_nb + nodes
B: map->nodes_nb_alloc
C: alloc_hint
We can write it like MAX(A, B, C). Since the if condition says A > B,
this means MAX(A, B, C) = MAX(A, C).
This patch just simplifies the calculation a bit.
Backports commit c95cfd040078db8017f74fd3a4d6f798385d960c from qemu
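A runnable check of the identity used above (the numeric values are
arbitrary; MAX is the usual two-argument macro):

    #include <assert.h>

    #define MAX(a, b) ((a) > (b) ? (a) : (b))

    int main(void)
    {
        unsigned A = 12;   /* map->nodes_nb + nodes */
        unsigned B = 10;   /* map->nodes_nb_alloc   */
        unsigned C = 16;   /* alloc_hint            */

        if (A > B) {
            /* under this guard, MAX(A, B, C) collapses to MAX(A, C) */
            assert(MAX(MAX(A, B), C) == MAX(A, C));
        }
        return 0;
    }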
The *nb* argument of phys_page_set() and phys_page_set_level()
stands for the number of pages to set, not a hardware address,
so it is more appropriate to use uint64_t instead of hwaddr for its
type.
Backports commit 56b15076805a29673c1a90ea9c3ebef25bfcc912 from qemu
Switch the MIPS target from the old unassigned_access hook to the new
do_transaction_failed hook.
Unlike the old hook, do_transaction_failed is only ever called from
the TCG memory access paths, so there is no need for the "ignore this
if we're using KVM" hack that we were previously using to work around
the way unassigned_access was called for all kinds of memory accesses
to unassigned physical addresses.
The MIPS target does not ever do direct memory reads by physical
address (via either ldl_phys etc or address_space_ldl etc), so the
only memory accesses this affects are the 'normal' guest loads and
stores, which will be handled by the new hook; their behaviour is
unchanged.
Backports commit 4f02a06d50ef0081089ed8cb3ec7c7986e3c95f8 from qemu
Document the use of g_autofree and g_autoptr in glib for automatic
freeing of memory.
Backports commit 821f2967562a1fdc7e52a644963163e6917c4293 from qemu
The split of information between the two docs is rather arbitrary and
unclear. It is simpler for contributors if all the information is in
one file.
Backports commit 637f39568fc0bd9848fd9d225d52ab0c4c443ed3 from qemu
There are only two remaining uses of gen_bx_im. In each case, we
know the destination mode -- not changing in the case of gen_jmp
or changing in the case of trans_BLX_i. Use this to simplify the
surrounding code.
For trans_BLX_i, use gen_jmp for the actual branch. For gen_jmp,
use gen_set_pc_im to set up the single-step.
Backports commit eac2f39602e0423adf56be410c9a22c31fec9a81 from qemu
Now that all callers pass a constant value, split the switch
statement into the individual trans_* functions.
Backports commit 279de61a21a1622cb875ead82d6e78c989ba2966 from qemu
Add a check for ARMv6 in trans_CPS. We had this correct in
the T16 path, but had previously forgotten the check on the
A32 and T32 paths.
Backports commit 20556e7bd6111266fbf1d81e4ff7a89bfa5795a7 from qemu
Fold away all of the cases that now just goto illegal_op,
because all of their internal bits are now in decodetree.
Backports commit 590057d969a54de5d97261701c5702b3bebc9c07 from qemu
Fold away all of the cases that now just goto illegal_op,
because all of their internal bits are now in decodetree.
Backports commit f843e77144c9334e244a422848177f2fbef5eb05 from qemu
We have been using store_reg and not store_reg_for_load when writing
back a loaded value into the base register. At first glance this is
incorrect when base == pc; however, that case is UNPREDICTABLE.
Backports commit b0e382b8cf365fed8b8c43482029ac7655961a85 from qemu
This has been a TODO item for quite a while. The minimum bit
count for A32 and T16 is 1, and for T32 is 2.
Backports commit 4b222545dbf30b60c033e1cd6eddda612575fd8c from qemu
Prior to v7, for the A32 encoding, this operation wrote an UNKNOWN
value back to the base register. Starting in v7 this is UNPREDICTABLE.
Backports commit 3949f4675d13c587078f8f423845a3a537a22595 from qemu
This includes a minor bug fix to LDM (user), which requires
bit 21 to be 0, which means no writeback.
Backports commit c5c426d4c680f908a1e262091a17b088b5709200 from qemu
In op_bfx, note that tcg_gen_{,s}extract_i32 already checks
for width == 32, so we don't need to special-case that here.
Backports commit 86d21e4b509a2835ed79f234f476a4c5191d435b from qemu
Pass the T5 encoding of SUBS PC, LR, #IMM through the normal SUBS path
to make it clear exactly what's happening -- we hit ALUExceptionReturn
along that path.
Backports commit ef11bc3c461e2c650e8bef552146a4b08f81884e from qemu
Document our choice about the T32 CONSTRAINED UNPREDICTABLE behaviour.
This matches the undocumented choice made by the legacy decoder.
Backports commit 4c97f5b2f0fa9b37f9ff497f15411d809e6fd098 from qemu
The m-profile and a-profile decodings overlap. Only return false
for the case of wrong profile; handle UNDEFINED for permission failure
directly. This ensures that we don't accidentally pass an insn that
applies to the wrong profile.
Backports commit d0b26644502103ca97093ef67749812dc1df7eea from qemu