Clean up the boilerplate that each target must define.
Replace arm_env_get_cpu with env_archcpu. The combination
CPU(arm_env_get_cpu) should have used ENV_GET_CPU to begin with;
use env_cpu now.
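A minimal sketch of the substitution, assuming the env_cpu()/env_archcpu()
accessors this backport pulls in (the surrounding variable names are
illustrative):
    /* before: reach CPUState via the ARMCPU wrapper */
    CPUState *cs = CPU(arm_env_get_cpu(env));
    /* after: use the generic accessors directly */
    CPUState *cs  = env_cpu(env);      /* CPUARMState -> CPUState */
    ARMCPU   *cpu = env_archcpu(env);  /* CPUARMState -> ARMCPU   */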
Backports commit 2fc0cc0e1e034582f4718b1a2d57691474ccb6aa from qemu
This decouples the resulting translated code from the current state
of the system.
Backports commit 2399d4e7cec22ecf1c51062d2ebfd45220dbaace from qemu
We do not need an out-of-line helper for manipulating bits in pstate.
While changing things, share the implementation of gen_ss_advance.
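A rough sketch of the shared inline form, assuming the usual TCG API;
the real code in the translator headers may differ in detail:
    /* clear bits in env->pstate straight from generated code,
     * with no out-of-line helper */
    static inline void clear_pstate_bits(uint32_t bits)
    {
        TCGv_i32 p = tcg_temp_new_i32();
        tcg_gen_ld_i32(p, cpu_env, offsetof(CPUARMState, pstate));
        tcg_gen_andi_i32(p, p, ~bits);
        tcg_gen_st_i32(p, cpu_env, offsetof(CPUARMState, pstate));
        tcg_temp_free_i32(p);
    }
    /* shared by the A32 and A64 translators: clear PSTATE.SS after
     * emitting a single-stepped instruction */
    static inline void gen_ss_advance(DisasContext *s)
    {
        if (s->ss_active) {
            s->pstate_ss = 0;
            clear_pstate_bits(PSTATE_SS);
        }
    }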
Backports commit 22ac3c49641f6eed93dca5b852030b4d3eacf6c4 from qemu
The EL0+UMA check is unique to DAIF. While SPSel had avoided the
check by nature of already checking EL >= 1, the other post v8.0
extensions to MSR (imm) allow EL0 and do not require UMA. Avoid
the unconditional write to pc and use raise_exception_ra to unwind.
Backports commit ff730e9666a716b669ac4a8ca7c521177d1d2b15 from qemu
Since arm_hcr_el2_eff includes a check against
arm_is_secure_below_el3, we can often remove a
nearby check against secure state.
In some cases, move the call to arm_hcr_el2_eff
to the end of a short-circuit logical sequence.
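An illustrative before/after of the pattern, using HCR_TGE as the
example bit (the real call sites vary):
    /* before: explicit secure-state test next to a raw HCR_EL2 read */
    if (!arm_is_secure_below_el3(env) && (env->cp15.hcr_el2 & HCR_TGE)) {
        ...
    }
    /* after: arm_hcr_el2_eff() already yields 0 in secure state,
     * so the explicit check can go away */
    if (arm_hcr_el2_eff(env) & HCR_TGE) {
        ...
    }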
Backports commit 7c208e0f4171c9e2cc35efc12e1bf264a45c229f from qemu
This commit fixes a case where the CPU would try to go to EL3 when
executing an smc instruction, even though ARM_FEATURE_EL3 is false. This
case is raised when the PSCI conduit is set to smc, but the smc
instruction does not lead to a valid PSCI call.
QEMU later crashes with an assertion failure because of an
incoherent mmu_idx.
This commit refactors the pre_smc helper by enumerating all the
possible ways of handling an smc instruction, and covers the
previously missing case that led to the crash.
The following minimal test would crash before this commit:
    .global _start
    .text
_start:
    ldr x0, =0xdeadbeef  // invalid PSCI call
    smc #0
run with the following command line:
    aarch64-linux-gnu-gcc -nostdinc -nostdlib -Wl,-Ttext=40000000 \
        -o test test.s
    qemu-system-aarch64 -M virt,virtualization=on,secure=off \
        -cpu cortex-a57 -kernel test
Backports commit 7760da729ac88f112f98f36395ac3b55fc9e4211 from qemu
Hyp mode is an exception to the general rule that each AArch32
mode has its own r13, r14 and SPSR -- it has a banked r13 and
SPSR but shares its r14 with User and System mode. We were
incorrectly implementing it as banked, which meant that on
entry to Hyp mode r14 was 0 rather than the USR/SYS r14.
We provide a new function r14_bank_number() which is like
the existing bank_number() but provides the index into
env->banked_r14[]; bank_number() provides the index to use
for env->banked_r13[] and env->banked_cpsr[].
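The new helper reduces to a one-line special case; a sketch using
QEMU's bank constants:
    /* r14 is banked like r13/SPSR except in Hyp mode, which shares
     * its r14 with User/System */
    static inline int r14_bank_number(int mode)
    {
        return (mode == ARM_CPU_MODE_HYP) ? BANK_USRSYS : bank_number(mode);
    }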
All the points in the code that were using bank_number()
to index into env->banked_r14[] are updated for consistency:
* switch_mode() -- this is the only place where we fix
an actual bug
* aarch64_sync_32_to_64() and aarch64_sync_64_to_32():
no behavioural change as we already special-cased Hyp R14
* kvm32.c: no behavioural change since the guest can't ever
be in Hyp mode, but conceptually the right thing to do
* msr_banked()/mrs_banked(): we can never get to the case
that accesses banked_r14[] with tgtmode == ARM_CPU_MODE_HYP,
so no behavioural change
Backports commit 593cfa2b637b92d37eef949653840dc065cdb960 from qemu
Create and use a utility function to extract the EC field
from a syndrome, rather than open-coding the shift.
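A sketch of the utility function (the EC field lives in the top six
bits of the syndrome, ARM_EL_EC_SHIFT in QEMU's headers):
    /* extract the exception class from a syndrome value */
    static inline uint32_t syn_get_ec(uint32_t syn)
    {
        return syn >> ARM_EL_EC_SHIFT;
    }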
Backports commit 64b91e3f890a8c221b65c6820a5ee39107ee40f5 from qemu
At present we assert:
arm_el_is_aa64: Assertion `el >= 1 && el <= 3' failed.
The comment in arm_el_is_aa64 explains why asking about EL0 without
extra information is impossible. Add an extra argument to provide
it from the surrounding context.
Fixes: 0ab5953b00b3
Backports commit 9a05f7b67436abdc52bce899f56acfde2e831454 from qemu
Add code to insert calls to a helper function to do the stack
limit checking when we handle these forms of instruction
that write to SP:
* ADD (SP plus immediate)
* ADD (SP plus register)
* SUB (SP minus immediate)
* SUB (SP minus register)
* MOV (register)
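A hedged sketch of the translate-time hook; gen_helper_v8m_stackcheck()
and the v8m_stackcheck flag follow QEMU naming, but the decoder
plumbing is simplified here:
    /* write a new value to SP, first raising the fault if stack
     * checking is active and the value is below the stack limit */
    static void store_sp_checked(DisasContext *s, TCGv_i32 var)
    {
        if (s->v8m_stackcheck) {
            gen_helper_v8m_stackcheck(cpu_env, var);
        }
        store_reg(s, 13, var);
    }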
Backports commit 5520318939fea5d659bf808157cd726cb967b761 from qemu
SVE vector length can change when changing EL, or when writing
to one of the ZCR_ELn registers.
For correctness, our implementation requires that predicate bits
that are inaccessible are never set. Which means noticing length
changes and zeroing the appropriate register bits.
Backports commit 0ab5953b00b3165877d00cf75de628c51670b550 from qemu
The MSR (banked) and MRS (banked) instructions allow accesses to ELR_Hyp
from either Monitor or Hyp mode. Our translate time check
was overly strict and only permitted access from Monitor mode.
The runtime check we do in msr_mrs_banked_exc_checks() had the
correct code in it, but never got there because of the earlier
"currmode == tgtmode" check. Special case ELR_Hyp.
Backports commit aec4dd09f172ee64c19222b78269d5952fd9c1dc from qemu
When we raise a synchronous exception, if HCR_EL2.TGE is set then
exceptions targeting NS EL1 must be redirected to EL2. Implement
this in raise_exception() -- all synchronous exceptions go through
this function.
(Asynchronous exceptions go via arm_cpu_exec_interrupt(), which
already honours HCR_EL2.TGE when it determines the target EL
in arm_phys_excp_target_el().)
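A minimal sketch of the redirect at the top of raise_exception()
(condition shown in open-coded form; the rest of the function is
elided):
    /* HCR_EL2.TGE redirects synchronous exceptions aimed at NS EL1
     * to EL2 instead */
    if ((env->cp15.hcr_el2 & HCR_TGE) &&
        target_el == 1 && !arm_is_secure(env)) {
        target_el = 2;
    }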
Backports commit 7556edfb4d7bf0583c852c8cfc49ef494c41dd8a from qemu
Because the design of the PMU requires that the counter values be
converted between their delta and guest-visible forms for mode
filtering, an additional hook which occurs before the EL is changed is
necessary.
Backports commit b5c53d1b3886387874f8c8582b205aeb3e4c3df6 from qemu
In icount mode, instructions that access io memory spaces in the middle
of the translation block invoke TB recompilation. After recompilation,
such instructions become last in the TB and are allowed to access io
memory spaces.
When the code includes an instruction like the i386
'xchg eax, 0xffffd080', which accesses the APIC, QEMU goes into an
infinite recompilation loop.
This instruction includes two memory accesses - one read and one write.
After the first access, APIC calls cpu_report_tpr_access, which restores
the CPU state to get the current eip. But cpu_restore_state_from_tb
resets the cpu->can_do_io flag which makes the second memory access invalid.
Therefore the second memory access causes a recompilation of the block.
Then these operations repeat again and again.
This patch moves resetting cpu->can_do_io flag from
cpu_restore_state_from_tb to cpu_loop_exit* functions.
It also adds a parameter for cpu_restore_state which controls restoring
icount. There is no need to restore icount when we only query CPU state
without breaking the TB. Restoring it in such cases leads to the
incorrect flow of the virtual time.
In most cases the new parameter is true (icount should be recalculated).
But there are two cases in i386 and openrisc when the CPU state is only
queried without the need to break the TB. This patch fixes both of
these cases.
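A sketch of the resulting interface and its two modes of use (the
parameter name follows the upstream change; the callers shown are
illustrative):
    /* the new flag says whether we are about to leave the current TB,
     * i.e. whether icount must be recomputed */
    bool cpu_restore_state(CPUState *cpu, uintptr_t host_pc, bool will_exit);

    /* query-only: recover guest state without breaking the TB,
     * so leave icount alone */
    cpu_restore_state(cs, retaddr, false);
    /* fault path: we are going to longjmp out of the TB,
     * so icount must be recalculated */
    cpu_restore_state(cs, retaddr, true);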
Backports commit afd46fcad2dceffda35c0586f5723c127b6e09d8 from qemu
For debug exceptions due to breakpoints or the BKPT instruction which
are taken to AArch32, the Fault Address Register is architecturally
UNKNOWN. We were using that as license to simply not set
env->exception.vaddress, but this isn't correct, because it will
expose to the guest whatever old value was in that field when
arm_cpu_do_interrupt_aarch32() writes it to the guest IFSR. That old
value might be a FAR for a previous guest EL2 or secure exception, in
which case we shouldn't show it to an EL1 or non-secure exception
handler. It might also be a non-deterministic value, which is bad
for record-and-replay.
Clear env->exception.vaddress before taking breakpoint debug
exceptions, to avoid this minor information leak.
Backports commit 548f514cf89dd9ab39c0cb4c063097bccf141fdd from qemu
Now that we have a helper function specifically for the BRK and
BKPT instructions, we can set the exception.fsr there rather
than in arm_cpu_do_interrupt_aarch32(). This allows us to
use our new arm_debug_exception_fsr() helper.
In particular this fixes a bug where we were hardcoding the
short-form IFSR value, which is wrong if the target exception
level has LPAE enabled.
Fixes: https://bugs.launchpad.net/qemu/+bug/1756927
Backports commit 62b94f31d0df75187bb00684fc29e8639eacc0c5 from qemu
When a debug exception is taken to AArch32, it appears as a Prefetch
Abort, and the Instruction Fault Status Register (IFSR) must be set.
The IFSR has two possible formats, depending on whether LPAE is in
use. Factor out the code in arm_debug_excp_handler() which picks
an FSR value into its own utility function, update it to use
arm_fi_to_lfsc() and arm_fi_to_sfsc() rather than hard-coded constants,
and use the correct condition to select long or short format.
In particular this fixes a bug where we could select the short
format because we're at EL0 and the EL1 translation regime is
not using LPAE, but then route the debug exception to EL2 because
of MDCR_EL2.TDE and hand EL2 the wrong format FSR.
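A sketch of the factored-out helper, close to the shape described
above (the field and macro names follow QEMU but may differ in
detail):
    static uint32_t arm_debug_exception_fsr(CPUARMState *env)
    {
        ARMMMUFaultInfo fi = { .type = ARMFault_Debug };
        int target_el = arm_debug_target_el(env);
        bool using_lpae = false;

        if (target_el == 2 || arm_el_is_aa64(env, target_el)) {
            /* EL2 and AArch64 regimes always use the long format */
            using_lpae = true;
        } else if (arm_feature(env, ARM_FEATURE_LPAE) &&
                   (env->cp15.tcr_el[target_el].raw_tcr & TTBCR_EAE)) {
            using_lpae = true;
        }
        return using_lpae ? arm_fi_to_lfsc(&fi) : arm_fi_to_sfsc(&fi);
    }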
Backports commit 81621d9ab8a0f07956e67850b15eebf6d6992eec from qemu
The MDCR_EL2.TDE bit allows the exception level targeted by debug
exceptions to be set to EL2 for code executing at EL0. We handle
this in the arm_debug_target_el() function, but this is only used for
hardware breakpoint and watchpoint exceptions, not for the exception
generated when the guest executes an AArch32 BKPT or AArch64 BRK
instruction. We don't have enough information for a translate-time
equivalent of arm_debug_target_el(), so instead make BKPT and BRK
call a special purpose helper which can do the routing, rather than
the generic exception_with_syndrome helper.
Backports commit c900a2e62dd6dde11c8f5249b638caad05bb15be from qemu
The MC68040 MMU provides the size of the access that
triggers the page fault.
This size is set in the Special Status Word which
is written in the stack frame of the access fault
exception.
So we need the size in m68k_cpu_unassigned_access() and
m68k_cpu_handle_mmu_fault().
To be able to do that, this patch modifies the prototype of the
handle_mmu_fault handler, tlb_fill() and probe_write().
do_unassigned_access() already includes a size parameter.
This patch also updates the handle_mmu_fault handlers and
tlb_fill() of all targets (parameter change only, no code change).
Backports commit 98670d47cd8d63a529ff230fd39ddaa186156f8c from qemu
Rather than passing a regno to the helper, pass pointers to the
vector register directly. This eliminates the need to pass in
the environment pointer and reduces the number of places that
directly access env->vfp.regs[].
Backports commit e7c06c4e4c98c47899417f154df1f2ef4e8d09a0 from qemu
Instead of ignoring the response from address_space_ld*()
(indicating an attempt to read a page table descriptor from
an invalid physical address), use it to report the failure
correctly.
Since this is another couple of locations where we need to
decide the value of the ARMMMUFaultInfo ea bit based on a
MemTxResult, we factor out that operation into a helper
function.
Backports commit 3b39d734141a71296d08af3d4c32f872fafd782e from qemu
cpu_restore_state officially supports being passed an address it can't
resolve the state for. As a result the checks in the helpers are
superfluous and can be removed. This makes the code consistent with
other users of cpu_restore_state.
Of course this does nothing to address what to do if cpu_restore_state
can't resolve the state but so far it seems this is handled elsewhere.
The change was made with included coccinelle script.
Backports commit 65255e8efdd5fca602bcc4ff61a879939ff75f4f from qemu
Now that do_ats_write() is entirely in control of whether to
generate a 32-bit PAR or a 64-bit PAR, we can make it use the
correct (complicated) condition for doing so.
Backports commit 1313e2d7e2cd8b21741e0cf542eb09dfc4188f79 from qemu
Now that ARMMMUFaultInfo is guaranteed to have enough information
to construct a fault status code, we can pass it in to the
deliver_fault() function and let it generate the correct type
of FSR for the destination, rather than relying on the value
provided by get_phys_addr().
I don't think there are any cases the old code was getting
wrong, but this is more obviously correct.
Backports commit 681f9a89d201d7891e2c60dff5e5415d8f618518 from qemu
WFI/E are often, but not always, 4 bytes long. When they are, we need
to set the ARM_EL_IL bit in the syndrome register.
Pass the instruction length to HELPER(wfi), use it to decrement pc
appropriately and to pass an is_16bit flag to syn_wfx, which sets
ARM_EL_IL if needed.
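A sketch of the adjusted syndrome constructor (the argument layout
follows the description above; treat the exact encoding as
illustrative):
    static inline uint32_t syn_wfx(int cv, int cond, int ti, bool is_16bit)
    {
        return (EC_WFX_TRAP << ARM_EL_EC_SHIFT) |
               (is_16bit ? 0 : ARM_EL_IL) |   /* IL set only for 32-bit encodings */
               (cv << 24) | (cond << 20) | ti;
    }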
Set dc->insn in both arm_tr_translate_insn and thumb_tr_translate_insn.
Backports commit 58803318e5a546b2eb0efd7a053ed36b6c29ae6f from qemu
This properly forwards SMC events to EL2 when PSCI is provided by QEMU
itself and, thus, ARM_FEATURE_EL3 is off.
Found and tested with the Jailhouse hypervisor. Solution based on
suggestions by Peter Maydell.
Backports commit 77077a83006c3c9bdca496727f1735a3c5c5355d from qemu
For M profile we must clear the exclusive monitor on reset, exception
entry and exception exit. We weren't doing any of these things; fix
this bug.
Backports commit dc3c4c14f0f12854dbd967be3486f4db4e66d25b from qemu
Implement the new do_transaction_failed hook for ARM, which should
cause the CPU to take a prefetch abort or data abort.
Backports commit c79c0a314c43b78f6326d5f137bdbafdbf8e9766 from qemu
For external aborts, we will want to be able to specify the EA
(external abort type) bit in the syndrome field. Allow callers of
deliver_fault() to do that by adding a field to ARMMMUFaultInfo which
we use when constructing the syndrome values.
Backports commit c528af7aa64f159eb30b46e567b650c5440fc117 from qemu
We currently have some similar code in tlb_fill() and in
arm_cpu_do_unaligned_access() for delivering a data abort or prefetch
abort. We're also going to want to do the same thing to handle
external aborts. Factor out the common code into a new function
deliver_fault().
Backports commit aac43da1d772a50778ab1252c13c08c2eb31fb39 from qemu
M profile cores can never trap on WFI or WFE instructions. Check for
M profile in check_wfx_trap() to ensure this.
The existing code will do the right thing for v7M cores because
the hcr_el2 and scr_el3 registers will be all-zeroes and so we
won't attempt to trap, but when we start setting ARM_FEATURE_V8
for v8M cores the v8A handling of SCTLR.nTWE and .nTWI will not
give the right results.
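A minimal sketch of the added early-out at the top of check_wfx_trap()
(the rest of the function is elided):
    static inline int check_wfx_trap(CPUARMState *env, bool is_wfe)
    {
        /* M profile has no EL2/EL3 trap controls for WFI/WFE */
        if (arm_feature(env, ARM_FEATURE_M)) {
            return 0;
        }
        ... /* existing SCTLR.nTWE/nTWI, HCR and SCR checks */
    }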
Backports commit 0e2845689ebdb4ea7174f96f6797e2d8942bd114 from qemu
The M profile CPU's MPU has an awkward corner case which we
would like to implement with a different MMU index.
We can avoid having to bump the number of MMU modes ARM
uses, because some of our existing MMU indexes are only
used by non-M-profile CPUs, so we can borrow one.
To avoid that getting too confusing, clean up the code
to try to keep the two meanings of the index separate.
Instead of ARMMMUIdx enum values being identical to core QEMU
MMU index values, they are now the core index values with some
high bits set. Any particular CPU always uses the same high
bits (so eventually A profile cores and M profile cores will
use different bits). New functions arm_to_core_mmu_idx()
and core_to_arm_mmu_idx() convert between the two.
In general core index values are stored in 'int' types, and
ARM values are stored in ARMMMUIdx types.
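A sketch of the encoding and the two conversion helpers (the bit
values are illustrative of the scheme rather than authoritative):
    /* an ARMMMUIdx is the core index plus a profile tag in the high
     * bits; masking the tag off recovers the core index */
    #define ARM_MMU_IDX_A            0x10
    #define ARM_MMU_IDX_COREIDX_MASK 0x7

    static inline int arm_to_core_mmu_idx(ARMMMUIdx mmu_idx)
    {
        return mmu_idx & ARM_MMU_IDX_COREIDX_MASK;
    }

    static inline ARMMMUIdx core_to_arm_mmu_idx(CPUARMState *env, int mmu_idx)
    {
        return mmu_idx | ARM_MMU_IDX_A;   /* A-profile tag for now */
    }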
Backports commit 8bd5c82030b2cb09d3eef6b444f1620911cc9fc5 from qemu
When identifying the DFSR format for an alignment fault, use
the mmu index that we are passed, rather than calling cpu_mmu_index()
to get the mmu index for the current CPU state. This doesn't actually
make any difference since the only cases where the current MMU index
differs from the index used for the load are the "unprivileged
load/store" instructions, and in that case the mmu index may
differ but the translation regime is the same (apart from the
"use from Hyp mode" case which is UNPREDICTABLE).
However it's the more logical thing to do.
Backports commit e517d95b63427fae9f03958dbc005c36b4ebf2cf from qemu
In tlb_fill() we construct a syndrome register value from a
fault status register value which is filled in by arm_tlb_fill().
arm_tlb_fill() returns FSR values which might be in the format
used with short-format page descriptors, or the format used
with long-format (LPAE) descriptors. The syndrome register
always uses LPAE-format FSR status codes.
It isn't actually possible to end up delivering a syndrome
register value to the guest for a fault which is reported
with a short-format FSR (that kind of stage 1 fault will only
happen for an AArch32 translation regime which doesn't have
a syndrome register, and can never be redirected to an AArch64
or Hyp exception level). Add an assertion which checks this,
and adjust the code so that we construct a syndrome with
an invalid status code, rather than allowing set bits in
the FSR input to randomly corrupt other fields in the syndrome.
Backports commit 65ed2ed90d9d81fd4b639029be850ea5651f919f from qemu
The WFE and YIELD instructions are really only hints and in TCG's case
they were useful to move the scheduling on from one vCPU to the next. In
the parallel context (MTTCG) this just causes an unnecessary cpu_exit
and contention of the BQL.
Backports commit c22edfebff29f63d793032e4fbd42a035bb73e27 from qemu
In BE32 mode, sub-word size watchpoints can fail to trigger because the
address of the access is adjusted in the opcode helpers before being
compared with the watchpoint registers. This patch reverses the address
adjustment before performing the comparison with the help of a new CPUClass
hook.
This version of the patch augments and tidies up comments a little.
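A hedged sketch of the new hook undoing the BE32 swizzle before the
watchpoint comparison (the XOR values mirror the adjustment made in
the load/store path; details may differ):
    static vaddr arm_adjust_watchpoint_address(CPUState *cs, vaddr addr, int len)
    {
        ARMCPU *cpu = ARM_CPU(cs);
        CPUARMState *env = &cpu->env;

        /* in BE32 mode, sub-word accesses had their addresses
         * XOR-swizzled by the opcode helpers; reverse that here */
        if (arm_sctlr_b(env)) {
            if (len == 1) {
                addr ^= 3;
            } else if (len == 2) {
                addr ^= 2;
            }
        }
        return addr;
    }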
Backports commit 40612000599e52e792d23c998377a0fa429c4036 from qemu
We've currently got 18 architectures in QEMU, and thus 18 target-xxx
folders in the root folder of the QEMU source tree. More architectures
(e.g. RISC-V, AVR) are likely to be included soon, too, so the main
folder of the QEMU sources slowly gets quite overcrowded with the
target-xxx folders.
To disburden the main folder a little bit, let's move the target-xxx
folders into a dedicated target/ folder, so that target-xxx/ simply
becomes target/xxx/ instead.
Backports commit fcf5ef2ab52c621a4617ebbef36bf43b4003f4c0 from qemu
Renamed from qemu/target-arm/op_helper.c