Split out gen_top_byte_ignore in preparation for handling top-byte-ignore
on data accesses; the new tbflags field is not yet honored.
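For reference, when TBI applies the top byte of the address takes no part
in translation; in the simplest case (TBI enabled for both address-space
halves) the cleaning amounts to copying bit 55 into bits [63:56]. A
standalone C sketch of that idea, not the TCG helper itself:

    #include <stdint.h>

    /* Illustrative only: replace bits [63:56] with copies of bit 55,
     * which is what ignoring the top byte amounts to when TBI is
     * enabled for both halves of the address space. */
    static uint64_t clean_addr_tbi(uint64_t addr, int tbi_enabled)
    {
        if (tbi_enabled) {
            if (addr & (1ull << 55)) {
                addr |= 0xFF00000000000000ull;
            } else {
                addr &= ~0xFF00000000000000ull;
            }
        }
        return addr;
    }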
Backports commit 4a9ee99db38ba513bf1e8f43665b79c60accd017 from qemu
Caching the bit means that we will not have to re-walk the
page tables to look up the bit during translation.
Backports commit 1bafc2ba7e6bfe89fff3503fdac8db39c973de48 from qemu
Whenever we notice that a counter overflow has occurred, send an
interrupt. This is made more reliable with the addition of a timer in a
follow-on commit.
Backports commit f4efb4b2a17528837cb445f9bdfaef8df4a5acf7 from qemu
A bug was introduced during a respin of:
commit 57a4a11b2b281bb548b419ca81bfafb214e4c77a
target/arm: Add array for supported PMU events, generate PMCEID[01]_EL0
This patch introduced two calls to get_pmceid() during CPU
initialization - one each for PMCEID0 and PMCEID1. In addition to
building the register values, get_pmceid() clears an internal array
mapping event numbers to their implementations (supported_event_map)
before rebuilding it. This is an optimization since much of the logic is
shared. However, since it was called twice, the contents of
supported_event_map reflect only the events in PMCEID1 (the second call
to get_pmceid()).
Fix this bug by moving the initialization of PMCEID0 and PMCEID1 back
into a single function call, and name it more appropriately since it is
doing more than simply generating the contents of the PMCEID[01]
registers.
Backports commit bf8d09694ccc07487cd73d7562081fdaec3370c8 from qemu
When tsz == 0, aarch32 selects the address space via exclusion,
and there are no "top_bits" remaining that require validation.
Fixes: ba97be9f4a4
Backports commit 36d820af0eddf4fc6a533579b052d8f0085a9fb8 from qemu
This both advertises that we support four counters and enables them,
because pmu_num_counters() reads this value from PMCR.
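For context, a standalone sketch of why one value does both jobs: the
counter count is read straight out of PMCR.N (bits [15:11]), so whatever
goes into the PMCR reset value is both what is advertised and what the
rest of the PMU code believes exists. This mirrors the idea of
pmu_num_counters() but is not a copy of it:

    #include <stdint.h>

    /* PMCR.N holds the number of implemented event counters. */
    static unsigned pmu_num_counters(uint32_t pmcr)
    {
        return (pmcr >> 11) & 0x1f;   /* PMCR.N is bits [15:11] */
    }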
Backports commit ac689a2e5155d129acaa39603e2a7a29abd90d89 from qemu
The instruction event is only enabled when icount is used; cycles are
always supported. Always defining get_cycle_count (but altering its
behavior depending on CONFIG_USER_ONLY) allows us to remove some
CONFIG_USER_ONLY #defines throughout the rest of the code.
Backports commit b2e2372511946fae86fbb8709edec7a41c6f3167 from qemu
Add arrays to hold the registers, the definitions themselves, access
functions, and logic to reset counters when PMCR.P is set. Update
filtering code to support counters other than PMCCNTR. Support migration
with raw read/write functions.
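A standalone sketch of the reset-on-PMCR.P part; the array and its size
are stand-ins for the new per-counter state, not the actual qemu fields:

    #include <stdint.h>

    #define PMCR_P (1u << 1)            /* event counter reset bit */

    static uint64_t pmevcntr[31];       /* stand-in for the new arrays */

    static void pmcr_write(uint32_t value)
    {
        if (value & PMCR_P) {
            /* PMCR.P clears the event counters, not the cycle counter */
            for (unsigned i = 0; i < 31; i++) {
                pmevcntr[i] = 0;
            }
        }
        /* ... store the writable PMCR bits, handle PMCR.C, etc. ... */
    }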
Backports commit 5ecdd3e47cadae83a62dc92b472f1fe163b56f59 from qemu
This commit doesn't add any supported events, but provides the framework
for adding them. We store the pm_event structs in a simple array, and
provide the mapping from the event numbers to array indexes in the
supported_event_map array. Because the value of PMCEID[01] depends upon
which events are supported at runtime, generate it dynamically.
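A standalone sketch of the shape of this framework; the struct layout,
array sizes and the two sample events are illustrative, not the exact
qemu definitions:

    #include <stdint.h>

    typedef struct {
        uint16_t number;             /* architectural event number */
        int (*supported)(void);      /* available on this configuration? */
    } pm_event;

    static int event_always(void) { return 1; }

    static const pm_event pm_events[] = {
        { 0x000, event_always },     /* SW_INCR    */
        { 0x011, event_always },     /* CPU_CYCLES */
    };

    static int16_t supported_event_map[0x40];

    /* Build PMCEID0 (events 0x00-0x1f) and PMCEID1 (0x20-0x3f) from the
     * events that report themselves as supported, recording the array
     * index of each supported event as we go. */
    static void build_pmceid(uint64_t *pmceid0, uint64_t *pmceid1)
    {
        *pmceid0 = *pmceid1 = 0;
        for (unsigned i = 0; i < 0x40; i++) {
            supported_event_map[i] = -1;
        }
        for (unsigned i = 0; i < sizeof(pm_events) / sizeof(pm_events[0]); i++) {
            uint16_t n = pm_events[i].number;
            if (n < 0x40 && pm_events[i].supported()) {
                supported_event_map[n] = (int16_t)i;
                if (n < 0x20) {
                    *pmceid0 |= 1ull << n;
                } else {
                    *pmceid1 |= 1ull << (n - 0x20);
                }
            }
        }
    }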
Backports commit 57a4a11b2b281bb548b419ca81bfafb214e4c77a from qemu
Rename arm_ccnt_enabled to pmu_counter_enabled, and add logic to only
return 'true' if the specified counter is enabled and neither prohibited
nor filtered.
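A standalone sketch of the shape of the check; the prohibited/filtered
decisions themselves depend on PMCR, the per-counter event filter and the
current EL/security state, and are taken as inputs here:

    #include <stdbool.h>
    #include <stdint.h>

    /* A counter only counts when the PMU is globally enabled, the
     * counter's own enable bit is set, and the event is neither
     * prohibited in the current state nor excluded by its filter. */
    static bool pmu_counter_enabled_sketch(bool pmcr_e, uint32_t pmcntenset,
                                           unsigned counter,
                                           bool prohibited, bool filtered)
    {
        if (!pmcr_e || !(pmcntenset & (1u << counter))) {
            return false;
        }
        return !prohibited && !filtered;
    }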
Backports commit 033614c47de78409ad3fb39bb7bd1483b71c6789 from qemu
Because of the PMU's design, many register accesses have side effects
which are inter-related, meaning that the normal method of saving CP
registers can result in inconsistent state. These side-effects are
largely handled in pmu_op_start/finish functions which can be called
before and after the state is saved/restored. By doing this and adding
raw read/write functions for the affected registers, we avoid
migration-related inconsistencies.
Backports relevant parts of commit
980ebe87053792a5bdefaa87777c40914fd4f673 from qemu
pmccntr_read and pmccntr_write contained duplicate code that was already
being handled by pmccntr_sync. Consolidate the duplicated code into two
functions: pmccntr_op_start and pmccntr_op_finish. Add a companion to
c15_ccnt in CPUARMState so that we can simultaneously save both the
architectural register value and the last underlying cycle count - this
ensures time isn't lost and will also allow us to access the 'old'
architectural register value in order to detect overflows in later
patches.
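A standalone sketch of the start/finish pairing and of the companion
field; the names and the cycle source are stand-ins, not the actual
CPUARMState members:

    #include <stdint.h>

    typedef struct {
        uint64_t counter;    /* architectural PMCCNTR value            */
        uint64_t snapshot;   /* underlying cycle count at last finish  */
    } ccnt_state;

    /* Fold the cycles elapsed since the last finish into the
     * architectural value, so register accesses (and overflow checks
     * against the previous value) see up-to-date state. */
    static void ccnt_op_start(ccnt_state *s, uint64_t cycles_now)
    {
        s->counter += cycles_now - s->snapshot;
    }

    /* Remember where counting resumed, so the next start adds exactly
     * the cycles that passed in between and no time is lost. */
    static void ccnt_op_finish(ccnt_state *s, uint64_t cycles_now)
    {
        s->snapshot = cycles_now;
    }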
Backports commit 5d05b9d462666ed21b7fef61aa45dec9aaa9f0ff from qemu
The arm_regime_tbi{0,1} functions are replaceable with the new function
by giving the lowest and highest address.
Backports commit 5d8634f5a3a8474525edcfd581a659830e9e97c0 from qemu
Use TBID in aa64_va_parameters depending on the data parameter.
This automatically updates all existing users of the function.
Backports commit 8220af7e4d34c858898fbfe55943aeea8f4e875f from qemu
We need to reuse this from helper-a64.c. Provide a stub
definition for CONFIG_USER_ONLY. This matches the stub
definitions that we removed for arm_regime_tbi{0,1} before.
Backports commit bf0be433878935e824479e8ae890493e1fb646ed from qemu
We will shortly want to talk about TBI as it relates to data.
Passing around a pair of variables is less convenient than a
single variable.
Backports commit 476a4692f06e381117fb7ad0d04d37c9c2612198 from qemu
Split out functions to extract the virtual address parameters.
Let the functions choose T0 or T1 address space half, if present.
Extract (most of) the control bits that vary between EL or Tx.
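A standalone sketch of the idea; the field names and selection rule are
simplified, and the real function extracts more control bits per half
than shown here:

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        unsigned tsz;      /* region size control for the selected half */
        bool     tbi;      /* top-byte-ignore for the selected half     */
        bool     select;   /* false: T0/TTBR0 half, true: T1/TTBR1 half */
    } va_parameters;

    /* Choose the T0 or T1 half by the top address bit, then return the
     * control bits that belong to that half in one bundle. */
    static va_parameters get_va_parameters(uint64_t va,
                                           unsigned t0sz, bool tbi0,
                                           unsigned t1sz, bool tbi1)
    {
        bool select = (va >> 55) & 1;
        va_parameters p = {
            .tsz    = select ? t1sz : t0sz,
            .tbi    = select ? tbi1 : tbi0,
            .select = select,
        };
        return p;
    }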
Backports commit ba97be9f4a4ecaf16a1454dc669e5f3d935d3b63 from qemu
While we could expose stage_1_mmu_idx, the combination is
probably going to be more useful.
Backports commit 64be86ab1b5ef10b660a4230ee7f27c0da499043 from qemu
The pattern
ARMMMUIdx mmu_idx = core_to_arm_mmu_idx(env, cpu_mmu_index(env, false));
is computing the full ARMMMUIdx, stripping off the ARM bits,
and then putting them back.
Avoid the extra two steps with the appropriate helper function.
Backports commit 50494a279dab22a015aba9501a94fcc3cd52140e from qemu
There are 5 bits of state that could be added, but to save
space within tbflags, add only a single enable bit.
Helpers will determine the rest of the state at runtime.
Backports commit 0816ef1bfcd3ac53e7454b62ca436727887f6056 from qemu
In U-boot, we switch from S-SVC -> Mon -> Hyp mode when we want to
enter Hyp mode. The change into Hyp mode is done by doing an
exception return from Mon. This doesn't work with current QEMU.
The problem is that in bad_mode_switch() we refuse to allow
the change of mode.
Note that bad_mode_switch() is used to do validation for two situations:
(1) changes to mode by instructions writing to CPSR.M
(ie not exception take/return) -- this corresponds to the
Armv8 Arm ARM pseudocode AArch32.WriteModeByInstr
(2) changes to mode by exception return
Attempting to enter or leave Hyp mode via case (1) is forbidden in
v8 and UNPREDICTABLE in v7, and QEMU is correct to disallow it
there. However, we're already doing that check at the top of the
bad_mode_switch() function, so if that passes then we should allow
the case (2) exception return mode changes to switch into Hyp mode.
We want to test whether we're trying to return to the nonexistent
"secure Hyp" mode, so we need to look at arm_is_secure_below_el3()
rather than arm_is_secure(), since the latter is always true if
we're in Mon (EL3).
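A standalone model of the case (2) decision described above; the
parameters stand in for arm_feature(ARM_FEATURE_EL2) and
arm_is_secure_below_el3(), and the real check carries more context:

    #include <stdbool.h>

    /* An exception return into Hyp is only rejected when EL2 does not
     * exist at all, or when it would land in the nonexistent
     * "Secure Hyp" mode. */
    static bool bad_return_to_hyp(bool have_el2, bool secure_below_el3)
    {
        return !have_el2 || secure_below_el3;
    }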
Backports commit 2d2a4549cc29850aab891495685a7b31f5254b12 from qemu
Use "register" TBFLAG_ANY to indicate shared state between
A32 and A64, and "registers" TBFLAG_A32 & TBFLAG_A64 for
fields that are specific to the given cpu state.
Move ARM_TBFLAG_BE_DATA to shared state, instead of its current
placement within "Bit usage when in AArch32 state".
Backports commit aad821ac4faad369fad8941d25e59edf2514246b from qemu
Provide a trivial implementation with zero limited ordering regions,
which causes the LDLAR and STLLR instructions to devolve into the
LDAR and STLR instructions from the base ARMv8.0 instruction set.
Backports commit 2d7137c10fafefe40a0a049ff8a7bd78b66e661f from qemu
Since arm_hcr_el2_eff includes a check against
arm_is_secure_below_el3, we can often remove a
nearby check against secure state.
In some cases, sort the call to arm_hcr_el2_eff
to the end of a short-circuit logical sequence.
Backports commit 7c208e0f4171c9e2cc35efc12e1bf264a45c229f from qemu
Replace arm_hcr_el2_{fmo,imo,amo} with a more general routine
that also takes SCR_EL3.NS (aka arm_is_secure_below_el3) into
account, as documented for the plethora of bits in HCR_EL2.
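A standalone sketch of the core idea; the real routine also handles the
TGE/E2H special cases, this just shows why SCR_EL3.NS must be folded in:

    #include <stdint.h>

    /* When the CPU is Secure below EL3 (SCR_EL3.NS == 0), EL2 is not in
     * the picture and the HCR_EL2 traps and overrides have no effect,
     * so the "effective" value callers should test is zero. */
    static uint64_t hcr_el2_eff_sketch(uint64_t hcr_el2, int secure_below_el3)
    {
        return secure_below_el3 ? 0 : hcr_el2;
    }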
Backports commit f77784446045231f7dfa46c9b872091241fa1557 from qemu
The bulk of the work here, beyond base HPD, is defining the
TTBCR2 register. In addition we must check TTBCR.T2E, which
is not present (RES0) for AArch64.
Backports commit ab638a328fd099ba0b23c8c818eb39f2c35414f3 from qemu
Since the TCR_*.HPD bits were RES0 in ARMv8.0, we can simply
interpret the bits as if ARMv8.1-HPD is present without checking.
We will need a slightly different check for hpd for aarch32.
Backports commit 037c13c5904f5fc67bb0ab7dd91ae07347aedee9 from qemu
Because EL3 has a fixed execution mode, we can properly decide
which of the bits are RES{0,1}.
Backports commit ea22747c63c9a894777aa41a7af85c3d08e39f81 from qemu
The enable for TGE has already occurred within arm_hcr_el2_amo
and friends. Moreover, when E2H is also set, the sense is
supposed to be reversed, which has also already occurred within
the helpers.
Backports commit 619959c3583dad325c36f09ce670e7d091382cae from qemu
At the same time, define the fields for these registers,
and use those defines in arm_pamax().
Backports commit 3dc91ddbc68391f934bf6945853e99cf6810fc00 from qemu
Hyp mode is an exception to the general rule that each AArch32
mode has its own r13, r14 and SPSR -- it has a banked r13 and
SPSR but shares its r14 with User and System mode. We were
incorrectly implementing it as banked, which meant that on
entry to Hyp mode r14 was 0 rather than the USR/SYS r14.
We provide a new function r14_bank_number() which is like
the existing bank_number() but provides the index into
env->banked_r14[]; bank_number() provides the index to use
for env->banked_r13[] and env->banked_cpsr[].
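The helper is essentially a one-liner of this shape, written here against
the existing bank_number() and the USR/SYS bank constant; treat it as a
sketch rather than a verbatim quote:

    static inline int r14_bank_number(int mode)
    {
        /* Hyp shares its r14 with User/System, so index the USR/SYS
         * slot; every other mode keeps the ordinary bank mapping. */
        return (mode == ARM_CPU_MODE_HYP) ? BANK_USRSYS : bank_number(mode);
    }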
All the points in the code that were using bank_number()
to index into env->banked_r14[] are updated for consistency:
* switch_mode() -- this is the only place where we fix
an actual bug
* aarch64_sync_32_to_64() and aarch64_sync_64_to_32():
no behavioural change as we already special-cased Hyp R14
* kvm32.c: no behavioural change since the guest can't ever
be in Hyp mode, but conceptually the right thing to do
* msr_banked()/mrs_banked(): we can never get to the case
that accesses banked_r14[] with tgtmode == ARM_CPU_MODE_HYP,
so no behavioural change
Backports commit 593cfa2b637b92d37eef949653840dc065cdb960 from qemu
In commit 8a0fc3a29fc2315325400 we tried to implement HCR_EL2.{VI,VF},
but we got it wrong and had to revert it.
In that commit we implemented them as simply tracking whether there
is a pending virtual IRQ or virtual FIQ. This is not correct -- these
bits cause a software-generated VIRQ/VFIQ, which is distinct from
whether there is a hardware-generated VIRQ/VFIQ caused by the
external interrupt controller. So we need to track separately
the HCR_EL2 bit state and the external virq/vfiq line state, and
OR the two together to get the actual pending VIRQ/VFIQ state.
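A standalone sketch of the resulting pending computation; HCR_EL2.VI is
architecturally bit 7, and the external line stands in for the state
latched from the interrupt controller:

    #include <stdbool.h>
    #include <stdint.h>

    #define HCR_VI (1ULL << 7)

    /* The guest sees a pending VIRQ if either the hypervisor has set
     * HCR_EL2.VI or the external virtual IRQ line is asserted; the two
     * sources are tracked separately and OR'd together. */
    static bool virq_pending(uint64_t hcr_el2, bool external_virq_line)
    {
        return (hcr_el2 & HCR_VI) || external_virq_line;
    }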
Fixes: 8a0fc3a29fc2315325400c738f807d0d4ae0ab7f
Backports commit 89430fc6f80a5aef1d4cbd6fc26b40c30793786c from qemu
This reverts commit 8a0fc3a29fc2315325400c738f807d0d4ae0ab7f.
The implementation of HCR.VI and VF in that commit is not
correct -- they do not track the overall "is there a pending
VIRQ or VFIQ" status, but whether there is a pending interrupt
due to "this mechanism", ie the hypervisor having set the VI/VF
bits. The overall pending state for VIRQ and VFIQ is effectively
the logical OR of the inbound lines from the GIC with the
VI and VF bits. Commit 8a0fc3a29fc231 would result in pending
VIRQ/VFIQ possibly being lost when the hypervisor wrote to HCR.
As a preliminary to implementing the HCR.VI/VF feature properly,
revert the broken one entirely.
Backports commit c624ea0fa7ffc9e2cc3e2b36c92b5c960954489f from qemu
Before we supported direct execution from MMIO regions, we
implemented workarounds in commit 720424359917887c926a33d2
which let us avoid doing so, even if the SAU or MPU region
was less than page-sized.
Once we implemented execute-from-MMIO, we removed part
of those workarounds in commit d4b6275df320cee76; but
we forgot the one in get_phys_addr_pmsav8() which
suppressed use of small SAU regions in executable regions.
Remove that workaround now.
Backports commit 521ed6b4015ba39a2e39c65a94643f3e6412edc4 from qemu
Now that we have full support for small regions, including execution,
we can remove the workarounds where we marked all small regions as
non-executable for the M-profile MPU and SAU.
Backports commit d4b6275df320cee764d56b194b1898547f545857 from qemu
Remove a TODO comment about implementing the vectored interrupt
controller. We have had an implementation of that for a decade;
it's in hw/intc/pl190.c.
Backports commit e24ad484909e7a00ca4f6332f3698facf0ba3394 from qemu
ATS1HR and ATS1HW (which allow AArch32 EL2 to do address translations
on the EL2 translation regime) were implemented in commit 14db7fe09a2c8.
However, we got them wrong: these should do stage 1 address translations
as defined for NS-EL2, which is ARMMMUIdx_S1E2. We were incorrectly
making them perform stage 2 translations.
A few years later in commit 1313e2d7e2cd we forgot entirely that
we'd implemented ATS1Hx, and added a comment that ATS1Hx were
"not supported yet". Remove the comment; there is no extra code
needed to handle these operations in do_ats_write(), because
arm_s1_regime_using_lpae_format() returns true for ARMMMUIdx_S1E2,
which forces 64-bit PAR format.
Backports commit 23463e0e4aeb2f0a9c60549a2c163f4adc0b8512 from qemu
In do_ats_write() we construct a PAR value based on the result
of the translation. A comment says "S2WLK and FSTAGE are always
zero, because we don't implement virtualization".
Since we do in fact now implement virtualization, add the missing
code that sets these bits based on the reported ARMMMUFaultInfo.
(These bits are named PTW and S in ARMv8, so we follow that
convention in the new comments in this patch.)
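A standalone sketch of the two bits being filled in; the inputs stand in
for the s1ptw and stage2 fields of ARMMMUFaultInfo:

    #include <stdbool.h>
    #include <stdint.h>

    /* On a faulting translation, PAR bit 8 (PTW) reports a stage 2
     * fault taken during the stage 1 page-table walk, and bit 9 (S)
     * reports that the fault itself was a stage 2 fault. */
    static uint64_t par_fault_bits(bool s1ptw, bool stage2)
    {
        uint64_t par = 0;
        if (s1ptw) {
            par |= 1ull << 8;   /* PTW */
        }
        if (stage2) {
            par |= 1ull << 9;   /* S */
        }
        return par;
    }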
Backports commit 0f7b791b35f24cb1333f779705a3f6472e6935de from qemu
Since QEMU does not implement ASIDs, changes to the ASID must flush the
TLB. However, if the ASID does not change, there is no reason to flush.
In testing a boot of the Ubuntu installer to the first menu, this reduces
the number of flushes by 30%, or nearly 600k instances.
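A standalone sketch of the check for the 64-bit TTBR format, where the
ASID lives in bits [63:48]; the flush callback stands in for tlb_flush():

    #include <stdint.h>

    /* QEMU's TLB is not tagged with ASIDs, so a change of ASID must
     * flush everything; if only the base address changes, the guest is
     * responsible for its own TLB maintenance, as on real hardware. */
    static void ttbr_write_sketch(uint64_t *reg, uint64_t value,
                                  void (*flush_tlb)(void))
    {
        if (((*reg ^ value) >> 48) != 0) {
            flush_tlb();
        }
        *reg = value;
    }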
Backports commit 93f379b0c43617b1361f742f261479eaed4959cb from qemu
The EL3 version of this register does not include an ASID,
and so the tlb_flush performed by vmsa_ttbr_write is not needed.
Backports commit f478847f1ee0df9397f561025ab2f687fd923571 from qemu