Implement the MDCR_EL3 register (which is SDCR for AArch32).
For the moment we implement it as reads-as-written.
Backports commit 5513c3abed8e5fabe116830c63f0d3fe1f94bd21 from qemu
We already modify the processor feature bits to not report EL3
support to the guest if EL3 isn't enabled for the CPU we're emulating.
Add similar support for not reporting EL2 unless it is enabled.
This is necessary because real world guest code running at EL3
(trusted firmware or bootloaders) will query the ID registers to
determine whether it should start a guest Linux kernel in EL2 or EL1.
Backports commit 3c2f7bb32b4c597925c5c7411307d51f1a56045d from qemu
Implement the inputsize > pamax check for Stage 2 translations.
This is CONSTRAINED UNPREDICTABLE and we choose to fault.
Backports commit 3526423e867765568ad95b8094ae8b4042cac215 from qemu
Rename check_s2_startlevel to check_s2_mmu_setup in preparation
for additional checks.
Backports commit a0e966c93a0968d29ef51447d08a6b7be6f4d757 from qemu
The S2 starting level table size check applies to both AArch32
and AArch64. Move it to common code.
Backports commit 98d68ec289750139258d9cd9ab3f6d7dd10bb762 from qemu
The AArch64 system registers DACR32_EL2, IFSR32_EL2, SPSR_IRQ,
SPSR_ABT, SPSR_UND and SPSR_FIQ are visible and fully functional from
EL3 even if the CPU has no EL2 (unlike some others which are RES0
from EL3 in that configuration). Move them from el2_cp_reginfo[] to
v8_cp_reginfo[] so they are always present.
Backports commit 6a43e0b6e1f6bcd6b11656967422f4217258200a from qemu
The AArch64 FPEXC32_EL2 system register is visible at EL2 and EL3,
and allows those exception levels to read and write the FPEXC
register for a lower exception level that is using AArch32.
Backports commit 03fbf20f4da58f41998dc10ec7542f65d37ba759 from qemu
The architecture requires that for an exception return to AArch32 the
low bits of ELR_ELx are ignored when the PC is set from them:
* if returning to Thumb mode, ignore ELR_ELx[0]
* if returning to ARM mode, ignore ELR_ELx[1:0]
We were only squashing bit 0; also squash bit 1 if the SPSR T bit
indicates this is a return to ARM code.
Backports commit c1e0371442bf3a7e42ad53c2a3d816ed7099f81d from qemu
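As a rough standalone illustration of the bit squashing described above (names
here are placeholders, not the QEMU code):

    #include <stdbool.h>
    #include <stdint.h>

    /* Mask the low bits of ELR_ELx when returning to AArch32: bit 0 for a
     * return to Thumb code, bits [1:0] for a return to ARM code. */
    static uint64_t aarch32_return_pc(uint64_t elr, bool spsr_t_bit)
    {
        return spsr_t_bit ? (elr & ~1ULL)   /* Thumb: ignore ELR_ELx[0]   */
                          : (elr & ~3ULL);  /* ARM:   ignore ELR_ELx[1:0] */
    }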
We already implement almost all the checks for the illegal
return events from AArch64 state described in the ARM ARM section
D1.11.2. Add the two missing ones:
* return to EL2 when EL3 is implemented and SCR_EL3.NS is 0
* return to Non-secure EL1 when EL2 is implemented and HCR_EL2.TGE is 1
(We don't implement external debug, so the case of "debug state exit
from EL0 using AArch64 state to EL0 using AArch32 state" doesn't apply
for QEMU.)
Backports commit e393f339af87da7210f6c86902b321df6a2e8bf5 from qemu
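A minimal sketch of just the two added checks, with illustrative parameter
names rather than the real CPU state fields:

    #include <stdbool.h>

    /* An exception return is treated as illegal if either of the two new
     * conditions holds (the existing checks are omitted here). */
    static bool extra_illegal_return(int target_el, bool target_secure,
                                     bool have_el3, bool scr_ns,
                                     bool have_el2, bool hcr_tge)
    {
        /* Return to EL2 when EL3 is implemented and SCR_EL3.NS is 0 */
        if (target_el == 2 && have_el3 && !scr_ns) {
            return true;
        }
        /* Return to Non-secure EL1 when EL2 is implemented and HCR_EL2.TGE is 1 */
        if (target_el == 1 && !target_secure && have_el2 && hcr_tge) {
            return true;
        }
        return false;
    }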
Remove the assumptions that the AArch64 exception return code was
making about a return to AArch32 always being a return to EL0.
This includes pulling out the illegal-SPSR checks so we can apply
them for a return to 32-bit as well as a return to 64-bit.
Backports commit 3809951bf61605974b91578c582de4da28f8ed07 from qemu
The entry offset when taking an exception to AArch64 from a lower
exception level may be 0x400 or 0x600. 0x400 is used if the
implemented exception level immediately lower than the target level
is using AArch64, and 0x600 if it is using AArch32. We were
incorrectly implementing this as checking the exception level
that the exception was taken from. (The two can be different if
for example we take an exception from EL0 to AArch64 EL3; we should
in this case be checking EL2 if EL2 is implemented, and EL1 if
EL2 is not implemented.)
Backports commit 3d6f761713745dfed7d2ccfe98077d213a6a6eba from qemu
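A minimal sketch of the corrected offset selection (illustrative names); the
key point is that the width passed in must be that of the implemented
exception level immediately below the target level, not of the level the
exception came from:

    #include <stdbool.h>
    #include <stdint.h>

    /* 0x400 if the EL immediately below the target EL runs AArch64,
     * 0x600 if it runs AArch32. */
    static uint32_t lower_el_entry_offset(bool lower_el_is_aa64)
    {
        return lower_el_is_aa64 ? 0x400 : 0x600;
    }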
Handling of semihosting calls should depend on the register width
of the calling code, not on that of any higher exception level,
so we need to identify and handle semihosting calls before we
decide whether to deliver the exception as an entry to AArch32
or AArch64. (EXCP_SEMIHOST is also an "internal exception" so
it has no target exception level in the first place.)
This will allow AArch32 EL1 code to use semihosting calls when
running under an AArch64 EL3.
Backports commit 904c04de2e1b425e7bc8c4ce2fae3d652eeed242 from qemu
If EL2 or EL3 is present on an AArch64 CPU, then exceptions can be
taken to an exception level which is running AArch32 (if only EL0
and EL1 are present then EL1 must be AArch64 and all exceptions are
taken to AArch64). To support this we need to have a single
implementation of the CPU do_interrupt() method which can handle both
32 and 64 bit exception entry.
Pull the common parts of aarch64_cpu_do_interrupt() and
arm_cpu_do_interrupt() out into a new function which calls
either the AArch32 or AArch64 specific entry code once it has
worked out which one is needed.
We temporarily special-case the handling of EXCP_SEMIHOST to
avoid an assertion in arm_el_is_aa64(); the next patch will
pull all the semihosting handling out to the arm_cpu_do_interrupt()
level (since semihosting semantics depend on the register width
of the calling code, not on that of any higher EL).
Backports commit 966f758c49ff478c4757efa5970ce649161bff92 from qemu
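A rough sketch of the resulting shape of the common entry point; the real
function is arm_cpu_do_interrupt(), and the two helpers below are stand-ins
for the AArch32 and AArch64 entry code it dispatches to:

    #include <stdbool.h>

    static void take_exception_aarch32(void) { /* AArch32 exception entry */ }
    static void take_exception_aarch64(void) { /* AArch64 exception entry */ }

    /* Once the target EL and its register width have been worked out,
     * hand off to the matching entry routine. */
    static void do_interrupt(bool target_el_is_aa64)
    {
        if (target_el_is_aa64) {
            take_exception_aarch64();
        } else {
            take_exception_aarch32();
        }
    }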
Move the aarch64_cpu_do_interrupt() function to helper.c. We want
to be able to call this from code that isn't AArch64-only, and
the move allows us to avoid awkward #ifdeffery at the callsite.
Backports commit f3a9b6945cbbb23f3a70da14e9ffdf1e60c580a8 from qemu
Support EL2 and EL3 in arm_el_is_aa64() by implementing the
logic for checking the SCR_EL3 and HCR_EL2 register-width bits
as appropriate to determine the register width of lower exception
levels.
Backports commit 446c81abf8e0572b8d5d23fe056516ac62af278d from qemu
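A simplified standalone sketch of the rule (it ignores the secure-state
details and uses placeholder names): the highest implemented EL uses the
CPU's native width, SCR_EL3.RW selects the width of the next lower
implemented level, HCR_EL2.RW selects the width of EL1/EL0, and a lower
level can never be wider than a higher one:

    #include <stdbool.h>

    typedef struct {
        bool has_el3, has_el2;
        bool scr_rw;                /* SCR_EL3.RW */
        bool hcr_rw;                /* HCR_EL2.RW */
        bool top_el_is_aa64;        /* width of the highest implemented EL */
    } width_cfg;

    static bool el_is_aa64(const width_cfg *c, int el)
    {
        bool aa64 = c->top_el_is_aa64;

        if (aa64 && c->has_el3 && el < 3) {
            aa64 = c->scr_rw;       /* width of the level below EL3 */
        }
        if (aa64 && c->has_el2 && el < 2) {
            aa64 = c->hcr_rw;       /* width of EL1/EL0 */
        }
        return aa64;
    }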
If we have a secure address space, use it in page table walks:
when doing the physical accesses to read descriptors, make them
through the correct address space.
(The descriptor reads are the only direct physical accesses
made in target-arm/ for CPUs which might have TrustZone.)
Backports commit 5ce4ff6502fc6ae01a30c3917996c6c41be1d176 from qemu
Implement the asidx_from_attrs CPU method to return the
Secure or NonSecure address space as appropriate.
(The function is inline so we can use it directly in target-arm
code to be added in later patches.)
Backports commit 017518c1f6ed9939c7f390cb91078f0919b5494c from qemu
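The mapping itself is a one-liner; a sketch with placeholder names:

    /* Secure transactions use the secure address space index, everything
     * else the non-secure one. */
    enum { ASIDX_NS = 0, ASIDX_S = 1 };

    static int asidx_from_secure_attr(int attrs_secure)
    {
        return attrs_secure ? ASIDX_S : ASIDX_NS;
    }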
Add QOM property to the ARM CPU which boards can use to tell us what
memory region to use for secure accesses. Nonsecure accesses
go via the memory region specified with the base CPU class 'memory'
property.
By default, if no secure region is specified it is the same as the
nonsecure region, and if no nonsecure region is specified we will use
address_space_memory.
Backports commit 9e273ef2174d7cd5b14a16d8638812541d3eb6bb from qemu
The TARGET_HAS_ICE #define is intended to indicate whether a target-*
guest CPU implementation supports the breakpoint handling. However,
all our guest CPUs have that support (the only two which do not
define TARGET_HAS_ICE are unicore32 and openrisc, and in both those
cases the bp support is present and the lack of the #define is just
a bug). So remove the #define entirely: all new guest CPU support
should include breakpoint handling as part of the basic implementation.
Backports commit ec53b45bcd1f74f7a4c31331fa6d50b402cd6d26 from qemu
Clean up includes so that osdep.h is included first and headers
which it implies are not included manually.
This commit was created with scripts/clean-includes.
Backports commit 74c21bd07491739c6e56bcb1f962e4df730e77f3 from qemu
arm_regime_using_lpae_format checks whether the LPAE extension is used
for stage 1 translation regimes. MMU indexes that are not exclusively
stage 1 regimes won't work with this method.
In the case of ARMMMUIdx_S12NSE0 or ARMMMUIdx_S12NSE1, offset these values
by ARMMMUIdx_S1NSE0 to get the right index indicating a stage 1
translation regime.
Also rename the function to arm_s1_regime_using_lpae_format and update
the comments to reflect the change.
Backports commit deb2db996cbb9470b39ae1e383791ef34c4eb3c2 from qemu
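A standalone sketch of the index fixup (the enum values mirror the idea, not
necessarily the exact QEMU definitions): because the combined stage 1+2
indexes start at zero, adding the first stage-1-only index maps them onto
their stage 1 equivalents:

    typedef enum {
        MMUIDX_S12NSE0 = 0,
        MMUIDX_S12NSE1 = 1,
        /* ... other regimes ... */
        MMUIDX_S1NSE0  = 6,
        MMUIDX_S1NSE1  = 7,
    } mmu_idx_t;

    static mmu_idx_t to_stage1_index(mmu_idx_t idx)
    {
        if (idx == MMUIDX_S12NSE0 || idx == MMUIDX_S12NSE1) {
            idx = (mmu_idx_t)(idx + MMUIDX_S1NSE0);   /* 0 -> 6, 1 -> 7 */
        }
        return idx;
    }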
QEMU does not generally perform alignment checks. However, the ARM ARM
requires implementation of alignment exceptions for a number of cases
including LDREX, and Windows-on-ARM relies on this.
This change adds plumbing to enable alignment checks on loads using
MO_ALIGN, a do_unaligned_access hook to raise the exception (data
abort), and uses the new aligned loads in LDREX (for all but
single-byte loads).
Backports commit 30901475b91ef1f46304404ab4bfe89097f61b96 from qemu
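The alignment rule being enforced is simple; a minimal sketch, leaving out
the TCG plumbing:

    #include <stdbool.h>
    #include <stdint.h>

    /* An access of size (1 << log2_size) is aligned when the low address
     * bits are clear; if not, the do_unaligned_access hook raises a data
     * abort.  LDREX uses this for all but single-byte loads. */
    static bool access_is_aligned(uint64_t addr, unsigned log2_size)
    {
        uint64_t mask = ((uint64_t)1 << log2_size) - 1;
        return (addr & mask) == 0;
    }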
In an LPAE format descriptor in ARMv8 the address field extends
up to bit 47, not just bit 39. Correct the masking so we don't
give incorrect results if the output address size is greater
than 40 bits, as it can be for AArch64.
(Note that we don't yet support the new-in-v8 Address Size fault which
should be generated if any translation table entry or TTBR contains
an address with non-zero bits above the most significant bit of the
maximum output address size.)
Backports commit 6109769a8b42bd0c3d5b1601c9b35fe7ea6a603e from qemu
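A sketch of the corrected masking for the common 4KB-granule case
(illustrative, not the exact QEMU expression):

    #include <stdint.h>

    /* The output address field of an ARMv8 LPAE descriptor occupies
     * bits [47:12] with a 4KB granule, not just bits [39:12]. */
    static uint64_t descriptor_address(uint64_t descriptor)
    {
        return descriptor & 0x0000fffffffff000ULL;   /* keep bits [47:12] */
    }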
The architectural breakpoint check could raise an exception, so the condexec
bits should be updated before calling gen_helper_check_breakpoints().
Backports commit ce8a1b5449cd8c4c2831abb581d3208c3a3745a0 from qemu
Coprocessor access instructions are allowed inside an IT block.
gen_helper_access_check_cp_reg() can raise an exception, so the condexec
bits should be updated beforehand.
Backports commit 43bfa4a100687af8d293fef0a197839b51400fca from qemu
AArch32 translation code does not distinguish between DISAS_UPDATE and
DISAS_JUMP. Thus, we cannot use either of them without first updating the PC
in CPU state. Furthermore, it is too complicated to update the PC in CPU
state before the PC gets updated in the disas context. So it is hardly
possible to correctly end a TB early, before calling disas_*_insn(), when it
is unlikely to be executed, e.g. just after calling the breakpoint check
helper.
Modify DISAS_UPDATE and DISAS_JUMP usage in AArch32 translation and
apply to them the same semantics as the AArch64 translation uses:
- DISAS_UPDATE: update PC in CPU state when finishing translation
- DISAS_JUMP: preserve current PC value in CPU state when finishing
translation
This patch fixes a bug in AArch32 breakpoint handling: when the
check_breakpoints helper does not generate an exception, ending the TB
early with DISAS_UPDATE does not update the PC in CPU state, so execution
hangs.
Backports commit 577bf808958d06497928c639efaa473bf8c5e099 from qemu
Do not raise a CPU exception if no CPU breakpoint has fired, since
singlestep is also done by generating a debug internal exception. This
fixes a bug with singlestepping in gdbstub.
Backports commit 5c629f4ff4dc9ae79cc732f59a8df15ede796ff7 from qemu
If this CPU supports EL3, enhance the printing of the current
CPU mode in debug logging to distinguish S from NS modes as
appropriate.
Backports commit 06e5cf7acd1f94ab7c1cd6945974a1f039672940 from qemu
The AArch64 debug CPU display of PSTATE as "PSTATE=200003c5 (flags --C-)"
on the end of the same line as the last of the general purpose registers
is unnecessarily different from the AArch32 display of PSR as
"PSR=200001d3 --C- A svc32" on its own line. Update the AArch64
code to put PSTATE in its own line and in the same format, including
printing the exception level (mode).
Backports commit 08b8e0f527930208a548b424d2ab3103bf3c8c02 from qemu
Some targets already had this within their logic, but make sure
it's present for all targets.
Backports commit 522a0d4e3c0d397ffb45ec400d8cbd426dad9d17 from qemu
Reduce the boilerplate required for each target. At the same time,
move the test for breakpoint after calling tcg_gen_insn_start.
Note that arm and aarch64 do not use cpu_breakpoint_test, but still
move the inline test down after tcg_gen_insn_start.
Backports commit b933066ae03d924a92b2616b4a24e7d91cd5b841 from qemu
Introduce ARMMMUFaultInfo to propagate MMU Fault information
across the MMU translation code path. This is in preparation for
adding Stage-2 translation.
No functional changes.
Backports commit e14b5a23d8c83304559f31397f95d22ada60a19a from qemu
The starting level for S2 pagetable walks is computed
differently from the S1 starting level. Implement the S2
variant.
Backports commit 1853d5a9dcac910322c6cc5b2fddec45fd052d25 from qemu
Rename granule_sz to stride to better match the reference manuals.
No functional change.
Backports commit 973a5434825c076995218868b5b3047e5de400c6 from qemu
Remove the tsz variable and introduce inputsize.
This simplifies the code a little and makes it easier to
compare with the reference manuals.
No functional change.
Backports commit 4ca6a051758edf625a17dfc4ce4ab72edabac170 from qemu
Add support for AArch32 S2 negative t0sz, in preparation for
using 40-bit IPAs on AArch32.
Backports commit 4ee38098010240e0b390061fdd0151ff62d80279 from qemu
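A standalone sketch of how a negative t0sz gives a 40-bit IPA (the field
handling is illustrative): VTCR.T0SZ is a signed 4-bit field, so t0sz ranges
from -8 to 7 and the AArch32 stage 2 input size is 32 - t0sz, reaching 40
bits when t0sz is -8:

    #include <stdint.h>

    static int aarch32_s2_inputsize(uint32_t vtcr)
    {
        int t0sz = vtcr & 0xf;      /* T0SZ is bits [3:0] of VTCR */
        if (t0sz & 0x8) {
            t0sz -= 16;             /* sign-extend the 4-bit field */
        }
        return 32 - t0sz;           /* 40-bit IPA when t0sz == -8 */
    }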
Move declaration of t0sz and t1sz to the top of the function
avoiding a mix of code and variable declarations.
No functional change.
Backports commit 1f4c8c18a5b6f4fad13e13b7e3828124c6c8f34d from qemu
Make t0sz and t1sz signed integers to match tsz and to make
it easier to implement support for AArch32 negative t0sz.
t1sz is changed for consistency.
No functional change.
Backports commit 5c31a10d16c595d6a59e3e7fc1808c3b1d03e02f from qemu