There is a CPU data endianness test that is used to drive the
virtio_big_endian test.
Move this up to the header so it can be more generally used for endian
tests. The KVM specific cpu_synchronize_state call is left behind in the
virtio specific function.
Rename it to arm_cpu_data_is_big_endian() to more accurately capture that
this is for data accesses only.
Backports commit ed50ff7875d61a75517c92deb0444d73fbbca878 from qemu
bswap_code is a CPU property of sorts ("is the iside endianness the
opposite way round to TARGET_WORDS_BIGENDIAN?") but it is not the
actual CPU state involved here which is SCTLR.B (set for BE32
binaries, clear for BE8).
Replace bswap_code with SCTLR.B, and pass that to arm_ld*_code.
The next patches will make data fetches honor both SCTLR.B and
CPSR.E appropriately.
Backports commit f9fd40ebe4f55e0048e002925b8d65e66d56e7a7 from qemu
In helper.c the expression
(env->uncached_cpsr & CPSR_M) != CPSR_USER
is always true; the right hand side was supposed to be ARM_CPU_MODE_USR
(an error in commit cb01d391).
Since the incorrect expression was always true, this just meant that
commit cb01d391 had no effect.
However simply changing the RHS here would reveal a logic error: if
the mode is USR we wish to completely ignore the attempt to set the
mode bits, which means that we must clear the CPSR_M bits from mask
to avoid the uncached_cpsr bits being updated at the end of the
function.
Move the condition into the correct place in the code, fix its RHS
constant, and add a comment about the fact that we must be doing a
gdbstub write if we're in user mode.
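As a rough standalone sketch of the fix (hypothetical helper name
cpsr_update; the real code lives in cpsr_write() in helper.c), the point
is that the mode bits must be dropped from the mask before the final
merge, otherwise they still reach uncached_cpsr:

    #include <stdint.h>
    #include <stdio.h>

    #define CPSR_M           0x1fu   /* mode field, bits [4:0] */
    #define ARM_CPU_MODE_USR 0x10u

    /* Illustrative stand-in for the CPSR write path (not QEMU's cpsr_write()). */
    static uint32_t cpsr_update(uint32_t cpsr, uint32_t val, uint32_t mask)
    {
        if ((cpsr & CPSR_M) == ARM_CPU_MODE_USR) {
            /* In User mode the attempt to change the mode bits is ignored:
             * clear CPSR_M from the mask so the final merge below cannot
             * touch the mode field.  (We can only get here via the gdbstub.) */
            mask &= ~CPSR_M;
        } else if ((val & CPSR_M) != (cpsr & CPSR_M)) {
            /* ... bank switching for a real mode change would go here ... */
        }
        return (cpsr & ~mask) | (val & mask);
    }

    int main(void)
    {
        uint32_t cpsr = 0x200001d0u;                       /* User mode */
        uint32_t after = cpsr_update(cpsr, 0x13 /* SVC */, CPSR_M);
        printf("mode stays USR: %s\n",
               (after & CPSR_M) == ARM_CPU_MODE_USR ? "yes" : "no");
        return 0;
    }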
Backports commit 8c4f0eb94cc65ee32a12feba88d0b32e3665d5ea from qemu
The v8 ARM ARM defines that unused spaces in the ID_AA64* system
register ranges are Reserved and must RAZ, rather than being UNDEF.
Implement this.
In particular, ARM v8.2 adds a new feature register ID_AA64MMFR2,
and newer versions of the Linux kernel will attempt to read this,
which causes them not to boot up on versions of QEMU missing this fix.
Since the encoding .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 6
is actually defined in ARMv8 (as ID_MMFR4), we give it an entry in
the ARMCPU struct so CPUs can override it, though since none do
this too will just RAZ.
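For illustration, a reserved slot in the ID space can be described by a
constant-zero register definition along these lines. CPRegSketch is a
cut-down stand-in for QEMU's ARMCPRegInfo, and the entry name and the way
the real patch covers the whole range are assumptions; only the encoding
shown (op0=3, op1=0, crn=0, crm=7, op2=2 for ID_AA64MMFR2_EL1) comes from
the architecture:

    #include <stdint.h>
    #include <stdio.h>

    /* Simplified stand-in for QEMU's ARMCPRegInfo (illustrative fields only). */
    typedef struct {
        const char *name;
        int opc0, opc1, crn, crm, opc2;
        int type;
        uint64_t resetvalue;
    } CPRegSketch;

    #define ARM_CP_CONST 1   /* reads return resetvalue, writes are ignored */

    /* A reserved slot in the ID_AA64* space reads-as-zero rather than UNDEFs. */
    static const CPRegSketch id_aa64mmfr2_reserved = {
        .name = "ID_AA64MMFR2_EL1_RESERVED",
        .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 7, .opc2 = 2,
        .type = ARM_CP_CONST, .resetvalue = 0,
    };

    int main(void)
    {
        printf("%s reads as %#llx\n", id_aa64mmfr2_reserved.name,
               (unsigned long long)id_aa64mmfr2_reserved.resetvalue);
        return 0;
    }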
Backports commit e20d84c1407d43d5a2e2ac95dbb46db3b0af8f9f from qemu
Mark CNTHP_TVAL_EL2 as ARM_CP_NO_RAW due to the register not
having any underlying state. This fixes an issue with booting
KVM enabled kernels when EL2 is on.
Backports commit d44ec156300a149b386a14d3ab349d3b83b66b8c from qemu
Implement the performance monitor register traps controlled
by MDCR_EL3.TPM and MDCR_EL2.TPM. Most of the performance
registers already have an access function to deal with the
user-enable bit, and the TPM checks can be added there. We
also need a new access function which only implements the
TPM checks for use by the few not-EL0-accessible registers
and by PMUSERENR_EL0 (which is always EL0-readable).
Backports commit 1fce1ba985d9c5c96e5b9709e1356d1814b8fa9e from qemu
Fix two issues with our implementation of the SDCR:
* it is only present from ARMv8 onwards
* it does not contain several of the trap bits present in its 64-bit
counterpart, MDCR_EL3
Put the register description in the right place so that it does not
get enabled for ARMv7 and earlier, and give it a write function so that
we can mask out the bits which should not be allowed to have an effect
if EL3 is 32-bit.
Backports commit a8d64e735182cbbb5dcc98f41656b118c45e57cc from qemu
If HCR.TGE is 1 then mode changes via CPS and MSR from Monitor to
NonSecure PL1 modes are illegal mode changes. Implement this check
in bad_mode_switch().
(We don't currently implement HCR.TGE, but this is the only missing
check from the v8 ARM ARM G1.9.3 and so it's worth adding now; the
rest of the HCR.TGE checks can be added later as necessary.)
Backports commit 10eacda787ac9990dc22d4437b289200c819712c from qemu
Mode switches from Hyp to any other mode via the CPS and MRS
instructions are illegal mode switches (though obviously switching
via exception return is valid). Add this check to bad_mode_switch().
Backports commit af393ffc6da116b9dd4c70901bad1f4cafb1773d from qemu
In v8, the illegal mode changes which are UNPREDICTABLE in v7 are
given architected behaviour:
* the mode field is unchanged
* PSTATE.IL is set (so any subsequent instructions will UNDEF)
* any other CPSR fields are written to as normal
This is pretty much the same behaviour we picked for our
UNPREDICTABLE handling, with the exception that for v8 we
need to set the IL bit.
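A minimal sketch of the v8 behaviour, assuming a hypothetical helper name
and the usual CPSR bit positions (M is bits [4:0], IL is bit 20):

    #include <stdint.h>
    #include <stdio.h>

    #define CPSR_M  0x1fu          /* mode field, bits [4:0] */
    #define CPSR_IL (1u << 20)     /* Illegal Execution state bit */

    /* Illustrative only: apply the v8 rules for an illegal mode change.
     * The mode field is left unchanged, PSTATE.IL is set, and every other
     * requested CPSR field is still written normally. */
    static uint32_t cpsr_write_bad_mode_v8(uint32_t cpsr, uint32_t val, uint32_t mask)
    {
        mask &= ~CPSR_M;                       /* keep the old mode bits */
        cpsr = (cpsr & ~mask) | (val & mask);  /* other fields written as normal */
        return cpsr | CPSR_IL;                 /* subsequent insns will UNDEF */
    }

    int main(void)
    {
        uint32_t cpsr = 0x13;                  /* SVC mode */
        uint32_t after = cpsr_write_bad_mode_v8(cpsr, 0x1a /* Hyp */, 0xff);
        printf("mode=%#x IL=%d\n", (unsigned)(after & CPSR_M), !!(after & CPSR_IL));
        return 0;
    }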
Backports commit 81907a582901671c15be36a63b5063f88f3487e2 from qemu
In v8 trying to switch mode to Mon from Secure EL1 is an
illegal mode switch. (In v7 this is impossible as all secure
modes except User are at EL3.) We can handle this case by
making a switch to Mon valid only if the current EL is 3,
which then gives the correct answer whether EL3 is AArch32
or AArch64.
Backports commit 58ae2d1f037fae1d90eed4522053a85d79edfbec from qemu
We don't actually support Hyp mode yet, but add the correct
checks for it to the bad_mode_switch() function for completeness.
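Taking the last few entries together, the checks amount to something like
the simplified sketch below. This is a hypothetical standalone function,
not the real bad_mode_switch(), which also consults the CPU feature bits,
the security state and SCR.NS:

    #include <stdbool.h>

    enum {
        MODE_USR = 0x10, MODE_FIQ = 0x11, MODE_IRQ = 0x12, MODE_SVC = 0x13,
        MODE_MON = 0x16, MODE_ABT = 0x17, MODE_HYP = 0x1a, MODE_UND = 0x1b,
        MODE_SYS = 0x1f,
    };

    /* Simplified sketch of the CPS/MSR mode-switch validity checks. */
    static bool bad_mode_switch(int cur_el, int cur_mode, int new_mode, bool hcr_tge)
    {
        /* Changes to or from Hyp via CPS/MSR are always illegal; Hyp is only
         * entered and left via exception entry and return. */
        if (cur_mode == MODE_HYP || new_mode == MODE_HYP) {
            return true;
        }

        switch (new_mode) {
        case MODE_USR:
            return false;
        case MODE_SVC: case MODE_ABT: case MODE_UND:
        case MODE_IRQ: case MODE_FIQ: case MODE_SYS:
            /* FIQ is always an acceptable target because QEMU does not
             * implement the IMPDEF NSACR.RFR restriction.  Monitor to
             * NonSecure PL1 is illegal when HCR.TGE is set. */
            return cur_mode == MODE_MON && hcr_tge;
        case MODE_MON:
            /* Only legal if we are already at EL3, which gives the right
             * answer whether EL3 is AArch32 or AArch64. */
            return cur_el != 3;
        default:
            return true;               /* reserved mode encoding */
        }
    }

    int main(void)
    {
        /* Secure EL1 (AArch32) attempting to switch to Monitor: illegal. */
        return bad_mode_switch(1, MODE_SVC, MODE_MON, false) ? 0 : 1;
    }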
Backports commit e6c8fc07b4fce0729bb747770756835f4b0ca7f4 from qemu
QEMU doesn't implement the NSACR.RFR bit, which is a permitted
IMPDEF choice in ARMv7 and the only permitted choice in ARMv8.
Add a comment to bad_mode_switch() to note that this is why
FIQ is always a valid mode regardless of the CPU's Secure state.
Backports commit 52ff951b4f63a29593650a15efdf82f63d6d962d from qemu
The only case where we can attempt a cpsr_write() mode switch from
User is from the gdbstub; all other cases are handled in the
calling code (notably translate.c). Architecturally attempts to
alter the mode bits from user mode are simply ignored (and not
treated as a bad mode switch, which in v8 sets CPSR.IL). Make
mode switches from User ignored in cpsr_write() as well, for
consistency.
Backports commit cb01d3912c8b000ed26d5fe95f6c194b3e3ba7a6 from qemu
Raw CPSR writes should skip the architectural checks for whether
we're allowed to set the A or F bits and should also not do
the switching of register banks if the mode changes. Handle
this inside cpsr_write(), which allows us to drop the "manually
set the mode bits to avoid the bank switch" code from all the
callsites which are using CPSRWriteRaw.
This fixes a bug in 32-bit KVM handling where we had forgotten
the "manually set the mode bits" part and could thus potentially
trash the register state if the mode from the last exit to userspace
differed from the mode on this exit.
Backports commit f8c88bbcda76d5674e4bb125471371b41d330df8 from qemu
Add an argument to cpsr_write() to indicate what kind of CPSR
write is being requested, since the exact behaviour should
differ for the different cases.
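The enum names below follow this commit, but the surrounding types and the
write path are a simplified standalone sketch rather than QEMU's actual
cpsr_write():

    #include <stdint.h>

    typedef enum {
        CPSRWriteByInstr = 0,        /* writes from MSR, CPS, exception entry  */
        CPSRWriteExceptionReturn,    /* writes from an exception return        */
        CPSRWriteRaw,                /* internal raw write: no side effects    */
        CPSRWriteByGDBStub,          /* write requested by the gdbstub         */
    } CPSRWriteType;

    #define CPSR_M 0x1fu

    /* Only non-raw writes perform the architectural checks and the register
     * bank switch when the mode field changes. */
    static uint32_t do_cpsr_write(uint32_t cpsr, uint32_t val, uint32_t mask,
                                  CPSRWriteType write_type)
    {
        if (write_type != CPSRWriteRaw && (cpsr & CPSR_M) != (val & CPSR_M)) {
            /* ... mode-switch validity checks and bank switching go here ... */
        }
        return (cpsr & ~mask) | (val & mask);
    }

    int main(void)
    {
        /* A raw write changes the mode bits without any bank switching. */
        return do_cpsr_write(0x13, 0x1f, 0xffffffffu, CPSRWriteRaw) == 0x1f ? 0 : 1;
    }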
Backports commit 50866ba5a2cfe922aaf3edb79f6eac5b0653477a from qemu
The rules for setting the CPSR on a 32-bit exception return are
subtly different from those for setting the CPSR via an instruction
like MSR or CPS. (In particular, in Hyp mode changing the mode bits
is not valid via MSR or CPS.) Split the exception-return case into
its own helper for setting CPSR, so we can eventually handle them
differently in the helper function.
Backports commit 235ea1f5c89abf30e452539b973b0dbe43d3fe2b from qemu
Make get_r13_banked() raise an exception at runtime for the
corner case of SRS from System mode, so that we can UNDEF it;
this brings us into line with the ARM ARM's set of permitted
CONSTRAINED UNPREDICTABLE choices.
Backports commit f01377f591fe15c652f947646c4a69a7d4a71ad9 from qemu
The user-mode versions of get/set_r13_banked() exist just to assert
if they're ever called -- the translate time code should never
emit calls to them because SRS from user mode always UNDEF.
There's no code in the softmmu versions that can't compile in
CONFIG_USER_ONLY, and the assertion is not particularly useful,
so combine the two functions rather than having completely split
versions under ifdefs.
Backports commit d86d57d4fe683c99823f625f941eff26c07c72c3 from qemu
Move get/set_r13_banked() from helper.c to op_helper.c. This will
let us add exception-raising code to them, and also puts them
in the same file as get/set_user_reg(), which makes some conceptual
sense.
(The original reason for the helper.c/op_helper.c split was that
only op_helper.c had access to the CPU env pointer; this distinction
has not been true for a long time, though, and so the split is
now rather arbitrary.)
Backports commit 72309cee482868d6c4711931c3f7e02ab9dec229 from qemu
Move bank_number()'s implementation into internals.h, so
it's available in the user-mode-only compile as well.
Backports commit c766568d3604082c6fd45cbabe42c48e4861a13f from qemu
The SRS instruction is:
* UNDEFINED in Hyp mode
* UNPREDICTABLE in User or System mode
* UNPREDICTABLE if the specified mode isn't accessible
* trapped to EL3 if EL3 is AArch64 and we are at Secure EL1
Clean up the code to handle all these cases cleanly, including
picking UNDEF as our choice of UNPREDICTABLE behaviour rather than
blindly trusting the mode field passed in the instruction.
As part of this, move the check for IS_USER into gen_srs()
itself rather than having it done by the caller.
The exception is that we don't UNDEF for calls from System
mode, which need a runtime check. This will be dealt with in
the following commits.
Backports commit cbc0326b6fb905f80b7cef85b24571f7ebb62077 from qemu
If access to FPEXC32_EL2 is trapped by CPTR_EL2.TFP or CPTR_EL3.TFP,
this should be reported with a syndrome register indicating an
FP access trap, not one indicating a system register access trap.
Backports commit f2cae6092767aaf418778eada15be444c23883be from qemu
Implement trapping of the "debug ROM" registers, which are controlled
by MDCR_EL2.TDRA for EL2 but by the more general MDCR_EL3.TDA for EL3.
Backports commit 91b0a23865558e2ce9c2e7042d404e8bf2e4b817 from qemu
Implement the traps to EL2 and EL3 controlled by the bits
MDCR_EL2.TDOSA and MDCR_EL3.TDOSA. These can configurably trap
accesses to the "powerdown debug" registers.
Backports commit 187f678d5c28251dba2b44127e59966b14518ef7 from qemu
We weren't quite implementing the handling of SCR.SMD correctly.
The condition governing whether the SMD bit should apply only
for NS state is "is EL3 AArch32", not "is the current EL AArch32".
Fix the condition, and clarify the comment both to reflect this and
to expand slightly on what's going on for the v7-no-Virtualization case.
Backports commit f096e92b6385fd87e8ea948ad3af70faf752c13a from qemu
Correct some corner cases we were getting wrong for
CNTFRQ access rights:
* should UNDEF from 32-bit Secure EL1
* only writable from the highest implemented exception level,
which might not be EL1 now
To clarify the code, provide a new utility function
arm_highest_el() which returns the highest implemented
exception level.
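A standalone sketch of the new utility; the feature-flag struct is a
stand-in for the real arm_feature() checks:

    #include <stdbool.h>
    #include <stdio.h>

    /* Stand-ins for the CPU feature flags (illustrative, not QEMU's types). */
    typedef struct {
        bool has_el2;
        bool has_el3;
    } CPUFeaturesSketch;

    /* The new utility: the highest implemented exception level. */
    static int arm_highest_el(const CPUFeaturesSketch *f)
    {
        if (f->has_el3) {
            return 3;
        }
        if (f->has_el2) {
            return 2;
        }
        return 1;
    }

    int main(void)
    {
        CPUFeaturesSketch cpu = { .has_el2 = false, .has_el3 = true };
        /* CNTFRQ is then only writable when arm_highest_el() == current EL. */
        printf("highest EL = %d\n", arm_highest_el(&cpu));
        return 0;
    }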
Backports commit 755026728abb19fba70e6b4396a27fa2e7550d74 from qemu
The ARM architecture stops before the access to a location covered by a
watchpoint. Also, a QEMU watchpoint firing is not necessarily an
architectural watchpoint match. Unfortunately, it is hardly possible to
ignore a fired watchpoint in the debug exception handler. So move the
watchpoint check from the debug exception handler to the dedicated
watchpoint checking callback.
Backports commit 3826121d9298cde1d29ead05910e1f40125ee9b0 from qemu
All Thumb Neon and VFP instructions are 32 bits, so the IL
bit in the syndrome register should be set. Pass false to the
syn_* function's is_16bit argument rather than s->thumb
so we report the correct IL bit.
Backports commit 7d197d2db5e99e4c8b20f6771ddc7303acaa1c89 from qemu
All Thumb coprocessor instructions are 32 bits, so the IL
bit in the syndrome register should be set. Pass false to the
syn_* function's is_16bit argument rather than s->thumb
so we report the correct IL bit.
Backports commit 4df322593037d2700f72dfdfb967300b7ad2e696 from qemu
In syndrome register values, the IL bit indicates the instruction
length, and is 1 for 4-byte instructions and 0 for 2-byte
instructions. All A64 and A32 instructions are 4-byte, but
Thumb instructions may be either 2 or 4 bytes long. Unfortunately
we named the parameter to the syn_* functions for constructing
syndromes "is_thumb", which falsely implies that it should be
set for all Thumb instructions, rather than only the 16-bit ones.
Fix the functions to name the parameter 'is_16bit' instead.
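For reference, a hedged sketch of how a syndrome value is assembled;
syn_example and the EC value passed in main are hypothetical, while the
EC/IL bit positions follow the ARMv8 ESR layout:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdbool.h>

    #define ARM_EL_EC_SHIFT 26           /* exception class, ESR[31:26] */
    #define ARM_EL_IL       (1u << 25)   /* instruction length: 1 = 32-bit */

    /* Build a syndrome value; 'is_16bit' is true only for 16-bit Thumb insns. */
    static uint32_t syn_example(unsigned ec, uint32_t iss, bool is_16bit)
    {
        uint32_t syn = (ec << ARM_EL_EC_SHIFT) | (iss & 0x1ffffffu);
        if (!is_16bit) {
            syn |= ARM_EL_IL;            /* A64, A32 and 32-bit Thumb all set IL */
        }
        return syn;
    }

    int main(void)
    {
        /* A Thumb VFP/Neon or coprocessor insn is always 32 bits, so even in
         * Thumb state we pass is_16bit = false and IL ends up set. */
        printf("syndrome = %#x\n",
               (unsigned)syn_example(0x07 /* hypothetical EC */, 0, false));
        return 0;
    }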
Backports commit fc05f4a62c568b607ec3fe428a419bb38205b570 from qemu
Enable EL3 support for our Cortex-A53 and Cortex-A57 CPU models.
We have enough implemented now to be able to run real world code
at least to some extent (I can boot ARM Trusted Firmware to the
point where it pulls in OP-TEE and then falls over because it
doesn't have a UEFI image it can chain to).
Backports commit 3ad901bc2b98f5539af9a7d4aef140a6d8fa6442 from qemu
Implement some corner cases of the behaviour of the NSACR
register on ARMv8:
* if EL3 is AArch64 then accessing the NSACR from Secure EL1
with AArch32 should trap to EL3
* if EL3 is not present or is AArch64 then reads from NS EL1 and
NS EL2 return constant 0xc00
It would in theory be possible to implement all these with
a single reginfo definition, but for clarity we use three
separate definitions for the three cases and install the
right one based on the CPU feature flags.
Backports commit 2f027fc52d4b444a47cb05a9c96697372a6b57d2 from qemu
System registers might have access requirements which need to
be described via a CPAccessFn and which differ for reads and
writes. For this to be possible we need to pass the access
function a parameter to tell it whether the access being checked
is a read or a write.
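A standalone sketch of the idea, with cut-down stand-ins for the QEMU
types; the access policy shown is hypothetical, purely to illustrate a
read/write-dependent result:

    #include <stdbool.h>

    /* Illustrative stand-ins for the QEMU types involved. */
    typedef enum {
        CP_ACCESS_OK,
        CP_ACCESS_TRAP,          /* trap to the current exception level */
        CP_ACCESS_TRAP_EL3,      /* trap to EL3 */
    } CPAccessResult;

    typedef struct CPUStateSketch {
        int current_el;
        bool el3_is_aa64;
    } CPUStateSketch;

    /* The access hook now takes an 'isread' flag so a register can have
     * different access rules for reads and writes. */
    typedef CPAccessResult CPAccessFnSketch(CPUStateSketch *env, bool isread);

    /* Example: a register readable everywhere but writable only from EL3
     * (hypothetical policy, for illustration). */
    static CPAccessFnSketch access_example;

    static CPAccessResult access_example(CPUStateSketch *env, bool isread)
    {
        if (!isread && env->current_el < 3) {
            return env->el3_is_aa64 ? CP_ACCESS_TRAP_EL3 : CP_ACCESS_TRAP;
        }
        return CP_ACCESS_OK;
    }

    int main(void)
    {
        CPUStateSketch env = { .current_el = 1, .el3_is_aa64 = true };
        return access_example(&env, false) == CP_ACCESS_TRAP_EL3 ? 0 : 1;
    }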
Backports commit 3f208fd76bcc91a8506681bb8472f2398fe6f487 from qemu
The arm_generate_debug_exceptions() function as originally implemented
assumes no EL2 or EL3. Since we now have much more of an implementation
of those, fix this assumption.
Backports commit 533e93f1cf12c570aab45f14663dab6fb8ea3ffc from qemu
The registers MVBAR and SCR should have the behaviour of trapping to
EL3 if accessed from Secure EL1, but we were incorrectly implementing
them to UNDEF (which would trap to EL1). Fix this by using the new
access_trap_aa32s_el1() access function.
Backports commit efe4a274083f61484a8f1478d93f229d43aa8095 from qemu
Implement the MDCR_EL3 register (which is SDCR for AArch32).
For the moment we implement it as reads-as-written.
Backports commit 5513c3abed8e5fabe116830c63f0d3fe1f94bd21 from qemu
We already modify the processor feature bits to not report EL3
support to the guest if EL3 isn't enabled for the CPU we're emulating.
Add similar support for not reporting EL2 unless it is enabled.
This is necessary because real world guest code running at EL3
(trusted firmware or bootloaders) will query the ID registers to
determine whether it should start a guest Linux kernel in EL2 or EL3.
Backports commit 3c2f7bb32b4c597925c5c7411307d51f1a56045d from qemu
Implement the inputsize > pamax check for Stage 2 translations.
This is CONSTRAINED UNPREDICTABLE and we choose to fault.
Backports commit 3526423e867765568ad95b8094ae8b4042cac215 from qemu
Rename check_s2_startlevel to check_s2_mmu_setup in preparation
for additional checks.
Backports commit a0e966c93a0968d29ef51447d08a6b7be6f4d757 from qemu
The S2 starting level table size check applies to both AArch32
and AArch64. Move it to common code.
Backports commit 98d68ec289750139258d9cd9ab3f6d7dd10bb762 from qemu
The AArch64 system registers DACR32_EL2, IFSR32_EL2, SPSR_IRQ,
SPSR_ABT, SPSR_UND and SPSR_FIQ are visible and fully functional from
EL3 even if the CPU has no EL2 (unlike some others which are RES0
from EL3 in that configuration). Move them from el2_cp_reginfo[] to
v8_cp_reginfo[] so they are always present.
Backports commit 6a43e0b6e1f6bcd6b11656967422f4217258200a from qemu
The AArch64 FPEXC32_EL2 system register is visible at EL2 and EL3,
and allows those exception levels to read and write the FPEXC
register for a lower exception level that is using AArch32.
Backports commit 03fbf20f4da58f41998dc10ec7542f65d37ba759 from qemu
The architecture requires that for an exception return to AArch32 the
low bits of ELR_ELx are ignored when the PC is set from them:
* if returning to Thumb mode, ignore ELR_ELx[0]
* if returning to ARM mode, ignore ELR_ELx[1:0]
We were only squashing bit 0; also squash bit 1 if the SPSR T bit
indicates this is a return to ARM code.
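A minimal sketch of the address squashing (hypothetical helper name; the
real code lives in the exception return path):

    #include <stdint.h>
    #include <stdio.h>

    /* Compute the AArch32 return address from ELR_ELx, honouring the SPSR
     * T bit: Thumb ignores bit 0, ARM ignores bits [1:0]. */
    static uint32_t aarch32_return_pc(uint64_t elr, int spsr_t_bit)
    {
        if (spsr_t_bit) {
            return (uint32_t)(elr & ~1ull);   /* returning to Thumb code */
        }
        return (uint32_t)(elr & ~3ull);       /* returning to ARM code */
    }

    int main(void)
    {
        printf("%#x\n", (unsigned)aarch32_return_pc(0x8003, 0));   /* -> 0x8000 */
        printf("%#x\n", (unsigned)aarch32_return_pc(0x8003, 1));   /* -> 0x8002 */
        return 0;
    }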
Backports commit c1e0371442bf3a7e42ad53c2a3d816ed7099f81d from qemu
We already implement almost all the checks for the illegal
return events from AArch64 state described in the ARM ARM section
D1.11.2. Add the two missing ones:
* return to EL2 when EL3 is implemented and SCR_EL3.NS is 0
* return to Non-secure EL1 when EL2 is implemented and HCR_EL2.TGE is 1
(We don't implement external debug, so the case of "debug state exit
from EL0 using AArch64 state to EL0 using AArch32 state" doesn't apply
for QEMU.)
Backports commit e393f339af87da7210f6c86902b321df6a2e8bf5 from qemu
Remove the assumptions that the AArch64 exception return code was
making about a return to AArch32 always being a return to EL0.
This includes pulling out the illegal-SPSR checks so we can apply
them for return to 32-bit as well as return to 64-bit.
Backports commit 3809951bf61605974b91578c582de4da28f8ed07 from qemu
The entry offset when taking an exception to AArch64 from a lower
exception level may be 0x400 or 0x600. 0x400 is used if the
implemented exception level immediately lower than the target level
is using AArch64, and 0x600 if it is using AArch32. We were
incorrectly implementing this as checking the exception level
that the exception was taken from. (The two can be different if
for example we take an exception from EL0 to AArch64 EL3; we should
in this case be checking EL2 if EL2 is implemented, and EL1 if
EL2 is not implemented.)
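A simplified sketch of the corrected selection logic, as a standalone
function whose boolean inputs stand in for the feature bits and the
register-width controls (SCR_EL3.RW/HCR_EL2.RW):

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Choose the "exception from a lower EL" vector offset for an AArch64
     * target EL: 0x400 if the implemented EL immediately below the target is
     * AArch64, 0x600 if it is AArch32. */
    static uint32_t lower_el_vector_offset(int target_el, bool el2_implemented,
                                           bool el1_is_aa64, bool el2_is_aa64)
    {
        bool lower_is_aa64;

        if (target_el == 3) {
            /* The EL just below EL3 is EL2 if implemented, otherwise EL1. */
            lower_is_aa64 = el2_implemented ? el2_is_aa64 : el1_is_aa64;
        } else if (target_el == 2) {
            lower_is_aa64 = el1_is_aa64;
        } else {
            /* Target EL1: the level below is EL0; its width comes from the
             * interrupted context and is not modelled in this sketch. */
            lower_is_aa64 = true;
        }
        return lower_is_aa64 ? 0x400u : 0x600u;
    }

    int main(void)
    {
        /* Exception taken from EL0 to AArch64 EL3 on a CPU without EL2:
         * the offset depends on EL1's register width, not EL0's. */
        printf("offset = %#x\n",
               (unsigned)lower_el_vector_offset(3, false, false, false));
        return 0;
    }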
Backports commit 3d6f761713745dfed7d2ccfe98077d213a6a6eba from qemu
Handling of semihosting calls should depend on the register width
of the calling code, not on that of any higher exception level,
so we need to identify and handle semihosting calls before we
decide whether to deliver the exception as an entry to AArch32
or AArch64. (EXCP_SEMIHOST is also an "internal exception" so
it has no target exception level in the first place.)
This will allow AArch32 EL1 code to use semihosting calls when
running under an AArch64 EL3.
Backports commit 904c04de2e1b425e7bc8c4ce2fae3d652eeed242 from qemu
If EL2 or EL3 is present on an AArch64 CPU, then exceptions can be
taken to an exception level which is running AArch32 (if only EL0
and EL1 are present then EL1 must be AArch64 and all exceptions are
taken to AArch64). To support this we need to have a single
implementation of the CPU do_interrupt() method which can handle both
32 and 64 bit exception entry.
Pull the common parts of aarch64_cpu_do_interrupt() and
arm_cpu_do_interrupt() out into a new function which calls
either the AArch32 or AArch64 specific entry code once it has
worked out which one is needed.
We temporarily special-case the handling of EXCP_SEMIHOST to
avoid an assertion in arm_el_is_aa64(); the next patch will
pull all the semihosting handling out to the arm_cpu_do_interrupt()
level (since semihosting semantics depend on the register width
of the calling code, not on that of any higher EL).
Backports commit 966f758c49ff478c4757efa5970ce649161bff92 from qemu
Move the aarch64_cpu_do_interrupt() function to helper.c. We want
to be able to call this from code that isn't AArch64-only, and
the move allows us to avoid awkward #ifdeffery at the callsite.
Backports commit f3a9b6945cbbb23f3a70da14e9ffdf1e60c580a8 from qemu
Support EL2 and EL3 in arm_el_is_aa64() by implementing the
logic for checking the SCR_EL3 and HCR_EL2 register-width bits
as appropriate to determine the register width of lower exception
levels.
Backports commit 446c81abf8e0572b8d5d23fe056516ac62af278d from qemu
If we have a secure address space, use it in page table walks:
when doing the physical accesses to read descriptors, make them
through the correct address space.
(The descriptor reads are the only direct physical accesses
made in target-arm/ for CPUs which might have TrustZone.)
Backports commit 5ce4ff6502fc6ae01a30c3917996c6c41be1d176 from qemu
Implement the asidx_from_attrs CPU method to return the
Secure or NonSecure address space as appropriate.
(The function is inline so we can use it directly in target-arm
code to be added in later patches.)
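A sketch of the mapping, with cut-down stand-ins for MemTxAttrs and the
ARM address space indexes:

    #include <stdbool.h>

    /* Illustrative stand-ins: QEMU's MemTxAttrs carries a 'secure' bit and
     * the ARM target defines Secure/NonSecure address space indexes. */
    typedef struct { bool secure; } MemTxAttrsSketch;
    enum { ASIDX_NS = 0, ASIDX_S = 1 };

    /* The hook simply maps the transaction's security attribute to the
     * matching CPU address space index. */
    static int asidx_from_attrs(MemTxAttrsSketch attrs)
    {
        return attrs.secure ? ASIDX_S : ASIDX_NS;
    }

    int main(void)
    {
        MemTxAttrsSketch a = { .secure = true };
        return asidx_from_attrs(a) == ASIDX_S ? 0 : 1;
    }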
Backports commit 017518c1f6ed9939c7f390cb91078f0919b5494c from qemu
Add QOM property to the ARM CPU which boards can use to tell us what
memory region to use for secure accesses. Nonsecure accesses
go via the memory region specified with the base CPU class 'memory'
property.
By default, if no secure region is specified it is the same as the
nonsecure region, and if no nonsecure region is specified we will use
address_space_memory.
Backports commit 9e273ef2174d7cd5b14a16d8638812541d3eb6bb from qemu
The TARGET_HAS_ICE #define is intended to indicate whether a target-*
guest CPU implementation supports the breakpoint handling. However,
all our guest CPUs have that support (the only two which do not
define TARGET_HAS_ICE are unicore32 and openrisc, and in both those
cases the bp support is present and the lack of the #define is just
a bug). So remove the #define entirely: all new guest CPU support
should include breakpoint handling as part of the basic implementation.
Backports commit ec53b45bcd1f74f7a4c31331fa6d50b402cd6d26 from qemu
Clean up includes so that osdep.h is included first and headers
which it implies are not included manually.
This commit was created with scripts/clean-includes.
Backports commit 74c21bd07491739c6e56bcb1f962e4df730e77f3 from qemu
arm_regime_using_lpae_format checks whether the LPAE extension is used
for stage 1 translation regimes. MMU indexes not exclusively of a stage 1
regime won't work with this method.
In case of ARMMMUIdx_S12NSE0 or ARMMMUIdx_S12NSE1, offset these values
by ARMMMUIdx_S1NSE0 to get the right index indicating a stage 1
translation regime.
Also rename the function to arm_s1_regime_using_lpae_format and update
the comments to reflect the change.
Backports commit deb2db996cbb9470b39ae1e383791ef34c4eb3c2 from qemu
QEMU does not generally perform alignment checks. However, the ARM ARM
requires implementation of alignment exceptions for a number of cases
including LDREX, and Windows-on-ARM relies on this.
This change adds plumbing to enable alignment checks on loads using
MO_ALIGN, a do_unaligned_access hook to raise the exception (data
abort), and uses the new aligned loads in LDREX (for all but
single-byte loads).
Backports commit 30901475b91ef1f46304404ab4bfe89097f61b96 from qemu
In an LPAE format descriptor in ARMv8 the address field extends
up to bit 47, not just bit 39. Correct the masking so we don't
give incorrect results if the output address size is greater
than 40 bits, as it can be for AArch64.
(Note that we don't yet support the new-in-v8 Address Size fault which
should be generated if any translation table entry or TTBR contains
an address with non-zero bits above the most significant bit of the
maximum output address size.)
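A minimal sketch of the corrected masking for a 4K-granule leaf
descriptor (hypothetical helper name):

    #include <stdint.h>
    #include <stdio.h>

    /* Extract the output address from an LPAE descriptor: in ARMv8 the address
     * field runs up to bit 47, so mask bits [47:12] rather than [39:12]. */
    static uint64_t lpae_descriptor_address(uint64_t descriptor)
    {
        return descriptor & 0x0000fffffffff000ull;   /* bits [47:12] */
    }

    int main(void)
    {
        uint64_t desc = 0x0000804000000703ull;       /* address above 40 bits */
        printf("output address = %#llx\n",
               (unsigned long long)lpae_descriptor_address(desc));
        return 0;
    }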
Backports commit 6109769a8b42bd0c3d5b1601c9b35fe7ea6a603e from qemu
The architectural breakpoint check could raise an exception, thus condexec
bits should be updated before calling gen_helper_check_breakpoints().
Backports commit ce8a1b5449cd8c4c2831abb581d3208c3a3745a0 from qemu
Coprocessor access instructions are allowed inside an IT block.
gen_helper_access_check_cp_reg() can raise an exception, thus condexec
bits should be updated beforehand.
Backports commit 43bfa4a100687af8d293fef0a197839b51400fca from qemu
AArch32 translation code does not distinguish between DISAS_UPDATE and
DISAS_JUMP. Thus, we cannot use any of them without first updating PC in
CPU state. Furthermore, it is too complicated to update PC in CPU state
before PC gets updated in disas context. So it is hardly possible to
correctly end the TB early if it is not likely to be executed before calling
disas_*_insn(), e.g. just after calling the breakpoint check helper.
Modify DISAS_UPDATE and DISAS_JUMP usage in AArch32 translation and
apply to them the same semantic as AArch64 translation does:
- DISAS_UPDATE: update PC in CPU state when finishing translation
- DISAS_JUMP: preserve current PC value in CPU state when finishing
translation
This patch fixes a bug in AArch32 breakpoint handling: when
check_breakpoints helper does not generate an exception, ending the TB
early with DISAS_UPDATE couldn't update PC in CPU state and execution
hangs.
Backports commit 577bf808958d06497928c639efaa473bf8c5e099 from qemu
Do not raise a CPU exception if no CPU breakpoint has fired, since
singlestep is also done by generating a debug internal exception. This
fixes a bug with singlestepping in gdbstub.
Backports commit 5c629f4ff4dc9ae79cc732f59a8df15ede796ff7 from qemu
If this CPU supports EL3, enhance the printing of the current
CPU mode in debug logging to distinguish S from NS modes as
appropriate.
Backports commit 06e5cf7acd1f94ab7c1cd6945974a1f039672940 from qemu
The AArch64 debug CPU display of PSTATE as "PSTATE=200003c5 (flags --C-)"
on the end of the same line as the last of the general purpose registers
is unnecessarily different from the AArch32 display of PSR as
"PSR=200001d3 --C- A svc32" on its own line. Update the AArch64
code to put PSTATE in its own line and in the same format, including
printing the exception level (mode).
Backports commit 08b8e0f527930208a548b424d2ab3103bf3c8c02 from qemu
Some targets already had this within their logic, but make sure
it's present for all targets.
Backports commit 522a0d4e3c0d397ffb45ec400d8cbd426dad9d17 from qemu
Reduce the boilerplate required for each target. At the same time,
move the test for breakpoint after calling tcg_gen_insn_start.
Note that arm and aarch64 do not use cpu_breakpoint_test, but still
move the inline test down after tcg_gen_insn_start.
Backports commit b933066ae03d924a92b2616b4a24e7d91cd5b841 from qemu
Introduce ARMMMUFaultInfo to propagate MMU Fault information
across the MMU translation code path. This is in preparation for
adding Stage-2 translation.
No functional changes.
Backports commit e14b5a23d8c83304559f31397f95d22ada60a19a from qemu
The starting level for S2 pagetable walks is computed
differently from the S1 starting level. Implement the S2
variant.
Backports commit 1853d5a9dcac910322c6cc5b2fddec45fd052d25 from qemu
Rename granule_sz to stride to better match the reference manuals.
No functional change.
Backports commit 973a5434825c076995218868b5b3047e5de400c6 from qemu
Remove the tsz variable and introduce inputsize.
This simplifies the code a little and makes it easier to
compare with the reference manuals.
No functional change.
Backports commit 4ca6a051758edf625a17dfc4ce4ab72edabac170 from qemu
Add support for AArch32 S2 negative t0sz. In preparation for
using 40-bit IPAs on AArch32.
Backports commit 4ee38098010240e0b390061fdd0151ff62d80279 from qemu
Move declaration of t0sz and t1sz to the top of the function
avoiding a mix of code and variable declarations.
No functional change.
Backports commit 1f4c8c18a5b6f4fad13e13b7e3828124c6c8f34d from qemu
Make t0sz and t1sz signed integers to match tsz and to make
it easier to implement support for AArch32 negative t0sz.
t1sz is changed for consistency.
No functional change.
Backports commit 5c31a10d16c595d6a59e3e7fc1808c3b1d03e02f from qemu
When the memory we're trying to translate code from is not executable we have
to turn this into a guest fault. In order to report the correct PC for this
fault, and to make sure it is not reported until after any other possible
faults for instructions earlier in execution, we must terminate TBs at
the end of a page, in case the next instruction is in a non-executable page.
This is simple for T16, A32 and A64 instructions, which are always aligned
to their size. However T32 instructions may be 32-bits but only 16-aligned,
so they can straddle a page boundary.
Correct the condition that checks whether the next instruction will touch
the following page, to ensure that if we're 2 bytes before the boundary
and this insn is T32 then we end the TB.
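A standalone sketch of the corrected end-of-page test. The helper names
are hypothetical; the T32 first-halfword test follows the encoding rule
that bits [15:11] of 0b11101, 0b11110 or 0b11111 introduce a 32-bit insn:

    #include <stdint.h>
    #include <stdbool.h>

    #define TARGET_PAGE_SIZE 0x1000u

    /* A Thumb insn whose first halfword has bits [15:11] = 0b11101, 0b11110
     * or 0b11111 is a 32-bit (T32) encoding. */
    static bool thumb_insn_is_32bit(uint16_t first_halfword)
    {
        return (first_halfword & 0xf800u) >= 0xe800u;
    }

    /* End-of-TB test: true if the insn starting at 'pc' would touch the next
     * page.  A32/A64 insns are 4-aligned, so only Thumb needs the extra
     * "2 bytes before the boundary and the insn is T32" case. */
    static bool insn_crosses_page(uint32_t pc, bool thumb, uint16_t first_halfword)
    {
        uint32_t bytes_left = TARGET_PAGE_SIZE - (pc & (TARGET_PAGE_SIZE - 1));

        if (bytes_left >= 4) {
            return false;
        }
        if (!thumb || bytes_left < 2) {
            return true;
        }
        return thumb_insn_is_32bit(first_halfword);
    }

    int main(void)
    {
        /* 0xf000 starts a T32 encoding (e.g. BL), so at offset 0xffe in the
         * page it straddles the boundary and the TB must end first. */
        return insn_crosses_page(0x0ffe, true, 0xf000) ? 0 : 1;
    }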
Backports commit 541ebcd401ee47f3c1a3ce503ef5466b75e9d20a from qemu
The code in arm_excp_unmasked() suppresses the ability of PSTATE.AIF
to mask exceptions from a lower EL targeting EL2 or EL3 if the
CPU is 64-bit. This is correct for a target of EL3, but not correct
for targeting EL2. Further, we go to some effort to calculate
scr and hcr values which are not used at all for the 64-bit CPU
case.
Rearrange the code to correctly implement the 64-bit CPU logic
and keep the hcr/scr calculations in the 32-bit CPU codepath.
Backports commit 7cd6de3bb1ca55dfa8f53fb9894803eb33f497b3 from qemu
A QEMU breakpoint match is not necessarily an architectural breakpoint
match. If an exception is generated unconditionally during translation,
it is hardly possible to ignore it in the debug exception handler.
Generate a call to a helper to check CPU breakpoints and raise an
exception only if any breakpoint matches architecturally.
Backports commit 5d98bf8f38c17a348ab6e8af196088cd4953acd0 from qemu
GDB breakpoints have higher priority so they have to be checked first.
Should GDB breakpoint match, just return from the debug exception
handler.
Backports commit e63a2d4d9ed73e33a0b7483085808048be8bbcb1 from qemu
Implement debug exception routing according to ARM ARM D2.3.1 Pseudocode
description of routing debug exceptions.
Backports commit 81669b8b81eb450d7b89ee5fdd57bdb73d87022d from qemu
Add the MDCR_EL2 register. We don't implement any of
the debug-related traps this register controls yet, so
currently it simply reads back as written.
Backports commit 14cc7b54372995a6ba72c7719372e4f710fc9b5a from qemu
Add an oslar_write function to the OSLAR_EL1 sysreg, using a status variable
in the CPUARMState.cp15 struct (oslsr_el1). This variable is also linked
to the newly added read-only OSLSR_EL1 register.
Linux reads from this register during its suspend/resume procedure.
Backports commit 1424ca8d4320427c3e93722b65e19077969808a2 from qemu
It is incorrect to call arm_el_is_aa64() function for unimplemented EL.
This patch fixes several attempts to do so.
Backports commit 2cde031f5a34996bab32571a26b1a6bcf3e5b5d9 from qemu
If a store instruction modifies code later in the same TB,
execution of the TB must be stopped so that the new code
is executed correctly.
As described in ARMv8 manual D3.4.6 self-modifying code must do an
IC invalidation to be valid, and an ISB after it. So it's enough to end
the TB after the ISB instruction during code translation.
Also this TB break is necessary to take any pending interrupts immediately
after an ISB (as required by ARMv8 ARM D1.14.4).
Backports commit 6df99dec9e81838423d723996e96236693fa31fe from qemu