Commit graph

430 commits

Author SHA1 Message Date
Richard Henderson 2540911bdd target/arm: Add support for MTE to SCTLR_ELx
target/arm: Add support for MTE to HCR_EL2 and SCR_EL3

This does not attempt to rectify all of the res0 bits, but does
clear the mte bits when not enabled. Since there is no high-part
mapping of SCTLR, aa32 mode cannot write to these bits.

Backports commits f00faf130d5dcf64b04f71a95f14745845ca1014 and
8ddb300bf60a5f3d358dd6fbf81174f6c03c1d9f from qemu.
2021-02-25 13:59:11 -05:00
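
A minimal standalone sketch of the masking idea (not the backported QEMU code; the mask below is a placeholder rather than the architectural ATA/TCF field positions):

  /* mte_sctlr_sketch.c -- illustrative only. */
  #include <stdint.h>
  #include <stdbool.h>
  #include <stdio.h>

  #define SCTLR_MTE_BITS  ((3ull << 42) | (3ull << 40))   /* placeholder mask */

  /* Without MTE, writes simply drop the MTE control bits, so they behave
   * as RES0 and always read back as zero. */
  static uint64_t sctlr_write_value(uint64_t value, bool have_mte)
  {
      if (!have_mte) {
          value &= ~SCTLR_MTE_BITS;
      }
      return value;
  }

  int main(void)
  {
      printf("0x%llx\n", (unsigned long long)sctlr_write_value(~0ull, false));
      return 0;
  }
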
Richard Henderson d81feac642 target/arm: Improve masking of SCR RES0 bits
Protect reads of aa64 id registers with ARM_CP_STATE_AA64.
Use this as a simpler test than arm_el_is_aa64, since EL3
cannot change mode.

Backports commit 252e8c69669599b4bcff802df300726300292f47 from qemu
2021-02-25 13:56:35 -05:00
Richard Henderson 6530d6342f softfloat: Use post test for floatN_mul
The existing f{32,64}_addsub_post test, which checks for zero
inputs, is identical to f{32,64}_mul_fast_test, which means
we can eliminate the fast_test/fast_op hooks in favor of
reusing the same post hook.

This means we have one fewer test along the fast path for multiply.

Backports commit b240c9c497b9880ac0ba29465907d5ebecd48083 from qemu
2020-05-21 17:24:00 -04:00
Lioncash 5c03efd5d6 arm/helper: Amend sign conversion warning 2020-05-11 17:21:25 -04:00
Edgar E. Iglesias 91dbd53f77 target/arm: Drop access_el3_aa32ns_aa64any()
Calling access_el3_aa32ns() works for AArch32 only cores
but it does not handle 32-bit EL2 on top of 64-bit EL3
for mixed 32/64-bit cores.

Merge access_el3_aa32ns_aa64any() into access_el3_aa32ns()
and only use the latter.

Fixes: 68e9c2fe65 ("target-arm: Add VTCR_EL2")

Backports commit 93dd1e6140e2652347cfe7208591d4cd32762d08 from qemu
2020-05-11 16:39:40 -04:00
Peter Maydell b427549ce4 target/arm: Implement ARMv8.2-TTS2UXN
The ARMv8.2-TTS2UXN feature extends the XN field in stage 2
translation table descriptors from just bit [54] to bits [54:53],
allowing stage 2 to control execution permissions separately for EL0
and EL1. Implement the new semantics of the XN field and enable
the feature for our 'max' CPU.

Backports commit ce3125bed935a12e619a8253c19340ecaa899347 from qemu
2020-05-07 08:49:18 -04:00
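
As a rough sketch of the descriptor change (bits [54:53] per the commit text; how the four encodings map to EL0/EL1 execute permission is left to the architecture manual):

  #include <stdint.h>

  /* Illustrative only: with TTS2UXN the stage 2 execute-never control is a
   * 2-bit field -- bit 54 is the pre-existing XN bit, bit 53 the new one. */
  static unsigned stage2_xn_field(uint64_t descriptor)
  {
      return (unsigned)((descriptor >> 53) & 3);
  }
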
Peter Maydell 1e75276a89 target/arm: Add new 's1_is_el0' argument to get_phys_addr_lpae()
For ARMv8.2-TTS2UXN, the stage 2 page table walk wants to know
whether the stage 1 access is for EL0 or not, because whether
exec permission is given can depend on whether this is an EL0
or EL1 access. Add a new argument to get_phys_addr_lpae() so
the call sites can pass this information in.

Since get_phys_addr_lpae() doesn't already have a doc comment,
add one so we have a place to put the documentation of the
semantics of the new s1_is_el0 argument.

Backports commit ff7de2fc2c994030bfb83af9ddc9a3cd70ce3e88 from qemu
2020-05-07 08:45:23 -04:00
Peter Maydell bec9ee21b6 target/arm: Use enum constant in get_phys_addr_lpae() call
The access_type argument to get_phys_addr_lpae() is an MMUAccessType;
use the enum constant MMU_DATA_LOAD rather than a literal 0 when we
call it in S1_ptw_translate().

Backports commit 59dff859cd850876df2cfa561c7bcfc4bdda4599 from qemu
2020-05-07 08:42:41 -04:00
Peter Maydell 3df93e463d target/arm: Don't use a TLB for ARMMMUIdx_Stage2
We define ARMMMUIdx_Stage2 as being an MMU index which uses a QEMU
TLB. However we never actually use the TLB -- all stage 2 lookups
are done by direct calls to get_phys_addr_lpae() followed by a
physical address load via address_space_ld*().

Remove Stage2 from the list of ARM MMU indexes which correspond to
real core MMU indexes, and instead put it in the set of "NOTLB" ARM
MMU indexes.

This allows us to drop NB_MMU_MODES to 11. It also means we can
safely add support for the ARMv8.2-TTS2UXN extension, which adds
permission bits to the stage 2 descriptors which define execute
permission separately for EL0 and EL1; supporting that while keeping
Stage2 in a QEMU TLB would require us to use separate TLBs for
"Stage2 for an EL0 access" and "Stage2 for an EL1 access", which is a
lot of extra complication given we aren't even using the QEMU TLB.

In the process of updating the comment on our MMU index use,
fix a couple of other minor errors:
* NS EL2 EL2&0 was missing from the list in the comment
* some text hadn't been updated from when we bumped NB_MMU_MODES
above 8

Backports commit bf05340cb655637451162c02dadcd6581a05c02c from qemu
2020-05-07 08:40:06 -04:00
Philippe Mathieu-Daudé afeb8ff2dc target/arm: Restrict the Address Translate write operation to TCG accel
Under KVM these registers are written by the hardware.
Restrict the writefn handlers to TCG to avoid the following link
failure when building without TCG:

LINK aarch64-softmmu/qemu-system-aarch64
target/arm/helper.o: In function `do_ats_write':
target/arm/helper.c:3524: undefined reference to `raise_exception'

Backports commit 9fb005b02dbda7f47b789b7f19bf5f73622a4756 from qemu
2020-04-30 21:31:22 -04:00
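
One way to picture the fix, as a sketch only (assuming the handler is simply compiled out when TCG is absent; the macro and function names here are illustrative, not the actual QEMU symbols):

  #include <stddef.h>

  /* Without TCG the raise_exception() path referenced by do_ats_write()
   * does not exist, so the write handler is only provided in TCG builds;
   * under KVM the hardware updates these registers itself. */
  #ifdef CONFIG_TCG
  static void do_ats_write_sketch(void) { /* would reach raise_exception() */ }
  #define ATS_WRITEFN do_ats_write_sketch
  #else
  #define ATS_WRITEFN NULL
  #endif
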
Peter Maydell 6a015761ac target/arm: Remove obsolete TODO note from get_phys_addr_lpae()
An old comment in get_phys_addr_lpae() claims that the code does not
support the different format TCR for VTCR_EL2. This used to be true
but it is not true now (in particular the aa64_va_parameters() and
aa32_va_parameters() functions correctly handle the different
register format by checking whether the mmu_idx is Stage2).
Remove the out of date parts of the comment.

Backports commit 07d1be3b3aac20c21ac4a95c7f3f01a3622a31a3 from qemu
2020-04-30 07:21:17 -04:00
Peter Maydell 4228e7f155 target/arm: PSTATE.PAN should not clear exec bits
Our implementation of the PSTATE.PAN bit incorrectly cleared all
access permission bits for privileged access to memory which is
user-accessible. It should only affect the privileged read and write
permissions; execute permission is dealt with via XN/PXN instead.

Fixes: 81636b70c226dc27d7ebc8d

Backports commit f4e1dbc578a051db08a40c05276ebf525b98f949 from qemu
2020-04-30 07:20:20 -04:00
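
A standalone sketch of the corrected behaviour (the PAGE_* values are illustrative placeholders, not QEMU's definitions): when PAN applies, only the privileged read/write permissions are dropped and execute permission is left for XN/PXN to decide.

  #include <stdbool.h>
  #include <stdio.h>

  enum { PAGE_READ = 1, PAGE_WRITE = 2, PAGE_EXEC = 4 };   /* placeholders */

  static int apply_pan(int prot, bool pan_active, bool user_accessible)
  {
      if (pan_active && user_accessible) {
          prot &= ~(PAGE_READ | PAGE_WRITE);   /* PAGE_EXEC is untouched */
      }
      return prot;
  }

  int main(void)
  {
      /* Previously the whole value was zeroed; now exec permission survives. */
      printf("%d\n", apply_pan(PAGE_READ | PAGE_WRITE | PAGE_EXEC, true, true));
      return 0;
  }
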
Changbin Du 1e274425bd target/arm: fix incorrect current EL bug in aarch32 exception emulation
arm_current_el() should be invoked after mode switching. Otherwise, we
get a wrong current EL value, since current EL is also determined by
current mode.

Fixes: 4a2696c0d4 ("target/arm: Set PAN bit as required on exception entry")

Backports commit 88828bf133b64b7a860c166af3423ef1a47c5d3b from qemu
2020-04-30 06:57:36 -04:00
Richard Henderson d5234c8b3d target/arm: Rearrange disabled check for watchpoints
Coverity rightly notes that ctz32(bas) on 0 will return 32,
which makes the len calculation a BAD_SHIFT.

A value of 0 in DBGWCR<n>_EL1.BAS is reserved. Simply move
the existing check we have for this case.

Backports commit ae1111d4def40c6f592c3a307c599272b778eb65 from qemu
2020-04-30 06:52:38 -04:00
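
A tiny standalone reproduction of the hazard Coverity flags (ctz32 written out locally; QEMU's helper likewise returns 32 for a zero input):

  #include <stdint.h>
  #include <stdio.h>

  static int ctz32(uint32_t val)       /* local stand-in for QEMU's ctz32() */
  {
      return val ? __builtin_ctz(val) : 32;
  }

  int main(void)
  {
      uint32_t bas = 0;                /* reserved value of DBGWCR<n>_EL1.BAS */
      int start = ctz32(bas);          /* 32: shifting a 32-bit value by this
                                        * is the BAD_SHIFT Coverity reports,
                                        * hence the disabled/reserved check is
                                        * done before the len calculation. */
      printf("ctz32(0) = %d\n", start);
      return 0;
  }
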
Alex Bennée 46e1dab19e target/arm: don't bother with id_aa64pfr0_read for USER_ONLY
For system emulation we need to check the state of the GIC before we
report the value. However this isn't relevant to exporting of the
value to linux-user and indeed breaks the exported value as set by
modify_arm_cp_regs.

Backports commit 976b99b6ec2e15cd7c36d72fdb9b60c37c5494f8 from qemu
2020-04-30 06:24:10 -04:00
Richard Henderson d3a5843aeb target/arm: Replicate TBI/TBID bits for single range regimes
Replicate the single TBI bit from TCR_EL2 and TCR_EL3 so that
we can unconditionally use pointer bit 55 to index into our
composite TBI1:TBI0 field.

Backports commit 3e270f67f0f05277021763af119a6ce195f8ed51 from qemu
2020-04-30 05:58:59 -04:00
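
A sketch of the replication trick (illustrative only): regimes with a single translation range have one TBI bit, which is duplicated into a 2-bit TBI1:TBI0 value so that pointer bit 55 can always be used as the index.

  #include <stdint.h>
  #include <stdbool.h>

  static unsigned make_tbi_field(bool single_tbi_bit)
  {
      return single_tbi_bit ? 3 : 0;          /* replicate: 0b11 or 0b00 */
  }

  static bool tbi_applies(unsigned tbi_field, uint64_t ptr)
  {
      unsigned select = (unsigned)(ptr >> 55) & 1;
      return (tbi_field >> select) & 1;
  }
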
Richard Henderson cc32a96183 target/arm: Honor the HCR_EL2.TTLB bit
This bit traps EL1 access to TLB maintenance insns.

Backports commit 30881b7353b5bb41210c32cd8e00421da757808c from qemu
2020-04-30 05:58:59 -04:00
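
The HCR_EL2 trap-bit commits in this series all follow the same shape; a minimal sketch (the result type is modelled loosely on QEMU's CPAccessResult, not copied from the backport):

  #include <stdbool.h>

  typedef enum { ACCESS_OK, ACCESS_TRAP_EL2 } AccessResultSketch;

  /* EL1 uses of the guarded instructions (here: TLB maintenance, for TTLB)
   * are redirected to EL2 when the corresponding trap bit is set. */
  static AccessResultSketch check_ttlb(unsigned current_el, bool hcr_ttlb)
  {
      if (current_el == 1 && hcr_ttlb) {
          return ACCESS_TRAP_EL2;
      }
      return ACCESS_OK;
  }
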
Richard Henderson 74d6aa6012 target/arm: Honor the HCR_EL2.TPU bit
This bit traps EL1 access to cache maintenance insns that operate
to the point of unification. There are no longer any references to
plain aa64_cacheop_access, so remove it.

Backports commit 38262d8a732f8bd0e9ca3dc064f6e73d00c08b9a from qemu
2020-04-30 05:58:59 -04:00
Richard Henderson f35a83d5ff target/arm: Honor the HCR_EL2.TPCP bit
This bit traps EL1 access to cache maintenance insns that operate
to the point of coherency or persistence.

Backports commit 1bed4d2e55459129c19f5952bcfc65bd0c70db5b from qemu
2020-03-22 02:44:41 -04:00
Richard Henderson 8ff8ff0c4a target/arm: Honor the HCR_EL2.TACR bit
This bit traps EL1 access to the auxiliary control registers.

Backports commit 9960237769ada2faaaf1898b96da7a55e1691cf4 from qemu
2020-03-22 02:42:05 -04:00
Richard Henderson d252af2069 target/arm: Honor the HCR_EL2.TSW bit
These bits trap EL1 access to set/way cache maintenance insns.

Backports commit 1803d2713b29d85031cc964d545036bda9880f26 from qemu
2020-03-22 02:40:10 -04:00
Richard Henderson 7ee27e5d93 target/arm: Honor the HCR_EL2.{TVM,TRVM} bits
These bits trap EL1 access to various virtual memory controls.

Backports commit 84929218512c19ec9a296fbfd7b39219e0c592ae from qemu
2020-03-22 02:38:03 -04:00
Richard Henderson 7cd75be7c8 target/arm: Improve masking in arm_hcr_el2_eff
Update the {TGE,E2H} == '11' masking to ARMv8.6.
If EL2 is configured for aarch32, disable all of
the bits that are RES0 in aarch32 mode.

Backports commit 4990e1d3c128580dd2fa0bbb1a42b6d63ba1ac28 from qemu
2020-03-22 02:32:35 -04:00
Richard Henderson c4b2493c2e target/arm: Improve masking of HCR/HCR2 RES0 bits
Don't merely start with v8.0, handle v7VE as well. Ensure that writes
from aarch32 mode do not change bits in the other half of the register.
Protect reads of aa64 id registers with ARM_FEATURE_AARCH64.

Backports commit d1fb4da208411ce7b3dafb9f9e7726ebcec14edb from qemu
2020-03-22 02:28:41 -04:00
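
A simplified sketch of RES0 masking on write (the mask values and feature flag are placeholders; the real code derives the valid mask from the CPU's ID registers):

  #include <stdint.h>
  #include <stdbool.h>

  static uint64_t hcr_write_value(uint64_t old, uint64_t value,
                                  bool have_feature_x, bool aarch32_write)
  {
      uint64_t valid_mask = 0x00000000ffffffffull;   /* placeholder baseline */
      if (have_feature_x) {
          valid_mask |= 1ull << 34;                  /* placeholder feature bit */
      }
      value &= valid_mask;                           /* RES0 bits never stick */
      if (aarch32_write) {
          /* A 32-bit write must leave the upper half of the register alone. */
          value = (old & 0xffffffff00000000ull) | (uint32_t)value;
      }
      return value;
  }
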
Peter Maydell 32b0e506e6 target/arm: Implement (trivially) ARMv8.2-TTCNP
The ARMv8.2-TTCNP extension allows an implementation to optimize by
sharing TLB entries between multiple cores, provided that software
declares that it's ready to deal with this by setting a CnP bit in
the TTBRn_ELx. It is mandatory from ARMv8.2 onward.

For QEMU's TLB implementation, sharing TLB entries between different
cores would not really benefit us and would be a lot of work to
implement. So we implement this extension in the "trivial" manner:
we allow the guest to set and read back the CnP bit, but don't change
our behaviour (this is an architecturally valid implementation
choice).

The only code path which looks at the TTBRn_ELx values for the
long-descriptor format where the CnP bit is defined is already doing
enough masking to not get confused when the CnP bit at the bottom of
the register is set, so we can simply add a comment noting why we're
relying on that mask.

Backports commit 41a4bf1feab098da4cd5495cd56a99b0339e2275 from qemu
2020-03-22 02:24:48 -04:00
Peter Maydell 7271ebf96d target/arm: Implement ARMv8.3-CCIDX
The ARMv8.3-CCIDX extension makes the CCSIDR_EL1 system ID registers
have a format that uses the full 64 bit width of the register, and
adds a new CCSIDR2 register so AArch32 can get at the high 32 bits.

QEMU doesn't implement caches, so we just treat these ID registers as
opaque values that are set to the correct constant values for each
CPU. The only thing we need to do is allow 64-bit values in our
ccsidr[] array and provide the CCSIDR2 accessors.

We don't set the CCIDX field in our 'max' CPU because the CCSIDR
constant values we use are the same as the ones used by the
Cortex-A57 and they are in the old 32-bit format. This means
that the extra regdef added here is unused currently, but it
means that whenever in the future we add a CPU that does need
the new 64-bit format it will just work when we set the ccsidr
values and the ID registers for it.

Backports commit 957e615503bd0de22393fd8dbcb22a5064fd2b5c from qemu
2020-03-22 00:17:37 -04:00
Richard Henderson f73b360f8e target/arm: Rename isar_feature_aa32_fpdp_v2
The old name, isar_feature_aa32_fpdp, does not reflect
that the test includes VFPv2. We will introduce another
feature tests for VFPv3.

Backports commit c4ff873583834c8275586914fff714e3ae65dee4 from qemu
2020-03-21 23:16:00 -04:00
Richard Henderson 06b52d6660 target/arm: Add isar_feature_aa32_vfp_simd
Use this in the places that were checking ARM_FEATURE_VFP, and
are obviously testing for the existence of the register set
as opposed to testing for some particular instruction extension.

Backports commit 7fbc6a403a0aab834e764fa61d81ed8586cfe352 from qemu
2020-03-21 23:11:36 -04:00
Richard Henderson 833de589ed target/arm: Use isar_feature_aa32_simd_r32 more places
Many uses of ARM_FEATURE_VFP3 are testing for the number of simd
registers implemented. Use the proper test vs MVFR0.SIMDReg.

Backports commit a6627f5fc607939f7c8b9c3157fdcb2d368ba0ed from qemu
2020-03-21 19:39:35 -04:00
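
The shape of such a check, as a hedged sketch (MVFR0.SIMDReg occupies the low four bits, with the value 2 meaning 32 registers; the real helper goes through QEMU's FIELD_EX32 machinery rather than an open-coded mask):

  #include <stdint.h>
  #include <stdbool.h>

  /* "Does this CPU implement 32 SIMD/FP registers?" -- answered from
   * MVFR0.SIMDReg instead of from the legacy ARM_FEATURE_VFP3 flag. */
  static bool aa32_simd_r32_sketch(uint32_t mvfr0)
  {
      return (mvfr0 & 0xf) >= 2;
  }
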
Peter Maydell 61cf5abc9e target/arm: Correctly implement ACTLR2, HACTLR2
The ACTLR2 and HACTLR2 AArch32 system registers didn't exist in ARMv7
or the original ARMv8. They were later added as optional registers,
whose presence is signaled by the ID_MMFR4.AC2 field. From ARMv8.2
they are mandatory (ie ID_MMFR4.AC2 must be non-zero).

We implemented HACTLR2 in commit 0e0456ab8895a5e85, but we
incorrectly made it exist for all v8 CPUs, and we didn't implement
ACTLR2 at all.

Sort this out by implementing both registers only when they are
supposed to exist, and setting the ID_MMFR4 bit for -cpu max.

Note that this removes HACTLR2 from our Cortex-A53, -A57 and -A72
CPU models; this is correct, because those CPUs do not implement
this register.

Fixes: 0e0456ab8895a5e85

Backports commit f6287c24c66d6b9187c1c2887e1c7cfa4d304b0c from qemu
2020-03-21 18:52:30 -04:00
Peter Maydell 2ce106df33 target/arm: Use isar_feature function for testing AA32HPD feature
Now we have moved ID_MMFR4 into the ARMISARegisters struct, we
can define and use an isar_feature for the presence of the
ARMv8.2-AA32HPD feature, rather than open-coding the test.

While we're here, correct a comment typo which missed an 'A'
from the feature name.

Backports commit 4036b7d1cd9fb1097a5f4bc24d7d31744256260f from qemu
2020-03-21 18:48:57 -04:00
Peter Maydell 4693b2c011 target/arm: Test correct register in aa32_pan and aa32_ats1e1 checks
The isar_feature_aa32_pan and isar_feature_aa32_ats1e1 functions
are supposed to be testing fields in ID_MMFR3; but a cut-and-paste
error meant we were looking at MVFR0 instead.

Fix the functions to look at the right register; this requires
us to move at least id_mmfr3 to the ARMISARegisters struct; we
choose to move all the ID_MMFRn registers for consistency.

Backports commit 10054016eda1b13bdd8340d100fd029cc8b58f36 from qemu
2020-03-21 18:47:12 -04:00
Peter Maydell e72fa1cb33 target/arm: Correct handling of PMCR_EL0.LC bit
The LC bit in the PMCR_EL0 register is supposed to be:
* read/write
* RES1 on an AArch64-only implementation
* an architecturally UNKNOWN value on reset
(and use of LC==0 by software is deprecated).

We were implementing it incorrectly as read-only always zero,
though we do have all the code needed to test it and behave
accordingly.

Instead make it a read-write bit which resets to 1 always, which
satisfies all the architectural requirements above.

Backports commit 62d96ff48510f4bf648ad12f5d3a5507227b026f from qemu
2020-03-21 18:40:26 -04:00
Peter Maydell de428e4b45 target/arm: Correct definition of PMCRDP
The PMCR_EL0.DP bit is bit 5, which is 0x20, not 0x10. 0x10 is 'X'.
Correct our #define of PMCRDP and add the missing PMCRX.

We do have the correct behaviour for handling the DP bit being
set, so this fixes a guest-visible bug.

Fixes: 033614c47de

Backports commit a1ed04dd79aabb9dbeeb5fa7d49f1a3de0357553 from qemu
2020-03-21 18:39:37 -04:00
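
In other words (bit values straight from the commit message; the macro names follow the existing PMCR* convention used there):

  #define PMCRX   0x10    /* PMCR_EL0.X  -- bit 4 */
  #define PMCRDP  0x20    /* PMCR_EL0.DP -- bit 5, previously misdefined as 0x10 */
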
Peter Maydell 4dd57f7acc target/arm: Implement ARMv8.4-PMU extension
The ARMv8.4-PMU extension adds:
* one new required event, STALL
* one new system register PMMIR_EL1

(There are also some more L1-cache related events, but since
we don't implement any cache we don't provide these, in the
same way we don't provide the base-PMUv3 cache events.)

The STALL event "counts every attributable cycle on which no
attributable instruction or operation was sent for execution on this
PE". QEMU doesn't stall in this sense, so this is another
always-reads-zero event.

The PMMIR_EL1 register is a read-only register providing
implementation-specific information about the PMU; currently it has
only one field, SLOTS, which defines behaviour of the STALL_SLOT PMU
event. Since QEMU doesn't implement the STALL_SLOT event, we can
validly make the register read zero.

Backports commit 15dd1ebda4a6ef928d484c5a4f48b8ccb7438bb2 from qemu
2020-03-21 18:37:50 -04:00
Peter Maydell 5c93f43eb9 target/arm: Implement ARMv8.1-PMU extension
The ARMv8.1-PMU extension requires:
* the evtCount field in PMEVTYPER<n>_EL0 is 16 bits, not 10
* MDCR_EL2.HPMD allows event counting to be disabled at EL2
* two new required events, STALL_FRONTEND and STALL_BACKEND
* ID register bits in ID_AA64DFR0_EL1 and ID_DFR0

We already implement the 16-bit evtCount field and the
HPMD bit, so all that is missing is the two new events:
STALL_FRONTEND
"counts every cycle counted by the CPU_CYCLES event on which no
operation was issued because there are no operations available
to issue to this PE from the frontend"
STALL_BACKEND
"counts every cycle counted by the CPU_CYCLES event on which no
operation was issued because the backend is unable to accept
any available operations from the frontend"

QEMU never stalls in this sense, so our implementation is trivial:
always return a zero count.

Backports commit 0727f63b1ecf765ebc48266f616f8fc362dc7fbc from qemu
2020-03-21 18:34:33 -04:00
Peter Maydell cef6f3e72c target/arm: Move DBGDIDR into ARMISARegisters
We're going to want to read the DBGDIDR register from KVM in
a subsequent commit, which means it needs to be in the
ARMISARegisters sub-struct. Move it.

Backports commit 4426d3617d64922d97b74ed22e67e33b6fb7de0a from qemu
2020-03-21 18:29:01 -04:00
Peter Maydell a6c9c87a5d target/arm: Stop assuming DBGDIDR always exists
The AArch32 DBGDIDR defines properties like the number of
breakpoints, watchpoints and context-matching comparators. On an
AArch64 CPU, the register may not even exist if AArch32 is not
supported at EL1.

Currently we hard-code use of DBGDIDR to identify the number of
breakpoints etc; this works for all our TCG CPUs, but will break if
we ever add an AArch64-only CPU. We also have an assert() that the
AArch32 and AArch64 registers match, which currently works only by
luck for KVM because we don't populate either of these ID registers
from the KVM vCPU and so they are both zero.

Clean this up so we have functions for finding the number
of breakpoints, watchpoints and context comparators which look
in the appropriate ID register.

This allows us to drop the "check that AArch64 and AArch32 agree
on the number of breakpoints etc" asserts:
* we no longer look at the AArch32 versions unless that's the
right place to be looking
* it's valid to have a CPU (eg AArch64-only) where they don't match
* we shouldn't have been asserting the validity of ID registers
in a codepath used with KVM anyway

Backports commit 88ce6c6ee85d902f59dc65afc3ca86b34f02b9ed from qemu
2020-03-21 18:26:24 -04:00
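
A sketch of the resulting shape (the function name and field offsets are placeholders assumed for illustration; the real helpers read the BRPs/WRPs/CTX_CMPs fields of ID_AA64DFR0_EL1 or DBGDIDR as appropriate):

  #include <stdint.h>
  #include <stdbool.h>

  #define AA64DFR0_BRPS_SHIFT  12      /* placeholder offsets, illustration only */
  #define DBGDIDR_BRPS_SHIFT   24

  /* "How many breakpoints?" answered from the right ID register for this CPU
   * rather than always trusting the AArch32 DBGDIDR; the fields hold count-1. */
  static unsigned num_breakpoints_sketch(bool has_aarch64,
                                         uint64_t id_aa64dfr0, uint32_t dbgdidr)
  {
      if (has_aarch64) {
          return (unsigned)((id_aa64dfr0 >> AA64DFR0_BRPS_SHIFT) & 0xf) + 1;
      }
      return ((dbgdidr >> DBGDIDR_BRPS_SHIFT) & 0xf) + 1;
  }
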
Peter Maydell afc28d9b2c target/arm: Add _aa64_ and _any_ versions of pmu_8_1 isar checks
Add the 64-bit version of the "is this a v8.1 PMUv3?"
ID register check function, and the _any_ version that
checks for either AArch32 or AArch64 support. We'll use
this in a later commit.

We don't (yet) do any isar_feature checks on ID_AA64DFR1_EL1,
but we move id_aa64dfr1 into the ARMISARegisters struct with
id_aa64dfr0, for consistency.

Backports commit 2a609df87d9b886fd38a190a754dbc241ff707e8 from qemu
2020-03-21 18:24:00 -04:00
Peter Maydell e64143966a target/arm: Define an aa32_pmu_8_1 isar feature test function
Instead of open-coding a check on the ID_DFR0 PerfMon ID register
field, create a standardly-named isar_feature for "does AArch32 have
a v8.1 PMUv3" and use it.

This entails moving the id_dfr0 field into the ARMISARegisters struct.

Backports commit a617953855b65a602d36364b9643f7e5bc31288e from qemu
2020-03-21 18:21:26 -04:00
Peter Maydell fd6c635e03 target/arm: Add and use FIELD definitions for ID_AA64DFR0_EL1
Add FIELD() definitions for the ID_AA64DFR0_EL1 and use them
where we currently have hard-coded bit values.

Backports commit ceb2744b47a1ef4184dca56a158eb3156b6eba36 from qemu
2020-03-21 18:16:55 -04:00
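
For readers unfamiliar with the pattern, a simplified stand-in for what QEMU's registerfields.h provides (the real FIELD()/FIELD_EX64() macros differ in detail; the field offset below is illustrative):

  #include <stdint.h>

  #define DEF_FIELD(reg, field, shift, length) \
      enum { reg##_##field##_SHIFT = (shift), reg##_##field##_LENGTH = (length) };
  #define GET_FIELD(val, reg, field) \
      (((val) >> reg##_##field##_SHIFT) & ((1ull << reg##_##field##_LENGTH) - 1))

  DEF_FIELD(ID_AA64DFR0, BRPS, 12, 4)      /* example field, offset illustrative */

  static uint64_t read_brps(uint64_t id_aa64dfr0)
  {
      /* Named fields replace hard-coded "(val >> 12) & 0xf" extraction. */
      return GET_FIELD(id_aa64dfr0, ID_AA64DFR0, BRPS);
  }
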
Peter Maydell ebd7131c16 target/arm: Factor out PMU register definitions
Pull the code that defines the various PMU registers out
into its own function, matching the pattern we have
already for the debug registers.

Apart from one style fix to a multi-line comment, this
is purely movement of code with no changes to it.

Backports commit 24183fb6f00ecca8b508e245c95ff50ddde3f18b from qemu
2020-03-21 18:15:09 -04:00
Peter Maydell b1c088e2f2 target/arm: Define and use any_predinv isar_feature test
Instead of open-coding "ARM_FEATURE_AARCH64 ? aa64_predinv: aa32_predinv",
define and use an any_predinv isar_feature test function.

Backports commit 22e570730d15374453baa73ff2a699e01ef4e950 from qemu
2020-03-21 18:13:25 -04:00
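
The open-coded form the commit removes, next to the named helper, as a sketch (names mirror the commit message; the stub bodies are illustrative):

  #include <stdbool.h>

  static bool isar_feature_aa64_predinv_sketch(void) { return true; }
  static bool isar_feature_aa32_predinv_sketch(void) { return false; }

  /* Before: callers open-coded
   *     arm_feature_aarch64 ? aa64_predinv : aa32_predinv
   * After: one named helper hides the AArch32/AArch64 split. */
  static bool isar_feature_any_predinv_sketch(bool arm_feature_aarch64)
  {
      return arm_feature_aarch64 ? isar_feature_aa64_predinv_sketch()
                                 : isar_feature_aa32_predinv_sketch();
  }
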
Peter Maydell 778fcd9562 target/arm: Check aa32_pan in take_aarch32_exception(), not aa64_pan
In take_aarch32_exception(), we know we are dealing with a CPU that
has AArch32, so the right isar_feature test is aa32_pan, not aa64_pan.

Backports commit f8af1143ef93954e77cf59e09b5e004dafbd64fd from qemu
2020-03-21 18:09:27 -04:00
Peter Maydell e63f70f980 target/arm: Add _aa32_ to isar_feature functions testing 32-bit ID registers
Enforce a convention that an isar_feature function that tests a
32-bit ID register always has _aa32_ in its name, and one that
tests a 64-bit ID register always has _aa64_ in its name.
We already follow this except for three cases: thumb_div,
arm_div and jazelle, which all need _aa32_ adding.

(As noted in the comment, isar_feature_aa32_fp16_arith()
is an exception in that it currently tests ID_AA64PFR0_EL1,
but will switch to MVFR1 once we've properly implemented
FP16 for AArch32.)

Backports commit 873b73c0c891ec20adacc7bd1ae789294334d675 from qemu
2020-03-21 18:08:23 -04:00
Richard Henderson 0131e804fb target/arm: Split out aa64_va_parameter_tbi, aa64_va_parameter_tbid
For the purpose of rebuild_hflags_a64, we do not need to compute
all of the va parameters, only tbi. Moreover, we can compute them
in a form that is more useful to storing in hflags.

This eliminates the need for aa64_va_parameter_both, so fold that
in to aa64_va_parameter. The remaining calls to aa64_va_parameter
are in get_phys_addr_lpae and in pauth_helper.c.

This reduces the total cpu consumption of aa64_va_parameter in a
kernel boot plus a kvm guest kernel boot from 3% to 0.5%.

Backports commit b830a5ee82e66f54697dcc6450fe9239b7412d13 from qemu
2020-03-21 18:04:39 -04:00
Richard Henderson 2cce7e0dd0 target/arm: Remove ttbr1_valid check from get_phys_addr_lpae
Now that aa64_va_parameters_both sets select based on the number
of ranges in the regime, the ttbr1_valid check is redundant.

Backports commit 03f27724dff15633911e68a3906c30f57938ea45 from qemu
2020-03-21 18:01:24 -04:00
Richard Henderson f3fa39829d target/arm: Fix select for aa64_va_parameters_both
Select should always be 0 for a regime with one range.

Backports commit 71d181640a1a9470f074fa28600ca85587e2ca6b from qemu
2020-03-21 18:00:15 -04:00
Richard Henderson 18a86780ee target/arm: Implement UAO semantics
We need only override the current condition under which
TBFLAG_A64.UNPRIV is set.

Backports commit 7a8014ab871d5320effd737dfe88b2e80f16a509 from qemu
2020-03-21 17:50:29 -04:00
Richard Henderson 5b5050c6ca target/arm: Update MSR access to UAO
Backports commit 9eeb7a1c9531cb3574bfe2c36eb7624802c3ec00 from qemu
2020-03-21 17:48:01 -04:00