Commit graph

5220 commits

Peter Maydell 2ce106df33 target/arm: Use isar_feature function for testing AA32HPD feature
Now that we have moved ID_MMFR4 into the ARMISARegisters struct, we
can define and use an isar_feature for the presence of the
ARMv8.2-AA32HPD feature, rather than open-coding the test.

While we're here, correct a comment typo which missed an 'A'
from the feature name.

Backports commit 4036b7d1cd9fb1097a5f4bc24d7d31744256260f from qemu
2020-03-21 18:48:57 -04:00
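
A minimal standalone sketch of the isar_feature pattern this commit describes; the struct, helper, and field offset below are illustrative stand-ins rather than QEMU's actual definitions.

    #include <stdbool.h>
    #include <stdint.h>

    /* Stand-in for the relevant part of QEMU's ARMISARegisters sub-struct. */
    typedef struct {
        uint32_t id_mmfr4;
    } ARMISARegistersSketch;

    /* Extract a 4-bit ID-register field at a given bit offset. */
    static inline uint32_t id_field(uint32_t reg, unsigned shift)
    {
        return (reg >> shift) & 0xf;
    }

    /* "Is ARMv8.2-AA32HPD present?" as a named feature test instead of an
     * open-coded field check at every call site (field offset assumed). */
    static inline bool isar_feature_aa32_hpd(const ARMISARegistersSketch *id)
    {
        return id_field(id->id_mmfr4, 16) != 0;
    }
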
Peter Maydell 4693b2c011 target/arm: Test correct register in aa32_pan and aa32_ats1e1 checks
The isar_feature_aa32_pan and isar_feature_aa32_ats1e1 functions
are supposed to be testing fields in ID_MMFR3; but a cut-and-paste
error meant we were looking at MVFR0 instead.

Fix the functions to look at the right register; this requires
us to move at least id_mmfr3 to the ARMISARegisters struct; we
choose to move all the ID_MMFRn registers for consistency.

Backports commit 10054016eda1b13bdd8340d100fd029cc8b58f36 from qemu
2020-03-21 18:47:12 -04:00
Peter Maydell e72fa1cb33 target/arm: Correct handling of PMCR_EL0.LC bit
The LC bit in the PMCR_EL0 register is supposed to be:
* read/write
* RES1 on an AArch64-only implementation
* an architecturally UNKNOWN value on reset
(and use of LC==0 by software is deprecated).

We were implementing it incorrectly as read-only always zero,
though we do have all the code needed to test it and behave
accordingly.

Instead make it a read-write bit which resets to 1 always, which
satisfies all the architectural requirements above.

Backports commit 62d96ff48510f4bf648ad12f5d3a5507227b026f from qemu
2020-03-21 18:40:26 -04:00
Peter Maydell de428e4b45 target/arm: Correct definition of PMCRDP
The PMCR_EL0.DP bit is bit 5, which is 0x20, not 0x10. 0x10 is 'X'.
Correct our #define of PMCRDP and add the missing PMCRX.

We do have the correct behaviour for handling the DP bit being
set, so this fixes a guest-visible bug.

Fixes: 033614c47de

Backports commit a1ed04dd79aabb9dbeeb5fa7d49f1a3de0357553 from qemu
2020-03-21 18:39:37 -04:00
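
For reference, the bit positions the two PMCR fixes above rely on, written out as defines; the values follow the commit text, but the names and comments are illustrative rather than copied from QEMU's headers.

    /* PMCR_EL0 control bits mentioned above. */
    #define PMCRX   (1U << 4)   /* 0x10: event export enable */
    #define PMCRDP  (1U << 5)   /* 0x20: disable cycle counter when prohibited */
    #define PMCRLC  (1U << 6)   /* 0x40: long (64-bit) cycle counter; now
                                 * read/write and reset to 1, per the
                                 * previous commit */
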
Peter Maydell 28b239adb9 target/arm: Provide ARMv8.4-PMU in '-cpu max'
Set the ID register bits to provide ARMv8.4-PMU (and implicitly
also ARMv8.1-PMU) in the 'max' CPU.

Backports commit 3bec78447a958d4819911252e056f29740ac25e4 from qemu
2020-03-21 18:38:53 -04:00
Peter Maydell 4dd57f7acc target/arm: Implement ARMv8.4-PMU extension
The ARMv8.4-PMU extension adds:
* one new required event, STALL
* one new system register PMMIR_EL1

(There are also some more L1-cache related events, but since
we don't implement any cache we don't provide these, in the
same way we don't provide the base-PMUv3 cache events.)

The STALL event "counts every attributable cycle on which no
attributable instruction or operation was sent for execution on this
PE". QEMU doesn't stall in this sense, so this is another
always-reads-zero event.

The PMMIR_EL1 register is a read-only register providing
implementation-specific information about the PMU; currently it has
only one field, SLOTS, which defines behaviour of the STALL_SLOT PMU
event. Since QEMU doesn't implement the STALL_SLOT event, we can
validly make the register read zero.

Backports commit 15dd1ebda4a6ef928d484c5a4f48b8ccb7438bb2 from qemu
2020-03-21 18:37:50 -04:00
Peter Maydell 5c93f43eb9 target/arm: Implement ARMv8.1-PMU extension
The ARMv8.1-PMU extension requires:
* the evtCount field in PMEVTYPER<n>_EL0 is 16 bits, not 10
* MDCR_EL2.HPMD allows event counting to be disabled at EL2
* two new required events, STALL_FRONTEND and STALL_BACKEND
* ID register bits in ID_AA64DFR0_EL1 and ID_DFR0

We already implement the 16-bit evtCount field and the
HPMD bit, so all that is missing is the two new events:
STALL_FRONTEND
"counts every cycle counted by the CPU_CYCLES event on which no
operation was issued because there are no operations available
to issue to this PE from the frontend"
STALL_BACKEND
"counts every cycle counted by the CPU_CYCLES event on which no
operation was issued because the backend is unable to accept
any available operations from the frontend"

QEMU never stalls in this sense, so our implementation is trivial:
always return a zero count.

Backports commit 0727f63b1ecf765ebc48266f616f8fc362dc7fbc from qemu
2020-03-21 18:34:33 -04:00
Peter Maydell 7dfc30b754 target/arm: Read debug-related ID registers from KVM
Backports 1548a7b2ad621a31b4216ed703b6d658a2ecf0d0 from qemu
2020-03-21 18:30:20 -04:00
Peter Maydell cef6f3e72c target/arm: Move DBGDIDR into ARMISARegisters
We're going to want to read the DBGDIDR register from KVM in
a subsequent commit, which means it needs to be in the
ARMISARegisters sub-struct. Move it.

Backports commit 4426d3617d64922d97b74ed22e67e33b6fb7de0a from qemu
2020-03-21 18:29:01 -04:00
Peter Maydell a6c9c87a5d target/arm: Stop assuming DBGDIDR always exists
The AArch32 DBGDIDR defines properties like the number of
breakpoints, watchpoints and context-matching comparators. On an
AArch64 CPU, the register may not even exist if AArch32 is not
supported at EL1.

Currently we hard-code use of DBGDIDR to identify the number of
breakpoints etc; this works for all our TCG CPUs, but will break if
we ever add an AArch64-only CPU. We also have an assert() that the
AArch32 and AArch64 registers match, which currently works only by
luck for KVM because we don't populate either of these ID registers
from the KVM vCPU and so they are both zero.

Clean this up so we have functions for finding the number
of breakpoints, watchpoints and context comparators which look
in the appropriate ID register.

This allows us to drop the "check that AArch64 and AArch32 agree
on the number of breakpoints etc" asserts:
* we no longer look at the AArch32 versions unless that's the
right place to be looking
* it's valid to have a CPU (eg AArch64-only) where they don't match
* we shouldn't have been asserting the validity of ID registers
in a codepath used with KVM anyway

Backports commit 88ce6c6ee85d902f59dc65afc3ca86b34f02b9ed from qemu
2020-03-21 18:26:24 -04:00
Peter Maydell afc28d9b2c target/arm: Add _aa64_ and _any_ versions of pmu_8_1 isar checks
Add the 64-bit version of the "is this a v8.1 PMUv3?"
ID register check function, and the _any_ version that
checks for either AArch32 or AArch64 support. We'll use
this in a later commit.

We don't (yet) do any isar_feature checks on ID_AA64DFR1_EL1,
but we move id_aa64dfr1 into the ARMISARegisters struct with
id_aa64dfr0, for consistency.

Backports commit 2a609df87d9b886fd38a190a754dbc241ff707e8 from qemu
2020-03-21 18:24:00 -04:00
Peter Maydell e64143966a target/arm: Define an aa32_pmu_8_1 isar feature test function
Instead of open-coding a check on the ID_DFR0 PerfMon ID register
field, create a standardly-named isar_feature for "does AArch32 have
a v8.1 PMUv3" and use it.

This entails moving the id_dfr0 field into the ARMISARegisters struct.

Backports commit a617953855b65a602d36364b9643f7e5bc31288e from qemu
2020-03-21 18:21:26 -04:00
Peter Maydell fd537585d7 target/arm: Use FIELD macros for clearing ID_DFR0 PERFMON field
We already define FIELD macros for ID_DFR0, so use them in the
one place where we're doing direct bit value manipulation.

Backports commit d52c061e541982a3663ad5c65bd3b518dbe85b87 from qemu
2020-03-21 18:17:55 -04:00
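
Roughly, the change replaces a raw shift/mask constant with a named field access; a standalone sketch of the idea follows (the PerfMon bit position and helper are illustrative, not QEMU's FIELD macros).

    #include <stdint.h>

    #define ID_DFR0_PERFMON_SHIFT  24                               /* bits [27:24] */
    #define ID_DFR0_PERFMON_MASK   (0xfu << ID_DFR0_PERFMON_SHIFT)

    /* Clear the PerfMon field by name rather than with a magic constant. */
    static inline uint32_t id_dfr0_clear_perfmon(uint32_t id_dfr0)
    {
        return id_dfr0 & ~ID_DFR0_PERFMON_MASK;
    }
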
Peter Maydell fd6c635e03 target/arm: Add and use FIELD definitions for ID_AA64DFR0_EL1
Add FIELD() definitions for the ID_AA64DFR0_EL1 and use them
where we currently have hard-coded bit values.

Backports commit ceb2744b47a1ef4184dca56a158eb3156b6eba36 from qemu
2020-03-21 18:16:55 -04:00
Peter Maydell ebd7131c16 target/arm: Factor out PMU register definitions
Pull the code that defines the various PMU registers out
into its own function, matching the pattern we have
already for the debug registers.

Apart from one style fix to a multi-line comment, this
is purely movement of code with no changes to it.

Backports commit 24183fb6f00ecca8b508e245c95ff50ddde3f18b from qemu
2020-03-21 18:15:09 -04:00
Peter Maydell b1c088e2f2 target/arm: Define and use any_predinv isar_feature test
Instead of open-coding "ARM_FEATURE_AARCH64 ? aa64_predinv: aa32_predinv",
define and use an any_predinv isar_feature test function.

Backports commit 22e570730d15374453baa73ff2a699e01ef4e950 from qemu
2020-03-21 18:13:25 -04:00
Peter Maydell 62178626e4 target/arm: Add isar_feature_any_fp16 and document naming/usage conventions
Our current usage of the isar_feature feature tests almost always
uses an _aa32_ test when the code path is known to be AArch32
specific and an _aa64_ test when the code path is known to be
AArch64 specific. There is just one exception: in the vfp_set_fpscr
helper we check aa64_fp16 to determine whether the FZ16 bit in
the FP(S)CR exists, but this code is also used for AArch32.
There are other places in future where we're likely to want
a general "does this feature exist for either AArch32 or
AArch64" check (typically where architecturally the feature exists
for both CPU states if it exists at all, but the CPU might be
AArch32-only or AArch64-only, and so only have one set of ID
registers).

Introduce a new category of isar_feature_* functions:
isar_feature_any_foo() should be tested when what we want to
know is "does this feature exist for either AArch32 or AArch64",
and always returns the logical OR of isar_feature_aa32_foo()
and isar_feature_aa64_foo().

Backports commit 6e61f8391cc6cb0846d4bf078dbd935c2aeebff5 from qemu
2020-03-21 18:12:02 -04:00
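
A minimal sketch of the _any_ convention being introduced, with placeholder per-state checks standing in for the real ID-register tests.

    #include <stdbool.h>

    /* Placeholders: the real tests read the AArch32/AArch64 ID registers. */
    static bool isar_feature_aa32_foo(void) { return false; }
    static bool isar_feature_aa64_foo(void) { return true; }

    /* "Does this feature exist for either AArch32 or AArch64?" */
    static bool isar_feature_any_foo(void)
    {
        return isar_feature_aa32_foo() || isar_feature_aa64_foo();
    }
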
Peter Maydell 778fcd9562 target/arm: Check aa32_pan in take_aarch32_exception(), not aa64_pan
In take_aarch32_exception(), we know we are dealing with a CPU that
has AArch32, so the right isar_feature test is aa32_pan, not aa64_pan.

Backports commit f8af1143ef93954e77cf59e09b5e004dafbd64fd from qemu
2020-03-21 18:09:27 -04:00
Peter Maydell e63f70f980 target/arm: Add _aa32_ to isar_feature functions testing 32-bit ID registers
Enforce a convention that an isar_feature function that tests a
32-bit ID register always has _aa32_ in its name, and one that
tests a 64-bit ID register always has _aa64_ in its name.
We already follow this except for three cases: thumb_div,
arm_div and jazelle, which all need _aa32_ adding.

(As noted in the comment, isar_feature_aa32_fp16_arith()
is an exception in that it currently tests ID_AA64PFR0_EL1,
but will switch to MVFR1 once we've properly implemented
FP16 for AArch32.)

Backports commit 873b73c0c891ec20adacc7bd1ae789294334d675 from qemu
2020-03-21 18:08:23 -04:00
Richard Henderson 0131e804fb target/arm: Split out aa64_va_parameter_tbi, aa64_va_parameter_tbid
For the purpose of rebuild_hflags_a64, we do not need to compute
all of the va parameters, only tbi. Moreover, we can compute them
in a form that is more useful to storing in hflags.

This eliminates the need for aa64_va_parameter_both, so fold that
in to aa64_va_parameter. The remaining calls to aa64_va_parameter
are in get_phys_addr_lpae and in pauth_helper.c.

This reduces the total cpu consumption of aa64_va_parameter in a
kernel boot plus a kvm guest kernel boot from 3% to 0.5%.

Backports commit b830a5ee82e66f54697dcc6450fe9239b7412d13 from qemu
2020-03-21 18:04:39 -04:00
Richard Henderson 2cce7e0dd0 target/arm: Remove ttbr1_valid check from get_phys_addr_lpae
Now that aa64_va_parameters_both sets select based on the number
of ranges in the regime, the ttbr1_valid check is redundant.

Backports commit 03f27724dff15633911e68a3906c30f57938ea45 from qemu
2020-03-21 18:01:24 -04:00
Richard Henderson f3fa39829d target/arm: Fix select for aa64_va_parameters_both
Select should always be 0 for a regime with one range.

Backports commit 71d181640a1a9470f074fa28600ca85587e2ca6b from qemu
2020-03-21 18:00:15 -04:00
Richard Henderson 3183349f1c target/arm: Use bit 55 explicitly for pauth
The pseudocode in aarch64/functions/pac/auth/Auth and
aarch64/functions/pac/strip/Strip always uses bit 55 for
extfield and does not consider whether the current regime has 2 ranges.

Backports commit 7eeb4c2ce8dc0a5655526f3f39bd5d6cc02efb39 from qemu
2020-03-21 17:59:06 -04:00
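
The extension-bit choice the commit describes, sketched as a standalone helper (not the actual pauth_helper.c code).

    #include <stdint.h>

    /* Per the Auth/Strip pseudocode, the extension field comes from bit 55
     * of the pointer, regardless of how many address ranges the current
     * translation regime has. */
    static inline uint64_t pauth_extfield(uint64_t ptr)
    {
        return (ptr >> 55) & 1;
    }
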
Richard Henderson 51b6064ba4 target/arm: Flush high bits of sve register after AdvSIMD INS
Writes to AdvSIMD registers flush the bits above 128.

Backports commit 528dc354b6f3aa82d65141cc60bc0e725e6cae98 from qemu
2020-03-21 17:58:09 -04:00
Richard Henderson 74cbfceb56 target/arm: Flush high bits of sve register after AdvSIMD ZIP/UZP/TRN
Writes to AdvSIMD registers flush the bits above 128.

Backports commit 33649de62e40df0060a1c514574e4ef25c4e52e1 from qemu
2020-03-21 17:56:40 -04:00
Richard Henderson 6eb8472344 target/arm: Flush high bits of sve register after AdvSIMD TBL/TBX
Writes to AdvSIMD registers flush the bits above 128.

Backports commit 263273bc988e677ebadeaf7d0e49f6792a112db5 from qemu
2020-03-21 17:56:08 -04:00
Richard Henderson 18e9c4805f target/arm: Flush high bits of sve register after AdvSIMD EXT
Writes to AdvSIMD registers flush the bits above 128.

Backports commit 78cedfabd53b6f64e7e64fc84878d848e5df1d08 from qemu
2020-03-21 17:55:12 -04:00
Peter Maydell 96a96565db target/arm: Implement ARMv8.1-VMID16 extension
The ARMv8.1-VMID16 extension extends the VMID from 8 bits to 16 bits:

* the ID_AA64MMFR1_EL1.VMIDBits field specifies whether the VMID is
8 or 16 bits
* the VMID field in VTTBR_EL2 is extended to 16 bits
* VTCR_EL2.VS lets the guest specify whether to use the full 16 bits,
or use the backwards-compatible 8 bits

For QEMU implementing this is trivial:
* we do not track VMIDs in TLB entries, so we never use the VMID field
* we treat any write to VTTBR_EL2, not just a change to the VMID field
bits, as a "possible VMID change" that causes us to throw away TLB
entries, so that code doesn't need changing
* we allow the guest to read/write the VTCR_EL2.VS bit already

So all that's missing is the ID register part: report that we support
VMID16 in our 'max' CPU.

Backports commit dc7a88d0810ad272bdcd2e0869359af78fdd9114 from qemu
2020-03-21 17:52:43 -04:00
Richard Henderson 57f0aa3044 target/arm: Enable ARMv8.2-UAO in -cpu max
Backports commit e11f0eb6724571adb812a3ce5269c41586e0262b from qemu
2020-03-21 17:51:44 -04:00
Richard Henderson 18a86780ee target/arm: Implement UAO semantics
We need only override the current condition under which
TBFLAG_A64.UNPRIV is set.

Backports commit 7a8014ab871d5320effd737dfe88b2e80f16a509 from qemu
2020-03-21 17:50:29 -04:00
Richard Henderson 5b5050c6ca target/arm: Update MSR access to UAO
Backports commit 9eeb7a1c9531cb3574bfe2c36eb7624802c3ec00 from qemu
2020-03-21 17:48:01 -04:00
Richard Henderson 0630e66b5a target/arm: Add ID_AA64MMFR2_EL1
Add definitions for all of the fields, up to ARMv8.5.
Convert the existing RESERVED register to a full register.
Query KVM for the value of the register for the host.

Backports commit 64761e10af2742a916c08271828890274137b9e8 from qemu
2020-03-21 17:45:27 -04:00
Richard Henderson 7287bf16b8 target/arm: Enable ARMv8.2-ATS1E1 in -cpu max
This includes enablement of ARMv8.1-PAN.

Backports commit e0fe7309a7c21ef2386de50d37c86aea0d671c08 from qemu
2020-03-21 17:43:54 -04:00
Richard Henderson d196288b4f target/arm: Implement ATS1E1 system registers
This is a minor enhancement over ARMv8.1-PAN.
The *_PAN mmu_idx are used with the existing do_ats_write.

Backports commit 04b07d29722192926f467ea5fedf2c3b0996a2a5 from qemu
2020-03-21 17:42:01 -04:00
Richard Henderson 6576864930 target/arm: Set PAN bit as required on exception entry
The PAN bit is preserved, or set as per SCTLR_ELx.SPAN,
plus several other conditions listed in the ARM ARM.

Backports commit 4a2696c0d4d80e14a192b28148c6167bc5056f94 from qemu
2020-03-21 17:40:11 -04:00
Richard Henderson aad0621f96 target/arm: Enforce PAN semantics in get_S1prot
If we have a PAN-enforcing mmu_idx, set prot == 0 if user_rw != 0.

Backports commit 81636b70c226dc27d7ebc8dedbcec26166d23085 from qemu
2020-03-21 17:35:55 -04:00
Richard Henderson 41d03da852 target/arm: Update arm_mmu_idx_el for PAN
Examine the PAN bit for EL1, EL2, and Secure EL1 to
determine if it applies.

Backports commit 66412260cc1bee60a22d96e4ad8569b85745fea4 from qemu
2020-03-21 17:34:12 -04:00
Richard Henderson 35fab80c57 target/arm: Update MSR access for PAN
For aarch64, there's a dedicated msr (imm, reg) insn.
For aarch32, this is done via msr to cpsr. Writes from el0
are ignored, which is already handled by the CPSR_USER mask.

Backports commit 220f508f49c5f49fb771d5105f991c19ffede3f7 from qemu
2020-03-21 17:33:16 -04:00
Richard Henderson 50bb867a6f target/arm: Introduce aarch64_pstate_valid_mask
Use this along the exception return path, where we previously
accepted any values.

Backports commit 140845111809cd6fd57ccde93867b48cc56ffc74 from qemu
2020-03-21 17:26:00 -04:00
Richard Henderson b6b69d7ac5 target/arm: Remove CPSR_RESERVED
The only remaining use was in op_helper.c. Use PSTATE_SS
directly, and move the commentary so that it is more obvious
what is going on.

Backports commit 70dae0d069c45250bbefd9424089383a8ac239de from qemu
2020-03-21 17:24:21 -04:00
Richard Henderson 2d3239d0a1 target/arm: Use aarch32_cpsr_valid_mask in helper_exception_return
Using ~0 as the mask on the aarch64->aarch32 exception return
was not even as correct as the CPSR_ERET_MASK that we had used
on the aarch32->aarch32 exception return.

Backports commit d203cabd1bd12f31c9df0b5737421ba67b96857b from qemu
2020-03-21 17:20:53 -04:00
Richard Henderson c450694f1a target/arm: Replace CPSR_ERET_MASK with aarch32_cpsr_valid_mask
CPSR_ERET_MASK was a useless renaming of CPSR_RESERVED.
The function also takes into account bits that the cpu
does not support.

Backports commit 437864216d63f052f3cd06ec8861d0e432496424 from qemu
2020-03-21 17:19:17 -04:00
Richard Henderson e4a7a089f0 target/arm: Mask CPSR_J when Jazelle is not enabled
The J bit signals Jazelle mode, and so of course is RES0
when the feature is not enabled.

Backports commit f062d1447f2a80e7a5f593b8cb5ac7cab5e16eb0 from qemu
2020-03-21 17:17:50 -04:00
Richard Henderson ca2bb77ab3 target/arm: Split out aarch32_cpsr_valid_mask
Split this helper out of msr_mask in translate.c. At the same time,
transform the negative reductive logic to positive accumulative logic.
It will be usable along the exception paths.

While touching msr_mask, fix up formatting.

Backports commit 4f9584ed4bba8a57a3cb2fa48a682725005d530a from qemu
2020-03-21 17:16:20 -04:00
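
A sketch of the "positive accumulative" shape described above: start from the always-valid bits and OR in bits only when the CPU supports the corresponding feature. The bit values and feature hook are illustrative.

    #include <stdbool.h>
    #include <stdint.h>

    #define CPSR_NZCV  0xf0000000u   /* condition flags, always valid */
    #define CPSR_J     (1u << 24)    /* Jazelle state bit, RES0 without Jazelle */

    static bool cpu_has_jazelle(void) { return false; }   /* placeholder */

    static uint32_t aarch32_cpsr_valid_mask_sketch(void)
    {
        uint32_t valid = CPSR_NZCV;        /* accumulate from a base mask... */
        if (cpu_has_jazelle()) {
            valid |= CPSR_J;               /* ...adding bits the CPU implements */
        }
        return valid;
    }
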
Richard Henderson e9850834d5 target/arm: Move LOR regdefs to file scope
For static const regdefs, file scope is preferred.

Backports commit d8564ee4e5bce87ec1fdf23656df9367eb1bc571 from qemu
2020-03-21 17:13:58 -04:00
Richard Henderson 5c8c2ca505 target/arm: Add isar_feature tests for PAN + ATS1E1
Include definitions for all of the bits in ID_MMFR3.
We already have a definition for ID_AA64MMFR1.PAN.

Backports commit 3d6ad6bb466f487bcc861f99e2c9054230df1076 from qemu
2020-03-21 17:13:07 -04:00
Richard Henderson 7aaf0d442b target/arm: Add mmu_idx for EL1 and EL2 w/ PAN enabled
To implement PAN, we will want to swap, for short periods
of time, to a different privileged mmu_idx. In addition,
we cannot do this with flushing alone, because the AT*
instructions have both PAN and PAN-less versions.

Add the ARMMMUIdx*_PAN constants where necessary next to
the corresponding ARMMMUIdx* constant.

Backports commit 452ef8cb8c7b06f44a30a3c3a54d3be82c4aef59 from qemu
2020-03-21 17:12:16 -04:00
Richard Henderson ed5a4950fd target/arm: Add arm_mmu_idx_is_stage1_of_2
Use a common predicate for querying stage1-ness.

Backports commit fee7aa46edd76f06c3dc176abb8fd05b365efce6 from qemu
2020-03-21 16:56:03 -04:00
Richard Henderson 12b4e01d9c tcg: Add tcg_gen_gvec_5_ptr
Extend the vector generator infrastructure to handle
5 vector arguments.

Backports commit 2445971604c1cfd3ec484457159f4ac300fb04d2 from qemu
2020-03-21 16:54:01 -04:00
Taylor Simpson 6507fdb3b1 tcg: Add support for a helper with 7 arguments
Currently, helpers can only take up to 6 arguments. This patch adds the
capability for up to 7 arguments. I have tested it with the Hexagon port
that I am preparing for submission.

Backports commit e6cadf49c3d191f6984e56ec3bbeb0b103ca5bc2 from qemu
2020-03-21 16:53:56 -04:00
Richard Henderson eb0586f9cd target/arm: Raise only one interrupt in arm_cpu_exec_interrupt
The fall through organization of this function meant that we
would raise an interrupt, then might overwrite that with another.
Since interrupt prioritization is IMPLEMENTATION DEFINED, we
can recognize these in any order we choose.

Unify the code to raise the interrupt in a block at the end.

Backports commit d63d0ec59d87a698de5ed843288f90a23470cf2e from qemu
2020-03-21 16:42:52 -04:00
Richard Henderson d00e5ec47d target/arm: Use bool for unmasked in arm_excp_unmasked
The value computed is fully boolean; using int8_t is odd.

Backports commit 16e07f78df002067bc4bfb115ba1ee0ce278e9e5 from qemu
2020-03-21 16:40:36 -04:00
Richard Henderson 975f0a9bc5 target/arm: Pass more cpu state to arm_excp_unmasked
Avoid redundant computation of cpu state by passing it in
from the caller, which has already computed it for itself.

Backports commit be87955687446be152f366af543c9234eab78a7c from qemu
2020-03-21 16:39:16 -04:00
Richard Henderson 6023db20bc target/arm: Move arm_excp_unmasked to cpu.c
This inline function has one user in cpu.c, and need not be exposed
otherwise. Code movement only, with fixups for checkpatch.

Backports commit 310cedf39dea240a89f90729fd99481ff6158e90 from qemu
2020-03-21 16:37:12 -04:00
Richard Henderson ad5a3b2532 target/arm: Enable ARMv8.1-VHE in -cpu max
Backports commit cd3f80aba0c559a6291f7c3e686422b15381f6b7 from qemu
2020-03-21 16:36:04 -04:00
Richard Henderson 36407da586 target/arm: Update arm_cpu_do_interrupt_aarch64 for VHE
When VHE is enabled, the exception level below EL2 is not EL1,
but EL0, and so to identify the entry vector offset for exceptions
targeting EL2 we need to look at the width of EL0, not of EL1.

Backports commit cb092fbbaeb7b4e91b3f9c53150c8160f91577c7 from qemu
2020-03-21 16:35:07 -04:00
Richard Henderson 8f1201e392 target/arm: Update get_a64_user_mem_index for VHE
The EL2&0 translation regime is affected by Load Register (unpriv).

The code structure used here will facilitate later changes in this
area for implementing UAO and NV.

Backports commit cc28fc30e333dc2f20ebfde54444697e26cd8f6d from qemu
2020-03-21 16:33:52 -04:00
Alex Bennée 76ca1cd732 target/arm: check TGE and E2H flags for EL0 pauth traps
According to the ARM ARM, we should only trap from the EL1&0 regime.

Backports commit a7469a3c1edc7687d7d25967bc2c0280de202bca from qemu
2020-03-21 16:27:40 -04:00
Richard Henderson 01e1e7a3a0 target/arm: Update {fp,sve}_exception_el for VHE
When TGE+E2H are both set, CPACR_EL1 is ignored.

Backports commit c2ddb7cf963b3bea838266bfca62514dc9750a10 from qemu
2020-03-21 16:26:01 -04:00
Richard Henderson 86d0163465 target/arm: Update arm_phys_excp_target_el for TGE
The TGE bit routes all asynchronous exceptions to EL2.

Backports commit d1b31428fd522b725bc053c84b5fbc8764061363 from qemu
2020-03-21 16:23:52 -04:00
Richard Henderson 0c03fa2dac target/arm: Flush tlbs for E2&0 translation regime
Backports commit 85d0dc9fa205027554372367f6925749a2d2b4c4 from qemu
2020-03-21 16:22:46 -04:00
Richard Henderson 50ac89852a target/arm: Flush tlb for ASID changes in EL2&0 translation regime
Since we only support a single ASID, flush the tlb when it changes.

Note that TCR_EL2, like TCR_EL1, has the A1 bit that chooses between
the two TTBR* registers for the location of the ASID.

Backports commit d06dc93340825030b6297c61199a17c0067b0377 from qemu
2020-03-21 16:13:55 -04:00
Richard Henderson a2b8ebabfa target/arm: Add VHE timer register redirection and aliasing
Apart from the wholesale redirection that HCR_EL2.E2H performs
for EL2, there's a separate redirection specific to the timers
that happens for EL0 when running in the EL2&0 regime.

Backports commit bb5972e439dc0ac4d21329a9d97bad6760ec702d from qemu
2020-03-21 16:09:54 -04:00
Richard Henderson e41c51f6da target/arm: Add VHE system register redirection and aliasing
Several of the EL1/0 registers are redirected to the EL2 version when in
EL2 and HCR_EL2.E2H is set. Many of these registers have side effects.
Link together the two ARMCPRegInfo structures after they have been
properly instantiated. Install common dispatch routines to all of the
relevant registers.

The same set of registers that are redirected also have additional
EL12/EL02 aliases created to access the original register that was
redirected.

Omit the generic timer registers from redirection here, because we'll
need multiple kinds of redirection from both EL0 and EL2.

Backports commit e2cce18f5c1d0d55328c585c8372cdb096bbf528 from qemu
2020-03-21 15:57:03 -04:00
Richard Henderson ff720b7fd3 target/arm: Update define_one_arm_cp_reg_with_opaque for VHE
For ARMv8.1, op1 == 5 is reserved for EL2 aliases of
EL1 and EL0 registers.

Backports commit b4ecf60f7eee88cbfe5700044790cb7494c5dd37 from qemu
2020-03-21 15:39:54 -04:00
Richard Henderson 8c7795dc04 target/arm: Update timer access for VHE
Backports commit 5bc8437136fb1e7bc8b566f4f2f7269b0f990fad from qemu
2020-03-21 15:38:47 -04:00
Richard Henderson d6150127b4 target/arm: Add the hypervisor virtual counter
Backports commit 8c94b071a09c2183f032febff3112f2b7662156c from qemu
2020-03-21 15:35:36 -04:00
Richard Henderson 8e2ac48ad0 target/arm: Update ctr_el0_access for EL2
Update to include checks against HCR_EL2.TID2.

Backports commit 97475a89375d62a7722e04ced9fbdf0b992f4b83 from qemu
2020-03-21 15:31:48 -04:00
Richard Henderson 6886ba66d0 target/arm: Update aa64_zva_access for EL2
The comment that we don't support EL2 is somewhat out of date.
Update to include checks against HCR_EL2.TDZ.

Backports commit 4351cb72fb65926136ab618c9e40c1f5a8813251 from qemu
2020-03-21 15:30:37 -04:00
Richard Henderson 3a5135473f target/arm: Update arm_sctlr for VHE
Use the correct sctlr for EL2&0 regime. Due to header ordering,
and where arm_mmu_idx_el is declared, we need to move the function
out of line. Use the function in many more places in order to
select the correct control.

Backports commit aaec143212bb70ac9549cf73203d13100bd5c7c2 from qemu
2020-03-21 15:29:21 -04:00
Richard Henderson 6073542afc target/arm: Update arm_mmu_idx for VHE
Return the indexes for the EL2&0 regime when the appropriate bits
are set within HCR_EL2.

Backports commit 6003d9800ee38aa11eefb5cd64ae55abb64bef16 from qemu
2020-03-21 15:23:38 -04:00
Richard Henderson 3d583cd45f target/arm: Split out arm_mmu_idx_el
Backports commit 164690b29f9eaf69fe641859bc9f8954f12e691d from qemu
2020-03-21 15:22:01 -04:00
Richard Henderson f4397b0212 target/arm: Add regime_has_2_ranges
Create a predicate to indicate whether the regime has
both positive and negative addresses.

Backports commit 339370b90d067345b69585ddf4b668fa01f41d67 from qemu
2020-03-21 15:14:11 -04:00
Richard Henderson 0318d7af99 target/arm: Reorganize ARMMMUIdx
Prepare for, but do not yet implement, the EL2&0 regime.
This involves adding the new MMUIdx enumerators and adjusting
some of the MMUIdx related predicates to match.

Backports commit b9f6033c1a5fb7da55ed353794db8ec064f78bb2 from qemu.
2020-03-21 15:10:05 -04:00
Richard Henderson 85a7cfbdc6 target/arm: Tidy ARMMMUIdx m-profile definitions
Replace the magic numbers with the relevant ARM_MMU_IDX_M_* constants.
Keep the definitions short by referencing previous symbols.

Backports commit 25568316b2a7e73d68701042ba6ebdb217205e20 from qemu
2020-03-21 14:58:17 -04:00
Richard Henderson c223708063 target/arm: Rearrange ARMMMUIdxBit
Define via macro expansion, so that renumbering of the base ARMMMUIdx
symbols is automatically reflected in the bit definitions.

Backports commit 5f09a6dfbfbff4662f52cc3130a2e07044816497 from qemu
2020-03-21 14:56:46 -04:00
Richard Henderson 56504d255b target/arm: Expand TBFLAG_ANY.MMUIDX to 4 bits
We are about to expand the number of mmuidx to 10, and so need 4 bits.
For the benefit of reading the number out of -d exec, align it to the
penultimate nibble.

Backports commit 506f149815c2168f16ade17893e117419d93f248 from qemu
2020-03-21 14:54:55 -04:00
Richard Henderson be3c71fb8b target/arm: Recover 4 bits from TBFLAGs
We had completely run out of TBFLAG bits.
Split A- and M-profile bits into two overlapping buckets.
This results in 4 free bits.

We used to initialize all of the a32 and m32 fields in DisasContext
by assignment, in arm_tr_init_disas_context. Now we only initialize
either the a32 or m32 by assignment, because the bits overlap in
tbflags. So zero the entire structure in gen_intermediate_code.

Backports commit 79cabf1f473ca6e9fa0727f64ed9c2a84a36f0aa from qemu
2020-03-21 14:51:46 -04:00
Richard Henderson 153d7aadd5 target/arm: Rename ARMMMUIdx_S1E2 to ARMMMUIdx_E2
This is part of a reorganization to the set of mmu_idx.
The non-secure EL2 regime only has a single stage translation;
there is no point in pointing out that the idx is for stage1.

Backports commit e013b7411339342aac8d986c5d5e329e1baee8e1 from qemu
2020-03-21 14:42:23 -04:00
Richard Henderson f45ab0614e target/arm: Rename ARMMMUIdx*_S1E3 to ARMMMUIdx*_SE3
This is part of a reorganization to the set of mmu_idx.
The EL3 regime only has a single stage translation, and
is always secure.

Backports commit 127b2b086303296289099a6fb10bbc51077f1d53 from qemu
2020-03-21 14:38:44 -04:00
Richard Henderson 1a672fc3b1 target/arm: Rename ARMMMUIdx_S1SE[01] to ARMMMUIdx_SE10_[01]
This is part of a reorganization to the set of mmu_idx.
This emphasizes that they apply to the Secure EL1&0 regime.

Backports commit fba37aedecb82506c62a1f9e81d066b4fd04e443 from qemu
2020-03-21 14:35:28 -04:00
Richard Henderson 31837384b3 target/arm: Rename ARMMMUIdx_S1NSE* to ARMMMUIdx_Stage1_E*
This is part of a reorganization to the set of mmu_idx.
The EL1&0 regime is the only one that uses 2-stage translation.
Spelling out Stage avoids confusion with Secure.

Backports commit 2859d7b590760283a7b5aef40b723e9dfd7c98ba from qemu
2020-03-21 14:20:31 -04:00
Richard Henderson b62b4c4f35 target/arm: Rename ARMMMUIdx_S2NS to ARMMMUIdx_Stage2
The EL1&0 regime is the only one that uses 2-stage translation.

Backports commit 97fa9350017e647151dd1dc212f1bbca0294dba7 from qemu
2020-03-21 14:15:35 -04:00
Richard Henderson ec05f22e82 target/arm: Rename ARMMMUIdx*_S12NSE* to ARMMMUIdx*_E10_*
This is part of a reorganization to the set of mmu_idx.
This emphasizes that they apply to the EL1&0 regime.

The ultimate goal is

-- Non-secure regimes:
ARMMMUIdx_E10_0,
ARMMMUIdx_E20_0,
ARMMMUIdx_E10_1,
ARMMMUIdx_E2,
ARMMMUIdx_E20_2,

-- Secure regimes:
ARMMMUIdx_SE10_0,
ARMMMUIdx_SE10_1,
ARMMMUIdx_SE3,

-- Helper mmu_idx for non-secure EL1&0 stage1 and stage2
ARMMMUIdx_Stage2,
ARMMMUIdx_Stage1_E0,
ARMMMUIdx_Stage1_E1,

The 'S' prefix is reserved for "Secure". Unless otherwise specified,
each mmu_idx represents all stages of translation.

Backports commit 01b98b686460b3a0fb47125882e4f8d4268ac1b6 from qemu
2020-03-21 14:09:15 -04:00
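
The goal list above, restated as a C enum for readability; the member names come straight from the commit text, but the ordering and numeric encodings are not QEMU's actual values.

    typedef enum ARMMMUIdxSketch {
        /* Non-secure regimes */
        ARMMMUIdx_E10_0,
        ARMMMUIdx_E20_0,
        ARMMMUIdx_E10_1,
        ARMMMUIdx_E2,
        ARMMMUIdx_E20_2,
        /* Secure regimes */
        ARMMMUIdx_SE10_0,
        ARMMMUIdx_SE10_1,
        ARMMMUIdx_SE3,
        /* Helpers for non-secure EL1&0 stage1 and stage2 */
        ARMMMUIdx_Stage2,
        ARMMMUIdx_Stage1_E0,
        ARMMMUIdx_Stage1_E1,
    } ARMMMUIdxSketch;
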
Richard Henderson 270d557a99 target/arm: Split out alle1_tlbmask
No functional change, but unify code sequences.

Backports commit 90c19cdf1de440d7d9745cf255168999071b3a31 from qemu
2020-03-21 13:57:04 -04:00
Richard Henderson 6d4a7b84b5 target/arm: Split out vae1_tlbmask
No functional change, but unify code sequences.

Backports commit b7e0730de32d7079a1447ecbb5616d89de77b823 from qemu
2020-03-21 13:53:39 -04:00
Richard Henderson 4f4c385a8e target/arm: Update CNTVCT_EL0 for VHE
The virtual offset may be 0 depending on EL, E2H and TGE.

Backports commit 53d1f85608f83d645491eba6581d1f300dba2384 from qemu
2020-03-21 13:50:35 -04:00
Richard Henderson 215b4a9851 target/arm: Add TTBR1_EL2
At the same time, add writefn to TTBR0_EL2 and TCR_EL2.
A later patch will update any ASID therein.

Backports commit ed30da8eee6906032b38a84e4807e2142b09d8ec from qemu
2020-03-21 13:47:56 -04:00
Richard Henderson 35508d46c7 target/arm: Add CONTEXTIDR_EL2
Not all of the breakpoint types are supported, but those that
only examine contextidr are extended to support the new register.

Backports commit e2a1a4616c86159eb4c07659a02fff8bb25d3729 from qemu
2020-03-21 13:39:20 -04:00
Richard Henderson fe6825ca4d target/arm: Enable HCR_E2H for VHE
Backports commit 03c76131bc494366a4357a1d265c5eb5cc820754 from qemu
2020-03-21 13:35:00 -04:00
Richard Henderson 5455bd4037 target/arm: Define isar_feature_aa64_vh
Backports commit 8fc2ea21f75923b427eba261eb70f4a258f1b4e5 from qemu
2020-03-21 13:34:11 -04:00
Alex Bennée ced8834737 target/arm: fix TCG leak for fcvt half->double
When support for the AHP flag was added we inexplicably only freed the
new temps in one of the two legs. Move those tcg_temp_free to the same
level as the allocation to fix that leak.

Backports commit aeab8e5eb220cc5ff84b0b68b9afccc611bf0fcd from qemu
2020-03-21 13:14:47 -04:00
Yongbok Kim 7fbc373f59 target/mips: Add implementation of GINVT instruction
Implement emulation of the GINVT instruction. As QEMU doesn't support
caches and virtualization, this implementation covers only one
instruction (GINVT - Global Invalidate TLB) among all TLB-related
MIPS instructions.

Backports commit 99029be1c2875cd857614397674bbf563ddb6f91 from qemu
2020-03-21 13:01:35 -04:00
Yongbok Kim f10de71e73 target/mips: Amend CP0 WatchHi register implementation
WatchHi is extended by the field MemoryMapID with the GINVT instruction.
The field is accessible by MTHC0/MFHC0 in 32-bit architectures and DMTC0/
DMFC0 in 64-bit architectures.

Backports commit feafe82cc2289a31b3e3f11dc76f3539ea22d670 from qemu
2020-03-21 12:39:00 -04:00
Kashyap Chamarthy 8392450626 target/i386: Add the 'model-id' for Skylake -v3 CPU models
This fixes a confusion in the help output. (Although, if you squint
long enough at the '-cpu help' output, you _do_ notice that
"Skylake-Client-noTSX-IBRS" is an alias of "Skylake-Client-v3";
similarly for Skylake-Server-v3.)

Without this patch:

$ qemu-system-x86 -cpu help
...
x86 Skylake-Client-v1 Intel Core Processor (Skylake)
x86 Skylake-Client-v2 Intel Core Processor (Skylake, IBRS)
x86 Skylake-Client-v3 Intel Core Processor (Skylake, IBRS)
...
x86 Skylake-Server-v1 Intel Xeon Processor (Skylake)
x86 Skylake-Server-v2 Intel Xeon Processor (Skylake, IBRS)
x86 Skylake-Server-v3 Intel Xeon Processor (Skylake, IBRS)
...

With this patch:

$ ./qemu-system-x86 -cpu help
...
x86 Skylake-Client-v1 Intel Core Processor (Skylake)
x86 Skylake-Client-v2 Intel Core Processor (Skylake, IBRS)
x86 Skylake-Client-v3 Intel Core Processor (Skylake, IBRS, no TSX)
...
x86 Skylake-Server-v1 Intel Xeon Processor (Skylake)
x86 Skylake-Server-v2 Intel Xeon Processor (Skylake, IBRS)
x86 Skylake-Server-v3 Intel Xeon Processor (Skylake, IBRS, no TSX)

Backports commit 673b0add9ea7f432f34c1c99eaa7c567012fc838 from qemu
2020-03-21 12:27:24 -04:00
ShihPo Hung 7fffc5208c target/riscv: update mstatus.SD when FS is set dirty
Remove the check because the SD bit should summarize the FS and XS fields
unconditionally.

Backports commit 82f014671cf057de51c4a577c9e2ad637dcec6f9 from qemu
2020-03-21 12:22:56 -04:00
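
What "SD summarizes FS and XS" means, as a standalone sketch using the standard mstatus field positions (not the QEMU helper itself).

    #include <stdbool.h>
    #include <stdint.h>

    #define MSTATUS_FS  (3ull << 13)   /* floating-point unit state */
    #define MSTATUS_XS  (3ull << 15)   /* user-mode extension state */

    /* SD is the MSB of mstatus; it reads 1 whenever FS or XS is Dirty (0b11). */
    static bool mstatus_sd(uint64_t mstatus)
    {
        return ((mstatus & MSTATUS_FS) == MSTATUS_FS) ||
               ((mstatus & MSTATUS_XS) == MSTATUS_XS);
    }
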
ShihPo Hung 6bdd94bf26 target/riscv: fsd/fsw doesn't dirty FP state
Backports commit a59796eb6d59bbd74ce28ddbddb1b83e60674e96 from qemu
2020-03-21 12:20:52 -04:00
Yiting Wang 1c7f2083da riscv: Set xPIE to 1 after xRET
When executing an xRET instruction, supposing xPP holds the
value y, xIE is set to xPIE; the privilege mode is changed to y;
xPIE is set to 1. But QEMU sets xPIE to 0 incorrectly.

Backports commit a37f21c27d3e2342c2080aafd4cfe7e949612428 from qemu
2020-03-21 12:18:59 -04:00
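
The mret case of the rule above, sketched with the standard mstatus bit positions; the privilege-mode change from MPP is omitted, and this is not the QEMU helper itself.

    #include <stdint.h>

    #define MSTATUS_MIE   (1u << 3)
    #define MSTATUS_MPIE  (1u << 7)

    static uint32_t mret_update_mstatus(uint32_t mstatus)
    {
        /* MIE <- MPIE */
        if (mstatus & MSTATUS_MPIE) {
            mstatus |= MSTATUS_MIE;
        } else {
            mstatus &= ~MSTATUS_MIE;
        }
        /* MPIE <- 1: the bit QEMU previously set to 0 by mistake */
        mstatus |= MSTATUS_MPIE;
        return mstatus;
    }
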
Vincent Dehors 6127f08028 target/arm: Fix PAuth sbox functions
In the PAC computation, the sbox was applied over the wrong bits.
As this is a 4-bit sbox, the bit index should be incremented by 4 instead of 16.

Test vector from QARMA paper (https://eprint.iacr.org/2016/444.pdf) was
used to verify one computation of the pauth_computepac() function which
uses sbox2.

Launchpad: https://bugs.launchpad.net/bugs/1859713

Backports commit de0b1bae6461f67243282555475f88b2384a1eb9 from qemu
2020-03-21 12:17:26 -04:00
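
An illustration of the indexing fix: applying a 4-bit sbox nibble by nibble across a 64-bit value. This is a standalone sketch, not the actual pauth_computepac() code.

    #include <stdint.h>

    static uint64_t apply_sbox4(uint64_t x, const uint8_t sbox[16])
    {
        uint64_t out = 0;
        /* Step by 4 bits per nibble; the bug stepped by 16, so only every
         * fourth nibble was substituted. */
        for (int i = 0; i < 64; i += 4) {
            out |= (uint64_t)sbox[(x >> i) & 0xf] << i;
        }
        return out;
    }
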
Clement Deschamps 1d97d223c3 target/arm: add PMU feature to cortex-r5 and cortex-r5f
The PMU is not optional on cortex-r5 and cortex-r5f (see
the "Features" chapter of the Technical Reference Manual).

Backports commit 90f671581ac601fcc1b840d9e9abe7e3c3e672db from qemu
2020-03-21 12:16:11 -04:00
Laurent Vivier 066f619b02 m68k: Fix regression causing Single-Step via GDB/RSP to not single step
A regression introduced with the refactor to TranslatorOps dropped
two lines that update the PC when single-stepping is performed.

Fixes: 11ab74b01e0a ("target/m68k: Convert to TranslatorOps")

Backports commit 322f244aaa80a5208090d41481c1c09c6face66b from qemu
2020-03-21 12:15:08 -04:00
Richard Henderson dc9733e555 target/arm: Set ISSIs16Bit in make_issinfo
During the conversion to decodetree, the setting of
ISSIs16Bit got lost. This causes the guest OS to
incorrectly adjust trapping memory operations.

Backports commit 1a1fbc6cbb34c26d43d8360c66c1d21681af14a9 from qemu
2020-03-21 12:09:05 -04:00
Jeff Kubascik c9aadd696f target/arm: Return correct IL bit in merge_syn_data_abort
The IL bit is set for 32-bit instructions, thus passing false
with the is_16bit parameter to syn_data_abort_with_iss() makes
a syn mask that always has the IL bit set.

Pass is_16bit as true to make the initial syn mask have IL=0,
so that the final IL value comes from or'ing template_syn.

Cc: qemu-stable@nongnu.org
Fixes: aaa1f954d4ca ("target-arm: A64: Create Instruction Syndromes for Data Aborts")

Backports commit 30d544839e278dc76017b9a42990c41e84a34377 from qemu
2020-03-21 12:08:05 -04:00
Jeff Kubascik 95e39f60be target/arm: adjust program counter for wfi exception in AArch32
The wfi instruction can be configured to be trapped by a higher exception
level, such as the EL2 hypervisor. When the instruction is trapped, the
program counter should contain the address of the wfi instruction that
caused the exception. The program counter is adjusted for this in the wfi op
helper function.

However, this correction is done to env->pc, which only applies to AArch64
mode. For AArch32, the program counter is stored in env->regs[15]. This
adds an if-else statement to modify the correct program counter location
based on the current CPU mode.

Backports commit 855532912b0e1bf803ae393e5b0c7e80948cd6a4 from qemu
2020-03-21 12:07:11 -04:00
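
A simplified sketch of the fix, with a stand-in for the parts of the CPU state the helper touches (QEMU's real CPUARMState and helper differ).

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        bool aarch64;         /* true when running in AArch64 state */
        uint64_t pc;          /* AArch64 program counter */
        uint32_t regs[16];    /* AArch32 registers; regs[15] is the PC */
    } CPUStateSketch;

    /* Rewind the PC to the trapped wfi instruction before taking the
     * exception, using the location appropriate for the current mode. */
    static void rewind_to_wfi(CPUStateSketch *env, unsigned insn_len)
    {
        if (env->aarch64) {
            env->pc -= insn_len;
        } else {
            env->regs[15] -= insn_len;
        }
    }
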
Richard Henderson fb1988190e target/arm: Fix sign-extension for SMLAL*
The 32-bit product should be sign-extended, not zero-extended.

Fixes: ea96b37

Backports commit 1ab170865202aab8301131f31bffd87ea0f60d16 from qemu
2020-03-21 11:34:43 -04:00
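
The distinction in miniature: the 32-bit operands must be sign-extended before the widening multiply (standalone sketch, not the translator code).

    #include <stdint.h>

    static int64_t smlal_product(int32_t rn, int32_t rm)
    {
        /* Correct: sign-extend each operand to 64 bits.
         * Buggy:   (uint64_t)(uint32_t)rn * (uint32_t)rm zero-extends and
         *          produces the wrong result for negative operands. */
        return (int64_t)rn * (int64_t)rm;
    }
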
Charles Ferguson 0d0d054382 Add implementation of access to the ARM SPSR register. (#1178)
The SPSR register is named within the Unicorn headers, but the code
to access it is absent. This means that it will always read as 0 and
ignore writes. This makes it harder to work with changes in processor
mode, as the usual way to return from a CPU exception is a
`MOVS pc, lr` for undefined instructions or `SUBS pc, lr, #4`
for most other aborts - which implicitly restores the CPSR from SPSR.

This change adds the access to the SPSR so that it can be read and
written as the caller might expect.

Backports commit 99097cab4c39fb3fc50eea8f0006954f62a149b2 from unicorn.
2020-01-14 09:57:55 -05:00
Charles Ferguson 784d580f01 Ensure that PC is not fixed up when code tracing or timing. (#1179)
Under some circumstances, the PC is not fixed up properly when
returning from the execution of a block in cpu_tb_exec. This appears
to be caused by the resetting of the PC from the tb.

This change removes the additional fixup in the cases where there
is code tracing or timing active. Either of these cases would result
in the wrong PC being reported.

Closes unicorn-engine#1105.

Backports commit b59632fb645d456338472e3d757c065c0ed74ad5 from unicorn
2020-01-14 09:52:25 -05:00
meta 55a3c5a4a5 Expose different 32-bit ARM CPU models to users via UC_MODE flags (#1165)
Backports commit ba745521991429b76b93180dca70c294c6b343cf from unicorn.
2020-01-14 09:37:21 -05:00
w1tcher b1f5794ab4 Fix the error in the hook_code of the arm
Calling emu_stop caused the pc value to be incorrect after the end of the run. (#1157)

Backports commit 83887b8193dfeca3e5e8da851b41f874bcd0514e from unicorn.
2020-01-14 09:29:37 -05:00
Chen Huitao 644ea0c88c fix a mem-leak (#1147)
* fix a mem-leak.

* check the uc and l1_map before using them.

* fix multi-level free bug.

* Add pointer check.

Backports commit 79d89e5d3b83c6ee5d523738bc488d1e44b06f6a from unicorn.
2020-01-14 09:24:44 -05:00
Azertinv a22641c4be Added an invalid instruction hook (#1132)
* first draft for an invalid instruction hook

* Fixed documentation on return value of invalid insn hook

Backports commit 07f94ad1fc62293cac330df9714d739be6354926 from unicorn
2020-01-14 09:15:54 -05:00
Chen Huitao 00ffa5c930 Remove warnings (#1140)
* remove warnings on windows with vs2019.

* remove warnings.

Backports commit ca6516ff790f2c6b2bc59a6b7472cb25be0f82b8 from unicorn.
2020-01-14 09:05:43 -05:00
Chen Huitao 221333ceaf check arguments, return error instead of raising exceptions. (#1125)
* check arguments, return error instead of raising exceptions. close #1117.

* remove empty lines. remove the underscore prefix in function name.

Backports commit 23a426625f1469bd2052eab7d014deb6b9820bf2 from unicorn.
2020-01-14 09:00:11 -05:00
Pan Nengyuan 134a026e6b arm/translate-a64: fix uninitialized variable warning
Fixes:
target/arm/translate-a64.c: In function 'disas_crypto_three_reg_sha512':
target/arm/translate-a64.c:13625:9: error: 'genfn' may be used uninitialized in this function [-Werror=maybe-uninitialized]
genfn(tcg_rd_ptr, tcg_rn_ptr, tcg_rm_ptr);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
qemu/target/arm/translate-a64.c:13609:8: error: 'feature' may be used uninitialized in this function [-Werror=maybe-uninitialized]
if (!feature) {

Backports commit c7a5e7910517e2711215a9e869a733ffde696091 from qemu
2020-01-14 08:46:42 -05:00
Xiaoyao Li 31ab6fbd2c target/i386: Add missed features to Cooperlake CPU model
It lacks VMX features and two security feature bits (disclosed recently) in
MSR_IA32_ARCH_CAPABILITIES in current Cooperlake CPU model, so add them.

Fixes: 22a866b6166d ("i386: Add new CPU model Cooperlake")

Backports commit 2dea9d9ca4ea7e9afe83d0b4153b21a16987e866 from qemu
2020-01-14 08:43:26 -05:00
Xiaoyao Li 5e0b249dc0 target/i386: Add new bit definitions of MSR_IA32_ARCH_CAPABILITIES
Bits 6, 7 and 8 of MSR_IA32_ARCH_CAPABILITIES were recently disclosed
for some security issues. Add the definitions for them to be used by named
CPU models.

Backports commit 6c997b4adb300788d61d72e2b8bc67c03a584956 from qemu
2020-01-14 08:30:17 -05:00
Alex Bennée 8f275077b0 target/arm: only update pc after semihosting completes
Before we introduce blocking semihosting calls we need to ensure we
can restart the system on a semihosting exception. To be able to do
this, the EXCP_SEMIHOST operation should be idempotent until it finally
completes. Practically this means ensuring we only update the pc
after the semihosting call has completed.

Backports commit 4ff5ef9e911c670ca10cdd36dd27c5395ec2c753 from qemu
2020-01-14 08:28:25 -05:00
Alex Bennée ea2714796f target/arm: remove unused EXCP_SEMIHOST leg
All semihosting exceptions are dealt with earlier in the common code
so we should never get here.

Backports commit b906acbb3aceed5b1eca30d9d365d5bd7431400b from qemu
2020-01-14 08:18:51 -05:00
Laurent Vivier fe60494b77 target/m68k: only change valid bits in CACR
This is used by NetBSD (and the MacOS ROM) to detect the MMU type.

Backports commit 18b6102e51bb317d25ee61b49b7b56702b79560c from qemu
2020-01-14 08:17:14 -05:00
Eduardo Habkost 9a6fb2bad6 configure: Require Python >= 3.5
Python 3.5 is the oldest Python version available on our
supported build platforms, and Python 2 end of life will be 3
weeks after the planned release date of QEMU 4.2.0. Drop Python
2 support from configure completely, and require Python 3.5 or
newer.

Backports commit ddf90699631db53c981b6a5a63d31c08e0eaeec7 from qemu
2020-01-14 08:09:23 -05:00
Markus Armbruster 2814d68506 util/cutils: Turn FIXME comment into QEMU_BUILD_BUG_ON()
qemu_strtoi64() assumes int64_t is long long. This is marked FIXME.
Replace by a QEMU_BUILD_BUG_ON() to avoid surprises.

Same for qemu_strtou64().

Fix a typo in qemu_strtoul()'s contract while there.

Backports commit 369276ebf3cbba419653a19a01b790f3bcf3aea7 from qemu
2020-01-14 08:04:30 -05:00
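
The same guarantee can be expressed with a C11 static assertion; QEMU's QEMU_BUILD_BUG_ON macro plays the equivalent role in the actual patch.

    #include <stdint.h>

    _Static_assert(sizeof(int64_t) == sizeof(long long),
                   "qemu_strtoi64() relies on int64_t being long long");
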
Cathy Zhang 2952ab497f i386: Add new CPU model Cooperlake
Cooper Lake is Intel's successor to Cascade Lake. The new
CPU model inherits features from Cascadelake-Server, while
adding one new platform-associated feature: AVX512_BF16. Meanwhile,
add STIBP for speculative execution.

Backports commit 22a866b6166db5caa4abaa6e656c2a431fa60726 from qemu
2020-01-14 08:00:22 -05:00
Cathy Zhang 67b6034a0f i386: Add macro for stibp
The stibp feature is already added through the following commit:
0e89165829

Add a macro for it to allow CPU models to report it when the host supports it.

Backports commit 5af514d0cb314f43bc53f2aefb437f6451d64d0c from qemu
2020-01-14 08:00:22 -05:00
Lioncash 067a459774 i386: Backport some formatting changes 2020-01-14 08:00:18 -05:00
Cathy Zhang 8c0ed30d38 i386: Add MSR feature bit for MDS-NO
Define MSR_ARCH_CAP_MDS_NO in the IA32_ARCH_CAPABILITIES MSR to allow
CPU models to report the feature when the host supports it.

Backports commit 77b168d221191156c47fcd8d1c47329dfdb9439e from qemu
2020-01-14 07:56:43 -05:00
Alex Bennée 639c5c4fe2 target/arm: ensure we use current exception state after SCR update
A write to the SCR can change the effective EL by droppping the system
from secure to non-secure mode. However if we use a cached current_el
from before the change we'll rebuild the flags incorrectly. To fix
this we introduce the ARM_CP_NEWEL CP flag to indicate the new EL
should be used when recomputing the flags.

Backports partof commit f80741d107673f162e3b097fc76a1590036cc9d1 from
qemu
2020-01-14 07:51:10 -05:00
Beata Michalska 81c14bb595 target/arm: Add support for DC CVAP & DC CVADP ins
ARMv8.2 introduced support for Data Cache Clean instructions
to PoP (point-of-persistence) - DC CVAP and PoDP (point-of-deep-persistence)
- DC CVADP. Both specify conceptual points in a memory system where all writes
that are to reach them are considered persistent.
The support provided considers both to be actually the same so there is no
distinction between the two. If none is available (there is no backing store
for the given memory) both will result in Data Cache Clean up to the point of
coherency. Otherwise sync for the specified range shall be performed.

Backports commit 0d57b49992200a926c4436eead97ecfc8cc710be from qemu
2020-01-14 07:47:48 -05:00
Beata Michalska 0716794d86 Memory: Enable writeback for given memory region
Add an option to trigger memory writeback to sync given memory region
with the corresponding backing store, in case one is available.
This extends the support for persistent memory, allowing syncing on-demand.

Backports commit 61c490e25e081af39ff40556f6c1229b8b011585 from qemu
2020-01-14 07:44:24 -05:00
Beata Michalska 47776dc862 tcg: cputlb: Add probe_read
Add probe_read alongside the write probing equivalent.

Backports commit 9e70492b4389d4355ae9c9ee2ba6286cfdadc257 from qemu
2020-01-14 07:16:41 -05:00
David Hildenbrand de513617c8 accel/tcg: allow to invalidate a write TLB entry immediately
Background: s390x implements Low-Address Protection (LAP). If LAP is
enabled, writing to effective addresses (before any translation)
0-511 and 4096-4607 triggers a protection exception.

So we have subpage protection on the first two pages of every address
space (where the lowcore - the CPU private data - resides).

By immediately invalidating the write entry but allowing the caller to
continue, we force every write access onto these first two pages into
the slow path. we will get a tlb fault with the specific accessed
addresses and can then evaluate if protection applies or not.

We have to make sure to ignore the invalid bit if tlb_fill() succeeds.

Backports commit f52bfb12143e29d7c8bd827bdb751aee47a9694e from qemu
2020-01-14 07:14:10 -05:00
David Hildenbrand d9d91c1db6 tcg: Factor out probe_write() logic into probe_access()
Let's also allow to probe other access types.

Backports commit c25c283df0f08582df29f1d5d7be1516b851532d from qemu
2020-01-14 07:07:54 -05:00
David Hildenbrand 53c3c47efa tcg: Make probe_write() return a pointer to the host page
... similar to tlb_vaddr_to_host(); however, allow access to the host
page except when TLB_NOTDIRTY or TLB_MMIO is set.

Backports commit fef39ccd567032d3ad520ed80f3576068e6eb2e3 from qemu
2020-01-14 07:04:17 -05:00
David Hildenbrand 2bc3843fe3 tcg: Enforce single page access in probe_write()
Let's enforce the interface restriction.

Backports commit ca86cf328ce216bb304bbf09a43614613f945d86 from qemu
2020-01-14 07:02:15 -05:00
David Hildenbrand b732ad9eba tcg: Check for watchpoints in probe_write()
Let size > 0 indicate a promise to write to those bytes.
Check for write watchpoints in the probed range.

Backports commit 03a981893c99faba84bb373976796ad7dce0aecc from qemu
2020-01-14 07:01:05 -05:00
Richard Henderson 07f30382c0 cputlb: Handle watchpoints via TLB_WATCHPOINT
The raising of exceptions from check_watchpoint, buried inside
of the I/O subsystem, is fundamentally broken. We do not have
the helper return address with which we can unwind guest state.

Replace PHYS_SECTION_WATCH and io_mem_watch with TLB_WATCHPOINT.
Move the call to cpu_check_watchpoint into the cputlb helpers
where we do have the helper return address.

This allows watchpoints on RAM to bypass the full i/o access path.

Backports commit 50b107c5d617eaf93301cef20221312e7a986701 from qemu
2020-01-14 06:58:33 -05:00
Richard Henderson 6c4a3fd06f cputlb: Fold TLB_RECHECK into TLB_INVALID_MASK
We had two different mechanisms to force a recheck of the tlb.

Before TLB_RECHECK was introduced, we had a PAGE_WRITE_INV bit
that would immediate set TLB_INVALID_MASK, which automatically
means that a second check of the tlb entry fails.

We can use the same mechanism to handle small pages.
Conserve TLB_* bits by removing TLB_RECHECK.

Backports commit 30d7e098d5c38644359820317fcf72e3e129ec53 from qemu
2020-01-14 06:20:33 -05:00
David Hildenbrand f7b61b95f0 tcg: Factor out CONFIG_USER_ONLY probe_write() from s390x code
Factor it out into common code. Similar to the !CONFIG_USER_ONLY variant,
let's not allow crossing page boundaries.

Backports commit 59e96ac6cb13951dd09afc70622858089abf3384 from qemu
2020-01-12 10:27:49 -05:00
Richard Henderson bb313206e5 cputlb: Remove double-alignment in store_helper
We have already aligned page2 to the start of the next page.
There is no reason to do that a second time.

Backports commit 5787585d0406cfd54dda0c71ea1a603347ce6e71 from qemu
2020-01-12 10:25:13 -05:00
Richard Henderson 6990b212e3 cputlb: Fix size operand for tlb_fill on unaligned store
We are currently passing the size of the full write to
the tlb_fill for the second page. Instead pass the real
size of the write to that page.

This argument is unused within all tlb_fill, except to be
logged via tracing, so in practice this makes no difference.

But in a moment we'll need the value of size2 for watchpoints,
and if we've computed the value we might as well use it.

Backports commit 8f7cd2ad4acd01242d00807e231097b3de9f0930 from qemu
2020-01-12 06:17:09 -05:00
Tony Nguyen 15eb165995 target/sparc: sun4u Invert Endian TTE bit
This bit configures endianness of PCI MMIO devices. It is used by
Solaris and OpenBSD sunhme drivers.

Tested working on OpenBSD.

Unfortunately Solaris 10 had an unrelated keyboard issue blocking
testing... another inch towards Solaris 10 on SPARC64 =)

Backports commit ccdb4c5535f41ee4da2ef158f58fca0327e50dab from qemu
2020-01-07 19:21:30 -05:00
Tony Nguyen 7eea07fe55 target/sparc: Add TLB entry with attributes
Append MemTxAttrs to interfaces so we can pass along the upcoming Invert
Endian TTE bit on SPARC64.

Backports commit 9bed46e67e2ee54bc596ba58063ee71a5ca40923 from qemu
2020-01-07 19:19:30 -05:00
Tony Nguyen a95927de1d cputlb: Byte swap memory transaction attribute
Notice the new attribute, byte swap, and force the transaction through the
memory slow path.

Required by architectures that can invert endianness of memory
transaction, e.g. SPARC64 has the Invert Endian TTE bit.

Backports commit a26fc6f5152b47f1d7ed928f9c9d462d01ff1624 from qemu
2020-01-07 19:15:33 -05:00
Tony Nguyen 103d6f51c8 memory: Single byte swap along the I/O path
Now that MemOp has been pushed down into the memory API, and
callers are encoding endianness, we can collapse byte swaps
along the I/O path into the accelerator and target independent
adjust_endianness.

Collapsing byte swaps along the I/O path enables additional endian
inversion logic, e.g. SPARC64 Invert Endian TTE bit, with redundant
byte swaps cancelling out.

Backports commit 9bf825bf3df4ebae3af51566c8088e3f1249a910 from qemu
2020-01-07 19:12:04 -05:00
Tony Nguyen ad8957a4c3 cputlb: Replace size and endian operands for MemOp
Preparation for collapsing the two byte swaps adjust_endianness and
handle_bswap into the former.

Backports commit be5c4787e9a6eed12fd765d9e890f7cc6cd63220 from qemu
2020-01-07 19:03:51 -05:00
Tony Nguyen da98d0da4e memory: Access MemoryRegion with endianness
Preparation for collapsing the two byte swaps adjust_endianness and
handle_bswap into the former.

Call memory_region_dispatch_{read|write} with endianness encoded into
the "MemOp op" operand.

This patch does not change any behaviour as
memory_region_dispatch_{read|write} is yet to handle the endianness.

Once it does handle endianness, callers with byte swaps can collapse
them into adjust_endianness.

Backports commit d5d680cacc66ef7e3c02c81dc8f3a34eabce6dfe from qemu
2020-01-07 18:54:11 -05:00
Tony Nguyen b335c4756a exec: Hard code size with MO_{8|16|32|64}
The temporarily no-op size_memop was introduced to aid the conversion of
the memory_region_dispatch_{read|write} operand "unsigned size" into
"MemOp op".

Now that size_memop is implemented, hard code the size again, but with
MO_{8|16|32|64}. This is more expressive and avoids size_memop calls.

Backports commit 07f0834f264a79d6225202bd35ca37f74afb8df1 from qemu
2020-01-07 18:33:15 -05:00
Tony Nguyen cb5688009e target/mips: Hard code size with MO_{8|16|32|64}
The temporarily no-op size_memop was introduced to aid the conversion of
the memory_region_dispatch_{read|write} operand "unsigned size" into
"MemOp op".

Now that size_memop is implemented, hard code the size again, but with
MO_{8|16|32|64}. This is more expressive and avoids size_memop calls.

Backports commit 4574664677116dedb29b12150137f3888374a857 from qemu
2020-01-07 18:30:39 -05:00
Tony Nguyen 435d2e5c67 memory: Access MemoryRegion with MemOp
Convert memory_region_dispatch_{read|write} operand "unsigned size"
into a "MemOp op".

Backports commit e67c904668d82ca4416cd91d37d9f5abcceef747 from qemu
2020-01-07 18:29:27 -05:00
Tony Nguyen 3b777a2332 cputlb: Access MemoryRegion with MemOp
The memory_region_dispatch_{read|write} operand "unsigned size" is
being converted into a "MemOp op".

Convert interfaces by using no-op size_memop.

After all interfaces are converted, size_memop will be implemented
and the memory_region_dispatch_{read|write} operand "unsigned size"
will be converted into a "MemOp op".

As size_memop is a no-op, this patch does not change any behaviour.

Backports commit 4cbb198eefef41bbca703605c78875fd4fec6ef6 from qemu
2020-01-07 18:26:29 -05:00
Tony Nguyen ab64c53bd0 exec: Access MemoryRegion with MemOp
The memory_region_dispatch_{read|write} operand "unsigned size" is
being converted into a "MemOp op".

Convert interfaces by using no-op size_memop.

After all interfaces are converted, size_memop will be implemented
and the memory_region_dispatch_{read|write} operand "unsigned size"
will be converted into a "MemOp op".

As size_memop is a no-op, this patch does not change any behaviour.

Backports commit 3d9e7c3e7bf11962e1100d077e46f93f780b7310 from qemu
2020-01-07 18:25:19 -05:00
Tony Nguyen 7e9a1641c2 target/mips: Access MemoryRegion with MemOp
The memory_region_dispatch_{read|write} operand "unsigned size" is
being converted into a "MemOp op".

Convert interfaces by using no-op size_memop.

After all interfaces are converted, size_memop will be implemented
and the memory_region_dispatch_{read|write} operand "unsigned size"
will be converted into a "MemOp op".

As size_memop is a no-op, this patch does not change any behaviour.

Backports commit e501824b3f3b3650e7cb8a509064cac01bc27c82 from qemu
2020-01-07 18:21:31 -05:00
Tony Nguyen dd78f65bc6 memory: Introduce size_memop
Introduce no-op size_memop to aid preparatory conversion of
interfaces.

Once interfaces are converted, size_memop will be implemented to
return a MemOp from size in bytes.

Backports commit 66b9b24375ac215cdcbdf9e14d665395360abff4 from qemu
2020-01-07 18:19:35 -05:00
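Roughly, the two stages look like this sketch (not the exact backported code): the interim version simply forwards the byte count, and the final version maps 1/2/4/8 bytes onto MO_8/MO_16/MO_32/MO_64:

    /* interim no-op: keep passing the size in bytes unchanged */
    static inline MemOp size_memop(unsigned size)
    {
        return size;
    }

    /* final version: the size in bytes becomes a log2-encoded MemOp */
    static inline MemOp size_memop(unsigned size)
    {
        return ctz32(size);
    }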
Niek Linnenbank 998714db1f arm/arm-powerctl: set NSACR.{CP11, CP10} bits in arm_set_cpu_on()
This change ensures that the FPU can be accessed in Non-Secure mode
when the CPU core is reset using the arm_set_cpu_on() function call.
The NSACR.{CP11,CP10} bits define the exception level required to
access the FPU in Non-Secure mode. Without these bits set, the CPU
will give an undefined exception trap on the first FPU access for the
secondary cores under Linux.

This is necessary because in this power-control codepath QEMU
is effectively emulating a bit of EL3 firmware, and has to set
the CPU up as the EL3 firmware would.

Fixes: fc1120a7f5

Backports commit 0c7f8c43daf6556078e51de98aa13f069e505985 from qemu
2020-01-07 18:10:29 -05:00
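A sketch of the relevant reset setup in arm_set_cpu_on() with the new NSACR line included, assuming QEMU's CPUARMState field names; the surrounding power-on logic is omitted:

    if (arm_feature(&target_cpu->env, ARM_FEATURE_EL3)) {
        /* the secondary core starts out in Non-secure state... */
        target_cpu->env.cp15.scr_el3 |= SCR_NS;
        /* ...so NSACR.{CP11,CP10} must allow Non-secure FPU access */
        target_cpu->env.cp15.nsacr |= 3 << 10;
    }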
Marc Zyngier 9c3e512479 target/arm: Add support for missing Jazelle system registers
QEMU lacks the minimum Jazelle implementation that is required
by the architecture (everything is RAZ or RAZ/WI). Add it
together with the HCR_EL2.TID0 trapping that goes with it.

Backports commit f96f3d5f09973ef40f164cf2d5fd98ce5498b82a from qemu
2020-01-07 18:09:13 -05:00
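A sketch of what a minimal RAZ/WI Jazelle register looks like in QEMU's ARMCPRegInfo terms; the encoding shown is the architected cp14 JIDR one, and access_jazelle stands in for the HCR_EL2.TID0 check added alongside it (names assumed, not copied from the backport):

    static const ARMCPRegInfo jazelle_regs[] = {
        { .name = "JIDR",
          .cp = 14, .crn = 0, .crm = 0, .opc1 = 7, .opc2 = 0,
          .access = PL1_R, .accessfn = access_jazelle, /* traps to EL2 on TID0 */
          .type = ARM_CP_CONST, .resetvalue = 0 },
        /* JOSCR and JMCR follow the same RAZ/WI pattern */
        REGINFO_SENTINEL
    };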
Marc Zyngier 457934855b target/arm: Handle AArch32 CP15 trapping via HSTR_EL2
HSTR_EL2 offers a way to trap ranges of CP15 system register
accesses to EL2, and it looks like this register is completely
ignored by QEMU.

To avoid adding extra .accessfn filters all over the place (which
would have a direct performance impact), let's add a new TB flag
that gets set whenever HSTR_EL2 is non-zero and QEMU is translating
a context where this trap has a chance to apply, and only generate
the extra access check if the hypervisor is actively using this feature.

Tested with a hand-crafted KVM guest accessing CBAR.

Backports commit 5bb0a20b74ad17dee5dae38e3b8b70b383ee7c2d from qemu
2020-01-07 18:07:21 -05:00
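Conceptually, the check that the new TB flag gates looks something like this sketch (helper name and placement are illustrative; HSTR_EL2 carries one trap-enable bit per CP15 CRn value):

    static void check_hstr_el2_trap(CPUARMState *env, unsigned crn,
                                    uint32_t syndrome)
    {
        if (arm_current_el(env) <= 1 && !arm_is_secure(env)
            && extract32(env->cp15.hstr_el2, crn, 1)) {
            /* route the CP15 access to EL2 */
            raise_exception(env, EXCP_UDEF, syndrome, 2);
        }
    }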
Marc Zyngier 868de52f69 target/arm: Handle trapping to EL2 of AArch32 VMRS instructions
HCR_EL2.TID3 requires that AArch32 reads of MVFR[012] are trapped to
EL2, and HCR_EL2.TID0 does the same for reads of FPSID.
In order to handle this, introduce a new TCG helper function that
checks for these control bits before executing the VMRS instruction.

Tested with a hacked-up version of KVM/arm64 that sets the control
bits for 32bit guests.

Backports commit 9ca1d776cb49c09b09579d9edd0447542970c834 from qemu
2020-01-07 18:04:16 -05:00
Marc Zyngier 51062d3fc2 target/arm: Honor HCR_EL2.TID1 trapping requirements
HCR_EL2.TID1 mandates that accesses from EL1 to REVIDR_EL1, AIDR_EL1
(and their 32bit equivalents) as well as TCMTR and TLBTR are trapped
to EL2. QEMU ignores it, making it harder for a hypervisor to
virtualize the HW (though to be fair, no known hypervisor actually
cares).

Do the right thing by trapping to EL2 if HCR_EL2.TID1 is set.

Backports commit 93fbc983b29a2eb84e2f6065929caf14f99c3681 from qemu
2020-01-07 18:00:01 -05:00
Marc Zyngier d1e981c44b target/arm: Honor HCR_EL2.TID2 trapping requirements
HCR_EL2.TID2 mandates that accesses from EL1 to CTR_EL0, CCSIDR_EL1,
CCSIDR2_EL1, CLIDR_EL1 and CSSELR_EL1 are trapped to EL2, and QEMU
completely ignores it, making it impossible for hypervisors to
virtualize the cache hierarchy.

Do the right thing by trapping to EL2 if HCR_EL2.TID2 is set.

Backports commit 630fcd4d2ba37050329e0adafdc552d656ebe2f3 from qemu
2020-01-07 17:55:40 -05:00
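The trap itself follows QEMU's usual CPAccessResult accessfn pattern; a sketch (the function name here is assumed):

    static CPAccessResult access_tid2(CPUARMState *env, const ARMCPRegInfo *ri,
                                      bool isread)
    {
        /* trap EL1 accesses to the cache ID registers when HCR_EL2.TID2 is set */
        if (arm_current_el(env) == 1 && (arm_hcr_el2_eff(env) & HCR_TID2)) {
            return CP_ACCESS_TRAP_EL2;
        }
        return CP_ACCESS_OK;
    }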
Christophe Lyon 1df67780cd target/arm: Add support for cortex-m7 CPU
This is derived from the cortex-m4 description, adding DP support and
FPv5 instructions with the corresponding flags in isar and mvfr2.

Checked that it could successfully execute
vrinta.f32 s15, s15
while cortex-m4 emulation rejects it with "illegal instruction".

Backports commit cf7beda5072e106ddce875c1996446540c5fe239 from qemu
2020-01-07 17:52:27 -05:00
Peter Maydell 4fdf05f89e Open 5.0 development tree
Backports commit ba9975025ecc85cc2a137636e667dd22a7ae3848 from qemu
2020-01-07 17:50:51 -05:00
Peter Maydell 980b9657f5 Update version for v4.2.0 release
Backports commit b0ca999a43a22b38158a222233d3f5881648bb4f from qemu
2020-01-07 17:50:25 -05:00
Peter Maydell 8002c5cb46 Update version for v4.2.0-rc5 release
Backports commit 52901abf94477b400cf88c1f70bb305e690ba2de from qemu
2020-01-07 17:49:49 -05:00
Peter Maydell 96a92c3be3 Update version for v4.2.0-rc4 release
Backports commit 1bdc319ab5d289ce6b822e06fb2b13666fd9278e from qemu
2020-01-07 17:49:16 -05:00
Peter Maydell 7f4ea3b98f
Update version for v4.2.0-rc3 release
Backports commit 1a61a081ac33ae6cb7dd2e38d119a572f416c7f7 from qemu
2019-11-28 03:47:54 -05:00
Marc Zyngier 145d58c367
target/arm: Honor HCR_EL2.TID3 trapping requirements
HCR_EL2.TID3 mandates that access from EL1 to a long list of id
registers traps to EL2, and QEMU has so far ignored this requirement.

This breaks (among other things) KVM guests that have PtrAuth enabled,
while the hypervisor doesn't want to expose the feature to its guest.
To achieve this, KVM traps the ID registers (ID_AA64ISAR1_EL1 in this
case), and masks out the unsupported feature.

QEMU not honoring the trap request means that the guest observes
that the feature is present in the HW, starts using it, and dies
a horrible death when KVM injects an UNDEF, because the feature
*really* isn't supported.

Do the right thing by trapping to EL2 if HCR_EL2.TID3 is set.

Note that this change does not include trapping of the MVFR
registers from AArch32 (they are accessed via the VMRS
instruction and need to be handled in a different way).

Backports commit 6a4ef4e5d1084ce41fafa7d470a644b0fd3d9317 from qemu
2019-11-28 03:46:32 -05:00
Marc Zyngier 2e8c8b5a7c
target/arm: Fix ISR_EL1 tracking when executing at EL2
The ARMv8 ARM states that when executing at EL2, EL3 or Secure EL1,
ISR_EL1 shows the pending status of the physical IRQ, FIQ, or
SError interrupts.

Unfortunately, QEMU's implementation only considers the HCR_EL2
bits, and ignores the current exception level. This means a hypervisor
trying to look at its own interrupt state actually sees the guest
state, which is unexpected and breaks KVM as of Linux 5.3.

Instead, check for the running EL and return the physical bits
if not running in a virtualized context.

Backports commit 7cf95aed53c8770a338617ef40d5f37d2c197853 from qemu
2019-11-28 03:41:38 -05:00
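The heart of the fix can be sketched as a predicate on the current state (illustrative helper, assuming QEMU's arm_* query functions; not the backported code):

    static bool isr_sees_virtual_irq(CPUARMState *env)
    {
        /* only Non-secure EL1 gets the virtualized view of ISR_EL1.I;
           EL2, EL3 and Secure EL1 must see the physical IRQ state */
        bool allow_virt = (arm_current_el(env) == 1 &&
                           !arm_is_secure_below_el3(env));
        return allow_virt && (arm_hcr_el2_eff(env) & HCR_IMO);
    }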
Jean-Hugues Deschênes a2194585bb
target/arm: Fix handling of cortex-m FTYPE flag in EXCRET
According to the PushStack() pseudocode in the Armv7-M ARM,
bit 4 of the LR should be set to NOT(CONTROL.FPCA) when
an FPU is present. The current implementation does this for
Armv8-M, but not for Armv7-M. This patch makes the existing
logic applicable to both code paths.

Backports commit f900b1e5b087a02199fbb6de7038828008e9e419 from qemu
2019-11-28 03:40:37 -05:00
Lioncash eadeae183d
target/arm: Amend bad merge 2019-11-28 03:29:56 -05:00
Richard Henderson f2ec6bc22d
target/arm: Support EL0 v7m msr/mrs for CONFIG_USER_ONLY
Simply moving the non-stub helper_v7m_mrs/msr outside of
!CONFIG_USER_ONLY is not an option, because of all of the
other system-mode helpers that are called.

But we can split out a few subroutines to handle the few
EL0 accessible registers without duplicating code.

Backports commit 04c9c81b8fa2ee33f59a26265700fae6fc646062 from qemu
2019-11-28 03:29:46 -05:00
Richard Henderson df5929cb69
target/arm: Relax r13 restriction for ldrex/strex for v8.0
Armv8-A removes UNPREDICTABLE for R13 for these cases.

Backports commit d46ad79efac7aaf9f0eb9f5a96a576e9f39200e0 from qemu
2019-11-28 03:29:31 -05:00
Richard Henderson fa7a6a5d91
target/arm: Do not reject rt == rt2 for strexd
There was too much cut and paste between ldrexd and strexd:
ldrexd does prohibit its two output registers being the same,
but that restriction does not apply to strexd.

Fixes: af288228995

Backports commit 655b02646dc175dc10666459b0a1e4346fc8d46a from qemu
2019-11-28 03:29:18 -05:00
Lioncash 28e90d563a
memory: Delete memory region subregions
Allows for more graceful teardown of unicorn.
2019-11-28 03:03:11 -05:00
Tony Nguyen f75368cd0f
tcg: TCGMemOp is now accelerator independent MemOp
Preparation for collapsing the two byte swaps, adjust_endianness and
handle_bswap, along the I/O path.

Target-dependent attributes are conditionalized upon NEED_CPU_H.

Backports commit 14776ab5a12972ea439c7fb2203a4c15a09094b4 from qemu
2019-11-28 03:01:12 -05:00
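For reference, a sketch of the core of the renamed enum as it lands in include/exec/memop.h (values carried over from TCGMemOp):

    typedef enum MemOp {
        MO_8     = 0,
        MO_16    = 1,
        MO_32    = 2,
        MO_64    = 3,
        MO_SIZE  = 3,   /* mask of the size bits */
        MO_SIGN  = 4,   /* sign-extended load */
        MO_BSWAP = 8,   /* byte-swapped relative to the host */
    #ifdef HOST_WORDS_BIGENDIAN
        MO_LE    = MO_BSWAP,
        MO_BE    = 0,
    #else
        MO_LE    = 0,
        MO_BE    = MO_BSWAP,
    #endif
        /* MO_TE (target endianness) stays guarded by NEED_CPU_H, as the
           message above notes, since it is target dependent */
    } MemOp;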
Peter Maydell 77d90985cc
target/sparc: Switch to do_transaction_failed() hook
Switch the SPARC target from the old unassigned_access hook to the
new do_transaction_failed hook.

This will cause the "if transaction failed" code paths added in
the previous commits to become active if the access is to an
unassigned address. In particular we'll now handle bus errors
during page table walks correctly (generating a translation
error with the right kind of fault status).

Backports commit f8c3db33a5e863291182f8862ddf81618a7c6194 from qemu
2019-11-28 02:56:50 -05:00
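The new hook has this shape (a sketch against the CPUClass hook signature of that era; the body summarizes rather than reproduces the backport):

    static void sparc_cpu_do_transaction_failed(CPUState *cs, hwaddr physaddr,
                                                vaddr addr, unsigned size,
                                                MMUAccessType access_type,
                                                int mmu_idx, MemTxAttrs attrs,
                                                MemTxResult response,
                                                uintptr_t retaddr)
    {
        /* raise the same MMU/translation fault the old unassigned_access
           path produced, with the fault status taken from the failure */
    }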
Peter Maydell 47dd9a5286
target/sparc: Remove unused ldl_phys from dump_mmu()
The dump_mmu() function does a ldl_phys() at the start, but
then never uses the value it loads at all. Remove the
unused code.

Backports commit 9dffeec2e003a482ca858a887d3454c6bebed91e from qemu
2019-11-28 02:56:39 -05:00
Peter Maydell 7d2ca16d7f
target/sparc: Handle bus errors in mmu_probe()
Convert the mmu_probe() function to using address_space_ldl()
rather than ldl_phys(), so we can explicitly detect memory
transaction failures.

This makes no practical difference at the moment, because
ldl_phys() will return 0 on a transaction failure, and we
treat transaction failures and 0 PDEs identically. However
the spec says that MMU probe operations are supposed to
update the fault status registers, and if we ever implement
that we'll want to distinguish the difference. For the
moment, just add a TODO comment about the bug.

Backports commit d86a9ad33c75ed795f09fb43243d0acecd583f24 from qemu
2019-11-28 02:56:32 -05:00
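A before/after sketch of one page-table-entry load in mmu_probe(); the variable names are illustrative:

    MemTxResult result;
    uint32_t pde;

    /* before: a failed transaction just returns 0, indistinguishable
       from a zero (invalid) PDE */
    pde = ldl_phys(cs->as, pde_ptr);

    /* after: the failure is visible, so fault status can be updated later */
    pde = address_space_ldl(cs->as, pde_ptr, MEMTXATTRS_UNSPECIFIED, &result);
    if (result != MEMTX_OK) {
        /* TODO: update the MMU fault status registers */
    }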
Peter Maydell 0d6cada970
target/sparc: Correctly handle bus errors in page table walks
Currently we use the ldl_phys() function to read page table entries.
With the unassigned_access hook in place, if these hit an unassigned
area of memory then the hook will cause us to wrongly generate
an exception with a fault address matching the address of the
page table entry.

Change to using address_space_ldl() so we can detect and correctly
handle bus errors and give them their correct behaviour of
causing a translation error with a suitable fault status register.

Note that this won't actually take effect until we switch
over to using the do_transaction_failed hook.

Backports commit 3c818dfcc271f5ba298b06f33466ab30f9a28349 from qemu
2019-11-28 02:56:25 -05:00
Peter Maydell 13ed49dd35
target/sparc: Check for transaction failures in MXCC stream ASI accesses
Currently the ld/st_asi helper functions make calls to the
ld*_phys() and st*_phys() functions for those ASIs which
imply direct accesses to physical addresses. These implicitly
rely on the unassigned_access hook to cause them to generate
an MMU fault if the access fails.

Switch to using the address_space_* functions instead, which
return a MemTxResult that we can check. This means that when
we switch SPARC over to using the do_transaction_failed hook
we'll still get the same MMU faults we did before.

This commit converts the ASIs which do MXCC stream source
and destination accesses.

It's not clear to me whether raising an MMU fault like this
is the correct behaviour if we encounter a bus error, but
we retain the same behaviour that the old unassigned_access
hook would implement.

Backports commit 776095d3cd751a58469b68f652c1ab6785f63652 from qemu
2019-11-28 02:56:17 -05:00
Peter Maydell a9e087b252
target/sparc: Check for transaction failures in MMU passthrough ASIs
Currently the ld/st_asi helper functions make calls to the
ld*_phys() and st*_phys() functions for those ASIs which
imply direct accesses to physical addresses. These implicitly
rely on the unassigned_access hook to cause them to generate
an MMU fault if the access fails.

Switch to using the address_space_* functions instead, which
return a MemTxResult that we can check. This means that when
we switch SPARC over to using the do_transaction_failed hook
we'll still get the same MMU faults we did before.

This commit converts the ASIs which do "MMU passthrough".

Backports commit b9f5fdad49c74583dcf9fcba0805b148e3992e13 from qemu
2019-11-28 02:56:11 -05:00
Peter Maydell 0b48392779
target/sparc: Factor out the body of sparc_cpu_unassigned_access()
Currently the SPARC target uses the old-style do_unassigned_access
hook. We want to switch it over to do_transaction_failed, but to do
this we must first remove all the direct calls in ldst_helper.c to
cpu_unassigned_access(). Factor out the body of the hook function's
code into a new sparc_raise_mmu_fault() and call it from the hook and
from the various places that used to call cpu_unassigned_access().

In passing, this fixes a bug where the code that raised the
MMU exception was directly calling GETPC() from a function that
was several levels deep in the callstack from the original
helper function: the new sparc_raise_mmu_fault() instead takes
the return address as an argument.

Other than the use of retaddr rather than GETPC() and a comment
format fixup, the body of the new function has no changes from
that of the old hook function.

Backports commit c9d793f44620a4793239da73f67758ce5f5ba5d0 from qemu
2019-11-28 02:56:05 -05:00
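The factored-out helper's shape, sketched from the description above (the argument order is assumed):

    static void sparc_raise_mmu_fault(CPUState *cs, hwaddr addr,
                                      bool is_write, bool is_exec, int is_asi,
                                      unsigned size, uintptr_t retaddr);

Callers in ldst_helper.c pass the return address they already hold (or GETPC() from the top-level helper) instead of letting the fault code call GETPC() several frames down.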
Wei Yang 813ec29d3c
exec.c: add a check between constants to see whether we could skip
The maximum level is defined as P_L2_LEVELS and skip is defined with 6
bits, which means that if P_L2_LEVELS < (1 << 6), skip can never exceed
the boundary.

Since this check compares two constants, the compiler can optimize the
code away for configurations where it cannot trigger.

Backports commit 526ca2360ea1cd947f74c8c6c38b91b9d6fcfdb5 from qemu
2019-11-28 02:55:42 -05:00
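The resulting guard in phys_page_compact() can be sketched like this; both operands of the first comparison are compile-time constants, so the whole test disappears on configurations where P_L2_LEVELS < (1 << 6):

    /* don't compress if the combined skip won't fit in the 6-bit field */
    if (P_L2_LEVELS >= (1 << 6) &&
        lp->skip + p[valid_ptr].skip >= (1 << 6)) {
        return;
    }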
Wei Yang 623632f3ac
exec.c: correct the maximum skip value during compact
skip is defined with 6 bits, so the boundary the compaction check uses
should be (1 << 6).

Backports commit 26ca2075babd7775e246b9eb7da75d6de77eb658 from qemu
2019-11-28 02:55:31 -05:00
Wei Yang 2e55ddd339
exec.c: subpage->sub_section is already initialized to 0
In subpage_init(), we set subpage->sub_section to
PHYS_SECTION_UNASSIGNED via subpage_register. Since
PHYS_SECTION_UNASSIGNED is defined to be 0, and we allocate the subpage
with g_malloc0, subpage->sub_section is already initialized to 0.

This patch removes the redundant setup for a new subpage and also fixes
the code style.

Backports commit b797ab1a15ba8d2b2fc4ec3e1f24d755f6855d05 from qemu
2019-11-28 02:55:23 -05:00
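A sketch of why the explicit registration is redundant (allocation as in exec.c; the call is shown only for illustration):

    /* sub_section[] is a uint16_t array appended to the struct; g_malloc0
       leaves it all zeroes, i.e. every entry is PHYS_SECTION_UNASSIGNED (0) */
    subpage_t *mmio = g_malloc0(sizeof(subpage_t) +
                                TARGET_PAGE_SIZE * sizeof(uint16_t));

    /* ...so this call at the end of subpage_init() adds nothing */
    subpage_register(mmio, 0, TARGET_PAGE_SIZE - 1, PHYS_SECTION_UNASSIGNED);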
Wei Yang ef3cf096e7
exec.c: get nodes_nb_alloc with one MAX calculation
The purpose of these two MAX calls is to get the maximum of these three
variables:

A: map->nodes_nb + nodes
B: map->nodes_nb_alloc
C: alloc_hint

We can write this as MAX(A, B, C). Since the if condition guarantees
A > B, MAX(A, B, C) = MAX(A, C).

This patch just simplifies the calculation a bit.

Backports commit c95cfd040078db8017f74fd3a4d6f798385d960c from qemu
2019-11-28 02:55:16 -05:00
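The simplified reservation path, sketched with the names used in exec.c's phys_map_node_reserve():

    if (map->nodes_nb + nodes > map->nodes_nb_alloc) {
        /* A > B already holds here, so MAX(A, B, C) reduces to MAX(A, C) */
        map->nodes_nb_alloc = MAX(map->nodes_nb + nodes, alloc_hint);
        map->nodes = g_renew(Node, map->nodes, map->nodes_nb_alloc);
    }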
Wei Yang 4274315278
exec.c: replace hwaddr with uint64_t for better understanding
The *nb* argument of phys_page_set() and phys_page_set_level()
stands for the number of pages to set, not a hardware address.

It is therefore more appropriate to use uint64_t instead of hwaddr
for its type.

Backports commit 56b15076805a29673c1a90ea9c3ebef25bfcc912 from qemu
2019-11-28 02:55:08 -05:00
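The touched prototypes end up looking like this sketch; only the type of *nb* changes:

    static void phys_page_set(AddressSpaceDispatch *d,
                              hwaddr index, uint64_t nb, uint16_t leaf);

    static void phys_page_set_level(PhysPageMap *map, PhysPageEntry *lp,
                                    hwaddr *index, uint64_t *nb,
                                    uint16_t leaf, int level);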
Peter Maydell 2faffb5af1
target/mips: Switch to do_transaction_failed() hook
Switch the MIPS target from the old unassigned_access hook to the new
do_transaction_failed hook.

Unlike the old hook, do_transaction_failed is only ever called from
the TCG memory access paths, so there is no need for the "ignore this
if we're using KVM" hack that we were previously using to work around
the way unassigned_access was called for all kinds of memory accesses
to unassigned physical addresses.

The MIPS target does not ever do direct memory reads by physical
address (via either ldl_phys etc or address_space_ldl etc), so the
only memory accesses this affects are the 'normal' guest loads and
stores, which will be handled by the new hook; their behaviour is
unchanged.

Backports commit 4f02a06d50ef0081089ed8cb3ec7c7986e3c95f8 from qemu
2019-11-28 02:54:53 -05:00
Daniel P. Berrangé 3c0bab6b11
docs: split the CODING_STYLE doc into distinct groups
Backports commit 9f8efa74d3f1cb9ceeee957ee382c2b4feb2ae30 from qemu
2019-11-28 02:54:44 -05:00
Daniel P. Berrangé 1862005570
docs: document use of automatic cleanup functions in glib
Document the use of g_autofree and g_autoptr in glib for automatic
freeing of memory.

Backports commit 821f2967562a1fdc7e52a644963163e6917c4293 from qemu
2019-11-28 02:54:35 -05:00
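The documented pattern boils down to something like this (standard GLib API; the example code is illustrative, not taken from the doc):

    #include <glib.h>

    static char *make_greeting(const char *name)
    {
        /* g_autofree: g_free() runs automatically when tmp leaves scope */
        g_autofree char *tmp = g_strdup_printf("Hello, %s", name);

        /* transfer ownership to the caller so the auto-free does not fire */
        return g_steal_pointer(&tmp);
    }

g_autoptr(GType) works the same way for reference-counted or boxed types, invoking the type's registered cleanup function on scope exit.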
Daniel P. Berrangé e5c17018be
docs: merge HACKING.rst contents into CODING_STYLE.rst
The split of information between the two docs is rather arbitrary and
unclear. It is simpler for contributors if all the information is in
one file.

Backports commit 637f39568fc0bd9848fd9d225d52ab0c4c443ed3 from qemu
2019-11-28 02:54:24 -05:00
Daniel P. Berrangé d37daf38a8
docs: convert README, CODING_STYLE and HACKING to RST syntax
Backports commit 336a7451e8803c21a2da6e7d1eca8cfb8e8b219a from qemu
2019-11-28 02:54:16 -05:00
Richard Henderson 654aaf9ebe
target/arm: Inline gen_bx_im into callers
There are only two remaining uses of gen_bx_im. In each case, we
know the destination mode -- not changing in the case of gen_jmp
or changing in the case of trans_BLX_i. Use this to simplify the
surrounding code.

For trans_BLX_i, use gen_jmp for the actual branch. For gen_jmp,
use gen_set_pc_im to set up the single-step.

Backports commit eac2f39602e0423adf56be410c9a22c31fec9a81 from qemu
2019-11-28 02:54:09 -05:00
Richard Henderson e61ca839d3
target/arm: Clean up disas_thumb_insn
Now that everything is converted, remove the rest of
the legacy decode.

Backports commit 0831403b08122b5bf801b0e3469cc63f019f60f0 from qemu
2019-11-28 02:53:59 -05:00
Richard Henderson a91de478cc
target/arm: Convert T16, long branches
Backports commit 67b54c554b39fd24f0c3aabc546e83b3082ee7ff from qemu
2019-11-28 02:53:54 -05:00
Richard Henderson 8d2fe3f6db
target/arm: Convert T16, Unconditional branch
Backports commit 8d4a4dc849a28aded8f335a25b223e8e3391b6f2 from qemu
2019-11-28 02:53:46 -05:00
Richard Henderson 482799d456
target/arm: Convert T16, Unconditional branch
Backports commit 8d4a4dc849a28aded8f335a25b223e8e3391b6f2 from qemu
2019-11-28 02:53:35 -05:00
Richard Henderson 2bc615157d
target/arm: Convert T16, load (literal)
Backports commit 46beb58efbb8a2a32f601a041aa22801a3138ece from qemu
2019-11-28 02:53:27 -05:00
Richard Henderson fad910c50b
target/arm: Convert T16, shift immediate
Backports commit 151c2f2841b01bf6fef079c9f1db15a86cae8276 from qemu
2019-11-28 02:53:18 -05:00
Richard Henderson ee96ab9ea9
target/arm: Convert T16, Miscellaneous 16-bit instructions
Backports commit 43f7e42c7d515f41ff243034f51b28267ae69938 from qemu
2019-11-28 02:53:08 -05:00
Richard Henderson dec55633dc
target/arm: Convert T16, Conditional branches, Supervisor call
Backports commit 629fcaa71ca9a5d6695d1664257b6a5327f38bd6 from qemu
2019-11-28 02:53:01 -05:00
Richard Henderson 336d6b3625
target/arm: Convert T16, push and pop
Backports commit 564b125fb9dec77e5bca9b4590786985ccc3d6cb from qemu
2019-11-28 02:52:44 -05:00