Commit graph

1388 commits

Author SHA1 Message Date
Richard Henderson d196288b4f target/arm: Implement ATS1E1 system registers
This is a minor enhancement over ARMv8.1-PAN.
The *_PAN mmu_idx are used with the existing do_ats_write.

Backports commit 04b07d29722192926f467ea5fedf2c3b0996a2a5 from qemu
2020-03-21 17:42:01 -04:00
Richard Henderson 6576864930 target/arm: Set PAN bit as required on exception entry
The PAN bit is preserved, or set as per SCTLR_ELx.SPAN,
plus several other conditions listed in the ARM ARM.

Backports commit 4a2696c0d4d80e14a192b28148c6167bc5056f94 from qemu
2020-03-21 17:40:11 -04:00
Richard Henderson aad0621f96 target/arm: Enforce PAN semantics in get_S1prot
If we have a PAN-enforcing mmu_idx, set prot == 0 if user_rw != 0.

Backports commit 81636b70c226dc27d7ebc8dedbcec26166d23085 from qemu
2020-03-21 17:35:55 -04:00
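A minimal sketch of the check the commit above describes, in plain C; the function and parameter names are illustrative and not QEMU's actual get_S1prot signature:

    #include <stdbool.h>

    /* If the mmu_idx enforces PAN and the page grants any user access,
     * privileged accesses get no permissions at all. Names are illustrative. */
    static int apply_pan(bool pan_enforced, int user_rw, int priv_rw)
    {
        if (pan_enforced && user_rw != 0) {
            return 0;           /* prot == 0: privileged access denied */
        }
        return priv_rw;         /* otherwise keep the computed permissions */
    }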
Richard Henderson 41d03da852 target/arm: Update arm_mmu_idx_el for PAN
Examine the PAN bit for EL1, EL2, and Secure EL1 to
determine if it applies.

Backports commit 66412260cc1bee60a22d96e4ad8569b85745fea4 from qemu
2020-03-21 17:34:12 -04:00
Richard Henderson 35fab80c57 target/arm: Update MSR access for PAN
For aarch64, there's a dedicated msr (imm, reg) insn.
For aarch32, this is done via msr to cpsr. Writes from el0
are ignored, which is already handled by the CPSR_USER mask.

Backports commit 220f508f49c5f49fb771d5105f991c19ffede3f7 from qemu
2020-03-21 17:33:16 -04:00
Richard Henderson 50bb867a6f target/arm: Introduce aarch64_pstate_valid_mask
Use this along the exception return path, where we previously
accepted any values.

Backports commit 140845111809cd6fd57ccde93867b48cc56ffc74 from qemu
2020-03-21 17:26:00 -04:00
Richard Henderson b6b69d7ac5 target/arm: Remove CPSR_RESERVED
The only remaining use was in op_helper.c. Use PSTATE_SS
directly, and move the commentary so that it is more obvious
what is going on.

Backports commit 70dae0d069c45250bbefd9424089383a8ac239de from qemu
2020-03-21 17:24:21 -04:00
Richard Henderson 2d3239d0a1 target/arm: Use aarch32_cpsr_valid_mask in helper_exception_return
Using ~0 as the mask on the aarch64->aarch32 exception return
was not even as correct as the CPSR_ERET_MASK that we had used
on the aarch32->aarch32 exception return.

Backports commit d203cabd1bd12f31c9df0b5737421ba67b96857b from qemu
2020-03-21 17:20:53 -04:00
Richard Henderson c450694f1a target/arm: Replace CPSR_ERET_MASK with aarch32_cpsr_valid_mask
CPSR_ERET_MASK was a useless renaming of CPSR_RESERVED.
The replacement function also takes into account bits that
the CPU does not support.

Backports commit 437864216d63f052f3cd06ec8861d0e432496424 from qemu
2020-03-21 17:19:17 -04:00
Richard Henderson e4a7a089f0 target/arm: Mask CPSR_J when Jazelle is not enabled
The J bit signals Jazelle mode, and so of course is RES0
when the feature is not enabled.

Backports commit f062d1447f2a80e7a5f593b8cb5ac7cab5e16eb0 from qemu
2020-03-21 17:17:50 -04:00
Richard Henderson ca2bb77ab3 target/arm: Split out aarch32_cpsr_valid_mask
Split this helper out of msr_mask in translate.c. At the same time,
transform the negative reductive logic to positive accumulative logic.
It will be usable along the exception paths.

While touching msr_mask, fix up formatting.

Backports commit 4f9584ed4bba8a57a3cb2fa48a682725005d530a from qemu
2020-03-21 17:16:20 -04:00
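A hedged sketch of the "positive accumulative" style the commit above describes: start from the bits every CPU implements and OR in bits only when the corresponding feature exists. The bit positions are the architectural CPSR ones, but the feature associations shown are simplified and not QEMU's exact checks:

    #include <stdint.h>
    #include <stdbool.h>

    #define CPSR_NZCV   0xf0000000u      /* bits [31:28] */
    #define CPSR_Q      (1u << 27)
    #define CPSR_GE     (0xfu << 16)     /* bits [19:16] */
    #define CPSR_E      (1u << 9)

    /* Accumulate valid bits instead of starting from ~0 and subtracting. */
    static uint32_t cpsr_valid_mask(bool has_dsp, bool has_bigend)
    {
        uint32_t mask = CPSR_NZCV;
        if (has_dsp) {
            mask |= CPSR_Q | CPSR_GE;    /* Q and GE come with the DSP/SIMD bits */
        }
        if (has_bigend) {
            mask |= CPSR_E;              /* E bit only if BE data accesses exist */
        }
        return mask;
    }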
Richard Henderson e9850834d5 target/arm: Move LOR regdefs to file scope
For static const regdefs, file scope is preferred.

Backports commit d8564ee4e5bce87ec1fdf23656df9367eb1bc571 from qemu
2020-03-21 17:13:58 -04:00
Richard Henderson 5c8c2ca505 target/arm: Add isar_feature tests for PAN + ATS1E1
Include definitions for all of the bits in ID_MMFR3.
We already have a definition for ID_AA64MMFR1.PAN.

Backports commit 3d6ad6bb466f487bcc861f99e2c9054230df1076 from qemu
2020-03-21 17:13:07 -04:00
Richard Henderson 7aaf0d442b target/arm: Add mmu_idx for EL1 and EL2 w/ PAN enabled
To implement PAN, we will want to swap, for short periods
of time, to a different privileged mmu_idx. In addition,
we cannot do this with flushing alone, because the AT*
instructions have both PAN and PAN-less versions.

Add the ARMMMUIdx*_PAN constants where necessary next to
the corresponding ARMMMUIdx* constant.

Backports commit 452ef8cb8c7b06f44a30a3c3a54d3be82c4aef59 from qemu
2020-03-21 17:12:16 -04:00
Richard Henderson ed5a4950fd target/arm: Add arm_mmu_idx_is_stage1_of_2
Use a common predicate for querying stage1-ness.

Backports commit fee7aa46edd76f06c3dc176abb8fd05b365efce6 from qemu
2020-03-21 16:56:03 -04:00
Richard Henderson eb0586f9cd target/arm: Raise only one interrupt in arm_cpu_exec_interrupt
The fall-through organization of this function meant that we
would raise an interrupt, then might overwrite that with another.
Since interrupt prioritization is IMPLEMENTATION DEFINED, we
can recognize these in any order we choose.

Unify the code to raise the interrupt in a block at the end.

Backports commit d63d0ec59d87a698de5ed843288f90a23470cf2e from qemu
2020-03-21 16:42:52 -04:00
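The restructuring in the commit above can be sketched as: decide which pending-and-unmasked interrupt wins first, then raise it in one place at the end, so a later case can no longer overwrite an exception that was already raised. The predicates and EXCP values below are placeholders, not QEMU's:

    #include <stdbool.h>

    enum { EXCP_NONE = -1, EXCP_FIQ, EXCP_IRQ, EXCP_VFIQ, EXCP_VIRQ };

    static bool exec_interrupt(bool fiq, bool irq, bool vfiq, bool virq)
    {
        int excp = EXCP_NONE;

        /* Pick one interrupt; prioritization is IMPLEMENTATION DEFINED. */
        if (fiq) {
            excp = EXCP_FIQ;
        } else if (irq) {
            excp = EXCP_IRQ;
        } else if (vfiq) {
            excp = EXCP_VFIQ;
        } else if (virq) {
            excp = EXCP_VIRQ;
        }

        if (excp == EXCP_NONE) {
            return false;
        }
        /* single block that raises the chosen exception, e.g. do_interrupt() */
        return true;
    }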
Richard Henderson d00e5ec47d target/arm: Use bool for unmasked in arm_excp_unmasked
The value computed is fully boolean; using int8_t is odd.

Backports commit 16e07f78df002067bc4bfb115ba1ee0ce278e9e5 from qemu
2020-03-21 16:40:36 -04:00
Richard Henderson 975f0a9bc5 target/arm: Pass more cpu state to arm_excp_unmasked
Avoid redundant computation of cpu state by passing it in
from the caller, which has already computed it for itself.

Backports commit be87955687446be152f366af543c9234eab78a7c from qemu
2020-03-21 16:39:16 -04:00
Richard Henderson 6023db20bc target/arm: Move arm_excp_unmasked to cpu.c
This inline function has one user in cpu.c, and need not be exposed
otherwise. Code movement only, with fixups for checkpatch.

Backports commit 310cedf39dea240a89f90729fd99481ff6158e90 from qemu
2020-03-21 16:37:12 -04:00
Richard Henderson ad5a3b2532 target/arm: Enable ARMv8.1-VHE in -cpu max
Backports commit cd3f80aba0c559a6291f7c3e686422b15381f6b7 from qemu
2020-03-21 16:36:04 -04:00
Richard Henderson 36407da586 target/arm: Update arm_cpu_do_interrupt_aarch64 for VHE
When VHE is enabled, the exception level below EL2 is not EL1,
but EL0, and so to identify the entry vector offset for exceptions
targeting EL2 we need to look at the width of EL0, not of EL1.

Backports commit cb092fbbaeb7b4e91b3f9c53150c8160f91577c7 from qemu
2020-03-21 16:35:07 -04:00
Richard Henderson 8f1201e392 target/arm: Update get_a64_user_mem_index for VHE
The EL2&0 translation regime is affected by Load Register (unpriv).

The code structure used here will facilitate later changes in this
area for implementing UAO and NV.

Backports commit cc28fc30e333dc2f20ebfde54444697e26cd8f6d from qemu
2020-03-21 16:33:52 -04:00
Alex Bennée 76ca1cd732 target/arm: check TGE and E2H flags for EL0 pauth traps
According to the ARM ARM we should only trap from the EL1&0 regime.

Backports commit a7469a3c1edc7687d7d25967bc2c0280de202bca from qemu
2020-03-21 16:27:40 -04:00
Richard Henderson 01e1e7a3a0 target/arm: Update {fp,sve}_exception_el for VHE
When TGE+E2H are both set, CPACR_EL1 is ignored.

Backports commit c2ddb7cf963b3bea838266bfca62514dc9750a10 from qemu
2020-03-21 16:26:01 -04:00
Richard Henderson 86d0163465 target/arm: Update arm_phys_excp_target_el for TGE
The TGE bit routes all asynchronous exceptions to EL2.

Backports commit d1b31428fd522b725bc053c84b5fbc8764061363 from qemu
2020-03-21 16:23:52 -04:00
Richard Henderson 0c03fa2dac target/arm: Flush tlbs for E2&0 translation regime
Backports commit 85d0dc9fa205027554372367f6925749a2d2b4c4 from qemu
2020-03-21 16:22:46 -04:00
Richard Henderson 50ac89852a target/arm: Flush tlb for ASID changes in EL2&0 translation regime
Since we only support a single ASID, flush the tlb when it changes.

Note that TCR_EL2, like TCR_EL1, has the A1 bit that chooses between
the two TTBR* registers for the location of the ASID.

Backports commit d06dc93340825030b6297c61199a17c0067b0377 from qemu
2020-03-21 16:13:55 -04:00
Richard Henderson a2b8ebabfa target/arm: Add VHE timer register redirection and aliasing
Apart from the wholesale redirection that HCR_EL2.E2H performs
for EL2, there's a separate redirection specific to the timers
that happens for EL0 when running in the EL2&0 regime.

Backports commit bb5972e439dc0ac4d21329a9d97bad6760ec702d from qemu
2020-03-21 16:09:54 -04:00
Richard Henderson e41c51f6da target/arm: Add VHE system register redirection and aliasing
Several of the EL1/0 registers are redirected to the EL2 version when in
EL2 and HCR_EL2.E2H is set. Many of these registers have side effects.
Link together the two ARMCPRegInfo structures after they have been
properly instantiated. Install common dispatch routines to all of the
relevant registers.

The same set of registers that are redirected also have additional
EL12/EL02 aliases created to access the original register that was
redirected.

Omit the generic timer registers from redirection here, because we'll
need multiple kinds of redirection from both EL0 and EL2.

Backports commit e2cce18f5c1d0d55328c585c8372cdb096bbf528 from qemu
2020-03-21 15:57:03 -04:00
Richard Henderson ff720b7fd3 target/arm: Update define_one_arm_cp_reg_with_opaque for VHE
For ARMv8.1, op1 == 5 is reserved for EL2 aliases of
EL1 and EL0 registers.

Backports commit b4ecf60f7eee88cbfe5700044790cb7494c5dd37 from qemu
2020-03-21 15:39:54 -04:00
Richard Henderson 8c7795dc04 target/arm: Update timer access for VHE
Backports commit 5bc8437136fb1e7bc8b566f4f2f7269b0f990fad from qemu
2020-03-21 15:38:47 -04:00
Richard Henderson d6150127b4 target/arm: Add the hypervisor virtual counter
Backports commit 8c94b071a09c2183f032febff3112f2b7662156c from qemu
2020-03-21 15:35:36 -04:00
Richard Henderson 8e2ac48ad0 target/arm: Update ctr_el0_access for EL2
Update to include checks against HCR_EL2.TID2.

Backports commit 97475a89375d62a7722e04ced9fbdf0b992f4b83 from qemu
2020-03-21 15:31:48 -04:00
Richard Henderson 6886ba66d0 target/arm: Update aa64_zva_access for EL2
The comment that we don't support EL2 is somewhat out of date.
Update to include checks against HCR_EL2.TDZ.

Backports commit 4351cb72fb65926136ab618c9e40c1f5a8813251 from qemu
2020-03-21 15:30:37 -04:00
Richard Henderson 3a5135473f target/arm: Update arm_sctlr for VHE
Use the correct sctlr for EL2&0 regime. Due to header ordering,
and where arm_mmu_idx_el is declared, we need to move the function
out of line. Use the function in many more places in order to
select the correct control.

Backports commit aaec143212bb70ac9549cf73203d13100bd5c7c2 from qemu
2020-03-21 15:29:21 -04:00
Richard Henderson 6073542afc target/arm: Update arm_mmu_idx for VHE
Return the indexes for the EL2&0 regime when the appropriate bits
are set within HCR_EL2.

Backports commit 6003d9800ee38aa11eefb5cd64ae55abb64bef16 from qemu
2020-03-21 15:23:38 -04:00
Richard Henderson 3d583cd45f target/arm: Split out arm_mmu_idx_el
Backports commit 164690b29f9eaf69fe641859bc9f8954f12e691d from qemu
2020-03-21 15:22:01 -04:00
Richard Henderson f4397b0212 target/arm: Add regime_has_2_ranges
Create a predicate to indicate whether the regime has
both positive and negative addresses.

Backports commit 339370b90d067345b69585ddf4b668fa01f41d67 from qemu
2020-03-21 15:14:11 -04:00
Richard Henderson 0318d7af99 target/arm: Reorganize ARMMMUIdx
Prepare for, but do not yet implement, the EL2&0 regime.
This involves adding the new MMUIdx enumerators and adjusting
some of the MMUIdx related predicates to match.

Backports commit b9f6033c1a5fb7da55ed353794db8ec064f78bb2 from qemu.
2020-03-21 15:10:05 -04:00
Richard Henderson 85a7cfbdc6 target/arm: Tidy ARMMMUIdx m-profile definitions
Replace the magic numbers with the relevant ARM_MMU_IDX_M_* constants.
Keep the definitions short by referencing previous symbols.

Backports commit 25568316b2a7e73d68701042ba6ebdb217205e20 from qemu
2020-03-21 14:58:17 -04:00
Richard Henderson c223708063 target/arm: Rearrange ARMMMUIdxBit
Define via macro expansion, so that renumbering of the base ARMMMUIdx
symbols is automatically reflected in the bit definitions.

Backports commit 5f09a6dfbfbff4662f52cc3130a2e07044816497 from qemu
2020-03-21 14:56:46 -04:00
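A sketch of the macro-expansion idea from the commit above; the enumerator values and the COREIDX mask are illustrative rather than QEMU's exact definitions:

    /* Each ARMMMUIdxBit_* is derived from the matching ARMMMUIdx_* enumerator,
     * so renumbering the enum automatically renumbers the bits. */
    #define ARM_MMU_IDX_COREIDX_MASK 0xf

    typedef enum ARMMMUIdx {
        ARMMMUIdx_E10_0 = 0,
        ARMMMUIdx_E10_1 = 1,
        ARMMMUIdx_E2    = 2,
    } ARMMMUIdx;

    #define TO_CORE_BIT(NAME) \
        ARMMMUIdxBit_##NAME = 1 << (ARMMMUIdx_##NAME & ARM_MMU_IDX_COREIDX_MASK)

    typedef enum ARMMMUIdxBit {
        TO_CORE_BIT(E10_0),
        TO_CORE_BIT(E10_1),
        TO_CORE_BIT(E2),
    } ARMMMUIdxBit;

    #undef TO_CORE_BIT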
Richard Henderson 56504d255b target/arm: Expand TBFLAG_ANY.MMUIDX to 4 bits
We are about to expand the number of mmuidx to 10, and so need 4 bits.
For the benefit of reading the number out of -d exec, align it to the
penultimate nibble.

Backports commit 506f149815c2168f16ade17893e117419d93f248 from qemu
2020-03-21 14:54:55 -04:00
Richard Henderson be3c71fb8b target/arm: Recover 4 bits from TBFLAGs
We had completely run out of TBFLAG bits.
Split A- and M-profile bits into two overlapping buckets.
This results in 4 free bits.

We used to initialize all of the a32 and m32 fields in DisasContext
by assignment, in arm_tr_init_disas_context. Now we only initialize
either the a32 or m32 by assignment, because the bits overlap in
tbflags. So zero the entire structure in gen_intermediate_code.

Backports commit 79cabf1f473ca6e9fa0727f64ed9c2a84a36f0aa from qemu
2020-03-21 14:51:46 -04:00
Richard Henderson 153d7aadd5 target/arm: Rename ARMMMUIdx_S1E2 to ARMMMUIdx_E2
This is part of a reorganization to the set of mmu_idx.
The non-secure EL2 regime only has a single stage translation;
there is no point in pointing out that the idx is for stage1.

Backports commit e013b7411339342aac8d986c5d5e329e1baee8e1 from qemu
2020-03-21 14:42:23 -04:00
Richard Henderson f45ab0614e target/arm: Rename ARMMMUIdx*_S1E3 to ARMMMUIdx*_SE3
This is part of a reorganization to the set of mmu_idx.
The EL3 regime only has a single stage translation, and
is always secure.

Backports commit 127b2b086303296289099a6fb10bbc51077f1d53 from qemu
2020-03-21 14:38:44 -04:00
Richard Henderson 1a672fc3b1 target/arm: Rename ARMMMUIdx_S1SE[01] to ARMMMUIdx_SE10_[01]
This is part of a reorganization to the set of mmu_idx.
This emphasizes that they apply to the Secure EL1&0 regime.

Backports commit fba37aedecb82506c62a1f9e81d066b4fd04e443 from qemu
2020-03-21 14:35:28 -04:00
Richard Henderson 31837384b3 target/arm: Rename ARMMMUIdx_S1NSE* to ARMMMUIdx_Stage1_E*
This is part of a reorganization to the set of mmu_idx.
The EL1&0 regime is the only one that uses 2-stage translation.
Spelling out Stage avoids confusion with Secure.

Backports commit 2859d7b590760283a7b5aef40b723e9dfd7c98ba from qemu
2020-03-21 14:20:31 -04:00
Richard Henderson b62b4c4f35 target/arm: Rename ARMMMUIdx_S2NS to ARMMMUIdx_Stage2
The EL1&0 regime is the only one that uses 2-stage translation.

Backports commit 97fa9350017e647151dd1dc212f1bbca0294dba7 from qemu
2020-03-21 14:15:35 -04:00
Richard Henderson ec05f22e82 target/arm: Rename ARMMMUIdx*_S12NSE* to ARMMMUIdx*_E10_*
This is part of a reorganization to the set of mmu_idx.
This emphasizes that they apply to the EL1&0 regime.

The ultimate goal is

-- Non-secure regimes:
ARMMMUIdx_E10_0,
ARMMMUIdx_E20_0,
ARMMMUIdx_E10_1,
ARMMMUIdx_E2,
ARMMMUIdx_E20_2,

-- Secure regimes:
ARMMMUIdx_SE10_0,
ARMMMUIdx_SE10_1,
ARMMMUIdx_SE3,

-- Helper mmu_idx for non-secure EL1&0 stage1 and stage2
ARMMMUIdx_Stage2,
ARMMMUIdx_Stage1_E0,
ARMMMUIdx_Stage1_E1,

The 'S' prefix is reserved for "Secure". Unless otherwise specified,
each mmu_idx represents all stages of translation.

Backports commit 01b98b686460b3a0fb47125882e4f8d4268ac1b6 from qemu
2020-03-21 14:09:15 -04:00
Richard Henderson 270d557a99 target/arm: Split out alle1_tlbmask
No functional change, but unify code sequences.

Backports commit 90c19cdf1de440d7d9745cf255168999071b3a31 from qemu
2020-03-21 13:57:04 -04:00
Richard Henderson 6d4a7b84b5 target/arm: Split out vae1_tlbmask
No functional change, but unify code sequences.

Backports commit b7e0730de32d7079a1447ecbb5616d89de77b823 from qemu
2020-03-21 13:53:39 -04:00
Richard Henderson 4f4c385a8e target/arm: Update CNTVCT_EL0 for VHE
The virtual offset may be 0 depending on EL, E2H and TGE.

Backports commit 53d1f85608f83d645491eba6581d1f300dba2384 from qemu
2020-03-21 13:50:35 -04:00
Richard Henderson 215b4a9851 target/arm: Add TTBR1_EL2
At the same time, add writefn to TTBR0_EL2 and TCR_EL2.
A later patch will update any ASID therein.

Backports commit ed30da8eee6906032b38a84e4807e2142b09d8ec from qemu
2020-03-21 13:47:56 -04:00
Richard Henderson 35508d46c7 target/arm: Add CONTEXTIDR_EL2
Not all of the breakpoint types are supported, but those that
only examine contextidr are extended to support the new register.

Backports commit e2a1a4616c86159eb4c07659a02fff8bb25d3729 from qemu
2020-03-21 13:39:20 -04:00
Richard Henderson fe6825ca4d target/arm: Enable HCR_E2H for VHE
Backports commit 03c76131bc494366a4357a1d265c5eb5cc820754 from qemu
2020-03-21 13:35:00 -04:00
Richard Henderson 5455bd4037 target/arm: Define isar_feature_aa64_vh
Backports commit 8fc2ea21f75923b427eba261eb70f4a258f1b4e5 from qemu
2020-03-21 13:34:11 -04:00
Alex Bennée ced8834737 target/arm: fix TCG leak for fcvt half->double
When support for the AHP flag was added we inexplicably only freed the
new temps in one of the two legs. Move those tcg_temp_free to the same
level as the allocation to fix that leak.

Backports commit aeab8e5eb220cc5ff84b0b68b9afccc611bf0fcd from qemu
2020-03-21 13:14:47 -04:00
Vincent Dehors 6127f08028 target/arm: Fix PAuth sbox functions
In the PAC computation, the sbox was applied over the wrong bits.
As this is a 4-bit sbox, the bit index should be incremented by 4 instead of 16.

Test vector from QARMA paper (https://eprint.iacr.org/2016/444.pdf) was
used to verify one computation of the pauth_computepac() function which
uses sbox2.

Launchpad: https://bugs.launchpad.net/bugs/1859713

Backports commit de0b1bae6461f67243282555475f88b2384a1eb9 from qemu
2020-03-21 12:17:26 -04:00
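The stride bug in the commit above comes down to how a 4-bit S-box walks a 64-bit value: every nibble must be substituted, so the loop advances 4 bits at a time. A standalone sketch (the table contents would be a placeholder, not the QARMA sigma box):

    #include <stdint.h>

    static uint64_t apply_sbox4(uint64_t x, const uint8_t sbox[16])
    {
        uint64_t out = 0;
        /* Stepping by 16 instead of 4 would transform only every 4th nibble. */
        for (int i = 0; i < 64; i += 4) {
            out |= (uint64_t)sbox[(x >> i) & 0xf] << i;
        }
        return out;
    }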
Clement Deschamps 1d97d223c3 target/arm: add PMU feature to cortex-r5 and cortex-r5f
The PMU is not optional on cortex-r5 and cortex-r5f (see
the "Features" chapter of the Technical Reference Manual).

Backports commit 90f671581ac601fcc1b840d9e9abe7e3c3e672db from qemu
2020-03-21 12:16:11 -04:00
Richard Henderson dc9733e555 target/arm: Set ISSIs16Bit in make_issinfo
During the conversion to decodetree, the setting of
ISSIs16Bit got lost. This causes the guest OS to
incorrectly adjust trapping memory operations.

Backports commit 1a1fbc6cbb34c26d43d8360c66c1d21681af14a9 from qemu
2020-03-21 12:09:05 -04:00
Jeff Kubascik c9aadd696f target/arm: Return correct IL bit in merge_syn_data_abort
The IL bit is set for 32-bit instructions, thus passing false
with the is_16bit parameter to syn_data_abort_with_iss() makes
a syn mask that always has the IL bit set.

Pass is_16bit as true to make the initial syn mask have IL=0,
so that the final IL value comes from or'ing template_syn.

Cc: qemu-stable@nongnu.org
Fixes: aaa1f954d4ca ("target-arm: A64: Create Instruction Syndromes for Data Aborts")

Backports commit 30d544839e278dc76017b9a42990c41e84a34377 from qemu
2020-03-21 12:08:05 -04:00
Jeff Kubascik 95e39f60be target/arm: adjust program counter for wfi exception in AArch32
The wfi instruction can be configured to be trapped by a higher exception
level, such as the EL2 hypervisor. When the instruction is trapped, the
program counter should contain the address of the wfi instruction that
caused the exception. The program counter is adjusted for this in the wfi op
helper function.

However, this correction is done to env->pc, which only applies to AArch64
mode. For AArch32, the program counter is stored in env->regs[15]. This
adds an if-else statement to modify the correct program counter location
based on the the current CPU mode.

Backports commit 855532912b0e1bf803ae393e5b0c7e80948cd6a4 from qemu
2020-03-21 12:07:11 -04:00
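A sketch of the if-else the commit above describes; the struct is a stand-in for QEMU's CPUARMState, with only the two fields named in the commit message:

    #include <stdint.h>
    #include <stdbool.h>

    typedef struct {
        bool aarch64;
        uint64_t pc;          /* AArch64 program counter */
        uint32_t regs[16];    /* AArch32 registers; regs[15] is the PC */
    } CPUStateSketch;

    /* Rewind the PC of the trapped wfi so the exception reports its address. */
    static void rewind_wfi_pc(CPUStateSketch *env, unsigned insn_len)
    {
        if (env->aarch64) {
            env->pc -= insn_len;
        } else {
            env->regs[15] -= insn_len;
        }
    }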
Richard Henderson fb1988190e target/arm: Fix sign-extension for SMLAL*
The 32-bit product should be sign-extended, not zero-extended.

Fixes: ea96b37

Backports commit 1ab170865202aab8301131f31bffd87ea0f60d16 from qemu
2020-03-21 11:34:43 -04:00
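In plain C, the sign-extension fix for the halfword SMLAL variants looks like the sketch below; it states the intended arithmetic, not the TCG implementation:

    #include <stdint.h>

    /* The 16x16 multiply yields a signed 32-bit product, which must be
     * sign-extended (not zero-extended) into the 64-bit accumulator. */
    static int64_t smlalxy(int64_t acc, int16_t rn_half, int16_t rm_half)
    {
        int32_t product = (int32_t)rn_half * (int32_t)rm_half;
        return acc + (int64_t)product;   /* sign-extending widen */
    }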
Charles Ferguson 0d0d054382 Add implementation of access to the ARM SPSR register. (#1178)
The SPSR register is named within the Unicorn headers, but the code
to access it is absent. This means that it will always read as 0 and
ignore writes. This makes it harder to work with changes in processor
mode, as the usual way to return from a CPU exception is a
`MOVS pc, lr` for undefined instructions or `SUBS pc, lr, #4`
for most other aborts - which implicitly restores the CPSR from SPSR.

This change adds the access to the SPSR so that it can be read and
written as the caller might expect.

Backports commit 99097cab4c39fb3fc50eea8f0006954f62a149b2 from unicorn.
2020-01-14 09:57:55 -05:00
Pan Nengyuan 134a026e6b arm/translate-a64: fix uninitialized variable warning
Fixes:
target/arm/translate-a64.c: In function 'disas_crypto_three_reg_sha512':
target/arm/translate-a64.c:13625:9: error: 'genfn' may be used uninitialized in this function [-Werror=maybe-uninitialized]
genfn(tcg_rd_ptr, tcg_rn_ptr, tcg_rm_ptr);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
qemu/target/arm/translate-a64.c:13609:8: error: 'feature' may be used uninitialized in this function [-Werror=maybe-uninitialized]
if (!feature) {

Backports commit c7a5e7910517e2711215a9e869a733ffde696091 from qemu
2020-01-14 08:46:42 -05:00
Alex Bennée 8f275077b0 target/arm: only update pc after semihosting completes
Before we introduce blocking semihosting calls we need to ensure we
can restart the system on a semihosting exception. To be able to do
this the EXCP_SEMIHOST operation should be idempotent until it finally
completes. Practically this means ensuring we only update the pc
after the semihosting call has completed.

Backports commit 4ff5ef9e911c670ca10cdd36dd27c5395ec2c753 from qemu
2020-01-14 08:28:25 -05:00
Alex Bennée ea2714796f target/arm: remove unused EXCP_SEMIHOST leg
All semihosting exceptions are dealt with earlier in the common code
so we should never get here.

Backports commit b906acbb3aceed5b1eca30d9d365d5bd7431400b from qemu
2020-01-14 08:18:51 -05:00
Alex Bennée 639c5c4fe2 target/arm: ensure we use current exception state after SCR update
A write to the SCR can change the effective EL by dropping the system
from secure to non-secure mode. However if we use a cached current_el
from before the change we'll rebuild the flags incorrectly. To fix
this we introduce the ARM_CP_NEWEL CP flag to indicate the new EL
should be used when recomputing the flags.

Backports part of commit f80741d107673f162e3b097fc76a1590036cc9d1 from qemu
2020-01-14 07:51:10 -05:00
Beata Michalska 81c14bb595 target/arm: Add support for DC CVAP & DC CVADP ins
ARMv8.2 introduced support for Data Cache Clean instructions
to the PoP (point of persistence) - DC CVAP - and to the PoDP
(point of deep persistence) - DC CVADP. Both specify conceptual points in a memory system where all writes
that are to reach them are considered persistent.
The support provided considers both to be actually the same, so there is no
distinction between the two. If none is available (there is no backing store
for the given memory), both will result in a Data Cache Clean up to the point of
coherency. Otherwise a sync for the specified range shall be performed.

Backports commit 0d57b49992200a926c4436eead97ecfc8cc710be from qemu
2020-01-14 07:47:48 -05:00
Niek Linnenbank 998714db1f arm/arm-powerctl: set NSACR.{CP11, CP10} bits in arm_set_cpu_on()
This change ensures that the FPU can be accessed in Non-Secure mode
when the CPU core is reset using the arm_set_cpu_on() function call.
The NSACR.{CP11,CP10} bits define the exception level required to
access the FPU in Non-Secure mode. Without these bits set, the CPU
will give an undefined exception trap on the first FPU access for the
secondary cores under Linux.

This is necessary because in this power-control codepath QEMU
is effectively emulating a bit of EL3 firmware, and has to set
the CPU up as the EL3 firmware would.

Fixes: fc1120a7f5

Backports commit 0c7f8c43daf6556078e51de98aa13f069e505985 from qemu
2020-01-07 18:10:29 -05:00
Marc Zyngier 9c3e512479 target/arm: Add support for missing Jazelle system registers
QEMU lacks the minimum Jazelle implementation that is required
by the architecture (everything is RAZ or RAZ/WI). Add it
together with the HCR_EL2.TID0 trapping that goes with it.

Backports commit f96f3d5f09973ef40f164cf2d5fd98ce5498b82a from qemu
2020-01-07 18:09:13 -05:00
Marc Zyngier 457934855b target/arm: Handle AArch32 CP15 trapping via HSTR_EL2
HSTR_EL2 offers a way to trap ranges of CP15 system register
accesses to EL2, and it looks like this register is completely
ignored by QEMU.

To avoid adding extra .accessfn filters all over the place (which
would have a direct performance impact), let's add a new TB flag
that gets set whenever HSTR_EL2 is non-zero and that QEMU translates
a context where this trap has a chance to apply, and only generate
the extra access check if the hypervisor is actively using this feature.

Tested with a hand-crafted KVM guest accessing CBAR.

Backports commit 5bb0a20b74ad17dee5dae38e3b8b70b383ee7c2d from qemu
2020-01-07 18:07:21 -05:00
Marc Zyngier 868de52f69 target/arm: Handle trapping to EL2 of AArch32 VMRS instructions
HCR_EL2.TID3 requires that AArch32 reads of MVFR[012] are trapped to
EL2, and HCR_EL2.TID0 does the same for reads of FPSID.
In order to handle this, introduce a new TCG helper function that
checks for these control bits before executing the VMRS instruction.

Tested with a hacked-up version of KVM/arm64 that sets the control
bits for 32bit guests.

Backports commit 9ca1d776cb49c09b09579d9edd0447542970c834 from qemu
2020-01-07 18:04:16 -05:00
Marc Zyngier 51062d3fc2 target/arm: Honor HCR_EL2.TID1 trapping requirements
HCR_EL2.TID1 mandates that access from EL1 to REVIDR_EL1, AIDR_EL1
(and their 32bit equivalents) as well as TCMTR, TLBTR are trapped
to EL2. QEMU ignores it, making it harder for a hypervisor to
virtualize the HW (though to be fair, no known hypervisor actually
cares).

Do the right thing by trapping to EL2 if HCR_EL2.TID1 is set.

Backports commit 93fbc983b29a2eb84e2f6065929caf14f99c3681 from qemu
2020-01-07 18:00:01 -05:00
Marc Zyngier d1e981c44b target/arm: Honor HCR_EL2.TID2 trapping requirements
HCR_EL2.TID2 mandates that access from EL1 to CTR_EL0, CCSIDR_EL1,
CCSIDR2_EL1, CLIDR_EL1, CSSELR_EL1 are trapped to EL2, and QEMU
completely ignores it, making it impossible for hypervisors to
virtualize the cache hierarchy.

Do the right thing by trapping to EL2 if HCR_EL2.TID2 is set.

Backports commit 630fcd4d2ba37050329e0adafdc552d656ebe2f3 from qemu
2020-01-07 17:55:40 -05:00
Christophe Lyon 1df67780cd target/arm: Add support for cortex-m7 CPU
This is derived from the cortex-m4 description, adding DP support and FPv5
instructions with the corresponding flags in isar and mvfr2.

Checked that it could successfully execute
vrinta.f32 s15, s15
while cortex-m4 emulation rejects it with "illegal instruction".

Backports commit cf7beda5072e106ddce875c1996446540c5fe239 from qemu
2020-01-07 17:52:27 -05:00
Marc Zyngier 145d58c367
target/arm: Honor HCR_EL2.TID3 trapping requirements
HCR_EL2.TID3 mandates that access from EL1 to a long list of id
registers traps to EL2, and QEMU has so far ignored this requirement.

This breaks (among other things) KVM guests that have PtrAuth enabled,
while the hypervisor doesn't want to expose the feature to its guest.
To achieve this, KVM traps the ID registers (ID_AA64ISAR1_EL1 in this
case), and masks out the unsupported feature.

QEMU not honoring the trap request means that the guest observes
that the feature is present in the HW, starts using it, and dies
a horrible death when KVM injects an UNDEF, because the feature
*really* isn't supported.

Do the right thing by trapping to EL2 if HCR_EL2.TID3 is set.

Note that this change does not include trapping of the MVFR
registers from AArch32 (they are accessed via the VMRS
instruction and need to be handled in a different way).

Backports commit 6a4ef4e5d1084ce41fafa7d470a644b0fd3d9317 from qemu
2019-11-28 03:46:32 -05:00
Marc Zyngier 2e8c8b5a7c
target/arm: Fix ISR_EL1 tracking when executing at EL2
The ARMv8 ARM states that when executing at EL2, EL3 or Secure EL1,
ISR_EL1 shows the pending status of the physical IRQ, FIQ, or
SError interrupts.

Unfortunately, QEMU's implementation only considers the HCR_EL2
bits, and ignores the current exception level. This means a hypervisor
trying to look at its own interrupt state actually sees the guest
state, which is unexpected and breaks KVM as of Linux 5.3.

Instead, check for the running EL and return the physical bits
if not running in a virtualized context.

Backports commit 7cf95aed53c8770a338617ef40d5f37d2c197853 from qemu
2019-11-28 03:41:38 -05:00
Jean-Hugues Deschênes a2194585bb
target/arm: Fix handling of cortex-m FTYPE flag in EXCRET
According to the PushStack() pseudocode in the armv7m RM,
bit 4 of the LR should be set to NOT(CONTROL.FPCA) when
an FPU is present. The current implementation does this for
armv8, but not for armv7. This patch makes the existing
logic applicable to both code paths.

Backports commit f900b1e5b087a02199fbb6de7038828008e9e419 from qemu
2019-11-28 03:40:37 -05:00
Lioncash eadeae183d
target/arm: Amend bad merge 2019-11-28 03:29:56 -05:00
Richard Henderson f2ec6bc22d
target/arm: Support EL0 v7m msr/mrs for CONFIG_USER_ONLY
Simply moving the non-stub helper_v7m_mrs/msr outside of
!CONFIG_USER_ONLY is not an option, because of all of the
other system-mode helpers that are called.

But we can split out a few subroutines to handle the few
EL0 accessible registers without duplicating code.

Backports commit 04c9c81b8fa2ee33f59a26265700fae6fc646062 from qemu
2019-11-28 03:29:46 -05:00
Richard Henderson df5929cb69
target/arm: Relax r13 restriction for ldrex/strex for v8.0
Armv8-A removes UNPREDICTABLE for R13 for these cases.

Backports commit d46ad79efac7aaf9f0eb9f5a96a576e9f39200e0 from qemu
2019-11-28 03:29:31 -05:00
Richard Henderson fa7a6a5d91
target/arm: Do not reject rt == rt2 for strexd
There was too much cut and paste between ldrexd and strexd,
as ldrexd does prohibit the two output registers being the same.

Fixes: af288228995

Backports commit 655b02646dc175dc10666459b0a1e4346fc8d46a from qemu
2019-11-28 03:29:18 -05:00
Tony Nguyen f75368cd0f
tcg: TCGMemOp is now accelerator independent MemOp
Preparation for collapsing the two byte swaps, adjust_endianness and
handle_bswap, along the I/O path.

Target-dependent attributes are conditionalized upon NEED_CPU_H.

Backports commit 14776ab5a12972ea439c7fb2203a4c15a09094b4 from qemu
2019-11-28 03:01:12 -05:00
Richard Henderson 654aaf9ebe
target/arm: Inline gen_bx_im into callers
There are only two remaining uses of gen_bx_im. In each case, we
know the destination mode -- not changing in the case of gen_jmp
or changing in the case of trans_BLX_i. Use this to simplify the
surrounding code.

For trans_BLX_i, use gen_jmp for the actual branch. For gen_jmp,
use gen_set_pc_im to set up the single-step.

Backports commit eac2f39602e0423adf56be410c9a22c31fec9a81 from qemu
2019-11-28 02:54:09 -05:00
Richard Henderson e61ca839d3
target/arm: Clean up disas_thumb_insn
Now that everything is converted, remove the rest of
the legacy decode.

Backports commit 0831403b08122b5bf801b0e3469cc63f019f60f0 from qemu
2019-11-28 02:53:59 -05:00
Richard Henderson a91de478cc
target/arm: Convert T16, long branches
Backports commit 67b54c554b39fd24f0c3aabc546e83b3082ee7ff from qemu
2019-11-28 02:53:54 -05:00
Richard Henderson 8d2fe3f6db
target/arm: Convert T16, Unconditional branch
Backports commit 8d4a4dc849a28aded8f335a25b223e8e3391b6f2 from qemu
2019-11-28 02:53:46 -05:00
Richard Henderson 482799d456
target/arm: Convert T16, Unconditional branch
Backports commit 8d4a4dc849a28aded8f335a25b223e8e3391b6f2 from qemu
2019-11-28 02:53:35 -05:00
Richard Henderson 2bc615157d
target/arm: Convert T16, load (literal)
Backports commit 46beb58efbb8a2a32f601a041aa22801a3138ece from qemu
2019-11-28 02:53:27 -05:00
Richard Henderson fad910c50b
target/arm: Convert T16, shift immediate
Backports commit 151c2f2841b01bf6fef079c9f1db15a86cae8276 from qemu
2019-11-28 02:53:18 -05:00
Richard Henderson ee96ab9ea9
target/arm: Convert T16, Miscellaneous 16-bit instructions
Backports commit 43f7e42c7d515f41ff243034f51b28267ae69938 from qemu
2019-11-28 02:53:08 -05:00
Richard Henderson dec55633dc
target/arm: Convert T16, Conditional branches, Supervisor call
Backports commit 629fcaa71ca9a5d6695d1664257b6a5327f38bd6 from qemu
2019-11-28 02:53:01 -05:00
Richard Henderson 336d6b3625
target/arm: Convert T16, push and pop
Backports commit 564b125fb9dec77e5bca9b4590786985ccc3d6cb from qemu
2019-11-28 02:52:44 -05:00
Richard Henderson 25d1de9005
target/arm: Split gen_nop_hint
Now that all callers pass a constant value, split the switch
statement into the individual trans_* functions.

Backports commit 279de61a21a1622cb875ead82d6e78c989ba2966 from qemu
2019-11-28 02:52:37 -05:00
Richard Henderson a45db7fcd1
target/arm: Convert T16, nop hints
Backports commit 56e6250ede81b4e4b4ddb623874d6c3cdad4a96d from qemu
2019-11-28 02:52:29 -05:00
Richard Henderson 676f1c8783
target/arm: Convert T16, Reverse bytes
Backports commit ae3002b0218a90f2088817c70b35d3832ec91c18 from qemu
2019-11-28 02:52:19 -05:00
Richard Henderson 692ad18e62
target/arm: Convert T16, Change processor state
Add a check for ARMv6 in trans_CPS. We had this correct in
the T16 path, but had previously forgotten the check on the
A32 and T32 paths.

Backports commit 20556e7bd6111266fbf1d81e4ff7a89bfa5795a7 from qemu
2019-11-28 02:52:13 -05:00
Richard Henderson cc7d3fe9da
target/arm: Convert T16, extract
Backports commit e6f69612cc79e2acc05dafda8695f791a916946f from qemu
2019-11-28 02:52:06 -05:00
Richard Henderson 40282af492
target/arm: Convert T16 adjust sp (immediate)
Backports commit 2e6a646d7b1304d9106baad73c655132e2736c6c from qemu
2019-11-28 02:51:54 -05:00
Richard Henderson 927fb85b51
target/arm: Convert T16 add, compare, move (two high registers)
Backports commit 90aa042115a0fe39fe4cb3bcae4c4f728e2f3fdb from qemu
2019-11-28 02:51:46 -05:00
Richard Henderson 7ca1b3f817
target/arm: Convert T16 branch and exchange
Backports commit a0ef07740425b679d010fac7d9954ae003c1b191 from qemu
2019-11-28 02:51:39 -05:00
Richard Henderson c690f562c4
target/arm: Convert T16 one low register and immediate
Backports commit 6c6d237a865041972ec5b226657398f3b3018561 from qemu
2019-11-28 02:51:31 -05:00
Richard Henderson d9184b16a9
target/arm: Convert T16 add/sub (3 low, 2 low and imm)
Backports commit c4d3095bb62bdac0b4f9cb180bd7aa0b40c2c270 from qemu
2019-11-28 02:51:24 -05:00
Richard Henderson c52cfb9aa6
target/arm: Convert T16 load/store multiple
Backports commit 6e8514ba408f3cc758cd47e2da5475d8684507ec from qemu
2019-11-28 02:51:16 -05:00
Richard Henderson 10c8008266
target/arm: Convert T16 add pc/sp (immediate)
Backports commit 1cb1323433dca657a42483b2291c1ae923a91726 from qemu
2019-11-28 02:51:08 -05:00
Richard Henderson 7f79c0f2a1
target/arm: Convert T16 load/store (immediate offset)
Backports commit 07afd747f9fdd79fabf3a51416c7d795f873d297 from qemu
2019-11-28 02:51:01 -05:00
Richard Henderson 0ed7d59e35
target/arm: Convert T16 load/store (register offset)
Backports commit d1d229179c6b011cc3fa124d4b6b649866470530 from qemu
2019-11-28 02:50:48 -05:00
Richard Henderson e30f636e8a
target/arm: Convert T16 data-processing (two low regs)
Backports commit 080c4eadcbbaf95a6fcc4668cf16e4580f2bfe11 from qemu
2019-11-28 02:50:40 -05:00
Richard Henderson 0b2f37afaf
target/arm: Add skeleton for T16 decodetree
Backports commit f97b454e9e7f5d018d34b5ea85a66cff016bd3b7 from qemu
2019-11-28 02:50:27 -05:00
Richard Henderson 19f1da260f
target/arm: Simplify disas_arm_insn
Fold away all of the cases that now just goto illegal_op,
because all of their internal bits are now in decodetree.

Backports commit 590057d969a54de5d97261701c5702b3bebc9c07 from qemu
2019-11-28 02:50:03 -05:00
Richard Henderson e402eef2f0
target/arm: Simplify disas_thumb2_insn
Fold away all of the cases that now just goto illegal_op,
because all of their internal bits are now in decodetree.

Backports commit f843e77144c9334e244a422848177f2fbef5eb05 from qemu
2019-11-28 02:48:20 -05:00
Richard Henderson 1b5c72935d
target/arm: Convert TT
Backports commit d449f174e820b15ca1a1f5f3ec19999eeb7da14c from qemu
2019-11-28 02:48:06 -05:00
Richard Henderson 0e03a1f59c
target/arm: Convert SG
Backports commit 35d240acf1b6a89558e74b490feb13267415b236 from qemu
2019-11-28 02:47:57 -05:00
Richard Henderson 0afb6ab7af
target/arm: Convert Table Branch
Backports commit 808092bbe356eef0897476be50193d0778596877 from qemu
2019-11-28 02:47:48 -05:00
Richard Henderson e6b4480ca9
target/arm: Convert Unallocated memory hint
Backports commit 610f4e1764aa2049fa1711893ff62faf777813f3 from qemu
2019-11-28 02:47:41 -05:00
Richard Henderson 4593c67444
target/arm: Convert PLI, PLD, PLDW
Backports commit beb595f657d615856fee904f1e0f74f5e1e299a3 from qemu
2019-11-28 02:47:33 -05:00
Richard Henderson 5a23a1f020
target/arm: Convert SETEND
Backports commit 48c04a5dfaf2c08e00b659a22c502ec098999cf1 from qemu
2019-11-28 02:47:26 -05:00
Richard Henderson 55390520d6
target/arm: Convert CPS (privileged)
Backports commit 52f83b9c68bdde8d3da5f49292d3561bd474651d from qemu
2019-11-28 02:47:19 -05:00
Richard Henderson 15558cfa7e
target/arm: Convert Clear-Exclusive, Barriers
Backports commit 519b84711ea36bb8a339096e207b6b9b65cd2051 from qemu
2019-11-28 02:47:11 -05:00
Richard Henderson eff475c9a9
target/arm: Convert RFE and SRS
Backports commit 885782a78c6d05932c5d8f463dc50fc8312e3eb9 from qemu
2019-11-28 02:47:04 -05:00
Richard Henderson 87ff6a8bdf
target/arm: Convert SVC
Backports commit 542f5188a14758d64f7504580a9bd3cae973f546 from qemu
2019-11-28 02:46:55 -05:00
Richard Henderson a119870e57
target/arm: Convert B, BL, BLX (immediate)
Backports commit 360144f3b99f9a626ffcc6b9d76537e3a3e0e708 from qemu
2019-11-28 02:46:47 -05:00
Richard Henderson ed9b8ad2ea
target/arm: Diagnose base == pc for LDM/STM
We have been using store_reg and not store_reg_for_load when writing
back a loaded value into the base register. At first glance this is
incorrect when base == pc, however that case is UNPREDICTABLE.

Backports commit b0e382b8cf365fed8b8c43482029ac7655961a85 from qemu
2019-11-28 02:46:40 -05:00
Richard Henderson 1a0986ee25
target/arm: Diagnose too few registers in list for LDM/STM
This has been a TODO item for quite a while. The minimum bit
count for A32 and T16 is 1, and for T32 is 2.

Backports commit 4b222545dbf30b60c033e1cd6eddda612575fd8c from qemu
2019-11-28 02:46:33 -05:00
Richard Henderson fc81b12631
target/arm: Diagnose writeback register in list for LDM for v7
Prior to v7, for the A32 encoding, this operation wrote an UNKNOWN
value back to the base register. Starting in v7 this is UNPREDICTABLE.

Backports commit 3949f4675d13c587078f8f423845a3a537a22595 from qemu
2019-11-28 02:46:24 -05:00
Richard Henderson a501800ba6
target/arm: Convert LDM, STM
This includes a minor bug fix to LDM (user), which requires
bit 21 to be 0, which means no writeback.

Backports commit c5c426d4c680f908a1e262091a17b088b5709200 from qemu
2019-11-28 02:46:04 -05:00
Richard Henderson e4ca88f9d6
target/arm: Convert MOVW, MOVT
Backports commit 8f4451274b7010c1f50e0baa5bb608f19f02b90f from qemu
2019-11-28 02:46:04 -05:00
Richard Henderson b35749e239
target/arm: Convert Signed multiply, signed and unsigned divide
Backports commit 2c7c4e090409189488149869797da4acf895bad0 from qemu
2019-11-28 02:45:33 -05:00
Richard Henderson 987641cf10
target/arm: Convert packing, unpacking, saturation, and reversal
Backports commit 46497f6af73bb33c1064d43a28a48cbb4d233a23 from qemu
2019-11-28 02:44:55 -05:00
Richard Henderson 83cced6170
target/arm: Convert Parallel addition and subtraction
Backports commit adf1a5662a47d5b5b96f4f1e440e34c26b14a154 from qemu
2019-11-28 02:44:20 -05:00
Richard Henderson 21df423e47
target/arm: Convert USAD8, USADA8, SBFX, UBFX, BFC, BFI, UDF
In op_bfx, note that tcg_gen_{,s}extract_i32 already checks
for width == 32, so we don't need to special case that here.

Backports commit 86d21e4b509a2835ed79f234f476a4c5191d435b from qemu
2019-11-28 02:44:20 -05:00
Richard Henderson dbcc67ab20
target/arm: Diagnose UNPREDICTABLE ldrex/strex cases
Backports commit af2882289951e58363d714afd16f80050685fa29 from qemu
2019-11-28 02:44:20 -05:00
Richard Henderson 3ac019eb98
target/arm: Convert Synchronization primitives
Backports commit 1efdd407a25f617129e2e0d5c009c07cbe847990 from qemu
2019-11-28 02:44:18 -05:00
Richard Henderson c794962c42
target/arm: Convert load/store (register, immediate, literal)
Backports commit 5e291fe16846d216d5a69569b1c59f497dff96e4 from qemu
2019-11-28 02:42:01 -05:00
Richard Henderson d5d98450f3
target/arm: Convert T32 ADDW/SUBW
Backports commit 145952e87fb86aaa9434d768c31eedbd323f7157 from qemu
2019-11-28 02:42:01 -05:00
Richard Henderson 7b9025910d
target/arm: Convert the rest of A32 Miscellaneous instructions
Backports commit 2cde9ea57dbc4cdee3677a1a335574537810fe2e from qemu
2019-11-28 02:42:01 -05:00
Richard Henderson be2a259d3c
target/arm: Convert ERET
Pass the T5 encoding of SUBS PC, LR, #IMM through the normal SUBS path
to make it clear exactly what's happening -- we hit ALUExceptionReturn
along that path.

Backports commit ef11bc3c461e2c650e8bef552146a4b08f81884e from qemu
2019-11-28 02:42:00 -05:00
Richard Henderson 74040da34c
target/arm: Convert CLZ
Document our choice about the T32 CONSTRAINED UNPREDICTABLE behaviour.
This matches the undocumented choice made by the legacy decoder.

Backports commit 4c97f5b2f0fa9b37f9ff497f15411d809e6fd098 from qemu
2019-11-28 02:42:00 -05:00
Richard Henderson 94968602b8
target/arm: Convert BX, BXJ, BLX (register)
Backports commit 4ed95abd700e43dee8e032f754b53bec2b047f75 from qemu
2019-11-28 02:42:00 -05:00
Richard Henderson 831e17d970
target/arm: Convert Cyclic Redundancy Check
Backports commit 6c35d53f1bde7fe327c074473c3048d6e6f15e95 from qemu
2019-11-28 02:42:00 -05:00
Richard Henderson fdd135c7d2
target/arm: Convert MRS/MSR (banked, register)
The m-profile and a-profile decodings overlap. Only return false
for the case of wrong profile; handle UNDEFINED for permission failure
directly. This ensures that we don't accidentally pass an insn that
applies to the wrong profile.

Backports commit d0b26644502103ca97093ef67749812dc1df7eea from qemu
2019-11-28 02:42:00 -05:00
Richard Henderson 571d879c49
target/arm: Convert MSR (immediate) and hints
Backports commit 6313059623dc512308681ba160ed862ac387e2fb from qemu
2019-11-28 02:41:59 -05:00
Richard Henderson a011318794
target/arm: Simplify op_smlawx for SMLAW*
By shifting the 16-bit input left by 16, we can align the desired
portion of the 48-bit product and use tcg_gen_muls2_i32.

Backports commit 485b607d4f393e0de92c922806a68aef22340c98 from qemu
2019-11-28 02:40:01 -05:00
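The identity behind the commit above: SMLAW* wants bits [47:16] of the product of a 32-bit operand and a 16-bit operand, and pre-shifting the 16-bit operand left by 16 moves that slice into the high 32 bits of an ordinary 64-bit product, which is exactly what a muls2-style operation returns. A plain C sketch of the arithmetic (the multiplication by 65536 stands in for the left shift to keep the C well defined):

    #include <stdint.h>

    static int32_t smlaw_high(int32_t rn, int16_t rm16)
    {
        int64_t wide = (int64_t)rn * ((int64_t)rm16 * 65536);
        return (int32_t)(wide >> 32);            /* == (rn * rm16) >> 16 */
    }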
Richard Henderson 201be7b8b1
target/arm: Simplify op_smlaxxx for SMLAL*
Since all of the inputs and outputs are i32, dispense with
the intermediate promotion to i64 and use tcg_gen_add2_i32.

Backports commit ea96b374641bc429269096d88d4e91ee544273e9 from qemu
2019-11-28 02:40:00 -05:00
Richard Henderson 543b598d45
target/arm: Convert Halfword multiply and multiply accumulate
Backports commit 26c6923de7131fa1cf223ab67131d1992dc17001 from qemu
2019-11-28 02:40:00 -05:00
Richard Henderson 44416a6794
target/arm: Convert Saturating addition and subtraction
Backports commit 6d0730a82417e3a4a1911eb8e0246f3ba996f932 from qemu
2019-11-28 02:40:00 -05:00
Richard Henderson 45566b2780
target/arm: Simplify UMAAL
Since all of the inputs and outputs are i32, dispense with
the intermediate promotion to i64 and use tcg_gen_mulu2_i32
and tcg_gen_add2_i32.

Backports commit 2409d56454f0d028619fb1002eda86bf240906dd from qemu
2019-11-28 02:40:00 -05:00
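UMAAL computes RdHi:RdLo = Rn * Rm + RdHi + RdLo, which cannot overflow 64 bits. The commit above keeps everything in 32-bit TCG ops (a mulu2 for the product, then two add2 operations with carry); the plain C below only states the semantics being implemented:

    #include <stdint.h>

    static void umaal(uint32_t *rdlo, uint32_t *rdhi, uint32_t rn, uint32_t rm)
    {
        uint64_t result = (uint64_t)rn * rm;   /* mulu2: 32x32 -> 64 */
        result += *rdlo;                       /* first add2 */
        result += *rdhi;                       /* second add2 */
        *rdlo = (uint32_t)result;
        *rdhi = (uint32_t)(result >> 32);
    }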
Richard Henderson 5e5ae4c0d0
target/arm: Convert multiply and multiply accumulate
Backports commit bd92fe353bda4412ffc46c0f7415207a684b45f2 from qemu
2019-11-28 02:40:00 -05:00
Richard Henderson 677cf191d2
target/arm: Convert Data Processing (immediate)
Convert the modified immediate form of the data processing insns.
For A32, we can finally remove any code that was intertwined with
the register and register-shifted-register forms.

Backports commit 581c6ebd17c8f56ad52772216e6c6d8cc8997e8b from qemu
2019-11-28 02:39:16 -05:00
Richard Henderson 1b21ced6a1
target/arm: Convert Data Processing (reg-shifted-reg)
Convert the register shifted by register form of the data
processing insns. For A32, we cannot yet remove any code
because the legacy decoder intertwines the immediate form.

Backports commit 5be2c12337f4cbdbda4efe6ab485350f730faaad from qemu
2019-11-28 02:39:16 -05:00
Richard Henderson e151696a65
target/arm: Convert Data Processing (register)
Convert the register shifted by immediate form of the data
processing insns. For A32, we cannot yet remove any code
because the legacy decoder intertwines the reg-shifted-reg
and immediate forms.

Backports commit 25ae32c558182c07fc6ad01b936e9151cbf00c44 from qemu
2019-11-28 02:38:58 -05:00
Richard Henderson 9fc793b566
target/arm: Add stubs for aa32 decodetree
Add the infrastructure that will become the new decoder.
No instructions adjusted so far.

Backports commit 51409b9e8cfe997b1ac3365df7400e0c6e844437 from qemu
2019-11-28 02:38:49 -05:00
Richard Henderson 6ec6c71d50
target/arm: Use store_reg_from_load in thumb2 code
This function already includes the test for an interworking write
to PC from a load. Change the T32 LDM implementation to match the
A32 LDM implementation.

For LDM, the reordering of the tests does not change valid
behaviour because the only case that differs has rn == 15,
which is UNPREDICTABLE.

Backports commit 69be3e13764111737e1a7a13bb0c231e4d5be756 from qemu
2019-11-28 02:38:42 -05:00
Richard Henderson 46a8dfff59
target/arm: Fix SMMLS argument order
The previous simplification got the order of operands to the
subtraction wrong. Since the 64-bit product is the subtrahend,
we must use a 64-bit subtract to properly compute the borrow
from the low-part of the product.

Fixes: 5f8cd06ebcf5 ("target/arm: Simplify SMMLA, SMMLAR, SMMLS, SMMLSR")

Backports commit e0a0c8322b8ebcdad674f443a3e86db8708d6738 from qemu
2019-11-20 17:24:44 -05:00
Peter Maydell 9fb54a7f72
target/arm: Take exceptions on ATS instructions when needed
The translation table walk for an ATS instruction can result in
various faults. In general these are just reported back via the
PAR_EL1 fault status fields, but in some cases the architecture
requires that the fault is turned into an exception:
* synchronous stage 2 faults of any kind during AT S1E0* and
AT S1E1* instructions executed from NS EL1 fault to EL2 or EL3
* synchronous external aborts are taken as Data Abort exceptions

(This is documented in the v8A Arm ARM DDI0487A.e D5.2.11 and
G5.13.4.)

Backports commit 0710b2fa84a4aeb925422e1e88edac49ed407c79 from qemu
2019-11-20 17:24:44 -05:00
Peter Maydell 56b54f361e
target/arm: Allow ARMCPRegInfo read/write functions to throw exceptions
Currently the only part of an ARMCPRegInfo which is allowed to cause
a CPU exception is the access function, which returns a value indicating
that some flavour of UNDEF should be generated.

For the ATS system instructions, we would like to conditionally
generate exceptions as part of the writefn, because some faults
during the page table walk (like external aborts) should cause
an exception to be raised rather than returning a value.

There are several ways we could do this:
* plumb the GETPC() value from the top level set_cp_reg/get_cp_reg
helper functions through into the readfn and writefn hooks
* add extra readfn_with_ra/writefn_with_ra hooks that take the GETPC()
value
* require the ATS instructions to provide a dummy accessfn,
which serves no purpose except to cause the code generation
to emit TCG ops to sync the CPU state
* add an ARM_CP_ flag to mark the ARMCPRegInfo as possibly
throwing an exception in its read/write hooks, and make the
codegen sync the CPU state before calling the hooks if the
flag is set

This patch opts for the last of these, as it is fairly simple
to implement and doesn't require invasive changes like updating
the readfn/writefn hook function prototype signature.

Backports commit 37ff584c15bc3e1dd2c26b1998f00ff87189538c from qemu
2019-11-20 17:24:37 -05:00
Richard Henderson 87c06b7fae
target/arm: Factor out unallocated_encoding for aarch32
Make this a static function private to translate.c.
Thus we can use the same idiom between aarch64 and aarch32
without actually sharing function implementations.

Backports commit 1ce21ba1eaf08b22da5925f3e37fc0b4322da858 from qemu
2019-11-18 23:51:45 -05:00
Richard Henderson 1f59a43544
Revert "target/arm: Use unallocated_encoding for aarch32"
Despite the fact that the text for the call to gen_exception_insn
is identical for aarch64 and aarch32, the implementation inside
gen_exception_insn is totally different.

This fixes exceptions raised from aarch64.

This reverts commit fb2d3c9a9a.
2019-11-18 23:49:47 -05:00
Richard Henderson 9d2a3064af
target/arm: Use tcg_gen_extrh_i64_i32 to extract the high word
Separate shift + extract low will result in one extra insn
for hosts like RISC-V, MIPS, and Sparc.

Backports commit 664b7e3b97d6376f3329986c465b3782458b0f8b from qemu
2019-11-18 20:36:19 -05:00
Richard Henderson 93c016a3e7
target/arm: Simplify SMMLA, SMMLAR, SMMLS, SMMLSR
All of the inputs to these instructions are 32-bits. Rather than
extend each input to 64-bits and then extract the high 32-bits of
the output, use tcg_gen_muls2_i32 and other 32-bit generator functions.

Backports commit 5f8cd06ebcf57420be8fea4574de2e074de46709 from qemu
2019-11-18 20:31:12 -05:00
Richard Henderson 4a1cc16eef
target/arm: Use tcg_gen_rotri_i32 for gen_swap_half
Rotate is the more compact and obvious way to swap 16-bit
elements of a 32-bit word.

Backports commit adefba76e8bf10dfb342094d2f5debfeedb1a74d from qemu
2019-11-18 20:27:12 -05:00
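The observation behind the commit above, as standalone C: rotating a 32-bit word by 16 bits swaps its two 16-bit halves, so a single rotate replaces a shift/shift/or sequence:

    #include <stdint.h>

    static uint32_t swap_half(uint32_t x)
    {
        return (x >> 16) | (x << 16);   /* equivalent to ror(x, 16) */
    }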
Richard Henderson 751ab7b24b
target/arm: Use ror32 instead of open-coding the operation
The helper function is more documentary, and also already
handles the case of rotate by zero.

Backports commit dd861b3f29be97a9e3cdb9769dcbc0c7d7825185 from qemu
2019-11-18 20:25:51 -05:00
Richard Henderson df4c773ed2
target/arm: Remove redundant shift tests
The immediate shift generator functions already test for,
and eliminate, the case of a shift by zero.

Backports commit 464eaa9571fae5867d9aea7d7209c091c8a50223 from qemu
2019-11-18 20:24:39 -05:00
Richard Henderson 4dd30ebfbd
target/arm: Use tcg_gen_deposit_i32 for PKHBT, PKHTB
Use deposit as the composite operation to merge the
bits from the two inputs.

Backports commit d1f8755fc93911f5b27246b1da794542d222fa1b from qemu
2019-11-18 20:22:00 -05:00
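Viewed in plain C, the PKHBT-style packing the commit above converts is a single deposit: insert one 16-bit field into the other operand, rather than masking and OR-ing two intermediate values. This sketch omits the shifted-operand handling:

    #include <stdint.h>

    static uint32_t pkhbt(uint32_t a, uint32_t b)
    {
        uint32_t field = b >> 16;                 /* halfword to insert */
        /* deposit 'field' into bits [31:16] of 'a' */
        return (a & 0x0000ffffu) | (field << 16);
    }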
Richard Henderson 25ccd28e78
target/arm: Use tcg_gen_extract_i32 for shifter_out_im
Extract is a compact combination of shift + and.

Backports commit 191f4bfe8d6cf0c7d5cd7f84cd7076e32e3745dd from qemu
2019-11-18 20:19:40 -05:00
Andrew Jones ad63ee7509
target/arm/cpu: Use div-round-up to determine predicate register array size
Unless we're guaranteed to always increase ARM_MAX_VQ by a multiple of
four, then we should use DIV_ROUND_UP to ensure we get an appropriate
array size.

Backports commit 46417784d21c89446763f2047228977bdc267895 from qemu
2019-11-18 20:16:46 -05:00
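A sketch of the sizing guarded by the commit above; DIV_ROUND_UP rounds the quotient up so a partial trailing 64-bit word still gets storage. The constants and field layout are illustrative, not QEMU's exact definitions:

    #include <stdint.h>

    #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
    #define ARM_MAX_VQ 16

    typedef struct {
        /* 2 bytes of predicate per quadword of vector, packed into
         * 64-bit words; round up in case 2 * VQ is not a multiple of 8. */
        uint64_t p[DIV_ROUND_UP(2 * ARM_MAX_VQ, 8)];
    } ARMPredicateRegSketch;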
Andrew Jones bb8b3bc42b
target/arm/helper: zcr: Add build bug next to value range assumption
The current implementation of ZCR_ELx matches the architecture, only
implementing the lower four bits, with the rest RAZ/WI. This puts
a strict limit on ARM_MAX_VQ of 16. Make sure we don't let ARM_MAX_VQ
grow without a corresponding update here.

Backports commit 7b351d98709d3f77d6bb18562e1bf228862b0d57 from qemu
2019-11-18 20:14:42 -05:00
Richard Henderson 3d3d56056b
target/arm: Remove helper_double_saturate
Replace x = double_saturate(y) with x = add_saturate(y, y).
There is no need for a separate more specialized helper.

Backports commit 640581a06d14e2d0d3c3ba79b916de6bc43578b0 from qemu
2019-11-18 20:13:21 -05:00
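In plain C the replacement amounts to the sketch below: a signed saturating add clamps on overflow, and doubling is just add_saturate(y, y). QEMU's helper also sets the Q (saturation) flag, which is omitted here:

    #include <stdint.h>

    static int32_t add_saturate(int32_t a, int32_t b)
    {
        int64_t sum = (int64_t)a + b;
        if (sum > INT32_MAX) {
            return INT32_MAX;    /* would also set the Q flag */
        }
        if (sum < INT32_MIN) {
            return INT32_MIN;    /* would also set the Q flag */
        }
        return (int32_t)sum;
    }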
Richard Henderson fb2d3c9a9a
target/arm: Use unallocated_encoding for aarch32
Promote this function from aarch64 to fully general use.
Use it to unify the code sequences for generating illegal
opcode exceptions.

Backports commit 3cb36637157088892e9e33ddb1034bffd1251d3b from qemu
2019-11-18 20:10:50 -05:00
Richard Henderson d562bea784
target/arm: Remove offset argument to gen_exception_bkpt_insn
Unlike the other more generic gen_exception{,_internal}_insn
interfaces, breakpoints always refer to the current instruction.

Backports commit 06bcbda3f64d464b6ecac789bce4bd69f199cd68 from qemu
2019-11-18 20:05:45 -05:00
Richard Henderson f19b4df20d
target/arm: Replace offset with pc in gen_exception_internal_insn
The offset is variable depending on the instruction set.
Passing in the actual value is clearer in intent.

Backports commit aee828e7541a5895669ade3a4b6978382b6b094a from qemu
2019-11-18 20:05:23 -05:00
Richard Henderson 00fbadf637
target/arm: Replace s->pc with s->base.pc_next
We must update s->base.pc_next when we return from the translate_insn
hook to the main translator loop. By incrementing s->base.pc_next
immediately after reading the insn word, "pc_next" contains the address
of the next instruction throughout translation.

All remaining uses of s->pc are referencing the address of the next insn,
so this is now a simple global replacement. Remove the "s->pc" field.

Backports commit a04159166b880b505ccadc16f2fe84169806883d from qemu
2019-11-18 17:32:53 -05:00
Richard Henderson 7d1fcef722
target/arm: Remove redundant s->pc & ~1
The thumb bit has already been removed from s->pc, and is always even.

Backports commit 4818c3743b0e0095fdcecd24457da9b3443730ab from qemu
2019-11-18 17:32:53 -05:00
Richard Henderson a2e60445de
target/arm: Introduce add_reg_for_lit
Provide a common routine for the places that require ALIGN(PC, 4)
as the base address as opposed to plain PC. The two are always
the same for A32, but the difference is meaningful for thumb mode.

Backports commit 16e0d8234ef9291747332d2c431e46808a060472 from qemu
2019-11-18 17:32:49 -05:00
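The base-address rule the commit above factors out, as a standalone sketch: A32 literal addressing can use the architectural PC directly (it is already word aligned), while Thumb must use Align(PC, 4), i.e. clear the low two bits:

    #include <stdint.h>
    #include <stdbool.h>

    static uint32_t lit_base(uint32_t pc, bool thumb)
    {
        return thumb ? (pc & ~3u) : pc;
    }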
Richard Henderson 1c0914e58c
target/arm: Introduce read_pc
We currently have 3 different ways of computing the architectural
value of "PC" as seen in the ARM ARM.

The value of s->pc has been incremented past the current insn,
but that is all. Thus for a32, PC = s->pc + 4; for t32, PC = s->pc;
for t16, PC = s->pc + 2. These differing computations make it
impossible at present to unify the various code paths.

With the newly introduced s->pc_curr, we can compute the correct
value for all cases, using the formula given in the ARM ARM.

This changes the behaviour for load_reg() and load_reg_var()
when called with reg==15 from a 32-bit Thumb instruction:
previously they would have returned the incorrect value
of pc_curr + 6, and now they will return the architecturally
correct value of PC, which is pc_curr + 4. This will not
affect well-behaved guest software, because all of the places
we call these functions from T32 code are instructions where
using r15 is UNPREDICTABLE. Using the architectural PC value
here is more consistent with the T16 and A32 behaviour.

Backports commit fdbcf6329d0c2984c55d7019419a72bf8e583c36 from qemu
2019-11-18 17:04:50 -05:00
Richard Henderson 0048f3e887
target/arm: Introduce pc_curr
Add a new field to retain the address of the instruction currently
being translated. The 32-bit uses are all within subroutines used
by a32 and t32. This will become less obvious when t16 support is
merged with a32+t32, and having a clear definition will help.

Convert aarch64 as well for consistency. Note that there is one
instance of a pre-assert fprintf that used the wrong value for the
address of the current instruction.

Backports commit 43722a6d4f0c92f7e7e1e291580039b0f9789df1 from qemu
2019-11-18 16:58:40 -05:00
Richard Henderson 1aa3c685a8
target/arm: Pass in pc to thumb_insn_is_16bit
This function is used in two different contexts, and it will be
clearer if the function is given the address to which it applies.

Backports commit 331b1ca616cb708db30dab68e3262d286e687f24 from qemu
2019-11-18 16:52:35 -05:00
Peter Maydell c61e22627d
target/arm: Fix routing of singlestep exceptions
When generating an architectural single-step exception we were
routing it to the "default exception level", which is to say
the same exception level we execute at except that EL0 exceptions
go to EL1. This is incorrect because the debug exception level
can be configured by the guest for situations such as single
stepping of EL0 and EL1 code by EL2.

We have to track the target debug exception level in the TB
flags, because it is dependent on CPU state like HCR_EL2.TGE
and MDCR_EL2.TDE. (That we were previously calling the
arm_debug_target_el() function to determine dc->ss_same_el
is itself a bug, though one that would only have manifested
as incorrect syndrome information.) Since we are out of TB
flag bits unless we want to expand into the cs_base field,
we share some bits with the M-profile only HANDLER and
STACKCHECK bits, since only A-profile has this singlestep.

Fixes: https://bugs.launchpad.net/qemu/+bug/1838913

Backports commit 8bd587c1066f4456ddfe611b571d9439a947d74c from qemu
2019-11-18 16:50:15 -05:00
Peter Maydell 3f531fac61
target/arm: Factor out 'generate singlestep exception' function
Factor out code to 'generate a singlestep exception', which is
currently repeated in four places.

To do this we need to also pull the identical copies of the
gen_exception() function out of translate-a64.c and translate.c
into translate.h.

(There is a bug in the code: we're taking the exception to the wrong
target EL. This will be simpler to fix if there's only one place to
do it.)

Backports commit c1d5f50f094ab204accfacc2ee6aafc9601dd5c4 from qemu
2019-11-18 16:47:08 -05:00
Alex Bennée 0d6ed39333
target/arm: generate a custom MIDR for -cpu max
While most features are now detected by probing the ID_* registers,
kernels can (and do) use MIDR_EL1 for working out if they have to
apply errata. This can trip up warnings in the kernel as it tries to
work out if it should apply workarounds to features that don't
actually exist in the reported CPU type.

Avoid this problem by synthesising our own MIDR value.

Backports commit 2bd5f41c00686a1f847a60824d0375f3df2c26bf from qemu
2019-11-18 16:42:51 -05:00
Christophe Lyon 8264cb84fe
target/arm: Allow reading flags from FPSCR for M-profile
rt==15 is a special case when reading the flags: it means the
destination is APSR. This patch avoids rejecting vmrs apsr_nzcv, fpscr
as an illegal instruction.

Backports commit cdc6896659b85f7ed8f7552850312e55170de0c5 from qemu
2019-11-18 16:32:06 -05:00
Peter Maydell 3fc86e1901
target/arm: Don't abort on M-profile exception return in linux-user mode
An attempt to do an exception-return (branch to one of the magic
addresses) in linux-user mode for M-profile should behave like
a normal branch, because linux-user mode is always going to be
in 'handler' mode. This used to work, but we broke it when we added
support for the M-profile security extension in commit d02a8698d7ae2bfed.

In that commit we allowed even handler-mode calls to magic return
values to be checked for and dealt with by causing an
EXCP_EXCEPTION_EXIT exception to be taken, because this is
needed for the FNC_RETURN return-from-non-secure-function-call
handling. For system mode we added a check in do_v7m_exception_exit()
to make any spurious calls from Handler mode behave correctly, but
forgot that linux-user mode would also be affected.

How an attempted return-from-non-secure-function-call in linux-user
mode should be handled is not clear -- on real hardware it would
result in return to secure code (not to the Linux kernel) which
could then handle the error in any way it chose. For QEMU we take
the simple approach of treating this erroneous return the same way
it would be handled on a CPU without the security extensions --
treat it as a normal branch.

The upshot of all this is that for linux-user mode we should never
do any of the bx_excret magic, so the code change is simple.

This ought to be a weird corner case that only affects broken guest
code (because Linux user processes should never be attempting to do
exception returns or NS function returns), except that the code that
assigns addresses in RAM for the process and stack in our linux-user
code does not attempt to avoid this magic address range, so
legitimate code attempting to return to a trampoline routine on the
stack can fall into this case. This change fixes those programs,
but we should also look at restricting the range of memory we
use for M-profile linux-user guests to the area that would be
real RAM in hardware.

Backports commit 9027d3fba605d8f6093342ebe4a1da450d374630 from qemu
2019-11-18 16:30:43 -05:00
Peter Maydell 8f7f19ce43
target/arm: Free TCG temps in trans_VMOV_64_sp()
The function neon_store_reg32() doesn't free the TCG temp that it
is passed, so the caller must do that. We got this right in most
places but forgot to free the TCG temps in trans_VMOV_64_sp().

Backports commit 38fb634853ac6547326d9f88b9a068d9fc6b4ad4 from qemu
2019-11-18 16:27:21 -05:00
Peter Maydell c6041bf94b
target/arm: Avoid bogus NSACR traps on M-profile without Security Extension
In Arm v8.0 M-profile CPUs without the Security Extension and also in
v7M CPUs, there is no NSACR register. However, the code we have to handle
the FPU does not always check whether the ARM_FEATURE_M_SECURITY bit
is set before testing whether env->v7m.nsacr permits access to the
FPU. This means that for a CPU with an FPU but without the Security
Extension we would always take a bogus fault when trying to stack
the FPU registers on an exception entry.

We could fix this by adding extra feature bit checks for all uses,
but it is simpler to just make the internal value of nsacr 0xcff
("all non-secure accesses allowed"), since this is not guest
visible when the Security Extension is not present. This allows
us to continue to follow the Arm ARM pseudocode which takes a
similar approach. (In particular, in the v8.1 Arm ARM the register
is documented as reading as 0xcff in this configuration.)
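
A condensed sketch of the idea, as it might look at M-profile CPU reset
(simplified; not the exact backported hunk):

/* Without the Security Extension the NSACR is not guest visible, so use
 * an internal value of "all non-secure accesses allowed" to keep the
 * FPU access checks simple. */
if (arm_feature(env, ARM_FEATURE_M) &&
    !arm_feature(env, ARM_FEATURE_M_SECURITY)) {
    env->v7m.nsacr = 0xcff;
}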

Fixes: https://bugs.launchpad.net/qemu/+bug/1838475

Backports commit 02ac2f7f613b47f6a5b397b20ab0e6b2e7fb89fa from qemu
2019-08-08 19:56:56 -04:00
Lioncash 59d808cf21
target/arm: Supply uc_struct instance to tcg_enabled() 2019-08-08 19:55:12 -04:00
Peter Maydell ecd3f0a5df
target/arm: Deliver BKPT/BRK exceptions to correct exception level
Most Arm architectural debug exceptions (eg watchpoints) are ignored
if the configured "debug exception level" is below the current
exception level (so for example EL1 can't arrange to get debug exceptions
for EL2 execution). Exceptions generated by the BRK or BKPT instructions
are a special case -- they must always cause an exception, so if
we're executing above the debug exception level then we
must take them to the current exception level.

This fixes a bug where executing BRK at EL2 could result in an
exception being taken at EL1 (which is strictly forbidden by the
architecture).

Fixes: https://bugs.launchpad.net/qemu/+bug/1838277

Backports commit 987a23224218fa3bb3aa0024ad236dcf29ebde9e from qemu
2019-08-08 19:53:30 -04:00
Peter Maydell fbbd582fb9
target/arm: Limit ID register assertions to TCG
In arm_cpu_realizefn() we make several assertions about the values of
guest ID registers:
* if the CPU provides AArch32 v7VE or better it must advertise the
ARM_DIV feature
* if the CPU provides AArch32 A-profile v6 or better it must
advertise the Jazelle feature

These are essentially consistency checks that our ID register
specifications in cpu.c didn't accidentally miss out a feature,
because increasingly the TCG emulation gates features on the values
in ID registers rather than using old-style checks of ARM_FEATURE_FOO
bits.

Unfortunately, these asserts can cause problems if we're running KVM,
because in that case we don't control the values of the ID registers
-- we read them from the host kernel. In particular, if the host
kernel is older than 4.15 then it doesn't expose the ID registers via
the KVM_GET_ONE_REG ioctl, and we set up dummy values for some
registers and leave the rest at zero. (See the comment in
target/arm/kvm64.c kvm_arm_get_host_cpu_features().) This set of
dummy values is not sufficient to pass our assertions, and so on
those kernels running an AArch32 guest on AArch64 will assert.

We could provide a more sophisticated set of dummy ID registers in
this case, but that still leaves the possibility of a host CPU which
reports bogus ID register values that would cause us to assert. It's
more robust to only do these ID register checks if we're using TCG,
as that is the only case where this is truly a QEMU code bug.

Backports commit 8f4821d77e465bc2ef77302d47640d5a43d92b30 from qemu
2019-08-08 19:44:16 -04:00
Philippe Mathieu-Daudé 9bd010263a
target/arm: Add missing break statement for Hypervisor Trap Exception
Reported by GCC9 when building with -Wimplicit-fallthrough=2:

target/arm/helper.c: In function ‘arm_cpu_do_interrupt_aarch32_hyp’:
target/arm/helper.c:7958:14: error: this statement may fall through [-Werror=implicit-fallthrough=]
7958 | addr = 0x14;
| ~~~~~^~~~~~
target/arm/helper.c:7959:5: note: here
7959 | default:
| ^~~~~~~
cc1: all warnings being treated as errors

Backports commit 9bbb4ef991fa93323f87769a6e217c2b9273a128 from qemu
2019-08-08 19:43:01 -04:00
Peter Maydell cdb9422f3a
target/arm: NS BusFault on vector table fetch escalates to NS HardFault
In the M-profile architecture, when we do a vector table fetch and it
fails, we need to report a HardFault. Whether this is a Secure HF or
a NonSecure HF depends on several things. If AIRCR.BFHFNMINS is 0
then HF is always Secure, because there is no NonSecure HardFault.
Otherwise, the answer depends on whether the 'underlying exception'
(MemManage, BusFault, SecureFault) targets Secure or NonSecure. (In
the pseudocode, this is handled in the Vector() function: the final
exc.isSecure is calculated by looking at the exc.isSecure from the
exception returned from the memory access, not the isSecure input
argument.)

We weren't doing this correctly, because we were looking at
the target security domain of the exception we were trying to
load the vector table entry for. This produces errors of two kinds:
* a load from the NS vector table which hits the "NS access
to S memory" SecureFault should end up as a Secure HardFault,
but we were raising an NS HardFault
* a load from the S vector table which causes a BusFault
should raise an NS HardFault if BFHFNMINS == 1 (because
in that case all BusFaults are NonSecure), but we were raising
a Secure HardFault

Correct the logic.

We also fix a comment error where we claimed that we might
be escalating MemManage to HardFault, and forgot about SecureFault.
(Vector loads can never hit MPU access faults, because they're
always aligned and always use the default address map.)
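
The corrected targeting decision, condensed into a hypothetical helper
(simplified; the real code folds this into the exception-entry path):

/* Security state targeted by the HardFault raised for a failed vector
 * table fetch. */
static bool vectable_hf_targets_secure(CPUARMState *env, bool exc_secure)
{
    if (!(env->v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK)) {
        return true;        /* no NS HardFault exists: always Secure */
    }
    return exc_secure;      /* follow the underlying exception's target */
}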

Backports commit 51c9122e92b776a3f16af0b9282f1dc5012e2a19 from qemu
2019-08-08 19:32:53 -04:00
Peter Maydell 8ec683b874
target/arm: Set VFP-related MVFR0 fields for arm926 and arm1026
The ARMv5 architecture didn't specify detailed per-feature ID
registers. Now that we're using the MVFR0 register fields to
gate the existence of VFP instructions, we need to set up
the correct values in the cpu->isar structure so that we still
provide an FPU to the guest.

This fixes a regression in the arm926 and arm1026 CPUs, which
are the only ones that both have VFP and are ARMv5 or earlier.
This regression was introduced by the VFP refactoring, and more
specifically by commits 1120827fa182f0e76 and 266bd25c485597c,
which accidentally disabled VFP short-vector support and
double-precision support on these CPUs.

Backports commit cb7cef8b32033f6284a47d797edd5c19c5491698 from qemu
2019-08-08 19:29:56 -04:00
Alex Bennée f893ff0b89
target/arm: report ARMv8-A FP support for AArch32 -cpu max
When we converted to using feature bits in 602f6e42cfbf we missed
the fact that (dp && arm_dc_feature(s, ARM_FEATURE_V8)) was supported for
-cpu max configurations. This caused a regression in the GCC test
suite. Fix this by setting the appropriate bits in mvfr1.FPHP to
report ARMv8-A with FP support (but not ARMv8.2-FP16).

Fixes: https://bugs.launchpad.net/qemu/+bug/1836078

Backports commit 45b1a243b81a7c9ae56235937280711dd9914ca7 from qemu
2019-08-08 19:28:39 -04:00
Philippe Mathieu-Daudé 6f8c8046d8
target/arm/vfp_helper: Call set_fpscr_to_host before updating to FPSCR
In commit e9d652824b0 we extracted the vfp_set_fpscr_to_host()
function but failed to call it in the correct place: we call
it after xregs[ARM_VFP_FPSCR] is modified.

Fix by calling this function before we update FPSCR.

Backports commit 85795187f416326f87177cabc39fae1911f04c50 from qemu
2019-08-08 19:21:28 -04:00
Richard Henderson c687259bf6
target/arm: Fix sve_zcr_len_for_el
Off-by-one error in the EL2 and EL3 tests. Remove the test
against EL3 entirely, since it must always be true.

Backports commit 6a02a73211c5bc634fccd652777230954b83ccba from qemu
2019-08-08 19:20:35 -04:00
Peter Maydell 1f4c3d6bcc
target/arm: Correct VMOV_imm_dp handling of short vectors
Coverity points out (CID 1402195) that the loop in trans_VMOV_imm_dp()
that iterates over the destination registers in a short-vector VMOV
accidentally throws away the returned updated register number
from vfp_advance_dreg(). Add the missing assignment. (We got this
correct in trans_VMOV_imm_sp().)

Backports commit 89a11ff756410aecb87d2c774df6e45dbf4105c1 from qemu
2019-08-08 18:08:55 -04:00
Peter Maydell 0d89bce217
target/arm: Execute Thumb instructions when their condbits are 0xf
Thumb instructions in an IT block are set up to be conditionally
executed depending on a set of condition bits encoded into the IT
bits of the CPSR/XPSR. The architecture specifies that if the
condition bits are 0b1111 this means "always execute" (like 0b1110),
not "never execute"; we were treating it as "never execute". (See
the ConditionHolds() pseudocode in both the A-profile and M-profile
Arm ARM.)

This is a bit of an obscure corner case, because the only legal
way to get to an 0b1111 set of condbits is to do an exception
return which sets the XPSR/CPSR up that way. An IT instruction
which encodes a condition sequence that would include an 0b1111 is
UNPREDICTABLE, and for v8A the CONSTRAINED UNPREDICTABLE choices
for such an IT insn are to NOP, UNDEF, or treat 0b1111 like 0b1110.
Add a comment noting that we take the latter option.
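
A condensed sketch of the fix in the per-insn IT-block handling
(simplified from the real translator code):

uint32_t cond = dc->condexec_cond;
/* Both 0xe and 0xf mean "always execute"; only conditions below 0xe
 * actually generate a conditional skip over the insn. */
if (cond < 0x0e) {
    arm_skip_unless(dc, cond);
}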

Backports commit 5529de1e5512c05276825fa8b922147663fd6eac from qemu
2019-08-08 18:07:57 -04:00
Peter Maydell 9d01d50db8
target/arm: Use _ra versions of cpu_stl_data() in v7M helpers
In the various helper functions for v7M/v8M instructions, use
the _ra versions of cpu_stl_data() and friends. Otherwise we
may get wrong behaviour or an assert() due to not being able
to locate the TB if there is an exception on the memory access
or if it performs an IO operation when in icount mode.
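
The idiom looks roughly like this (the helper name is hypothetical;
cpu_stl_data_ra() and GETPC() are the real primitives):

/* Pass the host return address so a TLB miss or MMIO access during the
 * store can unwind back to the calling TB instead of asserting. */
void HELPER(v7m_example_store)(CPUARMState *env, uint32_t addr, uint32_t val)
{
    cpu_stl_data_ra(env, addr, val, GETPC());   /* was cpu_stl_data() */
}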

Backports commit 2884fbb60412049ec92389039ae716b32057382e from qemu
2019-08-08 18:06:23 -04:00
Philippe Mathieu-Daudé bde186433d
target/arm/helper: Move M profile routines to m_helper.c
In preparation for supporting TCG disablement on ARM, we move most
of the TCG-related v7m/v8m helpers and APIs into their own file.

Note: It is easier to review this commit using the 'histogram'
diff algorithm:

$ git diff --diff-algorithm=histogram ...
or
$ git diff --histogram ...

Backports commit 7aab5a8c8bb525ea390b4ebc17ab82c0835cfdb6 from qemu
2019-08-08 18:04:08 -04:00
Philippe Mathieu-Daudé 199e2f8a7d
target/arm: Restrict semi-hosting to TCG
Semihosting hooks either SVC or HLT instructions, and inside KVM
both of those go to EL1, ie to the guest, and can't be trapped to
KVM.

Let check_for_semihosting() return false when not running on TCG.

Backports commit 91f78c58da9ba78c8ed00f5d822b701765be8499 from qemu
2019-08-08 17:48:34 -04:00
Philippe Mathieu-Daudé 6295fd7156
target/arm: Move debug routines to debug_helper.c
These routines are TCG specific.

Backports commit 9dd5cca42448770a940fa2145f1ff18cdc7b01a9 from qemu
2019-08-08 17:46:56 -04:00
Philippe Mathieu-Daudé fa2a772c7b
target/arm: Declare some M-profile functions publicly
In the next commit we will split the M-profile functions from this
file. Some functions will be called from helper.c. Declare them in
the "internals.h" header.

Backports commit 787a7e76c2e93a48c47b324fea592c9910a70483 from qemu
2019-08-08 15:37:01 -04:00
Philippe Mathieu-Daudé 4eacf0ce98
target/arm: Restrict PSCI to TCG
Under KVM, the kernel gets the HVC call and handles the PSCI requests.

Backports commit 21fbea8c8af72817d8e21570fa4edbfae417341b from qemu
2019-08-08 15:32:19 -04:00
Philippe Mathieu-Daudé 5aaf004bb2
target/arm/vfp_helper: Restrict the SoftFloat use to TCG
This code is specific to the SoftFloat floating-point
implementation, which is only used by TCG.

Backports commit 4a15527c9feecfd2fa2807d5e698abbc19feb35f from qemu
2019-08-08 15:30:47 -04:00
Philippe Mathieu-Daudé 4e214e5d64
target/arm/vfp_helper: Extract vfp_set_fpscr_from_host()
The vfp_set_fpscr() helper contains code specific to the host
floating point implementation (here the SoftFloat library).
Extract this code to vfp_set_fpscr_from_host().

Backports commit 0c6ad94809b37a1f0f1f75d3cd0e4a24fb77e65c from qemu
2019-08-08 15:28:38 -04:00
Philippe Mathieu-Daudé 2b5d731f56
target/arm/vfp_helper: Extract vfp_set_fpscr_to_host()
The vfp_set_fpscr() helper contains code specific to the host
floating point implementation (here the SoftFloat library).
Extract this code to vfp_set_fpscr_to_host().

Backports commit e9d652824b05845f143ef4797d707fae47d4b3ed from qemu
2019-08-08 15:27:27 -04:00
Philippe Mathieu-Daudé 76bae63014
target/arm/vfp_helper: Move code around
To ease the review of the next commit,
move the vfp_exceptbits_to_host() function directly after
vfp_exceptbits_from_host(). Amusingly the diff shows we
are moving vfp_get_fpscr().

Backports commit 20e62dd8c831c9065ed4a8e64813c93ad61c50d7 from qemu.
2019-08-08 15:26:03 -04:00
Philippe Mathieu-Daudé 91e264823e
target/arm: Move TLB related routines to tlb_helper.c
These routines are TCG specific.
The arm_deliver_fault() function is only used within the new
helper. Make it static.

Backports commit e21b551cb652663f2f2405a64d63ef6b4a1042b7 from qemu
2019-08-08 15:24:26 -04:00
Philippe Mathieu-Daudé 1af5deaf52
target/arm: Declare get_phys_addr() function publicly
In the next commit we will split the TLB-related routines out of
this file, and this function will also be called in the new
file. Declare it in the "internals.h" header.

Backports commit ebae861fc6c385a7bcac72dde4716be06e6776f1 from qemu
2019-08-08 15:14:45 -04:00
Samuel Ortiz 9296465289
target/arm: Move the DC ZVA helper into op_helper
Those helpers are a software implementation of the ARM v8 memory zeroing
opcode. They should be moved to the op helper file, which is going to
eventually be built only when TCG is enabled.

Backports commit 6cdca173ef81a9dbcee9e142f1a5a34ad9c44b75 from qemu
2019-08-08 15:10:46 -04:00
Philippe Mathieu-Daudé f77b60d7e9
target/arm: Fix coding style issues
Since we'll move this code around, fix its style first.

Backports commit 9798ac7162c8a720c5d28f4d1fc9e03c7ab4f015 from qemu
2019-08-08 15:05:57 -04:00
Philippe Mathieu-Daudé 4f71266524
target/arm: Fix multiline comment syntax
Since commit 8c06fbdf36b checkpatch.pl enforces a new multiline
comment syntax. Since we'll move this code around, fix its style
first.

Backports commit 9a223097e44d5320f5e0546710263f22d11f12fc from qemu
2019-08-08 15:05:37 -04:00
Philippe Mathieu-Daudé 0a5152caf8
target/arm/helper: Remove unused include
Backports commit 2c8ec397f8438eea5e52be4898dfcf12a1f88267 from qemu
2019-08-08 14:43:57 -04:00
Philippe Mathieu-Daudé 8a0bae93ff
target/arm: Add copyright boilerplate
Backports commit ed3baad15b0b0edc480373f9c1d805d6b8e7e78c from qemu
2019-08-08 14:43:32 -04:00
Philippe Mathieu-Daudé b028a37fdb
target/arm: Makefile cleanup (softmmu)
Group SOFTMMU objects together.
Since PSCI is TCG specific, keep it separate.

Backports commit b601e0cd78c2c57548a468c6fdb566d514c05b89 from qemu
2019-08-08 14:42:00 -04:00
Philippe Mathieu-Daudé 348b813c03
target/arm: Makefile cleanup (ARM)
Group ARM objects together, TCG related ones at the bottom.
This will help when restricting TCG-only objects.

Backports commit 07774d584267488c8c2f104ae5b552791961908a from qemu
2019-08-08 14:39:52 -04:00
Lioncash 76d33b34e1
target/arm: Fix bad patch merge in arm_tr_init_disas_context 2019-08-08 14:37:38 -04:00
Philippe Mathieu-Daudé 5c0cff59f2
target/arm: Makefile cleanup (Aarch64)
Group Aarch64 rules together, TCG related ones at the bottom.
This will help when restricting TCG-only objects.

Backports commit 87f4f183484dba7a460c59e99dac0dbb9f42ed87 from qemu
2019-08-08 14:31:24 -04:00
Peter Maydell fa19f96e8c
target/arm: Check for dp support for dp VFM, not sp
In commit 1120827fa182f0e7622 we accidentally put the
"UNDEF unless FPU has double-precision support" check in
the single-precision VFM function. Put it in the dp
function where it belongs.

Backports commit 34bea4edb9bbe8edf4b8606276482acdff5ca58b from qemu
2019-06-25 18:56:34 -05:00
Peter Maydell dc1f2247ec
target/arm: Only implement doubles if the FPU supports them
The architecture permits FPUs which have only single-precision
support, not double-precision; Cortex-M4 and Cortex-M33 are
both like that. Add the necessary checks on the MVFR0 FPDP
field so that we UNDEF any double-precision instructions on
CPUs like this.

Note that even if FPDP==0 the insns like VMOV-to/from-gpreg,
VLDM/VSTM, VLDR/VSTR which take double precision registers
still exist.

Backports commit 1120827fa182f0e76226df7ffe7a86598d1df54f from qemu
2019-06-25 18:55:25 -05:00
Peter Maydell cfac686c95
target/arm: Fix typos in trans function prototypes
In several places cut and paste errors meant we were using the wrong
type for the 'arg' struct in trans_ functions called by the
decodetree decoder, because we were using the _sp version of the
struct in the _dp function. These were harmless, because the two
structs were identical and so decodetree made them typedefs of the
same underlying structure (and we'd have had a compile error if they
were not harmless), but we should clean them up anyway.

Backports commit 83655223ac6143a563e981906ce13fd6f2cfbefd from qemu
2019-06-25 18:48:34 -05:00
Peter Maydell 318a1ddf39
target/arm: Remove unused cpu_F0s, cpu_F0d, cpu_F1s, cpu_F1d
Remove the now unused TCG globals cpu_F0s, cpu_F0d, cpu_F1s, cpu_F1d.

cpu_M0 is still used by the iwmmxt code, and cpu_V0 and
cpu_V1 are used by both iwmmxt and Neon.

Backports commit d9eea52c67c04c58ecceba6ffe5a93d1d02051fa from qemu
2019-06-25 18:45:53 -05:00
Peter Maydell 74168c20f2
target/arm: Stop using deprecated functions in NEON_2RM_VCVT_F32_F16
Remove some old constructs from NEON_2RM_VCVT_F16_F32 code:
* don't use CPU_F0s
* don't use tcg_gen_st_f32

Backports commit b66f6b9981004bbf120b8d17c20f92785179bdf2 from qemu
2019-06-25 18:43:40 -05:00
Peter Maydell 8ae25f6e4c
target/arm: stop using deprecated functions in NEON_2RM_VCVT_F16_F32
Remove some old constructs from NEON_2RM_VCVT_F16_F32 code:
* don't use cpu_F0s
* don't use tcg_gen_ld_f32

Backports commit 58f2682eee738e8890f9cfe858e0f4f68b00d45d from qemu
2019-06-25 18:39:43 -05:00
Peter Maydell d419fbc270
target/arm: Stop using cpu_F0s in Neon VCVT fixed-point ops
Stop using cpu_F0s in the Neon VCVT fixed-point operations.

Backports commit c253dd7832bc6b4e140a0da56410a9336cce05bc from qemu
2019-06-25 18:35:33 -05:00
Peter Maydell 46216ae382
target/arm: Stop using cpu_F0s for Neon f32/s32 VCVT
Stop using cpu_F0s for the Neon f32/s32 VCVT operations.
Since this is the last user of cpu_F0s in the Neon 2rm-op
loop, we can remove the handling code for it too.

Backports commit 60737ed5785b9c1c6f1c85575dfdd1e9eec91878 from qemu
2019-06-25 18:32:32 -05:00
Peter Maydell 2fbe9c1d1d
target/arm: Stop using cpu_F0s for NEON_2RM_VRECPE_F and NEON_2RM_VRSQRTE_F
Stop using cpu_F0s for NEON_2RM_VRECPE_F and NEON_2RM_VRSQRTE_F.

Backports commit 9a011fece7201f8e268c982df8c7836f3335bbe6 from qemu
2019-06-25 18:29:22 -05:00
Peter Maydell f82ea34369
target/arm: Stop using cpu_F0s for NEON_2RM_VCVT[ANPM][US]
Stop using cpu_F0s for the NEON_2RM_VCVT[ANPM][US] ops.

Backports commit 30bf0a018f6c706913c8c0ea57b386907f4229be from qemu
2019-06-25 18:28:03 -05:00
Peter Maydell 0d4535bf16
target/arm: Stop using cpu_F0s for NEON_2RM_VRINT*
Switch NEON_2RM_VRINT* away from using cpu_F0s.

Backports commit 3b52ad1fae804acdc2fdc41b418a65249beae430 from qemu
2019-06-25 18:26:24 -05:00
Peter Maydell a62cbc7ac5
target/arm: Stop using cpu_F0s for NEON_2RM_VNEG_F
Switch NEON_2RM_VNEG_F away from using cpu_F0s.

Backports commit cedcc96fc7c8e520a190a010ac97dbb53e57d7d2 from qemu
2019-06-25 18:24:01 -05:00
Peter Maydell 63d7f92eba
target/arm: Stop using cpu_F0s for NEON_2RM_VABS_F
Where Neon instructions are floating point operations, we
mostly use the old VFP utility functions like gen_vfp_abs()
which work on the TCG globals cpu_F0s and cpu_F1s. The
Neon for-each-element loop conditionally loads the inputs
into either a plain old TCG temporary for most operations
or into cpu_F0s for float operations, and similarly stores
back either cpu_F0s or the temporary.

Switch NEON_2RM_VABS_F away from using cpu_F0s, and
update neon_2rm_is_float_op() accordingly.

Backports commit fd8a68cdcf81d70eebf866a132e9780d4108da9c from qemu
2019-06-25 18:22:05 -05:00
Peter Maydell ba0ddd3459
target/arm: Use vfp_expand_imm() for AArch32 VFP VMOV_imm
The AArch32 VMOV (immediate) instruction uses the same VFP encoded
immediate format we already handle in vfp_expand_imm(). Use that
function rather than hand-decoding it.

Backports commit 9bee50b498410ed6466018b26464d7384c7879e9 from qemu
2019-06-25 18:20:19 -05:00
Peter Maydell b2dc290454
target/arm: Move vfp_expand_imm() to translate.[ch]
We want to use vfp_expand_imm() in the AArch32 VFP decode;
move it from the a64-only header/source file to the
AArch32 one (which is always compiled even for AArch64).

Backports commit d6a092d479333b5f20a647a912a31b0102d37335 from qemu
2019-06-25 18:17:49 -05:00
Peter Maydell 021da28bfd
target/arm: Fix short-vector increment behaviour
For VFP short vectors, the VFP registers are divided into a
series of banks: for single-precision these are s0-s7, s8-s15,
s16-s23 and s24-s31; for double-precision they are d0-d3,
d4-d7, ... d28-d31. Some banks are "scalar" meaning that
use of a register within them triggers a pure-scalar or
mixed vector-scalar operation rather than a full vector
operation. The scalar banks are s0-s7, d0-d3 and d16-d19.
When using a bank as part of a vector operation, we
iterate through it, increasing the register number by
the specified stride each time, and wrapping around to
the beginning of the bank.

Unfortunately our calculation of the "increment" part of this
was incorrect:
vd = ((vd + delta_d) & (bank_mask - 1)) | (vd & bank_mask)
will only do the intended thing if bank_mask has exactly
one set high bit. For instance for doubles (bank_mask = 0xc),
if we start with vd = 6 and delta_d = 2 then vd is updated
to 12 rather than the intended 4.

This only causes problems in the unlikely case that the
starting register is not the first in its bank: if the
register number doesn't have to wrap around then the
expression happens to give the right answer.

Fix this bug by abstracting out the "check whether register
is in a scalar bank" and "advance register within bank"
operations to utility functions which use the right
bit-masking operations.
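
For double precision (banks of four registers) the corrected advance is
essentially:

/* Advance a D register within its 4-register bank, wrapping: only the
 * low two bits change, the bank bits are preserved. */
static int vfp_advance_dreg(int reg, int delta)
{
    return ((reg + delta) & 3) | (reg & ~3);
}

/* With the example above: reg = 6, delta = 2 gives
 * ((6 + 2) & 3) | (6 & ~3) = 0 | 4 = 4, not 12. */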

Backports commit 18cf951af9a27ae573a6fa17f9d0c103f7b7679b from qemu
2019-06-13 19:44:27 -04:00
Peter Maydell 1a0d31c05e
target/arm: Convert float-to-integer VCVT insns to decodetree
Convert the float-to-integer VCVT instructions to decodetree.
Since these are the last unconverted instructions, we can
delete the old decoder structure entirely now.

Backports commit 3111bfc2da6ba0c8396dc97ca479942d711c6146 from qemu
2019-06-13 19:40:02 -04:00
Peter Maydell f6c67559d4
target/arm: Convert VCVT fp/fixed-point conversion insns to decodetree
Convert the VCVT (between floating-point and fixed-point) instructions
to decodetree.

Backports commit e3d6f4290c788e850c64815f0b3e331600a4bcc0 from qemu
2019-06-13 19:35:51 -04:00
Peter Maydell c66d477359
target/arm: Convert VJCVT to decodetree
Convert the VJCVT instruction to decodetree.

Backports commit 92073e947487e2109f3dfebfeaa48d6323cbd981 from qemu
2019-06-13 19:31:35 -04:00
Peter Maydell 7be9e6f9b4
target/arm: Convert integer-to-float insns to decodetree
Convert the VCVT integer-to-float instructions to decodetree.

Backports commit 8fc9d8918cde342c71923e361b9f2193e36ed18b from qemu
2019-06-13 19:20:41 -04:00
Peter Maydell e0e4f99103
target/arm: Convert double-single precision conversion insns to decodetree
Convert the VCVT double/single precision conversion insns to decodetree.

Backports commit 6ed7e49c3693ed8411773c4880f42b2932beb12d from qemu
2019-06-13 19:18:01 -04:00
Peter Maydell ab9d0235ed
target/arm: Convert VFP round insns to decodetree
Convert the VFP round-to-integer instructions VRINTR, VRINTZ and
VRINTX to decodetree.

These instructions were only introduced as part of the "VFP misc"
additions in v8A, so we check this. The old decoder's implementation
was incorrectly providing them even for v7A CPUs.

Backports commit e25155f55dc4abb427a88dfe58bbbc550fe7d643 from qemu
2019-06-13 19:15:05 -04:00
Peter Maydell 9e842a0f2a
target/arm: Convert the VCVT-to-f16 insns to decodetree
Convert the VCVTT and VCVTB instructions which convert from
f32 and f64 to f16 to decodetree.

Since we're no longer constrained to the old decoder's style
using cpu_F0s and cpu_F0d we can perform a direct 16 bit
store of the right half of the input single-precision register
rather than doing a load/modify/store sequence on the full
32 bits.

Backports commit cdfd14e86ab0b1ca29a702d13a8e4af2e902a9bf from qemu
2019-06-13 19:03:59 -04:00
Peter Maydell 7d927b2d0e
target/arm: Convert the VCVT-from-f16 insns to decodetree
Convert the VCVTT, VCVTB instructions that deal with conversion
from half-precision floats to f32 or 64 to decodetree.

Since we're no longer constrained to the old decoder's style
using cpu_F0s and cpu_F0d we can perform a direct 16 bit
load of the right half of the input single-precision register
rather than loading the full 32 bits and then doing a
separate shift or sign-extension.

Backports commit b623d803dda805f07aadcbf098961fde27315c19 from qemu
2019-06-13 19:00:23 -04:00
Peter Maydell e6cc2616d2
target/arm: Convert VFP comparison insns to decodetree
Convert the VFP comparison instructions to decodetree.

Note that comparison instructions should not honour the VFP
short-vector length and stride information: they are scalar-only
operations. This applies to all the 2-operand instructions except
for VMOV, VABS, VNEG and VSQRT. (In the old decoder this is
implemented via the "if (op == 15 && rn > 3) { veclen = 0; }" check.)

Backports commit 386bba2368842fc74388a3c1651c6c0c0c70adbd from qemu
2019-06-13 18:55:53 -04:00
Peter Maydell a75a3e321f
target/arm: Convert VMOV (register) to decodetree
Backports commit 17552b979ebb9848a534c25ebed18a1072710058 from qemu
2019-06-13 18:49:49 -04:00
Peter Maydell ee30962891
target/arm: Convert VSQRT to decodetree
Convert the VSQRT instruction to decodetree.

Backports commit b8474540cbce4e2fa45010416375d1bcbe86dc15 from qemu
2019-06-13 18:47:32 -04:00
Peter Maydell 7aea3da6b7
target/arm: Convert VNEG to decodetree
Convert the VNEG instruction to decodetree.

Backports commit 1882651afdb0ca44f0631192fbe65a71c660d809 from qemu
2019-06-13 18:43:50 -04:00
Peter Maydell 1032d86ad3
target/arm: Convert VABS to decodetree
Convert the VFP VABS instruction to decodetree.

Unlike the 3-op versions, we don't pass fpst to the VFPGen2OpSPFn or
VFPGen2OpDPFn because none of the operations which use this format
and support short vectors will need it.

Backports commit 90287e22c987e9840704345ed33d237cbe759dd9 from qemu
2019-06-13 18:41:43 -04:00
Peter Maydell 7a16bc6876
target/arm: Convert VMOV (imm) to decodetree
Convert the VFP VMOV (immediate) instruction to decodetree.

Backports commit b518c753f0b94e14e01e97b4ec42c100dafc0cc2 from qemu
2019-06-13 18:37:58 -04:00
Peter Maydell 0ebb6b8b90
target/arm: Convert VFP fused multiply-add insns to decodetree
Convert the VFP fused multiply-add instructions (VFNMA, VFNMS,
VFMA, VFMS) to decodetree.

Note that in the old decode structure we were implementing
these to honour the VFP vector stride/length. These instructions
were introduced in VFPv4, and in the v7A architecture they
are UNPREDICTABLE if the vector stride or length are non-zero.
In v8A they must UNDEF if stride or length are non-zero, like
all VFP instructions; we choose to UNDEF always.

Backports commit d4893b01d23060845ee3855bc96626e16aad9ab5 from qemu
2019-06-13 18:24:36 -04:00
Peter Maydell 321bcc822b
target/arm: Convert VDIV to decodetree
Convert the VDIV instruction to decodetree.

Backports commit 519ee7ae31e050eb0ff9ad35c213f0bd7ab1c03e from qemu
2019-06-13 18:19:47 -04:00
Peter Maydell 76c74bc657
target/arm: Convert VSUB to decodetree
Convert the VSUB instruction to decodetree.

Backports commit 8fec9a119264b7936503abce3c106fad7e3ccb76 from qemu.
2019-06-13 18:18:00 -04:00
Peter Maydell f56f0342ad
target/arm: Convert VADD to decodetree
Convert the VADD instruction to decodetree.

Backports commit ce28b303716e7eca3f3765bf6776d722ebbe1122 from qemu
2019-06-13 18:15:52 -04:00
Peter Maydell 06584edf61
target/arm: Convert VNMUL to decodetree
Convert the VNMUL instruction to decodetree.

Backports commit 43c4be1236c105090d134540da1036073d157cd4 from qemu
2019-06-13 18:14:16 -04:00
Peter Maydell 2c5e102017
target/arm: Convert VMUL to decodetree
Convert the VMUL instruction to decodetree.

Backports commit 88c5188ced60e9f2b8cc3af3b9bc4a8031c8c996 from qemu
2019-06-13 18:12:03 -04:00
Peter Maydell b26b6a12a2
target/arm: Convert VFP VNMLA to decodetree
Convert the VFP VNMLA instruction to decodetree.

Backports commit 8a483533adc1bdc2decb8f456dbe930a2d245a8b from qemu
2019-06-13 18:09:57 -04:00
Peter Maydell 638b90de31
target/arm: Convert VFP VNMLS to decodetree
Convert the VFP VNMLS instruction to decodetree.

Backports commit c54a416cc6d60efbc79dd37aaf0c8918c05b5815 from qemu
2019-06-13 18:06:59 -04:00
Peter Maydell 67ad40ffa4
target/arm: Convert VFP VMLS to decodetree
Convert the VFP VMLS instruction to decodetree.

Backports commit e7258280d46af4ab6a0cc93ccfe8f6614defb4b7 from qemu
2019-06-13 18:02:37 -04:00
Peter Maydell edf81eb214
target/arm: Convert VFP VMLA to decodetree
Convert the VFP VMLA instruction to decodetree.

This is the first of the VFP 3-operand data processing instructions,
so we include in this patch the code which loops over the elements
for an old-style VFP vector operation. The existing code to do this
looping uses the deprecated cpu_F0s/F0d/F1s/F1d TCG globals; since
we are going to be converting instructions one at a time anyway
we can take the opportunity to make the new loop use TCG temporaries,
which means we can do that conversion one operation at a time
rather than needing to do it all in one go.

We include an UNDEF check which was missing in the old code:
short-vector operations (with stride or length non-zero) were
deprecated in v7A and must UNDEF in v8A, so if the MVFR0 FPShVec
field does not indicate that support for short vectors is present
we UNDEF the operations that would use them. (This is a change
of behaviour for Cortex-A7, Cortex-A15 and the v8 CPUs, which
previously were all incorrectly allowing short-vector operations.)

Note that the conversion fixes a bug in the old code for the
case of VFP short-vector "mixed scalar/vector operations". These
happen where the destination register is in a vector bank but
but the second operand is in a scalar bank. For example
vmla.f64 d10, d1, d16 with length 2 stride 2
is equivalent to the pair of scalar operations
vmla.f64 d10, d1, d16
vmla.f64 d8, d3, d16
where the destination and first input register cycle through
their vector but the second input is scalar (d16). In the
old decoder the gen_vfp_F1_mul() operation uses cpu_F1{s,d}
as a temporary output for the multiply, which trashes the
second input operand. For the fully-scalar case (where we
never do a second iteration) and the fully-vector case
(where the loop loads the new second input operand) this
doesn't matter, but for the mixed scalar/vector case we
will end up using the wrong value for later loop iterations.
In the new code we use TCG temporaries and so avoid the bug.
This bug is present for all the multiply-accumulate insns
that operate on short vectors: VMLA, VMLS, VNMLA, VNMLS.

Note 2: the expression used to calculate the next register
number in the vector bank is not in fact correct; we leave
this behaviour unchanged from the old decoder and will
fix this bug later in the series.

Backports commit 266bd25c485597c94209bfdb3891c1d0c573c164 from qemu
2019-06-13 17:59:16 -04:00
Peter Maydell 93fe4cbe9e
target/arm: Remove VLDR/VSTR/VLDM/VSTM use of cpu_F0s and cpu_F0d
Expand out the sequences in the new decoder VLDR/VSTR/VLDM/VSTM trans
functions which perform the memory accesses by going via the TCG
globals cpu_F0s and cpu_F0d, to use local TCG temps instead.

Backports commit 3993d0407dff7233e42f2251db971e126a0497e9 from qemu
2019-06-13 17:31:28 -04:00
Peter Maydell ff7042567e
target/arm: Convert the VFP load/store multiple insns to decodetree
Convert the VFP load/store multiple insns to decodetree.
This includes tightening up the UNDEF checking for pre-VFPv3
CPUs which only have D0-D15: they now UNDEF for any access
to D16-D31, not merely when the smallest register in the
transfer list is in D16-D31.

This conversion does not try to share code between the single
precision and the double precision versions; this looks a bit
duplicative of code, but it leaves the door open for a future
refactoring which gets rid of the use of the "F0" registers
by inlining the various functions like gen_vfp_ld() and
gen_mov_F0_reg() which are hiding "if (dp) { ... } else { ... }"
conditionalisation.

Backports commit fa288de272c5c8a66d5eb683b123706a52bc7ad6 from qemu
2019-06-13 17:26:52 -04:00
Peter Maydell 6f0633ce80
target/arm: Convert VFP VLDR and VSTR to decodetree
Convert the VFP single load/store insns VLDR and VSTR to decodetree.

Backports commit 79b02a3b5231c5b8cd31e50cd549968dd0a05c49 from qemu
2019-06-13 17:22:48 -04:00
Peter Maydell fe98885ff2
target/arm: Convert VFP two-register transfer insns to decodetree
Convert the VFP two-register transfer instructions to decodetree
(in the v8 Arm ARM these are the "Advanced SIMD and floating-point
64-bit move" encoding group).

Again, we expand out the sequences involving gen_vfp_msr() and
gen_msr_vfp().

Backports commit 81f681106eabe21c55118a5a41999fb7387fb714 from qemu
2019-06-13 17:20:00 -04:00
Peter Maydell 3fb3403b82
target/arm: Convert single-precision register moves to decodetree
Convert the "single-precision" register moves to decodetree:
* VMSR
* VMRS
* VMOV between general purpose register and single precision

Note that the VMSR/VMRS conversions make our handling of
the "should this UNDEF?" checks consistent between the two
instructions:
* VMSR to MVFR0, MVFR1, MVFR2 now UNDEF from EL0
  (previously was a nop)
* VMSR to FPSID now UNDEFs from EL0 or if VFPv3 or better
  (previously was a nop)
* VMSR to FPINST and FPINST2 now UNDEF if VFPv3 or better
  (previously would write to the register, which had no
  guest-visible effect because we always UNDEF reads)

We also tighten up the decode: we were previously underdecoding
some SBZ or SBO bits.

The conversion of VMOV_single includes the expansion out of the
gen_mov_F0_vreg()/gen_vfp_mrs() and gen_mov_vreg_F0()/gen_vfp_msr()
sequences into the simpler direct load/store of the TCG temp via
neon_{load,store}_reg32(): we know in the new function that we're
always single-precision, we don't need to use the old-and-deprecated
cpu_F0* TCG globals, and we don't happen to have the declaration of
gen_vfp_msr() and gen_vfp_mrs() at the point in the file where the
new function is.

Backports commit a9ab50011aeda2dd012da99069e078379315ea18 from qemu
2019-06-13 17:16:38 -04:00
Peter Maydell 694058da94
target/arm: Convert double-precision register moves to decodetree
Convert the "double-precision" register moves to decodetree:
this covers VMOV scalar-to-gpreg, VMOV gpreg-to-scalar and VDUP.

Note that the conversion process has tightened up a few of the
UNDEF encoding checks: we now correctly forbid:
* VMOV-to-gpr with U:opc1:opc2 == 10x00 or x0x10
* VMOV-from-gpr with opc1:opc2 == 0x10
* VDUP with B:E == 11
* VDUP with Q == 1 and Vn<0> == 1

The accesses of elements < 32 bits could be improved by doing
direct ld/st of the right size rather than 32-bit read-and-shift
or read-modify-write, but we leave this for later cleanup,
since this series is generally trying to stick to fixing
the decode.

Backports commit 9851ed9269d214c0c6feba960dd14ff09e6c34b4 from qemu
2019-06-13 17:11:56 -04:00
Peter Maydell 7265161108
target/arm: Add helpers for VFP register loads and stores
The current VFP code has two different idioms for
loading and storing from the VFP register file:
 1 using the gen_mov_F0_vreg() and similar functions,
   which load and store to a fixed set of TCG globals
   cpu_F0s, CPU_F0d, etc
 2 by direct calls to tcg_gen_ld_f64() and friends

We want to phase out idiom 1 (because the use of the
fixed globals is a relic of a much older version of TCG),
but idiom 2 is quite longwinded:
  tcg_gen_ld_f64(tmp, cpu_env, vfp_reg_offset(true, reg))
requires us to specify the 64-bitness twice, once in
the function name and once by passing 'true' to
vfp_reg_offset(). There's no guard against accidentally
passing the wrong flag.

Instead, let's move to a convention of accessing 64-bit
registers via the existing neon_load_reg64() and
neon_store_reg64(), and provide new neon_load_reg32()
and neon_store_reg32() for the 32-bit equivalents.

Implement the new functions and use them in the code in
translate-vfp.inc.c. We will convert the rest of the VFP
code as we do the decodetree conversion in subsequent
commits.
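
The 32-bit accessors are roughly (upstream-style; assumes the existing
vfp_reg_offset() helper and cpu_env):

static inline void neon_load_reg32(TCGv_i32 var, int reg)
{
    tcg_gen_ld_i32(var, cpu_env, vfp_reg_offset(false, reg));
}

static inline void neon_store_reg32(TCGv_i32 var, int reg)
{
    tcg_gen_st_i32(var, cpu_env, vfp_reg_offset(false, reg));
}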

Backports commit 160f3b64c5cc4c8a09a1859edc764882ce6ad6bf from qemu
2019-06-13 17:01:59 -04:00
Peter Maydell 033a386ffb
target/arm: Move the VFP trans_* functions to translate-vfp.inc.c
Move the trans_*() functions we've just created from translate.c
to translate-vfp.inc.c. This is pure code motion with no textual
changes (this can be checked with 'git show --color-moved').

Backports commit f7bbb8f31f0761edbf0c64b7ab3c3f49c13612ea from qemu
2019-06-13 16:56:24 -04:00
Peter Maydell e55d31a5ac
target/arm: Convert VCVTA/VCVTN/VCVTP/VCVTM to decodetree
Convert the VCVTA/VCVTN/VCVTP/VCVTM instructions to decodetree.
trans_VCVT() is temporarily left in translate.c.

Backports commit c2a46a914cd5c38fd0ee57ff0befc1c5bde27bcf from qemu
2019-06-13 16:54:42 -04:00
Peter Maydell 9fb01cb526
target/arm: Convert VRINTA/VRINTN/VRINTP/VRINTM to decodetree
Convert the VRINTA/VRINTN/VRINTP/VRINTM instructions to decodetree.
Again, trans_VRINT() is temporarily left in translate.c.

Backports commit e3bb599d16e4678b228d80194cee328f894b1ceb from qemu
2019-06-13 16:50:36 -04:00
Peter Maydell 4501daf010
target/arm: Convert VMINNM, VMAXNM to decodetree
Convert the VMINNM and VMAXNM instructions to decodetree.
As with VSEL, we leave the trans_VMINMAXNM() function
in translate.c for the moment.

Backports commit f65988a1efdb42f9058db44297591491842e697c from qemu
2019-06-13 16:43:50 -04:00
Peter Maydell 3994dfd079
target/arm: Convert the VSEL instructions to decodetree
Convert the VSEL instructions to decodetree.
We leave trans_VSEL() in translate.c for now as this allows
the patch to show just the changes from the old handle_vsel().

In the old code the check for "do D16-D31 exist" was hidden in
the VFP_DREG macro, and assumed that VFPv3 always implied that
D16-D31 exist. In the new code we do the correct ID register test.
This gives identical behaviour for most of our CPUs, and fixes
previously incorrect handling for Cortex-R5F, Cortex-M4 and
Cortex-M33, which all implement VFPv3 or better with only 16
double-precision registers.

Backports commit b3ff4b87b4ae08120a51fe12592725e1dca8a085 from qemu
2019-06-13 16:41:22 -04:00
Peter Maydell 93adaa7de2
target/arm: Explicitly enable VFP short-vectors for aarch32 -cpu max
At the moment our -cpu max for AArch32 supports VFP short-vectors
because we always implement them, even for CPUs which should
not have them. The following commits are going to switch to
using the correct ID-register-check to enable or disable short
vector support, so we need to turn it on explicitly for -cpu max,
because Cortex-A15 doesn't implement it.

We don't enable this for the AArch64 -cpu max, because the v8A
architecture never supports short-vectors.

Backports commit 973751fd798d41402d34f9f705c0c6d1633d0cda from qemu
2019-06-13 16:38:01 -04:00
Peter Maydell 808d929d7c
target/arm: Fix Cortex-R5F MVFR values
The Cortex-R5F initfn was not correctly setting up the MVFR
ID register values. Fill these in, since some subsequent patches
will use ID register checks rather than CPU feature bit checks.

Backports commit 3de79d335c9aa7d726865e3933d9b21781032183 from qemu
2019-06-13 16:36:48 -04:00
Lioncash b3cfede44f
target/arm: Make load_cpu_offset() take a DisasContext* instead of uc_struct*
Keeps it consistent with store_cpu_offset.
2019-06-13 16:35:31 -04:00
Peter Maydell 78997058e4
target/arm: Factor out VFP access checking code
Factor out the VFP access checking code so that we can use it in the
leaf functions of the decodetree decoder.

We call the function full_vfp_access_check() so we can keep
the more natural vfp_access_check() for a version which doesn't
have the 'ignore_vfp_enabled' flag -- that way almost all VFP
insns will be able to use vfp_access_check(s) and only the
special-register access function will have to use
full_vfp_access_check(s, ignore_vfp_enabled).

Backports commit 06db8196bba34776829020192ed623a0b22e6557 from qemu
2019-06-13 16:33:38 -04:00
Peter Maydell 9732ebba5c
target/arm: Add stubs for AArch32 VFP decodetree
Add the infrastructure for building and invoking a decodetree decoder
for the AArch32 VFP encodings. At the moment the new decoder covers
nothing, so we always fall back to the existing hand-written decode.

We need to have one decoder for the unconditional insns and one for
the conditional insns, as otherwise the patterns for conditional
insns would incorrectly match against the unconditional ones too.

Since translate.c is over 14,000 lines long and we're going to be
touching pretty much every line of the VFP code as part of the
decodetree conversion, we create a new translate-vfp.inc.c to hold
the code which deals with VFP in the new scheme. It should be
possible to convert this into a standalone translation unit
eventually, but the conversion process will be much simpler if we
simply #include it midway through translate.c to start with.

Backports commit 78e138bc1f672c145ef6ace74617db00eebaa2ba from qemu
2019-06-13 16:24:37 -04:00
Richard Henderson afaea6a291
target/arm: Fix output of PAuth Auth
The ARM pseudocode installs the error_code into the original
pointer, not the encrypted pointer. The difference applies
within the 7 bits of pac data; the result should be the sign
extension of bit 55.

Add a testcase to that effect.

Backports commit d67ebada159148bfdfde84871338738e4465e985 from qemu
2019-06-13 16:17:00 -04:00
Peter Maydell 230f8a091a
target/arm: Implement NSACR gating of floating point
The NSACR register allows secure code to configure the FPU
to be inaccessible to non-secure code. If the NSACR.CP10
bit is set then:
* NS accesses to the FPU trap as UNDEF (ie to NS EL1 or EL2)
* CPACR.{CP10,CP11} behave as if RAZ/WI
* HCPTR.{TCP11,TCP10} behave as if RAO/WI

Note that we do not implement the NSACR.NSASEDIS bit which
gates only access to Advanced SIMD, in the same way that
we don't implement the equivalent CPACR.ASEDIS and HCPTR.TASE.

Backports commit fc1120a7f5f2d4b601003205c598077d3eb11ad2 from qemu
2019-06-13 16:15:28 -04:00
Richard Henderson 7c32498b7f
target/arm: Use tcg_gen_gvec_bitsel
This replaces 3 target-specific implementations for BIT, BIF, and BSL.

Backports commit 3a7a2b4e5cf0d49cd8b14e8225af0310068b7d20 from qemu
2019-06-13 16:12:56 -04:00
Richard Henderson 8f53f09a05
cpu: Introduce CPUNegativeOffsetState
Nothing in there so far, but all of the plumbing done
within the target ArchCPU state.

Backports commit 5b146dc716cfd247f99556c04e6e46fbd67565a0 from qemu
2019-06-13 15:08:25 -04:00
Richard Henderson a672b89e3b
cpu: Introduce cpu_set_cpustate_pointers
Consolidate some boilerplate from foo_cpu_initfn.

Backports commit 7506ed902eb97fe4e2a1dd16766c621d32ecc40d from qemu
2019-06-12 12:27:16 -04:00
Richard Henderson ac176ccb38
cpu: Move ENV_OFFSET to exec/gen-icount.h
Now that we have ArchCPU, we can define this generically,
in the one place that needs it.

Backports commit 677c4d69ac21961e76a386f9bfc892a44923acc0 from qemu
2019-06-12 12:20:21 -04:00
Richard Henderson b8bd543390
target/arm: Use env_cpu, env_archcpu
Cleanup in the boilerplate that each target must define.
Replace arm_env_get_cpu with env_archcpu. The combination
CPU(arm_env_get_cpu) should have used ENV_GET_CPU to begin with;
use env_cpu now.

Backports commit 2fc0cc0e1e034582f4718b1a2d57691474ccb6aa from qemu
2019-06-12 11:34:08 -04:00
Richard Henderson fbf91a6535
cpu: Replace ENV_GET_CPU with env_cpu
Now that we have both ArchCPU and CPUArchState, we can define
this generically instead of via macro in each target's cpu.h.

Backports commit 29a0af618ddd21f55df5753c3e16b0625f534b3c from qemu
2019-06-12 11:16:16 -04:00
Richard Henderson ae94fb5992
cpu: Define ArchCPU
For all targets, do this just before including exec/cpu-all.h.

Backports commit 2161a612b4e1d388046320bc464adefd6bba01a0 from qemu
2019-06-12 11:08:39 -04:00
Richard Henderson e3f1f25996
cpu: Define CPUArchState with typedef
For all targets, do this just before including exec/cpu-all.h.

Backports commit 4f7c64b3819d559417615ed2b1d028ebc1a49580 from qemu
2019-06-12 11:06:36 -04:00
Richard Henderson df2a890bd7
tcg: Split out target/arch/cpu-param.h
For all targets, into this new file move TARGET_LONG_BITS,
TARGET_PAGE_BITS, TARGET_PHYS_ADDR_SPACE_BITS,
TARGET_VIRT_ADDR_SPACE_BITS, and NB_MMU_MODES.

Include this new file from exec/cpu-defs.h.

This now removes the somewhat odd requirement that target/arch/cpu.h
defines TARGET_LONG_BITS before including exec/cpu-defs.h, so push the
bulk of the includes within target/arch/cpu.h to the top.

Backports commit 74433bf083b0766aba81534f92de13194f23ff3e from qemu
2019-06-10 19:35:46 -04:00
Alistair Francis f8f3e50372
target/arm: Fix vector operation segfault
Commit 89e68b575 "target/arm: Use vector operations for saturation"
causes this abort() when booting QEMU ARM with a Cortex-A15:

0 0x00007ffff4c2382f in raise () at /usr/lib/libc.so.6
1 0x00007ffff4c0e672 in abort () at /usr/lib/libc.so.6
2 0x00005555559c1839 in disas_neon_data_insn (insn=<optimized out>, s=<optimized out>) at ./target/arm/translate.c:6673
3 0x00005555559c1839 in disas_neon_data_insn (s=<optimized out>, insn=<optimized out>) at ./target/arm/translate.c:6386
4 0x00005555559cd8a4 in disas_arm_insn (insn=4081107068, s=0x7fffe59a9510) at ./target/arm/translate.c:9289
5 0x00005555559cd8a4 in arm_tr_translate_insn (dcbase=0x7fffe59a9510, cpu=<optimized out>) at ./target/arm/translate.c:13612
6 0x00005555558d1d39 in translator_loop (ops=0x5555561cc580 <arm_translator_ops>, db=0x7fffe59a9510, cpu=0x55555686a2f0, tb=<optimized out>, max_insns=<optimized out>) at ./accel/tcg/translator.c:96
7 0x00005555559d10d4 in gen_intermediate_code (cpu=cpu@entry=0x55555686a2f0, tb=tb@entry=0x7fffd7840080 <code_gen_buffer+126091347>, max_insns=max_insns@entry=512) at ./target/arm/translate.c:13901
8 0x00005555558d06b9 in tb_gen_code (cpu=cpu@entry=0x55555686a2f0, pc=3067096216, cs_base=0, flags=192, cflags=-16252928, cflags@entry=524288) at ./accel/tcg/translate-all.c:1736
9 0x00005555558ce467 in tb_find (cf_mask=524288, tb_exit=1, last_tb=0x7fffd783e640 <code_gen_buffer+126084627>, cpu=0x1) at ./accel/tcg/cpu-exec.c:407
10 0x00005555558ce467 in cpu_exec (cpu=cpu@entry=0x55555686a2f0) at ./accel/tcg/cpu-exec.c:728
11 0x000055555588b0cf in tcg_cpu_exec (cpu=0x55555686a2f0) at ./cpus.c:1431
12 0x000055555588d223 in qemu_tcg_cpu_thread_fn (arg=0x55555686a2f0) at ./cpus.c:1735
13 0x000055555588d223 in qemu_tcg_cpu_thread_fn (arg=arg@entry=0x55555686a2f0) at ./cpus.c:1709
14 0x0000555555d2629a in qemu_thread_start (args=<optimized out>) at ./util/qemu-thread-posix.c:502
15 0x00007ffff4db8a92 in start_thread () at /usr/lib/libpthread.

This patch ensures that we don't hit the abort() in the second switch
case in disas_neon_data_insn() as we will return from the first case.

Backports commit 2f143d3ad1c05e91cf2cdf5de06d59a80a95e6c8 from qemu
2019-05-24 18:02:32 -04:00
Richard Henderson 9287750362
target/arm: Simplify BFXIL expansion
The mask implied by the extract is redundant with the one
implied by the deposit. Also, fix spelling of BFXIL.

Backports commit 87eb65a3c45c788a309986d48170a54a0d1c0705 from qemu
2019-05-24 18:01:26 -04:00
Richard Henderson 1778828644
target/arm: Use extract2 for EXTR
This is, after all, how we implement extract2 in tcg/aarch64.

Backports commit 80ac954c369e7e61bd1ed00cef07b63e11f9c734 from qemu
2019-05-24 17:58:58 -04:00
Richard Henderson 3dd7358a53
target/arm: Implement ARMv8.5-RNG
Use the newly introduced infrastructure for guest random numbers.

Backports commit de390645675966cce113bf5394445bc1f8d07c85 from qemu

(with the actual RNG portion disabled to preserve determinism for the
time being).
2019-05-23 15:03:23 -04:00
Richard Henderson a8df33c37c
target/arm: Put all PAC keys into a structure
This allows us to use a single syscall to initialize them all.

Backports commit 108b3ba891408c4dce93df78261ec4aca38c0e2e from qemu
2019-05-23 14:54:06 -04:00
Richard Henderson 2a4a7b9391
tcg: Use tlb_fill probe from tlb_vaddr_to_host
Most of the existing users would continue around a loop which
would fault the tlb entry in via a normal load/store.

But for AArch64 SVE we have an existing emulation bug wherein we
would mark the first element of a no-fault vector load as faulted
(within the FFR, not via exception) just because we did not have
its address in the TLB. Now we can properly only mark it as faulted
if there really is no valid, readable translation, while still not
raising an exception. (Note that beyond the first element of the
vector, the hardware may report a fault for any reason whatsoever;
with at least one element loaded, forward progress is guaranteed.)

Backports commit 4811e9095c0491bc6f5450e5012c9c4796b9e59d from qemu
2019-05-16 18:27:03 -04:00
Richard Henderson dab0061a0d
tcg: Use CPUClass::tlb_fill in cputlb.c
We can now use the CPUClass hook instead of a named function.

Create a static tlb_fill function to avoid other changes within
cputlb.c. This also isolates the asserts within. Remove the
named tlb_fill function from all of the targets.

Backports commit c319dc13579a92937bffe02ad2c9f1a550e73973 from qemu
2019-05-16 17:35:37 -04:00
Richard Henderson 31ecdb5341
target/arm: Convert to CPUClass::tlb_fill
Backports commit 7350d553b5066abdc662045d7db5cdb73d0f9d53 from qemu
2019-05-16 16:55:12 -04:00
Richard Henderson 552e48f14e
target/arm: Use tcg_gen_abs_i64 and tcg_gen_gvec_abs
Backports commit 4e027a710673f5d4dc6cff88728bcfd32e4c47b0 from qemu
2019-05-16 16:43:02 -04:00
Richard Henderson 6d1730048d
tcg: Add support for integer absolute value
Remove a function of the same name from target/arm/.
Use a branchless implementation of abs gleaned from gcc.
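
The branchless form is the usual sign-mask identity; in plain C:

#include <stdint.h>

/* s is 0 for non-negative x and all-ones for negative x, so
 * (x ^ s) - s conditionally negates x without a branch. */
static int64_t abs64(int64_t x)
{
    int64_t s = x >> 63;    /* arithmetic shift: 0 or -1 */
    return (x ^ s) - s;
}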

Backports commit ff1f11f7f8710a768f9313f24bd7f509d3db27e5 from qemu
2019-05-16 16:25:15 -04:00
Richard Henderson c54b2776f6
tcg: Specify optional vector requirements with a list
Replace the single opcode in .opc with a null-terminated
array in .opt_opc. We still require that all opcodes be
used with the same .vece.

Validate the contents of this list with CONFIG_DEBUG_TCG.
All tcg_gen_*_vec functions will check any list active
during .fniv expansion. Swap the active list in and out
as we expand other opcodes, or take control away from the
front-end function.

Convert all existing vector aware front ends.

Backports commit 53229a7703eeb2bbe101a19a33ef22aaf960c65b from qemu
2019-05-16 15:05:02 -04:00
Peter Maydell 26cb1b8767
target/arm: Stop using variable length array in dc_zva
Currently the dc_zva helper function uses a variable length
array. In fact we know (as the comment above remarks) that
the length of this array is bounded because the architecture
limits the block size and QEMU limits the target page size.
Use a fixed array size and assert that we don't run off it.

Backports commit 63159601fb3e396b28da14cbb71e50ed3f5a0331 from qemu
2019-05-09 17:48:25 -04:00
Peter Maydell 7861820e94
target/arm: Implement XPSR GE bits
In the M-profile architecture, if the CPU implements the DSP extension
then the XPSR has GE bits, in the same way as the A-profile CPSR. When
we added DSP extension support we forgot to add support for reading
and writing the GE bits, which are stored in env->GE. We did put in
the code to add XPSR_GE to the mask of bits to update in the v7m_msr
helper, but forgot it in v7m_mrs. We also must not allow the XPSR we
pull off the stack on exception return to set the nonexistent GE bits.
Correct these errors:
* read and write env->GE in xpsr_read() and xpsr_write()
* only set GE bits on exception return if DSP present
* read GE bits for MRS if DSP present

Backports commit f1e2598c46d480c9e21213a244bc514200762828 from qemu
2019-05-09 17:46:31 -04:00
Lioncash a71c027063
decodetree: Add DisasContext argument to !function expanders
This does require adjusting all existing users.

Backports commit 451e4ffdb0003ab5ed0d98bd37b385c076aba183 from qemu
2019-05-09 17:40:45 -04:00
Emilio G. Cota 1715f382b4
target/arm: check CF_PARALLEL instead of parallel_cpus
Thereby decoupling the resulting translated code from the current state
of the system.
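
In the translator this amounts to testing the TB's own compile flags
rather than a mutable global, roughly:

/* Per-TB view of "are we running in a parallel context?". */
bool parallel = (tb_cflags(dc->base.tb) & CF_PARALLEL) != 0;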

Backports commit 2399d4e7cec22ecf1c51062d2ebfd45220dbaace from qemu
2019-05-04 22:44:32 -04:00