Commit graph

325 commits

Author SHA1 Message Date
Peter Maydell 6dd4a8e93f target/arm: Implement fp16 for VACGE, VACGT
Convert the neon floating-point vector absolute comparison ops
VACGE and VACGT over to using a gvec helper and use this to
implement the fp16 case.

Backports bb2741da186ebaebc7d5189372be4401e1ff9972
2021-03-01 16:47:44 -05:00
Peter Maydell 4eb39f1b2f target/arm: Implement fp16 for VCEQ, VCGE, VCGT comparisons
Convert the Neon floating-point vector comparison ops VCEQ,
VCGE and VCGT over to using a gvec helper and use this to
implement the fp16 case.

(We put the float16_ceq() etc functions above the DO_2OP()
macro definition because later when we convert the
compare-against-zero instructions we'll want their
definitions to be visible at that point in the source file.)

Backports ad505db233b89b7fd4b5a98b6f0e8ac8d05b11db
2021-03-01 16:44:34 -05:00
Peter Maydell 4850377f01 target/arm: Implement FP16 for Neon VADD, VSUB, VABD, VMUL
Implement FP16 support for the Neon insns which use the DO_3S_FP_GVEC
macro: VADD, VSUB, VABD, VMUL.

For VABD this requires us to implement a new gvec_fabd_h helper
using the machinery we have already for the other helpers.

Backports e4a6d4a69e239becfd83bdcd996476e7b8e1138d
2021-03-01 16:31:54 -05:00
Peter Maydell 90aa9647e0 target/arm: Implement VFP fp16 VRINT*
Implement the fp16 version of the VFP VRINT* insns.

Backports 0a6f4b4cb338665b81ad824d9a6868932461b7f7
2021-03-01 16:15:21 -05:00
Peter Maydell 9c5b6f06a2 target/arm: Use macros instead of open-coding fp16 conversion helpers
Now that the VFP_CONV_FIX macros can handle fp16's distinction between the
width of the operation and the width of the type used to pass operands,
use the macros rather than the open-coded functions.

This creates an extra six helper functions, all of which we are going
to need for the AArch32 VFP fp16 instructions.

Backports commit 414ba270c4fb758d987adf37ae9bfe531715c604
2021-02-28 05:08:44 -05:00
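
One way to picture the width distinction described above (a minimal hedged sketch with made-up names, not the code the VFP_CONV_FIX macros actually generate): the arithmetic is 16 bits wide, but operands and results are passed in 32-bit containers, so the helper extracts and re-deposits the low half of a uint32_t.

    #include <stdint.h>

    /* Hypothetical fp16 helper shape: the operation is 16 bits wide, but the
     * value travels in a 32-bit container. do_fp16_op() is a placeholder. */
    static uint16_t do_fp16_op(uint16_t a)
    {
        return a;   /* stands in for the real 16-bit arithmetic */
    }

    uint32_t helper_fp16_example(uint32_t arg)
    {
        uint16_t in = (uint16_t)arg;    /* only the low 16 bits carry data */
        uint16_t out = do_fp16_op(in);  /* 16-bit-wide operation */
        return out;                     /* zero-extended back to 32 bits */
    }
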
Peter Maydell 5d98e14545 target/arm: Implement VFP fp16 VCMP
Implement fp16 version of VCMP.

Backports 1b88b054c5b201e8581114d29527c6a5a7e088c9
2021-02-28 04:56:24 -05:00
Peter Maydell 2d9abf7c0b target/arm: Implement VFP fp16 for VABS, VNEG, VSQRT
Implement VFP fp16 for VABS, VNEG and VSQRT. This is all
the fp16 insns that use the DO_VFP_2OP macro, because there
is no fp16 version of VMOV_reg.

Notes:
* the gen_helper_vfp_negh already exists as we needed to create
it for the fp16 multiply-add insns
* as usual we need to use the f16 version of the fp_status;
this is only relevant for VSQRT

Backports ce2d65a5d191380756cdac7a1fd1ba76bd1621cf
2021-02-28 04:48:28 -05:00
Peter Maydell 6ac2c597ab target/arm: Implement VFP fp16 for fused-multiply-add
Implement VFP fp16 support for fused multiply-add insns
VFNMA, VFNMS, VFMA, VFMS.

Backports 9886fe2834b064a3cf0675a4659942ed547aed42
2021-02-28 04:39:21 -05:00
Peter Maydell a42ecfe203 target/arm: Implement VFP fp16 VMLA, VMLS, VNMLS, VNMLA, VNMUL
Implement fp16 versions of the VFP VMLA, VMLS, VNMLS, VNMLA, VNMUL
instructions. (These are all the remaining ones which we implement
via do_vfp_3op_[hsd]p().)

Backports commit e7cb0ded52c6d7b86585b09935fe7caeb9e38b69
2021-02-28 04:29:37 -05:00
Peter Maydell eae621098d target/arm: Implement VFP fp16 for VFP_BINOP operations
Implement VFP fp16 support for simple binary-operator VFP insns VADD,
VSUB, VMUL, VDIV, VMINNM and VMAXNM:

* make the VFP_BINOP() macro generate float16 helpers as well as
float32 and float64
* implement a do_vfp_3op_hp() function similar to the existing
do_vfp_3op_sp()
* add decode for the half-precision insn patterns

Note that using the VFP_BINOP macro creates a couple of unused helper
functions, vfp_maxh and vfp_minh, but they're small, so it's not worth
splitting the BINOP operations into "needs halfprec" and "no
halfprec" groups.

Backports commit 120a0eb3ea23a5b06fae2f3daebd46a4035864cf
2021-02-28 04:24:39 -05:00
LIU Zhiwei d26cd63ad6 softfloat: Define misc operations for bfloat16
Backports 5ebf5f4be66c378fd5f3dee85f54dd4942171d57
2021-02-27 16:41:46 -05:00
LIU Zhiwei d8168a8142 softfloat: Define convert operations for bfloat16
Backports 34f0c0a98a5f3bb6706088c0384f937f7a294d3e
2021-02-27 16:37:11 -05:00
LIU Zhiwei b0be0d28cc softfloat: Define operations for bfloat16
Backports 8282310d8535cc2a8431c516e907da79f92df6eb
2021-02-26 15:20:30 -05:00
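
For context on the format the three bfloat16 commits above target (a sketch of the encoding, not the softfloat code): bfloat16 is 1 sign bit, 8 exponent bits and 7 fraction bits, i.e. the top half of an IEEE-754 binary32, so a crude conversion is a 16-bit shift. The softfloat routines added here do it properly, with rounding and NaN handling.

    #include <stdint.h>
    #include <string.h>

    typedef uint16_t bfloat16;

    /* Truncating conversion, for illustration only; softfloat rounds. */
    static bfloat16 bf16_from_f32_sketch(float f)
    {
        uint32_t bits;
        memcpy(&bits, &f, sizeof(bits));
        return (bfloat16)(bits >> 16);
    }

    /* Widening a bfloat16 back to float32 is exact. */
    static float bf16_to_f32_sketch(bfloat16 h)
    {
        uint32_t bits = (uint32_t)h << 16;
        float f;
        memcpy(&f, &bits, sizeof(f));
        return f;
    }
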
Frank Chang d97454eb63 softfloat: Add fp16 and uint8/int8 conversion functions
Backports 0d93d8ec632154dea2627a9e989972ee09721187
2021-02-26 15:11:57 -05:00
Lioncash f5a21abc0b target/arm: Convert sq{, r}dmulh to gvec for aa64 advsimd 2021-02-26 15:01:44 -05:00
Richard Henderson 94b0876f15 target/arm: Add sve infrastructure for page lookup
For contiguous predicated memory operations, we want to
minimize the number of tlb lookups performed. We have
open-coded this for sve_ld1_r, but for correctness with
MTE we will need this for all of the memory operations.

Create a structure that holds the bounds of active elements,
and metadata for two pages. Add routines to find those
active elements, lookup the pages, and run watchpoints
for those pages.

Temporarily mark the functions unused to avoid Werror.

Backports commit b4cd95d2f4c7197b844f51b29871d888063ea3e7 from qemu
2021-02-25 20:28:23 -05:00
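
A hedged sketch of the bookkeeping described above (all names are hypothetical, not the structure the commit adds): record the first and last active elements, and for each of the at most two pages the access can touch, the looked-up host address and the element offset where the page ends.

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical per-page metadata for a predicated contiguous access. */
    typedef struct {
        void *host;        /* host address from the TLB lookup, or NULL */
        intptr_t split;    /* element offset at which this page ends */
    } PageInfoSketch;

    typedef struct {
        intptr_t first_active;   /* first element with a true predicate bit */
        intptr_t last_active;    /* last element with a true predicate bit */
        PageInfoSketch page[2];  /* a contiguous access spans at most two pages */
    } ContLdStSketch;
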
Richard Henderson 2e03f74a53 target/arm: Use cpu_*_data_ra for sve_ldst_tlb_fn
Use the "normal" memory access functions, rather than the
softmmu internal helper functions directly.

Since fb901c9, cpu_mmu_index is now a simple extract
from env->hflags and not a large computation, which means
that it's now more work to pass around this value than it
is to recompute it.

This only adjusts the primitives, and does not clean up
all of the uses within sve_helper.c.
2021-02-25 20:16:38 -05:00
Richard Henderson 5b3ddcf2e2 target/arm: Simplify DC_ZVA
Now that we know that the operation is on a single page,
we need not loop over pages while probing.

Backports commit e26d0d226892f67435cadcce86df0ddfb9943174 from qemu
2021-02-25 15:55:46 -05:00
Joseph Myers b08d204a37 softfloat: merge floatx80_mod and floatx80_rem
The m68k-specific softfloat code includes a function floatx80_mod that
is extremely similar to floatx80_rem, but computing the remainder
based on truncating the quotient toward zero rather than rounding it
to nearest integer. This is also useful for emulating the x87 fprem
and fprem1 instructions. Change the floatx80_rem implementation into
floatx80_modrem that can perform either operation, with both
floatx80_rem and floatx80_mod as thin wrappers available for all
targets.

There does not appear to be any use for the _mod operation for other
floating-point formats in QEMU (the only other architectures using
_rem at all are linux-user/arm/nwfpe, for FPA emulation, and openrisc,
for instructions that have been removed in the latest version of the
architecture), so no change is made to the code for other formats.

Backports commit 6b8b0136ab3018e4b552b485f808bf66bcf19ead from qemu
2021-02-25 13:34:05 -05:00
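
The distinction described above, truncating the quotient toward zero versus rounding it to nearest, is the same one C exposes for native types via fmod() and remainder(); a quick standalone illustration (not the floatx80 code itself):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* fmod truncates the quotient toward zero (like floatx80_mod / fprem);
         * remainder rounds it to nearest (like floatx80_rem / fprem1). */
        printf("fmod(5.5, 2.0)      = %f\n", fmod(5.5, 2.0));      /* 1.5  */
        printf("remainder(5.5, 2.0) = %f\n", remainder(5.5, 2.0)); /* -0.5 */
        return 0;
    }
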
Richard Henderson 1d95dd1c89 target/arm: Split helper_crypto_sm3tt
Rather than passing an opcode to a helper, fully decode the
operation at translate time. Use clear_tail_16 to zap the
balance of the SVE register with the AdvSIMD write.

Backports commit 43fa36c96c24349145497adc1b451f9caf74e344 from qemu
2020-06-14 23:24:21 -04:00
Richard Henderson 5ca8caf656 target/arm: Split helper_crypto_sha1_3reg
Rather than passing an opcode to a helper, fully decode the
operation at translate time. Use clear_tail_16 to zap the
balance of the SVE register with the AdvSIMD write.

Backports commit afc8b7d32668547308bdd654a63cf5228936e0ba from qemu
2020-06-14 23:18:45 -04:00
Richard Henderson 894f2168da target/arm: Convert rax1 to gvec helpers
With this conversion, we will be able to use the same helpers
with sve. This also fixes a bug in which we failed to clear
the high bits of the SVE register after an AdvSIMD operation.

Backports commit 1738860d7e60dec5dbeba17f8b44d31aae3accac from qemu
2020-06-14 22:49:36 -04:00
Richard Henderson cc3187b1e4 tcg: Implement gvec support for rotate by scalar
No host backend support yet, but the interfaces for rotls
are in place. Only implement left-rotate for now, as the
only known use of vector rotate by scalar is s390x, so any
right-rotate would be unused and untestable.

Backports commit 23850a74afb641102325b4b7f74071d929fc4594 from qemu
2020-06-14 22:00:50 -04:00
Richard Henderson be78062fd8 tcg: Implement gvec support for rotate by vector
No host backend support yet, but the interfaces for rotlv
and rotrv are in place.

Backports commit 5d0ceda902915e3f0e21c39d142c92c4e97c3ebb from qemu
2020-06-14 21:43:46 -04:00
Richard Henderson 5cce52a04b tcg: Implement gvec support for rotate by immediate
No host backend support yet, but the interfaces for rotli
are in place. Canonicalize immediate rotate to the left,
based on a survey of architectures, but provide both left
and right shift interfaces to the translators.

Backports commit b0f7e7444c03da17e41bf327c8aea590104a28ab from qemu
2020-06-14 21:26:58 -04:00
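
The left-rotate canonicalization works because a right rotate is just a left rotate by the complementary amount; a minimal scalar sketch of the identity (not the TCG expansion itself):

    #include <stdint.h>

    static uint32_t rotl32(uint32_t x, unsigned r)
    {
        r &= 31;
        return r ? (x << r) | (x >> (32 - r)) : x;
    }

    /* A right rotate by r equals a left rotate by (32 - r) mod 32, which is
     * how a left-only primitive can back both the rotli and rotri interfaces. */
    static uint32_t rotr32(uint32_t x, unsigned r)
    {
        return rotl32(x, (32 - (r & 31)) & 31);
    }
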
Peter Maydell bb0aa79847 target/arm: Convert Neon VADD, VSUB, VABD 3-reg-same insns to decodetree
Convert the Neon VADD, VSUB, VABD 3-reg-same insns to decodetree.
We already have gvec helpers for addition and subtraction, but must
add one for fabd.

Backports commit a26a352bb498662cd0c205cb433a352f86fac7d2 from qemu
2020-05-15 23:26:51 -04:00
Richard Henderson 451683ee79 target/arm: Vectorize SABA/UABA
Include 64-bit element size in preparation for SVE2.

Backports commit cfdb2c0c95ae9205b0dd7f0f5e970cdec50fef20 from qemu
2020-05-15 22:15:14 -04:00
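
For reference, the per-element semantics of the operation being vectorized are accumulate-the-absolute-difference; a scalar sketch for one element size (not the gvec helper itself):

    #include <stddef.h>
    #include <stdint.h>

    /* UABA: d[i] += |a[i] - b[i]| on unsigned elements (32-bit shown here);
     * SABA is the same operation with signed comparisons. */
    static void uaba32_sketch(uint32_t *d, const uint32_t *a,
                              const uint32_t *b, size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            d[i] += (a[i] >= b[i]) ? (a[i] - b[i]) : (b[i] - a[i]);
        }
    }
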
Richard Henderson 5d7c46204d target/arm: Create gen_gvec_[us]sra
The functions eliminate duplication of the special cases for
this operation. They match up with the GVecGen2iFn typedef.

Add out-of-line helpers. We got away with only having inline
expanders because the neon vector size is only 16 bytes, and
we know that the inline expansion will always succeed.
When we reuse this for SVE, tcg-gvec-op may decide to use an
out-of-line helper due to longer vector lengths.

Backports commit 631e565450c483e0622eec3d8b61d7fa41d16bca from qemu
2020-05-15 20:10:32 -04:00
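
The operation these expanders generate is shift-right-and-accumulate; per element it reduces to the following (a sketch for 64-bit elements and shift counts 1..63, not the gvec code):

    #include <stddef.h>
    #include <stdint.h>

    /* USRA: d[i] += a[i] >> shift with a logical (unsigned) shift. */
    static void usra64_sketch(uint64_t *d, const uint64_t *a,
                              unsigned shift, size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            d[i] += a[i] >> shift;
        }
    }

    /* SSRA is identical except the shift is arithmetic (signed operands). */
    static void ssra64_sketch(int64_t *d, const int64_t *a,
                              unsigned shift, size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            d[i] += a[i] >> shift;
        }
    }
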
Richard Henderson 07f622e57d tcg: Add tcg_gen_gvec_dup_imm
Add a version of tcg_gen_dup_* that takes both immediate and
a vector element size operand. This will replace the set of
tcg_gen_gvec_dup{8,16,32,64}i functions that encode the element
size within the function name.

Backports commit 44c94677febd15488f9190b11eaa4a08e8ac696b from qemu
2020-05-07 09:55:25 -04:00
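
The underlying building block is replicating an immediate across a word according to the element size, rather than having one entry point per size; a sketch of that idea (element sizes given in bytes here, which is an assumption of the example, not the TCG interface):

    #include <stdint.h>

    /* Replicate 'imm' across a 64-bit word for 1-, 2-, 4- or 8-byte elements. */
    static uint64_t dup_const_sketch(unsigned esize_bytes, uint64_t imm)
    {
        switch (esize_bytes) {
        case 1:  return (imm & 0xff) * 0x0101010101010101ull;
        case 2:  return (imm & 0xffff) * 0x0001000100010001ull;
        case 4:  return (imm & 0xffffffffull) * 0x0000000100000001ull;
        default: return imm;   /* 8-byte elements are already full width */
        }
    }
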
Thomas Huth 84f2729a29 target/arm: Make cpu_register() available for other files
Make cpu_register() (renamed to arm_cpu_register()) available
from internals.h so we can also register CPUs from other files
in the future.

Backports commit 37bcf244454f4efb82e2c0c64bbd7eabcc165a0c from qemu
2020-04-30 21:38:42 -04:00
Richard Henderson b26b4c06cd target/arm: Vectorize integer comparison vs zero
These instructions are often used in glibc's string routines.
They were the final uses of the 32-bit-at-a-time Neon helpers.

Backports commit 6b375d3546b009d1e63e07397ec9c6af256e15e9 from qemu
2020-04-30 21:29:17 -04:00
Richard Henderson fcce8d4aa1 target/arm: Convert PMULL.8 to gvec
We still need two different helpers, since NEON and SVE2 get the
inputs from different locations within the source vector. However,
we can convert both to the same internal form for computation.

The sve2 helper is not used yet, but adding it with this patch
helps illustrate why the neon changes are helpful.

Backports commit e7e96fc5ec8c79dc77fef522d5226ac09f684ba5 from qemu
2020-03-21 19:35:46 -04:00
Richard Henderson c00f72f74f target/arm: Convert PMULL.64 to gvec
The gvec form will be needed for implementing SVE2.

Backports commit b9ed510e46f2f9e31e5e8adb4661d5d1cbe9a459 from qemu
2020-03-21 19:27:38 -04:00
Richard Henderson db8a935b44 target/arm: Convert PMUL.8 to gvec
The gvec form will be needed for implementing SVE2.

Extend the implementation to operate on uint64_t instead of uint32_t.
Use a counted inner loop instead of terminating when op1 goes to zero,
looking toward the required implementation for ARMv8.4-DIT.

Backports commit a21bb78e5817be3f494922e1dadd6455fe5d6318 from qemu
2020-03-21 19:22:18 -04:00
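
The "counted inner loop" means iterating over all eight bits of the polynomial (carry-less) multiply instead of stopping early when op1 reaches zero, so the work done does not depend on the data values; a byte-wide sketch:

    #include <stdint.h>

    /* Low 8 bits of the carry-less product of two bytes. The loop always runs
     * exactly 8 iterations, regardless of the operand values. */
    static uint8_t pmul8_sketch(uint8_t op1, uint8_t op2)
    {
        uint8_t result = 0;
        for (int i = 0; i < 8; i++) {
            if (op1 & 1) {
                result ^= op2;
            }
            op1 >>= 1;
            op2 <<= 1;
        }
        return result;
    }
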
Richard Henderson d3139f2f0a target/arm: Vectorize USHL and SSHL
These instructions shift left or right depending on the sign
of the input, and 7 bits are significant to the shift. This
requires several masks and selects in addition to the actual
shifts to form the complete answer.

That said, the operation is still a small improvement even for
two 64-bit elements -- 13 vector operations instead of 2 * 7
integer operations.

Backports commit 87b74e8b6edd287ea2160caa0ebea725fa8f1ca1 from qemu
2020-03-21 19:14:17 -04:00
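
Per-element reference semantics for the operation being vectorized (a scalar sketch, not the 13-op vector expansion): the shift count is the signed value in the low byte of the second operand, with positive counts shifting left and negative counts shifting right.

    #include <stdint.h>

    /* USHL on one 64-bit element. SSHL differs only in that the right shift is
     * arithmetic, so out-of-range negative counts yield 0 or all-ones. */
    static uint64_t ushl64_sketch(uint64_t val, uint64_t shift)
    {
        int sh = (int)(shift & 0xff);   /* signed count lives in the low byte */
        if (sh >= 128) {
            sh -= 256;                  /* sign-extend the 8-bit field */
        }
        if (sh >= 64 || sh <= -64) {
            return 0;                   /* shifted entirely out: zero */
        }
        return sh >= 0 ? val << sh : val >> -sh;
    }
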
Richard Henderson 12b4e01d9c tcg: Add tcg_gen_gvec_5_ptr
Extend the vector generator infrastructure to handle
5 vector arguments.

Backports commit 2445971604c1cfd3ec484457159f4ac300fb04d2 from qemu
2020-03-21 16:54:01 -04:00
Richard Henderson d6150127b4 target/arm: Add the hypervisor virtual counter
Backports commit 8c94b071a09c2183f032febff3112f2b7662156c from qemu
2020-03-21 15:35:36 -04:00
Yongbok Kim 7fbc373f59 target/mips: Add implementation of GINVT instruction
Implement emulation of GINVT instruction. As QEMU doesn't support
caches and virtualization, this implementation covers only one
instruction (GINVT - Global Invalidate TLB) among all TLB-related
MIPS instructions.

Backports commit 99029be1c2875cd857614397674bbf563ddb6f91 from qemu
2020-03-21 13:01:35 -04:00
Yongbok Kim f10de71e73 target/mips: Amend CP0 WatchHi register implementation
WatchHi is extended by the field MemoryMapID with the GINVT instruction.
The field is accessible by MTHC0/MFHC0 in 32-bit architectures and DMTC0/
DMFC0 in 64-bit architectures.

Backports commit feafe82cc2289a31b3e3f11dc76f3539ea22d670 from qemu
2020-03-21 12:39:00 -04:00
Beata Michalska 0716794d86 Memory: Enable writeback for given memory region
Add an option to trigger memory writeback to sync a given memory region
with the corresponding backing store, in case one is available.
This extends the support for persistent memory, allowing syncing on demand.

Backports commit 61c490e25e081af39ff40556f6c1229b8b011585 from qemu
2020-01-14 07:44:24 -05:00
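
On POSIX hosts, syncing a file-backed mapping with its backing store comes down to msync(); a generic, hedged illustration of the mechanism (not the memory-region API this commit adds):

    #include <stddef.h>
    #include <sys/mman.h>

    /* Flush a file-backed host mapping to its backing store. */
    static int writeback_region_sketch(void *host_addr, size_t size)
    {
        return msync(host_addr, size, MS_SYNC);
    }
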
Beata Michalska 47776dc862 tcg: cputlb: Add probe_read
Add probe_read alongside the write probing equivalent.

Backports commit 9e70492b4389d4355ae9c9ee2ba6286cfdadc257 from qemu
2020-01-14 07:16:41 -05:00
David Hildenbrand d9d91c1db6 tcg: Factor out probe_write() logic into probe_access()
Let's also allow probing other access types.

Backports commit c25c283df0f08582df29f1d5d7be1516b851532d from qemu
2020-01-14 07:07:54 -05:00
Richard Henderson 07f30382c0 cputlb: Handle watchpoints via TLB_WATCHPOINT
The raising of exceptions from check_watchpoint, buried inside
of the I/O subsystem, is fundamentally broken. We do not have
the helper return address with which we can unwind guest state.

Replace PHYS_SECTION_WATCH and io_mem_watch with TLB_WATCHPOINT.
Move the call to cpu_check_watchpoint into the cputlb helpers
where we do have the helper return address.

This allows watchpoints on RAM to bypass the full i/o access path.

Backports commit 50b107c5d617eaf93301cef20221312e7a986701 from qemu
2020-01-14 06:58:33 -05:00
Tony Nguyen da98d0da4e memory: Access MemoryRegion with endianness
Preparation for collapsing the two byte swaps adjust_endianness and
handle_bswap into the former.

Call memory_region_dispatch_{read|write} with endianness encoded into
the "MemOp op" operand.

This patch does not change any behaviour as
memory_region_dispatch_{read|write} is yet to handle the endianness.

Once it does handle endianness, callers with byte swaps can collapse
them into adjust_endianness.

Backports commit d5d680cacc66ef7e3c02c81dc8f3a34eabce6dfe from qemu
2020-01-07 18:54:11 -05:00
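
A minimal sketch of the idea of carrying the endianness in the access descriptor so the dispatch layer can perform the single required byte swap (names here are hypothetical, not QEMU's MemOp definitions):

    #include <stdint.h>

    /* Hypothetical access descriptor: size and endianness travel together. */
    enum {
        OPS_SIZE_MASK = 0x3,   /* 0 = 1 byte, 1 = 2 bytes, 2 = 4 bytes */
        OPS_BSWAP     = 0x4,   /* data is opposite-endian to the host */
    };

    static uint32_t adjust_endianness_sketch(uint32_t val, int op)
    {
        if (!(op & OPS_BSWAP)) {
            return val;
        }
        switch (op & OPS_SIZE_MASK) {
        case 1:  return __builtin_bswap16((uint16_t)val);
        case 2:  return __builtin_bswap32(val);
        default: return val;   /* single bytes have no endianness */
        }
    }
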
Marc Zyngier 868de52f69 target/arm: Handle trapping to EL2 of AArch32 VMRS instructions
HCR_EL2.TID3 requires that AArch32 reads of MVFR[012] are trapped to
EL2, and HCR_EL2.TID0 does the same for reads of FPSID.
In order to handle this, introduce a new TCG helper function that
checks for these control bits before executing the VMRS instruction.

Tested with a hacked-up version of KVM/arm64 that sets the control
bits for 32-bit guests.

Backports commit 9ca1d776cb49c09b09579d9edd0447542970c834 from qemu
2020-01-07 18:04:16 -05:00
Peter Maydell 2faffb5af1 target/mips: Switch to do_transaction_failed() hook
Switch the MIPS target from the old unassigned_access hook to the new
do_transaction_failed hook.

Unlike the old hook, do_transaction_failed is only ever called from
the TCG memory access paths, so there is no need for the "ignore this
if we're using KVM" hack that we were previously using to work around
the way unassigned_access was called for all kinds of memory accesses
to unassigned physical addresses.

The MIPS target does not ever do direct memory reads by physical
address (via either ldl_phys etc or address_space_ldl etc), so the
only memory accesses this affects are the 'normal' guest loads and
stores, which will be handled by the new hook; their behaviour is
unchanged.

Backports commit 4f02a06d50ef0081089ed8cb3ec7c7986e3c95f8 from qemu
2019-11-28 02:54:53 -05:00
Richard Henderson 3d3d56056b target/arm: Remove helper_double_saturate
Replace x = double_saturate(y) with x = add_saturate(y, y).
There is no need for a separate more specialized helper.

Backports commit 640581a06d14e2d0d3c3ba79b916de6bc43578b0 from qemu
2019-11-18 20:13:21 -05:00
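
The equivalence is that doubling with saturation is just a saturating add of a value to itself; a 32-bit scalar sketch (the real helper also sets the saturation flag, which is omitted here):

    #include <stdint.h>

    /* Signed saturating add: clamps to INT32_MIN/INT32_MAX on overflow.
     * double_saturate(y) is then simply add_saturate(y, y). */
    static int32_t add_saturate_sketch(int32_t a, int32_t b)
    {
        int64_t sum = (int64_t)a + b;
        if (sum > INT32_MAX) {
            return INT32_MAX;
        }
        if (sum < INT32_MIN) {
            return INT32_MIN;
        }
        return (int32_t)sum;
    }
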
Mateja Marjanovic 9e8aed043e target/mips: Refactor and fix INSERT.<B|H|W|D> instructions
The old version of the helper for the INSERT.<B|H|W|D> MSA instructions
has been replaced with four helpers that don't use switch and that change
the endianness of the given index when executed on a big-endian host.

Backports commit c1c9a10fb1f7a6782711817c167a2c20b000fc12 from qemu
2019-05-28 19:42:28 -04:00
Mateja Marjanovic d6a8d25015 target/mips: Refactor and fix COPY_U.<B|H|W> instructions
The old version of the helper for the COPY_U.<B|H|W> MSA instructions
has been replaced with four helpers that don't use switch and that change
the endianness of the given index when executed on a big-endian host.

Backports commit 41d288582782cf8d63241ecb6efa1e4160fe78f7 from qemu
2019-05-28 19:39:22 -04:00
Mateja Marjanovic 54a33d1db3 target/mips: Refactor and fix COPY_S.<B|H|W|D> instructions
The old version of the helper for the COPY_S.<B|H|W|D> MSA instructions
has been replaced with four helpers that don't use switch and that change
the endianness of the given index when executed on a big-endian host.

Backports commit 631c467461496dcf6d6a3e4c3d27a1433e96868e from qemu
2019-05-28 19:36:14 -04:00