Enable this on i386 to restrict the set of input registers
for an 8-bit store, as required by the architecture. This
removes the last use of scratch registers for user-only mode.
Backports 07ce0b05300de5bc8f1932a4cfbe38f3323e5ab1
The alias is intended to indicate that the bswap is for the
entire target_long. This should avoid ifdefs on some targets.
Backports a66424ba17d661007dc13d78c9e3014ccbaf0efb
In f47db80cc07, we handled odd-sized tail clearing for
the case of hosts that have vector operations, but did
not handle the case of hosts that do not have vector ops.
This was ok until e2e7168a214b, which changed the encoding
of simd_desc such that the odd sizes are impossible.
Add memset as a tcg helper, and use that for all out-of-line
byte stores to vectors. This includes, but is not limited to,
the tail clearing operation in question.
Backports 6d3ef04893bdea3e7aa08be3cce5141902836a31
To be able to compile this file with -Werror=implicit-fallthrough,
we need to add some fallthrough annotations to the case statements
that might fall through. Unfortunately, the typical "/* fallthrough */"
comments do not work here as expected since some case labels are
wrapped in macros and the compiler fails to match the comments in
this case. But using __attribute__((fallthrough)) seems to work fine,
so let's use that instead (by introducing a new QEMU_FALLTHROUGH
macro in our compiler.h header file).
Backports d84568b773fe1fc469c4d8419c3545be52eec82c
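As a rough illustration of the approach described above (the exact guards in compiler.h may differ), such a macro could be defined and used along these lines:

    /* Sketch only: the actual guards in compiler.h may differ. */
    #ifndef __has_attribute
    #define __has_attribute(x) 0          /* compatibility with older compilers */
    #endif

    #if __has_attribute(fallthrough)
    # define QEMU_FALLTHROUGH __attribute__((fallthrough))
    #else
    # define QEMU_FALLTHROUGH do {} while (0)
    #endif

    /* Usage: the macro replaces a fallthrough comment that the compiler
     * cannot match against macro-generated case labels. */
    static int classify(int op)
    {
        int r = 0;
        switch (op) {
        case 0:
            r += 1;
            QEMU_FALLTHROUGH;             /* intentionally also handle case 1 */
        case 1:
            r += 2;
            break;
        default:
            r = -1;
        }
        return r;
    }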
When the two arguments are identical, this can be reduced to
dup_vec or to mov_vec from a tcg_constant_vec.
Backports commit 1dc4fe70128db05237a00eda6eb15e2b44deb31f
The definition of INDEX_op_dupi_vec is that it operates on
units of tcg_target_ulong -- in this case 32 bits. It does
not work to use this for a uint64_t value that happens to be
small enough to fit in tcg_target_ulong.
Backports a5b30d950c42b14bc9da24d1e68add6538d23336
The previous change wrongly stated that 32-bit avx2 should have
used VPBROADCASTW. But that's a 16-bit broadcast and we want a
32-bit broadcast.
Backports f80d09b599a5e0fd7f44653f23b04104cb703f7a
These are easier to set and test when they have their own fields.
Reduce the size of alias_index and sort_index to 4 bits, which is
sufficient for TCG_MAX_OP_ARGS. This leaves only the bits indicating
constants within the ct field.
Move all initialization to allocation time, rather than initializing
individual fields in process_op_defs.
Backports bc2b17e6ea582ef3ade2bdca750de269c674c915
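For illustration, the reworked structure might look roughly like the sketch below; the field names follow the description above, while the exact widths, ordering and the TCGRegSet stand-in are assumptions:

    #include <stdbool.h>
    #include <stdint.h>

    typedef uint64_t TCGRegSet;            /* stand-in for the tcg type */

    typedef struct TCGArgConstraint {
        unsigned ct : 16;                  /* only the constant-indicating bits remain */
        unsigned alias_index : 4;          /* 4 bits suffice for TCG_MAX_OP_ARGS */
        unsigned sort_index : 4;
        bool oalias : 1;                   /* output that aliases an input */
        bool ialias : 1;                   /* input aliased by an output */
        bool newreg : 1;
        TCGRegSet regs;                    /* registers the operand may use */
    } TCGArgConstraint;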
This wasn't actually used for anything, really. All variable
operands must accept registers, which are indicated by the
set in TCGArgConstraint.regs.
Backports commit 74a117906b87ff9220e4baae5a7431d6f4eadd45
This uses an existing hole in the TCGArgConstraint structure
and will be convenient for keeping the data in one place.
Backports 66792f90f14fef18b25a168922877a367ecdca05
With larger vector sizes, it turns out oprsz == maxsz, and we only
need to represent mismatch for oprsz <= 32. We do, however, need
to represent larger oprsz and do so without reducing SIMD_DATA_BITS.
Reduce the size of the oprsz field and increase the maxsz field.
Steal the oprsz value of 24 to indicate equality with maxsz.
Backports e2e7168a214b0ed98dc357bba96816486a289762
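A minimal sketch of how such an encoding can work; the field width and helper names here are illustrative, not the actual simd_desc layout:

    #include <assert.h>
    #include <stdint.h>

    /* Encode oprsz into a 2-bit field.  8, 16 and 32 bytes are representable
     * directly; the otherwise-unused value 24 is stolen to mean "equal to
     * maxsz", which covers every larger vector size. */
    static uint32_t encode_oprsz(uint32_t oprsz, uint32_t maxsz)
    {
        if (oprsz == maxsz) {
            oprsz = 24;
        }
        assert(oprsz == 8 || oprsz == 16 || oprsz == 24 || oprsz == 32);
        return oprsz / 8 - 1;              /* 0..3 */
    }

    static uint32_t decode_oprsz(uint32_t field, uint32_t maxsz)
    {
        uint32_t oprsz = (field + 1) * 8;
        return oprsz == 24 ? maxsz : oprsz;
    }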
We already support duplication of 128-bit blocks. This extends
that support to 256-bit blocks. This will be needed by SVE2.
Backports commit fe4b0b5bfa96c38ad1cad0689a86cca9f307e353
The fallback inline expansion for vectorized absolute value,
used when the host doesn't support such an insn, was flawed.
E.g. when a vector of bytes has all elements negative, mask
will be 0xffff_ffff_ffff_ffff. Subtracting mask only adds 1
to the low element instead of all elements, because -mask is 1
and not 0x0101_0101_0101_0101.
Backports commit e7e8f33fb603c3bfa0479d7d924f2ad676a84317
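The arithmetic is easy to reproduce in plain C; this is a standalone illustration of the flaw, not the expander itself:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t vec  = 0x8080808080808080ull;  /* eight negative bytes */
        uint64_t mask = 0xffffffffffffffffull;  /* per-element sign mask */

        /* Flawed: a full-width subtract of mask is just "+ 1", which only
         * touches the low byte. */
        uint64_t bad  = (vec ^ mask) - mask;

        /* What is actually wanted is a per-element "- (-1)", i.e. adding
         * 0x0101_0101_0101_0101 after the xor. */
        uint64_t good = (vec ^ mask) + 0x0101010101010101ull;

        printf("flawed: %016llx  per-element: %016llx\n",
               (unsigned long long)bad, (unsigned long long)good);
        return 0;
    }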
In arm_tr_init_disas_context() we have a FIXME comment that suggests
"cpu_M0 can probably be the same as cpu_V0". This isn't in fact
possible: cpu_V0 is used as a temporary inside gen_iwmmxt_shift(),
and that function is called in various places where cpu_M0 contains a
live value (i.e. between gen_op_iwmmxt_movq_M0_wRn() and
gen_op_iwmmxt_movq_wRn_M0() calls). Remove the comment.
We also have a comment on the declarations of cpu_V0/V1/M0 which
claims they're "for efficiency". This isn't true with modern TCG, so
replace this comment with one which notes that they're only used with
the iwmmxt decoder.
Backports 8b4c9a50dc9531a729ae4b5941d287ad0422db48
The 32 vector registers will be viewed as a continuous memory block.
This avoids the conversion between an element index and a (regno, offset)
pair; elements can be accessed directly by their offset from the base
address of the first vector.
Backports ad9e5aa2ae8032f19a8293b6b8f4661c06167bf0 from qemu
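A hypothetical helper (names and parameters are illustrative) shows how direct offset addressing works once the registers form one flat block:

    #include <stddef.h>

    /* Byte offset of element 'idx' of vector register 'regno', given the
     * per-register size in bytes and the element size.  No (regno, offset)
     * conversion is needed beyond this single multiply-add. */
    static inline size_t vreg_elem_offset(unsigned regno, unsigned idx,
                                          size_t vlen_bytes, size_t elem_size)
    {
        return regno * vlen_bytes + idx * elem_size;
    }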
Forgetting this triggers an assertion when tcg_gen_cmp_vec is called
from within tcg_gen_cmpsel_vec.
Fixes: 72b4c792c7a
Backports commit 69c918d2ef319ac63cd759c527debc2a2bdf3a0c from qemu
The smin/smax/umin/umax operations require the operands to be
properly sign extended. Do not drop the MO_SIGN bit from the
load, and additionally extend the val input.
Backports commit 852f933e482518797f7785a2e017a215b88df815 from qemu
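A standalone C illustration of why dropping the sign extension breaks the signed minimum:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint8_t a = 0xff;   /* -1 as a signed byte */
        uint8_t b = 0x01;

        /* Zero-extended operands: 0xff compares as 255, so the "signed"
         * minimum comes out as 1. */
        int32_t wrong = (a < b) ? a : b;

        /* Properly sign-extended operands give the expected -1. */
        int32_t right = ((int8_t)a < (int8_t)b) ? (int8_t)a : (int8_t)b;

        printf("zero-extended: %d, sign-extended: %d\n", wrong, right);
        return 0;
    }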
If the output of the move is dead, then the last use is in
the store. If we propagate the input to the store, then we
can remove the move opcode entirely.
Backports commit 61f15c487fc2aea14f6b0e52c459ae8b7d252a65 from qemu
For immediates, we must continue the special casing of 8-bit
elements. The other element sizes and shift types are trivially
implemented with shifts.
Backports commit 885b1706df6f0211a22e120fac910fb3abf3e733 from qemu
No host backend support yet, but the interfaces for rotls
are in place. Only implement left-rotate for now, as the
only known use of vector rotate by scalar is s390x, so any
right-rotate would be unused and untestable.
Backports commit 23850a74afb641102325b4b7f74071d929fc4594 from qemu
We do not reflect this expansion in tcg_can_emit_vecop_list,
so it is unused and unusable. However, we actually perform
the same expansion in do_gvec_shifts, so it is also unneeded.
Backports commit 3d5bb2ea5cc9ed54f65a6929a6e6baa01cabd98b from qemu
No host backend support yet, but the interfaces for rotlv
and rotrv are in place.
Backports commit 5d0ceda902915e3f0e21c39d142c92c4e97c3ebb from qemu
No host backend support yet, but the interfaces for rotli
are in place. Canonicalize immediate rotate to the left,
based on a survey of architectures, but provide both left
and right shift interfaces to the translators.
Backports commit b0f7e7444c03da17e41bf327c8aea590104a28ab from qemu
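The equivalence that makes this canonicalization possible is simple to state in C; this is a sketch, not the tcg expander:

    #include <stdint.h>

    /* Rotate a 32-bit value left by c, 0 <= c < 32. */
    static inline uint32_t rotl32(uint32_t x, unsigned c)
    {
        return c ? (x << c) | (x >> (32 - c)) : x;
    }

    /* A right rotate by c is just a left rotate by (32 - c) mod 32, so the
     * right-rotate interface can be provided on top of the canonical left
     * rotate. */
    static inline uint32_t rotr32(uint32_t x, unsigned c)
    {
        return rotl32(x, (32 - c) & 31);
    }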
For the benefit of compatibility of function pointer types,
we have standardized on int32_t and int64_t as the integral
argument to tcg expanders.
We converted most of them in 474b2e8f0f7, but missed the rotates.
Backports commit 07dada0336a83002dfa8673a9220a88e13d9a45c from qemu
We have this same parameter for GVecGen2i, GVecGen3,
and GVecGen3i. This will make some SVE2 insns easier
to parameterize.
Backports commit ac09ae627e9a2c65c8a452b69c3dac33c29d0719 from qemu
For use when a target needs to pass a configure-specific
target_ulong value to duplicate.
Backports commit 0f039e3ad9131966d9fe509c231b756868b015e2 from qemu
Add a version of tcg_gen_dup_* that takes both immediate and
a vector element size operand. This will replace the set of
tcg_gen_gvec_dup{8,16,32,64}i functions that encode the element
size within the function name.
Backports commit 44c94677febd15488f9190b11eaa4a08e8ac696b from qemu
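A hedged usage sketch, assuming the combined interface is tcg_gen_gvec_dup_imm(vece, dofs, oprsz, maxsz, imm) and the usual qemu-internal headers are available:

    #include "tcg/tcg-op-gvec.h"   /* qemu-internal header, assumed */

    static void splat_bytes(uint32_t dofs, uint32_t oprsz, uint32_t maxsz)
    {
        /* Previously: tcg_gen_gvec_dup8i(dofs, oprsz, maxsz, 0xff); */
        tcg_gen_gvec_dup_imm(MO_8, dofs, oprsz, maxsz, 0xff);
    }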
OPC_SYNC_WMB, OPC_SYNC_MB, OPC_SYNC_ACQUIRE, OPC_SYNC_RELEASE and
OPC_SYNC_RMB have the wrong encoding. According to the MIPS manual,
their encoding should be 'OPC_SYNC | 0x?? << 6' rather than
'OPC_SYNC | 0x?? << 5'. The wrong encoding can lead to illegal-instruction
errors. These instructions often appear in multi-threaded
simulation.
Fixes: 6f0b99104a3 ("tcg/mips: Add support for fence")
Backports commit a4e57084c16d5b0eff3651693fba04f26b30b551 from qemu
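For reference, the corrected pattern would look roughly like the following, assuming the stype values from the MIPS manual (the stype field sits in instruction bits 10..6, hence the shift by 6):

    enum {
        OPC_SYNC         = 0x0000000f,            /* SPECIAL, function SYNC */
        OPC_SYNC_WMB     = OPC_SYNC | 0x04 << 6,  /* stype 0x04 */
        OPC_SYNC_MB      = OPC_SYNC | 0x10 << 6,  /* stype 0x10 */
        OPC_SYNC_ACQUIRE = OPC_SYNC | 0x11 << 6,  /* stype 0x11 */
        OPC_SYNC_RELEASE = OPC_SYNC | 0x12 << 6,  /* stype 0x12 */
        OPC_SYNC_RMB     = OPC_SYNC | 0x13 << 6,  /* stype 0x13 */
    };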
We were only constructing the 64-bit element, and not
replicating the 64-bit element across the rest of the vector.
Backports commit e20cb81d9c5a3d0f9c08f3642728a210a1c162c9 from qemu
A given RISU testcase for SVE can produce
tcg-op-vec.c:511: do_shifti: Assertion `i >= 0 && i < (8 << vece)' failed.
because expand_vec_sari gave a shift count of 32 to a MO_32
vector shift.
In 44f1441dbe1, we changed from direct expansion of vector opcodes
to re-use of the tcg expanders. So while the comment correctly notes
that the hw will handle such a shift count, we now have to take our
own sanity checks into account, which is easy in this particular case.
Fixes: 44f1441dbe1
Backports commit 312b426fea4d6dd322d7472c80010a8ba7a166d2 from qemu
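One way to satisfy the assertion, sketched here under the assumption that clamping before expansion is the chosen fix: for an arithmetic right shift, a count equal to the element width produces the same all-sign-bits result as width - 1.

    #include <stdint.h>

    /* Clamp an arithmetic-right-shift count so that it satisfies the
     * expander's 0 <= count < width assertion without changing the result. */
    static inline unsigned clamp_sari_count(unsigned count, unsigned elem_bits)
    {
        return count >= elem_bits ? elem_bits - 1 : count;
    }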
Preparation for collapsing the two byte swaps, adjust_endianness and
handle_bswap, along the I/O path.
Target-dependent attributes are conditionalized upon NEED_CPU_H.
Backports commit 14776ab5a12972ea439c7fb2203a4c15a09094b4 from qemu
This patch moves the define of target access alignment earlier from
target/foo/cpu.h to configure.
Suggested in Richard Henderson's reply to "[PATCH 1/4] tcg: TCGMemOp is now
accelerator independent MemOp"
Backports commit 52bf9771fdfce98e98cea36a17a18915be6f6b7f from qemu
This patch fixes two problems:
(1) The inputs to the EXTR insn were reversed,
(2) The input constraints use rZ, which means that we need to use
the REG0 macro in order to supply XZR for a constant 0 input.
Fixes: 464c2969d5d
Backports commit 1789d4274b851fb8fdf4a947ce5474c63e813d0d from qemu
On a 64-bit host, discard any replications of the 32-bit
sign bit when performing the shift and merge.
Backports commit 80f4d7c3ae216c191fb403e149bcba88d6aa40bb from qemu
This operation can always be emitted, even if we need to
fall back to xor. Adjust the assertions to match.
Backports commit 11978f6f58f1d3d66429f7ff897524f693d823ce from qemu
Amusingly, we had already ignored the comment to keep this value
at the end of CPUState. This restores the minimum negative offset
from TCG_AREG0 for code generation.
For the couple of uses within qom/cpu.c, without NEED_CPU_H, add
a pointer from the CPUState object to the IcountDecr object within
CPUNegativeOffsetState.
Backports commit 5e1401969b25f676fee6b1c564441759cf967a43 from qemu
Now that we have ArchCPU, we can define this generically,
in the one place that needs it.
Backports commit 677c4d69ac21961e76a386f9bfc892a44923acc0 from qemu
This allows immediates to be used for ORR and BIC,
as well as the trivial inversions, ORC and AND.
Backports commit 9e27f58b9902834dffc0d66d9eb62f78d9c2a632 from qemu
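The inversions are the usual identities, shown here as a standalone C check rather than the backend code:

    #include <assert.h>
    #include <stdint.h>

    static inline uint64_t bic(uint64_t x, uint64_t m) { return x & ~m; }  /* bit clear */
    static inline uint64_t orc(uint64_t x, uint64_t m) { return x | ~m; }  /* or complement */

    int main(void)
    {
        uint64_t x = 0x1234abcd5678ef00ull, imm = 0x00ff00ff00ff00ffull;

        /* AND with imm is BIC with the inverted immediate... */
        assert((x & imm) == bic(x, ~imm));
        /* ...and ORC with imm is ORR with the inverted immediate. */
        assert(orc(x, imm) == (x | ~imm));
        return 0;
    }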