Commit graph

6977 commits

Author SHA1 Message Date
LIU Zhiwei 0554e79ad1 target/riscv: support vector extension csr
The v0.7.1 specification does not define vector status within mstatus.
A future revision will define the privileged portion of the vector status.

Backports 8e3a1f18871e0ea251b95561fe1ec5a9bc896c4a from qemu
2021-02-26 02:25:58 -05:00
LIU Zhiwei bff31d8822 target/riscv: implementation-defined constant parameters
vlen is the vector register length in bits.
elen is the max element size in bits.
vext_spec is the vector specification version; the default value is v0.7.1.
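
A rough standalone sketch of what these implementation-defined parameters
amount to (illustrative names, not the actual QEMU structures): a small
per-CPU configuration that must satisfy the spec's power-of-two constraints.

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct VectorConfig {
        uint16_t vlen;          /* vector register length in bits */
        uint16_t elen;          /* maximum element size in bits */
        const char *vext_spec;  /* vector spec version, e.g. "v0.7.1" */
    } VectorConfig;

    static bool vector_config_valid(const VectorConfig *c)
    {
        /* Both lengths must be powers of two, and an element cannot be
         * wider than a vector register. */
        return c->vlen && (c->vlen & (c->vlen - 1)) == 0 &&
               c->elen && (c->elen & (c->elen - 1)) == 0 &&
               c->elen <= c->vlen;
    }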

Backports 32931383270e2ca8209267ca99f23f3c5f780982 from qemu
2021-02-26 02:23:28 -05:00
LIU Zhiwei 0968caa249 target/riscv: add vector extension field in CPURISCVState
The 32 vector registers will be viewed as a contiguous memory block.
This avoids the conversion between element index and (regno, offset),
so elements can be accessed directly by offset from the first vector
base address.
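
A minimal standalone sketch of that layout (illustrative names and sizes,
not the QEMU code): with all 32 registers stored back to back, an element
is reached with a single flat offset from the base.

    #include <stdint.h>

    #define VLEN_BYTES 16                 /* assumes VLEN = 128 bits */
    static uint8_t vreg[32 * VLEN_BYTES]; /* all 32 vector registers, contiguous */

    static void *vreg_element(int regno, int idx, int esz_bytes)
    {
        /* One flat offset: register base plus element offset. */
        return &vreg[regno * VLEN_BYTES + idx * esz_bytes];
    }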

Backports ad9e5aa2ae8032f19a8293b6b8f4661c06167bf0 from qemu
2021-02-26 02:17:49 -05:00
Peter Maydell fceb5e309a Open 5.2 development tree
Backports commit 672b2f2695891b6d818bddc3ce0df964c7627969 from qemu
2021-02-25 23:52:17 -05:00
Peter Maydell 1f497fc74a Update version for v5.1.0 release
Backports commit d0ed6a69d399ae193959225cdeaa9382746c91cc from qemu
2021-02-25 23:51:51 -05:00
Peter Maydell 3c229a2b9e Update version for v5.1.0-rc3 release 2021-02-25 23:51:33 -05:00
Peter Maydell 0718459fb3 target/arm: Fix Rt/Rt2 in ESR_ELx for copro traps from AArch32 to 64
When a coprocessor instruction in an AArch32 guest traps to AArch32
Hyp mode, the syndrome register (HSR) includes Rt and Rt2 fields
which are simply copies of the Rt and Rt2 fields from the trapped
instruction. However, if the instruction is trapped from AArch32 to
an AArch64 higher exception level, the Rt and Rt2 fields in the
syndrome register (ESR_ELx) must be the AArch64 view of the register.
This makes a difference if the AArch32 guest was in a mode other than
User or System and it was using r13 or r14, or if it was in FIQ mode
and using r8-r14.

We don't know at translate time which AArch32 CPU mode we are in, so
the prototype syndrome value we generate at translate time keeps the
raw Rt/Rt2 from the instruction; we then correct them to the AArch64
view when we find we need to take an exception from AArch32 to AArch64
with one of these syndrome values.
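
A hedged sketch of that correction (not the QEMU implementation; only the
FIQ case from above is shown, following the Arm ARM's AArch32-to-AArch64
register correspondence, where R8_fiq..R14_fiq map to X24..X30):

    /* Map an AArch32 register number from a trapped instruction to the
     * AArch64 view required in ESR_ELx, for the FIQ-mode banked registers. */
    static int aarch32_to_aarch64_regnum(int rt, bool fiq_mode)
    {
        if (fiq_mode && rt >= 8 && rt <= 14) {
            return rt + 16;   /* R8_fiq..R14_fiq -> X24..X30 */
        }
        return rt;            /* User/System registers map 1:1 */
    }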

Fixes: https://bugs.launchpad.net/qemu/+bug/1879587

Backports commit a65dabf71a9f9b949d556b1b57fd72595df92398 from qemu
2021-02-25 23:50:18 -05:00
Peter Collingbourne 7de60dfa51 target/arm: Fix decode of LDRA[AB] instructions
These instructions use zero as the discriminator, not SP.

Backports commit d250bb19ced3b702c7c37731855f6876d0cc7995 from qemu
2021-02-25 23:47:25 -05:00
Kaige Li 3004cc1f97 target/arm: Avoid maybe-uninitialized warning with gcc 4.9
GCC version 4.9.4 isn't clever enough to figure out that all
execution paths in disas_ldst() that use 'fn' will have initialized
it first, and so it warns:

/home/LiKaige/qemu/target/arm/translate-a64.c: In function ‘disas_ldst’:
/home/LiKaige/qemu/target/arm/translate-a64.c:3392:5: error: ‘fn’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
fn(cpu_reg(s, rt), clean_addr, tcg_rs, get_mem_index(s),
^
/home/LiKaige/qemu/target/arm/translate-a64.c:3318:22: note: ‘fn’ was declared here
AtomicThreeOpFn *fn;
^

Make it happy by initializing the variable to NULL.
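
Concretely, the declaration shown in the warning just gains an initializer:

    AtomicThreeOpFn *fn = NULL;   /* silences gcc 4.9's maybe-uninitialized warning */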

Backports commit 88a90e3de6ae99cbcfcc04c862c51f241fdf685f from qemu
2021-02-25 23:45:13 -05:00
Richard Henderson ce8282d9cd target/arm: Fix AddPAC error indication
The definition of top_bit used in this function is one higher
than that used in the Arm ARM pseudo-code, which put the error
indication (at top_bit - 1) in the wrong place and meant that
it wasn't visible to Auth.

Fixing the definition of top_bit requires more changes, because
its most common use is for the count of bits in top_bit:bot_bit,
which would then need to be computed as top_bit - bot_bit + 1.

For now, prefer the minimal fix to the error indication alone.

Fixes: 63ff0ca94cb

Backports commit 8796fe40dd30cd9ffd3c958906471715c923b341 from qemu
2021-02-25 23:44:28 -05:00
Peter Maydell 4952920d4d Update version for v5.1.0-rc2 release
Backports commit 5772f2b1fc5d00e7e04e01fa28e9081d6550440a from qemu
2021-02-25 23:43:39 -05:00
Lioncash a1e8e0adff target/arm: Fix bad rebase within do_mem_zpz 2021-02-25 23:43:16 -05:00
Richard Henderson 5e1316a92e target/arm: Always pass cacheattr in S1_ptw_translate
When we changed the interface of get_phys_addr_lpae to require
the cacheattr parameter, this spot was missed. The compiler is
unable to detect the use of NULL vs the nonnull attribute here.

Fixes: 7e98e21c098

Backports commit a6d6f37aed4b171d121cd4a9363fbb41e90dcb53 from qemu
2021-02-25 23:40:32 -05:00
Laszlo Ersek 40c04c73b0 target/i386: floatx80: avoid compound literals in static initializers
Quoting ISO C99 6.7.8p4, "All the expressions in an initializer for an
object that has static storage duration shall be constant expressions or
string literals".

The compound literal produced by the make_floatx80() macro is not such a
constant expression, per 6.6p7-9. (An implementation may accept it,
according to 6.6p10, but is not required to.)

Therefore using "floatx80_zero" and make_floatx80() for initializing
"f2xm1_table" and "fpatan_table" is not portable. And gcc-4.8 in RHEL-7.6
actually chokes on them:

> target/i386/fpu_helper.c:871:5: error: initializer element is not constant
> { make_floatx80(0xbfff, 0x8000000000000000ULL),
> ^

We've had the make_floatx80_init() macro for this purpose since commit
3bf7e40ab914 ("softfloat: fix for C99", 2012-03-17), so let's use that
macro again.
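
A standalone illustration of the C99 rule (the macro bodies here are
simplified stand-ins, not copied from QEMU's softfloat headers):

    #include <stdint.h>

    typedef struct { uint64_t low; uint16_t high; } floatx80;

    #define make_floatx80(exp, mant)      ((floatx80){ (mant), (exp) })  /* compound literal */
    #define make_floatx80_init(exp, mant) { (mant), (exp) }              /* brace initializer */

    /* Not portable: a compound literal is not a constant expression (C99 6.6p7-9).
     * static const floatx80 bad = make_floatx80(0xbfff, 0x8000000000000000ULL);
     */

    /* Portable: a plain initializer list is valid for static storage duration. */
    static const floatx80 good = make_floatx80_init(0xbfff, 0x8000000000000000ULL);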

Fixes: eca30647fc0 ("target/i386: reimplement f2xm1 using floatx80 operations")
Fixes: ff57bb7b632 ("target/i386: reimplement fpatan using floatx80 operations")

Backports commit 163b3d1af2552845a60967979aca8d78a6b1b088 from qemu
2021-02-25 23:38:54 -05:00
Richard Henderson 6390789a09 target/i386: Save cc_op before loop insns
We forgot to update cc_op before these branch insns,
which led to losing track of the current eflags.

Buglink: https://bugs.launchpad.net/qemu/+bug/1888165

Backports commit 3cb3a7720b01830abd5fbb81819dbb9271bf7821 from qemu
2021-02-25 23:36:43 -05:00
Zong Li 001d2e6a29 target/riscv: Fix the range of pmpcfg of CSR function table
Backports commit 8ba26b0b2b00dd5849a6c0981e358dc7a7cc315d from qemu
2021-02-25 23:35:21 -05:00
Peter Maydell 08ce565d7c Update version for v5.1.0-rc1 release
Backports commit c8004fe6bbfc0d9c2e7b942c418a85efb3ac4b00 from qemu
2021-02-25 23:34:20 -05:00
Richard Henderson 55369d710c tcg: Save/restore vecop_list around minmax fallback
Forgetting this triggers an assertion failure when tcg_gen_cmp_vec is
called from within tcg_gen_cmpsel_vec.
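
The pattern of the fix, sketched with the names used by QEMU's TCG vector
expansion code (treat the exact signature and placement as assumptions):

    /* Clear the caller's required-opcode list before emitting the fallback,
     * so a nested tcg_gen_cmp_vec sees a consistent list, then restore it. */
    const TCGOpcode *hold_list = tcg_swap_vecop_list(NULL);
    /* ... expand the minmax fallback, which may call tcg_gen_cmpsel_vec ... */
    tcg_swap_vecop_list(hold_list);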

Fixes: 72b4c792c7a

Backports commit 69c918d2ef319ac63cd759c527debc2a2bdf3a0c from qemu
2021-02-25 23:33:24 -05:00
Chenyi Qiang e5d9e0ed53 target/i386: add fast short REP MOV support
For CPUs that support fast short REP MOV [CPUID.(EAX=7,ECX=0):EDX (bit 4)],
e.g. Icelake and Tigerlake, expose it to the guest VM.
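
Expressed as a feature-word define (the macro name follows QEMU's naming
convention and is stated here as an assumption):

    #define CPUID_7_0_EDX_FSRM (1U << 4)   /* Fast Short REP MOV */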

Backports commit 5cb287d2bd578dfe4897458793b4fce35bc4f744 from qemu
2021-02-25 23:31:42 -05:00
Peter Maydell 113dc25fbf Update version for v5.1.0-rc0 release
Backports commit 8746309137ba470d1b2e8f5ce86ac228625db940 from qemu
2021-02-25 23:30:37 -05:00
Aaron Lindsay e532ce610e target/arm: Don't do raw writes for PMINTENCLR
Raw writes to this register when in KVM mode can cause interrupts to be
raised (even when the PMU is disabled). Because the underlying state is
already aliased to PMINTENSET (which already provides raw write
functions), we can safely disable raw accesses to PMINTENCLR entirely.

Backports commit 887c0f1544991f567543b7c214aa11ab0cea0a29 from qemu
2021-02-25 23:27:47 -05:00
Richard Henderson f403c1f54f target/arm: Fix mtedesc for do_mem_zpz
The mtedesc that was constructed was not actually passed in.
Found by Coverity (CID 1429996).

Backports commit cdecb3fc1eb182d90666348a47afe63c493686e7 from qemu
2021-02-25 23:25:54 -05:00
Paolo Bonzini 5b794349d3 target/i386: implement undocumented 'smsw r32' behavior
In 32-bit mode, the higher 16 bits of the destination
register are undefined. In practice CR0[31:0] is stored,
just like in 64-bit mode, so just remove the "if" that
currently differentiates the behavior.

Backports commit c0c8445255b2b5b440c355431c8b01b7b7b7c8cf from qemu
2021-02-25 23:23:51 -05:00
Joseph Myers cf54c51869 target/i386: fix IEEE SSE floating-point exception raising
The SSE instruction implementations all fail to raise the expected
IEEE floating-point exceptions because they do nothing to convert the
exception state from the softfloat machinery into the exception flags
in MXCSR.

Fix this by adding such conversions. Unlike for x87, emulated SSE
floating-point operations might be optimized using hardware floating
point on the host, and so a different approach is taken that is
compatible with such optimizations. The required invariant is that
all exceptions set in env->sse_status (other than "denormal operand",
for which the SSE semantics are different from those in the softfloat
code) are ones that are set in the MXCSR; the emulated MXCSR is
updated lazily when code reads MXCSR, while when code sets MXCSR, the
exceptions in env->sse_status are set accordingly.

A few instructions do not raise all the exceptions that would be
raised by the softfloat code, and those instructions are made to save
and restore the softfloat exception state accordingly.

Nothing is done about "denormal operand"; setting that (only for the
case when input denormals are *not* flushed to zero, the opposite of
the logic in the softfloat code for such an exception) will require
custom code for relevant instructions, or else architecture-specific
conditionals in the softfloat code for when to set such an exception
together with custom code for various SSE conversion and rounding
instructions that do not set that exception.

Nothing is done about trapping exceptions (for which there is minimal
and largely broken support in QEMU's emulation in the x87 case and no
support at all in the SSE case).
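
A hedged sketch of the MXCSR-read direction of that invariant (not the
exact QEMU code; the helper name and bit handling are illustrative, though
the softfloat flag accessors shown do exist):

    static void update_mxcsr_from_sse_status(CPUX86State *env)
    {
        int flags = get_float_exception_flags(&env->sse_status);

        /* Fold softfloat's sticky flags into MXCSR's sticky exception bits. */
        env->mxcsr |= (flags & float_flag_invalid   ? 1 << 0 : 0)   /* IE */
                    | (flags & float_flag_divbyzero ? 1 << 2 : 0)   /* ZE */
                    | (flags & float_flag_overflow  ? 1 << 3 : 0)   /* OE */
                    | (flags & float_flag_underflow ? 1 << 4 : 0)   /* UE */
                    | (flags & float_flag_inexact   ? 1 << 5 : 0);  /* PE */
        /* "Denormal operand" (DE, bit 1) is deliberately not derived from
         * the softfloat flags, per the limitation described above. */
    }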

Backports commit 418b0f93d12a1589d5031405de857844f32e9ccc from qemu
2021-02-25 23:21:32 -05:00
Joseph Myers fd5b0dd456 target/i386: set SSE FTZ in correct floating-point state
The code to set floating-point state when MXCSR changes calls
set_flush_to_zero on &env->fp_status, so affecting the x87
floating-point state rather than the SSE state. Fix to call it for
&env->sse_status instead.
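
The fix is essentially a one-argument change; as a sketch (the MXCSR
flush-to-zero bit is bit 15; the surrounding code is illustrative):

    /* was: set_flush_to_zero(..., &env->fp_status);  -- the x87 state */
    set_flush_to_zero((env->mxcsr & (1 << 15)) != 0, &env->sse_status);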

Backports commit 3ddc0eca2229846bfecc3485648a6cb85a466dc7 from qemu
2021-02-25 23:15:53 -05:00
Laurent Vivier c15ddf11dd softfloat,m68k: disable floatx80_invalid_encoding() for m68k
According to the comment, this definition of invalid encoding is taken
from the Intel developer's manual and does not apply to the 680x0 FPU.

With m68k, the explicit integer bit can be zero in the case of:
- zeros (exp == 0, mantissa == 0)
- denormalized numbers (exp == 0, mantissa != 0)
- unnormalized numbers (exp != 0, exp < 0x7FFF)
- infinities (exp == 0x7FFF, mantissa == 0)
- not-a-numbers (exp == 0x7FFF, mantissa != 0)

For infinities and NaNs, the explicit integer bit can be either one or
zero.

The IEEE 754 standard does not define a zero integer bit. Such a number
is an unnormalized number. Hardware does not directly support
denormalized and unnormalized numbers, but implicitly supports them by
trapping them as unimplemented data types, allowing efficient conversion
in software.

See "M68000 FAMILY PROGRAMMER’S REFERENCE MANUAL",
"1.6 FLOATING-POINT DATA TYPES"

We will implement the FP_UNIMP exception in the m68k TCG emulator to
trap into the kernel so it can normalize the number. In the linux-user
case, the number will be normalized by QEMU.
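
The shape of the change, as a hedged sketch of the softfloat helper
(simplified; the Intel-rule test matches the description above):

    static inline bool floatx80_invalid_encoding(floatx80 a)
    {
    #if defined(TARGET_M68K)
        /* A clear explicit integer bit is a legal (unnormalized) 680x0 value. */
        return false;
    #else
        /* Intel rule: non-zero exponent with the explicit integer bit clear. */
        return (a.low & (1ULL << 63)) == 0 && (a.high & 0x7FFF) != 0;
    #endif
    }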

Backports commit d159dd058c7dc48a9291fde92eaae52a9f26a4d1 from qemu
2021-02-25 23:14:47 -05:00
Mark Cave-Ayland db742bec00 target/m68k: consolidate physical translation offset into get_physical_address()
Since all callers to get_physical_address() now apply the same page offset to
the translation result, move the logic into get_physical_address() itself to
avoid duplication.

Backports commit 852002b5664bf079da05c5201dbf2345b870e5ed from qemu
2021-02-25 23:13:48 -05:00
Mark Cave-Ayland 3b2bc4b0c8 target/m68k: fix physical address translation in m68k_cpu_get_phys_page_debug()
The result of the get_physical_address() function should be combined with the
offset of the original page access before being returned. Otherwise the
m68k_cpu_get_phys_page_debug() function can round to the wrong page causing
incorrect lookups in gdbstub and various "Disassembler disagrees with
translator over instruction decoding" warnings to appear at translation time.
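
A minimal sketch of the combination being described (the helper and its
names are illustrative, using QEMU-style page macros):

    static hwaddr combine_page_offset(hwaddr translated, vaddr addr)
    {
        /* Keep the page frame from the translation, the byte offset from
         * the original access. */
        return (translated & TARGET_PAGE_MASK) | (addr & ~TARGET_PAGE_MASK);
    }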

Fixes: 88b2fef6c3 ("target/m68k: add MC68040 MMU")
2021-02-25 23:12:12 -05:00
Richard Henderson 65d5288563 tcg: Fix do_nonatomic_op_* vs signed operations
The smin/smax/umin/umax operations require the operands to be
properly sign extended. Do not drop the MO_SIGN bit from the
load, and additionally extend the val input.
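
Why the sign matters, shown in isolation: the same 32-bit pattern orders
differently once it is treated as signed, which is what smin/smax require.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t a = 0xffffffffu, b = 1;
        printf("umin = %u\n", a < b ? a : b);                      /* 1 */
        printf("smin = %d\n",
               (int32_t)a < (int32_t)b ? (int32_t)a : (int32_t)b); /* -1 */
        return 0;
    }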

Backports commit 852f933e482518797f7785a2e017a215b88df815 from qemu
2021-02-25 23:10:40 -05:00
Richard Henderson 57c66389c2 target/arm: Fix temp double-free in sve ldr/str
The temp that gets assigned to clean_addr has been allocated with
new_tmp_a64, which means that it will be freed at the end of the
instruction. Freeing it earlier leads to assertion failure.

The loop creates a complication, in which we allocate a new local
temp, which does need freeing, and the final code path is shared
between the loop and non-loop.

Fix this complication by adding new_tmp_a64_local so that the new
local temp is freed at the end, and can be treated exactly like
the non-loop path.
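
A hedged sketch of the new helper (modeled on the existing new_tmp_a64 in
translate-a64.c; the exact bookkeeping fields are assumptions): it allocates
a local temp but registers it in the same per-instruction list, so it is
freed at the end of the instruction like any other temp.

    static TCGv_i64 new_tmp_a64_local(DisasContext *s)
    {
        assert(s->tmp_a64_count < TMP_A64_MAX);
        return s->tmp_a64[s->tmp_a64_count++] = tcg_temp_local_new_i64();
    }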

Fixes: bba87d0a0f4

Backports commit 4b4dc9750a0aa0b9766bd755bf6512a84744ce8a from qemu
2021-02-25 23:10:37 -05:00
Richard Henderson 54e2107bdf target/arm: Enable MTE
We now implement all of the components of MTE, without actually
supporting any tagged memory. All MTE instructions will work,
trivially, so we can enable support.

Backports commit c7459633baa71d1781fde4a245d6ec9ce2f008cf from qemu
2021-02-25 23:00:27 -05:00
Richard Henderson a34fda25b0 target/arm: Add allocation tag storage for system mode
Look up the physical address for the given virtual address,
convert that to a tag physical address, and finally return
the host address that backs it.

Backports commit e4d5bf4fbd5abfc3727e711eda64a583cab4d637 from qemu
2021-02-25 22:58:56 -05:00
Richard Henderson 9b6c64f8f8 target/arm: Create tagged ram when MTE is enabled
Backports commit 8bce44a2f6beb388a3f157652b46e99929839a96 from qemu
2021-02-25 22:51:23 -05:00
Richard Henderson 2ea0b53c1a target/arm: Cache the Tagged bit for a page in MemTxAttrs
This "bit" is a particular value of the page's MemAttr.

Backports commit 337a03f07ff0f9e6295662f4094e03a045b60bdc from qemu
2021-02-25 22:48:04 -05:00
Richard Henderson 28cd096d67 target/arm: Always pass cacheattr to get_phys_addr
We need to check the memattr of a page in order to determine
whether it is Tagged for MTE. Between Stage1 and Stage2,
this becomes simpler if we always collect this data, instead
of occasionally being presented with NULL.

Use the nonnull attribute to allow the compiler to check that
all pointer arguments are non-null.
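
A simplified illustration of the attribute in question (not QEMU's actual
prototype; names here are hypothetical):

    #include <stddef.h>

    void get_phys_addr_example(void *env, void *cacheattrs) __attribute__((nonnull));

    void caller(void *env)
    {
        get_phys_addr_example(env, NULL);   /* gcc/clang now warn: -Wnonnull */
    }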

Backports commit 7e98e21c09871cddc20946c8f3f3595e93154ecb from qemu
2021-02-25 22:46:00 -05:00
Richard Henderson e2456a83a4 target/arm: Set PSTATE.TCO on exception entry
D1.10 specifies that exception handlers begin with tag checks overridden.
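
In code terms, the effect on exception entry is roughly the following
(a sketch; PSTATE_TCO and the feature test are QEMU names, and the exact
placement in the entry path is an assumption):

    if (cpu_isar_feature(aa64_mte, cpu)) {
        new_mode |= PSTATE_TCO;   /* tag checks start overridden in the handler */
    }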

Backports commit 34669338bd9d66255fceaa84c314251ca49ca8d5 from qemu
2021-02-25 22:41:26 -05:00
Richard Henderson 35d0443056 target/arm: Implement data cache set allocation tags
This is DC GVA and DC GZVA, and the tag check for DC ZVA.

Backports commit eb821168db798302bd124a3b000cebc23bd0a395 from qemu
2021-02-25 22:40:08 -05:00
Richard Henderson 33f5bdabb1 target/arm: Complete TBI clearing for user-only for SVE
There are a number of paths by which the TBI is still intact
for user-only in the SVE helpers.

Because we currently always set TBI for user-only, we do not
need to pass down the actual TBI setting from above, and we
can remove the top byte in the inner-most primitives, so that
none are forgotten. Moreover, this keeps the "dirty" pointer
around at the higher levels, where we need it for any MTE checking.

Since the normal case, especially for user-only, goes through
RAM, this clearing merely adds two insns per page lookup, which
will be completely in the noise.

Backports commit c4af8ba19b9d22aac79cab679a20b159af9d6809 from qemu
2021-02-25 22:37:12 -05:00
Richard Henderson 732efce958 target/arm: Add mte helpers for sve scatter/gather memory ops
Because the elements are non-sequential, we cannot eliminate many
tests straight away like we can for sequential operations. But
we often have the PTE details handy, so we can test for Tagged.

Backports commit d28d12f008ee44dc2cc2ee5d8f673be9febc951e from qemu
2021-02-25 22:34:24 -05:00
Richard Henderson 5698b7badb target/arm: Handle TBI for sve scalar + int memory ops
We still need to handle TBI for user-only when MTE is inactive.

Backports commit 9473d0ecafcffc8b258892b1f9f18e037bdba958 from qemu
2021-02-25 22:17:46 -05:00
Richard Henderson 586235d02d target/arm: Add mte helpers for sve scalar + int ff/nf loads
Because the elements are sequential, we can eliminate many tests all
at once when the tag hits TCMA, or if the page(s) are not Tagged.

Backports commit aa13f7c3c378fa41366b9fcd6c29af1c3d81126a from qemu
2021-02-25 22:09:17 -05:00
Richard Henderson cb31d54b18 target/arm: Add mte helpers for sve scalar + int stores
Because the elements are sequential, we can eliminate many tests all
at once when the tag hits TCMA, or if the page(s) are not Tagged.

Backports commit 71b9f3948c75bb97641a3c8c7de96d1cb47cdc07 from qemu
2021-02-25 21:53:55 -05:00
Richard Henderson 670b25c5fa target/arm: Add mte helpers for sve scalar + int loads
Because the elements are sequential, we can eliminate many tests all
at once when the tag hits TCMA, or if the page(s) are not Tagged.

Backports commit 206adacfb8d35e671e3619591608c475aa046b63 from qemu
2021-02-25 21:45:32 -05:00
Richard Henderson 6a78133659 target/arm: Remove sve_memopidx
None of the sve helpers use TCGMemOpIdx any longer, so we can
stop passing it.

Backports commit ba080b8682fc6bde7f2d9dedddb519d63cbe138f from qemu
2021-02-25 21:33:44 -05:00
Richard Henderson 1f306230d4 target/arm: Reuse sve_probe_page for gather loads
Backports commit 10a85e2c8ab6e004e7f3f1dcfea8cb0bf58fb9fb from qemu
2021-02-25 21:30:13 -05:00
Richard Henderson 585da952ec target/arm: Reuse sve_probe_page for scatter stores
Backports commit 88a660a48ef513ce9875b595e19b2a820b3f3fca from qemu
2021-02-25 21:27:14 -05:00
Richard Henderson 3eee880c2a target/arm: Reuse sve_probe_page for gather first-fault loads
This avoids the need for a separate set of helpers to implement
no-fault semantics, and will enable MTE in the future.

Backports commit 50de9b78cec06e6d16e92a114a505779359ca532 from qemu
2021-02-25 21:22:16 -05:00
Richard Henderson b1e31f3bf3 target/arm: Use SVEContLdSt for contiguous stores
Follow the model set up for contiguous loads. This handles
watchpoints correctly for contiguous stores, recognizing the
exception before any changes to memory.

Backports commit 0fa476c1bb37a70df7eeff1e5bfb4791feb37e0e from qemu
2021-02-25 21:15:14 -05:00
Richard Henderson 3591c2f548 target/arm: Update contiguous first-fault and no-fault loads
With sve_cont_ldst_pages, the differences between first-fault and no-fault
are minimal, so unify the routines. With cpu_probe_watchpoint, we are able
to make progress through pages with TLB_WATCHPOINT set when the watchpoint
does not actually fire.

Backports commit c647673ce4d72a8789703c62a7f3cbc732cb1ea8 from qemu
2021-02-25 21:06:14 -05:00
Richard Henderson 6c9304448e target/arm: Use SVEContLdSt for multi-register contiguous loads
Backports commit 5c9b8458a0b3008d24d84b67e1c9b6d5f39f4d66 from qemu
2021-02-25 20:50:22 -05:00