Commit graph

4857 commits

Author SHA1 Message Date
Richard Henderson 68ce00ac2f
target/riscv: Split gen_arith_imm into functional and temp
The tcg_gen_fooi_tl functions have some immediate constant
folding built in, which matches up with some of the RISC-V
assembler built-in macros, like mv and not.

Backports commit 598aa1160c3d17ab9271daf1f69d093ebada3f25 from qemu
2019-05-28 19:07:53 -04:00
Richard Henderson a62b4e5def
target/riscv: Split RVC32 and RVC64 insns into separate files
This eliminates all functions in insn_trans/trans_rvc.inc.c,
so the entire file can be removed.

Backports commit 0e68e240a9bd3b44a91cd6012f0e2bf2a43b9fe2 from qemu
2019-05-28 19:00:23 -04:00
Richard Henderson a968769d26
target/riscv: Use pattern groups in insn16.decode
This eliminates about half of the complicated decode
bits within insn_trans/trans_rvc.inc.c.

Backports commit c2cfb97c01a3636867c1a4a24f8a99fd8c6bed28 from qemu
2019-05-28 18:55:28 -04:00
Richard Henderson 8360c1fa3b
target/riscv: Merge argument decode for RVC shifti
Special handling for IMM==0 is the only difference between
RVC shifti and RVI shifti. This can be handled with !function.

Backports commit 6cafec92f1c862a9754ef6a28be68ba7178a284d from qemu
2019-05-28 18:52:50 -04:00
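
A minimal sketch of the !function approach described in the commit above. The
helper name and the exact remapping of the zero encoding are assumptions based
on the message, not verified against the QEMU tree:

    /* decodetree !function expander: give the RVC zero shift-amount encoding
     * its special meaning, then reuse the common RVI shift translator. */
    static int ex_rvc_shifti(DisasContext *ctx, int imm)
    {
        /* The compressed encoding reserves shamt == 0; remap it here. */
        return imm ? imm : 64;
    }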
Richard Henderson dc087c4c0c
target/riscv: Merge argument sets for insn32 and insn16
In some cases this allows us to directly use the insn32
translator function. In some cases we still need a shim.

Backports commit e1d455dd91c935c714412dafeb24db947429a929 from qemu
2019-05-28 18:50:48 -04:00
Richard Henderson cb2ce66814
target/riscv: Use --static-decode for decodetree
The generated functions are only used within translate.c
and do not need to be global, or declared.

Backports commit 81770255581bd210c57b86a6e808628ab8d0c543 from qemu
2019-05-28 18:45:23 -04:00
Richard Henderson d51505f6e9
target/riscv: Name the argument sets for all of insn32 formats
Backports commit e761799796ac2211b9706753c459e117e7be58fa from qemu
2019-05-28 18:36:53 -04:00
Fabien Chouteau 7e6d37b51d
RISC-V: fix single stepping over ret and other branching instructions
This patch introduces wrappers around the tcg_gen_exit_tb() and
tcg_gen_lookup_and_goto_ptr() functions that handle single stepping,
i.e. call gen_exception_debug() when single stepping is enabled.

These functions are then used instead of the originals, bringing single
stepping handling to places where it was previously ignored, such as jalr
and system branch instructions (ecall, mret, sret, etc.).

Backports commit 6e2716d8ca4edf3597307accef7af36e8ad966eb from qemu
2019-05-28 18:35:07 -04:00
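
A sketch of the wrappers described above (assumed shapes, not the verbatim
patch): when single stepping is enabled they raise a debug exception instead
of leaving the TB through the normal exit/lookup paths.

    static void exit_tb(DisasContext *ctx)
    {
        if (ctx->base.singlestep_enabled) {
            gen_exception_debug();      /* the helper named in the message */
        } else {
            tcg_gen_exit_tb(NULL, 0);
        }
    }

    static void lookup_and_goto_ptr(DisasContext *ctx)
    {
        if (ctx->base.singlestep_enabled) {
            gen_exception_debug();
        } else {
            tcg_gen_lookup_and_goto_ptr();
        }
    }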
Jonathan Behrens 25c0333213
target/riscv: Do not allow sfence.vma from user mode
The 'sfence.vma' instruction is privileged, and should only ever be allowed
when executing in supervisor mode or higher.

Backports commit b86f4167630802128d94f3c89043d97d2f4c2546 from qemu
2019-05-28 18:29:46 -04:00
Richard Henderson 67f0af4282
tcg/aarch64: Allow immediates for vector ORR and BIC
This allows immediates to be used for ORR and BIC,
as well as the trivial inversions, ORC and AND.

Backports commit 9e27f58b9902834dffc0d66d9eb62f78d9c2a632 from qemu
2019-05-24 18:47:07 -04:00
Richard Henderson 5ecfba4fe6
tcg/aarch64: Build vector immediates with two insns
Use MOVI+ORR or MVNI+BIC in order to build some vector constants,
as opposed to dropping them to the constant pool. This includes
all 16-bit constants and a similar set of 32-bit constants.

Backports commit 02f3a5b4744885258758d07ebe09cf965de78bcf from qemu
2019-05-24 18:43:54 -04:00
Richard Henderson 06058ef648
tcg/aarch64: Use MVNI in tcg_out_dupi_vec
The complement of a subset of immediates can be computed
with a single instruction.

Backports commit 7e308e003e5b6ddd3130e09711e1d33693230696 from qemu
2019-05-24 18:42:40 -04:00
Richard Henderson c18ec586dc
tcg/aarch64: Split up is_fimm
There are several sub-classes of vector immediate, and only MOVI
can use them all. This will enable usage of MVNI and ORRI, which
use progressively fewer sub-classes.

This patch adds no new functionality, merely splits the function
and moves part of the logic into tcg_out_dupi_vec.

Backports commit 984fdcee342473dfe797897758929dad654693c8 from qemu
2019-05-24 18:41:37 -04:00
Richard Henderson 0ea4c05dc3
tcg/aarch64: Support vector bitwise select value
The instruction set has 3 insns that perform the same operation,
only varying in which operand must overlap the destination. We
can represent the operation without overlap and choose based on
the operands seen.

Backports commit a9e434a5dc16f71ee156428619fc3c3765b68f26 from qemu
2019-05-24 18:38:37 -04:00
Richard Henderson c79510378f
tcg/i386: Use umin/umax in expanding unsigned compare
Using umin(a, b) == a as an expansion for TCG_COND_LEU is a
better alternative to (a - INT_MIN) <= (b - INT_MIN).

Backports commit ebcfb91abed8c0fb180a968b9004419c208dcc02 from qemu
2019-05-24 18:36:32 -04:00
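
A scalar illustration of the identity behind this expansion; the vector form
applies it per element:

    #include <stdbool.h>
    #include <stdint.h>

    /* For unsigned a, b:  a <= b  <=>  umin(a, b) == a. */
    static bool leu_via_umin(uint32_t a, uint32_t b)
    {
        uint32_t m = a < b ? a : b;   /* unsigned minimum */
        return m == a;                /* equals (a <= b)  */
    }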
Richard Henderson ffdbc1a233
tcg/i386: Remove expansion for missing minmax
This is now handled by code within tcg-op-vec.c.

Backports commit 3ec3538a45f2fead475b0cca6945092c87927b4f from qemu
2019-05-24 18:34:44 -04:00
Richard Henderson 68cb096196
tcg/i386: Support vector comparison select value
We already had backend support for this feature. Expand the new
cmpsel opcode using vpblendvb. The combination allows us to avoid
an extra NOT for some comparison codes.

Backports commit 904c5e19672778cc3349f4975437cfdf3371abb6 from qemu
2019-05-24 18:33:16 -04:00
Richard Henderson a868533297
tcg: Add TCG_OPF_NOT_PRESENT if TCG_TARGET_HAS_foo is negative
If INDEX_op_foo is always expanded by tcg_expand_vec_op, then
there may be no reasonable set of constraints to return from
tcg_target_op_def for that opcode.

Let TCG_TARGET_HAS_foo be specified as -1 in that case. Thus a
boolean test for TCG_TARGET_HAS_foo is true, but we will not
assert within process_op_defs when no constraints are specified.

Compare this with tcg_can_emit_vec_op, which already uses this
tri-state indication.

Backports commit 25c012b4009256505be3430480954a0233de343e from qemu
2019-05-24 18:28:11 -04:00
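
An illustration of the tri-state convention described above; the opcode names
and values are placeholders, not taken from any particular backend:

    #define TCG_TARGET_HAS_foo_vec  -1  /* always expanded by tcg_expand_vec_op:
                                           boolean tests read as "supported", but
                                           no constraints are registered */
    #define TCG_TARGET_HAS_bar_vec   1  /* emitted directly by the backend */
    #define TCG_TARGET_HAS_baz_vec   0  /* not supported; generic fallback */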
Richard Henderson 568da655c6
tcg: Expand vector minmax using cmp+cmpsel
Provide a generic fallback for the min/max operations.

Backports commit 72b4c792c7a576d9246207a8e9a940ed9e191722 from qemu
2019-05-24 18:26:53 -04:00
Richard Henderson 56d35e80aa
tcg: Introduce do_op3_nofail for vector expansion
This makes do_op3 match do_op2 in allowing for failure,
and thus fall back expansions.

Backports commit 17f79944ebeace8bf43047a33b7775ba5ed9070e from qemu
2019-05-24 18:24:44 -04:00
Richard Henderson 2ea6dfbd63
tcg: Add support for vector compare select
Perform a per-element conditional move. This combination operation is
easier to implement on some host vector units than plain cmp+bitsel.
Omit the usual gvec interface, as this is intended to be used by
target-specific gvec expansion call-backs.

Backports commit f75da2988eb2457fa23d006d573220c5c680ec4e from qemu
2019-05-24 18:21:13 -04:00
Richard Henderson ca58be9cb4
tcg: Add support for vector bitwise select
This operation performs d = (b & a) | (c & ~a), and is present
on a majority of host vector units. Include gvec expanders.

Backports commit 38dc12947ec9106237f9cdbd428792c985cd86ae from qemu
2019-05-24 18:15:10 -04:00
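
A scalar reference for the bitwise select described above: each bit is taken
from b where a is 1 and from c where a is 0.

    #include <stdint.h>

    static uint64_t bitsel_ref(uint64_t a, uint64_t b, uint64_t c)
    {
        return (b & a) | (c & ~a);
    }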
Richard Henderson fa363c3d6d
tcg: Fix missing checks and clears in tcg_gen_gvec_dup_mem
The paths through tcg_gen_dup_mem_vec and through MO_128 were
missing the check_size_align. The path through MO_128 was also
missing the expand_clr. This last was not visible because the
only user is ARM SVE, which would set oprsz == maxsz, and not
require the clear.

Fix by adding the check_size_align and using do_dup directly
instead of duplicating the check in tcg_gen_gvec_dup_{i32,i64}.

Backports commit 532ba368a13712724137228b5e7e9435994d25e1 from qemu
2019-05-24 18:07:28 -04:00
Richard Henderson 60cfe541b2
tcg/i386: Fix dupi/dupm for avx1 and 32-bit hosts
The VBROADCASTSD instruction only allows %ymm registers as destination.
Rather than forcing VEX.L and writing to the entire 256-bit register,
revert to using MOVDDUP with an %xmm register. This is sufficient for
an avx1 host since we do not support TCG_TYPE_V256 for that case.

Also fix the 32-bit avx2, which should have used VPBROADCASTW.

Fixes: 1e262b49b533

Backports commit 7b60ef3264e9627ac6efb34e9a6130647e9b55c0 from qemu
2019-05-24 18:04:08 -04:00
Alistair Francis f8f3e50372
target/arm: Fix vector operation segfault
Commit 89e68b575 "target/arm: Use vector operations for saturation"
causes this abort() when booting QEMU ARM with a Cortex-A15:

0 0x00007ffff4c2382f in raise () at /usr/lib/libc.so.6
1 0x00007ffff4c0e672 in abort () at /usr/lib/libc.so.6
2 0x00005555559c1839 in disas_neon_data_insn (insn=<optimized out>, s=<optimized out>) at ./target/arm/translate.c:6673
3 0x00005555559c1839 in disas_neon_data_insn (s=<optimized out>, insn=<optimized out>) at ./target/arm/translate.c:6386
4 0x00005555559cd8a4 in disas_arm_insn (insn=4081107068, s=0x7fffe59a9510) at ./target/arm/translate.c:9289
5 0x00005555559cd8a4 in arm_tr_translate_insn (dcbase=0x7fffe59a9510, cpu=<optimized out>) at ./target/arm/translate.c:13612
6 0x00005555558d1d39 in translator_loop (ops=0x5555561cc580 <arm_translator_ops>, db=0x7fffe59a9510, cpu=0x55555686a2f0, tb=<optimized out>, max_insns=<optimized out>) at ./accel/tcg/translator.c:96
7 0x00005555559d10d4 in gen_intermediate_code (cpu=cpu@entry=0x55555686a2f0, tb=tb@entry=0x7fffd7840080 <code_gen_buffer+126091347>, max_insns=max_insns@entry=512) at ./target/arm/translate.c:13901
8 0x00005555558d06b9 in tb_gen_code (cpu=cpu@entry=0x55555686a2f0, pc=3067096216, cs_base=0, flags=192, cflags=-16252928, cflags@entry=524288) at ./accel/tcg/translate-all.c:1736
9 0x00005555558ce467 in tb_find (cf_mask=524288, tb_exit=1, last_tb=0x7fffd783e640 <code_gen_buffer+126084627>, cpu=0x1) at ./accel/tcg/cpu-exec.c:407
10 0x00005555558ce467 in cpu_exec (cpu=cpu@entry=0x55555686a2f0) at ./accel/tcg/cpu-exec.c:728
11 0x000055555588b0cf in tcg_cpu_exec (cpu=0x55555686a2f0) at ./cpus.c:1431
12 0x000055555588d223 in qemu_tcg_cpu_thread_fn (arg=0x55555686a2f0) at ./cpus.c:1735
13 0x000055555588d223 in qemu_tcg_cpu_thread_fn (arg=arg@entry=0x55555686a2f0) at ./cpus.c:1709
14 0x0000555555d2629a in qemu_thread_start (args=<optimized out>) at ./util/qemu-thread-posix.c:502
15 0x00007ffff4db8a92 in start_thread () at /usr/lib/libpthread.

This patch ensures that we don't hit the abort() in the second switch
case in disas_neon_data_insn() as we will return from the first case.

Backports commit 2f143d3ad1c05e91cf2cdf5de06d59a80a95e6c8 from qemu
2019-05-24 18:02:32 -04:00
Richard Henderson 9287750362
target/arm: Simplify BFXIL expansion
The mask implied by the extract is redundant with the one
implied by the deposit. Also, fix spelling of BFXIL.

Backports commit 87eb65a3c45c788a309986d48170a54a0d1c0705 from qemu
2019-05-24 18:01:26 -04:00
Richard Henderson 1778828644
target/arm: Use extract2 for EXTR
This is, after all, how we implement extract2 in tcg/aarch64.

Backports commit 80ac954c369e7e61bd1ed00cef07b63e11f9c734 from qemu
2019-05-24 17:58:58 -04:00
Richard Henderson 0412b3be8a
target/i386: Implement CPUID_EXT_RDRAND
We now have an interface for guest visible random numbers.

Backports commit 369fd5ca66810b2ddb16e23a497eabe59385eceb from qemu with
the actual RNG portion disabled for the time being.
2019-05-23 15:12:50 -04:00
Richard Henderson 3dd7358a53
target/arm: Implement ARMv8.5-RNG
Use the newly introduced infrastructure for guest random numbers.

Backports commit de390645675966cce113bf5394445bc1f8d07c85 from qemu

(with the actual RNG portion disabled to preserve determinism for the
time being).
2019-05-23 15:03:23 -04:00
Richard Henderson a8df33c37c
target/arm: Put all PAC keys into a structure
This allows us to use a single syscall to initialize them all.

Backports commit 108b3ba891408c4dce93df78261ec4aca38c0e2e from qemu
2019-05-23 14:54:06 -04:00
Paolo Bonzini 1341e371a8
target/i386: add MDS-NO feature
Microarchitectural Data Sampling is a hardware vulnerability which allows
unprivileged speculative access to data which is available in various CPU
internal buffers.

Some Intel processors use the ARCH_CAP_MDS_NO bit in the
IA32_ARCH_CAPABILITIES MSR to report that they are not vulnerable;
make it available to guests.

Backports commit 20140a82c67467f53814ca197403d5e1b561a5e5 from qemu
2019-05-23 14:49:20 -04:00
Paolo Bonzini a4f2517f46
target/i386: define md-clear bit
md-clear is a new CPUID bit which is set when microcode provides the
mechanism to invoke a flush of various exploitable CPU buffers by invoking
the VERW instruction.

Backports commit b2ae52101fca7f9547ac2f388085dbc58f8fe1c0 from qemu
2019-05-23 14:48:18 -04:00
Philippe Mathieu-Daudé 4a1b8d64bd
target/m68k: Optimize rotate_x() using extract_i32()
Optimize rotate_x() using tcg_gen_extract_i32(). We can now free the
'sz' tcg_temp earlier. Since it is allocated with tcg_const_i32(),
free it with tcg_temp_free_i32().

Backports commit 60d3d0cfeb1658d2827d6a4f0df27252bb36baba from qemu
2019-05-17 12:07:07 -04:00
Philippe Mathieu-Daudé 3c6cb445a0
target/m68k: Fix a tcg_temp leak
The function gen_get_ccr() returns a tcg_temp created with
tcg_temp_new(). Free it with tcg_temp_free().

Backports commit 44c64e90950adf9efe7f4235a32eb868d1290ebb from qemu
2019-05-17 12:05:11 -04:00
Philippe Mathieu-Daudé a72a53c2d9
target/m68k: Reduce the l1 TCGLabel scope
Backports commit 89fa312be0dfd8b4c539c8763796e785c6b00b46 from qemu
2019-05-17 12:03:37 -04:00
Peter Maydell 3fb64fd5a2
target/m68k: Switch to transaction_failed hook
Switch the m68k target from the old unassigned_access hook
to the transaction_failed hook.

The notable difference is that rather than it being called
for all physical memory accesses which fail (including
those made by DMA devices or by the gdbstub), it is only
called for those made by the CPU via its MMU. (In previous
commits we put in explicit checks for the direct physical
loads made by the target/m68k code which will no longer
be handled by calling the unassigned_access hook.)

Backports commit e1aaf3a88e95ab007445281e2b2f6e3c8da47f22 from qemu
2019-05-17 12:01:40 -04:00
Peter Maydell ab63f1a102
target/m68k: In get_physical_address() check for memory access failures
In get_physical_address(), use address_space_ldl() and
address_space_stl() instead of ldl_phys() and stl_phys().
This allows us to check whether the memory access failed.
For the moment, we simply return -1 in this case;
add a TODO comment that we should ideally generate the
appropriate kind of fault.

Backports commit adcf0bf017351776510121e47b9226095836023c from qemu
2019-05-17 11:59:01 -04:00
Richard Henderson 2a4a7b9391
tcg: Use tlb_fill probe from tlb_vaddr_to_host
Most of the existing users would continue around a loop which
would fault the tlb entry in via a normal load/store.

But for AArch64 SVE we have an existing emulation bug wherein we
would mark the first element of a no-fault vector load as faulted
(within the FFR, not via exception) just because we did not have
its address in the TLB. Now we can properly only mark it as faulted
if there really is no valid, readable translation, while still not
raising an exception. (Note that beyond the first element of the
vector, the hardware may report a fault for any reason whatsoever;
with at least one element loaded, forward progress is guaranteed.)

Backports commit 4811e9095c0491bc6f5450e5012c9c4796b9e59d from qemu
2019-05-16 18:27:03 -04:00
Laurent Vivier 8cdfed1032
linux-user: fix 32bit g2h()/h2g()
sparc32plus has 64bit long type but only 32bit virtual address space.

For instance, "apt-get upgrade" failed because of a mmap()/msync()
sequence.

mmap() returned 0xff252000 but msync() used g2h(0xffffffffff252000)
to find the host address. The "(target_ulong)" in g2h() doesn't fix the
address because it is 64bit long.

This patch introduces an "abi_ptr" that is set to uint32_t
if the virtual address space is addressed using 32 bits in the linux-user
case. It stays set to target_ulong in the softmmu case.

Backports commit 3e23de15237c81fe7af7c3ffa299a6ae5fec7d43 from qemu
2019-05-16 18:20:55 -04:00
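
A rough sketch of the abi_ptr idea (simplified; the real definition lives in
QEMU's cpu_ldst.h and the exact guards may differ):

    #if defined(CONFIG_USER_ONLY) && TARGET_VIRT_ADDR_SPACE_BITS <= 32
    typedef uint32_t abi_ptr;       /* guest pointers stay 32-bit even though
                                       target_ulong (and host long) are 64-bit */
    #else
    typedef target_ulong abi_ptr;   /* softmmu keeps the full target width */
    #endif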
Richard Henderson e736ef3238
tcg: Remove CPUClass::handle_mmu_fault
This hook is now completely replaced by tlb_fill.

Backports commit 69963f5709a0645934c169784820d0bee22208ba from qemu
2019-05-16 18:12:17 -04:00
Lioncash fcaa52c1fe
tcg: Synchronize with qemu
Resolves any formatting discrepancies and bad merges that slipped
through.
2019-05-16 18:11:08 -04:00
Richard Henderson dab0061a0d
tcg: Use CPUClass::tlb_fill in cputlb.c
We can now use the CPUClass hook instead of a named function.

Create a static tlb_fill function to avoid other changes within
cputlb.c. This also isolates the asserts within. Remove the
named tlb_fill function from all of the targets.

Backports commit c319dc13579a92937bffe02ad2c9f1a550e73973 from qemu
2019-05-16 17:35:37 -04:00
Richard Henderson 5d83199931
target/sparc: Convert to CPUClass::tlb_fill
Backports commit e84942f2ceaa79430414f2cb68d77c044dadca96 from qemu
2019-05-16 17:29:35 -04:00
Richard Henderson e98c731550
target/riscv: Convert to CPUClass::tlb_fill
Note that env->pc is removed from the qemu_log as that value is garbage.
The PC isn't recovered until cpu_restore_state, called from
cpu_loop_exit_restore, called from riscv_raise_exception.

Backports commit 8a4ca3c10a96be6ed7f023b685b688c4d409bbcb from qemu
2019-05-16 17:24:01 -04:00
Richard Henderson 14d48974a4
target/mips: Convert to CPUClass::tlb_fill
Note that env->active_tc.PC is removed from the qemu_log as that value
is garbage. The PC isn't recovered until cpu_restore_state, called from
cpu_loop_exit_restore, called from do_raise_exception_err.

Backports commit 931d019f5b2e7bbacb162869497123be402ddd86 from qemu
2019-05-16 17:19:47 -04:00
Richard Henderson 49cb8cfe5b
target/mips: Tidy control flow in mips_cpu_handle_mmu_fault
Since the only non-negative TLBRET_* value is TLBRET_MATCH,
the subsequent test for ret < 0 is useless. Use early return
to allow subsequent blocks to be unindented.

Backports commit e38f4eb63020075432cb77bf48398187809cf4a3 from qemu
2019-05-16 17:15:33 -04:00
Richard Henderson f175e89ca2
target/mips: Pass a valid error to raise_mmu_exception for user-only
At present we give ret = 0, or TLBRET_MATCH. This gets matched
by the default case, which falls through to TLBRET_BADADDR.
However, it makes more sense to use a proper value. All of the
tlb-related exceptions are handled identically in cpu_loop.c,
so TLBRET_BADADDR is as good as any other. Retain it.

Backports commit 995ffde9622c01f5b307cab47f9bd7962ac09db2 from qemu
2019-05-16 17:14:02 -04:00
Richard Henderson 52998fe46d
target/m68k: Convert to CPUClass::tlb_fill
Backports commit fe5f7b1b3a2317f598687218c348b54e02a75e1f from qemu
2019-05-16 17:12:41 -04:00
Richard Henderson fe9ac6e1c4
target/i386: Convert to CPUClass::tlb_fill
We do not support probing, but we do not need it yet either.

Backports commit 5d0044212c375c0696baef7bba13699277dac5b5 from qemu
2019-05-16 17:08:14 -04:00
Richard Henderson 31ecdb5341
target/arm: Convert to CPUClass::tlb_fill
Backports commit 7350d553b5066abdc662045d7db5cdb73d0f9d53 from qemu
2019-05-16 16:55:12 -04:00
Richard Henderson 1f30062c41
tcg: Add CPUClass::tlb_fill
This hook will replace the (user-only mode specific) handle_mmu_fault
hook, and the (system mode specific) tlb_fill function.

The handle_mmu_fault hook was written as if there was a valid
way to recover from an mmu fault, and had 3 possible return states.
In reality, the only valid action is to raise an exception,
return to the main loop, and deliver the SIGSEGV to the guest.

Note that all of the current implementations of handle_mmu_fault
for guests which support linux-user do in fact only ever return 1,
which is the signal to return to the main loop.

Using the hook for system mode requires that all targets be converted,
so for now the hook is (optionally) used only from user-only mode.

Backports commit da6bbf8513e621a8fc2fd315d77318f36547474d from qemu
2019-05-16 16:46:19 -04:00
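
For reference, the hook described above takes roughly this shape (reproduced
from the backported commit as best remembered; treat the parameter list as
approximate):

    /* Returns true on success. With probe set it may return false when no
     * translation exists; otherwise it must raise the guest exception and
     * not return. */
    bool (*tlb_fill)(CPUState *cpu, vaddr address, int size,
                     MMUAccessType access_type, int mmu_idx,
                     bool probe, uintptr_t retaddr);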
Richard Henderson de260cfbd6
tcg/aarch64: Do not advertise minmax for MO_64
The min/max instructions are not available for 64-bit elements.

Backports commit a7b6d286cfb5205b9f5330aefc5727269b3d810f from qemu
2019-05-16 16:44:34 -04:00
Richard Henderson 552e48f14e
target/arm: Use tcg_gen_abs_i64 and tcg_gen_gvec_abs
Backports commit 4e027a710673f5d4dc6cff88728bcfd32e4c47b0 from qemu
2019-05-16 16:43:02 -04:00
Richard Henderson 7c9b3a9021
tcg/aarch64: Support vector absolute value
Backports commit a456394ae540f852cd0d10fd693fe9f33598dc01 from qemu
2019-05-16 16:39:14 -04:00
Richard Henderson fd35490991
tcg/i386: Support vector absolute value
Backports commit 18f9b65f1a4225dd314cb9b0a8dea968c5bc2ef3 from qemu
2019-05-16 16:37:33 -04:00
Richard Henderson 6d5e7856ff
tcg: Add support for vector absolute value
Backports commit bcefc90208f8a1d6f619d61c2647281d92277015 from qemu
2019-05-16 16:33:43 -04:00
Richard Henderson 6d1730048d
tcg: Add support for integer absolute value
Remove a function of the same name from target/arm/.
Use a branchless implementation of abs gleaned from gcc.

Backports commit ff1f11f7f8710a768f9313f24bd7f509d3db27e5 from qemu
2019-05-16 16:25:15 -04:00
Richard Henderson 18b3df6e4e
tcg/i386: Support vector scalar shift opcodes
Backports commit 0a8d7a3bf5a149a82450eef555fd61728703dd84 from qemu
2019-05-16 16:19:44 -04:00
Richard Henderson 79b9dc559e
tcg: Add gvec expanders for vector shift by scalar
Allow expansion either via shift by scalar or by replicating
the scalar for shift by vector.

Backports commit b4578cd91cda4cef1c413304353ca6dc5b957b60 from qemu
2019-05-16 16:17:58 -04:00
Richard Henderson 0217ee7b24
tcg/aarch64: Support vector variable shift opcodes
Backports commit 79525dfd08262d8de10d271f17e5a4096ef96d16 from qemu
2019-05-16 15:58:54 -04:00
Richard Henderson f793ec847d
tcg/i386: Support vector variable shift opcodes
Backports commit a2ce146a06807fe1d1a81e878b8f249ff1e14038 from qemu
2019-05-16 15:53:33 -04:00
Richard Henderson 8c17687934
tcg: Add gvec expanders for variable shift
The gvec expanders perform a modulo on the shift count. If the target
requires alternate behaviour, then it cannot use the generic gvec
expanders anyway, and will have to have its own custom code.

Backports commit 5ee5c14cacda27e904cd6b0d9e7ffe1acff42838 from qemu
2019-05-16 15:51:09 -04:00
Richard Henderson 66e6bea084
tcg: Add INDEX_op_dupm_vec
Allow the backend to expand dup from memory directly, instead of
forcing the value into a temp first. This is especially important
if integer/vector register moves do not exist.

Note that officially tcg_out_dupm_vec is allowed to fail.
If it did, we could fix this up relatively easily:

VECE == 32/64:
Load the value into a vector register, then dup.
Both of these must work.

VECE == 8/16:
If the value happens to be at an offset such that an aligned
load would place the desired value in the least significant
end of the register, go ahead and load w/garbage in high bits.

Load the value w/INDEX_op_ld{8,16}_i32.
Attempt a move directly to vector reg, which may fail.
Store the value into the backing store for OTS.
Load the value into the vector reg w/TCG_TYPE_I32, which must work.
Duplicate from the vector reg into itself, which must work.

All of which is well and good, except that all supported
hosts can support dupm for all vece, so all of the failure
paths would be dead code and untestable.

Backports commit 37ee55a081b7863ffab2151068dd1b2f11376914 from qemu
2019-05-16 15:38:02 -04:00
Richard Henderson fd7a67e4a7
tcg/aarch64: Implement tcg_out_dupm_vec
The LD1R instruction does all the work. Note that the only
useful addressing mode is a base register with no offset.

Backports commit f23e5e15edfd49d5dd72cab2ed2d85ac354b2eeb from qemu
2019-05-16 15:29:04 -04:00
Richard Henderson a6fd4e2345
tcg/i386: Implement tcg_out_dupm_vec
At the same time, improve tcg_out_dupi_vec wrt broadcast
from the constant pool.

Backports commit 1e262b49b5331441f697461e4305fe06719758a7 from qemu
2019-05-16 15:27:15 -04:00
Richard Henderson d4e7c6a8c5
tcg: Add tcg_out_dupm_vec to the backend interface
Currently stubbed out in all backends that support vectors.

Backports commit d6ecb4a978b718dbe108a9fa9ecccc8b7f7cb579 from qemu
2019-05-16 15:24:48 -04:00
Richard Henderson cf238d3544
tcg: Manually expand INDEX_op_dup_vec
This case is similar to INDEX_op_mov_* in that we need to do
different things depending on the current location of the source.

Backports commit bab1671f0fa928fd678a22f934739f06fd5fd035 from qemu
2019-05-16 15:22:29 -04:00
Richard Henderson 3d20e1678c
tcg: Promote tcg_out_{dup,dupi}_vec to backend interface
The i386 backend already has these functions, and the aarch64 backend
could easily split out one. Nothing is done with these functions yet,
but this will aid register allocation of INDEX_op_dup_vec in a later patch.

Adjust the aarch64 tcg_out_dupi_vec signature to match the new interface.

Backports commit e7632cfa8b76cdbbc1c76e8737338ef5844e7d60 from qemu
2019-05-16 15:18:48 -04:00
Richard Henderson d58d9ad16e
tcg: Support cross-class moves without instruction support
PowerPC Altivec does not support direct moves between vector registers
and general registers. So when tcg_out_mov fails, we can use the
backing memory for the temporary to perform the move.

Backports commit 240c08d0998f402c325fce489de0d14831048128 from qemu
2019-05-16 15:16:23 -04:00
Richard Henderson f86bd1c5d6
tcg: Return bool success from tcg_out_mov
This patch merely changes the interface, aborting on all failures,
of which there are currently none.

Backports commit 78113e83e0007e869c9f0cb4c0497a77538988e3 from qemu
2019-05-16 15:14:42 -04:00
Richard Henderson f7d9ee8451
tcg/arm: Use tcg_out_mov_reg in tcg_out_mov
We have a function that takes an additional condition parameter
over the standard backend interface. It already takes care of
eliding no-op moves.

Backports commit c16f52b2c5d91c36e121795bd3b386cea0b7573c from qemu
2019-05-16 15:10:52 -04:00
Richard Henderson fef5700c9c
tcg: Assert fixed_reg is read-only
The only fixed_reg is cpu_env, and it should not be modified
during any TB. Therefore code that tries to special-case moves
into a fixed_reg is dead. Remove it.

Backports commit d63e3b6e694ad6c887be135dddb9cd4893f1a844 from qemu
2019-05-16 15:09:37 -04:00
Richard Henderson c54b2776f6
tcg: Specify optional vector requirements with a list
Replace the single opcode in .opc with a null-terminated
array in .opt_opc. We still require that all opcodes be
used with the same .vece.

Validate the contents of this list with CONFIG_DEBUG_TCG.
All tcg_gen_*_vec functions will check any list active
during .fniv expansion. Swap the active list in and out
as we expand other opcodes, or take control away from the
front-end function.

Convert all existing vector aware front ends.

Backports commit 53229a7703eeb2bbe101a19a33ef22aaf960c65b from qemu
2019-05-16 15:05:02 -04:00
Richard Henderson 37762fd92b
tcg: Allow add_vec, sub_vec, neg_vec, not_vec to be expanded
PowerPC Altivec does not support add and subtract of 64-bit elements.
Prepare for that configuration by not assuming the operation is
universally supported.

Backports commit ce27c5d1a38e93da38653af71fb468c5eded4c7b from qemu
2019-05-16 14:33:18 -04:00
Richard Henderson 9a9b681b38
tcg: Do not recreate INDEX_op_neg_vec unless supported
Use tcg_can_emit_vec_op instead of just TCG_TARGET_HAS_neg_vec,
so that we check the type and vece for the actual operation.

Backports commit ac383dde33405106469d04a78de1d76f1a730cb1 from qemu
2019-05-16 14:28:41 -04:00
David Hildenbrand f3b4a64d27
tcg: Implement tcg_gen_gvec_3i()
Let's add tcg_gen_gvec_3i(), similar to tcg_gen_gvec_2i(), however
without introducing "gen_helper_gvec_3i *fnoi", as it isn't needed
for now.

Backports commit e1227bb6e59173117f094a6a13b998587b45c928 from qemu
2019-05-16 14:26:50 -04:00
Markus Armbruster 1b2c8c44d5
Clean up ill-advised or unusual header guards
Leading underscores are ill-advised because such identifiers are
reserved. Trailing underscores are merely ugly. Strip both.

Our header guards commonly end in _H. Normalize the exceptions.

Done with scripts/clean-header-guards.pl.

Backports commit a8b991b52dcde75ab5065046653626951aac666d from qemu
2019-05-14 08:02:53 -04:00
Richard Henderson 9a02741c13
cputlb: Do unaligned store recursion to outermost function
This is less tricky than for loads, because we always fall
back to single byte stores to implement unaligned stores.

Backports commit 4601f8d10d7628bcaf2a8179af36e04b42879e91 from qemu
2019-05-14 07:45:15 -04:00
Richard Henderson bcab6f1719
cputlb: Do unaligned load recursion to outermost function
If we attempt to recurse from load_helper back to load_helper,
even via intermediary, we do not get all of the constants
expanded away as desired.

But if we recurse back to the original helper (or a shim that
has a consistent function signature), the operands are folded
away as desired.

Backports commit 2dd926067867c2dd19e66d31a7990e8eea7258f6 from qemu
2019-05-14 07:43:31 -04:00
Richard Henderson f12f36aebd
cputlb: Drop attribute flatten
Going to approach this problem via __attribute__((always_inline))
instead, but full conversion will take several steps.

Backports commit fc1bc777910dc14a3db4e2ad66f3e536effc297d from qemu
2019-05-14 07:33:39 -04:00
Richard Henderson 7991cd601f
cputlb: Move TLB_RECHECK handling into load/store_helper
Having this in io_readx/io_writex meant that we forgot to
re-compute index after tlb_fill. It also means we can use
the normal aligned memory load path. It also fixes a bug
in that we had cached a use of index across a tlb_fill.

Backports commit f1be36969de2fb9b6b64397db1098f115210fcd9 from qemu
2019-05-14 07:28:15 -04:00
Alex Bennée ccee796272
accel/tcg: demacro cputlb
Instead of expanding a series of macros to generate the load/store
helpers we move stuff into common functions and rely on the compiler
to eliminate the dead code for each variant.

Backports commit eed5664238ea5317689cf32426d9318686b2b75c from qemu
2019-05-14 07:28:11 -04:00
Peter Maydell 26cb1b8767
target/arm: Stop using variable length array in dc_zva
Currently the dc_zva helper function uses a variable length
array. In fact we know (as the comment above remarks) that
the length of this array is bounded because the architecture
limits the block size and QEMU limits the target page size.
Use a fixed array size and assert that we don't run off it.

Backports commit 63159601fb3e396b28da14cbb71e50ed3f5a0331 from qemu
2019-05-09 17:48:25 -04:00
Peter Maydell 7861820e94
target/arm: Implement XPSR GE bits
In the M-profile architecture, if the CPU implements the DSP extension
then the XPSR has GE bits, in the same way as the A-profile CPSR. When
we added DSP extension support we forgot to add support for reading
and writing the GE bits, which are stored in env->GE. We did put in
the code to add XPSR_GE to the mask of bits to update in the v7m_msr
helper, but forgot it in v7m_mrs. We also must not allow the XPSR we
pull off the stack on exception return to set the nonexistent GE bits.
Correct these errors:
* read and write env->GE in xpsr_read() and xpsr_write()
* only set GE bits on exception return if DSP present
* read GE bits for MRS if DSP present

Backports commit f1e2598c46d480c9e21213a244bc514200762828 from qemu
2019-05-09 17:46:31 -04:00
Cao Jiaxi bcb1270f23
osdep: Fix mingw compilation regarding stdio formats
I encountered the following compilation error on mingw:

/mnt/d/qemu/include/qemu/osdep.h:97:9: error: '__USE_MINGW_ANSI_STDIO' macro redefined [-Werror,-Wmacro-redefined]
#define __USE_MINGW_ANSI_STDIO 1
^
/mnt/d/llvm-mingw/aarch64-w64-mingw32/include/_mingw.h:433:9: note: previous definition is here
#define __USE_MINGW_ANSI_STDIO 0 /* was not defined so it should be 0 */

It turns out that __USE_MINGW_ANSI_STDIO must be set before any
system headers are included, not just before stdio.h.

Backports commit 946376c21be1cd9dcc3c7936b204b113781603f7 from qemu
2019-05-09 17:44:14 -04:00
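
A sketch of the ordering constraint described above (mingw-specific and
illustrative only): the macro must be visible before the first system header
is included, because any of them may pull in _mingw.h and pin the value.

    #define __USE_MINGW_ANSI_STDIO 1   /* define before any system include */
    #include <stdio.h>                 /* now picks up the ANSI stdio formats */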
Cao Jiaxi 3922118434
util/cacheinfo: Use uint64_t on LLP64 model to satisfy Windows ARM64
Windows ARM64 uses the LLP64 model, which breaks current assumptions.

Backports commit 8041336ef74e19ca607c1601016333c986de8f9c from qemu
2019-05-09 17:43:27 -04:00
Lioncash a71c027063
decodetree: Add DisasContext argument to !function expanders
This does require adjusting all existing users.

Backports commit 451e4ffdb0003ab5ed0d98bd37b385c076aba183 from qemu
2019-05-09 17:40:45 -04:00
Richard Henderson 9030870a8f
decodetree: Expand a decode_load function
Read the instruction, loading no more bytes than necessary.

Backports commit 70e0711ab18fa48279cd2c8cc570b57f38648598 from qemu
2019-05-09 17:35:20 -04:00
Richard Henderson a98e70e791
decodetree: Initial support for variable-length ISAs
Assuming that the ISA clearly describes how to determine
the length of the instruction, and the ISA has a reasonable
maximum instruction length, the input to the decoder can be
right-justified in an appropriate insn word.

This is not 100% convenient, as out-of-line %fields are
numbered relative to the maximum instruction length, but
this appears to still be usable.

Backports commit 17560e9349ff1fcce814184b37993f92378cf0c4 from qemu
2019-05-09 17:32:38 -04:00
Richard Henderson 8fdd009a9d
tcg: Remove CF_IGNORE_ICOUNT
Now that we have curr_cflags, we can include CF_USE_ICOUNT
early and then remove it as necessary.

Backports commit 416986d3f97329655e30da7271a2d11c6d707b06 from qemu
2019-05-06 00:57:09 -04:00
Richard Henderson 12f9def3a2
tcg: Add CF_LAST_IO + CF_USE_ICOUNT to CF_HASH_MASK
These flags are used by target/*/translate.c,
and affect code generation.

Backports commit 0cf8a44c2f56ba884c2f6db47d27fbb24975daa3 from qemu
2019-05-06 00:53:35 -04:00
Emilio G. Cota b1b069e8ad
cpu-exec: lookup/generate TB outside exclusive region during step_atomic
Now that all code generation has been converted to check CF_PARALLEL, we can
generate !CF_PARALLEL code without having yet set !parallel_cpus --
and therefore without having to be in the exclusive region during
cpu_exec_step_atomic.

While at it, merge cpu_exec_step into cpu_exec_step_atomic.

Backports commit ac03ee5331612e44beb393df2b578c951d27dc0d from qemu
2019-05-06 00:52:43 -04:00
Emilio G. Cota c1e26c4e35
tcg: check CF_PARALLEL instead of parallel_cpus
Thereby decoupling the resulting translated code from the current state
of the system.

The tb->cflags field is not passed to tcg generation functions. So
we add a field to TCGContext, storing there a copy of tb->cflags.

Most architectures have <= 32 registers, which results in a 4-byte hole
in TCGContext. Use this hole for the new field.

Backports commit e82d5a2460b0e176128027651ff9b104e4bdf5cc from qemu
2019-05-06 00:52:08 -04:00
Emilio G. Cota 175a5223ad
target/sparc: check CF_PARALLEL instead of parallel_cpus
Thereby decoupling the resulting translated code from the current state
of the system.

Backports commit 87d757d60d66d5ee1608460b0f1e07e2b758db9c from qemu
2019-05-06 00:43:21 -04:00
Emilio G. Cota 77ccb4918d
target/m68k: check CF_PARALLEL instead of parallel_cpus
Thereby decoupling the resulting translated code from the current state
of the system.

Backports commit f0ddf11b23260f0af84fb529486a8f9ba2d19401 from qemu
2019-05-06 00:42:16 -04:00
Emilio G. Cota ad2a4edd76
target/i386: check CF_PARALLEL instead of parallel_cpus
Thereby decoupling the resulting translated code from the current state
of the system.

Backports commit b5e3b4c2aca8eb5a9cfeedfb273af623f17c3731 from qemu
2019-05-04 22:45:49 -04:00
Emilio G. Cota 1715f382b4
target/arm: check CF_PARALLEL instead of parallel_cpus
Thereby decoupling the resulting translated code from the current state
of the system.

Backports commit 2399d4e7cec22ecf1c51062d2ebfd45220dbaace from qemu
2019-05-04 22:44:32 -04:00
Richard Henderson 4a858100f4
tcg: Include CF_COUNT_MASK in CF_HASH_MASK
Backports commit cdfef1715c779eb528d633e8b76cbc8a10e71ac8 from qemu
2019-05-04 22:31:32 -04:00
Richard Henderson 30c0950567
tcg: Add CPUState cflags_next_tb
We were generating code during tb_invalidate_phys_page_range,
check_watchpoint, cpu_io_recompile, and (seemingly) discarding
the TB, assuming that it would magically be picked up during
the next iteration through the cpu_exec loop.

Instead, record the desired cflags in CPUState so that we request
the proper TB so that there is no more magic.

Backports commit 9b990ee5a3cc6aa38f81266fb0c6ef37a36c45b9 from qemu
2019-05-04 22:30:22 -04:00
Richard Henderson ee1ddf4a92
tcg: define CF_PARALLEL and use it for TB hashing along with CF_COUNT_MASK
This will enable us to decouple code translation from the value
of parallel_cpus at any given time. It will also help us minimize
TB flushes when generating code via EXCP_ATOMIC.

Note that the declaration of parallel_cpus is brought to exec-all.h
to be able to define there the "curr_cflags" inline.

Backports commit 4e2ca83e71b51577b06b1468e836556912bd5b6e from qemu
2019-05-04 22:22:06 -04:00
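
A minimal sketch of the curr_cflags inline mentioned above, as it stood at
this point (CF_USE_ICOUNT is folded in by a later backport; treat the exact
body as approximate):

    static inline uint32_t curr_cflags(void)
    {
        return parallel_cpus ? CF_PARALLEL : 0;
    }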
Lioncash cc37db76b6
tcg: Synchronize with qemu 2019-05-04 21:40:23 -04:00
Daniel P. Berrangé aec899b73e
configure: automatically pick python3 if available
Unless overridden via an env var or configure arg, QEMU will only look
for the 'python' binary in $PATH. This is unhelpful on distros which
are only shipping Python 3.x (eg Fedora) in their default install as,
if they comply with PEP 394, the bare 'python' binary won't exist.

This changes configure so that by default it will search for all three
common python binaries, preferring to find Python 3.x versions.

Backports commit faf441429adfe5767be52c5dcdb8bc03161d064f from qemu
2019-05-03 11:36:36 -04:00
Thomas Huth 73176e89ce
configure: Remove old *-config-devices.mak.d files when running configure
When running "make" in a build directory from the pre-Kconfig merge time,
the build process currently fails with:

make: *** No rule to make target `.../default-configs/pci.mak',
needed by `aarch64-softmmu/config-devices.mak'. Stop.

To make sure that this problem at least goes away when the user runs
"configure" (or "sh config.status") again, we have to make sure that
we re-generate the .mak.d files. Thus remove the old stale files
while running the configure script.

Backports commit 9c79024225af6b3ae04ea2dd94a5e5c4132a9e65 from qemu
2019-05-03 11:33:56 -04:00
Thomas Huth 7107f72cc6
configure: Add -Wno-typedef-redefinition to CFLAGS (for Clang)
Without the -Wno-typedef-redefinition option, clang complains if a typedef
gets redefined in gnu99 mode (since this is officially a C11 feature). This
used to also happen with older versions of GCC, but since we've bumped our
minimum GCC version to 4.8, all versions of GCC that we support do not seem
to issue this warning in gnu99 mode anymore. So this has become a common
problem for people who only test their code with GCC - they do not notice
the issue until they submit their patches and suddenly patchew or a
maintainer complains.

Now that we do not urgently need to keep the code clean from typedef
redefinitions anymore with recent versions of GCC, we can ease the
situation with clang, too, and simply shut these warnings off for good.

Backports commit e6e90feedb706b1b92827a5977b37e1e8defb8ef from qemu
2019-05-03 11:33:06 -04:00
Eduardo Habkost 42c35d968a
accel: Remove unused AccelClass::available field
The field is not used anymore, we can remove it.

Backports commit 8d006d4bc2ab4f72877d8bd47cba9aa8d24b54d0 from qemu
2019-05-03 11:31:27 -04:00
Peter Maydell d4549fccfb
target/arm: Enable FPU for Cortex-M4 and Cortex-M33
Enable the FPU by default for the Cortex-M4 and Cortex-M33.

Backports commit 14fd0c31e26b88a2189b3f459b864d5e1faf302a from qemu
2019-04-30 11:29:26 -04:00
Peter Maydell 77ae3982b4
target/arm: Implement VLLDM for v7M CPUs with an FPU
Implement the VLLDM instruction for v7M for the FPU present case.

Backports commit 956fe143b4f254356496a0a1c479fa632376dfec from qemu
2019-04-30 11:27:54 -04:00
Peter Maydell b483951046
target/arm: Implement VLSTM for v7M CPUs with an FPU
Implement the VLSTM instruction for v7M for the FPU present case.

Backports commit 019076b036da4444494de38388218040d9d3a26c from qemu
2019-04-30 11:25:44 -04:00
Peter Maydell a976d7642a
target/arm: Implement M-profile lazy FP state preservation
The M-profile architecture floating point system supports
lazy FP state preservation, where FP registers are not
pushed to the stack when an exception occurs but are instead
only saved if and when the first FP instruction in the exception
handler is executed. Implement this in QEMU, corresponding
to the check of LSPACT in the pseudocode ExecuteFPCheck().

Backports commit e33cf0f8d8c9998a7616684f9d6aa0d181b88803 from qemu
2019-04-30 11:21:50 -04:00
Peter Maydell 72e5ae480d
target/arm: Add lazy-FP-stacking support to v7m_stack_write()
Pushing registers to the stack for v7M needs to handle three cases:
* the "normal" case where we pend exceptions
* an "ignore faults" case where we set FSR bits but
do not pend exceptions (this is used when we are
handling some kinds of derived exception on exception entry)
* a "lazy FP stacking" case, where different FSR bits
are set and the exception is pended differently

Implement this by changing the existing flag argument that
tells us whether to ignore faults or not into an enum that
specifies which of the 3 modes we should handle.

Backports commit a356dacf647506bccdf8ecd23574246a8bf615ac from qemu
2019-04-30 10:59:53 -04:00
Peter Maydell b1d6bd2792
target/arm: New function armv7m_nvic_set_pending_lazyfp()
In the v7M architecture, if an exception is generated in the process
of doing the lazy stacking of FP registers, the handling of
possible escalation to HardFault is treated differently to the normal
approach: it works based on the saved information about exception
readiness that was stored in the FPCCR when the stack frame was
created. Provide a new function armv7m_nvic_set_pending_lazyfp()
which pends exceptions during lazy stacking, and implements
this logic.

This corresponds to the pseudocode TakePreserveFPException().

Backports the relevant parts of commit
a99ba8ab1601904e0fa20325192fc850362ce80e from qemu
2019-04-30 10:56:54 -04:00
Peter Maydell 3fff653e20
target/arm: New helper function arm_v7m_mmu_idx_all()
Add a new helper function which returns the MMU index to use
for v7M, where the caller specifies all of the security
state, privilege level and whether the execution priority
is negative, and reimplement the existing
arm_v7m_mmu_idx_for_secstate_and_priv() in terms of it.

We are going to need this for the lazy-FP-stacking code.

Backports commit fa6252a988dbe440cd6087bf93cbe0887f0c401b from qemu
2019-04-30 10:54:26 -04:00
Peter Maydell 719231b4c0
target/arm: Activate M-profile floating point context when FPCCR.ASPEN is set
The M-profile FPCCR.ASPEN bit indicates that automatic floating-point
context preservation is enabled. Before executing any floating-point
instruction, if FPCCR.ASPEN is set and the CONTROL FPCA/SFPA bits
indicate that there is no active floating point context then we
must create a new context (by initializing FPSCR and setting
FPCA/SFPA to indicate that the context is now active). In the
pseudocode this is handled by ExecuteFPCheck().

Implement this with a new TB flag which tracks whether we
need to create a new FP context.

Backports commit 6000531e19964756673a5f4b694a649ef883605a from qemu
2019-04-30 10:51:31 -04:00
Peter Maydell 87c8c0fde7
target/arm: Set FPCCR.S when executing M-profile floating point insns
The M-profile FPCCR.S bit indicates the security status of
the floating point context. In the pseudocode ExecuteFPCheck()
function it is unconditionally set to match the current
security state whenever a floating point instruction is
executed.

Implement this by adding a new TB flag which tracks whether
FPCCR.S is different from the current security state, so
that we only need to emit the code to update it in the
less-common case when it is not already set correctly.

Note that we will add the handling for the other work done
by ExecuteFPCheck() in later commits.

Backports commit 6d60c67a1a03be32c3342aff6604cdc5095088d1 from qemu
2019-04-30 10:50:17 -04:00
Peter Maydell 8d726490ff
target/arm: Overlap VECSTRIDE and XSCALE_CPAR TB flags
We are close to running out of TB flags for AArch32; we could
start using the cs_base word, but before we do that we can
economise on our usage by sharing the same bits for the VFP
VECSTRIDE field and the XScale XSCALE_CPAR field. This
works because no XScale CPU ever had VFP.

Backports commit ea7ac69d124c94c6e5579145e727adec9ccbefef from qemu
2019-04-30 10:45:14 -04:00
Peter Maydell 3c1f3548c4
target/arm: Move NS TBFLAG from bit 19 to bit 6
Move the NS TBFLAG down from bit 19 to bit 6, which has not
been used since commit c1e3781090b9d36c60 in 2015, when we
started passing the entire MMU index in the TB flags rather
than just a 'privilege level' bit.

This rearrangement is not strictly necessary, but means that
we can put M-profile-only bits next to each other rather
than scattered across the flag word.

Backports commit 7fbb535f7aeb22896fedfcf18a1eeff48165f1d7 from qemu
2019-04-30 10:41:04 -04:00
Peter Maydell 86776d451e
target/arm: Handle floating point registers in exception return
Handle floating point registers in exception return.
This corresponds to pseudocode functions ValidateExceptionReturn(),
ExceptionReturn(), PopStack() and ConsumeExcStackFrame().

Backports commit 6808c4d2d2826920087533f517472c09edc7b0d2 from qemu
2019-04-30 10:40:12 -04:00
Peter Maydell 2244bb085a
target/arm: Allow for floating point in callee stack integrity check
The magic value pushed onto the callee stack as an integrity
check is different if floating point is present.

Backports commit 0dc51d66fcfcc4c72011cdafb401fd876ca216e7 from qemu
2019-04-30 10:36:58 -04:00
Peter Maydell 746d377221
target/arm: Clean excReturn bits when tail chaining
The TailChain() pseudocode specifies that a tail chaining
exception should sanitize the excReturn all-ones bits and
(if there is no FPU) the excReturn FType bits; we weren't
doing this.

Backports commit 60fba59a2f9a092a44b688df5d058cdd6dd9c276 from qemu
2019-04-30 10:35:36 -04:00
Peter Maydell ca0ac5dca9
target/arm: Clear CONTROL.SFPA in BXNS and BLXNS
For v8M floating point support, transitions from Secure
to Non-secure state via BXNS and BLXNS must clear the
CONTROL.SFPA bit. (This corresponds to the pseudocode
BranchToNS() function.)

Backports commit 3cd6726f0ba7cc77342ee721bd86094e13b2a42a from qemu
2019-04-30 10:33:25 -04:00
Peter Maydell c7f5633cfe
target/arm: Implement v7m_update_fpccr()
Implement the code which updates the FPCCR register on an
exception entry where we are going to use lazy FP stacking.
We have to defer to the NVIC to determine whether the
various exceptions are currently ready or not.

Backports commit b593c2b81287040ab6f452afec6281e2f7ee487b from qemu
2019-04-30 10:32:12 -04:00
Peter Maydell 065e60503f
target/arm: Handle floating point registers in exception entry
Handle floating point registers in exception entry.
This corresponds to the FP-specific parts of the pseudocode
functions ActivateException() and PushStack().

We defer the code corresponding to UpdateFPCCR() to a later patch.

Backports commit 0ed377a8013f40653a83f6ad2c9693897522d7dc from qemu
2019-04-30 10:25:23 -04:00
Peter Maydell c164a9f191
target/arm/helper: don't return early for STKOF faults during stacking
Currently the code in v7m_push_stack() which detects a violation
of the v8M stack limit simply returns early if it does so. This
is OK for the current integer-only code, but won't work for the
floating point handling we're about to add. We need to continue
executing the rest of the function so that we check for other
exceptions like not having permission to use the FPU and so
that we correctly set the FPCCR state if we are doing lazy
stacking. Refactor to avoid the early return.

Backports commit 3432c79a4e7345818d2defcf9e61a1bcb2907f9f from qemu
2019-04-30 10:22:36 -04:00
Peter Maydell 05add081a3
target/arm: Handle SFPA and FPCA bits in reads and writes of CONTROL
The M-profile CONTROL register has two bits -- SFPA and FPCA --
which relate to floating-point support, and should be RES0 otherwise.
Handle them correctly in the MSR/MRS register access code.
Neither is banked between security states, so they are stored
in v7m.control[M_REG_S] regardless of current security state.

Backports commit 2e1c5bcd32014c9ede1b604ae6c2c653de17fc53 from qemu
2019-04-30 10:21:24 -04:00
Peter Maydell c0cebeb5b5
target/arm: Clear CONTROL_S.SFPA in SG insn if FPU present
If the floating point extension is present, then the SG instruction
must clear the CONTROL_S.SFPA bit. Implement this.

(On a no-FPU system the bit will always be zero, so we don't need
to make the clearing of the bit conditional on ARM_FEATURE_VFP.)

Backports commit 1702071302934af77a072b7ee7c5eadc45b37573 from qemu
2019-04-30 10:20:45 -04:00
Peter Maydell 89baa5cffa
target/arm: Decode FP instructions for M profile
Correct the decode of the M-profile "coprocessor and
floating-point instructions" space:
* op0 == 0b11 is always unallocated
* if the CPU has an FPU then all insns with op1 == 0b101
are floating point and go to disas_vfp_insn()

For the moment we leave VLLDM and VLSTM as NOPs; in
a later commit we will fill in the proper implementation
for the case where an FPU is present.

Backports commit 8859ba3c9625e7ceb5599f457a344bcd7c5e112b from qemu
2019-04-30 10:19:45 -04:00
Peter Maydell 18bb21c035
target/arm: Honour M-profile FP enable bits
Like AArch64, M-profile floating point has no FPEXC enable
bit to gate floating point; so always set the VFPEN TB flag.

M-profile also has CPACR and NSACR similar to A-profile;
they behave slightly differently:
* the CPACR is banked between Secure and Non-Secure
* if the NSACR forces a trap then this is taken to
the Secure state, not the Non-Secure state

Honour the CPACR and NSACR settings. The NSACR handling
requires us to borrow the exception.target_el field
(usually meaningless for M profile) to distinguish the
NOCP UsageFault taken to Secure state from the more
usual fault taken to the current security state.

Backports commit d87513c0abcbcd856f8e1dee2f2d18903b2c3ea2 from qemu
2019-04-30 10:18:21 -04:00
Peter Maydell c6bb8d483d
target/arm: Disable most VFP sysregs for M-profile
The only "system register" that M-profile floating point exposes
via the VMRS/VMRS instructions is FPSCR, and it does not have
the odd special case for rd==15. Add a check to ensure we only
expose FPSCR.

Backports commit ef9aae2522c22c05df17dd898099dd5c3f20d688 from qemu
2019-04-30 10:15:25 -04:00
Peter Maydell a4f332f3e9
target/arm: Implement dummy versions of M-profile FP-related registers
The M-profile floating point support has three associated config
registers: FPCAR, FPCCR and FPDSCR. It also makes the registers
CPACR and NSACR have behaviour other than reads-as-zero.
Add support for all of these as simple reads-as-written registers.
We will hook up actual functionality later.

The main complexity here is handling the FPCCR register, which
has a mix of banked and unbanked bits.

Note that we don't share storage with the A-profile
cpu->cp15.nsacr and cpu->cp15.cpacr_el1, though the behaviour
is quite similar, for two reasons:
* the M profile CPACR is banked between security states
* it preserves the invariant that M profile uses no state
inside the cp15 substruct

Backports commit d33abe82c7c9847284a23e575e1078cccab540b5 from qemu
2019-04-30 10:13:41 -04:00
Peter Maydell 978cd9c524
target/arm: Make sure M-profile FPSCR RES0 bits are not settable
Enforce that for M-profile various FPSCR bits which are RES0 there
but have defined meanings on A-profile are never settable. This
ensures that M-profile code can't enable the A-profile behaviour
(notably vector length/stride handling) by accident.

Backports commit 5bcf8ed9401e62c73158ba110864ee1375558bf7 from qemu
2019-04-30 10:12:17 -04:00
Shahab Vahedi 7f59d62f4a
cputlb: Fix io_readx() to respect the access_type
This change adapts io_readx() to its input access_type. Currently
io_readx() treats any memory access as a read, although it has an
input argument "MMUAccessType access_type". This results in:

1) Calling the tlb_fill() only with MMU_DATA_LOAD
2) Considering only entry->addr_read as the tlb_addr

Buglink: https://bugs.launchpad.net/qemu/+bug/1825359

Backports commit ef5dae6805cce7b59d129d801bdc5db71bcbd60d from qemu
2019-04-30 10:11:11 -04:00
Richard Henderson 5847d833b2
tcg/arm: Restrict constant pool displacement to 12 bits
This will not necessarily restrict the size of the TB, since for v7
the majority of constant pool usage is for calls from the out-of-line
ldst code, which is already at the end of the TB. But this does
allow us to save one insn per reference on the off-chance.

Backports commit b4b82d7e9caff7ccca5c621817b5a4b8e95eb9b1 from qemu
2019-04-30 10:10:21 -04:00
Richard Henderson 187e80c9a5
tcg/ppc: Allow the constant pool to overflow at 32k
There is no point in coding for a 2GB offset when the max TB size
is already limited to 64k. If we further restrict to 32k then we
can eliminate the extra ADDIS instruction.

Backports commit a7cdaf710f2aaaf0be855a338dd67463d4bb99e2 from qemu
2019-04-30 10:08:18 -04:00
Richard Henderson 6145e3fdd7
tcg: Restart TB generation after out-of-line ldst overflow
This is part c of relocation overflow handling.

Backports commit aeee05f53a5d67304a521d2644dc0a607e3c8b28 from qemu
2019-04-30 10:06:53 -04:00
Richard Henderson 196631e0a4
tcg: Restart TB generation after constant pool overflow
This is part b of relocation overflow handling.

Backports commit 1768987b73fa7e23e58b7844abe5882490ff8e42 from qemu
2019-04-30 10:00:52 -04:00
Richard Henderson 45315fd8ef
tcg: Restart TB generation after relocation overflow
If the TB generates too much code, such that backend relocations
overflow, try again with a smaller TB. In support of this, move
relocation processing from a random place within tcg_out_op, in
the handling of branch opcodes, to a new function at the end of
tcg_gen_code.

This is not a complete solution, as there are additional relocs
generated for out-of-line ldst handling and constant pools.

Backports commit 7ecd02a06f8f4c0bbf872ecc15e37035b7e1df5f from qemu
2019-04-30 09:58:45 -04:00
Richard Henderson 434b3ab9ec
tcg: Restart after TB code generation overflow
If a TB generates too much code, try again with fewer insns.

Fixes: https://bugs.launchpad.net/bugs/1824853

Backports commit 6e6c4efed995d9eca6ae0cfdb2252df830262f50 from qemu
2019-04-30 09:52:57 -04:00
Richard Henderson bca82cde84
tcg: Hoist max_insns computation to tb_gen_code
In order to handle TB's that translate to too much code, we
need to place the control of the length of the translation
in the hands of the code gen master loop.

Backports commit 8b86d6d25807e13a63ab6ea879f976b9f18cc45a from qemu
2019-04-30 09:49:57 -04:00
Richard Henderson 2479bbd3b2
tcg/aarch64: Support INDEX_op_extract2_{i32,i64}
Backports commit 464c2969d5d7a0a5d38d2aa5d930986df876d3fb from qemu
2019-04-30 09:40:40 -04:00
Richard Henderson cbc5f919c2
tcg/arm: Support INDEX_op_extract2_i32
Backports commit 3b832d67a993968868f4087a9720a5c911e23f7a from qemu
2019-04-30 09:39:30 -04:00
Richard Henderson 0f20a26b36
tcg/i386: Support INDEX_op_extract2_{i32,i64}
Backports commit c6fb8c0cf704c4a1a48c3e99e995ad4c58150dab from qemu
2019-04-30 09:37:39 -04:00
Richard Henderson da39922c60
tcg: Use extract2 in tcg_gen_deposit_{i32,i64}
Backports commit b0a6056719b4a409a5699d11bbfdf79301417221 from qemu
2019-04-30 09:35:49 -04:00
Richard Henderson 948635602c
tcg: Use deposit and extract2 in tcg_gen_shifti_i64
Backports commit 02616bad6f0788652deaca9a48d0dfa7716ff87a from qemu
2019-04-30 09:33:44 -04:00
Richard Henderson 269fa0daba
tcg: Add INDEX_op_extract2_{i32,i64}
This will let backends implement the double-word shift operation.

Backports commit fce1296f135669eca85dc42154a2a352c818ad76 from qemu
2019-04-30 09:29:05 -04:00
David Hildenbrand 458942d94e
tcg: Implement tcg_gen_extract2_{i32,i64}
Will be helpful for s390x. Input 128 bit and output 64 bit only,
which is sufficient for now.

Backports commit 2089fcc9e7b4174d1c351eaa7d277c02188a6dd2 from qemu
2019-04-30 09:20:45 -04:00
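
A scalar reference for the semantics described above, assuming the 128-bit
input is the concatenation hi:lo and ofs lies in [0, 63]:

    #include <stdint.h>

    /* ret = 64 bits taken from bit 'ofs' of the 128-bit value (hi << 64) | lo */
    static uint64_t extract2_ref(uint64_t lo, uint64_t hi, unsigned ofs)
    {
        return ofs == 0 ? lo : (lo >> ofs) | (hi << (64 - ofs));
    }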
Stanislav Lanci 64f51949a7
Pass through cache information for TOPOEXT CPUs
Backports commit a4e0b436f44a4bb47ed4a75b0c05d2547cf12b1c from qemu
2019-04-30 09:15:25 -04:00
Pu Wen 4bbf02a5f6
i386: Add new Hygon 'Dhyana' CPU model
Add a new base CPU model called 'Dhyana' to model processors from Hygon
Dhyana (family 18h), which is derived from AMD EPYC (family 17h).

The following feature bits have been removed compared to AMD EPYC:
aes, pclmulqdq, sha_ni

The Hygon Dhyana support for KVM in Linux has already been accepted
upstream[1], so adding Hygon Dhyana support to QEMU is necessary to create
Hygon's own CPU model.

Reference:
[1] https://git.kernel.org/tip/fec98069fb72fb656304a3e52265e0c2fc9adf87

Backports commit 8d031cec366f26669807eb43f61eb335973b7053 from qemu
2019-04-30 09:13:55 -04:00
Lioncash f6911ea73d
target/arm: Handle AArch32 CRC instructions 2019-04-27 10:50:25 -04:00
Lioncash c3df12e534
target/arm/translate: Synchronize with Qemu 2019-04-27 10:13:01 -04:00
Lioncash 9dfe2b527b
cpu-exec: Synchronize with qemu 2019-04-26 16:07:51 -04:00
Lioncash 5daabe55a4
cputlb: Synchronize with qemu
Synchronizes the code with Qemu to reduce a few differences.
2019-04-26 15:48:45 -04:00
Lioncash ef9e607e1c
qemu: Update bitmap.c/.h
Keeps it up to date with Qemu.
2019-04-26 13:05:55 -04:00
Lioncash f0c271ca2f
tcg: Correct special-cased brcond handling 2019-04-26 10:25:46 -04:00
Lioncash 4a64ebf95e
tcg: Synchronize with qemu 2019-04-26 09:32:20 -04:00
Lioncash 3996153514
tcg: Synchronize with qemu 2019-04-26 08:48:32 -04:00
Lioncash 006a13026a
tcg: Remove inconsistent g_strdup usage
The ts's name was allocated with strdup, but ts2's was being done with
g_strdup. Makes them consistent with upstream Qemu.
2019-04-26 08:48:32 -04:00
Lioncash 6d80445fe1
unicorn_arm: Treat registers as unsigned values in casts
It isn't particularly advisable to treat these as signed values, given
the registers themselves have no notion of signedness associated with
them.
2019-04-26 08:48:31 -04:00
Lioncash f419015aa3
unicorn_arm: Don't steamroll CPSR bits defined as RAZ/SBZP
Prevents bits from being set that should always read as zero according
to the ARM architecture reference manual.
2019-04-26 08:47:50 -04:00
Peter Maydell 8b2a0554cf
Open 4.1 development tree
Backports commit 85947dafad13ef8aea02eef2b058ee7aee47ab3e from qemu
2019-04-24 11:59:00 -04:00
Peter Maydell b62ab65eca
Update version for v4.0.0 release
Backports commit 131b9a05705636086699df15d4a6d328bb2585e8 from qemu
2019-04-24 11:58:36 -04:00
Lioncash 70836028eb
exec/helper-*: Synchronize with qemu 2019-04-22 08:22:49 -04:00
Lioncash 0379335677
cpu_ldst: Remove unused macros 2019-04-22 08:17:20 -04:00
Peter Maydell ff9c67b8f0
cpu_ldst.h: Don't define helpers if MMU_MODE*_SUFFIX not defined
Not all targets define a full set of suffix strings for the
NB_MMU_MODES that they have. In this situation, don't define any
helper functions for that mode, rather than defining helper functions
with no suffix at all. The MMU mode is still functional; it is merely
not directly accessible via cpu_ld*_MODE from target helper functions.

Also add an "NB_MMU_MODES >= 2" check to the definition of the mode 1
helpers -- some targets only define one MMU mode.

Backports commit de5ee4a888667ca0a198f0743d70075d70564117 from qemu
2019-04-22 07:44:32 -04:00
Lioncash e75b32ca4b
cpu_ldst.h, cpu-all.h, bswap.h: Update documentation on ld/st accessors
Add documentation of what the cpu_*_* accessors look like.
Correct some minor errors in the existing documentation of the
direct _p accessor family. Remove the near-duplicate comment
on the _p accessors from cpu-all.h and replace it with a reference
to the comment in bswap.h.

Backports commit db5fd8d709fd57f4d4f11edfca9f421f657f4508 from qemu
2019-04-22 07:39:13 -04:00
Peter Maydell 84eafc0cf6
cpu_ldst_template.h: Drop unused cpu_ldfq/stfq/ldfl/stfl accessors
The cpu_ldfq/stfq/ldfl/stfl accessors for loading and storing
float32 and float64 are completely unused, so delete them.
(The union they use for converting from the float32/float64
type to uint32_t or uint64_t is the wrong way to do it anyway:
they should be using make_float* and float*_val.)

Backports commit 82f11917c99e3c7fa3d6aa98572ecc98c7324c2f from qemu
2019-04-22 07:21:03 -04:00
Peter Maydell 32650e7816
cpu_ldst.h: Drop unused _raw macros, saddr() and laddr()
The _raw macros and their helpers saddr() and laddr() are now
totally unused -- delete them.

Backports commit 800e2ecc896beb6b79e7333c762da163b6a9135a from qemu
2019-04-22 07:19:20 -04:00
Peter Maydell f1a1f3c642
cpu_ldst_template.h: Use ld*_p directly rather than via ld*_raw macros
The ld*_raw and st*_raw macros are now only used within the code
produced by cpu_ldst_template.h, and only in three places.
Expand these out to just call the ld_p and st_p functions directly.

Note that in all the callsites the address argument is a uintptr_t,
so we can drop that part of the double-cast used in the saddr() and
laddr() macros.

Backports commit 355392329e4a843580e53cb027ed85e0cbebb640 from qemu
2019-04-22 07:11:50 -04:00
Peter Maydell 1a880ef99b
cpu_ldst.h: Use inline functions for usermode cpu_ld/st accessors
Use inline functions rather than macros for cpu_ld/st accessors
for the *-user configurations, as we already do for softmmu.
This has two advantages:
* we can actually typecheck our arguments
* we don't need to leak the _raw macros everywhere

Since the _kernel functions were only used by target-i386/seg_helper.c,
put the definitions for them in that file too. (It already has the
similar template include code to define them for the softmmu case,
so it makes sense to have it deal with defining them for user-only.)

Backports commit 9220fe54c679d145232a28df6255e166ebf91bab from qemu
2019-04-22 07:08:39 -04:00
Peter Maydell 4fe3b4f95c
cpu_ldst.h: Remove unused very short ld*/st* defines
The very short ld*/st* defines are now not used anywhere; delete them.

Backports commit 177ea79f65c90b3bc84d59565b7519e47ea02f63 from qemu
2019-04-22 06:57:28 -04:00
Peter Maydell 36cd9f0df0
cpu_ldst.h: Drop unused ld/st*_kernel defines
The ld*_kernel and st*_kernel defines are not used anywhere;
delete them.

Backports commit 5a0826f7d2f9bea6e02157985b103d0a4c458aaa from qemu
2019-04-22 06:54:26 -04:00
Lioncash 830756a725
gen-icount: Use tcg_ctx where applicable in commented out code
If this code is ever re-enabled in the future, it will already be ready to use. 2019-04-22 06:17:10 -04:00
2019-04-22 06:17:10 -04:00
Lioncash d844d7cc9d
exec: Backport tb_cflags accessor 2019-04-22 06:12:59 -04:00
Lioncash 9f0e469142
gen-icount: Synchronize with qemu 2019-04-22 05:53:46 -04:00
Lioncash 96c52ea053
tcg: Synchronize with qemu 2019-04-22 02:03:01 -04:00
Lioncash 14f1cb03e9
target/arm/unicorn_arm: Get rid of magic constants where applicable 2019-04-19 20:23:41 -04:00
Lioncash 8ebb8f9fec
memory: Extract memory region freeing code to a common function
Makes deallocation behavior consistent and also fixes a memory leak
where the name of the region was never freed.
2019-04-19 18:44:28 -04:00
Lioncash 7de690e87c
memory: Use a uint64_t instead of target_ulong for representing the incremented address
Prevents an infinite loop when mapping near the upper boundary of an
address space on 32-bit emulated targets, i.e. mapping at 0xFFFFF000
with a size of 4096 no longer overflows back to zero.

While we're at it, also tidy up the unicorn-specific functions.
2019-04-19 18:32:28 -04:00
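A small, hedged illustration of the overflow being avoided (not Unicorn code, just the arithmetic): with a 32-bit address type the end of the mapping wraps to zero, so a loop bounded by it never terminates.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t start = 0xFFFFF000u;
    uint32_t size  = 0x1000u;

    uint32_t end32 = start + size;            /* wraps around to 0 */
    uint64_t end64 = (uint64_t)start + size;  /* 0x100000000, no wrap */

    printf("32-bit end: %#x, 64-bit end: %#llx\n",
           end32, (unsigned long long)end64);
    return 0;
}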
Lioncash 5968b3d96f
target/arm: Synchronize with qemu 2019-04-19 15:31:18 -04:00
Lioncash ccf16bc572
softmmu_template: Fix invalid argument to tlb_fill in helper_be_st_name
This should be passing in the page2 value like in the little-endian
handler.
2019-04-18 08:43:14 -04:00
Lioncash bf6dfeb175
target/arm/translate: Synchronize with qemu
Backports a few other missing pieces from mainline qemu.
2019-04-18 06:22:36 -04:00
Lioncash 5b062dacf2
target/arm: Simplify and correct thumb instruction tracing
This wasn't subtracting the size of the instruction from the PC the way
the ARM-mode tracing does. This simplifies the code and makes the
behavior identical.
2019-04-18 06:00:15 -04:00
Lioncash 5d6ddec7fb
target/arm/translate: Subtract PC value properly for thumb tracecode calls 2019-04-18 05:44:48 -04:00
Lioncash b9d1002609
tcg-op: Make sure to free temporaries within gen_uc_tracecode()
After the helper is generated, these are no longer needed and can be
reclaimed.
2019-04-18 05:40:48 -04:00
Lioncash e579832dcb
tcg: Synchronize with qemu 2019-04-18 04:57:19 -04:00
Lioncash 3521e72580
target/arm: Synchronize with qemu
Synchronizes with bits and pieces that were missed due to merging
incorrectly (sorry :<)
2019-04-18 04:49:11 -04:00
Peter Maydell 753b6601c5
Update version for v4.0.0-rc4 release
Backports commit eeba63fc7fface36f438bcbc0d3b02e7dcb59983 from qemu
2019-04-16 22:39:02 -04:00
Lukas Dresel 4b94a8cc44
support for YMM registers ymm8-ymm15 (#1079)
Backports 55d8d073bd80935e807289ae2ff6161145a2afb6 from qemu
2019-04-16 06:35:41 -04:00
Lioncash 5de5b69344
target/i386: Fix compilation of the x86 target
Thanks to @rk700 for reporting it.
2019-04-16 06:29:06 -04:00
Lioncash ddcf400955
arm: Always enable access to coprocessors initially
Allows non-AArch64 environments to always access coprocessors initially.
Removes the need to do avoidable register management when testing
floating-point code.
2019-04-13 19:49:43 -04:00
Peter Maydell 5bcb1e13ab
Update version for v4.0.0-rc3 release
Backports commit 532cc6da74ec25b5ba6893b5757c977d54582949 from qemu
2019-04-10 15:00:31 -04:00
Peter Maydell 3ff38c2402
include/qemu/bswap.h: Use __builtin_memcpy() in accessor functions
In the accessor functions ld*_he_p() and st*_he_p() we use memcpy()
to perform a load or store to a pointer which might not be aligned
for the size of the type. We rely on the compiler to optimize this
memcpy() into an efficient load or store instruction where possible.
This is required for good performance, but at the moment it is also
required for correct operation, because some users of these functions
require that the access is atomic if the pointer is aligned, which
will only be the case if the compiler has optimized out the memcpy().
(The particular example where we discovered this is the virtio
vring_avail_idx() which calls virtio_lduw_phys_cached() which
eventually ends up calling lduw_he_p().)

Unfortunately some compile environments, such as the fortify-source
setup used in Alpine Linux, define memcpy() to a wrapper function
in a way that inhibits this compiler optimization.

The correct long-term fix here is to add a set of functions for
doing atomic accesses into AddressSpaces (and to other relevant
families of accessor functions like the virtio_*_phys_cached()
ones), and make sure that callsites which want atomic behaviour
use the correct functions.

In the meantime, switch to using __builtin_memcpy() in the
bswap.h accessor functions. This will make us robust against things
like this fortify library in the short term. In the longer term
it will mean that we don't end up with these functions being really
badly-performing even if the semantics of the out-of-line memcpy()
are correct.
2019-04-10 14:57:52 -04:00
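A minimal sketch of the accessor pattern being discussed, using an illustrative name rather than the real bswap.h helpers:

#include <stdint.h>

/* Host-endian 32-bit load from a possibly unaligned pointer. Using
 * __builtin_memcpy lets the compiler fold the copy into a single load
 * even when memcpy itself has been redefined by a fortify wrapper. */
static inline uint32_t ldl_he_p_sketch(const void *ptr)
{
    uint32_t v;
    __builtin_memcpy(&v, ptr, sizeof(v));
    return v;
}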
Peter Maydell 6b413ffa97
target/i386: Generate #UD for LOCK on a register increment
Fix a TCG crash due to attempting an atomic increment
operation without having set up the address first.
This is a similar case to that dealt with in commit
e84fcd7f662a0d8198703, and we fix it in the same way.

Fixes: https://bugs.launchpad.net/qemu/+bug/1807675

Backports commit 8cb2ca3d7479748587313f0b34034a3f8aa08c92 from qemu
2019-04-09 09:28:46 -04:00
Peter Maydell 1a50ee5826
Update version for v4.0.0-rc2 release
Backports commit 061b51e9195670e9d190cdec46fabcb3c77763fb from qemu
2019-04-03 10:02:46 -04:00
Paolo Bonzini b4dd8afa14
config-all-devices.mak: rebuild on reconfigure
This ensures that softmmu directories are culled after a
"./configure --target-list=x86_64-linux-user".

Backports commit b7c11e574977a0addfbbdb89377c6f52affe64ec from qemu
2019-03-29 19:31:32 -04:00
Singh, Brijesh 54b9701799
memory: Fix the memory region type assignment order
Currently, a callback registered through the RAMBlock notifier
is not able to get the memory region type (i.e callback is not
able to use memory_region_is_ram_device function). This is
because mr->ram assignment happens _after_ the memory is allocated
whereas the callback is executed during allocation.

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1667249

Backports commit 2ddb89b00f947f785c9ca6742f28f954e3b75e62 from qemu
2019-03-29 19:28:41 -04:00
Peter Maydell d57c77afcf
Update version for v4.0.0-rc1 release
Backports commit 49fc899f8d673dd9e73f3db0d9e9ea60b77c331b from qemu
2019-03-26 20:44:04 -04:00
Richard Henderson f5cb1a5865
target/arm: Set SIMDMISC and FPMISC for 32-bit -cpu max
Fixes: https://bugs.launchpad.net/bugs/1821430

Backports commit c8877d0f2f662bf01346a03bc9fd279954b4132d from qemu
2019-03-26 20:41:01 -04:00
Kito Cheng 5a7ad783e9
target/riscv: Fix wrong expanding for c.fswsp
The base register is rs1, not rs2, for fsw.

Backports commit 620455350a8da7cc62ae82cb69dd5c556f744136 from qemu
2019-03-26 20:39:34 -04:00
Palmer Dabbelt fc662c281a
target/riscv: Zero extend the inputs of divuw and remuw
While running the GCC test suite against 4.0.0-rc0, Kito found a
regression introduced by the decodetree conversion that caused divuw and
remuw to sign-extend their inputs. The ISA manual says they are
supposed to be zero extended:

DIVW and DIVUW instructions are only valid for RV64, and divide the
lower 32 bits of rs1 by the lower 32 bits of rs2, treating them as
signed and unsigned integers respectively, placing the 32-bit
quotient in rd, sign-extended to 64 bits. REMW and REMUW
instructions are only valid for RV64, and provide the corresponding
signed and unsigned remainder operations respectively. Both REMW
and REMUW always sign-extend the 32-bit result to 64 bits, including
on a divide by zero.

Here's Kito's reduced test case from the GCC test suite

unsigned calc_mp(unsigned mod)
{
    unsigned a, b, c;
    c = -1;
    a = c / mod;
    b = 0 - a * mod;
    if (b > mod) { a += 1; b -= mod; }
    return b;
}

int main(int argc, char *argv[])
{
    unsigned x = 1234;
    unsigned y = calc_mp(x);

    if ((sizeof (y) == 4 && y != 680)
        || (sizeof (y) == 2 && y != 134))
        abort ();
    exit (0);
}

I haven't done any other testing on this, but it does fix the test case.

Backports commit f17e02cd3731bdfe2942d1d0b2a92f26da02408c from qemu
2019-03-26 20:38:17 -04:00
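A hedged C sketch of the behaviour the fix restores (plain C standing in for the TCG translation): the low 32 bits of the operands are zero-extended before the unsigned divide, and only the 32-bit result is sign-extended.

#include <stdint.h>

static int64_t divuw_sketch(int64_t rs1, int64_t rs2)
{
    uint32_t a = (uint32_t)rs1;           /* zero-extend low 32 bits */
    uint32_t b = (uint32_t)rs2;
    uint32_t q = b ? a / b : UINT32_MAX;  /* divide by zero yields all ones */
    return (int32_t)q;                    /* sign-extend 32-bit quotient to 64 bits */
}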
Andrew Jones 8719b3edb3
target/arm: make pmccntr_op_start/finish static
These functions are not used outside helper.c

Backports commit f2b2f53f6429b5abd7cd86bd65747f5f13e195eb from qemu
2019-03-26 20:35:34 -04:00
Andrew Jones 6482182ba5
target/arm: cortex-a7 and cortex-a15 have pmus
cortex-a7 and cortex-a15 have pmus (PMUv2) and they advertise
them in ID_DFR0. Let's allow them to function. This also enables
the pmu cpu property to work with these cpu types, i.e. we can
now do '-cpu cortex-a15,pmu=off' to remove the pmu.

Backports commit a46118fc16537a593119e5b316052a98514046bb from qemu
2019-03-26 20:34:11 -04:00
Andrew Jones 3c50e72c40
target/arm: fix crash on pmu register access
Fix a QEMU NULL dereference that occurs when the guest attempts to
enable PMU counters with a non-v8 cpu model or a v8 cpu model
which has not configured a PMU.

Backports commit cbbb3041fe2f57a475cef5d6b0ef836118aad106 from qemu
2019-03-26 20:32:49 -04:00
Richard Henderson 2427ace0c0
target/arm: Fix non-parallel expansion of CASP
The second word has been loaded from the unincremented
address since the first commit.

Backports commit a036f5302c13634f3d375615b2949fd1fa1657b6 from qemu
2019-03-26 20:31:01 -04:00
Eduardo Habkost df51e8bbb3
i386: Disable OSPKE on CPU model definitions
Currently, the Cascadelake-Server, Icelake-Client, and
Icelake-Server are always generating the following warning:

qemu-system-x86_64: warning: \
host doesn't support requested feature: CPUID.07H:ECX [bit 4]

This happens because OSPKE was never returned by
GET_SUPPORTED_CPUID or x86_cpu_get_supported_feature_word().
OSPKE is a runtime flag automatically set by the KVM module or by
TCG code, was always cleared by x86_cpu_filter_features(), and
was not supposed to appear on the CPU model table.

Remove the OSPKE flag from the CPU model table entries, to avoid
the bogus warning and avoid returning invalid feature data on
query-cpu-* QMP commands. As OSPKE was always cleared by
x86_cpu_filter_features(), this won't have any guest-visible
impact.

Include a test case that should detect the problem if we introduce
a similar bug again.

Fixes: c7a88b52f62b ("i386: Add new model of Cascadelake-Server")
Fixes: 8a11c62da914 ("i386: Add new CPU model Icelake-{Server,Client}")

Backports commit bb4928c7cafe50ab2137a0034e350ef1bfa044d9 from qemu
2019-03-22 09:46:44 -04:00
Eduardo Habkost a71df717c9
i386: Make arch_capabilities migratable
Now that kvm_arch_get_supported_cpuid() will only return
arch_capabilities if QEMU is able to initialize the MSR properly,
we know that the feature is safely migratable.

Backports commit 014018e19b3c54dd1bf5072bc912ceffea40abe8 from qemu
2019-03-22 09:45:43 -04:00
Peter Maydell db02d0b733
Update version for v4.0.0-rc0 release
Backports commit 62a172e6a77d9072bb1a18f295ce0fcf4b90a4f2 from qemu
2019-03-19 23:58:31 -04:00
Alistair Francis a9cc62cb23
target/riscv: Remove unused struct
Backports commit 6b745d4fada5c73db44f596a62e29a5dbe3fc53f from qemu
2019-03-19 23:58:31 -04:00
Michael Clark b247ee234d
RISC-V: Update load reservation comment in do_interrupt
Backports commit d9360e96885dbd69ce4aa925d1701c7a10cf54ae from qemu
2019-03-19 23:58:31 -04:00
Michael Clark d3dbcb6dfc
RISC-V: Add support for vectored interrupts
If vectored interrupts are enabled (bits[1:0]
of mtvec/stvec == 1) then use the following
logic for trap entry address calculation:

pc = mtvec + cause * 4

In addition to adding support for vectored interrupts
this patch simplifies the interrupt delivery logic
by making sync/async cause decoding and encoding
steps distinct.

The cause code and the sign bit indicating sync/async
is split at the beginning of the function and fixed
cause is renamed to cause. The MSB setting for async
traps is delayed until setting mcause/scause to allow
redundant variables to be eliminated. Some variables
are renamed for conciseness and moved so that decls
are at the start of the block.

Backports commit acbbb94e5730c9808830938e869d243014e2923a from qemu
2019-03-19 23:58:31 -04:00
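A hedged sketch of the vectored trap-entry calculation quoted above (plain C, not the actual interrupt-delivery code; 'async' distinguishes interrupts from synchronous exceptions):

#include <stdbool.h>
#include <stdint.h>

static uint64_t trap_entry_sketch(uint64_t tvec, uint64_t cause, bool async)
{
    uint64_t base = tvec & ~(uint64_t)0x3;  /* low two bits select the mode */

    if ((tvec & 0x3) == 1 && async) {
        return base + cause * 4;            /* vectored: pc = base + cause * 4 */
    }
    return base;                            /* direct mode, or a sync exception */
}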
Michael Clark 8ffa68e757
RISC-V: Change local interrupts from edge to level
This effectively changes riscv_cpu_update_mip
from edge to level. i.e. cpu_interrupt or
cpu_reset_interrupt are called regardless of
the current interrupt level.

Fixes WFI not returning when an IPI is issued:

- https://github.com/riscv/riscv-qemu/issues/132

To test:

1) Apply RISC-V Linux CPU hotplug patch:

- http://lists.infradead.org/pipermail/linux-riscv/2018-May/000603.html

2) Enable CONFIG_CPU_HOTPLUG in linux .config

3) Try to offline and online cpus:

echo 1 > /sys/devices/system/cpu/cpu2/online
echo 0 > /sys/devices/system/cpu/cpu2/online
echo 1 > /sys/devices/system/cpu/cpu2/online

Backports commit d26f5a423438e579d3ff0ca35e44edb966a36233 from qemu
2019-03-19 23:58:31 -04:00
Kito Cheng bd3e9ebaea
RISC-V: linux-user support for RVE ABI
This change checks elf_flags for EF_RISCV_RVE and if
present uses the RVE linux syscall ABI which uses t0
for the syscall number instead of a7.

Warn and exit if a non-RVE ABI binary is run on a
cpu with the RVE extension as it is incompatible.

Backports relevant parts of 5836c3eccedb6dfab16b8f606f2de24b8938b69c
from qemu
2019-03-19 23:58:31 -04:00
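A hedged sketch of the ABI selection described above; EF_RISCV_RVE is the ELF flag (value assumed here per the RISC-V psABI), and the GPR indices stand for t0 (x5) and a7 (x17):

#include <stdint.h>

#define EF_RISCV_RVE 0x0008    /* assumed value, per the RISC-V ELF psABI */

/* Return the index of the GPR carrying the syscall number. */
static int syscall_num_gpr(uint32_t elf_flags)
{
    return (elf_flags & EF_RISCV_RVE) ? 5 /* t0 */ : 17 /* a7 */;
}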
Michael Clark 2e0c040062
RISC-V: Allow interrupt controllers to claim interrupts
We can't allow the supervisor to control SEIP as this would allow the
supervisor to clear a pending external interrupt, which will result in
a lost interrupt in the case a PLIC is attached. The SEIP bit must be
hardware controlled when a PLIC is attached.

This logic was previously hard-coded so SEIP was always masked even
if no PLIC was attached. This patch adds riscv_cpu_claim_interrupts
so that the PLIC can register control of SEIP. In the case of models
without a PLIC (spike), the SEIP bit remains software controlled.

This interface allows for hardware control of supervisor timer and
software interrupts by other interrupt controller models.

Backports commit e3e7039cc24ecf47d81c091e8bb04552d6564ad8 from qemu
2019-03-19 23:48:12 -04:00
Alistair Francis a4f2dcde28
riscv: pmp: Log pmp access errors as guest errors
Backports commit aad5ac2311f3ad2c0be12d0eaaf4ef4398438fc2 from qemu
2019-03-19 23:45:03 -04:00
Jim Wilson 65903cf9a4
RISC-V: Add debug support for accessing CSRs.
Add a debugger field to CPURISCVState. Add riscv_csrrw_debug function
to set it. Disable mode checks when the debugger field is true.

Backports commit 753e3fe207db08ce0ef0405e8452c3397c9b9308 from qemu
2019-03-19 23:42:48 -04:00
Jim Wilson 30ab335bb3
RISC-V: Fixes to CSR_* register macros.
This adds some missing CSR_* register macros, and documents some as being
priv v1.9.1 specific.

Backports commit 8e73df6aa3f2f0e5c26c03a94a88406616291815 from qemu
2019-03-19 23:39:49 -04:00
Bastian Koppelmann c0f036578c
target/riscv: Fix manually parsed 16 bit insn
During the refactor to decodetree we removed the manual decoding that is
necessary for c.jal/c.addiw and removed the translation of c.flw/c.ld
and c.fsw/c.sd. This reintroduces the manual parsing and the
omitted implementation.

Backports commit f330433b3633647b047cfa418c2ca4d18fda69c7 from qemu
2019-03-19 05:44:58 -04:00
Amir Charif 2392d8b8ab
target/arm: Check access permission to ADDVL/ADDPL/RDVL
These instructions do not trap when SVE is disabled in EL0,
causing them to be executed with wrong size information.

Backports commit 5de56742a3c91de3d646326bec43a989bba83ca4 from qemu
2019-03-19 05:42:59 -04:00
Dongjiu Geng 4dc3d59fd3
target/arm: change arch timer registers access permission
Some generic arch timer registers are Config-RW in EL0, which means
the EL0 exception level can have write permission if it is
appropriately configured.

When the VM accesses these registers, QEMU first checks whether they
have RW permission, then checks whether they are appropriately
configured. If they are defined as read-only in EL0, they still do not
get write permission even when appropriately configured. So the write
permission needs to be added when defining them, according to the
ARMv8 spec.

Backports commit daf1dc5f82cefe2a57f184d5053e8b274ad2ba9a from qemu
2019-03-19 05:40:44 -04:00
Bastian Koppelmann e96282eb28
target/riscv: Remove decode_RV32_64G()
decodetree handles all instructions now so the fallback is not necessary
anymore.

Backports commit 25e6ca30c668783cd72ff97080ff44e141b99f9b from qemu
2019-03-19 05:37:42 -04:00
Bastian Koppelmann a371684da9
target/riscv: Remove gen_system()
With all 16-bit insns moved to decodetree, no path falls back to
gen_system(), so we can remove it.

Backports commit 8f7bc273868939f0821e07fb23792db63d45bffb from qemu
2019-03-19 05:36:48 -04:00
Bastian Koppelmann 1765e6a090
target/riscv: Rename trans_arith to gen_arith
Backports commit 8dc9e8a8b04c4308cf275aa6480d289dcd3cf9b3 from qemu
2019-03-19 05:35:44 -04:00
Bastian Koppelmann 28daad082b
target/riscv: Remove manual decoding of RV32/64M insn
Backports commit 1288701682d81b93f62e01cd87001dc90b30b881 from qemu
2019-03-19 05:34:32 -04:00
Bastian Koppelmann b9eda7c464
target/riscv: Remove shift and slt insn manual decoding
Backports commit 34446e845829f55eaa9a07a915950af0b2710b47 from qemu
2019-03-19 05:23:47 -04:00
Bastian Koppelmann 177726afb8
target/riscv: make ADD/SUB/OR/XOR/AND insn use arg lists
Manual decoding in gen_arith() is not necessary with decodetree. For now
the function is called trans_arith, as the original gen_arith still
exists. The former will be renamed to gen_arith as soon as the old
gen_arith can be removed.

Backports commit f2ab1728675772cd475a33f4df3d2f68a22c188f from qemu
2019-03-19 05:17:54 -04:00
Bastian Koppelmann cb7c94fbc4
target/riscv: Move gen_arith_imm() decoding into trans_* functions
gen_arith_imm() does a lot of decoding manually, which was hard to read
in case of the shift instructions and is not necessary anymore with
decodetree.

Backports commit 7a50d3e2ae7f13b24fe55990ea0b8ddcbbb43130 from qemu
2019-03-19 05:14:21 -04:00
Bastian Koppelmann 6190837e2f
target/riscv: Remove manual decoding from gen_store()
With decodetree we don't need to convert RISC-V opcodes into MemOps
as the old gen_store() did.

Backports commit bce8a342a1f0919479d18ec812b100136daa746b from qemu
2019-03-19 05:05:14 -04:00
Bastian Koppelmann f91f286ed2
target/riscv: Remove manual decoding from gen_load()
With decodetree we don't need to convert RISC-V opcodes into MemOps
as the old gen_load() did.

Backports commit 98898b20e9cca462843c22ad952c216ffd57d654 from qemu
2019-03-19 05:02:25 -04:00
Bastian Koppelmann 6f89816f5d
target/riscv: Remove manual decoding from gen_branch()
We now utilize decodetree argument sets such that no manual
decoding is necessary.

Backports commit 090cc2c898a04e42350eabf1bcf7d245471603f9 from qemu
2019-03-19 04:59:08 -04:00
Bastian Koppelmann 3fe4cf353c
target/riscv: Remove gen_jalr()
trans_jalr() is the only caller, so move the code into trans_jalr().

Backports commit 9e92c57d834cd50ab088d75510c3c720878eef13 from qemu
2019-03-19 04:55:52 -04:00
Bastian Koppelmann 580457a1d2
target/riscv: Convert quadrant 2 of RVXC insns to decodetree
Backports commit 97b0be81f6f20bfd53725cb2500b47c6786be532 from qemu
2019-03-19 04:53:07 -04:00
Bastian Koppelmann b4854e3340
target/riscv: Convert quadrant 1 of RVXC insns to decodetree
Backports commit 07b001c6fc500fa0e87fd8210f270d7dc8aff9ea from qemu
2019-03-19 04:50:08 -04:00
Lioncash 8d294a7897
target/riscv: Convert quadrant 0 of RVXC insns to decodetree 2019-03-19 04:45:53 -04:00
Bastian Koppelmann 67164f2b29
target/riscv: Convert RV priv insns to decodetree
Backports commit 4ba79c47a205b3af4b62b9b1b6090dee678a1069 from qemu
2019-03-19 04:40:24 -04:00
Bastian Koppelmann 7475207aba
target/riscv: Convert RV64D insns to decodetree
Backports commit 31fe4d35f2608daecb2319c81e0bb4af81b398ae from qemu
2019-03-18 16:57:16 -04:00
Bastian Koppelmann 71f2ed2959
target/riscv: Convert RV32D insns to decodetree
Backports commit 97f8b49372d73aab4d172df4ea297d7f3ce4e02e from qemu
2019-03-18 16:51:20 -04:00
Bastian Koppelmann d8d107ec85
target/riscv: Convert RV64F insns to decodetree
Backports commit 95561ee3b41a536cc373e59da10605e2a8676ee2 from qemu
2019-03-18 16:43:17 -04:00
Bastian Koppelmann 9edaf2069e
target/riscv: Convert RV32F insns to decodetree
Backports commit 6f0e74ff4b7f83901e99e59108eaa43513a0ce36 from qemu
2019-03-18 16:40:04 -04:00
Bastian Koppelmann 3f9177f6e7
target/riscv: Convert RV64A insns to decodetree
Backports commit 40b9faecfe8000520958f50a77ea16f4b3dd6405 from qemu
2019-03-18 16:27:53 -04:00
Bastian Koppelmann 81013f9e2b
target/riscv: Convert RV32A insns to decodetree
Backports commit 3b77c289aef21b33517f2fd7639cce13bed50cc1 from qemu
2019-03-18 16:25:50 -04:00
Bastian Koppelmann 3a5da0b939
target/riscv: Convert RVXM insns to decodetree
Backports commit d2e2c1e406e0ab886eafeb012fd2ed0d21f3a6a1 from qemu
2019-03-18 16:20:29 -04:00
Bastian Koppelmann 4ea449a809
target/riscv: Convert RVXI csr insns to decodetree
Backports commit 771fbe156a2a2be964a4fbe6251339a5570a26c4 from qemu
2019-03-18 16:17:59 -04:00
Bastian Koppelmann de580ee378
target/riscv: Convert RVXI fence insns to decodetree
Backports commit 0c865e856a7e97d37c4dea4cf2ff875faa6e72ed from qemu
2019-03-18 16:09:21 -04:00
Bastian Koppelmann 11e2b9c410
target/riscv: Convert RVXI arithmetic insns to decodetree
We cannot remove the call to gen_arith() in decode_RV32_64G(), since it
is used to translate multiply instructions.

Backports commit b73a987b09ad5081123dc6b1e8e6c8305a1c8673 from qemu
2019-03-18 16:04:49 -04:00
Bastian Koppelmann 1024ceb4df
target/riscv: Convert RV64I load/store insns to decodetree
This splits the 64-bit-only instructions into their own decode file, such
that we generate the decoder for these instructions only for the RISC-V
64-bit target.

Backports commit 7e45a682edc32ba90d6955215f062210531b835b from qemu
2019-03-18 16:02:16 -04:00
Bastian Koppelmann 65a415372b
target/riscv: Convert RV32I load/store insns to decodetree
Backports commit c1000d4e1bdb13857b601c425aca2fda9131283b from qemu
2019-03-18 15:59:43 -04:00
Bastian Koppelmann 55dc0038e8
target/riscv: Convert RVXI branch insns to decodetree
Backports commit 3cca75a6fe8b3f85e19559ffa64cb0be370d2814 from qemu
2019-03-18 15:58:16 -04:00
Bastian Koppelmann 5e5b3e9ea9
target/riscv: Activate decodetree and implement LUI & AUIPC
For now only LUI & AUIPC are decoded and translated. If decodetree fails, we
fall back to the old decoder.

Backports commit 2a53cff418335ccb4719e9a94fde55f6ebcc895d from qemu
2019-03-18 15:54:17 -04:00
Richard Henderson 4111a3a892
decodetree: Properly diagnose fields overflowing an insn
Previously this would result in an exception for shifting
the field mask by a negative number.

Backports commit 2decfc95583dc28add69810eaca6ada7b4b44d3a from qemu
2019-03-13 11:21:04 -04:00
Richard Henderson fcb49bb6f6
decodetree: Prefix extract function names with decode_function
This makes it easier to name Formats within multiple decode files.

Backports commit 71ecf79bf40db20237a3cfc01cc407cc4cad8817 from qemu
2019-03-13 11:20:26 -04:00
Richard Henderson 1f57e9bedc
decodetree: Allow +- to begin a number initializing a field
Backports commit 263ac638a76a72841e3f513b14c515680703e084 from qemu
2019-03-13 11:19:56 -04:00
Richard Henderson e18e116ce5
decodetree: Produce clean output for an empty input file
This is interesting for bisection, where an output file is plumbed,
but does not yet have patterns.

Backports commit 82bfac1c06cadeb5c7252734dc695d951185916c from qemu
2019-03-13 11:19:22 -04:00
Richard Henderson 190ee1657b
decodetree: Add --static-decode option
Like --decode, but do not drop 'static' qualifier.

Backports commit cd3e7fc18db43b296f413814cd4b72bcd6878bc4 from qemu
2019-03-13 11:18:43 -04:00
Richard Henderson c8514cc538
decodetree: Allow grouping of overlapping patterns
Backports commit 0eff2df4a2ce677230119440f7eb057acffad5eb from qemu
2019-03-13 11:17:48 -04:00
Richard Henderson 0c1b2a5d5a
decodetree: Do not unconditionally return from Pattern.output_code
As a consequence, the 'return false' gets pushed up one level.

This will allow us to perform some other action when the
translator returns failure.

Backports commit eb6b87fac70dd62e3f1286703db20c012e7a9611 from qemu
2019-03-13 11:13:48 -04:00
Philippe Mathieu-Daudé edb21478b7
decodetree: Ensure build_tree does not include values outside insnmask
Reproduced with "scripts/decodetree.py /dev/null".

Backports commit 9b3186e38f00ae0cba36c096e3654f916699f336 from qemu
2019-03-13 11:12:24 -04:00
Wei Yang 20994eca53
exec.c: refactor function flatview_add_to_dispatch()
flatview_add_to_dispatch() registers page based on the condition of
*section*, which may looks like this:

|s|PPPPPPP|s|

where s stands for subpage and P for page.

The procedure of this function could be described as:

- register first subpage
- register page
- register last subpage

This means the procedure could be simplified into these three steps
instead of a loop iteration.

This patch refactors the function into three corresponding steps and
adds some comment to clarify it.

Backports commit 494d199727ba248c96326b4e1c97f86eb11a5ec7 from qemu
2019-03-11 17:00:46 -04:00
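A hedged sketch of the three-step shape described above, using plain page offsets instead of QEMU's MemoryRegionSection; register_subpage()/register_page() are hypothetical stand-ins:

#include <stdint.h>

#define PAGE_SIZE 4096u

extern void register_subpage(uint64_t start, uint64_t len);
extern void register_page(uint64_t start, uint64_t len);

static void add_to_dispatch_sketch(uint64_t start, uint64_t len)
{
    uint64_t end        = start + len;
    uint64_t first_full = (start + PAGE_SIZE - 1) & ~(uint64_t)(PAGE_SIZE - 1);
    uint64_t last_full  = end & ~(uint64_t)(PAGE_SIZE - 1);

    if (first_full >= end) {                               /* fits in one page */
        register_subpage(start, len);
        return;
    }
    if (start < first_full) {
        register_subpage(start, first_full - start);       /* leading subpage */
    }
    if (first_full < last_full) {
        register_page(first_full, last_full - first_full); /* whole pages */
    }
    if (last_full < end) {
        register_subpage(last_full, end - last_full);      /* trailing subpage */
    }
}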
Philippe Mathieu-Daudé 963beb216d
configure: Disable W^X on OpenBSD
Since OpenBSD 6.0 [1], W^X is enforced by default [2].
TCG requires WX access. Disable W^X if it is available.
This fixes:

# lm32-softmmu/qemu-system-lm32
Could not allocate dynamic translator buffer

# sysctl kern.wxabort=1
kern.wxabort: 0 -> 1
# lm32-softmmu/qemu-system-lm32
mmap: Not supported
Abort trap (core dumped)
# gdb -q lm32-softmmu/qemu-system-lm32 qemu-system-lm32.core
(gdb) bt
#0 0x000017e3c156c50a in _thread_sys___syscall () at {standard input}:5
#1 0x000017e3c15e5d7a in *_libc_mmap (addr=Variable "addr" is not available.) at /usr/src/lib/libc/sys/mmap.c:47
#2 0x000017e17d9abc8b in alloc_code_gen_buffer () at /usr/src/qemu/accel/tcg/translate-all.c:1064
#3 0x000017e17d9abd04 in code_gen_alloc (tb_size=0) at /usr/src/qemu/accel/tcg/translate-all.c:1112
#4 0x000017e17d9abe81 in tcg_exec_init (tb_size=0) at /usr/src/qemu/accel/tcg/translate-all.c:1149
#5 0x000017e17d9897e9 in tcg_init (ms=0x17e45e456800) at /usr/src/qemu/accel/tcg/tcg-all.c:66
#6 0x000017e17d9891b8 in accel_init_machine (acc=0x17e3c3f50800, ms=0x17e45e456800) at /usr/src/qemu/accel/accel.c:63
#7 0x000017e17d989312 in configure_accelerator (ms=0x17e45e456800, progname=0x7f7fffff07b0 "lm32-softmmu/qemu-system-lm32") at /usr/src/qemu/accel/accel.c:111
#8 0x000017e17d9d8616 in main (argc=1, argv=0x7f7fffff06b8, envp=0x7f7fffff06c8) at vl.c:4325

[1] https://www.openbsd.org/faq/upgrade60.html
[2] https://undeadly.org/cgi?action=article&sid=20160527203200

Backports commit 7776ea6b49873ed18a2111e25ed8a6d94bd73db8 from qemu
2019-03-11 16:46:52 -04:00
Luwei Kang 9f2ce63414
i386: extended the cpuid_level when Intel PT is enabled
Intel Processor Trace requires CPUID[0x14], but cpuid_level was not
being updated when creating a KVM guest with
e.g. "-cpu qemu64,+intel-pt".

Backports relevant bits of commit
f24c3a79a415042f6dc195f029a2ba7247d14cac from qemu
2019-03-11 16:40:23 -04:00
Lioncash d6b706a296
qemu/fpu: Synchronize with Qemu
Resolves a few formatting discrepancies
2019-03-09 18:27:31 -05:00
Lioncash b6f752970b
target/riscv: Initial introduction of the RISC-V target
This ports over the RISC-V architecture from Qemu. This is currently a
very barebones transition. No code hooking or any fancy stuff.
Currently, you can feed it instructions and query the CPU state itself.

This also allows choosing between RISC-V 32-bit and RISC-V 64-bit
through Unicorn's interface as well.

Extremely basic examples of executing a single instruction have been
added to the samples directory to help demonstrate how to use the basic
functionality.
2019-03-08 21:46:10 -05:00
yhql 1723cb1015
Add ARM MSP, PSP and CONTROL register access (#1071)
Necessary for NVIC exception emulation from user.

Backports commit 31851280316d37305f412fff42f45bb375999074 from unicorn
2019-03-08 02:24:49 -05:00
Lioncash 8f688748c4
translate/i386: Restore Qemu's ordering of CPU and cache definitions
Like the previous two changes, this restores the layout of Qemu's
designated initializers.
2019-03-08 01:51:27 -05:00
Lioncash 1ddbb253e2
target/mips: Restore Qemu's organization of CPU definitions
Like 5075a0158a, this restores Qemu's
formatting of the processor tables to make it significantly less
annoying to maintain.
2019-03-08 01:40:50 -05:00
Lioncash 5075a0158a
target/arm: Restore Qemu's organization of coprocessor registers
These changes were mostly made in upstream unicorn for what I can only
guess was to support old versions of MSVC's compiler.

This is also a pain to maintain, since everything needs to be done
manually and can be a source of errors. It also makes backporting
changes from qemu take more work than it needs to.

Because of that, this change restores Qemu's organization of the
coprocessor registers.
2019-03-08 01:32:47 -05:00
Richard Henderson f116560d2c
target/arm: Implement ARMv8.5-FRINT
Backports 6bea25631af92531027d3bf3ef972a4d51d62e7c from qemu.
2019-03-05 23:17:33 -05:00
Richard Henderson f855ac073d
target/arm: Restructure handle_fp_1src_{single, double}
This will allow sharing code that adjusts rmode beyond
the existing users.

Backports commit 0e4db23d1fdbfed4fc1ec19b6e59820209600358 from qemu
2019-03-05 23:09:48 -05:00
Richard Henderson 94b5aab8f8
target/arm: Implement ARMv8.5-CondM
Backports commit 5ef84f111483e3f7b57efc690e22081ca8f99544 from qemu
2019-03-05 23:04:06 -05:00
Richard Henderson 1dfa15a683
target/arm: Implement ARMv8.4-CondM
Backports commit b89d9c988a988d5547c73e2bc43f59b0c07420a5 from qemu
2019-03-05 22:59:51 -05:00
Richard Henderson 65a3f3be5b
target/arm: Rearrange disas_data_proc_reg
This decoding more closely matches the ARMv8.4 Table C4-6,
Encoding table for Data Processing - Register Group.

In particular, op2 == 0 is now more than just Add/sub (with carry).

Backports commit 2fba34f70d9a81bab56e61bb99a4d6632bdfe531 from qemu
2019-03-05 22:55:27 -05:00
Richard Henderson 45c297c99b
target/arm: Add set/clear_pstate_bits, share gen_ss_advance
We do not need an out-of-line helper for manipulating bits in pstate.
While changing things, share the implementation of gen_ss_advance.

Backports commit 22ac3c49641f6eed93dca5b852030b4d3eacf6c4 from qemu
2019-03-05 22:55:22 -05:00
Richard Henderson 60742608f5
target/arm: Split helper_msr_i_pstate into 3
The EL0+UMA check is unique to DAIF. While SPSel had avoided the
check by nature of already checking EL >= 1, the other post v8.0
extensions to MSR (imm) allow EL0 and do not require UMA. Avoid
the unconditional write to pc and use raise_exception_ra to unwind.

Backports commit ff730e9666a716b669ac4a8ca7c521177d1d2b15 from qemu
2019-03-05 22:45:11 -05:00
Richard Henderson 5d42ca6a65
target/arm: Implement ARMv8.0-PredInv
Backports commit cb570bd318beb2ecce83cabf8016dacceb824dce from qemu
2019-03-05 22:37:57 -05:00
Richard Henderson 1721e429c2
target/arm: Implement ARMv8.0-SB
Backports commit 9888bd1e20425dfe4dcca5dcd1ca2fac8e90ad19 from qemu
2019-03-05 22:35:16 -05:00
Richard Henderson a552a7b2e0
target/arm: Split out arm_sctlr
Minimize the number of places that will need updating when
the virtual host extensions are added.

Backports commit 64e40755cd41fbe8cd266cf387e42ddc57a449ef from qemu
2019-03-05 22:29:25 -05:00
Richard Henderson fa70a2bc69
target/arm: Fix PC test for LDM (exception return)
Found by inspection: Rn is the base register against which the
load began; I is the register within the mask being processed.
The exception return should of course be processed from the loaded PC.

Backports commit 9d090d17234058f55c3c439d285db78c94d7d4de from qemu
2019-03-05 22:27:38 -05:00
Lioncash 7a6f61057b
target/m68k: Correct instruction emulation
Previously we weren't even initializing the instruction table, so any
attempt at emulation would cause a segmentation fault.

This also moves the end address check after the decoding to correctly
perform exiting behavior with the new translator model.
2019-02-28 19:21:49 -05:00
Lioncash 0868015992
target/arm: Move TCGContext variable within arm_post_translate_insn into a narrower scope
This is only used within the scope of the if statement, so we can just
move it there.
2019-02-28 18:53:33 -05:00
Lioncash 15440a83c5
target/arm: Fix execution of ARM instructions
Previously we checked whether we were at the ending address prior to
the actual decoding. This worked fine using the old model of the
translation process in qemu. However, this causes the wrong behavior to
occur in both ARM and Thumb/Thumb-2 modes using the newer translator
model.

Given the translator itself checks for the end address already, this
needs to be placed within arm_post_translate_insn().

This prevents the emulation process being off-by-one as well when it
comes to actually executing the instructions.
2019-02-28 18:49:22 -05:00
dmarxn 7164ab5ff4
changed cpu_compute_eflags to use the updated eflags variable. Otherwise, cli/sti and popfl may break, as we get the non-updated eflags (#1057)
Backports commit 360e9c60e1feb4a93e7e43f30858e38eac2d35f2 from unicorn
2019-02-28 17:05:13 -05:00
nanoric 245d2070fe
[Fix] Add feature support for CMPXCHG16B instruction. (#983)
Backports commit 2a240079d8fa4f1c77208379338c676ac6bf18ce from unicorn
2019-02-28 17:03:08 -05:00
nanoric 9e8e5645fc
[Fix] Fix a problem that use uc_reg_write to write fs, gs has no effets in x86 64-bit mode. (#984)
Backports commit a2493a0d4121b671fe9d16e41a9bdd3307b7b1ef from unicorn.
2019-02-28 16:52:54 -05:00
BrunoPujos 1d4bfd9aca
i386: set MSR IA32_EFER to correct value at init for IA32e Mode (#1047)
Backports commit 536c4e77c4350fac3e5c2b9b57d8c16f69b934d3 from unicorn.
2019-02-28 16:49:31 -05:00
cfrantz 5ad3a0ea82
Add support for the ARM IPSR register. (#1067)
1. Create an enum name for the IPSR register.
2. Implement read and write of the IPSR via the xpsr helper functions.

Fixes #1065

Backports commit 6c319941a5462ee3a4af4593c371f5674394d6ce from unicorn.
2019-02-28 16:40:54 -05:00
dmarxn cdcd026413
target/i386: Added MXCSR register, fixed writing to FPUCW. (#1059)
* Added MXCSR register for reading and writing
* Changed writing for fpucw register, now the qemu rounding status is updated as well

Backports commit 256e7782ceafb1f8915da167040d5368c38f9585 from unicorn
2019-02-28 16:31:22 -05:00
Lioncash 2b5d424ded
target/mips: Amend botched rename 2019-02-28 16:21:09 -05:00
Mateja Marjanovic d50a1fef6b
target/mips: Preparing for adding MMI instructions
Set up MMI code to be compiled only for TARGET_MIPS64. This is
needed so that GPRs are 64 bit, and combined with MMI registers,
they will form full 128 bit registers.

Backports commit 37b9aae2e6e005e6df206a0b4804972460806166 from qemu
2019-02-28 16:15:47 -05:00
Richard Henderson fbe1ee25ff
target/arm: Enable ARMv8.2-FHM for -cpu max
Backports commit 991c05995a7bbafbebc1e4d405e947f2edcee063 from qemu
2019-02-28 15:47:03 -05:00
Richard Henderson 4ae3ff8e61
target/arm: Implement VFMAL and VFMSL for aarch32
Backports commit 87732318c5d68a366fc2d6fc394d9c20412099fa from qemu
2019-02-28 15:44:59 -05:00
Richard Henderson 625d3f3cfb
target/arm: Implement FMLAL and FMLSL for aarch64
Backports commit 0caa5af802ff622c854ff4ee2e2b8cdd135b4d73 from qemu
2019-02-28 15:36:41 -05:00
Richard Henderson 5473c3603f
target/arm: Add helpers for FMLAL
Note that float16_to_float32 rightly squashes SNaN to QNaN.
But of course pickNaNMulAdd, for ARM, selects SNaNs first.
So we have to preserve SNaN long enough for the correct NaN
to be selected. Thus float16_to_float32_by_bits.

Backports commit a4e943a716d5fac923d82df3eabc65d1e3624019 from qemu
2019-02-28 15:31:48 -05:00
Peter Maydell 82b8e97f76
target/arm: Gate "miscellaneous FP" insns by ID register field
There is a set of VFP instructions which we implement in
disas_vfp_v8_insn() and gate on the ARM_FEATURE_V8 bit.
These were all first introduced in v8 for A-profile, but in
M-profile they appeared in v7M. Gate them on the MVFR2
FPMisc field instead, and rename the function appropriately.

Backports commit c0c760afe800b60b48c80ddf3509fec413594778 from qemu
2019-02-28 15:26:27 -05:00
Peter Maydell 118a2bde5c
target/arm: Use MVFR1 feature bits to gate A32/T32 FP16 instructions
Instead of gating the A32/T32 FP16 conversion instructions on
the ARM_FEATURE_VFP_FP16 flag, switch to our new approach of
looking at ID register bits. In this case MVFR1 fields FPHP
and SIMDHP indicate the presence of these insns.

This change doesn't alter behaviour for any of our CPUs.

Backports commit 602f6e42cfbfe9278be34e9b91d2ceb695837e02 from qemu
2019-02-28 15:23:51 -05:00
Lioncash 4f210d0731
header_gen: Add float128_to_uint32 to the list
Avoids multiple definition errors.
2019-02-28 15:19:44 -05:00
Richard Henderson 11679ff3cf
softfloat: Support float_round_to_odd more places
Previously this was only supported for roundAndPackFloat64.

New support in round_canonical, round_to_int, float128_round_to_int,
roundAndPackFloat32, roundAndPackInt32, roundAndPackInt64,
roundAndPackUint64. This does not include any of the floatx80 routines,
as we do not have users for that rounding mode there.

Backports commit 5d64abb32ffe558e616545819f3e53dd66335994 from qemu
2019-02-28 15:17:38 -05:00
David Hildenbrand 7373819b1a
softfloat: Implement float128_to_uint32
Handling it just like float128_to_uint32_round_to_zero, which hopefully
is free of bugs :)

Documentation basically copied from float128_to_uint64

Backports commit e45de9922e43c1ce4f4739b62142314a13029d5c from qemu
2019-02-28 15:13:09 -05:00
David Hildenbrand 24a2bd702c
softfloat: add float128_is_{normal,denormal}
Needed on s390x, to test for the data class of a number, so it will
soon gain a user.

A number is considered normal if the exponent is neither 0 nor all 1's.
That can be checked by adding 1 to the exponent and comparing against
>= 2 after dropping any overflow into the sign bit.

While at it, convert the other floatXX_is_normal functions to use a
similar, less error prone calculation, as suggested by Richard H.

Backports commit 47393181604d507f4fe2a15a65b1eede0f974d6a from qemu
2019-02-28 15:11:50 -05:00
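A hedged illustration of that exponent trick for the 32-bit case, operating on the raw bit pattern (the float128 variant applies the same idea to the high 64 bits):

#include <stdbool.h>
#include <stdint.h>

/* Normal means the exponent field is neither 0 nor all ones. Adding 1 to
 * the exponent (bit 23 upward) and masking off the carry into the sign
 * bit reduces that to one unsigned comparison: (exp + 1) >= 2. */
static inline bool float32_is_normal_sketch(uint32_t bits)
{
    return ((bits + 0x00800000u) & 0x7FFFFFFFu) >= 0x01000000u;
}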
Lioncash b9da32241b
header_gen: Correct multiple definition errors 2019-02-27 17:03:28 -05:00
Lioncash f2ff870171
Fix arm build 2019-02-22 21:00:04 -05:00
David Hildenbrand 8583c8f1f6
include/exec/helper-head.h: support "const void *" in helper calls
Especially when dealing with out-of-line gvec helpers, it is often
helpful to specify some vector pointers as constant. E.g. when
we have two inputs and one output, marking the two inputs as consts
pointers helps to avoid bugs.

Const pointers can be specified via "cptr", however behave in TCG just
like ordinary pointers. We can specify helpers like:

DEF_HELPER_FLAGS_4(gvec_vbperm, TCG_CALL_NO_RWG, void, ptr, cptr, cptr, i32)

void HELPER(gvec_vbperm)(void *v1, const void *v2, const void *v3,
uint32_t desc)

And make sure that here, only v1 will be written (as long as const is
not casted away, of course).

Backports commit 8c6edfdd90522caa4fc429144d393aba5b99f584 from qemu
2019-02-22 19:12:09 -05:00
Richard Henderson b9156a0f55
tcg: Remove TODO file
The last update to this file was 9 years ago. In the meantime,
4 of the 6 ideas have actually been completed. The last two do
not actually make sense anymore.

Backports commit 9e564a1dde5abc7ae4cebc115142f685d98938d7 from qemu
2019-02-22 19:10:26 -05:00
Richard Henderson c9ad233678
target/arm: Implement ARMv8.3-JSConv
Backports commit 6c1f6f2733a7692793135ea5ce72b829add99a50 from qemu
2019-02-22 19:08:57 -05:00
Richard Henderson f16dcbe226
target/arm: Rearrange Floating-point data-processing (2 regs)
There are lots of special cases within these insns. Split the
major argument decode/loading/saving into no_output (compares),
rd_is_dp, and rm_is_dp.

We still need to special case argument load for compare (rd as
input, rm as zero) and vcvt fixed (rd as input+output), but lots
of special cases do disappear.

Now that we have a full switch at the beginning, hoist the ISA
checks from the code generation.

Backports commit e80941bd64cc388554770fd72334e9e7d459a1ef from qemu
2019-02-22 18:57:25 -05:00
Richard Henderson dbe623dacc
target/arm: Split out vfp_helper.c
Move all of the fp helpers out of helper.c into a new file.
This is code movement only. Since helper.c has no copyright
header, take the one from cpu.h for the new file.

Backports commit 37356079fcdb34e13abbed8ea0c00ca880c31247 from qemu
2019-02-22 18:48:44 -05:00
Richard Henderson d6fbc0f4f3
target/arm: Restructure disas_fp_int_conv
For opcodes 0-5, move some if conditions into the structure
of a switch statement. For opcodes 6 & 7, decode everything
at once with a second switch.

Backports commit 3c3ff68492c2d00bd8cb39ed2d02bdaf5caf5cb8 from qemu
2019-02-22 18:39:08 -05:00
Aaron Lindsay OS 5c153537f5
target/arm: Stop unintentional sign extension in pmu_init
This was introduced by
commit bf8d09694ccc07487cd73d7562081fdaec3370c8
target/arm: Don't clear supported PMU events when initializing PMCEID1
and identified by Coverity (CID 1398645).

Backports commit 67da43d668320e1bcb0a0195aaf2de4ff2a001a0 from qemu
2019-02-22 18:32:10 -05:00
Peter Maydell 928f226ed6
target/arm: v8M MPU should use background region as default, not always
The "background region" for a v8M MPU is a default which will be used
(if enabled, and if the access is privileged) if the access does
not match any specific MPU region. We were incorrectly using it
always (by putting the condition at the wrong nesting level). This
meant that we would always return the default background permissions
rather than the correct permissions for a specific region, and also
that we would not return the right information in response to a
TT instruction.

Move the check for the background region to the same place in the
logic as the equivalent v8M MPUCheck() pseudocode puts it.
This in turn means we must adjust the condition we use to detect
matches in multiple regions to avoid false-positives.

Backports commit cff21316c666c8053b1f425577e324038d0ca30d from qemu
2019-02-22 18:30:44 -05:00
Markus Armbruster 6279d604d3
qapi: Prepare for system modules other than 'builtin'
The next commit wants to generate qapi-emit-events.{c.h}. To enable
that, extend QAPISchemaModularCVisitor to support additional "system
modules", i.e. modules that don't correspond to a (user-defined) QAPI
schema module.

Backports commit c2e196a9b41235a308fb6d1c516aa91ba0a807c8 from qemu
2019-02-21 09:58:54 -05:00
Markus Armbruster 5d0966ce6b
qapi: Clean up modular built-in code generation a bit
We neglect to call .visit_module() for the special module we use for
built-ins. Harmless, but clean it up anyway. The
tests/qapi-schema/*.out now show the built-in module as 'module None'.

Subclasses of QAPISchemaModularCVisitor need to ._add_module() this
special module to enable code generation for built-ins. When this
hasn't been done, QAPISchemaModularCVisitor.visit_module() does
nothing for the special module. That looks like built-ins could
accidentally be generated into the wrong module when a subclass
neglects to call ._add_module(). Can't happen, because built-ins are
all visited before any other module. But that's non-obvious. Switch
off code generation explicitly.

Rename QAPISchemaModularCVisitor._begin_module() to
._begin_user_module().

New QAPISchemaModularCVisitor._is_builtin_module(), for clarity.

Backports commit dcac64711ea906e844ae60a5927e5580f7252c1e from qemu
2019-02-21 09:56:29 -05:00
Richard Henderson 5c34cab41c
target/arm: Add missing clear_tail calls
Fortunately, the functions affected are so far only called from SVE,
so there is no tail to be cleared. But as we convert more of AdvSIMD
to gvec, this will matter.

Backports commit d8efe78e8039511b95c23d75bb48eca6873fbb0f from qemu
2019-02-15 18:15:20 -05:00
Richard Henderson f3cb92c86c
target/arm: Use vector operations for saturation
For same-sign saturation, we have tcg vector operations. We can
compute the QC bit by comparing the saturated value against the
unsaturated value.

Backports commit 89e68b575e138d0af1435f11a8ffcd8779c237bd from qemu
2019-02-15 18:14:09 -05:00
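A hedged scalar sketch of the QC computation being described: perform the saturating add, then fold into QC whether the saturated and unsaturated results differ.

#include <stdint.h>

static int8_t sqadd8_with_qc_sketch(int8_t a, int8_t b, uint8_t *qc)
{
    int16_t full = (int16_t)a + (int16_t)b;  /* unsaturated result */
    int8_t  sat;

    if (full > INT8_MAX) {
        sat = INT8_MAX;
    } else if (full < INT8_MIN) {
        sat = INT8_MIN;
    } else {
        sat = (int8_t)full;
    }
    *qc |= (sat != full);                    /* QC set when saturation occurred */
    return sat;
}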
Richard Henderson 10d468f601
target/arm: Split out FPSCR.QC to a vector field
Change the representation of this field such that it is easy
to set from vector code.

Backports commit a4d5846245c5e029e5aa3945a9bda1de1c3fedbf from qemu
2019-02-15 18:04:13 -05:00
Richard Henderson 356b70e931
target/arm: Fix set of bits kept in xregs[ARM_VFP_FPSCR]
Given that we mask bits properly on set, there is no reason
to mask them again on get. We failed to clear the exception
status bits, 0x9f, which means that the wrong value would be
returned on get. Except in the (probably normal) case in which
the set clears all of the bits.

Simplify the code in set to also clear the RES0 bits.

Backports commit 18aaa59c622208743565307668a2100ab24f7de9 from qemu
2019-02-15 18:00:57 -05:00
Richard Henderson ca4bb1b4bc
target/arm: Split out flags setting from vfp compares
Minimize the code within a macro by splitting out a helper function.
Use deposit32 instead of manual bit manipulation.

Backports commit 55a889456ef78f3f9b8eae9846c2f1453b1dd77b from qemu
2019-02-15 17:59:34 -05:00
Richard Henderson 4e44043956
target/arm: Fix arm_cpu_dump_state vs FPSCR
Backports commit ec527e4eeccc31e3beadf3b61b66c61bbd873811 from qemu
2019-02-15 17:58:25 -05:00
Richard Henderson ed7c9d0710
target/arm: Remove neon min/max helpers
These are now unused.

Backports commit a5c5dc53c4688efc149b235361d2d49869e77139 from qemu
2019-02-15 17:57:18 -05:00
Richard Henderson 198befc50e
target/arm: Use tcg integer min/max primitives for neon
The 32-bit PMIN/PMAX has been decomposed to scalars,
and so can be trivially expanded inline.

Backports commit 9ecd3c5c1651fa7f9adbedff4806a2da0b50490c from qemu
2019-02-15 17:55:11 -05:00
Richard Henderson eee33bd692
target/arm: Use vector minmax expanders for aarch32
Backports commit 6f2782218230bbb33fa22f9a2f73f8a570046007 from qemu
2019-02-15 17:54:05 -05:00
Richard Henderson 96d1df966b
target/arm: Use vector minmax expanders for aarch64
Backports commit 264d2a481a6c34dfda53be3fbea66116bcef9c5a from qemu
2019-02-15 17:52:36 -05:00
Richard Henderson d147946edc
target/arm: Rely on optimization within tcg_gen_gvec_or
Since we're now handling a == b generically, we no longer need
to do it by hand within target/arm/.

Backports commit 2900847ff4c862887af750935a875059615f509a from qemu
2019-02-15 17:50:28 -05:00
Alex Bennée bf9c8499ca
target/arm: expose remaining CPUID registers as RAZ
There are a whole bunch more registers in the CPUID space which are
currently not used but are exposed as RAZ. To avoid too much
duplication we expand ARMCPRegUserSpaceInfo to understand glob
patterns so we only need one entry to tweak whole ranges of registers.

Backports commit d040242effe47850060d2ef1c461ff637d88a84d from qemu
2019-02-15 17:48:37 -05:00
Alex Bennée 890983f186
target/arm: expose MPIDR_EL1 to userspace
As this is a single register we could expose it with a simple ifdef
but we use the existing modify_arm_cp_regs mechanism for consistency.

Backports commit 522641660c3de64ed8322b8636c58625cd564a3f from qemu
2019-02-15 17:29:23 -05:00
Alex Bennée babf31dfa0
target/arm: expose CPUID registers to userspace
A number of CPUID registers are exposed to userspace by modern Linux
kernels thanks to the "ARM64 CPU Feature Registers" ABI. For QEMU's
user-mode emulation we don't need to emulate the kernel's trap but just
return the value the trap would have returned. To avoid too much #ifdef
hackery we process ARMCPRegInfo with a new helper (modify_arm_cp_regs)
before defining the registers. The modify routine is driven by a
simple data structure which describes which bits are exported and
which are fixed.

Backports commit 6c5c0fec29bbfe36c64eca1edfd8455be46b77c6 from qemu
2019-02-15 17:27:30 -05:00
Alex Bennée 0a51e5055f
target/arm: relax permission checks for HWCAP_CPUID registers
Although technically not visible to userspace, the kernel does make
them visible via a trap-and-emulate ABI. We provide a new permission
mask (PL0U_R) which maps to PL0_R for CONFIG_USER builds and adjust
the minimum permission check accordingly.

Backports commit b5bd7440422bb66deaceb812bb9287a6a3cdf10c from qemu
2019-02-15 17:18:06 -05:00
Catherine Ho 841ac2b3bb
target/arm: Fix int128_make128 lo, hi order in paired_cmpxchg64_be
The lo,hi order is different from the comments. And in commit
1ec182c33379 ("target/arm: Convert to HAVE_CMPXCHG128"), it changes
the original code logic. So just restore the old code logic before this
commit:
do_paired_cmpxchg64_be():
cmpv = int128_make128(env->exclusive_high, env->exclusive_val);
newv = int128_make128(new_hi, new_lo);

This fixes a bug that would only be visible for big-endian
AArch64 guest code.

Fixes: 1ec182c33379 ("target/arm: Convert to HAVE_CMPXCHG128")

Backports commit abd5abc58c5d4c9bd23427b0998a44eb87ed47a2 from qemu
2019-02-15 17:16:55 -05:00
Peter Maydell 31813bafe2
target/arm: Implement HACR_EL2
HACR_EL2 is a register with IMPDEF behaviour, which allows
implementation specific trapping to EL2. Implement it as RAZ/WI,
since QEMU's implementation has no extra traps. This also
matches what h/w implementations like Cortex-A53 and A57 do.

Backports commit 831a2fca343ebcd6651eab9102bd7a36b77da65d from qemu
2019-02-15 17:15:41 -05:00
Aaron Lindsay OS af17f7fa59
target/arm: Fix CRn to be 14 for PMEVTYPER/PMEVCNTR
This bug was introduced in:
commit 5ecdd3e47cadae83a62dc92b472f1fe163b56f59
target/arm: Finish implementation of PM[X]EVCNTR and PM[X]EVTYPER

Backports commit 62c7ec3488fe0dcbabffd543f458914e27736115 from qemu
2019-02-15 17:12:04 -05:00
Leon Alrae 6099733fc5
target/mips: reimplement SC instruction emulation and use cmpxchg
Completely rewrite conditional stores handling. Use cmpxchg.

This eliminates the need for separate implementations of SC instruction
emulation for user and system emulation.

Backports commit 33a07fa2db66376e6ee780d4a8b064dc5118cf34 from qemu
2019-02-15 17:10:16 -05:00
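A hedged user-level sketch of the SC-via-cmpxchg idea (C11 atomics standing in for the TCG cmpxchg op): the store succeeds only if memory still holds the value observed by the matching LL.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

static bool sc_sketch(_Atomic uint32_t *addr, uint32_t ll_value, uint32_t new_value)
{
    uint32_t expected = ll_value;
    /* true iff *addr still equals the LL-observed value and was updated */
    return atomic_compare_exchange_strong(addr, &expected, new_value);
}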
Leon Alrae df67716ed5
target/mips: compare virtual addresses in LL/SC sequence
Only do virtual address comparisons in LL/SC sequence emulation.

Until this patch, physical addresses had been compared in the SC part of
the LL/SC sequence, even though such comparisons could be avoided. Getting
rid of them allows throwing away SC helpers and having common SC
implementations in user and system mode, avoiding the need for two
separate implementations selected by #ifdef CONFIG_USER_ONLY.

Correct guest software should not rely on LL/SC if it accesses the
same physical address via different virtual addresses or if page
mapping gets changed between LL/SC due to manipulating TLB entries.
MIPS Instruction Set Manual clearly says that an RMW sequence must
use the same address in the LL and SC (virtual address, physical
address, cacheability and coherency attributes must be identical).
Otherwise, the result of the SC is not predictable. This patch takes
advantage of this fact and removes the virtual->physical address
translation from the SC helper.

lladdr served as the Coprocessor 0 LLAddr register, which captures the
physical address of the most recent LL instruction, and lladdr was also
used for comparison with the following SC physical address. This patch
changes the meaning of lladdr - now it only keeps the virtual address of
the most recent LL. Additionally, a CP0_LLAddr field is introduced which
is the actual Coprocessor 0 LLAddr register that the guest can access.

Backports commit c7c7e1e9a5e3f0a8a1dbff6e4ccfd21c2dc9f845 from qemu
2019-02-15 16:59:40 -05:00
Emilio G. Cota f31764dd5b
cputlb: update TLB entry/index after tlb_fill
We are failing to take into account that tlb_fill() can cause a
TLB resize, which renders prior TLB entry pointers/indices stale.
Fix it by re-doing the TLB entry lookups immediately after tlb_fill.

Fixes: 86e1eff8bc ("tcg: introduce dynamic TLB sizing", 2019-01-28)

Backports commit 6d967cb86d5b4a60ba15b497126b621ce9ca6609 from qemu
2019-02-12 11:48:48 -05:00
Emilio G. Cota 1b44fd94ac
exec-all: document that tlb_fill can trigger a TLB resize
Backports commit ae56a2ff92ac73782279abf8857585c34b15f509 from qemu
2019-02-12 11:38:28 -05:00
Mark Cave-Ayland 576df55076
tcg/i386: fix unsigned vector saturating arithmetic
Due to a cut/paste error in the original implementation, the unsigned
vector saturating arithmetic was erroneously being calculated as signed
vector saturating arithmetic.

Fixes: 8ffafbcec2 ("tcg/i386: Implement vector saturating arithmetic")

Backports commit 3115584d39afe8cf2a84a40549029f53792abca5 from qemu
2019-02-12 11:37:12 -05:00
Richard Henderson f7c5f0ccbe
tcg: Diagnose referenced labels that have not been emitted
Currently, a jump to a label that is not defined anywhere will
be emitted but not relocated. This results in a jump to a random
jump target. With tcg debugging, print a diagnostic to the -d op
file and abort.

This could help debug or detect errors like
c2d9644e6d ("target/arm: Fix crash on conditional instruction in an IT block")

Backports commit bef16ab4e641636b4e85c3d863b4257ce0be4e6f from qemu
2019-02-12 11:35:11 -05:00
Catherine Ho 17477ac1ca
tcg: add early clobber modifier in atomic16_cmpxchg on aarch64
Without this patch, gcc might mix up the input/output registers and
cause unpredictable errors.

Fixes: 1ec182c33379 ("target/arm: Convert to HAVE_CMPXCHG128")

Backports commit 7400d6938c6d455c4eba2b80c06d60c8fa5c5ba3 from qemu
2019-02-07 08:58:53 -05:00
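
A generic illustration of why the early-clobber marker matters (not the
actual 128-bit cmpxchg sequence; the helper below is hypothetical):

    /* AArch64 GCC inline asm. Without '&', the compiler may assign 'tmp'
     * and 'b' to the same register, because it assumes outputs are written
     * only after all inputs have been consumed; the first mov would then
     * clobber 'b' before the add reads it. */
    static inline long asm_add_sketch(long a, long b)
    {
        long tmp;
        asm("mov %0, %1\n\t"
            "add %0, %0, %2"
            : "=&r"(tmp)            /* early-clobber output */
            : "r"(a), "r"(b));
        return tmp;
    }
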
Robert Hoo c8a49d8593
Revert "i386: Add CPUID bit for PCONFIG"
This reverts commit 5131dc433df54b37e8e918d8fba7fe10344e7a7b.
The new instruction 'PCONFIG' will not be exposed to the guest.

Backports commit 712f807e1965c8f1f1da5bbec2b92a8c540e6631 from qemu
2019-02-07 08:56:40 -05:00
Paolo Bonzini 4234209a64
i386: remove the 'INTEL_PT' CPUID bit from named CPU models
Processor tracing is not yet implemented for KVM and it will be an
opt in feature requiring a special module parameter.
Disable it, because it is wrong to enable it by default, and
it is impossible that anyone has ever used it.

Backports commit 4c257911dcc7c4189768e9651755c849ce9db4e8 from qemu
2019-02-07 08:55:32 -05:00
Robert Hoo 2c61060ecb
i386: remove the new CPUID 'PCONFIG' from Icelake-Server CPU model
PCONFIG is not available to guests; it must be specifically enabled
using the PCONFIG_ENABLE execution control. Disable it, because
no one can ever use it.

Backports commit 76e5a4d58357b9d077afccf7f7c82e17f733b722 from qemu
2019-02-07 08:54:13 -05:00
Peter Maydell 04676ed074
target/arm: Make FPSCR/FPCR trapped-exception bits RAZ/WI
The {IOE, DZE, OFE, UFE, IXE, IDE} bits in the FPSCR/FPCR are for
enabling trapped IEEE floating point exceptions (where IEEE exception
conditions cause a CPU exception rather than updating the FPSR status
bits). QEMU doesn't implement this (and nor does the hardware we're
modelling), but for implementations which don't implement trapped
exception handling these control bits are supposed to be RAZ/WI.
This allows guest code to test for whether the feature is present
by trying to write to the bit and checking whether it sticks.

QEMU is incorrectly making these bits read as written. Make them
RAZ/WI as the architecture requires.

In particular this was causing problems for the NetBSD automatic
test suite.

Backports commit a15945d98d3a3390c3da344d1b47218e91e49d8b from qemu
2019-02-05 17:45:22 -05:00
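
A minimal sketch of the masking this implies on the write side (bit
positions are from the ARM ARM; the mask and function names here are
hypothetical, not QEMU's actual helper):

    #include <stdint.h>

    /* FPCR/FPSCR trap-enable bits: IOE=8, DZE=9, OFE=10, UFE=11, IXE=12,
     * IDE=15. RAZ/WI means they are dropped on write and always read 0. */
    #define FP_TRAP_ENABLE_BITS 0x00009f00u

    static inline uint32_t fpcr_masked_write(uint32_t value)
    {
        return value & ~FP_TRAP_ENABLE_BITS;
    }
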
Richard Henderson 9b0e04f3ab
target/arm: Enable TBI for user-only
This has been enabled in the linux kernel since v3.11
(commit d50240a5f6cea, 2013-09-03,
"arm64: mm: permit use of tagged pointers at EL0").

Backports commit f6a148fef63698826e69ca91cc11877ab1ed786f from qemu
2019-02-05 17:44:17 -05:00
Peter Maydell 8124b9f975
target/arm: Compute TB_FLAGS for TBI for user-only
Enables, but does not turn on, TBI for CONFIG_USER_ONLY.

Backports commit c47eaf9fc2af68cfbdbd9ae31f8e2e5ebb7022b4 from qemu
2019-02-05 17:43:11 -05:00
Richard Henderson b928902908
target/arm: Clean TBI for data operations in the translator
This will allow TBI to be used in user-only mode, as well as
avoid ping-ponging the softmmu TLB when TBI is in use. It
will also enable other armv8 extensions.

Backports commit 3a471103ac1823bafc907962dcaf6bd4fc0942a2 from qemu
2019-02-05 17:39:12 -05:00
Richard Henderson 5c6ffde710
target/arm: Add TBFLAG_A64_TBID, split out gen_top_byte_ignore
Split out gen_top_byte_ignore in preparation of handling these
data accesses; the new tbflags field is not yet honored.

Backports commit 4a9ee99db38ba513bf1e8f43665b79c60accd017 from qemu
2019-02-05 17:20:11 -05:00
Richard Henderson fbd8992e27
target/arm: Enable BTI for -cpu max
Backports commit a15daafa1cba96ff28abdfb6c860e0939655dbd1 from qemu
2019-02-05 17:15:32 -05:00
Richard Henderson 4dc5f80683
target/arm: Set btype for indirect branches
Backports commit 001d47b6efbe4795ed77366986b8ef384ab8b127 from qemu
2019-02-05 17:14:16 -05:00
Richard Henderson 11736a659b
target/arm: Reset btype for direct branches
This is all of the non-exception cases of DISAS_NORETURN.

Backports commit 358622703583d2e2967e0a93da990e747dcc3ac6 from qemu
2019-02-05 17:11:59 -05:00
Richard Henderson 88193cf7c3
target/arm: Default handling of BTYPE during translation
The branch target exception for guarded pages has high priority,
and only 8 instructions are valid for that case. Perform this
check before doing any other decode.

Clear BTYPE after all insns that neither set BTYPE nor exit via
exception (DISAS_NORETURN).

Not yet handled are insns that exit via DISAS_NORETURN for some
other reason, like direct branches.

Backports commit 51bf0d7aa91a9d4e2563240a42e6cb705cef84aa from qemu
2019-02-05 17:05:31 -05:00
Richard Henderson d594b2047f
target/arm: Cache the GP bit for a page in MemTxAttrs
Caching the bit means that we will not have to re-walk the
page tables to look up the bit during translation.

Backports commit 1bafc2ba7e6bfe89fff3503fdac8db39c973de48 from qemu
2019-02-05 17:02:19 -05:00
Richard Henderson 9c2a5963d0
exec: Add target-specific tlb bits to MemTxAttrs
These bits can be used to cache target-specific data in cputlb
read from the page tables.

Backports commit d3765835ed02f91f0c6cbb452874209a6af4a730 from qemu
2019-02-05 17:00:56 -05:00
Richard Henderson cf3ac035bc
target/arm: Add BT and BTYPE to tb->flags
Backports commit 08f1434a71ddf2bdfdb034dcd24b24464d1efd72 from qemu
2019-02-05 16:59:53 -05:00
Richard Henderson a99119ce39
target/arm: Add PSTATE.BTYPE
Place this in its own field within ENV, as that will
make it easier to reset from within TCG generated code.

With the change to pstate_read/write, exception entry
and return are automatically handled.

Backports commit f6e52eaac13b6947f4406c127e3090c898e439c9 from qemu
2019-02-05 16:57:51 -05:00
Richard Henderson 6b4f7a28b5
target/arm: Introduce isar_feature_aa64_bti
Also create field definitions for id_aa64pfr1 from ARMv8.5.

Backports commit be53b6f4d7ace2e6a018e45af825069ccb7bab66 from qemu
2019-02-05 16:56:01 -05:00
Murilo Opsfelder Araujo 0010078e4b
mmap-alloc: fix hugetlbfs misaligned length in ppc64
The commit 7197fb4058bcb68986bae2bb2c04d6370f3e7218 ("util/mmap-alloc:
fix hugetlb support on ppc64") fixed Huge TLB mappings on ppc64.

However, we still need to consider the underlying huge page size
during munmap() because it requires that both address and length be a
multiple of the underlying huge page size for Huge TLB mappings.
Quote from "Huge page (Huge TLB) mappings" paragraph under NOTES
section of the munmap(2) manual:

"For munmap(), addr and length must both be a multiple of the
underlying huge page size."

On ppc64, the munmap() in qemu_ram_munmap() does not work for Huge TLB
mappings because the mapped segment can be aligned with the underlying
huge page size, not aligned with the native system page size, as
returned by getpagesize().

This has the side effect of not releasing huge pages back to the pool
after a hugetlbfs file-backed memory device is hot-unplugged.

This patch fixes the situation in qemu_ram_mmap() and
qemu_ram_munmap() by considering the underlying page size on ppc64.

After this patch, memory hot-unplug releases huge pages back to the
pool.

Fixes: 7197fb4058bcb68986bae2bb2c04d6370f3e7218

Backports commit 53adb9d43e1abba187387a51f238e878e934c647 from qemu
2019-02-05 16:52:39 -05:00
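
A sketch of the shape of the munmap() side of the fix (the function name is
hypothetical and the guard-page handling of the real code is elided):

    #include <stddef.h>
    #include <sys/mman.h>

    static void ram_munmap_sketch(void *ptr, size_t size, size_t pagesize)
    {
        if (ptr) {
            /* munmap() requires the length to be a multiple of the mapping's
             * underlying page size; for hugetlbfs that is the huge page size,
             * not getpagesize(). pagesize is assumed to be a power of two. */
            size_t aligned = (size + pagesize - 1) & ~(pagesize - 1);
            munmap(ptr, aligned);
        }
    }
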
Murilo Opsfelder Araujo 22e3feb162
mmap-alloc: unfold qemu_ram_mmap()
Unfold parts of qemu_ram_mmap() for the sake of understanding, moving
declarations to the top, and keeping architecture-specifics in the
ifdef-else blocks. No changes in the function behaviour.

Give ptr and ptr1 meaningful names:
ptr -> guardptr : pointer to the PROT_NONE guard region
ptr1 -> ptr : pointer to the mapped memory returned to caller

Backports commit 2044c3e7116eeac0449dcb4a4130cc8f8b9310da from qemu
2019-02-05 16:49:47 -05:00
Remi Denis-Courmont 0b7f1ff086
target/arm: fix decoding of B{,L}RA{A,B}
A flawed test led to the instructions always being treated as
unallocated encodings.

Fixes: https://bugs.launchpad.net/bugs/1813460

Backports commit 1cf86a8618644beb860951ff4383457ee88a7f4a from qemu
2019-02-03 17:55:31 -05:00
Remi Denis-Courmont a20bb60f06
target/arm: fix AArch64 virtual address space size
Since QEMU does not support the ARMv8.2-LVA, Large Virtual Address,
extension (yet), the VA address space is 48 bits plus a sign bit. User
mode can only handle the positive half of the address space, so that
makes a limit of 48 bits.

(With LVA, it would be 53 and 52 bits respectively.)

The incorrectly large address space conflicts with PAuth instructions,
which use bits 48-54 and 56-63 for the pointer authentication code. This
also conflicts with (as yet unsupported by QEMU) data tagging and with
the ARMv8.5-MTE extension.

Backports commit f6768aa1b4c6a80448eabd22bb9b4123c709caea from qemu
2019-02-03 17:55:30 -05:00
Richard Henderson 932c4e8569
target/arm: Always enable pac keys for user-only
Drop the pac properties. This approach cannot work as written
because the properties are applied before arm_cpu_reset, which
zeros SCTLR_EL1 (amongst everything else).

We can re-introduce the properties if they turn out to be useful.
But since linux 5.0 enables all of the keys, they may not be.

Backports commit 276c6e813719568bdc9743e87ff8f42115006206 from qemu
2019-02-03 17:55:30 -05:00
Julia Suvorova 93acc4dc56
arm: Clarify the logic of set_pc()
Until now, the set_pc logic was unclear, raising questions about whether
it should be used directly, simply applying a value to the PC, or whether
it should add extra checks, for example setting the Thumb bit on an Arm
CPU. Define the set_pc logic as “configure the PC, as was done in the ELF
file” and implement the synchronize_with_tb hook so the PC is preserved
for cpu_tb_exec.

Backports commit 42f6ed919325413392bea247a1e6f135deb469cd from qemu
2019-02-03 17:55:30 -05:00
Richard Henderson 50cf5634da
target/arm: Enable API, APK bits in SCR, HCR
These bits become writable with the ARMv8.3-PAuth extension.

Backports commit ef682cdb4aded5c65a018e175482e875de66059d from qemu
2019-02-03 17:55:30 -05:00
Aaron Lindsay OS 0dd5bf84fc
target/arm: Send interrupts on PMU counter overflow
Whenever we notice that a counter overflow has occurred, send an
interrupt. This is made more reliable with the addition of a timer in a
follow-on commit.

Backports commit f4efb4b2a17528837cb445f9bdfaef8df4a5acf7 from qemu
2019-02-03 17:55:30 -05:00
Peter Maydell edfb13f8eb
target/arm/translate-a64: Fix mishandling of size in FCMLA decode
In disas_simd_indexed(), for the case of "complex fp", each indexable
element is a complex pair, so the total size is twice that indicated
in the 'size' field in the encoding. We were trying to do this
"double the size" operation with a left shift by 1, but this is
incorrect because the 'size' field is a MO_8/MO_16/MO_32/MO_64
value, and doubling the size should be done by a simple increment.

This meant we were mishandling FCMLA (by element) of values where
the real and imaginary parts are 32-bit floats, and would incorrectly
UNDEF this encoding. (No other insns take this code path, and for
16-bit floats it happens that 1 << 1 and 1 + 1 are both the same).

Backports commit eaefb97a8b97dbf42c016fe65b68b92f99a346f6 from qemu
2019-02-03 17:55:30 -05:00
Peter Maydell f999e06c22
target/arm/translate-a64: Fix FCMLA decoding error
The FCMLA (by element) instruction exists in the
"vector x indexed element" encoding group, but not in
the "scalar x indexed element" group. Correctly UNDEF
the unallocated encodings.

Backports commit 4dfabb6d568e6b315594d7d464dacaf3368aff60 from qemu
2019-02-03 17:55:30 -05:00
Peter Maydell eaecbe7901
target/arm/translate-a64: Don't underdecode SDOT and UDOT
In the AdvSIMD scalar x indexed element and vector x indexed element
encoding group, the SDOT and UDOT instructions are vector only,
and their opcode is unallocated in the scalar group. Correctly
UNDEF this unallocated encoding.

Backports commit 4977986ca38fb1d5357532e1a8032b984047a369 from qemu
2019-02-03 17:55:30 -05:00
Peter Maydell f7d78d9e08
target/arm/translate-a64: Don't underdecode FP insns
In the encoding groups
* floating-point data-processing (1 source)
* floating-point data-processing (2 source)
* floating-point data-processing (3 source)
* floating-point immediate
* floating-point compare
* floating-point conditional compare
* floating-point conditional select

bit 31 is M and bit 29 is S (and bit 30 is 0, already checked at
this point in the decode). None of these groups allocate any
encoding for M=1 or S=1. We checked this in disas_fp_compare(),
disas_fp_ccomp() and disas_fp_csel(), but missed it in disas_fp_1src(),
disas_fp_2src(), disas_fp_3src() and disas_fp_imm().

We also missed that in the fp immediate encoding the imm5 field
must be all zeroes.

Correctly UNDEF the unallocated encodings here.

Backports commit c1e20801f5ee53472dbf2757df605543f3f4ce0b from qemu
2019-02-03 17:55:29 -05:00
Peter Maydell 1128b4d77d
target/arm/translate-a64: Don't underdecode add/sub extended register
In the "add/subtract (extended register)" encoding group, the "opt"
field in bits [23:22] must be zero. Correctly UNDEF the unallocated
encodings where this field is not zero.

Backports commit 4f61106614410945b1d1c93081544ad5b13044fc from qemu
2019-02-03 17:55:29 -05:00
Peter Maydell decebb5936
target/arm/translate-a64: Don't underdecode SIMD ld/st single
In the AdvSIMD load/store single structure encodings, the
non-post-indexed case should have zeroes in [20:16] (which is the
Rm field for the post-indexed case). Bit 31 must also be zero
(a check we got right in ldst_multiple but not here). Correctly
UNDEF these unallocated encodings.

Backports commit 9c72b68ad746a51f63822cffab4d144b5957823a from qemu
2019-02-03 17:55:29 -05:00
Peter Maydell 60ccaf56ac
target/arm/translate-a64: Don't underdecode SIMD ld/st multiple
In the AdvSIMD load/store multiple structures encodings,
the non-post-indexed case should have zeroes in [20:16]
(which is the Rm field for the post-indexed case).
Correctly UNDEF the currently unallocated encodings which
have non-zeroes in those bits.

Backports commit e1f220811dbd5d85fb02ff286358f9ee6188938f from qemu
2019-02-03 17:55:29 -05:00
Peter Maydell 80248fecb6
target/arm/translate-a64: Don't underdecode PRFM
The PRFM prefetch insn in the load/store with imm9 encodings
requires idx field 0b00; we were underdecoding this by
only checking !is_unpriv (which is equivalent to idx != 2).
Correctly UNDEF the unallocated encodings where idx == 0b01
and 0b11 as well as 0b10.

Backports commit a80c4256543987ca88407349ee012a673a10a2ae from qemu
2019-02-03 17:55:29 -05:00
Peter Maydell 147269ed81
target/arm/translate-a64: Don't underdecode system instructions
The "system instructions" and "system register move" subcategories
of "branches, exception generating and system instructions" for A64
only apply if bits [23:22] are zero; other values are currently
unallocated. Correctly UNDEF these unallocated encodings.

Backports commit 08d5e3bde6b4ad32996bf69d93aa66ae43d3f3ff from qemu
2019-02-03 17:55:29 -05:00
Thomas Huth a0489c8849
target/m68k: Fix LGPL information in the file headers
It's either "GNU *Library* General Public License version 2" or
"GNU Lesser General Public License version *2.1*", but there was
no "version 2.0" of the "Lesser" license. So assume that version
2.1 is meant here.
Also some files mention the GPL instead of the LGPL after declaring
that the files are licensed under the LGPL, so change these spots to
use LGPL, too.

Backports commit d749fb85bd35f2f175a4ed3d170561e4f54f7297 from qemu
2019-02-03 17:55:29 -05:00
Thomas Huth 85bc48fecd
tcg: Fix LGPL version number
It's either "GNU *Library* General Public version 2" or "GNU Lesser
General Public version *2.1*", but there was no "version 2.0" of the
"Lesser" library. So assume that version 2.1 is meant here.

Backports commit fb0343d5b4dd4b9b9e96e563d913a3e0c709fe4e from qemu
2019-02-03 17:55:28 -05:00
Thomas Huth aa9e5f9abe
Don't talk about the LGPL if the file is licensed under the GPL
Some files claim that the code is licensed under the GPL, but then
suddenly suggest that the user should have a look at the LGPL.
That's of course nonsense; replace it with the correct GPL wording
instead.

Backports commit e361a772ffcd33675ffdd4637eea98a460dfed1b from qemu
2019-02-03 17:55:28 -05:00
Vitaly Kuznetsov d13b35769d
i386: Enable NPT and NRIPSAVE for AMD CPUs
Modern AMD CPUs support NPT and NRIPSAVE features and KVM exposes these
when present. NRIPSAVE appeared somewhere during the Opteron_G3 lifetime
(e.g. the QuadCore AMD Opteron 2378 has it but the QuadCore AMD Opteron HE
2344 doesn't); NPT was introduced a bit earlier.

Add the FEAT_SVM leaf to Opteron_G4/G5 and EPYC/EPYC-IBPB cpu models.

Backports commit 9fe8b7be17eaac4cfde4083000cc96747d7cf4f8 from qemu
2019-02-03 17:55:28 -05:00
Tao Xu 72afbf6d48
i386: Update stepping of Cascadelake-Server
Update the stepping from 5 to 6 so that the Cascadelake-Server CPU model
can support AVX512VNNI and the MSR-based features exposed by
ARCH_CAPABILITIES.

Backports commit b0a1980384fc265d91de7e09aa5fe531a69e6288 from qemu
2019-02-03 17:55:28 -05:00
Lioncash b36f8220b2 tcg: Provide an MSVC compatible version of dup_const
This simply forwards to dup_const_impl.
2019-01-30 17:02:54 -05:00
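
A sketch of the MSVC-side definition this message implies (illustrative;
the exact form in the tree may differ): without __builtin_constant_p there
is nothing to fold at compile time, so the macro can defer unconditionally
to the out-of-line helper.

    #define dup_const(VECE, C) dup_const_impl(VECE, C)
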
Lioncash 572252fcfd
header_gen: Remove deposit32/64 from the list
These are always inlined.
2019-01-30 14:05:52 -05:00
Lioncash 8eaa850287
target/arm/vec_helper: Remove use of void pointer arithmetic
This is a GNU-specific extension.
2019-01-30 14:03:26 -05:00
Lioncash 0de4a47169
qemu/host-utils: Handle ctpop8/16/32/64 on MSVC
Maybe not the most platform-friendly way of doing so.
2019-01-30 13:29:58 -05:00
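
One way such a fallback can look, using the population-count intrinsics
from <intrin.h> (a sketch; the function names are hypothetical and the
intrinsics require a CPU with POPCNT support):

    #include <intrin.h>
    #include <stdint.h>

    static inline int ctpop32_sketch(uint32_t v)
    {
        return (int)__popcnt(v);
    }

    static inline int ctpop64_sketch(uint64_t v)
    {
    #if defined(_M_X64)
        return (int)__popcnt64(v);
    #else
        return (int)(__popcnt((uint32_t)v) + __popcnt((uint32_t)(v >> 32)));
    #endif
    }
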
Lioncash 4e605ba038
qemu/compiler: Include <intrin.h> on MSVC 2019-01-30 13:25:26 -05:00
Lioncash 5745f2f75d
qemu/host_utils: Handle MSVC within clrsb32/64 2019-01-30 13:23:24 -05:00
Lioncash 205035a267
qemu/host_utils: Provide MSVC compatible equivalents of clz32/64 and ctz32/64 2019-01-30 13:19:03 -05:00
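
A sketch of the clz32 shim such a change can use (_BitScanReverse returns
the index of the highest set bit; the zero-input case must return 32 to
match the builtin-based version; the function name is hypothetical):

    #include <intrin.h>
    #include <stdint.h>

    static inline int clz32_sketch(uint32_t value)
    {
        unsigned long index;
        if (_BitScanReverse(&index, value)) {
            return 31 - (int)index;
        }
        return 32;    /* no bits set */
    }
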
Lioncash 17ff842261
tcg: Remove the use of void pointer arithmetic within tcgv_i32_temp()
Removes the use of a GNU-specific extension.
2019-01-30 13:04:52 -05:00
Lioncash 6b702a9905
tcg: Convert void* casts to char* in temp_tcgv_i32()
Gets rid of the use of a GNU extension that allows arithmetic on void
pointers. This is equivalent to doing arithmetic on char/unsigned char
pointers.
2019-01-30 13:02:01 -05:00
Peter Maydell d5298c5370
qom/cpu: Add cluster_index to CPUState
For TCG we want to distinguish which cluster a CPU is in, and
we need to do it quickly. Cache the cluster index in the CPUState
struct, by having the cluster object set cpu->cluster_index for
each CPU child when it is realized.

This means that board/SoC code must add all CPUs to the cluster
before realizing the cluster object. Regrettably QOM provides no
way to prevent adding children to a realized object and no way for
the parent to be notified when a new child is added to it, so
we don't have any way to enforce/assert this constraint; all
we can do is document it in a comment. We can at least put in a
check that the cluster contains at least one CPU, which should
catch the typical cases of "realized cluster too early" or
"forgot to parent the CPUs into it".

The restriction on how many clusters can exist in the system
is imposed by TCG code which will be added in a subsequent commit,
but the check to enforce it in cluster.c fits better in this one.

Backports relevant parts of commit 7ea7b9ad532e59c3efbcabff0e3484f4df06104c from qemu
2019-01-30 12:59:59 -05:00
Aaron Lindsay OS 8d7bb2cab3
target/arm: Don't clear supported PMU events when initializing PMCEID1
A bug was introduced during a respin of:

commit 57a4a11b2b281bb548b419ca81bfafb214e4c77a
target/arm: Add array for supported PMU events, generate PMCEID[01]_EL0

This patch introduced two calls to get_pmceid() during CPU
initialization - one each for PMCEID0 and PMCEID1. In addition to
building the register values, get_pmceid() clears an internal array
mapping event numbers to their implementations (supported_event_map)
before rebuilding it. This is an optimization since much of the logic is
shared. However, since it was called twice, the contents of
supported_event_map reflect only the events in PMCEID1 (the second call
to get_pmceid()).

Fix this bug by moving the initialization of PMCEID0 and PMCEID1 back
into a single function call, and name it more appropriately since it is
doing more than simply generating the contents of the PMCEID[01]
registers.

Backports commit bf8d09694ccc07487cd73d7562081fdaec3370c8 from qemu
2019-01-29 17:12:23 -05:00
Lioncash 3fa54df972
exec.c: Use correct attrs in cpu_memory_rw_debug()
In the softmmu version of cpu_memory_rw_debug(), we ask the
CPU for the attributes to use for the virtual memory access,
and we correctly use those to identify the address space
index. However, we were not passing them in to the
address_space_write_rom() and address_space_rw() functions.

The effect of this was that a memory access from the gdbstub
to a device which had behaviour that was sensitive to the
memory attributes (such as some ARMv8M NVIC registers) was
incorrectly always performed as if non-secure, rather than
using the right security state for the CPU's current state.

Fixes: https://bugs.launchpad.net/qemu/+bug/1812091

Backports commit ea7a5330b79523540ba776c529b09dc8cf3fa0c5 from qemu
2019-01-29 17:05:50 -05:00
Richard Henderson 64f10fa075
target/arm: Fix validation of 32-bit address spaces for aa32
When tsz == 0, aarch32 selects the address space via exclusion,
and there are no "top_bits" remaining that require validation.

Fixes: ba97be9f4a4

Backports commit 36d820af0eddf4fc6a533579b052d8f0085a9fb8 from qemu
2019-01-29 16:46:36 -05:00
Richard Henderson 3f0781e39b
tcg/aarch64: Implement vector minmax arithmetic
Backports commit 93f332a50371936ea02392bdb748c8140ef3f06a from qemu
2019-01-29 16:44:09 -05:00
Richard Henderson 8a012c3929
tcg/aarch64: Implement vector saturating arithmetic
Backports commit d32648d445c534cea7e2ad7ed8608208aa8831c1 from qemu
2019-01-29 16:42:50 -05:00
Richard Henderson 63d1aae6b2
tcg/i386: Implement vector minmax arithmetic
The AVX instruction set does not directly provide MO_64.
We can still implement 64-bit with comparison and vpblendvb.

Backports commit bc37faf4cb2baa77c44298c01558970b88d32808 from qemu
2019-01-29 16:41:11 -05:00
Richard Henderson 5518b543ed
tcg/i386: Implement vector saturating arithmetic
Only MO_8 and MO_16 are implemented, since that's all the
instruction set provides.

Backports commit 8ffafbcec275e61f6a1a17ac1d0bd918d5b23db3 from qemu
2019-01-29 16:37:55 -05:00
Richard Henderson 24e65f60ed
tcg/i386: Split subroutines out of tcg_expand_vec_op
This routine was becoming too large.

Backports commit 44f1441dbe14e7174a707d7e7ecbc2c8e080bfda from qemu
2019-01-29 16:33:59 -05:00
Richard Henderson fb684825c8
tcg: Add opcodes for vector minmax arithmetic
Backports commit dd0a0fcdd8848c2a18970c44a62bd8f394c2b495 from qemu
2019-01-29 16:24:52 -05:00
Richard Henderson e0266239ea
tcg: Add opcodes for vector saturated arithmetic
Backports commit 8afaf0506606f8003ef696df849c5a98637a7a83 from qemu
2019-01-29 16:14:34 -05:00
Richard Henderson f1f2d78a52
tcg: Add write_aofs to GVecGen4
This allows writing 2-output, 3-input operations.

Backports commit 5d6acdd4a485f15b1081acc523b99c1f1a7c42ab from qemu
2019-01-29 16:00:57 -05:00
Richard Henderson e08d0feee4
tcg: Add gvec expanders for nand, nor, eqv
Backports commit f550805d8309500d642f640af8d9928958465478 from qemu
2019-01-29 15:57:28 -05:00
Richard Henderson 11664d444e
tcg: Add logical simplifications during gvec expand
We handle many of these during integer expansion, and the
rest of them during integer optimization.

Backports commit 9a9eda78e4e56051485efb65e01748084f99ac3c from qemu
2019-01-29 15:49:00 -05:00
Lioncash 4aaa75d05b
compiler: Add missing container_of macro for MSVC 2019-01-28 09:27:55 -05:00
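
A minimal sketch of a container_of that builds on MSVC, since it avoids
typeof and GNU statement expressions (illustrative; QEMU's version may add
extra type checking):

    #include <stddef.h>

    #define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))
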
Lioncash 65e5d72a94
compiler: Add glue macros for MSVC 2019-01-28 09:26:01 -05:00
Lioncash 3d4f37b78f
osdep: Conditionally include non-Windows headers 2019-01-28 09:24:20 -05:00
Lioncash 977ad292b3
accel/translate-all: Get rid of variable shadowing 2019-01-28 09:17:37 -05:00
Lioncash ce8697f978
accel/translate-all: Convert a void* cast into an unsigned char* cast
Strictly speaking, as far as the standard is concerned, performing pointer
arithmetic on a void* type is ill-formed; it is a GNU extension that
allows it. Instead, just use unsigned char*, which preserves the same
behavior.
2019-01-28 09:14:33 -05:00
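
A tiny illustration of the replacement pattern (the helper name is
hypothetical): stepping an unsigned char* is well-defined ISO C and gives
the same byte-granular arithmetic that the GNU void* extension provides.

    #include <stddef.h>

    static inline void *advance_bytes(void *base, size_t offset)
    {
        return (unsigned char *)base + offset;
    }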