Commit graph

4630 commits

Author SHA1 Message Date
Richard Henderson 0412b3be8a
target/i386: Implement CPUID_EXT_RDRAND
We now have an interface for guest visible random numbers.

Backports commit 369fd5ca66810b2ddb16e23a497eabe59385eceb from qemu with
the actual RNG portion disabled for the time being.
2019-05-23 15:12:50 -04:00
Richard Henderson 3dd7358a53
target/arm: Implement ARMv8.5-RNG
Use the newly introduced infrastructure for guest random numbers.

Backports commit de390645675966cce113bf5394445bc1f8d07c85 from qemu

(with the actual RNG portion disabled to preserve determinism for the
time being).
2019-05-23 15:03:23 -04:00
Richard Henderson a8df33c37c
target/arm: Put all PAC keys into a structure
This allows us to use a single syscall to initialize them all.

Backports commit 108b3ba891408c4dce93df78261ec4aca38c0e2e from qemu
2019-05-23 14:54:06 -04:00
Paolo Bonzini 1341e371a8
target/i386: add MDS-NO feature
Microarchitectural Data Sampling is a hardware vulnerability which allows
unprivileged speculative access to data which is available in various CPU
internal buffers.

Some Intel processors use the ARCH_CAP_MDS_NO bit in the IA32_ARCH_CAPABILITIES
MSR to report that they are not vulnerable; make it available to guests.

Backports commit 20140a82c67467f53814ca197403d5e1b561a5e5 from qemu
2019-05-23 14:49:20 -04:00
Paolo Bonzini a4f2517f46
target/i386: define md-clear bit
md-clear is a new CPUID bit which is set when microcode provides the
mechanism to invoke a flush of various exploitable CPU buffers by invoking
the VERW instruction.

Backports commit b2ae52101fca7f9547ac2f388085dbc58f8fe1c0 from qemu
2019-05-23 14:48:18 -04:00
Philippe Mathieu-Daudé 4a1b8d64bd
target/m68k: Optimize rotate_x() using extract_i32()
Optimize rotate_x() using tcg_gen_extract_i32(). We can now free the
'sz' tcg_temp earlier. Since it is allocated with tcg_const_i32(),
free it with tcg_temp_free_i32().

Backports commit 60d3d0cfeb1658d2827d6a4f0df27252bb36baba from qemu
2019-05-17 12:07:07 -04:00
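A rough sketch of the allocate/use/free pairing and the single-op extraction described above; the operand names (sz, shift, dest, src, size) are illustrative, not the actual rotate_x() code:

    TCGv_i32 sz = tcg_const_i32(size);      /* allocated with tcg_const_i32()... */
    tcg_gen_sub_i32(shift, sz, shift);      /* ...used for some intermediate math */
    tcg_temp_free_i32(sz);                  /* ...so freed with tcg_temp_free_i32() */
    tcg_gen_extract_i32(dest, src, 0, 16);  /* extract len=16 bits at ofs=0 in one op */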
Philippe Mathieu-Daudé 3c6cb445a0
target/m68k: Fix a tcg_temp leak
The function gen_get_ccr() returns a tcg_temp created with
tcg_temp_new(). Free it with tcg_temp_free().

Backports commit 44c64e90950adf9efe7f4235a32eb868d1290ebb from qemu
2019-05-17 12:05:11 -04:00
Philippe Mathieu-Daudé a72a53c2d9
target/m68k: Reduce the l1 TCGLabel scope
Backports commit 89fa312be0dfd8b4c539c8763796e785c6b00b46 from qemu
2019-05-17 12:03:37 -04:00
Peter Maydell 3fb64fd5a2
target/m68k: Switch to transaction_failed hook
Switch the m68k target from the old unassigned_access hook
to the transaction_failed hook.

The notable difference is that rather than it being called
for all physical memory accesses which fail (including
those made by DMA devices or by the gdbstub), it is only
called for those made by the CPU via its MMU. (In previous
commits we put in explicit checks for the direct physical
loads made by the target/m68k code which will no longer
be handled by calling the unassigned_access hook.)

Backports commit e1aaf3a88e95ab007445281e2b2f6e3c8da47f22 from qemu
2019-05-17 12:01:40 -04:00
Peter Maydell ab63f1a102
target/m68k: In get_physical_address() check for memory access failures
In get_physical_address(), use address_space_ldl() and
address_space_stl() instead of ldl_phys() and stl_phys().
This allows us to check whether the memory access failed.
For the moment, we simply return -1 in this case;
add a TODO comment that we should ideally generate the
appropriate kind of fault.

Backports commit adcf0bf017351776510121e47b9226095836023c from qemu
2019-05-17 11:59:01 -04:00
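A minimal sketch of the checked-access pattern the commit describes; the variable names are illustrative, not the exact m68k code:

    MemTxResult txres;
    uint32_t desc = address_space_ldl(cs->as, addr,
                                      MEMTXATTRS_UNSPECIFIED, &txres);
    if (txres != MEMTX_OK) {
        /* TODO: generate the appropriate kind of fault */
        return -1;
    }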
Richard Henderson 2a4a7b9391
tcg: Use tlb_fill probe from tlb_vaddr_to_host
Most of the existing users would continue around a loop which
would fault the tlb entry in via a normal load/store.

But for AArch64 SVE we have an existing emulation bug wherein we
would mark the first element of a no-fault vector load as faulted
(within the FFR, not via exception) just because we did not have
its address in the TLB. Now we can properly only mark it as faulted
if there really is no valid, readable translation, while still not
raising an exception. (Note that beyond the first element of the
vector, the hardware may report a fault for any reason whatsoever;
with at least one element loaded, forward progress is guaranteed.)

Backports commit 4811e9095c0491bc6f5450e5012c9c4796b9e59d from qemu
2019-05-16 18:27:03 -04:00
Laurent Vivier 8cdfed1032
linux-user: fix 32bit g2h()/h2g()
sparc32plus has 64bit long type but only 32bit virtual address space.

For instance, "apt-get upgrade" failed because of a mmap()/msync()
sequence.

mmap() returned 0xff252000 but msync() used g2h(0xffffffffff252000)
to find the host address. The "(target_ulong)" in g2h() doesn't fix the
address because it is 64bit long.

This patch introduces an "abi_ptr" type that is set to uint32_t
if the virtual address space is addressed using 32 bits in the linux-user
case. It stays set to target_ulong in the softmmu case.

Backports commit 3e23de15237c81fe7af7c3ffa299a6ae5fec7d43 from qemu
2019-05-16 18:20:55 -04:00
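The idea, roughly, as a sketch under the assumption that the type is selected on the guest address-space width; this is not the exact qemu definition:

    #if defined(CONFIG_USER_ONLY) && TARGET_VIRT_ADDR_SPACE_BITS <= 32
    typedef uint32_t abi_ptr;        /* 32-bit guest virtual addresses (linux-user) */
    #else
    typedef target_ulong abi_ptr;    /* softmmu keeps the full target_ulong */
    #endif

    /* the cast now really narrows a sparc32plus address such as
     * 0xffffffffff252000 down to 0xff252000 before adding guest_base */
    #define g2h(x) ((void *)((uintptr_t)(abi_ptr)(x) + guest_base))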
Richard Henderson e736ef3238
tcg: Remove CPUClass::handle_mmu_fault
This hook is now completely replaced by tlb_fill.

Backports commit 69963f5709a0645934c169784820d0bee22208ba from qemu
2019-05-16 18:12:17 -04:00
Lioncash fcaa52c1fe
tcg: Synchronize with qemu
Resolves any formatting discrepancies and bad merges that slipped
through.
2019-05-16 18:11:08 -04:00
Richard Henderson dab0061a0d
tcg: Use CPUClass::tlb_fill in cputlb.c
We can now use the CPUClass hook instead of a named function.

Create a static tlb_fill function to avoid other changes within
cputlb.c. This also isolates the asserts within. Remove the
named tlb_fill function from all of the targets.

Backports commit c319dc13579a92937bffe02ad2c9f1a550e73973 from qemu
2019-05-16 17:35:37 -04:00
Richard Henderson 5d83199931
target/sparc: Convert to CPUClass::tlb_fill
Backports commit e84942f2ceaa79430414f2cb68d77c044dadca96 from qemu
2019-05-16 17:29:35 -04:00
Richard Henderson e98c731550
target/riscv: Convert to CPUClass::tlb_fill
Note that env->pc is removed from the qemu_log as that value is garbage.
The PC isn't recovered until cpu_restore_state, called from
cpu_loop_exit_restore, called from riscv_raise_exception.

Backports commit 8a4ca3c10a96be6ed7f023b685b688c4d409bbcb from qemu
2019-05-16 17:24:01 -04:00
Richard Henderson 14d48974a4
target/mips: Convert to CPUClass::tlb_fill
Note that env->active_tc.PC is removed from the qemu_log as that value
is garbage. The PC isn't recovered until cpu_restore_state, called from
cpu_loop_exit_restore, called from do_raise_exception_err.

Backports commit 931d019f5b2e7bbacb162869497123be402ddd86 from qemu
2019-05-16 17:19:47 -04:00
Richard Henderson 49cb8cfe5b
target/mips: Tidy control flow in mips_cpu_handle_mmu_fault
Since the only non-negative TLBRET_* value is TLBRET_MATCH,
the subsequent test for ret < 0 is useless. Use early return
to allow subsequent blocks to be unindented.

Backports commit e38f4eb63020075432cb77bf48398187809cf4a3 from qemu
2019-05-16 17:15:33 -04:00
Richard Henderson f175e89ca2
target/mips: Pass a valid error to raise_mmu_exception for user-only
At present we give ret = 0, or TLBRET_MATCH. This gets matched
by the default case, which falls through to TLBRET_BADADDR.
However, it makes more sense to use a proper value. All of the
tlb-related exceptions are handled identically in cpu_loop.c,
so TLBRET_BADADDR is as good as any other. Retain it.

Backports commit 995ffde9622c01f5b307cab47f9bd7962ac09db2 from qemu
2019-05-16 17:14:02 -04:00
Richard Henderson 52998fe46d
target/m68k: Convert to CPUClass::tlb_fill
Backports commit fe5f7b1b3a2317f598687218c348b54e02a75e1f from qemu
2019-05-16 17:12:41 -04:00
Richard Henderson fe9ac6e1c4
target/i386: Convert to CPUClass::tlb_fill
We do not support probing, but we do not need it yet either.

Backports commit 5d0044212c375c0696baef7bba13699277dac5b5 from qemu
2019-05-16 17:08:14 -04:00
Richard Henderson 31ecdb5341
target/arm: Convert to CPUClass::tlb_fill
Backports commit 7350d553b5066abdc662045d7db5cdb73d0f9d53 from qemu
2019-05-16 16:55:12 -04:00
Richard Henderson 1f30062c41
tcg: Add CPUClass::tlb_fill
This hook will replace the (user-only mode specific) handle_mmu_fault
hook, and the (system mode specific) tlb_fill function.

The handle_mmu_fault hook was written as if there was a valid
way to recover from an mmu fault, and had 3 possible return states.
In reality, the only valid action is to raise an exception,
return to the main loop, and deliver the SIGSEGV to the guest.

Note that all of the current implementations of handle_mmu_fault
for guests which support linux-user do in fact only ever return 1,
which is the signal to return to the main loop.

Using the hook for system mode requires that all targets be converted,
so for now the hook is (optionally) used only from user-only mode.

Backports commit da6bbf8513e621a8fc2fd315d77318f36547474d from qemu
2019-05-16 16:46:19 -04:00
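For orientation, a sketch of what the hook looks like; the parameter list follows the commit description and should be treated as approximate rather than authoritative:

    /* in CPUClass: return true on success; when 'probe' is false a failure
     * must raise the guest exception and not return normally */
    bool (*tlb_fill)(CPUState *cpu, vaddr address, int size,
                     MMUAccessType access_type, int mmu_idx,
                     bool probe, uintptr_t retaddr);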
Richard Henderson de260cfbd6
tcg/aarch64: Do not advertise minmax for MO_64
The min/max instructions are not available for 64-bit elements.

Backports commit a7b6d286cfb5205b9f5330aefc5727269b3d810f from qemu
2019-05-16 16:44:34 -04:00
Richard Henderson 552e48f14e
target/arm: Use tcg_gen_abs_i64 and tcg_gen_gvec_abs
Backports commit 4e027a710673f5d4dc6cff88728bcfd32e4c47b0 from qemu
2019-05-16 16:43:02 -04:00
Richard Henderson 7c9b3a9021
tcg/aarch64: Support vector absolute value
Backports commit a456394ae540f852cd0d10fd693fe9f33598dc01 from qemu
2019-05-16 16:39:14 -04:00
Richard Henderson fd35490991
tcg/i386: Support vector absolute value
Backports commit 18f9b65f1a4225dd314cb9b0a8dea968c5bc2ef3 from qemu
2019-05-16 16:37:33 -04:00
Richard Henderson 6d5e7856ff
tcg: Add support for vector absolute value
Backports commit bcefc90208f8a1d6f619d61c2647281d92277015 from qemu
2019-05-16 16:33:43 -04:00
Richard Henderson 6d1730048d
tcg: Add support for integer absolute value
Remove a function of the same name from target/arm/.
Use a branchless implementation of abs gleaned from gcc.

Backports commit ff1f11f7f8710a768f9313f24bd7f509d3db27e5 from qemu
2019-05-16 16:25:15 -04:00
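The branchless identity being referred to, as a self-contained sketch; it assumes an arithmetic right shift for signed values, which gcc provides:

    #include <stdint.h>

    static inline int64_t branchless_abs64(int64_t x)
    {
        int64_t sign = x >> 63;      /* all-ones if x is negative, else zero */
        return (x ^ sign) - sign;    /* conditional two's-complement negation */
    }

This mirrors the shift/xor/subtract sequence a generic expansion can emit when the target has no native abs operation.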
Richard Henderson 18b3df6e4e
tcg/i386: Support vector scalar shift opcodes
Backports commit 0a8d7a3bf5a149a82450eef555fd61728703dd84 from qemu
2019-05-16 16:19:44 -04:00
Richard Henderson 79b9dc559e
tcg: Add gvec expanders for vector shift by scalar
Allow expansion either via shift by scalar or by replicating
the scalar for shift by vector.

Backports commit b4578cd91cda4cef1c413304353ca6dc5b957b60 from qemu
2019-05-16 16:17:58 -04:00
Richard Henderson 0217ee7b24
tcg/aarch64: Support vector variable shift opcodes
Backports commit 79525dfd08262d8de10d271f17e5a4096ef96d16 from qemu
2019-05-16 15:58:54 -04:00
Richard Henderson f793ec847d
tcg/i386: Support vector variable shift opcodes
Backports commit a2ce146a06807fe1d1a81e878b8f249ff1e14038 from qemu
2019-05-16 15:53:33 -04:00
Richard Henderson 8c17687934
tcg: Add gvec expanders for variable shift
The gvec expanders perform a modulo on the shift count. If the target
requires alternate behaviour, then it cannot use the generic gvec
expanders anyway, and will have to have its own custom code.

Backports commit 5ee5c14cacda27e904cd6b0d9e7ffe1acff42838 from qemu
2019-05-16 15:51:09 -04:00
Richard Henderson 66e6bea084
tcg: Add INDEX_op_dupm_vec
Allow the backend to expand dup from memory directly, instead of
forcing the value into a temp first. This is especially important
if integer/vector register moves do not exist.

Note that officially tcg_out_dupm_vec is allowed to fail.
If it did, we could fix this up relatively easily:

VECE == 32/64:
  Load the value into a vector register, then dup.
  Both of these must work.

VECE == 8/16:
  If the value happens to be at an offset such that an aligned
  load would place the desired value in the least significant
  end of the register, go ahead and load w/garbage in high bits.

  Load the value w/INDEX_op_ld{8,16}_i32.
  Attempt a move directly to vector reg, which may fail.
  Store the value into the backing store for OTS.
  Load the value into the vector reg w/TCG_TYPE_I32, which must work.
  Duplicate from the vector reg into itself, which must work.

All of which is well and good, except that all supported
hosts can support dupm for all vece, so all of the failure
paths would be dead code and untestable.

Backports commit 37ee55a081b7863ffab2151068dd1b2f11376914 from qemu
2019-05-16 15:38:02 -04:00
Richard Henderson fd7a67e4a7
tcg/aarch64: Implement tcg_out_dupm_vec
The LD1R instruction does all the work. Note that the only
useful addressing mode is a base register with no offset.

Backports commit f23e5e15edfd49d5dd72cab2ed2d85ac354b2eeb from qemu
2019-05-16 15:29:04 -04:00
Richard Henderson a6fd4e2345
tcg/i386: Implement tcg_out_dupm_vec
At the same time, improve tcg_out_dupi_vec wrt broadcast
from the constant pool.

Backports commit 1e262b49b5331441f697461e4305fe06719758a7 from qemu
2019-05-16 15:27:15 -04:00
Richard Henderson d4e7c6a8c5
tcg: Add tcg_out_dupm_vec to the backend interface
Currently stubbed out in all backends that support vectors.

Backports commit d6ecb4a978b718dbe108a9fa9ecccc8b7f7cb579 from qemu
2019-05-16 15:24:48 -04:00
Richard Henderson cf238d3544
tcg: Manually expand INDEX_op_dup_vec
This case is similar to INDEX_op_mov_* in that we need to do
different things depending on the current location of the source.

Backports commit bab1671f0fa928fd678a22f934739f06fd5fd035 from qemu
2019-05-16 15:22:29 -04:00
Richard Henderson 3d20e1678c
tcg: Promote tcg_out_{dup,dupi}_vec to backend interface
The i386 backend already has these functions, and the aarch64 backend
could easily split out one. Nothing is done with these functions yet,
but this will aid register allocation of INDEX_op_dup_vec in a later patch.

Adjust the aarch64 tcg_out_dupi_vec signature to match the new interface.

Backports commit e7632cfa8b76cdbbc1c76e8737338ef5844e7d60 from qemu
2019-05-16 15:18:48 -04:00
Richard Henderson d58d9ad16e
tcg: Support cross-class moves without instruction support
PowerPC Altivec does not support direct moves between vector registers
and general registers. So when tcg_out_mov fails, we can use the
backing memory for the temporary to perform the move.

Backports commit 240c08d0998f402c325fce489de0d14831048128 from qemu
2019-05-16 15:16:23 -04:00
Richard Henderson f86bd1c5d6
tcg: Return bool success from tcg_out_mov
This patch merely changes the interface, aborting on all failures,
of which there are currently none.

Backports commit 78113e83e0007e869c9f0cb4c0497a77538988e3 from qemu
2019-05-16 15:14:42 -04:00
Richard Henderson f7d9ee8451
tcg/arm: Use tcg_out_mov_reg in tcg_out_mov
We have a function that takes an additional condition parameter
over the standard backend interface. It already takes care of
eliding no-op moves.

Backports commit c16f52b2c5d91c36e121795bd3b386cea0b7573c from qemu
2019-05-16 15:10:52 -04:00
Richard Henderson fef5700c9c
tcg: Assert fixed_reg is read-only
The only fixed_reg is cpu_env, and it should not be modified
during any TB. Therefore code that tries to special-case moves
into a fixed_reg is dead. Remove it.

Backports commit d63e3b6e694ad6c887be135dddb9cd4893f1a844 from qemu
2019-05-16 15:09:37 -04:00
Richard Henderson c54b2776f6
tcg: Specify optional vector requirements with a list
Replace the single opcode in .opc with a null-terminated
array in .opt_opc. We still require that all opcodes be
used with the same .vece.

Validate the contents of this list with CONFIG_DEBUG_TCG.
All tcg_gen_*_vec functions will check any list active
during .fniv expansion. Swap the active list in and out
as we expand other opcodes, or take control away from the
front-end function.

Convert all existing vector aware front ends.

Backports commit 53229a7703eeb2bbe101a19a33ef22aaf960c65b from qemu
2019-05-16 15:05:02 -04:00
Richard Henderson 37762fd92b
tcg: Allow add_vec, sub_vec, neg_vec, not_vec to be expanded
PowerPC Altivec does not support add and subtract of 64-bit elements.
Prepare for that configuration by not assuming the operation is
universally supported.

Backports commit ce27c5d1a38e93da38653af71fb468c5eded4c7b from qemu
2019-05-16 14:33:18 -04:00
Richard Henderson 9a9b681b38
tcg: Do not recreate INDEX_op_neg_vec unless supported
Use tcg_can_emit_vec_op instead of just TCG_TARGET_HAS_neg_vec,
so that we check the type and vece for the actual operation.

Backports commit ac383dde33405106469d04a78de1d76f1a730cb1 from qemu
2019-05-16 14:28:41 -04:00
David Hildenbrand f3b4a64d27
tcg: Implement tcg_gen_gvec_3i()
Let's add tcg_gen_gvec_3i(), similar to tcg_gen_gvec_2i(), however
without introducing "gen_helper_gvec_3i *fnoi", as it isn't needed
for now.

Backports commit e1227bb6e59173117f094a6a13b998587b45c928 from qemu
2019-05-16 14:26:50 -04:00
Markus Armbruster 1b2c8c44d5
Clean up ill-advised or unusual header guards
Leading underscores are ill-advised because such identifiers are
reserved. Trailing underscores are merely ugly. Strip both.

Our header guards commonly end in _H. Normalize the exceptions.

Done with scripts/clean-header-guards.pl.

Backports commit a8b991b52dcde75ab5065046653626951aac666d from qemu
2019-05-14 08:02:53 -04:00
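An illustrative before/after for the guard style described above (FOO_H is a made-up header name):

    /* before: leading/trailing underscores stray into reserved identifiers */
    #ifndef __FOO_H__
    #define __FOO_H__
    /* header body */
    #endif

    /* after: plain guard ending in _H */
    #ifndef FOO_H
    #define FOO_H
    /* header body */
    #endif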
Richard Henderson 9a02741c13
cputlb: Do unaligned store recursion to outermost function
This is less tricky than for loads, because we always fall
back to single byte stores to implement unaligned stores.

Backports commit 4601f8d10d7628bcaf2a8179af36e04b42879e91 from qemu
2019-05-14 07:45:15 -04:00
Richard Henderson bcab6f1719
cputlb: Do unaligned load recursion to outermost function
If we attempt to recurse from load_helper back to load_helper,
even via intermediary, we do not get all of the constants
expanded away as desired.

But if we recurse back to the original helper (or a shim that
has a consistent function signature), the operands are folded
away as desired.

Backports commit 2dd926067867c2dd19e66d31a7990e8eea7258f6 from qemu
2019-05-14 07:43:31 -04:00
Richard Henderson f12f36aebd
cputlb: Drop attribute flatten
Going to approach this problem via __attribute__((always_inline))
instead, but full conversion will take several steps.

Backports commit fc1bc777910dc14a3db4e2ad66f3e536effc297d from qemu
2019-05-14 07:33:39 -04:00
Richard Henderson 7991cd601f
cputlb: Move TLB_RECHECK handling into load/store_helper
Having this in io_readx/io_writex meant that we forgot to
re-compute index after tlb_fill. It also means we can use
the normal aligned memory load path. It also fixes a bug
in that we had cached a use of index across a tlb_fill.

Backports commit f1be36969de2fb9b6b64397db1098f115210fcd9 from qemu
2019-05-14 07:28:15 -04:00
Alex Bennée ccee796272
accel/tcg: demacro cputlb
Instead of expanding a series of macros to generate the load/store
helpers we move stuff into common functions and rely on the compiler
to eliminate the dead code for each variant.

Backports commit eed5664238ea5317689cf32426d9318686b2b75c from qemu
2019-05-14 07:28:11 -04:00
Peter Maydell 26cb1b8767
target/arm: Stop using variable length array in dc_zva
Currently the dc_zva helper function uses a variable length
array. In fact we know (as the comment above remarks) that
the length of this array is bounded because the architecture
limits the block size and QEMU limits the target page size.
Use a fixed array size and assert that we don't run off it.

Backports commit 63159601fb3e396b28da14cbb71e50ed3f5a0331 from qemu
2019-05-09 17:48:25 -04:00
Peter Maydell 7861820e94
target/arm: Implement XPSR GE bits
In the M-profile architecture, if the CPU implements the DSP extension
then the XPSR has GE bits, in the same way as the A-profile CPSR. When
we added DSP extension support we forgot to add support for reading
and writing the GE bits, which are stored in env->GE. We did put in
the code to add XPSR_GE to the mask of bits to update in the v7m_msr
helper, but forgot it in v7m_mrs. We also must not allow the XPSR we
pull off the stack on exception return to set the nonexistent GE bits.
Correct these errors:
* read and write env->GE in xpsr_read() and xpsr_write()
* only set GE bits on exception return if DSP present
* read GE bits for MRS if DSP present

Backports commit f1e2598c46d480c9e21213a244bc514200762828 from qemu
2019-05-09 17:46:31 -04:00
Cao Jiaxi bcb1270f23
osdep: Fix mingw compilation regarding stdio formats
I encountered the following compilation error on mingw:

/mnt/d/qemu/include/qemu/osdep.h:97:9: error: '__USE_MINGW_ANSI_STDIO' macro redefined [-Werror,-Wmacro-redefined]
#define __USE_MINGW_ANSI_STDIO 1
^
/mnt/d/llvm-mingw/aarch64-w64-mingw32/include/_mingw.h:433:9: note: previous definition is here
#define __USE_MINGW_ANSI_STDIO 0 /* was not defined so it should be 0 */

It turns out that __USE_MINGW_ANSI_STDIO must be set before any
system headers are included, not just before stdio.h.

Backports commit 946376c21be1cd9dcc3c7936b204b113781603f7 from qemu
2019-05-09 17:44:14 -04:00
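A sketch of the ordering constraint (illustrative; in qemu the define lives near the top of osdep.h):

    /* must be defined before the first system header, not merely before
     * <stdio.h>: any system header pulls in _mingw.h, which otherwise
     * defines __USE_MINGW_ANSI_STDIO to 0 */
    #ifdef __MINGW32__
    #define __USE_MINGW_ANSI_STDIO 1
    #endif

    #include <stdlib.h>
    #include <stdio.h>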
Cao Jiaxi 3922118434
util/cacheinfo: Use uint64_t on LLP64 model to satisfy Windows ARM64
Windows ARM64 uses LLP64 model, which breaks current assumptions.

Backports commit 8041336ef74e19ca607c1601016333c986de8f9c from qemu
2019-05-09 17:43:27 -04:00
Lioncash a71c027063
decodetree: Add DisasContext argument to !function expanders
This does require adjusting all existing users.

Backports commit 451e4ffdb0003ab5ed0d98bd37b385c076aba183 from qemu
2019-05-09 17:40:45 -04:00
Richard Henderson 9030870a8f
decodetree: Expand a decode_load function
Read the instruction, loading no more bytes than necessary.

Backports commit 70e0711ab18fa48279cd2c8cc570b57f38648598 from qemu
2019-05-09 17:35:20 -04:00
Richard Henderson a98e70e791
decodetree: Initial support for variable-length ISAs
Assuming that the ISA clearly describes how to determine
the length of the instruction, and the ISA has a reasonable
maximum instruction length, the input to the decoder can be
right-justified in an appropriate insn word.

This is not 100% convenient, as out-of-line %fields are
numbered relative to the maximum instruction length, but
this appears to still be usable.

Backports commit 17560e9349ff1fcce814184b37993f92378cf0c4 from qemu
2019-05-09 17:32:38 -04:00
Richard Henderson 8fdd009a9d
tcg: Remove CF_IGNORE_ICOUNT
Now that we have curr_cflags, we can include CF_USE_ICOUNT
early and then remove it as necessary.

Backports commit 416986d3f97329655e30da7271a2d11c6d707b06 from qemu
2019-05-06 00:57:09 -04:00
Richard Henderson 12f9def3a2
tcg: Add CF_LAST_IO + CF_USE_ICOUNT to CF_HASH_MASK
These flags are used by target/*/translate.c,
and affect code generation.

Backports commit 0cf8a44c2f56ba884c2f6db47d27fbb24975daa3 from qemu
2019-05-06 00:53:35 -04:00
Emilio G. Cota b1b069e8ad
cpu-exec: lookup/generate TB outside exclusive region during step_atomic
Now that all code generation has been converted to check CF_PARALLEL, we can
generate !CF_PARALLEL code without having yet set !parallel_cpus --
and therefore without having to be in the exclusive region during
cpu_exec_step_atomic.

While at it, merge cpu_exec_step into cpu_exec_step_atomic.

Backports commit ac03ee5331612e44beb393df2b578c951d27dc0d from qemu
2019-05-06 00:52:43 -04:00
Emilio G. Cota c1e26c4e35
tcg: check CF_PARALLEL instead of parallel_cpus
Thereby decoupling the resulting translated code from the current state
of the system.

The tb->cflags field is not passed to tcg generation functions. So
we add a field to TCGContext, storing there a copy of tb->cflags.

Most architectures have <= 32 registers, which results in a 4-byte hole
in TCGContext. Use this hole for the new field.

Backports commit e82d5a2460b0e176128027651ff9b104e4bdf5cc from qemu
2019-05-06 00:52:08 -04:00
Emilio G. Cota 175a5223ad
target/sparc: check CF_PARALLEL instead of parallel_cpus
Thereby decoupling the resulting translated code from the current state
of the system.

Backports commit 87d757d60d66d5ee1608460b0f1e07e2b758db9c from qemu
2019-05-06 00:43:21 -04:00
Emilio G. Cota 77ccb4918d
target/m68k: check CF_PARALLEL instead of parallel_cpus
Thereby decoupling the resulting translated code from the current state
of the system.

Backports commit f0ddf11b23260f0af84fb529486a8f9ba2d19401 from qemu
2019-05-06 00:42:16 -04:00
Emilio G. Cota ad2a4edd76
target/i386: check CF_PARALLEL instead of parallel_cpus
Thereby decoupling the resulting translated code from the current state
of the system.

Backports commit b5e3b4c2aca8eb5a9cfeedfb273af623f17c3731 from qemu
2019-05-04 22:45:49 -04:00
Emilio G. Cota 1715f382b4
target/arm: check CF_PARALLEL instead of parallel_cpus
Thereby decoupling the resulting translated code from the current state
of the system.

Backports commit 2399d4e7cec22ecf1c51062d2ebfd45220dbaace from qemu
2019-05-04 22:44:32 -04:00
Richard Henderson 4a858100f4
tcg: Include CF_COUNT_MASK in CF_HASH_MASK
Backports commit cdfef1715c779eb528d633e8b76cbc8a10e71ac8 from qemu
2019-05-04 22:31:32 -04:00
Richard Henderson 30c0950567
tcg: Add CPUState cflags_next_tb
We were generating code during tb_invalidate_phys_page_range,
check_watchpoint, cpu_io_recompile, and (seemingly) discarding
the TB, assuming that it would magically be picked up during
the next iteration through the cpu_exec loop.

Instead, record the desired cflags in CPUState so that we request
the proper TB so that there is no more magic.

Backports commit 9b990ee5a3cc6aa38f81266fb0c6ef37a36c45b9 from qemu
2019-05-04 22:30:22 -04:00
Richard Henderson ee1ddf4a92
tcg: define CF_PARALLEL and use it for TB hashing along with CF_COUNT_MASK
This will enable us to decouple code translation from the value
of parallel_cpus at any given time. It will also help us minimize
TB flushes when generating code via EXCP_ATOMIC.

Note that the declaration of parallel_cpus is brought to exec-all.h
to be able to define there the "curr_cflags" inline.

Backports commit 4e2ca83e71b51577b06b1468e836556912bd5b6e from qemu
2019-05-04 22:22:06 -04:00
Lioncash cc37db76b6
tcg: Synchronize with qemu 2019-05-04 21:40:23 -04:00
Daniel P. Berrangé aec899b73e
configure: automatically pick python3 if available
Unless overridden via an env var or configure arg, QEMU will only look
for the 'python' binary in $PATH. This is unhelpful on distros which
are only shipping Python 3.x (eg Fedora) in their default install as,
if they comply with PEP 394, the bare 'python' binary won't exist.

This changes configure so that by default it will search for all three
common python binaries, preferring to find Python 3.x versions.

Backports commit faf441429adfe5767be52c5dcdb8bc03161d064f from qemu
2019-05-03 11:36:36 -04:00
Thomas Huth 73176e89ce
configure: Remove old *-config-devices.mak.d files when running configure
When running "make" in a build directory left over from before the Kconfig
merge, the build process currently fails with:

make: *** No rule to make target `.../default-configs/pci.mak',
needed by `aarch64-softmmu/config-devices.mak'. Stop.

To make sure that this problem at least goes away when the user runs
"configure" (or "sh config.status") again, we have to make sure that
we re-generate the .mak.d files. Thus remove the old stale files
while running the configure script.

Backports commit 9c79024225af6b3ae04ea2dd94a5e5c4132a9e65 from qemu
2019-05-03 11:33:56 -04:00
Thomas Huth 7107f72cc6
configure: Add -Wno-typedef-redefinition to CFLAGS (for Clang)
Without the -Wno-typedef-redefinition option, clang complains if a typedef
gets redefined in gnu99 mode (since this is officially a C11 feature). This
used to also happen with older versions of GCC, but since we've bumped our
minimum GCC version to 4.8, all versions of GCC that we support do not seem
to issue this warning in gnu99 mode anymore. So this has become a common
problem for people who only test their code with GCC - they do not notice
the issue until they submit their patches and suddenly patchew or a
maintainer complains.

Now that we do not urgently need to keep the code clean from typedef
redefinitions anymore with recent versions of GCC, we can ease the
situation with clang, too, and simply shut these warnings off for good.

Backports commit e6e90feedb706b1b92827a5977b37e1e8defb8ef from qemu
2019-05-03 11:33:06 -04:00
Eduardo Habkost 42c35d968a
accel: Remove unused AccelClass::available field
The field is not used anymore, we can remove it.

Backports commit 8d006d4bc2ab4f72877d8bd47cba9aa8d24b54d0 from qemu
2019-05-03 11:31:27 -04:00
Peter Maydell d4549fccfb
target/arm: Enable FPU for Cortex-M4 and Cortex-M33
Enable the FPU by default for the Cortex-M4 and Cortex-M33.

Backports commit 14fd0c31e26b88a2189b3f459b864d5e1faf302a from qemu
2019-04-30 11:29:26 -04:00
Peter Maydell 77ae3982b4
target/arm: Implement VLLDM for v7M CPUs with an FPU
Implement the VLLDM instruction for v7M for the FPU present case.

Backports commit 956fe143b4f254356496a0a1c479fa632376dfec from qemu
2019-04-30 11:27:54 -04:00
Peter Maydell b483951046
target/arm: Implement VLSTM for v7M CPUs with an FPU
Implement the VLSTM instruction for v7M for the FPU present case.

Backports commit 019076b036da4444494de38388218040d9d3a26c from qemu
2019-04-30 11:25:44 -04:00
Peter Maydell a976d7642a
target/arm: Implement M-profile lazy FP state preservation
The M-profile architecture floating point system supports
lazy FP state preservation, where FP registers are not
pushed to the stack when an exception occurs but are instead
only saved if and when the first FP instruction in the exception
handler is executed. Implement this in QEMU, corresponding
to the check of LSPACT in the pseudocode ExecuteFPCheck().

Backports commit e33cf0f8d8c9998a7616684f9d6aa0d181b88803 from qemu
2019-04-30 11:21:50 -04:00
Peter Maydell 72e5ae480d
target/arm: Add lazy-FP-stacking support to v7m_stack_write()
Pushing registers to the stack for v7M needs to handle three cases:
* the "normal" case where we pend exceptions
* an "ignore faults" case where we set FSR bits but
do not pend exceptions (this is used when we are
handling some kinds of derived exception on exception entry)
* a "lazy FP stacking" case, where different FSR bits
are set and the exception is pended differently

Implement this by changing the existing flag argument that
tells us whether to ignore faults or not into an enum that
specifies which of the 3 modes we should handle.

Backports commit a356dacf647506bccdf8ecd23574246a8bf615ac from qemu
2019-04-30 10:59:53 -04:00
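A sketch of the flag-to-enum change described above; the exact names are approximate:

    /* was: a bool 'ignore faults' flag; now an explicit mode selector */
    typedef enum StackingMode {
        STACK_NORMAL,       /* pend exceptions as usual */
        STACK_IGNFAULTS,    /* set FSR bits but do not pend */
        STACK_LAZYFP,       /* lazy FP stacking: different FSR bits, pended differently */
    } StackingMode;
    /* v7m_stack_write() then takes a StackingMode instead of the bool flag */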
Peter Maydell b1d6bd2792
target/arm: New function armv7m_nvic_set_pending_lazyfp()
In the v7M architecture, if an exception is generated in the process
of doing the lazy stacking of FP registers, the handling of
possible escalation to HardFault is treated differently to the normal
approach: it works based on the saved information about exception
readiness that was stored in the FPCCR when the stack frame was
created. Provide a new function armv7m_nvic_set_pending_lazyfp()
which pends exceptions during lazy stacking, and implements
this logic.

This corresponds to the pseudocode TakePreserveFPException().

Backports the relevant parts of commit
a99ba8ab1601904e0fa20325192fc850362ce80e from qemu
2019-04-30 10:56:54 -04:00
Peter Maydell 3fff653e20
target/arm: New helper function arm_v7m_mmu_idx_all()
Add a new helper function which returns the MMU index to use
for v7M, where the caller specifies all of the security
state, privilege level and whether the execution priority
is negative, and reimplement the existing
arm_v7m_mmu_idx_for_secstate_and_priv() in terms of it.

We are going to need this for the lazy-FP-stacking code.

Backports commit fa6252a988dbe440cd6087bf93cbe0887f0c401b from qemu
2019-04-30 10:54:26 -04:00
Peter Maydell 719231b4c0
target/arm: Activate M-profile floating point context when FPCCR.ASPEN is set
The M-profile FPCCR.ASPEN bit indicates that automatic floating-point
context preservation is enabled. Before executing any floating-point
instruction, if FPCCR.ASPEN is set and the CONTROL FPCA/SFPA bits
indicate that there is no active floating point context then we
must create a new context (by initializing FPSCR and setting
FPCA/SFPA to indicate that the context is now active). In the
pseudocode this is handled by ExecuteFPCheck().

Implement this with a new TB flag which tracks whether we
need to create a new FP context.

Backports commit 6000531e19964756673a5f4b694a649ef883605a from qemu
2019-04-30 10:51:31 -04:00
Peter Maydell 87c8c0fde7
target/arm: Set FPCCR.S when executing M-profile floating point insns
The M-profile FPCCR.S bit indicates the security status of
the floating point context. In the pseudocode ExecuteFPCheck()
function it is unconditionally set to match the current
security state whenever a floating point instruction is
executed.

Implement this by adding a new TB flag which tracks whether
FPCCR.S is different from the current security state, so
that we only need to emit the code to update it in the
less-common case when it is not already set correctly.

Note that we will add the handling for the other work done
by ExecuteFPCheck() in later commits.

Backports commit 6d60c67a1a03be32c3342aff6604cdc5095088d1 from qemu
2019-04-30 10:50:17 -04:00
Peter Maydell 8d726490ff
target/arm: Overlap VECSTRIDE and XSCALE_CPAR TB flags
We are close to running out of TB flags for AArch32; we could
start using the cs_base word, but before we do that we can
economise on our usage by sharing the same bits for the VFP
VECSTRIDE field and the XScale XSCALE_CPAR field. This
works because no XScale CPU ever had VFP.

Backports commit ea7ac69d124c94c6e5579145e727adec9ccbefef from qemu
2019-04-30 10:45:14 -04:00
Peter Maydell 3c1f3548c4
target/arm: Move NS TBFLAG from bit 19 to bit 6
Move the NS TBFLAG down from bit 19 to bit 6, which has not
been used since commit c1e3781090b9d36c60 in 2015, when we
started passing the entire MMU index in the TB flags rather
than just a 'privilege level' bit.

This rearrangement is not strictly necessary, but means that
we can put M-profile-only bits next to each other rather
than scattered across the flag word.

Backports commit 7fbb535f7aeb22896fedfcf18a1eeff48165f1d7 from qemu
2019-04-30 10:41:04 -04:00
Peter Maydell 86776d451e
target/arm: Handle floating point registers in exception return
Handle floating point registers in exception return.
This corresponds to pseudocode functions ValidateExceptionReturn(),
ExceptionReturn(), PopStack() and ConsumeExcStackFrame().

Backports commit 6808c4d2d2826920087533f517472c09edc7b0d2 from qemu
2019-04-30 10:40:12 -04:00
Peter Maydell 2244bb085a
target/arm: Allow for floating point in callee stack integrity check
The magic value pushed onto the callee stack as an integrity
check is different if floating point is present.

Backports commit 0dc51d66fcfcc4c72011cdafb401fd876ca216e7 from qemu
2019-04-30 10:36:58 -04:00
Peter Maydell 746d377221
target/arm: Clean excReturn bits when tail chaining
The TailChain() pseudocode specifies that a tail chaining
exception should sanitize the excReturn all-ones bits and
(if there is no FPU) the excReturn FType bits; we weren't
doing this.

Backports commit 60fba59a2f9a092a44b688df5d058cdd6dd9c276 from qemu
2019-04-30 10:35:36 -04:00
Peter Maydell ca0ac5dca9
target/arm: Clear CONTROL.SFPA in BXNS and BLXNS
For v8M floating point support, transitions from Secure
to Non-secure state via BXNS and BLXNS must clear the
CONTROL.SFPA bit. (This corresponds to the pseudocode
BranchToNS() function.)

Backports commit 3cd6726f0ba7cc77342ee721bd86094e13b2a42a from qemu
2019-04-30 10:33:25 -04:00
Peter Maydell c7f5633cfe
target/arm: Implement v7m_update_fpccr()
Implement the code which updates the FPCCR register on an
exception entry where we are going to use lazy FP stacking.
We have to defer to the NVIC to determine whether the
various exceptions are currently ready or not.

Backports commit b593c2b81287040ab6f452afec6281e2f7ee487b from qemu
2019-04-30 10:32:12 -04:00
Peter Maydell 065e60503f
target/arm: Handle floating point registers in exception entry
Handle floating point registers in exception entry.
This corresponds to the FP-specific parts of the pseudocode
functions ActivateException() and PushStack().

We defer the code corresponding to UpdateFPCCR() to a later patch.

Backports commit 0ed377a8013f40653a83f6ad2c9693897522d7dc from qemu
2019-04-30 10:25:23 -04:00
Peter Maydell c164a9f191
target/arm/helper: don't return early for STKOF faults during stacking
Currently the code in v7m_push_stack() which detects a violation
of the v8M stack limit simply returns early if it does so. This
is OK for the current integer-only code, but won't work for the
floating point handling we're about to add. We need to continue
executing the rest of the function so that we check for other
exceptions like not having permission to use the FPU and so
that we correctly set the FPCCR state if we are doing lazy
stacking. Refactor to avoid the early return.

Backports commit 3432c79a4e7345818d2defcf9e61a1bcb2907f9f from qemu
2019-04-30 10:22:36 -04:00
Peter Maydell 05add081a3
target/arm: Handle SFPA and FPCA bits in reads and writes of CONTROL
The M-profile CONTROL register has two bits -- SFPA and FPCA --
which relate to floating-point support, and should be RES0 otherwise.
Handle them correctly in the MSR/MRS register access code.
Neither is banked between security states, so they are stored
in v7m.control[M_REG_S] regardless of current security state.

Backports commit 2e1c5bcd32014c9ede1b604ae6c2c653de17fc53 from qemu
2019-04-30 10:21:24 -04:00
Peter Maydell c0cebeb5b5
target/arm: Clear CONTROL_S.SFPA in SG insn if FPU present
If the floating point extension is present, then the SG instruction
must clear the CONTROL_S.SFPA bit. Implement this.

(On a no-FPU system the bit will always be zero, so we don't need
to make the clearing of the bit conditional on ARM_FEATURE_VFP.)

Backports commit 1702071302934af77a072b7ee7c5eadc45b37573 from qemu
2019-04-30 10:20:45 -04:00
Peter Maydell 89baa5cffa
target/arm: Decode FP instructions for M profile
Correct the decode of the M-profile "coprocessor and
floating-point instructions" space:
* op0 == 0b11 is always unallocated
* if the CPU has an FPU then all insns with op1 == 0b101
are floating point and go to disas_vfp_insn()

For the moment we leave VLLDM and VLSTM as NOPs; in
a later commit we will fill in the proper implementation
for the case where an FPU is present.

Backports commit 8859ba3c9625e7ceb5599f457a344bcd7c5e112b from qemu
2019-04-30 10:19:45 -04:00
Peter Maydell 18bb21c035
target/arm: Honour M-profile FP enable bits
Like AArch64, M-profile floating point has no FPEXC enable
bit to gate floating point; so always set the VFPEN TB flag.

M-profile also has CPACR and NSACR similar to A-profile;
they behave slightly differently:
* the CPACR is banked between Secure and Non-Secure
* if the NSACR forces a trap then this is taken to
the Secure state, not the Non-Secure state

Honour the CPACR and NSACR settings. The NSACR handling
requires us to borrow the exception.target_el field
(usually meaningless for M profile) to distinguish the
NOCP UsageFault taken to Secure state from the more
usual fault taken to the current security state.

Backports commit d87513c0abcbcd856f8e1dee2f2d18903b2c3ea2 from qemu
2019-04-30 10:18:21 -04:00
Peter Maydell c6bb8d483d
target/arm: Disable most VFP sysregs for M-profile
The only "system register" that M-profile floating point exposes
via the VMRS/VMSR instructions is FPSCR, and it does not have
the odd special case for rd==15. Add a check to ensure we only
expose FPSCR.

Backports commit ef9aae2522c22c05df17dd898099dd5c3f20d688 from qemu
2019-04-30 10:15:25 -04:00
Peter Maydell a4f332f3e9
target/arm: Implement dummy versions of M-profile FP-related registers
The M-profile floating point support has three associated config
registers: FPCAR, FPCCR and FPDSCR. It also makes the registers
CPACR and NSACR have behaviour other than reads-as-zero.
Add support for all of these as simple reads-as-written registers.
We will hook up actual functionality later.

The main complexity here is handling the FPCCR register, which
has a mix of banked and unbanked bits.

Note that we don't share storage with the A-profile
cpu->cp15.nsacr and cpu->cp15.cpacr_el1, though the behaviour
is quite similar, for two reasons:
* the M profile CPACR is banked between security states
* it preserves the invariant that M profile uses no state
inside the cp15 substruct

Backports commit d33abe82c7c9847284a23e575e1078cccab540b5 from qemu
2019-04-30 10:13:41 -04:00
Peter Maydell 978cd9c524
target/arm: Make sure M-profile FPSCR RES0 bits are not settable
Enforce that for M-profile various FPSCR bits which are RES0 there
but have defined meanings on A-profile are never settable. This
ensures that M-profile code can't enable the A-profile behaviour
(notably vector length/stride handling) by accident.

Backports commit 5bcf8ed9401e62c73158ba110864ee1375558bf7 from qemu
2019-04-30 10:12:17 -04:00
Shahab Vahedi 7f59d62f4a
cputlb: Fix io_readx() to respect the access_type
This change adapts io_readx() to its input access_type. Currently
io_readx() treats any memory access as a read, although it has an
input argument "MMUAccessType access_type". This results in:

1) Calling the tlb_fill() only with MMU_DATA_LOAD
2) Considering only entry->addr_read as the tlb_addr

Buglink: https://bugs.launchpad.net/qemu/+bug/1825359

Backports commit ef5dae6805cce7b59d129d801bdc5db71bcbd60d from qemu
2019-04-30 10:11:11 -04:00
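The gist of the fix, sketched in terms of the CPUTLBEntry comparator fields; this is illustrative, not the literal patch:

    /* pick the tlb comparator and refill type that match the access,
     * instead of always treating it as a data load */
    target_ulong tlb_addr =
        (access_type == MMU_DATA_LOAD)  ? entry->addr_read :
        (access_type == MMU_INST_FETCH) ? entry->addr_code :
                                          entry->addr_write;
    if (!tlb_hit(tlb_addr, addr)) {
        tlb_fill(cpu, addr, size, access_type, mmu_idx, retaddr);
    }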
Richard Henderson 5847d833b2
tcg/arm: Restrict constant pool displacement to 12 bits
This will not necessarily restrict the size of the TB, since for v7
the majority of constant pool usage is for calls from the out-of-line
ldst code, which is already at the end of the TB. But this does
allow us to save one insn per reference on the off-chance.

Backports commit b4b82d7e9caff7ccca5c621817b5a4b8e95eb9b1 from qemu
2019-04-30 10:10:21 -04:00
Richard Henderson 187e80c9a5
tcg/ppc: Allow the constant pool to overflow at 32k
There is no point in coding for a 2GB offset when the max TB size
is already limited to 64k. If we further restrict to 32k then we
can eliminate the extra ADDIS instruction.

Backports commit a7cdaf710f2aaaf0be855a338dd67463d4bb99e2 from qemu
2019-04-30 10:08:18 -04:00
Richard Henderson 6145e3fdd7
tcg: Restart TB generation after out-of-line ldst overflow
This is part c of relocation overflow handling.

Backports commit aeee05f53a5d67304a521d2644dc0a607e3c8b28 from qemu
2019-04-30 10:06:53 -04:00
Richard Henderson 196631e0a4
tcg: Restart TB generation after constant pool overflow
This is part b of relocation overflow handling.

Backports commit 1768987b73fa7e23e58b7844abe5882490ff8e42 from qemu
2019-04-30 10:00:52 -04:00
Richard Henderson 45315fd8ef
tcg: Restart TB generation after relocation overflow
If the TB generates too much code, such that backend relocations
overflow, try again with a smaller TB. In support of this, move
relocation processing from a random place within tcg_out_op, in
the handling of branch opcodes, to a new function at the end of
tcg_gen_code.

This is not a complete solution, as there are additional relocs
generated for out-of-line ldst handling and constant pools.

Backports commit 7ecd02a06f8f4c0bbf872ecc15e37035b7e1df5f from qemu
2019-04-30 09:58:45 -04:00
Richard Henderson 434b3ab9ec
tcg: Restart after TB code generation overflow
If a TB generates too much code, try again with fewer insns.

Fixes: https://bugs.launchpad.net/bugs/1824853

Backports commit 6e6c4efed995d9eca6ae0cfdb2252df830262f50 from qemu
2019-04-30 09:52:57 -04:00
Richard Henderson bca82cde84
tcg: Hoist max_insns computation to tb_gen_code
In order to handle TB's that translate to too much code, we
need to place the control of the length of the translation
in the hands of the code gen master loop.

Backports commit 8b86d6d25807e13a63ab6ea879f976b9f18cc45a from qemu
2019-04-30 09:49:57 -04:00
Richard Henderson 2479bbd3b2
tcg/aarch64: Support INDEX_op_extract2_{i32,i64}
Backports commit 464c2969d5d7a0a5d38d2aa5d930986df876d3fb from qemu
2019-04-30 09:40:40 -04:00
Richard Henderson cbc5f919c2
tcg/arm: Support INDEX_op_extract2_i32
Backports commit 3b832d67a993968868f4087a9720a5c911e23f7a from qemu
2019-04-30 09:39:30 -04:00
Richard Henderson 0f20a26b36
tcg/i386: Support INDEX_op_extract2_{i32,i64}
Backports commit c6fb8c0cf704c4a1a48c3e99e995ad4c58150dab from qemu
2019-04-30 09:37:39 -04:00
Richard Henderson da39922c60
tcg: Use extract2 in tcg_gen_deposit_{i32,i64}
Backports commit b0a6056719b4a409a5699d11bbfdf79301417221 from qemu
2019-04-30 09:35:49 -04:00
Richard Henderson 948635602c
tcg: Use deposit and extract2 in tcg_gen_shifti_i64
Backports commit 02616bad6f0788652deaca9a48d0dfa7716ff87a from qemu
2019-04-30 09:33:44 -04:00
Richard Henderson 269fa0daba
tcg: Add INDEX_op_extract2_{i32,i64}
This will let backends implement the double-word shift operation.

Backports commit fce1296f135669eca85dc42154a2a352c818ad76 from qemu
2019-04-30 09:29:05 -04:00
David Hildenbrand 458942d94e
tcg: Implement tcg_gen_extract2_{i32,i64}
Will be helpful for s390x. Input 128 bit and output 64 bit only,
which is sufficient for now.

Backports commit 2089fcc9e7b4174d1c351eaa7d277c02188a6dd2 from qemu
2019-04-30 09:20:45 -04:00
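A sketch of the semantics for the 32-bit form (the 64-bit form is the 128-bit-input/64-bit-output case mentioned above); treat the equivalence as illustrative:

    /* dest receives 32 bits taken from the 64-bit concatenation ah:al,
     * starting at bit 'ofs' -- i.e. a double-word (funnel) shift */
    tcg_gen_extract2_i32(dest, al, ah, ofs);

    /* for 0 < ofs < 32 this behaves like:
     *   dest = (al >> ofs) | (ah << (32 - ofs)); */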
Stanislav Lanci 64f51949a7
Pass through cache information for TOPOEXT CPUs
Backports commit a4e0b436f44a4bb47ed4a75b0c05d2547cf12b1c from qemu
2019-04-30 09:15:25 -04:00
Pu Wen 4bbf02a5f6
i386: Add new Hygon 'Dhyana' CPU model
Add a new base CPU model called 'Dhyana' to model processors from Hygon
Dhyana (family 18h), which is derived from AMD EPYC (family 17h).

The following feature bits have been removed compared to AMD EPYC:
aes, pclmulqdq, sha_ni

Hygon Dhyana support for KVM in Linux has already been accepted
upstream [1], so adding Hygon Dhyana support to QEMU is necessary to
create Hygon's own CPU model.

Reference:
[1] https://git.kernel.org/tip/fec98069fb72fb656304a3e52265e0c2fc9adf87

Backports commit 8d031cec366f26669807eb43f61eb335973b7053 from qemu
2019-04-30 09:13:55 -04:00
Lioncash f6911ea73d
target/arm: Handle AArch32 CRC instructions 2019-04-27 10:50:25 -04:00
Lioncash c3df12e534
target/arm/translate: Synchronize with Qemu 2019-04-27 10:13:01 -04:00
Lioncash 9dfe2b527b
cpu-exec: Synchronize with qemu 2019-04-26 16:07:51 -04:00
Lioncash 5daabe55a4
cputlb: Synchronize with qemu
Synchronizes the code with Qemu to reduce a few differences.
2019-04-26 15:48:45 -04:00
Lioncash ef9e607e1c
qemu: Update bitmap.c/.h
Keeps it up to date with Qemu.
2019-04-26 13:05:55 -04:00
Lioncash f0c271ca2f
tcg: Correct special-cased brcond handling 2019-04-26 10:25:46 -04:00
Lioncash 4a64ebf95e
tcg: Synchronize with qemu 2019-04-26 09:32:20 -04:00
Lioncash 3996153514
tcg: Synchronize with qemu 2019-04-26 08:48:32 -04:00
Lioncash 006a13026a
tcg: Remove inconsistent g_strdup usage
The ts's name was allocated with strdup, but ts2's was allocated with
g_strdup. Makes them consistent with upstream Qemu.
2019-04-26 08:48:32 -04:00
Lioncash 6d80445fe1
unicorn_arm: Treat registers as unsigned values in casts
It isn't particularly advisable to treat these as signed values, given
the registers themselves have no notion of signedness associated with
them.
2019-04-26 08:48:31 -04:00
Lioncash f419015aa3
unicorn_arm: Don't steamroll CPSR bits defined as RAZ/SBZP
Prevents bits from being set that should always read as zero according
to the ARM architecture reference manual.
2019-04-26 08:47:50 -04:00
Peter Maydell 8b2a0554cf
Open 4.1 development tree
Backports commit 85947dafad13ef8aea02eef2b058ee7aee47ab3e from qemu
2019-04-24 11:59:00 -04:00
Peter Maydell b62ab65eca
Update version for v4.0.0 release
Backports commit 131b9a05705636086699df15d4a6d328bb2585e8 from qemu
2019-04-24 11:58:36 -04:00
Lioncash 70836028eb
exec/helper-*: Synchronize with qemu 2019-04-22 08:22:49 -04:00
Lioncash 0379335677
cpu_ldst: Remove unused macros 2019-04-22 08:17:20 -04:00
Peter Maydell ff9c67b8f0
cpu_ldst.h: Don't define helpers if MMU_MODE*_SUFFIX not defined
Not all targets define a full set of suffix strings for the
NB_MMU_MODES that they have. In this situation, don't define any
helper functions for that mode, rather than defining helper functions
with no suffix at all. The MMU mode is still functional; it is merely
not directly accessible via cpu_ld*_MODE from target helper functions.

Also add an "NB_MMU_MODES >= 2" check to the definition of the mode 1
helpers -- some targets only define one MMU mode.

Backports commit de5ee4a888667ca0a198f0743d70075d70564117 from qemu
2019-04-22 07:44:32 -04:00
Lioncash e75b32ca4b
cpu_ldst.h, cpu-all.h, bswap.h: Update documentation on ld/st accessors
Add documentation of what the cpu_*_* accessors look like.
Correct some minor errors in the existing documentation of the
direct _p accessor family. Remove the near-duplicate comment
on the _p accessors from cpu-all.h and replace it with a reference
to the comment in bswap.h.

Backports commit db5fd8d709fd57f4d4f11edfca9f421f657f4508 from qemu
2019-04-22 07:39:13 -04:00
Peter Maydell 84eafc0cf6
cpu_ldst_template.h: Drop unused cpu_ldfq/stfq/ldfl/stfl accessors
The cpu_ldfq/stfq/ldfl/stfl accessors for loading and storing
float32 and float64 are completely unused, so delete them.
(The union they use for converting from the float32/float64
type to uint32_t or uint64_t is the wrong way to do it anyway:
they should be using make_float* and float*_val.)

Backports commit 82f11917c99e3c7fa3d6aa98572ecc98c7324c2f from qemu
2019-04-22 07:21:03 -04:00
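The helpers referred to in the parenthetical, sketched (ptr is an illustrative host pointer):

    /* convert between raw bits and float32 with the dedicated helpers
     * rather than a type-punning union */
    float32  f    = make_float32(ldl_p(ptr));   /* bits -> float32 */
    uint32_t bits = float32_val(f);             /* float32 -> bits */
    stl_p(ptr, bits);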
Peter Maydell 32650e7816
cpu_ldst.h: Drop unused _raw macros, saddr() and laddr()
The _raw macros and their helpers saddr() and laddr() are now
totally unused -- delete them.

Backports commit 800e2ecc896beb6b79e7333c762da163b6a9135a from qemu
2019-04-22 07:19:20 -04:00
Peter Maydell f1a1f3c642
cpu_ldst_template.h: Use ld*_p directly rather than via ld*_raw macros
The ld*_raw and st*_raw macros are now only used within the code
produced by cpu_ldst_template.h, and only in three places.
Expand these out to just call the ld_p and st_p functions directly.

Note that in all the callsites the address argument is a uintptr_t,
so we can drop that part of the double-cast used in the saddr() and
laddr() macros.

Backports commit 355392329e4a843580e53cb027ed85e0cbebb640 from qemu
2019-04-22 07:11:50 -04:00
Peter Maydell 1a880ef99b
cpu_ldst.h: Use inline functions for usermode cpu_ld/st accessors
Use inline functions rather than macros for cpu_ld/st accessors
for the *-user configurations, as we already do for softmmu.
This has a two advantages:
* we can actually typecheck our arguments
* we don't need to leak the _raw macros everywhere

Since the _kernel functions were only used by target-i386/seg_helper.c,
put the definitions for them in that file too. (It already has the
similar template include code to define them for the softmmu case,
so it makes sense to have it deal with defining them for user-only.)

Backports commit 9220fe54c679d145232a28df6255e166ebf91bab from qemu
2019-04-22 07:08:39 -04:00
Peter Maydell 4fe3b4f95c
cpu_ldst.h: Remove unused very short ld*/st* defines
The very short ld*/st* defines are now not used anywhere; delete them.

Backports commit 177ea79f65c90b3bc84d59565b7519e47ea02f63 from qemu
2019-04-22 06:57:28 -04:00
Peter Maydell 36cd9f0df0
cpu_ldst.h: Drop unused ld/st*_kernel defines
The ld*_kernel and st*_kernel defines are not used anywhere;
delete them.

Backports commit 5a0826f7d2f9bea6e02157985b103d0a4c458aaa from qemu
2019-04-22 06:54:26 -04:00
Lioncash 830756a725
gen-icount: Use tcg_ctx where applicable in commented out code
If this code is ever re-enabled in the future, it will already be usable.
2019-04-22 06:17:10 -04:00
Lioncash d844d7cc9d
exec: Backport tb_cflags accessor 2019-04-22 06:12:59 -04:00
Lioncash 9f0e469142
gen-icount: Synchronize with qemu 2019-04-22 05:53:46 -04:00
Lioncash 96c52ea053
tcg: Synchronize with qemu 2019-04-22 02:03:01 -04:00
Lioncash 14f1cb03e9
target/arm/unicorn_arm: Get rid of magic constants where applicable 2019-04-19 20:23:41 -04:00
Lioncash 8ebb8f9fec
memory: Extract memory region freeing code to a common function
Makes deallocation behavior consistent and also fixes a memory leak
where the name of the region was never freed.
2019-04-19 18:44:28 -04:00
Lioncash 7de690e87c
memory: Use a uint64_t instead of target_ulong for representing the incremented address
Prevents an infinite loop case if mapping near the upper boundary of an
address space on 32-bit emulated targets. i.e. mapping at 0xFFFFF000
with a size of 4096 won't overflow back to zero.

While we're at it, also tidy up the unicorn-specific functions.
2019-04-19 18:32:28 -04:00
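A self-contained illustration of the wrap-around being avoided; the constants mirror the example in the message:

    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t narrow = 0xFFFFF000u;   /* stands in for a 32-bit target_ulong */
        uint64_t wide   = 0xFFFFF000u;

        narrow += 4096;                  /* wraps to 0, so an address loop never ends */
        wide   += 4096;                  /* 0x100000000, so the loop bound is reached */

        printf("narrow=0x%" PRIx32 " wide=0x%" PRIx64 "\n", narrow, wide);
        return 0;
    }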
Lioncash 5968b3d96f
target/arm: Synchronize with qemu 2019-04-19 15:31:18 -04:00
Lioncash ccf16bc572
softmmu_template: Fix invalid argument to tlb_fill in helper_be_st_name
This should be passing in the page2 value like in the little-endian
handler.
2019-04-18 08:43:14 -04:00
Lioncash bf6dfeb175
target/arm/translate: Synchronize with qemu
Backports a few other missing pieces from mainline qemu.
2019-04-18 06:22:36 -04:00
Lioncash 5b062dacf2
target/arm: Simplify and correct thumb instruction tracing
This wasn't subtracting the size of the instruction from the PC the way
the ARM-mode tracing does. This change simplifies the code and makes the
behavior identical.
2019-04-18 06:00:15 -04:00
Lioncash 5d6ddec7fb
target/arm/translate: Subtract PC value properly for thumb tracecode calls 2019-04-18 05:44:48 -04:00
Lioncash b9d1002609
tcg-op: Make sure to free temporaries within gen_uc_tracecode()
After the helper is generated, these are no longer needed and can be
reclaimed.
2019-04-18 05:40:48 -04:00
Lioncash e579832dcb
tcg: Synchronize with qemu 2019-04-18 04:57:19 -04:00
Lioncash 3521e72580
target/arm: Synchronize with qemu
Synchronizes with bits and pieces that were missed due to merging
incorrectly (sorry :<)
2019-04-18 04:49:11 -04:00
Peter Maydell 753b6601c5
Update version for v4.0.0-rc4 release
Backports commit eeba63fc7fface36f438bcbc0d3b02e7dcb59983 from qemu
2019-04-16 22:39:02 -04:00
Lukas Dresel 4b94a8cc44
support for YMM registers ymm8-ymm15 (#1079)
Backports 55d8d073bd80935e807289ae2ff6161145a2afb6 from qemu
2019-04-16 06:35:41 -04:00
Lioncash 5de5b69344
target/i386: Fix compilation of the x86 target
Thanks to @rk700 for reporting it.
2019-04-16 06:29:06 -04:00
Lioncash ddcf400955
arm: Always enable access to coprocessors initially
Allows non-AArch64 environments to always access coprocessors initially.
Removes the need to do avoidable register management when testing
floating-point code.
2019-04-13 19:49:43 -04:00
Peter Maydell 5bcb1e13ab
Update version for v4.0.0-rc3 release
Backports commit 532cc6da74ec25b5ba6893b5757c977d54582949 from qemu
2019-04-10 15:00:31 -04:00
Peter Maydell 3ff38c2402
include/qemu/bswap.h: Use __builtin_memcpy() in accessor functions
In the accessor functions ld*_he_p() and st*_he_p() we use memcpy()
to perform a load or store to a pointer which might not be aligned
for the size of the type. We rely on the compiler to optimize this
memcpy() into an efficient load or store instruction where possible.
This is required for good performance, but at the moment it is also
required for correct operation, because some users of these functions
require that the access is atomic if the pointer is aligned, which
will only be the case if the compiler has optimized out the memcpy().
(The particular example where we discovered this is the virtio
vring_avail_idx() which calls virtio_lduw_phys_cached() which
eventually ends up calling lduw_he_p().)

Unfortunately some compile environments, such as the fortify-source
setup used in Alpine Linux, define memcpy() to a wrapper function
in a way that inhibits this compiler optimization.

The correct long-term fix here is to add a set of functions for
doing atomic accesses into AddressSpaces (and to other relevant
families of accessor functions like the virtio_*_phys_cached()
ones), and make sure that callsites which want atomic behaviour
use the correct functions.

In the meantime, switch to using __builtin_memcpy() in the
bswap.h accessor functions. This will make us robust against things
like this fortify library in the short term. In the longer term
it will mean that we don't end up with these functions being really
badly-performing even if the semantics of the out-of-line memcpy()
are correct.
2019-04-10 14:57:52 -04:00
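A sketch of the accessor pattern being discussed; it mirrors the shape of the ld*_he_p/st*_he_p helpers rather than quoting them:

    #include <stdint.h>

    static inline int lduw_he_p(const void *ptr)
    {
        uint16_t r;
        __builtin_memcpy(&r, ptr, sizeof(r));  /* bypasses any memcpy() wrapper macro */
        return r;
    }

    static inline void stl_he_p(void *ptr, uint32_t v)
    {
        __builtin_memcpy(ptr, &v, sizeof(v));
    }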
Peter Maydell 6b413ffa97
target/i386: Generate #UD for LOCK on a register increment
Fix a TCG crash due to attempting an atomic increment
operation without having set up the address first.
This is a similar case to that dealt with in commit
e84fcd7f662a0d8198703, and we fix it in the same way.

Fixes: https://bugs.launchpad.net/qemu/+bug/1807675

Backports commit 8cb2ca3d7479748587313f0b34034a3f8aa08c92 from qemu
2019-04-09 09:28:46 -04:00
Peter Maydell 1a50ee5826
Update version for v4.0.0-rc2 release
Backports commit 061b51e9195670e9d190cdec46fabcb3c77763fb from qemu
2019-04-03 10:02:46 -04:00
Paolo Bonzini b4dd8afa14
config-all-devices.mak: rebuild on reconfigure
This ensures that softmmu directories are culled after a
"./configure --target-list=x86_64-linux-user".

Backports commit b7c11e574977a0addfbbdb89377c6f52affe64ec from qemu
2019-03-29 19:31:32 -04:00
Singh, Brijesh 54b9701799
memory: Fix the memory region type assignment order
Currently, a callback registered through the RAMBlock notifier
is not able to get the memory region type (i.e. the callback cannot
use the memory_region_is_ram_device() function). This is because
the mr->ram assignment happens _after_ the memory is allocated,
whereas the callback is executed during the allocation.
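
The fix is essentially an ordering change, sketched below
(allocate_ram_and_notify is a stand-in name for the allocation path that
fires the RAMBlock notifier):

    mr->ram = true;                 /* set the region type first...           */
    mr->ram_block = allocate_ram_and_notify(mr, size);
                                    /* ...so the notifier sees the right type */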

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1667249

Backports commit 2ddb89b00f947f785c9ca6742f28f954e3b75e62 from qemu
2019-03-29 19:28:41 -04:00
Peter Maydell d57c77afcf
Update version for v4.0.0-rc1 release
Backports commit 49fc899f8d673dd9e73f3db0d9e9ea60b77c331b from qemu
2019-03-26 20:44:04 -04:00
Richard Henderson f5cb1a5865
target/arm: Set SIMDMISC and FPMISC for 32-bit -cpu max
Fixes: https://bugs.launchpad.net/bugs/1821430

Backports commit c8877d0f2f662bf01346a03bc9fd279954b4132d from qemu
2019-03-26 20:41:01 -04:00
Kito Cheng 5a7ad783e9
target/riscv: Fix wrong expanding for c.fswsp
The base register goes into the rs1 field, not rs2, for the expanded fsw.
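
For example, c.fswsp fs2, 8 must expand to fsw fs2, 8(x2): the stack
pointer x2 goes into the rs1 (base) field of the expanded fsw, while the
register being stored goes into rs2.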

Backports commit 620455350a8da7cc62ae82cb69dd5c556f744136 from qemu
2019-03-26 20:39:34 -04:00
Palmer Dabbelt fc662c281a
target/riscv: Zero extend the inputs of divuw and remuw
While running the GCC test suite against 4.0.0-rc0, Kito found a
regression introduced by the decodetree conversion that caused divuw and
remuw to sign-extend their inputs. The ISA manual says they are
supposed to be zero extended:

DIVW and DIVUW instructions are only valid for RV64, and divide the
lower 32 bits of rs1 by the lower 32 bits of rs2, treating them as
signed and unsigned integers respectively, placing the 32-bit
quotient in rd, sign-extended to 64 bits. REMW and REMUW
instructions are only valid for RV64, and provide the corresponding
signed and unsigned remainder operations respectively. Both REMW
and REMUW always sign-extend the 32-bit result to 64 bits, including
on a divide by zero.

Here's Kito's reduced test case from the GCC test suite

#include <stdlib.h>

unsigned calc_mp(unsigned mod)
{
    unsigned a, b, c;
    c = -1;
    a = c / mod;
    b = 0 - a * mod;
    if (b > mod) {
        a += 1;
        b -= mod;
    }
    return b;
}

int main(int argc, char *argv[])
{
    unsigned x = 1234;
    unsigned y = calc_mp(x);

    if ((sizeof (y) == 4 && y != 680)
        || (sizeof (y) == 2 && y != 134))
        abort ();
    exit (0);
}

I haven't done any other testing on this, but it does fix the test case.
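
As a self-contained model of the required DIVUW semantics on RV64 (an
illustration, not the actual TCG code):

#include <stdint.h>

static uint64_t divuw(uint64_t rs1, uint64_t rs2)
{
    uint32_t a = (uint32_t)rs1;            /* zero-extend the low 32 bits */
    uint32_t b = (uint32_t)rs2;
    uint32_t q = b ? a / b : UINT32_MAX;   /* divide by zero yields all ones */
    return (uint64_t)(int64_t)(int32_t)q;  /* sign-extend the 32-bit result */
}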

Backports commit f17e02cd3731bdfe2942d1d0b2a92f26da02408c from qemu
2019-03-26 20:38:17 -04:00
Andrew Jones 8719b3edb3
target/arm: make pmccntr_op_start/finish static
These functions are not used outside helper.c

Backports commit f2b2f53f6429b5abd7cd86bd65747f5f13e195eb from qemu
2019-03-26 20:35:34 -04:00
Andrew Jones 6482182ba5
target/arm: cortex-a7 and cortex-a15 have pmus
cortex-a7 and cortex-a15 have pmus (PMUv2) and they advertise
them in ID_DFR0. Let's allow them to function. This also enables
the pmu cpu property to work with these cpu types, i.e. we can
now do '-cpu cortex-a15,pmu=off' to remove the pmu.

Backports commit a46118fc16537a593119e5b316052a98514046bb from qemu
2019-03-26 20:34:11 -04:00
Andrew Jones 3c50e72c40
target/arm: fix crash on pmu register access
Fix a QEMU NULL dereference that occurs when the guest attempts to
enable PMU counters with a non-v8 cpu model or a v8 cpu model
which has not configured a PMU.

Backports commit cbbb3041fe2f57a475cef5d6b0ef836118aad106 from qemu
2019-03-26 20:32:49 -04:00
Richard Henderson 2427ace0c0
target/arm: Fix non-parallel expansion of CASP
The second word has been loaded from the unincremented
address since the first commit.

Backports commit a036f5302c13634f3d375615b2949fd1fa1657b6 from qemu
2019-03-26 20:31:01 -04:00
Eduardo Habkost df51e8bbb3
i386: Disable OSPKE on CPU model definitions
Currently, the Cascadelake-Server, Icelake-Client, and
Icelake-Server CPU models always generate the following warning:

qemu-system-x86_64: warning: \
host doesn't support requested feature: CPUID.07H:ECX [bit 4]

This happens because OSPKE was never returned by
GET_SUPPORTED_CPUID or x86_cpu_get_supported_feature_word().
OSPKE is a runtime flag automatically set by the KVM module or by
TCG code; it was always cleared by x86_cpu_filter_features() and
was never supposed to appear in the CPU model table.

Remove the OSPKE flag from the CPU model table entries, to avoid
the bogus warning and avoid returning invalid feature data on
query-cpu-* QMP commands. As OSPKE was always cleared by
x86_cpu_filter_features(), this won't have any guest-visible
impact.

Include a test case that should detect the problem if we introduce
a similar bug again.

Fixes: c7a88b52f62b ("i386: Add new model of Cascadelake-Server")
Fixes: 8a11c62da914 ("i386: Add new CPU model Icelake-{Server,Client}")

Backports commit bb4928c7cafe50ab2137a0034e350ef1bfa044d9 from qemu
2019-03-22 09:46:44 -04:00
Eduardo Habkost a71df717c9
i386: Make arch_capabilities migratable
Now that kvm_arch_get_supported_cpuid() will only return
arch_capabilities if QEMU is able to initialize the MSR properly,
we know that the feature is safely migratable.

Backports commit 014018e19b3c54dd1bf5072bc912ceffea40abe8 from qemu
2019-03-22 09:45:43 -04:00
Peter Maydell db02d0b733
Update version for v4.0.0-rc0 release
Backports commit 62a172e6a77d9072bb1a18f295ce0fcf4b90a4f2 from qemu
2019-03-19 23:58:31 -04:00
Alistair Francis a9cc62cb23
target/riscv: Remove unused struct
Backports commit 6b745d4fada5c73db44f596a62e29a5dbe3fc53f from qemu
2019-03-19 23:58:31 -04:00
Michael Clark b247ee234d
RISC-V: Update load reservation comment in do_interrupt
Backports commit d9360e96885dbd69ce4aa925d1701c7a10cf54ae from qemu
2019-03-19 23:58:31 -04:00
Michael Clark d3dbcb6dfc
RISC-V: Add support for vectored interrupts
If vectored interrupts are enabled (bits[1:0]
of mtvec/stvec == 1) then use the following
logic for trap entry address calculation:

pc = mtvec + cause * 4

In addition to adding support for vectored interrupts
this patch simplifies the interrupt delivery logic
by making sync/async cause decoding and encoding
steps distinct.

The cause code and the sign bit indicating sync/async
are split out at the beginning of the function, and the
fixed_cause variable is renamed to cause. The MSB setting for async
traps is delayed until setting mcause/scause to allow
redundant variables to be eliminated. Some variables
are renamed for conciseness and moved so that decls
are at the start of the block.
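
A self-contained sketch of the resulting trap-entry calculation
(illustrative, not the exact backported code):

#include <stdbool.h>
#include <stdint.h>

static uint64_t trap_entry_pc(uint64_t mtvec, uint64_t cause, bool async)
{
    uint64_t base = mtvec & ~(uint64_t)3;  /* bits[1:0] hold the mode */
    if (async && (mtvec & 3) == 1) {
        return base + cause * 4;           /* vectored: interrupts only */
    }
    return base;                           /* direct mode or synchronous trap */
}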

Backports commit acbbb94e5730c9808830938e869d243014e2923a from qemu
2019-03-19 23:58:31 -04:00
Michael Clark 8ffa68e757
RISC-V: Change local interrupts from edge to level
This effectively changes riscv_cpu_update_mip
from edge to level, i.e. cpu_interrupt() or
cpu_reset_interrupt() is called regardless of
the current interrupt level.
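
In other words, the interrupt line always mirrors the current mip value,
roughly (a sketch; env and cs stand for the RISC-V CPU state and its
CPUState):

    if (env->mip) {
        cpu_interrupt(cs, CPU_INTERRUPT_HARD);
    } else {
        cpu_reset_interrupt(cs, CPU_INTERRUPT_HARD);
    }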

Fixes an issue where WFI doesn't return when an IPI is issued:

- https://github.com/riscv/riscv-qemu/issues/132

To test:

1) Apply RISC-V Linux CPU hotplug patch:

- http://lists.infradead.org/pipermail/linux-riscv/2018-May/000603.html

2) Enable CONFIG_CPU_HOTPLUG in linux .config

3) Try to offline and online cpus:

echo 1 > /sys/devices/system/cpu/cpu2/online
echo 0 > /sys/devices/system/cpu/cpu2/online
echo 1 > /sys/devices/system/cpu/cpu2/online

Backports commit d26f5a423438e579d3ff0ca35e44edb966a36233 from qemu
2019-03-19 23:58:31 -04:00
Kito Cheng bd3e9ebaea
RISC-V: linux-user support for RVE ABI
This change checks elf_flags for EF_RISCV_RVE and, if
present, uses the RVE Linux syscall ABI, which uses t0
for the syscall number instead of a7.
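
A minimal model of that selection (illustrative; t0 is x5 and a7 is x17
per the RISC-V ABI, and the EF_RISCV_RVE value follows the ELF psABI):

#include <stdint.h>

#define EF_RISCV_RVE 0x0008

static int syscall_number_gpr(uint32_t elf_flags)
{
    /* RVE binaries pass the syscall number in t0, everything else in a7 */
    return (elf_flags & EF_RISCV_RVE) ? 5 /* t0 */ : 17 /* a7 */;
}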

Warn and exit if a non-RVE ABI binary is run on a
cpu with the RVE extension as it is incompatible.

Backports relevant parts of 5836c3eccedb6dfab16b8f606f2de24b8938b69c
from qemu
2019-03-19 23:58:31 -04:00
Michael Clark 2e0c040062
RISC-V: Allow interrupt controllers to claim interrupts
We can't allow the supervisor to control SEIP, as this would allow the
supervisor to clear a pending external interrupt, which would result in
a lost interrupt in the case where a PLIC is attached. The SEIP bit must
be hardware-controlled when a PLIC is attached.

This logic was previously hard-coded so SEIP was always masked even
if no PLIC was attached. This patch adds riscv_cpu_claim_interrupts
so that the PLIC can register control of SEIP. In the case of models
without a PLIC (spike), the SEIP bit remains software controlled.

This interface allows for hardware control of supervisor timer and
software interrupts by other interrupt controller models.
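
The claiming interface can be modeled roughly like this (a sketch; the
actual field and function names in the backport may differ):

#include <stdint.h>

/* an interrupt controller claims the MIP bits it will drive in hardware */
static int claim_interrupts(uint32_t *claimed_mask, uint32_t interrupts)
{
    if (*claimed_mask & interrupts) {
        return -1;                  /* already claimed by another controller */
    }
    *claimed_mask |= interrupts;    /* e.g. the PLIC claims the SEIP bit */
    return 0;
}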

Backports commit e3e7039cc24ecf47d81c091e8bb04552d6564ad8 from qemu
2019-03-19 23:48:12 -04:00
Alistair Francis a4f2dcde28
riscv: pmp: Log pmp access errors as guest errors
Backports commit aad5ac2311f3ad2c0be12d0eaaf4ef4398438fc2 from qemu
2019-03-19 23:45:03 -04:00
Jim Wilson 65903cf9a4
RISC-V: Add debug support for accessing CSRs.
Add a debugger field to CPURISCVState and a riscv_csrrw_debug() function
to set it. Mode checks are disabled when the debugger field is true.
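
Conceptually the debug accessor is a thin wrapper (a sketch; the exact
signature in the backport may differ):

    env->debugger = true;           /* skip privilege-mode checks */
    ret = riscv_csrrw(env, csrno, &old_value, new_value, write_mask);
    env->debugger = false;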

Backports commit 753e3fe207db08ce0ef0405e8452c3397c9b9308 from qemu
2019-03-19 23:42:48 -04:00
Jim Wilson 30ab335bb3
RISC-V: Fixes to CSR_* register macros.
This adds some missing CSR_* register macros, and documents some as being
priv v1.9.1 specific.

Backports commit 8e73df6aa3f2f0e5c26c03a94a88406616291815 from qemu
2019-03-19 23:39:49 -04:00
Bastian Koppelmann c0f036578c
target/riscv: Fix manually parsed 16 bit insn
During the refactor to decodetree we removed the manual decoding that is
necessary for c.jal/c.addiw and removed the translation of c.flw/c.ld
and c.fsw/c.sd. This reintroduces the manual parsing and the
omitted implementations.

Backports commit f330433b3633647b047cfa418c2ca4d18fda69c7 from qemu
2019-03-19 05:44:58 -04:00
Amir Charif 2392d8b8ab
target/arm: Check access permission to ADDVL/ADDPL/RDVL
These instructions do not trap when SVE is disabled in EL0,
causing them to be executed with wrong size information.

Backports commit 5de56742a3c91de3d646326bec43a989bba83ca4 from qemu
2019-03-19 05:42:59 -04:00
Dongjiu Geng 4dc3d59fd3
target/arm: change arch timer registers access permission
Some generic arch timer registers are Config-RW at EL0, which means
the EL0 exception level can have write permission if it is
appropriately configured.

When the VM accesses these registers, QEMU first checks whether they
have RW permission and then checks whether they are appropriately
configured. If they are defined as read-only at EL0, they do not get
write permission even when they have been appropriately configured.
So we need to grant the write permission, per the ARMv8 spec, when
defining them.

Backports commit daf1dc5f82cefe2a57f184d5053e8b274ad2ba9a from qemu
2019-03-19 05:40:44 -04:00
Bastian Koppelmann e96282eb28
target/riscv: Remove decode_RV32_64G()
decodetree handles all instructions now, so the fallback is no longer
necessary.

Backports commit 25e6ca30c668783cd72ff97080ff44e141b99f9b from qemu
2019-03-19 05:37:42 -04:00
Bastian Koppelmann a371684da9
target/riscv: Remove gen_system()
With all 16-bit insns moved to decodetree, no path falls back to
gen_system(), so we can remove it.

Backports commit 8f7bc273868939f0821e07fb23792db63d45bffb from qemu
2019-03-19 05:36:48 -04:00
Bastian Koppelmann 1765e6a090
target/riscv: Rename trans_arith to gen_arith
Backports commit 8dc9e8a8b04c4308cf275aa6480d289dcd3cf9b3 from qemu
2019-03-19 05:35:44 -04:00
Bastian Koppelmann 28daad082b
target/riscv: Remove manual decoding of RV32/64M insn
Backports commit 1288701682d81b93f62e01cd87001dc90b30b881 from qemu
2019-03-19 05:34:32 -04:00
Bastian Koppelmann b9eda7c464
target/riscv: Remove shift and slt insn manual decoding
Backports commit 34446e845829f55eaa9a07a915950af0b2710b47 from qemu
2019-03-19 05:23:47 -04:00
Bastian Koppelmann 177726afb8
target/riscv: make ADD/SUB/OR/XOR/AND insn use arg lists
Manual decoding in gen_arith() is not necessary with decodetree. For now
the new function is called trans_arith, as the original gen_arith() still
exists. It will be renamed to gen_arith() as soon as the old
gen_arith() can be removed.

Backports commit f2ab1728675772cd475a33f4df3d2f68a22c188f from qemu
2019-03-19 05:17:54 -04:00
Bastian Koppelmann cb7c94fbc4
target/riscv: Move gen_arith_imm() decoding into trans_* functions
gen_arith_imm() does a lot of decoding manually, which was hard to read
in the case of the shift instructions and is no longer necessary with
decodetree.

Backports commit 7a50d3e2ae7f13b24fe55990ea0b8ddcbbb43130 from qemu
2019-03-19 05:14:21 -04:00
Bastian Koppelmann 6190837e2f
target/riscv: Remove manual decoding from gen_store()
With decodetree we don't need to convert RISC-V opcodes into MemOps
as the old gen_store() did.

Backports commit bce8a342a1f0919479d18ec812b100136daa746b from qemu
2019-03-19 05:05:14 -04:00
Bastian Koppelmann f91f286ed2
target/riscv: Remove manual decoding from gen_load()
With decodetree we don't need to convert RISC-V opcodes into MemOps
as the old gen_load() did.

Backports commit 98898b20e9cca462843c22ad952c216ffd57d654 from qemu
2019-03-19 05:02:25 -04:00