Commit graph

6150 commits

Peter Maydell ab63f1a102
target/m68k: In get_physical_address() check for memory access failures
In get_physical_address(), use address_space_ldl() and
address_space_stl() instead of ldl_phys() and stl_phys().
This allows us to check whether the memory access failed.
For the moment, we simply return -1 in this case;
add a TODO comment that we should ideally generate the
appropriate kind of fault.

Backports commit adcf0bf017351776510121e47b9226095836023c from qemu
2019-05-17 11:59:01 -04:00
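
A minimal sketch of the new pattern, using the stock QEMU
address_space_ldl() signature (variable names hypothetical):

    MemTxResult txres;
    uint32_t entry = address_space_ldl(cs->as, ptr,
                                       MEMTXATTRS_UNSPECIFIED, &txres);
    if (txres != MEMTX_OK) {
        /* TODO: generate the architecturally correct fault here;
           for now, just report translation failure */
        return -1;
    }
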
Richard Henderson 2a4a7b9391
tcg: Use tlb_fill probe from tlb_vaddr_to_host
Most of the existing users would continue around a loop which
would fault the tlb entry in via a normal load/store.

But for AArch64 SVE we have an existing emulation bug wherein we
would mark the first element of a no-fault vector load as faulted
(within the FFR, not via exception) just because we did not have
its address in the TLB. Now we can properly only mark it as faulted
if there really is no valid, readable translation, while still not
raising an exception. (Note that beyond the first element of the
vector, the hardware may report a fault for any reason whatsoever;
with at least one element loaded, forward progress is guaranteed.)

Backports commit 4811e9095c0491bc6f5450e5012c9c4796b9e59d from qemu
2019-05-16 18:27:03 -04:00
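
A sketch of how the SVE no-fault first element can now be handled,
assuming tlb_vaddr_to_host() and the SVE helpers' record_fault()
(call sites simplified):

    /* probe the TLB without raising an exception */
    host = tlb_vaddr_to_host(env, addr, MMU_DATA_LOAD, mmu_idx);
    if (!host) {
        /* no valid, readable translation: set the fault bit in the
           FFR for this element instead of taking an exception */
        record_fault(env, reg_off, reg_max);
        return;
    }
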
Laurent Vivier 8cdfed1032
linux-user: fix 32bit g2h()/h2g()
sparc32plus has a 64bit long type but only a 32bit virtual address space.

For instance, "apt-get upgrade" failed because of a mmap()/msync()
sequence.

mmap() returned 0xff252000 but msync() used g2h(0xffffffffff252000)
to find the host address. The "(target_ulong)" cast in g2h() doesn't
fix the address because target_ulong is itself 64 bits wide.

This patch introduces an "abi_ptr" type that is set to uint32_t
when the virtual address space is 32 bits wide in the linux-user
case. It stays set to target_ulong in the softmmu case.

Backports commit 3e23de15237c81fe7af7c3ffa299a6ae5fec7d43 from qemu
2019-05-16 18:20:55 -04:00
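
A sketch of the idea, following the shape of the upstream patch (the
exact guard macros may differ):

    #if defined(CONFIG_USER_ONLY) && TARGET_VIRT_ADDR_SPACE_BITS <= 32
    typedef uint32_t abi_ptr;     /* guest pointers are 32 bits */
    #else
    typedef target_ulong abi_ptr; /* softmmu keeps the full width */
    #endif

    /* truncating through abi_ptr makes g2h(0xffffffffff252000) find
       the same host address as g2h(0xff252000) */
    #define g2h(x) ((void *)((uintptr_t)(abi_ptr)(x) + guest_base))
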
Richard Henderson e736ef3238
tcg: Remove CPUClass::handle_mmu_fault
This hook is now completely replaced by tlb_fill.

Backports commit 69963f5709a0645934c169784820d0bee22208ba from qemu
2019-05-16 18:12:17 -04:00
Lioncash fcaa52c1fe
tcg: Synchronize with qemu
Resolves any formatting discrepancies and bad merges that slipped
through.
2019-05-16 18:11:08 -04:00
Richard Henderson dab0061a0d
tcg: Use CPUClass::tlb_fill in cputlb.c
We can now use the CPUClass hook instead of a named function.

Create a static tlb_fill function to avoid other changes within
cputlb.c. This also isolates the asserts within. Remove the
named tlb_fill function from all of the targets.

Backports commit c319dc13579a92937bffe02ad2c9f1a550e73973 from qemu
2019-05-16 17:35:37 -04:00
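
The static shim is close to the following (per the upstream commit;
this fork may thread extra parameters through):

    static void tlb_fill(CPUState *cpu, target_ulong addr, int size,
                         MMUAccessType access_type, int mmu_idx,
                         uintptr_t retaddr)
    {
        CPUClass *cc = CPU_GET_CLASS(cpu);
        bool ok;

        /* not a probe: the only valid outcome is success, since a
           failure must raise an exception and unwind to the loop */
        ok = cc->tlb_fill(cpu, addr, size, access_type, mmu_idx,
                          false, retaddr);
        assert(ok);
    }
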
Richard Henderson 5d83199931
target/sparc: Convert to CPUClass::tlb_fill
Backports commit e84942f2ceaa79430414f2cb68d77c044dadca96 from qemu
2019-05-16 17:29:35 -04:00
Richard Henderson e98c731550
target/riscv: Convert to CPUClass::tlb_fill
Note that env->pc is removed from the qemu_log as that value is garbage.
The PC isn't recovered until cpu_restore_state, called from
cpu_loop_exit_restore, called from riscv_raise_exception.

Backports commit 8a4ca3c10a96be6ed7f023b685b688c4d409bbcb from qemu
2019-05-16 17:24:01 -04:00
Richard Henderson 14d48974a4
target/mips: Convert to CPUClass::tlb_fill
Note that env->active_tc.PC is removed from the qemu_log as that value
is garbage. The PC isn't recovered until cpu_restore_state, called from
cpu_loop_exit_restore, called from do_raise_exception_err.

Backports commit 931d019f5b2e7bbacb162869497123be402ddd86 from qemu
2019-05-16 17:19:47 -04:00
Richard Henderson 49cb8cfe5b
target/mips: Tidy control flow in mips_cpu_handle_mmu_fault
Since the only non-negative TLBRET_* value is TLBRET_MATCH,
the subsequent test for ret < 0 is useless. Use early return
to allow subsequent blocks to be unindented.

Backports commit e38f4eb63020075432cb77bf48398187809cf4a3 from qemu
2019-05-16 17:15:33 -04:00
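
The shape of the change, sketched (identifiers abbreviated):

    ret = get_physical_address(env, &physical, &prot, address, rw,
                               access_type);
    if (ret == TLBRET_MATCH) {
        /* install the translation via tlb_set_page() and succeed */
        return 0;
    }
    /* only failure cases remain; no "ret < 0" test needed */
    raise_mmu_exception(env, address, rw, ret);
    return 1;
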
Richard Henderson f175e89ca2
target/mips: Pass a valid error to raise_mmu_exception for user-only
At present we give ret = 0, or TLBRET_MATCH. This gets matched
by the default case, which falls through to TLBRET_BADADDR.
However, it makes more sense to use a proper value. All of the
tlb-related exceptions are handled identically in cpu_loop.c,
so TLBRET_BADADDR is as good as any other. Retain it.

Backports commit 995ffde9622c01f5b307cab47f9bd7962ac09db2 from qemu
2019-05-16 17:14:02 -04:00
Richard Henderson 52998fe46d
target/m68k: Convert to CPUClass::tlb_fill
Backports commit fe5f7b1b3a2317f598687218c348b54e02a75e1f from qemu
2019-05-16 17:12:41 -04:00
Richard Henderson fe9ac6e1c4
target/i386: Convert to CPUClass::tlb_fill
We do not support probing, but we do not need it yet either.

Backports commit 5d0044212c375c0696baef7bba13699277dac5b5 from qemu
2019-05-16 17:08:14 -04:00
Richard Henderson 31ecdb5341
target/arm: Convert to CPUClass::tlb_fill
Backports commit 7350d553b5066abdc662045d7db5cdb73d0f9d53 from qemu
2019-05-16 16:55:12 -04:00
Richard Henderson 1f30062c41
tcg: Add CPUClass::tlb_fill
This hook will replace the (user-only mode specific) handle_mmu_fault
hook, and the (system mode specific) tlb_fill function.

The handle_mmu_fault hook was written as if there was a valid
way to recover from an mmu fault, and had 3 possible return states.
In reality, the only valid action is to raise an exception,
return to the main loop, and deliver the SIGSEGV to the guest.

Note that all of the current implementations of handle_mmu_fault
for guests which support linux-user do in fact only ever return 1,
which is the signal to return to the main loop.

Using the hook for system mode requires that all targets be converted,
so for now the hook is (optionally) used only from user-only mode.

Backports commit da6bbf8513e621a8fc2fd315d77318f36547474d from qemu
2019-05-16 16:46:19 -04:00
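
The hook's shape, per the backported commit:

    /* include/qom/cpu.h */
    bool (*tlb_fill)(CPUState *cpu, vaddr address, int size,
                     MMUAccessType access_type, int mmu_idx,
                     bool probe, uintptr_t retaddr);

Returns true on success. On failure it raises an exception and does
not return, unless @probe is true, in which case it returns false.
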
Richard Henderson de260cfbd6
tcg/aarch64: Do not advertise minmax for MO_64
The min/max instructions are not available for 64-bit elements.

Backports commit a7b6d286cfb5205b9f5330aefc5727269b3d810f from qemu
2019-05-16 16:44:34 -04:00
Richard Henderson 552e48f14e
target/arm: Use tcg_gen_abs_i64 and tcg_gen_gvec_abs
Backports commit 4e027a710673f5d4dc6cff88728bcfd32e4c47b0 from qemu
2019-05-16 16:43:02 -04:00
Richard Henderson 7c9b3a9021
tcg/aarch64: Support vector absolute value
Backports commit a456394ae540f852cd0d10fd693fe9f33598dc01 from qemu
2019-05-16 16:39:14 -04:00
Richard Henderson fd35490991
tcg/i386: Support vector absolute value
Backports commit 18f9b65f1a4225dd314cb9b0a8dea968c5bc2ef3 from qemu
2019-05-16 16:37:33 -04:00
Richard Henderson 6d5e7856ff
tcg: Add support for vector absolute value
Backports commit bcefc90208f8a1d6f619d61c2647281d92277015 from qemu
2019-05-16 16:33:43 -04:00
Richard Henderson 6d1730048d
tcg: Add support for integer absolute value
Remove a function of the same name from target/arm/.
Use a branchless implementation of abs gleaned from gcc.

Backports commit ff1f11f7f8710a768f9313f24bd7f509d3db27e5 from qemu
2019-05-16 16:25:15 -04:00
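
The branchless identity, as plain C (assuming arithmetic right shift
of signed values, which gcc and clang provide in practice):

    static inline int64_t abs64(int64_t a)
    {
        int64_t t = a >> 63;   /* all ones if negative, else zero */
        return (a ^ t) - t;    /* two's-complement negate when t != 0 */
    }
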
Richard Henderson 18b3df6e4e
tcg/i386: Support vector scalar shift opcodes
Backports commit 0a8d7a3bf5a149a82450eef555fd61728703dd84 from qemu
2019-05-16 16:19:44 -04:00
Richard Henderson 79b9dc559e
tcg: Add gvec expanders for vector shift by scalar
Allow expansion either via shift by scalar or by replicating
the scalar for shift by vector.

Backports commit b4578cd91cda4cef1c413304353ca6dc5b957b60 from qemu
2019-05-16 16:17:58 -04:00
Richard Henderson 0217ee7b24
tcg/aarch64: Support vector variable shift opcodes
Backports commit 79525dfd08262d8de10d271f17e5a4096ef96d16 from qemu
2019-05-16 15:58:54 -04:00
Richard Henderson f793ec847d
tcg/i386: Support vector variable shift opcodes
Backports commit a2ce146a06807fe1d1a81e878b8f249ff1e14038 from qemu
2019-05-16 15:53:33 -04:00
Richard Henderson 8c17687934
tcg: Add gvec expanders for variable shift
The gvec expanders perform a modulo on the shift count. If the target
requires alternate behaviour, then it cannot use the generic gvec
expanders anyway, and will have to have its own custom code.

Backports commit 5ee5c14cacda27e904cd6b0d9e7ffe1acff42838 from qemu
2019-05-16 15:51:09 -04:00
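
A scalar model of the semantics (function name hypothetical):

    /* each element is shifted by its own count, modulo element width */
    static void gvec_shl32(uint32_t *d, const uint32_t *a,
                           const uint32_t *b, size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            d[i] = a[i] << (b[i] & 31);
        }
    }
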
Richard Henderson 66e6bea084
tcg: Add INDEX_op_dupm_vec
Allow the backend to expand dup from memory directly, instead of
forcing the value into a temp first. This is especially important
if integer/vector register moves do not exist.

Note that officially tcg_out_dupm_vec is allowed to fail.
If it did, we could fix this up relatively easily:

VECE == 32/64:
    Load the value into a vector register, then dup.
    Both of these must work.

VECE == 8/16:
    If the value happens to be at an offset such that an aligned
    load would place the desired value in the least significant
    end of the register, go ahead and load w/garbage in high bits.

    Load the value w/INDEX_op_ld{8,16}_i32.
    Attempt a move directly to vector reg, which may fail.
    Store the value into the backing store for OTS.
    Load the value into the vector reg w/TCG_TYPE_I32, which must work.
    Duplicate from the vector reg into itself, which must work.

All of which is well and good, except that all supported
hosts can support dupm for all vece, so all of the failure
paths would be dead code and untestable.

Backports commit 37ee55a081b7863ffab2151068dd1b2f11376914 from qemu
2019-05-16 15:38:02 -04:00
Richard Henderson fd7a67e4a7
tcg/aarch64: Implement tcg_out_dupm_vec
The LD1R instruction does all the work. Note that the only
useful addressing mode is a base register with no offset.

Backports commit f23e5e15edfd49d5dd72cab2ed2d85ac354b2eeb from qemu
2019-05-16 15:29:04 -04:00
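
What LD1R provides in one instruction, modeled in C (names
hypothetical):

    #include <string.h>

    /* load one element from memory, replicate it across the vector */
    static void dup_mem(void *vec, const void *mem, int esize, int vbytes)
    {
        for (int i = 0; i < vbytes; i += esize) {
            memcpy((char *)vec + i, mem, esize);
        }
    }
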
Richard Henderson a6fd4e2345
tcg/i386: Implement tcg_out_dupm_vec
At the same time, improve tcg_out_dupi_vec wrt broadcast
from the constant pool.

Backports commit 1e262b49b5331441f697461e4305fe06719758a7 from qemu
2019-05-16 15:27:15 -04:00
Richard Henderson d4e7c6a8c5
tcg: Add tcg_out_dupm_vec to the backend interface
Currently stubbed out in all backends that support vectors.

Backports commit d6ecb4a978b718dbe108a9fa9ecccc8b7f7cb579 from qemu
2019-05-16 15:24:48 -04:00
Richard Henderson cf238d3544
tcg: Manually expand INDEX_op_dup_vec
This case is similar to INDEX_op_mov_* in that we need to do
different things depending on the current location of the source.

Backports commit bab1671f0fa928fd678a22f934739f06fd5fd035 from qemu
2019-05-16 15:22:29 -04:00
Richard Henderson 3d20e1678c
tcg: Promote tcg_out_{dup,dupi}_vec to backend interface
The i386 backend already has these functions, and the aarch64 backend
could easily split out one. Nothing is done with these functions yet,
but this will aid register allocation of INDEX_op_dup_vec in a later patch.

Adjust the aarch64 tcg_out_dupi_vec signature to match the new interface.

Backports commit e7632cfa8b76cdbbc1c76e8737338ef5844e7d60 from qemu
2019-05-16 15:18:48 -04:00
Richard Henderson d58d9ad16e
tcg: Support cross-class moves without instruction support
PowerPC Altivec does not support direct moves between vector registers
and general registers. So when tcg_out_mov fails, we can use the
backing memory for the temporary to perform the move.

Backports commit 240c08d0998f402c325fce489de0d14831048128 from qemu
2019-05-16 15:16:23 -04:00
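
A sketch of the fallback, following the upstream shape (field names
from TCGTemp; details simplified):

    if (!tcg_out_mov(s, ts->type, reg, ts->reg)) {
        /* no direct vector<->integer move: store the temp to its
           backing slot, then reload into the destination class */
        tcg_out_st(s, ts->type, ts->reg,
                   ts->mem_base->reg, ts->mem_offset);
        tcg_out_ld(s, ts->type, reg,
                   ts->mem_base->reg, ts->mem_offset);
    }
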
Richard Henderson f86bd1c5d6
tcg: Return bool success from tcg_out_mov
This patch merely changes the interface, aborting on all failures,
of which there are currently none.

Backports commit 78113e83e0007e869c9f0cb4c0497a77538988e3 from qemu
2019-05-16 15:14:42 -04:00
Richard Henderson f7d9ee8451
tcg/arm: Use tcg_out_mov_reg in tcg_out_mov
We have a function that takes an additional condition parameter
over the standard backend interface. It already takes care of
eliding no-op moves.

Backports commit c16f52b2c5d91c36e121795bd3b386cea0b7573c from qemu
2019-05-16 15:10:52 -04:00
Richard Henderson fef5700c9c
tcg: Assert fixed_reg is read-only
The only fixed_reg is cpu_env, and it should not be modified
during any TB. Therefore code that tries to special-case moves
into a fixed_reg is dead. Remove it.

Backports commit d63e3b6e694ad6c887be135dddb9cd4893f1a844 from qemu
2019-05-16 15:09:37 -04:00
Richard Henderson c54b2776f6
tcg: Specify optional vector requirements with a list
Replace the single opcode in .opc with a null-terminated
array in .opt_opc. We still require that all opcodes be
used with the same .vece.

Validate the contents of this list with CONFIG_DEBUG_TCG.
All tcg_gen_*_vec functions will check any list active
during .fniv expansion. Swap the active list in and out
as we expand other opcodes, or take control away from the
front-end function.

Convert all existing vector aware front ends.

Backports commit 53229a7703eeb2bbe101a19a33ef22aaf960c65b from qemu
2019-05-16 15:05:02 -04:00
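
The converted front ends follow this pattern (expander and helper
names hypothetical):

    /* opcodes that .fniv may emit, as a null-terminated list;
       validated under CONFIG_DEBUG_TCG while the expansion runs */
    static const TCGOpcode vecop_list[] = { INDEX_op_mul_vec, 0 };

    static const GVecGen3 mul_op = {
        .fniv = gen_mul_vec,
        .fno = gen_helper_gvec_mul,
        .opt_opc = vecop_list,
        .vece = MO_32,
    };
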
Richard Henderson 37762fd92b
tcg: Allow add_vec, sub_vec, neg_vec, not_vec to be expanded
PowerPC Altivec does not support add and subtract of 64-bit elements.
Prepare for that configuration by not assuming the operation is
universally supported.

Backports commit ce27c5d1a38e93da38653af71fb468c5eded4c7b from qemu
2019-05-16 14:33:18 -04:00
Richard Henderson 9a9b681b38
tcg: Do not recreate INDEX_op_neg_vec unless supported
Use tcg_can_emit_vec_op instead of just TCG_TARGET_HAS_neg_vec,
so that we check the type and vece for the actual operation.

Backports commit ac383dde33405106469d04a78de1d76f1a730cb1 from qemu
2019-05-16 14:28:41 -04:00
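
The check, sketched:

    /* consult the backend about this (type, vece) pair, rather than
       the bare TCG_TARGET_HAS_neg_vec capability flag */
    if (tcg_can_emit_vec_op(INDEX_op_neg_vec, type, vece) > 0) {
        tcg_gen_neg_vec(vece, ret, a);
    }
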
David Hildenbrand f3b4a64d27
tcg: Implement tcg_gen_gvec_3i()
Let's add tcg_gen_gvec_3i(), similar to tcg_gen_gvec_2i(), however
without introducing "gen_helper_gvec_3i *fnoi", as it isn't needed
for now.

Backports commit e1227bb6e59173117f094a6a13b998587b45c928 from qemu
2019-05-16 14:26:50 -04:00
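
The new entry point mirrors tcg_gen_gvec_2i(), taking an immediate
alongside the three vector operands:

    void tcg_gen_gvec_3i(uint32_t dofs, uint32_t aofs, uint32_t bofs,
                         uint32_t oprsz, uint32_t maxsz, int64_t c,
                         const GVecGen3i *g);
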
Markus Armbruster 1b2c8c44d5
Clean up ill-advised or unusual header guards
Leading underscores are ill-advised because such identifiers are
reserved. Trailing underscores are merely ugly. Strip both.

Our header guards commonly end in _H. Normalize the exceptions.

Done with scripts/clean-header-guards.pl.

Backports commit a8b991b52dcde75ab5065046653626951aac666d from qemu
2019-05-14 08:02:53 -04:00
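
For example (guard name hypothetical):

    /* before: reserved identifier (leading underscore + capital) */
    #ifndef _FOO_H_
    #define _FOO_H_

    /* after: the common convention, ending in _H */
    #ifndef FOO_H
    #define FOO_H
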
Richard Henderson 9a02741c13
cputlb: Do unaligned store recursion to outermost function
This is less tricky than for loads, because we always fall
back to single byte stores to implement unaligned stores.

Backports commit 4601f8d10d7628bcaf2a8179af36e04b42879e91 from qemu
2019-05-14 07:45:15 -04:00
Richard Henderson bcab6f1719
cputlb: Do unaligned load recursion to outermost function
If we attempt to recurse from load_helper back to load_helper,
even via intermediary, we do not get all of the constants
expanded away as desired.

But if we recurse back to the original helper (or a shim that
has a consistent function signature), the operands are folded
away as desired.

Backports commit 2dd926067867c2dd19e66d31a7990e8eea7258f6 from qemu
2019-05-14 07:43:31 -04:00
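
Roughly, per the upstream commit (the trailing load_helper arguments
here are approximate):

    static uint64_t full_le_lduw_mmu(CPUArchState *env, target_ulong addr,
                                     TCGMemOpIdx oi, uintptr_t retaddr)
    {
        /* unaligned accesses inside load_helper recurse via this same
           function pointer, so the size/endian constants still fold */
        return load_helper(env, addr, oi, retaddr, 2, false, false,
                           full_le_lduw_mmu);
    }

    tcg_target_ulong helper_le_lduw_mmu(CPUArchState *env, target_ulong addr,
                                        TCGMemOpIdx oi, uintptr_t retaddr)
    {
        return full_le_lduw_mmu(env, addr, oi, retaddr);
    }
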
Richard Henderson f12f36aebd
cputlb: Drop attribute flatten
Going to approach this problem via __attribute__((always_inline))
instead, but full conversion will take several steps.

Backports commit fc1bc777910dc14a3db4e2ad66f3e536effc297d from qemu
2019-05-14 07:33:39 -04:00
Richard Henderson 7991cd601f
cputlb: Move TLB_RECHECK handling into load/store_helper
Having this in io_readx/io_writex meant that we forgot to
re-compute index after tlb_fill. It also means we can use
the normal aligned memory load path. It also fixes a bug
in that we had cached a use of index across a tlb_fill.

Backports commit f1be36969de2fb9b6b64397db1098f115210fcd9 from qemu
2019-05-14 07:28:15 -04:00
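
The index-caching bug, sketched (helper names as in cputlb.c):

    if (!tlb_hit(tlb_addr, addr)) {
        tlb_fill(cpu, addr, size, access_type, mmu_idx, retaddr);
        /* tlb_fill may resize or refill the table: the cached
           index/entry must be recomputed, not reused */
        index = tlb_index(env, mmu_idx, addr);
        entry = tlb_entry(env, mmu_idx, addr);
    }
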
Alex Bennée ccee796272
accel/tcg: demacro cputlb
Instead of expanding a series of macros to generate the load/store
helpers we move stuff into common functions and rely on the compiler
to eliminate the dead code for each variant.

Backports commit eed5664238ea5317689cf32426d9318686b2b75c from qemu
2019-05-14 07:28:11 -04:00
Peter Maydell 26cb1b8767
target/arm: Stop using variable length array in dc_zva
Currently the dc_zva helper function uses a variable length
array. In fact we know (as the comment above remarks) that
the length of this array is bounded because the architecture
limits the block size and QEMU limits the target page size.
Use a fixed array size and assert that we don't run off it.

Backports commit 63159601fb3e396b28da14cbb71e50ed3f5a0331 from qemu
2019-05-09 17:48:25 -04:00
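
The general shape of the fix (array and bound here are illustrative,
not the actual ones in the helper):

    /* instead of a VLA sized by blocklen at runtime: */
    uint8_t buf[MAX_DC_ZVA_BLOCKLEN];   /* hypothetical fixed bound */
    assert(blocklen <= sizeof(buf));
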
Peter Maydell 7861820e94
target/arm: Implement XPSR GE bits
In the M-profile architecture, if the CPU implements the DSP extension
then the XPSR has GE bits, in the same way as the A-profile CPSR. When
we added DSP extension support we forgot to add support for reading
and writing the GE bits, which are stored in env->GE. We did put in
the code to add XPSR_GE to the mask of bits to update in the v7m_msr
helper, but forgot it in v7m_mrs. We also must not allow the XPSR we
pull off the stack on exception return to set the nonexistent GE bits.
Correct these errors:
* read and write env->GE in xpsr_read() and xpsr_write()
* only set GE bits on exception return if DSP present
* read GE bits for MRS if DSP present

Backports commit f1e2598c46d480c9e21213a244bc514200762828 from qemu
2019-05-09 17:46:31 -04:00
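
A sketch of the corrected accessors (GE occupies XPSR/CPSR bits
[19:16]; the exact feature test may differ):

    /* xpsr_read(): include the GE field */
    ret |= env->GE << 16;

    /* xpsr_write() and exception return: only when DSP is present */
    if (arm_feature(env, ARM_FEATURE_THUMB_DSP)) {
        env->GE = (val >> 16) & 0xf;
    }
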
Cao Jiaxi bcb1270f23
osdep: Fix mingw compilation regarding stdio formats
I encountered the following compilation error on mingw:

/mnt/d/qemu/include/qemu/osdep.h:97:9: error: '__USE_MINGW_ANSI_STDIO' macro redefined [-Werror,-Wmacro-redefined]
#define __USE_MINGW_ANSI_STDIO 1
^
/mnt/d/llvm-mingw/aarch64-w64-mingw32/include/_mingw.h:433:9: note: previous definition is here
#define __USE_MINGW_ANSI_STDIO 0 /* was not defined so it should be 0 */

It turns out that __USE_MINGW_ANSI_STDIO must be set before any
system headers are included, not just before stdio.h.

Backports commit 946376c21be1cd9dcc3c7936b204b113781603f7 from qemu
2019-05-09 17:44:14 -04:00
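
The fix, sketched: define the macro at the very top of osdep.h,
before the first system header is pulled in:

    #ifdef __MINGW32__
    /* if any system header runs first, _mingw.h will already have
       defined this to 0 and the redefinition warns */
    #define __USE_MINGW_ANSI_STDIO 1
    #endif

    #include <stdio.h>   /* system headers only after this point */
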
Cao Jiaxi 3922118434
util/cacheinfo: Use uint64_t on LLP64 model to satisfy Windows ARM64
Windows ARM64 uses the LLP64 model, which breaks current assumptions.

Backports commit 8041336ef74e19ca607c1601016333c986de8f9c from qemu
2019-05-09 17:43:27 -04:00
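
The underlying type-model difference, for reference:

    /* LLP64 (64-bit Windows):  sizeof(long) == 4, sizeof(void *) == 8
       LP64 (Linux, macOS):     sizeof(long) == 8, sizeof(void *) == 8 */
    uint64_t addr = (uintptr_t)ptr;   /* never long for pointer math */
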