Commit graph

788 commits

Author SHA1 Message Date
Alex Bennée 064f50cdf6
fpu: make softfloat-macros self-contained
The macros use the "flags" type, so to be consistent, anyone who needs
just the macros should also get the header that defines it. There is an
outstanding TODO to audit the use of "flags" and replace it with bool, at
which point this include could be dropped.

Backports commit 5937fb63a92d54cc4e5270256e4387c4d3a70091 from qemu
2019-11-18 21:10:39 -05:00
Alex Bennée 5ab2ac8dac
fpu: move inline helpers into a separate header
There are a bunch of users of the inline helpers who do not need
access to the entire softfloat API. Move those inline helpers into a
new header file which can be included without bringing in the rest of
the world.

Backports commit e34c47ea3fb5f324b58db117b3c010a494c8d6ca from qemu
2019-11-18 21:09:02 -05:00
Alex Bennée be878fb0e9
fpu: remove the LIT64 macro
Now the rest of the code has been cleaned up we can remove this.

Backports commit 472038ccf5d69ebcad44d835488a36ee640c219f from qemu
2019-11-18 21:06:55 -05:00
Alex Bennée dbddafe2df
fpu: replace LIT64 with UINT64_C macros
In our quest to eliminate the home-rolled LIT64 macro, we fix up its
usage inside the softfloat code. While we are at it, we remove some
extraneous spaces to fit the house style more closely.

Backports commit e932112420f063776f2b9d9e5512830cd6890a7a from qemu
2019-11-18 20:57:12 -05:00
Markus Armbruster bb1f47545c
typedefs: Separate incomplete types and function types
While there, rewrite the obsolete file comment.

Backports commit 2a28720d773df2193c9fb633c02092cca107a9e5 from qemu
2019-11-18 16:42:51 -05:00
Dr. David Alan Gilbert 0b8add4e6f
memory: Provide an equality function for MemoryRegionSections
Provide a comparison function that checks all the fields are the same.

Backports commit 42b6571357a083f721a27daa6dfdc69e4bd516bd from qemu
2019-11-18 16:42:50 -05:00
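A minimal sketch of what such a field-by-field comparison looks like, using a simplified stand-in struct rather than the real MemoryRegionSection (whose Int128 size would be compared with int128_eq):

#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-in for MemoryRegionSection, for illustration only;
 * the real struct holds MemoryRegion *, FlatView *, Int128 and hwaddr
 * fields. */
typedef struct {
    void *mr;
    void *fv;
    uint64_t size;                          /* Int128 in the real struct */
    uint64_t offset_within_region;
    uint64_t offset_within_address_space;
    bool readonly;
    bool nonvolatile;
} Section;

static bool section_eq(const Section *a, const Section *b)
{
    return a->mr == b->mr &&
           a->fv == b->fv &&
           a->size == b->size &&
           a->offset_within_region == b->offset_within_region &&
           a->offset_within_address_space == b->offset_within_address_space &&
           a->readonly == b->readonly &&
           a->nonvolatile == b->nonvolatile;
}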
Dr. David Alan Gilbert 3807ec09de
memory: Align MemoryRegionSections fields
MemoryRegionSection includes an Int128 'size' field;
on some platforms the compiler aligns it to a 128-bit
boundary, leaving 8 bytes of dead space.
This dead space can be filled with junk.

Move the size field to the top avoiding unnecessary alignment.

Backports commit c0aca9352d51c102c55fe29ce5c1bf8e74a5183e from qemu
2019-11-18 16:42:47 -05:00
Richard Henderson ec81aef8c0
include/qemu/atomic.h: Add signal_barrier
We have some potential race conditions vs our user-exec signal
handler that will be solved with this barrier.

Backports commit 359896dfa4e9707e1acea99129d324250fccab04 from qemu
2019-08-08 19:26:41 -04:00
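A minimal sketch of such a barrier, assuming the GCC/Clang __atomic builtins are available (the backported definition and its fallbacks may differ):

/* Order accesses with respect to a signal handler running on the same
 * thread: a compiler-level fence suffices, no CPU barrier is emitted. */
#define signal_barrier()    __atomic_signal_fence(__ATOMIC_SEQ_CST)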
Lioncash 802c626145
Revert "cputlb: Filter flushes on already clean tlbs"
This reverts commit 5ab9723787.
2019-06-30 19:21:20 -04:00
Richard Henderson d7ea41c3a3
cpu: Move icount_decr to CPUNegativeOffsetState
Amusingly, we had already ignored the comment to keep this value
at the end of CPUState. This restores the minimum negative offset
from TCG_AREG0 for code generation.

For the couple of uses within qom/cpu.c, without NEED_CPU_H, add
a pointer from the CPUState object to the IcountDecr object within
CPUNegativeOffsetState.

Backports commit 5e1401969b25f676fee6b1c564441759cf967a43 from qemu
2019-06-13 15:34:28 -04:00
Richard Henderson 8f53f09a05
cpu: Introduce CPUNegativeOffsetState
Nothing in there so far, but all of the plumbing done
within the target ArchCPU state.

Backports commit 5b146dc716cfd247f99556c04e6e46fbd67565a0 from qemu
2019-06-13 15:08:25 -04:00
Richard Henderson a672b89e3b
cpu: Introduce cpu_set_cpustate_pointers
Consolidate some boilerplate from foo_cpu_initfn.

Backports commit 7506ed902eb97fe4e2a1dd16766c621d32ecc40d from qemu
2019-06-12 12:27:16 -04:00
Richard Henderson ac176ccb38
cpu: Move ENV_OFFSET to exec/gen-icount.h
Now that we have ArchCPU, we can define this generically,
in the one place that needs it.

Backports commit 677c4d69ac21961e76a386f9bfc892a44923acc0 from qemu
2019-06-12 12:20:21 -04:00
Richard Henderson 8b108f3607
cpu: Introduce env_archcpu
This will replace foo_env_get_cpu with a generic definition.
No changes to the target specific code so far.

Backports commit 083dc73d7a3cf2a75b5625fd8f0669b57a855d16 from qemu
2019-06-12 11:17:47 -04:00
Richard Henderson fbf91a6535
cpu: Replace ENV_GET_CPU with env_cpu
Now that we have both ArchCPU and CPUArchState, we can define
this generically instead of via macro in each target's cpu.h.

Backports commit 29a0af618ddd21f55df5753c3e16b0625f534b3c from qemu
2019-06-12 11:16:16 -04:00
Markus Armbruster 5e5197b136
Supply missing header guards
Backports applicable parts of commit
f91005e195e7e1485e60cb121731589960f1a3c9 from qemu
2019-06-12 10:59:10 -04:00
Lioncash 5ab9723787
cputlb: Filter flushes on already clean tlbs
Especially for guests with large numbers of tlbs, like ARM or PPC,
we may well not use all of them in between flush operations.
Remember which tlbs have been used since the last flush, and
avoid any useless flushing.

Backports much of 3d1523ced6060cdfe9e768a814d064067ccabfe5 from qemu
along with a bunch of updating changes.
2019-06-10 20:42:15 -04:00
Richard Henderson df2a890bd7
tcg: Split out target/arch/cpu-param.h
For all targets, into this new file move TARGET_LONG_BITS,
TARGET_PAGE_BITS, TARGET_PHYS_ADDR_SPACE_BITS,
TARGET_VIRT_ADDR_SPACE_BITS, and NB_MMU_MODES.

Include this new file from exec/cpu-defs.h.

This now removes the somewhat odd requirement that target/arch/cpu.h
defines TARGET_LONG_BITS before including exec/cpu-defs.h, so push the
bulk of the includes within target/arch/cpu.h to the top.

Backports commit 74433bf083b0766aba81534f92de13194f23ff3e from qemu
2019-06-10 19:35:46 -04:00
Richard Henderson 2a4a7b9391
tcg: Use tlb_fill probe from tlb_vaddr_to_host
Most of the existing users would continue around a loop which
would fault the tlb entry in via a normal load/store.

But for AArch64 SVE we have an existing emulation bug wherein we
would mark the first element of a no-fault vector load as faulted
(within the FFR, not via exception) just because we did not have
its address in the TLB. Now we can properly only mark it as faulted
if there really is no valid, readable translation, while still not
raising an exception. (Note that beyond the first element of the
vector, the hardware may report a fault for any reason whatsoever;
with at least one element loaded, forward progress is guaranteed.)

Backports commit 4811e9095c0491bc6f5450e5012c9c4796b9e59d from qemu
2019-05-16 18:27:03 -04:00
Laurent Vivier 8cdfed1032
linux-user: fix 32bit g2h()/h2g()
sparc32plus has 64bit long type but only 32bit virtual address space.

For instance, "apt-get upgrade" failed because of a mmap()/msync()
sequence.

mmap() returned 0xff252000 but msync() used g2h(0xffffffffff252000)
to find the host address. The "(target_ulong)" cast in g2h() doesn't fix
the address because target_ulong is 64 bits wide here.

This patch introduces an "abi_ptr" that is set to uint32_t
if the virtual address space is addressed using 32bit in the linux-user
case. It stays set to target_ulong with softmmu case.

Backports commit 3e23de15237c81fe7af7c3ffa299a6ae5fec7d43 from qemu
2019-05-16 18:20:55 -04:00
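A toy, self-contained illustration of the truncation problem and of the abi_ptr idea; guest_base, the helper names and the mapping value below are made up for the example:

#include <stdint.h>
#include <stdio.h>

typedef uint32_t abi_ptr;                 /* 32-bit guest address space */
static const uint64_t guest_base = 0x400000000000ull;   /* arbitrary */

/* Old behaviour: casting to a 64-bit target_ulong does not truncate. */
#define g2h_old(x)  ((void *)(uintptr_t)(guest_base + (uint64_t)(x)))
/* New behaviour: abi_ptr is 32 bits wide, so the address is truncated. */
#define g2h_new(x)  ((void *)(uintptr_t)(guest_base + (abi_ptr)(x)))

int main(void)
{
    /* sparc32plus sign-extends the 32-bit address into its 64-bit long */
    uint64_t msync_addr = 0xffffffffff252000ull;

    printf("old: %p\n", g2h_old(msync_addr));  /* wrapped around: wrong   */
    printf("new: %p\n", g2h_new(msync_addr));  /* guest_base + 0xff252000 */
    return 0;
}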
Richard Henderson e736ef3238
tcg: Remove CPUClass::handle_mmu_fault
This hook is now completely replaced by tlb_fill.

Backports commit 69963f5709a0645934c169784820d0bee22208ba from qemu
2019-05-16 18:12:17 -04:00
Richard Henderson dab0061a0d
tcg: Use CPUClass::tlb_fill in cputlb.c
We can now use the CPUClass hook instead of a named function.

Create a static tlb_fill function to avoid other changes within
cputlb.c. This also isolates the asserts within. Remove the
named tlb_fill function from all of the targets.

Backports commit c319dc13579a92937bffe02ad2c9f1a550e73973 from qemu
2019-05-16 17:35:37 -04:00
Richard Henderson 1f30062c41
tcg: Add CPUClass::tlb_fill
This hook will replace the (user-only mode specific) handle_mmu_fault
hook, and the (system mode specific) tlb_fill function.

The handle_mmu_fault hook was written as if there was a valid
way to recover from an mmu fault, and had 3 possible return states.
In reality, the only valid action is to raise an exception,
return to the main loop, and deliver the SIGSEGV to the guest.

Note that all of the current implementations of handle_mmu_fault
for guests which support linux-user do in fact only ever return 1,
which is the signal to return to the main loop.

Using the hook for system mode requires that all targets be converted,
so for now the hook is (optionally) used only from user-only mode.

Backports commit da6bbf8513e621a8fc2fd315d77318f36547474d from qemu
2019-05-16 16:46:19 -04:00
Cao Jiaxi bcb1270f23
osdep: Fix mingw compilation regarding stdio formats
I encountered the following compilation error on mingw:

/mnt/d/qemu/include/qemu/osdep.h:97:9: error: '__USE_MINGW_ANSI_STDIO' macro redefined [-Werror,-Wmacro-redefined]
#define __USE_MINGW_ANSI_STDIO 1
^
/mnt/d/llvm-mingw/aarch64-w64-mingw32/include/_mingw.h:433:9: note: previous definition is here
#define __USE_MINGW_ANSI_STDIO 0 /* was not defined so it should be 0 */

It turns out that __USE_MINGW_ANSI_STDIO must be set before any
system headers are included, not just before stdio.h.

Backports commit 946376c21be1cd9dcc3c7936b204b113781603f7 from qemu
2019-05-09 17:44:14 -04:00
Richard Henderson 8fdd009a9d
tcg: Remove CF_IGNORE_ICOUNT
Now that we have curr_cflags, we can include CF_USE_ICOUNT
early and then remove it as necessary.

Backports commit 416986d3f97329655e30da7271a2d11c6d707b06 from qemu
2019-05-06 00:57:09 -04:00
Richard Henderson 12f9def3a2
tcg: Add CF_LAST_IO + CF_USE_ICOUNT to CF_HASH_MASK
These flags are used by target/*/translate.c,
and affect code generation.

Backports commit 0cf8a44c2f56ba884c2f6db47d27fbb24975daa3 from qemu
2019-05-06 00:53:35 -04:00
Richard Henderson 4a858100f4
tcg: Include CF_COUNT_MASK in CF_HASH_MASK
Backports commit cdfef1715c779eb528d633e8b76cbc8a10e71ac8 from qemu
2019-05-04 22:31:32 -04:00
Richard Henderson 30c0950567
tcg: Add CPUState cflags_next_tb
We were generating code during tb_invalidate_phys_page_range,
check_watchpoint, cpu_io_recompile, and (seemingly) discarding
the TB, assuming that it would magically be picked up during
the next iteration through the cpu_exec loop.

Instead, record the desired cflags in CPUState so that we request
the proper TB so that there is no more magic.

Backports commit 9b990ee5a3cc6aa38f81266fb0c6ef37a36c45b9 from qemu
2019-05-04 22:30:22 -04:00
Richard Henderson ee1ddf4a92
tcg: define CF_PARALLEL and use it for TB hashing along with CF_COUNT_MASK
This will enable us to decouple code translation from the value
of parallel_cpus at any given time. It will also help us minimize
TB flushes when generating code via EXCP_ATOMIC.

Note that the declaration of parallel_cpus is brought to exec-all.h
to be able to define there the "curr_cflags" inline.

Backports commit 4e2ca83e71b51577b06b1468e836556912bd5b6e from qemu
2019-05-04 22:22:06 -04:00
Eduardo Habkost 42c35d968a
accel: Remove unused AccelClass::available field
The field is not used anymore, we can remove it.

Backports commit 8d006d4bc2ab4f72877d8bd47cba9aa8d24b54d0 from qemu
2019-05-03 11:31:27 -04:00
Richard Henderson bca82cde84
tcg: Hoist max_insns computation to tb_gen_code
In order to handle TB's that translate to too much code, we
need to place the control of the length of the translation
in the hands of the code gen master loop.

Backports commit 8b86d6d25807e13a63ab6ea879f976b9f18cc45a from qemu
2019-04-30 09:49:57 -04:00
Lioncash f6911ea73d
target/arm: Handle AArch32 CRC instructions 2019-04-27 10:50:25 -04:00
Lioncash c3df12e534
target/arm/translate: Synchronize with Qemu 2019-04-27 10:13:01 -04:00
Lioncash 5daabe55a4
cputlb: Synchronize with qemu
Synchronizes the code with Qemu to reduce a few differences.
2019-04-26 15:48:45 -04:00
Lioncash ef9e607e1c
qemu: Update bitmap.c/.h
Keeps it up to date with Qemu.
2019-04-26 13:05:55 -04:00
Lioncash 70836028eb
exec/helper-*: Synchronize with qemu 2019-04-22 08:22:49 -04:00
Lioncash 0379335677
cpu_ldst: Remove unused macros 2019-04-22 08:17:20 -04:00
Peter Maydell ff9c67b8f0
cpu_ldst.h: Don't define helpers if MMU_MODE*_SUFFIX not defined
Not all targets define a full set of suffix strings for the
NB_MMU_MODES that they have. In this situation, don't define any
helper functions for that mode, rather than defining helper functions
with no suffix at all. The MMU mode is still functional; it is merely
not directly accessible via cpu_ld*_MODE from target helper functions.

Also add an "NB_MMU_MODES >= 2" check to the definition of the mode 1
helpers -- some targets only define one MMU mode.

Backports commit de5ee4a888667ca0a198f0743d70075d70564117 from qemu
2019-04-22 07:44:32 -04:00
Lioncash e75b32ca4b
cpu_ldst.h, cpu-all.h, bswap.h: Update documentation on ld/st accessors
Add documentation of what the cpu_*_* accessors look like.
Correct some minor errors in the existing documentation of the
direct _p accessor family. Remove the near-duplicate comment
on the _p accessors from cpu-all.h and replace it with a reference
to the comment in bswap.h.

Backports commit db5fd8d709fd57f4d4f11edfca9f421f657f4508 from qemu
2019-04-22 07:39:13 -04:00
Peter Maydell 84eafc0cf6
cpu_ldst_template.h: Drop unused cpu_ldfq/stfq/ldfl/stfl accessors
The cpu_ldfq/stfq/ldfl/stfl accessors for loading and storing
float32 and float64 are completely unused, so delete them.
(The union they use for converting from the float32/float64
type to uint32_t or uint64_t is the wrong way to do it anyway:
they should be using make_float* and float*_val.)

Backports commit 82f11917c99e3c7fa3d6aa98572ecc98c7324c2f from qemu
2019-04-22 07:21:03 -04:00
Peter Maydell 32650e7816
cpu_ldst.h: Drop unused _raw macros, saddr() and laddr()
The _raw macros and their helpers saddr() and laddr() are now
totally unused -- delete them.

Backports commit 800e2ecc896beb6b79e7333c762da163b6a9135a from qemu
2019-04-22 07:19:20 -04:00
Peter Maydell f1a1f3c642
cpu_ldst_template.h: Use ld*_p directly rather than via ld*_raw macros
The ld*_raw and st*_raw macros are now only used within the code
produced by cpu_ldst_template.h, and only in three places.
Expand these out to just call the ld_p and st_p functions directly.

Note that in all the callsites the address argument is a uintptr_t,
so we can drop that part of the double-cast used in the saddr() and
laddr() macros.

Backports commit 355392329e4a843580e53cb027ed85e0cbebb640 from qemu
2019-04-22 07:11:50 -04:00
Peter Maydell 1a880ef99b
cpu_ldst.h: Use inline functions for usermode cpu_ld/st accessors
Use inline functions rather than macros for cpu_ld/st accessors
for the *-user configurations, as we already do for softmmu.
This has two advantages:
* we can actually typecheck our arguments
* we don't need to leak the _raw macros everywhere

Since the _kernel functions were only used by target-i386/seg_helper.c,
put the definitions for them in that file too. (It already has the
similar template include code to define them for the softmmu case,
so it makes sense to have it deal with defining them for user-only.)

Backports commit 9220fe54c679d145232a28df6255e166ebf91bab from qemu
2019-04-22 07:08:39 -04:00
Peter Maydell 4fe3b4f95c
cpu_ldst.h: Remove unused very short ld*/st* defines
The very short ld*/st* defines are now not used anywhere; delete them.

Backports commit 177ea79f65c90b3bc84d59565b7519e47ea02f63 from qemu
2019-04-22 06:57:28 -04:00
Peter Maydell 36cd9f0df0
cpu_ldst.h: Drop unused ld/st*_kernel defines
The ld*_kernel and st*_kernel defines are not used anywhere;
delete them.

Backports commit 5a0826f7d2f9bea6e02157985b103d0a4c458aaa from qemu
2019-04-22 06:54:26 -04:00
Lioncash 830756a725
gen-icount: Use tcg_ctx where applicable in commented out code
If this is ever used in the future, it will already be usable as-is.
2019-04-22 06:17:10 -04:00
Lioncash d844d7cc9d
exec: Backport tb_cflags accessor 2019-04-22 06:12:59 -04:00
Lioncash 9f0e469142
gen-icount: Synchronize with qemu 2019-04-22 05:53:46 -04:00
Peter Maydell 3ff38c2402
include/qemu/bswap.h: Use __builtin_memcpy() in accessor functions
In the accessor functions ld*_he_p() and st*_he_p() we use memcpy()
to perform a load or store to a pointer which might not be aligned
for the size of the type. We rely on the compiler to optimize this
memcpy() into an efficient load or store instruction where possible.
This is required for good performance, but at the moment it is also
required for correct operation, because some users of these functions
require that the access is atomic if the pointer is aligned, which
will only be the case if the compiler has optimized out the memcpy().
(The particular example where we discovered this is the virtio
vring_avail_idx() which calls virtio_lduw_phys_cached() which
eventually ends up calling lduw_he_p().)

Unfortunately some compile environments, such as the fortify-source
setup used in Alpine Linux, define memcpy() to a wrapper function
in a way that inhibits this compiler optimization.

The correct long-term fix here is to add a set of functions for
doing atomic accesses into AddressSpaces (and to other relevant
families of accessor functions like the virtio_*_phys_cached()
ones), and make sure that callsites which want atomic behaviour
use the correct functions.

In the meantime, switch to using __builtin_memcpy() in the
bswap.h accessor functions. This will make us robust against things
like this fortify library in the short term. In the longer term
it will mean that we don't end up with these functions being really
badly-performing even if the semantics of the out-of-line memcpy()
are correct.
2019-04-10 14:57:52 -04:00
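A sketch of what the 16-bit host-endian pair of these accessors looks like with the change applied (the real header covers every width and both endiannesses):

#include <stdint.h>

static inline uint16_t lduw_he_p(const void *ptr)
{
    uint16_t r;
    /* __builtin_memcpy is not subject to a fortify wrapper, so the
     * compiler can still turn this into a single (possibly unaligned)
     * load, keeping the access atomic when ptr is aligned. */
    __builtin_memcpy(&r, ptr, sizeof(r));
    return r;
}

static inline void stw_he_p(void *ptr, uint16_t v)
{
    __builtin_memcpy(ptr, &v, sizeof(v));
}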
Lioncash d6b706a296
qemu/fpu: Synchronize with Qemu
Resolves a few formatting discrepancies
2019-03-09 18:27:31 -05:00
Lioncash b6f752970b
target/riscv: Initial introduction of the RISC-V target
This ports over the RISC-V architecture from Qemu. This is currently a
very barebones transition. No code hooking or any fancy stuff.
Currently, you can feed it instructions and query the CPU state itself.

This also allows choosing whether or not RISC-V 32-bit or RISC-V 64-bit
is desirable through Unicorn's interface as well.

Extremely basic examples of executing a single instruction have been
added to the samples directory to help demonstrate how to use the basic
functionality.
2019-03-08 21:46:10 -05:00
David Hildenbrand 7373819b1a
softfloat: Implement float128_to_uint32
Handle it just like float128_to_uint32_round_to_zero, which hopefully
is free of bugs :)

Documentation basically copied from float128_to_uint64

Backports commit e45de9922e43c1ce4f4739b62142314a13029d5c from qemu
2019-02-28 15:13:09 -05:00
David Hildenbrand 24a2bd702c
softfloat: add float128_is_{normal,denormal}
Needed on s390x, to test for the data class of a number, so it will
soon gain a user.

A number is considered normal if the exponent is neither 0 nor all 1's.
That can be checked by adding 1 to the exponent, and comparing against
>= 2 after dropping an eventual overflow into the sign bit.

While at it, convert the other floatXX_is_normal functions to use a
similar, less error prone calculation, as suggested by Richard H.

Backports commit 47393181604d507f4fe2a15a65b1eede0f974d6a from qemu
2019-02-28 15:11:50 -05:00
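For reference, a hypothetical helper showing the same check on a raw binary32 bit pattern (not the actual softfloat function):

#include <stdbool.h>
#include <stdint.h>

static inline bool binary32_is_normal(uint32_t bits)
{
    /* The 8-bit exponent must be neither 0 (zero/denormal) nor all 1s
     * (infinity/NaN). Adding 1 maps those two encodings to 1 and 0, so
     * after masking away the carry that would spill upward, "normal"
     * is simply ">= 2". */
    return (((bits >> 23) + 1) & 0xff) >= 2;
}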
David Hildenbrand 8583c8f1f6
include/exec/helper-head.h: support "const void *" in helper calls
Especially when dealing with out-of-line gvec helpers, it is often
helpful to specify some vector pointers as constant. E.g. when
we have two inputs and one output, marking the two inputs as const
pointers helps to avoid bugs.

Const pointers can be specified via "cptr", however behave in TCG just
like ordinary pointers. We can specify helpers like:

DEF_HELPER_FLAGS_4(gvec_vbperm, TCG_CALL_NO_RWG, void, ptr, cptr, cptr, i32)

void HELPER(gvec_vbperm)(void *v1, const void *v2, const void *v3,
                         uint32_t desc)

And make sure that here, only v1 will be written (as long as const is
not casted away, of course).

Backports commit 8c6edfdd90522caa4fc429144d393aba5b99f584 from qemu
2019-02-22 19:12:09 -05:00
Alex Bennée bf9c8499ca
target/arm: expose remaining CPUID registers as RAZ
There are a whole bunch more registers in the CPUID space which are
currently not used but are exposed as RAZ. To avoid too much
duplication we expand ARMCPRegUserSpaceInfo to understand glob
patterns so we only need one entry to tweak whole ranges of registers.

Backports commit d040242effe47850060d2ef1c461ff637d88a84d from qemu
2019-02-15 17:48:37 -05:00
Emilio G. Cota 1b44fd94ac
exec-all: document that tlb_fill can trigger a TLB resize
Backports commit ae56a2ff92ac73782279abf8857585c34b15f509 from qemu
2019-02-12 11:38:28 -05:00
Catherine Ho 17477ac1ca
tcg: add early clobber modifier in atomic16_cmpxchg on aarch64
Without this patch, gcc might mix up the input/output registers and
cause unpredictable errors.

Fixes: 1ec182c33379 ("target/arm: Convert to HAVE_CMPXCHG128")

Backports commit 7400d6938c6d455c4eba2b80c06d60c8fa5c5ba3 from qemu
2019-02-07 08:58:53 -05:00
Richard Henderson 9c2a5963d0
exec: Add target-specific tlb bits to MemTxAttrs
These bits can be used to cache target-specific data in cputlb
read from the page tables.

Backports commit d3765835ed02f91f0c6cbb452874209a6af4a730 from qemu
2019-02-05 17:00:56 -05:00
Murilo Opsfelder Araujo 0010078e4b
mmap-alloc: fix hugetlbfs misaligned length in ppc64
The commit 7197fb4058bcb68986bae2bb2c04d6370f3e7218 ("util/mmap-alloc:
fix hugetlb support on ppc64") fixed Huge TLB mappings on ppc64.

However, we still need to consider the underlying huge page size
during munmap() because it requires that both address and length be a
multiple of the underlying huge page size for Huge TLB mappings.
Quote from "Huge page (Huge TLB) mappings" paragraph under NOTES
section of the munmap(2) manual:

"For munmap(), addr and length must both be a multiple of the
underlying huge page size."

On ppc64, the munmap() in qemu_ram_munmap() does not work for Huge TLB
mappings because the mapped segment can be aligned with the underlying
huge page size, not aligned with the native system page size, as
returned by getpagesize().

This has the side effect of not releasing huge pages back to the pool
after a hugetlbfs file-backed memory device is hot-unplugged.

This patch fixes the situation in qemu_ram_mmap() and
qemu_ram_munmap() by considering the underlying page size on ppc64.

After this patch, memory hot-unplug releases huge pages back to the
pool.

Fixes: 7197fb4058bcb68986bae2bb2c04d6370f3e7218

Backports commit 53adb9d43e1abba187387a51f238e878e934c647 from qemu
2019-02-05 16:52:39 -05:00
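A generic sketch of the constraint quoted from munmap(2): round the unmap length up to the backing page size, native or hugetlbfs (function and macro names are illustrative; this is not the backported fix):

#include <stddef.h>
#include <sys/mman.h>

#define ROUND_UP(n, d)  (((n) + (d) - 1) / (d) * (d))

/* Hypothetical helper: unmap a region whose backing pages may be huge. */
static void ram_munmap_example(void *ptr, size_t size, size_t backing_pagesize)
{
    if (ptr) {
        /* addr and length must both be multiples of the underlying
         * (possibly huge) page size. */
        munmap(ptr, ROUND_UP(size, backing_pagesize));
    }
}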
Julia Suvorova 93acc4dc56
arm: Clarify the logic of set_pc()
Until now, the set_pc logic was unclear, which raised questions about
whether it should be used directly, simply applying a value to the PC, or
whether it should add additional checks, for example setting the Thumb bit
on an Arm CPU. Let's define the set_pc logic as “Configure the PC, as was
done in the ELF file” and implement the synchronize_with_tb hook to
preserve the PC for cpu_tb_exec.

Backports commit 42f6ed919325413392bea247a1e6f135deb469cd from qemu
2019-02-03 17:55:30 -05:00
Thomas Huth aa9e5f9abe
Don't talk about the LGPL if the file is licensed under the GPL
Some files claim that the code is licensed under the GPL, but then
suddenly suggest that the user should have a look at the LGPL.
That's of course nonsense; replace it with the correct GPL wording
instead.

Backports commit e361a772ffcd33675ffdd4637eea98a460dfed1b from qemu
2019-02-03 17:55:28 -05:00
Lioncash 0de4a47169
qemu/host-utils: Handle ctpop8/16/32/64 on MSVC
Maybe not the most platform friendly way of doing so
2019-01-30 13:29:58 -05:00
Lioncash 4e605ba038
qemu/compiler: Include <intrin.h> on MSVC 2019-01-30 13:25:26 -05:00
Lioncash 5745f2f75d
qemu/host_utils: Handle MSVC within clrsb32/64 2019-01-30 13:23:24 -05:00
Lioncash 205035a267
qemu/host_utils: Provide MSVC compatible equivalents of clz32/64 and ctz32/64 2019-01-30 13:19:03 -05:00
Peter Maydell d5298c5370
qom/cpu: Add cluster_index to CPUState
For TCG we want to distinguish which cluster a CPU is in, and
we need to do it quickly. Cache the cluster index in the CPUState
struct, by having the cluster object set cpu->cluster_index for
each CPU child when it is realized.

This means that board/SoC code must add all CPUs to the cluster
before realizing the cluster object. Regrettably QOM provides no
way to prevent adding children to a realized object and no way for
the parent to be notified when a new child is added to it, so
we don't have any way to enforce/assert this constraint; all
we can do is document it in a comment. We can at least put in a
check that the cluster contains at least one CPU, which should
catch the typical cases of "realized cluster too early" or
"forgot to parent the CPUs into it".

The restriction on how many clusters can exist in the system
is imposed by TCG code which will be added in a subsequent commit,
but the check to enforce it in cluster.c fits better in this one.

Backports relevant parts of commit 7ea7b9ad532e59c3efbcabff0e3484f4df06104c from qemu
2019-01-30 12:59:59 -05:00
Lioncash 4aaa75d05b
compiler: Add missing container_of macro for MSVC 2019-01-28 09:27:55 -05:00
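The usual portable shape of this macro, as a sketch (whether the backport uses exactly this form is an assumption):

#include <stddef.h>

/* Recover a pointer to the enclosing structure from a pointer to one of
 * its members. This form avoids the GCC typeof/statement-expression
 * version, so it also builds under MSVC. */
#ifndef container_of
#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))
#endif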
Lioncash 65e5d72a94
compiler: Add glue macros for MSVC 2019-01-28 09:26:01 -05:00
Lioncash 3d4f37b78f
osdep: Conditionally include non-Windows headers 2019-01-28 09:24:20 -05:00
Lioncash d020acd771
qemu/compiler: Define likely() and unlikely() preprocessor macros if they don't exist
Makes the code more functional with MSVC
2019-01-28 09:10:03 -05:00
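One conventional way to provide such fallbacks, as a sketch rather than the exact backported definitions:

#ifndef likely
#if defined(__GNUC__) || defined(__clang__)
#define likely(x)    __builtin_expect(!!(x), 1)
#define unlikely(x)  __builtin_expect(!!(x), 0)
#else
/* MSVC has no __builtin_expect; the hints simply become no-ops. */
#define likely(x)    (x)
#define unlikely(x)  (x)
#endif
#endif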
Lioncash 29d84a9296
target: Resolve repeated typedef warnings 2019-01-22 20:27:35 -05:00
Lioncash b17d2d4059
qemu/compiler: Add fallback macro for __has_builtin
Prevents compilation errors on non-clang compilers.
2019-01-22 19:02:49 -05:00
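The usual shape of such a fallback (a sketch):

/* Clang provides __has_builtin; on other compilers pretend nothing is
 * available and let the non-builtin code paths be used instead. */
#ifndef __has_builtin
#define __has_builtin(x) 0
#endif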
Philippe Mathieu-Daudé 24c56c65a3
qemu/compiler: Define QEMU_NONSTRING
GCC 8 introduced the -Wstringop-truncation checker to detect truncation by
the strncat and strncpy functions (closely related to -Wstringop-overflow,
which detects buffer overflow by string-modifying functions declared in
<string.h>).

In tandem with -Wstringop-truncation, the "nonstring" attribute was added:

The nonstring variable attribute specifies that an object or member
declaration with type array of char, signed char, or unsigned char,
or pointer to such a type is intended to store character arrays that
do not necessarily contain a terminating NUL. This is useful in detecting
uses of such arrays or pointers with functions that expect NUL-terminated
strings, and to avoid warnings when such an array or pointer is used as
an argument to a bounded string manipulation function such as strncpy.

From the GCC manual: https://gcc.gnu.org/onlinedocs/gcc/Common-Variable-Attributes.html#index-nonstring-variable-attribute

Add the QEMU_NONSTRING macro which checks if the compiler supports this
attribute.

Backports commit 1daff2f8193496b0e5e0ab56dc48c570c81f804e from qemu
2019-01-22 15:06:09 -05:00
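A sketch of how a feature-tested QEMU_NONSTRING could be defined, together with a hypothetical usage example; the exact conditionals in the backport may differ:

#ifndef __has_attribute
#define __has_attribute(x) 0     /* compatibility with older compilers */
#endif

#if __has_attribute(nonstring)
#define QEMU_NONSTRING __attribute__((nonstring))
#else
#define QEMU_NONSTRING
#endif

/* Hypothetical use: a 4-byte magic tag that is deliberately not
 * NUL-terminated, so NUL-termination warnings should not apply. */
static const char magic[4] QEMU_NONSTRING = "QEMU";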
Marc-André Lureau 585ebf50f7
build-sys: build with Vista API by default
Both qemu & qga build with Vista API by default already, by defining
_WIN32_WINNT 0x0600. Set it globally in osdep.h instead.

This replaces WINVER by _WIN32_WINNT in osdep.h. WINVER doesn't seem
to be really useful these days.
(see also https://blogs.msdn.microsoft.com/oldnewthing/20070411-00/?p=27283)

Backports commit 56cdca1d7a6a9c8ce28287b8c986ac9ea87ba603 from qemu
2019-01-13 20:28:51 -05:00
Marc-André Lureau 9ff8b70682
build-sys: move windows defines in osdep.h header
This removes some clutter in compilation logging, and allows some
easier tweaking per compilation unit/CFLAGS overriding.

Note that we can't move those define in os-win32.h, since they must be
set before the first system headers are included.

Backports commit 007e722c349839f430f10639ba8c94fe43acfe50 from qemu
2019-01-13 20:27:27 -05:00
Paul Burton 1c6732b053
atomics: Set ATOMIC_REG_SIZE=8 for MIPS n32
ATOMIC_REG_SIZE is currently defined as the default sizeof(void *) for
all MIPS host builds, including those using the n32 ABI. n32 is the
MIPS64 ILP32 ABI and as such tcg/mips/tcg-target.h defines
TCG_TARGET_REG_BITS as 64 for n32 builds. If we attempt to build QEMU
for an n32 host with support for a 64b target architecture then
TCG_OVERSIZED_GUEST is 0 and accel/tcg/cputlb.c attempts to use
atomic_* functions. This fails because ATOMIC_REG_SIZE is 4, causing
the calls to QEMU_BUILD_BUG_ON(sizeof(*ptr) > ATOMIC_REG_SIZE) in the
various atomic_* functions to generate errors.

Fix this by defining ATOMIC_REG_SIZE as 8 for all MIPS64 builds, which
will cover both n32 (ILP32) & n64 (LP64) ABIs in much the same way as
we already do for x86_64/x32.

Backports commit c5b00c1684f3317e887c7401b58dde54c2b05354 from qemu
2019-01-05 07:26:14 -05:00
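A sketch of the resulting definition; the exact preprocessor guards used by the backport are an assumption here:

/* 64-bit ISAs that can run ILP32 ABIs (x86_64/x32, MIPS n32) still have
 * 8-byte atomics even though sizeof(void *) == 4, so don't tie the
 * limit to pointer size there. */
#if defined(__x86_64__) || defined(__mips64)
#define ATOMIC_REG_SIZE  8
#else
#define ATOMIC_REG_SIZE  sizeof(void *)
#endif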
Richard Henderson 80b4bef1cc
tcg: Add TCG_CALL_NO_RETURN
Remember which helpers have been marked noreturn.

Backports commit 15d7409260498505e991e7b9d87118627165e613 from qemu
2019-01-05 06:35:21 -05:00
Emilio G. Cota 5d3ccde625
softfloat: add float{32,64}_is_zero_or_normal
These will gain some users very soon.

Backports commit 315df0d193929b167b9d7be4665d5f2c0e2427e0 from qemu
2018-12-19 10:31:10 -05:00
Emilio G. Cota 3a8f7d6d84
softfloat: add float{32,64}_is_{de,}normal
This paves the way for upcoming work.

Backports commit 588e6dfd8774e6da56b6995611655fbe59ff564a from qemu
2018-12-19 10:30:33 -05:00
Emilio G. Cota 3d0359c0f5
xxhash: match output against the original xxhash32
Change the order in which we extract a/b and c/d to
match the output of the upstream xxhash32.

Tested with:
https://github.com/cota/xxhash/tree/qemu

Backports commit b7c2cd08a6f68010ad27c9c0bf2fde02fb743a0e from qemu
2018-12-18 06:09:01 -05:00
Emilio G. Cota 308f4c1e0c
include: move exec/tb-hash-xx.h to qemu/xxhash.h
Backports commit fe656e3185fa10973d43492c867643e80fa433cd from qemu
2018-12-18 06:07:55 -05:00
Emilio G. Cota 63082a4d20
exec: introduce qemu_xxhash{2,4,5,6,7}
Before moving them all to include/qemu/xxhash.h.

Backports commit c971d8fa73ff92996d751fa87d90f220cf3c8194 from qemu
2018-12-18 06:04:57 -05:00
David Hildenbrand 8f69c83634
qapi: Rewrite string-input-visitor's integer and list parsing
The input visitor has some problems right now, especially
- unsigned type "Range" is used to process signed ranges, resulting in
inconsistent behavior and ugly/magical code
- uint64_t are parsed like int64_t, so big uint64_t values are not
supported and error messages are misleading
- lists/ranges of int64_t are accepted although no list is parsed and
we should rather report an error
- lists/ranges are preparsed using int64_t, making it hard to
implement uint64_t values or uint64_t lists
- types that don't support lists don't bail out
- visiting beyond the end of a list is not handled properly
- we don't actually parse lists, we parse *sets*: members are sorted,
and duplicates eliminated

So let's rewrite it by getting rid of usage of the type "Range" and
properly supporting lists of int64_t and uint64_t (including ranges of
both types), fixing the above mentioned issues.

Lists of other types are not supported and will properly report an
error. Virtual walks are now supported.

Tests have to be fixed up:
- Two BUGs were hardcoded that are fixed now
- The string-input-visitor now actually returns a parsed list and not
an ordered set.

Please note that no users/callers have to be fixed up. Candidates using
visit_type_uint16List() and friends are:
- backends/hostmem.c:host_memory_backend_set_host_nodes()
-- Code can deal with duplicates/unsorted lists
- numa.c::query_memdev()
-- via object_property_get_uint16List(), the list will still be sorted
and without duplicates (via host_memory_backend_get_host_nodes())
- qapi-visit.c::visit_type_Memdev_members()
- qapi-visit.c::visit_type_NumaNodeOptions_members()
- qapi-visit.c::visit_type_RockerOfDpaGroup_members
- qapi-visit.c::visit_type_RxFilterInfo_members()
-- Not used with string-input-visitor.

Backports commit c9fba9de89db51a07689e2cba4865a1e564b8f0f from qemu
2018-12-18 04:57:25 -05:00
David Hildenbrand 67f9141b13
cutils: Fix qemu_strtosz() & friends to reject non-finite sizes
qemu_strtosz() & friends reject NaNs, but happily accept infinities.
They shouldn't. Fix that.

The fix makes use of qemu_strtod_finite(). To avoid ugly casts,
change the @end parameter of qemu_strtosz() & friends from char **
to const char **.

Also, add two test cases, testing that "inf" and "NaN" are properly
rejected. While at it, also fixup the function documentation.

Backports commit af02f4c5179675ad4e26b17ba26694a8fcde17fa from qemu
2018-12-18 04:48:12 -05:00
David Hildenbrand bf15f4924b
cutils: Add qemu_strtod() and qemu_strtod_finite()
Let's provide a wrapper for strtod().

Backports commit ca28f5481607e5c59481e70e429f5dd23662cb69 from qemu
2018-12-18 04:45:19 -05:00
Thomas Huth 7855f9acf0
Remove QEMU_ARTIFICIAL macro
The code that used it has already been removed a while ago with commit
dc41aa7d34989b552ef ("tcg: Remove GET_TCGV_* and MAKE_TCGV_*").

Backports commit 78751ea855f89b5a352ccc332162fed3ad4c9496 from qemu
2018-12-18 03:56:48 -05:00
Thomas Huth c584171cf8
includes: Replace QEMU_GNUC_PREREQ with "__has_builtin || !defined(__clang__)"
Since we require GCC version 4.8 or newer now, we can be sure that
the builtin functions are always available on GCC. And for Clang,
we can check the availablility with __has_builtin instead.

Backports commit f773b423cc61f3ca18af5337101c158a52aaae2c from qemu
2018-12-18 03:55:43 -05:00
Gerd Hoffmann 94c8893678
move ObjectClass to typedefs.h
Backports commit 7cfda775e575e9561043c26853b4ca6f891cce70 from qemu
2018-12-11 20:37:04 -05:00
David Hildenbrand d783407cff
range: pass const pointer where possible
If there are no changes, let's use a const pointer.

Backports commit d56978f41b357cc84f2d3fe7d5fef2ae9cddfa61 from qemu
2018-12-11 20:35:26 -05:00
Peter Maydell 1301becdab
tcg: Support MMU protection regions smaller than TARGET_PAGE_SIZE
Add support for MMU protection regions that are smaller than
TARGET_PAGE_SIZE. We do this by marking the TLB entry for those
pages with a flag TLB_RECHECK. This flag causes us to always
take the slow-path for accesses. In the slow path we can then
special case them to always call tlb_fill() again, so we have
the correct information for the exact address being accessed.

This change allows us to handle reading and writing from small
regions; we cannot deal with execution from the small region.

Backports commit 55df6fcf5476b44bc1b95554e686ab3e91d725c5 from qemu
2018-11-16 21:35:54 -05:00
Lioncash 3a0ab1a64a
Partial backport of: exec.c: Handle IOMMUs in address_space_translate_for_iotlb()
We just want the parameter changes here.

Partial backport of commit 1f871c5e6b0f30644a60a81a6a7aadb3afb030ac from
qemu
2018-11-16 21:24:55 -05:00
Marc-André Lureau fc354aa464
memory: learn about non-volatile memory region
Add a new flag to mark memory regions that are used as non-volatile, by
NVDIMM for example. That bit is propagated down to the flat view, and
reflected in HMP info mtree with a "nv-" prefix on the memory type.

This way, guest_phys_blocks_region_add() can skip the NV memory
regions for dumps and TCG memory clear in a following patch.

Backports commit c26763f8ec70b1011098cab0da9178666d8256a5 from qemu
2018-11-11 08:50:39 -05:00
Li Qiang 33422a04bc
cpu.h: fix a typo in comment
Found by reading the code.

Backports commit 7e63bc38adfcc5bd9e20e3dd8a170f0e8d830b60 from qemu
2018-11-11 07:32:05 -05:00
Li Qiang b79f16c331
memory.h: fix typos in comments
Backports commit 847b31f0d608bfcbc9ea11d5013ae62e956f32cd from qemu
2018-11-11 07:31:35 -05:00
Emilio G. Cota 1677898a09
cputlb: read CPUTLBEntry.addr_write atomically
Updates can come from other threads, so readers that do not
take tlb_lock must use atomic_read to avoid undefined
behaviour (UB).

This completes the conversion to tlb_lock. This conversion results
on average in no performance loss, as the following experiments
(run on an Intel i7-6700K CPU @ 4.00GHz) show.

1. aarch64 bootup+shutdown test:

- Before:
Performance counter stats for 'taskset -c 0 ../img/aarch64/die.sh' (10 runs):

7487.087786 task-clock (msec) # 0.998 CPUs utilized ( +- 0.12% )
31,574,905,303 cycles # 4.217 GHz ( +- 0.12% )
57,097,908,812 instructions # 1.81 insns per cycle ( +- 0.08% )
10,255,415,367 branches # 1369.747 M/sec ( +- 0.08% )
173,278,962 branch-misses # 1.69% of all branches ( +- 0.18% )

7.504481349 seconds time elapsed ( +- 0.14% )

- After:
Performance counter stats for 'taskset -c 0 ../img/aarch64/die.sh' (10 runs):

7462.441328 task-clock (msec) # 0.998 CPUs utilized ( +- 0.07% )
31,478,476,520 cycles # 4.218 GHz ( +- 0.07% )
57,017,330,084 instructions # 1.81 insns per cycle ( +- 0.05% )
10,251,929,667 branches # 1373.804 M/sec ( +- 0.05% )
173,023,787 branch-misses # 1.69% of all branches ( +- 0.11% )

7.474970463 seconds time elapsed ( +- 0.07% )

2. SPEC06int:
SPEC06int (test set)
[Y axis: Speedup over master]
[ASCII bar chart elided: per-benchmark speedup over master (y-axis 0.75-1.15) for tlb-lock-v2 and tlb-lock-v3 across the SPEC06int test set, 400.perlbench through 483.xalancbmk, plus the geomean.]

png: https://imgur.com/a/BHzpPTW

Notes:
- tlb-lock-v2 corresponds to an implementation with a mutex.
- tlb-lock-v3 corresponds to the current implementation, i.e.
a spinlock and a single lock acquisition in tlb_set_page_with_attrs.

Backports commit 403f290c0603f35f2d09c982bf5549b6d0803ec1 from qemu
2018-10-23 15:37:43 -04:00
Richard Henderson d74e00a30a
tcg: Split CONFIG_ATOMIC128
GCC7+ will no longer advertise support for 16-byte __atomic operations
if only cmpxchg is supported, as for x86_64. Fortunately, x86_64 still
has support for __sync_compare_and_swap_16 and we can make use of that.
AArch64 does not have, nor ever has had such support, so open-code it.

Backports commit e6cd4bb59b8154fa00da611200beef7eb4e8ec56 from qemu
2018-10-23 15:17:39 -04:00
Richard Henderson c911ea7128
tcg: Add tlb_index and tlb_entry helpers
Isolate the computation of an index from an address into a
helper before we change that function.

Backports commit 383beda9cf32f795616c3b93f7d6154d70372d4b from qemu
2018-10-23 15:04:27 -04:00
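A standalone illustration of the two helpers; QEMU's versions take a CPUArchState and an mmu_idx, and the constants below are placeholders:

#include <stdint.h>

#define TARGET_PAGE_BITS 12            /* placeholder */
#define CPU_TLB_SIZE     256           /* placeholder */

typedef struct {
    uint64_t addr_read, addr_write, addr_code, addend;
} TLBEntry;

/* Compute the index of the TLB slot covering addr... */
static inline uintptr_t tlb_index(uint64_t addr)
{
    return (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
}

/* ...and return a pointer to that slot in a direct-mapped table. */
static inline TLBEntry *tlb_entry(TLBEntry *tlb_table, uint64_t addr)
{
    return &tlb_table[tlb_index(addr)];
}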
Emilio G. Cota dfb3954571
exec: introduce tlb_init
Paves the way for the addition of a per-TLB lock.

Backports commit 5005e2537d090bee87aca3b924dcd17920fd146a from qemu
2018-10-23 14:41:29 -04:00
Emilio G. Cota 9f08fb35bc
softfloat: remove float64_trunc_to_int
It has not had users since f83311e476 ("target-m68k: use floatx80
internally", 2017-06-21).

Note that no other bit-width has floatX_trunc_to_int.

Backports commit c953da8f0be5e026d1c9128660736d72294feb3e from qemu
2018-10-08 14:15:11 -04:00
Peter Maydell 01683fe97e
memory: Remove old_mmio accessors
Now that all the users of old_mmio MemoryRegion accessors
have been converted, we can remove the core code support.

Backports commit 62a0db942dec6ebfec19aac2b604737d3c9a2d75 from qemu
2018-10-04 04:45:30 -04:00