Move cpu_get_fp80()/cpu_set_fp80() from fpu_helper.c to
machine.c because fpu_helper.c will be disabled if tcg is
disabled in the build.
Backports commit db573d2cf7ae6b5a4fc324be6f55e078fc218464 from qemu.
In unicorn's case, they can be moved into unicorn.c.
Move the cpu_sync_bndcs_hflags() function from mpx_helper.c
to helper.c because mpx_helper.c needs to be disabled when
tcg is disabled.
Backports commit ab0a19d4f08d924e052eb369420d264240872f8a from qemu
Add CONFIG_TCG around TLB-related functions and structure declarations.
Some of these functions are defined in ./accel/tcg/cputlb.c, which will
not be linked in if TCG is disabled, and have no stubs; therefore, their
callers will also be compiled out for --disable-tcg.
Backports commit b11ec7f2e44b285a3967d629b55d1a6970b06787 from qemu
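A minimal sketch of the guard pattern described above, assuming declarations such as those in include/exec/exec-all.h (the real patch covers more functions):

    #ifdef CONFIG_TCG
    /* Implemented in accel/tcg/cputlb.c; no stubs are provided, so any
       caller must itself be guarded or compiled out for --disable-tcg. */
    void tlb_flush_page(CPUState *cpu, target_ulong addr);
    void tlb_flush(CPUState *cpu);
    #endif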
This lets you build without TCG (hardware acceleration or qtest only). When
this flag is passed to configure, it automatically filters the target
list down to only those targets that support KVM, Xen, or HAX.
Backports commit b3f6ea7e55e8228d6f84d5cee7cb11cae917ba95 from qemu
translate-all.c will be disabled if tcg is disabled in the build,
so the page_size_init() function and related variables are moved
to exec.c.
Backports commit a0be0c585f5dcc4d50a37f6a20d3d625c5ef3a2c from qemu
Commit 1f5c00cfdb8114c ("qom/cpu: move tlb_flush to cpu_common_reset")
moved the call to tlb_flush() from the target-specific reset handlers
into the common code qom/cpu.c file, and protected the call with
"#ifdef CONFIG_SOFTMMU" to avoid calling it for linux-user-only
targets. But since qom/cpu.c is common code, CONFIG_SOFTMMU is
*never* defined there, so tlb_flush() was simply never executed
anymore. Fix it by introducing a wrapper for tlb_flush() in a file
that is re-compiled for each target, i.e. in translate-all.c.
Backports commit 2cd53943115be5118b5b2d4b80ee0a39c94c4f73 from qemu
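Because translate-all.c is recompiled for every target, CONFIG_SOFTMMU is visible there when it should be; a minimal sketch of the wrapper, as I read the upstream commit:

    void tcg_flush_softmmu_tlb(CPUState *cs)
    {
    #ifdef CONFIG_SOFTMMU
        tlb_flush(cs);
    #endif
    }

    /* qom/cpu.c's reset path can then call this unconditionally. */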
Move the handling of conforming code segments before the handling
of stack switch.
Because dpl == cpl after the new "if", it's now unnecessary to check
the C bit when testing dpl < cpl. Furthermore, dpl > cpl is checked
slightly above the modified code, so the final "else" is unreachable
and we can remove it.
Backports commit 1110bfe6f5600017258fa6578f9c17ec25b32277 from qemu
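Illustrative control flow after the move described above (simplified, not the verbatim seg_helper.c code):

    if (e2 & DESC_C_MASK) {
        dpl = cpl;              /* conforming: privilege is unchanged */
    }
    if (dpl < cpl) {
        /* to inner privilege: load the new stack from the TSS */
    } else {
        /* dpl == cpl here, since dpl > cpl already faulted earlier:
           keep the current stack */
    }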
In do_interrupt64(), when the interrupt stack table (IST) is enabled
and the target code segment is conforming (e2 & DESC_C_MASK), the
old implementation always set the new CPL to 0 and SS.RPL to 0.
This is incorrect: when CPL3 code accesses a CPL0 conforming code
segment, the CPL should remain unchanged; otherwise higher-privileged
code can be compromised.
The patch fixes this by always setting dpl = cpl when the target code
segment is conforming, and by passing the correct new CPL in the last
parameter, `flags`, of cpu_x86_load_seg_cache().
Backports commit e95e9b88ba5f4a6c17f4d0c3a3a6bf3f648bb328 from qemu
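A sketch of the corrected logic in do_interrupt64() (illustrative rather than the verbatim diff; DESC_DPL_SHIFT/DESC_DPL_MASK locate the DPL field inside the descriptor flags):

    if (e2 & DESC_C_MASK) {
        dpl = cpl;    /* conforming: keep the caller's CPL, not 0 */
    }
    /* dpl now yields the new CPL and SS.RPL, and is reflected in the
       flags handed to cpu_x86_load_seg_cache() */
    cpu_x86_load_seg_cache(env, R_CS, selector | dpl,
                           get_seg_base(e1, e2),
                           get_seg_limit(e1, e2),
                           (e2 & ~DESC_DPL_MASK) | (dpl << DESC_DPL_SHIFT));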
Some code paths can lead to atomic accesses racing with memset()
on cpu->tb_jmp_cache, which can result in torn reads/writes
and is undefined behaviour in C11.
These torn accesses are unlikely to show up as bugs, but from code
inspection they seem possible. For example, tb_phys_invalidate does:
    /* remove the TB from the hash list */
    h = tb_jmp_cache_hash_func(tb->pc);
    CPU_FOREACH(cpu) {
        if (atomic_read(&cpu->tb_jmp_cache[h]) == tb) {
            atomic_set(&cpu->tb_jmp_cache[h], NULL);
        }
    }
Here atomic_set might race with a concurrent memset (such as the
ones scheduled via "unsafe" async work, e.g. tlb_flush_page) and
therefore we might end up with a torn pointer (or who knows what,
because we are under undefined behaviour).
This patch converts parallel accesses to cpu->tb_jmp_cache to use
atomic primitives, thereby bringing these accesses back to defined
behaviour. The price to pay is to potentially execute more instructions
when clearing cpu->tb_jmp_cache, but given how infrequently they happen
and the small size of the cache, the performance impact I have measured
is within noise range when booting debian-arm.
Note that under "safe async" work (e.g. do_tb_flush) we could use memset
because no other vcpus are running. However I'm keeping these accesses
atomic as well to keep things simple and to avoid confusing analysis
tools such as ThreadSanitizer.
Backports commit f3ced3c59287dabc253f83f0c70aa4934470c15e from qemu
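The memset() over the cache thus becomes a loop of atomic stores; a sketch of the clearing helper (name per my reading of the upstream commit):

    static void cpu_tb_jmp_cache_clear(CPUState *cpu)
    {
        unsigned int i;

        for (i = 0; i < TB_JMP_CACHE_SIZE; i++) {
            atomic_set(&cpu->tb_jmp_cache[i], NULL);
        }
    }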
We are relying on cpu_env being defined as a global, yet most
targets (i.e. all but arm/a64) have it defined as a local variable.
Luckily all of them use the same "cpu_env" name, but really
compilation shouldn't break if the name of that local variable
changed.
Fix it by using tcg_ctx.tcg_env, which all targets set in their
translate_init function. This change also helps pave the way
for the upcoming "translation loop common to all targets" work.
Backports commit 53f6672bcf57d82b794a2cc3a3469be7d35c8653 from qemu
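Sketch of the resulting pattern (illustrative):

    /* each target's translate_init() already does:
     *     tcg_ctx.tcg_env = cpu_env;
     * so common code can fetch it without naming the local: */
    TCGv_env env = tcg_ctx.tcg_env;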
Add fsabs, fdabs, fsneg, fdneg, fsmove and fdmove.
The value is converted using the new floatx80_round() function.
Backports commit 77bdb2292492fafc4bc0fbb4d8c44fdd0ef1fa8e from qemu
Add a function to round a floatx80 to the defined precision
(floatx80_rounding_precision).
Backports commit 0f72129281765ed64d26353284059f2bdcde7a23 from qemu
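A usage sketch, assuming the existing softfloat helpers set_floatx80_rounding_precision() and floatx80_abs() (the m68k glue is illustrative):

    /* round an extended-precision result to single precision,
       roughly what fsabs needs (sketch) */
    set_floatx80_rounding_precision(32, &env->fp_status);
    res = floatx80_round(floatx80_abs(val), &env->fp_status);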
fmovecr moves a floating point constant from the
FPU ROM to a floating point register.
Backports commit 9d403660d91229922c2786e81c23cc9dd8e644f1 from qemu
This may be used for deprecated object properties that are kept for
backwards compatibility.
Backports commit a733371214b68881d84725a3c71f60e2faf3b8e2 from qemu
This replaces the env1 and page_index variables with env and index
so we can use the VICTIM_TLB_HIT macro later.
Backports commit 3416343255cbe01fbe12e5e36cd4bb5042425b27 from qemu
ColdFire uses float64, but the 680x0 family uses floatx80.
This patch introduces the use of floatx80 internally
and enables the 680x0 80-bit FPU.
Backports commit f83311e4764f1f25a8abdec2b32c64483be1759b from qemu
Switch to QNum/uint where appropriate to remove the i64 limitation.
The input visitor will cast i64 input to u64 for compatibility
reasons (existing JSON QMP clients already use negative i64 values
for large u64 values, and expect an implicit cast in qemu).
Note: before the patch, uint64_t values above INT64_MAX are sent over
json QMP as negative values, e.g. UINT64_MAX is sent as -1. After the
patch, they are sent unmodified. Clearly a bug fix, but we have to
consider compatibility issues anyway. libvirt should cope fine,
because its parsing of unsigned integers accepts negative values
modulo 2^64. There's hope that other clients will, too.
Backports commit 5923f85fb82df7c8c60a89458a5ae856045e5ab1 from qemu
In order to store integer values between INT64_MAX and UINT64_MAX, add
a uint64_t internal representation.
Backports commit 61a8f418b26a2d974e38e4ae55020aca8d402d88 from qemu
Before the previous commit, the parameter promote_int = true made
visit_start_alternate() with an input visitor avoid QTYPE_QINT
variants and create QTYPE_QFLOAT variants instead. This was used
where QTYPE_QINT variants were invalid.
The previous commit fused QTYPE_QINT with QTYPE_QFLOAT, rendering
promote_int useless and unused.
Backports commit 60390d2dc85ffade8981ca41e02335cb07353a6d from qemu
We would like to use the same QObject type to represent numbers, whether
they are int, uint, or float. Getters will allow some compatibility
between the various types if the number fits other representations.
Add a few more tests while at it.
Backports commit 01b2ffcedd94ad7b42bc870e4c6936c87ad03429 from qemu
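A sketch of the resulting QNum API described above (names as introduced by these commits, to the best of my reading):

    QNum *n = qnum_from_uint(UINT64_MAX);   /* stored as uint64_t, sent
                                               over JSON unmodified */
    double d = qnum_get_double(n);          /* getters convert between
                                               representations when the
                                               value fits */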
QAPI_CLONE() returns a newly allocated QAPI object, which is inconvenient
when we want to clone into an existing object. QAPI_CLONE_MEMBERS() does
exactly that.
Backports commit 4626a19c86c30d96cedbac2bd44ef8103303cb37 from qemu
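Usage sketch (SomeType stands in for any QAPI-generated struct):

    SomeType *copy = QAPI_CLONE(SomeType, src);   /* fresh allocation */
    QAPI_CLONE_MEMBERS(SomeType, dst, src);       /* into existing *dst */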
Rather than making lots of callers wrap a scalar in a QInt, QString,
or QBool, provide helper macros that do the wrapping automatically.
Update the Coccinelle script to make mass conversions easy, although
the conversion itself will be done as separate patches to ease
review and backport efforts.
Backports commit a92c21591b5bb9543996538f14854ca6b528318b from qemu
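Before/after sketch of the effect (macro names per the commit):

    /* before */
    qdict_put(dict, "name", qstring_from_str("value"));
    /* after */
    qdict_put_str(dict, "name", "value");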
Visiting a list when the input is the empty string should result in an
empty list, not an error. Noticed when commit 3d089ce belatedly added
tests, but simply accepted as weird back then. It's actually a
regression: broken in commit 74f24cb, v2.7.0. Fix it, and throw in
another test case for the empty string.
Backports commit d2788227c6185c72d88ef3127e9fed41686f8e39 from qemu
We can call tb_htable_lookup even when the tb_jmp_cache is completely
empty. Therefore, un-nest most of the code dependent on tb != NULL
from the read from the cache.
This improves the hit rate of lookup_tb_ptr; for instance, when booting
and immediately shutting down debian-arm, the hit rate improves from
93.2% to 99.4%.
Backports commit b97a879de980e99452063851597edb98e7e8039c from qemu
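Schematically, the lookup described above becomes (simplified, not verbatim; the real match check also compares cs_base and flags):

    h = tb_jmp_cache_hash_func(addr);
    tb = atomic_rcu_read(&cpu->tb_jmp_cache[h]);
    if (tb == NULL || tb->pc != addr) {     /* miss, or cache empty */
        tb = tb_htable_lookup(cpu, addr, cs_base, flags);
        if (tb != NULL) {
            atomic_set(&cpu->tb_jmp_cache[h], tb);
        }
    }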
The new placement of the TB means that we can use one insn
to load the goto_tb destination directly from the TB.
Backports commit 308714e6bc945389c64faf1b9213e2c0d3f03391 from qemu
Since we're no longer using a direct branch, we have no
limit on the branch distance.
Backports commit acb0b292b6d0f49972dc98f742e79ed53973e438 from qemu
The new placement of the TB means that we can use one insn
to load the return value for exit_tb returning the TB pointer.
Backports commit cc74d332ff9a78684374847375ef63fc4bd10436 from qemu
We are partially initializing tb in tb_alloc. Instead, fully
initialize it in tb_gen_code, which is tb_alloc's only caller.
This saves an unnecessary write to tb->cflags.
Backports commit 2b48e10f888059a98043b4816769fa2a326a1d2c from qemu
Allocating an arbitrarily-sized array of tbs results in either
(a) a lot of memory wasted or (b) unnecessary flushes of the code
cache when we run out of TB structs in the array.
An obvious solution would be to just malloc a TB struct when needed,
and keep the TB array as an array of pointers (recall that tb_find_pc()
needs the TB array to run in O(log n)).
Perhaps a better solution, which is implemented in this patch, is to
allocate TB's right before the translated code they describe. This
results in some memory waste due to padding to have code and TBs in
separate cache lines; for instance, I measured 4.7% of padding in the
used portion of code_gen_buffer when booting aarch64 Linux on a
host with 64-byte cache lines. However, it can allow for optimizations
in some host architectures, since TCG backends could safely assume that
the TB and the corresponding translated code are very close to each
other in memory. See this message by rth for a detailed explanation:
https://lists.gnu.org/archive/html/qemu-devel/2017-03/msg05172.html
Subject: Re: GSoC 2017 Proposal: TCG performance enhancements
Backports commit 6e3b2bfd6af488a896f7936e99ef160f8f37e6f2 from qemu
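The allocator then just bumps code_gen_ptr, rounding so that the TB and its code land on separate host cache lines; a sketch close to my reading of the upstream commit:

    TranslationBlock *tcg_tb_alloc(TCGContext *s)
    {
        uintptr_t align = qemu_icache_linesize;   /* host cache line */
        TranslationBlock *tb;
        void *next;

        tb = (void *)ROUND_UP((uintptr_t)s->code_gen_ptr, align);
        next = (void *)ROUND_UP((uintptr_t)(tb + 1), align);

        if (unlikely(next > s->code_gen_highwater)) {
            return NULL;    /* caller flushes the code cache and retries */
        }
        s->code_gen_ptr = next;
        return tb;
    }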