A link's check callback is supposed to verify/permit setting it;
however, currently nothing restricts the callback from misusing it
and modifying the target object from within.
Make sure that the readonly semantics are checked by the compiler
to prevent the callback's misuse.
Backports commit 8f5d58ef2c92d7b82d9a6eeefd7c8854a183ba4a from qemu
Now that we have proper locking after the MTTCG patches have landed,
we can revert the commit. This reverts commit
a9353fe897ca2687e5b3385ed39e3db3927a90e0.
Backports commit 406bc339b0505fcfc2ffcbca1f05a3756e338a65 from qemu
Add CONFIG_TCG around TLB-related functions and structure declarations.
Some of these functions are defined in ./accel/tcg/cputlb.c, which will
not be linked in if TCG is disabled, and have no stubs; therefore, their
callers will also be compiled out for --disable-tcg.
Backports commit b11ec7f2e44b285a3967d629b55d1a6970b06787 from qemu
translate-all.c will be disabled if TCG is disabled in the build,
so the page_size_init() function and related variables will be moved
to exec.c.
Backports commit a0be0c585f5dcc4d50a37f6a20d3d625c5ef3a2c from qemu
Commit 1f5c00cfdb8114c ("qom/cpu: move tlb_flush to cpu_common_reset")
moved the call to tlb_flush() from the target-specific reset handlers
into the common code qom/cpu.c file, and protected the call with
"#ifdef CONFIG_SOFTMMU" to avoid that it is called for linux-user
only targets. But since qom/cpu.c is common code, CONFIG_SOFTMMU is
*never* defined here, so the tlb_flush() was simply never executed
anymore. Fix it by introducing a wrapper for tlb_flush() in a file
that is re-compiled for each target, i.e. in translate-all.c.
Backports commit 2cd53943115be5118b5b2d4b80ee0a39c94c4f73 from qemu
Some code paths can lead to atomic accesses racing with memset()
on cpu->tb_jmp_cache, which can result in torn reads/writes
and is undefined behaviour in C11.
These torn accesses are unlikely to show up as bugs, but from code
inspection they seem possible. For example, tb_phys_invalidate does:
    /* remove the TB from the hash list */
    h = tb_jmp_cache_hash_func(tb->pc);
    CPU_FOREACH(cpu) {
        if (atomic_read(&cpu->tb_jmp_cache[h]) == tb) {
            atomic_set(&cpu->tb_jmp_cache[h], NULL);
        }
    }
Here atomic_set might race with a concurrent memset (such as the
ones scheduled via "unsafe" async work, e.g. tlb_flush_page) and
therefore we might end up with a torn pointer (or who knows what,
because we are under undefined behaviour).
This patch converts parallel accesses to cpu->tb_jmp_cache to use
atomic primitives, thereby bringing these accesses back to defined
behaviour. The price to pay is to potentially execute more instructions
when clearing cpu->tb_jmp_cache, but given how infrequently they happen
and the small size of the cache, the performance impact I have measured
is within noise range when booting debian-arm.
Note that under "safe async" work (e.g. do_tb_flush) we could use memset
because no other vcpus are running. However I'm keeping these accesses
atomic as well to keep things simple and to avoid confusing analysis
tools such as ThreadSanitizer.
Backports commit f3ced3c59287dabc253f83f0c70aa4934470c15e from qemu
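On the clearing side, the memset() becomes a loop of per-entry atomic
stores. A minimal sketch of that pattern, assuming QEMU's
TB_JMP_CACHE_SIZE array length:

    /* Clear cpu->tb_jmp_cache with atomic stores instead of memset(),
     * so it stays defined against concurrent atomic_read()/atomic_set(). */
    static void tb_jmp_cache_clear(CPUState *cpu)
    {
        unsigned int i;

        for (i = 0; i < TB_JMP_CACHE_SIZE; i++) {
            atomic_set(&cpu->tb_jmp_cache[i], NULL);
        }
    }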
We are relying on cpu_env being defined as a global, yet most
targets (i.e. all but arm/a64) have it defined as a local variable.
Luckily all of them use the same "cpu_env" name, but really
compilation shouldn't break if the name of that local variable
changed.
Fix it by using tcg_ctx.tcg_env, which all targets set in their
translate_init function. This change also helps pave the way
for the upcoming "translation loop common to all targets" work.
Backports commit 53f6672bcf57d82b794a2cc3a3469be7d35c8653 from qemu
Add a function to round a floatx80 to the defined precision
(floatx80_rounding_precision).
Backports commit 0f72129281765ed64d26353284059f2bdcde7a23 from qemu
In order to store integer values between INT64_MAX and UINT64_MAX, add
a uint64_t internal representation.
Backports commit 61a8f418b26a2d974e38e4ae55020aca8d402d88 from qemu
Before the previous commit, parameter promote_int = true made
visit_start_alternate() with an input visitor avoid QTYPE_QINT
variants and create QTYPE_QFLOAT variants instead. This was used
where QTYPE_QINT variants were invalid.
The previous commit fused QTYPE_QINT with QTYPE_QFLOAT, rendering
promote_int useless and unused.
Backports commit 60390d2dc85ffade8981ca41e02335cb07353a6d from qemu
We would like to use the same QObject type to represent numbers, whether
they are int, uint, or floats. Getters will allow some compatibility
between the various types if the number fits other representations.
Add a few more tests while at it.
Backports commit 01b2ffcedd94ad7b42bc870e4c6936c87ad03429 from qemu
QAPI_CLONE() returns a newly allocated QAPI object. Inconvenient when
we want to clone into an existing object. QAPI_CLONE_MEMBERS() does
exactly that.
Backports commit 4626a19c86c30d96cedbac2bd44ef8103303cb37 from qemu
Rather than making lots of callers wrap a scalar in a QInt, QString,
or QBool, provide helper macros that do the wrapping automatically.
Update the Coccinelle script to make mass conversions easy, although
the conversion itself will be done as separate patches to ease
review and backport efforts.
Backports commit a92c21591b5bb9543996538f14854ca6b528318b from qemu
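A before/after sketch of what this buys callers (helper names per the
QEMU commit; treat them as assumptions in this tree):

    /* before: every caller wraps the scalar by hand */
    qdict_put(dict, "name", qstring_from_str("foo"));
    qdict_put(dict, "size", qint_from_int(1024));

    /* after: the helper macros wrap automatically */
    qdict_put_str(dict, "name", "foo");
    qdict_put_int(dict, "size", 1024);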
Allocating an arbitrarily-sized array of tbs results in either
(a) a lot of memory wasted or (b) unnecessary flushes of the code
cache when we run out of TB structs in the array.
An obvious solution would be to just malloc a TB struct when needed,
and keep the TB array as an array of pointers (recall that tb_find_pc()
needs the TB array to run in O(log n)).
Perhaps a better solution, which is implemented in this patch, is to
allocate TBs right before the translated code they describe. This
results in some memory waste due to padding to have code and TBs in
separate cache lines--for instance, I measured 4.7% of padding in the
used portion of code_gen_buffer when booting aarch64 Linux on a
host with 64-byte cache lines. However, it can allow for optimizations
in some host architectures, since TCG backends could safely assume that
the TB and the corresponding translated code are very close to each
other in memory. See this message by rth for a detailed explanation:
https://lists.gnu.org/archive/html/qemu-devel/2017-03/msg05172.html
Subject: Re: GSoC 2017 Proposal: TCG performance enhancements
Backports commit 6e3b2bfd6af488a896f7936e99ef160f8f37e6f2 from qemu
Instead of exporting goto_ptr directly to TCG frontends, export
tcg_gen_lookup_and_goto_ptr(), which calls goto_ptr with the pointer
returned by the lookup_tb_ptr() helper. This is the only use case
we have for goto_ptr and lookup_tb_ptr, so having this function is
very convenient. Furthermore, it trivially allows us to avoid calling
the lookup helper if goto_ptr is not implemented by the backend.
Backports commit cedbcb01529cb6cf9a2289cdbebbc63f6149fc18 from qemu
We need to coordinate with the TCG_OVERSIZED_GUEST test in cputlb.c,
and allow 64-bit atomics even though sizeof(void *) == 4.
Backports commit 374aae653499f4d405caf32b7fff0c8639113fe4 from qemu
This patch converts the old "is_write" bool into IOMMUAccessFlags. The
difference is that "is_write" can only express either read/write, but
sometimes what we really want is "none" here (neither read nor write).
Replay is a good example: during replay, we should not check any RW
permission bits since that's not an actual IO at all.
Backports commit bf55b7afce53718ef96f4e6616da62c0ccac37dd from qemu
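The flag type, as it appears in QEMU's memory API around this commit:

    typedef enum {
        IOMMU_NONE = 0,   /* no access, e.g. a replay walk */
        IOMMU_RO   = 1,   /* read-only */
        IOMMU_WO   = 2,   /* write-only */
        IOMMU_RW   = 3,   /* read/write */
    } IOMMUAccessFlags;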
We already require gcc 4.1 or newer (for the atomic
support), so the fallback codepaths for older gcc
versions than that are now dead code and we can
just delete them.
NB: clang reports itself as gcc 4.2 (regardless of
clang version), so clang won't be using the fallbacks
either.
Backports commit fa54abb8c298f892639ffc4bc2f61448ac3be4a1 from qemu
MemoryRegionCache did not know about virtio support for IOMMUs (because the
two features were developed at the same time). Revert MemoryRegionCache
to "normal" address_space_* operations for 2.9, as it is simpler than
undoing the virtio patches.
Backports commit 90c4fe5fc517a045e7a7cf2f23472e114042ca29 from qemu
The v7m CONTROL register bit 1 is SPSEL, which indicates
the stack being used. We were storing this information
not in v7m.control but in the separate v7m.other_sp
structure field. Unfortunately, the code handling reads
of the CONTROL register didn't take account of this, and
so if SPSEL was updated by an exception entry or exit then
a subsequent guest read of CONTROL would get the wrong value.
Using a separate structure field doesn't really gain us
anything in efficiency, so drop this unnecessary complexity
in favour of simply storing all the bits in v7m.control.
This is a migration compatibility break for M profile
CPUs only.
Backports commit abc24d86cc0364f402e438fae3acb14289b40734 from qemu
The 'name' parameter to memory_region_init_* had been marked as debug
only; however, vmstate_region_ram uses it as a parameter to
qemu_ram_set_idstr to set RAMBlock names, and these form part of the
migration stream.
Backports commit e8f5fe2de125a0bfbefbaa6a69af81f4817cb7a0 from qemu
Fix the design flaw demonstrated in the previous commit: new method
check_list() lets input visitors report that unvisited input remains
for a list, exactly like check_struct() lets them report that
unvisited input remains for a struct or union.
Implement the method for the qobject input visitor (straightforward),
and the string input visitor (less so, due to the magic list syntax
there). The opts visitor's list magic is even more impenetrable, and
all I can do there today is a stub with a FIXME comment. No worse
than before.
Backports commit a4a1c70dc759e5b81627e96564f344ab43ea86eb from qemu
The split between tests/test-qobject-input-visitor.c and
tests/test-qobject-input-strict.c now makes less sense than ever. The
next commit will take care of that.
Backports commit 048abb7b20c9f822ad9d4b730bade73b3311a47a from qemu
visit_optional() is to be called only between visit_start_struct() and
visit_end_struct(). Visitors that don't support struct visits,
i.e. don't implement start_struct(), end_struct(), have no use for it.
Clarify documentation.
The string input visitor doesn't support struct visits. Its
parse_optional() is therefore useless. Drop it.
Backports commit a8aec6de2ac1a5e36989fdfba29067b361009b75 from qemu
Error messages refer to nodes of the QObject being visited by name.
Trouble is the names are sometimes less than helpful:
* The name of the root QObject is whatever @name argument got passed
to the visitor, except NULL gets mapped to "null". We commonly pass
NULL. Not good.
Avoiding errors "at the root" mitigates. For instance,
visit_start_struct() can only fail when the visited object is not a
dictionary, and we commonly ensure it is beforehand.
* The name of a QDict's member is the member key. Good enough only
when this happens to be unique.
* The name of a QList's member is "null". Not good.
Improve error messages by referring to nodes by path instead, as
follows:
* The path of the root QObject is whatever @name argument got passed
to the visitor, except NULL gets mapped to "<anonymous>".
* The path of a root QDict's member is the member key.
* The path of a root QList's member is "[%u]", where %u is the list
index, starting at zero.
* The path of a non-root QDict's member is the path of the QDict
concatenated with "." and the member key.
* The path of a non-root QList's member is the path of the QList
concatenated with "[%u]", where %u is the list index.
For example, the incorrect QMP command
{ "execute": "blockdev-add", "arguments": { "node-name": "foo", "driver": "raw", "file": {"driver": "file" } } }
now fails with
{"error": {"class": "GenericError", "desc": "Parameter 'file.filename' is missing"}}
instead of
{"error": {"class": "GenericError", "desc": "Parameter 'filename' is missing"}}
and
{ "execute": "input-send-event", "arguments": { "device": "bar", "events": [ [] ] } }
now fails with
{"error": {"class": "GenericError", "desc": "Invalid parameter type for 'events[0]', expected: object"}}
instead of
{"error": {"class": "GenericError", "desc": "Invalid parameter type for 'null', expected: QDict"}}
Aside: calling the thing "parameter" is suboptimal for QMP, because
the root object is "arguments" there.
The qobject output visitor doesn't have this problem because it should
not fail. Same for dealloc and clone visitors.
The string visitors don't have this problem because they visit just
one value, whose name needs to be passed to the visitor as @name. The
string output visitor shouldn't fail anyway.
The options visitor uses QemuOpts names. Their name space is flat, so
the use of QDict member keys as names is fine. NULL names used with
roots and lists could conceivably result in bad error messages. Left
for another day.
Backports commit a9fc37f6bc3f2ab90585cb16493da9f6dcfbfbcf from qemu
The QERR_ macros are leftovers from the days of "rich" error objects.
QERR_QMP_BAD_INPUT_OBJECT, QERR_QMP_BAD_INPUT_OBJECT_MEMBER,
QERR_QMP_EXTRA_MEMBER are used in just one place now, except for one
use that has crept into qobject-input-visitor.c.
Drop these macros, to make the (bad) error messages more visible.
Backports commit 99fb0c53c038105bae68b02a3d9f1cbf7951ba10 from qemu
At the moment the ram device's memory regions are DEVICE_NATIVE_ENDIAN.
This is incorrect: the memory region is backed by an MMIO area on the host,
so the uint64_t data that MemoryRegionOps read from/write to this area
should be host-endian rather than target-endian. Hence, the current code
does not work when target and host endianness differ, which is the most
common case on PPC64. To fix it, this introduces DEVICE_HOST_ENDIAN for
the ram device.
This has been tested on PPC64 BE/LE host/guest in all possible combinations
including TCG.
Backports commit c99a29e702528698c0ce2590f06ca7ff239f7c39 from qemu
While the varargs approach was flexible, the original MTTCG work ended
up having to munge the bits into a bitmap so the data could be used in
deferred work helpers. Instead of hiding that in cputlb, we push the
change to the API to make it take a bitmap of MMU indexes instead.
For ARM some of the resulting flushes end up being quite long, so to aid
readability I've tended to move the index shifting to a new line so
all the bits being OR-ed together line up nicely, for example:
    tlb_flush_page_by_mmuidx(other_cs, pageaddr,
                             (1 << ARMMMUIdx_S1SE1) |
                             (1 << ARMMMUIdx_S1SE0));
Backports commit 0336cbf8532935d8e23c2aabf3e2ce2c0697b6ac from qemu
We know there will be cases where MTTCG won't work until additional work
is done in the front/back ends to support it. It will, however, be useful
to be able to turn it on.
As a result MTTCG will default to off unless the combination is
supported. However the user can turn it on for the sake of testing.
Backports commit 8d4e9146b3568022ea5730d92841345d41275d66 from qemu
This makes qemu_strtosz(), qemu_strtosz_mebi() and
qemu_strtosz_metric() similar to qemu_strtoi64(), except negative
values are rejected.
Backports commit f17fd4fdf0df3d2f3444399d04c38d22b9a3e1b7 from qemu
Most callers of qemu_strtosz_suffix() pass QEMU_STRTOSZ_DEFSUFFIX_B.
Capture the pattern in new qemu_strtosz().
Inline qemu_strtosz_suffix() into its only remaining caller.
Backports commit 466dea14e677555dd24465aca75d00a3537ad062 from qemu
With qemu_strtosz(), no suffix means mebibytes. It's used rarely.
I'm going to add a similar function where no suffix means bytes.
Rename qemu_strtosz() to qemu_strtosz_MiB() to make the name
qemu_strtosz() available for the new function.
Backports commit e591591b323772eea733de6027f5e8b50692d0ff from qemu
To parse numbers with metric suffixes, we use
qemu_strtosz_suffix_unit(nptr, &eptr, QEMU_STRTOSZ_DEFSUFFIX_B, 1000)
Capture this in a new function for legibility:
qemu_strtosz_metric(nptr, &eptr)
Replace test_qemu_strtosz_suffix_unit() by test_qemu_strtosz_metric().
Rename qemu_strtosz_suffix_unit() to do_strtosz() and give it internal
linkage.
Backports commit d2734d2629266006b0413433778474d5801c60be from qemu
The name qemu_strtoll() suggests conversion to long long, but it
actually converts to int64_t. Rename to qemu_strtoi64().
The name qemu_strtoull() suggests conversion to unsigned long long,
but it actually converts to uint64_t. Rename to qemu_strtou64().
Backports commit b30d188677456b17c1cd68969e08ddc634cef644 from qemu
float128_to_uint32_round_to_zero() is needed by the xscvqpuwz instruction
of PowerPC ISA 3.0.
Backports commit fd425037d25cecaaffdb3831697e0adc10ca2ba3 from qemu
Implement float128_to_uint64() and use that to implement
float128_to_uint64_round_to_zero().
This is required by the xscvqpudz instruction of PowerPC ISA 3.0.
Backports commit 2e6d85683576c970c714c1cc071dca742835b9d4 from qemu
Power ISA 3.0 introduces a few quadruple precision floating point
instructions that support the round-to-odd rounding mode. The
round-to-odd mode is explained as follows:
Let Z be the intermediate arithmetic result or the operand of a convert
operation. If Z can be represented exactly in the target format, the
result is Z. Otherwise the result is either Z1 or Z2 whichever is odd.
Here Z1 and Z2 are the next larger and smaller numbers representable
in the target format respectively.
Backports commit 9ee6f678f473007e252934d6acd09c24490d9d42 from qemu
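A minimal sketch of round-to-odd on a raw significand, using the usual
"sticky OR into the least significant bit" trick; the names are
illustrative, not taken from the patch:

    /* Discard the low `shift` bits of x with round-to-odd: if anything
     * was lost, force the result odd, landing on the odd one of the two
     * neighbours Z1/Z2 described above. */
    static inline uint64_t round_to_odd(uint64_t x, unsigned shift)
    {
        uint64_t lost = x & ((1ULL << shift) - 1);
        uint64_t r = x >> shift;

        return lost ? (r | 1) : r;
    }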
In BE32 mode, sub-word size watchpoints can fail to trigger because the
address of the access is adjusted in the opcode helpers before being
compared with the watchpoint registers. This patch reverses the address
adjustment before performing the comparison with the help of a new CPUClass
hook.
This version of the patch augments and tidies up comments a little.
Backports commit 40612000599e52e792d23c998377a0fa429c4036 from qemu
It's a familiar pattern: some code uses ARRAY_SIZE, then refactoring
changes the argument from an array to a pointer to a dynamically
allocated buffer. Code keeps compiling but any ARRAY_SIZE calls now
return the size of the pointer divided by element size.
Let's add build time checks to ARRAY_SIZE before we allow more
of these in the code-base.
Backports commit ed63ec0d22ccdce3b2222d9a514423b7fbba3a0d from qemu
QEMU_BUILD_BUG_ON uses a typedef in order to be safe
to use outside functions, but sometimes it's useful
to have a version that can be used within an expression.
Following what Linux does, introduce QEMU_BUILD_BUG_ON_ZERO,
which returns zero after checking the condition at build time.
Backports commit d757573e69f2ef58a4a7b41f6c55d65fa1e1c5c2 from qemu
There are theoretical concerns that some compilers might not trigger
build failures on attempts to define an array of size (x ? -1 : 1) where
x is a variable, and instead make it a variable-length array. Let's
rewrite it using a struct with a negative bit-field size instead, as
there are no dynamic bit-field sizes. This is similar to what Linux does.
Backports commit f291887e8eef5d37d31484638f6e62401b4b99a2 from qemu
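Sketched, this and the previous entry combine into something like the
following; close to what QEMU ended up with, but treat the details as an
assumption:

    /* A negative bit-field width is always a hard compile-time error,
     * never a variable-length array. */
    #define QEMU_BUILD_BUG_ON_STRUCT(x) \
        struct { int:(x) ? -1 : 1; }

    /* Expression form: evaluates to 0 when x is false, breaks the build
     * when x is true. */
    #define QEMU_BUILD_BUG_ON_ZERO(x) \
        (sizeof(QEMU_BUILD_BUG_ON_STRUCT(x)) - \
         sizeof(QEMU_BUILD_BUG_ON_STRUCT(x)))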
Some headers use QEMU_BUILD_BUG_ON. This causes a problem
if the C file including that header happens to have
QEMU_BUILD_BUG_ON at the same line number.
Fix using a widely available extension: __COUNTER__.
If unavailable, provide a stub.
Backports commit 60abf0a5e05134187e274ce5f32524ccf0cae1a6 from qemu
C11 allows errno to be clobbered by pretty much any library function
call, so in general callers need to take care to save errno before
calling other functions.
However, for error reporting functions this is rather awkward and can
make the code on the caller side more complicated than
necessary. error_setg_errno() already takes care of preserving errno
and some functions rely on that, so just promise that we continue to
do so in the future.
Backports commit 98cb89af4df7e1776ce418ed6167b6e214a64435 from qemu
We've currently got 18 architectures in QEMU, and thus 18 target-xxx
folders in the root folder of the QEMU source tree. More architectures
(e.g. RISC-V, AVR) are likely to be included soon, too, so the main
folder of the QEMU sources slowly gets quite overcrowded with the
target-xxx folders.
To disburden the main folder a little bit, let's move the target-xxx
folders into a dedicated target/ folder, so that target-xxx/ simply
becomes target/xxx/ instead.
Backports commit fcf5ef2ab52c621a4617ebbef36bf43b4003f4c0 from qemu
We have never had the concept of global TLB entries which would avoid
the flush, so we never actually use this flag. Drop it and make clear
that tlb_flush is the sledge-hammer it has always been.
Backports commit d10eb08f5d8389c814b554d01aa2882ac58221bf from qemu
This patch introduces a helper to query the iotlb entry for a
possible iova. This will be used by the later device IOTLB API to enable
the capability for a dataplane (e.g. vhost) to query the IOTLB.
Backports commit 052c8fa9983f553fdfa0d61034774070dd639c2b from qemu
Device models often have to perform multiple accesses to a single
memory region that is known in advance, but would like to use "DMA-style"
functions instead of address_space_map/unmap. This can happen
for example when the data has to undergo endianness conversion.
Introduce a new data structure to cache the result of
address_space_translate without forcing usage of a host address
like address_space_map does.
Backports commit 1f4e496e1fc2eb6c8bf377a0f9695930c380bfd3 from qemu
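A hedged usage sketch of the resulting API; the names come from the QEMU
commit, and the exact signatures should be treated as assumptions here:

    MemoryRegionCache cache = MEMORY_REGION_CACHE_INVALID;
    uint8_t buf[8];

    /* translate the region once... */
    if (address_space_cache_init(&cache, as, addr, sizeof(buf), false) >= 0) {
        /* ...then service repeated accesses through the cache */
        address_space_read_cached(&cache, 0, buf, sizeof(buf));
        address_space_cache_destroy(&cache);
    }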
Templatize the address_space_* and *_phys functions, so that we can add
similar functions in the next patch that work with a lightweight,
cache-like version of address_space_map/unmap.
Backports commit 0ce265ffef87f19f4dd1ff0663e09a63d66ae408 from qemu
When icount is active, tb_add_jump is surprisingly called with an
out-of-bounds basic block index. I have no idea how that can work,
but it does not seem like a good idea. Clear *last_tb for all
TB_EXIT_ICOUNT_EXPIRED cases, even when all you have to do is
refill icount_extra.
Backports commit d8dea6fbcbed177ca5d23ab77b3834a9437f0e88 from qemu
In the user emulation code path, tlb_vaddr_to_host erroneously passed
vaddr as the guest address to be translated, instead of addr, the parameter
which actually contained the guest address.
This resulted in incorrect addresses being used when emulating block copy
(mvc/mvpg) and block clear (xc) instructions for the s390x target.
Backports commit c2a85316902e67530da9d6548139fcce73c0cac6 from qemu
softmmu requires more functions to be thread-safe, because translation
blocks can be invalidated from e.g. notdirty callbacks. Probably the
same holds for user-mode emulation; it's just that no one has ever
tried to produce coherent locking there.
This patch will guide the introduction of more tb_lock and tb_unlock
calls for system emulation.
Note that after this patch some (most) of the mentioned functions are
still called outside tb_lock/tb_unlock. The next one will rectify this.
Backports commit 7d7500d99895f888f97397ef32bb536bb0df3b74 from qemu
This adds asserts to check the locking on the various translation
engines structures. There are two sets of structures that are protected
by locks.
The first is the l1map and PageDesc structures used to track which
translation blocks are associated with which physical addresses. In
user-mode this is covered by the mmap_lock.
The second is the TB-context-related structures, which are protected by
tb_lock, which is also user-mode only.
Currently the asserts do nothing in SoftMMU mode but this will change
for MTTCG.
Backports commit 301e40ed8005306c009978be295ed9a4b725178b from qemu
Force the use of cmpxchg16b on x86_64.
Wikipedia suggests that only very old AMD64 (circa 2004) did not have
this instruction. Further, it's required by Windows 8 so no new cpus
will ever omit it.
If we truly care about these, then we could check this at startup time
and then avoid executing paths that use it.
Backports commit 7ebee43ee3e2fcd7b5063058b7ef74bc43216733 from qemu
Add all of cmpxchg, op_fetch, fetch_op, and xchg.
Handle both endiannesses, and sizes up to 8 bytes.
Handle expanding non-atomically, when emulating in serial.
Backports commit c482cb117cc418115ca9c6d21a7a2315414c0a40 from qemu
Probe for whether the specified guest write access is permitted.
If it is not permitted then an exception will be taken in the same
way as if this were a real write access (and we will not return).
Otherwise the function will return, and there will be a valid
entry in the TLB for this access.
Backports commit 3b4afc9e75ab1a95f33e41f462921093f8a109c4 from qemu
When we cannot emulate an atomic operation within a parallel
context, this exception allows us to stop the world and try
again in a serial context.
Backports commit fdbc2b5722f6092e47181a947c90fd4bdcc1c121 from qemu
Also backports parts of commit 02d57ea115b7669f588371c86484a2e8ebc369be
Allows Int128 to be used more generally, rather than having to
begin with 64-bit inputs and accumulate.
Backports commit 1edaeee0955fba7d834b7c8f4e372e7eae030745 from qemu
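For instance, with the int128_make128() helper this brings in, a full
128-bit value can be built directly rather than accumulated (a sketch;
lo/hi argument order per the QEMU API):

    Int128 x = int128_make128(lo, hi);   /* from two 64-bit halves */
    Int128 y = int128_make64(42);        /* the old 64-bit starting point */
    Int128 sum = int128_add(x, y);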
While the check against sizeof(void *) is appropriate for
normal usage within qemu, there are places in which we want
wider operations and have checked for their existence.
Backports commit 84bca3927b36fb1d9a2ca85cbbdf9023d2b84678 from qemu
Making these function-like rather than object-like macros will
prevent later problems with complex macro expansion.
Backports commit d1a9f2d12fcfc942924956fbe321aedf4226ccb7 from qemu
The QmpOutputVisitor has no direct dependency on QMP. It is
valid to use it anywhere that one wants a QObject. Rename it
to better reflect its functionality as a generic QAPI
to QObject converter.
The commit before previous renamed the files, this one renames C
identifiers.
Backports commit 7d5e199ade76c53ec316ab6779800581bb47c50a from qemu
The QmpInputVisitor has no direct dependency on QMP. It is
valid to use it anywhere that one has a QObject. Rename it
to better reflect its functionality as a generic QObject
to QAPI converter.
The previous commit renamed the files, this one renames C identifiers.
Backports commit 09e68369a88d7de0f988972bf28eec1b80cc47f9 from qemu
The QMP visitors have no direct dependency on QMP. It is
valid to use them anywhere that one has a QObject. Rename them
to better reflect their functionality as a generic QObject
to QAPI converter.
This is the first of three parts: rename the files. The next two
parts will rename C identifiers. The split is necessary to make git
rename detection work.
Backports commit b3db211f3c80bb996a704d665fe275619f728bd4 from qemu
Support target CPUs having a page size which isn't known
at compile time. To use this, the CPU implementation should:
* define TARGET_PAGE_BITS_VARY
* not define TARGET_PAGE_BITS
* define TARGET_PAGE_BITS_MIN to the smallest value it
might possibly want for TARGET_PAGE_BITS
* call set_preferred_target_page_bits() in its realize
function to indicate the actual preferred target page
size for the CPU (and report any error from it)
In CONFIG_USER_ONLY, the CPU implementation should continue
to define TARGET_PAGE_BITS appropriately for the guest
OS page size.
Machines which want to take advantage of having a page
size larger than TARGET_PAGE_BITS_MIN must
set the MachineClass minimum_page_bits field to a value
which they guarantee will be no greater than the preferred
page size for any CPU they create.
Note that changing the target page size by setting
minimum_page_bits is a migration compatibility break
for that machine.
For debugging purposes, attempts to use TARGET_PAGE_SIZE
before it has been finally confirmed will assert.
Backports commit 20bccb82ff3ea09bcb7c4ee226d3160cab15f7da from qemu
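A sketch of the recipe above for a hypothetical target; the constants
are illustrative, and set_preferred_target_page_bits() is assumed to
return false on conflict:

    /* target/xxx/cpu.h */
    #define TARGET_PAGE_BITS_VARY
    #define TARGET_PAGE_BITS_MIN 10    /* smallest size we might pick */

    /* in the CPU's realize function */
    if (!set_preferred_target_page_bits(12)) {
        error_setg(errp, "page size conflicts with another CPU");
        return;
    }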
This speeds up MEMORY_LISTENER_CALL noticeably. Right now,
with many PCI devices you have N regions added to M AddressSpaces
(M = # PCI devices with bus-master enabled) and each call looks
up the whole listener list, with at least M listeners in it.
Because most of the regions in N are BARs, which are also roughly
proportional to M, the whole thing is O(M^3). This changes it
to O(M^2), which is the best we can do without rewriting the
whole thing.
Backports commit 9a54635dcb51a3fcf7507af630168f514a8cd4e7 from qemu
This introduces load-acquire and store-release operations in QEMU.
For now, just use them as an implementation detail of atomic_mb_read
and atomic_mb_set.
Since docs/atomics.txt documents that atomic_mb_read only synchronizes
with an atomic_mb_set of the same variable, we can use the new implementation
everywhere instead of seq-cst loads and stores.
Backports commit 803cf26a9e019b5d2256a8edeb22e3538c4f3261 from qemu
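A minimal sketch of the pairing this provides, written against the
underlying gcc builtins:

    /* producer */
    data = 42;
    __atomic_store_n(&flag, 1, __ATOMIC_RELEASE);        /* store-release */

    /* consumer */
    while (!__atomic_load_n(&flag, __ATOMIC_ACQUIRE)) {  /* load-acquire */
        /* spin */
    }
    /* the release/acquire pair guarantees data == 42 is visible here */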
Only very modern GCC's actually set this define when building with the
ThreadSanitizer so this little typo slipped through.
Backports commit 23ea7f57949f2f5934f4d5bbc29fe321b3a7067b from qemu
Add some notes on the use of the relaxed atomic access helpers and their
importance for defined behaviour in C11's multi-threaded memory model.
Backports commit e653bc6b0ff645c25b8a2eb607c18a5c98b59db6 from qemu
CPUState is a fairly common pointer to pass to these helpers. This means
if you need other arguments for the async_run_on_cpu case you end up
having to do a g_malloc to stuff additional data into the routine. For
the current users this isn't a massive deal but for MTTCG this gets
cumbersome when the only other parameter is often an address.
This adds the typedef run_on_cpu_func for helper functions which take
an explicit CPUState * as the first parameter. All the users of
run_on_cpu and async_run_on_cpu have had their helpers updated to use
CPUState where available.
Backports commit e0eeb4a21a3ca4b296220ce4449d8acef9de9049 from qemu
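The typedef, plus a hypothetical helper showing why no wrapper struct is
needed when the only extra parameter is an address:

    typedef void (*run_on_cpu_func)(CPUState *cpu, void *data);

    /* hypothetical helper: the CPU arrives as a parameter, so only the
     * address has to travel through `data` */
    static void flush_page_work(CPUState *cpu, void *data)
    {
        tlb_flush_page(cpu, (target_ulong)(uintptr_t)data);
    }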
As discussed on the list [1], having a comment stating that this file
is "public domain" is arguably wrong and not legally binding. This patch
replaces that comment with a clear GPLv2+ license as proposed in [2].
[1] http://lists.nongnu.org/archive/html/qemu-devel/2016-09/msg06151.html
[2] http://lists.nongnu.org/archive/html/qemu-devel/2016-09/msg06217.html
Worth noting, compiler.h was originally created in commit 5c026320 by splitting
qemu-common.h. At the time, qemu-common.h was already GPLv2+.
Backports commit cc9d8a3b2c41c22fb09f90f3085e6036c199c3ca from qemu
The return address argument to the softmmu template helpers was
confused. In the legacy case, we wanted to indicate that there
is no return address, and so passed in NULL. However, we then
immediately subtracted GETPC_ADJ from NULL, resulting in a non-zero
value, indicating the presence of an (invalid) return address.
Push the GETPC_ADJ subtraction down to the only point it's required:
immediately before use within cpu_restore_state_from_tb, after all
NULL pointer checks have been completed.
This makes GETPC and GETRA identical. Remove GETRA as the lesser
used macro, replacing all uses with GETPC.
Backports commit 01ecaf438b1eb46abe23392c8ce5b7628b0c8cf5 from qemu
All operations that take a floatx80 as an operand need to have their
inputs checked for malformed encodings. In all of these cases, use the
function floatx80_invalid_encoding to perform the check. If an invalid
operand is found, raise an invalid operation exception, and then return
either NaN (for fp-typed results) or the integer indefinite value (the
minimum representable signed integer value, for int-typed results).
For the non-quiet comparison operations, this touches adjacent code in
order to pass style checks.
Backports the cast correction portion of commit d1eb8f2acba579830cf3798c3c15ce51be852c56 from qemu
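The encoding check itself boils down to the following; this is a sketch
of the softfloat helper as I recall it, so treat the details as an
assumption:

    static inline bool floatx80_invalid_encoding(floatx80 a)
    {
        /* unnormal/pseudo encodings: nonzero exponent but the explicit
         * integer bit (bit 63 of the significand) is clear */
        return (a.low & (1ULL << 63)) == 0 && (a.high & 0x7FFF) != 0;
    }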
Use the __atomic_*_n() primitives which take the value as argument. It
is not necessary to store the value locally before calling the
primitive, hence saving us a stack store and load.
Backports commit 89943de17c4e276f2c47f05b4604e8816a6a636c from qemu
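The difference, sketched:

    /* without _n: the value must first live in addressable storage */
    typeof(*ptr) tmp = val;
    __atomic_store(ptr, &tmp, __ATOMIC_SEQ_CST);

    /* with _n: pass the value directly, no stack store and load */
    __atomic_store_n(ptr, val, __ATOMIC_SEQ_CST);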
When invalidating a translation block, set an invalid flag into the
TranslationBlock structure first. It is also necessary to check whether
the target TB is still valid after acquiring 'tb_lock' but before calling
tb_add_jump() since TB lookup is to be performed out of 'tb_lock' in
future. Note that we don't have to check 'last_tb'; an already invalidated
TB will not be executed anyway and it is thus safe to patch it.
Backports commit 6d21e4208f382dd8ca1f7995a6dd9ea7ca281163 from qemu
With a vfio assigned device we lay down a base MemoryRegion registered
as an IO region, giving us read & write accessors. If the region
supports mmap, we lay down a higher priority sub-region MemoryRegion
on top of the base layer initialized as a RAM device pointer to the
mmap. Finally, if we have any quirks for the device (i.e. address
ranges that need additional virtualization support), we put another IO
sub-region on top of the mmap MemoryRegion. When this is flattened,
we now potentially have sub-page mmap MemoryRegions exposed which
cannot be directly mapped through KVM.
This is as expected, but a subtle detail of this is that we end up
with two different access mechanisms through QEMU. If we disable the
mmap MemoryRegion, we make use of the IO MemoryRegion and service
accesses using pread and pwrite to the vfio device file descriptor.
If the mmap MemoryRegion is enabled and results in one of these
sub-page gaps, QEMU handles the access as RAM, using memcpy to the
mmap. Using either pread/pwrite or the mmap directly should be
correct, but using memcpy causes us problems. I expect that not only
does memcpy not necessarily honor the original width and alignment in
performing a copy, but it potentially also uses processor instructions
not intended for MMIO spaces. It turns out that this has been a
problem for Realtek NIC assignment, which has such a quirk that
creates a sub-page mmap MemoryRegion access.
To resolve this, we disable memory_access_is_direct() for ram_device
regions since QEMU assumes that it can use memcpy for those regions.
Instead we access through MemoryRegionOps, which replaces the memcpy
with simple de-references of standard sizes to the host memory.
With this patch we attempt to provide unrestricted access to the RAM
device, allowing byte through qword access as well as unaligned
access. The assumption here is that accesses initiated by the VM are
driven by a device specific driver, which knows the device
capabilities. If unaligned accesses are not supported by the device,
we don't want them to work in a VM by performing multiple aligned
accesses to compose the unaligned access. A down-side of this
philosophy is that the xp command from the monitor attempts to use
the largest available access width, unaware of the underlying
device. Using memcpy had this same restriction, but at least now an
operator can dump individual registers, even if blocks of device
memory may result in access widths beyond the capabilities of a
given device (RTL NICs only support up to dword).
Backports commit 1b16ded6a512809f99c133a97f19026fe612b2de from qemu
Setting skip_dump on a MemoryRegion allows us to modify one specific
code path, but the restriction we're trying to address encompasses
more than that. If we have a RAM MemoryRegion backed by a physical
device, it not only restricts our ability to dump that region, but
also affects how we should manipulate it. Here we recognize that
MemoryRegions do not change to sometimes allow dumps and other times
not, so we replace setting the skip_dump flag with a new initializer
so that we know exactly the type of region to which we're applying
this behavior.
Backports commit ca83f87a66d19fdaabf23d4f5ebb49396fe232c1 from qemu
Rather than rely on recursion during the middle of register allocation,
lower indirect registers to loads and stores off the indirect base into
plain temps.
For an x86_64 host, with sufficient registers, this results in identical
code, modulo the actual register assignments.
For an i686 host, with insufficient registers, this means that temps can
be (temporarily) spilled to the stack in order to satisfy an allocation.
This is as opposed to the possibility of being unable to spill when a
register must be allocated for the indirect base in order to perform a spill.
Backports commit 5a18407f55ade924aa6397c9a043a9ffd59645fe from qemu
Instead of using -1 as the end of chain, use 0, and link through the 0
entry as a fully circular doubly-linked list.
Backports commit dcb8e75870e2de199db853697f8839cb603beefe from qemu
Make it obvious which macros are safe in which situations.
Useful since QEMU_ALIGN_UP and ROUND_UP both purport to do
the same thing, but differ on whether the alignment must be
a power of 2.
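For example (values worked out by hand):

    QEMU_ALIGN_UP(5, 3);   /* == 6: any alignment works */
    ROUND_UP(5, 3);        /* == 5: wrong, the bit-mask math assumes a
                            * power-of-2 alignment */
    ROUND_UP(5, 4);        /* == 8: fine, 4 is a power of 2 */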