Commit graph

644 commits

Author SHA1 Message Date
Peter Xu 3d5fa79305
bitmap: introduce bitmap_count_one()
Count how many bits are set in the bitmap.

Backports commit fc7deeea26af3d08f45bad85b8bd3fc3d790a090 from qemu
2018-03-05 01:08:29 -05:00
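
For illustration of the entry above: counting the set bits of a word-array bitmap boils down to popcounting each word, plus a masked popcount of any trailing partial word. This is a hedged sketch, not the backported implementation; the helper name and the use of __builtin_popcountl() are assumptions.

    #include <limits.h>

    #define BITS_PER_LONG_SK ((long)(sizeof(unsigned long) * CHAR_BIT))

    /* Sketch: number of set bits in the first nbits bits of map. */
    static long bitmap_count_one_sketch(const unsigned long *map, long nbits)
    {
        long i, count = 0;

        for (i = 0; i < nbits / BITS_PER_LONG_SK; i++) {
            count += __builtin_popcountl(map[i]);
        }
        if (nbits % BITS_PER_LONG_SK) {
            unsigned long mask = (1UL << (nbits % BITS_PER_LONG_SK)) - 1;
            count += __builtin_popcountl(map[i] & mask);
        }
        return count;
    }
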
Cornelia Huck 6f9b7a9363
cpu: drop old comments describing members
These comments are obviously stale.

Backports commit 6fda014e1a65474c4877b36cc42e8a0f377817a4 from qemu
2018-03-05 00:03:31 -05:00
Eric Blake 3017797f7d
osdep.h: Prohibit disabling assert() in supported builds
We already have several files that knowingly require assert()
to work, sometimes because refactoring the code for proper
error handling has not been tackled yet; there are probably
other files that have a similar situation but with no comments
documenting the same. In fact, we have places in migration
that handle untrusted input with assertions, where disabling
the assertions risks a worse security hole than the current
behavior of losing the guest to SIGABRT when migration fails
because of the assertion. Promote our current per-file
safety-valve to instead be project-wide, and expand it to also
cover glib's g_assert().

Note that we do NOT want to encourage 'assert(side-effects);'
(that is a bad practice that prevents copy-and-paste of code to
other projects that CAN disable assertions; plus it costs
unnecessary reviewer mental cycles to remember whether a project
special-cases the crippling of asserts); and we would LIKE to
fix migration to not rely on asserts (but that takes a big code
audit). But in the meantime, we DO want to send a message
that anyone that disables assertions has to tweak code in order
to compile, making it obvious that they are taking on additional
risk that we are not going to support. At the same time, leave
comments mentioning NDEBUG in files that we know still need to
be scrubbed, so there is at least something to grep for.

It would be possible to come up with some other mechanism for
doing runtime checking by default, but which does not abort
the program on failure, while leaving side effects in place
(unlike how crippling assert() avoids even the side effects),
perhaps under the name q_verify(); but it was not deemed worth
the effort (developers should not have to learn a replacement
when the standard C macro works just fine, and it would be a lot
of churn for little gain). The patch specifically uses #error
rather than #warning so that a user is forced to tweak the header
to acknowledge the issue, even when not using a -Werror
compilation.

Backports commit 262a69f4282e44426c7a132138581d400053e0a1 from qemu
2018-03-05 00:01:57 -05:00
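
The enforcement described above is, in essence, a build-time check in the common header; a minimal sketch of the idea follows (the exact error wording in osdep.h may differ). NDEBUG is the standard macro that disables assert(), and G_DISABLE_ASSERT is GLib's equivalent for g_assert().

    /* Sketch: refuse to build when assertions have been compiled out. */
    #ifdef NDEBUG
    #error building with NDEBUG is not supported
    #endif
    #ifdef G_DISABLE_ASSERT
    #error building with G_DISABLE_ASSERT is not supported
    #endif
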
Richard Henderson bc23bab79d
tcg/s390: Use constant pool for movi
Split out maybe_out_small_movi for use with other operations
that want to add to the constant pool.

Backports commit 28eef8aaece5e83df4568d9842ab9611ec130b2c from qemu
2018-03-04 22:32:04 -05:00
Richard Henderson 31b8b67cd3
tcg: Move USE_DIRECT_JUMP discriminator to tcg/cpu/tcg-target.h
Replace the USE_DIRECT_JUMP ifdef with a TCG_TARGET_HAS_direct_jump
boolean test. Replace the tb_set_jmp_target1 ifdef with an unconditional
function tb_target_set_jmp_target.

While we're touching all backends, add a parameter for tb->tc_ptr;
we're going to need it shortly for some backends.

Move tb_set_jmp_target and tb_add_jump from exec-all.h to cpu-exec.c.

Backports commit a85833933628384d74ec412024d55cf012640287 from qemu
2018-03-04 21:52:35 -05:00
Peter Maydell 2070ef1c37
boards.h: Define new flag ignore_memory_transaction_failures
Define a new MachineClass field ignore_memory_transaction_failures.
If this flag is true then the CPU will ignore memory transaction
failures which should cause the CPU to take an exception due to an
access to an unassigned physical address; the transaction will
instead return zero (for a read) or be ignored (for a write). This
should be set only by legacy board models which rely on the old
RAZ/WI behaviour for handling devices that QEMU does not yet model.
New board models should instead use "unimplemented-device" for all
memory ranges where the guest will attempt to probe for a device that
QEMU doesn't implement and a stub device is required.

We need this for ARM boards, where we're about to implement support for
generating external aborts on memory transaction failures. Too many
of our legacy board models rely on the RAZ/WI behaviour and we
would break currently working guests when their "probe for device"
code provoked an external abort rather than a RAZ.

Backports commit ed860129acd3fcd0b1e47884e810212aaca4d21b from qemu
2018-03-04 21:27:15 -05:00
Lluís Vilanova ed7225e685
tcg: Add generic translation framework
Backports commit bb2e0039dc07177f928f9fe24758967da02d60a2 from qemu
2018-03-04 14:31:16 -05:00
Paolo Bonzini 6997a5a090
gen-icount: check cflags instead of use_icount global
Backports commit cd42d5b23691ad73edfd6dbcfc935a960a9c5a65 from qemu
2018-03-04 14:26:26 -05:00
Lluís Vilanova 3a196c62ae
target: [tcg] Use a generic enum for DISAS_ values
Used later. An enum makes expected values explicit and
bounds the value space of switches.

Backports commit 77fc6f5e28667634916f114ae04c6029cd7b9c45 from qemu
2018-03-04 14:08:43 -05:00
Richard Henderson b8a16f841a
tcg: Add generic DISAS_NORETURN
This will allow some amount of cleanup to happen before
switching the backends over to enum DisasJumpType.

Backports commit 5dc66895b0113034cd37fd5e65911d7959fc26a9 from qemu
2018-03-04 13:49:18 -05:00
Peter Maydell c44d323359
cpu: Define new cpu_transaction_failed() hook
Currently we have a rather half-baked setup for allowing CPUs to
generate exceptions on accesses to invalid memory: the CPU has a
cpu_unassigned_access() hook which the memory system calls in
unassigned_mem_write() and unassigned_mem_read() if the current_cpu
pointer is non-NULL. This was originally designed before we
implemented the MemTxResult type that allows memory operations to
report a success or failure code, which is why the hook is called
right at the bottom of the memory system. The major problem with
this is that it means that the hook can be called even when the
access was not actually done by the CPU: for instance if the CPU
writes to a DMA engine register which causes the DMA engine to begin
a transaction which has been set up by the guest to operate on
invalid memory then this will cause the CPU to take an exception
incorrectly. Another minor problem is that currently if a device
returns a transaction error then this won't turn into a CPU exception
at all.

The right way to do this is to allow the CPU to respond
to memory system transaction failures at the point where the
CPU specific code calls into the memory system.

Define a new QOM CPU method and utility function
cpu_transaction_failed() which is called in these cases.
The functionality here overlaps with the existing
cpu_unassigned_access() because individual target CPUs will
need some work to convert them to the new system. When this
transition is complete we can remove the old cpu_unassigned_access()
code.

Backports commit 0dff0939f6fc6a7abd966d4295f06a06d7a01df9 from qemu
2018-03-04 13:11:50 -05:00
Peter Maydell 26c8f31d9e
memory.h: Move MemTxResult type to memattrs.h
Move the MemTxResult type to memattrs.h. We're going to want to
use it in cpu/qom.h, which doesn't want to include all of
memory.h. In practice MemTxResult and MemTxAttrs are pretty
closely linked since both are used for the new-style
read_with_attrs and write_with_attrs callbacks, so memattrs.h
is a reasonable home for this rather than creating a whole
new header file for it.

Backports commit 3114d092b1740f9db9aa559aeb48ee387011e1da from qemu
2018-03-04 13:10:47 -05:00
Eduardo Habkost 382022929e
cpu: cpu_by_arch_id() helper
The helper can be used for CPU object lookup using the CPU's
arch-specific ID (the one returned by CPUClass::get_arch_id()).

Backports commit 5ce46cb34eecec0bc94a4b1394763f9a1bbe20c3 from qemu
2018-03-04 12:16:39 -05:00
Alexey Kardashevskiy e723b8dd49
memory: Open code FlatView rendering
We are going to share FlatView's between AddressSpace's and per-AS
memory listeners won't suit the purpose anymore so open code
the dispatch tree rendering.

Since there is a good chance that dispatch_listener was the only
listener, this avoids address_space_update_topology_pass() if there are
no registered listeners; this should improve starting time.

This should cause no behavioural change.

Backports commit 1b04a1580917d9e41fd37ca62cbff9b4bf061e96 from qemu
2018-03-04 02:06:48 -05:00
Eric Blake be742759b0
osdep: Fix ROUND_UP(64-bit, 32-bit)
When using bit-wise operations that exploit the power-of-two
nature of the second argument of ROUND_UP(), we still need to
ensure that the mask is as wide as the first argument (done
by using a ternary to force proper arithmetic promotion).
Unpatched, ROUND_UP(2ULL*1024*1024*1024*1024, 512U) produces 0,
instead of the intended 2TiB, because negation of an unsigned
32-bit quantity followed by widening to 64-bits does not
sign-extend the mask.

Broken since its introduction in commit 292c8e50 (v1.5.0).
Callers that passed the same width type to both macro parameters,
or that had other code to ensure the first parameter's maximum
runtime value did not exceed the second parameter's width, are
unaffected, but I did not audit to see which (if any) existing
clients of the macro could trigger incorrect behavior (I found
the bug while adding a new use of the macro).

While preparing the patch, checkpatch complained about poor
spacing, so I also fixed that here and in the nearby DIV_ROUND_UP.

Backports commit 33a599667a9e70588483a31286dfff8cfc27d513 from qemu
2018-03-04 01:54:09 -05:00
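
A simplified sketch of the promotion trick described above (illustrative macro name, not the verbatim osdep.h definition): the ternary never evaluates n, but it makes the divisor adopt the arithmetic type of n, so the negation that forms the mask happens at full width.

    #define ROUND_UP_SKETCH(n, d) (((n) + (d) - 1) & -(0 ? (n) : (d)))

    /* e.g. ROUND_UP_SKETCH(2ULL * 1024 * 1024 * 1024 * 1024, 512U) now
     * yields 2 TiB instead of 0. */
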
Michael S. Tsirkin fd472c53c6
Revert "cpu: add APIs to allocate/free CPU environment"
This reverts commit e2a7f28693aea7e194ec1435697ec4feb24f8a6f.

This was not supposed to go upstream yet. Reverting.

Backports commit cde0a63ad721dbb538419a00f9405587680be436 from qemu
2018-03-04 01:42:49 -05:00
Michael S. Tsirkin 71bf994214
cpu: add APIs to allocate/free CPU environment
These will be implemented and then used by follow-up patches.

Backports commit e2a7f28693aea7e194ec1435697ec4feb24f8a6f from qemu
2018-03-04 01:39:09 -05:00
Lluís Vilanova 32b3c3815d
tcg: Pass generic CPUState to gen_intermediate_code()
Needed to implement a target-agnostic gen_intermediate_code()
in the future.

Backports commit 9c489ea6bed134fecfd556b439c68bba48fbe102 from qemu
2018-03-03 23:34:18 -05:00
Richard Henderson fc52eea5e2
tcg: Expand glue macros before stringifying helper names
Backports commit 44368ac62dc5ba014b68b2c1a8ec6fedc3242a5d from qemu
2018-03-03 23:07:21 -05:00
Alex Bennée 7d02489baf
include/exec/exec-all: document common exit conditions
As a precursor to later patches attempt to come up with a more
concrete wording for what each of the common exit cases would be.

Backports commit df0311e634828fdc99ca59352aef68503d631aad from qemu
2018-03-03 22:31:28 -05:00
Peter Maydell 3bd5694a0a
memory: Rename memory_region_init_rom() and _rom_device() to _nomigrate()
Rename memory_region_init_rom() to memory_region_init_rom_nomigrate()
and memory_region_init_rom_device() to
memory_region_init_rom_device_nomigrate().

Backports commit b59821a95bd1d7cb4697fd7748725c910582e0e7 from qemu
2018-03-03 22:29:01 -05:00
Peter Maydell 7b0027a828
memory: Rename memory_region_init_ram() to memory_region_init_ram_nomigrate()
Rename memory_region_init_ram() to memory_region_init_ram_nomigrate().
This leaves the way clear for us to provide a memory_region_init_ram()
which does handle migration.

Backports commit 1cfe48c1ce219b60a9096312f7a61806fae64ab3 from qemu
2018-03-03 22:25:39 -05:00
Peter Maydell 152c56f6a9
memory: Document that the RAM MR initializers do not handle migration
The various functions for initializing RAM MemoryRegions do not do
anything to cause the data in the MemoryRegion to be migrated.
Note in their documentation comments that this is the responsibility
of the caller.

(We will shortly add a new function that *does* do this for you.)

Backports commit a5c0234bb2754f5248e67929a34c843dbe039da5 from qemu
2018-03-03 22:20:32 -05:00
Peter Maydell 3c2d3d8363
include/hw/boards.h: Document memory_region_allocate_system_memory()
Add a documentation comment for memory_region_allocate_system_memory().

In particular, the reason for this function's existence and the
requirement on board code to call it exactly once are non-obvious.

Backports commit 09ad643823dcda0a86eddce1291c28d0ccb09a3b from qemu
2018-03-03 22:18:49 -05:00
Igor Mammedov fe4152c6a5
qom: enforce readonly nature of link's check callback
link's check callback is supposed to verify/permit setting it,
however currently nothing prevents the callback from misusing it
and modifying the target object from within.
Make sure that readonly semantics are checked by compiler
to prevent callback's misuse.

Backports commit 8f5d58ef2c92d7b82d9a6eeefd7c8854a183ba4a from qemu
2018-03-03 22:17:20 -05:00
Pranith Kumar d0a70720a3
Revert "exec.c: Fix breakpoint invalidation race"
Now that we have proper locking after MTTCG patches have landed, we
can revert the commit. This reverts commit

a9353fe897ca2687e5b3385ed39e3db3927a90e0.

Backports commit 406bc339b0505fcfc2ffcbca1f05a3756e338a65 from qemu
2018-03-03 22:14:35 -05:00
Yang Zhong 1135db176f
tcg: add CONFIG_TCG guards in headers
Add CONFIG_TCG around TLB-related functions and structure declarations.
Some of these functions are defined in ./accel/tcg/cputlb.c, which will
not be linked in if TCG is disabled, and have no stubs; therefore, their
callers will also be compiled out for --disable-tcg.

Backports commit b11ec7f2e44b285a3967d629b55d1a6970b06787 from qemu
2018-03-03 21:37:52 -05:00
Yang Zhong d70c141675
tcg: move page_size_init() function
translate-all.c will be disabled if tcg is disabled in the build,
so the page_size_init() function and related variables will be moved
to the exec.c file.

Backports commit a0be0c585f5dcc4d50a37f6a20d3d625c5ef3a2c from qemu
2018-03-03 21:30:08 -05:00
Thomas Huth cf5d583ef0
cpu: Introduce a wrapper for tlb_flush() that can be used in common code
Commit 1f5c00cfdb8114c ("qom/cpu: move tlb_flush to cpu_common_reset")
moved the call to tlb_flush() from the target-specific reset handlers
into the common code qom/cpu.c file, and protected the call with
"#ifdef CONFIG_SOFTMMU" to avoid that it is called for linux-user
only targets. But since qom/cpu.c is common code, CONFIG_SOFTMMU is
*never* defined here, so the tlb_flush() was simply never executed
anymore. Fix it by introducing a wrapper for tlb_flush() in a file
that is re-compiled for each target, i.e. in translate-all.c.

Backports commit 2cd53943115be5118b5b2d4b80ee0a39c94c4f73 from qemu
2018-03-03 21:24:55 -05:00
Emilio G. Cota f66e74d65b
tcg: consistently access cpu->tb_jmp_cache atomically
Some code paths can lead to atomic accesses racing with memset()
on cpu->tb_jmp_cache, which can result in torn reads/writes
and is undefined behaviour in C11.

These torn accesses are unlikely to show up as bugs, but from code
inspection they seem possible. For example, tb_phys_invalidate does:
    /* remove the TB from the hash list */
    h = tb_jmp_cache_hash_func(tb->pc);
    CPU_FOREACH(cpu) {
        if (atomic_read(&cpu->tb_jmp_cache[h]) == tb) {
            atomic_set(&cpu->tb_jmp_cache[h], NULL);
        }
    }
Here atomic_set might race with a concurrent memset (such as the
ones scheduled via "unsafe" async work, e.g. tlb_flush_page) and
therefore we might end up with a torn pointer (or who knows what,
because we are under undefined behaviour).

This patch converts parallel accesses to cpu->tb_jmp_cache to use
atomic primitives, thereby bringing these accesses back to defined
behaviour. The price to pay is to potentially execute more instructions
when clearing cpu->tb_jmp_cache, but given how infrequently they happen
and the small size of the cache, the performance impact I have measured
is within noise range when booting debian-arm.

Note that under "safe async" work (e.g. do_tb_flush) we could use memset
because no other vcpus are running. However I'm keeping these accesses
atomic as well to keep things simple and to avoid confusing analysis
tools such as ThreadSanitizer.

Backports commit f3ced3c59287dabc253f83f0c70aa4934470c15e from qemu
2018-03-03 21:12:36 -05:00
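
For the clearing side mentioned above, the conversion away from memset() amounts to a per-entry relaxed atomic store. A rough sketch under the names quoted in the message (the wrapper function itself is invented for illustration):

    static void tb_jmp_cache_clear_sketch(CPUState *cpu)
    {
        unsigned int i;

        /* Atomic stores instead of memset(), so concurrent atomic_read()s
         * of the same slots stay within defined behaviour. */
        for (i = 0; i < TB_JMP_CACHE_SIZE; i++) {
            atomic_set(&cpu->tb_jmp_cache[i], NULL);
        }
    }
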
Emilio G. Cota 1a4e5da043
gen-icount: use tcg_ctx.tcg_env instead of cpu_env
We are relying on cpu_env being defined as a global, yet most
targets (i.e. all but arm/a64) have it defined as a local variable.
Luckily all of them use the same "cpu_env" name, but really
compilation shouldn't break if the name of that local variable
changed.

Fix it by using tcg_ctx.tcg_env, which all targets set in their
translate_init function. This change also helps paving the way
for the upcoming "translation loop common to all targets" work.

Backports commit 53f6672bcf57d82b794a2cc3a3469be7d35c8653 from qemu
2018-03-03 21:08:58 -05:00
Laurent Vivier 4e8e8572c3
softfloat: define floatx80_round()
Add a function to round a floatx80 to the defined precision
(floatx80_rounding_precision)

Backports commit 0f72129281765ed64d26353284059f2bdcde7a23 from qemu
2018-03-03 20:57:27 -05:00
Marc-André Lureau ca25248ecd
object: add uint property setter/getter
Backports commit 3152779cd63ba41331ef41659406f65b03e7911a from qemu
2018-03-03 18:43:17 -05:00
Marc-André Lureau 6ca6050206
qnum: add uint type
In order to store integer values between INT64_MAX and UINT64_MAX, add
a uint64_t internal representation.

Backports commit 61a8f418b26a2d974e38e4ae55020aca8d402d88 from qemu
2018-03-03 18:37:56 -05:00
Marc-André Lureau a57d8a5b50
qapi: Remove visit_start_alternate() parameter promote_int
Before the previous commit, parameter promote_int = true made
visit_start_alternate() with an input visitor avoid QTYPE_QINT
variants and create QTYPE_QFLOAT variants instead. This was used
where QTYPE_QINT variants were invalid.

The previous commit fused QTYPE_QINT with QTYPE_QFLOAT, rendering
promote_int useless and unused.

Backports commit 60390d2dc85ffade8981ca41e02335cb07353a6d from qemu
2018-03-03 18:34:35 -05:00
Marc-André Lureau dd77730d49
qapi: merge QInt and QFloat in QNum
We would like to use a same QObject type to represent numbers, whether
they are int, uint, or floats. Getters will allow some compatibility
between the various types if the number fits other representations.

Add a few more tests while at it.

Backports commit 01b2ffcedd94ad7b42bc870e4c6936c87ad03429 from qemu
2018-03-03 18:16:28 -05:00
Markus Armbruster e9174563be
qapi: Document intended use of @name within alternate visits
Backports commit ed0ba0f47e8cb6d924db0a54090bbb7b095fe9ea from qemu
2018-03-03 17:37:12 -05:00
Markus Armbruster 5ab0d5af81
qapi: New QAPI_CLONE_MEMBERS()
QAPI_CLONE() returns a newly allocated QAPI object. Inconvenient when
we want to clone into an existing object. QAPI_CLONE_MEMBERS() does
exactly that.

Backports commit 4626a19c86c30d96cedbac2bd44ef8103303cb37 from qemu
2018-03-03 17:36:02 -05:00
Eric Blake 734778da93
qobject: Add helper macros for common scalar insertions
Rather than making lots of callers wrap a scalar in a QInt, QString,
or QBool, provide helper macros that do the wrapping automatically.

Update the Coccinelle script to make mass conversions easy, although
the conversion itself will be done as separate patches to ease
review and backport efforts.

Backports commit a92c21591b5bb9543996538f14854ca6b528318b from qemu
2018-03-03 17:33:30 -05:00
Richard Henderson 68275ba6f3
tcg/arm: Use indirect branch for goto_tb
Backports commit 3fb53fb4d12f2e7833bd1659e6013237b130ef20 from qemu
2018-03-03 17:11:18 -05:00
Emilio G. Cota d3ada2feb5
tcg: allocate TB structs before the corresponding translated code
Allocating an arbitrarily-sized array of tbs results in either
(a) a lot of memory wasted or (b) unnecessary flushes of the code
cache when we run out of TB structs in the array.

An obvious solution would be to just malloc a TB struct when needed,
and keep the TB array as an array of pointers (recall that tb_find_pc()
needs the TB array to run in O(log n)).

Perhaps a better solution, which is implemented in this patch, is to
allocate TB's right before the translated code they describe. This
results in some memory waste due to padding to have code and TBs in
separate cache lines--for instance, I measured 4.7% of padding in the
used portion of code_gen_buffer when booting aarch64 Linux on a
host with 64-byte cache lines. However, it can allow for optimizations
in some host architectures, since TCG backends could safely assume that
the TB and the corresponding translated code are very close to each
other in memory. See this message by rth for a detailed explanation:

https://lists.gnu.org/archive/html/qemu-devel/2017-03/msg05172.html
Subject: Re: GSoC 2017 Proposal: TCG performance enhancements

Backports commit 6e3b2bfd6af488a896f7936e99ef160f8f37e6f2 from qemu
2018-03-03 17:05:49 -05:00
Emilio G. Cota 7d0440dec4
tb-hash: improve tb_jmp_cache hash function in user mode
Optimizations to cross-page chaining and indirect branches make
performance more sensitive to the hit rate of tb_jmp_cache.
The constraint of reserving some bits for the page number
lowers the achievable quality of the hashing function.

However, user-mode does not have this requirement. Thus,
with this change we use for user-mode a hashing function that
is both faster and of better quality than the previous one.

Measurements:

Note: baseline (i.e. speedup == 1x) is QEMU v2.9.0.

- SPECint06 (test set), x86_64-linux-user. Host: Intel i7-6700K @ 4.00GHz

[ASCII bar chart, garbled in this copy: speedup over the QEMU v2.9.0 baseline, roughly 0.8x-2.2x, for the 'jr', 'jr+multhash' and 'jr+hash' configurations across the SPECint06 benchmarks and their harmonic mean. See the png link below.]
png: http://imgur.com/4UXTrEc

Here I also tried the hash function suggested by Paolo ("multhash"):

return ((uint64_t) (pc * 2654435761) >> 32) & (TB_JMP_CACHE_SIZE - 1);

As you can see it is just as good as the other new function ("hash"),
which is what I ended up going with.

- SPECint06 (train set), x86_64-linux-user. Host: Intel i7-6700K @ 4.00GHz

[ASCII bar chart, garbled in this copy: speedup over the baseline, roughly 1x-2.6x, for the 'jr' and 'jr+hash' configurations across the SPECint06 benchmarks and their harmonic mean. See the png link below.]
png: http://imgur.com/ArCbHqo

- NBench, x86_64-linux-user. Host: Intel i7-6700K @ 4.00GHz

[ASCII bar chart, garbled in this copy: speedup over the baseline, roughly 0.96x-1.12x, for the 'jr' and 'jr+hash' configurations across the NBench tests and their harmonic mean. See the png link below.]
png: http://imgur.com/ZXFX0hJ

- NBench, arm-linux-user. Host: Intel i7-4790K @ 4.00GHz

[ASCII bar chart, garbled in this copy: speedup over the baseline, roughly 0.9x-1.3x, for the 'jr' and 'jr+hash' configurations across the NBench tests and their harmonic mean. See the png link below.]
png: http://imgur.com/FfD27ey

Backports commit 6f1653180f5701c6a8f1b35b89a80b1e3260928e from qemu
2018-03-03 14:11:29 -05:00
Emilio G. Cota 8f4f15e5f5
tcg: Introduce goto_ptr opcode and tcg_gen_lookup_and_goto_ptr
Instead of exporting goto_ptr directly to TCG frontends, export
tcg_gen_lookup_and_goto_ptr(), which calls goto_ptr with the pointer
returned by the lookup_tb_ptr() helper. This is the only use case
we have for goto_ptr and lookup_tb_ptr, so having this function is
very convenient. Furthermore, it trivially allows us to avoid calling
the lookup helper if goto_ptr is not implemented by the backend.

Backports commit cedbcb01529cb6cf9a2289cdbebbc63f6149fc18 from qemu
2018-03-02 21:05:18 -05:00
Richard Henderson 23d8f5fba2
qemu/atomic: Loosen restrictions for 64-bit ILP32 hosts
We need to coordinate with the TCG_OVERSIZED_GUEST test in cputlb.c,
and allow 64-bit atomics even though sizeof(void *) == 4.

Backports commit 374aae653499f4d405caf32b7fff0c8639113fe4 from qemu
2018-03-02 20:06:39 -05:00
Peter Xu fce1b469e5
memory: tune last param of iommu_ops.translate()
This patch converts the old "is_write" bool into IOMMUAccessFlags. The
difference is that "is_write" can only express either read/write, but
sometimes what we really want is "none" here (neither read nor write).
Replay is a good example - during replay, we should not check any RW
permission bits since that's not an actual IO at all.

Backports commit bf55b7afce53718ef96f4e6616da62c0ccac37dd from qemu
2018-03-02 18:59:12 -05:00
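
The flag type replacing the bool is, in spirit, a small read/write permission bitmask with an explicit "none" value. One plausible shape, as a sketch (the exact names and values in the header may differ):

    typedef enum {
        IOMMU_NONE = 0,   /* e.g. replay: walk the mapping, check no RW bits */
        IOMMU_RO   = 1,   /* read permission required  */
        IOMMU_WO   = 2,   /* write permission required */
        IOMMU_RW   = 3,   /* both                      */
    } IOMMUAccessFlags;
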
Gerd Hoffmann 108354cc4a
bitmap: add bitmap_copy_and_clear_atomic
Backports commit d6eb1413920affb7be3df9982682dd183a805dd7 from qemu
2018-03-02 18:59:11 -05:00
Peter Maydell b8b70dfcd2
Drop QEMU_GNUC_PREREQ() checks for gcc older than 4.1
We already require gcc 4.1 or newer (for the atomic
support), so the fallback codepaths for older gcc
versions than that are now dead code and we can
just delete them.

NB: clang reports itself as gcc 4.2 (regardless of
clang version), so clang won't be using the fallbacks
either.

Backports commit fa54abb8c298f892639ffc4bc2f61448ac3be4a1 from qemu
2018-03-02 18:59:05 -05:00
Paolo Bonzini c27870520a
exec: revert MemoryRegionCache
MemoryRegionCache did not know about virtio support for IOMMUs (because the
two features were developed at the same time). Revert MemoryRegionCache
to "normal" address_space_* operations for 2.9, as it is simpler than
undoing the virtio patches.

Backports commit 90c4fe5fc517a045e7a7cf2f23472e114042ca29 from qemu
2018-03-02 14:30:41 -05:00
Michael Davidsaver 2769c6ada0
armv7m: Fix reads of CONTROL register bit 1
The v7m CONTROL register bit 1 is SPSEL, which indicates
the stack being used. We were storing this information
not in v7m.control but in the separate v7m.other_sp
structure field. Unfortunately, the code handling reads
of the CONTROL register didn't take account of this, and
so if SPSEL was updated by an exception entry or exit then
a subsequent guest read of CONTROL would get the wrong value.

Using a separate structure field doesn't really gain us
anything in efficiency, so drop this unnecessary complexity
in favour of simply storing all the bits in v7m.control.

This is a migration compatibility break for M profile
CPUs only.

Backports commit abc24d86cc0364f402e438fae3acb14289b40734 from qemu
2018-03-02 13:26:38 -05:00
Dr. David Alan Gilbert 55d79cf4c0
RAMBlocks: qemu_ram_is_shared
Provide a helper to say whether a RAMBlock was created as a
shared mapping.

Backports commit 463a4ac23bcf0f0b65c850fa66f5ae6e43edd243 from qemu
2018-03-02 13:05:35 -05:00
Dr. David Alan Gilbert 5dfbee8930
memory_region: Fix name comments
The 'name' parameter to memory_region_init_* had been marked as debug
only, however vmstate_region_ram uses it as a parameter to
qemu_ram_set_idstr to set RAMBlock names and these form part of the
migration stream.

Backports commit e8f5fe2de125a0bfbefbaa6a69af81f4817cb7a0 from qemu
2018-03-02 13:01:23 -05:00
Markus Armbruster 8a8dc93945
qapi: Improve qobject visitor documentation
Backports commit aa3a982e674b09ae32502940f93ba98b3a8ad50e from qemu
2018-03-02 12:24:21 -05:00
Markus Armbruster ac1a61af47
qapi: Make input visitors detect unvisited list tails
Fix the design flaw demonstrated in the previous commit: new method
check_list() lets input visitors report that unvisited input remains
for a list, exactly like check_struct() lets them report that
unvisited input remains for a struct or union.

Implement the method for the qobject input visitor (straightforward),
and the string input visitor (less so, due to the magic list syntax
there). The opts visitor's list magic is even more impenetrable, and
all I can do there today is a stub with a FIXME comment. No worse
than before.

Backports commit a4a1c70dc759e5b81627e96564f344ab43ea86eb from qemu
2018-03-02 12:21:04 -05:00
Markus Armbruster e0ee098c4a
qapi: Drop unused non-strict qobject input visitor
The split between tests/test-qobject-input-visitor.c and
tests/test-qobject-input-strict.c now makes less sense than ever. The
next commit will take care of that.

Backports commit 048abb7b20c9f822ad9d4b730bade73b3311a47a from qemu
2018-03-02 12:14:52 -05:00
Markus Armbruster 50e3cda49a
qapi: Drop string input visitor method optional()
visit_optional() is to be called only between visit_start_struct() and
visit_end_struct(). Visitors that don't support struct visits,
i.e. don't implement start_struct(), end_struct(), have no use for it.
Clarify documentation.

The string input visitor doesn't support struct visits. Its
parse_optional() is therefore useless. Drop it.

Backports commit a8aec6de2ac1a5e36989fdfba29067b361009b75 from qemu
2018-03-02 12:07:55 -05:00
Markus Armbruster 84e5261cdf
qapi: Improve qobject input visitor error reporting
Error messages refer to nodes of the QObject being visited by name.
Trouble is the names are sometimes less than helpful:

* The name of the root QObject is whatever @name argument got passed
to the visitor, except NULL gets mapped to "null". We commonly pass
NULL. Not good.

Avoiding errors "at the root" mitigates. For instance,
visit_start_struct() can only fail when the visited object is not a
dictionary, and we commonly ensure it is beforehand.

* The name of a QDict's member is the member key. Good enough only
when this happens to be unique.

* The name of a QList's member is "null". Not good.

Improve error messages by referring to nodes by path instead, as
follows:

* The path of the root QObject is whatever @name argument got passed
to the visitor, except NULL gets mapped to "<anonymous>".

* The path of a root QDict's member is the member key.

* The path of a root QList's member is "[%u]", where %u is the list
index, starting at zero.

* The path of a non-root QDict's member is the path of the QDict
concatenated with "." and the member key.

* The path of a non-root QList's member is the path of the QList
concatenated with "[%u]", where %u is the list index.

For example, the incorrect QMP command

{ "execute": "blockdev-add", "arguments": { "node-name": "foo", "driver": "raw", "file": {"driver": "file" } } }

now fails with

{"error": {"class": "GenericError", "desc": "Parameter 'file.filename' is missing"}}

instead of

{"error": {"class": "GenericError", "desc": "Parameter 'filename' is missing"}}

and

{ "execute": "input-send-event", "arguments": { "device": "bar", "events": [ [] ] } }

now fails with

{"error": {"class": "GenericError", "desc": "Invalid parameter type for 'events[0]', expected: object"}}

instead of

{"error": {"class": "GenericError", "desc": "Invalid parameter type for 'null', expected: QDict"}}

Aside: calling the thing "parameter" is suboptimal for QMP, because
the root object is "arguments" there.

The qobject output visitor doesn't have this problem because it should
not fail. Same for dealloc and clone visitors.

The string visitors don't have this problem because they visit just
one value, whose name needs to be passed to the visitor as @name. The
string output visitor shouldn't fail anyway.

The options visitor uses QemuOpts names. Their name space is flat, so
the use of QDict member keys as names is fine. NULL names used with
roots and lists could conceivably result in bad error messages. Left
for another day.

Backports commit a9fc37f6bc3f2ab90585cb16493da9f6dcfbfbcf from qemu
2018-03-02 12:05:53 -05:00
Markus Armbruster d07bcef231
qmp: Eliminate silly QERR_QMP_* macros
The QERR_ macros are leftovers from the days of "rich" error objects.

QERR_QMP_BAD_INPUT_OBJECT, QERR_QMP_BAD_INPUT_OBJECT_MEMBER,
QERR_QMP_EXTRA_MEMBER are used in just one place now, except for one
use that has crept into qobject-input-visitor.c.

Drop these macros, to make the (bad) error messages more visible.

Backports commit 99fb0c53c038105bae68b02a3d9f1cbf7951ba10 from qemu
2018-03-02 11:28:17 -05:00
Yongji Xie 23f5b17a08
memory: Introduce DEVICE_HOST_ENDIAN for ram device
At the moment the ram device's memory regions are DEVICE_NATIVE_ENDIAN. That's
incorrect. This memory region is backed by an MMIO area in the host, so the
uint64_t data that MemoryRegionOps read from/write to this area should be
host-endian rather than target-endian. Hence, current code does not work
when target and host endianness are different which is the most common case
on PPC64. To fix it, this introduces DEVICE_HOST_ENDIAN for the ram device.

This has been tested on PPC64 BE/LE host/guest in all possible combinations
including TCG.

Backports commit c99a29e702528698c0ce2590f06ca7ff239f7c39 from qemu
2018-03-02 11:24:32 -05:00
Alex Bennée 454932263c
cputlb and arm/sparc targets: convert mmuidx flushes from varg to bitmap
While the varargs approach was flexible, the original MTTCG ended up
having to munge the bits into a bitmap so the data could be used in
deferred work helpers. Instead of hiding that in cputlb we push the
change to the API to make it take a bitmap of MMU indexes instead.

For ARM some of the resulting flushes end up being quite long, so to aid
readability I've tended to move the index shifting to a new line so
all the bits being or-ed together line up nicely, for example:

    tlb_flush_page_by_mmuidx(other_cs, pageaddr,
                             (1 << ARMMMUIdx_S1SE1) |
                             (1 << ARMMMUIdx_S1SE0));

Backports commit 0336cbf8532935d8e23c2aabf3e2ce2c0697b6ac from qemu
2018-03-02 10:12:40 -05:00
KONRAD Frederic c5730ff194
tcg: add options for enabling MTTCG
We know there will be cases where MTTCG won't work until additional work
is done in the front/back ends to support it. It will however be useful to
be able to turn it on.

As a result MTTCG will default to off unless the combination is
supported. However the user can turn it on for the sake of testing.

Backports commit 8d4e9146b3568022ea5730d92841345d41275d66 from qemu
2018-03-02 09:25:01 -05:00
Markus Armbruster 89d8e58718
util/cutils: Change qemu_strtosz*() from int64_t to uint64_t
This will permit its use in parse_option_size().

Backports commit f46bfdbfc8f95cf65d7818ef68a801e063c40332 from qemu
2018-03-02 08:58:55 -05:00
Markus Armbruster 8650d0213c
util/cutils: Return qemu_strtosz*() error and value separately
This makes qemu_strtosz(), qemu_strtosz_mebi() and
qemu_strtosz_metric() similar to qemu_strtoi64(), except negative
values are rejected.

Backports commit f17fd4fdf0df3d2f3444399d04c38d22b9a3e1b7 from qemu
2018-03-02 08:57:16 -05:00
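
Under the convention described above (error code as the return value, size through an out-parameter, which the entry further up widens to uint64_t), a caller would look roughly like the following; the exact parameter list is an assumption:

    uint64_t bytes;

    if (qemu_strtosz("4G", NULL, &bytes) < 0) {
        /* reject the size string instead of using a bogus value */
    }
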
Markus Armbruster 858acd4142
util/cutils: New qemu_strtosz()
Most callers of qemu_strtosz_suffix() pass QEMU_STRTOSZ_DEFSUFFIX_B.
Capture the pattern in new qemu_strtosz().

Inline qemu_strtosz_suffix() into its only remaining caller.

Backports commit 466dea14e677555dd24465aca75d00a3537ad062 from qemu
2018-03-02 08:50:56 -05:00
Markus Armbruster a3358798d6
util/cutils: Rename qemu_strtosz() to qemu_strtosz_MiB()
With qemu_strtosz(), no suffix means mebibytes. It's used rarely.
I'm going to add a similar function where no suffix means bytes.
Rename qemu_strtosz() to qemu_strtosz_MiB() to make the name
qemu_strtosz() available for the new function.

Backports commit e591591b323772eea733de6027f5e8b50692d0ff from qemu
2018-03-02 08:49:26 -05:00
Markus Armbruster f656cd91ec
util/cutils: New qemu_strtosz_metric()
To parse numbers with metric suffixes, we use

qemu_strtosz_suffix_unit(nptr, &eptr, QEMU_STRTOSZ_DEFSUFFIX_B, 1000)

Capture this in a new function for legibility:

qemu_strtosz_metric(nptr, &eptr)

Replace test_qemu_strtosz_suffix_unit() by test_qemu_strtosz_metric().

Rename qemu_strtosz_suffix_unit() to do_strtosz() and give it internal
linkage.

Backports commit d2734d2629266006b0413433778474d5801c60be from qemu
2018-03-02 08:47:40 -05:00
Markus Armbruster 41c2e1168f
util/cutils: Rename qemu_strtoll(), qemu_strtoull()
The name qemu_strtoll() suggests conversion to long long, but it
actually converts to int64_t. Rename to qemu_strtoi64().

The name qemu_strtoull() suggests conversion to unsigned long long,
but it actually converts to uint64_t. Rename to qemu_strtou64().

Backports commit b30d188677456b17c1cd68969e08ddc634cef644 from qemu
2018-03-02 08:39:45 -05:00
Bharata B Rao 7fadaf0bc4
softfloat: Add float128_to_uint32_round_to_zero()
float128_to_uint32_round_to_zero() is needed by xscvqpuwz instruction
of PowerPC ISA 3.0.

Backports commit fd425037d25cecaaffdb3831697e0adc10ca2ba3 from qemu
2018-03-02 08:33:09 -05:00
Bharata B Rao 64d32a2237
softfloat: Add float128_to_uint64_round_to_zero()
Implement float128_to_uint64() and use that to implement
float128_to_uint64_round_to_zero()

This is required by xscvqpudz instruction of PowerPC ISA 3.0.

Backports commit 2e6d85683576c970c714c1cc071dca742835b9d4 from qemu
2018-03-02 08:32:02 -05:00
Bharata B Rao 80e522b499
softfloat: Add round-to-odd rounding mode
Power ISA 3.0 introduces a few quadruple precision floating point
instructions that support round-to-odd rounding mode. The
round-to-odd mode is explained as follows:

Let Z be the intermediate arithmetic result or the operand of a convert
operation. If Z can be represented exactly in the target format, the
result is Z. Otherwise the result is either Z1 or Z2 whichever is odd.
Here Z1 and Z2 are the next larger and smaller numbers representable
in the target format respectively.

Backports commit 9ee6f678f473007e252934d6acd09c24490d9d42 from qemu
2018-03-02 08:25:00 -05:00
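
The rule above is the classic "round to odd" (jamming) step. On an integer significand it reduces to OR-ing any discarded bits into the lowest kept bit, as in this illustrative helper, which is not taken from the patch:

    #include <stdint.h>

    /* Shift x right by n (0 < n < 64), rounding to odd: if anything was
     * shifted out, force the low bit of the result to 1. */
    static uint64_t shr_round_to_odd(uint64_t x, unsigned n)
    {
        uint64_t lost = x & ((UINT64_C(1) << n) - 1);

        return (x >> n) | (lost != 0);
    }
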
Julian Brown cc217b0c90
arm: Correctly handle watchpoints for BE32 CPUs
In BE32 mode, sub-word size watchpoints can fail to trigger because the
address of the access is adjusted in the opcode helpers before being
compared with the watchpoint registers. This patch reverses the address
adjustment before performing the comparison with the help of a new CPUClass
hook.

This version of the patch augments and tidies up comments a little.

Backports commit 40612000599e52e792d23c998377a0fa429c4036 from qemu
2018-03-02 00:24:33 -05:00
Michael S. Tsirkin 0455644974
ARRAY_SIZE: check that argument is an array
It's a familiar pattern: some code uses ARRAY_SIZE, then refactoring
changes the argument from an array to a pointer to a dynamically
allocated buffer. Code keeps compiling but any ARRAY_SIZE calls now
return the size of the pointer divided by element size.

Let's add build time checks to ARRAY_SIZE before we allow more
of these in the code-base.

Backports commit ed63ec0d22ccdce3b2222d9a514423b7fbba3a0d from qemu
2018-03-02 00:09:51 -05:00
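
The usual way to express such a check is to require that the argument and a pointer to its first element have different types, which only holds for true arrays. A sketch using GCC builtins (the macro in the tree may be phrased differently); QEMU_BUILD_BUG_ON_ZERO is the expression-friendly variant introduced in the entry below:

    /* Build error if x is a pointer: typeof(x) then matches typeof(&x[0]). */
    #define ARRAY_SIZE_SKETCH(x)                                        \
        ((sizeof(x) / sizeof((x)[0])) +                                 \
         QEMU_BUILD_BUG_ON_ZERO(__builtin_types_compatible_p(typeof(x), \
                                                             typeof(&(x)[0]))))
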
Michael S. Tsirkin ac013df0a2
compiler: expression version of QEMU_BUILD_BUG_ON
QEMU_BUILD_BUG_ON uses a typedef in order to be safe
to use outside functions, but sometimes it's useful
to have a version that can be used within an expression.
Following what Linux does, introduce QEMU_BUILD_BUG_ON_ZERO
that returns zero after checking the condition at build time.

Backports commit d757573e69f2ef58a4a7b41f6c55d65fa1e1c5c2 from qemu
2018-03-02 00:07:33 -05:00
Michael S. Tsirkin 634a8094f1
compiler: rework BUG_ON using a struct
There are theoretical concerns that some compilers might not trigger
build failures on attempts to define an array of size (x ? -1 : 1) where
x is a variable and make it a variable-sized array instead. Let's rewrite
using a struct with a negative bit field size instead as there are no
dynamic bit field sizes. This is similar to what Linux does.

Backports commit f291887e8eef5d37d31484638f6e62401b4b99a2 from qemu
2018-03-02 00:05:07 -05:00
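
A sketch of the struct-with-bit-field form the message describes, combined with the __COUNTER__ naming trick from the entry below; the helper and typedef names are invented for illustration:

    #define GLUE_SK_(a, b) a##b
    #define GLUE_SK(a, b)  GLUE_SK_(a, b)

    /* A negative bit-field width is always a hard constraint violation, so
     * the check cannot silently degrade into a variable-length array. */
    #define BUILD_BUG_ON_SKETCH(x) \
        typedef struct { int:(x) ? -1 : 1; } GLUE_SK(build_bug_sketch_, __COUNTER__)
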
Michael S. Tsirkin 7f9fb3395c
QEMU_BUILD_BUG_ON: use __COUNTER__
Some headers use QEMU_BUILD_BUG_ON. This causes a problem
if the C file including that header happens to have
QEMU_BUILD_BUG_ON at the same line number.

Fix using a widely available extension: __COUNTER__.
If unavailable, provide a stub.

Backports commit 60abf0a5e05134187e274ce5f32524ccf0cae1a6 from qemu
2018-03-02 00:03:44 -05:00
Michael S. Tsirkin beca05eb5f
compiler: drop ; after BUILD_BUG_ON
All users include the trailing ; anyway, let's require that -
it seems cleaner.

Backports commit f29831828441318c7916ae28e6e16e4a1c4a6795 from qemu
2018-03-02 00:01:44 -05:00
Sascha Silbe 11c66029b7
error: error_setg_errno(): errno gets preserved
C11 allows errno to be clobbered by pretty much any library function
call, so in general callers need to take care to save errno before
calling other functions.

However, for error reporting functions this is rather awkward and can
make the code on the caller side more complicated than
necessary. error_setg_errno() already takes care of preserving errno
and some functions rely on that, so just promise that we continue to
do so in the future.

Backports commit 98cb89af4df7e1776ce418ed6167b6e214a64435 from qemu
2018-03-01 23:38:25 -05:00
Thomas Huth b2f1326437
Move target-* CPU file into a target/ folder
We've currently got 18 architectures in QEMU, and thus 18 target-xxx
folders in the root folder of the QEMU source tree. More architectures
(e.g. RISC-V, AVR) are likely to be included soon, too, so the main
folder of the QEMU sources slowly gets quite overcrowded with the
target-xxx folders.
To disburden the main folder a little bit, let's move the target-xxx
folders into a dedicated target/ folder, so that target-xxx/ simply
becomes target/xxx/ instead.

Backports commit fcf5ef2ab52c621a4617ebbef36bf43b4003f4c0 from qemu
2018-03-01 22:50:58 -05:00
Alex Bennée e3e57ca08e
cputlb: drop flush_global flag from tlb_flush
We have never had the concept of global TLB entries which would avoid
the flush so we never actually use this flag. Drop it and make clear
that tlb_flush is the sledge-hammer it has always been.

Backports commit d10eb08f5d8389c814b554d01aa2882ac58221bf from qemu
2018-03-01 19:36:04 -05:00
Richard Henderson 5ca8ac1aeb
qemu/host-utils.h: Reduce the operation count in the fallback ctpop
Backports commit 7bdcecb7b2d79c292d1256f7d6cf0f1da50d381f from qemu
2018-03-01 18:35:51 -05:00
Jason Wang 29932d0719
memory: handle alias in memory_region_is_iommu()
Backports commit 12d37882f0c0def5dee1c21be5d8fea9c21baada from qemu
2018-03-01 13:06:18 -05:00
Jason Wang fdca6292a1
exec: introduce address_space_get_iotlb_entry()
This patch introduces a helper to query the iotlb entry for a
possible iova. This will be used later by the device IOTLB API to enable
the capability for a dataplane (e.g. vhost) to query the IOTLB.

Backports commit 052c8fa9983f553fdfa0d61034774070dd639c2b from qemu
2018-03-01 13:05:08 -05:00
Paolo Bonzini 81ad780e5e
exec: introduce MemoryRegionCache
Device models often have to perform multiple access to a single
memory region that is known in advance, but would like to use "DMA-style"
functions instead of address_space_map/unmap. This can happen
for example when the data has to undergo endianness conversion.
Introduce a new data structure to cache the result of
address_space_translate without forcing usage of a host address
like address_space_map does.

Backports commit 1f4e496e1fc2eb6c8bf377a0f9695930c380bfd3 from qemu
2018-03-01 10:50:30 -05:00
Paolo Bonzini 88ad0f4f6e
exec: introduce memory_ldst.inc.c
Templatize the address_space_* and *_phys functions, so that we can add
similar functions in the next patch that work with a lightweight,
cache-like version of address_space_map/unmap.

Backports commit 0ce265ffef87f19f4dd1ff0663e09a63d66ae408 from qemu
2018-03-01 09:59:34 -05:00
Paolo Bonzini 9404dbf74e
cpu-exec: fix icount out-of-bounds access
When icount is active, tb_add_jump is surprisingly called with an
out of bounds basic block index. I have no idea how that can work,
but it does not seem like a good idea. Clear *last_tb for all
TB_EXIT_ICOUNT_EXPIRED cases, even when all you have to do is
refill icount_extra.

Backports commit d8dea6fbcbed177ca5d23ab77b3834a9437f0e88 from qemu
2018-03-01 09:17:26 -05:00
Bobby Bingham d46e52d9d0
cpu_ldst.h: use correct guest address parameter
In the user emulation code path, tlb_vaddr_to_host erroneously passed
vaddr as the guest address to be translated, instead of addr, the parameter
which actually contained the guest address.

This resulted in incorrect addresses being used when emulating block copy
(mvc/mvpg) and block clear (xc) instructions for the s390x target.

Backports commit c2a85316902e67530da9d6548139fcce73c0cac6 from qemu
2018-03-01 08:56:37 -05:00
Paolo Bonzini 9d64a89acf
tcg: comment on which functions have to be called with tb_lock held
softmmu requires more functions to be thread-safe, because translation
blocks can be invalidated from e.g. notdirty callbacks. Probably the
same holds for user-mode emulation, it's just that no one has ever
tried to produce a coherent locking there.

This patch will guide the introduction of more tb_lock and tb_unlock
calls for system emulation.

Note that after this patch some (most) of the mentioned functions are
still called outside tb_lock/tb_unlock. The next one will rectify this.

Backports commit 7d7500d99895f888f97397ef32bb536bb0df3b74 from qemu
2018-02-28 10:26:28 -05:00
Alex Bennée 7aab0bd9a6
translate-all: add DEBUG_LOCKING asserts
This adds asserts to check the locking on the various translation
engines structures. There are two sets of structures that are protected
by locks.

The first is the l1map and PageDesc structures used to track which
translation blocks are associated with which physical addresses. In
user-mode this is covered by the mmap_lock.

The second case is the TB context related structures which are protected by
tb_lock which is also user-mode only.

Currently the asserts do nothing in SoftMMU mode but this will change
for MTTCG.

Backports commit 301e40ed8005306c009978be295ed9a4b725178b from qemu
2018-02-28 08:56:15 -05:00
Richard Henderson da01e53757
tcg: Add atomic128 helpers
Force the use of cmpxchg16b on x86_64.

Wikipedia suggests that only very old AMD64 (circa 2004) did not have
this instruction. Further, it's required by Windows 8 so no new cpus
will ever omit it.

If we truly care about these, then we could check this at startup time
and then avoid executing paths that use it.

Backports commit 7ebee43ee3e2fcd7b5063058b7ef74bc43216733 from qemu
2018-02-27 21:43:48 -05:00
Richard Henderson 5c0ce1b99c
tcg: Add atomic helpers
Add all of cmpxchg, op_fetch, fetch_op, and xchg.
Handle both endian-ness, and sizes up to 8.
Handle expanding non-atomically, when emulating in serial.

Backports commit c482cb117cc418115ca9c6d21a7a2315414c0a40 from qemu
2018-02-27 15:57:47 -05:00
Yongbok Kim 79e4c001a9
softmmu: Add probe_write()
Probe for whether the specified guest write access is permitted.
If it is not permitted then an exception will be taken in the same
way as if this were a real write access (and we will not return).
Otherwise the function will return, and there will be a valid
entry in the TLB for this access.

Backports commit 3b4afc9e75ab1a95f33e41f462921093f8a109c4 from qemu
2018-02-27 12:20:50 -05:00
Richard Henderson e35aacd5ae
tcg: Add EXCP_ATOMIC
When we cannot emulate an atomic operation within a parallel
context, this exception allows us to stop the world and try
again in a serial context.

Backports commit fdbc2b5722f6092e47181a947c90fd4bdcc1c121 from qemu

Also backports parts of commit 02d57ea115b7669f588371c86484a2e8ebc369be
2018-02-27 11:57:58 -05:00
Richard Henderson d5510a546f
int128: Add int128_make128
Allows Int128 to be used more generally, rather than having to
begin with 64-bit inputs and accumulate.

Backports commit 1edaeee0955fba7d834b7c8f4e372e7eae030745 from qemu
2018-02-27 11:06:33 -05:00
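
With the __int128 representation from the entry below, the constructor is essentially a shift-and-or. A sketch with assumed names (the fallback struct representation for hosts without __int128 is omitted):

    #include <stdint.h>

    typedef __int128 Int128;   /* when the compiler provides __int128 */

    static inline Int128 int128_make128_sketch(uint64_t lo, uint64_t hi)
    {
        return ((unsigned __int128)hi << 64) | lo;
    }
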
Richard Henderson 9084e5fe1b
int128: Use __int128 if available
Backports commit 0846beb36641e8f0c3ee55a5bb84d468b653c852 from qemu
2018-02-27 11:03:06 -05:00
Richard Henderson 4fdbe94eea
exec: Avoid direct references to Int128 parts
Backports commit 258dfaaad05a5fbe32a142b794e1df3e16501d0e from qemu
2018-02-27 11:01:43 -05:00
Richard Henderson ba1c63572e
atomics: Add __nocheck atomic operations
While the check against sizeof(void *) is appropriate for
normal usage within qemu, there are places in which we want
wider operations and have checked for their existence.

Backports commit 84bca3927b36fb1d9a2ca85cbbdf9023d2b84678 from qemu
2018-02-27 11:00:20 -05:00
Lioncash a59eef391e
atomic: MSVC compatible equivalents to some functions 2018-02-27 10:56:04 -05:00
Emilio G. Cota c837d76a86
atomics: add atomic_op_fetch variants
This paves the way for upcoming work.

Backports commit 83d0c719f837724d9e3963b078211b2242bdd2a5 from qemu
2018-02-27 10:28:27 -05:00
Emilio G. Cota 102a53aa50
atomics: add atomic_xor
This paves the way for upcoming work.

Backports commit 61696ddbdc74263ddb6869856772cfe355a5d3bd from qemu
2018-02-27 10:23:31 -05:00
Richard Henderson 3fe8d46a15
atomics: Add parameters to macros
Making these functional rather than object macros will
prevent later problems with complex macro expansion.

Backports commit d1a9f2d12fcfc942924956fbe321aedf4226ccb7 from qemu
2018-02-27 10:21:35 -05:00
Daniel P. Berrange 83a5bf2d25
qapi: rename QmpOutputVisitor to QObjectOutputVisitor
The QmpOutputVisitor has no direct dependency on QMP. It is
valid to use it anywhere that one wants a QObject. Rename it
to better reflect its functionality as a generic QAPI
to QObject converter.

The commit before previous renamed the files, this one renames C
identifiers.

Backports commit 7d5e199ade76c53ec316ab6779800581bb47c50a from qemu
2018-02-27 08:05:33 -05:00
Daniel P. Berrange 2949a90977
qapi: rename QmpInputVisitor to QObjectInputVisitor
The QmpInputVisitor has no direct dependency on QMP. It is
valid to use it anywhere that one has a QObject. Rename it
to better reflect its functionality as a generic QObject
to QAPI converter.

The previous commit renamed the files, this one renames C identifiers.

Backports commit 09e68369a88d7de0f988972bf28eec1b80cc47f9 from qemu
2018-02-26 15:54:15 -05:00
Daniel P. Berrange 228f122248
qapi: rename *qmp-*-visitor* to *qobject-*-visitor*
The QMP visitors have no direct dependency on QMP. It is
valid to use them anywhere that one has a QObject. Rename them
to better reflect their functionality as a generic QObject
to QAPI converter.

This is the first of three parts: rename the files. The next two
parts will rename C identifiers. The split is necessary to make git
rename detection work.

Backports commit b3db211f3c80bb996a704d665fe275619f728bd4 from qemu
2018-02-26 15:42:37 -05:00
Peter Maydell db8b0a82b1
cpu: Support a target CPU having a variable page size
Support target CPUs having a page size which isn't known
at compile time. To use this, the CPU implementation should:
* define TARGET_PAGE_BITS_VARY
* not define TARGET_PAGE_BITS
* define TARGET_PAGE_BITS_MIN to the smallest value it
might possibly want for TARGET_PAGE_BITS
* call set_preferred_target_page_bits() in its realize
function to indicate the actual preferred target page
size for the CPU (and report any error from it)

In CONFIG_USER_ONLY, the CPU implementation should continue
to define TARGET_PAGE_BITS appropriately for the guest
OS page size.

Machines which want to take advantage of having the page
size something larger than TARGET_PAGE_BITS_MIN must
set the MachineClass minimum_page_bits field to a value
which they guarantee will be no greater than the preferred
page size for any CPU they create.

Note that changing the target page size by setting
minimum_page_bits is a migration compatibility break
for that machine.

For debugging purposes, attempts to use TARGET_PAGE_SIZE
before it has been finally confirmed will assert.

Backports commit 20bccb82ff3ea09bcb7c4ee226d3160cab15f7da from qemu
2018-02-26 12:29:08 -05:00
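
A rough sketch of a CPU following the recipe above (the "foo" target and its
realize function are hypothetical; only the TARGET_PAGE_BITS_* conventions and
set_preferred_target_page_bits() come from the description):

/* target header: page size not fixed at compile time */
#define TARGET_PAGE_BITS_VARY
#define TARGET_PAGE_BITS_MIN 12      /* smallest page size we may ever use */
/* TARGET_PAGE_BITS itself is deliberately left undefined */

/* in the (hypothetical) CPU realize function */
static void foo_cpu_realizefn(DeviceState *dev, Error **errp)
{
    /* ... usual realize work ... */
#ifndef CONFIG_USER_ONLY
    /* assumption: failure is reported via the return value */
    if (!set_preferred_target_page_bits(16)) {   /* e.g. prefer 64K pages */
        error_setg(errp, "conflicting page size preference");
        return;
    }
#endif
}
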
Paolo Bonzini eb75004013
memory: add a per-AddressSpace list of listeners
This speeds up MEMORY_LISTENER_CALL noticeably. Right now,
with many PCI devices you have N regions added to M AddressSpaces
(M = # PCI devices with bus-master enabled) and each call looks
up the whole listener list, with at least M listeners in it.
Because most of the regions in N are BARs, which are also roughly
proportional to M, the whole thing is O(M^3). This changes it
to O(M^2), which is the best we can do without rewriting the
whole thing.

Backports commit 9a54635dcb51a3fcf7507af630168f514a8cd4e7 from qemu
2018-02-26 10:46:50 -05:00
Paolo Bonzini 4b06e8bbb7
memory: eliminate global MemoryListeners
There is none, so just drop the code.

Backports commit d45fa784cd0c111131696808d1168259d66b7519 from qemu
2018-02-26 10:19:28 -05:00
Paolo Bonzini 8b239bd48b
atomic: base mb_read/mb_set on load-acquire and store-release
This introduces load-acquire and store-release operations in QEMU.
For now, just use them as an implementation detail of atomic_mb_read
and atomic_mb_set.

Since docs/atomics.txt documents that atomic_mb_read only synchronizes
with an atomic_mb_set of the same variable, we can use the new implementation
everywhere instead of seq-cst loads and stores.

Backports commit 803cf26a9e019b5d2256a8edeb22e3538c4f3261 from qemu
2018-02-26 10:02:46 -05:00
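
Conceptually, the mapping looks like the following sketch in terms of the
GCC/Clang __atomic builtins (not necessarily the exact backported code):

/* atomic_mb_read: a load-acquire of *ptr */
#define atomic_mb_read(ptr) __atomic_load_n(ptr, __ATOMIC_ACQUIRE)

/* atomic_mb_set: a store-release of *ptr followed by a full barrier, so a
 * later atomic_mb_read of another variable cannot move before the store. */
#define atomic_mb_set(ptr, i) do {                  \
    __atomic_store_n(ptr, i, __ATOMIC_RELEASE);     \
    smp_mb();                                       \
} while (0)
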
Paolo Bonzini fd7ef4c184
atomic: introduce smp_mb_acquire and smp_mb_release 2018-02-26 09:58:22 -05:00
Alex Bennée fbf6fb1e25
atomic.h: fix __SANITIZE_THREAD__ build
Only very modern GCCs actually set this define when building with the
ThreadSanitizer, so this little typo slipped through.

Backports commit 23ea7f57949f2f5934f4d5bbc29fe321b3a7067b from qemu
2018-02-26 05:12:17 -05:00
Alex Bennée 4046235e92
atomic.h: comment on use of atomic_read/set
Add some notes on the use of the relaxed atomic access helpers and their
importance for defined behaviour in C11's multi-threaded memory model.

Backports commit e653bc6b0ff645c25b8a2eb607c18a5c98b59db6 from qemu
2018-02-26 05:03:59 -05:00
Alex Bennée 33589eb75f
cpus: pass CPUState to run_on_cpu helpers
CPUState is a fairly common pointer to pass to these helpers. This means
if you need other arguments for the async_run_on_cpu case you end up
having to do a g_malloc to stuff additional data into the routine. For
the current users this isn't a massive deal but for MTTCG this gets
cumbersome when the only other parameter is often an address.

This adds the typedef run_on_cpu_func for helper functions which has an
explicit CPUState * passed as the first parameter. All the users of
run_on_cpu and async_run_on_cpu have had their helpers updated to use
CPUState where available.

Backports commit e0eeb4a21a3ca4b296220ce4449d8acef9de9049 from qemu
2018-02-26 04:54:55 -05:00
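
The typedef itself is small enough to sketch directly (signature as implied
above; the helper below and the exact payload type are assumptions):

typedef void (*run_on_cpu_func)(CPUState *cpu, void *data);

/* Example helper: the CPU arrives as a parameter, so no g_malloc'd wrapper
 * struct is needed just to smuggle it in alongside an address. */
static void flush_page_work(CPUState *cpu, void *data)   /* hypothetical */
{
    vaddr addr = (vaddr)(uintptr_t)data;
    /* ... operate on cpu and addr ... */
}
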
Felipe Franciosi 0ed8880525
compiler: Swap 'public domain' header for license
As discussed on the list [1], having a comment stating that this file
is "public domain" is arguably wrong and not legally binding. This patch
replaces that comment with a clear GPLv2+ license as proposed in [2].

[1] http://lists.nongnu.org/archive/html/qemu-devel/2016-09/msg06151.html
[2] http://lists.nongnu.org/archive/html/qemu-devel/2016-09/msg06217.html

Worth noting, compiler.h was originally created on 5c026320 by splitting
qemu-common.h. At the time, qemu-common.h was already GPLv2+.

Backports commit cc9d8a3b2c41c22fb09f90f3085e6036c199c3ca from qemu
2018-02-26 04:49:45 -05:00
Richard Henderson 66d79ac959
tcg: Merge GETPC and GETRA
The return address argument to the softmmu template helpers was
confused. In the legacy case, we wanted to indicate that there
is no return address, and so passed in NULL. However, we then
immediately subtracted GETPC_ADJ from NULL, resulting in a non-zero
value, indicating the presence of an (invalid) return address.

Push the GETPC_ADJ subtraction down to the only point it's required:
immediately before use within cpu_restore_state_from_tb, after all
NULL pointer checks have been completed.

This makes GETPC and GETRA identical. Remove GETRA as the lesser
used macro, replacing all uses with GETPC.

Backports commit 01ecaf438b1eb46abe23392c8ce5b7628b0c8cf5 from qemu
2018-02-26 02:54:44 -05:00
Andrew Dutcher 26b36e5ff8
fpu: add mechanism to check for invalid long double formats
All operations that take a floatx80 as an operand need to have their
inputs checked for malformed encodings. In all of these cases, use the
function floatx80_invalid_encoding to perform the check. If an invalid
operand is found, raise an invalid operation exception, and then return
either NaN (for fp-typed results) or the integer indefinite value (the
minimum representable signed integer value, for int-typed results).

For the non-quiet comparison operations, this touches adjacent code in
order to pass style checks.

Backports the cast correction portion of commit d1eb8f2acba579830cf3798c3c15ce51be852c56 from qemu
2018-02-26 02:27:40 -05:00
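
A sketch of the kind of check involved, based on the x87 80-bit format where
the explicit integer bit (bit 63 of the significand) must be set whenever the
exponent is non-zero (not necessarily the exact backported helper):

static inline bool floatx80_invalid_encoding(floatx80 a)
{
    /* a.high holds sign and exponent, a.low the 64-bit significand */
    return (a.low & (1ULL << 63)) == 0 && (a.high & 0x7FFF) != 0;
}
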
Pranith Kumar 9e6fec8741
atomics: Use __atomic_*_n() variant primitives
Use the __atomic_*_n() primitives which take the value as argument. It
is not necessary to store the value locally before calling the
primitive, hence saving us a stack store and load.

Backports commit 89943de17c4e276f2c47f05b4604e8816a6a636c from qemu
2018-02-26 02:16:48 -05:00
Paolo Bonzini 30845ae475
tcg: Prepare TB invalidation for lockless TB lookup
When invalidating a translation block, set an invalid flag into the
TranslationBlock structure first. It is also necessary to check whether
the target TB is still valid after acquiring 'tb_lock' but before calling
tb_add_jump() since TB lookup is to be performed out of 'tb_lock' in
future. Note that we don't have to check 'last_tb'; an already invalidated
TB will not be executed anyway and it is thus safe to patch it.

Backports commit 6d21e4208f382dd8ca1f7995a6dd9ea7ca281163 from qemu
2018-02-26 01:48:13 -05:00
Lioncash 2d87095858
glib_compat: Amend header guard 2018-02-25 23:12:20 -05:00
Alex Williamson fe66c2e088
memory: Don't use memcpy for ram_device regions
With a vfio assigned device we lay down a base MemoryRegion registered
as an IO region, giving us read & write accessors. If the region
supports mmap, we lay down a higher priority sub-region MemoryRegion
on top of the base layer initialized as a RAM device pointer to the
mmap. Finally, if we have any quirks for the device (ie. address
ranges that need additional virtualization support), we put another IO
sub-region on top of the mmap MemoryRegion. When this is flattened,
we now potentially have sub-page mmap MemoryRegions exposed which
cannot be directly mapped through KVM.

This is as expected, but a subtle detail of this is that we end up
with two different access mechanisms through QEMU. If we disable the
mmap MemoryRegion, we make use of the IO MemoryRegion and service
accesses using pread and pwrite to the vfio device file descriptor.
If the mmap MemoryRegion is enabled and results in one of these
sub-page gaps, QEMU handles the access as RAM, using memcpy to the
mmap. Using either pread/pwrite or the mmap directly should be
correct, but using memcpy causes us problems. I expect that not only
does memcpy not necessarily honor the original width and alignment in
performing a copy, but it potentially also uses processor instructions
not intended for MMIO spaces. It turns out that this has been a
problem for Realtek NIC assignment, which has such a quirk that
creates a sub-page mmap MemoryRegion access.

To resolve this, we disable memory_access_is_direct() for ram_device
regions since QEMU assumes that it can use memcpy for those regions.
Instead we access through MemoryRegionOps, which replaces the memcpy
with simple de-references of standard sizes to the host memory.

With this patch we attempt to provide unrestricted access to the RAM
device, allowing byte through qword access as well as unaligned
access. The assumption here is that accesses initiated by the VM are
driven by a device specific driver, which knows the device
capabilities. If unaligned accesses are not supported by the device,
we don't want them to work in a VM by performing multiple aligned
accesses to compose the unaligned access. A down-side of this
philosophy is that the xp command from the monitor attempts to use
the largest available access width, unaware of the underlying
device. Using memcpy had this same restriction, but at least now an
operator can dump individual registers, even if blocks of device
memory may result in access widths beyond the capabilities of a
given device (RTL NICs only support up to dword).

Backports commit 1b16ded6a512809f99c133a97f19026fe612b2de from qemu
2018-02-25 23:06:36 -05:00
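
Roughly, the MemoryRegionOps access boils down to fixed-width dereferences of
the host pointer instead of memcpy; a simplified sketch (illustrative only):

static uint64_t ram_device_mem_read(void *opaque, hwaddr addr, unsigned size)
{
    MemoryRegion *mr = opaque;
    uint8_t *p = (uint8_t *)memory_region_get_ram_ptr(mr) + addr;

    switch (size) {
    case 1: return *p;
    case 2: return *(uint16_t *)p;
    case 4: return *(uint32_t *)p;
    case 8: return *(uint64_t *)p;
    }
    return 0;
}
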
Alex Williamson 5db45219c9
memory: Replace skip_dump flag with ram_device
Setting skip_dump on a MemoryRegion allows us to modify one specific
code path, but the restriction we're trying to address encompasses
more than that. If we have a RAM MemoryRegion backed by a physical
device, it not only restricts our ability to dump that region, but
also affects how we should manipulate it. Here we recognize that
MemoryRegions do not change to sometimes allow dumps and other times
not, so we replace setting the skip_dump flag with a new initializer
so that we know exactly the type of region to which we're applying
this behavior.

Backports commit ca83f87a66d19fdaabf23d4f5ebb49396fe232c1 from qemu
2018-02-25 23:00:45 -05:00
Pranith Kumar 1b19fe260a
softfloat: Fix warn about implicit conversion from int to int8_t
Change the flag type to 'uint8_t' to fix the implicit conversion error.

Backports commit dfd607671037ff46d5b16ade10e10efdf0d260be from qemu
2018-02-25 22:54:39 -05:00
Richard Henderson ede1cae3dc
tcg: Lower indirect registers in a separate pass
Rather than rely on recursion during the middle of register allocation,
lower indirect registers to loads and stores off the indirect base into
plain temps.

For an x86_64 host, with sufficient registers, this results in identical
code, modulo the actual register assignments.

For an i686 host, with insufficient registers, this means that temps can
be (temporarily) spilled to the stack in order to satisfy an allocation.
This as opposed to the possibility of not being able to spill, to allocate
a register for the indirect base, in order to perform a spill.

Backports commit 5a18407f55ade924aa6397c9a043a9ffd59645fe from qemu
2018-02-25 22:32:28 -05:00
Richard Henderson 2aa46dd9a1
tcg: Include liveness info in the dumps
Backports commit bdfb460ef77500f7b186759b585f06ff2120929d from qemu
2018-02-25 22:13:08 -05:00
Richard Henderson 1547048a22
tcg: Reorg TCGOp chaining
Instead of using -1 as end of chain, use 0, and link through the 0
entry as a fully circular doubly-linked list.

Backports commit dcb8e75870e2de199db853697f8839cb603beefe from qemu
2018-02-25 21:44:50 -05:00
Eric Blake 30cbcafc05
osdep: Document differences in rounding macros
Make it obvious which macros are safe in which situations.

Useful since QEMU_ALIGN_UP and ROUND_UP both purport to do
the same thing, but differ on whether the alignment must be
a power of 2.
2018-02-25 21:05:21 -05:00
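
The distinction can be shown with a small illustrative pair of definitions
(a sketch; the real macros live in osdep.h):

/* Division-based: correct for any alignment m, power of 2 or not. */
#define QEMU_ALIGN_DOWN(n, m) (((n) / (m)) * (m))
#define QEMU_ALIGN_UP(n, m)   QEMU_ALIGN_DOWN((n) + (m) - 1, (m))

/* Mask-based: cheaper, but only correct when d is a power of 2. */
#define ROUND_UP(n, d)        (((n) + (d) - 1) & -(d))

/* e.g. QEMU_ALIGN_UP(8, 3) == 9, while ROUND_UP(8, 3) silently yields 8. */
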
Igor Mammedov 62c89b9cd4
exec: Reduce CONFIG_USER_ONLY ifdeffenery
Backports commit 1bc7e522d9cf1b58f2de9c8f1737be0bb5129c35 from qemu
2018-02-25 20:57:48 -05:00
Igor Mammedov d30410dc9a
target-i386: Add x86_cpu_unrealizefn()
First remove VCPU from exec loop and only then remove lapic.

Backports commit c884776e9dc947105827bd6c22192863f97267d2 from qemu
2018-02-25 20:54:13 -05:00
Paolo Bonzini a47c68164d
compiler: never omit assertions if using a static analysis tool
Assertions help both Coverity and the clang static analyzer avoid
false positives, but on the other hand both are confused when
the condition is compiled as (void)(x != FOO). Always expand
assertion macros when using Coverity or clang, through a new
QEMU_STATIC_ANALYSIS preprocessor symbol.

This fixes a couple false positives in TCG.

Backports commit 8bff06a0bbf257a2083223534c1607bf87d913e6 from qemu
2018-02-25 19:19:28 -05:00
Markus Armbruster c2ffbc575d
Clean up decorations and whitespace around header guards
Cleaned up with scripts/clean-header-guards.pl.

Backports commit 175de52487ce0b0c78daa4cdf41a5a465a168a25 from qemu
2018-02-25 04:26:02 -05:00
Markus Armbruster 1275b9b459
Clean up ill-advised or unusual header guards
Cleaned up with scripts/clean-header-guards.pl.

Backports commit 2a6a4076e117113ebec97b1821071afccfdfbc96 from qemu
2018-02-25 04:22:46 -05:00
Markus Armbruster 9ae2fc4d9e
Clean up header guards that don't match their file name
Header guard symbols should match their file name to make guard
collisions less likely. Offenders found with
scripts/clean-header-guards.pl -vn.

Cleaned up with scripts/clean-header-guards.pl, followed by some
renaming of new guard symbols picked by the script to better ones.

Backports commit 121d07125bb6d7079c7ebafdd3efe8c3a01cc440 from qemu
2018-02-25 04:18:42 -05:00
Markus Armbruster 60e8836b74
Use #include "..." for our own headers, <...> for others
Tracked down with an ugly, brittle and probably buggy Perl script.

Also move includes converted to <...> up so they get included before
ours where that's obviously okay.

Backports commit a9c94277f07d19d3eb14f199c3e93491aa3eae0e from qemu
2018-02-25 04:10:33 -05:00
Peter Maydell f6f843b4d4
bswap.h: Document cpu_to_* and *_to_cpu conversion functions
Add a documentation comment describing the functions for
converting between the cpu and little or bigendian formats.

Backports commit 7d820b766a2049f33ca7e078aa51018f2335f8c5 from qemu
2018-02-25 04:06:28 -05:00
Peter Maydell 1d7f813942
bswap.h: Remove unused cpu_to_*w() and *_to_cpup()
Now that all uses of cpu_to_*w() and *_to_cpup() have been replaced
with either ld*_p()/st*_p() or by doing direct dereferences and
using the cpu_to_*()/*_to_cpu() byteswap functions, we can remove
the unused implementations.

Backports commit f76bde702916d0230bf359d478bcac8d7f3b30ae from qemu
2018-02-25 04:04:46 -05:00
Sergey Sorokin d1e4ac0451
Fix confusing argument names in some common functions
There are functions tlb_fill(), cpu_unaligned_access() and
do_unaligned_access() that are called with access type and mmu index
arguments. But these arguments are named 'is_write' and 'is_user' in their
declarations. The patches fix the argument names to avoid confusion.

Backports commit b35399bb4e9968296a12303b00f9f2066470e987 from qemu
2018-02-25 03:58:27 -05:00
Sergey Sorokin e4d123caa9
tcg: Improve the alignment check infrastructure
Some architectures (e.g. ARMv8) require an address to be aligned
to a size larger than the size of the memory access.
The current costless alignment check implementation in QEMU is
sufficient to support such a check, but we need a way to specify
the alignment size.

Backports commit 1f00b27f17518a1bcb4cedca49eaec96a4d560bd from qemu
2018-02-25 02:23:28 -05:00
Lioncash 532f840dc3
qapi: Add new clone visitor
We have a couple places in the code base that want to deep-clone
one QAPI object into another, and they were resorting to serializing
the struct out to QObject then reparsing it. A much more efficient
version can be done by adding a new clone visitor.

Since cloning is still relatively uncommon, expose the use of the
new visitor via a QAPI_CLONE() macro that takes care of type-punning
the underlying function pointer, rather than generating lots of
unused functions for types that won't be cloned. And yes, we're
relying on the compiler treating all pointers equally, even though
a strict C program cannot portably do so - but we're not the first
one in the qemu code base to expect it to work (hello, glib!).

The choice of adding a fourth visitor type deserves some explanation.
On the surface, the clone visitor is mostly an input visitor (it
takes arbitrary input - in this case, another QAPI object - and
creates a new QAPI object during the course of the visit). But
ever since commit da72ab0 consolidated enum visits based on the
visitor type, using VISITOR_INPUT would cause us to run
visit_type_str(), even though for cloning there is nothing to do
(we just copy the enum value across, without regards to its mapping
to strings). Also, since our input happens to be a QAPI object,
we can also satisfy the internal checks for VISITOR_OUTPUT. So in
the end, I settled with a new VISITOR_CLONE, and chose its value
such that many internal checks can use 'v->type & mask', sticking
to 'v->type == value' where the difference matters.

Note that we can only clone objects (including alternates) and lists,
not built-ins or enums. The visitor core hides integer width from
the actual visitor (since commit 04e070d), and as long as that's the
case, we can't clone top-level integers. Then again, those can
always be cloned by direct copy, since they are not objects with
deep pointers, so it's no real loss. And restricting cloning to
just objects and lists is cleaner than restricting it to non-integers.
As such, I documented that the clone visitor is for direct use only
by code internal to QAPI, and should not be used on incomplete objects
(other than a hack to work around the fact that we allow NULL in place
of "" in visit_type_str() in other output visitors). Note that as
written, the clone visitor will never fail on a complete object.

Scalars (including enums) not at the root of the clone copy just fine
with no additional effort while visiting the scalar, by virtue of a
g_memdup() each time we push another struct onto the stack. Cloning
a string requires deduplication of a pointer, which means it can also
provide the guarantee of an input visitor of never producing NULL
even when still accepting NULL in place of "" the way the QMP output
visitor does.

Cloning an 'any' type could be possible by incrementing the QObject
refcnt, but it's not obvious whether that is better than implementing
a QObject deep clone. So for now, we document it as unsupported,
and intentionally omit the .type_any() callback to let a developer
know their usage needs implementation.

Add testsuite coverage for several different clone situations, to
ensure that the code is working. I also tested that valgrind was
happy with the test.

Backports commit a15fcc3cf69ee3d408f60d6cc316488d2b0249b4 from qemu
2018-02-25 01:34:12 -05:00
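
At the call site the macro is meant to be a one-liner; a hypothetical usage
(the concrete QAPI type is just an example):

SocketAddress *copy = QAPI_CLONE(SocketAddress, original);
/* ... use copy independently of original ... */
qapi_free_SocketAddress(copy);
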
Eric Blake 85af4b2030
qapi: Add new visit_complete() function
Making each output visitor provide its own output collection
function was the only remaining reason for exposing visitor
sub-types to the rest of the code base. Add a polymorphic
visit_complete() function which is a no-op for input visitors,
and which populates an opaque pointer for output visitors. For
maximum type-safety, also add a parameter to the output visitor
constructors with a type-correct version of the output pointer,
and assert that the two uses match.

This approach was considered superior to either passing the
output parameter only during construction (action at a distance
during visit_free() feels awkward) or only during visit_complete()
(defeating type safety makes it easier to use incorrectly).

Most callers were function-local, and therefore a mechanical
conversion; the testsuite was a bit trickier, but the previous
cleanup patch minimized the churn here.

The visit_complete() function may be called at most once; doing
so lets us use transfer semantics rather than duplication or
ref-count semantics to get the just-built output back to the
caller, even though it means our behavior is not idempotent.

Generated code is simplified as follows for events:

|@@ -26,7 +26,7 @@ void qapi_event_send_acpi_device_ost(ACP
| QDict *qmp;
| Error *err = NULL;
| QMPEventFuncEmit emit;
|- QmpOutputVisitor *qov;
|+ QObject *obj;
| Visitor *v;
| q_obj_ACPI_DEVICE_OST_arg param = {
| info
|@@ -39,8 +39,7 @@ void qapi_event_send_acpi_device_ost(ACP
|
| qmp = qmp_event_build_dict("ACPI_DEVICE_OST");
|
|- qov = qmp_output_visitor_new();
|- v = qmp_output_get_visitor(qov);
|+ v = qmp_output_visitor_new(&obj);
|
| visit_start_struct(v, "ACPI_DEVICE_OST", NULL, 0, &err);
| if (err) {
|@@ -55,7 +54,8 @@ void qapi_event_send_acpi_device_ost(ACP
| goto out;
| }
|
|- qdict_put_obj(qmp, "data", qmp_output_get_qobject(qov));
|+ visit_complete(v, &obj);
|+ qdict_put_obj(qmp, "data", obj);
| emit(QAPI_EVENT_ACPI_DEVICE_OST, qmp, &err);

and for commands:

| {
| Error *err = NULL;
|- QmpOutputVisitor *qov = qmp_output_visitor_new();
| Visitor *v;
|
|- v = qmp_output_get_visitor(qov);
|+ v = qmp_output_visitor_new(ret_out);
| visit_type_AddfdInfo(v, "unused", &ret_in, &err);
|- if (err) {
|- goto out;
|+ if (!err) {
|+ visit_complete(v, ret_out);
| }
|- *ret_out = qmp_output_get_qobject(qov);
|-
|-out:
| error_propagate(errp, err);

Backports commit 3b098d56979d2f7fd707c5be85555d114353a28d from qemu
2018-02-25 01:20:03 -05:00
Eric Blake ec53301cda
qmp-output-visitor: Favor new visit_free() function
Now that we have a polymorphic visit_free(), we no longer need
qmp_output_visitor_cleanup(); however, we still need to
expose the subtype for qmp_output_get_qobject().

Backports commit 1830f22a6777cedaccd67a08f675d30f7a85ebfd from qemu
2018-02-25 01:12:27 -05:00
Eric Blake f008d93ac0
qmp-input-visitor: Favor new visit_free() function
Now that we have a polymorphic visit_free(), we no longer need
qmp_input_visitor_cleanup(); which in turn means we no longer
need to return a subtype from qmp_input_visitor_new() nor a
public upcast function.

Generated code changes to qmp-marshal.c look like:

|@@ -52,11 +52,10 @@ void qmp_marshal_add_fd(QDict *args, QOb
| {
| Error *err = NULL;
| AddfdInfo *retval;
|- QmpInputVisitor *qiv = qmp_input_visitor_new(QOBJECT(args), true);
| Visitor *v;
| q_obj_add_fd_arg arg = {0};
|
|- v = qmp_input_get_visitor(qiv);
|+ v = qmp_input_visitor_new(QOBJECT(args), true);
| visit_start_struct(v, NULL, NULL, 0, &err);
| if (err) {
| goto out;

Backports commit b70ce1018a251c0c33498d9c927a07cade655a5e from qemu
2018-02-25 01:10:53 -05:00
Eric Blake e88a7e260b
string-input-visitor: Favor new visit_free() function
Now that we have a polymorphic visit_free(), we no longer need
string_input_visitor_cleanup(); which in turn means we no longer
need to return a subtype from string_input_visitor_new() nor a
public upcast function.

Backports commit 7a0525c7be6b38d32d586e3fd12e7377ded21faa from qemu
2018-02-25 01:08:04 -05:00
Eric Blake 7f741a6c9b
qapi: Add new visit_free() function
Making each visitor provide its own (awkwardly-named) FOO_cleanup()
is unusual, when we can instead have a polymorphic visit_free()
interface. Over the next few patches, we can use the polymorphic
functions to eliminate the need for a FOO_get_visitor() function
for accessing specific visitor functionality, once everything can
be accessed directly through the Visitor* interfaces.

The dealloc visitor is the first one converted to completely use
the new entry point, since qapi_dealloc_visitor_cleanup() was the
only reason that qapi_dealloc_get_visitor() existed, and only
generated and testsuite code was even using it. With the new
visit_free() entry point in place, we no longer need to expose
the QapiDeallocVisitor subtype through qapi_dealloc_visitor_new(),
and can get by with less generated code, with diffs that look like:

| void qapi_free_ACPIOSTInfo(ACPIOSTInfo *obj)
| {
|- QapiDeallocVisitor *qdv;
| Visitor *v;
|
| if (!obj) {
| return;
| }
|
|- qdv = qapi_dealloc_visitor_new();
|- v = qapi_dealloc_get_visitor(qdv);
|+ v = qapi_dealloc_visitor_new();
| visit_type_ACPIOSTInfo(v, NULL, &obj, NULL);
|- qapi_dealloc_visitor_cleanup(qdv);
|+ visit_free(v);
|}

Backports commit 2c0ef9f411ae6081efa9eca5b3eab2dbeee45a6c from qemu
2018-02-25 01:05:41 -05:00
Eric Blake 37ae4dfdfd
qapi: Add parameter to visit_end_*
Rather than making the dealloc visitor track of stack of pointers
remembered during visit_start_* in order to free them during
visit_end_*, it's a lot easier to just make all callers pass the
same pointer to visit_end_*. The generated code has access to the
same pointer, while all other users are doing virtual walks and
can pass NULL. The dealloc visitor is then greatly simplified.

All three visit_end_*() functions intentionally take a void**,
even though the visit_start_*() functions differ between void**,
GenericList**, and GenericAlternate**. This is done for several
reasons: when doing a virtual walk, passing NULL doesn't care
what the type is, but when doing a generated walk, we already
have to cast the caller's specific FOO* to call visit_start,
while using void** lets us use visit_end without a cast. Also,
an upcoming patch will add a clone visitor that wants to use
the same implementation for all three visit_end callbacks,
which is made easier if all three share the same signature.

For visitors which already track per-object state (the QMP visitors
via a stack, and the string visitors which do not allow nesting),
add an assertion that the caller is indeed passing the same
pointer to paired calls.

Backports commit 1158bb2a058fcdd0c8fc3e60dc77f7a57ddbb271 from qemu
2018-02-25 00:57:54 -05:00
Changlong Xie 2ca07642f1
qom: Fix comment typo
It's qom_unref, not qdef_unref.

Backports commit ada03a0e8423ef8950e30d216f56a9661a4070e2 from qemu
2018-02-25 00:46:15 -05:00
Markus Armbruster eeef227560
range: Replace internal representation of Range
Range represents a range as follows. Member @start is the inclusive
lower bound, member @end is the exclusive upper bound. Zero @end is
special: if @start is also zero, the range is empty, else @end is to
be interpreted as 2^64. No other empty ranges may occur.

The range [0,2^64-1] cannot be represented. If you try to create it
with range_set_bounds1(), you get the empty range instead. If you try
to create it with range_set_bounds() or range_extend(), assertions
fail. Before range_set_bounds() existed, the open-coded creation
usually got you the empty range instead. Open deathtrap.

Moreover, the code dealing with the janus-faced @end is too clever by
half.

Dumb this down to a more pedestrian representation: members @lob and
@upb are inclusive lower and upper bounds. The empty range is encoded
as @lob = 1, @upb = 0.

Backports commit 6dd726a2bf1b800289d90a84d5fcb5ce7b78a8e1 from qemu
2018-02-25 00:44:36 -05:00
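
In terms of data layout, the new representation is essentially (a sketch
matching the description above):

struct Range {
    uint64_t lob;    /* inclusive lower bound */
    uint64_t upb;    /* inclusive upper bound */
};

/* Empty range: lob = 1, upb = 0.  Every lob <= upb range, including
 * [0, UINT64_MAX], is now representable without special cases. */
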
Markus Armbruster 8b2a0c4ece
range: Eliminate direct Range member access
Users of struct Range mess liberally with its members, which makes
refactoring hard. Create a set of methods, and convert all users to
call them instead of accessing members. The methods have carefully
worded contracts, and use assertions to check them.

Backports commit a0efbf16604770b9d805bcf210ec29942321134f from qemu
2018-02-25 00:39:43 -05:00
Alistair Francis fbb0645fb3
bitops: Add MAKE_64BIT_MASK macro
Add a macro that creates a 64-bit value containing 'length' ones
shifted left by the value of 'shift'.

Backports commit ae2923b5c20a21c6457680330506a9c13873485c from qemu
2018-02-25 00:30:39 -05:00
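
A minimal sketch of such a macro (argument evaluation and edge cases
simplified):

#define MAKE_64BIT_MASK(shift, length) \
    (((~0ULL) >> (64 - (length))) << (shift))

/* e.g. MAKE_64BIT_MASK(4, 8) == 0x0000000000000ff0ULL */
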
Peter Maydell efc6cc2b83
memory: Assert that memory_region_init_rom_device() ops aren't NULL
It doesn't make sense to pass a NULL ops argument to
memory_region_init_rom_device(), because the effect will
be that if the guest tries to write to the memory region
then QEMU will segfault. Catch the bug earlier by sanity
checking the arguments to this function, and remove the
misleading documentation that suggests that passing NULL
might be sensible.

Backports commit 39e0b03dec518254fabd2acff29548d3f1d2b754 from qemu
2018-02-25 00:29:52 -05:00
Peter Maydell 334e951ec1
memory: Provide memory_region_init_rom()
Provide a new helper function memory_region_init_rom() for memory
regions which are read-only (and unlike those created by
memory_region_init_rom_device() don't have special behaviour
for writes). This has the same behaviour as calling
memory_region_init_ram() and then memory_region_set_readonly()
(which is what we do today in boards with pure ROMs) but is a
more easily discoverable API for the purpose.

Backports commit a1777f7f6462c66e1ee6e98f0d5c431bfe988aa5 from qemu
2018-02-25 00:28:17 -05:00
Alexey Kardashevskiy 7187d77cfa
memory: Add MemoryRegionIOMMUOps.notify_started/stopped callbacks
The IOMMU driver may change behavior depending on whether a notifier
client is present. In the case of POWER, this represents a change in
the visibility of the IOTLB, for other drivers such as intel-iommu and
future AMD-Vi emulation, notifier support is not yet enabled and this
provides the opportunity to flag that incompatibility.

Backports commit d22d8956b185c002b50a4d0883aff61f857347ef from qemu
2018-02-25 00:23:00 -05:00
Eric Blake c14d8226ab
qapi: Fix memleak in string visitors on int lists
Commit 7f8f9ef1 introduced the ability to store a list of
integers as a sorted list of ranges, but when merging ranges,
it leaks one or more ranges. It was also using range_get_last()
incorrectly within range_compare() (a range is a start/end pair,
but range_get_last() is for start/len pairs), and will also
mishandle a range ending in UINT64_MAX (remember, we document
that no range covers 2**64 bytes, but that ranges that end on
UINT64_MAX have end < begin).

The whole merge algorithm was rather complex, and included
unnecessary passes over data within glib functions, and enough
indirection to make it hard to easily plug the data leaks.
Since we are already hard-coding things to a list of ranges,
just rewrite the thing to open-code the traversal and
comparisons, by making the range_compare() helper function give
us an answer that is easier to use, at which point we avoid the
need to pass any callbacks to g_list_*(). Then by reusing
range_extend() instead of duplicating effort with range_merge(),
we cover the corner cases correctly.

Drop the now-unused range_merge() and ranges_can_merge().

Doing this lets test-string-{input,output}-visitor pass under
valgrind without leaks.

Backports commit db486cc334aafd3dbdaf107388e37fc3d6d3e171 from qemu
2018-02-25 00:20:34 -05:00
Eric Blake ef357d06bc
qapi: Simplify use of range.h
Calling our function g_list_insert_sorted_merged is a misnomer,
since we are NOT writing a glib function. Furthermore, we are
making every caller pass the same comparator function of
range_merge(): any caller that would try otherwise would break
in weird ways since our internal call to ranges_can_merge() is
hard-coded to operate only on ranges, rather than paying
attention to the caller's comparator.

Better is to fix things so that callers don't have to care about
our internal comparator, by picking a function name and updating
the parameter type away from a gratuitous use of void*, to make
it obvious that we are operating specifically on a list of ranges
and not a generic list. Plus, refactoring the code here will
make it easier to plug a memory leak in the next patch.

range_compare() is now internal only, and moves to the .c file.

Backports commit 7c47959d0cb05db43014141a156ada0b6d53a750 from qemu
2018-02-25 00:02:42 -05:00
Eric Blake 5e22c7e180
range: Create range.c for code that should not be inline
g_list_insert_sorted_merged() is rather large to be an inline
function; move it to its own file. range_merge() and
ranges_can_merge() can likewise move, as they are only used
internally. Also, it becomes obvious that the condition within
range_merge() is already satisfied by its caller, and that the
return value is not used.

The diffstat is misleading, because of the copyright boilerplate.

Backports commit fec0fc0a13ac7f1a1130433a6740cd850c3db34a from qemu
2018-02-24 23:59:13 -05:00
Aleksandar Markovic 6eb4fa54f6
softfloat: Implement run-time-configurable meaning of signaling NaN bit
This patch modifies SoftFloat library so that it can be configured in
run-time in relation to the meaning of signaling NaN bit, while, at the
same time, strictly preserving its behavior on all existing platforms.

Background:

In floating-point calculations, there is a need for denoting undefined or
unrepresentable values. This is achieved by defining certain floating-point
numerical values to be NaNs (which stands for "not a number"). For additional
reasons, virtually all modern floating-point unit implementations use two
kinds of NaNs: quiet and signaling. The binary representations of these two
kinds of NaNs, as a rule, differ only in one bit (that bit is, traditionally,
the first bit of mantissa).

Up to 2008, standards for floating-point did not specify all details about
binary representation of NaNs. More specifically, the meaning of the bit
that is used for distinguishing between signaling and quiet NaNs was not
strictly prescribed. (IEEE 754-2008 was the first floating-point standard
that defined that meaning clearly, see [1], p. 35) As a result, different
platforms took different approaches, and that presented considerable
challenge for multi-platform emulators like QEMU.

Mips platform represents the most complex case among QEMU-supported
platforms regarding signaling NaN bit. Up to the Release 6 of Mips
architecture, "1" in signaling NaN bit denoted signaling NaN, which is
opposite to IEEE 754-2008 standard. From Release 6 on, Mips architecture
adopted IEEE standard prescription, and "0" denotes signaling NaN. On top of
that, Mips architecture for SIMD (also known as MSA, or vector instructions)
also specifies signaling bit in accordance to IEEE standard. MSA unit can be
implemented with both pre-Release 6 and Release 6 main processor units.

QEMU uses SoftFloat library to implement various floating-point-related
instructions on all platforms. The current QEMU implementation allows for
defining meaning of signaling NaN bit during build time, and is implemented
via preprocessor macro called SNAN_BIT_IS_ONE.

On the other hand, the change in this patch enables SoftFloat library to be
configured in run-time. This configuration is meant to occur during CPU
initialization, at the moment when it is definitely known what desired
behavior for particular CPU (or any additional FPUs) is.

The change is implemented so that it is consistent with existing
implementation of similar cases. This means that structure float_status is
used for passing the information about desired signaling NaN bit on each
invocation of SoftFloat functions. The additional field in float_status is
called snan_bit_is_one, which supersedes macro SNAN_BIT_IS_ONE.

IMPORTANT:

This change is not meant to create any change in emulator behavior or
functionality on any platform. It just provides the means for SoftFloat
library to be used in a more flexible way - in other words, it will just
prepare SoftFloat library for usage related to Mips platform and its
specifics regarding signaling bit meaning, which is done in some of
subsequent patches from this series.

Further break down of changes:

1) Added field snan_bit_is_one to the structure float_status, and
correspondent setter function set_snan_bit_is_one().

2) Constants <float16|float32|float64|floatx80|float128>_default_nan
(used both internally and externally) converted to functions
<float16|float32|float64|floatx80|float128>_default_nan(float_status*).
This is necessary since they are dependent on signaling bit meaning.
At the same time, for the sake of code cleanup and simplicity, constants
<floatx80|float128>_default_nan_<low|high> (used only internally within
SoftFloat library) are removed, as not needed.

3) Added a float_status* argument to SoftFloat library functions
XXX_is_quiet_nan(XXX a_), XXX_is_signaling_nan(XXX a_),
XXX_maybe_silence_nan(XXX a_). This argument must be present in
order to enable correct invocation of new version of functions
XXX_default_nan(). (XXX is <float16|float32|float64|floatx80|float128>
here)

4) Updated code for all platforms to reflect changes in SoftFloat library.
This change is twofolds: it includes modifications of SoftFloat library
functions invocations, and an addition of invocation of function
set_snan_bit_is_one() during CPU initialization, with arguments that
are appropriate for each particular platform. It was established that
all platforms zero their main CPU data structures, so snan_bit_is_one(0)
in appropriate places is not added, as it is not needed.

[1] "IEEE Standard for Floating-Point Arithmetic",
IEEE Computer Society, August 29, 2008.

Backports commit af39bc8c49224771ec0d38f1b693ea78e221d7bc from qemu
2018-02-24 20:27:12 -05:00
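
The setter from item 1) is tiny; roughly (a sketch based on the description,
with 'flag' standing in for SoftFloat's boolean-like type):

static inline void set_snan_bit_is_one(flag val, float_status *status)
{
    status->snan_bit_is_one = val;
}

/* Target CPU init code then picks the convention it needs, e.g.
 * set_snan_bit_is_one(1, &env->fp_status) for pre-Release-6 MIPS style,
 * set_snan_bit_is_one(0, &env->fp_status) for IEEE 754-2008 style. */
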
Alexey Kardashevskiy 096ca207af
memory: Add reporting of supported page sizes
Every IOMMU has some granularity which MemoryRegionIOMMUOps::translate
uses when translating, however this information is not available outside
the translate context for various checks.

This adds a get_min_page_size callback to MemoryRegionIOMMUOps and
a wrapper for it so IOMMU users (such as VFIO) can know the minimum
actual page size supported by an IOMMU.

As IOMMU MR represents a guest IOMMU, this uses TARGET_PAGE_SIZE
as fallback.

This removes vfio_container_granularity() and uses new helper in
memory_region_iommu_replay() when replaying IOMMU mappings on added
IOMMU memory region.

Backports the relevant parts of commit f682e9c244af7166225f4a50cc18ff296bb9d43e from qemu
2018-02-24 19:23:28 -05:00
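
The wrapper is straightforward; a sketch (the TARGET_PAGE_SIZE fallback is the
part described above, the rest is an assumption about the surrounding
structures):

uint64_t memory_region_iommu_get_min_page_size(MemoryRegion *mr)
{
    assert(memory_region_is_iommu(mr));

    if (mr->iommu_ops && mr->iommu_ops->get_min_page_size) {
        return mr->iommu_ops->get_min_page_size(mr);
    }

    /* A guest IOMMU is not expected to be finer-grained than a guest page. */
    return TARGET_PAGE_SIZE;
}
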
Peter Maydell f893dacef0
bitops.h: Implement half-shuffle and half-unshuffle ops
A half-shuffle operation takes a word with zeros in the high half:
0000 0000 0000 0000 ABCD EFGH IJKL MNOP
and spreads the bits out so they are in every other bit of the word:
0A0B 0C0D 0E0F 0G0H 0I0J 0K0L 0M0N 0O0P
A half-unshuffle performs the reverse operation.

Provide functions in bitops.h which implement these operations
for 32-bit and 64-bit inputs, and add tests for them.

Backports commit b355438de52d0782983bf4bdc47936189a0c988b from qemu
2018-02-24 19:02:36 -05:00
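
A 32-bit half-shuffle can be written with the classic bit-interleave trick; a
sketch (constants and step ordering may differ from the backported code):

static inline uint32_t half_shuffle32(uint32_t x)
{
    /* input: 0000 0000 0000 0000 ABCD EFGH IJKL MNOP */
    x = ((x & 0xFF00) << 8) | (x & 0x00FF);   /* split into two bytes, 16 apart */
    x = ((x << 4) | x) & 0x0F0F0F0F;          /* split bytes into nibbles       */
    x = ((x << 2) | x) & 0x33333333;          /* split nibbles into bit pairs   */
    x = ((x << 1) | x) & 0x55555555;          /* spread into every other bit    */
    return x;                                 /* 0A0B 0C0D 0E0F 0G0H ...        */
}
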
Bharata B Rao 851dec945d
qom: API to get instance_size of a type
Add an API object_type_get_size(const char *typename) that returns the
instance_size of the given typename.

Backports commit 3f97b53a682d2595747c926c00d78b9d406f1be0 from qemu
2018-02-24 19:00:16 -05:00
Emilio G. Cota ae3e22a689
tb hash: hash phys_pc, pc, and flags with xxhash
For some workloads such as arm bootup, tb_phys_hash is performance-critical.
This is due to the high frequency of accesses to the hash table, caused
by (frequent) TLB flushes that wipe out the cpu-private tb_jmp_cache's.
More info:
https://lists.nongnu.org/archive/html/qemu-devel/2016-03/msg05098.html

To dig further into this I modified an arm image booting debian jessie to
immediately shut down after boot. Analysis revealed that quite a bit of time
is unnecessarily spent in tb_phys_hash: the cause is poor hashing that
results in very uneven loading of chains in the hash table's buckets;
the longest observed chain had ~550 elements.

The appended patch addresses this with two changes:

1) Use xxhash as the hash table's hash function. xxhash is a fast,
high-quality hashing function.

2) Feed the hashing function with not just tb_phys, but also pc and flags.

This improves performance over using just tb_phys for hashing, since that
resulted in some hash buckets having many TBs while others got very few;
with these changes, the longest observed chain on a single hash bucket is
brought down from ~550 to ~40.

Tests show that the other element checked for in tb_find_physical,
cs_base, is always a match when tb_phys+pc+flags are a match,
so hashing cs_base is wasteful. It could be that this is an ARM-only
thing, though. UPDATE:
On Tue, Apr 05, 2016 at 08:41:43 -0700, Richard Henderson wrote:
> The cs_base field is only used by i386 (in 16-bit modes), and sparc (for a TB
> consisting of only a delay slot).
> It may well still turn out to be reasonable to ignore cs_base for hashing.

BTW, after this change the hash table should not be called "tb_hash_phys"
anymore; this is addressed later in this series.

This change gives consistent bootup time improvements. I tested two
host machines:
- Intel Xeon E5-2690: 11.6% less time
- Intel i7-4790K: 19.2% less time

Increasing the number of hash buckets yields further improvements. However,
using a larger, fixed number of buckets can degrade performance for other
workloads that do not translate as many blocks (600K+ for debian-jessie arm
bootup). This is dealt with later in this series.

Backports commit 42bd32287f3a18d823f2258b813824a39ed7c6d9 from qemu
2018-02-24 18:00:14 -05:00
Emilio G. Cota 9ef9de9cf8
exec: add tb_hash_func5, derived from xxhash
This will be used by upcoming changes for hashing the tb hash.

Add this into a separate file to include the copyright notice from
xxhash.

Backports commit dc8b295d05ec35a8c032f9abca421772347ba5d4 from qemu
2018-02-24 17:36:35 -05:00
Emilio G. Cota 8518f55df7
compiler.h: add QEMU_ALIGNED() to enforce struct alignment
Backports commit 911a4d2215b05267b16925503218f49d607c6b29 from qemu
2018-02-24 17:32:43 -05:00
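
The macro is a thin wrapper over the compiler attribute; a sketch plus a
typical (hypothetical) use:

#define QEMU_ALIGNED(X) __attribute__((aligned(X)))

/* e.g. force a structure onto its own 64-byte cache line: */
struct QEMU_ALIGNED(64) CacheLinePadded {
    uint64_t counter;
};
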
Peter Maydell d7dccff836
cpu-exec: Rename cpu_resume_from_signal() to cpu_loop_exit_noexc()
The function cpu_resume_from_signal() is now always called with a
NULL puc argument, and is rather misnamed since it is never called
from a signal handler. It is essentially forcing an exit to the
top level cpu loop but without raising any exception, so rename
it to cpu_loop_exit_noexc() and drop the useless unused argument.

Backports commit 6886b98036a8f8f5bce8b10756ce080084cef11b from qemu
2018-02-24 17:25:28 -05:00
Peter Maydell 8d0faac1dc
qemu-common.h: Drop WORDS_ALIGNED define
The WORDS_ALIGNED #define is not used anywhere, and hasn't been since
2013 when commit 612d590ebc6cef rewrote the various ld<type>_<endian>_p
functions to not use it. Remove the #define and the comment describing it.
Also remove the line in the comment about TARGET_WORDS_ALIGNED, since
it has never actually existed.

Backports commit 0d5c21f2b3bf1e0b562a2c74e353d2e03f2f50ef from qemu
2018-02-24 17:01:55 -05:00
Paolo Bonzini 8df5ad80b1
exec: hide mr->ram_addr from qemu_get_ram_ptr users
Let users of qemu_get_ram_ptr and qemu_ram_ptr_length pass in an
address that is relative to the MemoryRegion. This basically means
what address_space_translate returns.

Because the semantics of the second parameter change, rename the
function to qemu_map_ram_ptr.

Backports commit 0878d0e11ba8013dd759c6921cbf05ba6a41bd71 from qemu
2018-02-24 16:17:49 -05:00
Paolo Bonzini b2e1b34bcc
memory: split memory_region_from_host from qemu_ram_addr_from_host
Move the old qemu_ram_addr_from_host to memory_region_from_host and
make it return an offset within the region. For qemu_ram_addr_from_host
return the ram_addr_t directly, similar to what it was before
commit 1b5ec23 ("memory: return MemoryRegion from qemu_ram_addr_from_host",
2013-07-04).

Backports commit 07bdaa4196b51bc7ffa7c3f74e9e4a9dc8a7966a from qemu
2018-02-24 16:06:49 -05:00
Paolo Bonzini 918c626847
exec: remove ram_addr argument from qemu_ram_block_from_host
Of the two callers, one does not use it, and the other can compute
it itself based on the other output argument (offset) and the RAMBlock.

Backports commit f615f39616c4fd1a3a3b078af8d75bb4be6390de from qemu
2018-02-24 03:37:40 -05:00
Paolo Bonzini f26f1f123c
memory: remove qemu_get_ram_fd, qemu_set_ram_fd, qemu_ram_block_host_ptr
Remove direct uses of ram_addr_t and optimize memory_region_{get,set}_fd
now that a MemoryRegion knows its RAMBlock directly.

Backports commit 4ff87573df3606856a92c14eef3393a63d736d11 from qemu
2018-02-24 03:34:44 -05:00
Emilio G. Cota ab569f5cde
atomics: do not emit consume barrier for atomic_rcu_read
Currently we emit a consume-load in atomic_rcu_read. Because of
limitations in current compilers, this is overkill for non-Alpha hosts
and it is only useful to make Thread Sanitizer work.

This patch leaves the consume-load in atomic_rcu_read when
compiling with Thread Sanitizer enabled, and resorts to a
relaxed load + smp_read_barrier_depends otherwise.

On an RMO host architecture, such as aarch64, the performance
improvement of this change is easily measurable. For instance,
qht-bench performs an atomic_rcu_read on every lookup. Performance
before and after applying this patch:

$ tests/qht-bench -d 5 -n 1
Before: 9.78 MT/s
After: 10.96 MT/s

Backports commit 15487aa132109891482f79d78a30d6cfd465a391 from qemu
2018-02-24 03:28:11 -05:00
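
In preprocessor terms the change amounts to roughly the following sketch (the
exact guard macro used for TSan detection is an assumption):

#if defined(__SANITIZE_THREAD__)
/* Keep the consume-load so Thread Sanitizer can reason about the dependency. */
#define atomic_rcu_read(ptr) __atomic_load_n(ptr, __ATOMIC_CONSUME)
#else
/* Relaxed load plus dependency barrier: free on everything except Alpha. */
#define atomic_rcu_read(ptr) ({                                        \
    typeof(*(ptr)) _val = __atomic_load_n(ptr, __ATOMIC_RELAXED);      \
    smp_read_barrier_depends();                                        \
    _val;                                                              \
})
#endif
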
Emilio G. Cota 87ef2a2c5f
atomics: emit an smp_read_barrier_depends() barrier only for Alpha and Thread Sanitizer
For correctness, smp_read_barrier_depends() is only required to
emit a barrier on Alpha hosts. However, we are currently emitting
a consume fence unconditionally, and most compilers currently treat
consume and acquire fences as equivalent.

Fix it by keeping the consume fence if we're compiling with Thread
Sanitizer, since this might help prevent false warnings. Otherwise,
only emit the barrier for Alpha hosts. Note that we still guarantee
that smp_read_barrier_depends() is a compiler barrier.

Backports commit c983895258a771f8a5e4a53950bfb7fd2216651c from qemu
2018-02-24 03:26:52 -05:00
Eduardo Habkost aa3d46ef83
osdep: Move default qemu_hw_version() value to a macro
The macro will be used by code that will stop calling
qemu_hw_version() at runtime and just need a constant value.

Backports commit d494352c2f7818aeba184a8ef757569083740bb2 from qemu
2018-02-24 03:16:34 -05:00
Fam Zheng fb8135cd0d
memory: Remove code for mr->may_overlap
The collision check does nothing and hasn't been used. Remove the
variable together with related code.

Backports commit b61359781958759317ee6fd1a45b59be0b7dbbe1 from qemu
2018-02-24 02:55:25 -05:00
Gonglei feff56cc11
memory: drop find_ram_block()
On the one hand, we have already qemu_get_ram_block() whose function
is similar. On the other hand, we can directly use mr->ram_block;
searching for the RAMBlock by ram_addr is wasteful.

Backports commit fa53a0e53efdc7002497ea4a76aacf6cceb170ef from qemu
2018-02-24 02:52:20 -05:00
Paolo Bonzini 9bb67a3f58
hw: clean up hw/hw.h includes
Include qom/object.h and exec/memory.h instead of exec/ioport.h;
exec/ioport.h was almost everywhere required only for those two
includes, not for the content of the header itself.

Remove block/aio.h, everybody is already including it through
another path.

With this change, include/hw/hw.h is freed from qemu-common.h.

Backports commit df43d49cb8708b9c88a20afe0d1a3089b550a5b8 from qemu
2018-02-24 02:46:41 -05:00
Paolo Bonzini d0d3712417
hw: remove pio_addr_t
pio_addr_t is almost unused, because these days I/O ports are simply
accessed through the address space. cpu_{in,out}[bwl] themselves are
almost unused; monitor.c and xen-hvm.c could use address_space_read/write
directly, since they have an integer size at hand. This leaves qtest as
the only user of those functions.

On the other hand even portio_* functions use this type; the only
interesting use of pio_addr_t thus is include/hw/sysbus.h. I guess I
could move it there, but I don't see much benefit in that either. Using
uint32_t is enough and avoids the need to include ioport.h everywhere.

Backports commit 89a80e7400f7225d9401b35ef32454b4ab29dc67 from qemu
2018-02-24 02:43:16 -05:00
Paolo Bonzini 9485b7c2e1
cpu: move exec-all.h inclusion out of cpu.h
exec-all.h contains TCG-specific definitions. It is not needed outside
TCG-specific files such as translate.c, exec.c or *helper.c.

One generic function had snuck into include/exec/exec-all.h; move it to
include/qom/cpu.h.

Backports commit 63c915526d6a54a95919ebece83fa9ca631b2508 from qemu
2018-02-24 02:39:08 -05:00
Paolo Bonzini 58693409ea
exec: extract exec/tb-context.h
TCG backends do not need most of exec-all.h; extract what they actually
need to a separate file or move it directly to tcg.h. The next patch
will stop including exec-all.h from everywhere.

Backports commit 00f6da6a1a5d1ce085334eccbb50ec899ceed513 from qemu
2018-02-24 02:09:58 -05:00
Paolo Bonzini f9b9d0ba0f
hw: explicitly include qemu/log.h
Move the inclusion out of hw/hw.h, most files do not need it.

Backports commit 03dd024ff57733a55cd2e455f361d053c81b1b29 from qemu
2018-02-24 02:00:45 -05:00
Paolo Bonzini 37f26922dd
qemu-common: push cpu.h inclusion out of qemu-common.h
Backports commit 33c11879fd422b759483ed25fef133ea900ea8d7 from qemu
2018-02-24 01:50:56 -05:00
Paolo Bonzini e84da64a2b
qemu-common: stop including qemu/bswap.h from qemu-common.h
Move it to the actual users. There are still a few includes of
qemu/bswap.h in headers; removing them is left for future work.

Backports commit 58369e22cf971448411bfbc8c894b2addebe2111 from qemu
2018-02-24 01:06:03 -05:00
Paolo Bonzini 78fd1aab94
cpu: move endian-dependent load/store functions to cpu-all.h
Disentangle cpu-common.h and memory.h from NEED_CPU_H. Prototypes are
not defined for !NEED_CPU_H, so remove them from poison.h too. Only
macros need poisoning.

Backports commit a7d6039cb35592683ecc56d2b37817da2d2f8b00 from qemu
2018-02-24 01:04:26 -05:00
Paolo Bonzini fee6dcb22a
include: move CPU-related definitions out of qemu-common.h
Backports commit 4b4629d9d26fd0e100d9be526367a96aa35b541d from qemu
2018-02-24 00:33:49 -05:00
Wei Jiangang 7cf135457a
accel: make configure_accelerator return void
Returning the negated value of accel_initialised is meaningless,
and the caller vl doesn't check it.

Backports commit bdc3f61dec2f9c227235bb5f677a0272e1184c82 from qemu
2018-02-24 00:31:28 -05:00
Sergey Fedorov 1a768018c2
tcg: Remove needless CPUState::current_tb
This field was used for telling cpu_interrupt() to unlink a chain of TBs
being executed when it worked that way. Now, cpu_interrupt() doesn't do
this anymore. So we don't need this field anymore.

Backports commit 3213525f8ab48742db09dab18cb9ae6f36a6c921 from qemu
2018-02-23 23:45:42 -05:00
Sergey Fedorov ba9a237586
tcg: Rework tb_invalidated_flag
'tb_invalidated_flag' was meant to catch two events:
* some TB has been invalidated by tb_phys_invalidate();
* the whole translation buffer has been flushed by tb_flush().

Then it was checked:
* in cpu_exec() to ensure that the last executed TB can be safely
linked to directly call the next one;
* in cpu_exec_nocache() to decide if the original TB should be provided
for further possible invalidation along with the temporarily
generated TB.

It is always safe to patch an invalidated TB since it is not going to be
used anyway. It is also safe to call tb_phys_invalidate() for an already
invalidated TB. Thus, setting this flag in tb_phys_invalidate() is
simply unnecessary. Moreover, it can prevent proper linking of TBs
if any arbitrary TB has been invalidated. So just don't touch it
in tb_phys_invalidate().

If this flag is only used to catch whether tb_flush() has been called
then rename it to 'tb_flushed'. Declare it as 'bool' and stick to using
only 'true' and 'false' to set its value. Also, instead of setting it in
tb_gen_code(), just after tb_flush() has been called, do it right inside
of tb_flush().

In cpu_exec(), this flag is used to track if tb_flush() has been called
and have made 'next_tb' (a reference to the last executed TB) invalid
for linking it to directly call the next TB. tb_flush() can be called
during the CPU execution loop from tb_gen_code(), during TB execution or
by another thread while 'tb_lock' is released. Catch for translation
buffer flush reliably by resetting this flag once before first TB lookup
and each time we find it set before trying to add a direct jump. Don't
touch it in tb_find_physical().

Each vCPU has its own execution loop in multithreaded mode and thus
should have its own copy of the flag to be able to reset it with its own
'next_tb' and don't affect any other vCPU execution thread. So make this
flag per-vCPU and move it to CPUState.

In cpu_exec_nocache(), we only need to check if tb_flush() has been
called from tb_gen_code() called by cpu_exec_nocache() itself. To do
this reliably, preserve the old value of the flag, reset it before
calling tb_gen_code(), check afterwards, and combine the saved value
back to the flag.

This patch is based on the patch "tcg: move tb_invalidated_flag to
CPUState" from Paolo Bonzini <pbonzini@redhat.com>.

Backports commit 6f789be56d3f38e9214dafcfab3bf9be7191f370 from qemu
2018-02-23 23:34:51 -05:00
Sergey Fedorov d60af028c5
tcg: Clarify thread safety check in tb_add_jump()
The check is to make sure that another thread hasn't already done the
same while we were outside of tb_lock. Mention this in a comment.

Backports commit 9962c478b153a18fe88a6509fe58cd178aff8abc from qemu
2018-02-23 21:32:47 -05:00
Sergey Fedorov fbc0a1105f
tcg: Use uintptr_t type for jmp_list_{next|first} fields of TB
These fields do not contain pure pointers to a TranslationBlock
structure. So uintptr_t is the most appropriate type for them.
Also add some asserts to ensure that the two least significant bits of
the pointer are always zero before assigning it to jmp_list_first.

Backports commit c37e6d7e3589ecb96914faa21025ad7ba6654aea from qemu
2018-02-23 21:28:19 -05:00
Sergey Fedorov e60c24cecf
tcg: Clean up direct block chaining data fields
Briefly describe in a comment how direct block chaining is done. It
should help in understanding of the following data fields.

Rename some fields in TranslationBlock and TCGContext structures to
better reflect their purpose (dropping excessive 'tb_' prefix in
TranslationBlock but keeping it in TCGContext):
tb_next_offset => jmp_reset_offset
tb_jmp_offset => jmp_insn_offset
tb_next => jmp_target_addr
jmp_next => jmp_list_next
jmp_first => jmp_list_first

Avoid using a magic constant as an invalid offset which is used to
indicate that there's no n-th jump generated.

Backports commit f309101c26b59641fc1aa8fb2a98a5441cdaea03 from qemu
2018-02-23 21:28:19 -05:00
Sergey Fedorov c5b234ed1f
tcg: Note requirement on atomic direct jump patching
Backports commit 10b4f4855537dd421e193a7d0416513116370558 from qemu
2018-02-23 21:28:18 -05:00
Sergey Fedorov 52e2972300
tcg/arm: Make direct jump patching thread-safe
Ensure direct jump patching in ARM is atomic by using
atomic_read()/atomic_set() for code patching.

Backports commit 7d14e0e2d661479985197203589c38840e1066df from qemu
2018-02-23 21:28:18 -05:00
Sergey Fedorov 57359fbe6c
tcg/s390: Make direct jump patching thread-safe
Ensure direct jump patching in s390 is atomic by:
* naturally aligning a location of direct jump address;
* using atomic_read()/atomic_set() for code patching.

Backports commit ed3d51ecd7fe248d3959e469d53890ac9ffe0cd2 from qemu
2018-02-23 21:28:18 -05:00
Sergey Fedorov 5eb2d6618f
tcg/i386: Make direct jump patching thread-safe
Ensure direct jump patching in i386 is atomic by:
* naturally aligning a location of direct jump address;
* using atomic_read()/atomic_set() for code patching.

Backports commit 0d07abf05e98903c7faf204a9a90f7d45b7554dc from qemu
2018-02-23 21:28:17 -05:00
Lioncash fffa27d269
osdep: MSVC-compatible alignment macros 2018-02-23 21:28:17 -05:00
Sergey Fedorov 3456f0879e
include/qemu/osdep.h: Add macros for pointer alignment
These macros provide a convenient way to n-byte align pointers up and
down and check if a pointer is n-byte aligned.

Backports commit 6b587d3cda48e7ba26de8d30bf0d8a7063970715 from qemu
2018-02-23 21:28:17 -05:00
Sergey Fedorov 47eac70cb9
include/qemu/osdep.h: Add a macro to check for alignment
Backports commit 18a60a76147569ca9e11b0607e50ce4012fe1aaa from qemu
2018-02-23 21:28:17 -05:00
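
A sketch of how these alignment helpers (this entry and the pointer variants
just above) typically look (illustrative; the typeof-based casts are
GCC-specific):

/* Integer alignment check: true when n is a multiple of m. */
#define QEMU_IS_ALIGNED(n, m) (((n) % (m)) == 0)

/* Pointer variants, built on the integer macros via uintptr_t. */
#define QEMU_ALIGN_PTR_DOWN(p, n) \
    ((typeof(p))QEMU_ALIGN_DOWN((uintptr_t)(p), (n)))
#define QEMU_ALIGN_PTR_UP(p, n) \
    ((typeof(p))QEMU_ALIGN_UP((uintptr_t)(p), (n)))
#define QEMU_PTR_IS_ALIGNED(p, n) QEMU_IS_ALIGNED((uintptr_t)(p), (n))
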
Emilio G. Cota 170f6e0b3b
tb: consistently use uint32_t for tb->flags
We are inconsistent with the type of tb->flags: usage varies loosely
between int and uint64_t. Settle to uint32_t everywhere, which is
superior to both: at least one target (aarch64) uses the most significant
bit in the u32, and uint64_t is wasteful.

Compile-tested for all targets.

Backports commit 89fee74a0f066dfd73830a7b5fa137e87888c870 from qemu
2018-02-23 21:28:11 -05:00
Edgar E. Iglesias bfc74c4da2
gen-icount: Use tcg_set_insn_param
Use tcg_set_insn_param() instead of directly accessing internal
tcg data structures to update an insn param.

Backports commit 25caa94c4a26daaab1e65c6d887e2972aeb5749e from qemu
2018-02-23 20:01:17 -05:00
Eric Blake 2f42c2c195
qapi: Change visit_type_FOO() to no longer return partial objects
Returning a partial object on error is an invitation for a careless
caller to leak memory. We already fixed things in an earlier
patch to guarantee NULL if visit_start fails ("qapi: Guarantee
NULL obj on input visitor callback error"), but that does not
help the case where visit_start succeeds but some other failure
happens before visit_end, such that we leak a partially constructed
object outside visit_type_FOO(). As no one outside the testsuite
was actually relying on these semantics, it is cleaner to just
document and guarantee that ALL pointer-based visit_type_FOO()
functions always leave a safe value in *obj during an input visit
(either the new object on success, or NULL if an error is
encountered), so callers can now unconditionally use
qapi_free_FOO() to clean up regardless of whether an error occurred.

The decision is done by adding visit_is_input(), then updating the
generated code to check if additional cleanup is needed based on
the type of visitor in use.

Note that we still leave *obj unchanged after a scalar-based
visit_type_FOO(); I did not feel like auditing all uses of
visit_type_Enum() to see if the callers would tolerate a specific
sentinel value (not to mention having to decide whether it would
be better to use 0 or ENUM__MAX as that sentinel).

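A hedged caller-side illustration of what this guarantee buys (Foo,
visit_type_Foo() and qapi_free_Foo() stand in for any generated
pointer-based qapi type; v and errp are assumed to be in scope):

Foo *foo = NULL;
Error *err = NULL;

visit_type_Foo(v, "foo", &foo, &err);
if (err) {
    error_propagate(errp, err);
    goto out;
}
/* ... use foo ... */
out:
/* Safe either way: after a failed input visit foo is still NULL, after a
 * successful one it points to a fully constructed object. */
qapi_free_Foo(foo);
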
Backports commit 68ab47e4b4ecc1c4649362b8cc1e49794d1a6537 from qemu
2018-02-23 19:53:17 -05:00
Eric Blake 0d52542da2
qapi: Simplify semantics of visit_next_list()
The semantics of the list visit are somewhat baroque, with the
following pseudocode when FooList is used:

start()
for (prev = head; cur = next(prev); prev = &cur) {
    visit(&cur->value)
}

Note that these semantics (advance before visit) require that
the first call to next() return the list head, while all other
calls return the next element of the list; that is, every visitor
implementation is required to track extra state to decide whether
to return the input as-is, or to advance. It also requires an
argument of 'GenericList **' to next(), solely because the first
iteration might need to modify the caller's GenericList head, so
that all other calls have to do a layer of dereferencing.

Thankfully, we only have two uses of list visits in the entire
code base: one in spapr_drc (which completely avoids
visit_next_list(), feeding in integers from a different source
than uint8List), and one in qapi-visit.py. That is, all other
list visitors are generated in qapi-visit.c, and share the same
paradigm based on a qapi FooList type, so we can refactor how
lists are laid out with minimal churn among clients.

We can greatly simplify things by hoisting the special case
into the start() routine, and flipping the order in the loop
to visit before advance:

start(head)
for (tail = *head; tail; tail = next(tail)) {
    visit(&tail->value)
}

With the simpler semantics, visitors have less state to track,
the argument to next() is reduced to 'GenericList *', and it
also becomes obvious whether an input visitor is allocating a
FooList during visit_start_list() (rather than the old way of
not knowing if an allocation happened until the first
visit_next_list()). As a minor drawback, we now allocate in
two functions instead of one, and have to pass the size to
both functions (unless we were to tweak the input visitors to
cache the size to start_list for reuse during next_list, but
that defeats the goal of less visitor state).

The signature of visit_start_list() is chosen to match
visit_start_struct(), with the new parameters after 'name'.

The spapr_drc case is a virtual visit, done by passing NULL for
list, similarly to how NULL is passed to visit_start_struct()
when a qapi type is not used in those visits. It was easy to
provide these semantics for qmp-output and dealloc visitors,
and a bit harder for qmp-input (several prerequisite patches
refactored things to make this patch straightforward). But it
turned out that the string and opts visitors munge enough other
state during visit_next_list() to make it easier to just
document and require a GenericList visit for now; an assertion
will remind us to adjust things if we need the semantics in the
future.

Several pre-requisite cleanup patches made the reshuffling of
the various visitors easier; particularly the qmp input visitor.

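For reference, the generated list visit ends up looking roughly like the
sketch below after this change (FooList and visit_type_Foo are
placeholders for any generated list type, and the error-path cleanup is
abbreviated):

void visit_type_FooList(Visitor *v, const char *name,
                        FooList **obj, Error **errp)
{
    Error *err = NULL;
    FooList *tail;
    size_t size = sizeof(**obj);

    visit_start_list(v, name, (GenericList **)obj, size, &err);
    if (err) {
        goto out;
    }

    /* visit before advance: start_list already produced the head */
    for (tail = *obj; tail;
         tail = (FooList *)visit_next_list(v, (GenericList *)tail, size)) {
        visit_type_Foo(v, NULL, &tail->value, &err);
        if (err) {
            break;
        }
    }

    visit_end_list(v);
out:
    error_propagate(errp, err);
}
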
Backports commit d9f62dde1303286b24ac8ce88be27e2b9b9c5f46 from qemu
2018-02-23 19:50:26 -05:00
Eric Blake 6084be1882
qapi: Split visit_end_struct() into pieces
As mentioned in previous patches, we want to call visit_end_struct()
functions unconditionally, so that visitors can release resources
tied up since the matching visit_start_struct() without also having
to worry about error priority if more than one error occurs.

Even though error_propagate() can be safely used to ignore a second
error during cleanup caused by a first error, it is simpler if the
cleanup cannot set an error. So, split out the error checking
portion (basically, input visitors checking for unvisited keys) into
a new function visit_check_struct(), which can be safely skipped if
any earlier errors are encountered, and leave the cleanup portion
(which never fails, but must be called unconditionally if
visit_start_struct() succeeded) in visit_end_struct().

Generated code in qapi-visit.c has diffs resembling:

|@@ -59,10 +59,12 @@ void visit_type_ACPIOSTInfo(Visitor *v,
| goto out_obj;
| }
| visit_type_ACPIOSTInfo_members(v, obj, &err);
|- error_propagate(errp, err);
|- err = NULL;
|+ if (err) {
|+ goto out_obj;
|+ }
|+ visit_check_struct(v, &err);
| out_obj:
|- visit_end_struct(v, &err);
|+ visit_end_struct(v);
| out:

and in qapi-event.c:

|@@ -47,7 +47,10 @@ void qapi_event_send_acpi_device_ost(ACP
| goto out;
| }
| visit_type_q_obj_ACPI_DEVICE_OST_arg_members(v, &param, &err);
|- visit_end_struct(v, err ? NULL : &err);
|+ if (!err) {
|+ visit_check_struct(v, &err);
|+ }
|+ visit_end_struct(v);
| if (err) {
| goto out;

Backports commit 15c2f669e3fb2bc97f7b42d1871f595c0ac24af8 from qemu
2018-02-23 19:13:47 -05:00
Eric Blake ef6b7b50f6
qapi: Add visit_type_null() visitor
Right now, qmp-output-visitor happens to produce a QNull result
if nothing is actually visited between the creation of the visitor
and the request for the resulting QObject. A stronger protocol
would require that a QMP output visit MUST visit something. But
to still be able to produce a JSON 'null' output, we need a new
visitor function that states our intentions. Yes, we could say
that such a visit must go through visit_type_any(), but that
feels clunky.

So this patch introduces the new visit_type_null() interface and
its no-op implementation in the dealloc visitor, and stubs in the
qmp visitors (the next patch will finish the implementation).
For the visitors that will not implement the callback, document
the situation. The code in qapi-visit-core unconditionally
dereferences the callback pointer, so that a segfault will inform
a developer if they need to implement the callback for their
choice of visitor.

Note that JSON has a primitive null type, with the single value
null; likewise with the QNull type for QObject; but for QAPI,
we just have the 'null' value without a null type. We may
eventually want to add more support in QAPI for null (most likely,
we'd use it via an alternate type that permits 'null' or an
object); but we'll create that usage when we need it.

Backports commit 3bc97fd5924561d92f32758c67eaffd2e4e25038 from qemu
2018-02-23 15:48:57 -05:00
Eric Blake fafb3e354b
qapi: Document visitor interfaces, add assertions
The visitor interface for mapping between QObject/QemuOpts/string
and QAPI is scandalously under-documented, making changes to visitor
core, individual visitors, and users of visitors difficult to
coordinate. Among other questions: when is it safe to pass NULL,
vs. when a string must be provided; which visitors implement which
callbacks; the difference between concrete and virtual visits.

Correct this by retrofitting proper contracts, and document where some
of the interface warts remain (for example, we may want to modify
visit_end_* to require the same 'obj' as the visit_start counterpart,
so the dealloc visitor can be simplified). Later patches in this
series will tackle some, but not all, of these warts.

Add assertions to (partially) enforce the contract. Some of these
were only made possible by recent cleanup commits.

Backports commit adfb264c9ed04bfc694921b72173be8e29e90024 from qemu
2018-02-23 15:45:31 -05:00
Eric Blake 9e999acc83
qapi: Change visit_start_implicit_struct to visit_start_alternate
After recent changes, the only remaining use of
visit_start_implicit_struct() is for allocating the space needed
when visiting an alternate. Since the term 'implicit struct' is
hard to explain, rename the function to its current usage. While
at it, we can merge the functionality of visit_get_next_type()
into the same function, making it more like visit_start_struct().

Generated code is now slightly smaller:

| {
| Error *err = NULL;
|
|- visit_start_implicit_struct(v, (void**) obj, sizeof(BlockdevRef), &err);
|+ visit_start_alternate(v, name, (GenericAlternate **)obj, sizeof(**obj),
|+ true, &err);
| if (err) {
| goto out;
| }
|- visit_get_next_type(v, name, &(*obj)->type, true, &err);
|- if (err) {
|- goto out_obj;
|- }
| switch ((*obj)->type) {
| case QTYPE_QDICT:
| visit_start_struct(v, name, NULL, 0, &err);
...
| }
|-out_obj:
|- visit_end_implicit_struct(v);
|+ visit_end_alternate(v);
| out:
| error_propagate(errp, err);
| }

Backports commit dbf11922622685934bfb41e7cf2be9bd4a0405c0 from qemu
2018-02-23 15:33:25 -05:00
Eric Blake 559304aed9
qapi: Consolidate QMP input visitor creation
Rather than having two separate ways to create a QMP input
visitor, where the safer approach has the more verbose name,
it is better to consolidate things into a single function
where the caller must explicitly choose whether to be strict
or to ignore excess input. This patch is the strictly
mechanical conversion; the next patch will then audit which
uses can be made stricter.

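A hedged example of the consolidated call (the boolean selects strict
mode; the surrounding helper names follow the QEMU API of that era and
may not match this tree exactly):

Error *err = NULL;
SomeArgs *args = NULL;
QmpInputVisitor *qiv = qmp_input_visitor_new(request, true /* strict */);
Visitor *v = qmp_input_get_visitor(qiv);

visit_type_SomeArgs(v, NULL, &args, &err);
/* ... use args ... */
qmp_input_visitor_cleanup(qiv);
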
Backports commit fc471c18d5d2ec713d5a019f9530398675494bc8 from qemu
2018-02-23 15:09:57 -05:00