Commit graph

1780 commits

Paolo Bonzini 8df5ad80b1
exec: hide mr->ram_addr from qemu_get_ram_ptr users
Let users of qemu_get_ram_ptr and qemu_ram_ptr_length pass in an
address that is relative to the MemoryRegion. This basically means
what address_space_translate returns.

Because the semantics of the second parameter change, rename the
function to qemu_map_ram_ptr.

Backports commit 0878d0e11ba8013dd759c6921cbf05ba6a41bd71 from qemu
2018-02-24 16:17:49 -05:00
Paolo Bonzini b2e1b34bcc
memory: split memory_region_from_host from qemu_ram_addr_from_host
Move the old qemu_ram_addr_from_host to memory_region_from_host and
make it return an offset within the region. For qemu_ram_addr_from_host
return the ram_addr_t directly, similar to what it was before
commit 1b5ec23 ("memory: return MemoryRegion from qemu_ram_addr_from_host",
2013-07-04).

Backports commit 07bdaa4196b51bc7ffa7c3f74e9e4a9dc8a7966a from qemu
2018-02-24 16:06:49 -05:00
Paolo Bonzini 918c626847
exec: remove ram_addr argument from qemu_ram_block_from_host
Of the two callers, one does not use it, and the other can compute
it itself based on the other output argument (offset) and the RAMBlock.

Backports commit f615f39616c4fd1a3a3b078af8d75bb4be6390de from qemu
2018-02-24 03:37:40 -05:00
Paolo Bonzini f26f1f123c
memory: remove qemu_get_ram_fd, qemu_set_ram_fd, qemu_ram_block_host_ptr
Remove direct uses of ram_addr_t and optimize memory_region_{get,set}_fd
now that a MemoryRegion knows its RAMBlock directly.

Backports commit 4ff87573df3606856a92c14eef3393a63d736d11 from qemu
2018-02-24 03:34:44 -05:00
Emilio G. Cota ab569f5cde
atomics: do not emit consume barrier for atomic_rcu_read
Currently we emit a consume-load in atomic_rcu_read. Because of
limitations in current compilers, this is overkill for non-Alpha hosts
and it is only useful to make Thread Sanitizer work.

This patch leaves the consume-load in atomic_rcu_read when
compiling with Thread Sanitizer enabled, and resorts to a
relaxed load + smp_read_barrier_depends otherwise.

On an RMO host architecture, such as aarch64, the performance
improvement of this change is easily measurable. For instance,
qht-bench performs an atomic_rcu_read on every lookup. Performance
before and after applying this patch:

$ tests/qht-bench -d 5 -n 1
Before: 9.78 MT/s
After: 10.96 MT/s
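
A minimal sketch of the resulting selection (illustrative, not the literal QEMU macros):

    #if defined(__SANITIZE_THREAD__)
    /* Keep the consume load so Thread Sanitizer can track the dependency. */
    #define atomic_rcu_read(ptr)  __atomic_load_n(ptr, __ATOMIC_CONSUME)
    #else
    /* A relaxed load is enough on non-Alpha hosts; the dependency barrier
       below is a no-op (compiler barrier) everywhere except Alpha. */
    #define atomic_rcu_read(ptr) ({                                         \
        __typeof__(*(ptr)) _val = __atomic_load_n(ptr, __ATOMIC_RELAXED);   \
        smp_read_barrier_depends();                                         \
        _val;                                                               \
    })
    #endif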

Backports commit 15487aa132109891482f79d78a30d6cfd465a391 from qemu
2018-02-24 03:28:11 -05:00
Emilio G. Cota 87ef2a2c5f
atomics: emit an smp_read_barrier_depends() barrier only for Alpha and Thread Sanitizer
For correctness, smp_read_barrier_depends() is only required to
emit a barrier on Alpha hosts. However, we are currently emitting
a consume fence unconditionally, and most compilers currently treat
consume and acquire fences as equivalent.

Fix it by keeping the consume fence if we're compiling with Thread
Sanitizer, since this might help prevent false warnings. Otherwise,
only emit the barrier for Alpha hosts. Note that we still guarantee
that smp_read_barrier_depends() is a compiler barrier.
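
A sketch of the barrier selection described above (illustrative definitions, not the literal header):

    #if defined(__SANITIZE_THREAD__)
    #define smp_read_barrier_depends()  __atomic_thread_fence(__ATOMIC_CONSUME)
    #elif defined(__alpha__)
    #define smp_read_barrier_depends()  __asm__ __volatile__("mb" : : : "memory")
    #else
    /* Still guaranteed to be at least a compiler barrier. */
    #define smp_read_barrier_depends()  __asm__ __volatile__("" : : : "memory")
    #endif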

Backports commit c983895258a771f8a5e4a53950bfb7fd2216651c from qemu
2018-02-24 03:26:52 -05:00
Sergey Fedorov 3a9c5e7509
cpu-exec: Fix direct jump to TB spanning page
It is not safe to make a direct jump to a TB spanning two pages in
system emulation because the mapping for the second page can get changed
but we don't take care of direct jumps in this case.

However in user mode emulation, this is not the case because there's
only static address translation and TBs are always invalidated properly.

Backports commit c88c67e58b61618a904d2333ceebefc3c852d32e from qemu
2018-02-24 03:24:53 -05:00
Eduardo Habkost 9c04a28bd2
target-i386: Move TCG initialization to realize time
QOM instance_init functions are not supposed to have any side-effects,
as new objects may be created at any moment for querying property
information (see qmp_device_list_properties()).

Move TCG initialization to realize time so it won't be called when just
doing object_new() on a X86CPU subclass.

Backports commit 57f2453ab48a771b30aeced01b329ee85853bb7b from qemu
2018-02-24 03:23:09 -05:00
Eduardo Habkost fee2c27f2b
cpu: Eliminate cpudef_init(), cpudef_setup()
x86_cpudef_init() doesn't do anything anymore, so cpudef_init(),
cpudef_setup(), and x86_cpudef_init() can finally be removed.

Backports commit 3e2c0e062f0963a6b73b0cd1990fad79495463d9 from qemu
2018-02-24 03:20:46 -05:00
Eduardo Habkost 956e20ea6b
target-i386: Set constant model_id for qemu64/qemu32/athlon
Newer PC machines don't set hw_version, and older machines set
model-id on compat_props explicitly, so we don't need the
x86_cpudef_setup() code that sets model_id using
qemu_hw_version() anymore.

Backports commit 9cf2cc3d8237732946720d78bf9aec0064026ed8 from qemu
2018-02-24 03:18:11 -05:00
Eduardo Habkost aa3d46ef83
osdep: Move default qemu_hw_version() value to a macro
The macro will be used by code that will stop calling
qemu_hw_version() at runtime and just needs a constant value.

Backports commit d494352c2f7818aeba184a8ef757569083740bb2 from qemu
2018-02-24 03:16:34 -05:00
Eduardo Habkost 923dcf1cb8
target-i386: Use xsave structs for ext_save_area
This doesn't introduce any change in the code, as the offsets and
struct sizes match what was present in the table. This can be
validated by the QEMU_BUILD_BUG_ON lines in target-i386/cpu.h,
which ensure the struct sizes and offsets match the existing
values in ext_save_area.

Backports commit ee1b09f695dcd8532f470e53297473bd3bc88718 from qemu
2018-02-24 03:13:16 -05:00
Lioncash 05963470a2
target-i386: Include log.h in smm_helper
Fixes a compilation error
2018-02-24 03:06:07 -05:00
Eduardo Habkost 128f7c078a
target-i386: Define structs for layout of xsave area
Add structs that define the layout of the xsave areas used by
Intel processors. Add some QEMU_BUILD_BUG_ON lines to ensure the
structs match the XSAVE_* macros in target-i386/kvm.c and the
offsets and sizes at target-i386/cpu.c:ext_save_areas.
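
For illustration, a sketch of one such layout struct and its build-time check (this covers only the AVX piece; the commit defines the full set):

    /* High halves of YMM0-YMM15, as laid out in the standard xsave area. */
    typedef struct XSaveAVX {
        uint64_t ymmh[16][2];
    } XSaveAVX;

    /* Must match the 256-byte AVX component size assumed by ext_save_area. */
    QEMU_BUILD_BUG_ON(sizeof(XSaveAVX) != 0x100);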

Backports commit b503717d28e8f7eff39bf38624e6cf42687d951a from qemu
2018-02-24 03:04:31 -05:00
Paolo Bonzini 77305ce4ee
memory: remove unnecessary masking of MemoryRegion ram_addr
mr->ram_block->offset is already aligned to both host and target size
(see qemu_ram_alloc_internal). Remove further masking as it is
unnecessary.

Backports commit e4e697940dff612b789b0858270c20a8b680f78d from qemu
2018-02-24 03:01:34 -05:00
Fam Zheng 74962feee1
memory: Drop FlatRange.romd_mode
Its value is always set to mr->romd_mode, so the removed comparisons are
fully superseded by "a->mr == b->mr".

Backports commit 5b5660adf1fdb61db14ec681b10463b8cba633f1 from qemu
2018-02-24 02:57:29 -05:00
Fam Zheng fb8135cd0d
memory: Remove code for mr->may_overlap
The collision check does nothing and hasn't been used. Remove the
variable together with related code.

Backports commit b61359781958759317ee6fd1a45b59be0b7dbbe1 from qemu
2018-02-24 02:55:25 -05:00
Gonglei feff56cc11
memory: drop find_ram_block()
On the one hand, we already have qemu_get_ram_block(), whose function
is similar. On the other hand, we can use mr->ram_block directly;
searching for the RAMBlock by ram_addr is a waste.

Backports commit fa53a0e53efdc7002497ea4a76aacf6cceb170ef from qemu
2018-02-24 02:52:20 -05:00
Paolo Bonzini 9bb67a3f58
hw: clean up hw/hw.h includes
Include qom/object.h and exec/memory.h instead of exec/ioport.h;
exec/ioport.h was almost everywhere required only for those two
includes, not for the content of the header itself.

Remove block/aio.h, everybody is already including it through
another path.

With this change, include/hw/hw.h is freed from qemu-common.h.

Backports commit df43d49cb8708b9c88a20afe0d1a3089b550a5b8 from qemu
2018-02-24 02:46:41 -05:00
Paolo Bonzini d0d3712417
hw: remove pio_addr_t
pio_addr_t is almost unused, because these days I/O ports are simply
accessed through the address space. cpu_{in,out}[bwl] themselves are
almost unused; monitor.c and xen-hvm.c could use address_space_read/write
directly, since they have an integer size at hand. This leaves qtest as
the only user of those functions.

On the other hand even portio_* functions use this type; the only
interesting use of pio_addr_t thus is include/hw/sysbus.h. I guess I
could move it there, but I don't see much benefit in that either. Using
uint32_t is enough and avoids the need to include ioport.h everywhere.

Backports commit 89a80e7400f7225d9401b35ef32454b4ab29dc67 from qemu
2018-02-24 02:43:16 -05:00
Paolo Bonzini 9485b7c2e1
cpu: move exec-all.h inclusion out of cpu.h
exec-all.h contains TCG-specific definitions. It is not needed outside
TCG-specific files such as translate.c, exec.c or *helper.c.

One generic function had snuck into include/exec/exec-all.h; move it to
include/qom/cpu.h.

Backports commit 63c915526d6a54a95919ebece83fa9ca631b2508 from qemu
2018-02-24 02:39:08 -05:00
Paolo Bonzini 58693409ea
exec: extract exec/tb-context.h
TCG backends do not need most of exec-all.h; extract what they actually
need to a separate file or move it directly to tcg.h. The next patch
will stop including exec-all.h from everywhere.

Backports commit 00f6da6a1a5d1ce085334eccbb50ec899ceed513 from qemu
2018-02-24 02:09:58 -05:00
Paolo Bonzini f9b9d0ba0f
hw: explicitly include qemu/log.h
Move the inclusion out of hw/hw.h, most files do not need it.

Backports commit 03dd024ff57733a55cd2e455f361d053c81b1b29 from qemu
2018-02-24 02:00:45 -05:00
Paolo Bonzini adf97a4d59
mips: move CP0 functions out of cpu.h
These are here for historical reasons: they are needed from both gdbstub.c
and op_helper.c, and the latter was compiled with fixed AREG0. It is
not needed anymore, so uninline them.

Backports commit e6623d88f44aae9e9c78276c0cb7bd352283d50a from qemu
2018-02-24 01:57:30 -05:00
Paolo Bonzini 058624b9e4
arm: move arm_log_exception into .c file
Avoid need for qemu/log.h inclusion, and make the function static too.

Backports commit 27a7ea8a1f351578ce869b41ba1ba662c063fd62 from qemu
2018-02-24 01:52:55 -05:00
Paolo Bonzini 37f26922dd
qemu-common: push cpu.h inclusion out of qemu-common.h
Backports commit 33c11879fd422b759483ed25fef133ea900ea8d7 from qemu
2018-02-24 01:50:56 -05:00
Paolo Bonzini e84da64a2b
qemu-common: stop including qemu/bswap.h from qemu-common.h
Move it to the actual users. There are still a few includes of
qemu/bswap.h in headers; removing them is left for future work.

Backports commit 58369e22cf971448411bfbc8c894b2addebe2111 from qemu
2018-02-24 01:06:03 -05:00
Paolo Bonzini 78fd1aab94
cpu: move endian-dependent load/store functions to cpu-all.h
Disentangle cpu-common.h and memory.h from NEED_CPU_H. Prototypes are
not defined for !NEED_CPU_H, so remove them from poison.h too. Only
macros need poisoning.

Backports commit a7d6039cb35592683ecc56d2b37817da2d2f8b00 from qemu
2018-02-24 01:04:26 -05:00
Paolo Bonzini 9c0e31ed3a
target-sparc: make cpu-qom.h not target specific
Make SPARCCPU an opaque type within cpu-qom.h, and move all definitions
of private methods, as well as all type definitions that require knowledge
of the layout, to cpu.h. This helps make files independent of NEED_CPU_H
if they only need to pass around CPU pointers.

Backports commit d61d1b20610e4655d7846e4cb43d22188e935f5f from qemu
2018-02-24 01:00:56 -05:00
Paolo Bonzini 01bd1c1a73
target-mips: make cpu-qom.h not target specific
Make MIPSCPU an opaque type within cpu-qom.h, and move all definitions of
private methods, as well as all type definitions that require knowledge
of the layout, to cpu.h. This helps make files independent of NEED_CPU_H
if they only need to pass around CPU pointers.

Backports commit 416bf936864f16caad6993b9ebd452fb34f801bd from qemu
2018-02-24 00:59:03 -05:00
Paolo Bonzini 27ebc27beb
target-m68k: make cpu-qom.h not target specific
Make M68KCPU an opaque type within cpu-qom.h, and move all definitions of
private methods, as well as all type definitions that require knowledge
of the layout, to cpu.h. This helps make files independent of NEED_CPU_H
if they only need to pass around CPU pointers.

Backports commit a836b8fa00fa1032ccd234a71b33943627d211ea from qemu
2018-02-24 00:56:58 -05:00
Paolo Bonzini 2f4ae94b5c
target-i386: make cpu-qom.h not target specific
Make X86CPU an opaque type within cpu-qom.h, and move all definitions of
private methods, as well as all type definitions that require knowledge
of the layout, to cpu.h. This helps make files independent of NEED_CPU_H
if they only need to pass around CPU pointers.

Backports commit 4da6f8d954429c0cd1471d25cb9dbe909607374e from qemu
2018-02-24 00:55:22 -05:00
Lioncash 791413630e
target-arm: make cpu-qom.h not target specific
Make ARMCPU an opaque type within cpu-qom.h, and move all definitions of
private methods, as well as all type definitions that require knowledge
of the layout, to cpu.h. This helps make files independent of NEED_CPU_H
if they only need to pass around CPU pointers.

Backports commit 74e755647c1598a6845df1ee4f8b96d01afd96e7 from qemu
2018-02-24 00:48:59 -05:00
Paolo Bonzini fee6dcb22a
include: move CPU-related definitions out of qemu-common.h
Backports commit 4b4629d9d26fd0e100d9be526367a96aa35b541d from qemu
2018-02-24 00:33:49 -05:00
Wei Jiangang 7cf135457a
accel: make configure_accelerator return void
Returning the negated value of accel_initialised is meaningless,
and the caller, vl, doesn't check it.

Backports commit bdc3f61dec2f9c227235bb5f677a0272e1184c82 from qemu
2018-02-24 00:31:28 -05:00
Sergey Fedorov eab60b7c77
cpu-exec: Clean up 'interrupt_request' reloading in cpu_handle_interrupt()
Backports commit 8b1fe3f439eaa2f0a6ee7737942bb6c405725867 from qemu
2018-02-24 00:27:05 -05:00
Sergey Fedorov b4b7b88f69
cpu-exec: Remove unused 'x86_cpu' and 'env' from cpu_exec()
Backports commit ba048a4ae15ba0f70c6dcb12ee05db120408de78 from qemu
2018-02-24 00:16:40 -05:00
Sergey Fedorov aefb8935a9
cpu-exec: Move TB execution stuff out of cpu_exec()
Simplify cpu_exec() by extracting TB execution code outside of
cpu_exec() into a new static inline function cpu_loop_exec_tb().
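
Together with the extractions in the entries below (interrupt, exception and halt handling), cpu_exec() is reduced to roughly the following shape (a simplified sketch; helper signatures are approximate):

    int cpu_exec(CPUState *cpu)
    {
        TranslationBlock *tb, *last_tb = NULL;
        int tb_exit = 0, ret = 0;

        if (cpu_handle_halt(cpu)) {
            return EXCP_HALTED;
        }
        for (;;) {
            if (cpu_handle_exception(cpu, &ret)) {
                break;                      /* exit request or unhandled exception */
            }
            cpu_handle_interrupt(cpu, &last_tb);
            tb = tb_find_fast(cpu, last_tb, tb_exit);
            cpu_loop_exec_tb(cpu, tb, &last_tb, &tb_exit);
        }
        return ret;
    }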

Backports commit 928de9ee14b0b63ee9f9275732ed3e1c8b5f4790 from qemu
2018-02-24 00:15:24 -05:00
Sergey Fedorov d4ef96abf2
cpu-exec: Move interrupt handling out of cpu_exec()
Simplify cpu_exec() by extracting interrupt handling code outside of
cpu_exec() into a new static inline function cpu_handle_interrupt().

Backports commit c385e6e49763c6dd5dbbd90fadde95d986f8bd38 from qemu
2018-02-24 00:09:06 -05:00
Sergey Fedorov c1b52a4387
cpu-exec: Move exception handling out of cpu_exec()
Simplify cpu_exec() by extracting exception handling code out of
cpu_exec() into a new static inline function cpu_handle_exception().
Also make cpu_handle_debug_exception() inline as it is used only once.

Backports commit ea284766ec6b9f1712369249566b4c372f3cec8b from qemu
2018-02-24 00:03:37 -05:00
Sergey Fedorov fc3d135dac
cpu-exec: Move halt handling out of cpu_exec()
Simplify cpu_exec() by extracting CPU halt state handling code out of
cpu_exec() into a new static inline function cpu_handle_halt().

Backports commit 8b2d34e997371c9729a0f41e3cc624d4300bbe78 from qemu
2018-02-23 23:53:20 -05:00
Lioncash 88d00a75ca
cpu-exec: move cpu_exec to the bottom of the file
Remove forward declarations
2018-02-23 23:50:28 -05:00
Sergey Fedorov 0088ca994f
cpu-exec: Remove relic orphaned comment
This comment should have been deleted by commit 0ac087f1f3ae ("removed
unused code") but somehow it is still here. There's no point to keep it.

Backports commit c6f0d9f84c43ae973270df1a77482466558ee487 from qemu
2018-02-23 23:47:05 -05:00
Sergey Fedorov 1a768018c2
tcg: Remove needless CPUState::current_tb
This field was used for telling cpu_interrupt() to unlink a chain of TBs
being executed when it worked that way. Now, cpu_interrupt() doesn't do
this anymore, so we don't need this field anymore.

Backports commit 3213525f8ab48742db09dab18cb9ae6f36a6c921 from qemu
2018-02-23 23:45:42 -05:00
Sergey Fedorov 73c75b4cf7
cpu-exec: Move TB chaining into tb_find_fast()
Move the tb_add_jump() call and surrounding code from cpu_exec() into
tb_find_fast(). That simplifies cpu_exec() a little by hiding the direct
chaining optimization details in tb_find_fast(). It also allows moving
the tb_lock()/tb_unlock() pair into tb_find_fast(), putting it closer
to tb_find_slow(), which also manipulates the lock.

Backports commit a0522c7a55cc8ac76d82884cf8e52f76daa664cc from qemu
2018-02-23 23:38:57 -05:00
Sergey Fedorov ba9a237586
tcg: Rework tb_invalidated_flag
'tb_invalidated_flag' was meant to catch two events:
* some TB has been invalidated by tb_phys_invalidate();
* the whole translation buffer has been flushed by tb_flush().

Then it was checked:
* in cpu_exec() to ensure that the last executed TB can be safely
linked to directly call the next one;
* in cpu_exec_nocache() to decide if the original TB should be provided
for further possible invalidation along with the temporarily
generated TB.

It is always safe to patch an invalidated TB since it is not going to be
used anyway. It is also safe to call tb_phys_invalidate() for an already
invalidated TB. Thus, setting this flag in tb_phys_invalidate() is
simply unnecessary. Moreover, it can prevent perfectly proper linking
of TBs whenever any arbitrary TB has been invalidated. So just don't touch it
in tb_phys_invalidate().

If this flag is only used to catch whether tb_flush() has been called
then rename it to 'tb_flushed'. Declare it as 'bool' and stick to using
only 'true' and 'false' to set its value. Also, instead of setting it in
tb_gen_code(), just after tb_flush() has been called, do it right inside
of tb_flush().

In cpu_exec(), this flag is used to track if tb_flush() has been called
and have made 'next_tb' (a reference to the last executed TB) invalid
for linking it to directly call the next TB. tb_flush() can be called
during the CPU execution loop from tb_gen_code(), during TB execution or
by another thread while 'tb_lock' is released. Catch translation
buffer flushes reliably by resetting this flag once before the first TB
lookup and each time we find it set before trying to add a direct jump.
Don't touch it in tb_find_physical().

Each vCPU has its own execution loop in multithreaded mode and thus
should have its own copy of the flag to be able to reset it with its own
'next_tb' without affecting any other vCPU execution thread. So make this
flag per-vCPU and move it to CPUState.

In cpu_exec_nocache(), we only need to check if tb_flush() has been
called from tb_gen_code() called by cpu_exec_nocache() itself. To do
this reliably, preserve the old value of the flag, reset it before
calling tb_gen_code(), check afterwards, and combine the saved value
back to the flag.

This patch is based on the patch "tcg: move tb_invalidated_flag to
CPUState" from Paolo Bonzini <pbonzini@redhat.com>.

Backports commit 6f789be56d3f38e9214dafcfab3bf9be7191f370 from qemu
2018-02-23 23:34:51 -05:00
Sergey Fedorov c9700af2bd
tcg: Clean up from 'next_tb'
The value returned from tcg_qemu_tb_exec() is the value passed to the
corresponding tcg_gen_exit_tb() at translation time of the last TB
attempted to execute. It is a little confusing to store it in a variable
named 'next_tb'. In fact, it is a combination of 4-byte aligned pointer
and additional information in its two least significant bits. Break it
down right away into two variables named 'last_tb' and 'tb_exit' which
are a pointer to the last TB attempted to execute and the TB exit
reason, correspondingly. This simplifies the code and improves its
readability.

Correct a misleading documentation comment for tcg_qemu_tb_exec() and
fix logging in cpu_tb_exec(). Also rename a misleading 'next_tb' in
another couple of places.
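
The split looks roughly like this (a sketch; TB_EXIT_MASK covers the two least significant bits):

    uintptr_t ret = tcg_qemu_tb_exec(env, tb->tc_ptr);
    TranslationBlock *last_tb = (TranslationBlock *)(ret & ~TB_EXIT_MASK);
    int tb_exit = ret & TB_EXIT_MASK;    /* which goto_tb slot, or why we exited */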

Backports commit 819af24b9c1e95e6576f1cefd32f4d6bf56dfa56 from qemu
2018-02-23 23:29:04 -05:00
Paolo Bonzini 66faf3b5df
tcg: code_bitmap and code_write_count are not used by user-mode emulation
Backports commit 6fad459c91e8a1dedbb6681d3f57ede5222a225c from qemu
2018-02-23 23:17:37 -05:00
Sergey Fedorov ffdc9d6323
tcg: Allow goto_tb to any target PC in user mode
In user mode, there's only static address translation, TBs are always
invalidated properly, and direct jumps are reset when the mapping changes.
Thus the destination address is always valid for direct jumps and
there's no need to restrict it to the pages the TB resides in.
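
A sketch of the kind of check this enables in a target's use_goto_tb()-style helper (illustrative):

    static inline bool use_goto_tb(DisasContext *s, target_ulong dest)
    {
    #ifndef CONFIG_USER_ONLY
        /* System mode: only chain within the page the TB was generated from. */
        return (dest & TARGET_PAGE_MASK) == (s->tb->pc & TARGET_PAGE_MASK);
    #else
        /* User mode: mappings are static, any destination is fine. */
        return true;
    #endif
    }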

Backports commit 90aa39a1cc4837360889f0e033ca25cc82100308 from qemu
2018-02-23 23:12:14 -05:00
Sergey Fedorov 73c59faad5
tcg: Clean up direct block chaining safety checks
We don't take care of direct jumps when the address mapping changes. Thus we
must be sure to generate direct jumps so that they always remain valid
even if the address mapping changes. Luckily, we only allow a TB to be
executed if it was generated from pages which match the current mapping.

Document tcg_gen_goto_tb() declaration and note the reason for
destination PC limitations.

Some targets with variable-length instructions allow a TB to straddle a
page boundary. However, we make sure that both of the TB's pages match the
current address mapping when looking up TBs. So it is safe to do direct
jumps into both pages. Correct the checks for some of those targets.

Given that, we can safely patch a TB which spans two pages. Remove the
unnecessary check in cpu_exec() and allow such TBs to be patched.

Backports commit 5b053a4a28278bca606eeff7d1c0730df1b047e9 from qemu
2018-02-23 22:26:00 -05:00
Sergey Fedorov 39d262f0d2
tcg: Clean up tb_jmp_unlink()
Unify the code of this function with tb_jmp_remove_from_list(). Making
these functions similar improves their readability. Also this could be a
step towards making this function thread-safe.

Backports commit f9c5b66f487a04d3747dc6997b1503f9258df945 from qemu
2018-02-23 21:40:07 -05:00
Lioncash 68272af618
translate-all: Remove unused variable in size_code_gen_buffer
Also eliminates the unused parameter
2018-02-23 21:38:34 -05:00
Sergey Fedorov c530eb06a9
tcg: Extract removing of jumps to TB from tb_phys_invalidate()
Move the code for removing jumps to a TB out of tb_phys_invalidate() to
a separate static inline function tb_jmp_unlink(). This simplifies
tb_phys_invalidate() and improves code structure.

Backports commit 89bba496322d4cf996d42cdd4bb0912231656c3d from qemu
2018-02-23 21:36:29 -05:00
Sergey Fedorov 0d2e91518b
tcg: Rename tb_jmp_remove() to tb_remove_from_jmp_list()
tb_jmp_remove() was only used to remove the TB from a list of all TBs
jumping to the same TB, which is the n-th jump destination of the given TB.
Put a comment briefly describing the function behavior and rename it to
better reflect its purpose.

Backports commit 133626783aa5a1bf86332fa3e6f7b8efe005f924 from qemu
2018-02-23 21:34:01 -05:00
Sergey Fedorov d60af028c5
tcg: Clarify thread safety check in tb_add_jump()
The check is to make sure that another thread hasn't already done the
same while we were outside of tb_lock. Mention this in a comment.

Backports commit 9962c478b153a18fe88a6509fe58cd178aff8abc from qemu
2018-02-23 21:32:47 -05:00
Sergey Fedorov e93f68a755
tcg: Init TB's direct jumps before making it visible
Initialize TB's direct jump list data fields and reset the jumps before
tb_link_page() puts it into the physical hash table and the physical
page list. So the TB is completely initialized before it becomes visible.

This is pure rearrangement of code to a more suitable place, though it
could be a preparation for relaxing the locking scheme in future.

Backports commit 901bc3deb43bf37c85e43955905d003be7ae5fa5 from qemu
2018-02-23 21:31:36 -05:00
Sergey Fedorov 87f2bb42d4
tcg: Rearrange tb_link_page() to avoid forward declaration
Backports commit e90d96b158665a684ab89b4f002838034b5fafc8 from qemu
2018-02-23 21:28:20 -05:00
Sergey Fedorov fbc0a1105f
tcg: Use uintptr_t type for jmp_list_{next|first} fields of TB
These fields do not contain pure pointers to a TranslationBlock
structure. So uintptr_t is the most appropriate type for them.
Also add some asserts to ensure that the two least significant bits of
the pointer are always zero before assigning it to jmp_list_first.

Backports commit c37e6d7e3589ecb96914faa21025ad7ba6654aea from qemu
2018-02-23 21:28:19 -05:00
Sergey Fedorov e60c24cecf
tcg: Clean up direct block chaining data fields
Briefly describe in a comment how direct block chaining is done. It
should help in understanding of the following data fields.

Rename some fields in TranslationBlock and TCGContext structures to
better reflect their purpose (dropping excessive 'tb_' prefix in
TranslationBlock but keeping it in TCGContext):
tb_next_offset => jmp_reset_offset
tb_jmp_offset => jmp_insn_offset
tb_next => jmp_target_addr
jmp_next => jmp_list_next
jmp_first => jmp_list_first

Avoid using a magic constant as an invalid offset which is used to
indicate that there's no n-th jump generated.

Backports commit f309101c26b59641fc1aa8fb2a98a5441cdaea03 from qemu
2018-02-23 21:28:19 -05:00
Richard Henderson bb0b055a99
translate-all: Adjust 256mb testing for mips64
Make sure we preserve the high 32-bits when masking for mips64.

Backports commit 7ba6a512ae439c98c0c1f0f4348c079d90f9dd9d from qemu
2018-02-23 21:28:19 -05:00
Emilio G. Cota de17843702
translate-all: add missing munmap of the code_gen guard page for MIPS
Backports commit 8bdf4997823126a39bd4c99e4b2283b02cc7865f from qemu
2018-02-23 21:28:19 -05:00
Emilio G. Cota 9a2b02b241
translate-all: remove redundant setting of tcg_ctx.code_gen_buffer_size
The setting of tcg_ctx.code_gen_buffer_size is done by the only caller of
size_code_gen_buffer(), which is code_gen_alloc():

$ git grep size_code_gen_buffer
translate-all.c:static inline size_t size_code_gen_buffer(size_t tb_size)
translate-all.c: tcg_ctx.code_gen_buffer_size = size_code_gen_buffer(tb_size);

Backports commit 835154b6e2200460f04719d0028716a37c178368 from qemu
2018-02-23 21:28:19 -05:00
Sergey Fedorov c5b234ed1f
tcg: Note requirement on atomic direct jump patching
Backports commit 10b4f4855537dd421e193a7d0416513116370558 from qemu
2018-02-23 21:28:18 -05:00
Sergey Fedorov 87c3382dc8
tcg/mips: Make direct jump patching thread-safe
Ensure direct jump patching in MIPS is atomic by using
atomic_read()/atomic_set() for code patching.

Backports commit c82460a560176ef69c2f0662bd280612e274db96 from qemu
2018-02-23 21:28:18 -05:00
Sergey Fedorov 7538001da9
tcg/sparc: Make direct jump patching thread-safe
Ensure direct jump patching in SPARC is atomic by using
atomic_read()/atomic_set() for code patching.

Backports commit 84f79fb7c6e857edc807e4a251338243ce0cbac3 from qemu
2018-02-23 21:28:18 -05:00
Sergey Fedorov a45f8cb49d
tcg/aarch64: Make direct jump patching thread-safe
Ensure direct jump patching in AArch64 is atomic by using
atomic_read()/atomic_set() for code patching.

Backports commit 9e269112953be4d670cb0d25042bd6546fcf3e45 from qemu
2018-02-23 21:28:18 -05:00
Sergey Fedorov 52e2972300
tcg/arm: Make direct jump patching thread-safe
Ensure direct jump patching in ARM is atomic by using
atomic_read()/atomic_set() for code patching.

Backports commit 7d14e0e2d661479985197203589c38840e1066df from qemu
2018-02-23 21:28:18 -05:00
Sergey Fedorov 57359fbe6c
tcg/s390: Make direct jump patching thread-safe
Ensure direct jump patching in s390 is atomic by:
* naturally aligning the location of the direct jump address;
* using atomic_read()/atomic_set() for code patching.

Backports commit ed3d51ecd7fe248d3959e469d53890ac9ffe0cd2 from qemu
2018-02-23 21:28:18 -05:00
Sergey Fedorov 5eb2d6618f
tcg/i386: Make direct jump patching thread-safe
Ensure direct jump patching in i386 is atomic by:
* naturally aligning the location of the direct jump address;
* using atomic_read()/atomic_set() for code patching (see the sketch below).
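
A sketch of what the atomic patching of the 4-byte jump displacement looks like (simplified; assumes the naturally aligned patch location described above):

    static void tb_set_jmp_target1(uintptr_t jmp_addr, uintptr_t addr)
    {
        /* Displacement is relative to the end of the 4-byte operand. */
        int32_t disp = addr - (jmp_addr + 4);
        atomic_set((int32_t *)jmp_addr, disp);   /* single aligned store */
    }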

Backports commit 0d07abf05e98903c7faf204a9a90f7d45b7554dc from qemu
2018-02-23 21:28:17 -05:00
Lioncash fffa27d269
osdep: MSVC-compatible alignment macros
2018-02-23 21:28:17 -05:00
Sergey Fedorov 3456f0879e
include/qemu/osdep.h: Add macros for pointer alignment
These macros provide a convenient way to n-byte align pointers up and
down and check if a pointer is n-byte aligned.
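
A sketch of what such helpers look like (names follow the commit subject; definitions are illustrative):

    /* Round an integer, or a pointer via uintptr_t, down/up to a multiple of m. */
    #define QEMU_ALIGN_DOWN(n, m)      ((n) / (m) * (m))
    #define QEMU_ALIGN_UP(n, m)        QEMU_ALIGN_DOWN((n) + (m) - 1, (m))
    #define QEMU_ALIGN_PTR_DOWN(p, n)  ((typeof(p))QEMU_ALIGN_DOWN((uintptr_t)(p), (n)))
    #define QEMU_ALIGN_PTR_UP(p, n)    ((typeof(p))QEMU_ALIGN_UP((uintptr_t)(p), (n)))
    #define QEMU_PTR_IS_ALIGNED(p, n)  (((uintptr_t)(p) % (n)) == 0)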

Backports commit 6b587d3cda48e7ba26de8d30bf0d8a7063970715 from qemu
2018-02-23 21:28:17 -05:00
Sergey Fedorov 47eac70cb9
include/qemu/osdep.h: Add a macro to check for alignment
Backports commit 18a60a76147569ca9e11b0607e50ce4012fe1aaa from qemu
2018-02-23 21:28:17 -05:00
Emilio G. Cota 170f6e0b3b
tb: consistently use uint32_t for tb->flags
We are inconsistent with the type of tb->flags: usage varies loosely
between int and uint64_t. Settle on uint32_t everywhere, which is
superior to both: at least one target (aarch64) uses the most significant
bit in the u32, and uint64_t is wasteful.

Compile-tested for all targets.

Backports commit 89fee74a0f066dfd73830a7b5fa137e87888c870 from qemu
2018-02-23 21:28:11 -05:00
Peter Maydell fe2000aa32
target-arm: Avoid unnecessary TLB flush on TCR_EL2, TCR_EL3 writes
The TCR_EL2 and TCR_EL3 regdefs were incorrectly using the
vmsa_tcr_el1_write function for writes. Since these registers don't
have the A1 bit that TCR_EL1 does, we don't need to do a tlb_flush()
when they are written. Remove the unnecessary .writefn and also the
harmless but unneeded .raw_writefn and .resetfn definitions.

Backports commit 6459b94c26dd666badb3547fef1456992a08e60b from qemu
2018-02-23 20:09:12 -05:00
Edgar E. Iglesias eb79db28d5
target-arm/translate-a64.c: Unify some of the ldst_reg decoding
The various load/store variants under disas_ldst_reg can all reuse the
same decoding for opc, size, rt and is_vector.

This patch unifies the decoding in preparation for generating
instruction syndromes for data aborts.
This will allow us to reduce the number of places to hook in updates
to the load/store state needed to generate the insn syndromes.

No functional change.

Backports commit cd694521ca061a5d0436d5df4ec8c17c8f4dfcdb from qemu
2018-02-23 20:06:31 -05:00
Edgar E. Iglesias 602e9e34b9
target-arm/translate-a64.c: Use extract32 in disas_ldst_reg_imm9
Use extract32 instead of open coding the bit masking when decoding
is_signed and is_extended. This streamlines the decoding with some
of the other ldst variants.

No functional change.
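
For illustration, the kind of change this implies (a sketch; the bit positions of the opc field are shown only as an example):

    /* Before: open-coded masking, e.g.  is_signed = opc & (1 << 1);        */
    is_signed   = extract32(opc, 1, 1);   /* extract32(value, start, length) */
    is_extended = extract32(opc, 0, 1);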

Backports commit 026a19c3128678d4fe301fc36e8ffacdc9ecccb8 from qemu
2018-02-23 20:04:11 -05:00
Peter Maydell 56e9d7c09e
target-arm: Split data abort syndrome generator
Split the data abort syndrome generator into two versions:
One with a valid Instruction Specific Syndrome (ISS) and another without.

The following new flags are supported by the syndrome generator
with ISS:
* isv - Instruction syndrome valid
* sas - Syndrome access size
* sse - Syndrome sign extend
* srt - Syndrome register transfer
* sf - Sixty-Four bit register width
* ar - Acquire/Release

These flags are not yet used, so this patch has no functional change
except that we will now correctly set the IL bit in data abort
syndromes without ISS information.
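
The listed flags map onto the ISS encoding of a data abort syndrome roughly as follows (a sketch of the packing; see the ARM ARM for the authoritative layout):

    /* Pack the ISS fields of a data abort syndrome. */
    static uint32_t syn_data_abort_iss(int isv, int sas, int sse, int srt,
                                       int sf, int ar, int fsc)
    {
        return (isv << 24) | (sas << 22) | (sse << 21) | (srt << 16)
             | (sf << 15) | (ar << 14) | fsc;
    }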

Backports commit 094d028a7968236cd2b7f7b96394f7a3b8ad97c8 from qemu
2018-02-23 20:03:04 -05:00
Edgar E. Iglesias bfc74c4da2
gen-icount: Use tcg_set_insn_param
Use tcg_set_insn_param() instead of directly accessing internal
tcg data structures to update an insn param.

Backports commit 25caa94c4a26daaab1e65c6d887e2972aeb5749e from qemu
2018-02-23 20:01:17 -05:00
Edgar E. Iglesias a30a478538
tcg: Add tcg_set_insn_param
Add tcg_set_insn_param as a mechanism to modify an insn
parameter after emitting the insn. This is useful for icount
and also for embedding fault information for a specific insn.
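
A sketch of what such a helper looks like against the TCG op stream of that era (illustrative):

    /* Rewrite argument 'arg' of the already-emitted op at index op_idx. */
    static inline void tcg_set_insn_param(int op_idx, int arg, TCGArg v)
    {
        tcg_ctx.gen_opparam_buf[tcg_ctx.gen_op_buf[op_idx].args + arg] = v;
    }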

Backports commit 1d41478fd428e01f057d3248292e4cdcdb048523 from qemu
2018-02-23 19:58:49 -05:00
Sergey Sorokin 98a6d44c54
target-arm: Fix descriptor address masking in ARM address translation
There is a bug in the ARM address translation regime with the long-descriptor
format. When a descriptor is read, its address is formed from an index
which is part of the input address. On the first iteration this index
is incorrectly masked with the 'grainsize' mask, but according to the
pseudocode it can be wider.
On iterations other than the first, the descriptor address is formed from
the previous-level descriptor by masking with the 'descaddrmask' value.
It always clears just the 12 lower bits, but according to the pseudocode
it must clear the 'grainsize' lower bits instead.
The patch fixes both cases.

Backports commit dddb5223413c5425ae6eaeb3b967627efc9675f7 from qemu
2018-02-23 19:56:56 -05:00
Sergey Sorokin 00e751f18e
target-arm: Stage 2 permission fault was fixed in AArch32 state
As described in AArch32.CheckS2Permission an instruction fetch fails if
the XN bit is set or there is no read permission for the address.

Backports commit dfda68377e20943f474505e75238cb96bc6874bf from qemu
2018-02-23 19:55:11 -05:00
Eric Blake 2f42c2c195
qapi: Change visit_type_FOO() to no longer return partial objects
Returning a partial object on error is an invitation for a careless
caller to leak memory. We already fixed things in an earlier
patch to guarantee NULL if visit_start fails ("qapi: Guarantee
NULL obj on input visitor callback error"), but that does not
help the case where visit_start succeeds but some other failure
happens before visit_end, such that we leak a partially constructed
object outside visit_type_FOO(). As no one outside the testsuite
was actually relying on these semantics, it is cleaner to just
document and guarantee that ALL pointer-based visit_type_FOO()
functions always leave a safe value in *obj during an input visitor
(either the new object on success, or NULL if an error is
encountered), so callers can now unconditionally use
qapi_free_FOO() to clean up regardless of whether an error occurred.

The decision is done by adding visit_is_input(), then updating the
generated code to check if additional cleanup is needed based on
the type of visitor in use.

Note that we still leave *obj unchanged after a scalar-based
visit_type_FOO(); I did not feel like auditing all uses of
visit_type_Enum() to see if the callers would tolerate a specific
sentinel value (not to mention having to decide whether it would
be better to use 0 or ENUM__MAX as that sentinel).
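
In the generated visit_type_FOO() functions this amounts to a cleanup along these lines (a sketch of the generated pattern, not literal generator output):

    out:
        if (err && visit_is_input(v)) {
            qapi_free_Foo(*obj);    /* discard the partially built object */
            *obj = NULL;            /* guarantee a safe value on failure  */
        }
        error_propagate(errp, err);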

Backports commit 68ab47e4b4ecc1c4649362b8cc1e49794d1a6537 from qemu
2018-02-23 19:53:17 -05:00
Eric Blake 0d52542da2
qapi: Simplify semantics of visit_next_list()
The semantics of the list visit are somewhat baroque, with the
following pseudocode when FooList is used:

    start()
    for (prev = head; cur = next(prev); prev = &cur) {
        visit(&cur->value)
    }

Note that these semantics (advance before visit) requires that
the first call to next() return the list head, while all other
calls return the next element of the list; that is, every visitor
implementation is required to track extra state to decide whether
to return the input as-is, or to advance. It also requires an
argument of 'GenericList **' to next(), solely because the first
iteration might need to modify the caller's GenericList head, so
that all other calls have to do a layer of dereferencing.

Thankfully, we only have two uses of list visits in the entire
code base: one in spapr_drc (which completely avoids
visit_next_list(), feeding in integers from a different source
than uint8List), and one in qapi-visit.py. That is, all other
list visitors are generated in qapi-visit.c, and share the same
paradigm based on a qapi FooList type, so we can refactor how
lists are laid out with minimal churn among clients.

We can greatly simplify things by hoisting the special case
into the start() routine, and flipping the order in the loop
to visit before advance:

    start(head)
    for (tail = *head; tail; tail = next(tail)) {
        visit(&tail->value)
    }

With the simpler semantics, visitors have less state to track,
the argument to next() is reduced to 'GenericList *', and it
also becomes obvious whether an input visitor is allocating a
FooList during visit_start_list() (rather than the old way of
not knowing if an allocation happened until the first
visit_next_list()). As a minor drawback, we now allocate in
two functions instead of one, and have to pass the size to
both functions (unless we were to tweak the input visitors to
cache the size to start_list for reuse during next_list, but
that defeats the goal of less visitor state).

The signature of visit_start_list() is chosen to match
visit_start_struct(), with the new parameters after 'name'.

The spapr_drc case is a virtual visit, done by passing NULL for
list, similarly to how NULL is passed to visit_start_struct()
when a qapi type is not used in those visits. It was easy to
provide these semantics for qmp-output and dealloc visitors,
and a bit harder for qmp-input (several prerequisite patches
refactored things to make this patch straightforward). But it
turned out that the string and opts visitors munge enough other
state during visit_next_list() to make it easier to just
document and require a GenericList visit for now; an assertion
will remind us to adjust things if we need the semantics in the
future.

Several pre-requisite cleanup patches made the reshuffling of
the various visitors easier; particularly the qmp input visitor.

Backports commit d9f62dde1303286b24ac8ce88be27e2b9b9c5f46 from qemu
2018-02-23 19:50:26 -05:00
Lioncash ed72ba0f8b
qapi: Fix string input visitor handling of invalid list
As shown in the previous commit, the string input visitor was
treating bogus input as an empty list rather than an error.
Fix parse_str() to set errp, then the callers to exit early if
an error was reported.

Meanwhile, fix the testsuite to use the generated
qapi_free_int16List() instead of rolling our own, and to
validate the fixed behavior, while at the same time documenting
one more change that we'd like to make in a later patch (a
failed visit_start_list should guarantee a NULL pointer,
regardless of what things were on input).

Backports commit 74f24cb6306d065045d0e2215a7d10533fa59c57 from qemu
2018-02-23 19:25:26 -05:00
Eric Blake 6084be1882
qapi: Split visit_end_struct() into pieces
As mentioned in previous patches, we want to call visit_end_struct()
functions unconditionally, so that visitors can release resources
tied up since the matching visit_start_struct() without also having
to worry about error priority if more than one error occurs.

Even though error_propagate() can be safely used to ignore a second
error during cleanup caused by a first error, it is simpler if the
cleanup cannot set an error. So, split out the error checking
portion (basically, input visitors checking for unvisited keys) into
a new function visit_check_struct(), which can be safely skipped if
any earlier errors are encountered, and leave the cleanup portion
(which never fails, but must be called unconditionally if
visit_start_struct() succeeded) in visit_end_struct().

Generated code in qapi-visit.c has diffs resembling:

|@@ -59,10 +59,12 @@ void visit_type_ACPIOSTInfo(Visitor *v,
| goto out_obj;
| }
| visit_type_ACPIOSTInfo_members(v, obj, &err);
|- error_propagate(errp, err);
|- err = NULL;
|+ if (err) {
|+ goto out_obj;
|+ }
|+ visit_check_struct(v, &err);
| out_obj:
|- visit_end_struct(v, &err);
|+ visit_end_struct(v);
| out:

and in qapi-event.c:

|@@ -47,7 +47,10 @@ void qapi_event_send_acpi_device_ost(ACP
| goto out;
| }
| visit_type_q_obj_ACPI_DEVICE_OST_arg_members(v, &param, &err);
|- visit_end_struct(v, err ? NULL : &err);
|+ if (!err) {
|+ visit_check_struct(v, &err);
|+ }
|+ visit_end_struct(v);
| if (err) {
| goto out;

Backports commit 15c2f669e3fb2bc97f7b42d1871f595c0ac24af8 from qemu
2018-02-23 19:13:47 -05:00
Eric Blake ae8d475ae0
qmp: Tighten output visitor rules
Tighten assertions in the QMP output visitor, so that:

- qmp_output_get_qobject() can only be called after pairing a
visit_end_* for every visit_start_* (rather than allowing it on
a partially built object)

- qmp_output_get_qobject() cannot be called unless at least one
visit_type_* or visit_start/visit_end pair has occurred since
creation/reset (the accidental return of NULL fixed by commit
ab8bf1d7 would have been much easier to diagnose)

- ensure that we are encountering the expected object or list
type, to provide protection against mismatched push(struct)/
pop(list) or push(list)/pop(struct), similar to the qmp-input
protection added in commit bdd8e6b5.

- ensure that except for the root, 'name' is non-null inside a
dict, and NULL inside a list (this may need changing later if
we add "name.0" support for better error messages for a list,
but for now it makes sure all users are at least consistent)

Backports commit 56a6f02b8ce1fe41a2a9077593e46eca7d98267d from qemu
2018-02-23 19:04:41 -05:00
Eric Blake e5b2cff2bd
qmp: Support explicit null during visits
Implement the new type_null() callback for the qmp input and
output visitors. While we don't yet have a use for this in QAPI
input (the generator will need some tweaks first), some
potential usages have already been discussed on the list.
Meanwhile, the output visitor could already output explicit null
via type_any, but this gives us finer control.

At any rate, it's easy to test that we can round-trip an explicit
null through manual use of visit_type_null() wrapped by a virtual
visit_start_struct() walk, even if we can't do the visit in a
QAPI type. Repurpose the test_visitor_out_empty test,
particularly since a future patch will tighten semantics to
forbid use of qmp_output_get_qobject() without at least one
intervening visit_type_*.

Backports commit 3df016f185521f8dfa5bd89168722887156405c7 from qemu
2018-02-23 19:02:18 -05:00
Eric Blake ef6b7b50f6
qapi: Add visit_type_null() visitor
Right now, qmp-output-visitor happens to produce a QNull result
if nothing is actually visited between the creation of the visitor
and the request for the resulting QObject. A stronger protocol
would require that a QMP output visit MUST visit something. But
to still be able to produce a JSON 'null' output, we need a new
visitor function that states our intentions. Yes, we could say
that such a visit must go through visit_type_any(), but that
feels clunky.

So this patch introduces the new visit_type_null() interface and
its no-op interface in the dealloc visitor, and stubs in the
qmp visitors (the next patch will finish the implementation).
For the visitors that will not implement the callback, document
the situation. The code in qapi-visit-core unconditionally
dereferences the callback pointer, so that a segfault will inform
a developer if they need to implement the callback for their
choice of visitor.

Note that JSON has a primitive null type, with the single value
null; likewise with the QNull type for QObject; but for QAPI,
we just have the 'null' value without a null type. We may
eventually want to add more support in QAPI for null (most likely,
we'd use it via an alternate type that permits 'null' or an
object); but we'll create that usage when we need it.

Backports commit 3bc97fd5924561d92f32758c67eaffd2e4e25038 from qemu
2018-02-23 15:48:57 -05:00
Eric Blake fafb3e354b
qapi: Document visitor interfaces, add assertions
The visitor interface for mapping between QObject/QemuOpts/string
and QAPI is scandalously under-documented, making changes to visitor
core, individual visitors, and users of visitors difficult to
coordinate. Among other questions: when is it safe to pass NULL,
vs. when a string must be provided; which visitors implement which
callbacks; the difference between concrete and virtual visits.

Correct this by retrofitting proper contracts, and document where some
of the interface warts remain (for example, we may want to modify
visit_end_* to require the same 'obj' as the visit_start counterpart,
so the dealloc visitor can be simplified). Later patches in this
series will tackle some, but not all, of these warts.

Add assertions to (partially) enforce the contract. Some of these
were only made possible by recent cleanup commits.

Backports commit adfb264c9ed04bfc694921b72173be8e29e90024 from qemu
2018-02-23 15:45:31 -05:00
Eric Blake 9e999acc83
qapi: Change visit_start_implicit_struct to visit_start_alternate
After recent changes, the only remaining use of
visit_start_implicit_struct() is for allocating the space needed
when visiting an alternate. Since the term 'implicit struct' is
hard to explain, rename the function to its current usage. While
at it, we can merge the functionality of visit_get_next_type()
into the same function, making it more like visit_start_struct().

Generated code is now slightly smaller:

| {
| Error *err = NULL;
|
|- visit_start_implicit_struct(v, (void**) obj, sizeof(BlockdevRef), &err);
|+ visit_start_alternate(v, name, (GenericAlternate **)obj, sizeof(**obj),
|+ true, &err);
| if (err) {
| goto out;
| }
|- visit_get_next_type(v, name, &(*obj)->type, true, &err);
|- if (err) {
|- goto out_obj;
|- }
| switch ((*obj)->type) {
| case QTYPE_QDICT:
| visit_start_struct(v, name, NULL, 0, &err);
...
| }
|-out_obj:
|- visit_end_implicit_struct(v);
|+ visit_end_alternate(v);
| out:
| error_propagate(errp, err);
| }

Backports commit dbf11922622685934bfb41e7cf2be9bd4a0405c0 from qemu
2018-02-23 15:33:25 -05:00
Eric Blake 5389c1cd5f
qmp-input: Refactor when list is advanced
In the QMP input visitor, visiting a list traverses two objects:
the QAPI GenericList of the caller (which gets advanced in
visit_next_list() regardless of this patch), and the QList input
that we are converting to QAPI. For consistency with QDict
visits, we want to consume elements from the input QList during
the visit_type_FOO() for the list element; that is, we want ALL
the code for consuming an input to live in qmp_input_get_object(),
rather than having it split according to whether we are visiting
a dict or a list. Making qmp_input_get_object() the common point
of consumption will make it easier for a later patch to refactor
visit_start_list() to cover the GenericList * head of a QAPI list,
and in turn will get rid of the 'first' flag (which lived in
qmp_input_next_list() pre-patch, and is hoisted to StackObject
by this patch).

This patch is therefore altering the post-condition use of 'entry',
while keeping what gets visited unchanged, from:

              start_list  next_list  type_ELT ...  next_list  type_ELT  next_list  end_list
    visits                           1st elt                  last elt
    entry     NULL        1st elt    1st elt       last elt   last elt  NULL       gone

where type_ELT() returns (entry ? entry : 1st elt) and next_list() steps
entry

to this usage:

              start_list  next_list  type_ELT ...  next_list  type_ELT  next_list  end_list
    visits                           1st elt                  last elt
    entry     1st elt     1st elt    2nd elt       last elt   NULL      NULL       gone

where type_ELT() steps entry and returns the old entry, and next_list()
leaves entry alone.

Backports commit fcf3cb21783b2dae3358fdbe7001cb2f74e0cedf from qemu
2018-02-23 15:19:40 -05:00
Eric Blake 68cf25fafa
qmp-input: Require struct push to visit members of top dict
Don't embed the root of the visit into the stack of current
containers being visited. That way, we no longer get confused
on whether the first visit of a dictionary is to the dictionary
itself or to one of the members of the dictionary, based on
whether the caller passed name=NULL; and makes the QMP Input
visitor like other visitors where the value of 'name' is now
ignored on the root visit. (We may someday want to revisit
the rules on what 'name' should be on a top-level visit,
rather than just ignoring it; but that would be the topic of
another patch).

An audit of all qmp_input_visitor_new() call sites shows that
there were only two places where callers had previously been
visiting to a QDict with a non-NULL name to bypass a call to
visit_start_struct(), and those were fixed in prior patches.

Backports commit ce140b176920b5b65184020735a3c65ed3e9aeda from qemu
2018-02-23 15:16:43 -05:00
Eric Blake 1bb4e4c787
qmp-input: Don't consume input when checking has_member
Commit e8316d7 mistakenly passed consume=true within
qmp_input_optional() when checking if an optional member was
present, but the mistake was silently ignored since the code
happily let us extract a member more than once. Fix
qmp_input_optional() to not consume anything, then tighten up
the input visitor to ensure that a member is consumed exactly
once (all generated code follows this pattern; and the new
assert will catch any hand-written code that tries to visit
the same key more than once).

Backports commit e5826a2fd727f0be54a81083f31fe02a275465cd from qemu
2018-02-23 15:12:58 -05:00
Eric Blake cae9c2bd2d
qapi: Use strict QMP input visitor in more places
The following uses of a QMP input visitor should be strict
(that is, excess keys in QDict input should be flagged if not
converted to QAPI):

- Testsuite code unrelated to explicitly testing non-strict
mode (test-qmp-commands, test-visitor-serialization); since
we want more code to be strict by default, having more tests
of strict mode doesn't hurt

- Code used for cloning QAPI objects (replay-input.c,
qemu-sockets.c); we are reparsing a QObject just barely
produced by the qmp output visitor and which therefore should
not have any garbage, so while it is extra work to be strict,
it validates that our clone is correct [note that a later patch
series will simplify these two uses by creating an actual
clone visitor that is much more efficient than a
generate/reparse cycle]

- qmp_object_add(), which calls into user_creatable_add_type().
Since command line parsing for '-object' uses the same
user_creatable_add_type() through the OptsVisitor, and that is
always strict, we want to ensure that any nested dictionaries
would be treated the same in QMP and from the command line (I
don't actually know if such nested dictionaries exist). Note
that on this code change, strictness only matters for nested
dictionaries (if even possible), since we already flag excess
input at the top level during an earlier object_property_set()
on an unknown key, whether from QemuOpts:

$ ./x86_64-softmmu/qemu-system-x86_64 -nographic -nodefaults -qmp stdio -object secret,id=sec0,data=letmein,format=raw,foo=bar
qemu-system-x86_64: -object secret,id=sec0,data=letmein,format=raw,foo=bar: Property '.foo' not found

or from QMP:

$ ./x86_64-softmmu/qemu-system-x86_64 -nographic -nodefaults -qmp stdio
{"QMP": {"version": {"qemu": {"micro": 93, "minor": 5, "major": 2}, "package": ""}, "capabilities": []}}
{"execute":"qmp_capabilities"}
{"return": {}}
{"execute":"object-add","arguments":{"qom-type":"secret","id":"sec0","props":{"format":"raw","data":"letmein","foo":"bar"}}}
{"error": {"class": "GenericError", "desc": "Property '.foo' not found"}}

The only remaining uses of non-strict input visits are:

- QMP 'qom-set' (which eventually executes
object_property_set_qobject()) - mark it as something to revisit
in the future (I didn't want to spend any more time on this patch
auditing if we have any QOM dictionary properties that might be
impacted, and couldn't easily prove whether this code path is
shared with anything else).

- test-qmp-input-visitor: explicit tests of non-strict mode. If
we later get rid of users that don't need strictness, then this
test should be merged with test-qmp-input-strict

Backports relevant parts of commit 240f64b6dc3346d044d7beb7cc3a53668ce47384 from qemu
2018-02-23 15:11:35 -05:00
Eric Blake 559304aed9
qapi: Consolidate QMP input visitor creation
Rather than having two separate ways to create a QMP input
visitor, where the safer approach has the more verbose name,
it is better to consolidate things into a single function
where the caller must explicitly choose whether to be strict
or to ignore excess input. This patch is the strictly
mechanical conversion; the next patch will then audit which
uses can be made stricter.
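
After the consolidation, creation looks roughly like this (a sketch of the resulting call style):

    /* One constructor; the caller chooses strictness explicitly. */
    QmpInputVisitor *qiv = qmp_input_visitor_new(qobj, true /* strict */);
    visit_type_Foo(qmp_input_get_visitor(qiv), NULL, &foo, &err);
    qmp_input_visitor_cleanup(qiv);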

Backports commit fc471c18d5d2ec713d5a019f9530398675494bc8 from qemu
2018-02-23 15:09:57 -05:00
Eric Blake b1c4558849
qmp-input: Clean up stack handling
Management of the top of stack was a bit verbose; creating a
temporary variable and adding some comments makes the existing
code more legible before the next few patches improve things.
No semantic changes other than asserting that we are always
visiting a QObject, and not a NULL value. In particular, the
check for 'name && qobject_type(qobj) == QTYPE_QDICT' is a
bit overkill (a dict visit should always have a name); a later
patch revisits that, while this patch is only changing one
layer of indentation due to dropping 'if (qobj)'.

Backports commit b471d012e5d7bec1d2272738141e121b5581fcdf from qemu
2018-02-23 15:08:14 -05:00
Eric Blake 0ec9a5adaf
qapi: Guarantee NULL obj on input visitor callback error
Our existing input visitors were not very consistent on errors in a
function taking 'TYPE **obj'. These are start_struct(),
start_alternate(), type_str(), and type_any(). next_list() is
similar, but can't fail (see commit 08f9541). While all of them set
'*obj' to allocated storage on success, it was not obvious whether
'*obj' was guaranteed safe on failure, or whether it was left
uninitialized. But a future patch wants to guarantee that
visit_type_FOO() does not leak a partially-constructed obj back to
the caller; it is easier to implement this if we can reliably state
that input visitors assign '*obj' regardless of success or failure,
and that on failure *obj is NULL. Add assertions to enforce
consistency in the final setting of err vs. *obj.

The opts-visitor start_struct() doesn't set an error, but it
also was doing a weird check for 0 size; all callers pass in
non-zero size if obj is non-NULL.

The testsuite has at least one spot where we no longer need
to pre-initialize a variable prior to a visit; valgrind confirms
that the test is still fine with the cleanup.

A later patch will document the design constraint implemented
here.

Backports commit e58d695e6c3a5cfa0aa2fc91b87ade017ef28b05 from qemu
2018-02-23 14:53:23 -05:00
Eric Blake 3cf7b6dd3b
qapi: Adjust layout of FooList types
By sticking the next pointer first, we don't need a union with
64-bit padding for smaller types.  On 32-bit platforms, this
can reduce the size of uint8List from 16 bytes (or 12, depending
on whether 64-bit ints can tolerate 4-byte alignment) down to 8.
It has no effect on 64-bit platforms (where alignment still
dictates a 16-byte struct); but fewer anonymous unions is still
a win in my book.
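
Concretely, for uint8List the change amounts to roughly the following (a sketch of the generated layout before and after):

    /* Before: value first, padded through a 64-bit union; next pointer last. */
    struct uint8List {
        union {
            uint8_t value;
            uint64_t padding;
        };
        uint8List *next;
    };

    /* After: next pointer first, so no padding union is needed. */
    struct uint8List {
        uint8List *next;
        uint8_t value;
    };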

It requires visit_next_list() to gain a size parameter, to know
what size element to allocate; comparable to the size parameter
of visit_start_struct().

I debated about going one step further, to allow for fewer casts,
by doing:
    typedef struct GenericList GenericList;
    struct GenericList {
        GenericList *next;
    };
    struct FooList {
        GenericList base;
        Foo *value;
    };
so that you convert to 'GenericList *' by '&foolist->base', and
back by 'container_of(generic, GenericList, base)' (as opposed to
the existing '(GenericList *)foolist' and '(FooList *)generic').
But doing that would require hoisting the declaration of
GenericList prior to inclusion of qapi-types.h, rather than its
current spot in visitor.h; it also makes iteration a bit more
verbose through 'foolist->base.next' instead of 'foolist->next'.

Note that for lists of objects, the 'value' payload is still
hidden behind a boxed pointer.  Someday, it would be nice to do:

struct FooList {
    FooList *next;
    Foo value;
};

for one less level of malloc for each list element.  This patch
is a step in that direction (now that 'next' is no longer at a
fixed non-zero offset within the struct, we can store more than
just a pointer's-worth of data as the value payload), but the
actual conversion would be a task for another series, as it will
touch a lot of code.

Backports commit e65d89bf1a4484e0db0f3dc820a8b209f2fb1e8b from qemu
2018-02-23 14:49:06 -05:00
Eric Blake eef0932471
qapi-visit: Add visitor.type classification
We have three classes of QAPI visitors: input, output, and dealloc.
Currently, all implementations of these visitors have one thing in
common based on their visitor type: the implementation used for the
visit_type_enum() callback. But since we plan to add more such
common behavior, in relation to documenting and further refining
the semantics, it makes more sense to have the visitor
implementations advertise which class they belong to, so the common
qapi-visit-core code can use that information in multiple places.

A later patch will better document the types of visitors directly
in visitor.h.

For this patch, knowing the class of a visitor implementation lets
us make input_type_enum() and output_type_enum() become static
functions, by replacing the callback function Visitor.type_enum()
with the simpler enum member Visitor.type. Share a common
assertion in qapi-visit-core as part of the refactoring.

Move comments in opts-visitor.c to match the refactored layout.
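
A sketch of the classification this introduces (illustrative):

    /* Visitors advertise their class instead of supplying a type_enum callback. */
    typedef enum VisitorType {
        VISITOR_INPUT,
        VISITOR_OUTPUT,
        VISITOR_DEALLOC,
    } VisitorType;

    struct Visitor {
        /* ... callbacks ... */
        VisitorType type;   /* lets qapi-visit-core pick input_type_enum()
                               or output_type_enum() internally */
    };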

Backports commit 983f52d4b3f86fb9dc9f8b142132feb5a8723016 from qemu
2018-02-23 14:25:41 -05:00
Dave Hansen f50acc467f
target-i386: fix typo in xsetbv implementation
QEMU 2.6 added support for the XSAVE family of instructions, which
includes the XSETBV instruction which allows setting the XCR0
register.

But, when booting Linux kernels with XSAVE support enabled, I was
getting very early crashes where the instruction pointer was set
to 0x3. I tracked it down to a jump instruction generated by this:

gen_jmp_im(s->pc - pc_start);

where s->pc is pointing to the instruction after XSETBV and pc_start
is pointing _at_ XSETBV. Subtract the two and you get 0x3. Whoops.

The fix is to replace this typo with the pattern found everywhere
else in the file when folks want to end the translation buffer.
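
A sketch of the corrected call, assuming the usual end-of-translation idiom in target-i386/translate.c:

    /* End the TB after XSETBV so the new XCR0 value takes effect. */
    gen_jmp_im(s->pc - s->cs_base);   /* was: gen_jmp_im(s->pc - pc_start); */
    gen_eob(s);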

Richard Henderson confirmed that this is a bug and that this is the
correct fix.

Backports commit 502c8e86ea07294067578292c6d402601c196019 from qemu
2018-02-23 14:15:35 -05:00