Commit graph

58 commits

Author SHA1 Message Date
Lioncash 49166d71ab
cpu: Remove unused callbacks 2018-03-07 16:55:46 -05:00
Laurent Vivier 0aecb15f3b
accel/tcg: add size parameter in tlb_fill()
The MC68040 MMU provides the size of the access that
triggers the page fault.

This size is set in the Special Status Word which
is written in the stack frame of the access fault
exception.

So we need the size in m68k_cpu_unassigned_access() and
m68k_cpu_handle_mmu_fault().

To be able to do that, this patch modifies the prototype of
the handle_mmu_fault handler, tlb_fill() and probe_write().
do_unassigned_access() already includes a size parameter.

This patch also updates the handle_mmu_fault handlers and
tlb_fill() of all targets (parameter only, no code change).
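
A minimal sketch of the resulting prototype, assuming the upstream QEMU
declaration after this change:

    void tlb_fill(CPUState *cs, target_ulong addr, int size,
                  MMUAccessType access_type, int mmu_idx, uintptr_t retaddr);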

Backports commit 98670d47cd8d63a529ff230fd39ddaa186156f8c from qemu
2018-03-06 10:56:34 -05:00
Richard Henderson 28061c2e59
qom: Introduce CPUClass.tcg_initialize
Move target cpu tcg initialization to common code,
called from cpu_exec_realizefn.
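
A minimal sketch of the resulting pattern, assuming the CPUClass hook added
here (the target names are illustrative):

    /* the target's class_init registers its translator init function once... */
    static void foo_cpu_class_init(ObjectClass *oc, void *data)
    {
        CPUClass *cc = CPU_CLASS(oc);

        cc->tcg_initialize = foo_tcg_init;   /* hypothetical per-target init */
    }
    /* ...and cpu_exec_realizefn() invokes cc->tcg_initialize() exactly once. */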

Backports commit 55c3ceef61fcf06fc98ddc752b7cce788ce7680b from qemu
2018-03-05 09:49:26 -05:00
Igor Mammedov 6f265062ef
qom: add helper macro DEFINE_TYPES()
DEFINE_TYPES() will help to simplify following routine patterns:

static void foo_register_types(void)
{
    type_register_static(&foo1_type_info);
    type_register_static(&foo2_type_info);
    ...
}

type_init(foo_register_types)

or

static void foo_register_types(void)
{
    int i;

    for (i = 0; i < ARRAY_SIZE(type_infos); i++) {
        type_register_static(&type_infos[i]);
    }
}

type_init(foo_register_types)

with a single line

DEFINE_TYPES(type_infos)

where types have static definition which could be consolidated in
a single array of TypeInfo structures.
It saves us ~6-10 LOC per use case and helps replace the
imperative foo_register_types() with a declarative style of
type registration.
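
Illustrative usage, assuming the macro as introduced (the type names are
placeholders):

    static const TypeInfo foo_types[] = {
        { .name = TYPE_FOO1, .parent = TYPE_DEVICE },
        { .name = TYPE_FOO2, .parent = TYPE_DEVICE },
    };

    DEFINE_TYPES(foo_types)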

Backports commit 38b5d79b2e8cf6085324066d84e8bb3b3bbe8548 from qemu
2018-03-05 03:51:54 -05:00
Igor Mammedov c97583fc42
qom: introduce type_register_static_array()
It will help to remove code duplication when registering
static types in places that have an open-coded loop to
perform batch type registration.
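
Sketch of the helper's assumed shape and a typical call replacing an
open-coded loop:

    void type_register_static_array(const TypeInfo *infos, int nr_infos);

    /* instead of a for-loop over type_infos: */
    type_register_static_array(type_infos, ARRAY_SIZE(type_infos));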

Backports commit aa04c9d20704fa5b9ab239d5111adbcce5f49808 from qemu
2018-03-05 03:49:50 -05:00
Peter Xu 0741c3880a
qom: provide root container for internal objs
We have object_get_objects_root() to keep user-created objects, but
no place for objects that will be used internally. Create such a
container for internal objects.
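
A hedged sketch of the intended use, assuming the accessor added here is
object_get_internal_root():

    /* park an internally created object under the internal root container */
    object_property_add_child(object_get_internal_root(), "some-internal-obj",
                              obj, &error_abort);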

Backports commit 7c47c4ead75d0b733ee8f2f51fd1de0644cc1308 from qemu
2018-03-05 01:16:50 -05:00
Cornelia Huck 6f9b7a9363
cpu: drop old comments describing members
These comments are obviously stale.

Backports commit 6fda014e1a65474c4877b36cc42e8a0f377817a4 from qemu
2018-03-05 00:03:31 -05:00
Peter Maydell 2070ef1c37
boards.h: Define new flag ignore_memory_transaction_failures
Define a new MachineClass field ignore_memory_transaction_failures.
If this flag is true then the CPU will ignore memory transaction
failures which should cause the CPU to take an exception due to an
access to an unassigned physical address; the transaction will
instead return zero (for a read) or be ignored (for a write). This
should be set only by legacy board models which rely on the old
RAZ/WI behaviour for handling devices that QEMU does not yet model.
New board models should instead use "unimplemented-device" for all
memory ranges where the guest will attempt to probe for a device that
QEMU doesn't implement and a stub device is required.

We need this for ARM boards, where we're about to implement support for
generating external aborts on memory transaction failures. Too many
of our legacy board models rely on the RAZ/WI behaviour and we
would break currently working guests when their "probe for device"
code provoked an external abort rather than a RAZ.
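
A minimal sketch of how a legacy board model might opt in (the board is
hypothetical; the field name is the one defined here):

    static void legacy_board_class_init(ObjectClass *oc, void *data)
    {
        MachineClass *mc = MACHINE_CLASS(oc);

        /* keep the old RAZ/WI behaviour instead of generating aborts */
        mc->ignore_memory_transaction_failures = true;
    }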

Backports commit ed860129acd3fcd0b1e47884e810212aaca4d21b from qemu
2018-03-04 21:27:15 -05:00
Peter Maydell c44d323359
cpu: Define new cpu_transaction_failed() hook
Currently we have a rather half-baked setup for allowing CPUs to
generate exceptions on accesses to invalid memory: the CPU has a
cpu_unassigned_access() hook which the memory system calls in
unassigned_mem_write() and unassigned_mem_read() if the current_cpu
pointer is non-NULL. This was originally designed before we
implemented the MemTxResult type that allows memory operations to
report a success or failure code, which is why the hook is called
right at the bottom of the memory system. The major problem with
this is that it means that the hook can be called even when the
access was not actually done by the CPU: for instance if the CPU
writes to a DMA engine register which causes the DMA engine to begin
a transaction which has been set up by the guest to operate on
invalid memory then this will cause the CPU to take an exception
incorrectly. Another minor problem is that currently if a device
returns a transaction error then this won't turn into a CPU exception
at all.

The right way to do this is to allow the CPU to respond
to memory system transaction failures at the point where the
CPU specific code calls into the memory system.

Define a new QOM CPU method and utility function
cpu_transaction_failed() which is called in these cases.
The functionality here overlaps with the existing
cpu_unassigned_access() because individual target CPUs will
need some work to convert them to the new system. When this
transition is complete we can remove the old cpu_unassigned_access()
code.
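
A sketch of the hook's assumed shape (member and wrapper names per this
commit; the parameter list is an assumption about the upstream declaration):

    /* CPUClass gains roughly: */
    void (*do_transaction_failed)(CPUState *cpu, hwaddr physaddr, vaddr addr,
                                  unsigned size, MMUAccessType access_type,
                                  int mmu_idx, MemTxAttrs attrs,
                                  MemTxResult response, uintptr_t retaddr);

    /* Target code reports failures through the cpu_transaction_failed()
     * wrapper, which does nothing when the hook is not implemented. */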

Backports commit 0dff0939f6fc6a7abd966d4295f06a06d7a01df9 from qemu
2018-03-04 13:11:50 -05:00
Eduardo Habkost 382022929e
cpu: cpu_by_arch_id() helper
The helper can be used for CPU object lookup using the CPU's
arch-specific ID (the one returned by CPUClass::get_arch_id()).
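
Sketch of the helper, assuming the declaration added by this commit:

    CPUState *cpu_by_arch_id(int64_t id);   /* returns NULL if no CPU matches */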

Backports commit 5ce46cb34eecec0bc94a4b1394763f9a1bbe20c3 from qemu
2018-03-04 12:16:39 -05:00
Michael S. Tsirkin fd472c53c6
Revert "cpu: add APIs to allocate/free CPU environment"
This reverts commit e2a7f28693aea7e194ec1435697ec4feb24f8a6f.

This was not supposed to go upstream yet. Reverting.

Backports commit cde0a63ad721dbb538419a00f9405587680be436 from qemu
2018-03-04 01:42:49 -05:00
Michael S. Tsirkin 71bf994214
cpu: add APIs to allocate/free CPU environment
These will be implemented and then used by follow-up patches.

Backports commit e2a7f28693aea7e194ec1435697ec4feb24f8a6f from qemu
2018-03-04 01:39:09 -05:00
Igor Mammedov fe4152c6a5
qom: enforce readonly nature of link's check callback
link's check callback is supposed to verify/permit setting it;
however, currently nothing prevents the callback from misusing it
and modifying the target object from within.
Make sure that the readonly semantics are checked by the compiler
to prevent the callback's misuse.

Backports commit 8f5d58ef2c92d7b82d9a6eeefd7c8854a183ba4a from qemu
2018-03-03 22:17:20 -05:00
Emilio G. Cota f66e74d65b
tcg: consistently access cpu->tb_jmp_cache atomically
Some code paths can lead to atomic accesses racing with memset()
on cpu->tb_jmp_cache, which can result in torn reads/writes
and is undefined behaviour in C11.

These torn accesses are unlikely to show up as bugs, but from code
inspection they seem possible. For example, tb_phys_invalidate does:
    /* remove the TB from the hash list */
    h = tb_jmp_cache_hash_func(tb->pc);
    CPU_FOREACH(cpu) {
        if (atomic_read(&cpu->tb_jmp_cache[h]) == tb) {
            atomic_set(&cpu->tb_jmp_cache[h], NULL);
        }
    }
Here atomic_set might race with a concurrent memset (such as the
ones scheduled via "unsafe" async work, e.g. tlb_flush_page) and
therefore we might end up with a torn pointer (or who knows what,
because we are under undefined behaviour).

This patch converts parallel accesses to cpu->tb_jmp_cache to use
atomic primitives, thereby bringing these accesses back to defined
behaviour. The price to pay is to potentially execute more instructions
when clearing cpu->tb_jmp_cache, but given how infrequently they happen
and the small size of the cache, the performance impact I have measured
is within noise range when booting debian-arm.

Note that under "safe async" work (e.g. do_tb_flush) we could use memset
because no other vcpus are running. However I'm keeping these accesses
atomic as well to keep things simple and to avoid confusing analysis
tools such as ThreadSanitizer.
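
For illustration, clearing the cache with atomics instead of memset() looks
roughly like the helper this commit introduces:

    static inline void cpu_tb_jmp_cache_clear(CPUState *cpu)
    {
        unsigned int i;

        for (i = 0; i < TB_JMP_CACHE_SIZE; i++) {
            atomic_set(&cpu->tb_jmp_cache[i], NULL);
        }
    }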

Backports commit f3ced3c59287dabc253f83f0c70aa4934470c15e from qemu
2018-03-03 21:12:36 -05:00
Marc-André Lureau ca25248ecd
object: add uint property setter/getter
Backports commit 3152779cd63ba41331ef41659406f65b03e7911a from qemu
2018-03-03 18:43:17 -05:00
KONRAD Frederic c5730ff194
tcg: add options for enabling MTTCG
We know there will be cases where MTTCG won't work until additional work
is done in the front/back ends to support it. It will however be useful to
be able to turn it on.

As a result MTTCG will default to off unless the combination is
supported. However the user can turn it on for the sake of testing.

Backports commit 8d4e9146b3568022ea5730d92841345d41275d66 from qemu
2018-03-02 09:25:01 -05:00
Julian Brown cc217b0c90
arm: Correctly handle watchpoints for BE32 CPUs
In BE32 mode, sub-word size watchpoints can fail to trigger because the
address of the access is adjusted in the opcode helpers before being
compared with the watchpoint registers. This patch reverses the address
adjustment before performing the comparison with the help of a new CPUClass
hook.

This version of the patch augments and tidies up comments a little.
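
The new hook, sketched as assumed from this commit:

    /* lets the target undo its BE32 address adjustment before the comparison */
    vaddr (*adjust_watchpoint_address)(CPUState *cpu, vaddr addr, int len);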

Backports commit 40612000599e52e792d23c998377a0fa429c4036 from qemu
2018-03-02 00:24:33 -05:00
Paolo Bonzini 9d64a89acf
tcg: comment on which functions have to be called with tb_lock held
softmmu requires more functions to be thread-safe, because translation
blocks can be invalidated from e.g. notdirty callbacks. Probably the
same holds for user-mode emulation, it's just that no one has ever
tried to produce a coherent locking scheme there.

This patch will guide the introduction of more tb_lock and tb_unlock
calls for system emulation.

Note that after this patch some (most) of the mentioned functions are
still called outside tb_lock/tb_unlock. The next one will rectify this.

Backports commit 7d7500d99895f888f97397ef32bb536bb0df3b74 from qemu
2018-02-28 10:26:28 -05:00
Alex Bennée 33589eb75f
cpus: pass CPUState to run_on_cpu helpers
CPUState is a fairly common pointer to pass to these helpers. This means
if you need other arguments for the async_run_on_cpu case you end up
having to do a g_malloc to stuff additional data into the routine. For
the current users this isn't a massive deal but for MTTCG this gets
cumbersome when the only other parameter is often an address.

This adds the typedef run_on_cpu_func for helper functions which has an
explicit CPUState * passed as the first parameter. All the users of
run_on_cpu and async_run_on_cpu have had their helpers updated to use
CPUState where available.
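
The typedef and a hypothetical helper, as a sketch of this era's API:

    typedef void (*run_on_cpu_func)(CPUState *cpu, void *data);

    static void do_flush_page(CPUState *cpu, void *data)   /* hypothetical */
    {
        tlb_flush_page(cpu, (target_ulong)(uintptr_t)data);
    }

    /* the address rides in the data pointer, no g_malloc needed */
    async_run_on_cpu(cpu, do_flush_page, (void *)(uintptr_t)addr);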

Backports commit e0eeb4a21a3ca4b296220ce4449d8acef9de9049 from qemu
2018-02-26 04:54:55 -05:00
Sergey Sorokin d1e4ac0451
Fix confusing argument names in some common functions
There are functions tlb_fill(), cpu_unaligned_access() and
do_unaligned_access() that are called with access type and mmu index
arguments. But these arguments are named 'is_write' and 'is_user' in their
declarations. The patch fixes the argument names to avoid confusion.

Backports commit b35399bb4e9968296a12303b00f9f2066470e987 from qemu
2018-02-25 03:58:27 -05:00
Changlong Xie 2ca07642f1
qom: Fix comment typo
It's qom_unref, not qdef_unref.

Backports commit ada03a0e8423ef8950e30d216f56a9661a4070e2 from qemu
2018-02-25 00:46:15 -05:00
Bharata B Rao 851dec945d
qom: API to get instance_size of a type
Add an API object_type_get_size(const char *typename) that returns the
instance_size of the given typename.
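
Sketch of the assumed declaration and a hedged usage example:

    size_t object_type_get_size(const char *typename);

    /* e.g. allocate storage large enough for any instance of a given type */
    void *storage = g_malloc0(object_type_get_size(TYPE_DEVICE));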

Backports commit 3f97b53a682d2595747c926c00d78b9d406f1be0 from qemu
2018-02-24 19:00:16 -05:00
Paolo Bonzini 9485b7c2e1
cpu: move exec-all.h inclusion out of cpu.h
exec-all.h contains TCG-specific definitions. It is not needed outside
TCG-specific files such as translate.c, exec.c or *helper.c.

One generic function had snuck into include/exec/exec-all.h; move it to
include/qom/cpu.h.

Backports commit 63c915526d6a54a95919ebece83fa9ca631b2508 from qemu
2018-02-24 02:39:08 -05:00
Sergey Fedorov 1a768018c2
tcg: Remove needless CPUState::current_tb
This field was used for telling cpu_interrupt() to unlink a chain of TBs
being executed when it worked that way. Now cpu_interrupt() doesn't do
this anymore, so we don't need this field anymore.

Backports commit 3213525f8ab48742db09dab18cb9ae6f36a6c921 from qemu
2018-02-23 23:45:42 -05:00
Sergey Fedorov ba9a237586
tcg: Rework tb_invalidated_flag
'tb_invalidated_flag' was meant to catch two events:
* some TB has been invalidated by tb_phys_invalidate();
* the whole translation buffer has been flushed by tb_flush().

Then it was checked:
* in cpu_exec() to ensure that the last executed TB can be safely
linked to directly call the next one;
* in cpu_exec_nocache() to decide if the original TB should be provided
for further possible invalidation along with the temporarily
generated TB.

It is always safe to patch an invalidated TB since it is not going to be
used anyway. It is also safe to call tb_phys_invalidate() for an already
invalidated TB. Thus, setting this flag in tb_phys_invalidate() is
simply unnecessary. Moreover, it can prevent proper linking
of TBs when an arbitrary TB has been invalidated. So just don't touch it
in tb_phys_invalidate().

If this flag is only used to catch whether tb_flush() has been called
then rename it to 'tb_flushed'. Declare it as 'bool' and stick to using
only 'true' and 'false' to set its value. Also, instead of setting it in
tb_gen_code(), just after tb_flush() has been called, do it right inside
of tb_flush().

In cpu_exec(), this flag is used to track if tb_flush() has been called
and have made 'next_tb' (a reference to the last executed TB) invalid
for linking it to directly call the next TB. tb_flush() can be called
during the CPU execution loop from tb_gen_code(), during TB execution or
by another thread while 'tb_lock' is released. Catch for translation
buffer flush reliably by resetting this flag once before first TB lookup
and each time we find it set before trying to add a direct jump. Don't
touch it in tb_find_physical().

Each vCPU has its own execution loop in multithreaded mode and thus
should have its own copy of the flag to be able to reset it with its own
'next_tb' and not affect any other vCPU execution thread. So make this
flag per-vCPU and move it to CPUState.

In cpu_exec_nocache(), we only need to check if tb_flush() has been
called from tb_gen_code() called by cpu_exec_nocache() itself. To do
this reliably, preserve the old value of the flag, reset it before
calling tb_gen_code(), check afterwards, and combine the saved value
back to the flag.

This patch is based on the patch "tcg: move tb_invalidated_flag to
CPUState" from Paolo Bonzini <pbonzini@redhat.com>.

Backports commit 6f789be56d3f38e9214dafcfab3bf9be7191f370 from qemu
2018-02-23 23:34:51 -05:00
Markus Armbruster 06668850e3
include/qemu/osdep.h: Don't include qapi/error.h
Commit 57cb38b included qapi/error.h into qemu/osdep.h to get the
Error typedef. Since then, we've moved to include qemu/osdep.h
everywhere. Its file comment explains: "To avoid getting into
possible circular include dependencies, this file should not include
any other QEMU headers, with the exceptions of config-host.h,
compiler.h, os-posix.h and os-win32.h, all of which are doing a
similar job to this file and are under similar constraints."
qapi/error.h doesn't do a similar job, and it doesn't adhere to
similar constraints: it includes qapi-types.h. That's in excess of
100KiB of crap most .c files don't actually need.

Add the typedef to qemu/typedefs.h, and include that instead of
qapi/error.h. Include qapi/error.h in .c files that need it and don't
get it now. Include qapi-types.h in qom/object.h for uint16List.

Update scripts/clean-includes accordingly. Update it further to match
reality: replace config.h by config-target.h, add sysemu/os-posix.h,
sysemu/os-win32.h. Update the list of includes in the qemu/osdep.h
comment quoted above similarly.

This reduces the number of objects depending on qapi/error.h from "all
of them" to less than a third. Unfortunately, the number depending on
qapi-types.h shrinks only a little. More work is needed for that one.

Backports commit da34e65cb4025728566d6504a99916f6e7e1dd6a from qemu
2018-02-21 23:08:18 -05:00
Stefan Weil baa477d324
Remove unneeded include statements for setjmp.h
As soon as setjmp.h is included from qemu/osdep.h, those old include
statements are no longer needed.

Add also setjmp.h to the list in scripts/clean-includes.

Backports commit 8ff98f1ed2f50cd05c3c5027c7efdf69859ec664 from qemu
2018-02-21 22:57:32 -05:00
Daniel P. Berrange eddfb13c2c
qom: Change object property iterator API contract
Currently the ObjectProperty iterator API works as follows:

    ObjectPropertyIterator *iter;

    iter = object_property_iter_init(obj);
    while ((prop = object_property_iter_next(iter))) {
        ...
    }
    object_property_iter_free(iter);

This has the benefit that the ObjectPropertyIterator struct
can be opaque, but has the downside that callers need to
explicitly call a free function. It is also not in keeping
with iterator style used elsewhere in QEMU/GLib2.

This patch changes the API to use stack allocation instead:

    ObjectPropertyIterator iter;

    object_property_iter_init(&iter, obj);
    while ((prop = object_property_iter_next(&iter))) {
        ...
    }

Backports commit 7746abd8e9ee9db20c0b0fdb19504f163ba3cbea from qemu
2018-02-21 21:03:58 -05:00
Daniel P. Berrange b97ab59f08
qom: Allow properties to be registered against classes
When there are many instances of a given class, registering
properties against the instance is wasteful of resources. The
majority of objects have a statically defined list of possible
properties, so most of the properties are easily registerable
against the class. Only those properties which are conditionally
registered at runtime need to be recorded against the class.

Registering properties against classes also makes it possible
to provide static introspection of QOM - currently introspection
is only possible after creating an instance of a class, which
severely limits its usefulness.

This implementation only supports simple scalar properties. It does not
attempt to allow child object / link object properties against
the class. There are ways to support those too, but it would
make this patch more complicated, so it is left as an exercise
for the future.

There is no equivalent to object_property_del() provided, since
classes must be immutable once they are defined.
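
Illustrative registration against a class (the property name and accessors
are hypothetical; the signature is assumed from this era of object.h):

    static void foo_class_init(ObjectClass *oc, void *data)
    {
        object_class_property_add(oc, "counter", "uint32",
                                  foo_get_counter, foo_set_counter,
                                  NULL, NULL, &error_abort);
    }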

Backports commit 16bf7f522a2ff68993f80631ed86254c71eaf5d4 from qemu
2018-02-21 21:00:56 -05:00
Pavel Fedin 825bc2fb04
qom: Replace object property list with GHashTable
ARM GICv3 systems with a large number of CPUs create lots of IRQ pins. Since
every pin is represented as a property, the number of these properties becomes
very large. Every property addition first makes sure there are no duplicates.
Traversing the list becomes very slow, so QEMU initialization takes
significant time (several seconds for e.g. 16 CPUs).

This patch replaces the list with a GHashTable, making lookup very fast. The only
drawback is that object_child_foreach() and object_child_foreach_recursive()
cannot add or remove properties during traversal, since GHashTableIter does
not have a modify-safe version. However, the code does not seem to modify objects
via these functions.

Backports commit b604a854e843505007c59d68112c654556102a20 from qemu
2018-02-21 13:35:10 -05:00
Lluís Vilanova d111e2df2d
typedefs: Add CPUState
Backports commit b23197f9cf2f221a6cc6272d36852f4f70cf9c1b from qemu
2018-02-21 01:55:22 -05:00
Alistair Francis a4bf026460
qom: Correct object_property_get_int() description
The description of object_property_get_int() stated that on an error
it returns NULL. This is not the case and the function will return -1
if an error occurs. Update the commented documentation accordingly.
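
A hedged usage example reflecting the corrected documentation:

    Error *err = NULL;
    int64_t val = object_property_get_int(obj, "some-prop", &err);
    if (err) {
        /* val is -1 here, not NULL */
        error_report_err(err);
    }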

Backports commit b29b47e9b35017428904e0e934700877dfaabe73 from qemu
2018-02-20 11:52:16 -05:00
Sergey Fedorov 6a3038db7c
cpu: Add callback to check architectural watchpoint match
When a QEMU watchpoint matches, that is not necessarily an architectural
watchpoint match yet. If it is a stop-before-access watchpoint, then it
is hardly possible to ignore it after throwing a TCG exception.

A special callback is introduced to check for architectural watchpoint
match before raising a TCG exception.
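
The callback's assumed shape, sketched:

    /* return true if the QEMU watchpoint hit is also an architectural match */
    bool (*debug_check_watchpoint)(CPUState *cpu, CPUWatchpoint *wp);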

Backports commit 568496c0c0f1863a4bc18539962cd8d81baa4e30 from qemu
2018-02-20 11:43:56 -05:00
Eric Blake 9ec25b4673
qom: Swap 'name' next to visitor in ObjectPropertyAccessor
Similar to the previous patch, it's nice to have all functions
in the tree that involve a visitor and a name for conversion to
or from QAPI to consistently stick the 'name' parameter next
to the Visitor parameter.

Done by manually changing include/qom/object.h and qom/object.c,
then running this Coccinelle script and touching up the fallout
(Coccinelle insisted on adding some trailing whitespace).

@ rule1 @
identifier fn;
typedef Object, Visitor, Error;
identifier obj, v, opaque, name, errp;
@@
void fn
- (Object *obj, Visitor *v, void *opaque, const char *name,
+ (Object *obj, Visitor *v, const char *name, void *opaque,
Error **errp) { ... }

@@
identifier rule1.fn;
expression obj, v, opaque, name, errp;
@@
fn(obj, v,
- opaque, name,
+ name, opaque,
errp)

Backports commit d7bce9999df85c56c8cb1fcffd944d51bff8ff48 from qemu
2018-02-19 23:14:37 -05:00
Eric Blake 5b6f0cbdb7
qom: Use typedef for Visitor
No need to repeat 'struct Visitor' when we already have it in
typedefs.h. Omitting the redundant 'struct' also makes a later
patch easier to search for all object property callbacks that
are associated with a Visitor.

Backports commit 4fa45492c3387c0fa51e8e81160ac9a7814f44a2 from qemu
2018-02-19 22:26:47 -05:00
Peter Crosthwaite ce997e1caf
qom/cpu: Add MemoryRegion property
Add a MemoryRegion property, which if set is used to construct
the CPU's initial (default) AddressSpace.

Backports commit 6731d864f80938e404dc3e5eb7f6b76b891e3e43 from qemu
2018-02-18 21:54:50 -05:00
Peter Maydell 8edd6ffdfd
cputlb.c: Use correct address space when looking up MemoryRegionSection
When looking up the MemoryRegionSection for the new TLB entry in
tlb_set_page_with_attrs(), use cpu_asidx_from_attrs() to determine
the correct address space index for the lookup, and pass it into
address_space_translate_for_iotlb().

Backports commit d7898cda81b6efa6b2d7a749882695cdcf280eaa from qemu
2018-02-17 23:15:22 -05:00
Peter Maydell d23831f4dd
cpu: Add new asidx_from_attrs() method
Add a new method to CPUClass which the memory system core can
use to obtain the correct address space index to use for a memory
access with a given set of transaction attributes, together
with the wrapper function cpu_asidx_from_attrs() which implements
the default behaviour ("always use asidx 0") for CPU classes
which don't provide the method.
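
Sketch of the wrapper, assuming the declaration added here:

    int cpu_asidx_from_attrs(CPUState *cpu, MemTxAttrs attrs);
    /* falls back to address space index 0 when the hook is not implemented */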

Backports commit d7f25a9e6a6b2c69a0be6033903b7d6087bcf47d from qemu
2018-02-17 22:45:32 -05:00
Lioncash 1cc4b92c67
cpu: Add new get_phys_page_attrs_debug() method
Add a new optional method get_phys_page_attrs_debug() to CPUClass.
This is like the existing get_phys_page_debug(), but also returns
the memory transaction attributes to use for the access.
This will be necessary for CPUs which have multiple address
spaces and use the attributes to select the correct address
space.

We provide a wrapper function cpu_get_phys_page_attrs_debug()
which falls back to the existing get_phys_page_debug(), so we
don't need to change every target CPU.
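
Sketch of the wrapper's assumed declaration:

    hwaddr cpu_get_phys_page_attrs_debug(CPUState *cpu, vaddr addr,
                                         MemTxAttrs *attrs);
    /* falls back to get_phys_page_debug() with unspecified attributes */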

Backports commit 1dc6fb1f5cc5cea5ba01010a19c6acefd0ae4b73 from qemu
2018-02-17 22:43:42 -05:00
Peter Maydell 51369b67cd
exec.c: Allow target CPUs to define multiple AddressSpaces
Allow multiple calls to cpu_address_space_init(); each
call adds an entry to the cpu->ases array at the specified
index. It is up to the target-specific CPU code to actually use
these extra address spaces.

Since this multiple AddressSpace support won't work with
KVM, add an assertion to avoid confusing failures.
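
A hedged sketch of registering an extra address space (argument order assumed
for this era of the API; the names are illustrative):

    /* asidx 1 is an additional, target-defined address space */
    cpu_address_space_init(cs, &secure_as, 1);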

Backports commit 12ebc9a76dd7702aef0a3618717a826c19c34ef4 from qemu
2018-02-17 22:35:13 -05:00
Peter Maydell f1b237236c
exec.c: Don't set cpu->as until cpu_address_space_init
Rather than setting cpu->as unconditionally in cpu_exec_init
(and then having target-i386 override this later), don't set
it until the first call to cpu_address_space_init.

This requires us to initialise the address space for
both TCG and KVM (KVM doesn't need the AS listener but
it does require cpu->as to be set).

For target CPUs which don't set up any address spaces (currently
everything except i386), add the default address_space_memory
in qemu_init_vcpu().

Backports commit 56943e8cc14b7eeeab67d1942fa5d8bcafe3e53f from qemu
2018-02-17 22:24:36 -05:00
Daniel P. Berrange 3ba8959dfd
qom: Introduce ObjectPropertyIterator struct for iteration
Some users of QOM need to be able to iterate over properties
defined against an object instance. Currently they are just
directly using the QTAIL macros against the object properties
data structure.

This is bad because it exposes them to changes in the data
structure used to store properties, as well as changes in
functionality such as ability to register properties against
the class.

This provides an ObjectPropertyIterator struct which will
insulate the callers from the particular data structure
used to store properties. It can be used thus

    ObjectProperty *prop;
    ObjectPropertyIterator *iter;

    iter = object_property_iter_init(obj);
    while ((prop = object_property_iter_next(iter))) {
        ... do something with prop ...
    }
    object_property_iter_free(iter);

Backports commit a00c94824126901168bca5b89147f9e334a49e87 from qemu
2018-02-17 18:39:00 -05:00
Cao jin 73c55b2eb3
qom/object: fix 2 comment typos
Also change the misleading definition of macro OBJECT_CLASS_CHECK

Backports commit b30d80546421c6ea919096b596887f496c80af0a from qemu
2018-02-17 15:38:14 -05:00
Richard Henderson 3ec0adcc07
target-*: Introduce and use cpu_breakpoint_test
Reduce the boilerplate required for each target. At the same time,
move the test for breakpoint after calling tcg_gen_insn_start.

Note that arm and aarch64 do not use cpu_breakpoint_test, but still
move the inline test down after tcg_gen_insn_start.
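
Sketch of the helper and a typical translator-loop use (illustrative; the
exception helper is hypothetical):

    bool cpu_breakpoint_test(CPUState *cpu, vaddr pc, int mask);

    /* in a target's translate loop, after tcg_gen_insn_start(...): */
    if (unlikely(cpu_breakpoint_test(cs, dc->pc, BP_ANY))) {
        gen_exception_debug(dc);   /* hypothetical target helper */
        break;
    }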

Backports commit b933066ae03d924a92b2616b4a24e7d91cd5b841 from qemu
2018-02-17 15:24:10 -05:00
Lioncash da8e73b887
qom/cpu: Add throttle_thread_scheduled member
Extracts the member out of commit 2adcc85d407c1ab985f5abed808c78dbb84f4773
2018-02-17 15:23:55 -05:00
Lioncash 5e2862b29d
cpu: Add crash_occurred flag into CPUState
The CPUState::crash_occurred field marks that a guest
crash occurred. This value is added into the cpu common
migration subsection.

Backports commit bac05aa9a77af1ca7972c8dc07560f4daa7c2dfc from qemu
2018-02-17 15:23:51 -05:00
Peter Crosthwaite a249923d4d
qom: Add recursive version of object_child_for_each
Useful for iterating through an entire QOM subtree.
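
Sketch of the assumed declaration with a hypothetical callback:

    static int count_child(Object *child, void *opaque)
    {
        (*(int *)opaque)++;
        return 0;   /* a non-zero return stops the traversal */
    }

    int n = 0;
    object_child_foreach_recursive(obj, count_child, &n);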

Backports commit d714b8de7747f20fe42e5716d1d44f91e2b891f4 from qemu
2018-02-17 15:23:35 -05:00
Peter Crosthwaite 6279dfc113
cpu: Add wrapper for the set_pc() hook
Add a wrapper around the CPUClass::set_pc() hook.
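
The wrapper, sketched in the usual pattern for CPUClass hook wrappers
(assumed to match this commit):

    static inline void cpu_set_pc(CPUState *cpu, vaddr addr)
    {
        CPUClass *cc = CPU_GET_CLASS(cpu);

        cc->set_pc(cpu, addr);
    }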

Backports commit 2991b8904730d663f12ad42e35798ecc22fe151c from qemu
2018-02-17 15:23:19 -05:00
Paolo Bonzini a46accd252
exec: make iotlb RCU-friendly
After the previous patch, TLBs will be flushed on every change to
the memory mapping. This patch augments that with synchronization
of the MemoryRegionSections referred to in the iotlb array.

With this change, it is guaranteed that iotlb_to_region will access
the correct memory map, even once the TLB will be accessed outside
the BQL.

Backports commit 9d82b5a792236db31a75b9db5c93af69ac07c7c5 from qemu
2018-02-12 15:20:39 -05:00
xorstream fac6a66860 platform.h move #3 2017-01-21 00:13:21 +11:00