The 'name' parameter to memory_region_init_* had been marked as debug
only; however, vmstate_region_ram uses it as a parameter to
qemu_ram_set_idstr to set RAMBlock names, and these form part of the
migration stream.
Backports commit e8f5fe2de125a0bfbefbaa6a69af81f4817cb7a0 from qemu
The power state spec section 5.1.5 AFFINITY_INFO defines the
affinity info return values as
0 ON
1 OFF
2 ON_PENDING
I grepped QEMU for power_state to ensure that no assumptions
of OFF=0 were being made.
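Restating those values as a C enum, purely for illustration (the identifier
names below are made up, not QEMU's):

    typedef enum {
        PSCI_AFFINITY_INFO_ON         = 0,
        PSCI_AFFINITY_INFO_OFF        = 1,
        PSCI_AFFINITY_INFO_ON_PENDING = 2,
    } PsciAffinityInfo;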
Backports commit d5affb0d8677e1a8a8fe03fa25005b669e7cdc02 from qemu
In ARMv8, this register implements more than a single bit, with
fine-grained enables for read access to the event counters, the cycle
counter, and write access to the software increment. This change
implements those checks using custom access functions for the relevant
registers.
Backports commit 6ecd0b6ba0591ef280ed984103924d4bdca5ac32 from qemu
glibc blacklists TSX on Haswell CPUs with model==60 and
stepping < 4. To make the Haswell CPU model more useful, make
those guests actually use TSX by changing CPU stepping to 4.
References:
* glibc commit 2702856bf45c82cf8e69f2064f5aa15c0ceb6359
https://sourceware.org/git/?p=glibc.git;a=commit;h=2702856bf45c82cf8e69f2064f5aa15c0ceb6359
Backports commit ec56a4a7b07e2943f49da273a31e3195083b1f2e from qemu
Helper function for code that needs to check the host CPU
vendor/family/model/stepping values.
Backports commit 20271d484069f154fb262507e63adc3a37e885d2 from qemu
..just like the rest of the displayed ESR register. Otherwise people
might scratch their heads if a not obviously hex number is displayed
for the EC field.
Backports commit 6568da459b611845ef55526cd23afc9fa9f4647f from qemu
Paths through the softmmu code during code generation now need to be audited
to check for double locking of tb_lock. In particular, VMEXIT can take tb_lock
through cpu_vmexit -> cpu_x86_update_cr4 -> tlb_flush.
To avoid this, split VMEXIT delivery in two parts, similar to what is done with
exceptions. cpu_vmexit only records the VMEXIT exit code and information, and
cc->do_interrupt can then deliver it when it is safe to take the lock.
Backports commit 10cde894b63146139f981857e4eedf756fa53dcb from qemu
The translation code uses cpu_ld*_code which can trigger a tlb_fill
which, if it fails, will erroneously attempt a fault resolution. This
never works during translation as the TB being generated hasn't been
added yet. The target should have checked retaddr before calling
cpu_restore_state, but for those that have yet to be fixed we do it
here to avoid a recursive tb_lock() under MTTCG's new locking regime.
Backports commit d8b2239bcd8872a5c5f7534d1658fc2365caab2d from qemu
This suppresses the incorrect warning when forcing MTTCG for x86
guests on x86 hosts. A future patch will still warn when
TARGET_SUPPORT_MTTCG hasn't been defined for the guest (which is still
pending for x86).
Backports commit 72c1701f62e8d44eb24a0583a958edc280105455 from qemu
Fix the design flaw demonstrated in the previous commit: new method
check_list() lets input visitors report that unvisited input remains
for a list, exactly like check_struct() lets them report that
unvisited input remains for a struct or union.
Implement the method for the qobject input visitor (straightforward),
and the string input visitor (less so, due to the magic list syntax
there). The opts visitor's list magic is even more impenetrable, and
all I can do there today is a stub with a FIXME comment. No worse
than before.
Backports commit a4a1c70dc759e5b81627e96564f344ab43ea86eb from qemu
The split between tests/test-qobject-input-visitor.c and
tests/test-qobject-input-strict.c now makes less sense than ever. The
next commit will take care of that.
Backports commit 048abb7b20c9f822ad9d4b730bade73b3311a47a from qemu
Commit 240f64b made all qobject input visitors created outside tests
strict, except for the one in object_property_set_qobject(). That one
was left behind only because Eric couldn't spare the time to figure
out whether making it strict would break anything, with a TODO
comment. Time to resolve it.
Strict makes a difference only for otherwise successful visits of QAPI
structs or unions. Let's examine what the callers of
object_property_set_qobject() visit:
* object_property_set_str(), object_property_set_bool(),
object_property_set_int() visit a QString, QBool, QInt,
respectively. Strictness can't matter.
* qmp_qom_set visits its @value argument. Comes straight from QMP and
can be anything ('any' in the QAPI schema). Strictness matters when
the property's set() method visits a struct or union QAPI type.
No such methods exist, thus switching to strict can't break
anything.
If we acquire such methods in the future, we'll *want* the visitor
to be strict, so that unexpected members get rejected as they should
be.
Switch to strict.
Backports commit 05601ed2de60df0e344d6b783a6bc0c1ff2b5d1f from qemu
The string input visitor tries to cope with null input. Null input
isn't used anywhere, and isn't covered by tests. Unsurprisingly, it
doesn't fully work: start_list() crashes because it passes the input
via parse_str() to strtoll() unchecked.
Make string_input_visitor_new() assert its argument isn't null, and
drop the code trying to deal with null input.
The opts visitor crashes when you try to actually visit something with
null input. Make opts_visitor_new() assert its argument isn't null,
mostly for clarity.
qobject_input_visitor_new() already asserts its argument isn't null.
Backports commit f332e830e38b3ff3953ef02ac04e409ae53769c5 from qemu
visit_optional() is to be called only between visit_start_struct() and
visit_end_struct(). Visitors that don't support struct visits,
i.e. don't implement start_struct(), end_struct(), have no use for it.
Clarify documentation.
The string input visitor doesn't support struct visits. Its
parse_optional() is therefore useless. Drop it.
Backports commit a8aec6de2ac1a5e36989fdfba29067b361009b75 from qemu
Error messages refer to nodes of the QObject being visited by name.
Trouble is the names are sometimes less than helpful:
* The name of the root QObject is whatever @name argument got passed
to the visitor, except NULL gets mapped to "null". We commonly pass
NULL. Not good.
Avoiding errors "at the root" mitigates. For instance,
visit_start_struct() can only fail when the visited object is not a
dictionary, and we commonly ensure it is beforehand.
* The name of a QDict's member is the member key. Good enough only
when this happens to be unique.
* The name of a QList's member is "null". Not good.
Improve error messages by referring to nodes by path instead, as
follows:
* The path of the root QObject is whatever @name argument got passed
to the visitor, except NULL gets mapped to "<anonymous>".
* The path of a root QDict's member is the member key.
* The path of a root QList's member is "[%u]", where %u is the list
index, starting at zero.
* The path of a non-root QDict's member is the path of the QDict
concatenated with "." and the member key.
* The path of a non-root QList's member is the path of the QList
concatenated with "[%u]", where %u is the list index.
For example, the incorrect QMP command
{ "execute": "blockdev-add", "arguments": { "node-name": "foo", "driver": "raw", "file": {"driver": "file" } } }
now fails with
{"error": {"class": "GenericError", "desc": "Parameter 'file.filename' is missing"}}
instead of
{"error": {"class": "GenericError", "desc": "Parameter 'filename' is missing"}}
and
{ "execute": "input-send-event", "arguments": { "device": "bar", "events": [ [] ] } }
now fails with
{"error": {"class": "GenericError", "desc": "Invalid parameter type for 'events[0]', expected: object"}}
instead of
{"error": {"class": "GenericError", "desc": "Invalid parameter type for 'null', expected: QDict"}}
Aside: calling the thing "parameter" is suboptimal for QMP, because
the root object is "arguments" there.
The qobject output visitor doesn't have this problem because it should
not fail. Same for dealloc and clone visitors.
The string visitors don't have this problem because they visit just
one value, whose name needs to be passed to the visitor as @name. The
string output visitor shouldn't fail anyway.
The options visitor uses QemuOpts names. Their name space is flat, so
the use of QDict member keys as names is fine. NULL names used with
roots and lists could conceivably result in bad error messages. Left
for another day.
Backports commit a9fc37f6bc3f2ab90585cb16493da9f6dcfbfbcf from qemu
qobject_input_start_struct() sets *list, except when it fails because
qobject_input_get_object() fails, i.e. the input object doesn't exist.
All the other input visitor start_struct(), start_list(),
start_alternate() always set *obj / *list.
Change qobject_input_start_struct() to match.
Backports commit 58561c27669ddf1c6d39ff8ce25837c6f2d9d92c from qemu
The QObject input visitor has three error message formats:
* "Parameter '%s' is missing"
* "Invalid parameter type for '%s', expected: %s"
* "QMP input object member '%s' is unexpected"
The '%s' are member names (or "null", but I'll fix that later).
The last error message calls the thing "QMP input object member"
instead of "parameter". Misleading when the visitor is used on
QObjects that don't come from QMP. Change it to "Parameter '%s' is
unexpected".
Backports commit 910f738b851a263396fc85b2052e47f884ffead3 from qemu
The QERR_ macros are leftovers from the days of "rich" error objects.
QERR_QMP_BAD_INPUT_OBJECT, QERR_QMP_BAD_INPUT_OBJECT_MEMBER,
QERR_QMP_EXTRA_MEMBER are used in just one place now, except for one
use that has crept into qobject-input-visitor.c.
Drop these macros, to make the (bad) error messages more visible.
Backports commit 99fb0c53c038105bae68b02a3d9f1cbf7951ba10 from qemu
At the moment ram device's memory regions are DEVICE_NATIVE_ENDIAN. It's
incorrect. This memory region is backed by a MMIO area in host, so the
uint64_t data that MemoryRegionOps read from/write to this area should be
host-endian rather than target-endian. Hence, current code does not work
when target and host endianness are different which is the most common case
on PPC64. To fix it, this introduces DEVICE_HOST_ENDIAN for the ram device.
This has been tested on PPC64 BE/LE host/guest in all possible combinations
including TCG.
Backports commit c99a29e702528698c0ce2590f06ca7ff239f7c39 from qemu
The cpu->exit_request check in cpu_loop_exec_tb is unnecessary,
because cpu->tcg_exit_req is always set after cpu->exit_request.
So let the TB exit and we will pick up the exit request later
in cpu_handle_interrupt.
Backports commit 55ac0a9bf4e1b1adfc7d73586a7aa085f58c9851 from qemu
CPU runnability checks and CPU model expansion have slightly
different requirements. Document the steps involved in loading a
CPU model and realizing a CPU, so their requirements and purpose
are clearly defined.
This patch doesn't change any implementation. It just adds
comments, renames the x86_cpu_load_features() function for clarity
(so it won't be confused with x86_cpu_load_def()), and moves
x86_cpu_filter_features() closer to it.
Backports commit b8d834a00fa3ed4dad7d371e1a00938a126a54a0 from qemu
To fix the following warnings:
In file included from /users/pranith/qemu/tcg/tcg.c:255:
/users/pranith/qemu/tcg/aarch64/tcg-target.inc.c:879:24: warning: implicit conversion from enumeration type 'TCGMemOp' (aka 'enum TCGMemOp') to different enumeration type 'TCGType' (aka 'enum TCGType')
[-Wenum-conversion]
tcg_out_cmp(s, ext, a, b, b_const);
~~~~~~~~~~~ ^~~
/users/pranith/qemu/tcg/aarch64/tcg-target.inc.c:893:36: warning: implicit conversion from enumeration type 'TCGMemOp' (aka 'enum TCGMemOp') to different enumeration type 'TCGType' (aka 'enum TCGType')
[-Wenum-conversion]
tcg_out_insn(s, 3201, CBZ, ext, a, offset);
~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~
/users/pranith/qemu/tcg/aarch64/tcg-target.inc.c:389:65: note: expanded from macro 'tcg_out_insn'
glue(tcg_out_insn_,FMT)(S, glue(glue(glue(I,FMT),_),OP), ## __VA_ARGS__)
^
/users/pranith/qemu/tcg/aarch64/tcg-target.inc.c:895:37: warning: implicit conversion from enumeration type 'TCGMemOp' (aka 'enum TCGMemOp') to different enumeration type 'TCGType' (aka 'enum TCGType')
[-Wenum-conversion]
tcg_out_insn(s, 3201, CBNZ, ext, a, offset);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~
/users/pranith/qemu/tcg/aarch64/tcg-target.inc.c:389:65: note: expanded from macro 'tcg_out_insn'
glue(tcg_out_insn_,FMT)(S, glue(glue(glue(I,FMT),_),OP), ## __VA_ARGS__)
^
/users/pranith/qemu/tcg/aarch64/tcg-target.inc.c:1610:27: warning: implicit conversion from enumeration type 'TCGType' (aka 'enum TCGType') to different enumeration type 'TCGMemOp' (aka 'enum TCGMemOp')
[-Wenum-conversion]
tcg_out_brcond(s, ext, a2, a0, a1, const_args[1], arg_label(args[3]));
~~~~~~~~~~~~~~ ^~~
Backports commit dc1eccd661ada3b746ca4438e444993c36a0f04f from qemu
In float64_to_uint64_round_to_zero() a typo meant that we were
taking the uint64_t return value from float64_to_uint64() and
putting it into an int64_t variable before returning it as
uint64_t again. Use uint64_t instead of pointlessly casting it
back and forth to int64_t.
Backports commit d000b477f2693dbca97cd8ea751c2e0b71890662 from qemu
In get_page_addr_code(), if the guest PC doesn't correspond to RAM
then we currently run the CPU's do_unassigned_access() hook if it has
one, and otherwise we give up and exit QEMU with a more-or-less
useful message. This code assumes that the do_unassigned_access hook
will never return, because if it does then we'll plough on attempting
to use a non-RAM TLB entry to get a RAM address and will abort() in
qemu_ram_addr_from_host_nofail(). Unfortunately some CPU
implementations of this hook do return: Microblaze, SPARC and the ARM
v7M.
Change the code to call report_bad_exec() if the hook returns, as
well as if it didn't have one. This means we can tidy it up to use
the cpu_unassigned_access() function which wraps the "get the CPU
class and call the hook if it has one" work, since we aren't trying
to distinguish "no hook" from "hook existed and returned" any more.
This brings the handling of this hook into line with the handling
used for data accesses, where "hook returned" is treated the
same as "no hook existed" and gets you the default behaviour.
Backports commit 44d7ce0ef39cb45e13d384574d79799eb3d39834 from qemu
The aarch64 crypto instructions for AES and SHA are missing the
check for whether the FPU is enabled.
Backports commit a4f5c5b72380deeccd53a6890ea3782f10ca8054 from qemu
This enables the multi-threaded system emulation by default for ARMv7
and ARMv8 guests using the x86_64 TCG backend. This is because on the
guest side:
- The ARM translate.c/translate-a64.c have been converted to
- use MTTCG safe atomic primitives
- emit the appropriate barrier ops
- The ARM machine has been updated to
- hold the BQL when modifying shared cross-vCPU state
- defer powerctl changes to async safe work
All the host backends support the barrier and atomic primitives but
need to provide same-or-better support for normal load/store
operations.
Backports commit ca759f9e387db87e1719911f019bc60c74be9ed8 from qemu
The WFE and YIELD instructions are really only hints and in TCG's case
they were useful to move the scheduling on from one vCPU to the next. In
the parallel context (MTTCG) this just causes an unnecessary cpu_exit
and contention of the BQL.
Backports commit c22edfebff29f63d793032e4fbd42a035bb73e27 from qemu
This moves the helper function closer to where it is called and updates
the error message to report via error_report instead of the deprecated
fprintf.
Backports commit 857baec1d9e80947f0c1007c3a3d2331d62b4b53 from qemu
While the varargs approach was flexible, the original MTTCG ended up
having to munge the bits into a bitmap so the data could be used in
deferred work helpers. Instead of hiding that in cputlb we push the
change to the API to make it take a bitmap of MMU indexes instead.
For ARM some of the resulting flushes end up being quite long, so to aid
readability I've tended to move the index shifting to a new line so
all the bits being or-ed together line up nicely, for example:
    tlb_flush_page_by_mmuidx(other_cs, pageaddr,
                             (1 << ARMMMUIdx_S1SE1) |
                             (1 << ARMMMUIdx_S1SE0));
Backports commit 0336cbf8532935d8e23c2aabf3e2ce2c0697b6ac from qemu
The patch enables handling atomic code in the guest. This should be
preferably done in cpu_handle_exception(), but the current assumptions
regarding when we can execute atomic sections cause a deadlock.
The current mechanism discards the flags which were set in atomic
execution. We ensure they are properly saved by calling the
cc->cpu_exec_enter/leave() functions around the loop.
As we are running cpu_exec_step_atomic() from the outermost loop we
need to avoid an abort() when single stepping over atomic code, since
the debug exception longjmp will point to the sigsetjmp in
cpu_exec(). We do this by setting a new jmp_env so that it jumps back
here on an exception.
Backports relevant parts of commit 08e73c48b053566bfe0c994f154f73991cd0ff0e from qemu
There are a couple of changes that occur at the same time here:
- introduce a single vCPU qemu_tcg_cpu_thread_fn
One of these is spawned per vCPU with its own Thread and Condition
variables. qemu_tcg_rr_cpu_thread_fn is the new name for the old
single threaded function.
- the TLS current_cpu variable is now live for the lifetime of MTTCG
vCPU threads. This is for future work where async jobs need to know
the vCPU context they are operating in.
The user can now switch on multi-threaded behaviour and spawn a thread
per vCPU. A simple kvm-unit-test like

    ./arm/run ./arm/locking-test.flat -smp 4 -accel tcg,thread=multi

will now use 4 vCPU threads and have an expected FAIL (instead of the
unexpected PASS) as the default mode of the test has no protection when
incrementing a shared variable.
We enable the parallel_cpus flag to ensure we generate correct barrier
and atomic code if supported by the front and backends. This doesn't
automatically enable MTTCG until default_mttcg_enabled() is updated to
check the configuration is supported.
Backports relevant parts of commit 372579427a5040a26dfee78464b50e2bdf27ef26 from qemu
There are now only two uses of the global exit_request left.
The first ensures we exit the run_loop when we first start to process
pending work and in the kick handler. This is just as easily done by
setting the first_cpu->exit_request flag.
The second use is in the round robin kick routine. The global
exit_request ensured every vCPU would set its local exit_request and
cause a full exit of the loop. Now that the iothread isn't being held
while running, we can just rely on the kick handler to push us out as
intended.
We lightly re-factor the main vCPU thread to ensure cpu->exit_request
causes us to exit the main loop and process any IO requests that might
come along. As a cpu->exit_request may legitimately get squashed
while processing the EXCP_INTERRUPT exception we also check
cpu->queued_work_first to ensure queued work is expedited as soon as
possible.
Backports commit e5143e30fb87fbf179029387f83f98a5a9b27f19 from qemu
..and make the definition local to cpus. In preparation for MTTCG the
concept of a global tcg_current_cpu will no longer make sense. However
we still need to keep track of it in the single-threaded case to be able
to exit quickly when required.
qemu_cpu_kick_no_halt() moves and becomes qemu_cpu_kick_rr_cpu() to
emphasise its use-case. qemu_cpu_kick now kicks the relevant cpu as
well as qemu_kick_rr_cpu() which will become a no-op in MTTCG.
For the time being the setting of the global exit_request remains.
Backports commit 791158d93b27f22a17c2ada06621831d54f09a2c from qemu
Also atomically sets the unicorn equivalents
We know there will be cases where MTTCG won't work until additional work
is done in the front/back ends to support it. It will, however, be useful
to be able to turn it on.
As a result MTTCG will default to off unless the combination is
supported. However the user can turn it on for the sake of testing.
Backports commit 8d4e9146b3568022ea5730d92841345d41275d66 from qemu
We'll be using the memory ordering definitions to define values for
both the host and guest. To avoid fighting with circular header
dependencies just move these types into their own minimal header.
Backports commit 20937143145b8f5a4194e5c407731ba38797864e from qemu
Enable tcg lock debug asserts in a debug build by default instead of
relying on DEBUG_LOCKING. None of the other DEBUG_* macros have
asserts, so this patch removes DEBUG_LOCKING and enables these asserts
in a debug build.
Backports commit 6ac3d7e845549f08473f020c1c70f14b8911a67e from qemu
This makes qemu_strtosz(), qemu_strtosz_mebi() and
qemu_strtosz_metric() similar to qemu_strtoi64(), except negative
values are rejected.
Backports commit f17fd4fdf0df3d2f3444399d04c38d22b9a3e1b7 from qemu
Change the qemu_strtosz() & friends to return -EINVAL when @endptr is
null and the conversion doesn't consume the string completely.
Matches how qemu_strtol() & friends work.
Only test_qemu_strtosz_simple() passes a null @endptr. No functional
change there, because its conversion consumes the string.
Simplify callers that use @endptr only to fail when it doesn't point
to '\0' to pass a null @endptr instead.
Backports commit 4fcdf65ae2c00ae69f7625f26ed41f37d77b403c from qemu
Writing QEMU_STRTOSZ_DEFSUFFIX_* instead of '*' gains nothing. Get
rid of these eyesores.
Backports commit 17f942560e54f8ee72996bc3276c697503606d7b from qemu
Most callers of qemu_strtosz_suffix() pass QEMU_STRTOSZ_DEFSUFFIX_B.
Capture the pattern in new qemu_strtosz().
Inline qemu_strtosz_suffix() into its only remaining caller.
Backports commit 466dea14e677555dd24465aca75d00a3537ad062 from qemu
With qemu_strtosz(), no suffix means mebibytes. It's used rarely.
I'm going to add a similar function where no suffix means bytes.
Rename qemu_strtosz() to qemu_strtosz_MiB() to make the name
qemu_strtosz() available for the new function.
Backports commit e591591b323772eea733de6027f5e8b50692d0ff from qemu
To parse numbers with metric suffixes, we use

    qemu_strtosz_suffix_unit(nptr, &eptr, QEMU_STRTOSZ_DEFSUFFIX_B, 1000)

Capture this in a new function for legibility:

    qemu_strtosz_metric(nptr, &eptr)

Replace test_qemu_strtosz_suffix_unit() by test_qemu_strtosz_metric().
Rename qemu_strtosz_suffix_unit() to do_strtosz() and give it internal
linkage.
Backports commit d2734d2629266006b0413433778474d5801c60be from qemu
Reorder check_strtox_error() to make it obvious that we always store
through a non-null @endptr.
Transform

    if (some error) {
        error case ...
        err = value for error case;
    } else {
        normal case ...
        err = value for normal case;
    }
    return err;

to

    if (some error) {
        error case ...
        return value for error case;
    }
    normal case ...
    return value for normal case;
Backports commit 4baef2679e029c76707be1e2ed54bf3dd21693fe from qemu
Name same things the same, different things differently.
* qemu_strtol()'s parameter @nptr is called @p in
check_strtox_error(). Rename the latter.
* qemu_strtol()'s parameter @endptr is called @next in
check_strtox_error(). Rename the latter.
* qemu_strtol()'s variable @p is called @endptr in
check_strtox_error(). Rename both to @ep.
* qemu_strtol()'s variable @err is *negative* errno,
check_strtox_error()'s parameter @err is *positive*. Rename the
latter to @libc_errno.
Same for qemu_strtoul(), qemu_strtoi64(), qemu_strtou64(), of course.
Backports commit 717adf960933da0650d995f050d457063d591914 from qemu
The name qemu_strtoll() suggests conversion to long long, but it
actually converts to int64_t. Rename to qemu_strtoi64().
The name qemu_strtoull() suggests conversion to unsigned long long,
but it actually converts to uint64_t. Rename to qemu_strtou64().
Backports commit b30d188677456b17c1cd68969e08ddc634cef644 from qemu
Fixes the following documentation bugs:
* Fails to document that null @nptr is safe.
* Fails to document that we return -EINVAL when no conversion could be
performed (commit 47d4be1).
* Confuses long long with int64_t, and unsigned long long with
uint64_t.
* Claims the unsigned conversions can underflow. They can't.
While there, mark problematic assumptions that int64_t is long long,
and uint64_t is unsigned long long with FIXME comments.
Backports commit 4295f879becfbbb9f4330489311586b96915d920 from qemu
Commit 89cad9f changed qdict_get_qdict() to return NULL instead of
crash when the key doesn't exist or its value isn't a QDict.
Commit 2d6421a neglected to do the same for qdict_get_qlist().
Correct that, and update the function comments.
qdict_get_obj() is now unused, remove.
Backports commit b25f23e7dbc6bc0dcda010222a4f178669d1aedc from qemu
float128_to_uint32_round_to_zero() is needed by xscvqpuwz instruction
of PowerPC ISA 3.0.
Backports commit fd425037d25cecaaffdb3831697e0adc10ca2ba3 from qemu
Implement float128_to_uint64() and use that to implement
float128_to_uint64_round_to_zero()
This is required by xscvqpudz instruction of PowerPC ISA 3.0.
Backports commit 2e6d85683576c970c714c1cc071dca742835b9d4 from qemu
Power ISA 3.0 introduces a few quadruple precision floating point
instructions that support round-to-odd rounding mode. The
round-to-odd mode is explained as follows:
Let Z be the intermediate arithmetic result or the operand of a convert
operation. If Z can be represented exactly in the target format, the
result is Z. Otherwise the result is either Z1 or Z2 whichever is odd.
Here Z1 and Z2 are the next larger and smaller numbers representable
in the target format respectively.
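A hedged sketch of round-to-odd on an integer significand, illustrating the
rule above (this is not the softfloat implementation): truncation yields the
neighbour nearer zero, and OR-ing in 1 whenever bits were discarded selects
whichever of Z1/Z2 is odd.

    #include <stdint.h>

    /* assumes 0 < discarded_bits < 64 */
    static uint64_t round_to_odd(uint64_t significand, unsigned discarded_bits)
    {
        uint64_t discarded_mask = (1ULL << discarded_bits) - 1;
        int inexact = (significand & discarded_mask) != 0;
        uint64_t result = significand >> discarded_bits;   /* truncate */

        return inexact ? (result | 1) : result;            /* force the result odd */
    }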
Backports commit 9ee6f678f473007e252934d6acd09c24490d9d42 from qemu
Provide a new cpu_supports_isa function which allows callers to
determine whether a CPU supports one of the ISA_ flags, by testing
whether the associated struct mips_def_t sets the ISA flags in its
insn_flags field.
An example use of this is to allow boards which generate bootloader code
to determine the properties of the CPU that will be used, for example
whether the CPU is 64 bit or which architecture revision it implements.
Backports commit bed9e5ceb158c886d548fe59675a6eba18baeaeb from qemu
Clear cache->mr so that address_space_cache_destroy does nothing
the second time it is called.
Backports commit 91047df38dffa80222179f63fbb74c1dfefa25ed from qemu
Reorganize the sigsetjmp so that the restart case falls through
to cpu_handle_exception and the execution loop.
Backports commit 4515e58d60dc3aac53dbd5e53e4c3bec126967d8 from qemu
The sigsetjmp only needs to be prepared once for the whole execution
of cpu_exec. This patch takes care of the "== 0" side, using a
nested loop so that cpu_handle_interrupt goes straight back to
cpu_handle_exception without doing another sigsetjmp.
Backports commit a42cf3f3f266a97ceb13e8b99bc7b13f7bf4192a from qemu
The siglongjmp goes straight back to the beginning of cpu_exec's
outermost loop. We do not need a siglongjmp, we can simply
leave the inner TB execution loop.
Backports commit 209b71b60ef3341246038e1c926c3b704969cdd3 from qemu
This seems to have worked just fine so far on weakly-ordered
architectures, but I don't see anything that prevents the
reordering from:
store 1 to exit_request
store 1 to tcg_exit_req
load tcg_exit_req
store 0 to tcg_exit_req
load exit_request
store 0 to exit_request
store 1 to exit_request
store 1 to tcg_exit_req
to this:
store 1 to exit_request
store 1 to tcg_exit_req
load tcg_exit_req
load exit_request
store 1 to exit_request
store 1 to tcg_exit_req
store 0 to tcg_exit_req
store 0 to exit_request
therefore losing a request. It's possible that other memory barriers
(e.g. in rcu_read_unlock) are hiding it, but better safe than
sorry.
Backports commit a70fe14b7dddcb944fbd6c9f3739cd3a22089af5 from qemu
This patch contains several fixes to enable vPMU under TCG mode. It
first removes the checking of kvm_enabled() while unsetting
ARM_FEATURE_PMU. With it, the .pmu option can be used to turn on/off vPMU
under TCG mode. Secondly the PMU node of DT table is now created under TCG.
The last fix is to disable the masking of the PMUver field of ID_AA64DFR0_EL1.
Backports commit d6f02ce3b8a43ddd8f83553fe754a34b26fb273f from qemu
In order to support Linux perf, which uses PMXEVTYPER register,
this patch adds read/write access support for PMXEVTYPER. The access
is CONSTRAINED UNPREDICTABLE when PMSELR is not 0x1f. Additionally
this patch adds support for PMXEVTYPER_EL0.
Backports commit fdb8665672ded05f650d18f8b62d5c8524b4385b from qemu
This patch adds support for AArch64 register PMSELR_EL0. The existing
PMSELR definition is revised accordingly.
Backports commit 6b0407805d46bbeba70f4be426285d0a0e669750 from qemu
Add support for generating the ISS (Instruction Specific Syndrome)
for Data Abort exceptions taken from AArch32. These syndromes are
used by hypervisors for example to trap and emulate memory accesses.
This is the equivalent for AArch32 guests of the work done for AArch64
guests in commit aaa1f954d4cab243.
Backports commit 9bb6558a218bf7e466e5ac1100639517d8a30d33 from qemu
In the ARM ldr/str decode path, rather than directly testing
"insn & (1 << 21)" and "insn & (1 << 24)", abstract these
bits out into wbit and pbit local flags. (We will want to
do more tests against them to determine whether we need to
provide syndrome information.)
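A hedged sketch of those two flags (decoder context omitted; only the bit
positions come from the text above):

    #include <stdint.h>

    static void decode_p_w_bits(uint32_t insn, int *pbit, int *wbit)
    {
        *pbit = (insn >> 24) & 1;   /* was: insn & (1 << 24) */
        *wbit = (insn >> 21) & 1;   /* was: insn & (1 << 21) */
    }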
Backports commit 63f26fcfda8e19f94ce23336726d14805250a5b6 from qemu
In BE32 mode, sub-word size watchpoints can fail to trigger because the
address of the access is adjusted in the opcode helpers before being
compared with the watchpoint registers. This patch reverses the address
adjustment before performing the comparison with the help of a new CPUClass
hook.
This version of the patch augments and tidies up comments a little.
Backports commit 40612000599e52e792d23c998377a0fa429c4036 from qemu
Thumb-1 code has some issues in BE32 mode (as currently implemented). In
short, since bytes are swapped within words at load time for BE32
executables, this also swaps pairs of adjacent Thumb-1 instructions.
This patch un-swaps those pairs of instructions again, both for execution,
and for disassembly. (The previous version of the patch always read four
bytes in arm_read_memory_func and then extracted the proper two bytes,
in a probably misguided attempt to match the behaviour of actual hardware
as described by e.g. the ARM9TDMI TRM, section 3.3 "Endian effects for
instruction fetches". It's less complicated to just read the correct
two bytes though.)
Backports commit f7478a92dd9ee2276bfaa5b7317140d3f9d6a53b from qemu
Add a new "cfgend" property which selects whether the CPU resets into
big-endian mode or not. This setting affects whether we reset with
SCTLR_B (ARMv6 and earlier) or SCTLR_EE (ARMv7 and later) set.
Backports commit 3a062d5730266b2386eeda68b1a1c6e96451db31 from qemu
Currently float128_default_nan() returns 0xFFFF800000000000 in the
higher double word, but it should return 0x7FFF800000000000 which
is the correct higher double word for default qNAN on PowerPC.
Backports commit 5d51eaea84899d88cb161fab3f089168e3812e9e from qemu
The stub version of MISMATCH_CHECK is empty, so it's easy to misuse for
people not building kvm on arm. Use QEMU_BUILD_BUG_ON similar to the
non-stub version to make it easier to catch bugs.
Backports commit 705ae59fecae341a4b1a45ce48b46de4b1bb3cf4 from qemu
Macro calls without a trailing ; look weird in C; this only works as a side
effect of how QEMU_BUILD_BUG_ON is implemented. Fix this up.
Backports commit 1b28762a333bd238611103e9ed2348d7af93b0db from qemu
It's a familiar pattern: some code uses ARRAY_SIZE, then refactoring
changes the argument from an array to a pointer to a dynamically
allocated buffer. Code keeps compiling but any ARRAY_SIZE calls now
return the size of the pointer divided by element size.
Let's add build time checks to ARRAY_SIZE before we allow more
of these in the code-base.
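A hedged sketch of such a build-time check, in the Linux style the text
alludes to (QEMU's final form may differ); it relies on GCC's typeof and
__builtin_types_compatible_p plus the QEMU_BUILD_BUG_ON_ZERO() helper
described in the next entry, so passing a plain pointer no longer compiles:

    #define ARRAY_SIZE(x) ((sizeof(x) / sizeof((x)[0])) + \
                           QEMU_BUILD_BUG_ON_ZERO(__builtin_types_compatible_p( \
                               typeof(x), typeof(&(x)[0]))))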
Backports commit ed63ec0d22ccdce3b2222d9a514423b7fbba3a0d from qemu
QEMU_BUILD_BUG_ON uses a typedef in order to be safe
to use outside functions, but sometimes it's useful
to have a version that can be used within an expression.
Following what Linux does, introduce QEMU_BUILD_BUG_ON_ZERO,
which returns zero after checking the condition at build time.
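One possible form, hedged (the exact expression in QEMU may differ): the
struct is only well-formed when the condition is false, and the subtraction
evaluates to zero at compile time.

    #define QEMU_BUILD_BUG_ON_ZERO(x) \
        (sizeof(struct { int: (x) ? -1 : 1; }) - \
         sizeof(struct { int: (x) ? -1 : 1; }))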
Backports commit d757573e69f2ef58a4a7b41f6c55d65fa1e1c5c2 from qemu
There are theoretical concerns that some compilers might not trigger
build failures on attempts to define an array of size (x ? -1 : 1) where
x is a variable, and might make it a variable-sized array instead. Let's
rewrite using a struct with a negative bit-field size instead, as there
are no dynamic bit-field sizes. This is similar to what Linux does.
Backports commit f291887e8eef5d37d31484638f6e62401b4b99a2 from qemu
Some headers use QEMU_BUILD_BUG_ON. This causes a problem
if the C file including that header happens to have
QEMU_BUILD_BUG_ON at the same line number.
Fix using a widely available extension: __COUNTER__.
If unavailable, provide a stub.
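A hedged sketch combining this fix with the negative-width bit-field trick
from the previous entries (the helper macro names here are made up; QEMU's
actual macros may differ):

    #define BUILD_BUG_CAT_(a, b) a##b
    #define BUILD_BUG_CAT(a, b)  BUILD_BUG_CAT_(a, b)

    #ifdef __COUNTER__
    #define QEMU_BUILD_BUG_ON(x) \
        typedef struct { int: (x) ? -1 : 1; } \
            BUILD_BUG_CAT(qemu_build_bug_on__, __COUNTER__)
    #else
    /* stub: fall back to __LINE__, which can clash across headers */
    #define QEMU_BUILD_BUG_ON(x) \
        typedef struct { int: (x) ? -1 : 1; } \
            BUILD_BUG_CAT(qemu_build_bug_on__, __LINE__)
    #endif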
Backports commit 60abf0a5e05134187e274ce5f32524ccf0cae1a6 from qemu
ldl_p has a signed return type so assigning it to uint64_t implicitly
sign-extends the value. This results in devices with min_access_size = 8
seeing unexpected values passed to their write handlers.
Example: guest performs a 32-bit write of 0x80000000 to an mmio region
and the handler receives 0xFFFFFFFF80000000 in its value argument.
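A hedged, self-contained illustration of the pitfall (fake_ldl_p is a
stand-in with the same signed 32-bit return type as the helper in question,
not the real QEMU function):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static int32_t fake_ldl_p(const void *p)
    {
        int32_t v;
        memcpy(&v, p, sizeof(v));
        return v;
    }

    int main(void)
    {
        uint32_t raw = 0x80000000u;                        /* the guest's 32-bit write */
        uint64_t widened   = fake_ldl_p(&raw);             /* 0xffffffff80000000       */
        uint64_t truncated = (uint32_t)fake_ldl_p(&raw);   /* 0x0000000080000000       */

        printf("%#llx %#llx\n",
               (unsigned long long)widened, (unsigned long long)truncated);
        return 0;
    }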
Backports commit 6da67de6803e93cbb7e93ac3497865832f8c00ea from qemu
We only use the IS_M() macro in two places, and it's a bit of a
namespace grab to put in cpu.h. Drop it in favour of just explicitly
calling arm_feature() in the places where it was used.
Backports commit 531c60a97ab51618b4b9ccef1c5fe00607079706 from qemu
The 1st mmap returns *ptr*, which is aligned only to the host page size:

    |       size + align        |
    -----------------------------
    ptr

The input parameter *align* could be 1M, 2M, or the host page size; after
QEMU_ALIGN_UP, offset will be >= 0.
The 2nd mmap uses the flag MAP_FIXED, so it returns ptr+offset or else fails.
If it succeeds, we end up with something like:

    | offset |       size       |
    -----------------------------
    ptr      ptr1

*ptr1* is what we really want to return; it equals ptr+offset.
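A hedged, simplified sketch of that two-step technique (this is not the
actual qemu_ram_mmap() code; file-backed mappings and most error handling
are omitted, and align is assumed to be non-zero):

    #include <stddef.h>
    #include <stdint.h>
    #include <sys/mman.h>

    static void *mmap_aligned(size_t size, size_t align)
    {
        /* 1st mmap: reserve size + align bytes anywhere (page-aligned only). */
        void *ptr = mmap(NULL, size + align, PROT_NONE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (ptr == MAP_FAILED) {
            return NULL;
        }

        /* Distance from ptr up to the next 'align' boundary (>= 0). */
        size_t offset = ((uintptr_t)ptr + align - 1) / align * align - (uintptr_t)ptr;

        /* 2nd mmap: place the real mapping at ptr + offset with MAP_FIXED. */
        void *ptr1 = mmap((char *)ptr + offset, size, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
        if (ptr1 == MAP_FAILED) {
            munmap(ptr, size + align);
            return NULL;
        }

        /* Trim the unused head and tail of the original reservation. */
        if (offset > 0) {
            munmap(ptr, offset);
        }
        if (align > offset) {
            munmap((char *)ptr1 + size, align - offset);
        }
        return ptr1;   /* equals ptr + offset, aligned to 'align' */
    }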
Backports commit 6e4c890e15b23f078650499fbde11760b8eccf10 from qemu
When CPU vendor is set to AMD, the AMD feature alias bits on
CPUID[0x80000001].EDX are already automatically copied from CPUID[1].EDX
on x86_cpu_realizefn(). When CPU vendor is Intel, those bits are
reserved and should be zero. On either case, those bits shouldn't be set
in the CPU model table.
Commit 726a8ff68677d8d5fba17eb0ffb85076bfb598dc removed those
bits from most CPU models, but the Opteron_* entries still have
them. Remove the alias bits from Opteron_* too.
Add an assert() to x86_register_cpudef_type() to ensure we don't
make the same mistake again.
Backports commit 2a923a293df95334fa22634016efdd138f49da7f from qemu
AVX512_VPOPCNTDQ: Vector POPCNT instructions for dwords and qwords.
variable precision.
Backports commit f77543772dcd38fa438470d9b80bafbd3a3ebbd7 from qemu
Like the original MIPS, HPPA has the MSB of an SNaN set.
However, it has different rules for silencing an SNaN:
(1) msb is cleared and (2) msb-1 must be set if the fraction
is now zero, and (implementation defined) may be set always.
I haven't checked real hardware but chose the set always
alternative because it's easy and within spec.
Backports commit 005fa38d86257d471ac461c066a5409a9f5ebb02 from qemu
C11 allows errno to be clobbered by pretty much any library function
call, so in general callers need to take care to save errno before
calling other functions.
However, for error reporting functions this is rather awkward and can
make the code on the caller side more complicated than
necessary. error_setg_errno() already takes care of preserving errno
and some functions rely on that, so just promise that we continue to
do so in the future.
Backports commit 98cb89af4df7e1776ce418ed6167b6e214a64435 from qemu
Enable the ARM_FEATURE_EL2 bit on Cortex-A53 and
Cortex-A57, since this is all now sufficiently implemented
to work with the GICv3. We provide the usual CPU property
to disable it for backwards compatibility with the older
virt boards.
In this commit, we disable the EL2 feature on the
virt and ZynqMP boards, so there is no overall effect.
Another commit will expose a board-level property to
allow the user to enable EL2.
Backports commit c25bd18a04c8bd0f19556d719864b7b08528222d from qemu
The PSCI spec states that a CPU_ON call should cause the new
CPU to be started in the highest implemented Non-secure
exception level. We were incorrectly starting it at the
exception level of the caller, which happens to be correct
if EL2 is not implemented. Implement the correct logic
as described in the PSCI 1.0 spec section 6.4:
* if EL2 exists and SCR_EL3.HCE is set: start in EL2
* otherwise start in EL1
Backports commit 3f591a20221511c639cc7959755e570801a21cd2 from qemu
Split the ARM on/off functions from the PSCI support code.
This will allow these functions to be reused in other code.
Backports commit 825482adde1f971cbddf27e15fb4453ab3fae994 from qemu
The DBGVCR_EL2 system register is needed to run a 32-bit
EL1 guest under a Linux EL2 64-bit hypervisor. Its only
purpose is to provide AArch64 with access to the state of
the DBGVCR AArch32 register. Since we only have a dummy
DBGVCR, implement a corresponding dummy DBGVCR32_EL2.
Backports commit 4d2ec4da1c2d60c9fd8bad137506870c2f980410 from qemu
To run a VM in 32-bit EL1 our AArch32 interrupt handling code
needs to be able to cope with VIRQ and VFIQ exceptions.
These behave like IRQ and FIQ except that we don't need to try
to route them to Monitor mode.
Backports commit 87a4b270348c69a446ebcddc039bfae31b1675cb from qemu
We've currently got 18 architectures in QEMU, and thus 18 target-xxx
folders in the root folder of the QEMU source tree. More architectures
(e.g. RISC-V, AVR) are likely to be included soon, too, so the main
folder of the QEMU sources slowly gets quite overcrowded with the
target-xxx folders.
To disburden the main folder a little bit, let's move the target-xxx
folders into a dedicated target/ folder, so that target-xxx/ simply
becomes target/xxx/ instead.
Backports commit fcf5ef2ab52c621a4617ebbef36bf43b4003f4c0 from qemu
In OpenSPARC T1 and later, TWINX ASIs in store instructions are aliased
with Block Initializing Store ASIs.
"UltraSPARC T1 Supplement Draft D2.1, 14 May 2007" describes them
in the chapter "5.9 Block Initializing Store ASIs"
Integer stores of all sizes are allowed with these ASIs.
Backports commit 3390537b5df4014e24a30f9bdcfa05c2bd0cd6d8 from qemu
According to chapter 13.3 of the
UltraSPARC T1 Supplement to the UltraSPARC Architecture 2005,
only the sun4u format is available for data-access loads.
Store UA2005 entries in the sun4u format to simplify processing.
Backports commit 7285fba083de3f14f6e98abb4469173b56da9480 from qemu
Implement the behavior described in the chapter 13.9.11 of
UltraSPARC T1™ Supplement to the UltraSPARC Architecture 2005:
"If a TLB Data-In replacement is attempted with all TLB
entries locked and valid, the last TLB entry (entry 63) is
replaced."
Backports commit 4797a6851975c1239df440c5f01d8566e63717bb from qemu
Please note that QEMU doesn't implement Real->Physical address
translation. The "Real Address" is always the "Physical Address".
Backports commit 84f8f5876628963e67f66edde8a71208c4274ac8 from qemu
According to UA2005, 9.3.3 "Address Space Identifiers",
"In hyperprivileged mode, all instruction fetches and loads and stores with implicit
ASIs use a physical address, regardless of the value of TL".
Backports commit 9a10756d1204c3528e47892195349bf882069846 from qemu
As described in Chapter 5.7.6 of the UltraSPARC Architecture 2005,
outstanding disrupting exceptions that are destined for privileged mode can only
cause a trap when the virtual processor is in nonprivileged or privileged mode and
PSTATE.ie = 1. At all other times, they are held pending.
Backports commit 1a2aefae6627170fdee689b394a65f76080c068a from qemu
Use explicit register pointers while accessing D/I-MMU registers.
Call cpu_unassigned_access on access to missing registers.
Backports commit 20395e63375358bf6dd147057aaf998abf7abdb9 from qemu
While the IMMU/DMMU is disabled:
- ignore MMU faults in hypervisor mode or if the CPU doesn't have a hypervisor
- signal TT_INSN_REAL_TRANSLATION_MISS/TT_DATA_REAL_TRANSLATION_MISS otherwise
Backports commit 1ceca928538a3633b74a7dc718a05ce6767f2f76 from qemu
We have never had the concept of global TLB entries which would avoid
the flush, so we never actually use this flag. Drop it and make clear
that tlb_flush is the sledge-hammer it has always been.
Backports commit d10eb08f5d8389c814b554d01aa2882ac58221bf from qemu
Both the cpu->tb_jmp_cache and SoftMMU TLB structures are only used
when running TCG code so we might as well skip them for anything else.
Backports commit ba7d3d1858c257e39b47f7f12fa2016ffd960b11 from qemu
It is common amongst the various cpu reset functions to want to
flush the SoftMMU's TLB entries. This is done either by calling
tlb_flush directly or by way of a general memset of the CPU
structure (sometimes both).
This moves the tlb_flush call to the common reset function and
additionally ensures it is only done for the CONFIG_SOFTMMU case and
when tcg is enabled.
In some target cases we add an empty end_of_reset_fields structure to the
target vCPU structure so we have a clear end point for any memset which
is resetting values in the structure before CPU_COMMON (where the TLB
structures are).
While this is a nice clean-up in general it is also a precursor for
changes coming to cputlb for MTTCG where the clearing of entries
can't be done arbitrarily across vCPUs. Currently the cpu_reset
function is usually called from the context of another vCPU as the
architectural power up sequence is run. By using the cputlb API
functions we can ensure the right behaviour in the future.
Backports commit 1f5c00cfdb8114c1e3a13426588ceb64f82c9ddb from qemu
On 680x0 family only.
Address Register indirect With postincrement:
When using the stack pointer (A7) with byte size data, the register
is incremented by two.
Address Register indirect With predecrement:
When using the stack pointer (A7) with byte size data, the register
is decremented by two.
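A hedged sketch of the register-step rule described above (illustrative
only, not the m68k translator's code):

    /* Byte-sized (A7)+ / -(A7) accesses on 680x0 step the stack pointer by
     * two so it stays word-aligned; everything else steps by the operand size. */
    static int postinc_predec_step(int regno, int opsize_bytes, int is_680x0)
    {
        if (is_680x0 && regno == 7 /* A7 */ && opsize_bytes == 1) {
            return 2;
        }
        return opsize_bytes;
    }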
Backports commit 727d937b59f1f722f983e20f9cd23b0e7ef60165 from qemu
gen_flush_flags() unconditionally sets cc_op_synced to 1
and s->cc_op to CC_OP_FLAGS, whereas env->cc_op can be set
to something else by a previous tcg fragment.
We fix that by not setting cc_op_synced to 1
(except for gen_helper_flush_flags(), which does update env->cc_op).
FIX: https://github.com/vivier/qemu-m68k/issues/19
Backports commit 695576db2daaf2bdc63e7f6d36038b61caed622a from qemu
M680x0 bit operations with an immediate value use 9 bits of the 16-bit
value, while ColdFire ones use only 8 bits.
Backports commit fe53c2be8c12da345bd788b949e0b2360e4b3db3 from qemu
In commit c52ab08aee6f7d4717fc6b517174043126bd302f,
the patch snippet for the "syscall" insn got applied to "iret".
Backports commit 410e98146ffde201ab4c778823ac8beaa74c4c3f from qemu
Particularly when andc is also available, this is two insns
shorter than using clz to compute ctz.
Backports commit 14e99210f6c6cede461a54b2e0f9b4cd55175f00 from qemu
The number of actual invocations of ctpop itself does not warrant
an opcode, but it is very helpful for POWER7 to use in generating
an expansion for ctz.
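A hedged illustration of the kind of expansion meant here, using the
standard identity ctz(x) == ctpop((x & -x) - 1) (plain C, with a GCC
builtin standing in for the ctpop opcode):

    #include <stdint.h>

    static int ctz_via_ctpop(uint64_t x)
    {
        /* x & -x isolates the lowest set bit; subtracting 1 turns the
         * trailing zeros into ones, which the popcount then counts.
         * For x == 0 this yields 64, matching the usual convention. */
        return __builtin_popcountll((x & -x) - 1);
    }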
Backports commit a768e4e99247911f00c5c0267c12d4e207d5f6cc from qemu
The number of actual invocations does not warrant an opcode,
nor the backends generating it. But at least we can eliminate
the redundant helpers.
Backports commit 086920c2c8008f125fd38781072fa25c3ad158ea from qemu
Previously we could not have different constraints for different ISA levels,
which prevented us from eliding the matching constraint for shifts.
We do now have to make sure that the operands match for constant shifts.
We can also handle some small left shifts via lea.
Backports commit 6a5aed4bdc7078838a8098336588d56c9ce09d1d from qemu
Use a switch instead of searching a table. Share constraints between
32-bit and 64-bit, when at all possible.
Backports commit cd26449a505f808e479af4fdd539e05767e09c06 from qemu
This allows an output operand to match an input operand
only when the input operand needs a register.
Backports commit 17280ff4a5f264e01e55ae514ee6d3586f9577b2 from qemu
This will let us choose how to interpret a given constraint
depending on whether the opcode is 32- or 64-bit. Which will
let us share more constraint combinations between opcodes.
At the same time, change the interface to return the advanced
pointer instead of passing it in/out by reference.
Backports commit 069ea736b50b75fdec99c9b8cc603b97bd98419e from qemu
This will allow the target to tailor the constraints to the
auto-detected ISA extensions.
Backports commit f69d277ece43c42c7ab0144c2ff05ba740f6706b from qemu
A couple of places where it was easy to identify a right-shift
followed by an extract or and-with-immediate, and the obvious
sign-extract from a high byte register.
Backports commit 04fc2f1c8fc030a11e08e81bb926392c0991282a from qemu
Use the new primitives for UBFX and SBFX.
Backports commits 59a71b4c5b4ef2ef6425b9e21c972dd5bf450275 and 86c9ab277615af4e0389eb80a83073873ff96c86 from qemu
Since we can no longer use matching constraints, this does
mean we must handle that data movement by hand.
Backports commit 752b1be94757de906b9c24ebc8f5e6aa54b96b23 from qemu
This lets us expose facilities to TCG_TARGET_HAS_* defines
directly, rather than hiding behind function calls.
Backports commit b2c98d9d392c87c9b9e975d30f79924719d9cbbe from qemu
This allows us to use this detection within the TCG_TARGET_HAS_*
macros, instead of requiring a function call into tcg-target.inc.c.
Backports commit 40b2ccb156534f5d5f1d110a6ce008d87ee10af1 from qemu
While we don't require a new opcode, it is handy to have an expander
that knows the first source is zero.
Backports commit 07cc68d52852bf47dea7c402b46ddd28248d4212 from qemu
Adds tcg_gen_extract_* and tcg_gen_sextract_* for extraction of
fixed position bitfields, much like we already have for deposit.
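A hedged sketch of the semantics, in the spirit of QEMU's existing
extract32()/sextract32() helpers: extract returns bits [pos, pos+len)
zero-extended, sextract returns them sign-extended (assumes 0 < len and
pos + len <= 32).

    #include <stdint.h>

    static uint32_t extract32_sketch(uint32_t value, int pos, int len)
    {
        return (value >> pos) & (~0u >> (32 - len));
    }

    static int32_t sextract32_sketch(uint32_t value, int pos, int len)
    {
        /* shift the field up to the top bit, then arithmetic-shift it back */
        return (int32_t)(value << (32 - pos - len)) >> (32 - len);
    }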
Backports commit 7ec8bab3deae643b1ce579c2d65a244f30708330 from qemu
This patch introduces a helper to query the iotlb entry for a
possible iova. This will be used by later device IOTLB API to enable
the capability for a dataplane (e.g. vhost) to query the IOTLB.
Backports commit 052c8fa9983f553fdfa0d61034774070dd639c2b from qemu
gcc 5.3.0 diagnoses
translate-all.c: In function ‘alloc_code_gen_buffer’:
translate-all.c:756:17: error: switch condition has boolean value
switch (buf2 != MAP_FAILED) {
^
Backports commit f68808c7494b38764e1895a9852b994638b86536 from qemu
tcg_out_ldst: using a generic ALIAS_PADD to avoid ifdefs
tcg_out_ld: generates LD or LW
tcg_out_st: generates SD or SW
Backports commit 32b69707df3365aadaad1d058044a7704397ec62 from qemu
tcg_out_mov: using OPC_OR as most mips assemblers do;
tcg_out_movi: extended to 64-bit immediate.
Backports commit 2294d05dab503d11664e73712c7f250fd0bf9e3b from qemu
Without the mips32r2 instructions to perform swapping, bswap is quite large,
dominating the size of each reverse-endian qemu_ld/qemu_st operation.
Create two subroutines in the prologue block. The subroutines require extra
reserved registers (TCG_TMP[2, 3]). Using these within qemu_ld means that
we need not place additional restrictions on the qemu_ld outputs.
Backports commit 7f54eaa3b78d71cb57e45a719980f9b5ff06d21c from qemu
Bulk patch adding 64-bit opcodes into tcg_out_op. Note that
mips64 is as yet neither complete nor enabled.
Backports commit 0119b1927d531f3fac22b9b4da01dafc23644973 from qemu
Since the mips manual tables are in octal, reorg all of the opcodes
into that format for clarity. Note that the 64-bit opcodes are as
yet unused.
Backports commit 57a701fc2b34902310d4dbd1411088055616938a from qemu
Without the mips32r2 instructions to perform swapping, bswap is quite large,
dominating the size of each reverse-endian qemu_ld/qemu_st operation.
Create a subroutine in the prologue block. The subroutine requires extra
reserved registers (TCG_TMP[2, 3]). Using these within qemu_ld means that
we need not place additional restrictions on the qemu_ld outputs.
Backports commit bb08afe9f0aee1a3f5c23508e2511b882ca31e1b from qemu
Also manage word and byte operands and fix the computation of
overflow in the case of M68000 arithmetic shifts.
Backports commit 367790cce8e14131426f5190dfd7d1bdbf656e4d from qemu
Report this properly via exception and, importantly, allow
the disassembler the chance to tell us what insn is not handled.
Backports commit 72d2e4b6a437f11f97d3138f6b2ec177b78210c7 from qemu
680x0 movem can load/store words and long words and can use more
addressing modes. Coldfire can only use long words with (Ax) and
(d16,Ax) addressing modes.
Backports commit 7b542eb96d7d5d9266a9c0425f05d49c8e6df2f9 from qemu
Implement CAS using cmpxchg.
Implement CAS2 using a helper and either cmpxchg when
the 32-bit addresses are consecutive, or with
parallel_cpus + cpu_loop_exit_atomic() otherwise.
Backports commit 14f944063affbcc7bd6df42b060793dbfee8a822 from qemu
Update helper to set the throwing location in case of div-by-0.
Cleanup divX.w and add quad word variants of divX.l.
Backports commit 0ccb9c1d8128a020720d5c6abf99a470742a1b94 from qemu
Provide gen_lea_mode and gen_ea_mode, where the mode can be
specified manually, rather than taken from the instruction.
Backports commit f84aab269ddab8509b77408b886e9071bf5c48fb from qemu