Commit graph

2526 commits

Peter Maydell e07cd2542c
exec.c: Drop TARGET_HAS_ICE define and checks
The TARGET_HAS_ICE #define is intended to indicate whether a target-*
guest CPU implementation supports the breakpoint handling. However,
all our guest CPUs have that support (the only two which do not
define TARGET_HAS_ICE are unicore32 and openrisc, and in both those
cases the bp support is present and the lack of the #define is just
a bug). So remove the #define entirely: all new guest CPU support
should include breakpoint handling as part of the basic implementation.

Backports commit ec53b45bcd1f74f7a4c31331fa6d50b402cd6d26 from qemu
2018-02-18 18:17:14 -05:00
Amit Shah f64e3d4931
exec: add wrapper for host pointer access
Host pointer accesses force pointer math; let's
add a wrapper to make them safer.
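
A minimal sketch of what such a wrapper can look like (illustrative only; the
real helper's name and the RAMBlock field layout may differ in the backport):

    static inline void *ramblock_ptr(RAMBlock *block, ram_addr_t offset)
    {
        /* keep the host-pointer arithmetic in one audited place */
        return (char *)block->host + offset;
    }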

Backports relevant parts of commit 1240be24357ee292f8d05aa2abfdba75dd0ca25d from qemu
2018-02-18 18:10:17 -05:00
Paolo Bonzini 0696e7fe19
exec: allows 8-byte accesses in subpage_ops
Otherwise fw_cfg accesses are split into 4-byte ones before they reach the
fw_cfg ops / handlers.
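
Roughly what this amounts to in the MemoryRegionOps table (a sketch; the field
values here are illustrative rather than the exact backported struct):

    static const MemoryRegionOps subpage_ops = {
        .read_with_attrs = subpage_read,
        .write_with_attrs = subpage_write,
        .impl.min_access_size = 1,
        .impl.max_access_size = 8,    /* raised so 8-byte accesses are not split */
        .valid.min_access_size = 1,
        .valid.max_access_size = 8,
        .valid.accepts = subpage_accepts,
        .endianness = DEVICE_NATIVE_ENDIAN,
    };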

Backports commit ff6cff7554be06e95f8d712f66cd16bd6681c746 from qemu
2018-02-18 18:06:37 -05:00
Paolo Bonzini a06611e45b
exec: skip MMIO regions correctly in cpu_physical_memory_write_rom_internal
Loading the BIOS in the mac99 machine is interesting, because there is a
PROM in the middle of the BIOS region (from 16K to 32K).  Before memory
region accesses were clamped, when QEMU was asked to load a BIOS from
0xfff00000 to 0xffffffff it would put even those 16K from the BIOS file
into the region.  This is weird because those 16K were not actually
visible between 0xfff04000 and 0xfff07fff.  However, it worked.

After clamping was added, this also worked.  In this case, the
cpu_physical_memory_write_rom_internal function split the write into
three parts: the first 16K were copied, the PROM area (the second 16K) was
ignored, then the rest was copied.

Problems then started with commit 965eb2f (exec: do not clamp accesses
to MMIO regions, 2015-06-17).  Clamping accesses is not done for MMIO
regions because they can overlap wildly, and MMIO registers can be
expected to perform full-width accesses based only on their address
(with no respect for adjacent registers that could decode to completely
different MemoryRegions).  However, this lack of clamping also applied
to the PROM area!  cpu_physical_memory_write_rom_internal thus failed
to copy the third range above, i.e. only copied the first 16K of the BIOS.

In effect, address_space_translate is expecting _something else_ to do
the clamping for MMIO regions if the incoming length is large.  This
"something else" is memory_access_size in the case of address_space_rw,
so use the same logic in cpu_physical_memory_write_rom_internal.
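
A sketch of the resulting loop shape, assuming the translate/clamp helpers
named above (not the literal backported diff):

    while (len > 0) {
        l = len;
        mr = address_space_translate(as, addr, &addr1, &l, true);

        if (!(memory_region_is_ram(mr) || memory_region_is_romd(mr))) {
            /* MMIO (e.g. the PROM window): clamp to the region's access size
             * so the bytes after it are still reached on later iterations */
            l = memory_access_size(mr, l, addr1);
        } else {
            /* ROM/RAM: copy l bytes from buf into the backing host memory */
        }
        len -= l;
        buf += l;
        addr += l;
    }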

Backports commit b242e0e0e2969c044a318e56f7988bbd84de1f63 from qemu
2018-02-18 17:57:08 -05:00
Paolo Bonzini 32996e48fc
exec: clamp accesses against the MemoryRegionSection
Because the clamping was done against the MemoryRegion,
address_space_rw was effectively broken if a write spanned
multiple sections that are not linear in underlying memory
(with the memory not being under an IOMMU).

This is visible with the MIPS rc4030 IOMMU, which is implemented
as a series of alias memory regions that point to the actual RAM.

Backports commit e4a511f8cc6f4a46d409fb5c9f72c38ba45f8d83 from qemu
2018-02-18 17:52:58 -05:00
Lioncash 7c21d3059e
memory: Silence unused variable warning 2018-02-18 17:52:03 -05:00
Paolo Bonzini 26ba2e91f6
exec: do not clamp accesses to MMIO regions
It is common for MMIO registers to overlap, for example a 4 byte register
at 0xcf8 (totally random choice... :)) and a 1 byte register at 0xcf9.
If these registers are implemented via separate MemoryRegions, it is
wrong to clamp the accesses as the value written would be truncated.

Hence for these regions the effects of commit 23820db (exec: Respect
as_translate_internal length clamp, 2015-03-16, previously applied as
commit c3c1bb9) must be skipped.

Backports commit 965eb2fcdfe919ecced6c34803535ad32dc1249c from qemu
2018-02-18 17:50:27 -05:00
Peter Crosthwaite b82e711a65
memory: Add address_space_init_shareable()
This will either create a new AS or return a pointer to an
already existing equivalent one, if we have already created
an AS for the specified root memory region.

The motivation is to reuse address spaces as much as possible.
It's going to be quite common that bus masters out in device land
have pointers to the same memory region for their mastering yet
each will need to create its own address space. Let the memory
API implement sharing for them.

Aside from the perf optimisations, this should reduce the amount
of redundant output on info mtree as well.

The returned value will be malloced, but the malloc will be
automatically freed when the AS runs out of refs.
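
Illustrative upstream-QEMU usage (a sketch; the backport may thread extra
context parameters through these calls). The second caller is expected to get
the existing AddressSpace back rather than a new one:

    AddressSpace *as_a = address_space_init_shareable(get_system_memory(), "bus-master-a");
    AddressSpace *as_b = address_space_init_shareable(get_system_memory(), "bus-master-b");
    /* as_a == as_b: same root region, so the second call only takes a reference */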

Backports commit f0c02d15b57da6f5463e3768aa0cfeedccf4b8f4 from qemu
2018-02-18 00:18:21 -05:00
Peter Maydell 73efe6cda3
exec.c: Use cpu_get_phys_page_attrs_debug
Use cpu_get_phys_page_attrs_debug() when doing virtual-to-physical
conversions in debug related code, so that we can obtain the right
address space index and thus select the correct AddressSpace,
rather than always using cpu->as.
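
A sketch of the resulting debug-access path, using the helpers this series
introduces (cpu, page and the surrounding locals stand for the usual debug-code
context):

    MemTxAttrs attrs;
    hwaddr phys = cpu_get_phys_page_attrs_debug(cpu, page & TARGET_PAGE_MASK, &attrs);

    if (phys != -1) {
        int asidx = cpu_asidx_from_attrs(cpu, attrs);
        AddressSpace *as = cpu_get_address_space(cpu, asidx);
        /* read or write through 'as' instead of the fixed cpu->as */
    }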

Backports commit 5232e4c798ba7a46261d3157b73d08fc598e7dcb from qemu
2018-02-17 23:26:09 -05:00
Peter Maydell 1dfba71bef
exec.c: Add cpu_get_address_space()
Add a function to return the AddressSpace for a CPU based on
its numerical index. (Callers outside exec.c don't have access
to the CPUAddressSpace struct so can't just fish it out of the
CPUState struct directly.)

Backports commit 651a5bc03705102de519ebf079a40ecc1da991db from qemu
2018-02-17 23:22:23 -05:00
Peter Maydell 2fe995a0da
exec.c: Pass MemTxAttrs to iotlb_to_region so it uses the right AS
Pass the MemTxAttrs for the memory access to iotlb_to_region(); this
allows it to determine the correct AddressSpace to use for the lookup.

Backports commit a54c87b68a0410d0cf6f8b84e42074a5cf463732 from qemu
2018-02-17 23:19:00 -05:00
Peter Maydell 8edd6ffdfd
cputlb.c: Use correct address space when looking up MemoryRegionSection
When looking up the MemoryRegionSection for the new TLB entry in
tlb_set_page_with_attrs(), use cpu_asidx_from_attrs() to determine
the correct address space index for the lookup, and pass it into
address_space_translate_for_iotlb().

Backports commit d7898cda81b6efa6b2d7a749882695cdcf280eaa from qemu
2018-02-17 23:15:22 -05:00
Peter Maydell d23831f4dd
cpu: Add new asidx_from_attrs() method
Add a new method to CPUClass which the memory system core can
use to obtain the correct address space index to use for a memory
access with a given set of transaction attributes, together
with the wrapper function cpu_asidx_from_attrs() which implements
the default behaviour ("always use asidx 0") for CPU classes
which don't provide the method.
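
A sketch of the wrapper's fallback behaviour as described above (simplified;
the real helper may also assert that the returned index is in range):

    static inline int cpu_asidx_from_attrs(CPUState *cpu, MemTxAttrs attrs)
    {
        CPUClass *cc = CPU_GET_CLASS(cpu);

        /* default: targets without the hook always use address space 0 */
        return cc->asidx_from_attrs ? cc->asidx_from_attrs(cpu, attrs) : 0;
    }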

Backports commit d7f25a9e6a6b2c69a0be6033903b7d6087bcf47d from qemu
2018-02-17 22:45:32 -05:00
Lioncash 1cc4b92c67
cpu: Add new get_phys_page_attrs_debug() method
Add a new optional method get_phys_page_attrs_debug() to CPUClass.
This is like the existing get_phys_page_debug(), but also returns
the memory transaction attributes to use for the access.
This will be necessary for CPUs which have multiple address
spaces and use the attributes to select the correct address
space.

We provide a wrapper function cpu_get_phys_page_attrs_debug()
which falls back to the existing get_phys_page_debug(), so we
don't need to change every target CPU.
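
A sketch of the fallback described above (simplified):

    hwaddr cpu_get_phys_page_attrs_debug(CPUState *cpu, vaddr addr, MemTxAttrs *attrs)
    {
        CPUClass *cc = CPU_GET_CLASS(cpu);

        if (cc->get_phys_page_attrs_debug) {
            return cc->get_phys_page_attrs_debug(cpu, addr, attrs);
        }
        /* fall back to the old hook and report "unspecified" attributes */
        *attrs = MEMTXATTRS_UNSPECIFIED;
        return cc->get_phys_page_debug(cpu, addr);
    }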

Backports commit 1dc6fb1f5cc5cea5ba01010a19c6acefd0ae4b73 from qemu
2018-02-17 22:43:42 -05:00
Peter Maydell 90c7c1bdb5
exec-all.h: Document tlb_set_page_with_attrs, tlb_set_page
Add documentation comments for tlb_set_page_with_attrs()
and tlb_set_page().

Backports commit 1787cc8ee55143b6071c87e59f08d56e7c22c1eb from qemu
2018-02-17 22:37:58 -05:00
Peter Maydell 51369b67cd
exec.c: Allow target CPUs to define multiple AddressSpaces
Allow multiple calls to cpu_address_space_init(); each
call adds an entry to the cpu->ases array at the specified
index. It is up to the target-specific CPU code to actually use
these extra address spaces.

Since this multiple AddressSpace support won't work with
KVM, add an assertion to avoid confusing failures.
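
Illustrative target-side usage, assuming the cpu_address_space_init() signature
of that era (CPUState, AddressSpace, index); as_nonsecure and as_secure stand in
for whatever AddressSpaces the target has built:

    /* index 0 keeps the normal view; index 1 adds a second, e.g. secure, view */
    cpu_address_space_init(cs, as_nonsecure, 0);
    cpu_address_space_init(cs, as_secure, 1);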

Backports commit 12ebc9a76dd7702aef0a3618717a826c19c34ef4 from qemu
2018-02-17 22:35:13 -05:00
Peter Maydell f1b237236c
exec.c: Don't set cpu->as until cpu_address_space_init
Rather than setting cpu->as unconditionally in cpu_exec_init
(and then having target-i386 override this later), don't set
it until the first call to cpu_address_space_init.

This requires us to initialise the address space for
both TCG and KVM (KVM doesn't need the AS listener but
it does require cpu->as to be set).

For target CPUs which don't set up any address spaces (currently
everything except i386), add the default address_space_memory
in qemu_init_vcpu().

Backports commit 56943e8cc14b7eeeab67d1942fa5d8bcafe3e53f from qemu
2018-02-17 22:24:36 -05:00
Peter Maydell 51aeab661f
hw/arm: Clean up includes
Clean up includes so that osdep.h is included first and headers
which it implies are not included manually.

This commit was created with scripts/clean-includes.

Backports commit 12b167226f2804063cf8d72fe4fdc01764c99e96 from qemu
2018-02-17 21:10:57 -05:00
Peter Maydell cd5c4037ac
target-arm: Clean up includes
Clean up includes so that osdep.h is included first and headers
which it implies are not included manually.

This commit was created with scripts/clean-includes.

Backports commit 74c21bd07491739c6e56bcb1f962e4df730e77f3 from qemu
2018-02-17 21:09:32 -05:00
Juan Quintela 7da263e4ea
target-sparc: Convert to VMStateDescription
Convert the SPARC CPU from cpu_load/save functions to VMStateDescription.
We preserve migration compatibility with the previous version
(required for SPARC32 but not necessarily for SPARC64).
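
A VMStateDescription is a declarative table of fields to migrate; a minimal
illustrative shape (the field choice here is a sketch, not the actual SPARC
migration state):

    static const VMStateDescription vmstate_sparc_cpu = {
        .name = "cpu",
        .version_id = 1,
        .minimum_version_id = 1,
        .fields = (VMStateField[]) {
            VMSTATE_UINTTL_ARRAY(env.gregs, SPARCCPU, 8),
            VMSTATE_UINTTL(env.pc, SPARCCPU),
            VMSTATE_END_OF_LIST()
        }
    };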

Backports commit df32c8d436d4eb3f40b00647ca0df2bbc7f6bf6f from qemu
2018-02-17 21:06:46 -05:00
Peter Maydell a734ef8156
target-sparc: Split cpu_put_psr into side-effect and no-side-effect parts
For inbound migration we really want to be able to set the PSR without
having any side effects, but cpu_put_psr() calls cpu_check_irqs() which
might try to deliver CPU interrupts. Split cpu_put_psr() into the
no-side-effect and side-effect parts.

This includes reordering the cpu_check_irqs() to the end of cpu_put_psr(),
because that function may actually end up calling cpu_interrupt(), which
does not seem like a good thing to happen in the middle of updating the PSR.
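
A sketch of the split described above (the no-side-effect variant is named as
in the upstream counterpart, cpu_put_psr_raw; treat the bodies as illustrative):

    void cpu_put_psr_raw(CPUSPARCState *env, target_ulong val)
    {
        /* update CC bits, PIL, S/PS, CWP, ... from val; no interrupt delivery */
    }

    void cpu_put_psr(CPUSPARCState *env, target_ulong val)
    {
        cpu_put_psr_raw(env, val);
        /* the side effect runs last, once the PSR is fully updated */
        cpu_check_irqs(env);
    }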

Backports commit 4552a09dd4055c806b7df8c595dc0fb8951834be from qemu
2018-02-17 21:04:15 -05:00
Paolo Bonzini 3dab621825
target-i386: do not duplicate page protection checks
x86_cpu_handle_mmu_fault is currently checking twice for writability
and executability of pages; the first time to decide whether to
trigger a page fault, the second time to compute the "prot" argument
to tlb_set_page_with_attrs.

Reorganize code so that first "prot" is computed, then it is used
to check whether to raise a page fault, then finally PROT_WRITE is
removed if the D bit will have to be set.

Backports commit 76c64d33601a4948d6f72022992574a75b6fab97 from qemu
2018-02-17 20:59:54 -05:00
Alvise Rigo 1e3e75fa44
target-arm: Use the right MMU index in arm_regime_using_lpae_format
arm_regime_using_lpae_format checks whether the LPAE extension is used
for stage 1 translation regimes. MMU indexes not exclusively of a stage 1
regime won't work with this method.

In case of ARMMMUIdx_S12NSE0 or ARMMMUIdx_S12NSE1, offset these values
by ARMMMUIdx_S1NSE0 to get the right index indicating a stage 1
translation regime.

Rename also the function to arm_s1_regime_using_lpae_format and update
the comments to reflect the change.
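
A sketch of the index adjustment the message describes (close to the shape of
the upstream change; helper and enum names per target-arm):

    bool arm_s1_regime_using_lpae_format(CPUARMState *env, ARMMMUIdx mmu_idx)
    {
        if (mmu_idx == ARMMMUIdx_S12NSE0 || mmu_idx == ARMMMUIdx_S12NSE1) {
            /* map the combined stage 1+2 index onto its stage 1 regime */
            mmu_idx += ARMMMUIdx_S1NSE0;
        }
        return regime_using_lpae_format(env, mmu_idx);
    }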

Backports commit deb2db996cbb9470b39ae1e383791ef34c4eb3c2 from qemu
2018-02-17 20:56:32 -05:00
Markus Armbruster e4976f4597
error: Improve documentation
While there, tighten error_append_hint()'s assertion.

Backports commit f4d0064afcff4c38b379800674938cde8f069dcd from qemu
2018-02-17 20:52:49 -05:00
Markus Armbruster 9398bd49fe
error: Document how to accumulate multiple errors
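
The documented pattern boils down to stopping at the first failure and handing
it to the caller; a hedged illustration (foo and bar are hypothetical callees):

    Error *err = NULL;

    foo(arg, &err);
    if (!err) {
        bar(arg, &err);
    }
    error_propagate(errp, err);   /* passes the first (and only) error up */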
Backports commit 8d780f43921feb7fd8d0b58f779a22d1265f2378 from qemu
2018-02-17 20:49:12 -05:00
Peter Maydell f336fc39eb
osdep.h: Include glib-compat.h in osdep.h rather than qemu-common.h
Our use of glib is now pervasive across QEMU. Move the include of glib-compat.h
from qemu-common.h to osdep.h so that it is more widely accessible and doesn't
get forgotten by accident. (Failure to include it will result in build failure
on old versions of glib which is likely to be unnoticed by most developers.)

Backports commit 529490e5d664a20d5c4223070dd7c03a0e02b6bd from qemu
2018-02-17 20:47:28 -05:00
Paolo Bonzini 1650af8c8b
memory: try to inline constant-length reads
memcpy can take a large amount of time for small reads and writes.
Handle the common case of reading s/g descriptors from memory (there
is no corresponding "write" case that is as common, because writes
often use address_space_st* functions) by inlining the relevant
parts of address_space_read into the caller.

Backports commit 3cc8f884996584630734a90c9b3c535af81e3c92 from qemu
2018-02-17 20:44:39 -05:00
Paolo Bonzini 712c300639
memory: inline a few small accessors
These are used in the address_space_* fast paths.

Backports commit 1619d1fe737d2af068aefe134386a69b76164794 from qemu
2018-02-17 20:35:28 -05:00
Paolo Bonzini 9a78c61145
memory: extract first iteration of address_space_read and address_space_write
We want to inline the case where there is only one iteration, because
then the compiler can also inline the memcpy. As a start, extract
everything after the first address_space_translate call.

Backports commit a203ac702e0720135fac8b1f2061d119814c1798 from qemu
2018-02-17 20:31:21 -05:00
Paolo Bonzini 0688343929
memory: split address_space_read and address_space_write
Rather than dispatching on is_write for every iteration, make
address_space_rw call one of the two functions. The amount of
duplicate logic is pretty small, and memory_access_is_direct can
be tweaked so that it inlines better in the callers.

Backports commit eb7eeb88628074207dd611472e712af775985e73 from qemu
2018-02-17 20:19:05 -05:00
Paolo Bonzini 077ffc3bd5
memory: avoid unnecessary object_ref/unref
For the common case of DMA into non-hotplugged RAM, it is unnecessary
but expensive to do object_ref/unref. Add back an owner field to
MemoryRegion, so that these memory regions can skip the reference
counting.

Backports commit 612263cf33062f7441a5d0e3b37c65991fdc3210 from qemu
2018-02-17 20:10:25 -05:00
Paolo Bonzini e6b25279f8
memory: reorder MemoryRegion fields
Order fields so that all fields accessed during a RAM read/write fit in
the same cache line.

Backports commit a676854f3447019c7c4b005ab6aece905fccfddd from qemu
2018-02-17 19:48:52 -05:00
Paolo Bonzini 52b3120995
exec: make qemu_ram_ptr_length more similar to qemu_get_ram_ptr
Notably, use qemu_get_ram_block to enjoy the MRU optimization.

Backports commit e81bcda529378f5ed8b9b0b59bb2b24b8ee1c814 from qemu
2018-02-17 19:46:37 -05:00
Eduardo Habkost d52f55b46d
memory: Eliminate memory_region_destructor_ram_from_ptr()
The function is equivalent to memory_region_destructor_ram(), so
it's not needed anymore.

Backports commit fc3e7665d7fe1b2f842441d250d7afec26b8a910 from qemu
2018-02-17 19:39:22 -05:00
Eduardo Habkost 26791ea61b
exec: Eliminate qemu_ram_free_from_ptr()
Replace qemu_ram_free_from_ptr() with qemu_ram_free().

The only difference between qemu_ram_free_from_ptr() and
qemu_ram_free() is that g_free_rcu() is used instead of
call_rcu(reclaim_ramblock). We can safely replace it because:

* RAM blocks allocated by qemu_ram_alloc_from_ptr() always have
RAM_PREALLOC set;
* reclaim_ramblock(block) will do nothing except g_free(block)
if RAM_PREALLOC is set at block->flags.

Backports commit a29ac16632aec6065c72985b9f7eeb1ca6fbef4a from qemu
2018-02-17 19:37:45 -05:00
Sergey Fedorov 587c9f0570
target-arm: Fix and improve AA32 singlestep translation completion code 2018-02-17 19:34:52 -05:00
Andrew Baumann e1701b069f
target-arm: raise exception on misaligned LDREX operands
QEMU does not generally perform alignment checks. However, the ARM ARM
requires implementation of alignment exceptions for a number of cases
including LDREX, and Windows-on-ARM relies on this.

This change adds plumbing to enable alignment checks on loads using
MO_ALIGN, a do_unaligned_access hook to raise the exception (data
abort), and uses the new aligned loads in LDREX (for all but
single-byte loads).
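
The core of the plumbing is just an extra flag on the memory op; a sketch of an
aligned 32-bit load in the translator (illustrative, not the exact backported
helper; tmp, addr and s stand for the usual translator locals):

    /* MO_ALIGN makes the softmmu slow path call the CPU's do_unaligned_access
     * hook (here: raise a data abort) when addr is not 4-byte aligned */
    tcg_gen_qemu_ld_i32(tmp, addr, get_mem_index(s), MO_TEUL | MO_ALIGN);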

Backports commit 30901475b91ef1f46304404ab4bfe89097f61b96 from qemu
2018-02-17 19:29:29 -05:00
Peter Maydell 3664e1ab8a
sparc: allow CASA with ASI 0xa from user space
LEON3 allows the CASA instruction to be used from user space
if the ASI is set to 0xa (user data).

Backports commit bd4e097a8e30bb63ed7f46553b13f60809c30ac3 from qemu
2018-02-17 19:23:35 -05:00
Markus Armbruster 03dffc1e9c
typedefs: Put them back into alphabetical order
"Please keep this list in alphabetical order" has been more honoured
in the breach than in the observance. Clean up.

While there, drop a redundant struct declaration.

Backports commit 2988cbeaf94203b2cf31c0b3f589aa4ebc0cff34 from qemu
2018-02-17 19:22:23 -05:00
Lioncash c8be425439
translate-all: ensure host page mask is always extended with 1's
Anthony reported that >4GB guests on Xen with 32bit QEMU broke after
commit 4ed023c ("Round up RAMBlock sizes to host page sizes", 2015-11-05).

In that patch sizes are masked against qemu_host_page_size/mask which
are uintptr_t, and thus 32bit on a 32bit QEMU, even though the ram space
might be bigger than 4GB on Xen.

Since ram_addr_t is not available on user-mode emulation targets, ensure
that we get a sign extension when masking away the low bits of the address.
Remove the ~10 year old scary comment that the type of these variables
is probably wrong, replacing it with another equally scary comment. The new
comment however does not have "???" in it, which is arguably an improvement.

For completeness use the alignment macros in linux-user and bsd-user
instead of manually doing an &. linux-user and bsd-user are not affected
by the Xen issue, however.
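
The essence of the fix, as a sketch: derive the mask from a signed
pointer-sized value so the negation sign-extends into the high bits.

    /* with a 32-bit uintptr_t, ~(page_size - 1) would lose bits 32 and above;
     * -(intptr_t)size keeps them set, so masking a >4GB address stays correct */
    qemu_host_page_mask = -(intptr_t)qemu_host_page_size;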

Backports commit 0c2d70c448b7853a91cfa63659aa3cc6630fb9be from qemu
2018-02-17 19:17:19 -05:00
Don Slutz 86436964b5
exec: Stop using memory after free
memory_region_unref(mr) can free memory.

For example I got:

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7f43280d4700 (LWP 4462)]
0x00007f43323283c0 in phys_section_destroy (mr=0x7f43259468b0)
at /home/don/xen/tools/qemu-xen-dir/exec.c:1023
1023 if (mr->subpage) {
(gdb) bt
at /home/don/xen/tools/qemu-xen-dir/exec.c:1023
at /home/don/xen/tools/qemu-xen-dir/exec.c:1034
at /home/don/xen/tools/qemu-xen-dir/exec.c:2205
(gdb) p mr
$1 = (MemoryRegion *) 0x7f43259468b0

And this change prevents this.
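
The shape of the fix, per the message: read what you need from the region
before dropping the reference. A sketch, close to the upstream change, but
treat the details as illustrative:

    static void phys_section_destroy(MemoryRegion *mr)
    {
        bool have_sub_page = mr->subpage;

        memory_region_unref(mr);          /* may free mr */
        if (have_sub_page) {
            subpage_t *subpage = container_of(mr, subpage_t, iomem);
            object_unref(OBJECT(&subpage->iomem));
            g_free(subpage);
        }
    }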

Backports commit 55b4e80b047300e1512df02887b7448ba3786b62 from qemu
2018-02-17 19:11:50 -05:00
Stefan Weil 5218799171
oslib-win32: Change return type of function getpagesize
getpagesize on Linux returns an int. Fix QEMU's implementation for
Windows to return an int (instead of size_t), too.

This fixes a compiler warning which was introduced recently
(commit 093e3c42).

Backports commit a28c2f2df7679e3a87789e9fb7ed69331f697297 from qemu
2018-02-17 19:10:37 -05:00
Paolo Bonzini 1918761803
target-sparc: fix 32-bit truncation in fpackfix
This is reported by Coverity. The algorithm description at
ftp://ftp.icm.edu.pl/packages/ggi/doc/hw/sparc/Sparc.pdf suggests
that each 32-bit part of rs2, after the left shift, is treated
as a 64-bit integer. Bits 32 and above are used to do the
saturating truncation.

Backports commit 12a3567c4099be194b44987ac5d7d65b99bcfab7 from qemu
2018-02-17 19:08:40 -05:00
Leon Alrae 272e412fc9
target-mips: flush QEMU TLB when disabling 64-bit addressing
CP0.Status.KX/SX/UX bits are responsible for enabling access to 64-bit
Kernel/Supervisor/User Segments. If a bit is cleared, an access to the
corresponding segment should generate an Address Error Exception.

However, the guest may still be able to access some pages belonging to
the disabled 64-bit segment because we forgot to flush the QEMU TLB.

This patch fixes it.

Backports commit f93c3a8d0c0c1038dbe1e957eb8ab92671137975 from qemu
2018-02-17 19:06:43 -05:00
James Hogan 8689c6efef
target-mips: Fix exceptions while UX=0
Commit 01f728857941 ("target-mips: Status.UX/SX/KX enable 32-bit address
wrapping") added a new hflag MIPS_HFLAG_AWRAP, which indicates that
64-bit addressing is disallowed in the current mode, so hflag users
don't need to worry about the complexities of working that out, for
example checking both MIPS_HFLAG_KSU and MIPS_HFLAG_UX.

However when exceptions are taken outside of exception level,
mips_cpu_do_interrupt() manipulates the env->hflags directly rather than
using compute_hflags() to update them, and this code wasn't updated
accordingly. As a result, when UX is cleared, MIPS_HFLAG_AWRAP is set,
but it doesn't get cleared on entry back into kernel mode due to an
exception. Kernel mode then cannot access the 64-bit segments resulting
in a nested exception loop. The same applies to errors and debug
exceptions.

Fix by updating mips_cpu_do_interrupt() to clear the MIPS_HFLAG_AWRAP
flag when necessary, according to compute_hflags().

Backports commit 7871abb94c2f4adc39f2487f6edf5e69ba872a65 from qemu
2018-02-17 18:57:52 -05:00
Peter Maydell 386e398c56
target-arm: Don't mask out bits [47:40] in LPAE descriptors for v8
In an LPAE format descriptor in ARMv8 the address field extends
up to bit 47, not just bit 39. Correct the masking so we don't
give incorrect results if the output address size is greater
than 40 bits, as it can be for AArch64.

(Note that we don't yet support the new-in-v8 Address Size fault which
should be generated if any translation table entry or TTBR contains
an address with non-zero bits above the most significant bit of the
maximum output address size.)
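
The fix is essentially a wider mask in the LPAE table walker; a sketch of the
change:

    /* keep output-address bits [47:12] instead of stopping at bit 39 */
    descaddr = descriptor & 0xfffffffff000ULL;   /* previously 0xfffffff000ULL */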

Backports commit 6109769a8b42bd0c3d5b1601c9b35fe7ea6a603e from qemu
2018-02-17 18:55:32 -05:00
John Clarke 5c57445f08
tcg: Fix highwater check
A simple typo in the variable used when comparing against the highwater mark.
Reports are that QEMU can in fact segfault occasionally due to this mistake.

Backports commit 644da9b39e477caa80bab69d2847dfcb468f0d33 from qemu
2018-02-17 18:53:18 -05:00
Daniel P. Berrange 3ba8959dfd
qom: Introduce ObjectPropertyIterator struct for iteration
Some users of QOM need to be able to iterate over properties
defined against an object instance. Currently they are just
directly using the QTAILQ macros against the object properties
data structure.

This is bad because it exposes them to changes in the data
structure used to store properties, as well as changes in
functionality such as ability to register properties against
the class.

This provides an ObjectPropertyIterator struct which will
insulate the callers from the particular data structure
used to store properties. It can be used thus

    ObjectProperty *prop;
    ObjectPropertyIterator *iter;

    iter = object_property_iter_init(obj);
    while ((prop = object_property_iter_next(iter))) {
        ... do something with prop ...
    }
    object_property_iter_free(iter);

Backports commit a00c94824126901168bca5b89147f9e334a49e87 from qemu
2018-02-17 18:39:00 -05:00
Sergey Fedorov 5df03a909c
target-arm: Update condexec before arch BP check in AA32 translation
An architectural breakpoint check could raise an exception, thus condexec
bits should be updated before calling gen_helper_check_breakpoints().

Backports commit ce8a1b5449cd8c4c2831abb581d3208c3a3745a0 from qemu
2018-02-17 18:35:50 -05:00
Sergey Fedorov b42bfc59f1
target-arm: Update condexec before CP access check in AA32 translation
Coprocessor access instructions are allowed inside an IT block.
gen_helper_access_check_cp_reg() can raise an exception, thus condexec
bits should be updated beforehand.

Backports commit 43bfa4a100687af8d293fef0a197839b51400fca from qemu
2018-02-17 18:33:58 -05:00