Allow qemu to build on 32-bit hosts without 64-bit atomic ops.
Even if we only allow 32-bit hosts to multi-thread emulate 32-bit
guests, we still need some way to handle the 32-bit guest using a
64-bit atomic operation. Do so by dropping back to single-step.
Backports commit df79b996a7b21c6ea7847f7927a2e1a294b86c72 from qemu
Force the use of cmpxchg16b on x86_64.
Wikipedia suggests that only very old AMD64 (circa 2004) did not have
this instruction. Further, it's required by Windows 8 so no new cpus
will ever omit it.
If we truly care about these, then we could check this at startup time
and then avoid executing paths that use it.
Backports commit 7ebee43ee3e2fcd7b5063058b7ef74bc43216733 from qemu
Add all of cmpxchg, op_fetch, fetch_op, and xchg.
Handle both endiannesses, and sizes up to 8.
Handle expanding non-atomically, when emulating in serial.
Backports commit c482cb117cc418115ca9c6d21a7a2315414c0a40 from qemu
Probe for whether the specified guest write access is permitted.
If it is not permitted then an exception will be taken in the same
way as if this were a real write access (and we will not return).
Otherwise the function will return, and there will be a valid
entry in the TLB for this access.
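As a hedged sketch of how a target helper might use it (the prototype shown
and the helper name are assumptions for illustration, not taken from the
commit):
    /* Assumed prototype:
     *   void probe_write(CPUArchState *env, target_ulong addr,
     *                    int mmu_idx, uintptr_t retaddr);
     */
    void helper_check_store(CPUArchState *env, target_ulong addr)
    {
        /* Any fault is raised here, before we modify guest state. */
        probe_write(env, addr, cpu_mmu_index(env, false), GETPC());
        /* If we return, the TLB holds a valid entry for this write. */
    }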
Backports commit 3b4afc9e75ab1a95f33e41f462921093f8a109c4 from qemu
When we cannot emulate an atomic operation within a parallel
context, this exception allows us to stop the world and try
again in a serial context.
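A minimal sketch of the intended pattern, assuming the bail-out helper is
cpu_loop_exit_atomic() and the parallel_cpus flag of this era; the helper
wrapping it is purely illustrative:
    void helper_atomic_op(CPUArchState *env, target_ulong addr, uint64_t val)
    {
        if (parallel_cpus) {
            /* Cannot emulate this atomically here: raise EXCP_ATOMIC so
             * the outer loop stops all vCPUs and re-runs us serially. */
            cpu_loop_exit_atomic(ENV_GET_CPU(env), GETPC()); /* no return */
        }
        /* ... plain non-atomic implementation; safe, we now run alone ... */
    }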
Backports commit fdbc2b5722f6092e47181a947c90fd4bdcc1c121 from qemu
Also backports parts of commit 02d57ea115b7669f588371c86484a2e8ebc369be
The QmpOutputVisitor has no direct dependency on QMP. It is
valid to use it anywhere that one wants a QObject. Rename it
to better reflect its functionality as a generic QAPI
to QObject converter.
The commit before the previous one renamed the files; this one renames
C identifiers.
Backports commit 7d5e199ade76c53ec316ab6779800581bb47c50a from qemu
The QmpInputVisitor has no direct dependency on QMP. It is
valid to use it anywhere that one has a QObject. Rename it
to better reflect its functionality as a generic QObject
to QAPI converter.
The previous commit renamed the files; this one renames C identifiers.
Backports commit 09e68369a88d7de0f988972bf28eec1b80cc47f9 from qemu
Support target CPUs having a page size which isn't known
at compile time. To use this, the CPU implementation should:
* define TARGET_PAGE_BITS_VARY
* not define TARGET_PAGE_BITS
* define TARGET_PAGE_BITS_MIN to the smallest value it
might possibly want for TARGET_PAGE_BITS
* call set_preferred_target_page_bits() in its realize
function to indicate the actual preferred target page
size for the CPU (and report any error from it)
In CONFIG_USER_ONLY, the CPU implementation should continue
to define TARGET_PAGE_BITS appropriately for the guest
OS page size.
Machines which want to take advantage of a page size
larger than TARGET_PAGE_BITS_MIN must
set the MachineClass minimum_page_bits field to a value
which they guarantee will be no greater than the preferred
page size for any CPU they create.
Note that changing the target page size by setting
minimum_page_bits is a migration compatibility break
for that machine.
For debugging purposes, attempts to use TARGET_PAGE_SIZE
before it has been finally confirmed will assert.
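A hedged sketch of a CPU implementation following the steps above (the
target name, realize function and chosen bit counts are illustrative):
    /* cpu.h */
    #define TARGET_PAGE_BITS_VARY
    #define TARGET_PAGE_BITS_MIN 10  /* smallest size we might ever want */

    /* cpu.c, in the realize function */
    static void xyz_cpu_realizefn(DeviceState *dev, Error **errp)
    {
        /* Assumed to return false if a conflicting size is already fixed. */
        if (!set_preferred_target_page_bits(12)) {
            error_setg(errp, "conflicting preferred target page size");
            return;
        }
        /* ... rest of realize ... */
    }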
Backports commit 20bccb82ff3ea09bcb7c4ee226d3160cab15f7da from qemu
Remove the L1 page mapping table properties that were computed
statically with macros dependent on TARGET_PAGE_BITS. Drop the
V_L1_SIZE, V_L1_SHIFT and V_L1_BITS macros and replace them with
variables which are computed at an early stage of VM boot.
Removing dependency can help to make TARGET_PAGE_BITS
dynamic.
Backports commit 66ec9f49399f0a9fa13ee77c472caba0de2773fc from qemu
This commit introduces the TCGOpcode for memory barrier instruction.
This opcode takes an argument which is the type of memory barrier
which should be generated.
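A hedged sketch of a front end emitting the new opcode, assuming the
generator helper is tcg_gen_mb() and its argument combines TCG_MO_*
ordering flags with a TCG_BAR_* barrier kind:
    /* Full sequentially-consistent fence. */
    tcg_gen_mb(TCG_MO_ALL | TCG_BAR_SC);
    /* A load-acquire style barrier might instead combine, e.g.,
     * TCG_MO_LD_LD | TCG_MO_LD_ST with TCG_BAR_LDAQ. */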
Backports commit f65e19bc2c9e8358e634d309606144ac2a3c2936 from qemu
Setting skip_dump on a MemoryRegion allows us to modify one specific
code path, but the restriction we're trying to address encompasses
more than that. If we have a RAM MemoryRegion backed by a physical
device, it not only restricts our ability to dump that region, but
also affects how we should manipulate it. Here we recognize that
MemoryRegions do not change to sometimes allow dumps and other times
not, so we replace setting the skip_dump flag with a new initializer
so that we know exactly the type of region to which we're applying
this behavior.
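A hedged before/after sketch, assuming the new initializer is
memory_region_init_ram_device_ptr() and that it takes the same arguments
as memory_region_init_ram_ptr(); the region name is illustrative:
    /* Before: a generic RAM region with the skip_dump flag flipped after. */
    memory_region_init_ram_ptr(mr, owner, "vfio-bar", size, ptr);
    memory_region_set_skip_dump(mr);

    /* After: the region is declared as device-backed RAM from the start. */
    memory_region_init_ram_device_ptr(mr, owner, "vfio-bar", size, ptr);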
Backports commit ca83f87a66d19fdaabf23d4f5ebb49396fe232c1 from qemu
Rather than rely on recursion during the middle of register allocation,
lower indirect registers to loads and stores off the indirect base into
plain temps.
For an x86_64 host, with sufficient registers, this results in identical
code, modulo the actual register assignments.
For an i686 host, with insufficient registers, this means that temps can
be (temporarily) spilled to the stack in order to satisfy an allocation.
This is as opposed to the possibility of not being able to spill at all,
because allocating a register for the indirect base (needed in order to
perform the spill) may itself require a spill.
Backports commit 5a18407f55ade924aa6397c9a043a9ffd59645fe from qemu
By arranging for explicit writes to cpu_fsr after floating point
operations, we are able to mark the helpers as not writing to
tcg globals, which means that we don't need to invalidate the
integer register set across said calls.
Backports commit 7385aed20db5d83979f683b9d0048674411e963c from qemu
Provide a new helper function memory_region_init_rom() for memory
regions which are read-only (and unlike those created by
memory_region_init_rom_device() don't have special behaviour
for writes). This has the same behaviour as calling
memory_region_init_ram() and then memory_region_set_readonly()
(which is what we do today in boards with pure ROMs) but is a
more easily discoverable API for the purpose.
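A sketch of the before/after in board code (the region name and size are
illustrative):
    MemoryRegion *rom = g_new(MemoryRegion, 1);

    /* Before: two calls, easy to overlook.
     *   memory_region_init_ram(rom, NULL, "board.rom", 0x10000, &error_fatal);
     *   memory_region_set_readonly(rom, true);
     * After: one discoverable call with the same behaviour. */
    memory_region_init_rom(rom, NULL, "board.rom", 0x10000, &error_fatal);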
Backports commit a1777f7f6462c66e1ee6e98f0d5c431bfe988aa5 from qemu
Add an API object_type_get_size(const char *typename) that returns the
instance_size of the given typename.
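A minimal usage sketch (the size_t return type is an assumption; the QOM
type name is just an example):
    size_t sz = object_type_get_size("qemu:memory-region");
    void *obj = g_malloc0(sz);
    object_initialize(obj, sz, "qemu:memory-region");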
Backports commit 3f97b53a682d2595747c926c00d78b9d406f1be0 from qemu
The function cpu_resume_from_signal() is now always called with a
NULL puc argument, and is rather misnamed since it is never called
from a signal handler. It is essentially forcing an exit to the
top level cpu loop but without raising any exception, so rename
it to cpu_loop_exit_noexc() and drop the useless unused argument.
Backports commit 6886b98036a8f8f5bce8b10756ce080084cef11b from qemu
Let users of qemu_get_ram_ptr and qemu_ram_ptr_length pass in an
address that is relative to the MemoryRegion. This basically means
what address_space_translate returns.
Because the semantics of the second parameter change, rename the
function to qemu_map_ram_ptr.
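A hedged sketch of the intended call pattern, assuming the new prototype
takes the RAMBlock plus an offset within its MemoryRegion ('as' and 'addr'
stand for the AddressSpace and guest physical address being accessed):
    hwaddr xlat, len = 4;
    MemoryRegion *mr = address_space_translate(as, addr, &xlat, &len, true);
    if (memory_access_is_direct(mr, true)) {
        /* xlat is relative to mr: exactly what qemu_map_ram_ptr expects. */
        void *host = qemu_map_ram_ptr(mr->ram_block, xlat);
        /* ... access up to 'len' bytes at 'host' ... */
    }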
Backports commit 0878d0e11ba8013dd759c6921cbf05ba6a41bd71 from qemu
Move the old qemu_ram_addr_from_host to memory_region_from_host and
make it return an offset within the region. For qemu_ram_addr_from_host
return the ram_addr_t directly, similar to what it was before
commit 1b5ec23 ("memory: return MemoryRegion from qemu_ram_addr_from_host",
2013-07-04).
Backports commit 07bdaa4196b51bc7ffa7c3f74e9e4a9dc8a7966a from qemu
Remove direct uses of ram_addr_t and optimize memory_region_{get,set}_fd
now that a MemoryRegion knows its RAMBlock directly.
Backports commit 4ff87573df3606856a92c14eef3393a63d736d11 from qemu
To prepare for multi-arch, cputlb.c should only have awareness of one
single architecture. This means it should not have access to the full
CPU lists which may be heterogeneous. Instead, push the CPU_LOOP() up
to the one and only caller in exec.c.
Backports commit 9a13565d52bfd321934fb44ee004bbaf5f5913a8 from qemu
There is no particular reason to keep these functions in the header.
Suggested by Paolo.
Backports commit 99affd1d5bd4e396ecda50e53dfbc5147fa1313d from qemu
Not only does it make sense, but it also gets rid of the checkpatch warning:
WARNING: consider using qemu_strtosz in preference to strtosz
Also get rid of tabs to please checkpatch.
Backports commit 4677bb40f809394bef5fa07329dea855c0371697 from qemu
This patch replaces get_ticks_per_sec() calls with the macro
NANOSECONDS_PER_SECOND. Also, as there are no callers, get_ticks_per_sec()
is then removed. This replacement improves the readability and
understandability of code.
For example,
timer_mod(fdctrl->result_timer,
qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + (get_ticks_per_sec() / 50));
NANOSECONDS_PER_SECOND makes it obvious that qemu_clock_get_ns
matches the unit of the expression on the right side of the plus.
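After the replacement the same call reads:
    timer_mod(fdctrl->result_timer,
              qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) +
              (NANOSECONDS_PER_SECOND / 50));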
Backports commit 73bcb24d932912f8e75e1d88da0fc0ac6d4bce78 from qemu
Starting with the ARMv7 Virtualization Extensions, the A32 and T32
instruction sets provide instructions "MSR (banked)" and "MRS
(banked)" which can be used to access registers for a mode other
than the current one:
* R<m>_<mode>
* ELR_hyp
* SPSR_<mode>
Implement the missing instructions.
Backports commit 8bfd0550be821cf27d71444e2af350de3c3d2ee3 from qemu
Since this is not a high-performance path, just use a helper to
flip the E bit and force a lookup in the hash table since the
flags have changed.
Backports commit 9886ecdf31165de2d4b8bccc1a220bd6ac8bc192 from qemu
The rules for setting the CPSR on a 32-bit exception return are
subtly different from those for setting the CPSR via an instruction
like MSR or CPS. (In particular, in Hyp mode changing the mode bits
is not valid via MSR or CPS.) Split the exception-return case into
its own helper for setting CPSR, so we can eventually handle them
differently in the helper function.
Backports commit 235ea1f5c89abf30e452539b973b0dbe43d3fe2b from qemu
ARM stops before the access to a location covered by a watchpoint. Also,
a QEMU watchpoint firing is not necessarily an architectural watchpoint
match. Unfortunately, it is hardly possible to ignore a fired watchpoint
in the debug exception handler. So move the watchpoint check from the
debug exception handler to the dedicated watchpoint-checking callback.
Backports commit 3826121d9298cde1d29ead05910e1f40125ee9b0 from qemu
This will either create a new AS or return a pointer to an
already existing equivalent one, if we have already created
an AS for the specified root memory region.
The motivation is to reuse address spaces as much as possible.
It's going to be quite common that bus masters out in device land
have pointers to the same memory region for their mastering yet
each will need to create its own address space. Let the memory
API implement sharing for them.
Aside from the perf optimisations, this should reduce the amount
of redundant output on info mtree as well.
The returned value will be malloced, but the malloc will be
automatically freed when the AS runs out of refs.
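A hedged usage sketch, assuming the new helper is
address_space_init_shareable(root, name) and that 'sysmem' is the root
MemoryRegion shared by two masters:
    AddressSpace *as_a = address_space_init_shareable(sysmem, "dma");
    AddressSpace *as_b = address_space_init_shareable(sysmem, "dma");
    /* Both masters are expected to get the same AddressSpace back; it is
     * freed automatically once its last reference is dropped. */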
Backports commit f0c02d15b57da6f5463e3768aa0cfeedccf4b8f4 from qemu
Add a function to return the AddressSpace for a CPU based on
its numerical index. (Callers outside exec.c don't have access
to the CPUAddressSpace struct so can't just fish it out of the
CPUState struct directly.)
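A minimal sketch of the lookup from outside exec.c (the ARM secure
address-space index is used purely as an example):
    /* cs is a CPUState *; the index selects which of its address
     * spaces we want. */
    AddressSpace *as = cpu_get_address_space(cs, ARMASIdx_S);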
Backports commit 651a5bc03705102de519ebf079a40ecc1da991db from qemu
When looking up the MemoryRegionSection for the new TLB entry in
tlb_set_page_with_attrs(), use cpu_asidx_from_attrs() to determine
the correct address space index for the lookup, and pass it into
address_space_translate_for_iotlb().
Backports commit d7898cda81b6efa6b2d7a749882695cdcf280eaa from qemu
Rather than setting cpu->as unconditionally in cpu_exec_init
(and then having target-i386 override this later), don't set
it until the first call to cpu_address_space_init.
This requires us to initialise the address space for
both TCG and KVM (KVM doesn't need the AS listener but
it does require cpu->as to be set).
For target CPUs which don't set up any address spaces (currently
everything except i386), add the default address_space_memory
in qemu_init_vcpu().
Backports commit 56943e8cc14b7eeeab67d1942fa5d8bcafe3e53f from qemu
For inbound migration we really want to be able to set the PSR without
having any side effects, but cpu_put_psr() calls cpu_check_irqs() which
might try to deliver CPU interrupts. Split cpu_put_psr() into the
no-side-effect and side-effect parts.
This includes reordering the cpu_check_irqs() to the end of cpu_put_psr(),
because that function may actually end up calling cpu_interrupt(), which
does not seem like a good thing to happen in the middle of updating the PSR.
Backports commit 4552a09dd4055c806b7df8c595dc0fb8951834be from qemu
arm_regime_using_lpae_format checks whether the LPAE extension is used
for stage 1 translation regimes. MMU indexes which do not refer
exclusively to a stage 1 regime won't work with this method.
In case of ARMMMUIdx_S12NSE0 or ARMMMUIdx_S12NSE1, offset these values
by ARMMMUIdx_S1NSE0 to get the right index indicating a stage 1
translation regime.
Rename also the function to arm_s1_regime_using_lpae_format and update
the comments to reflect the change.
Backports commit deb2db996cbb9470b39ae1e383791ef34c4eb3c2 from qemu
memcpy can take a large amount of time for small reads and writes.
Handle the common case of reading s/g descriptors from memory (there
is no corresponding "write" case that is as common, because writes
often use address_space_st* functions) by inlining the relevant
parts of address_space_read into the caller.
Backports commit 3cc8f884996584630734a90c9b3c535af81e3c92 from qemu
We want to inline the case where there is only one iteration, because
then the compiler can also inline the memcpy. As a start, extract
everything after the first address_space_translate call.
Backports commit a203ac702e0720135fac8b1f2061d119814c1798 from qemu
Replace qemu_ram_free_from_ptr() with qemu_ram_free().
The only difference between qemu_ram_free_from_ptr() and
qemu_ram_free() is that g_free_rcu() is used instead of
call_rcu(reclaim_ramblock). We can safely replace it because:
* RAM blocks allocated by qemu_ram_alloc_from_ptr() always have
RAM_PREALLOC set;
* reclaim_ramblock(block) will do nothing except g_free(block)
if RAM_PREALLOC is set in block->flags.
Backports commit a29ac16632aec6065c72985b9f7eeb1ca6fbef4a from qemu
Qemu does not generally perform alignment checks. However, the ARM ARM
requires implementation of alignment exceptions for a number of cases
including LDREX, and Windows-on-ARM relies on this.
This change adds plumbing to enable alignment checks on loads using
MO_ALIGN, a do_unaligned_access hook to raise the exception (data
abort), and uses the new aligned loads in LDREX (for all but
single-byte loads).
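A hedged sketch of what an aligned load looks like in the front end
(variable names follow the usual translate.c conventions and are
illustrative):
    /* OR-ing MO_ALIGN into the memop makes the slow path raise a data
     * abort instead of silently allowing the unaligned access. */
    tcg_gen_qemu_ld_i32(tmp, addr, get_mem_index(s), MO_UL | MO_ALIGN);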
Backports commit 30901475b91ef1f46304404ab4bfe89097f61b96 from qemu
Add a function to find a RAMBlock by name; use it in two
of the places that already open code that loop; we've
got another use later in postcopy.
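A hedged sketch of the lookup, assuming the new helper is named
qemu_ram_block_by_name() and taking "pc.ram" as an example block name:
    RAMBlock *rb = qemu_ram_block_by_name("pc.ram");
    if (!rb) {
        error_report("RAM block 'pc.ram' not found");
    }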
Backports commit e3dd74934f2d2c8c67083995928ff68e8c1d0030 from qemu
Postcopy sends RAMBlock names and offsets over the wire (since it can't
rely on the order of ramaddr being the same), and it starts out with
HVA fault addresses from the kernel.
qemu_ram_block_from_host translates a HVA into a RAMBlock, an offset
in the RAMBlock and the global ram_addr_t value.
Rewrite qemu_ram_addr_from_host to use qemu_ram_block_from_host.
Provide qemu_ram_get_idstr since it's the actual name text sent on the
wire.
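A hedged sketch of resolving a postcopy fault address; the prototype with
a round_offset flag and an offset out-parameter is assumed, and
'fault_addr' stands for the HVA obtained from the kernel:
    ram_addr_t offset;
    RAMBlock *rb = qemu_ram_block_from_host(fault_addr, false, &offset);
    if (rb) {
        /* This idstr plus the offset is what goes on the wire. */
        const char *name = qemu_ram_get_idstr(rb);
        /* ... ask the source to send 'name' + offset ... */
    }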
Backports commit 422148d3e56c3c9a07c0cf36c1e0a0b76f09c357 from qemu
This makes ROM blocks resizeable. This infrastructure is required for other
functionality we have queued.
Backports commit aaf03019175949eda5087329448b8a0033b89479 from qemu
A QEMU breakpoint match is not necessarily an architectural breakpoint
match. If an exception is generated unconditionally during translation,
it is hardly possible to ignore it in the debug exception handler.
Generate a call to a helper to check CPU breakpoints and raise an
exception only if any breakpoint matches architecturally.
Backports commit 5d98bf8f38c17a348ab6e8af196088cd4953acd0 from qemu
It is no longer used, so tidy up everything reached by it.
This includes the gen_opc_* arrays, the search_pc parameter
and the inline gen_intermediate_code_internal functions.
Backports commit 4e5e1215156662b2b153255c49d4640d82c5568b from qemu