When we handle a signal from a fault within a user-only memory helper,
we cannot call cpu_restore_state() with the PC found in the signal frame.
Use a TLS variable, helper_retaddr, to record the unwind start point
used to find the faulting guest instruction.
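As an illustrative sketch of the pattern (the helper body and name are
assumed, not the commit's actual code):

    #include <stdint.h>

    /* TLS unwind start point, as described above. */
    static __thread uintptr_t helper_retaddr;

    /* Hypothetical user-only load helper: record the host return address
     * before touching guest memory so the SIGSEGV handler can unwind from
     * helper_retaddr rather than from the PC in the signal frame. */
    static inline uint8_t helper_ldb(const uint8_t *haddr)
    {
        uint8_t ret;
        helper_retaddr = (uintptr_t)__builtin_return_address(0);
        ret = *haddr;       /* may fault */
        helper_retaddr = 0; /* later faults unwind from the frame PC again */
        return ret;
    }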
Backports commit ec603b5584fa71213ef8f324fe89e4b27cc9d2bc from qemu
Call the new cpu_transaction_failed() hook at the places where
CPU-generated code interacts with the memory system:
io_readx()
io_writex()
get_page_addr_code()
Any access from C code (e.g. via cpu_physical_memory_rw(),
address_space_rw(), ld/st_*_phys()) will *not* trigger CPU exceptions
via cpu_transaction_failed(). Handling of transaction failures for
this kind of call should be done by using a function which returns a
MemTxResult and treating the failure case appropriately in the
calling code.
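For illustration, such a caller would check the result itself, roughly
like this fragment (signatures vary slightly between QEMU versions):

    MemTxResult res = address_space_rw(as, addr, MEMTXATTRS_UNSPECIFIED,
                                       buf, len, false /* read */);
    if (res != MEMTX_OK) {
        /* No CPU exception is raised on this path; handle the failed
         * transaction here in the calling code. */
    }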
In an ideal world we would not generate CPU exceptions for
instruction fetch failures in get_page_addr_code() but instead wait
until the code translation process tried a load and it failed;
however that change would require too great a restructuring and
redesign to attempt at this point.
Backports commit 04e3aabde397e7abc78ba1ce6cbd144d5fbb1722 from qemu
Some code paths can lead to atomic accesses racing with memset()
on cpu->tb_jmp_cache, which can result in torn reads/writes
and is undefined behaviour in C11.
These torn accesses are unlikely to show up as bugs, but from code
inspection they seem possible. For example, tb_phys_invalidate does:
    /* remove the TB from the hash list */
    h = tb_jmp_cache_hash_func(tb->pc);
    CPU_FOREACH(cpu) {
        if (atomic_read(&cpu->tb_jmp_cache[h]) == tb) {
            atomic_set(&cpu->tb_jmp_cache[h], NULL);
        }
    }
Here atomic_set might race with a concurrent memset (such as the
ones scheduled via "unsafe" async work, e.g. tlb_flush_page) and
therefore we might end up with a torn pointer (or who knows what,
because we are under undefined behaviour).
This patch converts parallel accesses to cpu->tb_jmp_cache to use
atomic primitives, thereby bringing these accesses back to defined
behaviour. The price to pay is to potentially execute more instructions
when clearing cpu->tb_jmp_cache, but given how infrequently these clears happen
and the small size of the cache, the performance impact I have measured
is within noise range when booting debian-arm.
Note that under "safe async" work (e.g. do_tb_flush) we could use memset
because no other vcpus are running. However I'm keeping these accesses
atomic as well to keep things simple and to avoid confusing analysis
tools such as ThreadSanitizer.
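A self-contained C11 sketch of the clearing pattern (the size and names
are illustrative, not QEMU's actual definitions):

    #include <stdatomic.h>
    #include <stddef.h>

    #define TB_JMP_CACHE_SIZE 4096

    struct TranslationBlock;
    static _Atomic(struct TranslationBlock *) tb_jmp_cache[TB_JMP_CACHE_SIZE];

    /* Replace memset() with per-entry relaxed atomic stores so that
     * concurrent atomic readers can never observe a torn pointer. */
    static void tb_jmp_cache_clear(void)
    {
        for (size_t i = 0; i < TB_JMP_CACHE_SIZE; i++) {
            atomic_store_explicit(&tb_jmp_cache[i], NULL,
                                  memory_order_relaxed);
        }
    }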
Backports commit f3ced3c59287dabc253f83f0c70aa4934470c15e from qemu
This replaces the env1 and page_index variables with env and index
so that we can use the VICTIM_TLB_HIT macro later.
Backports commit 3416343255cbe01fbe12e5e36cd4bb5042425b27 from qemu
In the case where the conditional write is the first write to the page,
TLB_NOTDIRTY will be set and stop_the_world is triggered. Handle this as
a special case and set the dirty bit. After that fall through to the
actual atomic instruction below.
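A sketch of the idea (not the exact QEMU code; the macros and helpers
are cputlb internals):

    /* A conditional write to a clean page: set the dirty bit, strip
     * TLB_NOTDIRTY and fall through to the atomic operation. Anything
     * else special in the entry (e.g. MMIO) still stops the world. */
    if (unlikely(tlb_addr & TLB_NOTDIRTY)) {
        tlb_set_dirty(ENV_GET_CPU(env), addr);
        tlb_addr &= ~TLB_NOTDIRTY;
    }
    if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) {
        goto stop_the_world;
    }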
Backports commit 7f9af1abdcc69fd1d3d8d2be68464329600616d6 from qemu
In get_page_addr_code(), if the guest PC doesn't correspond to RAM
then we currently run the CPU's do_unassigned_access() hook if it has
one, and otherwise we give up and exit QEMU with a more-or-less
useful message. This code assumes that the do_unassigned_access hook
will never return, because if it does then we'll plough on attempting
to use a non-RAM TLB entry to get a RAM address and will abort() in
qemu_ram_addr_from_host_nofail(). Unfortunately some CPU
implementations of this hook do return: Microblaze, SPARC and the ARM
v7M.
Change the code to call report_bad_exec() if the hook returns, as
well as if it didn't have one. This means we can tidy it up to use
the cpu_unassigned_access() function which wraps the "get the CPU
class and call the hook if it has one" work, since we aren't trying
to distinguish "no hook" from "hook existed and returned" any more.
This brings the handling of this hook into line with the handling
used for data accesses, where "hook returned" is treated the
same as "no hook existed" and gets you the default behaviour.
Backports commit 44d7ce0ef39cb45e13d384574d79799eb3d39834 from qemu
This moves the helper function closer to where it is called and updates
the error message to report via error_report instead of the deprecated
fprintf.
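The shape of the change (message text abbreviated here):

    /* before */
    fprintf(stderr, "Trying to execute code outside RAM or ROM at 0x"
            TARGET_FMT_lx "\n", addr);
    /* after: error_report() supplies the newline and a location prefix */
    error_report("Trying to execute code outside RAM or ROM at 0x"
                 TARGET_FMT_lx, addr);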
Backports commit 857baec1d9e80947f0c1007c3a3d2331d62b4b53 from qemu
While the varargs approach was flexible, the original MTTCG work ended
up having to munge the bits into a bitmap so the data could be used in
deferred work helpers. Instead of hiding that in cputlb, we push the
change to the API and make it take a bitmap of MMU indexes instead.
For ARM some of the resulting flushes end up being quite long, so to aid
readability I've tended to move the index shifting to a new line so all
the bits being OR-ed together line up nicely, for example:
    tlb_flush_page_by_mmuidx(other_cs, pageaddr,
                             (1 << ARMMMUIdx_S1SE1) |
                             (1 << ARMMMUIdx_S1SE0));
Backports commit 0336cbf8532935d8e23c2aabf3e2ce2c0697b6ac from qemu
We have never had the concept of global TLB entries that would avoid
the flush, so we never actually use this flag. Drop it and make clear
that tlb_flush is the sledge-hammer it has always been.
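The resulting prototype change (per this commit):

    /* before: the flag was accepted but never avoided any work */
    void tlb_flush(CPUState *cpu, int flush_global);

    /* after: a full flush, unconditionally */
    void tlb_flush(CPUState *cpu);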
Backports commit d10eb08f5d8389c814b554d01aa2882ac58221bf from qemu
Some files contain multiple #includes of the same header file.
Remove most of these unnecessary duplicates using
scripts/clean-includes.
Backports commit 814bb12a561d36aeb5ae4440ad43d2b0761d76da from qemu
Allow qemu to build on 32-bit hosts without 64-bit atomic ops.
Even if we only allow 32-bit hosts to multi-thread emulate 32-bit
guests, we still need some way to handle the 32-bit guest using a
64-bit atomic operation. Do so by dropping back to single-step.
Backports commit df79b996a7b21c6ea7847f7927a2e1a294b86c72 from qemu
Force the use of cmpxchg16b on x86_64.
Wikipedia suggests that only very old AMD64 CPUs (circa 2004) lacked
this instruction. Further, it's required by Windows 8, so no new CPUs
will ever omit it.
If we truly cared about these CPUs, then we could check for the
instruction at startup time and avoid executing the paths that use it.
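Such a startup check could look like this sketch (CPUID leaf 1, ECX
bit 13 reports CMPXCHG16B; the helper name is hypothetical):

    #include <cpuid.h>
    #include <stdbool.h>

    static bool have_cmpxchg16b(void)
    {
        unsigned eax, ebx, ecx, edx;

        if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
            return false;
        }
        return (ecx & bit_CMPXCHG16B) != 0;
    }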
Backports commit 7ebee43ee3e2fcd7b5063058b7ef74bc43216733 from qemu
Add all of cmpxchg, op_fetch, fetch_op, and xchg.
Handle both endiannesses, and sizes up to 8 bytes.
Handle expanding non-atomically, when emulating in serial.
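A minimal sketch of one such helper (the real helpers also take guest
addresses, mmu indexes and return addresses, and handle byte-swapping):

    #include <stdint.h>

    /* 4-byte compare-and-swap returning the old value, built on the
     * GCC/Clang __atomic builtins. */
    static uint32_t cmpxchg_u32(uint32_t *haddr, uint32_t cmpv, uint32_t newv)
    {
        /* On failure cmpv is updated to the value actually observed, so
         * returning it gives cmpxchg semantics in both cases. */
        __atomic_compare_exchange_n(haddr, &cmpv, newv, false,
                                    __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
        return cmpv;
    }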
Backports commit c482cb117cc418115ca9c6d21a7a2315414c0a40 from qemu
TGT_LE and TGT_BE are not size-dependent and do not need to be
redefined. The others are no longer used at all.
Backports commit c86c6e4c80fee4d9423bedb10ba9e9c4aa68f861 from qemu
The return address argument to the softmmu template helpers was
confused. In the legacy case, we wanted to indicate that there
is no return address, and so passed in NULL. However, we then
immediately subtracted GETPC_ADJ from NULL, resulting in a non-zero
value, indicating the presence of an (invalid) return address.
Push the GETPC_ADJ subtraction down to the only point it's required:
immediately before use within cpu_restore_state_from_tb, after all
NULL pointer checks have been completed.
This makes GETPC and GETRA identical. Remove GETRA as the lesser-used
macro, replacing all uses with GETPC.
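After this change the surviving macro is simply (modulo QEMU version
differences):

    #define GETPC() ((uintptr_t)__builtin_return_address(0))

with the GETPC_ADJ subtraction applied inside cpu_restore_state_from_tb()
once the NULL checks have passed.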
Backports commit 01ecaf438b1eb46abe23392c8ce5b7628b0c8cf5 from qemu
There are currently 22 invocations of this function,
and we're about to increase that number.
Backports commit 7e9a7c50d9a400ef51242d661a261123c2cc9485 from qemu
Move the old qemu_ram_addr_from_host to memory_region_from_host and
make it return an offset within the region. For qemu_ram_addr_from_host
return the ram_addr_t directly, similar to what it was before
commit 1b5ec23 ("memory: return MemoryRegion from qemu_ram_addr_from_host",
2013-07-04).
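The resulting prototypes (fragment, per this commit):

    /* Look up the MemoryRegion containing a host pointer; the offset
     * within that region is returned through *offset. */
    MemoryRegion *memory_region_from_host(void *ptr, ram_addr_t *offset);

    /* Map a host pointer straight back to a ram_addr_t. */
    ram_addr_t qemu_ram_addr_from_host(void *ptr);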
Backports commit 07bdaa4196b51bc7ffa7c3f74e9e4a9dc8a7966a from qemu
exec-all.h contains TCG-specific definitions. It is not needed outside
TCG-specific files such as translate.c, exec.c or *helper.c.
One generic function had snuck into include/exec/exec-all.h; move it to
include/qom/cpu.h.
Backports commit 63c915526d6a54a95919ebece83fa9ca631b2508 from qemu
This field was used for telling cpu_interrupt() to unlink a chain of TBs
being executed when it worked that way. Now cpu_interrupt() doesn't do
this anymore, so we no longer need the field.
Backports commit 3213525f8ab48742db09dab18cb9ae6f36a6c921 from qemu
To prepare for multi-arch, cputlb.c should only have awareness of one
single architecture. This means it should not have access to the full
CPU lists which may be heterogeneous. Instead, push the CPU_LOOP() up
to the one and only caller in exec.c.
Backports commit 9a13565d52bfd321934fb44ee004bbaf5f5913a8 from qemu
To avoid cluttering the code with #ifdef legs we wrap up the print
statements into a tlb_debug() macro. As access to the virtual TLB can
get quite heavy, defining DEBUG_TLB_LOG will ensure all the logs go to
the qemu_log target of CPU_LOG_MMU instead of stderr. This remains
compile-time optional, as these debug statements haven't been assessed
for usefulness in user-visible logging.
I've also removed DEBUG_TLB_CHECK which wasn't used.
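A simplified sketch of the macro (the real one is also gated on
DEBUG_TLB so it compiles away entirely when debugging is disabled):

    #ifdef DEBUG_TLB_LOG
    # define tlb_debug(fmt, ...) \
        qemu_log_mask(CPU_LOG_MMU, "%s: " fmt, __func__, ## __VA_ARGS__)
    #else
    # define tlb_debug(fmt, ...) \
        fprintf(stderr, "%s: " fmt, __func__, ## __VA_ARGS__)
    #endif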
Backports commit 8526e1f4e418443a4d6ed0714487e47d45ef9c98 from qemu
All references to mr->ram_addr are replaced by
memory_region_get_ram_addr(mr) (except for a few assertions that are
replaced with mr->ram_block).
Backports commit 8e41fb63c5bf29ecabe0cee1239bf6230f19978a from qemu
Clean up includes so that osdep.h is included first and headers
which it implies are not included manually.
This commit was created with scripts/clean-includes.
Backports commit 7b31bbc2e68605ab2f10dc609dd54cf4c7b5f49a from qemu
Pass the MemTxAttrs for the memory access to iotlb_to_region(); this
allows it to determine the correct AddressSpace to use for the lookup.
Backports commit a54c87b68a0410d0cf6f8b84e42074a5cf463732 from qemu
When looking up the MemoryRegionSection for the new TLB entry in
tlb_set_page_with_attrs(), use cpu_asidx_from_attrs() to determine
the correct address space index for the lookup, and pass it into
address_space_translate_for_iotlb().
Backports commit d7898cda81b6efa6b2d7a749882695cdcf280eaa from qemu
Change tlb_set_dirty() to accept a CPU instead of an env pointer. This
allows for removal of another CPUArchState usage from prototypes that
need to be QOMified.
Backports commit bcae01e468d961ad9afaf4148329147e4be209ab from qemu
This is set to true when the index is for an instruction fetch
translation.
The core get_page_addr_code() sets it, as do the SOFTMMU_CODE_ACCESS
accessors.
All targets ignore it for now, and all other callers pass "false".
This will allow targets that wish to split the mmu index between
instruction and data accesses to do so. A subsequent patch will
do just that for PowerPC.
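Concretely, cpu_mmu_index() gains the flag; a target that ignores it
looks like this sketch:

    /* Per-target hook choosing the MMU index for an access. Most targets
     * ignore "ifetch"; PowerPC will use it to pick an instruction-side
     * index. */
    static inline int cpu_mmu_index(CPUArchState *env, bool ifetch)
    {
        return 0; /* this illustrative target ignores the flag */
    }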
Backports commit 97ed5ccdee95f0b98bedc601ff979e368583472c from qemu
Guest CPU TLB maintenance operations may be sufficiently
specialized to only need to flush TLB entries corresponding
to a particular MMU index. Implement cputlb functions for
this, to avoid the inefficiency of flushing TLB entries
which we don't need to.
Backports commit d7a74a9d4a68e27b3a8ceda17bb95cb0a23d8e4d from qemu
The cpu_physical_memory_reset_dirty() function is sometimes used
together with cpu_physical_memory_get_dirty(). This is not atomic since
two separate accesses to the dirty memory bitmap are made.
Turn cpu_physical_memory_reset_dirty() and
cpu_physical_memory_clear_dirty_range_type() into the atomic
cpu_physical_memory_test_and_clear_dirty().
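A usage sketch of the combined primitive (the client constant is
illustrative):

    if (cpu_physical_memory_test_and_clear_dirty(start, length,
                                                 DIRTY_MEMORY_VGA)) {
        /* The range was dirty and has been cleared in one atomic step;
         * no other thread can slip in between the check and the clear. */
    }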
Backports commit 03eebc9e3246b9b3f5925aa41f7dfd7c1e467875 from qemu
These days modification of the TLB is done in notdirty_mem_write,
so the virtual address and env pointer are unnecessary.
The new name of the function, tlb_unprotect_code, is consistent with
tlb_protect_code.
Backports commit 9564f52da7eb061326956ed9a468935e3352512d from qemu
Add a MemTxAttrs field to the IOTLB, and allow target-specific
code to set it via a new tlb_set_page_with_attrs() function;
pass the attributes through to the device when making IO accesses.
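The new entry point (prototype per this commit); tlb_set_page() remains
as a wrapper passing MEMTXATTRS_UNSPECIFIED:

    void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
                                 hwaddr paddr, MemTxAttrs attrs,
                                 int prot, int mmu_idx, target_ulong size);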
Backports commit fadc1cbe85c6b032d5842ec0d19d209f50fcb375 from qemu
Make the CPU iotlb a structure rather than a plain hwaddr;
this will allow us to add transaction attributes to it.
Backports commit e469b22ffda40188954fafaf6e3308f58d50f8f8 from qemu
After the previous patch, TLBs will be flushed on every change to
the memory mapping. This patch augments that with synchronization
of the MemoryRegionSections referred to in the iotlb array.
With this change, it is guaranteed that iotlb_to_region will access
the correct memory map, even once the TLB is accessed outside
the BQL.
Backports commit 9d82b5a792236db31a75b9db5c93af69ac07c7c5 from qemu
- Allow registering a handler separately for invalid memory accesses
- Add new memory events for hooking:
  - UC_MEM_READ_INVALID, UC_MEM_WRITE_INVALID, UC_MEM_FETCH_INVALID
  - UC_HOOK_MEM_READ_PROT, UC_HOOK_MEM_WRITE_PROT, UC_HOOK_MEM_FETCH_PROT
- Rename UC_ERR_EXEC_PROT to UC_ERR_FETCH_PROT
- Change the uc_hook_add() API so the event type @type can be a combination
  of hooking types (see the sketch after this list)
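A sketch of the combined registration this enables (the callback shape
follows Unicorn's public API; the trailing range arguments vary by
version, and begin > end means "all addresses"):

    static bool hook_mem_prot(uc_engine *uc, uc_mem_type type,
                              uint64_t address, int size, int64_t value,
                              void *user_data)
    {
        return false; /* do not handle the access; let it fail */
    }

    /* One registration covering both protection-violation events. */
    uc_hook hh;
    uc_hook_add(uc, &hh, UC_HOOK_MEM_READ_PROT | UC_HOOK_MEM_WRITE_PROT,
                hook_mem_prot, NULL, 1, 0);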