The MC68040 MMU provides the size of the access that
triggers the page fault.
This size is recorded in the Special Status Word, which is written
to the stack frame of the access fault exception.
We therefore need the size in m68k_cpu_unassigned_access() and
m68k_cpu_handle_mmu_fault().
To make it available there, this patch modifies the prototypes of
the handle_mmu_fault handler, tlb_fill() and probe_write().
do_unassigned_access() already includes a size parameter.
This patch also updates the handle_mmu_fault handlers and
tlb_fill() of all targets (parameter only, no code change).
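For reference, the reworked prototypes look roughly like this (a
sketch based on the post-commit upstream QEMU signatures):

    void tlb_fill(CPUState *cs, target_ulong addr, int size,
                  MMUAccessType access_type, int mmu_idx,
                  uintptr_t retaddr);

    /* probe_write() gains the same size parameter. */
    void probe_write(CPUArchState *env, target_ulong addr, int size,
                     int mmu_idx, uintptr_t retaddr);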
Backports commit 98670d47cd8d63a529ff230fd39ddaa186156f8c from qemu
Call the new cpu_transaction_failed() hook at the places where
CPU generated code interacts with the memory system:
io_readx()
io_writex()
get_page_addr_code()
Any access from C code (e.g. via cpu_physical_memory_rw(),
address_space_rw(), ld/st_*_phys()) will *not* trigger CPU exceptions
via cpu_transaction_failed(). Handling for transactions failures for
this kind of call should be done by using a function which returns a
MemTxResult and treating the failure case appropriately in the
calling code.
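A minimal sketch of that pattern, assuming the MemTx* API as it
exists in upstream QEMU (the buffer and address are hypothetical):

    uint8_t buf[4];
    MemTxResult res = address_space_read(as, addr,
                                         MEMTXATTRS_UNSPECIFIED,
                                         buf, sizeof(buf));
    if (res != MEMTX_OK) {
        /* Handle the failed transaction in the caller; no CPU
           exception is raised for accesses made from C code. */
    }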
In an ideal world we would not generate CPU exceptions for
instruction fetch failures in get_page_addr_code() but instead wait
until the code translation process tried a load and it failed;
however that change would require too great a restructuring and
redesign to attempt at this point.
Backports commit 04e3aabde397e7abc78ba1ce6cbd144d5fbb1722 from qemu
We already include exec/address-spaces.h and exec/memory.h in
cputlb.c; the include of qemu/timer.h appears to be a fossil.
Backports commit 40978428853e2f7b4597ab2a9ffeb187333802dc from qemu
TGT_LE and TGT_BE are not size dependent and do not need to be
redefined. The others are no longer used at all.
Backports commit c86c6e4c80fee4d9423bedb10ba9e9c4aa68f861 from qemu
Probe for whether the specified guest write access is permitted.
If it is not permitted then an exception will be taken in the same
way as if this were a real write access (and we will not return).
Otherwise the function will return, and there will be a valid
entry in the TLB for this access.
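A sketch of the intended usage from a target helper (the destination
address and mmu index are hypothetical; the signature is the
pre-size-parameter one added by this commit):

    /* Ensure the destination is writable before doing partial work;
       if it is not, this exits via the usual fault path and does
       not return. */
    probe_write(env, dest_addr, mmu_idx, GETPC());
    /* From here on, stores to this page cannot fault. */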
Backports commit 3b4afc9e75ab1a95f33e41f462921093f8a109c4 from qemu
The return address argument to the softmmu template helpers was
confused. In the legacy case, we wanted to indicate that there
is no return address, and so passed in NULL. However, we then
immediately subtracted GETPC_ADJ from NULL, resulting in a non-zero
value, indicating the presence of an (invalid) return address.
Push the GETPC_ADJ subtraction down to the only point it's required:
immediately before use within cpu_restore_state_from_tb, after all
NULL pointer checks have been completed.
This makes GETPC and GETRA identical. Remove GETRA as the lesser
used macro, replacing all uses with GETPC.
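After this change both macros reduce to the raw host return address
(a sketch of the resulting definition, per the commit description):

    /* The GETPC_ADJ subtraction now happens inside
       cpu_restore_state_from_tb, after the NULL checks. */
    #define GETPC() ((uintptr_t)__builtin_return_address(0))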
Backports commit 01ecaf438b1eb46abe23392c8ce5b7628b0c8cf5 from qemu
Previously we allowed fully unaligned operations, but not operations
that are aligned but with less alignment than the operation size.
In addition, arm32, ia64, mips, and sparc had been omitted from the
previous overalignment patch, which would have led to that alignment
being enforced.
Backports commit 85aa80813dd9f5c1f581c743e45678a3bee220f8 from qemu
As it currently stands, QEMU does not properly handle self-modifying code
when the write is unaligned and crosses a page boundary. The procedure
for handling a write to the current translation block is to write-protect
the current translation block, catch the write, split up the translation
block into the current instruction (which remains write-protected so that
the current instruction is not modified) and the remaining instructions
in the translation block, and then restore the CPU state to before the
write occurred so the write will be retried and successfully executed.
However, since unaligned writes across pages are split into one-byte
writes for simplicity, writes to the second page (which is not the
current TB) may succeed before a write to the current TB is attempted.
Because those writes are not invalidated before the CPU state is
restored after splitting the TB, they are performed a second time,
corrupting the second page. Credit goes to Patrick Hulin for
discovering this.
In recent 64-bit versions of Windows running in emulated mode, this
results either in severe instability (a BSOD after a couple of minutes
of uptime) or in a complete failure to boot. Windows performs one or more
8-byte unaligned self-modifying writes (xors) which intersect the end
of the current TB and the beginning of the next TB, which runs into the
aforementioned issue. This commit fixes that issue by making the
unaligned write loop perform the writes in forwards order, instead of
reverse order. This way, QEMU immediately tries to write to the current
TB, and splits the TB before any write to the second page is executed.
The write then proceeds as intended. With this patch applied, I am able
to boot and use Windows 7 64-bit and Windows 10 64-bit in QEMU without
KVM.
Per Richard Henderson's input, this patch also ensures the second page
is in the TLB before executing the write loop, to ensure the second
page is mapped.
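The resulting store loop looks roughly like this (a sketch modeled
on softmmu_template.h; DATA_SIZE, MMUSUFFIX and oi are the template's
own parameters):

    /* Write in forwards order so the first byte hits the
       write-protected page of the current TB and splits it before
       anything is written to the second page. */
    for (i = 0; i < DATA_SIZE; ++i) {
        /* Big-endian extract: most-significant byte first. */
        uint8_t val8 = val >> (((DATA_SIZE - 1) * 8) - (i * 8));
        glue(helper_ret_stb, MMUSUFFIX)(env, addr + i, val8,
                                        oi, retaddr);
    }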
The original discussion of the issue is located at
http://lists.nongnu.org/archive/html/qemu-devel/2014-08/msg02161.html.
Backports commit 81daabaf7a572f138a8b88ba6eea556bdb0cce46 from qemu
There are currently 22 invocations of this function,
and we're about to increase that number.
Backports commit 7e9a7c50d9a400ef51242d661a261123c2cc9485 from qemu
Some architectures (e.g. ARMv8) require an address to be aligned
to a size greater than the size of the memory access itself.
QEMU's existing low-cost alignment check is enough to implement
such checks, but we need a way to specify the required alignment
size.
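With this change a memory op can encode an alignment larger than its
access size; a sketch, assuming the MO_ALIGN_* encoding this commit
introduces (the load itself is a hypothetical example):

    /* 8-byte target-endian load that must be 16-byte aligned,
       e.g. for an ARMv8 load-pair. */
    TCGMemOp op = MO_TEQ | MO_ALIGN_16;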
Backports commit 1f00b27f17518a1bcb4cedca49eaec96a4d560bd from qemu
Pass the MemTxAttrs for the memory access to iotlb_to_region(); this
allows it to determine the correct AddressSpace to use for the lookup.
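The reworked signature, per the upstream commit (a sketch):

    MemoryRegion *iotlb_to_region(CPUState *cpu, hwaddr index,
                                  MemTxAttrs attrs);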
Backports commit a54c87b68a0410d0cf6f8b84e42074a5cf463732 from qemu
Now that the cpu_ld/st_* functions directly call helper_ret_ld/st, we
can drop the old helper_ld/st functions.
Backports commit b8611499b940b1b4db67aa985e3a844437bcbf00 from qemu
This patch introduces several helpers that pass a return address
pointing into the TB. A correct return address allows correct
restoring of the guest PC and icount. These functions should be used
when helpers embedded in a TB invoke memory operations.
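A sketch of the intended usage, with a hypothetical target helper;
cpu_ldl_data_ra() stands in for the new retaddr-taking variants:

    uint32_t helper_my_load(CPUArchState *env, target_ulong addr)
    {
        /* GETPC() captures the return address into the TB so the
           guest PC and icount can be restored on a fault. */
        return cpu_ldl_data_ra(env, addr, GETPC());
    }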
Backports commit 282dffc8a4bfe8724548cabb8a26698bde0a6e18 from qemu
Add a MemTxAttrs field to the IOTLB, and allow target-specific
code to set it via a new tlb_set_page_with_attrs() function;
pass the attributes through to the device when making IO accesses.
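The new function's signature, per the upstream commit (a sketch):

    void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
                                 hwaddr paddr, MemTxAttrs attrs,
                                 int prot, int mmu_idx,
                                 target_ulong size);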
Backports commit fadc1cbe85c6b032d5842ec0d19d209f50fcb375 from qemu
Make the CPU iotlb a structure rather than a plain hwaddr;
this will allow us to add transaction attributes to it.
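The shape of the new structure (a sketch; the attrs field is what
the MemTxAttrs commit above adds on top of this one):

    typedef struct CPUIOTLBEntry {
        hwaddr addr;       /* previously the entire iotlb entry */
        MemTxAttrs attrs;  /* added by the follow-up commit */
    } CPUIOTLBEntry;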
Backports commit e469b22ffda40188954fafaf6e3308f58d50f8f8 from qemu
Rather than retaining io_mem_read/write as simple wrappers around
the memory_region_dispatch_read/write functions, make the latter
public and change all the callers to use them, since we need to
touch all the callsites anyway to add MemTxAttrs and MemTxResult
support. Delete io_mem_read and io_mem_write entirely.
(All the callers currently pass MEMTXATTRS_UNSPECIFIED
and convert the return value back to bool or ignore it.)
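A sketch of a converted callsite (the variables are hypothetical;
the dispatch signature is upstream QEMU's):

    uint64_t val;
    MemTxResult res = memory_region_dispatch_read(mr, addr, &val,
                                                  size,
                                                  MEMTXATTRS_UNSPECIFIED);
    if (res != MEMTX_OK) {
        /* io_mem_read used to collapse this into a bool. */
    }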
Backports commit 3b6434953934e6d4a776ed426d8c6d6badee176f from qemu
After the previous patch, TLBs will be flushed on every change to
the memory mapping. This patch augments that with synchronization
of the MemoryRegionSections referred to in the iotlb array.
With this change, it is guaranteed that iotlb_to_region will access
the correct memory map, even once the TLB is accessed outside
the BQL.
Backports commit 9d82b5a792236db31a75b9db5c93af69ac07c7c5 from qemu
These modifiers control, on a per-memory-op basis, whether
unaligned memory accesses are allowed. The default setting
reflects the target's definition of ALIGNED_ONLY.
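For example (a sketch; tcg_gen_qemu_ld_i32() is the upstream op
emitter, the operands are hypothetical):

    /* Force an alignment check on this load even if the target is
       not ALIGNED_ONLY; MO_UNALN requests the opposite. */
    tcg_gen_qemu_ld_i32(val, addr, mmu_idx, MO_TEUL | MO_ALIGN);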
Backports commit dfb36305626636e2e07e0c5acd3a002a5419399e from qemu
The extra information is not yet used but it is now available.
This requires minor changes through all of the tcg backends.
Backports commit 3972ef6f830d65e9bacbd31257abedc055fd6dc8 from qemu
- Allow registering handlers separately for invalid memory accesses
- Add new memory events for hooking:
- UC_MEM_READ_INVALID, UC_MEM_WRITE_INVALID, UC_MEM_FETCH_INVALID
- UC_HOOK_MEM_READ_PROT, UC_HOOK_MEM_WRITE_PROT, UC_HOOK_MEM_FETCH_PROT
- Rename UC_ERR_EXEC_PROT to UC_ERR_FETCH_PROT
- Change the uc_hook_add() API so the event type @type can be a combination of hook types (see the sketch below)
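A sketch of registering one callback for a combination of the new
hook types (the callback is hypothetical, and the signature assumed
here is the explicit begin/end form of uc_hook_add(); begin=1, end=0
is the Unicorn convention for hooking all addresses):

    uc_hook hh;
    uc_hook_add(uc, &hh,
                UC_HOOK_MEM_READ_PROT | UC_HOOK_MEM_WRITE_PROT,
                hook_mem_prot, NULL, 1, 0);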
As pointed out by aquynh, the return types are actually different: a
uc_cb_eventmem_t callback returns a bool, while uc_cb_hookmem_t has a
void return type.
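The two callback prototypes in question look roughly like this, per
the Unicorn header:

    /* Valid memory accesses: no return value. */
    typedef void (*uc_cb_hookmem_t)(uc_engine *uc, uc_mem_type type,
            uint64_t address, int size, int64_t value,
            void *user_data);

    /* Invalid memory events: return true to continue emulation. */
    typedef bool (*uc_cb_eventmem_t)(uc_engine *uc, uc_mem_type type,
            uint64_t address, int size, int64_t value,
            void *user_data);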
This reverts commit cb2b97f26c.