Commit graph

60 commits

Author SHA1 Message Date
Thomas Huth 975924bb2e
accel/tcg: move softmmu_template.h to accel/tcg/
The header is only used by accel/tcg/cputlb.c so we can
move it to the accel/tcg/ folder, too.

Backports commit da1849c1eba50aa372f87c7945d7b230eb2b2fb2 from qemu
2018-03-13 12:27:04 -04:00
Laurent Vivier 0aecb15f3b
accel/tcg: add size parameter in tlb_fill()
The MC68040 MMU provides the size of the access that
triggers the page fault.

This size is set in the Special Status Word which
is written in the stack frame of the access fault
exception.

So we need the size in m68k_cpu_unassigned_access() and
m68k_cpu_handle_mmu_fault().

To be able to do that, this patch modifies the prototype of
handle_mmu_fault handler, tlb_fill() and probe_write().
do_unassigned_access() already includes a size parameter.

This patch also updates the handle_mmu_fault handlers and
tlb_fill() of all targets (parameter change only, no functional change).

Backports commit 98670d47cd8d63a529ff230fd39ddaa186156f8c from qemu
2018-03-06 10:56:34 -05:00
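
A minimal sketch of what a target's tlb_fill() can look like once the size argument exists; the m68k-flavoured fault handling below is illustrative, not the commit's code, and the exact parameter order is an approximation.

    /* Sketch only: tlb_fill() with the added size parameter. */
    void tlb_fill(CPUState *cs, target_ulong addr, int size,
                  MMUAccessType access_type, int mmu_idx, uintptr_t retaddr)
    {
        int ret = m68k_cpu_handle_mmu_fault(cs, addr, size,
                                            access_type, mmu_idx);
        if (unlikely(ret)) {
            /* The fault handler can record the access size (e.g. in the
             * 68040 Special Status Word) before the exception is raised. */
            cpu_loop_exit_restore(cs, retaddr);
        }
    }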
Peter Maydell b1bff4d5c3
cputlb: Support generating CPU exceptions on memory transaction failures
Call the new cpu_transaction_failed() hook at the places where
CPU generated code interacts with the memory system:
io_readx()
io_writex()
get_page_addr_code()

Any access from C code (e.g. via cpu_physical_memory_rw(),
address_space_rw(), ld/st_*_phys()) will *not* trigger CPU exceptions
via cpu_transaction_failed(). Handling of transaction failures for
this kind of call should be done by using a function which returns a
MemTxResult and treating the failure case appropriately in the
calling code.

In an ideal world we would not generate CPU exceptions for
instruction fetch failures in get_page_addr_code() but instead wait
until the code translation process tried a load and it failed;
however that change would require too great a restructuring and
redesign to attempt at this point.

Backports commit 04e3aabde397e7abc78ba1ce6cbd144d5fbb1722 from qemu
2018-03-04 13:14:50 -05:00
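
A sketch of what a target-side handler for the new hook might look like; the parameter list mirrors the description above, but the exception constant is hypothetical and the helper names are approximations rather than the commit's code.

    /* Illustrative target hook: turn a failed bus transaction into a
     * guest exception.  MYTARGET_EXCP_BUS_ERROR is a made-up constant. */
    static void mytarget_transaction_failed(CPUState *cs, hwaddr physaddr,
                                            vaddr addr, unsigned size,
                                            MMUAccessType access_type,
                                            int mmu_idx, MemTxAttrs attrs,
                                            MemTxResult response,
                                            uintptr_t retaddr)
    {
        cs->exception_index = MYTARGET_EXCP_BUS_ERROR;
        /* Recover the guest PC from the host return address and leave. */
        cpu_loop_exit_restore(cs, retaddr);
    }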
Richard Henderson 0245f93c02
cputlb: Remove includes from softmmu_template.h
We already include exec/address-spaces.h and exec/memory.h in
cputlb.c; the include of qemu/timer.h appears to be a fossil.

Backports commit 40978428853e2f7b4597ab2a9ffeb187333802dc from qemu
2018-02-27 12:40:43 -05:00
Richard Henderson 5c79851143
cputlb: Tidy some macros
TGT_LE and TGT_BE are not size dependent and do not need to be
redefined. The others are no longer used at all.

Backports commit c86c6e4c80fee4d9423bedb10ba9e9c4aa68f861 from qemu
2018-02-27 12:36:25 -05:00
Richard Henderson 4da1cfb902
cputlb: Move most of iotlb code out of line
Saves 2k code size off of a cold path.

Backports commit 82a45b96a203a7403427183f1afd3d295222ff7d from qemu
2018-02-27 12:34:19 -05:00
Richard Henderson 5df7c9eec7
cputlb: Move probe_write out of softmmu_template.h
Backports commit 3b08f0a92545ba06fbdeaae929a5172480300c33 from qemu
2018-02-27 12:25:24 -05:00
Yongbok Kim 79e4c001a9
softmmu: Add probe_write()
Probe for whether the specified guest write access is permitted.
If it is not permitted then an exception will be taken in the same
way as if this were a real write access (and we will not return).
Otherwise the function will return, and there will be a valid
entry in the TLB for this access.

Backports commit 3b4afc9e75ab1a95f33e41f462921093f8a109c4 from qemu
2018-02-27 12:20:50 -05:00
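
A minimal usage sketch, assuming a hypothetical helper_store_block() helper: probe the whole destination up front so that any fault is taken before guest state is modified. The signature shown is the one introduced here; a size argument was added by a later commit in this log.

    /* Sketch only: probe before committing a multi-byte store. */
    void helper_store_block(CPUArchState *env, target_ulong addr,
                            uint32_t mmu_idx)
    {
        /* If the write is not permitted, this raises the fault and
         * does not return. */
        probe_write(env, addr, mmu_idx, GETPC());
        /* ...the TLB entry is now valid; the stores below cannot fault... */
    }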
Richard Henderson 1c9c8d3f10
cputlb: Replace SHIFT with DATA_SIZE
Backports commit dea2198201b3e0151d75b42774c51cf2ffe2ca4b from qemu
2018-02-27 12:00:33 -05:00
Richard Henderson 66d79ac959
tcg: Merge GETPC and GETRA
The return address argument to the softmmu template helpers was
confused. In the legacy case, we wanted to indicate that there
is no return address, and so passed in NULL. However, we then
immediately subtracted GETPC_ADJ from NULL, resulting in a non-zero
value, indicating the presence of an (invalid) return address.

Push the GETPC_ADJ subtraction down to the only point it's required:
immediately before use within cpu_restore_state_from_tb, after all
NULL pointer checks have been completed.

This makes GETPC and GETRA identical. Remove GETRA as the lesser
used macro, replacing all uses with GETPC.

Backports commit 01ecaf438b1eb46abe23392c8ce5b7628b0c8cf5 from qemu
2018-02-26 02:54:44 -05:00
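
An illustrative pattern after the merge, not code from the commit: every helper captures its host return address with GETPC() at the point of call and passes it down unchanged; GETPC_ADJ is applied only inside cpu_restore_state_from_tb(), after the NULL check.

    /* Sketch: a helper forwarding its return address via GETPC(). */
    void helper_example_store(CPUArchState *env, target_ulong addr,
                              uint32_t val, TCGMemOpIdx oi)
    {
        uintptr_t ra = GETPC();   /* host return address of the caller */
        helper_ret_stb_mmu(env, addr, val, oi, ra);
    }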
Richard Henderson 91f5cf0417
tcg: Support arbitrary size + alignment
Previously we allowed fully unaligned operations, but not operations
that are aligned but with less alignment than the operation size.

In addition, arm32, ia64, mips, and sparc had been omitted from the
previous overalignment patch, which would have led to that alignment
being enforced.

Backports commit 85aa80813dd9f5c1f581c743e45678a3bee220f8 from qemu
2018-02-26 02:47:26 -05:00
Samuel Damashek 670d81367b
cputlb: Fix for self-modifying writes across page boundaries
As it currently stands, QEMU does not properly handle self-modifying code
when the write is unaligned and crosses a page boundary. The procedure
for handling a write to the current translation block is to write-protect
the current translation block, catch the write, split up the translation
block into the current instruction (which remains write-protected so that
the current instruction is not modified) and the remaining instructions
in the translation block, and then restore the CPU state to before the
write occurred so the write will be retried and successfully executed.
However, since unaligned writes across pages are split into one-byte
writes for simplicity, writes to the second page (which is not the
current TB) may succeed before a write to the current TB is attempted,
and since these writes are not invalidated before resuming state after
splitting the TB, these writes will be performed a second time, thus
corrupting the second page. Credit goes to Patrick Hulin for
discovering this.

In recent 64-bit versions of Windows running in emulated mode, this
results in either being very unstable (a BSOD after a couple minutes of
uptime), or being entirely unable to boot. Windows performs one or more
8-byte unaligned self-modifying writes (xors) which intersect the end
of the current TB and the beginning of the next TB, which runs into the
aforementioned issue. This commit fixes that issue by making the
unaligned write loop perform the writes in forwards order, instead of
reverse order. This way, QEMU immediately tries to write to the current
TB, and splits the TB before any write to the second page is executed.
The write then proceeds as intended. With this patch applied, I am able
to boot and use Windows 7 64-bit and Windows 10 64-bit in QEMU without
KVM.

Per Richard Henderson's input, this patch also ensures the second page
is in the TLB before executing the write loop, to ensure the second
page is mapped.

The original discussion of the issue is located at
http://lists.nongnu.org/archive/html/qemu-devel/2014-08/msg02161.html.

Backports commit 81daabaf7a572f138a8b88ba6eea556bdb0cce46 from qemu
2018-02-25 03:12:11 -05:00
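
A loose, simplified sketch of the shape of the fix in the store slow path; identifiers follow softmmu_template.h of this era only approximately, and the surrounding template context (env, addr, val, oi, mmu_idx, retaddr, DATA_SIZE) is elided.

    /* Unaligned store crossing a page boundary (little-endian case). */
    if (DATA_SIZE > 1
        && unlikely((addr & ~TARGET_PAGE_MASK) + DATA_SIZE - 1
                    >= TARGET_PAGE_SIZE)) {
        int i;
        /* Make sure the second page is mapped before touching anything. */
        target_ulong page2 = (addr + DATA_SIZE) & TARGET_PAGE_MASK;
        if (!VICTIM_TLB_HIT(addr_write, page2)) {
            tlb_fill(ENV_GET_CPU(env), page2, MMU_DATA_STORE,
                     mmu_idx, retaddr);
        }
        /* Write the bytes in forwards order: the byte that lands in the
         * write-protected current TB is attempted first, the TB is split,
         * and only then does the write spill onto the second page. */
        for (i = 0; i < DATA_SIZE; ++i) {
            uint8_t val8 = val >> (i * 8);
            helper_ret_stb_mmu(env, addr + i, val8, oi, retaddr);
        }
        return;
    }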
Samuel Damashek 04c423b081
cputlb: Add address parameter to VICTIM_TLB_HIT
Backports commit a390284b80d2b6581143cdb40666674e60e635ae from qemu
2018-02-25 03:03:36 -05:00
Richard Henderson 9e2422032a
cputlb: Move VICTIM_TLB_HIT out of line
There are currently 22 invocations of this function,
and we're about to increase that number.

Backports commit 7e9a7c50d9a400ef51242d661a261123c2cc9485 from qemu
2018-02-25 02:58:47 -05:00
Sergey Sorokin e4d123caa9
tcg: Improve the alignment check infrastructure
Some architectures (e.g. ARMv8) need the address to be aligned
to a size larger than the size of the memory access.
The current, essentially free alignment-check implementation in
QEMU is enough to support such a check, but we need a way to
specify the alignment size.

Backports commit 1f00b27f17518a1bcb4cedca49eaec96a4d560bd from qemu
2018-02-25 02:23:28 -05:00
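
An illustrative front-end use of the sized-alignment flags this series makes possible (building on MO_ALIGN, which appears further down this log); the wrapper function is hypothetical.

    /* Sketch: 8-byte load that faults unless addr is 16-byte aligned,
     * e.g. for an ARMv8 LDAXP-style access. */
    static void gen_aligned_load(TCGv_i64 val, TCGv addr, int mem_idx)
    {
        tcg_gen_qemu_ld_i64(val, addr, mem_idx, MO_TEQ | MO_ALIGN_16);
    }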
Peter Maydell 2fe995a0da
exec.c: Pass MemTxAttrs to iotlb_to_region so it uses the right AS
Pass the MemTxAttrs for the memory access to iotlb_to_region(); this
allows it to determine the correct AddressSpace to use for the lookup.

Backports commit a54c87b68a0410d0cf6f8b84e42074a5cf463732 from qemu
2018-02-17 23:19:00 -05:00
Pavel Dovgalyuk 28f154129b
softmmu: remove now unused functions
Now that the cpu_ld/st_* functions directly call helper_ret_ld/st, we can
drop the old helper_ld/st functions.

Backports commit b8611499b940b1b4db67aa985e3a844437bcbf00 from qemu
2018-02-17 15:23:38 -05:00
Pavel Dovgalyuk 6cdaaf9b1b
softmmu: add helper function to pass through retaddr
This patch introduces several helpers that pass through the return
address, which points into the TB. A correct return address allows
correct restoration of the guest PC and icount. These functions should
be used when helpers invoked from a TB perform memory operations.

Backports commit 282dffc8a4bfe8724548cabb8a26698bde0a6e18 from qemu
2018-02-17 15:23:38 -05:00
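
A small sketch, assuming a hypothetical helper_read_field() helper, of the intended pattern: a helper called from a TB uses one of the new retaddr-passing variants so a faulting access can restore the guest PC and icount correctly.

    /* Sketch only: pass the return address into the TB down to the
     * memory access. */
    uint32_t helper_read_field(CPUArchState *env, target_ulong base)
    {
        uintptr_t ra = GETPC();                    /* points into the TB */
        return cpu_ldl_data_ra(env, base + 8, ra); /* fault unwinds via ra */
    }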
Peter Maydell 933e3bd8d1
Add MemTxAttrs to the IOTLB
Add a MemTxAttrs field to the IOTLB, and allow target-specific
code to set it via a new tlb_set_page_with_attrs() function;
pass the attributes through to the device when making IO accesses.

Backports commit fadc1cbe85c6b032d5842ec0d19d209f50fcb375 from qemu
2018-02-12 18:38:38 -05:00
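
A minimal sketch of target MMU code attaching attributes to a TLB entry through the new function; the secure attribute and the wrapper function are illustrative choices, not the commit's code.

    /* Sketch: map a page whose accesses should carry "secure" attributes
     * (e.g. an ARM TrustZone secure access). */
    static void map_secure_page(CPUState *cs, target_ulong vaddr_page,
                                hwaddr paddr_page, int prot, int mmu_idx)
    {
        MemTxAttrs attrs = { .secure = 1 };
        tlb_set_page_with_attrs(cs, vaddr_page, paddr_page, attrs,
                                prot, mmu_idx, TARGET_PAGE_SIZE);
    }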
Peter Maydell 2aecce835b
Make CPU iotlb a structure rather than a plain hwaddr
Make the CPU iotlb a structure rather than a plain hwaddr;
this will allow us to add transaction attributes to it.

Backports commit e469b22ffda40188954fafaf6e3308f58d50f8f8 from qemu
2018-02-12 18:34:05 -05:00
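
Roughly the shape of the new entry type; treat the field names as approximations. The attrs field is the one added by the MemTxAttrs commit just above.

    /* Approximate shape of the structured iotlb entry. */
    typedef struct CPUIOTLBEntry {
        hwaddr addr;       /* previously the whole iotlb value */
        MemTxAttrs attrs;  /* added by the follow-up MemTxAttrs patch */
    } CPUIOTLBEntry;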
Peter Maydell 825e74410f
memory: Replace io_mem_read/write with memory_region_dispatch_read/write
Rather than retaining io_mem_read/write as simple wrappers around
the memory_region_dispatch_read/write functions, make the latter
public and change all the callers to use them, since we need to
touch all the callsites anyway to add MemTxAttrs and MemTxResult
support. Delete io_mem_read and io_mem_write entirely.

(All the callers currently pass MEMTXATTRS_UNSPECIFIED
and convert the return value back to bool or ignore it.)

Backports commit 3b6434953934e6d4a776ed426d8c6d6badee176f from qemu
2018-02-12 17:26:52 -05:00
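
An illustrative call-site conversion, assuming a hypothetical read_word() wrapper: the old bool result of io_mem_read() becomes a MemTxResult, and callers that do not care about attributes pass MEMTXATTRS_UNSPECIFIED.

    /* Sketch: a former io_mem_read() call site after the conversion. */
    static uint64_t read_word(MemoryRegion *mr, hwaddr offset)
    {
        uint64_t val = 0;
        MemTxResult r = memory_region_dispatch_read(mr, offset, &val, 4,
                                                    MEMTXATTRS_UNSPECIFIED);
        if (r != MEMTX_OK) {
            /* handle or ignore the failure, as the old callers did */
        }
        return val;
    }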
Paolo Bonzini a46accd252
exec: make iotlb RCU-friendly
After the previous patch, TLBs will be flushed on every change to
the memory mapping. This patch augments that with synchronization
of the MemoryRegionSections referred to in the iotlb array.

With this change, it is guaranteed that iotlb_to_region will access
the correct memory map, even once the TLB is accessed outside
the BQL.

Backports commit 9d82b5a792236db31a75b9db5c93af69ac07c7c5 from qemu
2018-02-12 15:20:39 -05:00
Richard Henderson 336833c11e
tcg: Add MO_ALIGN, MO_UNALN
These modifiers control, on a per-memory-op basis, whether
unaligned memory accesses are allowed. The default setting
reflects the target's definition of ALIGNED_ONLY.

Backports commit dfb36305626636e2e07e0c5acd3a002a5419399e from qemu
2018-02-10 20:18:53 -05:00
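
An illustrative pair of generated memory ops showing the per-op override in both directions; the wrapper function and operand names are assumptions, not taken from the commit.

    /* Sketch: force or relax alignment on individual memory ops,
     * independently of the target's ALIGNED_ONLY default. */
    static void gen_pair_access(TCGv_i32 lo, TCGv_i32 hi, TCGv addr,
                                int mem_idx)
    {
        /* must be naturally aligned */
        tcg_gen_qemu_ld_i32(lo, addr, mem_idx, MO_TEUL | MO_ALIGN);
        /* may be unaligned */
        tcg_gen_qemu_st_i32(hi, addr, mem_idx, MO_TEUL | MO_UNALN);
    }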
Richard Henderson ac713c7034
tcg: Push merged memop+mmu_idx parameter to softmmu routines
The extra information is not yet used but it is now available.
This requires minor changes through all of the tcg backends.

Backports commit 3972ef6f830d65e9bacbd31257abedc055fd6dc8 from qemu
2018-02-10 20:03:22 -05:00
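
An illustrative round trip through the merged parameter; the packing and unpacking helper names approximate tcg.h of this era.

    /* Sketch: pack a memop and an mmu_idx together, then recover both. */
    static void memop_idx_example(int mmu_idx)
    {
        TCGMemOpIdx oi  = make_memop_idx(MO_TEUL, mmu_idx);
        TCGMemOp    mop = get_memop(oi);    /* the memory op back out */
        unsigned    idx = get_mmuidx(oi);   /* the mmu_idx back out */
        (void)mop; (void)idx;
    }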
Nguyen Anh Quynh 206819bd98 cleanup after msvc port 2017-01-22 21:27:17 +08:00
xorstream 770c5616e2 Automated leading tab to spaces conversion. 2017-01-21 12:28:22 +11:00
xorstream 1aeaf5c40d This code should now build the x86_x64-softmmu part 2. 2017-01-19 22:50:28 +11:00
Nguyen Anh Quynh 4083b87032 add new hook type UC_HOOK_MEM_READ_AFTER, adapted from PR #399 by @farmdve. updated all bindings, except Ruby & Haskell 2016-10-22 11:19:55 +08:00
Nguyen Anh Quynh 2341f5dd1a code style 2016-01-26 17:37:48 +08:00
Ryan Hileman 93052f6566 refactor to allow multiple hooks for one type 2016-01-22 18:41:43 -08:00
Nguyen Anh Quynh f935469658 mips: handle memory redirect for all APIs. this fixes issue #347 2015-12-28 15:19:30 +08:00
Nguyen Anh Quynh ed319bda0b x86: identity map guest address to host address. this fixes issue #300 2015-12-24 09:51:17 +08:00
Nguyen Anh Quynh 9e64cba6ec Rename some hook related enums:
 - UC_ERR_READ_INVALID -> UC_ERR_READ_UNMAPPED
 - UC_ERR_WRITE_INVALID -> UC_ERR_WRITE_UNMAPPED
 - UC_ERR_FETCH_INVALID -> UC_ERR_FETCH_UNMAPPED
 - UC_MEM_READ_INVALID -> UC_MEM_READ_UNMAPPED
 - UC_MEM_WRITE_INVALID -> UC_MEM_WRITE_UNMAPPED
 - UC_MEM_FETCH_INVALID -> UC_MEM_FETCH_UNMAPPED
 - UC_HOOK_MEM_READ_INVALID -> UC_HOOK_MEM_READ_UNMAPPED
 - UC_HOOK_MEM_WRITE_INVALID -> UC_HOOK_MEM_WRITE_UNMAPPED
 - UC_HOOK_MEM_FETCH_INVALID -> UC_HOOK_MEM_FETCH_UNMAPPED
 - UC_HOOK_MEM_INVALID -> UC_HOOK_MEM_UNMAPPED

This also renames some newly added macros to use the _INVALID suffix:

 - UC_HOOK_MEM_READ_ERR -> UC_HOOK_MEM_READ_INVALID
 - UC_HOOK_MEM_WRITE_ERR -> UC_HOOK_MEM_WRITE_INVALID
 - UC_HOOK_MEM_FETCH_ERR -> UC_HOOK_MEM_FETCH_INVALID
 - UC_HOOK_MEM_ERR -> UC_HOOK_MEM_INVALID

Fixed all the bindings: Java, Go & Python.
2015-09-30 14:46:55 +08:00
Nguyen Anh Quynh 90eb8f2e72 This commit continues PR #111
- Allow registering handlers separately for invalid memory access
- Add new memory events for hooking:
   - UC_MEM_READ_INVALID, UC_MEM_WRITE_INVALID, UC_MEM_FETCH_INVALID
   - UC_HOOK_MEM_READ_PROT, UC_HOOK_MEM_WRITE_PROT, UC_HOOK_MEM_FETCH_PROT
- Rename UC_ERR_EXEC_PROT to UC_ERR_FETCH_PROT
- Change the uc_hook_add() API so that the event type @type can be a combination of hook types
2015-09-24 14:18:02 +08:00
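
A minimal usage sketch of the reworked registration, assuming an already-initialised uc_engine; the callback layout follows unicorn.h of this period, and the exact uc_hook_add() argument list (the address-range arguments were added later) should be treated as version-dependent.

    #include <unicorn/unicorn.h>

    /* Sketch: one registration covering several hook types OR'd together. */
    static bool hook_mem_prot(uc_engine *uc, uc_mem_type type,
                              uint64_t address, int size, int64_t value,
                              void *user_data)
    {
        /* true = resume emulation, false = stop and report the error */
        return false;
    }

    static void install_hooks(uc_engine *uc)
    {
        uc_hook hh;
        uc_hook_add(uc, &hh, UC_HOOK_MEM_READ_PROT | UC_HOOK_MEM_WRITE_PROT,
                    hook_mem_prot, NULL);
    }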
Sean Heelan dfb4a9d9ad Revert "Remove uc_cb_eventmem_t as it is identical to uc_cb_hookmem_t"
As pointed out by aquynh, the return types are actually different. A
uc_cb_eventmem_t callback returns a bool, while uc_cb_hookmem_t has a
void return type.

This reverts commit cb2b97f26c.
2015-09-23 12:51:47 +07:00
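
For reference, the two callback shapes being discussed; the parameter lists approximate unicorn.h of this period, and only the return type differs.

    /* Plain hook callback: no return value. */
    typedef void (*uc_cb_hookmem_t)(uc_engine *uc, uc_mem_type type,
                                    uint64_t address, int size,
                                    int64_t value, void *user_data);

    /* Invalid-memory event callback: the bool decides whether emulation
     * resumes (true) or stops with an error (false). */
    typedef bool (*uc_cb_eventmem_t)(uc_engine *uc, uc_mem_type type,
                                     uint64_t address, int size,
                                     int64_t value, void *user_data);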
Sean Heelan cb2b97f26c Remove uc_cb_eventmem_t as it is identical to uc_cb_hookmem_t, as per
issue #111
2015-09-22 12:37:05 +07:00
Nguyen Anh Quynh d7ef204398 rename error codes ERR_MEM_READ, ERR_MEM_WRITE, ERR_MEM_FETCH 2015-09-09 16:25:48 +08:00
Nguyen Anh Quynh d3d38d3f21 handle read/write/fetch from unaligned addresses. this adds new error codes UC_ERR_READ_UNALIGNED, UC_ERR_WRITE_UNALIGNED & UC_ERR_FETCH_UNALIGNED 2015-09-09 15:52:15 +08:00
Nguyen Anh Quynh 7a5d790ade rename UC_MEM_EXE to UC_MEM_FETCH 2015-09-08 12:55:56 +08:00
Nguyen Anh Quynh 022f8d82d1 handle memory fetch as invalid memory access. now we can also report error if exec memory is unmapped (UC_ERR_MEM_FETCH) 2015-09-04 11:55:17 +08:00
Jonathon Reinhart da46071c7d bring new code and samples up-to-date with API changes 2015-09-03 22:15:49 -04:00
Jonathon Reinhart 5e9d07a40a Merge remote-tracking branch 'upstream/master' into change-handle-based-api 2015-09-03 22:01:52 -04:00
Nguyen Anh Quynh 6ca85a72ed simplify uc_mem_protect() & uc_mem_unmap() 2015-09-04 01:02:38 +08:00
Nguyen Anh Quynh b8d4240240 solve merging conflict 2015-09-03 18:05:21 +08:00
Jonathon Reinhart bd0a6921cc Merge remote-tracking branch 'upstream/master' into change-handle-based-api 2015-09-02 21:04:43 -04:00
Nguyen Anh Quynh be659d201d fix confusion between UC_MEM_xxx & UC_HOOK_MEM_xxx. fix issue #93 2015-09-03 01:13:57 +08:00
Chris Eagle 2c4f3769d4 clean up mem_protect related constants and error codes 2015-09-01 12:10:09 -07:00
Chris Eagle 658e399776 clean up mem_protect related constants 2015-08-31 19:08:48 -07:00
Jonathon Reinhart 3bd705a060 Merge remote-tracking branch 'upstream/master' into change-handle-based-api 2015-08-30 00:23:51 -04:00
Chris Eagle eab6167241 Merge branch 'master' into mem_map_ex_cse 2015-08-28 19:00:39 -07:00