Commit graph

24 commits

Author SHA1 Message Date
David Hildenbrand de513617c8 accel/tcg: allow to invalidate a write TLB entry immediately
Background: s390x implements Low-Address Protection (LAP). If LAP is
enabled, writing to effective addresses (before any translation)
0-511 and 4096-4607 triggers a protection exception.

So we have subpage protection on the first two pages of every address
space (where the lowcore - the CPU-private data - resides).

By immediately invalidating the write entry but allowing the caller to
continue, we force every write access to these first two pages into
the slow path. We will then get a TLB fault with the specific address
being accessed and can evaluate whether protection applies or not.

We have to make sure to ignore the invalid bit if tlb_fill() succeeds.

Backports commit f52bfb12143e29d7c8bd827bdb751aee47a9694e from qemu
2020-01-14 07:14:10 -05:00
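
A minimal sketch of the mechanism described above, with stand-ins for
QEMU's types; the flag bit, page mask, and helper names here are
illustrative, not the real definitions:

    #include <stdbool.h>
    #include <stdint.h>

    #define TARGET_PAGE_MASK  (~(uint64_t)0xfff)  /* assume 4 KiB pages */
    #define TLB_INVALID_MASK  ((uint64_t)1 << 0)  /* illustrative bit   */

    typedef struct CPUTLBEntry {
        uint64_t addr_write;  /* page tag used for the write fast path */
    } CPUTLBEntry;

    /* Install a write entry for a page that needs per-access checks
     * (the first two pages under LAP): the invalid bit is set up
     * front, so the fast-path tag compare below can never succeed. */
    static void tlb_set_write_invalid(CPUTLBEntry *e, uint64_t page)
    {
        e->addr_write = (page & TARGET_PAGE_MASK) | TLB_INVALID_MASK;
    }

    /* Fast-path hit test: it fails for the entry above, pushing every
     * write into the slow path, where tlb_fill() sees the exact
     * address and can raise a protection exception if LAP applies. */
    static bool tlb_hit_write(const CPUTLBEntry *e, uint64_t addr)
    {
        return e->addr_write == (addr & TARGET_PAGE_MASK);
    }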
Richard Henderson 07f30382c0 cputlb: Handle watchpoints via TLB_WATCHPOINT
The raising of exceptions from check_watchpoint, buried inside
of the I/O subsystem, is fundamentally broken. We do not have
the helper return address with which we can unwind guest state.

Replace PHYS_SECTION_WATCH and io_mem_watch with TLB_WATCHPOINT.
Move the call to cpu_check_watchpoint into the cputlb helpers
where we do have the helper return address.

This allows watchpoints on RAM to bypass the full I/O access path.

Backports commit 50b107c5d617eaf93301cef20221312e7a986701 from qemu
2020-01-14 06:58:33 -05:00
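
A hedged sketch of the shape this takes: a stub CPUState and a printf
stand in for QEMU's real types and its cpu_check_watchpoint(), and the
flag bit is illustrative:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    #define TLB_WATCHPOINT ((uint64_t)1 << 3)  /* illustrative bit */

    typedef struct CPUState { int index; } CPUState;  /* stand-in */

    /* Stand-in for cpu_check_watchpoint(); with a valid retaddr the
     * real function can unwind guest state and raise a debug
     * exception safely. */
    static void cpu_check_watchpoint(CPUState *cpu, uint64_t addr,
                                     uintptr_t retaddr)
    {
        printf("cpu%d: watchpoint at 0x%" PRIx64 "\n", cpu->index, addr);
        (void)retaddr;
    }

    /* The load/store slow path is the one place that knows the helper
     * return address, so the check lives here instead of being buried
     * in the I/O subsystem. */
    static void slow_path_access(CPUState *cpu, uint64_t addr,
                                 uint64_t tlb_addr, uintptr_t retaddr)
    {
        if (tlb_addr & TLB_WATCHPOINT) {
            cpu_check_watchpoint(cpu, addr, retaddr);
        }
        /* A RAM access then completes directly, bypassing full I/O. */
    }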
Richard Henderson 6c4a3fd06f cputlb: Fold TLB_RECHECK into TLB_INVALID_MASK
We had two different mechanisms to force a recheck of the TLB.

Before TLB_RECHECK was introduced, we had a PAGE_WRITE_INV bit
that would immediately set TLB_INVALID_MASK, which automatically
means that a second check of the TLB entry fails.

We can use the same mechanism to handle small pages.
Conserve TLB_* bits by removing TLB_RECHECK.

Backports commit 30d7e098d5c38644359820317fcf72e3e129ec53 from qemu
2020-01-14 06:20:33 -05:00
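
The TLB_* flags live in the low bits of the page-aligned entry
address, so the supply is capped by TARGET_PAGE_BITS; retiring
TLB_RECHECK frees one slot. A sketch of that layout, assuming 4 KiB
pages and illustrative bit positions:

    #include <stdint.h>

    #define TARGET_PAGE_BITS 12  /* assumed 4 KiB pages */

    /* Flags pack downward from the top of the sub-page bits; an entry
     * carrying any of them can never match a page-aligned address,
     * which forces the slow path. */
    #define TLB_INVALID_MASK ((uint64_t)1 << (TARGET_PAGE_BITS - 1))
    #define TLB_NOTDIRTY     ((uint64_t)1 << (TARGET_PAGE_BITS - 2))
    #define TLB_MMIO         ((uint64_t)1 << (TARGET_PAGE_BITS - 3))
    /* TLB_RECHECK is gone: small pages reuse TLB_INVALID_MASK. */

    _Static_assert((TLB_INVALID_MASK | TLB_NOTDIRTY | TLB_MMIO)
                   < ((uint64_t)1 << TARGET_PAGE_BITS),
                   "flags must stay below the page size");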
Lioncash 802c626145
Revert "cputlb: Filter flushes on already clean tlbs"
This reverts commit 5ab9723787.
2019-06-30 19:21:20 -04:00
Richard Henderson d7ea41c3a3
cpu: Move icount_decr to CPUNegativeOffsetState
Amusingly, we had already ignored the comment to keep this value
at the end of CPUState. This restores the minimum negative offset
from TCG_AREG0 for code generation.

For the couple of uses within qom/cpu.c, without NEED_CPU_H, add
a pointer from the CPUState object to the IcountDecr object within
CPUNegativeOffsetState.

Backports commit 5e1401969b25f676fee6b1c564441759cf967a43 from qemu
2019-06-13 15:34:28 -04:00
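
A sketch of the layout idea with stand-in types and sizes: the state
that generated code touches sits immediately before the architectural
env, so it stays within a small negative offset of TCG_AREG0, which
points at env:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct IcountDecr { int32_t u32; } IcountDecr;  /* stand-in */

    /* The pointer lets qom/cpu.c reach IcountDecr without NEED_CPU_H. */
    typedef struct CPUState { IcountDecr *icount_decr_ptr; } CPUState;

    typedef struct CPUNegativeOffsetState {
        /* ... other TCG-visible hot state ... */
        IcountDecr icount_decr;  /* kept last, i.e. closest to env */
    } CPUNegativeOffsetState;

    typedef struct ArchCPU {
        CPUState parent_obj;
        CPUNegativeOffsetState neg;  /* placed directly before env */
        long env[64];                /* CPUArchState stand-in      */
    } ArchCPU;

    int main(void)
    {
        /* Negative: icount_decr sits just below &cpu->env. */
        printf("offset: %ld\n",
               (long)offsetof(ArchCPU, neg.icount_decr)
               - (long)offsetof(ArchCPU, env));
        return 0;
    }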
Richard Henderson 8f53f09a05
cpu: Introduce CPUNegativeOffsetState
Nothing in there so far, but all of the plumbing is done
within the target ArchCPU state.

Backports commit 5b146dc716cfd247f99556c04e6e46fbd67565a0 from qemu
2019-06-13 15:08:25 -04:00
Richard Henderson a672b89e3b
cpu: Introduce cpu_set_cpustate_pointers
Consolidate some boilerplate from foo_cpu_initfn.

Backports commit 7506ed902eb97fe4e2a1dd16766c621d32ecc40d from qemu
2019-06-12 12:27:16 -04:00
Richard Henderson 8b108f3607
cpu: Introduce env_archcpu
This will replace foo_env_get_cpu with a generic definition.
No changes to the target specific code so far.

Backports commit 083dc73d7a3cf2a75b5625fd8f0669b57a855d16 from qemu
2019-06-12 11:17:47 -04:00
Richard Henderson fbf91a6535
cpu: Replace ENV_GET_CPU with env_cpu
Now that we have both ArchCPU and CPUArchState, we can define
this generically instead of via macro in each target's cpu.h.

Backports commit 29a0af618ddd21f55df5753c3e16b0625f534b3c from qemu
2019-06-12 11:16:16 -04:00
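
The generic shape is container-of pointer arithmetic. A self-contained
sketch with stand-in struct contents (QEMU's real versions are built
on its container_of macro):

    #include <stddef.h>

    typedef struct CPUState { int cpu_index; } CPUState;  /* stand-in */
    typedef struct CPUArchState { long regs[32]; } CPUArchState;

    typedef struct ArchCPU {
        CPUState parent_obj;  /* the QOM parent comes first */
        CPUArchState env;
    } ArchCPU;

    /* Recover the containing ArchCPU from a pointer to its env. */
    static inline ArchCPU *env_archcpu(CPUArchState *env)
    {
        return (ArchCPU *)((char *)env - offsetof(ArchCPU, env));
    }

    /* And from there, the generic CPUState. */
    static inline CPUState *env_cpu(CPUArchState *env)
    {
        return &env_archcpu(env)->parent_obj;
    }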
Lioncash 5ab9723787
cputlb: Filter flushes on already clean tlbs
Especially for guests with large numbers of tlbs, like ARM or PPC,
we may well not use all of them in between flush operations.
Remember which tlbs have been used since the last flush, and
avoid any useless flushing.

Backports much of 3d1523ced6060cdfe9e768a814d064067ccabfe5 from qemu
along with a bunch of updating changes.
2019-06-10 20:42:15 -04:00
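
A sketch of the filtering idea, with invented names and a simplified
table layout: remember which MMU indexes were touched since the last
flush and skip the ones still clean:

    #include <stdint.h>
    #include <string.h>

    #define NB_MMU_MODES 8

    typedef struct CPUTLB {
        uint16_t dirty;                    /* bit n: mmu_idx n was used */
        uint64_t table[NB_MMU_MODES][256]; /* simplified entry storage  */
    } CPUTLB;

    static void tlb_mark_used(CPUTLB *tlb, int mmu_idx)
    {
        tlb->dirty |= 1u << mmu_idx;
    }

    static void tlb_flush_all(CPUTLB *tlb)
    {
        for (int i = 0; i < NB_MMU_MODES; i++) {
            if (tlb->dirty & (1u << i)) {  /* flush only what was used */
                memset(tlb->table[i], 0, sizeof(tlb->table[i]));
            }
        }
        tlb->dirty = 0;
    }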
Lioncash e75b32ca4b
cpu_ldst.h, cpu-all.h, bswap.h: Update documentation on ld/st accessors
Add documentation of what the cpu_*_* accessors look like.
Correct some minor errors in the existing documentation of the
direct _p accessor family. Remove the near-duplicate comment
on the _p accessors from cpu-all.h and replace it with a reference
to the comment in bswap.h.

Backports commit db5fd8d709fd57f4d4f11edfca9f421f657f4508 from qemu
2019-04-22 07:39:13 -04:00
Peter Maydell 1301becdab
tcg: Support MMU protection regions smaller than TARGET_PAGE_SIZE
Add support for MMU protection regions that are smaller than
TARGET_PAGE_SIZE. We do this by marking the TLB entry for those
pages with a flag TLB_RECHECK. This flag causes us to always
take the slow-path for accesses. In the slow path we can then
special case them to always call tlb_fill() again, so we have
the correct information for the exact address being accessed.

This change allows us to handle reading from and writing to small
regions; we cannot deal with execution from a small region.

Backports commit 55df6fcf5476b44bc1b95554e686ab3e91d725c5 from qemu
2018-11-16 21:35:54 -05:00
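
A sketch of the slow-path behaviour described above; tlb_fill_exact()
and its toy permission rule are invented stand-ins for the real
tlb_fill() call:

    #include <stdbool.h>
    #include <stdint.h>

    #define TLB_RECHECK ((uint64_t)1 << 2)  /* illustrative bit */

    /* Stand-in for tlb_fill(): re-translate this exact address and
     * report whether the access is allowed; sub-page permissions can
     * differ within one page (here: the first 512 bytes are denied). */
    static bool tlb_fill_exact(uint64_t addr)
    {
        return (addr & 0xfff) >= 512;  /* toy permission rule */
    }

    /* Flagged entries are re-validated on every access, which is what
     * makes regions smaller than a page enforceable. */
    static bool slow_path_allowed(uint64_t tlb_addr, uint64_t addr)
    {
        if (tlb_addr & TLB_RECHECK) {
            return tlb_fill_exact(addr);
        }
        return true;
    }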
Peter Maydell 6543f9ea26
tcg: Define and use new tlb_hit() and tlb_hit_page() functions
The condition to check whether an address has hit against a particular
TLB entry is not completely trivial. We do this in various places, and
in fact in one place (get_page_addr_code()) we have got the condition
wrong. Abstract it out into new tlb_hit() and tlb_hit_page() inline
functions (one for a known-page-aligned address and one for an
arbitrary address), and use them in all the places where we had the
condition correct.

This is a no-behaviour-change patch; we leave fixing the buggy
code in get_page_addr_code() to a subsequent patch.

Backports commit 334692bce7f0653a93b8d84ecde8c847b08dec38 from qemu
2018-07-03 19:21:36 -04:00
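
The shape of the two helpers, as a sketch with target_ulong replaced
by uint64_t and a fixed 4 KiB page mask:

    #include <stdbool.h>
    #include <stdint.h>

    #define TARGET_PAGE_MASK  (~(uint64_t)0xfff)   /* assumed page size */
    #define TLB_INVALID_MASK  ((uint64_t)1 << 11)  /* illustrative bit  */

    /* addr must already be page aligned; keeping TLB_INVALID_MASK in
     * the comparison is what makes invalidated entries always miss. */
    static inline bool tlb_hit_page(uint64_t tlb_addr, uint64_t addr)
    {
        return addr == (tlb_addr & (TARGET_PAGE_MASK | TLB_INVALID_MASK));
    }

    /* The same test for an arbitrary address. */
    static inline bool tlb_hit(uint64_t tlb_addr, uint64_t addr)
    {
        return tlb_hit_page(tlb_addr, addr & TARGET_PAGE_MASK);
    }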
Peter Maydell 8295b228e3
bswap: Add new stn_*_p() and ldn_*_p() memory access functions
There's a common pattern in QEMU where a function needs to perform
a data load or store of an N byte integer in a particular endianness.
At the moment this is handled by doing a switch() on the size and
calling the appropriate ld*_p or st*_p function for each size.

Provide a new family of functions ldn_*_p() and stn_*_p() which
take the size as an argument and do the switch() themselves.

Backports commit afa4f6653dca095f63f3fe7f2001e9334f5676c1 from qemu
2018-06-15 12:17:21 -04:00
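
A sketch of the pattern, little-endian variant only; the memcpy-based
ld*_p helpers below assume a little-endian host and stand in for the
real bswap.h accessors:

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    static inline uint16_t lduw_le_p(const void *p)
    { uint16_t v; memcpy(&v, p, 2); return v; }
    static inline uint32_t ldl_le_p(const void *p)
    { uint32_t v; memcpy(&v, p, 4); return v; }
    static inline uint64_t ldq_le_p(const void *p)
    { uint64_t v; memcpy(&v, p, 8); return v; }

    /* The new function folds the size switch() callers used to write
     * at every call site. */
    static inline uint64_t ldn_le_p(const void *p, int sz)
    {
        switch (sz) {
        case 1: return *(const uint8_t *)p;
        case 2: return lduw_le_p(p);
        case 4: return ldl_le_p(p);
        case 8: return ldq_le_p(p);
        default: abort();
        }
    }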
Peter Crosthwaite ce1831bfb4
target-*: Don't redefine cpu_exec()
This function needs to be converted to a QOM hook and virtualised for
multi-arch. This rename interferes, as cpu-qom will not have access
to the renaming, causing name divergence. The rename doesn't really do
anything anyway, so just delete it.

Backports commit 8642c1b81e0418df066a7960a7426d85a923a253 from qemu
2018-03-20 07:02:47 -04:00
Paolo Bonzini 81ad780e5e
exec: introduce MemoryRegionCache
Device models often have to perform multiple accesses to a single
memory region that is known in advance, but would like to use "DMA-style"
functions instead of address_space_map/unmap. This can happen
for example when the data has to undergo endianness conversion.
Introduce a new data structure to cache the result of
address_space_translate without forcing usage of a host address
like address_space_map does.

Backports commit 1f4e496e1fc2eb6c8bf377a0f9695930c380bfd3 from qemu
2018-03-01 10:50:30 -05:00
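
A toy model of the caching idea, with illustrative fields and names:
translate once up front, then service repeated accesses from the
cached result instead of re-walking the address space:

    #include <stdint.h>
    #include <string.h>

    typedef struct MemoryRegionCache {
        uint8_t *host_base;  /* host pointer for the translated range */
        uint64_t len;
    } MemoryRegionCache;

    /* One up-front translation (here just base + offset). */
    static int cache_init(MemoryRegionCache *c, uint8_t *backing,
                          uint64_t offset, uint64_t len)
    {
        c->host_base = backing + offset;
        c->len = len;
        return 0;
    }

    /* Subsequent accesses are cheap: no per-access translation. */
    static uint32_t cache_ldl(const MemoryRegionCache *c, uint64_t off)
    {
        uint32_t v = 0;
        if (off + 4 <= c->len) {
            memcpy(&v, c->host_base + off, 4);
        }
        return v;
    }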
Richard Henderson e35aacd5ae
tcg: Add EXCP_ATOMIC
When we cannot emulate an atomic operation within a parallel
context, this exception allows us to stop the world and try
again in a serial context.

Backports commit fdbc2b5722f6092e47181a947c90fd4bdcc1c121 from qemu

Also backports parts of commit 02d57ea115b7669f588371c86484a2e8ebc369be
2018-02-27 11:57:58 -05:00
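
A sketch of the retry shape, with stubs standing in for QEMU's
exclusive-section machinery and for one step of the execution loop:

    #include <stdbool.h>
    #include <stdio.h>

    enum { EXCP_NONE, EXCP_ATOMIC };

    /* Stubs: the real start/end_exclusive stop and restart the other
     * vCPUs. */
    static void start_exclusive(void) { puts("world stopped"); }
    static void end_exclusive(void)   { puts("world resumed"); }

    /* Stand-in for one execution attempt; pretend the parallel case
     * hit an atomic op it cannot emulate concurrently. */
    static int cpu_exec_once(bool parallel)
    {
        return parallel ? EXCP_ATOMIC : EXCP_NONE;
    }

    /* Fall back to a serial context, where atomicity holds trivially,
     * then resume parallel execution. */
    static void cpu_loop_step(void)
    {
        if (cpu_exec_once(true) == EXCP_ATOMIC) {
            start_exclusive();
            cpu_exec_once(false);
            end_exclusive();
        }
    }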
Peter Maydell db8b0a82b1
cpu: Support a target CPU having a variable page size
Support target CPUs having a page size which isn't known
at compile time. To use this, the CPU implementation should:
* define TARGET_PAGE_BITS_VARY
* not define TARGET_PAGE_BITS
* define TARGET_PAGE_BITS_MIN to the smallest value it
might possibly want for TARGET_PAGE_BITS
* call set_preferred_target_page_bits() in its realize
function to indicate the actual preferred target page
size for the CPU (and report any error from it)

In CONFIG_USER_ONLY, the CPU implementation should continue
to define TARGET_PAGE_BITS appropriately for the guest
OS page size.

Machines which want to take advantage of a page size larger
than TARGET_PAGE_BITS_MIN must set the MachineClass
minimum_page_bits field to a value which they guarantee will be
no greater than the preferred page size for any CPU they create.

Note that changing the target page size by setting
minimum_page_bits is a migration compatibility break
for that machine.

For debugging purposes, attempts to use TARGET_PAGE_SIZE
before it has been finally confirmed will assert.

Backports commit 20bccb82ff3ea09bcb7c4ee226d3160cab15f7da from qemu
2018-02-26 12:29:08 -05:00
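
A sketch of what the recipe above looks like from a CPU model's side;
set_preferred_target_page_bits() is stubbed with a toy acceptance rule
and my_cpu_realizefn() is an invented name:

    #include <stdbool.h>
    #include <stdio.h>

    #define TARGET_PAGE_BITS_MIN 10  /* smallest value we might want */

    /* Stub: the real function fails if a smaller size has already
     * been finalized by another CPU. */
    static bool set_preferred_target_page_bits(int bits)
    {
        return bits >= TARGET_PAGE_BITS_MIN;  /* toy acceptance rule */
    }

    /* A realize function asking for 4 KiB pages and reporting any
     * error, per the recipe above. */
    static void my_cpu_realizefn(void)
    {
        if (!set_preferred_target_page_bits(12)) {
            fprintf(stderr, "conflicting target page sizes\n");
        }
    }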
Sergey Sorokin e4d123caa9
tcg: Improve the alignment check infrastructure
Some architectures (e.g. ARMv8) require an address to be aligned to
a size greater than the size of the memory access itself. QEMU's
existing zero-cost alignment check implementation is sufficient to
support this, but we need a way to specify the alignment size.

Backports commit 1f00b27f17518a1bcb4cedca49eaec96a4d560bd from qemu
2018-02-25 02:23:28 -05:00
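
A sketch of the encoding idea: pack the required alignment, as a
power of two, into a few bits of the memory-op descriptor so that it
can exceed the access size. Field positions here are illustrative:

    #include <stdbool.h>
    #include <stdint.h>

    #define MO_ASHIFT 4
    #define MO_AMASK  (7u << MO_ASHIFT)

    /* log2 of the required alignment, 0 meaning no requirement. */
    static unsigned get_alignment_bits(unsigned memop)
    {
        return (memop & MO_AMASK) >> MO_ASHIFT;
    }

    /* E.g. a 4-byte access can demand 16-byte alignment by encoding
     * a_bits = 4 in the descriptor. */
    static bool access_is_aligned(uint64_t addr, unsigned memop)
    {
        unsigned a_bits = get_alignment_bits(memop);
        return (addr & (((uint64_t)1 << a_bits) - 1)) == 0;
    }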
Peter Maydell 8d0faac1dc
qemu-common.h: Drop WORDS_ALIGNED define
The WORDS_ALIGNED #define is not used anywhere, and hasn't been since
2013 when commit 612d590ebc6cef rewrote the various ld<type>_<endian>_p
functions to not use it. Remove the #define and the comment describing it.
Also remove the line in the comment about TARGET_WORDS_ALIGNED, since
it has never actually existed.

Backports commit 0d5c21f2b3bf1e0b562a2c74e353d2e03f2f50ef from qemu
2018-02-24 17:01:55 -05:00
Paolo Bonzini 78fd1aab94
cpu: move endian-dependent load/store functions to cpu-all.h
Disentangle cpu-common.h and memory.h from NEED_CPU_H. Prototypes are
not defined for !NEED_CPU_H, so remove them from poison.h too. Only
macros need poisoning.

Backports commit a7d6039cb35592683ecc56d2b37817da2d2f8b00 from qemu
2018-02-24 01:04:26 -05:00
Peter Crosthwaite 13b919f5c8
cpu-all: complete real host page size API
Currently the "host" page size alignment API is really aligning to both
host and target page sizes. There is the qemu_real_page_size which can
be used for the actual host page size but it's missing a mask and ALIGN
macro as provided for qemu_page_size. Complete the API. This allows
system level code that cares about the host page size to use a
consistent alignment interface without having to un-needingly align to
the target page size. This also reduces system level code dependency
on the cpu specific TARGET_PAGE_SIZE.

Backports commit 4e51361d79289aee2985dfed472f8d87bd53a8df from qemu
2018-02-17 15:23:16 -05:00
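
A sketch of the completed API shape, mirroring what qemu_page_size
already had; the names and the fixed 4 KiB value are illustrative:

    #include <stdint.h>

    static const uint64_t real_host_page_size = 4096;  /* stand-in */

    #define REAL_HOST_PAGE_MASK  (~(real_host_page_size - 1))
    #define REAL_HOST_PAGE_ALIGN(addr) \
        (((addr) + real_host_page_size - 1) & REAL_HOST_PAGE_MASK)

    /* Usage: REAL_HOST_PAGE_ALIGN(0x1234) == 0x2000 with 4 KiB pages,
     * regardless of what TARGET_PAGE_SIZE happens to be. */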
Peter Crosthwaite 860e4184df
include/exec: Move standard exceptions to cpu-all.h
These exception indices are generic and don't have any reliance on the
per-arch cpu.h defs. Move them to cpu-all.h so they can be used by core
code that does not have access to cpu-defs.h.

Backports commit 9e0dc48c9f05505b53cb28f860456a0648e56ddf from qemu
2018-02-17 15:23:15 -05:00
Nguyen Anh Quynh 344d016104 import 2015-08-21 15:04:50 +08:00