Add support for MMU protection regions that are smaller than
TARGET_PAGE_SIZE. We do this by marking the TLB entry for those
pages with a flag TLB_RECHECK. This flag causes us to always
take the slow-path for accesses. In the slow path we can then
special case them to always call tlb_fill() again, so we have
the correct information for the exact address being accessed.
This change allows us to handle reading from and writing to small
regions; we cannot deal with execution from such regions.
Backports commit 55df6fcf5476b44bc1b95554e686ab3e91d725c5 from qemu
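A rough standalone sketch of the idea (simplified; the names, bit
positions and struct layout below are illustrative, not the exact
cputlb.c code): the flag sits in the sub-page bits of the TLB entry's
address field, so every access to that page misses the fast-path
comparison and drops into the slow path, which can then consult the
fill path again for the precise address being accessed.

    #include <stdbool.h>
    #include <stdint.h>

    #define PAGE_BITS   12
    #define PAGE_MASK   (~(((uint64_t)1 << PAGE_BITS) - 1))
    #define TLB_RECHECK (1u << 5)            /* illustrative bit position */

    typedef struct { uint64_t addr_read; } TLBEntry;

    /* Fast-path check: the entry keeps its flag bits in the low
     * (sub-page) bits, so a set TLB_RECHECK bit guarantees a mismatch
     * and forces the slow path for every access to this page. */
    static bool fast_path_hit(const TLBEntry *e, uint64_t addr)
    {
        return (addr & PAGE_MASK) == e->addr_read;
    }

    /* Slow-path idea: on a TLB_RECHECK miss, re-translate the exact
     * address so a region smaller than a page gets the correct
     * permissions for that specific access. */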
Isolate the computation of a TLB index from an address into a helper
function, before we change that computation.
Backports commit 383beda9cf32f795616c3b93f7d6154d70372d4b from qemu
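A minimal sketch of the sort of helper this refactoring introduces (the
name and constants here are illustrative stand-ins, not necessarily the
ones used upstream):

    #include <stdint.h>

    #define TARGET_PAGE_BITS 12           /* illustrative */
    #define CPU_TLB_SIZE     (1 << 8)     /* illustrative */

    /* Map a virtual address to its TLB slot: page number modulo the
     * power-of-two TLB size. */
    static inline uintptr_t tlb_index(uint64_t addr)
    {
        return (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
    }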
The condition to check whether an address has hit against a particular
TLB entry is not completely trivial. We do this in various places, and
in fact in one place (get_page_addr_code()) we have got the condition
wrong. Abstract it out into new tlb_hit() and tlb_hit_page() inline
functions (one for an arbitrary address and one for a known-page-aligned
address), and use them in all the places where we already had the
condition correct.
This is a no-behaviour-change patch; we leave fixing the buggy
code in get_page_addr_code() to a subsequent patch.
Backports commit 334692bce7f0653a93b8d84ecde8c847b08dec38 from qemu
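A sketch of what the two helpers look like (modeled on the upstream
change, with the page mask and TLB flag constants simplified to
illustrative stand-ins here):

    #include <stdbool.h>
    #include <stdint.h>

    #define TARGET_PAGE_MASK  (~(((uint64_t)1 << 12) - 1))  /* illustrative */
    #define TLB_INVALID_MASK  (1u << 3)                     /* illustrative */

    /* Hit test for an address already known to be page aligned. */
    static inline bool tlb_hit_page(uint64_t tlb_addr, uint64_t page)
    {
        return page == (tlb_addr & (TARGET_PAGE_MASK | TLB_INVALID_MASK));
    }

    /* Hit test for an arbitrary address: align it to the page, then
     * reuse the page-aligned check. */
    static inline bool tlb_hit(uint64_t tlb_addr, uint64_t addr)
    {
        return tlb_hit_page(tlb_addr, addr & TARGET_PAGE_MASK);
    }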
The header is only used by accel/tcg/cputlb.c, so we can move it to
the accel/tcg/ folder, too.
Backports commit da1849c1eba50aa372f87c7945d7b230eb2b2fb2 from qemu