When an IOMMUMemoryRegion is in front of a virtio device,
address_space_cache_init does not set cache->ptr as the memory
region is not RAM. However when the device performs an access,
we end up in glue() which performs the translation and then uses
MAP_RAM. The latter uses the unset ptr and returns a wrong value,
which leads to a SIGSEGV in address_space_lduw_internal_cached_slow,
for instance.
In the slow path, cache->ptr is NULL and MAP_RAM must redirect to
qemu_map_ram_ptr((mr)->ram_block, ofs).
As MAP_RAM, IS_DIRECT and INVALIDATE are the same in _cached_slow
and non-cached mode, let's remove those macros.
This fixes the use cases featuring a vIOMMU (Intel and ARM SMMU)
that previously led to a SIGSEGV.
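
As an illustration of the behaviour described above, here is a minimal,
self-contained sketch (not the actual QEMU sources; guest_ram, map_ram_ptr,
RegionCache and cache_map are stand-in names) of why the cached slow path
has to fall back to the RAM block lookup instead of dereferencing the
unset cache pointer:

    /*
     * Stand-ins for QEMU internals, for illustration only: guest_ram
     * plays the role of a RAMBlock and map_ram_ptr() the role of
     * qemu_map_ram_ptr((mr)->ram_block, ofs).
     */
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    static uint8_t guest_ram[0x100];

    static void *map_ram_ptr(size_t ofs) { return guest_ram + ofs; }

    typedef struct {
        uint8_t *ptr;   /* set by the cache init only when the region is RAM */
        size_t   xlat;  /* translated base offset of the cached window       */
    } RegionCache;

    /*
     * When the cached region is plain RAM, ptr is valid and can be offset
     * directly (fast path).  Behind an IOMMU the init leaves ptr NULL, so
     * the slow path must redirect to the RAM block lookup rather than
     * dereference the unset pointer -- the MAP_RAM bug described above.
     */
    static void *cache_map(RegionCache *cache, size_t ofs)
    {
        return cache->ptr ? cache->ptr + (ofs - cache->xlat)
                          : map_ram_ptr(ofs);
    }

    int main(void)
    {
        guest_ram[0x20] = 0xab;

        RegionCache direct = { .ptr = guest_ram, .xlat = 0 }; /* RAM-backed  */
        RegionCache iommu  = { .ptr = NULL,      .xlat = 0 }; /* vIOMMU case */

        printf("direct: 0x%02x\n", *(uint8_t *)cache_map(&direct, 0x20));
        printf("iommu:  0x%02x\n", *(uint8_t *)cache_map(&iommu,  0x20));
        return 0;
    }
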
Fixes: 48564041a73a ("exec: reintroduce MemoryRegion caching")
Backports part of commit a99761d3c85679da380c0f597468acd3dc1b53b3 from
qemu

Templatize the address_space_* and *_phys functions, so that we can add
similar functions in the next patch that work with a lightweight,
cache-like version of address_space_map/unmap.
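
A minimal sketch of the glue()-style templating this refers to
(illustrative only; DEFINE_LDUW, FakeAS, FakeCache and the lduw_* names
below are made up, not QEMU's): one accessor body is stamped out once per
flavour, so a cached variant can later be added simply by instantiating
the same template with a different first argument:

    /*
     * Illustrative template expansion via glue(); QEMU does the same by
     * #including a template file (e.g. memory_ldst.inc.c) once per
     * flavour, with a different SUFFIX and first-argument declaration.
     */
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define glue_(a, b) a##b
    #define glue(a, b)  glue_(a, b)

    static uint8_t ram[16];

    /* The "template": stamps out lduw<SUFFIX>(ARG1_DECL, ofs). */
    #define DEFINE_LDUW(SUFFIX, ARG1_DECL, BASE)                   \
        static uint16_t glue(lduw, SUFFIX)(ARG1_DECL, size_t ofs)  \
        {                                                          \
            uint16_t v;                                            \
            memcpy(&v, (BASE) + ofs, sizeof(v));                   \
            return v;                                              \
        }

    typedef struct { uint8_t *mem; } FakeAS;     /* full address space      */
    typedef struct { uint8_t *mem; } FakeCache;  /* lightweight cached view */

    DEFINE_LDUW(_as,     FakeAS *as,       as->mem)     /* address_space_* style */
    DEFINE_LDUW(_cached, FakeCache *cache, cache->mem)  /* *_cached style        */

    int main(void)
    {
        ram[4] = 0x78;
        ram[5] = 0x56;

        FakeAS    as    = { ram };
        FakeCache cache = { ram };
        printf("as=0x%04x cached=0x%04x\n",
               lduw_as(&as, 4), lduw_cached(&cache, 4));
        return 0;
    }
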
Backports commit 0ce265ffef87f19f4dd1ff0663e09a63d66ae408 from qemu