This will ensure we never use the MMX_* and ZMM_* macros with the
wrong struct type.
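As an illustration of the idea (the field and macro names below are simplified, not necessarily the exact ones QEMU uses): giving each union its own field-name suffix means a macro applied to the wrong register type no longer compiles.

    /* Hypothetical sketch: struct-specific field names turn misuse into a
     * compile-time error instead of a silent reinterpretation. */
    #include <stdint.h>

    typedef union {
        uint64_t _q_MMXReg[1];
    } MMXReg;

    typedef union {
        uint64_t _q_ZMMReg[8];
    } ZMMReg;

    #define MMX_Q(reg, n) ((reg)._q_MMXReg[n])
    #define ZMM_Q(reg, n) ((reg)._q_ZMMReg[n])

    /* ZMM_Q(some_mmx_reg, 0) now fails to compile rather than reading
     * past the end of the smaller struct. */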
Backports commit f23a9db6bca5b9a228c77bbcaa06d01510e148b7 from qemu
Add a new field and reorder MMXReg fields, to make MMXReg and
ZMMReg field lists look the same (except for the array sizes).
Backports commit 9253e1a7923e94598419ac9a7df7b8bc6cba65a5 from qemu
They are helpers for the ZMMReg fields, so name them accordingly.
This is just a global search+replace; no other changes are
introduced.
Backports commit 19cbd87c14ab208858ee1233b790f37cfefed4b9 from qemu
The struct represents a 512-bit register, so name it accordingly.
This is just a global search+replace; no other changes are
introduced.
Backports commit fa4518741ed69aa7993f9c15bb52eacc375681fc from qemu
Make MMXReg use the same field names used on XMMReg, so we can
try to reuse macros and other code later.
Backports commit 9618f40f06e90c8fa8ae06b56c7404a7cc937e22 from qemu
Rename the function so that the reason for its existence is
clearer: it does x86-specific initialization of TCG structures.
Backports commit 63618b4ed48f0fc2a7a3fd1117e2f0b512248dab from qemu
The AArch64 FPEXC32_EL2 system register is visible at EL2 and EL3,
and allows those exception levels to read and write the FPEXC
register for a lower exception level that is using AArch32.
Backports commit 03fbf20f4da58f41998dc10ec7542f65d37ba759 from qemu
The architecture requires that for an exception return to AArch32 the
low bits of ELR_ELx are ignored when the PC is set from them:
* if returning to Thumb mode, ignore ELR_ELx[0]
* if returning to ARM mode, ignore ELR_ELx[1:0]
We were only squashing bit 0; also squash bit 1 if the SPSR T bit
indicates this is a return to ARM code.
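A minimal sketch of the fix, assuming a helper like the one below (the helper name is illustrative; the SPSR T bit is bit 5):

    /* Hypothetical sketch: mask the low bits of ELR_ELx when returning
     * to AArch32, depending on whether the target is Thumb or ARM. */
    static uint64_t aarch32_return_pc(uint64_t elr, uint32_t spsr)
    {
        bool return_to_thumb = (spsr >> 5) & 1;   /* SPSR.T */

        return return_to_thumb ? (elr & ~1ULL)    /* Thumb: ignore bit 0    */
                               : (elr & ~3ULL);   /* ARM:   ignore bits 1:0 */
    }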
Backports commit c1e0371442bf3a7e42ad53c2a3d816ed7099f81d from qemu
We already implement almost all the checks for the illegal
return events from AArch64 state described in the ARM ARM section
D1.11.2. Add the two missing ones:
* return to EL2 when EL3 is implemented and SCR_EL3.NS is 0
* return to Non-secure EL1 when EL2 is implemented and HCR_EL2.TGE is 1
(We don't implement external debug, so the case of "debug state exit
from EL0 using AArch64 state to EL0 using AArch32 state" doesn't apply
for QEMU.)
Backports commit e393f339af87da7210f6c86902b321df6a2e8bf5 from qemu
Remove the assumptions that the AArch64 exception return code was
making about a return to AArch32 always being a return to EL0.
This includes pulling out the illegal-SPSR checks so we can apply
them for return to 32-bit as well as return to 64-bit.
Backports commit 3809951bf61605974b91578c582de4da28f8ed07 from qemu
The entry offset when taking an exception to AArch64 from a lower
exception level may be 0x400 or 0x600. 0x400 is used if the
implemented exception level immediately lower than the target level
is using AArch64, and 0x600 if it is using AArch32. We were
incorrectly implementing this as checking the exception level
that the exception was taken from. (The two can be different if
for example we take an exception from EL0 to AArch64 EL3; we should
in this case be checking EL2 if EL2 is implemented, and EL1 if
EL2 is not implemented.)
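A hedged sketch of the corrected logic (arm_el_is_aa64() and arm_feature() are existing QEMU helpers; the wrapper itself is illustrative):

    /* Hypothetical sketch: the 0x400 vs 0x600 choice depends on the register
     * width of the implemented EL immediately below the target EL, not on
     * the EL the exception was taken from. */
    static unsigned int lower_el_vector_offset(CPUARMState *env, int target_el)
    {
        int below = target_el - 1;

        if (below == 2 && !arm_feature(env, ARM_FEATURE_EL2)) {
            below = 1;   /* EL2 not implemented: next lower implemented EL is EL1 */
        }
        return arm_el_is_aa64(env, below) ? 0x400 : 0x600;
    }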
Backports commit 3d6f761713745dfed7d2ccfe98077d213a6a6eba from qemu
Handling of semihosting calls should depend on the register width
of the calling code, not on that of any higher exception level,
so we need to identify and handle semihosting calls before we
decide whether to deliver the exception as an entry to AArch32
or AArch64. (EXCP_SEMIHOST is also an "internal exception" so
it has no target exception level in the first place.)
This will allow AArch32 EL1 code to use semihosting calls when
running under an AArch64 EL3.
Backports commit 904c04de2e1b425e7bc8c4ce2fae3d652eeed242 from qemu
If EL2 or EL3 is present on an AArch64 CPU, then exceptions can be
taken to an exception level which is running AArch32 (if only EL0
and EL1 are present then EL1 must be AArch64 and all exceptions are
taken to AArch64). To support this we need to have a single
implementation of the CPU do_interrupt() method which can handle both
32-bit and 64-bit exception entry.
Pull the common parts of aarch64_cpu_do_interrupt() and
arm_cpu_do_interrupt() out into a new function which calls
either the AArch32 or AArch64 specific entry code once it has
worked out which one is needed.
We temporarily special-case the handling of EXCP_SEMIHOST to
avoid an assertion in arm_el_is_aa64(); the next patch will
pull all the semihosting handling out to the arm_cpu_do_interrupt()
level (since semihosting semantics depend on the register width
of the calling code, not on that of any higher EL).
Backports commit 966f758c49ff478c4757efa5970ce649161bff92 from qemu
Move the aarch64_cpu_do_interrupt() function to helper.c. We want
to be able to call this from code that isn't AArch64-only, and
the move allows us to avoid awkward #ifdeffery at the callsite.
Backports commit f3a9b6945cbbb23f3a70da14e9ffdf1e60c580a8 from qemu
Support EL2 and EL3 in arm_el_is_aa64() by implementing the
logic for checking the SCR_EL3 and HCR_EL2 register-width bits
as appropriate to determine the register width of lower exception
levels.
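A simplified sketch of the width derivation (SCR_RW and HCR_RW are the register-width bits; the real function also handles secure EL1 and other corner cases):

    /* Hypothetical sketch: a lower EL runs AArch64 only if every controlling
     * higher EL leaves its RW bit set. */
    static bool el_is_aa64_sketch(CPUARMState *env, int el)
    {
        bool aa64 = arm_feature(env, ARM_FEATURE_AARCH64);

        if (aa64 && el < 3 && arm_feature(env, ARM_FEATURE_EL3)) {
            aa64 = (env->cp15.scr_el3 & SCR_RW) != 0;
        }
        if (aa64 && el < 2 && arm_feature(env, ARM_FEATURE_EL2)) {
            aa64 = (env->cp15.hcr_el2 & HCR_RW) != 0;
        }
        return aa64;
    }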
Backports commit 446c81abf8e0572b8d5d23fe056516ac62af278d from qemu
If we have a secure address space, use it in page table walks:
when doing the physical accesses to read descriptors, make them
through the correct address space.
(The descriptor reads are the only direct physical accesses
made in target-arm/ for CPUs which might have TrustZone.)
Backports commit 5ce4ff6502fc6ae01a30c3917996c6c41be1d176 from qemu
Implement the asidx_from_attrs CPU method to return the
Secure or NonSecure address space as appropriate.
(The function is inline so we can use it directly in target-arm
code to be added in later patches.)
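The implementation can be as small as the following sketch (ARMASIdx_S and ARMASIdx_NS are the ARM address space indexes; treat the exact body as illustrative):

    /* Hypothetical sketch: map transaction attributes to the Secure or
     * NonSecure address space index. */
    static inline int arm_asidx_from_attrs(CPUState *cs, MemTxAttrs attrs)
    {
        return attrs.secure ? ARMASIdx_S : ARMASIdx_NS;
    }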
Backports commit 017518c1f6ed9939c7f390cb91078f0919b5494c from qemu
Add QOM property to the ARM CPU which boards can use to tell us what
memory region to use for secure accesses. Nonsecure accesses
go via the memory region specified with the base CPU class 'memory'
property.
By default, if no secure region is specified it is the same as the
nonsecure region, and if no nonsecure region is specified we will use
address_space_memory.
Backports commit 9e273ef2174d7cd5b14a16d8638812541d3eb6bb from qemu
address_space_translate_internal will clamp the *plen length argument
based on the size of the memory region being queried. The IOMMU walker
logic in address_space_translate was ignoring this by discarding the
value of *plen after that call. Fix it by always using *plen as the
length argument throughout the function, removing the len local variable.
This fixes a bootloader bug when a single elf section spans multiple
QEMU memory regions.
Backports commit 23820dbfc79d1c9dce090b4c555994f2bb6a69b3 from qemu
Add a MemoryRegion property, which if set is used to construct
the CPU's initial (default) AddressSpace.
Backports commit 6731d864f80938e404dc3e5eb7f6b76b891e3e43 from qemu
When accessing the dispatch pointer in an AddressSpace within an RCU
critical section we should always use atomic_rcu_read(). Fix an
access within memory_region_section_get_iotlb() which was incorrectly
doing a direct pointer access.
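The corrected access pattern looks roughly like this (a sketch; "as" stands for the AddressSpace being queried and the surrounding code is simplified):

    /* Hypothetical sketch: read the RCU-protected dispatch pointer with
     * atomic_rcu_read() rather than dereferencing it directly, and only
     * use it inside the read-side critical section. */
    rcu_read_lock();
    AddressSpaceDispatch *d = atomic_rcu_read(&as->dispatch);
    /* ... resolve the section / iotlb entry via d ... */
    rcu_read_unlock();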
Backports commit 0b8e2c1002afddc8ef3d52fa6fc29e4768429f98 from qemu
Check the return value of the function it calls and error out if it is non-zero.
Fix up qemu_rdma_init_one_block, the only current caller, and rdma_add_block,
the only function it calls using it.
Pass the name of the ramblock to the function; this helps in debugging.
Backports commit e3807054e20fb3b94d18cb751c437ee2f43b6fac from qemu
It is not necessary to munmap an area before remapping it with MAP_FIXED;
if the memory region specified by addr and len overlaps pages of any
existing mapping, then the overlapped part of the existing mapping will
be discarded.
On the other hand, if QEMU does munmap the pages, there is a small
probability that another mmap sneaks in and catches the just-freed
portion of the address space. In effect, munmap followed by
mmap(MAP_FIXED) is a use-after-free error, and Coverity flags it
as such. Fix it.
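For illustration, a MAP_FIXED mapping replaces any overlapping pages atomically, so no prior munmap is needed (the protection and mapping flags here are illustrative):

    /* Sketch: remap a range in place. MAP_FIXED discards the overlapped part
     * of any existing mapping atomically, so there is no window in which
     * another mmap can steal the address range. */
    void *p = mmap(addr, length, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
    if (p == MAP_FAILED) {
        /* handle the error */
    }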
Backports commit f18c69cfc554cf9776eb3c35b7510e17541afacb from qemu
This reverts commit c3c1bb9.
It causes problems with boards that declare memory regions shorter
than the registers they contain.
Backports commit 4025446f0ac6213335c22ec43f3c3d8362ce7286 from qemu
Block size must fundamentally be a multiple of target page size.
Aligning automatically removes the need for callers to worry about
alignment.
Note: the only caller of qemu_ram_resize (ACPI) already happens to have
its size padded to a power of 2, but we would like to drop the padding in
the ACPI core, and don't want to expose target page size knowledge to ACPI.
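A minimal sketch of the rounding, assuming the page size is a power of two (the helper name is made up for illustration):

    /* Hypothetical sketch: round a requested block size up to the target
     * page size so callers never have to think about alignment. */
    static uint64_t align_up_to_page(uint64_t size, uint64_t page_size)
    {
        return (size + page_size - 1) & ~(page_size - 1);
    }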
Backports commit 129ddaf31be583fb7c97812e07e028661005ce42 from qemu
Allow "unlocked" reads of the ram_list by using an RCU-enabled QLIST.
The ramlist mutex is kept. call_rcu callbacks are run with the iothread
lock taken, but that may change in the future. Writers still take the
ramlist mutex, but they no longer need to assume that the iothread lock
is taken.
Readers of the list, instead, no longer require either the iothread
or ramlist mutex, but they need to use rcu_read_lock() and
rcu_read_unlock().
One place in arch_init.c was downgrading from write side to read side
like this:
    qemu_mutex_lock_iothread()
    qemu_mutex_lock_ramlist()
    ...
    qemu_mutex_unlock_iothread()
    ...
    qemu_mutex_unlock_ramlist()
and the equivalent idiom is:
    qemu_mutex_lock_ramlist()
    rcu_read_lock()
    ...
    qemu_mutex_unlock_ramlist()
    ...
    rcu_read_unlock()
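On the reader side, walking the list then looks roughly like this (QLIST_FOREACH_RCU is QEMU's RCU-aware iteration macro; the loop body is illustrative):

    RAMBlock *block;

    rcu_read_lock();
    QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
        /* ... inspect block without holding the ramlist mutex ... */
    }
    rcu_read_unlock();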
Backports the write barriers from commit 0dc3f44aca18b1be8b425f3f4feb4b3e8d68de2e in qemu
Coverity flags this as "dereference after null check". Not quite a
dereference, since it will just EFAULT, but still nice to fix.
Backports commit a904c91196a9c5dbd7b9abcd3d40b0824286fb1c from qemu
The TARGET_HAS_ICE #define is intended to indicate whether a target-*
guest CPU implementation supports the breakpoint handling. However,
all our guest CPUs have that support (the only two which do not
define TARGET_HAS_ICE are unicore32 and openrisc, and in both those
cases the bp support is present and the lack of the #define is just
a bug). So remove the #define entirely: all new guest CPU support
should include breakpoint handling as part of the basic implementation.
Backports commit ec53b45bcd1f74f7a4c31331fa6d50b402cd6d26 from qemu
Host pointer accesses force pointer math; let's
add a wrapper to make them safer.
Backports relevant parts of commit 1240be24357ee292f8d05aa2abfdba75dd0ca25d from qemu
Otherwise fw_cfg accesses are split into 4-byte ones before they reach the
fw_cfg ops / handlers.
Backports commit ff6cff7554be06e95f8d712f66cd16bd6681c746 from qemu
Loading the BIOS in the mac99 machine is interesting, because there is a
PROM in the middle of the BIOS region (from 16K to 32K). Before memory
region accesses were clamped, when QEMU was asked to load a BIOS from
0xfff00000 to 0xffffffff it would put even those 16K from the BIOS file
into the region. This is weird because those 16K were not actually
visible between 0xfff04000 and 0xfff07fff. However, it worked.
After clamping was added, this also worked. In this case, the
cpu_physical_memory_write_rom_internal function split the write into
three parts: the first 16K were copied, the PROM area (the second 16K) was
ignored, then the rest was copied.
Problems then started with commit 965eb2f (exec: do not clamp accesses
to MMIO regions, 2015-06-17). Clamping accesses is not done for MMIO
regions because they can overlap wildly, and MMIO registers can be
expected to perform full-width accesses based only on their address
(with no respect for adjacent registers that could decode to completely
different MemoryRegions). However, this lack of clamping also applied
to the PROM area! cpu_physical_memory_write_rom_internal thus failed
to copy the third range above, i.e. only copied the first 16K of the BIOS.
In effect, address_space_translate is expecting _something else_ to do
the clamping for MMIO regions if the incoming length is large. This
"something else" is memory_access_size in the case of address_space_rw,
so use the same logic in cpu_physical_memory_write_rom_internal.
Backports commit b242e0e0e2969c044a318e56f7988bbd84de1f63 from qemu
Because the clamping was done against the MemoryRegion,
address_space_rw was effectively broken if a write spanned
multiple sections that are not linear in underlying memory
(with the memory not being under an IOMMU).
This is visible with the MIPS rc4030 IOMMU, which is implemented
as a series of alias memory regions that point to the actual RAM.
Backports commit e4a511f8cc6f4a46d409fb5c9f72c38ba45f8d83 from qemu
It is common for MMIO registers to overlap, for example a 4 byte register
at 0xcf8 (totally random choice... :)) and a 1 byte register at 0xcf9.
If these registers are implemented via separate MemoryRegions, it is
wrong to clamp the accesses as the value written would be truncated.
Hence for these regions the effects of commit 23820db (exec: Respect
as_translate_internal length clamp, 2015-03-16, previously applied as
commit c3c1bb9) must be skipped.
Backports commit 965eb2fcdfe919ecced6c34803535ad32dc1249c from qemu
This will either create a new AS or return a pointer to an
already existing equivalent one, if we have already created
an AS for the specified root memory region.
The motivation is to reuse address spaces as much as possible.
It's going to be quite common that bus masters out in device land
have pointers to the same memory region for their mastering yet
each will need to create its own address space. Let the memory
API implement sharing for them.
Aside from the perf optimisations, this should reduce the amount
of redundant output on info mtree as well.
The returned value will be malloced, but the malloc will be
automatically freed when the AS runs out of refs.
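Conceptually the lookup-or-create step is something like the sketch below (the list handling, naming and reference counting are illustrative, not the exact implementation):

    /* Hypothetical sketch: reuse an existing AddressSpace whose root matches,
     * otherwise create a new one and hand back a reference to it. */
    AddressSpace *as_get_shareable(MemoryRegion *root, const char *name)
    {
        AddressSpace *as;

        QTAILQ_FOREACH(as, &address_spaces, address_spaces_link) {
            if (as->root == root) {
                as->ref_count++;
                return as;
            }
        }

        as = g_malloc0(sizeof(*as));
        address_space_init(as, root, name);
        as->ref_count = 1;
        return as;
    }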
Backports commit f0c02d15b57da6f5463e3768aa0cfeedccf4b8f4 from qemu
Use cpu_get_phys_page_attrs_debug() when doing virtual-to-physical
conversions in debug related code, so that we can obtain the right
address space index and thus select the correct AddressSpace,
rather than always using cpu->as.
Backports commit 5232e4c798ba7a46261d3157b73d08fc598e7dcb from qemu
Add a function to return the AddressSpace for a CPU based on
its numerical index. (Callers outside exec.c don't have access
to the CPUAddressSpace struct so can't just fish it out of the
CPUState struct directly.)
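A plausible sketch of such a helper (the internal CPUAddressSpace layout shown is an assumption):

    /* Hypothetical sketch: return the AddressSpace registered for a CPU at
     * the given index, keeping CPUAddressSpace private to exec.c. */
    AddressSpace *cpu_get_address_space(CPUState *cpu, int asidx)
    {
        /* asidx is expected to be below the number of ASes for this CPU */
        return cpu->cpu_ases[asidx].as;
    }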
Backports commit 651a5bc03705102de519ebf079a40ecc1da991db from qemu
Pass the MemTxAttrs for the memory access to iotlb_to_region(); this
allows it to determine the correct AddressSpace to use for the lookup.
Backports commit a54c87b68a0410d0cf6f8b84e42074a5cf463732 from qemu
When looking up the MemoryRegionSection for the new TLB entry in
tlb_set_page_with_attrs(), use cpu_asidx_from_attrs() to determine
the correct address space index for the lookup, and pass it into
address_space_translate_for_iotlb().
Backports commit d7898cda81b6efa6b2d7a749882695cdcf280eaa from qemu
Add a new method to CPUClass which the memory system core can
use to obtain the correct address space index to use for a memory
access with a given set of transaction attributes, together
with the wrapper function cpu_asidx_from_attrs() which implements
the default behaviour ("always use asidx 0") for CPU classes
which don't provide the method.
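A sketch of the wrapper's fallback behaviour (close to what the description implies, but treat the body as illustrative):

    /* Hypothetical sketch: use the CPU class hook when present, otherwise
     * default to address space index 0. */
    static inline int cpu_asidx_from_attrs(CPUState *cpu, MemTxAttrs attrs)
    {
        CPUClass *cc = CPU_GET_CLASS(cpu);

        return cc->asidx_from_attrs ? cc->asidx_from_attrs(cpu, attrs) : 0;
    }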
Backports commit d7f25a9e6a6b2c69a0be6033903b7d6087bcf47d from qemu
Add a new optional method get_phys_page_attrs_debug() to CPUClass.
This is like the existing get_phys_page_debug(), but also returns
the memory transaction attributes to use for the access.
This will be necessary for CPUs which have multiple address
spaces and use the attributes to select the correct address
space.
We provide a wrapper function cpu_get_phys_page_attrs_debug()
which falls back to the existing get_phys_page_debug(), so we
don't need to change every target CPU.
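The fallback can be sketched as follows (illustrative; MEMTXATTRS_UNSPECIFIED stands in for "no particular attributes"):

    /* Hypothetical sketch: prefer the new attrs-aware hook, fall back to the
     * existing get_phys_page_debug() with unspecified attributes. */
    hwaddr cpu_get_phys_page_attrs_debug(CPUState *cpu, vaddr addr,
                                         MemTxAttrs *attrs)
    {
        CPUClass *cc = CPU_GET_CLASS(cpu);

        if (cc->get_phys_page_attrs_debug) {
            return cc->get_phys_page_attrs_debug(cpu, addr, attrs);
        }
        *attrs = MEMTXATTRS_UNSPECIFIED;
        return cc->get_phys_page_debug(cpu, addr);
    }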
Backports commit 1dc6fb1f5cc5cea5ba01010a19c6acefd0ae4b73 from qemu