This patch fixes exception handling for FPU instructions
and removes an obsolete PC update from translate.c.
Backports commit 6cad09d2f74d7318f737acaa21b3da49a0c9e670 from qemu
This patch introduces new versions of raise_exception functions
that receive the TB return address as an argument.
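A minimal sketch of the shape of such a function for the i386 target
(helper and field names here are assumptions, not the exact backported code):

    /* Sketch: an exception-raising variant that takes the host return
     * address into the TB, so the guest PC can be restored before the
     * exception is delivered. */
    void QEMU_NORETURN raise_exception_ra(CPUX86State *env, int exception_index,
                                          uintptr_t retaddr)
    {
        CPUState *cs = CPU(x86_env_get_cpu(env));

        if (retaddr) {
            /* Map the host PC back to the guest state recorded for this TB. */
            cpu_restore_state(cs, retaddr);
        }
        cs->exception_index = exception_index;
        cpu_loop_exit(cs);
    }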
Backports commit 9198009529d06b6489b68a7505942cca3a50893f from qemu
Future patches will be adding more crypto related APIs which
rely on QOM infrastructure. This creates a problem, because
QOM relies on library constructors to register objects. When
you have a file in a static .a library though which is only
referenced by a constructor the linker is dumb and will drop
that file when linking to the final executable :-( The only
workaround for this is to link the .a library to the executable
using the -Wl,--whole-archive flag, but this creates its own
set of problems because QEMU is relying on lazy linking for
libqemuutil.a. Using --whole-archive majorly increases the
size of final executables as they now contain a bunch of
object code they don't actually use.
The least bad option is to thus not include the crypto objects
in libqemuutil.a, and instead define a crypto-obj-y variable
that is referenced directly by all the executables that need
this code (tools + softmmu, but not qemu-ga). We avoid pulling
all of crypto-obj-y into the userspace emulators, as that
would force them to link to gnutls too, which is not required.
Backports commit fb37726db77b21f3731b90693d2c93ade1777528 from qemu
There is some iffy lock hierarchy going on in translate-all.c. To
fix it, we need to take the mmap_lock in cpu-exec.c. Make the
functions globally available.
Backports commit 8fd19e6cfd5b6cdf028c6ac2ff4157ed831ea3a6 from qemu
Synchronize the remaining pair of accesses in cpu_signal. These should
be necessary on Windows as well, at least in theory. Probably
SuspendProcess and ResumeProcess introduce some implicit memory
barrier.
Backports relevant parts of commit aed807c8e2bf009b2c6a35490d4fd4383887221d from qemu
TCG has not been reading cpu->current_tb from signal handlers for years.
The code that synchronized cpu_exec with the signal handler is not
needed anymore.
Backports commit b0a46fa796504c7334202877a68c857e49f7c96c from qemu
This is already useful on Windows in order to remove tls.h, because
accesses to current_cpu are done from a different thread on that
platform. It will be used on POSIX platforms as soon as TCG stops using
signals to interrupt the execution of translated code.
Backports commit 9373e63297c43752f9cf085feb7f5aed57d959f8 from qemu
Break out mpidr_read_val() to allow future sharing of the
code that conditionally sets the M and U bits of MPIDR.
No functional changes.
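A rough sketch of the broken-out reader (field and helper names are
assumptions for illustration):

    static uint64_t mpidr_read_val(CPUARMState *env)
    {
        ARMCPU *cpu = arm_env_get_cpu(env);
        uint64_t mpidr = cpu->mp_affinity;

        if (arm_feature(env, ARM_FEATURE_V7MP)) {
            mpidr |= (1U << 31);        /* M: MP-extensions register format */
            if (cpu->mp_is_up) {
                mpidr |= (1U << 30);    /* U: uniprocessor system */
            }
        }
        return mpidr;
    }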
Backports commit 06a7e6477c129ceaa72bd400cf281d44c456be43 from qemu
Stage-2 MMU translations do not have configurable TBI as
the top byte is always 0 (48-bit IPAs).
Backports commit 1edee4708a0e3163cbf20fac325be456abd960bb from qemu
This patch introduces a loop exit function, which also
restores the guest CPU state according to the value of the host
program counter.
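A simplified sketch of such a loop-exit-with-restore function (the real
code also clears some per-CPU bookkeeping; names assumed):

    void cpu_loop_exit_restore(CPUState *cpu, uintptr_t pc)
    {
        if (pc) {
            /* Use the host PC inside the TB to restore guest PC and icount. */
            cpu_restore_state(cpu, pc);
        }
        /* Unwind back to the main execution loop in cpu_exec(). */
        siglongjmp(cpu->jmp_env, 1);
    }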
Backports commit 1c3c8af1fb40a481c07749e0448644d9b7700415 from qemu
Now that the cpu_ld/st_* functions directly call helper_ret_ld/st, we can
drop the old helper_ld/st functions.
Backports commit b8611499b940b1b4db67aa985e3a844437bcbf00 from qemu
This patch introduces several helpers to pass a return address
that points into the TB. A correct return address allows correct
restoration of the guest PC and icount. These functions should be used
when helpers embedded in a TB invoke memory operations.
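A sketch of the intended usage pattern (the accessor and helper names are
assumptions; GETPC() is the usual way to obtain the host return address
into the TB):

    /* A target helper that touches guest memory forwards GETPC(), the
     * host return address into the TB, so a TLB fault inside the access
     * can restore the guest PC and icount correctly. */
    void helper_example_store(CPUArchState *env, target_ulong addr, uint32_t val)
    {
        cpu_stl_data_ra(env, addr, val, GETPC());
    }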
Backports commit 282dffc8a4bfe8724548cabb8a26698bde0a6e18 from qemu
This is set to true when the index is for an instruction fetch
translation.
The core get_page_addr_code() sets it, as do the SOFTMMU_CODE_ACCESS
accessors.
All targets ignore it for now, and all other callers pass "false".
This will allow targets that wish to split the mmu index between
instruction and data accesses to do so. A subsequent patch will
do just that for PowerPC.
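A hypothetical example of what the flag enables for a target that does
split its MMU indexes (illustrative names only, not from the backported
patch):

    /* Hypothetical target code (CPUFooState and the index names are made
     * up): a target that splits instruction and data MMU indexes can now
     * pick the index based on the access type. */
    static inline int cpu_mmu_index(CPUFooState *env, bool ifetch)
    {
        return ifetch ? MMU_FOO_INST_IDX : MMU_FOO_DATA_IDX;
    }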
Backports commit 97ed5ccdee95f0b98bedc601ff979e368583472c from qemu
This is particularly useful when we abort in error_propagate(),
because there the stack backtrace doesn't lead to where the error was
created. Looks like this:
Unexpected error in parse_block_error_action() at .../qemu/blockdev.c:322:
qemu-system-x86_64: -drive if=none,werror=foo: 'foo' invalid write error action
Aborted (core dumped)
Note: to get this example output, I monkey-patched drive_new() to pass
&error_abort to blockdev_init().
To keep the error handling boilerplate from growing even more, all
error_setFOO() become macros expanding into error_setFOO_internal()
with additional __FILE__, __LINE__, __func__ arguments. Not exactly
pretty, but it works.
The macro trickery breaks down when you take the address of an
error_setFOO(). Fortunately, we do that in just one place: qemu-ga's
Windows VSS provider and requester DLL wants to call
error_setg_win32() through a function pointer "to avoid linking glib
to the DLL". Use error_setg_win32_internal() there. The use of the
function pointer is already wrapped in a macro, so the churn isn't
bad.
Code size increases by some 35KiB for me (0.7%). Tolerable. Could be
less if we passed relative rather than absolute source file names to
the compiler, or forwent reporting __func__.
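The macro pattern being described looks roughly like this (a sketch; the
exact argument order is an assumption):

    /* Each error_setFOO() becomes a macro that appends source location
     * information and expands into an _internal() function. */
    #define error_setg(errp, fmt, ...)                                  \
        error_setg_internal((errp), __FILE__, __LINE__, __func__,       \
                            (fmt), ## __VA_ARGS__)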
Backports commit 1e9b65bb1bad51735cab6c861c29b592dccabf0e from qemu
The error object creation code was duplicated when commit 680d16d added
error_set_errno(), and again when
commit 20840d4 added error_set_win32().
Make the original copy in error_set() reusable by factoring out
error_setv(), then rewrite error_set_errno() and error_set_win32() on
top of it.
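A sketch of the factoring being described (struct fields and details are
assumptions):

    /* Common worker: build an Error object from a va_list (sketch). */
    static void error_setv(Error **errp, ErrorClass err_class,
                           const char *fmt, va_list ap)
    {
        Error *err;

        if (errp == NULL) {
            return;                         /* caller ignores errors */
        }
        err = g_malloc0(sizeof(*err));
        err->msg = g_strdup_vprintf(fmt, ap);
        err->err_class = err_class;
        *errp = err;
    }

    /* error_set() and friends become thin wrappers; the errno/win32
     * variants then append their system error text to the message. */
    void error_set(Error **errp, ErrorClass err_class, const char *fmt, ...)
    {
        va_list ap;

        va_start(ap, fmt);
        error_setv(errp, err_class, fmt, ap);
        va_end(ap);
    }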
Backports commit 552375088a832fd5945ede92d01f98977b4eca13 from qemu
Log the target EL when taking exceptions. This is useful when
debugging guest SW or QEMU itself while transitioning through
the various ELs.
Backports commit dbc29a868cf5b7e6fa7bb2e6c4f188b9470779c5 from qemu
If EL3 is not supported in the current configuration,
we should not try to get the EL3 bitness.
Backports commit cef9ee706792b1e205fe472b67053a0e82cd058e from qemu
Use pow2floor() to round down to the nearest power of 2,
rather than an inline calculation.
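For reference, a self-contained illustration of the rounding behaviour
(not QEMU's pow2floor() implementation):

    #include <stdint.h>

    /* Repeatedly clearing the lowest set bit leaves only the highest one,
     * i.e. the largest power of two <= value.  E.g. 1000 -> 512,
     * 4096 -> 4096. */
    static uint64_t pow2floor_example(uint64_t value)
    {
        while (value & (value - 1)) {
            value &= value - 1;
        }
        return value;
    }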
Backports commit 6554f5c03793bb8a3d5dedcebf758a1694fa186c from qemu
Introduces reusable definitions for CPU affinity masks/shifts and gets rid
of hardcoded magic numbers.
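The kind of definitions meant here, assuming 8-bit MPIDR affinity fields
(names and values are assumptions for illustration):

    /* Hypothetical names for illustration: CPU affinity fields Aff0..Aff2. */
    #define ARM_AFF0_SHIFT 0
    #define ARM_AFF0_MASK  (0xFFULL << ARM_AFF0_SHIFT)
    #define ARM_AFF1_SHIFT 8
    #define ARM_AFF1_MASK  (0xFFULL << ARM_AFF1_SHIFT)
    #define ARM_AFF2_SHIFT 16
    #define ARM_AFF2_MASK  (0xFFULL << ARM_AFF2_SHIFT)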
Backports commit 0f4a9e45ec35811ee250ac232d84d3c6d4fcd7fc from qemu
There is an error in the arm_excp_unmasked() function:
the bitwise operator & is used with integer and bool operands,
producing an incorrectly zeroed result.
The patch fixes it.
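A standalone illustration of the bug class, not the actual
arm_excp_unmasked() code:

    #include <stdbool.h>
    #include <stdint.h>

    /* With '&', the int operand keeps its bit position, so ANDing it with
     * a bool (0 or 1) can yield 0 even when both operands are logically
     * true; '&&' compares truth values instead. */
    static bool irq_unmasked(uint32_t hcr_imo_bit, bool pstate_allows)
    {
        /* Wrong: if hcr_imo_bit == 0x10, then 0x10 & 1 == 0. */
        /* return hcr_imo_bit & pstate_allows; */

        /* Correct: */
        return hcr_imo_bit && pstate_allows;
    }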
Backports commit 771842585f3119f69641ed90a97d56eb9ed6f5ae from qemu
There is an error in the functions aarch64_sync_32_to_64() and
aarch64_sync_64_to_32() in the mapping of registers between AArch32 and
AArch64. This commit fixes the mapping to match the v8 ARM ARM
section D1.20.1 (table D1-77).
Backports commit 3a9148d0bdcee990fbe86759b9b1f5723c1d7fbc from qemu
All of these hw_errors are fatal and indicate something wrong with
the QEMU implementation.
Convert them to g_assert_not_reached().
Backports commit 8f6fd322f6e25995629a1a07b56bc5b91fb947ca from qemu
For the A64 instruction set, the semihosting call instruction
is 'HLT 0xf000'. Wire this up to call do_arm_semihosting()
if semihosting is enabled.
Backports commit 8012c84ff92a36d05dfe61af9b24dd01a7ea25e4 from qemu
The 64-bit A64 semihosting API has some pervasive changes from
the 32-bit version:
* all parameter blocks are arrays of 64-bit values, not 32-bit
* the semihosting call number is passed in W0
* the return value is a 64-bit value in X0
Implement the necessary handling for this widening.
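A sketch of how the parameter-block handling widens (macro shape and
accessor names are assumptions):

    /* Read the n-th field of the semihosting parameter block: 64-bit
     * entries for A64 callers, 32-bit entries for A32/T32 callers. */
    #define GET_ARG(n) do {                                  \
        if (is_a64(env)) {                                   \
            if (get_user_u64(arg ## n, args + (n) * 8)) {    \
                return -1;                                   \
            }                                                \
        } else {                                             \
            if (get_user_u32(arg ## n, args + (n) * 4)) {    \
                return -1;                                   \
            }                                                \
        }                                                    \
    } while (0)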
Backports relevant parts of commit faacc041619581c566c21ed87aa1933420731282 from qemu
Print semihosting debugging information before the
do_arm_semihosting() call so that angel_SWIreason_ReportException,
which causes the function to not return, gets the same debug prints as
other semihosting calls. Also print out the semihosting call number.
Backports commit 205ace55ffff77964e50af08c99639ec47db53f6 from qemu
Softmmu unaligned loads/stores currently go through the slow
path for two reasons:
- to support unaligned accesses on hosts with strict alignment
- to correctly handle accesses crossing pages
Only the second reason applies to x86. Unaligned accesses are
avoided by compilers, but are not uncommon. We therefore would like
to see them going through the fast path, if they don't cross pages.
For that we can use the fact that two adjacent TLB entries can't contain
the same page. Therefore accessing the TLB entry corresponding to the
first byte, but comparing its content to the page address of the last byte
ensures that we don't cross pages. We can do this check without adding
more instructions in the TLB code (but increasing its length by one
byte) by using the LEA instruction to combine the existing move with the
size addition.
On an x86-64 host, this gives a 3% boot time improvement for a powerpc
guest and 4% for an x86-64 guest.
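A C-level illustration of the check; the actual change is emitted by the
x86 TCG backend as host instructions, and the TLB tag flag bits are
omitted here:

    /* Index the TLB with the first byte of the access, but compare the
     * entry's tag against the page of the LAST byte: an access contained
     * in one page matches as before, while one that crosses a page
     * boundary never matches and falls back to the slow path. */
    static bool access_stays_in_page(CPUArchState *env, int mmu_idx,
                                     target_ulong addr, int size)
    {
        uintptr_t idx    = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
        target_ulong tag = (addr + size - 1) & TARGET_PAGE_MASK; /* one LEA on x86 */

        return env->tlb_table[mmu_idx][idx].addr_read == tag;
    }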
Backports commit 8cc580f6a0d8c0e2f590c1472cf5cd8e51761760 from qemu
Implement the AArch64 TLBI operations which take an intermediate
physical address and invalidate stage 2 translations.
Backports commit cea66e91212164e02ad1d245c2371f7e8eb59e7f from qemu
Now that we have the ability to flush the TLB only for specific MMU indexes,
update the AArch64 TLB maintenance instruction implementations to only
flush the parts of the TLB they need to, rather than doing full flushes.
We take the opportunity to remove some duplicate functions (the per-asid
tlb ops work like the non-per-asid ones because we don't support
flushing a TLB only by ASID) and to bring the function names in line
with the architectural TLBI operation names.
Backports commit fd3ed969227f54f08f87d9eb6de2d4e48e99279b from qemu
Move the two regdefs for TLBI ALLE1 and TLBI ALLE1IS down so that the
whole set of AArch64 TLBI regdefs is arranged in numeric order.
Backports commit 83ddf975777cc23337b7ef92e83b1b9c949396f3 from qemu
Guest CPU TLB maintenance operations may be sufficiently
specialized to only need to flush TLB entries corresponding
to a particular MMU index. Implement cputlb functions for
this, to avoid the inefficiency of flushing TLB entries
which we don't need to.
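The shape of the interface described, as a sketch (the MMU index names
and the -1 terminator convention are assumptions):

    /* Prototypes as described: flush the whole TLB, or one page, but only
     * for the listed MMU indexes (varargs list terminated by -1). */
    void tlb_flush_by_mmuidx(CPUState *cpu, ...);
    void tlb_flush_page_by_mmuidx(CPUState *cpu, target_ulong addr, ...);

    /* Example: an AArch64 "TLBI VMALLE1"-style helper would then flush
     * only the EL1&0 regimes instead of everything. */
    static void tlbi_vmalle1_example(CPUState *cs)
    {
        tlb_flush_by_mmuidx(cs, ARMMMUIdx_S12NSE1, ARMMMUIdx_S12NSE0, -1);
    }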
Backports commit d7a74a9d4a68e27b3a8ceda17bb95cb0a23d8e4d from qemu
Apply the correct conditions in the ats_access() function for
the ATS12NSO* address translation operations:
* succeed at EL2 or EL3
* normal UNDEF trap from NS EL1
* trap to EL3 from S EL1 (only possible if EL3 is AArch64)
(This change means they're now available in our EL3-supporting
CPUs when they would previously always UNDEF.)
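A sketch of the access-check logic for the conditions listed above (treat
the helper names and encoding test as assumptions):

    static CPAccessResult ats_access(CPUARMState *env, const ARMCPRegInfo *ri)
    {
        if (ri->opc2 & 4) {
            /* ATS12NSO* variants: fine from EL2 or EL3, UNDEF from NS EL1,
             * and trapped to EL3 from Secure EL1 (only possible when EL3
             * is AArch64). */
            if (arm_current_el(env) == 1) {
                if (arm_is_secure_below_el3(env)) {
                    return CP_ACCESS_TRAP_UNCATEGORIZED_EL3;
                }
                return CP_ACCESS_TRAP_UNCATEGORIZED;
            }
        }
        return CP_ACCESS_OK;
    }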
Backports commit 87562e4f4a2bdd028eef3549ce9cb4e7c83cb0bf from qemu