Commit 8074264 (qom: Add description field in ObjectProperty struct)
introduced property descriptions and copied them for alias properties.
Instead of using the caller-supplied property name, use the returned
property name for setting the description. This avoids an Error when
setting a property description for a property with literal "[*]" that
doesn't exist due to automatic property naming in object_property_add().
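A minimal sketch of the fixed pattern (the surrounding variable names are
illustrative, not the literal patch): use the name QOM actually assigned,
taken from the ObjectProperty returned by object_property_add(), rather than
the caller-supplied name:

ObjectProperty *op;

op = object_property_add(obj, name, prop_type, get, set, release, opaque, &err);
if (!op) {
    return;
}
/* QOM may have rewritten "foo[*]" into e.g. "foo[0]"; use op->name */
object_property_set_description(obj, op->name, description, &error_abort);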
Backports commit a18bb417e954ceea0a30b46c38b0d58c3a7ca6a1 from qemu
PC needs to be saved if an exception can be generated by a helper.
This fixes a problem where execution resumed at an unexpected address
after an exception (caused by an MSA load/store instruction) had been serviced.
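A hedged sketch of the pattern in target-mips/translate.c (the helper name and
operands below are hypothetical): the translator must save the PC before
emitting a call to any helper that may fault, so the exception handler computes
the correct restart address:

/* fragment: inside the MSA load/store emitter, before any faulting helper call */
save_cpu_state(ctx, 1);                        /* emit code to store the current PC */
gen_helper_msa_ld_df(cpu_env, tdf, twd, trs);  /* hypothetical helper name/operands */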
Backports commit 0af7a37054310384e00209e0a43efe95b7c19ef0 from qemu
All instructions which may change hflags terminate the tb. However, this doesn't
work if such an instruction is placed in a delay or forbidden slot.
gen_branch() clears MIPS_HFLAG_BMASK in ctx->hflags and then generates code
to overwrite hflags with ctx->hflags, consequently we lose any execution-time
hflags modifications. For example, in the following scenario the hflag related to
Status.CU1 will not be updated:
/* Set Status.CU1 in delay slot */
mfc0 $24, $12, 0
lui $25, 0x2000
or $25, $25, $24
b check_Status_CU1
mtc0 $25, $12, 0
With this change we clear MIPS_HFLAG_BMASK in the execution-time hflags if the
instruction in the delay or forbidden slot wants to terminate the tb for some
reason (i.e. ctx->bstate != BS_NONE).
Also, die early and loudly if an "unknown branch" is encountered, as this should
never happen.
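A sketch of the idea (not the literal patch), assuming the hflags TCG global
used by target-mips/translate.c:

if (ctx->bstate != BS_NONE) {
    /* the delay/forbidden-slot instruction already wants to end the tb:
       clear only the branch bits in the run-time hflags instead of
       overwriting hflags wholesale with ctx->hflags */
    tcg_gen_andi_i32(hflags, hflags, ~MIPS_HFLAG_BMASK);
}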
Backports commit a5f533909e746ca6e534b232fb42c9c6fd81b468 from qemu
CP0.BadVAddr is supposed to capture the most recent virtual address that caused
the exception. Currently this does not work correctly for unaligned instruction
fetch as translation is not stopped and CP0.BadVAddr is updated with subsequent
addresses.
Backports commit 62c688693bf2f0355fc5bad5dcc59c1cd2a51f1a from qemu
For the ARM M-profile cores, exception return pops various registers
including the PC from the stack. The architecture defines that if the
lowest bit in the new PC value is set (ie the PC is not halfword
aligned) then behaviour is UNPREDICTABLE. In practice hardware
implementations seem to simply ignore the low bit, and some buggy
RTOSes incorrectly rely on this. QEMU's behaviour was architecturally
permitted, but bringing QEMU into line with the hardware behaviour
allows more guest code to run. We log the situation as a guest error.
This was reported as LP:1428657.
Backports commit fcf83ab103dce6d2951f24f48e30820e7dbb3622 from qemu
The A32 encoding of LDM distinguishes LDM (user) from LDM (exception
return) based on whether r15 is in the register list. However for
STM (user) there is no equivalent distinction. We were incorrectly
treating "r15 in list" as indicating exception return for both LDM
and STM, with the result that an STM (user) involving r15 went into
an infinite loop. Fix this; note that the value stored for r15
in this case is the current PC regardless of our current mode.
Backports commit da3e53ddcb0ca924da97ca5a35605fc554aa3e05 from qemu
Save MSACSR state. Also remove fp_status, msa_fp_status and hflags, and restore
them in post_load() from the architectural registers.
Float exception flags are not present in vmstate. The information they carry
is used only by the softfloat caller, which translates them into MIPS FCSR.Cause
and FCSR.Flags, after which they are cleared. Therefore there is no need to save
them in vmstate.
Backports commit 644511117e7ca9f26d633a59c202a297113a796c from qemu
The documentation for sextract64() claims that the return type is
an int64_t, but the code itself disagrees. Fix the return type to
conform to the documentation and to bring it into line with
sextract32(), which returns int32_t.
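For reference, the signatures as documented (and now implemented):

int32_t sextract32(uint32_t value, int start, int length);
int64_t sextract64(uint64_t value, int start, int length);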
Backports commit 4f9950520a115acf9c0a209f0befa45758ad0215 from qemu
According to my reading of the Intel documentation, the SYSRET instruction
is supposed to force the RPL bits of the %ss register to 3 when returning
to user mode. The actual sequence is:
SS.Selector <-- (IA32_STAR[63:48]+8) OR 3; (* RPL forced to 3 *)
However, the code in helper_sysret() leaves them at 0 (in other words, the "OR
3" part of the above sequence is missing). It does set the privilege level
bits of %cs correctly though.
This has caused me trouble with some of my VxWorks development: code that runs
okay on real hardware will crash on QEMU, unless I apply the patch below.
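A minimal sketch of the selector arithmetic the quoted pseudocode requires,
assuming the env->star MSR field used by helper_sysret():

/* new %ss selector when returning to user mode */
uint16_t new_ss = (uint16_t)((env->star >> 48) + 8);  /* IA32_STAR[63:48] + 8 */
new_ss |= 3;                                          /* RPL forced to 3 (the missing part) */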
Backports commit ac57622985220de064059971f9ccb00905e9bd04 from qemu
The APIC ID compatibility code is required only for PC, and now that
x86_cpu_initfn() doesn't use x86_cpu_apic_id_from_index() anymore, that
code can be moved to pc.c.
Backports commit de13197a38cf45c990802661a057f64a05426cbc from qemu
Instead of setting APIC ID automatically when creating a X86CPU, require
the property to be set before realizing the object (which all callers of
cpu_x86_create() already do).
Backports commit e1356dd70aef11425883dd4d2885f1d208eb9d57 from qemu
The PC CPU initialization code already sets apic-id based on the CPU
topology, and CONFIG_USER doesn't need the topology-based APIC ID
calculation code.
Make CONFIG_USER set apic-id before realizing the CPU (just like PC
already does), so we can simplify x86_cpu_initfn later. As there is no
CPU topology configuration in CONFIG_USER, just use cpu_index as the
APIC ID.
Backports commit 9c235e83f1c3437be6ca45755909efb745c10deb from qemu
The field doesn't need to be inside CPUState, and it is not specific to
the CPUID instruction, so move and rename it.
Backports commit 9e9d3863adcbd1ffeca30f240f49805b00ba0d87 from qemu
Instead of putting extra logic inside cpu.h, just do everything inside
cpu_x86_init_user().
Backports commit 15258d46baef5f8265ad5f1002905664cf58f051 from qemu
The function was used in only two places. In one of them, the function
made the code less readable by requiring temporary te[bcd]x variables.
In the other one we can simply inline the existing code.
Backports commit 08e1a1e5a175ecbfdb761db5a62090498f736969 from qemu
After the previous patch, TLBs will be flushed on every change to
the memory mapping. This patch augments that with synchronization
of the MemoryRegionSections referred to in the iotlb array.
With this change, it is guaranteed that iotlb_to_region will access
the correct memory map, even once the TLB is accessed outside
the BQL.
Backports commit 9d82b5a792236db31a75b9db5c93af69ac07c7c5 from qemu
This for now is a simple TLB flush. This can change later for two
reasons:
1) an AddressSpaceDispatch will be cached in the CPUState object
2) it will not be possible to do tlb_flush once the TCG-generated code
runs outside the BQL.
Backports commit 76e5c76f2e2e0d20bab2cd5c7a87452f711654fb from qemu
Avoid shifting potentially negative signed offset values in
disas_ldst_pair() by keeping the offset in a uint64_t rather
than an int64_t.
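A short sketch of the pattern, assuming the imm7 field layout handled by
disas_ldst_pair():

uint64_t offset = sextract64(insn, 15, 7);  /* sign-extended imm7, held in an unsigned type */
offset <<= size;                            /* shifting a uint64_t is well defined even if
                                               the value is "negative" */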
Backports commit c2ebd862a54b7e12175d65c03ba259926cb2237a from qemu
Shifting a negative integer left is undefined behaviour in C.
Avoid it by assembling and shifting the offset fields as
unsigned values and then sign extending as the final action.
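A sketch of the general pattern (the field positions here are illustrative,
not a specific encoding):

uint64_t imm = (uint64_t)extract32(insn, 5, 19) << 2;  /* assemble and shift unsigned: no UB */
int64_t simm = sextract64(imm, 0, 21);                 /* sign extension as the final action */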
Backports commit 037e1d009e2fcb80784d37f0e12aa999787d46d4 from qemu
The code in logic_imm_decode_wmask attempts to rotate a mask
value within the bottom 'e' bits of the value with
mask = (mask >> r) | (mask << (e - r));
This has two issues:
* if the element size is 64 then a rotate by zero results
in a shift left by 64, which is undefined behaviour
* if the element size is smaller than 64 then this will
leave junk in the value at bit 'e' and above, which is
not valid input to bitfield_replicate(). As it happens,
the bits at bit 'e' to '2e - r' are exactly the ones
which bitfield_replicate is going to copy in there,
so this isn't a "wrong code generated" bug, but it's
confusing and if we ever put an assert in
bitfield_replicate it would fire on valid guest code.
Fix the former by not doing anything if r is zero, and
the latter by masking with bitmask64(e).
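A sketch of the fixed rotation, following the two points above:

if (r != 0) {                              /* r == 0: nothing to do; also avoids a shift by e == 64 */
    mask = (mask >> r) | (mask << (e - r));
    mask &= bitmask64(e);                  /* keep only the bottom e bits */
}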
Backports commit e167adc9d9f5df4f8109aecd4552c407fdce094a from qemu
Fix attempts to shift into the sign bit of an int, which is undefined
behaviour in C and warned about by the clang sanitizer.
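The typical fix (illustrative, not the exact patch) is simply to make the
shifted constant unsigned:

x |= 1U << 31;   /* rather than 1 << 31, which shifts into the sign bit of an int */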
Backports commit 1743d55c8b38bcee632cf6eb2de81131635bb3d2 from qemu
Add AArch32 to AArch64 register synchronization functions.
Replace manual register synchronization with new functions in
aarch64_cpu_do_interrupt() and HELPER(exception_return)().
Backports commit ce02049dbf1828b4bc77d921b108a9d84246e5aa from qemu
Adds registration and get/set functions for enabling/disabling the AArch64
execution state on AArch64 CPUs. By default AArch64 execution state is enabled
on AArch64 CPUs; setting the property to off will disable the execution state.
The QEMU invocation below would have the AArch64 execution state disabled.
$ ./qemu-system-aarch64 -machine virt -cpu cortex-a57,aarch64=off
Also adds stripping of features from the CPU model string when acquiring the ARM
CPU by name.
Backports part of commit fb8d6c24b095c426151b9bba8c8b0e58b03d6503 from qemu
The code in the softfloat source files is under a mixture of
licenses: the original code and many changes from QEMU contributors
are under the base SoftFloat-2a license; changes from Stefan Weil
and RedHat employees are GPLv2-or-later; changes from Fabrice Bellard
are under the BSD license. Clarify this in the comments at the
top of each affected source file, including a statement about
the assumed licensing for future contributions, so we don't need
to remember to ask patch submitters explicitly to pick a license.
Backports commit 16017c48547960539fcadb1f91d252124f442482 from qemu
Revert the parts of commits b645bb4885 and 5a6932d51d which are still
in the codebase and under a SoftFloat-2b license.
Reimplement support for architectures where the most significant bit
in the mantissa is 1 for a signaling NaN rather than a quiet NaN,
by adding handling for SNAN_BIT_IS_ONE being set to the functions
which test values for NaN-ness.
This includes restoring the bugfixes lost in the reversion where
some of the float*_is_quiet_nan() functions were returning true
for both signaling and quiet NaNs.
[This is a mechanical squashing together of two separate "revert"
and "reimplement" patches.]
Backports commit 332d5849708d11b835e0b36f4e26e8b36bfb3f5a from qemu
Revert the remaining portions of commits 75d62a5856 and 3430b0be36f
which are under a SoftFloat-2b license, ie the functions
uint64_to_float32() and uint64_to_float64(). (The float64_to_uint64()
and float64_to_uint64_round_to_zero() functions were completely
rewritten in commits fb3ea83aa and 0a87a3107d so can stay.)
Reimplement from scratch the uint64_to_float64() and uint64_to_float32()
conversion functions.
[This is a mechanical squashing together of two separate "revert"
and "reimplement" patches.]
Backports commit 6bb8e0f130bd4aecfe835a0caa94390fa2235fde from qemu
This commit applies the changes to master which correspond to
replacing commit 158142c2c2df with a set of changes made by:
* taking the SoftFloat-2a release
* mechanically transforming the block comment style
* reapplying Fabrice's original changes from 158142c2c2df
This commit was created by:
diff -u 158142c2c2df import-sf-2a
patch -p1 --fuzz 10 <../relicense-patch.txt
(where import-sf-2a is the branch resulting from the changes above).
Backports commit a7d1ac78e0f1101df2ff84502029a4b0da6024ae from qemu
Right now, the AVX512 registers are split in many different fields:
xmm_regs for the low 128 bits of the first 16 registers, ymmh_regs
for the next 128 bits of the same first 16 registers, zmmh_regs
for the next 256 bits of the same first 16 registers, and finally
hi16_zmm_regs for the full 512 bits of the second 16 registers.
This makes it simple to move data in and out of the xsave region,
but would be a nightmare for a hypothetical TCG implementation and
leads to a proliferation of [XYZ]MM_[BWLSQD] macros. Instead,
this patch marshals data manually from the xsave region to a single
32x512-bit array, simplifying the macro jungle and clarifying which
bits are in which vmstate subsection.
The migration format is unaffected.
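A rough, illustrative sketch of the unified layout being described (the type and
field names here are not QEMU's):

typedef union {
    uint8_t  b[64];
    uint32_t l[16];
    uint64_t q[8];           /* one 512-bit vector register */
} ZMMRegSketch;

ZMMRegSketch zmm_regs[32];   /* all 32 AVX512 registers in a single array */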
Backports commit b7711471f551aa4419f9d46a11121f48ced422da from qemu
f64 exponent in HELPER(recpe_f64) should be compared to 2045 rather than 1023
(FPRecipEstimate in the ARMv8 spec). This fixes incorrect underflow handling when
flushing denormals to zero in the FRECPE instructions operating on 64-bit
values.
Backports commit fc1792e9aa36227ee9994757974f9397684e1a48 from qemu
This patch implements the "virtio_is_big_endian" function pointer of the
"CPUClass" structure for arm/arm64.
Function arm_cpu_is_big_endian() is added to determine and
return the guest cpu endianness to virtio.
This is required for running cross endian guests with virtio on ARM/ARM64.
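A hedged sketch of what such a hook looks like, assuming the ARM system-register
fields of that era (CPSR.E for AArch32, SCTLR_ELx.EE/E0E for AArch64):

static bool arm_cpu_is_big_endian(CPUState *cs)
{
    CPUARMState *env = &ARM_CPU(cs)->env;
    int cur_el;

    if (!is_a64(env)) {
        return (env->uncached_cpsr & CPSR_E) != 0;        /* AArch32: CPSR.E */
    }
    cur_el = arm_current_el(env);
    if (cur_el == 0) {
        return (env->cp15.sctlr_el[1] & SCTLR_E0E) != 0;  /* EL0 data endianness */
    }
    return (env->cp15.sctlr_el[cur_el] & SCTLR_EE) != 0;
}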
Backports commit 84f2bed3cf505f90b7918e2de32e11da27160563 from qemu
A few of the oldest parts of the page-table-walk code have broken indent
(either hardcoded tabs or two-spaces). Reindent these sections.
For ease of review, this patch does not touch the brace style and
so is a whitespace-only change.
Backports commit 554b0b09aec4579c8164f363b18a263150e91a2c from qemu
Now we have the mmu_idx in get_phys_addr(), use it correctly to
determine the behaviour of virtual to physical address translations,
rather than using just an is_user flag and the current CPU state.
Some TODO comments have been added to indicate where changes will
need to be made to add EL2 and 64-bit EL3 support.
Backports commit 0480f69abf849ca0d48928cc6c669c1c7264239b from qemu
Make all the callers of get_phys_addr() pass it the correct
mmu_idx rather than just a simple "is_user" flag. This includes
properly decoding the AT/ATS system instructions; we include the
logic for handling all the opc1/opc2 cases because we'll need
them later for supporting EL2/EL3, even if we don't have the
regdef stanzas yet.
Backports commit d364970287c0ba68979711928c15e5d37414f87f from qemu
Instead of simply reusing ats_write() as the handler for both AArch32
and AArch64 address translation operations, use a different function
for each with the common code in a third function. This is necessary
because the semantics for selecting the right translation regime are
different; we are only getting away with sharing currently because
we don't support EL2 and only support EL3 in AArch32.
Backports commit 060e8a48cb84d41d4ac36e4bb29d9c14ed7168b6 from qemu
target-arm doesn't use any of the MMU-mode specific cpu ldst
accessor functions. Suppress their generation by not defining
any of the MMU_MODE*_SUFFIX macros. ("user" and "kernel" are
too simplistic as descriptions of indexes 0 and 1 anyway.)
Backports commit 0dfef7b58f0c24b463e36630f08a45e93012b33a from qemu
The MMU index to use for unprivileged loads and stores is more
complicated than we currently implement:
* for A64, it should be "if at EL1, access as if EL0; otherwise
access at current EL"
* for A32/T32, it should be "if EL2, UNPREDICTABLE; otherwise
access as if at EL0".
In both cases, if we want to make the access for Secure EL0
this is not the same mmu_idx as for Non-Secure EL0.
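A sketch of the A64 rule, assuming the helper name get_a64_user_mem_index and
the ARMMMUIdx names introduced by the "seven mmu index values" change below:

static inline int get_a64_user_mem_index(DisasContext *s)
{
    switch (s->mmu_idx) {
    case ARMMMUIdx_S12NSE1:
        return ARMMMUIdx_S12NSE0;   /* at NS EL1: access as if at EL0 */
    case ARMMMUIdx_S1SE1:
        return ARMMMUIdx_S1SE0;     /* at Secure EL1: access as if at Secure EL0 */
    default:
        return s->mmu_idx;          /* otherwise: access at the current EL */
    }
}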
Backports commit 579d21cce63f3dd2f6ee49c0b02a14e92cb4a836 from qemu
We currently claim that for ARM the mmu_idx should simply be the current
exception level. However this isn't actually correct -- secure EL0 and EL1
should have separate indexes from non-secure EL0 and EL1 since their
VA->PA mappings may differ. We also will want an index for stage 2
translations when we properly support EL2.
Define and document all seven mmu index values that we require, and
pass the mmu index in the TB flags rather than exception level or
priv/user bit.
This change doesn't update the get_phys_addr() code, so our page
table walking still assumes a simplistic "user or priv?" model for
the moment.
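For orientation, the seven indexes roughly correspond to the following (names as
used in the backported commit):

typedef enum ARMMMUIdx {
    ARMMMUIdx_S12NSE0 = 0,   /* Non-secure EL0 (stage 1+2) */
    ARMMMUIdx_S12NSE1 = 1,   /* Non-secure EL1 (stage 1+2) */
    ARMMMUIdx_S1E2    = 2,   /* EL2 */
    ARMMMUIdx_S1E3    = 3,   /* EL3 */
    ARMMMUIdx_S1SE0   = 4,   /* Secure EL0 */
    ARMMMUIdx_S1SE1   = 5,   /* Secure EL1 */
    ARMMMUIdx_S2NS    = 6    /* Non-secure stage 2 */
} ARMMMUIdx;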
Backports commit c1e3781090b9d36c60e1a254ba297cb34011d3d4 from qemu
Support guest CPUs which need 7 MMU index values.
Add a comment about what would be required to raise the limit
further (trivial for 8, TCG backend rework for 9 or more).
Backports commit 8f3ae2ae2d02727f6d56610c09d7535e43650dd4 from qemu
Although M profile doesn't have the same concept of exception level
as A profile, it does have a notion of privileged versus not, which
we currently track in the privmode TB flag. Support returning this
information if arm_current_el() is called on an M profile core, so
that we can identify the correct MMU index to use (and put the MMU
index in the TB flags) without having to special-case M profile.
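A minimal sketch of the M-profile case inside arm_current_el(), assuming the
v7m state fields:

if (arm_feature(env, ARM_FEATURE_M)) {
    /* privileged when in handler mode (exception != 0) or when CONTROL.nPRIV
       is clear; report that as EL1, otherwise EL0 */
    return !((env->v7m.exception == 0) && (env->v7m.control & 1));
}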
Backports commit 6d54ed3c93f1e05a483201b087142998381c9be8 from qemu
The documentation states that if LSB > MSB in a BFI instruction, the behaviour
is unpredictable. Currently QEMU crashes because of an assertion failure in
this case:
tcg/tcg-op.h:2061: tcg_gen_deposit_i32: Assertion `len <= 32' failed.
While the assertion failure may meet the "unpredictable" definition, this
behaviour is undesirable because it allows an unprivileged guest program
to crash the emulator together with the OS and other programs.
This patch addresses the issue by throwing an illegal instruction exception
if LSB > MSB. Only the ARM decoder is affected because the Thumb decoder
already has this check in place.
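A sketch of the decoder-side check (the variable names are illustrative):

if (lsb > msb) {
    /* UNPREDICTABLE by the architecture; choose to UNDEF rather than assert */
    goto illegal_op;
}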
To reproduce the issue, run the following program
int main(void) {
    asm volatile (".long 0x07c00c12" :: );
    return 0;
}
compiled with
gcc -marm -static badop_arm.c -o badop_arm
Backports commit 45140a57675ecb4b0daee71bf145c24dbdf9429c from qemu
The helper functions for FRECPS and FRSQRTS have special case
handling that includes checks for zero inputs, so squash input
denormals if necessary before those checks. This fixes incorrect
output when the FPCR DZ bit is set to enable squashing of input
denormals.
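A fragment sketch of the fix's shape, using softfloat's input-denormal squashing
helper before any of the special-case logic in those helpers:

a = float32_squash_input_denormal(a, fpst);
b = float32_squash_input_denormal(b, fpst);
/* only after squashing do the zero / special-case checks described above */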
Backports commit a8eb6e19991d1a7a6a7b04ac447548d30d75eb4a from qemu
Add assertion checking when cpreg structures are registered that they
either forbid raw-access attempts or at least make an attempt at
handling them. Also add an assert in the raw-accessor-of-last-resort,
to avoid silently doing a read or write from offset zero, which is
actually AArch32 CPU register r0.
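Roughly, the registration-time check asserts something like the following (the
exact condition is a sketch, not the literal assertion):

/* every cpreg must either opt out of raw access or provide some way to do it */
assert((r->type & ARM_CP_NO_RAW) ||
       r->fieldoffset ||
       r->readfn || r->writefn ||
       r->raw_readfn || r->raw_writefn);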
Backports commit 375421ccaeebae8212eb8f9a36835ad4d9dc60a8 from qemu
We currently mark ARM coprocessor/system register definitions with
the flag ARM_CP_NO_MIGRATE for two different reasons:
1) register is an alias on to state that's also visible via
some other register, and that other register is the one
responsible for migrating the state
2) register is not actually state at all (for instance the TLB
or cache maintenance operation "registers") and it makes no
sense to attempt to migrate it or otherwise access the raw state
This works fine for identifying which registers should be ignored
when performing migration, but we also use the same functions for
synchronizing system register state between QEMU and the kernel
when using KVM. In this case we don't want to try to sync state
into registers in category 2, but we do want to sync into registers
in category 1, because the kernel might have picked a different
one of the aliases as its choice for which one to expose for
migration. (In particular, on 32 bit hosts the kernel will
expose the state in the AArch32 version of the register, but
TCG's convention is to mark the AArch64 version as the version
to migrate, even if the CPU being emulated happens to be 32 bit,
so almost all system registers will hit this issue now that we've
added AArch64 system emulation.)
Fix this by splitting the NO_MIGRATE flag in two (ALIAS and NO_RAW)
corresponding to the two different reasons we might not want to
migrate a register. When setting up the TCG list of registers to
migrate we honour both flags; when populating the list from KVM,
only ignore registers which are NO_RAW.
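Schematically, the new rule looks like this (sketch only; the flag names
ARM_CP_ALIAS and ARM_CP_NO_RAW come from the commit, while the surrounding loop
and the building_tcg_migration_list flag are hypothetical):

if (r->type & ARM_CP_NO_RAW) {
    continue;    /* not real state: skip for both migration and KVM sync */
}
if ((r->type & ARM_CP_ALIAS) && building_tcg_migration_list) {
    continue;    /* alias: another regdef owns the migrated state */
}
/* otherwise the register is raw-accessed / synchronized */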
Backports commit 7a0e58fa648736a75f2a6943afd2ab08ea15b8e0 from qemu