In v8M the MSR and MRS instructions have extra register value
encodings to allow secure code to access the non-secure banked
version of various special registers.
(We don't implement the MSPLIM_NS or PSPLIM_NS aliases, because
we don't currently implement the stack limit registers at all.)
Backports commit 50f11062d4c896408731d6a286bcd116d1e08465 from qemu
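As a rough illustration of what "banked by security state" means here (the
structure, names and the ns_alias flag below are illustrative assumptions,
not the QEMU implementation):

    #include <stdbool.h>
    #include <stdint.h>

    enum { BANK_NS = 0, BANK_S = 1 };

    typedef struct {
        uint32_t msp[2];   /* main stack pointer, one copy per security state */
        bool secure;       /* current security state */
    } CPUSketch;

    /* 'ns_alias' models the extra v8M MRS encodings that let *secure* code
     * name the non-secure copy of a banked register explicitly. */
    static uint32_t read_msp(const CPUSketch *cpu, bool ns_alias)
    {
        if (ns_alias && cpu->secure) {
            return cpu->msp[BANK_NS];      /* e.g. "MRS r0, MSP_NS" */
        }
        return cpu->msp[cpu->secure ? BANK_S : BANK_NS];
    }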
Although none of the existing macro call-sites were broken,
it's always better to write macros that properly parenthesize
arguments that can be complex expressions, so that the intended
order of operations is not broken.
Backports commit 2a2be359c4335607c7f746cf27c412c08ab89aff from qemu
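A generic example of the hazard (not the macro touched by the patch):

    /* Without parentheses, operator precedence rebinds a complex argument. */
    #define SCALE_BAD(x)   (x * 4)     /* SCALE_BAD(a + 1)  expands to a + 1 * 4   */
    #define SCALE_GOOD(x)  ((x) * 4)   /* SCALE_GOOD(a + 1) expands to (a + 1) * 4 */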
Now cpu_mips_init() reimplements a subset of cpu_generic_init()'s
tasks, so just drop it and use cpu_generic_init() directly.
Backports commit c4c8146cfd0fc3f95418fbc82a2eded594675022 from qemu
Register a separate QOM type for each MIPS CPU model, so that
generic CPU creation routines can be reused.
Backports commit 41da212c9ce9482fcfd490170c2611470254f8dc from qemu
This changes the order between cpu_mips_realize_env() and
cpu_exec_initfn(), but cpu_exec_initfn() doesn't have anything that
depends on cpu_mips_realize_env() being called first.
Backports commit df4dc10284e1d871db8adb512816a561473ffe3e from qemu
No logical change, only code movement (and a comment typo fix).
Backports commit 26aa3d9aecbb6fe9bce808a1d127191bdf3cc3d2 from qemu
Also backports commit 5502b66fc7d0bebd08b9b7017cb7e8b5261c3a2d
We already have several files that knowingly require assert()
to work, sometimes because refactoring the code for proper
error handling has not been tackled yet; there are probably
other files that have a similar situation but with no comments
documenting the same. In fact, we have places in migration
that handle untrusted input with assertions, where disabling
the assertions risks a worse security hole than the current
behavior of losing the guest to SIGABRT when migration fails
because of the assertion. Promote our current per-file
safety-valve to instead be project-wide, and expand it to also
cover glib's g_assert().
Note that we do NOT want to encourage 'assert(side-effects);'
(that is a bad practice that prevents copy-and-paste of code to
other projects that CAN disable assertions; plus it costs
unnecessary reviewer mental cycles to remember whether a project
special-cases the crippling of asserts); and we would LIKE to
fix migration to not rely on asserts (but that takes a big code
audit). But in the meantime, we DO want to send a message
that anyone that disables assertions has to tweak code in order
to compile, making it obvious that they are taking on additional
risk that we are not going to support. At the same time, leave
comments mentioning NDEBUG in files that we know still need to
be scrubbed, so there is at least something to grep for.
It would be possible to come up with some other mechanism for
doing runtime checking by default, but which does not abort
the program on failure, while leaving side effects in place
(unlike how crippling assert() avoids even the side effects),
perhaps under the name q_verify(); but it was not deemed worth
the effort (developers should not have to learn a replacement
when the standard C macro works just fine, and it would be a lot
of churn for little gain). The patch specifically uses #error
rather than #warning so that a user is forced to tweak the header
to acknowledge the issue, even when not compiling with -Werror.
Backports commit 262a69f4282e44426c7a132138581d400053e0a1 from qemu
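The safety valve amounts to a guard of roughly this shape (a sketch; the
exact wording and the header it lives in may differ):

    #ifdef NDEBUG
    #error building with NDEBUG is not supported
    #endif
    #ifdef G_DISABLE_ASSERT
    #error building with G_DISABLE_ASSERT is not supported
    #endif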
Starting with Windows Server 2012 and Windows 8, if
CPUID.40000005.EAX contains a value of -1, Windows assumes there is
no specific limit to the number of VPs. In this case, Windows Server
2012 guest VMs may use more than 64 VPs, up to the maximum supported
number of processors applicable to the specific Windows version being
used.
https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/reference/tlfs
For compatibility, let's introduce a new X86CPU property named
"x-hv-max-vps", as Eduardo suggested, and set it to 0x40 for machine
types before 2.10.
(The "x-" prefix indicates that the property is not supposed to
be a stable user interface.)
Backports relevant parts of commit 6c69dfb67e84747cf071958594d939e845dfcc0c from qemu
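A minimal sketch of how such a property would feed the CPUID leaf (the leaf
number is the standard Hyper-V "implementation limits" leaf; the helper and
parameter names are illustrative assumptions):

    #include <stdint.h>

    #define HV_CPUID_IMPLEMENT_LIMITS 0x40000005

    /* hv_max_vps would come from the "x-hv-max-vps" property: 0x40 on
     * pre-2.10 machine types for compatibility, UINT32_MAX (-1) otherwise. */
    static void hv_fill_implement_limits(uint32_t hv_max_vps, uint32_t *eax)
    {
        *eax = hv_max_vps;   /* maximum number of virtual processors */
    }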
The SSE4.1 phminposuw instruction finds the minimum 16-bit element in
the source vector, putting the value of that element in the low 16
bits of the destination vector, the index of that element in the next
three bits and zeroing the rest of the destination. The helper for
this operation fills the destination from high to low, meaning that
when the source and destination are the same register, the minimum
source element can be overwritten before it is copied to the
destination. This patch fixes it to fill the destination from low to
high instead, so the minimum source element is always copied first.
This fixes one gcc test failure in my GCC 6-based testing (and so
concludes the present sequence of patches, as I don't have any further
gcc test failures left in that testing that I attribute to QEMU bugs).
Backports commit aa406feadfc5b095ca147ec56d6187c64be015a7 from qemu
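A self-contained reference version of the fixed ordering (illustrative
types, not QEMU's helper):

    #include <stdint.h>

    typedef struct { uint16_t W[8]; } XMMRegSketch;

    static void phminposuw_ref(XMMRegSketch *d, const XMMRegSketch *s)
    {
        int idx = 0;

        for (int i = 1; i < 8; i++) {
            if (s->W[i] < s->W[idx]) {
                idx = i;
            }
        }
        d->W[0] = s->W[idx];   /* minimum copied before anything is clobbered */
        d->W[1] = idx;         /* index of the minimum in the next element   */
        for (int i = 2; i < 8; i++) {
            d->W[i] = 0;       /* rest of the destination is zeroed */
        }
    }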
One of the cases of the SSE4.2 pcmpestri / pcmpestrm / pcmpistri /
pcmpistrm instructions does a substring search. The implementation of
this case in the pcmpxstrx helper is incorrect. The operation in this
case is a search for a string (argument d to the helper) in another
string (argument s to the helper). If a copy of d at a particular
position would run off the end of s, the resulting output bit should
be 0 whether or not the strings match in the region where they
overlap; the QEMU implementation, however, was wrongly comparing only
up to the point where s ends and counting it as a match if an initial
segment of d matched a terminal segment of s. Here, "run off the end
of s" means that some byte of d would overlap some byte outside of s;
thus, if d has zero length, it is considered to match everywhere,
including after the end of s. This patch fixes the implementation to
correspond with the proper instruction semantics. This fixes four gcc
test failures in my GCC 6-based testing.
Backports commit ae35eea7e4a9f21dd147406dfbcd0c4c6aaf2a60 from qemu
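For the substring-search ("equal ordered") case, the intended semantics can
be written as a small reference function (a sketch with illustrative names,
fixed at 16-byte operands, not the pcmpxstrx helper itself):

    #include <stdint.h>

    static uint16_t equal_ordered_ref(const uint8_t *s, int s_len,
                                      const uint8_t *d, int d_len)
    {
        uint16_t res = 0;

        for (int i = 0; i < 16; i++) {
            int bit = 1;
            for (int j = 0; j < d_len; j++) {
                if (i + j >= s_len) {   /* d would run off the end of s */
                    bit = 0;
                    break;
                }
                if (d[j] != s[i + j]) {
                    bit = 0;
                    break;
                }
            }
            if (bit) {
                res |= 1 << i;   /* zero-length d matches everywhere */
            }
        }
        return res;
    }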
The SSE4.1 packusdw instruction combines source and destination
vectors of signed 32-bit integers into a single vector of unsigned
16-bit integers, with unsigned saturation. When the source and
destination are the same register, this means each 32-bit element of
that register is used twice as an input, to produce two of the 16-bit
output elements, and so if the operation is carried out
element-by-element in-place, no matter what the order in which it is
applied to the elements, the first element's operation will overwrite
some future input. The helper for packssdw avoids this issue by
computing the result in a local temporary and copying it to the
destination at the end; this patch fixes the packusdw helper to do
likewise. This fixes three gcc test failures in my GCC 6-based
testing.
Backports commit 80e19606215d4df370dfe8fe21c558a129f00f0b from qemu
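A standalone sketch of the fixed approach (illustrative types, lane ordering
ignores endianness for brevity):

    #include <stdint.h>

    typedef union {
        int32_t  L[4];   /* four signed 32-bit input lanes     */
        uint16_t W[8];   /* eight unsigned 16-bit output lanes */
    } XMMRegSketch;

    static uint16_t satuw(int32_t x)   /* unsigned 16-bit saturation */
    {
        return x < 0 ? 0 : x > 0xffff ? 0xffff : (uint16_t)x;
    }

    static void packusdw_ref(XMMRegSketch *d, const XMMRegSketch *s)
    {
        XMMRegSketch r;   /* local temporary, so d == s is safe */

        for (int i = 0; i < 4; i++) {
            r.W[i]     = satuw(d->L[i]);
            r.W[i + 4] = satuw(s->L[i]);
        }
        *d = r;   /* copy to the destination only at the end */
    }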
It turns out that my recent fix to set rip_offset when emulating some
SSE4.1 instructions needs generalizing to cover a wider class of
instructions. Specifically, every instruction in the sse_op_table7
table, coming from various instruction set extensions, has an 8-bit
immediate operand that comes after any memory operand, and so needs
rip_offset set for correctness if there is a memory operand that is
rip-relative, and my patch only set it for a subset of those
instructions. This patch moves the rip_offset setting to cover the
wider class of instructions, so fixing 9 further gcc testsuite
failures in my GCC 6-based testing. (I do not know whether there
might be still further classes of instructions missing this setting.)
Backports commit c6a8242915328cda0df0fbc0803da3448137e614 from qemu
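The underlying arithmetic is simple (a conceptual sketch, not QEMU's
decoder): a rip-relative displacement is relative to the end of the whole
instruction, so an immediate that has not been consumed yet must still be
counted.

    #include <stdint.h>

    /* rip_offset = number of immediate bytes that follow the memory operand
     * (1 for the 8-bit immediate shared by the sse_op_table7 group). */
    static uint64_t rip_relative_base(uint64_t pc_after_displacement,
                                      int rip_offset, int32_t disp)
    {
        uint64_t next_insn = pc_after_displacement + rip_offset;
        return next_insn + (int64_t)disp;
    }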
The SSE4.1 pmovsx* and pmovzx* instructions take packed 1-byte, 2-byte
or 4-byte inputs and sign-extend or zero-extend them to a wider vector
output. The associated helpers for these instructions do the
extension on each element in turn, starting with the lowest. If the
input and output are the same register, this means that all the input
elements after the first have been overwritten before they are read.
This patch makes the helpers extend starting with the highest element,
not the lowest, to avoid such overwriting. This fixes many GCC test
failures (161 in the gcc testsuite in my GCC 6-based testing) when
testing with a default CPU setting enabling those instructions.
Backports commit c6a56c8e990b213a1638af2d34352771d5fa4d9c from qemu
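For instance, a byte-to-word zero extension done in place is only safe when
it runs from the top element down (illustrative types; lane ordering ignores
endianness for brevity):

    #include <stdint.h>

    typedef union {
        uint8_t  B[16];
        uint16_t W[8];
    } XMMRegSketch;

    static void pmovzxbw_ref(XMMRegSketch *d, const XMMRegSketch *s)
    {
        for (int i = 7; i >= 0; i--) {
            /* highest element first: each source byte is read before the
             * wider result can overwrite it when d and s alias */
            d->W[i] = s->B[i];
        }
    }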
It's not even clear what the interface REG and VAL32 were supposed to mean.
All uses had REG = 0 and VAL32 was the bitset assigned to the destination.
Backports commit f46934df662182097dce07d57ec00f37e4d2abf1 from qemu
Instead of copying addr to a local temp, reuse the value (which we
have just compared as equal) already saved in cpu_exclusive_addr.
Backports commit 37e29a64254bf82a1901784fcca17c25f8164c2f from qemu
Previously, single-stepping through an ERET instruction via GDB
would result in the debugger entering the "next" PC after the ERET
instruction. When debugging in kernel mode, this also causes
unintended behavior, because the debugger will try to access memory
from the EL0 point of view.
Backports commit dddbba9943ef6a81c8702e4a50cb0a8b1a4201fe from qemu
In the v7M and v8M ARM ARM, the magic exception return values are
referred to as EXC_RETURN values, and in QEMU we use V7M_EXCRET_*
constants to define bits within them. Rename the 'type' variable
which holds the exception return value in do_v7m_exception_exit()
to excret, making it clearer that it does hold an EXC_RETURN value.
Backports commit 351e527a613147aa2a2e6910f92923deef27ee48 from qemu
The exception-return magic values get some new bits in v8M, which
makes some bit definitions for them worthwhile.
We don't use the bit definitions for the switch on the low bits
which checks the return type for v7M, because this is defined
in the v7M ARM ARM as a set of valid values rather than via
per-bit checks.
Backports commit 4d1e7a4745c050f7ccac49a1c01437526b5130b5 from qemu
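For reference, the low-order EXC_RETURN bits that v8M defines look roughly
like this (bit positions per the ARMv8-M ARM; the macro names are
illustrative rather than QEMU's V7M_EXCRET_* constants):

    #define EXCRET_ES     (1U << 0)  /* exception was taken to Secure state   */
    #define EXCRET_SPSEL  (1U << 2)  /* return stack: 0 = main, 1 = process   */
    #define EXCRET_MODE   (1U << 3)  /* return mode: 0 = handler, 1 = thread  */
    #define EXCRET_FTYPE  (1U << 4)  /* 1 = standard frame, 0 = FP frame      */
    #define EXCRET_DCRS   (1U << 5)  /* default callee-saved register stacking */
    #define EXCRET_S      (1U << 6)  /* frame was saved to the Secure stack   */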
In do_v7m_exception_exit(), there's no need to force the high 4
bits of 'type' to 1 when calling v7m_exception_taken(), because
we know that they're always 1 or we could not have got to this
"handle return to magic exception return address" code. Remove
the unnecessary ORs.
Backports commit 7115cdf5782922611bcc44c89eec5990db7f6466 from qemu
For a bus fault, the M profile BFSR bit PRECISERR means a bus
fault on a data access, and IBUSERR means a bus fault on an
instruction access. We had these the wrong way around; fix this.
Backports commit c6158878650c01b2c753b2ea7d0967c8fe5ca59e from qemu
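The two architectural BFSR bits in question (names per the ARM ARM; QEMU's
constants may differ):

    #define BFSR_IBUSERR    (1U << 0)   /* bus fault on an instruction fetch  */
    #define BFSR_PRECISERR  (1U << 1)   /* precise bus fault on a data access */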
For M profile we must clear the exclusive monitor on reset, exception
entry and exception exit. We weren't doing any of these things; fix
this bug.
Backports commit dc3c4c14f0f12854dbd967be3486f4db4e66d25b from qemu
Use a symbolic constant M_REG_NUM_BANKS for the array size for
registers which are banked by M profile security state, rather
than hardcoding lots of 2s.
Backports commit 4a16724f06ead684a5962477a557c26c677c2729 from qemu
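A minimal sketch of the pattern (the enum values follow the commit
description; the surrounding struct is illustrative):

    #include <stdint.h>

    typedef enum {
        M_REG_NS = 0,          /* non-secure bank */
        M_REG_S = 1,           /* secure bank */
        M_REG_NUM_BANKS = 2
    } MProfileBank;

    struct v7m_banked_sketch {
        uint32_t vecbase[M_REG_NUM_BANKS];   /* instead of vecbase[2] */
        uint32_t primask[M_REG_NUM_BANKS];
    };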
Older compilers (e.g. RHEL 6's GCC) don't like redefinition of typedefs.
Fixes: 12a6c15ef31c98ecefa63e91ac36955383038384
Backports commit 9d81b2d2000f41be55a0624a26873f993fb6e928 from qemu
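The failing pattern is simply a repeated typedef, which only became legal in
C11 (the type name below is hypothetical):

    typedef struct FooState FooState;
    typedef struct FooState FooState;   /* GCC 4.4 (RHEL 6): "redefinition of typedef 'FooState'" */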
GCC 4.7.2 on SunOS reports that the values assigned to array members are not
real constants:
target/m68k/fpu_helper.c:32:5: error: initializer element is not constant
target/m68k/fpu_helper.c:32:5: error: (near initialization for 'fpu_rom[0]')
rules.mak:66: recipe for target 'target/m68k/fpu_helper.o' failed
Convert the array initializer to use make_floatx80_init() to fix it.
Also replace the floatx80_pi-like constants with make_floatx80_init(),
since they are currently defined via make_floatx80().
This fixes build on SmartOS (Joyent).
Backports commit 6fa9ba09dbf4eb8b52bcb47d6820957f1b77ee0b from qemu
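A self-contained sketch of the difference (types and helpers are modelled
loosely on softfloat and simplified; the constant shown is pi in extended
precision):

    #include <stdint.h>

    typedef struct { uint64_t low; uint16_t high; } floatx80;

    static inline floatx80 make_floatx80(uint16_t high, uint64_t low)
    {
        floatx80 r = { low, high };
        return r;
    }

    #define make_floatx80_init(high, low)  { (low), (high) }

    /* GCC 4.7 rejects a function call in a static initializer:
     *   static const floatx80 fpu_rom[] = { make_floatx80(0x4000, 0xc90fdaa22168c235ULL) };
     * The macro form is a plain constant initializer and is accepted: */
    static const floatx80 fpu_rom[] = {
        make_floatx80_init(0x4000, 0xc90fdaa22168c235ULL),   /* pi */
    };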
We are not going to use ldrd for loading the comparator
for 32-bit guests, so don't limit cmp_off to 8 bits in that case.
This eliminates one insn in the tlb load for some guests.
Backports commit 95ede84f4de18747d03d79c148013cff99acd60b from qemu
Use UBFX to avoid the limitation on CPU_TLB_BITS. Since we're
dropping the initial shift, we need to replace the page masking. We
can use MOVW+BIC to do this without shifting. The result is the same
size as the armv6 path with one fewer conditional instruction.
Backports commit 647ab96aaf5defeb138e48d610f7f633c587b40d from qemu
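In C terms the backend is computing the usual softmmu TLB index, which UBFX
can extract in a single instruction (the constant values below are
placeholders):

    #include <stdint.h>

    #define TARGET_PAGE_BITS 12      /* placeholder */
    #define CPU_TLB_BITS     8       /* placeholder */

    static unsigned tlb_index(uint32_t addr)
    {
        /* roughly: ubfx index, addr, #TARGET_PAGE_BITS, #CPU_TLB_BITS */
        return (addr >> TARGET_PAGE_BITS) & ((1u << CPU_TLB_BITS) - 1);
    }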
Split out maybe_out_small_movi for use with other operations
that want to add to the constant pool.
Backports commit 28eef8aaece5e83df4568d9842ab9611ec130b2c from qemu