For M profile, we currently have an mmu index MNegPri for
"requested execution priority negative". This fails to
distinguish "requested execution priority negative, privileged"
from "requested execution priority negative, usermode", but
the two can return different results for MPU lookups. Fix this
by splitting MNegPri into MNegPriPriv and MNegPriUser, and
similarly for the Secure equivalent MSNegPri.
This takes us from 6 M profile MMU modes to 8, which means
we need to bump NB_MMU_MODES; this is OK since the point
where we are forced to reduce TLB sizes is 9 MMU modes.
(It would in theory be possible to stick with 6 MMU indexes:
{mpu-disabled,user,privileged} x {secure,nonsecure} since
in the MPU-disabled case the result of an MPU lookup is
always the same for both user and privileged code. However
we would then need to rework the TB flags handling to put
user/priv into the TB flags separately from the mmuidx.
Adding an extra couple of mmu indexes is simpler.)
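For illustration, the resulting set of M profile indexes looks roughly like this (a sketch only; the spellings follow the wording above and need not match the actual enumerators):
typedef enum MProfileMMUIdxSketch {
    M_User,            /* privileged = no */
    M_Priv,            /* privileged = yes */
    M_NegPri_User,     /* requested execution priority negative, usermode */
    M_NegPri_Priv,     /* requested execution priority negative, privileged */
    MS_User,           /* Secure equivalents of the four above */
    MS_Priv,
    MS_NegPri_User,
    MS_NegPri_Priv,
} MProfileMMUIdxSketch;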
Backports commit 62593718d77c06ad2b5e942727cead40775d2395 from qemu
When we added the ARMMMUIdx_MSUser MMU index we forgot to
add it to the case statement in regime_is_user(), so we
weren't treating it as unprivileged when doing MPU lookups.
Correct the omission.
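The shape of the fix is a single extra case label, roughly (sketch only; the real switch in regime_is_user() has further cases omitted here):
static inline bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx)
{
    switch (mmu_idx) {
    case ARMMMUIdx_MUser:
    case ARMMMUIdx_MSUser:   /* the previously missing case */
        return true;
    default:
        return false;
    }
}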
Backports commit 871bec7c44a453d9cab972ce1b5d12e1af0545ab from qemu
In ARMv7M the CPU ignores explicit writes to CONTROL.SPSEL
in Handler mode. In v8M the behaviour is slightly different:
writes to the bit are permitted but will have no effect.
We've already done the hard work to handle the value in
CONTROL.SPSEL being out of sync with what stack pointer is
actually in use, so all we need to do to fix this last loose
end is to update the condition we use to guard whether we
call write_v7m_control_spsel() on the register write.
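A sketch of the updated guard (the helper and mask names follow QEMU's v7M code; treat the exact spellings as assumptions):
if (arm_feature(env, ARM_FEATURE_V8) || !arm_v7m_is_handler_mode(env)) {
    /* v7M: ignore SPSEL writes in Handler mode entirely; v8M: let the
     * write through, where it has no effect on which SP is in use. */
    write_v7m_control_spsel(env, (val & R_V7M_CONTROL_SPSEL_MASK) != 0);
}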
Backports commit 83d7f86d3d27473c0aac79c1baaa5c2ab01b02d9 from qemu
For v8M it is possible for the CONTROL.SPSEL bit value and the
current stack to be out of sync. This means we need to update
the checks used in reads and writes of the PSP and MSP special
registers to use v7m_using_psp() rather than directly checking
the SPSEL bit in the control register.
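For instance, the special-register read paths end up shaped roughly like this (the SYSm case values and the other_sp field name are assumptions about QEMU's internals):
case 8: /* MSP */
    return v7m_using_psp(env) ? env->v7m.other_sp : env->regs[13];
case 9: /* PSP */
    return v7m_using_psp(env) ? env->regs[13] : env->v7m.other_sp;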
Backports commit 1169d3aa5b19adca9384d954d80e1f48da388284 from qemu
EPYC-IBPB is a copy of the EPYC CPU model with
just CPUID_8000_0008_EBX_IBPB added.
Backports commit 8ebfafa796ca0cb2b035a7f06f836a675d8b48be from qemu
The new IA32_SPEC_CTRL MSR was introduced by a recent Intel
microcode update and can be used by OSes to mitigate
CVE-2017-5715. Unfortunately we can't change the existing CPU
models without breaking existing setups, so users need to
explicitly update their VM configuration to use the new *-IBRS
CPU model if they want to expose IBRS to guests.
The new CPU models are simple copies of the existing CPU models,
with just CPUID_7_0_EDX_SPEC_CTRL added and model_id updated.
Backports commit 61efbbf869293f1deb9ee39d44bd4e635de59fa7 from qemu
Add the new feature word and the "ibpb" feature flag.
Based on a patch by Paolo Bonzini.
Backports commit 1ade973f5202404e772aae7b1acd331270d246dc from qemu
It is valid to have a 48-character model ID on CPUID; however, the
definition of X86CPUDefinition::model_id is char[48], which can
make the compiler drop the null terminator from the string.
If a CPU model happens to have 48 bytes on model_id, "-cpu help"
will print garbage and the object_property_set_str() call at
x86_cpu_load_def() will read data outside the model_id array.
We could increase the array size to 49, but this would mean the
compiler would not issue a warning if a 49-char string is used by
mistake for model_id.
To make things simpler, change model_id to be const char*,
and validate the string length using an assert() in
x86_register_cpudef_type().
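A standalone illustration of the underlying problem (not QEMU code): a string literal that exactly fills the array leaves no room for the terminating NUL, so anything that later treats the field as a C string runs off the end.
#include <stdio.h>
#include <string.h>

/* Exactly 48 characters: legal C, but the terminating NUL is dropped. */
static const char model_id[48] =
    "012345678901234567890123456789012345678901234567";

int main(void)
{
    /* memchr stays inside the array, so this check is safe; it reports
     * that no terminator is present, which is why strlen()/"%s" on
     * model_id would read out of bounds. */
    printf("NUL present: %s\n",
           memchr(model_id, '\0', sizeof(model_id)) ? "yes" : "no");
    return 0;
}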
Backports commit 4b220d88ba76fb2623ce4b8ba1f1eea66b82144e from qemu
In commit e3af7c788b73a6495eb9d94992ef11f6ad6f3c56 we
replaced direct calls to cpu_ld*_code() with calls
to the x86_ld*_code() wrappers which incorporate an
advance of s->pc. Unfortunately we didn't notice that
in one place the old code was deliberately not incrementing
s->pc:
@@ -4501,7 +4528,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
static const int pp_prefix[4] = {
0, PREFIX_DATA, PREFIX_REPZ, PREFIX_REPNZ
};
- int vex3, vex2 = cpu_ldub_code(env, s->pc);
+ int vex3, vex2 = x86_ldub_code(env, s);
if (!CODE64(s) && (vex2 & 0xc0) != 0xc0) {
/* 4.1.4.6: In 32-bit mode, bits [7:6] must be 11b,
This meant we were mishandling this set of instructions.
Remove the manual advance of s->pc for the "is VEX" case
(which is now done by x86_ldub_code()) and instead rewind
PC in the case where we decide that this isn't really VEX.
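A rough sketch of the fixed code path, based on the hunk quoted above (the exact rewind in the tree may differ in detail):
vex2 = x86_ldub_code(env, s);   /* advances s->pc past the byte it reads */
if (!CODE64(s) && (vex2 & 0xc0) != 0xc0) {
    /* Not a VEX prefix after all: undo the advance x86_ldub_code() did. */
    s->pc--;
    break;
}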
Backports commit 817a9fcba8043faa467929e7b0193df6bdc92211 from qemu
The refactoring of commit 296e5a0a6c3935 has a nasty bug:
it accidentally dropped the generation of code to raise
the UNDEF exception when disas_thumb2_insn() returns nonzero.
This means that 32-bit Thumb2 instruction patterns that
ought to UNDEF just act like nops instead. This is likely
to break any number of things, including the kernel's "disable
the FPU and use the UNDEF exception to identify when to turn
it back on again" trick.
Backports commit 7472e2efb049ea65a6a5e7261b78ebf5c561bc2f from qemu
In our various supported host OSes, the time_t type may be either 32
or 64 bit, and could in theory also be either signed or unsigned.
Notably, in OpenBSD time_t is a 64 bit type even if 'long' is 32
bits, so using LONG_MAX for TIME_MAX is incorrect.
Use an approach suggested by Paolo Bonzini which calculates
the maximum value of the type rather than hardcoding it;
to do this we use the TYPE_MAXIMUM macro from Gnulib.
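A sketch of that approach, following the Gnulib idiom (treat the exact spelling in QEMU's headers as an assumption):
#include <limits.h>   /* CHAR_BIT */
#include <time.h>     /* time_t */

#define TYPE_SIGNED(t)  (!((t)0 < (t)-1))
#define TYPE_MAXIMUM(t)                                            \
    ((t)(!TYPE_SIGNED(t)                                           \
         ? (t)-1                                                   \
         : ((((t)1 << (sizeof(t) * CHAR_BIT - 2)) - 1) * 2 + 1)))

#ifndef TIME_MAX
#define TIME_MAX TYPE_MAXIMUM(time_t)
#endif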
Backports commit e7b47c22e2df14d55e3e4426688c929bf8e3f7fb from qemu
In do_ats_write(), rather than using extended_addresses_enabled() to
decide whether the value we get back from get_phys_addr() is a 64-bit
format PAR or a 32-bit one, use arm_s1_regime_using_lpae_format().
This is not really the correct answer, because the PAR format
depends on the AT instruction being used, not just on the
translation regime. However getting this correct requires a
significant refactoring, so that get_phys_addr() returns raw
information about the fault which the caller can then assemble
into a suitable FSR/PAR/syndrome for its purposes, rather than
get_phys_addr() returning a pre-formatted FSR.
However this change at least improves the situation by making
the PAR work correctly for address translation operations done
at AArch64 EL2 on the EL2 translation regime. In particular,
this is necessary for Xen to be able to run in our emulation,
so this seems like a safer interim fix given that we are in freeze.
Backports commit 50cd71b0d347c74517dcb7da447fe657fca57d9c from qemu
The CPU ID registers ID_AA64PFR0_EL1, ID_PFR1_EL1 and ID_PFR1
have a field for reporting presence of GICv3 system registers.
We need to report this field correctly in order for Xen to
work as a guest inside QEMU emulation. We mustn't incorrectly
claim the sysregs exist when they don't, though, or Linux will
crash.
Unfortunately the way we've designed the GICv3 emulation in QEMU
puts the system registers as part of the GICv3 device, which
may be created after the CPU proper has been realized. This
means that we don't know at the point when we define the ID
registers what the correct value is. Handle this by switching
them to calling a function at runtime to read the value, where
we can fill in the GIC field appropriately.
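A sketch of the runtime-read approach (ARMCPRegInfo read hooks are real QEMU machinery; the field names and the bit position used below are assumptions):
static uint64_t id_aa64pfr0_read(CPUARMState *env, const ARMCPRegInfo *ri)
{
    ARMCPU *cpu = arm_env_get_cpu(env);
    uint64_t pfr0 = cpu->id_aa64pfr0;

    /* Only advertise the GIC system-register interface if a GICv3 has
     * actually been connected to this CPU by the time the guest reads. */
    if (env->gicv3state) {
        pfr0 |= 1 << 24;   /* GIC field of ID_AA64PFR0_EL1 (assumed position) */
    }
    return pfr0;
}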
Backports commit 96a8b92ed8f02d5e86ad380d3299d9f41f99b072 from qemu
We use raw memory primitives along the !parallel_cpus paths in order to
simplify the endianness handling. Because of that, we did not benefit
from the generic changes to cpu_ldst_useronly_template.h.
The simplest fix is to manipulate helper_retaddr here.
Backports commit 3bdb5fcc9a08a9a47ce30c4e0c2d64c95190b49d from qemu
When we handle a signal from a fault within a user-only memory helper,
we cannot cpu_restore_state with the PC found within the signal frame.
Use a TLS variable, helper_retaddr, to record the unwind start point
to find the faulting guest insn.
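The pattern looks roughly like this (helper_retaddr and GETPC() are the names used above; the helper itself is purely illustrative):
void HELPER(example_access)(CPUARMState *env, uint64_t addr)
{
    helper_retaddr = GETPC();        /* record the unwind start point */
    /* ... guest memory accesses that may fault and raise a signal ... */
    (void)cpu_ldq_data(env, addr);
    helper_retaddr = 0;              /* back to the normal TCG fault path */
}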
Backports commit ec603b5584fa71213ef8f324fe89e4b27cc9d2bc from qemu
Fixes the following warning when compiling with gcc 5.4.0 with -O1
optimizations and --enable-debug:
target/arm/translate-a64.c: In function ‘aarch64_tr_translate_insn’:
target/arm/translate-a64.c:2361:8: error: ‘post_index’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
if (!post_index) {
^
target/arm/translate-a64.c:2307:10: note: ‘post_index’ was declared here
bool post_index;
^
target/arm/translate-a64.c:2386:8: error: ‘writeback’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
if (writeback) {
^
target/arm/translate-a64.c:2308:10: note: ‘writeback’ was declared here
bool writeback;
^
Note that idx comes from selecting 2 bits, and therefore its value
can be at most 3.
Backports commit 5ca66278c859bb1ded243755aeead2be6992ce73 from qemu
For AArch32 LDREXD and STREXD, architecturally the 32-bit word at the
lowest address is always Rt and the one at addr+4 is Rt2, even if the
CPU is big-endian. Our implementation does these with a single
64-bit store, so if we're big-endian then we need to put the two
32-bit halves together in the opposite order to little-endian,
so that they end up in the right places. We were trying to do
this with the gen_aa32_frob64() function, but that is not correct
for the usermode emulator, because there is a distinction
between "load a 64 bit value" (which does a BE 64-bit access
and doesn't need swapping) and "load two 32 bit values as one
64 bit access" (where we still need to do the swapping, like
system mode BE32).
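The fix therefore picks the concatenation order from the endianness directly, along these lines (tcg_gen_concat_i32_i64 is the real TCG op; the temporaries are named as in a sketch):
/* t1 holds Rt (lowest address), t2 holds Rt2 (addr + 4). */
if (s->be_data == MO_BE) {
    tcg_gen_concat_i32_i64(n64, t2, t1);   /* Rt in the high half of the BE store */
} else {
    tcg_gen_concat_i32_i64(n64, t1, t2);   /* Rt in the low half of the LE store */
}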
Backports commit 3448d47b3172015006b79197eb5a69826c6a7b6d from qemu
On a successful address translation instruction, PAR is supposed to
contain cacheability and shareability attributes determined by the
translation. We previously returned 0 for these bits (in line with the
general strategy of ignoring caches and memory attributes), but some
guest OSes may depend on them.
This patch collects the attribute bits in the page-table walk, and
updates PAR with the correct attributes for all LPAE translations.
Short descriptor formats still return 0 for these bits, as in the
prior implementation.
Backports commit 5b2d261d60caf9d988d91ca1e02392d6fc8ea104 from qemu
GCC 4.9 and newer stopped warning about missing braces around the
"universal" C zero initializer {0}. One such initializer sneaked
into scsi/qemu-pr-helper.c and is breaking the build with older
GCC versions.
Detect the lack of support for the idiom, and disable the warning
in that case.
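For reference, the kind of initializer that trips the warning (an illustrative struct, not the actual one in qemu-pr-helper.c):
struct reply {
    struct { int status; int len; } hdr;   /* first member is itself an aggregate */
    char data[16];
};
static struct reply r = { 0 };   /* GCC before 4.9: -Wmissing-braces fires here */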
Backports commit 20bc94a2b8449b7700b6bfa25a87ce2320a1c649 from qemu
Rather than have separate code only used for guest_base,
rely on a recent change to handle constant pool entries.
Backports commit ba2c747992f8c315c2fbddba196ce9137430d61d from qemu
Both ARMv6 and AArch64 currently may drop complex guest_base values
into the constant pool. But generic code wasn't expecting that, and
the pool is not emitted. Correct that.
Backports commit 5b38ee31616d1532c3c3a6dc644a9160d608ed2f from qemu
WFI/E are often, but not always, 4 bytes long. When they are, we need to
set the ARM_EL_IL bit in the syndrome register.
Pass the instruction length to HELPER(wfi), use it to decrement pc
appropriately and to pass an is_16bit flag to syn_wfx, which sets
ARM_EL_IL if needed.
Set dc->insn in both arm_tr_translate_insn and thumb_tr_translate_insn.
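A sketch of how the syndrome helper can honour that flag (EC_WFX_TRAP and the field positions below are assumptions about QEMU's syndrome encoding helpers):
static inline uint32_t syn_wfx(int cv, int cond, int ti, bool is_16bit)
{
    return (EC_WFX_TRAP << ARM_EL_EC_SHIFT)
        | (is_16bit ? 0 : ARM_EL_IL)     /* IL set only for 4-byte encodings */
        | (cv << 24) | (cond << 20) | ti;
}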
Backports commit 58803318e5a546b2eb0efd7a053ed36b6c29ae6f from qemu
Using the offset of a temporary, relative to TCGContext, rather than
its index means that we don't use 0. That leaves offset 0 free for
a NULL representation without having to leave index 0 unused.
Backports commit e89b28a63501c0ad6d2501fe851d0c5202055e70 from qemu
When we used structures for TCGv_*, we needed a macro in order to
perform a comparison. Now that we use pointers, this is just clutter.
Backports commit 11f4e8f8bfaa2caaab24bef6bbbb8a0205015119 from qemu
The GET and MAKE functions weren't really specific enough.
We now have a full complement of functions that convert exactly
between temporaries, arguments, tcgv pointers, and indices.
The target/sparc change is also a bug fix, which would have affected
a host that defines TCG_TARGET_HAS_extr[lh]_i64_i32, i.e. MIPS64.
Backports commit dc41aa7d34989b552efe712ffe184236216f960b from qemu
Transform TCGv_* to an "argument" or a temporary.
For now, an argument is simply the temporary index.
Backports commit ae8b75dc6ec808378487064922f25f1e7ea7a9be from qemu
While we're touching many of the lines anyway, adjust the naming
of the functions to better distinguish when "TCGArg" vs "TCGTemp"
should be used.
Backports commit 6349039d0b06eda59820629b934944246b14a1c1 from qemu
Copy s->nb_globals or s->nb_temps to a local variable for the purposes
of iteration. This should allow the compiler to use low-overhead
looping constructs on some hosts.
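A sketch of the pattern (TCGContext is the real type; the loop body is elided):
static void for_each_temp_sketch(TCGContext *s)
{
    int i, nb_temps = s->nb_temps;   /* hoisted so the bound can live in a register */

    for (i = 0; i < nb_temps; i++) {
        /* per-temp work; the compiler no longer has to assume the body
         * might change s->nb_temps behind its back */
    }
}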
Backports commit ac3b88911ebc6fc841f28898ee8aed40839debe2 from qemu
Rather than have a separate buffer of 10*max_ops entries,
give each opcode 10 entries. The result is actually a bit
smaller and should have slightly more cache locality.
Backports commit 75e8b9b7aa0b95a761b9add7e2f09248b101a392 from qemu
Besides being more correct, arbitrarily long instructions allow the
generation of a translation block that spans three pages. This
confuses the generator and even allows ring 3 code to poison the
translation block cache and inject code into other processes that are
in guest ring 3.
This is an improved (and more invasive) fix for commit 30663fd ("tcg/i386:
Check the size of instruction being translated", 2017-03-24). In addition
to being more precise (and generating the right exception, which is #GP
rather than #UD), it distinguishes better between page faults and too long
instructions, as shown by this test case:
#include <string.h>
#include <sys/mman.h>

int main()
{
    /* Fill the first page with 0x66 prefix bytes; a nop and a ret start
     * the second page, so the instruction at x + 4096 - 15 is 16 bytes long. */
    char *x = mmap(NULL, 8192, PROT_READ|PROT_WRITE|PROT_EXEC,
                   MAP_PRIVATE|MAP_ANON, -1, 0);
    memset(x, 0x66, 4096);
    x[4096] = 0x90;
    x[4097] = 0xc3;
    char *i = x + 4096 - 15;
    mprotect(x + 4096, 4096, PROT_READ|PROT_WRITE);
    ((void(*)(void)) i) ();
}
... which produces a #GP without the mprotect, and a #PF with it.
Backports commit b066c5375737ad0d630196dab2a2b329515a1d00 from qemu
These take care of advancing s->pc, and will provide a unified point
where to check for the 15-byte instruction length limit.
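A sketch of what one of these wrappers can look like (advance_pc is assumed to be the single place that bumps s->pc and enforces the limit):
static inline uint8_t x86_ldub_code(CPUX86State *env, DisasContext *s)
{
    return cpu_ldub_code(env, advance_pc(env, s, 1));
}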
Backports commit e3af7c788b73a6495eb9d94992ef11f6ad6f3c56 from qemu
Most of the users of page_set_flags pass (page, page + len) as
the end points. One might consider this an error, since the other
users supply the endpoint as the last byte of the region.
However, the first thing that page_set_flags does is round end UP
to the start of the next page, which means computing page + len - 1
is in the end pointless. Therefore, accept this usage and do not
assert when given the exact size of the VM as the endpoint.
Backports commit de258eb07db6cf893ef1bfad8c0cedc0b983db55 from qemu
DEFINE_TYPES() will help to simplify the following routine patterns:

static void foo_register_types(void)
{
    type_register_static(&foo1_type_info);
    type_register_static(&foo2_type_info);
    ...
}
type_init(foo_register_types)

or

static void foo_register_types(void)
{
    int i;
    for (i = 0; i < ARRAY_SIZE(type_infos); i++) {
        type_register_static(&type_infos[i]);
    }
}
type_init(foo_register_types)

with a single line

DEFINE_TYPES(type_infos)

where the types have a static definition which could be consolidated in
a single array of TypeInfo structures.
It saves us ~6-10 LOC per use case and would help to replace the
imperative foo_register_types() with a declarative style of
type registration.
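One plausible shape for the macro itself (the helper names used here, such as type_register_static_array(), are assumptions):
#define DEFINE_TYPES(type_array)                                          \
static void do_qemu_init_ ## type_array(void)                             \
{                                                                          \
    type_register_static_array(type_array, ARRAY_SIZE(type_array));        \
}                                                                          \
type_init(do_qemu_init_ ## type_array)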
Backports commit 38b5d79b2e8cf6085324066d84e8bb3b3bbe8548 from qemu