(pde & 0x1fe000) is a 32-bit integer; when it is shifted up
into bits 39-32, the result is zero. Fix this by making the
mask (and thus the result of the AND) a 64-bit integer.
Reported by Coverity.
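As a hedged illustration of the pattern (the mask matches the message; the
shift of 19 places the masked bits 13-20 at bits 32-39; the surrounding
page-table code is elided):
uint32_t pde = 0;   /* 32-bit page directory entry */
uint64_t pte;
/* Buggy: (pde & 0x1fe000) is a 32-bit int, so shifting it up toward
 * bits 39-32 drops those bits; the contribution is always zero. */
pte = (pde & 0x1fe000) << 19;
/* Fixed: a 64-bit mask widens the AND result before the shift. */
pte = (pde & 0x1fe000ULL) << 19;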
Backports commit 388ee48a88e684e719660a2cae9c21897b94fa37 from qemu
All references to cpu_T are done with a constant index. It aids
readability to decompose the array into two scalar variables.
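A minimal sketch of the shape of this refactoring (illustrative, not the
full patch):
/* Before: a two-element array, only ever indexed with constant 0 or 1. */
static TCGv cpu_T[2];
/* After: two named scalars, so uses read as cpu_T0 / cpu_T1. */
static TCGv cpu_T0, cpu_T1;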
Backports commit 1d1cc4d0f481b2939c7e9f6606e571b2fc81971a from qemu
Merge gen_op_addl_A0_im and gen_op_addq_A0_im into gen_add_A0_im
and clean up the ifdef.
Replace the one remaining user of gen_op_addl_A0_im with gen_add_A0_im.
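A sketch of what the merged helper plausibly looks like, assuming the
existing cpu_A0 global and the CODE64() macro in target-i386/translate.c:
static void gen_add_A0_im(DisasContext *s, int val)
{
    tcg_gen_addi_tl(cpu_A0, cpu_A0, val);
    if (!CODE64(s)) {
        /* Outside 64-bit code the effective address wraps at 32 bits. */
        tcg_gen_ext32u_tl(cpu_A0, cpu_A0);
    }
}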
Backports commit 4e85057b92d214decf10045d3d4faa2faf33d100 from qemu
Unify the code across stack pointer widths. Fix the note about
not updating ESP before the potential exception.
Backports commit 2045f04c3ae030bda650f84035f114bbd84909a9 from qemu
Use gen_lea_v_seg for centralized segment base knowledge. Unify
code across 32- and 64-bit. Fix note about "must save state"
before using the out-of-line helpers.
Backports commit 743e398e2fbf2f7183bf7a53c9d011fabcaa1770 from qemu
More centralization of handling of segment bases.
Also fixes the note about 16-bit wrap-around not being fully handled.
Backports commit d37ea0c04723f3e15fde55fe97cff6278159929b from qemu
Having segs[].base as a register significantly improves code
generation for real and protected modes, particularly for TBs
that have multiple memory references where the segment base
can be held in a hard register through the TB.
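A minimal sketch of the idea, assuming the usual tcg_global_mem_new()
registration used for other globals (the names here are illustrative):
/* One TCG global per segment register base. */
static TCGv cpu_seg_base[6];
/* During x86 TCG init: expose segs[i].base so the register allocator
 * can keep a live segment base in a host register across the TB. */
for (i = 0; i < 6; ++i) {
    cpu_seg_base[i] = tcg_global_mem_new(cpu_env,
                                         offsetof(CPUX86State, segs[i].base),
                                         "seg_base"); /* illustrative name */
}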
Backports commit 3558f8055f37a34762b7a2a0f02687e6eeab893d from qemu
I.e. gen_push_v, gen_pop_T0, gen_stack_A0.
More centralization of handling of segment bases.
Backports commit 77ebcad04f3659fa7eb799928fdd68280fac720d from qemu
Add forgotten zero-extension in the TARGET_X86_64, !CODE64, ss32 case;
use this new function to implement gen_string_movl_A0_EDI,
gen_string_movl_A0_ESI, gen_add_A0_ds_seg.
Backports commit ca2f29f555805d07fb0b9ebfbbfc4e3656530977 from qemu
Similar to the previous patch, it's nice to have all functions
in the tree that involve a visitor and a name for conversion to
or from QAPI consistently place the 'name' parameter next to the
Visitor parameter.
Done by manually changing include/qom/object.h and qom/object.c,
then running this Coccinelle script and touching up the fallout
(Coccinelle insisted on adding some trailing whitespace).
@ rule1 @
identifier fn;
typedef Object, Visitor, Error;
identifier obj, v, opaque, name, errp;
@@
void fn
- (Object *obj, Visitor *v, void *opaque, const char *name,
+ (Object *obj, Visitor *v, const char *name, void *opaque,
Error **errp) { ... }
@@
identifier rule1.fn;
expression obj, v, opaque, name, errp;
@@
fn(obj, v,
- opaque, name,
+ name, opaque,
errp)
Backports commit d7bce9999df85c56c8cb1fcffd944d51bff8ff48 from qemu
JSON uses "name":value, but many of our visitor interfaces were
called with visit_type_FOO(v, &value, name, errp). Having to
mentally swap the parameter order to match the JSON order can be
confusing. It's particularly bad for visit_start_struct(),
where the 'name' parameter is smack in the middle of the
otherwise-related group of 'obj, kind, size' parameters! It's
time to do a global swap of the parameter ordering, so that the
'name' parameter is always immediately after the Visitor argument.
Additional reason in favor of the swap: the existing include/qjson.h
prefers listing 'name' first in json_prop_*(), and I have plans to
unify that file with the qapi visitors; listing 'name' first in
qapi will minimize churn to the (admittedly few) qjson.h clients.
Later patches will then fix docs, object.h, visitor-impl.h, and
those clients to match.
Done by first patching scripts/qapi*.py by hand to make generated
files do what I want, then by running the following Coccinelle
script to affect the rest of the code base:
$ spatch --sp-file script `git grep -l '\bvisit_' -- '**/*.[ch]'`
I then had to apply some touchups (Coccinelle insisted on TAB
indentation in visitor.h, and botched the signature of
visit_type_enum() by rewriting 'const char *const strings[]' to
the syntactically invalid 'const char*const[] strings'). The
movement of parameters is sufficient to provoke compiler errors
if any callers were missed.
// Part 1: Swap declaration order
@@
type TV, TErr, TObj, T1, T2;
identifier OBJ, ARG1, ARG2;
@@
void visit_start_struct
-(TV v, TObj OBJ, T1 ARG1, const char *name, T2 ARG2, TErr errp)
+(TV v, const char *name, TObj OBJ, T1 ARG1, T2 ARG2, TErr errp)
{ ... }
@@
type bool, TV, T1;
identifier ARG1;
@@
bool visit_optional
-(TV v, T1 ARG1, const char *name)
+(TV v, const char *name, T1 ARG1)
{ ... }
@@
type TV, TErr, TObj, T1;
identifier OBJ, ARG1;
@@
void visit_get_next_type
-(TV v, TObj OBJ, T1 ARG1, const char *name, TErr errp)
+(TV v, const char *name, TObj OBJ, T1 ARG1, TErr errp)
{ ... }
@@
type TV, TErr, TObj, T1, T2;
identifier OBJ, ARG1, ARG2;
@@
void visit_type_enum
-(TV v, TObj OBJ, T1 ARG1, T2 ARG2, const char *name, TErr errp)
+(TV v, const char *name, TObj OBJ, T1 ARG1, T2 ARG2, TErr errp)
{ ... }
@@
type TV, TErr, TObj;
identifier OBJ;
identifier VISIT_TYPE =~ "^visit_type_";
@@
void VISIT_TYPE
-(TV v, TObj OBJ, const char *name, TErr errp)
+(TV v, const char *name, TObj OBJ, TErr errp)
{ ... }
// Part 2: swap caller order
@@
expression V, NAME, OBJ, ARG1, ARG2, ERR;
identifier VISIT_TYPE =~ "^visit_type_";
@@
(
-visit_start_struct(V, OBJ, ARG1, NAME, ARG2, ERR)
+visit_start_struct(V, NAME, OBJ, ARG1, ARG2, ERR)
|
-visit_optional(V, ARG1, NAME)
+visit_optional(V, NAME, ARG1)
|
-visit_get_next_type(V, OBJ, ARG1, NAME, ERR)
+visit_get_next_type(V, NAME, OBJ, ARG1, ERR)
|
-visit_type_enum(V, OBJ, ARG1, ARG2, NAME, ERR)
+visit_type_enum(V, NAME, OBJ, ARG1, ARG2, ERR)
|
-VISIT_TYPE(V, OBJ, NAME, ERR)
+VISIT_TYPE(V, NAME, OBJ, ERR)
)
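For illustration, a typical call site changes like this (the property name
is hypothetical; visit_type_int() is one of the affected visitors):
Error *err = NULL;
int64_t value = 0;
/* Old order: the object pointer came before the name. */
visit_type_int(v, &value, "cache-size", &err);
/* New order: the name directly follows the Visitor. */
visit_type_int(v, "cache-size", &value, &err);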
Backports commit 51e72bc1dd6ace6e91d675f41a1f09bd00ab8043 from qemu
Clean up includes so that osdep.h is included first and headers
which it implies are not included manually.
This commit was created with scripts/clean-includes.
Backports commit b6a0aa053711e27e1a7825c1fca662beb05bee6f from qemu
This patch enables migrating a vcpu's TSC rate. If KVM on the
destination machine supports TSC scaling, guest programs will
observe a consistent TSC rate across the migration.
If TSC scaling is not supported on the destination machine, the
migration will not be aborted, and QEMU on the destination will
not set the vcpu's TSC rate to the migrated value.
If the vcpu's TSC rate specified by the CPU option 'tsc-freq' on the
destination machine is inconsistent with the migrated TSC rate,
the migration will be aborted.
For backwards compatibility, migration of the vcpu's TSC rate is
disabled on pc-*-2.5 and older machine types.
Backports relevant parts of commit 36f96c4b6bd25f43000c317518ff3df10202bc75 from qemu
This will ensure we never use the MMX_* and ZMM_* macros with the
wrong struct type.
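A sketch of the mechanism, given the struct-name-suffixed field names used
elsewhere in this series (illustrative, not the exact header):
/* The accessor macros expand to field names that embed the struct name,
 * so using ZMM_B() on an MMXReg (or vice versa) cannot compile. */
typedef union ZMMReg { uint8_t _b_ZMMReg[64]; /* ... */ } ZMMReg;
typedef union MMXReg { uint8_t _b_MMXReg[8];  /* ... */ } MMXReg;
#define ZMM_B(n) _b_ZMMReg[n]
#define MMX_B(n) _b_MMXReg[n]
/* zmm_reg.ZMM_B(0) compiles; mmx_reg.ZMM_B(0) is a compile-time error. */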
Backports commit f23a9db6bca5b9a228c77bbcaa06d01510e148b7 from qemu
Add a new field and reorder MMXReg fields, to make MMXReg and
ZMMReg field lists look the same (except for the array sizes).
Backports commit 9253e1a7923e94598419ac9a7df7b8bc6cba65a5 from qemu
They are helpers for the ZMMReg fields, so name them accordingly.
This is just a global search+replace, no other changes are being
introduced.
Backports commit 19cbd87c14ab208858ee1233b790f37cfefed4b9 from qemu
The struct represents a 512-bit register, so name it accordingly.
This is just a global search+replace, no other changes are being
introduced.
Backports commit fa4518741ed69aa7993f9c15bb52eacc375681fc from qemu
Make MMXReg use the same field names used on XMMReg, so we can
try to reuse macros and other code later.
Backports commit 9618f40f06e90c8fa8ae06b56c7404a7cc937e22 from qemu
Rename the function so that the reason for its existence is
clearer: it does x86-specific initialization of TCG structures.
Backports commit 63618b4ed48f0fc2a7a3fd1117e2f0b512248dab from qemu
The TARGET_HAS_ICE #define is intended to indicate whether a target-*
guest CPU implementation supports breakpoint handling. However,
all our guest CPUs have that support (the only two which do not
define TARGET_HAS_ICE are unicore32 and openrisc, and in both those
cases breakpoint support is present and the lack of the #define is just
a bug). So remove the #define entirely: all new guest CPU support
should include breakpoint handling as part of the basic implementation.
Backports commit ec53b45bcd1f74f7a4c31331fa6d50b402cd6d26 from qemu
Allow multiple calls to cpu_address_space_init(); each
call adds an entry to the cpu->ases array at the specified
index. It is up to the target-specific CPU code to actually use
these extra address spaces.
Since this multiple AddressSpace support won't work with
KVM, add an assertion to avoid confusing failures.
Backports commit 12ebc9a76dd7702aef0a3618717a826c19c34ef4 from qemu
Rather than setting cpu->as unconditionally in cpu_exec_init
(and then having target-i386 override this later), don't set
it until the first call to cpu_address_space_init.
This requires us to initialise the address space for
both TCG and KVM (KVM doesn't need the AS listener but
it does require cpu->as to be set).
For target CPUs which don't set up any address spaces (currently
everything except i386), add the default address_space_memory
in qemu_init_vcpu().
Backports commit 56943e8cc14b7eeeab67d1942fa5d8bcafe3e53f from qemu
x86_cpu_handle_mmu_fault is currently checking twice for writability
and executability of pages; the first time to decide whether to
trigger a page fault, the second time to compute the "prot" argument
to tlb_set_page_with_attrs.
Reorganize the code so that "prot" is computed first, then used to
check whether to raise a page fault, and finally PROT_WRITE is
removed if the D bit will have to be set.
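A pseudo-C sketch of the reorganized flow; the two helpers named here are
hypothetical, and only the ordering mirrors the description:
/* 1. Compute the permitted access rights once. */
prot = compute_page_prot(env, pte, is_user);      /* hypothetical helper */
/* 2. Use them to decide whether to raise the page fault. */
if (!(prot & (1 << access_type))) {
    raise_page_fault(env, addr, access_type);     /* hypothetical helper */
}
/* 3. If the Dirty bit still has to be set, drop write permission so the
 *    first write faults again and D can be recorded. */
if (!(pte & PG_DIRTY_MASK)) {
    prot &= ~PAGE_WRITE;
}
tlb_set_page_with_attrs(cs, vaddr, paddr, attrs, prot, mmu_idx, page_size);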
Backports commit 76c64d33601a4948d6f72022992574a75b6fab97 from qemu
Now these instructions are handled by TCG and can be added to the
TCG_7_0_EBX_FEATURES macro.
Backports commit 0c47242b519a224279f13c685aa6e79347f97b85 from qemu
Detect the clflushopt and pcommit instructions and check their
corresponding feature flags, instead of checking CPUID_SSE and
CPUID_CLFLUSH.
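A hedged sketch of the kind of check involved; the flag name follows QEMU's
CPUID_7_0_EBX_* convention and the decoder-state field is assumed:
/* Gate on the CPUID[EAX=7,ECX=0].EBX feature word instead of
 * CPUID_SSE / CPUID_CLFLUSH. */
if (!(s->cpuid_7_0_ebx_features & CPUID_7_0_EBX_CLFLUSHOPT)) {
    goto illegal_op;
}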
Backports commit 891bc821a3ee462b09b1ec436f2891f00ab1f85b from qemu
Accept the clwb instruction (66 0F AE /6) if its corresponding feature
flag is enabled on CPUID[7].
Backports commit 5e1fac2dba7780e0cb2c022d4b39586af70bea0d from qemu
POPCNT is not available on Penryn and older and on Opteron_G2 and older,
and we want to make the default CPU runnable in most hosts, so it won't
be enabled by default in KVM mode.
We should eventually have all features supported by TCG enabled by
default in TCG mode, but as we don't have a good mechanism today to
ensure we have different defaults in KVM and TCG mode, disable POPCNT in
the qemu64 and qemu32 CPU models entirely.
Backports commit 6aa91e4a0237ddcebb85e3a95e166f3b3cfa42ae from qemu
ABM is not available on Sandy Bridge and older, and we want to make the
default CPU runnable in most hosts, so it won't be enabled by default in
KVM mode.
We should eventually have all features supported by TCG enabled by
default in TCG mode, but as we don't have a good mechanism today to
ensure we have different defaults in KVM and TCG mode, disable ABM in
the qemu64 CPU model entirely.
Backports commit 711956722c6764336f8b78a2106e57c55f02f36d from qemu
SSE4a is not available in any Intel CPU, and we want to make the default
CPU runnable in most hosts, so it doesn't make sense to enable it by
default in KVM mode.
We should eventually have all features supported by TCG enabled by
default in TCG mode, but as we don't have a good mechanism today to
ensure we have different defaults in KVM and TCG mode, disable SSE4a in
the qemu64 CPU model entirely.
Backports commit 0909ad24b2769368716c85f79fbb995dbb7041a9 from qemu
In this mode, referring to an invalid element of the source forces the
result to false (table 4-7, last column), but referring to an invalid
element of the destination forces the result to true, so the outer
loop should still be run even if some elements of the destination
will be invalid. They will be avoided in the inner loop, which
correctly bounds "i" to validd, but they will still contribute to a
positive outcome of the search.
This fixes tst_strstr in glibc 2.17.
Backports commit 54c54f8b56047d3c2420e1ae06a6a8890c220ac4 from qemu
Some targets already had this within their logic, but make sure
it's present for all targets.
Backports commit 522a0d4e3c0d397ffb45ec400d8cbd426dad9d17 from qemu
Reduce the boilerplate required for each target. At the same time,
move the breakpoint test to after the call to tcg_gen_insn_start.
Note that arm and aarch64 do not use cpu_breakpoint_test, but still
move the inline test down after tcg_gen_insn_start.
Backports commit b933066ae03d924a92b2616b4a24e7d91cd5b841 from qemu
Left shift of negative values is undefined behavior. Detected by clang:
qemu/target-i386/translate.c:2423:26: runtime error:
left shift of negative value -8
This changes the code to reverse the sign after the left shift.
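The pattern in isolation (a minimal illustration, not the exact
translate.c line):
int shift = 3;
/* Undefined behaviour: left-shifting a negative value. */
int bad  = -8 << shift;
/* Well-defined, same numeric result: shift the magnitude, then negate. */
int good = -(8 << shift);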
Backports commit 712b4243c761cb6ab6a4367a160fd2a42e2d4b76 from qemu
Fix undefined behavior detected by clang runtime check:
qemu/target-i386/cpu.c:1494:15: runtime error:
left shift of 1 by 31 places cannot be represented in type 'int'
While doing that, add extra parentheses for clarity.
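In isolation, the fix looks like this (a minimal illustration):
/* Undefined behaviour: the literal 1 is a signed int, so 1 << 31 overflows it. */
uint32_t bad  = 1 << 31;
/* Well-defined: shift an unsigned constant instead. */
uint32_t good = 1U << 31;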
Backports commit 72370dc1149d7c90d2c2218e0d0658bee23a5bf7 from qemu
Introduce helper_get_dr so that we don't have to put CR4[DE]
into the scarce HFLAGS resource. At the same time, rename
helper_movl_drN_T0 to helper_set_dr and set the helper flags.
Backports commit d0052339236072bbf08c1d600c0906126b1ab258 from qemu
If the debug register is not enabled, nothing needs to be done
besides updating the register.
Backports commit 7525b55051277717329cf64a9e1d5cff840d6f38 from qemu
Before the last patch, we had an efficient loop that disabled
local breakpoints on task switch. Re-add that, but in a more
general way that handles changes to the global enable bits too.
Backports commit 36eb6e096729f9aade3a6af7dbe4d0a990335d7e from qemu
This moves the last of the iteration over breakpoints into
the bpt_helper.c file. This also allows us to make several
breakpoint functions static.
Backports commit 93d00d0fbe4711061834730fb70525d167b6f908 from qemu
Processors up to the Pentium (says Bochs; I do not have old enough
manuals) require a 32KiB alignment for the SMBASE, but newer processors
do not need that, and Tiano Core will use non-aligned SMBASE values.
Backports commit dd75d4fcb4a82c34d4f466e7fc166162b71ff740 from qemu