This speeds up MEMORY_LISTENER_CALL noticeably. Right now,
with many PCI devices you have N regions added to M AddressSpaces
(M = # PCI devices with bus-master enabled), so there are N*M
listener calls, and each call scans the whole listener list, which
holds at least M listeners. Because most of the N regions are BARs,
N is itself roughly proportional to M, making the whole thing
O(M^3). This changes it
to O(M^2), which is the best we can do without rewriting the
whole thing.
Backports commit 9a54635dcb51a3fcf7507af630168f514a8cd4e7 from qemu
This comes for free from unifying tcg_reg_alloc_mov and
tcg_reg_alloc_movi's handling of TEMP_VAL_CONST. It triggers
often on moves to cc_dst, such as the following translation
of "sub $0x3c,%esp":
    before:                          after:
    subl $0x3c,%ebp                  subl $0x3c,%ebp
    movl %ebp,0x10(%r14)             movl %ebp,0x10(%r14)
    movl $0x3c,%ebx                  movl $0x3c,0x2c(%r14)
    movl %ebx,0x2c(%r14)
Backports commit 0fe4fca4e1a5e06a270127dd80bb753d4dda61c6 from qemu
This was found with test-i386. The issue is that instructions
such as
    addr32 lea (%eax), %rax
did not perform a 32-bit extension, because the LEA translation
skipped the gen_lea_v_seg step. That step does not just add
segments, it also takes care of extending from address size to
pointer size.
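For illustration, a minimal sketch of the extension semantics that step
provides (function name hypothetical, not the actual TCG code):

    #include <stdint.h>

    /* "addr32 lea (%eax), %rax": the 32-bit effective address must be
     * zero-extended to the 64-bit pointer size before reaching %rax. */
    static uint64_t lea_addr32(uint32_t effective_addr)
    {
        return (uint64_t)effective_addr;   /* bits [63:32] cleared */
    }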
Backports commit 620abfb004543404bef1953e25da2ad77352941a from qemu
This introduces load-acquire and store-release operations in QEMU.
For now, just use them as an implementation detail of atomic_mb_read
and atomic_mb_set.
Since docs/atomics.txt documents that atomic_mb_read only synchronizes
with an atomic_mb_set of the same variable, we can use the new implementation
everywhere instead of seq-cst loads and stores.
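A minimal sketch of the idea using the GCC/Clang builtins (the actual
QEMU macros live in include/qemu/atomic.h and differ in detail):

    #define atomic_load_acquire(ptr) \
        __atomic_load_n(ptr, __ATOMIC_ACQUIRE)
    #define atomic_store_release(ptr, val) \
        __atomic_store_n(ptr, val, __ATOMIC_RELEASE)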
Backports commit 803cf26a9e019b5d2256a8edeb22e3538c4f3261 from qemu
When explicitly enabling unmigratable flags using "-cpu host"
(e.g. "-cpu host,+invtsc"), the requested feature won't be
enabled because cpu->migratable is true by default.
This is inconsistent with all other CPU models, which don't have
the "migratable" option, making "+invtsc" work without the need
for extra options.
This happens because x86_cpu_filter_features() uses
cpu->migratable as an argument for
x86_cpu_get_supported_feature_word(). This is not useful
because:
2) on "-cpu host" it only makes QEMU disable features that were
explicitly enabled in the command-line;
1) on all the other CPU models, cpu->migratable is already false.
The fix is to just use 'false' as an argument to
x86_cpu_get_supported_feature_word() in
x86_cpu_filter_features().
Note that:
* This won't change anything for people using
"-cpu host" or "-cpu host,migratable=<on|off>" (with no extra
features) because the x86_cpu_get_supported_feature_word() call
on the cpu->host_features check uses cpu->migratable as
argument.
* This won't change anything for any CPU model except "host"
because they all have cpu->migratable == false (and only "host"
has the "migratable" property that allows it to be changed).
* This will only change things for people using "-cpu host,+<feature>",
where <feature> is a non-migratable feature. The only existing
named non-migratable feature is "invtsc".
In other words, this change will only affect people using
"-cpu host,+invtsc" (that will now get what they asked for: the
invtsc flag will be enabled). All other use cases are unaffected.
Backports commit 46c032f3afcc05a0123914609f1003906ba63fda from qemu
When probing for CPU model information, we need to reuse the code
that initializes CPUID fields, but not the remaining side-effects
of x86_cpu_realizefn(). Move that code to a separate function
that can be reused later.
Backports commit 41f3d4d69a423dadb8431fda65d8d7c68c0de0fc from qemu
x86_cpu_filter_features() will be reused by code that shouldn't
print any warning. Move the warning code to a new
x86_cpu_report_filtered_features() function, and call it from
x86_cpu_realizefn().
Backports commit 8ca30e8673aff9bfcf8f969f8db4266b5f62e49c from qemu
Instead of treating the FP and SSE bits as special cases, add
them to the x86_ext_save_areas array. This will simplify the code
that calculates the supported xsave components and the size of
the xsave area.
Backports commit e3c9022b4e2b6a4deb6518361d2bbf33522b9198 from qemu
Instead of keeping the aliases inside the feature name arrays and
require parsing the strings, just register alias properties
manually. This simplifies the code for property registration and
lookup.
Backports commit 16d2fcaa509b1ca56eb2fcd8fe877279cf65cccc from qemu
Instead of translating the feature name entries when adding
property names, store the actual property names in the feature
name array.
For reference, here is the full list of functions that use
FeatureWordInfo::feat_names:
* x86_cpu_get_migratable_flags(): not affected, as it just
checks for non-NULL values.
* report_unavailable_features(): informative only. It will
start printing feature names with hyphens.
* x86_cpu_list(): informative only. It will start printing
feature names with hyphens.
* x86_cpu_register_feature_bit_props(): not affected, as it
was already calling feat2prop(). Now we can remove the
feat2prop() calls safely.
So, the only user-visible effect of this patch is the new names
being used in help and error messages for users.
Backports commit fc7dfd205f3287893c436d932a167bffa30579c8 from qemu
VME is already disabled automatically when using TCG. So, instead
of pretending it is there when reporting CPU model data on
query-cpu-* QMP commands (which made every CPU model be reported
as not runnable), we can disable it by default on all CPU models
when using TCG.
Do that by adding a tcg_default_props array that will work like
kvm_default_props.
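Such an array follows the existing kvm_default_props shape; a sketch of
what the new table plausibly looks like:

    static PropValue tcg_default_props[] = {
        { "vme", "off" },
        { NULL, NULL },
    };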
Backports commit 04d99c3c61f4bdc0450dbeb6512b6dd743baca65 from qemu
Instead of using the builtin_x86_defs array, use the QOM subclass
list to list CPU models on "-cpu ?" and "query-cpu-definitions".
Backports commit ee465a3ef77c2b2975ffa71c72208c05b3f3970d from qemu
MDCCINT_EL1 is part of the DCC debugger communication
channel between the CPU and an attached external debugger.
QEMU doesn't implement this, but since Linux may try
to access this register we need to provide at least
a dummy implementation.
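A dummy register of this kind is typically declared in the ARM cpregs
table roughly as follows (encoding fields elided; sketch only):

    { .name = "MDCCINT_EL1", .state = ARM_CP_STATE_BOTH,
      /* cp/opc0/opc1/crn/crm/opc2 encoding fields elided here */
      .access = PL1_RW, .type = ARM_CP_NOP },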
Backports commit 5dbdc4342f479d799a1970dd5fd22e64c9dcd50d from qemu
In commit 9b6a3ea7a699594 store_reg() was changed to mask
both bits 0 and 1 of the new PC value when in ARM mode.
Unfortunately this broke the exception return code paths
when doing a return from ARM mode to Thumb mode: in some
of these we write a new CPSR including new Thumb mode
bit via gen_helper_cpsr_write_eret(), and then use store_reg()
to write the new PC. In this case if the new CPSR specified
Thumb mode then masking bit 1 of the PC is incorrect
(these code paths correspond to the v8 ARM ARM pseudocode
function AArch32.ExceptionReturn(), which always aligns the
new PC appropriately for the new instruction set state).
Instead of using store_reg() in exception-return code paths,
call a new store_pc_exc_ret() which stores the raw new PC
value to env->regs[15], and then mask it appropriately in
the subsequent helper_cpsr_write_eret() where the new
env->thumb state is available.
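A hedged sketch of the new helper and the deferred masking (simplified
from the actual patch):

    /* Store the raw PC; no masking yet, the new Thumb bit isn't known. */
    static void store_pc_exc_ret(DisasContext *s, TCGv_i32 pc)
    {
        tcg_gen_mov_i32(cpu_R[15], pc);
        tcg_temp_free_i32(pc);
    }

    /* Later, in helper_cpsr_write_eret(), once env->thumb is updated: */
    env->regs[15] &= (env->thumb ? ~1u : ~3u);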
This fixes a bug introduced by 9b6a3ea7a699594 which caused
crashes/hangs or otherwise bad behaviour for Linux when
userspace was using Thumb.
Backports commit fb0e8e79a9d77ee240dbca036fa8698ce654e5d1 from qemu
Three cases in a switch in disas_exc() require reference to the
ARM ARM spec in order to determine what case they're handling.
Backports commit 957956b3013c8122a749dfe61a41aef8b4100e31 from qemu
For BR, BLR and RET instructions, if tagged addresses are enabled, the
tag field in the address must be cleared out prior to loading the
address into the PC. Depending on the current EL, it will be set to
either all 0's or all 1's.
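A hedged sketch of the tag-stripping semantics (helper name
hypothetical):

    #include <stdint.h>
    #include <stdbool.h>

    /* With tagged addresses enabled, bits [63:56] of the branch target
     * are replaced before the value reaches the PC. */
    static uint64_t clean_branch_target(uint64_t addr, bool upper_range)
    {
        if (upper_range) {
            return addr | 0xff00000000000000ULL;   /* tag becomes all 1's */
        }
        return addr & 0x00ffffffffffffffULL;       /* tag becomes all 0's */
    }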
Backports commit 6feecb8b941f2d21e5645d0b6e0cdb776998121b from qemu
When capturing the current CPU state for the TB, extract the TBI0 and TBI1
values from the correct TCR for the current EL and then add them to the TB
flags field.
Then, at the start of code generation for the block, copy the TBI fields
into the DisasContext structure.
Backports commit 86fb3fa4ed5873b021a362ea26a021f4aeab1bb4 from qemu
The 'old' dispatch code returned a QERR_MISSING_PARAMETER for missing
parameters, but the qapi qmp_dispatch() code uses
QERR_INVALID_PARAMETER_TYPE.
Improve qapi code to return QERR_MISSING_PARAMETER where
appropriate.
Fix expected error message in iotests.
Backports commit 1382d4abdf9619985e4078e37e49e487cea9935e from qemu
Unlike the other visit methods, visit_type_any() and visit_type_null()
neglect to check whether qmp_input_get_object() succeeded. They crash
when it fails. Reproducer:
{ "execute": "qom-set",
"arguments": { "path": "/machine", "property": "rtc-time" } }
Will crash with:
qapi/qapi-visit-core.c:277: visit_type_any: Assertion `!err != !*obj'
failed
Broken in commit 5c678ee. Fix by adding the missing error checks.
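A hedged sketch of the added check (variable names follow the
input-visitor code; error choice illustrative):

    QObject *qobj = qmp_input_get_object(qiv, name, true);
    if (!qobj) {
        error_setg(errp, QERR_MISSING_PARAMETER, name ? name : "null");
        return;
    }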
Backports commit c489780203f9b22aca5539ec7589b7140bdc951f from qemu
Only very modern GCCs actually set this define when building with
ThreadSanitizer, so this little typo slipped through.
Backports commit 23ea7f57949f2f5934f4d5bbc29fe321b3a7067b from qemu
ThreadSanitizer picks up potential races although we already use
barriers to ensure things are in the correct order when processing exit
requests. For true C11 defined behaviour across threads we need to use
relaxed atomic_set/atomic_read semantics to reassure tsan.
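These helpers boil down to the relaxed builtins; a sketch of their shape
(see include/qemu/atomic.h for the real definitions):

    #define atomic_read(ptr)       __atomic_load_n(ptr, __ATOMIC_RELAXED)
    #define atomic_set(ptr, val)   __atomic_store_n(ptr, val, __ATOMIC_RELAXED)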
Backports commit 027d9a7d2911e993cdcbd21c7c35d1dd058f05bb from qemu
The ThreadSanitizer rightly complains that something initialised with a
normal access is later updated and read atomically.
Backports commit ce7cf6a973f4b614162b9518954d441fa5e32fc6 from qemu
The idiom CPU_GET_CLASS(cpu) is fairly extensively used in various
threads and trips up ThreadSanitizer due to the fact it updates
obj->class->object_cast_cache behind the scenes. As this is just a
fast-path cache there is no need to lock updates.
However to ensure defined C11 behaviour across threads we need to use
the plain atomic_read/set primitives and keep the sanitizer happy.
Backports commit b6b3ccfda015dcd5ab50f70c189ee5cc6c622e91 from qemu
This is to appease sanitizer builds which complain that:
"error: control reaches end of non-void function"
Backports commit 550276ae0a88851edda2cb7fcdd64256dbb8e314 from qemu
Add some notes on the use of the relaxed atomic access helpers and their
importance for defined behaviour in C11's multi-threaded memory model.
Backports commit e653bc6b0ff645c25b8a2eb607c18a5c98b59db6 from qemu
In the ARM v6 architecture, 'sub pc, pc, 1' is not an interworking
branch, so the computed new value is written to r15 as a normal
value. The architecture says that in this case, bits [1:0] of
the value written must be ignored if we are in ARM mode (or
bit [0] ignored if in Thumb mode); this is a change from the
ARMv4/v5 specification that behaviour is UNPREDICTABLE.
Use the correct mask on the PC value when doing a non-interworking
store to PC.
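A hedged sketch of the corrected masking in store_reg() (simplified):

    /* Non-interworking write to r15: ignore bit [0] in Thumb mode,
     * bits [1:0] in ARM mode. */
    tcg_gen_andi_i32(var, var, s->thumb ? ~1 : ~3);
    tcg_gen_mov_i32(cpu_R[15], var);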
A popular library used on RaspberryPi uses this instruction
as part of a trick to determine whether it is running on
ARMv6 or ARMv7, and we were mishandling the sequence.
Fixes bug: https://bugs.launchpad.net/bugs/1625295
Backports commit 9b6a3ea7a699594162ed3d11e4e04b98568dc5c0 from qemu
Fix the decoding of iss_sf in disas_ld_lit.
The SF (Sixty-Four) field in the ISS (Instruction Specific Syndrome)
is a bit that specifies the width of the register that the
instruction loads to.
If cleared it specifies 32 bits.
If set it specifies 64 bits.
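A hedged reconstruction of the corrected decode (function name
illustrative):

    /* SF is set when the destination register is 64 bits wide:
     * signed loads target an X register when opc bit 0 is clear;
     * unsigned loads do so only for 8-byte accesses (size == 3). */
    static bool ldst_iss_sf(int size, bool is_signed, int opc)
    {
        if (is_signed) {
            return (opc & 1) == 0;
        }
        return size == 3;
    }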
Backports commit 173ff58580b383a7841b18fddb293038c9d40d1c from qemu
Current CPU definition for AMD Opteron third generation includes
features like SSE4a and LAHF_LM support in emulated CPUID. These
features are present in K8 rev.E or K10 CPUs and later. However,
current G3 family and model describe 2nd generation K8 cores instead.
This is incorrect but was considered harmless until our tests found a
problem with linux kernels >= 3.10 (and maybe earlier) which specifically
check for Opteron K8 model when parsing CPUID leaf 0x80000001:
http://lxr.free-electrons.com/source/arch/x86/kernel/cpu/amd.c?v=3.16#L552
This code will disable LAHF_LM feature in /proc/cpuinfo if model number
is inconsistent.
This change sets Opteron_G3 family/model/stepping to 16/2/3, which
is a proper third-generation Opteron 2350 CPU.
Backports commit 339892d758efb2d0954160d41736a0eac9875d67 from qemu
A regression was introduced by commit 96193c22a "target-i386:
Move xsave component mask to features array": all
CPUID[EAX=0xD,ECX=0]:EAX bits were being reported as unmigratable
because they don't have feature names defined. This broke
"-cpu host" because it enables only migratable features by
default.
This adds a new field to FeatureWordInfo: migratable_flags, which
will make those features be reported as migratable even if they
don't have a property name defined.
Backports commit 6fb2fff75dceed1716e757882a6dfbadd9042407 from qemu
CPUState is a fairly common pointer to pass to these helpers. This means
if you need other arguments for the async_run_on_cpu case you end up
having to do a g_malloc to stuff additional data into the routine. For
the current users this isn't a massive deal but for MTTCG this gets
cumbersome when the only other parameter is often an address.
This adds the typedef run_on_cpu_func for helper functions which has an
explicit CPUState * passed as the first parameter. All the users of
run_on_cpu and async_run_on_cpu have had their helpers updated to use
CPUState where available.
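The typedef in question, as described above:

    typedef void (*run_on_cpu_func)(CPUState *cpu, void *data);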
Backports commit e0eeb4a21a3ca4b296220ce4449d8acef9de9049 from qemu
As discussed on the list [1], having a comment stating that this file
is "public domain" is arguably wrong and not legally binding. This patch
replaces that comment with a clear GPLv2+ license as proposed in [2].
[1] http://lists.nongnu.org/archive/html/qemu-devel/2016-09/msg06151.html
[2] http://lists.nongnu.org/archive/html/qemu-devel/2016-09/msg06217.html
Worth noting, compiler.h was originally created on 5c026320 by splitting
qemu-common.h. At the time, qemu-common.h was already GPLv2+.
Backports commit cc9d8a3b2c41c22fb09f90f3085e6036c199c3ca from qemu
This will ensure all checks for features[FEAT_KVM] in the code
will be correct in case the KVM CPUID leaf is completely
disabled.
Backports commit aec661de86894e914d2d82431d9cefa9a9a40213 from qemu
This will reuse the existing check/enforce logic in
x86_cpu_filter_features() to check the xsave component bits
against GET_SUPPORTED_CPUID.
Backports commit 96193c22ab39ea24f81e386ad7883260ff24f5fd from qemu
Instead of doing complex calculations and calling
kvm_arch_get_supported_cpuid() inside cpu_x86_cpuid(), calculate
the set of required XSAVE components earlier, at realize time.
Backports commit 2ca8a8becc2eeb5262e478ce502f5daa53f3d0bc from qemu
Move the xsave area size calculation from cpu_x86_cpuid() into
its own function. While doing it, change it to use the XSAVE area
struct sizes for the initial size, instead of the magic 0x240
number.
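A hedged sketch of the resulting helper (names per the surrounding code;
details may differ):

    static uint32_t xsave_area_size(uint64_t mask)
    {
        int i;
        /* Legacy FXSAVE region + XSAVE header: the old magic 0x240. */
        uint32_t ret = sizeof(X86LegacyXSaveArea) + sizeof(X86XSaveHeader);

        for (i = 2; i < ARRAY_SIZE(x86_ext_save_areas); i++) {
            const ExtSaveArea *esa = &x86_ext_save_areas[i];
            if ((mask >> i) & 1) {
                ret = MAX(ret, esa->offset + esa->size);
            }
        }
        return ret;
    }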
Backports commit 1fda6198e4126af9988754c8824cfc9928649890 from qemu
Instead of assigning individual bits in a loop, just copy the
values from ena_mask.
Backports commit 8057c621b1b17cbcb35fe67d1a09ada9055873a9 from qemu
Instead of checking both env->features and ena_mask at two
different places in the CPUID code, initialize ena_mask based on
the features that are enabled for the CPU, and then clear
unsupported bits based on kvm_arch_get_supported_cpuid().
The results should be exactly the same, but it will make it
easier to move the mask calculation elsewhere, and reuse
x86_cpu_filter_features() for the kvm_arch_get_supported_cpuid()
check.
Backports commit 4928cd6de6b4211a79f98c8dc39115be1e815c2b from qemu
The code that calculates the set of supported XSAVE components on
CPUID looks at ext_save_areas to find out which components should
be enabled. However, if there are zeroed entries in the
ext_save_areas array, the
((env->features[esa->feature] & esa->bits) == esa->bits)
check will always succeed and QEMU will unconditionally try to
enable the component.
Luckily this never caused any problems because the only missing
entry in ext_save_areas is the PT State component (bit 8), and
KVM currently doesn't support it (so it was cleared on ena_mask).
But the code was still incorrect and would break if KVM starts
returning CPUID[EAX=0xD,ECX=0].EAX[bit 8] as supported on
GET_SUPPORTED_CPUID.
Fix the problem by changing the code to not enable an XSAVE
component if ExtSaveArea::bits is zero.
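A hedged sketch of the guard:

    /* Skip zeroed (unimplemented) entries: without the esa->bits test,
     * (features & 0) == 0 is vacuously true and the component would be
     * enabled unconditionally. */
    if (esa->bits && (env->features[esa->feature] & esa->bits) == esa->bits) {
        ena_mask |= (1ULL << i);
    }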
Backports commit 9646f4927faf68e8690588c2fd6dc9834c440b58 from qemu
It makes it easier to guarantee the arrays are the right size,
and to find information when looking at the code.
Backports commit 2d5312da566e4424a807d078da05f92ee7be3eec from qemu
SVM needs CPUID[0x8000000A] to be available. So if SVM is enabled
in a CPU model or explicitly in the command-line, adjust CPUID
xlevel to expose the CPUID[0x8000000A] leaf.
Backports commit 0c3d7c0051576d220e6da0a8ac08f2d8482e2f0b from qemu
Instead of requiring users and management software to be aware of
required CPUID level/xlevel/xlevel2 values for each feature,
automatically increase those values when features need them.
This was already done for CPUID[7].EBX, and is now made generic
for all CPUID feature flags. Unit test included, to make sure we
don't break ABI on older machine-types and don't mess with the
CPUID level values if they are explicitly set by the user.
Backports commit c39c0edf9bb3b968ba95484465a50c7b19f4aa3a from qemu
Instead of using cpuid_level, use an empty struct as a marker
(like we already did with {start,end}_init_save). This will avoid
accidentally resetting the wrong fields if we change the field
ordering on CPUX86State.
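A sketch of the pattern (marker field name assumed):

    struct CPUX86State {
        /* ... fields that are zeroed on reset ... */
        struct {} end_reset_fields;
        /* ... configuration fields preserved across reset ... */
    };

    /* Reset no longer names a data field, just the marker: */
    memset(env, 0, offsetof(CPUX86State, end_reset_fields));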
Backports commit 5e992a8e337e710ea2d02f35668ac55a80e15f99 from qemu
No CPU model in builtin_x86_defs has xlevel2 set, so it is always
zero. Delete the field.
Note that this is not a user-visible change. It doesn't remove
the ability to set xlevel2 on the command-line, it just removes
an unused field in builtin_x86_defs.
Backports commit 0456441b5eb6694a561ad5bb8dad52483e6a08d0 from qemu
Define a new CPU definition supporting 24KEc cores, similar to
the existing 24Kc, but with added support for DSP instructions
and MIPS16e (and without FPU).
Backports commit e9deaad8a58c899dc32e9fdeff9e533070e79dca from qemu
Add the "cortex-a7" CPU with features and registers matching the Cortex-A7
MPCore Technical Reference Manual and the Cortex-A7 Floating-Point Unit
Technical Reference Manual. The A7 is very similar to the A15.
Backports commit dcf578ed8cec89543158b103940e854ebd21a8cf from qemu
This avoids a double handful of magic numbers in the
xsave and xrstor helper functions.
Backports commit 3f32bd21df655e62eb271182a5c63280d631c7b3 from qemu
TARGET_PAGE_MASK, as defined, has type "int". We need to extend
that to the proper target width before OR-ing in an "unsigned".
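A worked example of the trap (values illustrative; 4 KiB pages, 64-bit
target):

    #include <stdint.h>

    #define TARGET_PAGE_MASK (~(4096 - 1))   /* type int, value -4096 */

    int main(void)
    {
        unsigned flags = 0x1;
        /* int | unsigned is computed in 32 bits, then zero-extended: */
        uint64_t bad  = (uint64_t)(TARGET_PAGE_MASK | flags);  /* 0x00000000fffff001 */
        /* Widening the mask first preserves the sign bits: */
        uint64_t good = (int64_t)TARGET_PAGE_MASK | flags;     /* 0xfffffffffffff001 */
        return bad == good;   /* 0: they differ */
    }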
Backports commit ebb90a005da67147245cd38fb04a965a87a961b7 from qemu
This commit optimizes fence instructions. Two optimizations are
currently implemented: (1) removing unnecessary duplicate fence
instructions, and (2) merging weaker fences into a stronger fence.
[rth: Merge tcg_optimize_mb back into tcg_optimize, so that we only
loop over the opcode stream once. Merge "unrelated" weaker barriers
into one stronger barrier.]
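Conceptually, the merge works like this (all names hypothetical; not the
actual tcg/optimize.c code):

    /* Two adjacent barriers collapse into one whose ordering constraints
     * are the union of both; the stronger barrier subsumes the weaker. */
    if (prev_op_is_mb && cur_op_is_mb) {
        cur_mb_arg |= prev_mb_arg;
        remove_op(prev_op);
    }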
Backports commit 34f939218ce78163171addd63750e1e0300376ab from qemu
Generate a 'lock orl $0,0(%esp)' instruction for ordering instead of
mfence which has similar ordering semantics.
Backports commit a7d00d4effb58889ac6df64f98ac50c9d1594149 from qemu
This commit introduces the TCGOpcode for memory barrier instruction.
This opcode takes an argument which is the type of memory barrier
which should be generated.
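Front ends emit it through a generator helper; for example, a full
sequentially consistent barrier (flag names as in tcg/tcg.h):

    tcg_gen_mb(TCG_MO_ALL | TCG_BAR_SC);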
Backports commit f65e19bc2c9e8358e634d309606144ac2a3c2936 from qemu
The return address argument to the softmmu template helpers was
confused. In the legacy case, we wanted to indicate that there
is no return address, and so passed in NULL. However, we then
immediately subtracted GETPC_ADJ from NULL, resulting in a non-zero
value, indicating the presence of an (invalid) return address.
Push the GETPC_ADJ subtraction down to the only point it's required:
immediately before use within cpu_restore_state_from_tb, after all
NULL pointer checks have been completed.
This makes GETPC and GETRA identical. Remove GETRA as the lesser
used macro, replacing all uses with GETPC.
Backports commit 01ecaf438b1eb46abe23392c8ce5b7628b0c8cf5 from qemu
Previously we allowed fully unaligned operations, but not operations
that are aligned but with less alignment than the operation size.
In addition, arm32, ia64, mips, and sparc had been omitted from the
previous overalignment patch, which would have led to that alignment
being enforced.
Backports commit 85aa80813dd9f5c1f581c743e45678a3bee220f8 from qemu
In user-mode emulation, env->idt.base memory is allocated in
linux-user/main.c with size 8*512 = 4096 (for 64-bit).
When the fake interrupt EXCP_SYSCALL is thrown,
do_interrupt_user checks the destination privilege level
for this fake exception, and tries to read 4 bytes
at address base + (256 * 2^4) = 4096, which causes a
segfault.
The privilege level was checked only for int instructions, so
let's read the DPL from memory only in that case.
Make sure reset zeroes TSC_AUX, XCR0, PKRU. Move XSTATE_BV from the
"vmstate only" section to the "KVM only" section.
Backports commit 7616f1c2da1c0f336a474a56ad6d32e15ccd666e from qemu
Unused function declarations were found using a simple gcc plugin and
manually verified by grepping the sources.
Backports commit d4b84d564ee3eb7a58e4585d671fb3c220b6c3b9 from qemu
All operations that take a floatx80 as an operand need to have their
inputs checked for malformed encodings. In all of these cases, use the
function floatx80_invalid_encoding to perform the check. If an invalid
operand is found, raise an invalid operation exception, and then return
either NaN (for fp-typed results) or the integer indefinite value (the
minimum representable signed integer value, for int-typed results).
For the non-quiet comparison operations, this touches adjacent code in
order to pass style checks.
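A hedged sketch of the check pattern for an fp-typed result (wrapper
name hypothetical):

    floatx80 checked_floatx80_add(floatx80 a, floatx80 b, float_status *s)
    {
        if (floatx80_invalid_encoding(a) || floatx80_invalid_encoding(b)) {
            float_raise(float_flag_invalid, s);
            return floatx80_default_nan(s);
        }
        return floatx80_add(a, b, s);
    }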
Backports the cast correction portion of commit d1eb8f2acba579830cf3798c3c15ce51be852c56 from qemu
Use the __atomic_*_n() primitives which take the value as argument. It
is not necessary to store the value locally before calling the
primitive, hence saving us a stack store and load.
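The difference in a nutshell (memory order illustrative):

    /* Before: materialize the value, pass its address. */
    typeof(*ptr) tmp = val;
    __atomic_store(ptr, &tmp, __ATOMIC_SEQ_CST);

    /* After: the _n variant takes the value directly. */
    __atomic_store_n(ptr, val, __ATOMIC_SEQ_CST);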
Backports commit 89943de17c4e276f2c47f05b4604e8816a6a636c from qemu
For module build, .mo objects are passed to LINK and consumed in
process-archive-undefs. The reason behind that is documented in the
comment above process-archive-undefs.
Similarly, extract-libs should be called with .mo filtered out too.
Otherwise, the .mo-libs are added to the link command incorrectly,
spoiling the purpose of modularization.
Currently we don't have any .mo-libs usage, but it will be used soon
when we modularize more multi-source objects, like sdl and gtk.
Backports commit 5b1b6dbd94e2e2e98920f886cb32fcf4a1520b50 from qemu
In fact, this function does not exactly perform a lookup by physical
address, as described in the comment on get_page_addr_code(). Thus
it may be a bit confusing to have "physical" in its name. So rename it
to tb_htable_lookup() to better reflect its actual functionality.
Backports commit b34de45fc40d01c14b31d3a682e284180a2ed8c5 from qemu
These functions are not too big and can be merged together. This makes
the locking scheme clearer and easier to follow.
Backports commit bd2710d5da06ad7706d4864f65b3f0c9f7cb4d7f from qemu
Lock contention in the hot path of moving between existing patched
TranslationBlocks is the main drag in multithreaded performance. This
patch pushes the tb_lock() usage down to the two places that really need
it:
- code generation (tb_gen_code)
- jump patching (tb_add_jump)
The rest of the code doesn't really need to hold a lock as it is either
using per-CPU structures, atomically updated or designed to be used in
concurrent read situations (qht_lookup).
To keep things simple I removed the #ifdef CONFIG_USER_ONLY stuff as the
locks become NOPs anyway until the MTTCG work is completed.
Backports commit 518615c6503ad78d3bb67ddf1cd848c4a41de02e from qemu
This ensures that if we find the TB on the slow path that tb->page_addr
is correctly set before being tested.
Backports commit 2e1ae44a4f4a6149fbb9dc812243522f07284700 from qemu
When invalidating a translation block, set an invalid flag into the
TranslationBlock structure first. It is also necessary to check whether
the target TB is still valid after acquiring 'tb_lock' but before calling
tb_add_jump() since TB lookup is to be performed out of 'tb_lock' in
future. Note that we don't have to check 'last_tb'; an already invalidated
TB will not be executed anyway and it is thus safe to patch it.
Backports commit 6d21e4208f382dd8ca1f7995a6dd9ea7ca281163 from qemu
Ensure atomicity and ordering of CPU's 'tb_flushed' access for future
translation block lookup out of 'tb_lock'.
This field can only be touched from another thread by tb_flush() in user
mode emulation. So the only accesses that need to be sequentially
atomic are:
* the single write in tb_flush();
* reads/writes done outside of 'tb_lock'.
In future, before enabling MTTCG in system mode, tb_flush() must be safe
and this field becomes unnecessary.
Backports commit 118b07308a8cedc16ef63d7ab243a95f1701db40 from qemu
Ensure atomicity of CPU's 'tb_jmp_cache' access for future translation
block lookup out of 'tb_lock'.
Note that this patch does *not* make CPU's TLB invalidation safe if it
is done from some other thread while the CPU is in its execution loop.
Backports commit 89a16b1e4294e3664667a151c2f70c84dfac6fd9 from qemu
This is a small clean up. tb_find_fast() is a final consumer of this
variable so no need to pass it by reference. 'last_tb' is always updated
by subsequent cpu_loop_exec_tb() in cpu_exec().
This change also simplifies calling cpu_exec_nocache() in
cpu_handle_exception().
Backports commit 4b7e69509df2fcbfdab8c62c294dbfcfdab8a6e1 from qemu
val is assigned twice; the second one should be combined with "|".
Reported by Coverity.
Backports commit 5ce747cfac697f61668ab4fa4a71c1dba15cc272 from qemu
There is no need to make sure that the memory is zeroed after the
allocation if we also immediately fill the whole buffer afterwards
with memcpy(). Thus g_new0 should be g_new instead. But since we
are also doing a memcpy() here, we can simply replace both
with g_memdup() instead.
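The shape of the change (hypothetical names):

    /* Before: zero-fill, then overwrite every byte anyway. */
    dst = g_new0(Foo, n);
    memcpy(dst, src, n * sizeof(Foo));

    /* After: allocate and copy in one call. */
    dst = g_memdup(src, n * sizeof(Foo));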
Backports commit a337f295defad7eb977da4d6317cf70f7f2fa4b4 from qemu
QEMU's code relies on left shifts of signed integers always
being defined behaviour with the obvious 2s-complement
semantics. The only way to tell the compiler (and any
associated undefined-behaviour sanitizer) that we require a
C dialect with these semantics is to use the -fwrapv option.
This is a bit of a heavy hammer for the job as it also gives
us guaranteed semantics on integer arithmetic overflow which
in theory we don't require.
In an ideal world this would allow us to drop the warning
flag -Wno-shift-negative-value, but we must retain this to
avoid spurious warnings on clang versions predating the
fix to https://llvm.org/bugs/show_bug.cgi?id=25552.
Backports commit 2d31515bc0880a1cea86ce638d2a109f4f4e6f7d from qemu
Some software algorithms are based on the hardware's cache info. For
example, in the x86 Linux kernel, when cpu1 wants to wake up a task on
cpu2, cpu1 will trigger a resched IPI and tell cpu2 to do the wakeup if
they don't share a low-level cache. Conversely, cpu1 will access cpu2's
runqueue directly if they share the LLC. The relevant linux-kernel code
is below:
    static void ttwu_queue(struct task_struct *p, int cpu)
    {
        struct rq *rq = cpu_rq(cpu);
        ......
        if (... && !cpus_share_cache(smp_processor_id(), cpu)) {
            ......
            ttwu_queue_remote(p, cpu); /* will trigger RES IPI */
            return;
        }
        ......
        ttwu_do_activate(rq, p, 0); /* access target's rq directly */
        ......
    }
In real hardware, the cpus on the same socket share an L3 cache, so one
won't trigger resched IPIs when waking up a task on another. But QEMU
doesn't present virtual L3 cache info for the VM, so the Linux guest
will trigger lots of RES IPIs under some workloads, even if the virtual
cpus belong to the same virtual socket. For KVM, there will be lots of
vmexits due to the guest sending IPIs.
The workload is a SAP HANA test suite; we ran it for one round (about
40 minutes) and observed the (SUSE 11 SP3) guest's RES IPI counts
during that period:
             No-L3       With-L3 (applied this patch)
    cpu0:    363890      44582
    cpu1:    373405      43109
    cpu2:    340783      43797
    cpu3:    333854      43409
    cpu4:    327170      40038
    cpu5:    325491      39922
    cpu6:    319129      42391
    cpu7:    306480      41035
    cpu8:    161139      32188
    cpu9:    164649      31024
    cpu10:   149823      30398
    cpu11:   149823      32455
    cpu12:   164830      35143
    cpu13:   172269      35805
    cpu14:   179979      33898
    cpu15:   194505      32754
    avg:     268963.6    40129.8
The VM's topology is "1*socket 8*cores 2*threads".
After presenting virtual L3 cache info to the VM, the number of RES
IPIs in the guest drops by 85%.
For KVM, IPIs sent by vcpus cause vmexits, which are expensive and can
cause severe performance degradation. We also tested overall system
performance with the vcpus actually running on separate physical
sockets: with L3 cache info, performance improves by 7.2%~33.1%
(avg: 15.7%).
If an alignment fault occurred and the target EL is using AArch32,
then the DFSR/IFSR LPAE bit [9] must be set correctly.
Backports commit e0fe723c24562c8f909bb40f131bfdbe75650677 from qemu
With a vfio assigned device we lay down a base MemoryRegion registered
as an IO region, giving us read & write accessors. If the region
supports mmap, we lay down a higher priority sub-region MemoryRegion
on top of the base layer initialized as a RAM device pointer to the
mmap. Finally, if we have any quirks for the device (ie. address
ranges that need additional virtualization support), we put another IO
sub-region on top of the mmap MemoryRegion. When this is flattened,
we now potentially have sub-page mmap MemoryRegions exposed which
cannot be directly mapped through KVM.
This is as expected, but a subtle detail of this is that we end up
with two different access mechanisms through QEMU. If we disable the
mmap MemoryRegion, we make use of the IO MemoryRegion and service
accesses using pread and pwrite to the vfio device file descriptor.
If the mmap MemoryRegion is enabled and results in one of these
sub-page gaps, QEMU handles the access as RAM, using memcpy to the
mmap. Using either pread/pwrite or the mmap directly should be
correct, but using memcpy causes us problems. I expect that not only
does memcpy not necessarily honor the original width and alignment in
performing a copy, but it potentially also uses processor instructions
not intended for MMIO spaces. It turns out that this has been a
problem for Realtek NIC assignment, which has such a quirk that
creates a sub-page mmap MemoryRegion access.
To resolve this, we disable memory_access_is_direct() for ram_device
regions since QEMU assumes that it can use memcpy for those regions.
Instead we access through MemoryRegionOps, which replaces the memcpy
with simple de-references of standard sizes to the host memory.
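A hedged sketch of such an accessor (simplified from the actual
ram-device ops):

    static uint64_t ram_device_mem_read(void *opaque, hwaddr addr,
                                        unsigned size)
    {
        MemoryRegion *mr = opaque;
        uint8_t *p = (uint8_t *)memory_region_get_ram_ptr(mr) + addr;

        switch (size) {
        case 1: return *(uint8_t *)p;
        case 2: return *(uint16_t *)p;
        case 4: return *(uint32_t *)p;
        case 8: return *(uint64_t *)p;
        }
        return 0;   /* unreachable for valid sizes */
    }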
With this patch we attempt to provide unrestricted access to the RAM
device, allowing byte through qword access as well as unaligned
access. The assumption here is that accesses initiated by the VM are
driven by a device specific driver, which knows the device
capabilities. If unaligned accesses are not supported by the device,
we don't want them to work in a VM by performing multiple aligned
accesses to compose the unaligned access. A down-side of this
philosophy is that the xp command from the monitor attempts to use
the largest available access width, unaware of the underlying
device. Using memcpy had this same restriction, but at least now an
operator can dump individual registers, even if blocks of device
memory may result in access widths beyond the capabilities of a
given device (RTL NICs only support up to dword).
Backports commit 1b16ded6a512809f99c133a97f19026fe612b2de from qemu
Setting skip_dump on a MemoryRegion allows us to modify one specific
code path, but the restriction we're trying to address encompasses
more than that. If we have a RAM MemoryRegion backed by a physical
device, it not only restricts our ability to dump that region, but
also affects how we should manipulate it. Here we recognize that
MemoryRegions do not change to sometimes allow dumps and other times
not, so we replace setting the skip_dump flag with a new initializer
so that we know exactly the type of region to which we're applying
this behavior.
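Callers then use the new initializer instead of a plain RAM-pointer
region plus the flag; a sketch (argument names illustrative):

    memory_region_init_ram_device_ptr(mr, owner, "vfio-mmap", size, mmap_ptr);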
Backports commit ca83f87a66d19fdaabf23d4f5ebb49396fe232c1 from qemu
Rather than rely on recursion during the middle of register allocation,
lower indirect registers to loads and stores off the indirect base into
plain temps.
For an x86_64 host, with sufficient registers, this results in identical
code, modulo the actual register assignments.
For an i686 host, with insufficient registers, this means that temps can
be (temporarily) spilled to the stack in order to satisfy an allocation.
This is in contrast to the previous situation, where a spill could
fail because performing the spill itself required allocating a
register for the indirect base.
Backports commit 5a18407f55ade924aa6397c9a043a9ffd59645fe from qemu