tcg_out_ldst: using a generic ALIAS_PADD to avoid ifdefs
tcg_out_ld: generates LD or LW
tcg_out_st: generates SD or SW
Backports commit 32b69707df3365aadaad1d058044a7704397ec62 from qemu
tcg_out_mov: using OPC_OR as most mips assemblers do;
tcg_out_movi: extended to 64-bit immediate.
Backports commit 2294d05dab503d11664e73712c7f250fd0bf9e3b from qemu
Without the mips32r2 instructions to perform swapping, bswap is quite large,
dominating the size of each reverse-endian qemu_ld/qemu_st operation.
Create two subroutines in the prologue block. The subroutines require extra
reserved registers (TCG_TMP[2, 3]). Using these within qemu_ld means that
we need not place additional restrictions on the qemu_ld outputs.
Backports commit 7f54eaa3b78d71cb57e45a719980f9b5ff06d21c from qemu
Bulk patch adding 64-bit opcodes into tcg_out_op. Note that
mips64 is as yet neither complete nor enabled.
Backports commit 0119b1927d531f3fac22b9b4da01dafc23644973 from qemu
Since the mips manual tables are in octal, reorg all of the opcodes
into that format for clarity. Note that the 64-bit opcodes are as
yet unused.
Backports commit 57a701fc2b34902310d4dbd1411088055616938a from qemu
Without the mips32r2 instructions to perform swapping, bswap is quite large,
dominating the size of each reverse-endian qemu_ld/qemu_st operation.
Create a subroutine in the prologue block. The subroutine requires extra
reserved registers (TCG_TMP[2, 3]). Using these within qemu_ld means that
we need not place additional restrictions on the qemu_ld outputs.
Backports commit bb08afe9f0aee1a3f5c23508e2511b882ca31e1b from qemu
Also manage word and byte operands and fix the computation of
overflow in the case of M68000 arithmetic shifts.
Backports commit 367790cce8e14131426f5190dfd7d1bdbf656e4d from qemu
Report this properly via exception and, importantly, allow
the disassembler the chance to tell us what insn is not handled.
Backports commit 72d2e4b6a437f11f97d3138f6b2ec177b78210c7 from qemu
680x0 movem can load/store words and long words and can use more
addressing modes. Coldfire can only use long words with (Ax) and
(d16,Ax) addressing modes.
Backports commit 7b542eb96d7d5d9266a9c0425f05d49c8e6df2f9 from qemu
Implement CAS using cmpxchg.
Implement CAS2 using a helper: either with cmpxchg when
the 32-bit addresses are consecutive, or with
parallel_cpus + cpu_loop_exit_atomic() otherwise.
Backports commit 14f944063affbcc7bd6df42b060793dbfee8a822 from qemu
Update helper to set the throwing location in case of div-by-0.
Cleanup divX.w and add quad word variants of divX.l.
Backports commit 0ccb9c1d8128a020720d5c6abf99a470742a1b94 from qemu
Provide gen_lea_mode and gen_ea_mode, where the mode can be
specified manually, rather than taken from the instruction.
Backports commit f84aab269ddab8509b77408b886e9071bf5c48fb from qemu
ARM1176 CPUs have TrustZone support and can use the Vector Base
Address Register, but currently, qemu only adds VBAR support to ARMv7
CPUs. Fix this by adding a new feature, ARM_FEATURE_VBAR, which can be used
for ARMv7 and ARM1176 CPUs.
The VBAR feature is always set for ARMv7 because some legacy boards
require it even if this is not architecturally correct.
Backports commit 91db4642f868cf2e591b62d31a19d35b02ea791e from qemu
We already log exception entry; add logging of the AArch64 exception
return path as well.
Backports commit c9b61d9aa1ad234b0961f8add023cdc999cda3da from qemu
The value of the MVFR1 (Media and VFP Feature Register 1) register for
the Cortex-A8 appears to be incorrect (according to the TRM, DDI0344K),
with the "full denormal arithmetic" and "propagation of NaN" fields
holding both 0 instead of both 1.
I had a go tracing the history of the use of this value, and it seems
it's always just been wrong in QEMU: maybe it was derived from early
documentation, or guessed based on the use of a "VFP Lite" implementation
in the Cortex-A8.
Depending on the startup/early-boot code in use, this can manifest as
failure to perform denormal arithmetic properly: in our case, selecting
a Cortex-A8 CPU when using QEMU as an instruction-set simulator for
bare-metal GCC testing caused tests using denormal arithmetic to
fail. Problems might be masked (or not occur) when using a full OS kernel
with suitable trap handlers (I'm not sure).
Backports commit 0f1944735b6bac810b067e8a7a5154744536fd59 from qemu
We can't use LOAD AND TEST for unsigned data and then expect to
extract the result with ADD LOGICAL WITH CARRY. Fall through to
using COMPARE LOGICAL IMMEDIATE instead.
Backports commit 65839b56b9a740e6b898b5d81afc160502bd2935 from qemu
The new paging mode is an extension of IA32e mode with one additional page
table level.
It brings support for a 57-bit virtual address space (128 PB) and a 52-bit
physical address space (4 PB).
The structure of the new page table level is identical to pml4.
The feature is enumerated with CPUID.(EAX=07H, ECX=0):ECX[bit 16].
CR4.LA57[bit 12] needs to be set when paging is enabled to activate 5-level
paging mode.
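For reference, a quick sanity check of the quoted numbers (the 9-bits-per-level breakdown is standard x86 paging knowledge, not something stated in the commit):
    /* 5 levels x 9 bits per level + 12-bit page offset = 57 bits of VA. */
    #include <stdio.h>
    int main(void)
    {
        int va_bits = 5 * 9 + 12;                               /* 57 */
        printf("virtual  space: %d PB\n", 1 << (va_bits - 50)); /* 128 PB */
        printf("physical space: %d PB\n", 1 << (52 - 50));      /* 4 PB */
        return 0;
    }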
Backports commit 6c7c3c21f95dd9af8a0691c0dd29b07247984122 from qemu
The syscall and sysret instructions behave a bit differently:
TF is checked after the instruction completes.
This allows the o/s to disable #DB at a syscall by adding TF to FMASK.
And then when the sysret is executed the #DB is taken "as if" the
syscall insn just completed.
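As a rough, self-contained model of the behaviour just described (all names below are invented for illustration; this is not QEMU code), TF is saved and masked away on syscall, and the pending #DB is only considered once sysret completes:
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #define TF_MASK 0x100u
    typedef struct { uint64_t rflags, r11, fmask; } ToyCPU;
    static void toy_syscall(ToyCPU *cpu)
    {
        cpu->r11 = cpu->rflags;       /* flags (including TF) saved in R11 */
        cpu->rflags &= ~cpu->fmask;   /* OS put TF in FMASK: no #DB in the kernel */
    }
    static bool toy_sysret(ToyCPU *cpu)
    {
        cpu->rflags = cpu->r11;       /* TF comes back with the restored flags */
        return cpu->rflags & TF_MASK; /* #DB taken now, "as if" the syscall
                                         insn had just completed */
    }
    int main(void)
    {
        ToyCPU cpu = { .rflags = TF_MASK, .fmask = TF_MASK };
        toy_syscall(&cpu);
        printf("#DB after sysret: %s\n", toy_sysret(&cpu) ? "yes" : "no");
        return 0;
    }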
Backports commit c52ab08aee6f7d4717fc6b517174043126bd302f from qemu
Device models often have to perform multiple accesses to a single
memory region that is known in advance, but would like to use "DMA-style"
functions instead of address_space_map/unmap. This can happen
for example when the data has to undergo endianness conversion.
Introduce a new data structure to cache the result of
address_space_translate without forcing usage of a host address
like address_space_map does.
Backports commit 1f4e496e1fc2eb6c8bf377a0f9695930c380bfd3 from qemu
This extracts the common part of address_space_map and
address_space_cache_init into a new function.
Backports commit 715c31ec8e12107f47ac74b464c97e813c76f898 from qemu
Templatize the address_space_* and *_phys functions, so that we can add
similar functions in the next patch that work with a lightweight,
cache-like version of address_space_map/unmap.
Backports commit 0ce265ffef87f19f4dd1ff0663e09a63d66ae408 from qemu
Since CPUARMState.vfp.regs is not 16 byte aligned, the ^ 8 fixup used
for a big-endian host doesn't do what's intended. Fix this by adding
in the vfp.regs offset after computing the inter-register offset.
Backports commit d437262fa8edd0d9fbe038a515dda3dbf7c5bb54 from qemu
We add s->be_data within do_vec_ld/st. Adding it here means that
we have the wrong bits set in SIZE for a big-endian host, leading
to g_assert_not_reached in write_vec_element and read_vec_element.
Backports commit 74b13f92c2428abae41a61c46a5cf47545da5fcb from qemu
Commit 2afbdf8 ("target-i386: exception handling for memory helpers",
2015-09-15) changed tlb_fill's cpu_restore_state+raise_exception_err
to raise_exception_err_ra. After this change, the cpu_restore_state
and raise_exception_err's cpu_loop_exit are merged into
raise_exception_err_ra's cpu_loop_exit_restore.
This actually fixed some bugs, but when SVM is enabled there is a
second path from raise_exception_err_ra to cpu_loop_exit. This is
the VMEXIT path, and now cpu_vmexit is called without a
cpu_restore_state before.
The fix is to pass the retaddr to cpu_vmexit (via
cpu_svm_check_intercept_param). All helpers can now use GETPC() to pass
the correct retaddr, too.
Backports commit 823fb688ebc52a7d79c1308acb28c92b56820167 from qemu
When icount is active, tb_add_jump is surprisingly called with an
out of bounds basic block index. I have no idea how that can work,
but it does not seem like a good idea. Clear *last_tb for all
TB_EXIT_ICOUNT_EXPIRED cases, even when all you have to do is
refill icount_extra.
Backports commit d8dea6fbcbed177ca5d23ab77b3834a9437f0e88 from qemu
There were some patterns, like 0x0000_ffff_ffff_00ff, for which we
would select to begin a multi-insn sequence with MOVN, but would
fail to set the 0x0000 lane back from 0xffff.
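As a hypothetical illustration of the rule being fixed: after a MOVN every 16-bit lane holds 0xffff, so any lane of the target constant that is not 0xffff still needs a MOVK, including lanes whose value happens to be 0x0000:
    #include <stdint.h>
    #include <stdio.h>
    int main(void)
    {
        uint64_t value = 0x0000ffffffff00ffull;
        for (int lane = 0; lane < 4; lane++) {
            uint16_t part = value >> (lane * 16);
            if (part != 0xffff) {            /* lanes 0 and 3 for this pattern */
                printf("MOVK lane %d <- 0x%04x\n", lane, part);
            }
        }
        return 0;
    }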
Backports commit 50b468d42107a2c646b1c566ed17d9ec362c51c4 from qemu
When al == xzr, we cannot use addi/subi because that encodes xsp.
Force a zero into the temp register for that (rare) case.
Backports commit 028fbea47713f909d6ea761a457779a82b276247 from qemu
rcu_read_unlock was not called if the address_space_access_valid result is
negative.
This caused (at least) a problem when qemu on PPC/E500+TAP failed to terminate
properly and instead got stuck in a deadlock.
Backports commit 662a97d74f9b34cafe9aeb6d96620a97d768a1fa from qemu
A bug (1647683) was reported showing a crash when removing
breakpoints. The reproducer was bisected to 3359baad when tb_flush
was finally made thread safe. While in MTTCG the locking in
breakpoint_invalidate would have prevented any problems,
tb_lock() is currently a NOP for system emulation.
The race is between a tb_flush from the gdbstub and the
tb_invalidate_phys_addr() in breakpoint_invalidate().
Ideally we'd have actual locking here; for the moment the
simple fix is to do a full tb_flush() for a bp invalidate,
since that is thread-safe even if no lock is taken.
Backports commit a9353fe897ca2687e5b3385ed39e3db3927a90e0 from qemu
While testing rth's latest TCG patches with risu I found ldaxp was
broken. Investigating further I found it was broken by 1dd089d0 when
the cmpxchg atomic work was merged. As part of that change the code
attempted to be clever by doing a single 64-bit load and then shuffling
the data around to set the two 32-bit registers.
As I couldn't quite follow the endian magic I've simply partially
reverted the change to the original gen_load_exclusive code. This
doesn't affect the cmpxchg functionality as that is all done in the
gen_store_exclusive part, which is untouched.
I've also restored the comment that was removed (with a slight tweak
to mention cmpxchg).
Backports commit 5460da501a57cd72eda6fec736d76539122e2f99 from qemu
The documentation parser we are going to add expects a section name to
end with ':', otherwise the comment is treated as free-form text body.
Backports commit 5072f7b38b1b9b26b8fbe1a89086386a420aded8 from qemu
Fix issues in the MIPSDSP64 instructions dextp and dextpdp:
the shift amount can go outside the 32-bit range.
https://bugs.launchpad.net/qemu/+bug/1631625
Backports commit e6e2784cacd4cfec149a7690976b9ff15e541c4d from qemu
An FPU exception needs to be emitted when Loongson multimedia instructions
execute while Status:CU1 is clear, or FPR changes may be missed
on Linux.
Backports commit b5a587b613f6151c2ce164552579ae64f2ddfd1c from qemu
"The multiplier and multiplicand are both word operands, and the result
is a long-word operand."
So compute flags on a long-word result, not on a word result.
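A minimal sketch of the point (illustrative C, not the actual helper): the two 16-bit operands produce a 32-bit product, so condition codes such as N and Z must come from that 32-bit value rather than from its low word:
    #include <stdint.h>
    /* Word x word -> long-word multiply; flags derive from the 32-bit result. */
    int32_t muls_word(int16_t a, int16_t b, int *neg, int *zero)
    {
        int32_t result = (int32_t)a * (int32_t)b;
        *neg  = result < 0;    /* N from the long-word result */
        *zero = result == 0;   /* Z from the long-word result */
        return result;
    }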
Backports commit 4a18cd44f3c905d443c26e26bb9b09932606d1a3 from qemu
"The size of the operation can be specified as word or long.
Word length source operands are sign-extended to 32 bits for
comparison."
So comparison is always done using OS_LONG.
Backports commit 5436c29d78957a6825a93f0eb79dfab388641017 from qemu
In the user emulation code path, tlb_vaddr_to_host erroneously passed
vaddr as the guest address to be translated, instead of addr, the parameter
which actually contained the guest address.
This resulted in incorrect addresses being used when emulating block copy
(mvc/mvpg) and block clear (xc) instructions for the s390x target.
Backports commit c2a85316902e67530da9d6548139fcce73c0cac6 from qemu
Include sys/user.h for declaration of 'struct kinfo_proc'.
Add -lutil to qemu-ga link for kinfo_getproc.
Backports commit a7764f1548ef9946af30a8f96be9cef10761f0c1 from qemu
The spec can be found in Intel Software Developer Manual or in
Instruction Set Extensions Programming Reference.
Backports commit 95ea69fb46266aaa46d0c8b7f0ba8c4903dbe4e3 from qemu
The memory_dispatch field is meant to be protected by RCU so we should
use the correct primitives when accessing it. This race was flagged up
by the ThreadSanitizer.
Backports commit f35e44e7645edbb08e35b111c10c2fc57e2905c7 from qemu
The version of tcg_gen_ld8s_i64 for 32-bit systems does a load into
the low part of the return value - then attempts a sign extension into
the high part, but wrongly sets the high part to a sign extension of
itself rather than of the low part. This results in TCG internal
errors from the use of the uninitialized high part (in some GCC tests
of AArch64 NEON shift intrinsics, in particular). This patch corrects
the sign-extension logic, making it match other functions such as
tcg_gen_ld16s_i64.
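The underlying arithmetic, sketched in plain C outside of TCG (names invented): on a 32-bit host the 64-bit destination is a low/high register pair, and the high half must be a sign extension of the freshly loaded low half, not of its own stale contents:
    #include <stdint.h>
    uint64_t ld8s_to_i64(int8_t byte)
    {
        int32_t low  = byte;       /* load, sign-extended into the low half */
        int32_t high = low >> 31;  /* correct: derive the high half from low */
        /* the bug was the equivalent of "high = high >> 31", sign-extending
           the uninitialized high half into itself */
        return ((uint64_t)(uint32_t)high << 32) | (uint32_t)low;
    }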
Backports commit 3ff91d7e85176f8b4b131163d7fd801757a2c949 from qemu
The typedefs we use for the TCGv_i32, TCGv_i64 and TCGv_ptr
types are somewhat confusing, because we define them as
pointers to structs, but the structs themselves are never
defined. Explain in the comments a bit more clearly why
this is OK and what is going on under the hood.
Backports commit a40d4701bc9f6e6a3bbfb7b4fbe756a5b72b5df1 from qemu
This multiply has one signed input and one unsigned input,
producing the full double-width result.
Backports commit 5087abfb7dfd1d368ae6939420057036b4d8e509 from qemu
The cpu is allowed to require stricter alignment on these 8- and 16-byte
operations, and the OS is required to fix up the accesses as necessary,
so the previous code was not wrong.
However, we can easily handle this misalignment for all direct 8-byte
operations and for direct 16-byte loads.
We must retain 16-byte alignment for 16-byte stores, so that we don't have
to probe for writability of a second page before performing the first of
two 8-byte stores. We also retain 8-byte alignment for no-fault loads,
since they are rare and it's not worth extending the helpers for this.
Backports commit cb21b4da6cca1bb4e3f5fefb698fb9e4d00c8f66 from qemu
At the same time, fix a problem with stqf_asi, when
a write might access two pages.
Backports commit f939ffe5a022a8798824e2720ed5a14186fca6b6 from qemu
Now that we never call out to helpers when direct accesses can
handle an asi, remove the corresponding code in those helpers.
For ldda, this removes the entire helper.
Backports commit 918d9a2c9d36378a3cf6636018900a4731c83b9d from qemu
As used by HelenOS, presumably for ultra 2 and 3,
prior to the sun4v platform and the current twinx names.
Backports commit 34a6e13da70b2c798630a8dbd03d09f201c0198f from qemu
It's handy to have a mmu idx for physical addresses, so
that mmu disabled and physical access asis can use the
same path as normal accesses.
Backports commit af7a06bac7d3abb2da48ef3277d2a415772d2ae8 from qemu
Several helpers call helper_raise_exception directly, which requires
in turn that their callers have performed save_state. The new function
allows a TCG return address to be passed in so that we can restore
PC + NPC + flags data from that.
This fixes a bug in the usage of helper_check_align, whose callers had
not been calling save_state. It fixes another bug in which the divide
helpers used GETPC at a level other than the direct callee from TCG.
This allows the translator to avoid save_state prior to SAVE, RESTORE,
and FLUSHW instructions.
Backports commit 2f9d35fc4006122bad33f9ae3e2e51d2263e98ee from qemu
In the linux-user case all things that involve 'l1_map' and PageDesc
tweaks are protected by the memory lock (mmap_lock). For SoftMMU mode
we previously relied on single threaded behaviour, with MTTCG we now use
the tb_lock().
As a result we need to do a little re-factoring and push the taking of
this lock up the call tree. This requires a slightly different entry for
the SoftMMU and user-mode cases from tb_invalidate_phys_range.
This also means user-mode breakpoint insertion needs to take two locks
but it hadn't taken any previously so this is an improvement.
Backports commit ba051fb5e56d5ff5e4fa672d37954452e58543b2 from qemu
softmmu requires more functions to be thread-safe, because translation
blocks can be invalidated from e.g. notdirty callbacks. Probably the
same holds for user-mode emulation; it's just that no one has ever
tried to produce coherent locking there.
This patch will guide the introduction of more tb_lock and tb_unlock
calls for system emulation.
Note that after this patch some (most) of the mentioned functions are
still called outside tb_lock/tb_unlock. The next one will rectify this.
Backports commit 7d7500d99895f888f97397ef32bb536bb0df3b74 from qemu
This adds asserts to check the locking on the various translation
engines structures. There are two sets of structures that are protected
by locks.
The first is the l1_map and PageDesc structures used to track which
translation blocks are associated with which physical addresses. In
user-mode this is covered by the mmap_lock.
The second is the TB context related structures, which are protected by
tb_lock, which is also user-mode only.
Currently the asserts do nothing in SoftMMU mode but this will change
for MTTCG.
Backports commit 301e40ed8005306c009978be295ed9a4b725178b from qemu
Make the debug define consistent with the others. The flush operation is
all about invalidating TranslationBlocks on flush events.
Also fix up the commenting on the other DEBUG for the benefit of
checkpatch.
Backports commit 955939a2b51f72bea1c200b559ea39985df5a633 from qemu
Some files contain multiple #includes of the same header file.
Removed most of those unnecessary duplicate entries using
scripts/clean-includes.
Backports commit 814bb12a561d36aeb5ae4440ad43d2b0761d76da from qemu
This patch adds a pmu=[on/off] option to enable/disable vPMU support
in guest vCPU. It allows virt tools, such as libvirt, to determine the
existence of vPMU and configure it. Note this option is only available
for cortex-a57/cortex-a53 host CPUs, but unavailable on ARMv7 and other
processors. Also even though "pmu=" option is available for TCG mode,
setting it doesn't turn PMU on.
Backports commit 929e754d5a621cd53f30e69b766ccf381b58d124 from qemu
Stop specializing on TARGET_LONG_BITS == 32; unconditionally allocate
a temp and expand with tcg_gen_extu_i32_tl. Split out gen_aa32_addr,
gen_aa32_frob64, gen_aa32_ld_i32 and gen_aa32_st_i32 as separate interfaces.
Backports commit 7f5616f53896a4e08ad37de3ac50d3a4cc8eff7a from qemu
The diff here is uglier than necessary. All this does is to turn
    FOO
into:
    if (s->prefix & PREFIX_LOCK) {
        BAR
    } else {
        FOO
    }
where FOO is the original implementation of an unlocked cmpxchg.
Backports commit ae03f8de45427042ecd10b0941a005f21ecc064c from qemu
Allow qemu to build on 32-bit hosts without 64-bit atomic ops.
Even if we only allow 32-bit hosts to multi-thread emulate 32-bit
guests, we still need some way to handle the 32-bit guest using a
64-bit atomic operation. Do so by dropping back to single-step.
Backports commit df79b996a7b21c6ea7847f7927a2e1a294b86c72 from qemu
Force the use of cmpxchg16b on x86_64.
Wikipedia suggests that only very old AMD64 (circa 2004) did not have
this instruction. Further, it's required by Windows 8 so no new cpus
will ever omit it.
If we truly care about these, then we could check this at startup time
and then avoid executing paths that use it.
Backports commit 7ebee43ee3e2fcd7b5063058b7ef74bc43216733 from qemu
Add all of cmpxchg, op_fetch, fetch_op, and xchg.
Handle both endian-ness, and sizes up to 8.
Handle expanding non-atomically, when emulating in serial.
Backports commit c482cb117cc418115ca9c6d21a7a2315414c0a40 from qemu
We already include exec/address-spaces.h and exec/memory.h in
cputlb.c; the include of qemu/timer.h appears to be a fossil.
Backports commit 40978428853e2f7b4597ab2a9ffeb187333802dc from qemu
TGT_LE and TGT_BE are not size dependent and do not need to be
redefined. The others are no longer used at all.
Backports commit c86c6e4c80fee4d9423bedb10ba9e9c4aa68f861 from qemu
Probe for whether the specified guest write access is permitted.
If it is not permitted then an exception will be taken in the same
way as if this were a real write access (and we will not return).
Otherwise the function will return, and there will be a valid
entry in the TLB for this access.
Backports commit 3b4afc9e75ab1a95f33e41f462921093f8a109c4 from qemu
When we cannot emulate an atomic operation within a parallel
context, this exception allows us to stop the world and try
again in a serial context.
Backports commit fdbc2b5722f6092e47181a947c90fd4bdcc1c121 from qemu
Also backports parts of commit 02d57ea115b7669f588371c86484a2e8ebc369be
Allows Int128 to be used more generally, rather than having to
begin with 64-bit inputs and accumulate.
Backports commit 1edaeee0955fba7d834b7c8f4e372e7eae030745 from qemu
While the check against sizeof(void *) is appropriate for
normal usage within qemu, there are places in which we want
wider operations and have checked for their existence.
Backports commit 84bca3927b36fb1d9a2ca85cbbdf9023d2b84678 from qemu
Making these function-like rather than object-like macros will
prevent later problems with complex macro expansion.
Backports commit d1a9f2d12fcfc942924956fbe321aedf4226ccb7 from qemu
Separate all ccr bits. Continue to batch updates via cc_op.
Fix gen_logic_cc() to really extend the size of the result.
Fix gen_get_ccr(): update cc_op as it is used by the helper.
Factorize the flags computation and the src/ccr cleanup.
Backports commit 620c6cf66584bfbee90db84a7e87a6eabf230ca9 from qemu
Update cc_op directly from tcg_gen_insn_start() and
restore_state_to_opc()
Copied from target-i386
Backports commit 20a8856eba0980fbe9d2b8ed2b33ecdb9c9fe5ad from qemu
Read an 8-, 16- or 32-bit immediate constant.
An immediate constant is stored in the instruction opcode and
can be in one or two extension words.
Backports commit 28b68cd79ef01e8b1f5bd26718cd8c09a12c625f from qemu
Scaled index is not supported by 68000, 68008, and 68010.
EA = (bd + PC) + Xn.SIZE*SCALE + od
Ignore it:
M68000 FAMILY PROGRAMMER’S REFERENCE MANUAL
2.4 BRIEF EXTENSION WORD FORMAT COMPATIBILITY
"If the MC68000 were to execute an instruction that
encoded a scaling factor, the scaling factor would be
ignored and would not access the desired memory address.
The earlier microprocessors do not recognize the brief
extension word formats implemented by newer processors.
Although they can detect illegal instructions, they do not
decode invalid encodings of the brief extension word formats
as exceptions."
Backports commit d8633620a112296fcf6a6ae9a1cbba614c0ca502 from qemu
The QmpOutputVisitor has no direct dependency on QMP. It is
valid to use it anywhere that one wants a QObject. Rename it
to better reflect its functionality as a generic QAPI
to QObject converter.
The commit before the previous one renamed the files; this one renames the C
identifiers.
Backports commit 7d5e199ade76c53ec316ab6779800581bb47c50a from qemu
The QmpInputVisitor has no direct dependency on QMP. It is
valid to use it anywhere that one has a QObject. Rename it
to better reflect its functionality as a generic QObject
to QAPI converter.
The previous commit renamed the files; this one renames the C identifiers.
Backports commit 09e68369a88d7de0f988972bf28eec1b80cc47f9 from qemu
The QMP visitors have no direct dependency on QMP. It is
valid to use them anywhere that one has a QObject. Rename them
to better reflect their functionality as a generic QObject
to QAPI converter.
This is the first of three parts: rename the files. The next two
parts will rename C identifiers. The split is necessary to make git
rename detection work.
Backports commit b3db211f3c80bb996a704d665fe275619f728bd4 from qemu
Version 2.0 of the semihosting specification introduces new trap
instructions for AArch32: HLT 0xF000 for A32 and HLT 0x3C for T32.
Implement these (in the same way we implement the existing HLT
semihosting trap for A64).
The old traps via SVC and BKPT are unaffected.
Backports commit 19a6e31c9d2701ef648b70ddcfc3bf64cec8c37e from qemu
Support target CPUs having a page size which isn't known
at compile time. To use this, the CPU implementation should:
* define TARGET_PAGE_BITS_VARY
* not define TARGET_PAGE_BITS
* define TARGET_PAGE_BITS_MIN to the smallest value it
might possibly want for TARGET_PAGE_BITS
* call set_preferred_target_page_bits() in its realize
function to indicate the actual preferred target page
size for the CPU (and report any error from it)
In CONFIG_USER_ONLY, the CPU implementation should continue
to define TARGET_PAGE_BITS appropriately for the guest
OS page size.
Machines which want to take advantage of having the page
size something larger than TARGET_PAGE_BITS_MIN must
set the MachineClass minimum_page_bits field to a value
which they guarantee will be no greater than the preferred
page size for any CPU they create.
Note that changing the target page size by setting
minimum_page_bits is a migration compatibility break
for that machine.
For debugging purposes, attempts to use TARGET_PAGE_SIZE
before it has been finally confirmed will assert.
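As a sketch of that recipe (the page-size values here are made up; set_preferred_target_page_bits is the function named above), a softmmu target would end up with roughly:
    /* In the target's parameter header: */
    #define TARGET_PAGE_BITS_VARY
    #define TARGET_PAGE_BITS_MIN 10   /* smallest page size the CPU may pick */
    /* TARGET_PAGE_BITS itself is deliberately left undefined. */
    /* In the CPU's realize function (error handling elided):
       set_preferred_target_page_bits(12); */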
Backports commit 20bccb82ff3ea09bcb7c4ee226d3160cab15f7da from qemu
Stop computing the L1 page mapping table properties
statically with macros that depend on
TARGET_PAGE_BITS. Drop the V_L1_SIZE, V_L1_SHIFT and
V_L1_BITS macros and replace them with variables that are
computed at an early stage of VM boot.
Removing this dependency helps to make TARGET_PAGE_BITS
dynamic.
Backports commit 66ec9f49399f0a9fa13ee77c472caba0de2773fc from qemu
Allocate sub_section dynamically. Remove the dependency
on TARGET_PAGE_SIZE to allow run-time page size detection
on ARM platforms.
Backports commit 2615fabd42ea0078dd9e659bdb21a5b7a1f87a9a from qemu
This speeds up MEMORY_LISTENER_CALL noticeably. Right now,
with many PCI devices you have N regions added to M AddressSpaces
(M = # PCI devices with bus-master enabled) and each call looks
up the whole listener list, with at least M listeners in it.
Because most of the regions in N are BARs, which are also roughly
proportional to M, the whole thing is O(M^3). This changes it
to O(M^2), which is the best we can do without rewriting the
whole thing.
Backports commit 9a54635dcb51a3fcf7507af630168f514a8cd4e7 from qemu
This comes for free from unifying tcg_reg_alloc_mov and
tcg_reg_alloc_movi's handling of TEMP_VAL_CONST. It triggers
often on moves to cc_dst, such as the following translation
of "sub $0x3c,%esp":
    before:                           after:
    subl   $0x3c,%ebp                 subl   $0x3c,%ebp
    movl   %ebp,0x10(%r14)            movl   %ebp,0x10(%r14)
    movl   $0x3c,%ebx                 movl   $0x3c,0x2c(%r14)
    movl   %ebx,0x2c(%r14)
Backports commit 0fe4fca4e1a5e06a270127dd80bb753d4dda61c6 from qemu
This was found with test-i386. The issue is that instructions
such as
addr32 lea (%eax), %rax
did not perform a 32-bit extension, because the LEA translation
skipped the gen_lea_v_seg step. That step does not just add
segments, it also takes care of extending from address size to
pointer size.
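What the missing extension amounts to, in plain C (names invented): with a 32-bit address-size override the effective address is computed modulo 2^32 and then zero-extended into the 64-bit destination:
    #include <stdint.h>
    /* Effective address of "addr32 lea (%eax), %rax", modelled in C. */
    uint64_t lea_addr32(uint64_t rax_in)
    {
        uint32_t ea = (uint32_t)rax_in;   /* address size is 32 bits */
        return ea;                        /* zero-extended into %rax */
    }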
Backports commit 620abfb004543404bef1953e25da2ad77352941a from qemu
This introduces load-acquire and store-release operations in QEMU.
For now, just use them as an implementation detail of atomic_mb_read
and atomic_mb_set.
Since docs/atomics.txt documents that atomic_mb_read only synchronizes
with an atomic_mb_set of the same variable, we can use the new implementation
everywhere instead of seq-cst loads and stores.
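A minimal C11 sketch of the pairing being relied on (simplified; the real macros in QEMU are more general): an acquire load synchronizes with a release store of the same variable, which is all atomic_mb_read/atomic_mb_set promise:
    #include <stdatomic.h>
    static _Atomic int flag;
    void publisher(void)                   /* roughly atomic_mb_set */
    {
        atomic_store_explicit(&flag, 1, memory_order_release);
    }
    int consumer(void)                     /* roughly atomic_mb_read */
    {
        return atomic_load_explicit(&flag, memory_order_acquire);
    }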
Backports commit 803cf26a9e019b5d2256a8edeb22e3538c4f3261 from qemu
When explicitly enabling unmigratable flags using "-cpu host"
(e.g. "-cpu host,+invtsc"), the requested feature won't be
enabled because cpu->migratable is true by default.
This is inconsistent with all other CPU models, which don't have
the "migratable" option, making "+invtsc" work without the need
for extra options.
This happens because x86_cpu_filter_features() uses
cpu->migratable as an argument for
x86_cpu_get_supported_feature_word(). This is not useful
because:
2) on "-cpu host" it only makes QEMU disable features that were
explicitly enabled in the command-line;
1) on all the other CPU models, cpu->migratable is already false.
The fix is to just use 'false' as an argument to
x86_cpu_get_supported_feature_word() in
x86_cpu_filter_features().
Note that:
* This won't change anything for people using
"-cpu host" or "-cpu host,migratable=<on|off>" (with no extra
features) because the x86_cpu_get_supported_feature_word() call
on the cpu->host_features check uses cpu->migratable as
argument.
* This won't change anything for any CPU model except "host"
because they all have cpu->migratable == false (and only "host"
has the "migratable" property that allows it to be changed).
* This will only change things for people using "-cpu host,+<feature>",
where <feature> is a non-migratable feature. The only existing
named non-migratable feature is "invtsc".
In other words, this change will only affect people using
"-cpu host,+invtsc" (that will now get what they asked for: the
invtsc flag will be enabled). All other use cases are unaffected.
Backports commit 46c032f3afcc05a0123914609f1003906ba63fda from qemu
When probing for CPU model information, we need to reuse the code
that initializes CPUID fields, but not the remaining side-effects
of x86_cpu_realizefn(). Move that code to a separate function
that can be reused later.
Backports commit 41f3d4d69a423dadb8431fda65d8d7c68c0de0fc from qemu
x86_cpu_filter_features() will be reused by code that shouldn't
print any warning. Move the warning code to a new
x86_cpu_report_filtered_features() function, and call it from
x86_cpu_realizefn().
Backports commit 8ca30e8673aff9bfcf8f969f8db4266b5f62e49c from qemu
Instead of treating the FP and SSE bits as special cases, add
them to the x86_ext_save_areas array. This will simplify the code
that calculates the supported xsave components and the size of
the xsave area.
Backports commit e3c9022b4e2b6a4deb6518361d2bbf33522b9198 from qemu
Instead of keeping the aliases inside the feature name arrays and
require parsing the strings, just register alias properties
manually. This simplifies the code for property registration and
lookup.
Backports commit 16d2fcaa509b1ca56eb2fcd8fe877279cf65cccc from qemu
Instead of translating the feature name entries when adding
property names, store the actual property names in the feature
name array.
For reference, here is the full list of functions that use
FeatureWordInfo::feat_names:
* x86_cpu_get_migratable_flags(): not affected, as it just
check for non-NULL values.
* report_unavailable_features(): informative only. It will
start printing feature names with hyphens.
* x86_cpu_list(): informative only. It will start printing
feature names with hyphens.
* x86_cpu_register_feature_bit_props(): not affected, as it
was already calling feat2prop(). Now we can remove the
feat2prop() calls safely.
So, the only user-visible effect of this patch are the new names
being used in help and error messages for users.
Backports commit fc7dfd205f3287893c436d932a167bffa30579c8 from qemu
VME is already disabled automatically when using TCG. So, instead
of pretending it is there when reporting CPU model data on
query-cpu-* QMP commands (causing every CPU model to be reported
as not runnable), we can disable it by default on all CPU models
when using TCG.
Do that by adding a tcg_default_props array that will work like
kvm_default_props.
Backports commit 04d99c3c61f4bdc0450dbeb6512b6dd743baca65 from qemu
Instead of using the builtin_x86_defs array, use the QOM subclass
list to list CPU models on "-cpu ?" and "query-cpu-definitions".
Backports commit ee465a3ef77c2b2975ffa71c72208c05b3f3970d from qemu
MDCCINT_EL1 is part of the DCC debugger communication
channel between the CPU and an attached external debugger.
QEMU doesn't implement this, but since Linux may try
to access this register we need to provide at least
a dummy implementation.
Backports commit 5dbdc4342f479d799a1970dd5fd22e64c9dcd50d from qemu
In commit 9b6a3ea7a699594 store_reg() was changed to mask
both bits 0 and 1 of the new PC value when in ARM mode.
Unfortunately this broke the exception return code paths
when doing a return from ARM mode to Thumb mode: in some
of these we write a new CPSR including new Thumb mode
bit via gen_helper_cpsr_write_eret(), and then use store_reg()
to write the new PC. In this case if the new CPSR specified
Thumb mode then masking bit 1 of the PC is incorrect
(these code paths correspond to the v8 ARM ARM pseudocode
function AArch32.ExceptionReturn(), which always aligns the
new PC appropriately for the new instruction set state).
Instead of using store_reg() in exception-return code paths,
call a new store_pc_exc_ret() which stores the raw new PC
value to env->regs[15], and then mask it appropriately in
the subsequent helper_cpsr_write_eret() where the new
env->thumb state is available.
This fixes a bug introduced by 9b6a3ea7a699594 which caused
crashes/hangs or otherwise bad behaviour for Linux when
userspace was using Thumb.
Backports commit fb0e8e79a9d77ee240dbca036fa8698ce654e5d1 from qemu
3 cases in a switch in disas_exc() require reference to the
ARM ARM spec in order to determine what case they're handling.
Backports commit 957956b3013c8122a749dfe61a41aef8b4100e31 from qemu
For BR, BLR and RET instructions, if tagged addresses are enabled, the
tag field in the address must be cleared out prior to loading the
address into the PC. Depending on the current EL, it will be set to
either all 0's or all 1's.
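A sketch of the described clean-up (illustrative only; the real code is emitted as TCG ops): the tag in bits 63:56 is replaced by all zeros or all ones before the value reaches the PC:
    #include <stdint.h>
    uint64_t strip_tag(uint64_t addr, int fill_with_ones)
    {
        addr &= 0x00ffffffffffffffull;      /* clear the tag in bits 63:56 */
        if (fill_with_ones) {
            addr |= 0xff00000000000000ull;  /* ...or force them all to 1 */
        }
        return addr;
    }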
Backports commit 6feecb8b941f2d21e5645d0b6e0cdb776998121b from qemu
When capturing the current CPU state for the TB, extract the TBI0 and TBI1
values from the correct TCR for the current EL and then add them to the TB
flags field.
Then, at the start of code generation for the block, copy the TBI fields
into the DisasContext structure.
Backports commit 86fb3fa4ed5873b021a362ea26a021f4aeab1bb4 from qemu
The 'old' dispatch code returned a QERR_MISSING_PARAMETER for missing
parameters, but the qapi qmp_dispatch() code uses
QERR_INVALID_PARAMETER_TYPE.
Improve qapi code to return QERR_MISSING_PARAMETER where
appropriate.
Fix expected error message in iotests.
Backports commit 1382d4abdf9619985e4078e37e49e487cea9935e from qemu
Unlike the other visit methods, visit_type_any() and visit_type_null()
neglect to check whether qmp_input_get_object() succeeded. They crash
when it fails. Reproducer:
{ "execute": "qom-set",
"arguments": { "path": "/machine", "property": "rtc-time" } }
Will crash with:
qapi/qapi-visit-core.c:277: visit_type_any: Assertion `!err != !*obj'
failed
Broken in commit 5c678ee. Fix by adding the missing error checks.
Backports commit c489780203f9b22aca5539ec7589b7140bdc951f from qemu
Only very modern GCC's actually set this define when building with the
ThreadSanitizer, so this little typo slipped through.
Backports commit 23ea7f57949f2f5934f4d5bbc29fe321b3a7067b from qemu
ThreadSanitizer picks up potential races although we already use
barriers to ensure things are in the correct order when processing exit
requests. For true C11 defined behaviour across threads we need to use
relaxed atomic_set/atomic_read semantics to reassure tsan.
Backports commit 027d9a7d2911e993cdcbd21c7c35d1dd058f05bb from qemu
The ThreadSanitizer rightly complains that something initialised with a
normal access is later updated and read atomically.
Backports commit ce7cf6a973f4b614162b9518954d441fa5e32fc6 from qemu
The idiom CPU_GET_CLASS(cpu) is fairly extensively used in various
threads and trips up ThreadSanitizer due to the fact that it updates
obj->class->object_cast_cache behind the scenes. As this is just a
fast-path cache there is no need to lock updates.
However to ensure defined C11 behaviour across threads we need to use
the plain atomic_read/set primitives and keep the sanitizer happy.
Backports commit b6b3ccfda015dcd5ab50f70c189ee5cc6c622e91 from qemu
This is to appease sanitizer builds which complain that:
"error: control reaches end of non-void function"
Backports commit 550276ae0a88851edda2cb7fcdd64256dbb8e314 from qemu
Add some notes on the use of the relaxed atomic access helpers and their
importance for defined behaviour in C11's multi-threaded memory model.
Backports commit e653bc6b0ff645c25b8a2eb607c18a5c98b59db6 from qemu
In the ARM v6 architecture, 'sub pc, pc, 1' is not an interworking
branch, so the computed new value is written to r15 as a normal
value. The architecture says that in this case, bits [1:0] of
the value written must be ignored if we are in ARM mode (or
bit [0] ignored if in Thumb mode); this is a change from the
ARMv4/v5 specification that behaviour is UNPREDICTABLE.
Use the correct mask on the PC value when doing a non-interworking
store to PC.
A popular library used on RaspberryPi uses this instruction
as part of a trick to determine whether it is running on
ARMv6 or ARMv7, and we were mishandling the sequence.
Fixes bug: https://bugs.launchpad.net/bugs/1625295
Backports commit 9b6a3ea7a699594162ed3d11e4e04b98568dc5c0 from qemu
Fix the decoding of iss_sf in disas_ld_lit.
The SF (Sixty-Four) field in the ISS (Instruction Specific Syndrome)
is a bit that specifies the width of the register that the
instruction loads to.
If cleared it specifies 32 bits.
If set it specifies 64 bits.
Backports commit 173ff58580b383a7841b18fddb293038c9d40d1c from qemu
Current CPU definition for AMD Opteron third generation includes
features like SSE4a and LAHF_LM support in emulated CPUID. These
features are present in K8 rev.E or K10 CPUs and later. However,
current G3 family and model describe 2nd generation K8 cores instead.
This is incorrect but was considered harmless until our tests found a
problem with linux kernels >= 3.10 (and maybe earlier) which specifically
check for Opteron K8 model when parsing CPUID leaf 0x80000001:
http://lxr.free-electrons.com/source/arch/x86/kernel/cpu/amd.c?v=3.16#L552
This code will disable LAHF_LM feature in /proc/cpuinfo if model number
is inconsistent.
This change sets Opteron_G3 family/model/stepping to 16/2/3 which is
a proper Opteron 3rd generation 2350 CPU.
Backports commit 339892d758efb2d0954160d41736a0eac9875d67 from qemu
A regression was introduced by commit 96193c22a "target-i386:
Move xsave component mask to features array": all
CPUID[EAX=0xD,ECX=0]:EAX bits were being reported as unmigratable
because they don't have feature names defined. This broke
"-cpu host" because it enables only migratable features by
default.
This adds a new field to FeatureWordInfo: migratable_flags, which
will make those features be reported as migratable even if they
don't have a property name defined.
Backports commit 6fb2fff75dceed1716e757882a6dfbadd9042407 from qemu
CPUState is a fairly common pointer to pass to these helpers. This means
if you need other arguments for the async_run_on_cpu case you end up
having to do a g_malloc to stuff additional data into the routine. For
the current users this isn't a massive deal but for MTTCG this gets
cumbersome when the only other parameter is often an address.
This adds the typedef run_on_cpu_func for helper functions which has an
explicit CPUState * passed as the first parameter. All the users of
run_on_cpu and async_run_on_cpu have had their helpers updated to use
CPUState where available.
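Per the description, the helper signature gains an explicit CPUState pointer, i.e. something along these lines (paraphrased from the text above; the void * data parameter is assumed to stay as before):
    typedef struct CPUState CPUState;   /* QEMU's per-vCPU state */
    typedef void (*run_on_cpu_func)(CPUState *cpu, void *data);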
Backports commit e0eeb4a21a3ca4b296220ce4449d8acef9de9049 from qemu
As discussed on the list [1], having a comment stating that this file
is "public domain" is arguably wrong and not legally binding. This patch
replaces that comment with a clear GPLv2+ license as proposed in [2].
[1] http://lists.nongnu.org/archive/html/qemu-devel/2016-09/msg06151.html
[2] http://lists.nongnu.org/archive/html/qemu-devel/2016-09/msg06217.html
Worth noting, compiler.h was originally created on 5c026320 by splitting
qemu-common.h. At the time, qemu-common.h was already GPLv2+.
Backports commit cc9d8a3b2c41c22fb09f90f3085e6036c199c3ca from qemu
This will ensure all checks for features[FEAT_KVM] in the code
will be correct in case the KVM CPUID leaf is completely
disabled.
Backports commit aec661de86894e914d2d82431d9cefa9a9a40213 from qemu
This will reuse the existing check/enforce logic in
x86_cpu_filter_features() to check the xsave component bits
against GET_SUPPORTED_CPUID.
Backports commit 96193c22ab39ea24f81e386ad7883260ff24f5fd from qemu
Instead of doing complex calculations and calling
kvm_arch_get_supported_cpuid() inside cpu_x86_cpuid(), calculate
the set of required XSAVE components earlier, at realize time.
Backports commit 2ca8a8becc2eeb5262e478ce502f5daa53f3d0bc from qemu
Move the xsave area size calculation from cpu_x86_cpuid() inside
its own function. While doing it, change it to use the XSAVE area
struct sizes for the initial size, instead of the magic 0x240
number.
Backports commit 1fda6198e4126af9988754c8824cfc9928649890 from qemu
Instead of assigning individual bits in a loop, just copy the
values from ena_mask.
Backports commit 8057c621b1b17cbcb35fe67d1a09ada9055873a9 from qemu
Instead of checking both env->features and ena_mask at two
different places in the CPUID code, initialize ena_mask based on
the features that are enabled for the CPU, and then clear
unsupported bits based on kvm_arch_get_supported_cpuid().
The results should be exactly the same, but it will make it
easier to move the mask calculation elsewhere, and reuse
x86_cpu_filter_features() for the kvm_arch_get_supported_cpuid()
check.
Backports commit 4928cd6de6b4211a79f98c8dc39115be1e815c2b from qemu