Forbid stack alignment change. (CCR)
Reserve FAULTMASK, BASEPRI registers.
Report any fault as a HardFault. Disable MemManage, BusFault and
UsageFault, so they are always escalated to HardFault. (SHCSR)
Backports commit 22ab3460017cfcfb6b50f05838ad142e08becce5 from qemu
MSR_SMI_COUNT started being migrated in QEMU 2.12. Do not migrate it
on older machine types, or the subsection causes a load failure for
guests that use SMM.
Backports part of commit 990e0be2603511560168e1ad61f68294d951c39e from
qemu
qstring_from_substr() takes the index of the substring's first and
last character. qstring_from_substr(s, 0, SIZE_MAX) denotes an empty
substring. Awkward.
Shift the end index one to the right. This simplifies both
qstring_from_substr() and its callers.
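As an illustration (hypothetical calls, not lines from the patch), the
convention change looks like:
/* before: @end is the index of the last character (inclusive) */
QString *qs1 = qstring_from_substr("hello", 0, 2);   /* "hel" */
/* after: @end is one past the last character (exclusive) */
QString *qs2 = qstring_from_substr("hello", 0, 3);   /* "hel" */
With the exclusive convention, the empty substring becomes the natural
qstring_from_substr(s, 0, 0).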
Backports commit ba891d68b4ff17faaea3d3a8bfd82af3eed0a134 from qemu
qstring_from_substr() parameters @start and @end are of type int.
blkdebug_parse_filename(), blkverify_parse_filename(), nbd_parse_uri(),
and qstring_from_str() pass @end values of type size_t or ptrdiff_t.
Values exceeding INT_MAX get truncated, with possibly disastrous
results.
Such huge substrings seem unlikely, but we found one in a core dump,
where "info tlb" executed via QMP's human-monitor-command apparently
produced 35 GiB of output.
Fix by changing the parameters to size_t.
Backports commit ad63c549ecd4af4a22a675a815edeb06b0e7bb6e from qemu
Rename DCACHE to DATA_CACHE and ICACHE to INSTRUCTION_CACHE.
This avoids conflict with Linux asm/cachectl.h macros and fixes
build failure on mips hosts.
Backports commit 5f00335aecafc9ad56592d943619d3252f8941f1 from qemu
When host vector registers and operations were introduced, I failed
to mark the registers call-clobbered as required by the ABI.
Fixes: 770c2fc7bb7
Backports commit 672189cd586ea38a2c1d8ab91eb1f9dcff5ceb05 from qemu
To correctly handle small (less than TARGET_PAGE_SIZE) MPU regions,
we must correctly handle the case where the address being looked
up hits in an MPU region that is not small but the address is
in the same page as a small region. For instance if MPU region
1 covers an entire page from 0x2000 to 0x2400 and MPU region
2 is small and covers only 0x2200 to 0x2280, then for an access
to 0x2000 we must not return a result covering the full page
even though we hit the page-sized region 1. Otherwise we will
then cache that result in the TLB and accesses that should
hit region 2 will incorrectly find the region 1 information.
Check for the case where we miss an MPU region but it is still
within the same page, and in that case narrow the size we will
pass to tlb_set_page_with_attrs() for whatever the final
outcome is of the MPU lookup.
Backports commit 9d2b5a58f85be2d8e129c4b53d6708ecf8796e54 from qemu
In AdvSIMD we can only do 32x32 integer multiplies, although SVE is
capable of larger 64-bit multiplies. As a result we can end up
generating invalid opcodes. Fix this by only reporting that we can
emit mul vector ops if the size is small enough.
Fixes a crash on:
sve-all-short-v8.3+sve@vq3/insn_mul_z_zi___INC.risu.bin
when running on AArch64 hardware.
Backports commit e65a5f227d77a5dbae7a7123c3ee915ee4bd80cf from qemu
'I' was being double-incremented; correctly within the inner loop
and incorrectly within the outer loop.
Backports commit 628fc75f3a3bb115de3b445c1a18547c44613cfe from qemu
For M-profile exception returns, the mmu index to use for exception
return unstacking is supposed to be that of wherever we are returning to:
* if returning to handler mode, privileged
* if returning to thread mode, privileged or unprivileged depending on
CONTROL.nPRIV for the destination security state
We were passing the wrong thing as the 'priv' argument to
arm_v7m_mmu_idx_for_secstate_and_priv(). The effect was that guests
which programmed the MPU to behave differently for privileged and
unprivileged code could get spurious MemManage Unstack exceptions.
Backports commit 2b83714d4ea659899069a4b94aa2dfadc847a013 from qemu
Use MAKE_64BIT_MASK instead of open-coding. Remove an odd
vector size check that is unlikely to be more profitable
than 3 64-bit integer stores. Correct the iteration for WORD
to avoid writing too much data.
Fixes RISU tests of PTRUE for VL 256.
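For reference, the macro (as defined in QEMU's include/qemu/bitops.h)
builds a contiguous mask from a (shift, length) pair:
#define MAKE_64BIT_MASK(shift, length) \
    (((~0ULL) >> (64 - (length))) << (shift))
/* e.g. MAKE_64BIT_MASK(8, 4) == 0x0000000000000f00ULL */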
Backports commit 973558a3f869e591d2406dd8226ec0c4e32a3c3e from qemu
Normally this is automatic in the size restrictions that are placed
on vector sizes coming from the implementation. However, for the
legitimate size tuple [oprsz=8, maxsz=32], we need to clear the final
24 bytes of the vector register. Without this check, do_dup selects
TCG_TYPE_V128 and clears only 16 bytes.
Backports commit 499748d7683198a765d17b4fdf6901ab9dca920c from qemu
These instructions must perform the sve_access_check, but
since they are implemented as NOPs there is no generated
code to elide when the access check fails.
Backports commit 2f95a3b09aebdcb5c9152a7ac434a5d57441fe82 from qemu
This reverts commit 208ecb3e1acc8d55dab49fdf721a86d513691688. This was
causing problems by making DEF_TARGET_LIST pointless and having to
jump through hoops to build on mingw with a fully enabled config.
This includes a change to fix the per-guest TCG test probe which was
added after 208ecb3 and used TARGET_LIST.
Backports commit 2b1f35b9a85cf0232615a67e7ff523137a58795e from qemu
Types & visitors are coupled and must be handled together to avoid
temporary build regression.
Wrap generated types/visitor code with #if/#endif using the context
helpers. Derived from a patch by Marc-André.
Backports commit 9f88c66211342714b06c051140fd64ffd338dbe1 from qemu
Wrap generated code with #if/#endif using an 'ifcontext' on
QAPIGenCSnippet objects.
This makes a conditional event's qapi_event_send_FOO() compile-time
conditional, but its enum QAPIEvent member remains unconditional for
now. A follow up patch "qapi-event: add 'if' condition to implicit
event enum" will improve this.
Backports commit c3cd6aa0201c126eda8dc71b60e7aa259a3e79b9 from qemu
Add helpers to wrap generated code with #if/#endif lines.
A later patch wants to use QAPIGen for generating C snippets rather
than full C files with copyright headers etc. Splice in class
QAPIGenCCode between QAPIGen and QAPIGenC.
Add a 'with' statement context manager that will be used to wrap
generator visitor methods. The manager will check if code was
generated before adding #if/#endif lines on QAPIGenCSnippet
objects. Used in the following patches.
Backports commit ded9fc28b5a07213f3e5e8ac7ea0494b85813de1 from qemu
Skip preprocessor lines when adding indentation, since that would
likely result in invalid code.
Backports commit 485d948ce86f5a096dc848ec31b70cd66452d40d from qemu
We commonly initialize attributes to None in .init(), then set their
real value in .check(). Accessing the attribute before .check()
yields None. If we're lucky, the code that accesses the attribute
prematurely chokes on None.
It won't for .ifcond, because None is a legitimate value.
Leave the ifcond attribute undefined until check().
Backports commit 4fca21c1b043cb1ef2e197ef15e7474ba668d925 from qemu
Built-in objects remain unconditional. Explicitly defined objects use
the condition specified in the schema. Implicitly defined objects
inherit their condition from their users. For most of them, there is
exactly one user, so the condition to use is obvious. The exception
is wrapped types generated for simple union variants, which can be
shared by any number of simple unions. The tight condition would be
the disjunction of the conditions of these simple unions. For now,
use the wrapped type's condition instead. Much simpler and good
enough for now.
Backports commit 2cbc94376e718448699036be7f6e29ab75312b70 from qemu
Accept an 'if' key in top-level elements, as a string or a list of
strings. The following patches will modify the test visitor to
check the value is correctly saved, and generate #if/#endif code (as a
single #if/#endif pair or a series for a list).
Example of 'if' key:
{ 'struct': 'TestIfStruct', 'data': { 'foo': 'int' },
'if': 'defined(TEST_IF_STRUCT)' }
The generated code is for now *unconditional*. Later patches generate
the conditionals.
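Once the conditionals are generated, the struct above should come out
wrapped roughly like this (a sketch of the eventual output, not code
from this patch):
#if defined(TEST_IF_STRUCT)
struct TestIfStruct {
    int64_t foo;
};
#endif /* defined(TEST_IF_STRUCT) */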
Backports commit 967c885108f18e5065744719f7959ba5ea0a5b0d from qemu
A new option will be used to allow commands that are prepared to run
during the preconfig state. Other commands that should be able
to run in the preconfig state must be amended either not to expect the
machine in an initialized state or to deal with it.
For compatibility reasons, commands that don't use the new flag
'allow-preconfig' explicitly are not permitted to run in the
preconfig state, but are allowed in all other states as they used
to be.
Within this patch, allow the following commands in the preconfig state:
qmp_capabilities
query-qmp-schema
query-commands
query-command-line-options
query-status
exit-preconfig
to allow a QMP connection, basic introspection, and moving to the next
state.
PS:
set-numa-node and query-hotpluggable-cpus will be enabled later in
separate patches.
Backports commit d6fe3d02e9a2ce7b63a0723d0b71f3013f59d705 from qemu
It was missed in the first version of the OOB series. We should check this
to make sure we throw the right error when a faulty value is passed in.
Backports commit 9408860165e07aaadec66c336f3dc849b945a8ed from qemu
Here "oob" stands for "Out-Of-Band". When "allow-oob" is set, it means
the command allows out-of-band execution.
The "oob" idea is proposed by Markus Armbruster in following thread:
https://lists.gnu.org/archive/html/qemu-devel/2017-09/msg02057.html
This new "allow-oob" boolean will be exposed by "query-qmp-schema" as
well for command entries, so that QMP clients can know which commands
can be used in out-of-band calls. For example the command "migrate"
originally looks like:
{"name": "migrate", "ret-type": "17", "meta-type": "command",
"arg-type": "86"}
And it'll be changed into:
{"name": "migrate", "ret-type": "17", "allow-oob": false,
"meta-type": "command", "arg-type": "86"}
This patch only provides the QMP interface level changes. It does not
contain the real out-of-band execution implementation yet.
Backports commit 876c67512e2af8c05686faa9f9ff49b38d7a392c from qemu
This implements NPT support for SVM by hooking into
x86_cpu_handle_mmu_fault where it reads the stage-1 page table. Whether
we need to perform this 2nd stage translation, and how, is decided
during vmrun and stored in hflags2, along with nested_cr3 and
nested_pg_mode.
As get_hphys performs a direct cpu_vmexit in case of NPT faults, we need
retaddr in that function. To avoid changing the signature of
cpu_handle_mmu_fault, this passes the value from tlb_fill to get_hphys
via the CPU state.
This was tested successfully via the Jailhouse hypervisor.
Backports commit fe441054bb3f0c75ff23335790342c0408e11c3a from qemu
In commit 71b9a45330fe220d1 we changed the condition we use
to determine whether we need to refill the TLB in
get_page_addr_code() to
if (unlikely(env->tlb_table[mmu_idx][index].addr_code !=
(addr & (TARGET_PAGE_MASK | TLB_INVALID_MASK)))) {
This isn't the right check (it will falsely fail if the
input addr happens to have the low bit corresponding to
TLB_INVALID_MASK set, for instance). Replace it with a
use of the new tlb_hit() function, which is the correct test.
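With the helper, the refill condition becomes (a sketch of the fixed
check):
if (unlikely(!tlb_hit(env->tlb_table[mmu_idx][index].addr_code, addr))) {
    /* TLB entry covers a different page: refill it */
}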
Backports commit e4c967a7201400d7f76e5847d5b4c4ac9e2566e0 from qemu
The condition to check whether an address has hit against a particular
TLB entry is not completely trivial. We do this in various places, and
in fact in one place (get_page_addr_code()) we have got the condition
wrong. Abstract it out into new tlb_hit() and tlb_hit_page() inline
functions (one for a known-page-aligned address and one for an
arbitrary address), and use them in all the places where we had the
condition correct.
This is a no-behaviour-change patch; we leave fixing the buggy
code in get_page_addr_code() to a subsequent patch.
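The helpers boil down to the following (a sketch; the surrounding
header details are elided):
static inline bool tlb_hit_page(target_ulong tlb_addr, target_ulong page)
{
    return page == (tlb_addr & (TARGET_PAGE_MASK | TLB_INVALID_MASK));
}

static inline bool tlb_hit(target_ulong tlb_addr, target_ulong addr)
{
    return tlb_hit_page(tlb_addr, addr & TARGET_PAGE_MASK);
}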
Backports commit 334692bce7f0653a93b8d84ecde8c847b08dec38 from qemu
There is no need to re-set these 3 features already
implied by the call to aarch64_a15_initfn.
Backports commit 0b33968e7f4cf998f678b2d1a5be3d6f3f3513d8 from qemu
There is no need to re-set these 9 features already
implied by the call to aarch64_a57_initfn.
Backports commit 156a7065365578deb3d63c2b5b69a4b5999a8fcc from qemu
Leave ARM_CP_SVE, removing ARM_CP_FPU; the sve_access_check
produced by the flag already includes fp_access_check. If
we also check ARM_CP_FPU the double fp_access_check asserts.
Backports commit 11d7870b1b4d038d7beb827f3afa72e284701351 from qemu
We already check for the same condition within the normal integer
sdiv and sdiv64 helpers. Use a slightly different formation that
does not require deducing the expression type.
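The two guarded cases are division by zero and the single signed
overflow; a minimal sketch (helper name hypothetical):
static int64_t do_sdiv64(int64_t n, int64_t m)
{
    if (m == 0) {                        /* division by zero */
        return 0;
    }
    if (n == INT64_MIN && m == -1) {     /* the only overflowing quotient */
        return INT64_MIN;
    }
    return n / m;
}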
Backports commit 7e8fafbfd0537937ba8fb366a90ea6548cc31576 from qemu
Since kernel commit a86bd139f2 (arm64: arch_timer: Enable CNTVCT_EL0
trap..), released in kernel version v4.12, user-space has been able
to read these system registers. As we can't use QEMUTimer's in
linux-user mode we just directly call cpu_get_clock().
Backports commit 26c4a83bd4707797868174332a540f7d61288d15 from qemu
We've already added the helpers with an SVE patch; all that remains
is to wire up the aa64 and aa32 translators. Enable the feature
within -cpu max for CONFIG_USER_ONLY.
Backports commit 26c470a7bb4233454137de1062341ad48947f252 from qemu
Enhance the existing helpers to support SVE, which takes the
index from each 128-bit segment. The change has no effect
for AdvSIMD, since there is only one such segment.
Backports commit 18fc24057815bf3d956cfab892a2bc2344bd1dcb from qemu
For aa64 advsimd, we had been passing the pre-indexed vector.
However, sve applies the index to each 128-bit segment, so we
need to pass in the index separately.
For aa32 advsimd, the fp32 operation always has index 0, but
we failed to interpret the fp16 index correctly.
Backports commit 2cc99919a81a62589a4a6b0f365eabfead1db1a7 from qemu
It calls cpu_loop_exit in system emulation mode (and should never be
called in user emulation mode).
Backports commit 50b3de6e5cd464dcc20e3a48f5a09e0299a184ac from qemu
We need to terminate the translation block after STGI so that pending
interrupts can be injected.
This fixes pending NMI injection for Jailhouse which uses "stgi; clgi"
to open a brief injection window.
Backports commit df2518aa587a0157bbfbc635fe47295629d9914a from qemu
Check for SVM interception prior to injecting an NMI. Tested via the
Jailhouse hypervisor.
Backports commit 02f7fd25a446a220905c2e5cb0fc3655d7f63b29 from qemu
Coverity does not like the new _Float* types that are used by
recent glibc, and croaks on every single file that includes
stdlib.h. Add dummy typedefs to please it.
Backports commit a1a98357e3fdfce92b5ed0c6728489b9992fecb5 from qemu
The implementation of these two instructions was swapped.
At the same time, unify the setup of eflags for the insn group.
Backports commit 13672386a93fef64cfd33bd72fbf3d80f2c00e94 from qemu
When an IOMMUMemoryRegion is in front of a virtio device,
address_space_cache_init does not set cache->ptr as the memory
region is not RAM. However when the device performs an access,
we end up in glue() which performs the translation and then uses
MAP_RAM. The latter uses the unset ptr and returns a wrong value
which leads to a SIGSEGV in address_space_lduw_internal_cached_slow,
for instance.
In the slow path, cache->ptr is NULL and MAP_RAM must redirect to
qemu_map_ram_ptr((mr)->ram_block, ofs).
As MAP_RAM, IS_DIRECT and INVALIDATE are the same in _cached_slow
and non-cached mode, let's remove those macros.
This fixes the use cases featuring vIOMMU (Intel and ARM SMMU)
which lead to a SIGSEGV.
Fixes: 48564041a73a (exec: reintroduce MemoryRegion caching)
Backports part of commit a99761d3c85679da380c0f597468acd3dc1b53b3 from
qemu
Determining the size of a field is useful when you don't have a struct
variable handy. Open-coding this is ugly.
This patch adds the sizeof_field() macro, which is similar to
typeof_field(). Existing instances are updated to use the macro.
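The macro itself is a one-liner, mirroring typeof_field():
#define sizeof_field(type, field) sizeof(((type *)0)->field)
/* e.g. sizeof_field(struct stat, st_size), no variable required */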
Backports commit f18793b096e69c7acfce66cded483ba9fc01762a from qemu
The offset can be larger than 16 bits for nanoMIPS,
and the immediate field can be larger than 16 bits as well.
Backports commit 72e1f16f18fe62504f8f25d7a3f6813b24b221be from qemu
Fix CTC1 to raise a Reserved Instruction exception when the given fs
is not available.
Backports commit f48a2cb21824217a61ec7be797860a0702e5325c from qemu
Allow ARMv8M to handle small MPU and SAU region sizes, by making
get_phys_addr_pmsav8() set the page size to 1 if the MPU or
SAU region covers less than TARGET_PAGE_SIZE.
We choose to use a size of 1 because it makes no difference to
the core code, and avoids having to track both the base and
limit for SAU and MPU and then convert into an artificially
restricted "page size" that the core code will then ignore.
Since the core TCG code can't handle execution from small
MPU regions, we strip the exec permission from them so that
any execution attempts will cause an MPU exception, rather
than allowing it to end up with a cpu_abort() in
get_page_addr_code().
(The previous code's intention was to make any small page be
treated as having no permissions, but unfortunately errors
in the implementation meant that it didn't behave that way.
It's possible that some binaries using small regions were
accidentally working with our old behaviour and won't now.)
We also retain an existing bug, where we ignored the possibility
that the SAU region might not cover the entire page, in the
case of executable regions. This is necessary because some
currently-working guest code images rely on being able to
execute from addresses which are covered by a page-sized
MPU region but a smaller SAU region. We can remove this
workaround if we ever support execution from small regions.
Backports commit 720424359917887c926a33d248131fbff84c9c28 from qemu
We want to handle small MPU region sizes for ARMv7M. To do this,
make get_phys_addr_pmsav7() set the page size to the region
size if it is less than TARGET_PAGE_SIZE, rather than working
only in TARGET_PAGE_SIZE chunks.
Since the core TCG code can't handle execution from small
MPU regions, we strip the exec permission from them so that
any execution attempts will cause an MPU exception, rather
than allowing it to end up with a cpu_abort() in
get_page_addr_code().
(The previous code's intention was to make any small page be
treated as having no permissions, but unfortunately errors
in the implementation meant that it didn't behave that way.
It's possible that some binaries using small regions were
accidentally working with our old behaviour and won't now.)
Backports commit e5e40999b5e03567ef654546e3d448431643f8f3 from qemu
Enable TOPOEXT feature on EPYC CPU. This is required to support
hyperthreading on VM guests. Also extend xlevel to 0x8000001E.
Disable topoext on PC_COMPAT_2_12 and keep xlevel 0x8000000a.
Backports commit e00516475c270dcb6705753da96063f95699abf2 from qemu
This is part of topoext support. To keep compatibility, it is better
that we support all the combinations of nr_cores and nr_threads currently
supported. By allowing more nr_cores and nr_threads, we might end up with
more nodes than we can actually support with the real hardware. We need to
fix up the node id to make this work. We can achieve this by shifting the
socket_id bits left to address more nodes.
Backports commit 631be32155dbafa1fe886f2488127956c9120ba6 from qemu
Future AMD CPUs expose a mechanism to tell the guest that the
Speculative Store Bypass Disable is not needed and that the
CPU is all good.
This is exposed via the CPUID 8000_0008.EBX[26] bit.
See 124441_AMD64_SpeculativeStoreBypassDisable_Whitepaper_final.pdf
A copy of this document is available at
https://bugzilla.kernel.org/show_bug.cgi?id=199889
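In feature-word terms this is a single bit; a sketch (macro name
hypothetical, the bit position taken from the whitepaper):
/* CPUID Fn8000_0008 EBX[26]: SSBD is not needed on this CPU */
#define CPUID_8000_0008_EBX_AMD_SSB_NO  (1U << 26)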
Backports commit 254790a909a2f153d689bfa7d8e8f0386cda870d from qemu
Future AMD CPUs expose _two_ ways to utilize the Intel equivalent
of the Speculative Store Bypass Disable. The first is via
the virtualized VIRT_SPEC CTRL MSR (0xC001_011f) and the second
is via the SPEC_CTRL MSR (0x48). The document titled:
124441_AMD64_SpeculativeStoreBypassDisable_Whitepaper_final.pdf
gives priority of SPEC CTRL MSR over the VIRT SPEC CTRL MSR.
A copy of this document is available at
https://bugzilla.kernel.org/show_bug.cgi?id=199889
Anyhow, this means that on future AMD CPUs there will be _two_ ways to
deal with SSBD.
Backports commit a764f3f7197f4d7ad8fe8424269933de912224cb from qemu
OSPKE is not a static feature flag: it changes dynamically at
runtime depending on CR4, and it was never configurable: KVM
never returned OSPKE on GET_SUPPORTED_CPUID, and TCG enables
it automatically if CR4_PKE_MASK is set.
Remove OSPKE from the feature name array so users don't try to
configure it manually.
Backports commit 9ccb9784b57804f5c74434ad6ccb66650a015ffc from qemu
OSXSAVE is not a static feature flag: it changes dynamically at
runtime depending on CR4, and it was never configurable: KVM
never returned OSXSAVE on GET_SUPPORTED_CPUID, and it is not
included in TCG_EXT_FEATURES.
Remove OSXSAVE from the feature name array so users don't try to
configure it manually.
Backports commit f1a23522b03a569f13aad49294bb4c4b1a9500c7 from qemu
Add support for cpuid leaf CPUID_8000_001E. Build the config that closely
matches the underlying hardware. Please refer to the Processor Programming
Reference (PPR) for AMD Family 17h Model for more details.
Backports commit ed78467a214595a63af7800a073a03ffe37cd7db from qemu
This commit removes the PYTHON_UTF8 workaround. The problem with setting
LC_ALL= LANG=C LC_CTYPE=en_US.UTF-8
is that the en_US.UTF-8 locale might not be available. In this case,
setting the above locales results in build errors even though another UTF-8
locale was originally set [1]. The only stable way of fixing the
encoding problem is by specifying the encoding in Python, like the
previous commit does.
[1] https://bugs.gentoo.org/657766
Backports commit 0d6b93deeeb3cc190692d629f5927befdc8b1fb8 from qemu
Python 2 happily reads UTF-8 files in text mode, but Python 3 requires
either UTF-8 locale or an explicit encoding passed to open(). Commit
d4e5ec877ca fixed this by setting the en_US.UTF-8 locale. That falls
apart when the locale isn't available.
Matthias Maier and Arfrever Frehtes Taifersar Arahesis proposed to use
binary mode instead, with manual conversion from bytes to str. Works,
but opening with an explicit encoding is simpler, so do that.
Since Python 2's open() doesn't support the encoding parameter, we
need to suppress it with a version check.
Backports commit de685ae5e9a4b523513033bd6cadc8187a227170 from qemu
It often happens that just a few discriminator values imply extra data in
a flat union. Existing checks did not make it possible to leave other
values uncovered. Such cases had to be worked around by either stating a dummy
(empty) type or introducing another (subset) discriminator enumeration.
Both options create redundant entities in qapi files for little profit.
With this patch, it is no longer necessary to add designated union
fields for every possible value of a discriminator enumeration.
Backports commit 800877bb1639d38ffaebe312a37b61c66bb10c83 from qemu
The event generator produces an enum, and puts it in the last visited
module. It fits better in the main module, since it's the set of all
visited events, from all modules.
Backports commit f030ffd39d6c1ea8fff281be5e4b19c819d7ce10 from qemu
Unlike ARMv7-M, ARMv6-M and ARMv8-M Baseline only support naturally
aligned memory accesses for load/store instructions.
Backports commit 2aeba0d007d33efa12a6339bb140aa634e0d52eb from qemu
This feature is intended to distinguish ARMv8-M variants: Baseline and
Mainline. ARMv7-M compatibility requires the Main Extension. ARMv6-M
compatibility is provided by all ARMv8-M implementations.
Backports commit cc2ae7c9de14efd72c6205825eb7cd980ac09c11 from qemu
The arrays were made static, and the "if" was simplified because V7M
and V8M define the V6 feature.
Backports commit 8297cb13e407db8a96cc7ed6b6a6c318a150759a from qemu
The assembler in most versions of Mac OS X is pretty old and does not
support the xgetbv instruction. To go around this problem, the raw
encoding of the instruction is used instead.
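A sketch of the workaround; 0F 01 D0 is the raw xgetbv encoding, with
ECX selecting the extended control register to read:
uint32_t eax, edx;
/* xgetbv, emitted by hand for assemblers without the mnemonic */
asm volatile(".byte 0x0f, 0x01, 0xd0"
             : "=a"(eax), "=d"(edx)
             : "c"(0));                  /* ECX = 0 selects XCR0 */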
Backports commit 1019242af11400252f6735ca71a35f81ac23a66d from qemu
ARMv6-M supports 6 Thumb2 instructions. This patch checks for these
instructions and allows their execution.
Like Thumb2 cores, ARMv6-M always interprets the BL instruction as 32-bit.
This patch is required for future Cortex-M0 support.
Backports commit 14120108f87b3f9e1beacdf0a6096e464e62bb65 from qemu
Rearrange the arithmetic so that we are agnostic about the total size
of the vector and the size of the element. This will allow us to index
up to the 32nd byte and to use 16-byte elements.
Backports commit 66f2dbd783d0b6172043e3679171421b2d0bac11 from qemu
Now we have stn_p() and ldn_p() we can use them in various
functions in exec.c that used to have their own switch-on-size code.
Backports commit 6d3ede5410e05c5f6221dab1daf99164fd6bf879 from qemu
In subpage_read() we perform a load of the data into a local buffer
which we then access using ldub_p(), lduw_p(), ldl_p() or ldq_p()
depending on its size, storing the result into the uint64_t *data.
Since ldl_p() returns an 'int', this means that for the 4-byte
case we will sign-extend the data, whereas for 1 and 2 byte
reads we zero-extend it.
This ought not to matter since the caller will likely ignore values in
the high bytes of the data, but add a cast so that we're consistent.
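Sketched, the inconsistency and the fix:
uint8_t buf[8] = { 0 };
uint64_t data;
data = ldl_p(buf);             /* ldl_p() returns int: sign-extends */
data = (uint32_t)ldl_p(buf);   /* fixed: zero-extends like the 1/2-byte reads */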
Backports commit 22672c6075a16d1998e37686f02ed4bd2fb30f78 from qemu
There's a common pattern in QEMU where a function needs to perform
a data load or store of an N byte integer in a particular endianness.
At the moment this is handled by doing a switch() on the size and
calling the appropriate ld*_p or st*_p function for each size.
Provide a new family of functions ldn_*_p() and stn_*_p() which
take the size as an argument and do the switch() themselves.
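A sketch of the little-endian variant's shape (the real helpers are
generated for each endianness):
uint64_t ldn_le_p(const void *ptr, int sz)
{
    switch (sz) {
    case 1: return ldub_p(ptr);
    case 2: return lduw_le_p(ptr);
    case 4: return (uint32_t)ldl_le_p(ptr);
    case 8: return ldq_le_p(ptr);
    default: g_assert_not_reached();
    }
}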
Backports commit afa4f6653dca095f63f3fe7f2001e9334f5676c1 from qemu
The 'addr' field in the CPUIOTLBEntry struct has a rather non-obvious
use; add a comment documenting it (reverse-engineered from what
the code that sets it is doing).
Backports commit ace4109011b4912b24e76f152e2cf010e78819c5 from qemu
The API for cpu_transaction_failed() says that it takes the physical
address for the failed transaction. However we were actually passing
it the offset within the target MemoryRegion. We don't currently
have any target CPU implementations of this hook that require the
physical address; fix this bug so we don't get confused if we ever
do add one.
Backports commit 2d54f19401bc54b3b56d1cc44c96e4087b604b97 from qemu
This allows KVM with the Book3S radix MMU mode to take advantage of
THP and install larger pages in the partition scope page tables (the
host translation).
Backports commit 0c1272cc7c72dfe0ef66be8f283cf67c74b58586 from qemu
Add information for cpuid 0x8000001D leaf. Populate cache topology information
for different cache types (Data Cache, Instruction Cache, L2 and L3) supported
by 0x8000001D leaf. Please refer to the Processor Programming Reference (PPR)
for AMD Family 17h Model for more details.
Backports commit 8f4202fb1080f86958782b1fca0bf0279f67d136 from qemu
Always initialize CPUCaches structs with cache information, even
if legacy_cache=true. Use a different CPUCaches struct for
CPUID[2], CPUID[4], and the AMD CPUID leaves.
This will greatly simplify the logic inside cpu_x86_cpuid().
Backports commit a9f27ea9adc8c695197bd08f2e938ef7b4183f07 from qemu
Rather than limit total TB size to PAGE-32 bytes, end the TB when
near the end of a page. This should provide proper semantics of
SIGSEGV when executing near the end of a page.
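One way the translator can check this (a sketch; the page_start field
and the 4-byte insn size are illustrative):
/* end the TB once the next insn could cross into the next page */
if (ctx->base.pc_next - ctx->page_start >= TARGET_PAGE_SIZE - 4) {
    ctx->base.is_jmp = DISAS_TOO_MANY;
}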
Backports commit 4c7a0f6f34869b3dfe7091d28ff27a8dfbdd8b70 from qemu
Removed ctx->insn_pc in favour of ctx->base.pc_next.
Yes, it is annoying, but didn't want to waste its 4 bytes.
Backports commit a575cbe01caecf22ab322a9baa5930a6d9e39ca6 from qemu
The name gen_lookup_tb is at odds with tcg_gen_lookup_and_goto_tb.
For these cases, we do indeed want to exit back to the main loop.
Similarly, DISAS_UPDATE performs no actual update, whereas DISAS_EXIT
does what it says.
Backports commit 4106f26e95c83b8759c3fe61a4d3a1fa740db0a9 from qemu
These are all indirect or out-of-page direct jumps.
We can indirectly chain to the next TB without going
back to the main loop.
Backports commit 8aaf7da9c3b1f282b5a123de3e87a2e6ca87f3b9 from qemu