When host vector registers and operations were introduced, I failed
to mark the registers as call-clobbered, as required by the ABI.
Fixes: 770c2fc7bb7
Backports commit 672189cd586ea38a2c1d8ab91eb1f9dcff5ceb05 from qemu
To correctly handle small (less than TARGET_PAGE_SIZE) MPU regions,
we must correctly handle the case where the address being looked
up hits in an MPU region that is not small but the address is
in the same page as a small region. For instance if MPU region
1 covers an entire page from 0x2000 to 0x2400 and MPU region
2 is small and covers only 0x2200 to 0x2280, then for an access
to 0x2000 we must not return a result covering the full page
even though we hit the page-sized region 1. Otherwise we will
then cache that result in the TLB and accesses that should
hit region 2 will incorrectly find the region 1 information.
Check for the case where we miss an MPU region but it is still
within the same page, and in that case narrow the size we will
pass to tlb_set_page_with_attrs() for whatever the final
outcome is of the MPU lookup.
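A minimal illustrative sketch of the narrowing idea (a hypothetical helper,
not the actual M-profile get_phys_addr code): if some other MPU region
overlaps the page containing the address but does not cover the whole page,
report a 1-byte "page" so the cached TLB entry can never be reused for the
rest of the page:

    #include <stdbool.h>
    #include <stdint.h>

    /* 'base' and 'limit' describe another MPU region that overlaps the
     * page containing 'addr'.  If that region does not cover the whole
     * page, shrink the translation size we hand to the TLB. */
    static uint32_t mpu_narrow_page_size(uint32_t addr, uint32_t page_size,
                                         uint32_t base, uint32_t limit)
    {
        uint32_t page_base = addr & ~(page_size - 1);
        bool covers_whole_page = base <= page_base &&
                                 limit >= page_base + page_size - 1;

        if (!covers_whole_page) {
            return 1;   /* force a fresh MPU lookup for every access */
        }
        return page_size;
    }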
Backports commit 9d2b5a58f85be2d8e129c4b53d6708ecf8796e54 from qemu
In AdvSIMD we can only do 32x32 integer multiplies, although SVE is
capable of larger 64-bit multiplies. As a result we can end up
generating invalid opcodes. Fix this by only reporting that we can emit
mul vector ops if the element size is small enough.
Fixes a crash on:
sve-all-short-v8.3+sve@vq3/insn_mul_z_zi___INC.risu.bin
when running on AArch64 hardware.
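A sketch of the size check described above (a hypothetical helper; in TCG
terms, vece is the log2 element size, so 3 means 64-bit elements):

    /* AdvSIMD MUL exists only for 8-, 16- and 32-bit elements, so only
     * claim support for a native vector multiply below 64-bit elements. */
    static int can_emit_vec_mul(unsigned vece)
    {
        return vece < 3;    /* i.e. smaller than 64-bit elements */
    }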
Backports commit e65a5f227d77a5dbae7a7123c3ee915ee4bd80cf from qemu
'I' was being double-incremented; correctly within the inner loop
and incorrectly within the outer loop.
Backports commit 628fc75f3a3bb115de3b445c1a18547c44613cfe from qemu
For M-profile exception returns, the mmu index to use for exception
return unstacking is supposed to be that of wherever we are returning to:
* if returning to handler mode, privileged
* if returning to thread mode, privileged or unprivileged depending on
CONTROL.nPRIV for the destination security state
We were passing the wrong thing as the 'priv' argument to
arm_v7m_mmu_idx_for_secstate_and_priv(). The effect was that guests
which programmed the MPU to behave differently for privileged and
unprivileged code could get spurious MemManage Unstack exceptions.
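The rule above can be summarised in a small sketch (hypothetical names, not
the actual v7m exception-return code):

    #include <stdbool.h>

    /* Privilege level to use for exception-return unstacking. */
    static bool v7m_unstack_is_priv(bool return_to_handler,
                                    bool dest_control_npriv)
    {
        if (return_to_handler) {
            return true;             /* handler mode is always privileged */
        }
        return !dest_control_npriv;  /* thread mode: CONTROL.nPRIV decides */
    }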
Backports commit 2b83714d4ea659899069a4b94aa2dfadc847a013 from qemu
Use MAKE_64BIT_MASK instead of open-coding. Remove an odd
vector size check that is unlikely to be more profitable
than 3 64-bit integer stores. Correct the iteration for WORD
to avoid writing too much data.
Fixes RISU tests of PTRUE for VL 256.
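For reference, a sketch of what "open-coding vs MAKE_64BIT_MASK" means here;
the macro below is written to behave like the one in include/qemu/bitops.h
(a run of 'length' set bits starting at bit 'shift'):

    #include <stdint.h>

    #define MAKE_64BIT_MASK(shift, length) \
        (((~0ULL) >> (64 - (length))) << (shift))

    static uint64_t low_bits_open_coded(unsigned n)
    {
        return (1ULL << n) - 1;         /* undefined behaviour for n == 64 */
    }

    static uint64_t low_bits_with_helper(unsigned n)
    {
        return MAKE_64BIT_MASK(0, n);   /* well-defined for 1 <= n <= 64 */
    }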
Backports commit 973558a3f869e591d2406dd8226ec0c4e32a3c3e from qemu
Normally this is automatic in the size restrictions that are placed
on vector sizes coming from the implementation. However, for the
legitimate size tuple [oprsz=8, maxsz=32], we need to clear the final
24 bytes of the vector register. Without this check, do_dup selects
TCG_TYPE_V128 and clears only 16 bytes.
Backports commit 499748d7683198a765d17b4fdf6901ab9dca920c from qemu
These instructions must perform the sve_access_check, but
since they are implemented as NOPs there is no generated
code to elide when the access check fails.
Backports commit 2f95a3b09aebdcb5c9152a7ac434a5d57441fe82 from qemu
This reverts commit 208ecb3e1acc8d55dab49fdf721a86d513691688. This was
causing problems by making DEF_TARGET_LIST pointless and having to
jump through hoops to build on mingw with a fully enabled config.
This includes a change to fix the per-guest TCG test probe which was
added after 208ecb3 and used TARGET_LIST.
Backports commit 2b1f35b9a85cf0232615a67e7ff523137a58795e from qemu
Types & visitors are coupled and must be handled together to avoid
a temporary build regression.
Wrap generated types/visitor code with #if/#endif using the context
helpers. Derived from a patch by Marc-André.
Backports commit 9f88c66211342714b06c051140fd64ffd338dbe1 from qemu
Wrap generated code with #if/#endif using an 'ifcontext' on
QAPIGenCSnippet objects.
This makes a conditional event's qapi_event_send_FOO() compile-time
conditional, but its enum QAPIEvent member remains unconditional for
now. A follow up patch "qapi-event: add 'if' condition to implicit
event enum" will improve this.
Backports commit c3cd6aa0201c126eda8dc71b60e7aa259a3e79b9 from qemu
Add helpers to wrap generated code with #if/#endif lines.
A later patch wants to use QAPIGen for generating C snippets rather
than full C files with copyright headers etc. Splice in class
QAPIGenCCode between QAPIGen and QAPIGenC.
Add a 'with' statement context manager that will be used to wrap
generator visitor methods. The manager will check if code was
generated before adding #if/#endif lines on QAPIGenCSnippet
objects. Used in the following patches.
Backports commit ded9fc28b5a07213f3e5e8ac7ea0494b85813de1 from qemu
Skip preprocessor lines when adding indentation, since that would
likely result in invalid code.
Backports commit 485d948ce86f5a096dc848ec31b70cd66452d40d from qemu
We commonly initialize attributes to None in .init(), then set their
real value in .check(). Accessing the attribute before .check()
yields None. If we're lucky, the code that accesses the attribute
prematurely chokes on None.
It won't for .ifcond, because None is a legitimate value.
Leave the ifcond attribute undefined until check().
Backports commit 4fca21c1b043cb1ef2e197ef15e7474ba668d925 from qemu
Built-in objects remain unconditional. Explicitly defined objects use
the condition specified in the schema. Implicitly defined objects
inherit their condition from their users. For most of them, there is
exactly one user, so the condition to use is obvious. The exception
is wrapped types generated for simple union variants, which can be
shared by any number of simple unions. The tight condition would be
the disjunction of the conditions of these simple unions. For now,
use the wrapped type's condition instead. Much simpler and good
enough for now.
Backports commit 2cbc94376e718448699036be7f6e29ab75312b70 from qemu
Accept an 'if' key in top-level elements, given as a string or a list of
strings. The following patches will modify the test visitor to
check that the value is correctly saved, and generate #if/#endif code (as a
single #if/#endif line or a series for a list).
Example of 'if' key:
{ 'struct': 'TestIfStruct', 'data': { 'foo': 'int' },
'if': 'defined(TEST_IF_STRUCT)' }
The generated code is for now *unconditional*. Later patches generate
the conditionals.
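Once those later patches land, the generated C for the example above would
come out wrapped roughly like this (a sketch; the exact layout of the
generated headers may differ):

    #if defined(TEST_IF_STRUCT)
    typedef struct TestIfStruct TestIfStruct;

    struct TestIfStruct {
        int64_t foo;
    };
    #endif /* defined(TEST_IF_STRUCT) */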
Backports commit 967c885108f18e5065744719f7959ba5ea0a5b0d from qemu
The new option will be used to allow commands that are prepared to run
(or need to run) during the preconfig state. Other commands that should
be able to run in the preconfig state should be amended either not to
expect the machine to be in an initialized state or to deal with it.
For compatibility reasons, commands that don't explicitly use the new
'allow-preconfig' flag are not permitted to run in the preconfig state,
but are allowed in all other states as they used to be.
Within this patch, allow the following commands in the preconfig state:
qmp_capabilities
query-qmp-schema
query-commands
query-command-line-options
query-status
exit-preconfig
to allow a QMP connection, basic introspection and moving to the next
state.
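For illustration, a command opts in through its schema definition, roughly
like this (a sketch, not necessarily the exact schema text):

{ 'command': 'exit-preconfig', 'allow-preconfig': true }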
PS:
set-numa-node and query-hotpluggable-cpus will be enabled later in
separate patches.
Backports commit d6fe3d02e9a2ce7b63a0723d0b71f3013f59d705 from qemu
It was missed in the first version of the OOB series. We should check this
to make sure we throw the right error when a faulty value is passed in.
Backports commit 9408860165e07aaadec66c336f3dc849b945a8ed from qemu
Here "oob" stands for "Out-Of-Band". When "allow-oob" is set, it means
the command allows out-of-band execution.
The "oob" idea is proposed by Markus Armbruster in following thread:
https://lists.gnu.org/archive/html/qemu-devel/2017-09/msg02057.html
This new "allow-oob" boolean will be exposed by "query-qmp-schema" as
well for command entries, so that QMP clients can know which commands
can be used in out-of-band calls. For example the command "migrate"
originally looks like:
{"name": "migrate", "ret-type": "17", "meta-type": "command",
"arg-type": "86"}
And it'll be changed into:
{"name": "migrate", "ret-type": "17", "allow-oob": false,
"meta-type": "command", "arg-type": "86"}
This patch only provides the QMP interface level changes. It does not
contain the real out-of-band execution implementation yet.
Backports commit 876c67512e2af8c05686faa9f9ff49b38d7a392c from qemu
This implements NPT support for SVM by hooking into
x86_cpu_handle_mmu_fault where it reads the stage-1 page table. Whether
we need to perform this 2nd stage translation, and how, is decided
during vmrun and stored in hflags2, along with nested_cr3 and
nested_pg_mode.
As get_hphys performs a direct cpu_vmexit in case of NPT faults, we need
retaddr in that function. To avoid changing the signature of
cpu_handle_mmu_fault, this passes the value from tlb_fill to get_hphys
via the CPU state.
This was tested successfully via the Jailhouse hypervisor.
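A minimal sketch of the "pass retaddr via the CPU state" idea (hypothetical
names and types, not the actual target/i386 code):

    #include <stdint.h>

    typedef struct CPUSketch {
        uintptr_t retaddr;   /* host return address, saved for a possible vmexit */
    } CPUSketch;

    static uint64_t get_hphys(CPUSketch *cs, uint64_t gpa)
    {
        /* On an NPT fault this path would trigger a vmexit using
         * cs->retaddr; without nested paging it is an identity mapping. */
        (void)cs;
        return gpa;
    }

    static void tlb_fill(CPUSketch *cs, uint64_t addr, uintptr_t retaddr)
    {
        cs->retaddr = retaddr;   /* stash it instead of changing every signature */
        (void)get_hphys(cs, addr);
    }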
Backports commit fe441054bb3f0c75ff23335790342c0408e11c3a from qemu
In commit 71b9a45330fe220d1 we changed the condition we use
to determine whether we need to refill the TLB in
get_page_addr_code() to
    if (unlikely(env->tlb_table[mmu_idx][index].addr_code !=
                 (addr & (TARGET_PAGE_MASK | TLB_INVALID_MASK)))) {
This isn't the right check (it will falsely fail if the
input addr happens to have the low bit corresponding to
TLB_INVALID_MASK set, for instance). Replace it with a
use of the new tlb_hit() function, which is the correct test.
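With the helper, the refill condition presumably becomes something like:

    if (unlikely(!tlb_hit(env->tlb_table[mmu_idx][index].addr_code, addr))) {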
Backports commit e4c967a7201400d7f76e5847d5b4c4ac9e2566e0 from qemu
The condition to check whether an address has hit against a particular
TLB entry is not completely trivial. We do this in various places, and
in fact in one place (get_page_addr_code()) we have got the condition
wrong. Abstract it out into new tlb_hit() and tlb_hit_page() inline
functions (one for a known-page-aligned address and one for an
arbitrary address), and use them in all the places where we had the
condition correct.
This is a no-behaviour-change patch; we leave fixing the buggy
code in get_page_addr_code() to a subsequent patch.
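A sketch of what such helpers look like, assuming the usual QEMU conventions
(tlb_addr is the addr_read/addr_write/addr_code field of a CPUTLBEntry; the
exact definitions live in the backported header):

    static inline bool tlb_hit_page(target_ulong tlb_addr, target_ulong page)
    {
        /* 'page' must be page aligned; flag bits other than
         * TLB_INVALID_MASK are masked out of the TLB entry. */
        return page == (tlb_addr & (TARGET_PAGE_MASK | TLB_INVALID_MASK));
    }

    static inline bool tlb_hit(target_ulong tlb_addr, target_ulong addr)
    {
        return tlb_hit_page(tlb_addr, addr & TARGET_PAGE_MASK);
    }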
Backports commit 334692bce7f0653a93b8d84ecde8c847b08dec38 from qemu
There is no need to re-set these 3 features already
implied by the call to cortex_a15_initfn.
Backports commit 0b33968e7f4cf998f678b2d1a5be3d6f3f3513d8 from qemu
There is no need to re-set these 9 features already
implied by the call to aarch64_a57_initfn.
Backports commit 156a7065365578deb3d63c2b5b69a4b5999a8fcc from qemu
Leave ARM_CP_SVE, removing ARM_CP_FPU; the sve_access_check
produced by the flag already includes fp_access_check. If
we also check ARM_CP_FPU the double fp_access_check asserts.
Backports commit 11d7870b1b4d038d7beb827f3afa72e284701351 from qemu
We already check for the same condition within the normal integer
sdiv and sdiv64 helpers. Use a slightly different formulation that
does not require deducing the expression type.
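The condition in question is the ARM-mandated divide behaviour, which also
has to avoid the host trapping on the one overflowing case; as a standalone
sketch (not the exact helper or macro touched by this patch):

    #include <stdint.h>

    static int64_t sdiv64_safe(int64_t n, int64_t m)
    {
        if (m == 0) {
            return 0;                /* ARM divide-by-zero yields 0 */
        }
        if (n == INT64_MIN && m == -1) {
            return INT64_MIN;        /* would overflow and trap on x86 hosts */
        }
        return n / m;
    }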
Backports commit 7e8fafbfd0537937ba8fb366a90ea6548cc31576 from qemu
Since kernel commit a86bd139f2 (arm64: arch_timer: Enable CNTVCT_EL0
trap..), released in kernel version v4.12, user-space has been able
to read these system registers. As we can't use QEMUTimers in
linux-user mode, we just call cpu_get_clock() directly.
Backports commit 26c4a83bd4707797868174332a540f7d61288d15 from qemu
We've already added the helpers with an SVE patch, all that remains
is to wire up the aa64 and aa32 translators. Enable the feature
within -cpu max for CONFIG_USER_ONLY.
Backports commit 26c470a7bb4233454137de1062341ad48947f252 from qemu
Enhance the existing helpers to support SVE, which takes the
index from each 128-bit segment. The change has no effect
for AdvSIMD, since there is only one such segment.
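A sketch of the "index per 128-bit segment" behaviour for 32-bit elements
(a hypothetical helper using plain host floats rather than softfloat):

    /* Within each 16-byte segment, multiply every element by the element
     * at position 'index' of that same segment.  With a single segment
     * (AdvSIMD), this degenerates to the old behaviour. */
    static void fmul_idx_f32(float *d, const float *n, const float *m,
                             unsigned index, unsigned elements)
    {
        const unsigned per_seg = 16 / sizeof(float);   /* 4 per segment */
        unsigned seg, i;

        for (seg = 0; seg < elements; seg += per_seg) {
            float mm = m[seg + index];                 /* segment-local index */
            for (i = 0; i < per_seg; i++) {
                d[seg + i] = n[seg + i] * mm;
            }
        }
    }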
Backports commit 18fc24057815bf3d956cfab892a2bc2344bd1dcb from qemu
For aa64 advsimd, we had been passing the pre-indexed vector.
However, sve applies the index to each 128-bit segment, so we
need to pass in the index separately.
For aa32 advsimd, the fp32 operation always has index 0, but
we failed to interpret the fp16 index correctly.
Backports commit 2cc99919a81a62589a4a6b0f365eabfead1db1a7 from qemu