Commit graph

6343 commits

Author SHA1 Message Date
AeonLucid 2d8117a0f1 Fixes #1143 (#1144)
Backports commit c46e745338f880a7991adcd53b683722a5d52ad2 from unicorn.
2020-01-14 09:08:41 -05:00
naq de842ee76c docs: we no longer require python2 for building
Backports commit 27cf6617a3d9b0824fe40badf2c6b97d40e395c8 from unicorn.
2020-01-14 09:07:44 -05:00
Chen Huitao 00ffa5c930 Remove warnings (#1140)
* remove warnings on windows with vs2019.

* remove warnings.

Backports commit ca6516ff790f2c6b2bc59a6b7472cb25be0f82b8 from unicorn.
2020-01-14 09:05:43 -05:00
BAYET bcef414231 Handle serialization of cpu context save (#1129)
* Handle the CPU context save in a more Pythonic way, so the context can be serialized and reused in another process using the same emulator architecture and modes

* Fix type error; a size_t was mistaken for a uint64_t, which breaks on 32-bit...

Backports commit 8987ad0fffadd16669aa3b402e7e8aaab70ad700 from qemu
2020-01-14 09:02:53 -05:00
Chen Huitao 221333ceaf check arguments, return error instead of raising exceptions. (#1125)
* Check arguments and return an error instead of raising exceptions. Closes #1117.

* Remove empty lines and the underscore prefix in the function name.

Backports commit 23a426625f1469bd2052eab7d014deb6b9820bf2 from unicorn.
2020-01-14 09:00:11 -05:00
Pan Nengyuan 134a026e6b arm/translate-a64: fix uninitialized variable warning
Fixes:
target/arm/translate-a64.c: In function 'disas_crypto_three_reg_sha512':
target/arm/translate-a64.c:13625:9: error: 'genfn' may be used uninitialized in this function [-Werror=maybe-uninitialized]
genfn(tcg_rd_ptr, tcg_rn_ptr, tcg_rm_ptr);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
qemu/target/arm/translate-a64.c:13609:8: error: 'feature' may be used uninitialized in this function [-Werror=maybe-uninitialized]
if (!feature) {

Backports commit c7a5e7910517e2711215a9e869a733ffde696091 from qemu
2020-01-14 08:46:42 -05:00
Xiaoyao Li 31ab6fbd2c target/i386: Add missed features to Cooperlake CPU model
The current Cooperlake CPU model lacks VMX features and two recently disclosed
security feature bits in MSR_IA32_ARCH_CAPABILITIES, so add them.

Fixes: 22a866b6166d ("i386: Add new CPU model Cooperlake")

Backports commit 2dea9d9ca4ea7e9afe83d0b4153b21a16987e866 from qemu
2020-01-14 08:43:26 -05:00
Xiaoyao Li 5e0b249dc0 target/i386: Add new bit definitions of MSR_IA32_ARCH_CAPABILITIES
Bits 6, 7 and 8 of MSR_IA32_ARCH_CAPABILITIES were recently disclosed
for some security issues. Add the definitions for them to be used by named
CPU models.

Backports commit 6c997b4adb300788d61d72e2b8bc67c03a584956 from qemu
2020-01-14 08:30:17 -05:00
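
For reference, a sketch of the three new bit definitions; the macro names below follow what upstream QEMU uses for bits 6-8 and are shown here only as an illustration of the backport, not quoted from it:

    #define MSR_ARCH_CAP_IF_PSCHANGE_MC_NO  (1U << 6)  /* no MCE on page-size change */
    #define MSR_ARCH_CAP_TSX_CTRL_MSR       (1U << 7)  /* IA32_TSX_CTRL MSR present */
    #define MSR_ARCH_CAP_TAA_NO             (1U << 8)  /* not affected by TSX Async Abort */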
Alex Bennée 8f275077b0 target/arm: only update pc after semihosting completes
Before we introduce blocking semihosting calls we need to ensure we
can restart the system on a semihosting exception. To be able to do
this the EXCP_SEMIHOST operation should be idempotent until it finally
completes. Practically this means ensuring we only update the pc
after the semihosting call has completed.

Backports commit 4ff5ef9e911c670ca10cdd36dd27c5395ec2c753 from qemu
2020-01-14 08:28:25 -05:00
Alex Bennée ea2714796f target/arm: remove unused EXCP_SEMIHOST leg
All semihosting exceptions are dealt with earlier in the common code
so we should never get here.

Backports commit b906acbb3aceed5b1eca30d9d365d5bd7431400b from qemu
2020-01-14 08:18:51 -05:00
Laurent Vivier fe60494b77 target/m68k: only change valid bits in CACR
This is used by NetBSD (and the MacOS ROM) to detect the MMU type.

Backports commit 18b6102e51bb317d25ee61b49b7b56702b79560c from qemu
2020-01-14 08:17:14 -05:00
Eduardo Habkost 9a6fb2bad6 configure: Require Python >= 3.5
Python 3.5 is the oldest Python version available on our
supported build platforms, and Python 2 end of life will be 3
weeks after the planned release date of QEMU 4.2.0. Drop Python
2 support from configure completely, and require Python 3.5 or
newer.

Backports commit ddf90699631db53c981b6a5a63d31c08e0eaeec7 from qemu
2020-01-14 08:09:23 -05:00
Markus Armbruster 2814d68506 util/cutils: Turn FIXME comment into QEMU_BUILD_BUG_ON()
qemu_strtoi64() assumes int64_t is long long. This is marked FIXME.
Replace it with a QEMU_BUILD_BUG_ON() to avoid surprises.

Same for qemu_strtou64().

Fix a typo in qemu_strtoul()'s contract while there.

Backports commit 369276ebf3cbba419653a19a01b790f3bcf3aea7 from qemu
2020-01-14 08:04:30 -05:00
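
As a self-contained sketch of the idea (QEMU's real macro is QEMU_BUILD_BUG_ON; the stand-in below only illustrates the compile-time check that replaces the FIXME):

    #include <stdint.h>

    /* Stand-in for QEMU_BUILD_BUG_ON: fail the build if the condition holds. */
    #define BUILD_BUG_ON(cond) _Static_assert(!(cond), #cond)

    /* qemu_strtoi64()/qemu_strtou64() call strtoll()/strtoull(), so they
     * assume int64_t is long long; make that assumption a build-time check. */
    BUILD_BUG_ON(sizeof(int64_t) != sizeof(long long));
    BUILD_BUG_ON(sizeof(uint64_t) != sizeof(unsigned long long));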
Cathy Zhang 2952ab497f i386: Add new CPU model Cooperlake
Cooper Lake is Intel's successor to Cascade Lake. The new
CPU model inherits features from Cascadelake-Server and adds one
new platform-associated feature: AVX512_BF16. It also adds STIBP
for speculative execution control.

Backports commit 22a866b6166db5caa4abaa6e656c2a431fa60726 from qemu
2020-01-14 08:00:22 -05:00
Cathy Zhang 67b6034a0f i386: Add macro for stibp
The STIBP feature was already added by the following commit:
0e89165829

Add a macro for it to allow CPU models to report it when the host supports it.

Backports commit 5af514d0cb314f43bc53f2aefb437f6451d64d0c from qemu
2020-01-14 08:00:22 -05:00
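
The macro is a single CPUID feature bit; a sketch assuming the upstream name (STIBP is bit 27 of CPUID.(EAX=7,ECX=0):EDX):

    /* Single Thread Indirect Branch Predictors (CPUID.7.0:EDX bit 27). */
    #define CPUID_7_0_EDX_STIBP  (1U << 27)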
Lioncash 067a459774 i386: Backport some formatting changes 2020-01-14 08:00:18 -05:00
Cathy Zhang 8c0ed30d38 i386: Add MSR feature bit for MDS-NO
Define MSR_ARCH_CAP_MDS_NO in the IA32_ARCH_CAPABILITIES MSR to allow
CPU models to report the feature when the host supports it.

Backports commit 77b168d221191156c47fcd8d1c47329dfdb9439e from qemu
2020-01-14 07:56:43 -05:00
Alex Bennée 639c5c4fe2 target/arm: ensure we use current exception state after SCR update
A write to the SCR can change the effective EL by dropping the system
from secure to non-secure mode. However, if we use a cached current_el
from before the change we'll rebuild the flags incorrectly. To fix
this we introduce the ARM_CP_NEWEL CP flag to indicate that the new EL
should be used when recomputing the flags.

Backports part of commit f80741d107673f162e3b097fc76a1590036cc9d1 from
qemu
2020-01-14 07:51:10 -05:00
Beata Michalska 81c14bb595 target/arm: Add support for DC CVAP & DC CVADP ins
ARMv8.2 introduced support for Data Cache Clean instructions
to the point of persistence (PoP) - DC CVAP - and to the point of
deep persistence (PoDP) - DC CVADP. Both specify conceptual points in a
memory system where all writes that reach them are considered persistent.
The support provided here treats both as the same, so there is no
distinction between the two. If neither is available (there is no backing store
for the given memory), both result in a Data Cache Clean up to the point of
coherency. Otherwise a sync for the specified range is performed.

Backports commit 0d57b49992200a926c4436eead97ecfc8cc710be from qemu
2020-01-14 07:47:48 -05:00
Beata Michalska 0716794d86 Memory: Enable writeback for given memory region
Add an option to trigger memory writeback to sync a given memory region
with the corresponding backing store, in case one is available.
This extends the support for persistent memory, allowing on-demand syncing.

Backports commit 61c490e25e081af39ff40556f6c1229b8b011585 from qemu
2020-01-14 07:44:24 -05:00
Beata Michalska 47776dc862 tcg: cputlb: Add probe_read
Add probe_read alongside the write probing equivalent.

Backports commit 9e70492b4389d4355ae9c9ee2ba6286cfdadc257 from qemu
2020-01-14 07:16:41 -05:00
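
Upstream, probe_read and probe_write end up as thin wrappers around the generic probe_access() (whose backport appears further down this list); a sketch of that wrapper shape, assumed from the upstream headers:

    /* Same contract as probe_write(), but probing for a load. */
    static inline void *probe_read(CPUArchState *env, target_ulong addr,
                                   int size, int mmu_idx, uintptr_t retaddr)
    {
        return probe_access(env, addr, size, MMU_DATA_LOAD, mmu_idx, retaddr);
    }

    static inline void *probe_write(CPUArchState *env, target_ulong addr,
                                    int size, int mmu_idx, uintptr_t retaddr)
    {
        return probe_access(env, addr, size, MMU_DATA_STORE, mmu_idx, retaddr);
    }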
David Hildenbrand de513617c8 accel/tcg: allow to invalidate a write TLB entry immediately
Background: s390x implements Low-Address Protection (LAP). If LAP is
enabled, writing to effective addresses (before any translation)
0-511 and 4096-4607 triggers a protection exception.

So we have subpage protection on the first two pages of every address
space (where the lowcore - the CPU's private data - resides).

By immediately invalidating the write entry but allowing the caller to
continue, we force every write access to these first two pages into
the slow path. We will get a TLB fault with the specific accessed
addresses and can then evaluate whether protection applies or not.

We have to make sure to ignore the invalid bit if tlb_fill() succeeds.

Backports commit f52bfb12143e29d7c8bd827bdb751aee47a9694e from qemu
2020-01-14 07:14:10 -05:00
David Hildenbrand d9d91c1db6 tcg: Factor out probe_write() logic into probe_access()
Let's also allow probing other access types.

Backports commit c25c283df0f08582df29f1d5d7be1516b851532d from qemu
2020-01-14 07:07:54 -05:00
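
The factored-out interface looks roughly like this (prototype shape as in the referenced upstream commit, shown for illustration only):

    /* Probe a guest access of `size` bytes within a single page.  Returns
     * the host address for a direct access, or NULL if the access has to
     * take the slow path; on failure it faults against `retaddr`. */
    void *probe_access(CPUArchState *env, target_ulong addr, int size,
                       MMUAccessType access_type, int mmu_idx, uintptr_t retaddr);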
David Hildenbrand 53c3c47efa tcg: Make probe_write() return a pointer to the host page
... similar to tlb_vaddr_to_host(); however, allow access to the host
page except when TLB_NOTDIRTY or TLB_MMIO is set.

Backports commit fef39ccd567032d3ad520ed80f3576068e6eb2e3 from qemu
2020-01-14 07:04:17 -05:00
David Hildenbrand 2bc3843fe3 tcg: Enforce single page access in probe_write()
Let's enforce the interface restriction.

Backports commit ca86cf328ce216bb304bbf09a43614613f945d86 from qemu
2020-01-14 07:02:15 -05:00
David Hildenbrand b732ad9eba tcg: Check for watchpoints in probe_write()
Let size > 0 indicate a promise to write to those bytes.
Check for write watchpoints in the probed range.

Backports commit 03a981893c99faba84bb373976796ad7dce0aecc from qemu
2020-01-14 07:01:05 -05:00
Richard Henderson 07f30382c0 cputlb: Handle watchpoints via TLB_WATCHPOINT
The raising of exceptions from check_watchpoint, buried inside
of the I/O subsystem, is fundamentally broken. We do not have
the helper return address with which we can unwind guest state.

Replace PHYS_SECTION_WATCH and io_mem_watch with TLB_WATCHPOINT.
Move the call to cpu_check_watchpoint into the cputlb helpers
where we do have the helper return address.

This allows watchpoints on RAM to bypass the full i/o access path.

Backports commit 50b107c5d617eaf93301cef20221312e7a986701 from qemu
2020-01-14 06:58:33 -05:00
Richard Henderson 6c4a3fd06f cputlb: Fold TLB_RECHECK into TLB_INVALID_MASK
We had two different mechanisms to force a recheck of the tlb.

Before TLB_RECHECK was introduced, we had a PAGE_WRITE_INV bit
that would immediately set TLB_INVALID_MASK, which automatically
means that a second check of the tlb entry fails.

We can use the same mechanism to handle small pages.
Conserve TLB_* bits by removing TLB_RECHECK.

Backports commit 30d7e098d5c38644359820317fcf72e3e129ec53 from qemu
2020-01-14 06:20:33 -05:00
David Hildenbrand f7b61b95f0 tcg: Factor out CONFIG_USER_ONLY probe_write() from s390x code
Factor it out into common code. Similar to the !CONFIG_USER_ONLY variant,
let's not allow crossing page boundaries.

Backports commit 59e96ac6cb13951dd09afc70622858089abf3384 from qemu
2020-01-12 10:27:49 -05:00
Richard Henderson bb313206e5 cputlb: Remove double-alignment in store_helper
We have already aligned page2 to the start of the next page.
There is no reason to do that a second time.

Backports commit 5787585d0406cfd54dda0c71ea1a603347ce6e71 from qemu
2020-01-12 10:25:13 -05:00
Richard Henderson 6990b212e3 cputlb: Fix size operand for tlb_fill on unaligned store
We are currently passing the size of the full write to
the tlb_fill for the second page. Instead pass the real
size of the write to that page.

This argument is unused within all tlb_fill, except to be
logged via tracing, so in practice this makes no difference.

But in a moment we'll need the value of size2 for watchpoints,
and if we've computed the value we might as well use it.

Backports commit 8f7cd2ad4acd01242d00807e231097b3de9f0930 from qemu
2020-01-12 06:17:09 -05:00
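
Conceptually the fix is just computing how many bytes of the unaligned store actually land on the second page and handing that count to tlb_fill; a sketch of the arithmetic (variable names chosen here for illustration):

    /* An unaligned store of `size` bytes at `addr` spills onto the page that
     * starts at page2; only size2 of those bytes belong to that page. */
    target_ulong page2 = (addr + size) & TARGET_PAGE_MASK;
    size_t size2 = addr + size - page2;
    /* The tlb_fill() for the second page is now given size2, not size. */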
Tony Nguyen 15eb165995 target/sparc: sun4u Invert Endian TTE bit
This bit configures endianness of PCI MMIO devices. It is used by
Solaris and OpenBSD sunhme drivers.

Tested working on OpenBSD.

Unfortunately Solaris 10 had an unrelated keyboard issue blocking
testing... another inch towards Solaris 10 on SPARC64 =)

Backports commit ccdb4c5535f41ee4da2ef158f58fca0327e50dab from qemu
2020-01-07 19:21:30 -05:00
Tony Nguyen 7eea07fe55 target/sparc: Add TLB entry with attributes
Append MemTxAttrs to interfaces so we can pass along the upcoming Invert
Endian TTE bit on SPARC64.

Backports commit 9bed46e67e2ee54bc596ba58063ee71a5ca40923 from qemu
2020-01-07 19:19:30 -05:00
Tony Nguyen a95927de1d cputlb: Byte swap memory transaction attribute
Notice the new attribute, byte swap, and force the transaction through the
memory slow path.

Required by architectures that can invert the endianness of a memory
transaction, e.g. SPARC64 has the Invert Endian TTE bit.

Backports commit a26fc6f5152b47f1d7ed928f9c9d462d01ff1624 from qemu
2020-01-07 19:15:33 -05:00
Tony Nguyen 103d6f51c8 memory: Single byte swap along the I/O path
Now that MemOp has been pushed down into the memory API, and
callers are encoding endianness, we can collapse byte swaps
along the I/O path into the accelerator and target independent
adjust_endianness.

Collapsing byte swaps along the I/O path enables additional endian
inversion logic, e.g. SPARC64 Invert Endian TTE bit, with redundant
byte swaps cancelling out.

Backports commit 9bf825bf3df4ebae3af51566c8088e3f1249a910 from qemu
2020-01-07 19:12:04 -05:00
Tony Nguyen ad8957a4c3 cputlb: Replace size and endian operands for MemOp
Preparation for collapsing the two byte swaps adjust_endianness and
handle_bswap into the former.

Backports commit be5c4787e9a6eed12fd765d9e890f7cc6cd63220 from qemu
2020-01-07 19:03:51 -05:00
Tony Nguyen da98d0da4e memory: Access MemoryRegion with endianness
Preparation for collapsing the two byte swaps adjust_endianness and
handle_bswap into the former.

Call memory_region_dispatch_{read|write} with endianness encoded into
the "MemOp op" operand.

This patch does not change any behaviour as
memory_region_dispatch_{read|write} is yet to handle the endianness.

Once it does handle endianness, callers with byte swaps can collapse
them into adjust_endianness.

Backports commit d5d680cacc66ef7e3c02c81dc8f3a34eabce6dfe from qemu
2020-01-07 18:54:11 -05:00
Tony Nguyen b335c4756a exec: Hard code size with MO_{8|16|32|64}
The temporarily no-op size_memop was introduced to aid the conversion of
the memory_region_dispatch_{read|write} operand "unsigned size" into
"MemOp op".

Now that size_memop is implemented, hard code the size again, but with
MO_{8|16|32|64}. This is more expressive and avoids size_memop calls.

Backports commit 07f0834f264a79d6225202bd35ca37f74afb8df1 from qemu
2020-01-07 18:33:15 -05:00
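
At a call site the change is mechanical; a hedged before/after sketch (argument-list shape assumed from upstream QEMU):

    /* Before: size in bytes, wrapped through the no-op helper. */
    memory_region_dispatch_write(mr, addr, val, size_memop(4), attrs);

    /* After: the equivalent MemOp constant, no size_memop() call needed. */
    memory_region_dispatch_write(mr, addr, val, MO_32, attrs);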
Tony Nguyen cb5688009e target/mips: Hard code size with MO_{8|16|32|64}
The temporarily no-op size_memop was introduced to aid the conversion of
the memory_region_dispatch_{read|write} operand "unsigned size" into
"MemOp op".

Now that size_memop is implemented, hard code the size again, but with
MO_{8|16|32|64}. This is more expressive and avoids size_memop calls.

Backports commit 4574664677116dedb29b12150137f3888374a857 from qemu
2020-01-07 18:30:39 -05:00
Tony Nguyen 435d2e5c67 memory: Access MemoryRegion with MemOp
Convert memory_region_dispatch_{read|write} operand "unsigned size"
into a "MemOp op".

Backports commit e67c904668d82ca4416cd91d37d9f5abcceef747 from qemu
2020-01-07 18:29:27 -05:00
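
After the conversion the dispatch prototypes carry a MemOp instead of a byte count; a sketch of their upstream shape (shown for illustration, the fork's exact parameter list may differ):

    MemTxResult memory_region_dispatch_read(MemoryRegion *mr, hwaddr addr,
                                            uint64_t *pval, MemOp op,
                                            MemTxAttrs attrs);
    MemTxResult memory_region_dispatch_write(MemoryRegion *mr, hwaddr addr,
                                             uint64_t data, MemOp op,
                                             MemTxAttrs attrs);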
Tony Nguyen 3b777a2332 cputlb: Access MemoryRegion with MemOp
The memory_region_dispatch_{read|write} operand "unsigned size" is
being converted into a "MemOp op".

Convert interfaces by using no-op size_memop.

After all interfaces are converted, size_memop will be implemented
and the memory_region_dispatch_{read|write} operand "unsigned size"
will be converted into a "MemOp op".

As size_memop is a no-op, this patch does not change any behaviour.

Backports commit 4cbb198eefef41bbca703605c78875fd4fec6ef6 from qemu
2020-01-07 18:26:29 -05:00
Tony Nguyen ab64c53bd0 exec: Access MemoryRegion with MemOp
The memory_region_dispatch_{read|write} operand "unsigned size" is
being converted into a "MemOp op".

Convert interfaces by using no-op size_memop.

After all interfaces are converted, size_memop will be implemented
and the memory_region_dispatch_{read|write} operand "unsigned size"
will be converted into a "MemOp op".

As size_memop is a no-op, this patch does not change any behaviour.

Backports commit 3d9e7c3e7bf11962e1100d077e46f93f780b7310 from qemu
2020-01-07 18:25:19 -05:00
Tony Nguyen 7e9a1641c2 target/mips: Access MemoryRegion with MemOp
The memory_region_dispatch_{read|write} operand "unsigned size" is
being converted into a "MemOp op".

Convert interfaces by using no-op size_memop.

After all interfaces are converted, size_memop will be implemented
and the memory_region_dispatch_{read|write} operand "unsigned size"
will be converted into a "MemOp op".

As size_memop is a no-op, this patch does not change any behaviour.

Backports commit e501824b3f3b3650e7cb8a509064cac01bc27c82 from qemu
2020-01-07 18:21:31 -05:00
Tony Nguyen dd78f65bc6 memory: Introduce size_memop
Introduce no-op size_memop to aid preparatory conversion of
interfaces.

Once interfaces are converted, size_memop will be implemented to
return a MemOp from size in bytes.

Backports commit 66b9b24375ac215cdcbdf9e14d665395360abff4 from qemu
2020-01-07 18:19:35 -05:00
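
The helper starts out as a pass-through and only later gets its real body; a sketch under that assumption (the eventual mapping described in the comment is how upstream QEMU implements it):

    /* This backport: a temporary no-op - the operand is still the size in
     * bytes, so pass it through unchanged while call sites are converted. */
    static inline MemOp size_memop(unsigned size)
    {
        /* Later in the series this becomes a real mapping onto MO_{8,16,32,64},
         * i.e. log2(size) (upstream implements it as ctz32(size)). */
        return size;
    }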
Niek Linnenbank 998714db1f arm/arm-powerctl: set NSACR.{CP11, CP10} bits in arm_set_cpu_on()
This change ensures that the FPU can be accessed in Non-Secure mode
when the CPU core is reset using the arm_set_cpu_on() function call.
The NSACR.{CP11,CP10} bits define the exception level required to
access the FPU in Non-Secure mode. Without these bits set, the CPU
will give an undefined exception trap on the first FPU access for the
secondary cores under Linux.

This is necessary because in this power-control codepath QEMU
is effectively emulating a bit of EL3 firmware, and has to set
the CPU up as the EL3 firmware would.

Fixes: fc1120a7f5

Backports commit 0c7f8c43daf6556078e51de98aa13f069e505985 from qemu
2020-01-07 18:10:29 -05:00
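
The actual change is small; a sketch of what the arm_set_cpu_on() power-on path ends up doing, based on the referenced upstream commit (field names assumed from upstream, shown for illustration):

    /* Mimic EL3 firmware: grant Non-Secure access to CP10/CP11 (the FPU)
     * for the core being powered on, otherwise its first FPU access traps. */
    if (arm_feature(&target_cpu->env, ARM_FEATURE_EL3)) {
        target_cpu->env.cp15.nsacr |= 3 << 10;  /* set NSACR.{CP11,CP10} */
    }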
Marc Zyngier 9c3e512479 target/arm: Add support for missing Jazelle system registers
QEMU lacks the minimum Jazelle implementation that is required
by the architecture (everything is RAZ or RAZ/WI). Add it
together with the HCR_EL2.TID0 trapping that goes with it.

Backports commit f96f3d5f09973ef40f164cf2d5fd98ce5498b82a from qemu
2020-01-07 18:09:13 -05:00
Marc Zyngier 457934855b target/arm: Handle AArch32 CP15 trapping via HSTR_EL2
HSTR_EL2 offers a way to trap ranges of CP15 system register
accesses to EL2, and it looks like this register is completely
ignored by QEMU.

To avoid adding extra .accessfn filters all over the place (which
would have a direct performance impact), let's add a new TB flag
that gets set whenever HSTR_EL2 is non-zero and that QEMU translates
a context where this trap has a chance to apply, and only generate
the extra access check if the hypervisor is actively using this feature.

Tested with a hand-crafted KVM guest accessing CBAR.

Backports commit 5bb0a20b74ad17dee5dae38e3b8b70b383ee7c2d from qemu
2020-01-07 18:07:21 -05:00
Marc Zyngier 868de52f69 target/arm: Handle trapping to EL2 of AArch32 VMRS instructions
HCR_EL2.TID3 requires that AArch32 reads of MVFR[012] are trapped to
EL2, and HCR_EL2.TID0 does the same for reads of FPSID.
In order to handle this, introduce a new TCG helper function that
checks for these control bits before executing the VMRS instruction.

Tested with a hacked-up version of KVM/arm64 that sets the control
bits for 32bit guests.

Backports commit 9ca1d776cb49c09b09579d9edd0447542970c834 from qemu
2020-01-07 18:04:16 -05:00
Marc Zyngier 51062d3fc2 target/arm: Honor HCR_EL2.TID1 trapping requirements
HCR_EL2.TID1 mandates that access from EL1 to REVIDR_EL1, AIDR_EL1
(and their 32bit equivalents) as well as TCMTR, TLBTR are trapped
to EL2. QEMU ignores it, making it harder for a hypervisor to
virtualize the HW (though to be fair, no known hypervisor actually
cares).

Do the right thing by trapping to EL2 if HCR_EL2.TID1 is set.

Backports commit 93fbc983b29a2eb84e2f6065929caf14f99c3681 from qemu
2020-01-07 18:00:01 -05:00
Marc Zyngier d1e981c44b target/arm: Honor HCR_EL2.TID2 trapping requirements
HCR_EL2.TID2 mandates that access from EL1 to CTR_EL0, CCSIDR_EL1,
CCSIDR2_EL1, CLIDR_EL1, CSSELR_EL1 are trapped to EL2, and QEMU
completely ignores it, making it impossible for hypervisors to
virtualize the cache hierarchy.

Do the right thing by trapping to EL2 if HCR_EL2.TID2 is set.

Backports commit 630fcd4d2ba37050329e0adafdc552d656ebe2f3 from qemu
2020-01-07 17:55:40 -05:00