Commit graph

768 commits

Author SHA1 Message Date
Richard Henderson 22004b8106 softfloat: Return bool from all classification predicates
This includes *_is_any_nan, *_is_neg, *_is_inf, etc.

Backports commit 150c7a91ce7862bcaf7422f6038dcf0ba4a7eee3 from qemu
2020-05-21 18:23:11 -04:00
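For context, a minimal sketch of what such a predicate looks like after this change (float32_is_neg is part of QEMU's softfloat API; the body shown is the illustrative sign-bit test):

    /* Classification predicate now returning bool rather than int. */
    static inline bool float32_is_neg(float32 a)
    {
        return float32_val(a) >> 31;  /* sign bit of the IEEE single format */
    }
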
Richard Henderson afd8d05aa2 softfloat: Inline floatx80 compare specializations
Replace the floatx80 compare specializations with inline functions
that call the standard floatx80_compare{,_quiet} functions.
Use bool as the return type.

Backports commit c6baf65000f826a713e8d9b5b35e617b0ca9ab5d from qemu
2020-05-21 18:17:53 -04:00
Richard Henderson 57d2419cd3 softfloat: Inline float128 compare specializations
Replace the float128 compare specializations with inline functions
that call the standard float128_compare{,_quiet} functions.
Use bool as the return type.

Backports commit b7b1ac684fea49c6bfe1ad8b706aed7b09116d15 from qemu
2020-05-21 18:15:55 -04:00
Richard Henderson 18a46c4d79 softfloat: Inline float64 compare specializations
Replace the float64 compare specializations with inline functions
that call the standard float64_compare{,_quiet} functions.
Use bool as the return type.

Backports commit 0673ecdf6cb2b1445a85283db8cbacb251c46516 from qemu
2020-05-21 18:13:44 -04:00
Richard Henderson a35333741a softfloat: Inline float32 compare specializations
Replace the float32 compare specializations with inline functions
that call the standard float32_compare{,_quiet} functions.
Use bool as the return type.

Backports commit 5da2d2d8e53d80e92a61720ea995c86b33cbf25d from qemu
2020-05-21 18:11:25 -04:00
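The pattern is the same for all four widths; a sketch of the float32 case, assuming the usual float_relation_* result values:

    static inline bool float32_eq(float32 a, float32 b, float_status *s)
    {
        return float32_compare(a, b, s) == float_relation_equal;
    }

    static inline bool float32_lt(float32 a, float32 b, float_status *s)
    {
        return float32_compare(a, b, s) == float_relation_less;
    }
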
Richard Henderson d960523cbd softfloat: Name compare relation enum
Give the previously unnamed enum a typedef name. Use it in the
prototypes of compare functions. Use it to hold the results
of the compare functions.

Backports commit 71bfd65c5fcd72f8af2735905415c7ce4220f6dc from qemu
2020-05-21 18:08:52 -04:00
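The resulting typedef looks roughly like this (values as used by the softfloat compare functions):

    typedef enum {
        float_relation_less      = -1,
        float_relation_equal     =  0,
        float_relation_greater   =  1,
        float_relation_unordered =  2,
    } FloatRelation;
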
Richard Henderson 8adc704058 softfloat: Name rounding mode enum
Give the previously unnamed enum a typedef name. Use the packed
attribute so that we do not affect the layout of the float_status
struct. Use it in the prototypes of relevant functions.

Adjust switch statements as necessary to avoid compiler warnings.

Backports commit 3dede407cc61b64997f0c30f6dbf4df09949abc9 from qemu
2020-05-21 18:02:05 -04:00
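A sketch of the named, packed enum (the member list follows the existing float_round_* constants):

    typedef enum __attribute__((__packed__)) {
        float_round_nearest_even = 0,
        float_round_down         = 1,
        float_round_up           = 2,
        float_round_to_zero      = 3,
        float_round_ties_away    = 4,
        float_round_to_odd       = 5,  /* non-IEEE: round to nearest odd */
    } FloatRoundMode;
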
Richard Henderson a5c8178e35 softfloat: Change tininess_before_rounding to bool
Slightly tidies the usage within softfloat.c and the
representation in float_status.

Backports commit a828b373bdabc7e53d1e218e3fc76f85b6674688 from qemu
2020-05-21 17:52:50 -04:00
Richard Henderson a417227674 softfloat: Replace flag with bool
We have had this on the to-do list for quite some time.

Backports commit c120391c0090d9c40425c92cdb00f38ea8588ff6 from qemu
2020-05-21 17:48:12 -04:00
Richard Henderson 4016b667f3 accel/tcg: Add block comment for probe_access
Backports commit 857129b34190a4c2e782006dc255352a6cd3934b from qemu
2020-05-11 16:42:10 -04:00
Peter Maydell c6509498da osdep.h: Drop no-longer-needed Coverity workarounds
In commit a1a98357e3fd in 2018 we added some workarounds for Coverity
not being able to handle the _Float* types introduced by recent
glibc. Newer versions of the Coverity scan tools have support for
these types, and will fail with errors about duplicate typedefs if we
have our workaround. Remove our copy of the typedefs.

Backports commit c160f17cd6f5fc3f8698b408a451149b34b1a647 from qemu
2020-04-30 07:27:24 -04:00
Alexander Duyck 05cd02d6c6 memory: Do not allow direct write access to rom_device regions
According to the documentation in memory.h a ROM memory region will be
backed by RAM for reads, but is supposed to go through a callback for
writes. Currently we do not check for the existence of the rom_device
flag when determining whether we can perform a direct write.

To correct that add a check to memory_region_is_direct so that if the
memory region has the rom_device flag set we will return false for all
checks where is_write is set.

Backports commit d489ae4ac57ebe14bde8384556cbac237ead988d from qemu
2020-04-30 07:26:06 -04:00
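A sketch of the direct-access check after this change, following the upstream helper (the exact helper name and field set in this tree may differ slightly):

    static bool memory_access_is_direct(MemoryRegion *mr, bool is_write)
    {
        if (is_write) {
            /* ROM devices must never be written directly. */
            return memory_region_is_ram(mr) && !mr->readonly &&
                   !mr->rom_device && !memory_region_is_ram_device(mr);
        }
        return (memory_region_is_ram(mr) && !mr->rom_device) ||
               memory_region_is_romd(mr);
    }
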
Taylor Simpson 6507fdb3b1 tcg: Add support for a helper with 7 arguments
Currently, helpers can only take up to 6 arguments. This patch adds the
capability for up to 7 arguments. I have tested it with the Hexagon port
that I am preparing for submission.

Backports commit e6cadf49c3d191f6984e56ec3bbeb0b103ca5bc2 from qemu
2020-03-21 16:53:56 -04:00
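A hypothetical declaration showing the new arity, assuming the DEF_HELPER_FLAGS_7 form added by this change (the helper name and argument types below are illustrative only, not taken from this tree):

    DEF_HELPER_FLAGS_7(my_insn, TCG_CALL_NO_RWG, void,
                       env, ptr, ptr, ptr, i32, i32, i32)
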
Richard Henderson e41c51f6da target/arm: Add VHE system register redirection and aliasing
Several of the EL1/0 registers are redirected to the EL2 version when in
EL2 and HCR_EL2.E2H is set. Many of these registers have side effects.
Link together the two ARMCPRegInfo structures after they have been
properly instantiated. Install common dispatch routines to all of the
relevant registers.

The same set of registers that are redirected also have additional
EL12/EL02 aliases created to access the original register that was
redirected.

Omit the generic timer registers from redirection here, because we'll
need multiple kinds of redirection from both EL0 and EL2.

Backports commit e2cce18f5c1d0d55328c585c8372cdb096bbf528 from qemu
2020-03-21 15:57:03 -04:00
Beata Michalska 0716794d86 Memory: Enable writeback for given memory region
Add an option to trigger memory writeback to sync a given memory region
with the corresponding backing store, in case one is available.
This extends the support for persistent memory, allowing on-demand syncing.

Backports commit 61c490e25e081af39ff40556f6c1229b8b011585 from qemu
2020-01-14 07:44:24 -05:00
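The interface added upstream looks roughly like the following prototype (signature hedged against minor differences in this tree):

    /* Sync [addr, addr + size) of the region's RAM back to its backing
     * store (e.g. a persistent-memory file), if one is attached. */
    void memory_region_writeback(MemoryRegion *mr, hwaddr addr, hwaddr size);
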
Beata Michalska 47776dc862 tcg: cputlb: Add probe_read
Add probe_read alongside the write probing equivalent.

Backports commit 9e70492b4389d4355ae9c9ee2ba6286cfdadc257 from qemu
2020-01-14 07:16:41 -05:00
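probe_read is a thin wrapper over probe_access, mirroring probe_write; a sketch:

    static inline void *probe_read(CPUArchState *env, target_ulong addr,
                                   int size, int mmu_idx, uintptr_t retaddr)
    {
        return probe_access(env, addr, size, MMU_DATA_LOAD, mmu_idx, retaddr);
    }
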
David Hildenbrand de513617c8 accel/tcg: allow to invalidate a write TLB entry immediately
Background: s390x implements Low-Address Protection (LAP). If LAP is
enabled, writing to effective addresses (before any translation)
0-511 and 4096-4607 triggers a protection exception.

So we have subpage protection on the first two pages of every address
space (where the lowcore, the CPU's private data, resides).

By immediately invalidating the write entry but allowing the caller to
continue, we force every write access to these first two pages into
the slow path. We will get a TLB fault with the specific accessed
address and can then evaluate whether protection applies or not.

We have to make sure to ignore the invalid bit if tlb_fill() succeeds.

Backports commit f52bfb12143e29d7c8bd827bdb751aee47a9694e from qemu
2020-01-14 07:14:10 -05:00
David Hildenbrand d9d91c1db6 tcg: Factor out probe_write() logic into probe_access()
Let's also allow probing other access types.

Backports commit c25c283df0f08582df29f1d5d7be1516b851532d from qemu
2020-01-14 07:07:54 -05:00
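After this factoring, the common entry point looks roughly like the following (prototype as upstream; hedged):

    /* Returns a host pointer for the probed page, or NULL when the access
     * must take the slow path (e.g. TLB_NOTDIRTY or TLB_MMIO is set). */
    void *probe_access(CPUArchState *env, target_ulong addr, int size,
                       MMUAccessType access_type, int mmu_idx,
                       uintptr_t retaddr);
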
David Hildenbrand 53c3c47efa tcg: Make probe_write() return a pointer to the host page
... similar to tlb_vaddr_to_host(); however, allow access to the host
page except when TLB_NOTDIRTY or TLB_MMIO is set.

Backports commit fef39ccd567032d3ad520ed80f3576068e6eb2e3 from qemu
2020-01-14 07:04:17 -05:00
Richard Henderson 07f30382c0 cputlb: Handle watchpoints via TLB_WATCHPOINT
The raising of exceptions from check_watchpoint, buried inside
of the I/O subsystem, is fundamentally broken. We do not have
the helper return address with which we can unwind guest state.

Replace PHYS_SECTION_WATCH and io_mem_watch with TLB_WATCHPOINT.
Move the call to cpu_check_watchpoint into the cputlb helpers
where we do have the helper return address.

This allows watchpoints on RAM to bypass the full I/O access path.

Backports commit 50b107c5d617eaf93301cef20221312e7a986701 from qemu
2020-01-14 06:58:33 -05:00
Richard Henderson 6c4a3fd06f cputlb: Fold TLB_RECHECK into TLB_INVALID_MASK
We had two different mechanisms to force a recheck of the tlb.

Before TLB_RECHECK was introduced, we had a PAGE_WRITE_INV bit
that would immediately set TLB_INVALID_MASK, which automatically
means that a second check of the TLB entry fails.

We can use the same mechanism to handle small pages.
Conserve TLB_* bits by removing TLB_RECHECK.

Backports commit 30d7e098d5c38644359820317fcf72e3e129ec53 from qemu
2020-01-14 06:20:33 -05:00
David Hildenbrand f7b61b95f0 tcg: Factor out CONFIG_USER_ONLY probe_write() from s390x code
Factor it out into common code. Similar to the !CONFIG_USER_ONLY variant,
let's not allow accesses to cross page boundaries.

Backports commit 59e96ac6cb13951dd09afc70622858089abf3384 from qemu
2020-01-12 10:27:49 -05:00
Tony Nguyen a95927de1d cputlb: Byte swap memory transaction attribute
Notice the new byte-swap attribute and force the transaction through the
memory slow path.

Required by architectures that can invert the endianness of a memory
transaction, e.g. SPARC64 has the Invert Endian TTE bit.

Backports commit a26fc6f5152b47f1d7ed928f9c9d462d01ff1624 from qemu
2020-01-07 19:15:33 -05:00
Tony Nguyen ad8957a4c3 cputlb: Replace size and endian operands for MemOp
Preparation for collapsing the two byte swaps adjust_endianness and
handle_bswap into the former.

Backports commit be5c4787e9a6eed12fd765d9e890f7cc6cd63220 from qemu
2020-01-07 19:03:51 -05:00
Tony Nguyen da98d0da4e memory: Access MemoryRegion with endianness
Preparation for collapsing the two byte swaps adjust_endianness and
handle_bswap into the former.

Call memory_region_dispatch_{read|write} with endianness encoded into
the "MemOp op" operand.

This patch does not change any behaviour as
memory_region_dispatch_{read|write} is yet to handle the endianness.

Once it does handle endianness, callers with byte swaps can collapse
them into adjust_endianness.

Backports commit d5d680cacc66ef7e3c02c81dc8f3a34eabce6dfe from qemu
2020-01-07 18:54:11 -05:00
Tony Nguyen 435d2e5c67 memory: Access MemoryRegion with MemOp
Convert memory_region_dispatch_{read|write} operand "unsigned size"
into a "MemOp op".

Backports commit e67c904668d82ca4416cd91d37d9f5abcceef747 from qemu
2020-01-07 18:29:27 -05:00
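After this conversion (and the endianness change above), the dispatch interface looks roughly like the following (prototypes hedged):

    MemTxResult memory_region_dispatch_read(MemoryRegion *mr, hwaddr addr,
                                            uint64_t *pval, MemOp op,
                                            MemTxAttrs attrs);
    MemTxResult memory_region_dispatch_write(MemoryRegion *mr, hwaddr addr,
                                             uint64_t data, MemOp op,
                                             MemTxAttrs attrs);
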
Tony Nguyen dd78f65bc6 memory: Introduce size_memop
Introduce no-op size_memop to aid preparatory conversion of
interfaces.

Once interfaces are converted, size_memop will be implemented to
return a MemOp from size in bytes.

Backports commit 66b9b24375ac215cdcbdf9e14d665395360abff4 from qemu
2020-01-07 18:19:35 -05:00
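A sketch of size_memop once the conversion is complete (the initial stub simply returns 0; the final form maps a power-of-two byte count to its MO_* size encoding):

    static inline MemOp size_memop(unsigned size)
    {
        /* size is a power of two between 1 and 8 bytes. */
        return ctz32(size);
    }
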
Tony Nguyen f75368cd0f tcg: TCGMemOp is now accelerator independent MemOp
Preparation for collapsing the two byte swaps, adjust_endianness and
handle_bswap, along the I/O path.

Target-dependent attributes are conditionalized upon NEED_CPU_H.

Backports commit 14776ab5a12972ea439c7fb2203a4c15a09094b4 from qemu
2019-11-28 03:01:12 -05:00
Alex Bennée a2585ba590 include/exec/cpu-defs.h: fix typo
Backports commit 1eb21c428b1e8c9845c82c152a75d046fb19d6fe from qemu
2019-11-28 02:38:15 -05:00
tony.nguyen@bt.com b4c2c94602 configure: Define target access alignment in configure
This patch moves the define of target access alignment earlier from
target/foo/cpu.h to configure.

Suggested in Richard Henderson's reply to "[PATCH 1/4] tcg: TCGMemOp is now
accelerator independent MemOp"

Backports commit 52bf9771fdfce98e98cea36a17a18915be6f6b7f from qemu
2019-11-18 21:41:35 -05:00
Alex Bennée 064f50cdf6 fpu: make softfloat-macros self-contained
The macros use the "flags" type, so to be consistent, anyone who needs
just the macros should also get the header that defines it. There is an
outstanding TODO to audit the use of "flags" and replace it with bool, at
which point this include could be dropped.

Backports commit 5937fb63a92d54cc4e5270256e4387c4d3a70091 from qemu
2019-11-18 21:10:39 -05:00
Alex Bennée 5ab2ac8dac fpu: move inline helpers into a separate header
There are a bunch of users of the inline helpers who do not need
access to the entire softfloat API. Move those inline helpers into a
new header file which can be included without bringing in the rest of
the world.

Backports commit e34c47ea3fb5f324b58db117b3c010a494c8d6ca from qemu
2019-11-18 21:09:02 -05:00
Alex Bennée be878fb0e9 fpu: remove the LIT64 macro
Now that the rest of the code has been cleaned up, we can remove this.

Backports commit 472038ccf5d69ebcad44d835488a36ee640c219f from qemu
2019-11-18 21:06:55 -05:00
Alex Bennée dbddafe2df fpu: replace LIT64 with UINT64_C macros
In our quest to eliminate the home-rolled LIT64 macro, we fix up its usage
inside the softfloat code. While we are at it, we remove some of the
extraneous spaces to fit the house style more closely.

Backports commit e932112420f063776f2b9d9e5512830cd6890a7a from qemu
2019-11-18 20:57:12 -05:00
Markus Armbruster bb1f47545c typedefs: Separate incomplete types and function types
While there, rewrite the obsolete file comment.

Backports commit 2a28720d773df2193c9fb633c02092cca107a9e5 from qemu
2019-11-18 16:42:51 -05:00
Dr. David Alan Gilbert 0b8add4e6f memory: Provide an equality function for MemoryRegionSections
Provide a comparison function that checks all the fields are the same.

Backports commit 42b6571357a083f721a27daa6dfdc69e4bd516bd from qemu
2019-11-18 16:42:50 -05:00
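A sketch of such an equality helper, comparing the section field by field (the field list follows upstream QEMU of that period and may differ in this tree):

    static inline bool MemoryRegionSection_eq(MemoryRegionSection *a,
                                              MemoryRegionSection *b)
    {
        return a->mr == b->mr &&
               a->fv == b->fv &&
               a->offset_within_region == b->offset_within_region &&
               int128_eq(a->size, b->size) &&
               a->offset_within_address_space == b->offset_within_address_space &&
               a->readonly == b->readonly &&
               a->nonvolatile == b->nonvolatile;
    }
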
Dr. David Alan Gilbert 3807ec09de memory: Align MemoryRegionSections fields
MemoryRegionSection includes an Int128 'size' field;
on some platforms the compiler aligns this to a 128-bit
boundary, leaving 8 bytes of dead space.
This dead space can be filled with junk.

Move the size field to the top, avoiding unnecessary alignment.

Backports commit c0aca9352d51c102c55fe29ce5c1bf8e74a5183e from qemu
2019-11-18 16:42:47 -05:00
Richard Henderson ec81aef8c0 include/qemu/atomic.h: Add signal_barrier
We have some potential race conditions vs our user-exec signal
handler that will be solved with this barrier.

Backports commit 359896dfa4e9707e1acea99129d324250fccab04 from qemu
2019-08-08 19:26:41 -04:00
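A sketch of the new macro on compilers with C11 atomics (the fallback path would use a plain compiler barrier):

    /* Compiler-level barrier against a signal handler on the same thread;
     * no CPU memory fence is emitted. */
    #define signal_barrier()    __atomic_signal_fence(__ATOMIC_SEQ_CST)
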
Lioncash 802c626145 Revert "cputlb: Filter flushes on already clean tlbs"
This reverts commit 5ab9723787.
2019-06-30 19:21:20 -04:00
Richard Henderson d7ea41c3a3 cpu: Move icount_decr to CPUNegativeOffsetState
Amusingly, we had already ignored the comment to keep this value
at the end of CPUState. This restores the minimum negative offset
from TCG_AREG0 for code generation.

For the couple of uses within qom/cpu.c, without NEED_CPU_H, add
a pointer from the CPUState object to the IcountDecr object within
CPUNegativeOffsetState.

Backports commit 5e1401969b25f676fee6b1c564441759cf967a43 from qemu
2019-06-13 15:34:28 -04:00
Richard Henderson 8f53f09a05 cpu: Introduce CPUNegativeOffsetState
Nothing in there so far, but all of the plumbing is done
within the target ArchCPU state.

Backports commit 5b146dc716cfd247f99556c04e6e46fbd67565a0 from qemu
2019-06-13 15:08:25 -04:00
Richard Henderson a672b89e3b cpu: Introduce cpu_set_cpustate_pointers
Consolidate some boilerplate from foo_cpu_initfn.

Backports commit 7506ed902eb97fe4e2a1dd16766c621d32ecc40d from qemu
2019-06-12 12:27:16 -04:00
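As introduced, the helper is a one-liner sketching the consolidated boilerplate (a later change also wires up the icount pointer):

    static inline void cpu_set_cpustate_pointers(ArchCPU *cpu)
    {
        cpu->parent_obj.env_ptr = &cpu->env;
    }
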
Richard Henderson ac176ccb38 cpu: Move ENV_OFFSET to exec/gen-icount.h
Now that we have ArchCPU, we can define this generically,
in the one place that needs it.

Backports commit 677c4d69ac21961e76a386f9bfc892a44923acc0 from qemu
2019-06-12 12:20:21 -04:00
Richard Henderson 8b108f3607 cpu: Introduce env_archcpu
This will replace foo_env_get_cpu with a generic definition.
No changes to the target-specific code so far.

Backports commit 083dc73d7a3cf2a75b5625fd8f0669b57a855d16 from qemu
2019-06-12 11:17:47 -04:00
Richard Henderson fbf91a6535 cpu: Replace ENV_GET_CPU with env_cpu
Now that we have both ArchCPU and CPUArchState, we can define
this generically instead of via macro in each target's cpu.h.

Backports commit 29a0af618ddd21f55df5753c3e16b0625f534b3c from qemu
2019-06-12 11:16:16 -04:00
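Together with env_archcpu from the previous entry, the generic definitions are simple container_of wrappers; a sketch:

    static inline ArchCPU *env_archcpu(CPUArchState *env)
    {
        return container_of(env, ArchCPU, env);
    }

    static inline CPUState *env_cpu(CPUArchState *env)
    {
        return &env_archcpu(env)->parent_obj;
    }
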
Markus Armbruster 5e5197b136 Supply missing header guards
Backports applicable parts of commit
f91005e195e7e1485e60cb121731589960f1a3c9 from qemu
2019-06-12 10:59:10 -04:00
Lioncash 5ab9723787 cputlb: Filter flushes on already clean tlbs
Especially for guests with large numbers of tlbs, like ARM or PPC,
we may well not use all of them in between flush operations.
Remember which tlbs have been used since the last flush, and
avoid any useless flushing.

Backports much of 3d1523ced6060cdfe9e768a814d064067ccabfe5 from qemu
along with a bunch of updating changes.
2019-06-10 20:42:15 -04:00
Richard Henderson df2a890bd7 tcg: Split out target/arch/cpu-param.h
For all targets, into this new file move TARGET_LONG_BITS,
TARGET_PAGE_BITS, TARGET_PHYS_ADDR_SPACE_BITS,
TARGET_VIRT_ADDR_SPACE_BITS, and NB_MMU_MODES.

Include this new file from exec/cpu-defs.h.

This now removes the somewhat odd requirement that target/arch/cpu.h
defines TARGET_LONG_BITS before including exec/cpu-defs.h, so push the
bulk of the includes within target/arch/cpu.h to the top.

Backports commit 74433bf083b0766aba81534f92de13194f23ff3e from qemu
2019-06-10 19:35:46 -04:00
Richard Henderson 2a4a7b9391 tcg: Use tlb_fill probe from tlb_vaddr_to_host
Most of the existing users would continue around a loop which
would fault the tlb entry in via a normal load/store.

But for AArch64 SVE we have an existing emulation bug wherein we
would mark the first element of a no-fault vector load as faulted
(within the FFR, not via exception) just because we did not have
its address in the TLB. Now we can properly only mark it as faulted
if there really is no valid, readable translation, while still not
raising an exception. (Note that beyond the first element of the
vector, the hardware may report a fault for any reason whatsoever;
with at least one element loaded, forward progress is guaranteed.)

Backports commit 4811e9095c0491bc6f5450e5012c9c4796b9e59d from qemu
2019-05-16 18:27:03 -04:00
Laurent Vivier 8cdfed1032 linux-user: fix 32bit g2h()/h2g()
sparc32plus has a 64-bit long type but only a 32-bit virtual address space.

For instance, "apt-get upgrade" failed because of a mmap()/msync()
sequence.

mmap() returned 0xff252000 but msync() used g2h(0xffffffffff252000)
to find the host address. The "(target_ulong)" cast in g2h() doesn't fix the
address because target_ulong is 64 bits wide.

This patch introduces an "abi_ptr" type that is set to uint32_t
if the virtual address space is addressed using 32 bits in the linux-user
case. It stays target_ulong in the softmmu case.

Backports commit 3e23de15237c81fe7af7c3ffa299a6ae5fec7d43 from qemu
2019-05-16 18:20:55 -04:00
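A heavily hedged sketch of the idea (the exact preprocessor guards and format macros differ in the real header): the guest pointer type narrows to 32 bits for 32-bit linux-user targets before being widened for host arithmetic.

    #if defined(CONFIG_USER_ONLY) && TARGET_VIRT_ADDR_SPACE_BITS <= 32
    typedef uint32_t abi_ptr;
    #else
    typedef target_ulong abi_ptr;
    #endif

    /* g2h() now truncates through abi_ptr, so a sign-extended 64-bit guest
     * address such as 0xffffffffff252000 maps to the right host page. */
    #define g2h(x) ((void *)((uintptr_t)(abi_ptr)(x) + guest_base))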