Provide a functional interface for the vector expansion.
This fits better with the existing set of helpers that
we provide for other operations.
Backports commit 8161b75357095fef54c76b1a6ed1e54d0e8655e0 from qemu
Provide a functional interface for the vector expansion.
This fits better with the existing set of helpers that
we provide for other operations.
Backports commit 271063206a46062a45fc6bab8dabe45f0b88159d from qemu
Provide a functional interface for the vector expansion.
This fits better with the existing set of helpers that
we provide for other operations.
Macro-ize the 5 nearly identical comparisons.
Backports commit 69d5e2bf8c3cefedbfa1c1670137e636dbd7faa5 from qemu
The functions eliminate duplication of the special cases for
this operation. They match up with the GVecGen2iFn typedef.
Add out-of-line helpers. We got away with only having inline
expanders because the neon vector size is only 16 bytes, and
we know that the inline expansion will always succeed.
When we reuse this for SVE, tcg-op-gvec may decide to use an
out-of-line helper due to longer vector lengths.
Backports commit 893ab0542aa385a287cbe46d5535c8b9e95ce699 from qemu
Create vectorized versions of handle_shri_with_rndacc
for shift+round and shift+round+accumulate. Add out-of-line
helpers in preparation for longer vector lengths from SVE.
Backports commit 6ccd48d4ea244c1c46a24dfa50bfb547f11422dd from qemu
The functions eliminate duplication of the special cases for
this operation. They match up with the GVecGen2iFn typedef.
Add out-of-line helpers. We got away with only having inline
expanders because the neon vector size is only 16 bytes, and
we know that the inline expansion will always succeed.
When we reuse this for SVE, tcg-op-gvec may decide to use an
out-of-line helper due to longer vector lengths.
Backports commit 631e565450c483e0622eec3d8b61d7fa41d16bca from qemu
Add a version of tcg_gen_dup_* that takes both an immediate and
a vector element size operand. This will replace the set of
tcg_gen_gvec_dup{8,16,32,64}i functions that encode the element
size within the function name.
Backports commit 44c94677febd15488f9190b11eaa4a08e8ac696b from qemu
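As a rough illustration of the element-size operand described in the entry
above (plain C with a hypothetical name, not QEMU's implementation): a single
routine can cover what the four size-named dup functions did by replicating
the immediate across a 64-bit word according to the element width.

    #include <stdint.h>

    /* Hypothetical sketch: replicate an immediate according to the
     * requested element width (8/16/32/64 bits). */
    static uint64_t dup_const_model(unsigned element_bits, uint64_t imm)
    {
        switch (element_bits) {
        case 8:  return (imm & 0xff)       * 0x0101010101010101ull;
        case 16: return (imm & 0xffff)     * 0x0001000100010001ull;
        case 32: return (imm & 0xffffffff) * 0x0000000100000001ull;
        default: return imm;   /* 64-bit elements: already full width */
        }
    }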
Make cpu_register() (renamed to arm_cpu_register()) available
from internals.h so we can also register CPUs from other files
in the future.
Backports commit 37bcf244454f4efb82e2c0c64bbd7eabcc165a0c from qemu
These instructions are often used in glibc's string routines.
They were the final uses of the 32-bit at a time neon helpers.
Backports commit 6b375d3546b009d1e63e07397ec9c6af256e15e9 from qemu
We still need two different helpers, since NEON and SVE2 get the
inputs from different locations within the source vector. However,
we can convert both to the same internal form for computation.
The sve2 helper is not used yet, but adding it with this patch
helps illustrate why the neon changes are helpful.
Backports commit e7e96fc5ec8c79dc77fef522d5226ac09f684ba5 from qemu
The gvec form will be needed for implementing SVE2.
Extend the implementation to operate on uint64_t instead of uint32_t.
Use a counted inner loop instead of terminating when op1 goes to zero,
looking toward the required implementation for ARMv8.4-DIT.
Backports commit a21bb78e5817be3f494922e1dadd6455fe5d6318 from qemu
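A scalar model of the counted-loop style mentioned above, using an 8x8
polynomial (carry-less) multiply as the example (my naming, not the QEMU
helper): the loop always runs a fixed number of iterations instead of
stopping when op1 reaches zero.

    #include <stdint.h>

    static uint16_t pmul8_model(uint8_t op1, uint8_t op2)
    {
        uint16_t result = 0;

        /* Fixed iteration count: termination does not depend on op1. */
        for (int i = 0; i < 8; i++) {
            if (op1 & (1u << i)) {
                result ^= (uint16_t)op2 << i;
            }
        }
        return result;
    }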
These instructions shift left or right depending on the sign
of the input, and 7 bits are significant to the shift. This
requires several masks and selects in addition to the actual
shifts to form the complete answer.
That said, the operation is still a small improvement even for
two 64-bit elements -- 13 vector operations instead of 2 * 7
integer operations.
Backports commit 87b74e8b6edd287ea2160caa0ebea725fa8f1ca1 from qemu
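A scalar sketch of the operation just described (a model built on my own
assumptions, not the QEMU vector code): the shift count is a signed byte,
positive counts shift left, negative counts shift right, and counts whose
magnitude reaches the element width produce zero for the unsigned form.

    #include <stdint.h>

    static uint64_t ushl64_model(uint64_t val, int8_t shift)
    {
        if (shift >= 64 || shift <= -64) {
            return 0;                      /* shifted out entirely */
        }
        return shift >= 0 ? val << shift : val >> -shift;
    }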
Use the correct sctlr for EL2&0 regime. Due to header ordering,
and where arm_mmu_idx_el is declared, we need to move the function
out of line. Use the function in many more places in order to
select the correct control.
Backports commit aaec143212bb70ac9549cf73203d13100bd5c7c2 from qemu
Prepare for, but do not yet implement, the EL2&0 regime.
This involves adding the new MMUIdx enumerators and adjusting
some of the MMUIdx related predicates to match.
Backports commit b9f6033c1a5fb7da55ed353794db8ec064f78bb2 from qemu
Add an option to trigger memory writeback to sync a given memory region
with the corresponding backing store, in case one is available.
This extends the support for persistent memory, allowing syncing on-demand.
Backports commit 61c490e25e081af39ff40556f6c1229b8b011585 from qemu
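A minimal sketch of the kind of writeback involved in the entry above,
assuming a POSIX host where msync(2) is the underlying mechanism (the option
plumbing itself is not shown):

    #include <stdio.h>
    #include <sys/mman.h>

    /* Flush a file-backed mapping back to its backing store.
     * addr must be page-aligned; length covers the region to sync. */
    static int writeback_region(void *addr, size_t length)
    {
        if (msync(addr, length, MS_SYNC) != 0) {
            perror("msync");
            return -1;
        }
        return 0;
    }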
The raising of exceptions from check_watchpoint, buried inside
of the I/O subsystem, is fundamentally broken. We do not have
the helper return address with which we can unwind guest state.
Replace PHYS_SECTION_WATCH and io_mem_watch with TLB_WATCHPOINT.
Move the call to cpu_check_watchpoint into the cputlb helpers
where we do have the helper return address.
This allows watchpoints on RAM to bypass the full I/O access path.
Backports commit 50b107c5d617eaf93301cef20221312e7a986701 from qemu
Preparation for collapsing the two byte swaps adjust_endianness and
handle_bswap into the former.
Call memory_region_dispatch_{read|write} with endianness encoded into
the "MemOp op" operand.
This patch does not change any behaviour as
memory_region_dispatch_{read|write} is yet to handle the endianness.
Once it does handle endianness, callers with byte swaps can collapse
them into adjust_endianness.
Backports commit d5d680cacc66ef7e3c02c81dc8f3a34eabce6dfe from qemu
HCR_EL2.TID3 requires that AArch32 reads of MVFR[012] are trapped to
EL2, and HCR_EL2.TID0 does the same for reads of FPSID.
In order to handle this, introduce a new TCG helper function that
checks for these control bits before executing the VMRS instruction.
Tested with a hacked-up version of KVM/arm64 that sets the control
bits for 32-bit guests.
Backports commit 9ca1d776cb49c09b09579d9edd0447542970c834 from qemu
Although the text for the call to gen_exception_insn
is identical for aarch64 and aarch32, the implementation inside
gen_exception_insn is totally different.
This fixes exceptions raised from aarch64.
This reverts commit fb2d3c9a9a.
Replace x = double_saturate(y) with x = add_saturate(y, y).
There is no need for a separate, more specialized helper.
Backports commit 640581a06d14e2d0d3c3ba79b916de6bc43578b0 from qemu
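A minimal sketch of a signed 32-bit saturating add (hypothetical name; the
real helper also records saturation status, which is omitted here), showing
why double_saturate(y) is simply add_saturate(y, y):

    #include <stdint.h>

    static int32_t add_saturate_model(int32_t a, int32_t b)
    {
        int64_t sum = (int64_t)a + b;      /* widen to avoid overflow */

        if (sum > INT32_MAX) {
            return INT32_MAX;
        }
        if (sum < INT32_MIN) {
            return INT32_MIN;
        }
        return (int32_t)sum;
    }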
Promote this function from aarch64 to fully general use.
Use it to unify the code sequences for generating illegal
opcode exceptions.
Backports commit 3cb36637157088892e9e33ddb1034bffd1251d3b from qemu
In the next commit we will split the M-profile functions from this
file. Some functions will be called from outside helper.c. Declare them in
the "internals.h" header.
Backports commit 787a7e76c2e93a48c47b324fea592c9910a70483 from qemu
These routines are TCG specific.
The arm_deliver_fault() function is only used within the new
helper. Make it static.
Backports commit e21b551cb652663f2f2405a64d63ef6b4a1042b7 from qemu
In the next commit we will split the TLB related routines of
this file, and this function will also be called in the new
file. Declare it in the "internals.h" header.
Backports commit ebae861fc6c385a7bcac72dde4716be06e6776f1 from qemu
Perform a per-element conditional move. This combination operation is
easier to implement on some host vector units than plain cmp+bitsel.
Omit the usual gvec interface, as this is intended to be used by
target-specific gvec expansion callbacks.
Backports commit f75da2988eb2457fa23d006d573220c5c680ec4e from qemu
This operation performs d = (b & a) | (c & ~a), and is present
on a majority of host vector units. Include gvec expanders.
Backports commit 38dc12947ec9106237f9cdbd428792c985cd86ae from qemu
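A scalar sketch of the bitwise select described above: for each bit position,
the result takes the bit from b where a has a 1 and from c where a has a 0.

    #include <stdint.h>

    static uint64_t bitsel_model(uint64_t a, uint64_t b, uint64_t c)
    {
        return (b & a) | (c & ~a);
    }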
We can now use the CPUClass hook instead of a named function.
Create a static tlb_fill function to avoid other changes within
cputlb.c. This also isolates the asserts within. Remove the
named tlb_fill function from all of the targets.
Backports commit c319dc13579a92937bffe02ad2c9f1a550e73973 from qemu
Remove a function of the same name from target/arm/.
Use a branchless implementation of abs gleaned from gcc.
Backports commit ff1f11f7f8710a768f9313f24bd7f509d3db27e5 from qemu
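One common branchless absolute-value idiom, shown only as an illustration of
the technique (not necessarily the exact sequence the commit uses, and it
assumes arithmetic right shift of negative values):

    #include <stdint.h>

    static int32_t abs32_model(int32_t x)
    {
        int32_t mask = x >> 31;     /* 0 for x >= 0, -1 for x < 0 */

        /* Identity when mask is 0, two's-complement negation when -1.
         * As with abs(), the result for INT32_MIN is undefined. */
        return (x + mask) ^ mask;
    }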
Allow expansion either via shift by scalar or by replicating
the scalar for shift by vector.
Backports commit b4578cd91cda4cef1c413304353ca6dc5b957b60 from qemu
The gvec expanders perform a modulo on the shift count. If the target
requires alternate behaviour, then it cannot use the generic gvec
expanders anyway, and will have to have its own custom code.
Backports commit 5ee5c14cacda27e904cd6b0d9e7ffe1acff42838 from qemu
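A scalar sketch of the modulo behaviour described above, for 32-bit elements:
the count is reduced modulo the element width, so any count is accepted
without invoking undefined behaviour.

    #include <stdint.h>

    static uint32_t shl32_mod_model(uint32_t val, unsigned count)
    {
        return val << (count & 31);    /* count taken modulo 32 */
    }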
Allow the backend to expand dup from memory directly, instead of
forcing the value into a temp first. This is especially important
if integer/vector register moves do not exist.
Note that officially tcg_out_dupm_vec is allowed to fail.
If it did, we could fix this up relatively easily:
  VECE == 32/64:
    Load the value into a vector register, then dup.
    Both of these must work.
  VECE == 8/16:
    If the value happens to be at an offset such that an aligned
    load would place the desired value in the least significant
    end of the register, go ahead and load w/garbage in high bits.
    Load the value w/INDEX_op_ld{8,16}_i32.
    Attempt a move directly to vector reg, which may fail.
    Store the value into the backing store for OTS.
    Load the value into the vector reg w/TCG_TYPE_I32, which must work.
    Duplicate from the vector reg into itself, which must work.
All of which is well and good, except that all supported
hosts can support dupm for all vece, so all of the failure
paths would be dead code and untestable.
Backports commit 37ee55a081b7863ffab2151068dd1b2f11376914 from qemu
Replace the single opcode in .opc with a null-terminated
array in .opt_opc. We still require that all opcodes be
used with the same .vece.
Validate the contents of this list with CONFIG_DEBUG_TCG.
All tcg_gen_*_vec functions will check any list active
during .fniv expansion. Swap the active list in and out
as we expand other opcodes, or take control away from the
front-end function.
Convert all existing vector aware front ends.
Backports commit 53229a7703eeb2bbe101a19a33ef22aaf960c65b from qemu
Let's add tcg_gen_gvec_3i(), similar to tcg_gen_gvec_2i(), but
without introducing "gen_helper_gvec_3i *fnoi", as it isn't needed
for now.
Backports commit e1227bb6e59173117f094a6a13b998587b45c928 from qemu
The M-profile architecture floating point system supports
lazy FP state preservation, where FP registers are not
pushed to the stack when an exception occurs but are instead
only saved if and when the first FP instruction in the exception
handler is executed. Implement this in QEMU, corresponding
to the check of LSPACT in the pseudocode ExecuteFPCheck().
Backports commit e33cf0f8d8c9998a7616684f9d6aa0d181b88803 from qemu
Add a new helper function which returns the MMU index to use
for v7M, where the caller specifies all of the security
state, privilege level and whether the execution priority
is negative, and reimplement the existing
arm_v7m_mmu_idx_for_secstate_and_priv() in terms of it.
We are going to need this for the lazy-FP-stacking code.
Backports commit fa6252a988dbe440cd6087bf93cbe0887f0c401b from qemu
Will be helpful for s390x. Input 128 bit and output 64 bit only,
which is sufficient for now.
Backports commit 2089fcc9e7b4174d1c351eaa7d277c02188a6dd2 from qemu
This ports over the RISC-V architecture from QEMU. This is currently a
very barebones transition. No code hooking or any fancy stuff.
Currently, you can feed it instructions and query the CPU state.
It also allows choosing between 32-bit and 64-bit RISC-V through
Unicorn's interface.
Extremely basic examples of executing a single instruction have been
added to the samples directory to help demonstrate how to use the basic
functionality.
We do not need an out-of-line helper for manipulating bits in pstate.
While changing things, share the implementation of gen_ss_advance.
Backports commit 22ac3c49641f6eed93dca5b852030b4d3eacf6c4 from qemu
The EL0+UMA check is unique to DAIF. While SPSel had avoided the
check by nature of already checking EL >= 1, the other post v8.0
extensions to MSR (imm) allow EL0 and do not require UMA. Avoid
the unconditional write to pc and use raise_exception_ra to unwind.
Backports commit ff730e9666a716b669ac4a8ca7c521177d1d2b15 from qemu
Note that float16_to_float32 rightly squashes SNaN to QNaN.
But of course pickNaNMulAdd, for ARM, selects SNaNs first.
So we have to preserve SNaN long enough for the correct NaN
to be selected. Thus float16_to_float32_by_bits.
Backports commit a4e943a716d5fac923d82df3eabc65d1e3624019 from qemu
For same-sign saturation, we have tcg vector operations. We can
compute the QC bit by comparing the saturated value against the
unsaturated value.
Backports commit 89e68b575e138d0af1435f11a8ffcd8779c237bd from qemu
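A scalar sketch (hypothetical names) of deriving QC as described above:
perform the saturating operation, then compare the saturated result against
the wrapped, unsaturated one.

    #include <stdbool.h>
    #include <stdint.h>

    static uint32_t uqadd32_model(uint32_t a, uint32_t b, bool *qc)
    {
        uint32_t raw = a + b;                       /* wrapping sum */
        uint32_t sat = raw < a ? UINT32_MAX : raw;  /* saturate on carry */

        *qc |= (sat != raw);                        /* QC is cumulative */
        return sat;
    }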
A number of CPUID registers are exposed to userspace by modern Linux
kernels thanks to the "ARM64 CPU Feature Registers" ABI. For QEMU's
user-mode emulation we don't need to emulate the kernel's trap but just
return the value the trap handler would have returned. To avoid too much #ifdef
hackery we process ARMCPRegInfo with a new helper (modify_arm_cp_regs)
before defining the registers. The modify routine is driven by a
simple data structure which describes which bits are exported and
which are fixed.
Backports commit 6c5c0fec29bbfe36c64eca1edfd8455be46b77c6 from qemu
A bug was introduced during a respin of:
commit 57a4a11b2b281bb548b419ca81bfafb214e4c77a
target/arm: Add array for supported PMU events, generate PMCEID[01]_EL0
This patch introduced two calls to get_pmceid() during CPU
initialization - one each for PMCEID0 and PMCEID1. In addition to
building the register values, get_pmceid() clears an internal array
mapping event numbers to their implementations (supported_event_map)
before rebuilding it. This is an optimization since much of the logic is
shared. However, since it was called twice, the contents of
supported_event_map reflect only the events in PMCEID1 (the second call
to get_pmceid()).
Fix this bug by moving the initialization of PMCEID0 and PMCEID1 back
into a single function call, and name it more appropriately since it is
doing more than simply generating the contents of the PMCEID[01]
registers.
Backports commit bf8d09694ccc07487cd73d7562081fdaec3370c8 from qemu
This commit doesn't add any supported events, but provides the framework
for adding them. We store the pm_event structs in a simple array, and
provide the mapping from the event numbers to array indexes in the
supported_event_map array. Because the value of PMCEID[01] depends upon
which events are supported at runtime, generate it dynamically.
Backports commit 57a4a11b2b281bb548b419ca81bfafb214e4c77a from qemu
Rename arm_ccnt_enabled to pmu_counter_enabled, and add logic to only
return 'true' if the specified counter is enabled and neither prohibited
nor filtered.
Backports commit 033614c47de78409ad3fb39bb7bd1483b71c6789 from qemu
pmccntr_read and pmccntr_write contained duplicate code that was already
being handled by pmccntr_sync. Consolidate the duplicated code into two
functions: pmccntr_op_start and pmccntr_op_finish. Add a companion to
c15_ccnt in CPUARMState so that we can simultaneously save both the
architectural register value and the last underlying cycle count - this
ensures time isn't lost and will also allow us to access the 'old'
architectural register value in order to detect overflows in later
patches.
Backports commit 5d05b9d462666ed21b7fef61aa45dec9aaa9f0ff from qemu
The arm_regime_tbi{0,1} functions are replaceable with the new function
by giving the lowest and highest address.
Backports commit 5d8634f5a3a8474525edcfd581a659830e9e97c0 from qemu
We need to reuse this from helper-a64.c. Provide a stub
definition for CONFIG_USER_ONLY. This matches the stub
definitions that we removed for arm_regime_tbi{0,1} before.
Backports commit bf0be433878935e824479e8ae890493e1fb646ed from qemu
While we could expose stage_1_mmu_idx, the combination is
probably going to be more useful.
Backports commit 64be86ab1b5ef10b660a4230ee7f27c0da499043 from qemu
The pattern
  ARMMMUIdx mmu_idx = core_to_arm_mmu_idx(env, cpu_mmu_index(env, false));
is computing the full ARMMMUIdx, stripping off the ARM bits,
and then putting them back.
Avoid the extra two steps with the appropriate helper function.
Backports commit 50494a279dab22a015aba9501a94fcc3cd52140e from qemu
Replace arm_hcr_el2_{fmo,imo,amo} with a more general routine
that also takes SCR_EL3.NS (aka arm_is_secure_below_el3) into
account, as documented for the plethora of bits in HCR_EL2.
Backports commit f77784446045231f7dfa46c9b872091241fa1557 from qemu
In commit 8a0fc3a29fc2315325400 we tried to implement HCR_EL2.{VI,VF},
but we got it wrong and had to revert it.
In that commit we implemented them as simply tracking whether there
is a pending virtual IRQ or virtual FIQ. This is not correct -- these
bits cause a software-generated VIRQ/VFIQ, which is distinct from
whether there is a hardware-generated VIRQ/VFIQ caused by the
external interrupt controller. So we need to track separately
the HCR_EL2 bit state and the external virq/vfiq line state, and
OR the two together to get the actual pending VIRQ/VFIQ state.
Fixes: 8a0fc3a29fc2315325400c738f807d0d4ae0ab7f
Backports commit 89430fc6f80a5aef1d4cbd6fc26b40c30793786c from qemu
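A sketch of the combination described above (illustrative names, not QEMU's):
the software-generated state from HCR_EL2.VI and the line driven by the
external interrupt controller are tracked separately and ORed to form the
pending VIRQ.

    #include <stdbool.h>

    static bool virq_pending(bool hcr_vi, bool external_virq_line)
    {
        return hcr_vi || external_virq_line;
    }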
Add a new flag to mark memory regions that are used as non-volatile, by
NVDIMM for example. That bit is propagated down to the flat view, and
reflected in HMP info mtree with a "nv-" prefix on the memory type.
This way, guest_phys_blocks_region_add() can skip the NV memory
regions for dumps and TCG memory clear in a following patch.
Backports commit c26763f8ec70b1011098cab0da9178666d8256a5 from qemu
Add code to insert calls to a helper function to do the stack
limit checking when we handle these forms of instruction
that write to SP:
* ADD (SP plus immediate)
* ADD (SP plus register)
* SUB (SP minus immediate)
* SUB (SP minus register)
* MOV (register)
Backports commit 5520318939fea5d659bf808157cd726cb967b761 from qemu
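An illustrative sketch of the check the helper in the entry above performs
(names and fault reporting are my assumptions, not the actual QEMU helper):
the prospective SP value is compared against the active stack-pointer limit.

    #include <stdint.h>

    /* Returns nonzero when the write would violate the stack limit. */
    static int sp_limit_violation_model(uint32_t new_sp, uint32_t splim)
    {
        return new_sp < splim;
    }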
Use the existing helpers to determine (1) whether the FPU is enabled,
(2) whether SVE state is enabled, and (3) the current SVE vector length.
Backports commit ced3155141755ba244c988c72c4bde32cc819670 from qemu
It has not had users since f83311e476 ("target-m68k: use floatx80
internally", 2017-06-21).
Note that no other bit-width has floatX_trunc_to_int.
Backports commit c953da8f0be5e026d1c9128660736d72294feb3e from qemu
If MemoryRegion initialization fails, it is left in a semi-initialized state,
where its size is not 0 and it is attached as a child to the owner object.
This leads to a crash in the following use case:
(monitor) object_add memory-backend-file,id=mem1,size=99999G,mem-path=/tmp/foo,discard-data=yes
memory.c:2083: memory_region_get_ram_ptr: Assertion `mr->ram_block' failed
Aborted (core dumped)
It happens due to the assumption that a memory region is initialized when
  memory_region_size() != 0
and that it is therefore ok to access it in file_backend_unparent():
  if (memory_region_size() != 0)
      memory_region_get_ram_ptr()
which happens when object_add fails and unparents the failed backend, making
file_backend_unparent() access an invalid memory region.
Fix it by making sure that the memory_region_init_foo() APIs clean up
externally visible side effects on failure (such as setting the size to 0
and unparenting the object).
The API for cpu_transaction_failed() says that it takes the physical
address for the failed transaction. However we were actually passing
it the offset within the target MemoryRegion. We don't currently
have any target CPU implementations of this hook that require the
physical address; fix this bug so we don't get confused if we ever
do add one.
Backports commit 2d54f19401bc54b3b56d1cc44c96e4087b604b97 from qemu
Instead of passing env and leaving it up to the helper to get the
right fpstatus, we pass it explicitly. There was already a get_fpstatus
helper for neon for the 32-bit code. We also add a get_ahp_flag() for
passing the state of the alternative FP16 format flag. This leaves
scope for later tracking the AHP state in translation flags.
Backports commit 486624fcd3eaca6165ab8401d73bbae6c0fb81c1 from qemu