Commit graph

192 commits

Author SHA1 Message Date
Peter Maydell c61e22627d
target/arm: Fix routing of singlestep exceptions
When generating an architectural single-step exception we were
routing it to the "default exception level", which is to say
the same exception level we execute at except that EL0 exceptions
go to EL1. This is incorrect because the debug exception level
can be configured by the guest for situations such as single
stepping of EL0 and EL1 code by EL2.

We have to track the target debug exception level in the TB
flags, because it is dependent on CPU state like HCR_EL2.TGE
and MDCR_EL2.TDE. (That we were previously calling the
arm_debug_target_el() function to determine dc->ss_same_el
is itself a bug, though one that would only have manifested
as incorrect syndrome information.) Since we are out of TB
flag bits unless we want to expand into the cs_base field,
we share some bits with the M-profile only HANDLER and
STACKCHECK bits, since only A-profile has this singlestep.
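
As a rough illustration of the bit-sharing idea (standalone C, with
invented bit positions rather than the real TB-flag layout), two
mutually exclusive profiles can safely reuse the same flag bits:

    /* Standalone sketch of sharing TB-flag bits between features that
     * can never be enabled together; bit positions are invented. */
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    /* M-profile-only flags. */
    #define TBFLAG_HANDLER                (1u << 21)
    #define TBFLAG_STACKCHECK             (1u << 22)
    /* A-profile-only field reusing the same bits: debug target EL. */
    #define TBFLAG_DEBUG_TARGET_EL_SHIFT  21
    #define TBFLAG_DEBUG_TARGET_EL_MASK   (3u << TBFLAG_DEBUG_TARGET_EL_SHIFT)

    static uint32_t set_debug_target_el(uint32_t flags, unsigned el)
    {
        assert(el <= 3);
        return (flags & ~TBFLAG_DEBUG_TARGET_EL_MASK)
               | (el << TBFLAG_DEBUG_TARGET_EL_SHIFT);
    }

    int main(void)
    {
        /* Only valid because no CPU is both M-profile and A-profile. */
        uint32_t flags = set_debug_target_el(0, 2);
        printf("debug target EL = %u\n",
               (flags & TBFLAG_DEBUG_TARGET_EL_MASK)
                   >> TBFLAG_DEBUG_TARGET_EL_SHIFT);
        return 0;
    }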

Fixes: https://bugs.launchpad.net/qemu/+bug/1838913

Backports commit 8bd587c1066f4456ddfe611b571d9439a947d74c from qemu
2019-11-18 16:50:15 -05:00
Alex Bennée 0d6ed39333
target/arm: generate a custom MIDR for -cpu max
While most features are now detected by probing the ID_* registers,
kernels can (and do) use MIDR_EL1 for working out if they have to
apply errata. This can trip up warnings in the kernel as it tries to
work out if it should apply workarounds to features that don't
actually exist in the reported CPU type.

Avoid this problem by synthesising our own MIDR value.
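
For reference, a standalone sketch of how a MIDR_EL1 value is packed
from its architected fields (implementer, variant, architecture, part
number, revision); the example values below are invented, not the ones
the commit actually picks:

    /* Standalone sketch: packing the architected MIDR_EL1 fields.
     * Field positions follow the ARM ARM; the values are invented. */
    #include <stdint.h>
    #include <stdio.h>

    static uint32_t make_midr(uint32_t implementer, uint32_t variant,
                              uint32_t architecture, uint32_t partnum,
                              uint32_t revision)
    {
        return ((implementer  & 0xff)  << 24) |
               ((variant      & 0xf)   << 20) |
               ((architecture & 0xf)   << 16) |
               ((partnum      & 0xfff) << 4)  |
               (revision      & 0xf);
    }

    int main(void)
    {
        /* Hypothetical implementer code and part number. */
        printf("MIDR_EL1 = 0x%08x\n", make_midr(0x00, 0x0, 0xf, 0x051, 0x0));
        return 0;
    }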

Backports commit 2bd5f41c00686a1f847a60824d0375f3df2c26bf from qemu
2019-11-18 16:42:51 -05:00
Philippe Mathieu-Daudé 199e2f8a7d
target/arm: Restrict semi-hosting to TCG
Semihosting hooks either SVC or HLT instructions, and inside KVM
both of those go to EL1, i.e. to the guest, and can't be trapped to
KVM.

Let check_for_semihosting() return false when not running on TCG.

Backports commit 91f78c58da9ba78c8ed00f5d822b701765be8499 from qemu
2019-08-08 17:48:34 -04:00
Peter Maydell dc1f2247ec
target/arm: Only implement doubles if the FPU supports them
The architecture permits FPUs which have only single-precision
support, not double-precision; Cortex-M4 and Cortex-M33 are
both like that. Add the necessary checks on the MVFR0 FPDP
field so that we UNDEF any double-precision instructions on
CPUs like this.

Note that even if FPDP==0, insns like VMOV-to/from-gpreg,
VLDM/VSTM, VLDR/VSTR which take double precision registers
still exist.
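
A minimal standalone sketch of the kind of gate involved, assuming the
usual MVFR0 layout with the FPDP field in bits [11:8] (this is not the
QEMU decoder code):

    /* Standalone sketch: gate double-precision support on MVFR0.FPDP.
     * Assumes FPDP occupies bits [11:8]; a value of 0 means the FPU
     * has no double-precision data-processing support. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    static bool fpdp_supported(uint32_t mvfr0)
    {
        return ((mvfr0 >> 8) & 0xf) != 0;
    }

    int main(void)
    {
        uint32_t mvfr0 = 0x00000020;   /* invented value with FPDP == 0 */
        if (!fpdp_supported(mvfr0)) {
            puts("double-precision data-processing insns would UNDEF");
        }
        return 0;
    }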

Backports commit 1120827fa182f0e76226df7ffe7a86598d1df54f from qemu
2019-06-25 18:55:25 -05:00
Peter Maydell edf81eb214
target/arm: Convert VFP VMLA to decodetree
Convert the VFP VMLA instruction to decodetree.

This is the first of the VFP 3-operand data processing instructions,
so we include in this patch the code which loops over the elements
for an old-style VFP vector operation. The existing code to do this
looping uses the deprecated cpu_F0s/F0d/F1s/F1d TCG globals; since
we are going to be converting instructions one at a time anyway
we can take the opportunity to make the new loop use TCG temporaries,
which means we can do that conversion one operation at a time
rather than needing to do it all in one go.

We include an UNDEF check which was missing in the old code:
short-vector operations (with stride or length non-zero) were
deprecated in v7A and must UNDEF in v8A, so if the MVFR0 FPShVec
field does not indicate that support for short vectors is present
we UNDEF the operations that would use them. (This is a change
of behaviour for Cortex-A7, Cortex-A15 and the v8 CPUs, which
previously were all incorrectly allowing short-vector operations.)

Note that the conversion fixes a bug in the old code for the
case of VFP short-vector "mixed scalar/vector operations". These
happen where the destination register is in a vector bank but
the second operand is in a scalar bank. For example
vmla.f64 d10, d1, d16 with length 2 stride 2
is equivalent to the pair of scalar operations
vmla.f64 d10, d1, d16
vmla.f64 d8, d3, d16
where the destination and first input register cycle through
their vector but the second input is scalar (d16). In the
old decoder the gen_vfp_F1_mul() operation uses cpu_F1{s,d}
as a temporary output for the multiply, which trashes the
second input operand. For the fully-scalar case (where we
never do a second iteration) and the fully-vector case
(where the loop loads the new second input operand) this
doesn't matter, but for the mixed scalar/vector case we
will end up using the wrong value for later loop iterations.
In the new code we use TCG temporaries and so avoid the bug.
This bug is present for all the multiply-accumulate insns
that operate on short vectors: VMLA, VMLS, VNMLA, VNMLS.
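
A standalone sketch of the expansion in the example above, assuming the
destination and first operand step by the stride and wrap within their
bank of four D registers while the second operand stays scalar (this is
only an illustration of the semantics, not the decoder's code):

    /* Standalone sketch: expand a VFP short-vector op into scalar ops,
     * matching the vmla.f64 example above. */
    #include <stdio.h>

    static int step_in_bank(int reg, int stride)
    {
        int bank = reg & ~3;                 /* banks of 4 D registers */
        return bank | ((reg + stride) & 3);  /* wrap within the bank */
    }

    int main(void)
    {
        int vd = 10, vn = 1, vm = 16, len = 2, stride = 2;
        for (int i = 0; i < len; i++) {
            printf("vmla.f64 d%d, d%d, d%d\n", vd, vn, vm);
            vd = step_in_bank(vd, stride);
            vn = step_in_bank(vn, stride);
        }
        return 0;
    }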

Note 2: the expression used to calculate the next register
number in the vector bank is not in fact correct; we leave
this behaviour unchanged from the old decoder and will
fix this bug later in the series.

Backports commit 266bd25c485597c94209bfdb3891c1d0c573c164 from qemu
2019-06-13 17:59:16 -04:00
Peter Maydell 3994dfd079
target/arm: Convert the VSEL instructions to decodetree
Convert the VSEL instructions to decodetree.
We leave trans_VSEL() in translate.c for now as this allows
the patch to show just the changes from the old handle_vsel().

In the old code the check for "do D16-D31 exist" was hidden in
the VFP_DREG macro, and assumed that VFPv3 always implied that
D16-D31 exist. In the new code we do the correct ID register test.
This gives identical behaviour for most of our CPUs, and fixes
previously incorrect handling for Cortex-R5F, Cortex-M4 and
Cortex-M33, which all implement VFPv3 or better with only 16
double-precision registers.
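
The shape of the ID register test can be sketched standalone, assuming
the MVFR0 SIMDReg field (bits [3:0]) distinguishes 16 from 32
double-precision registers; illustrative only, not the QEMU helper:

    /* Standalone sketch: decide whether D16-D31 exist from MVFR0.
     * Assumes SIMDReg in bits [3:0]: 1 -> 16 D regs, 2 -> 32 D regs. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    static bool have_d32(uint32_t mvfr0)
    {
        return (mvfr0 & 0xf) >= 2;
    }

    int main(void)
    {
        uint32_t mvfr0 = 0x00000001;   /* invented value: SIMDReg == 1 */
        printf("D16-D31 present: %s\n", have_d32(mvfr0) ? "yes" : "no");
        return 0;
    }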

Backports commit b3ff4b87b4ae08120a51fe12592725e1dca8a085 from qemu
2019-06-13 16:41:22 -04:00
Richard Henderson 8f53f09a05
cpu: Introduce CPUNegativeOffsetState
Nothing in there so far, but all of the plumbing done
within the target ArchCPU state.

Backports commit 5b146dc716cfd247f99556c04e6e46fbd67565a0 from qemu
2019-06-13 15:08:25 -04:00
Richard Henderson ac176ccb38
cpu: Move ENV_OFFSET to exec/gen-icount.h
Now that we have ArchCPU, we can define this generically,
in the one place that needs it.

Backports commit 677c4d69ac21961e76a386f9bfc892a44923acc0 from qemu
2019-06-12 12:20:21 -04:00
Richard Henderson b8bd543390
target/arm: Use env_cpu, env_archcpu
Cleanup in the boilerplate that each target must define.
Replace arm_env_get_cpu with env_archcpu. The combination
CPU(arm_env_get_cpu) should have used ENV_GET_CPU to begin with;
use env_cpu now.
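
The underlying idea is the usual container_of pattern: the env struct
is embedded in the per-target CPU struct, so a pointer to the env can
be converted back to its containing object. A standalone sketch with
invented types:

    /* Standalone sketch of the pattern behind env_archcpu()/env_cpu():
     * recover the containing CPU object from an embedded env pointer.
     * The types here are invented for illustration. */
    #include <stddef.h>
    #include <stdio.h>

    typedef struct { int regs[16]; } DemoCPUState;
    typedef struct { int cpu_index; DemoCPUState env; } DemoCPU;

    static DemoCPU *demo_env_cpu(DemoCPUState *env)
    {
        return (DemoCPU *)((char *)env - offsetof(DemoCPU, env));
    }

    int main(void)
    {
        DemoCPU cpu = { .cpu_index = 3 };
        printf("cpu_index = %d\n", demo_env_cpu(&cpu.env)->cpu_index);
        return 0;
    }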

Backports commit 2fc0cc0e1e034582f4718b1a2d57691474ccb6aa from qemu
2019-06-12 11:34:08 -04:00
Richard Henderson fbf91a6535
cpu: Replace ENV_GET_CPU with env_cpu
Now that we have both ArchCPU and CPUArchState, we can define
this generically instead of via macro in each target's cpu.h.

Backports commit 29a0af618ddd21f55df5753c3e16b0625f534b3c from qemu
2019-06-12 11:16:16 -04:00
Richard Henderson ae94fb5992
cpu: Define ArchCPU
For all targets, do this just before including exec/cpu-all.h.

Backports commit 2161a612b4e1d388046320bc464adefd6bba01a0 from qemu
2019-06-12 11:08:39 -04:00
Richard Henderson e3f1f25996
cpu: Define CPUArchState with typedef
For all targets, do this just before including exec/cpu-all.h.

Backports commit 4f7c64b3819d559417615ed2b1d028ebc1a49580 from qemu
2019-06-12 11:06:36 -04:00
Richard Henderson df2a890bd7
tcg: Split out target/arch/cpu-param.h
For all targets, into this new file move TARGET_LONG_BITS,
TARGET_PAGE_BITS, TARGET_PHYS_ADDR_SPACE_BITS,
TARGET_VIRT_ADDR_SPACE_BITS, and NB_MMU_MODES.

Include this new file from exec/cpu-defs.h.

This now removes the somewhat odd requirement that target/arch/cpu.h
defines TARGET_LONG_BITS before including exec/cpu-defs.h, so push the
bulk of the includes within target/arch/cpu.h to the top.
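
As a rough idea of the shape of such a header (the target name and the
values below are invented, not taken from any real target):

    /* Sketch of a hypothetical target/demo/cpu-param.h; values invented. */
    #ifndef DEMO_CPU_PARAM_H
    #define DEMO_CPU_PARAM_H

    #define TARGET_LONG_BITS            64
    #define TARGET_PAGE_BITS            12
    #define TARGET_PHYS_ADDR_SPACE_BITS 48
    #define TARGET_VIRT_ADDR_SPACE_BITS 48
    #define NB_MMU_MODES                4

    #endif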

Backports commit 74433bf083b0766aba81534f92de13194f23ff3e from qemu
2019-06-10 19:35:46 -04:00
Richard Henderson 3dd7358a53
target/arm: Implement ARMv8.5-RNG
Use the newly introduced infrastructure for guest random numbers.

Backports commit de390645675966cce113bf5394445bc1f8d07c85 from qemu

(with the actual RNG portion disabled to preserve determinism for the
time being).
2019-05-23 15:03:23 -04:00
Richard Henderson a8df33c37c
target/arm: Put all PAC keys into a structure
This allows us to use a single syscall to initialize them all.
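
The idea can be sketched outside QEMU with a plain struct and Linux's
getrandom(2): grouping the five 128-bit pointer-authentication keys in
one structure lets a single call fill them all (illustration only, not
the QEMU code):

    /* Standalone sketch: keep all pointer-authentication keys in one
     * struct so one getrandom() call initializes them (Linux only). */
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/random.h>

    typedef struct { uint64_t lo, hi; } PACKey;
    typedef struct { PACKey apia, apib, apda, apdb, apga; } PACKeys;

    int main(void)
    {
        PACKeys keys;
        if (getrandom(&keys, sizeof(keys), 0) != (ssize_t)sizeof(keys)) {
            perror("getrandom");
            return 1;
        }
        printf("apia = %016llx%016llx\n",
               (unsigned long long)keys.apia.hi,
               (unsigned long long)keys.apia.lo);
        return 0;
    }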

Backports commit 108b3ba891408c4dce93df78261ec4aca38c0e2e from qemu
2019-05-23 14:54:06 -04:00
Peter Maydell 7861820e94
target/arm: Implement XPSR GE bits
In the M-profile architecture, if the CPU implements the DSP extension
then the XPSR has GE bits, in the same way as the A-profile CPSR. When
we added DSP extension support we forgot to add support for reading
and writing the GE bits, which are stored in env->GE. We did put in
the code to add XPSR_GE to the mask of bits to update in the v7m_msr
helper, but forgot it in v7m_mrs. We also must not allow the XPSR we
pull off the stack on exception return to set the nonexistent GE bits.
Correct these errors:
* read and write env->GE in xpsr_read() and xpsr_write()
* only set GE bits on exception return if DSP present
* read GE bits for MRS if DSP present
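
A standalone sketch of the masking those fixes need, assuming GE
occupies XPSR bits [19:16] as in the A-profile CPSR (not the actual
helper code):

    /* Standalone sketch: only honour the XPSR GE bits when the DSP
     * extension exists; assumes GE lives in bits [19:16]. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define XPSR_GE (0xfu << 16)

    static uint32_t xpsr_merge_ge(uint32_t xpsr, uint32_t newval, bool have_dsp)
    {
        if (!have_dsp) {
            return xpsr;        /* GE bits don't exist: ignore the write */
        }
        return (xpsr & ~XPSR_GE) | (newval & XPSR_GE);
    }

    int main(void)
    {
        uint32_t xpsr = xpsr_merge_ge(0, 0x000f0000, true);
        printf("GE = 0x%x\n", (xpsr & XPSR_GE) >> 16);
        return 0;
    }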

Backports commit f1e2598c46d480c9e21213a244bc514200762828 from qemu
2019-05-09 17:46:31 -04:00
Peter Maydell b483951046
target/arm: Implement VLSTM for v7M CPUs with an FPU
Implement the VLSTM instruction for v7M for the FPU present case.

Backports commit 019076b036da4444494de38388218040d9d3a26c from qemu
2019-04-30 11:25:44 -04:00
Peter Maydell a976d7642a
target/arm: Implement M-profile lazy FP state preservation
The M-profile architecture floating point system supports
lazy FP state preservation, where FP registers are not
pushed to the stack when an exception occurs but are instead
only saved if and when the first FP instruction in the exception
handler is executed. Implement this in QEMU, corresponding
to the check of LSPACT in the pseudocode ExecuteFPCheck().
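
Conceptually (a standalone sketch, not the QEMU implementation):
exception entry only records where the FP registers would go and sets
an "active" flag; the expensive save happens when the handler first
touches the FPU.

    /* Standalone sketch of lazy FP state preservation. */
    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    typedef struct {
        double fpregs[16];
        bool   lspact;          /* lazy state preservation active */
        double *pending_save;   /* where the state must eventually go */
    } DemoFPU;

    static void exception_entry(DemoFPU *fpu, double *frame)
    {
        fpu->lspact = true;                 /* defer the register push */
        fpu->pending_save = frame;
    }

    static void first_fp_insn(DemoFPU *fpu)
    {
        if (fpu->lspact) {                  /* the LSPACT-style check */
            memcpy(fpu->pending_save, fpu->fpregs, sizeof(fpu->fpregs));
            fpu->lspact = false;
        }
        /* ...then execute the FP instruction as normal... */
    }

    int main(void)
    {
        static double frame[16];
        DemoFPU fpu = { .fpregs = { 1.5, 2.5 } };
        exception_entry(&fpu, frame);
        first_fp_insn(&fpu);
        printf("saved d0 = %f\n", frame[0]);
        return 0;
    }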

Backports commit e33cf0f8d8c9998a7616684f9d6aa0d181b88803 from qemu
2019-04-30 11:21:50 -04:00
Peter Maydell b1d6bd2792
target/arm: New function armv7m_nvic_set_pending_lazyfp()
In the v7M architecture, if an exception is generated in the process
of doing the lazy stacking of FP registers, the handling of
possible escalation to HardFault is treated differently to the normal
approach: it works based on the saved information about exception
readiness that was stored in the FPCCR when the stack frame was
created. Provide a new function armv7m_nvic_set_pending_lazyfp()
which pends exceptions during lazy stacking, and implements
this logic.

This corresponds to the pseudocode TakePreserveFPException().

Backports the relevant parts of commit
a99ba8ab1601904e0fa20325192fc850362ce80e from qemu
2019-04-30 10:56:54 -04:00
Peter Maydell 3fff653e20
target/arm: New helper function arm_v7m_mmu_idx_all()
Add a new helper function which returns the MMU index to use
for v7M, where the caller specifies all of the security
state, privilege level and whether the execution priority
is negative, and reimplement the existing
arm_v7m_mmu_idx_for_secstate_and_priv() in terms of it.

We are going to need this for the lazy-FP-stacking code.

Backports commit fa6252a988dbe440cd6087bf93cbe0887f0c401b from qemu
2019-04-30 10:54:26 -04:00
Peter Maydell 719231b4c0
target/arm: Activate M-profile floating point context when FPCCR.ASPEN is set
The M-profile FPCCR.ASPEN bit indicates that automatic floating-point
context preservation is enabled. Before executing any floating-point
instruction, if FPCCR.ASPEN is set and the CONTROL FPCA/SFPA bits
indicate that there is no active floating point context then we
must create a new context (by initializing FPSCR and setting
FPCA/SFPA to indicate that the context is now active). In the
pseudocode this is handled by ExecuteFPCheck().

Implement this with a new TB flag which tracks whether we
need to create a new FP context.

Backports commit 6000531e19964756673a5f4b694a649ef883605a from qemu
2019-04-30 10:51:31 -04:00
Peter Maydell 87c8c0fde7
target/arm: Set FPCCR.S when executing M-profile floating point insns
The M-profile FPCCR.S bit indicates the security status of
the floating point context. In the pseudocode ExecuteFPCheck()
function it is unconditionally set to match the current
security state whenever a floating point instruction is
executed.

Implement this by adding a new TB flag which tracks whether
FPCCR.S is different from the current security state, so
that we only need to emit the code to update it in the
less-common case when it is not already set correctly.

Note that we will add the handling for the other work done
by ExecuteFPCheck() in later commits.

Backports commit 6d60c67a1a03be32c3342aff6604cdc5095088d1 from qemu
2019-04-30 10:50:17 -04:00
Peter Maydell 8d726490ff
target/arm: Overlap VECSTRIDE and XSCALE_CPAR TB flags
We are close to running out of TB flags for AArch32; we could
start using the cs_base word, but before we do that we can
economise on our usage by sharing the same bits for the VFP
VECSTRIDE field and the XScale XSCALE_CPAR field. This
works because no XScale CPU ever had VFP.

Backports commit ea7ac69d124c94c6e5579145e727adec9ccbefef from qemu
2019-04-30 10:45:14 -04:00
Peter Maydell 3c1f3548c4
target/arm: Move NS TBFLAG from bit 19 to bit 6
Move the NS TBFLAG down from bit 19 to bit 6, which has not
been used since commit c1e3781090b9d36c60 in 2015, when we
started passing the entire MMU index in the TB flags rather
than just a 'privilege level' bit.

This rearrangement is not strictly necessary, but means that
we can put M-profile-only bits next to each other rather
than scattered across the flag word.

Backports commit 7fbb535f7aeb22896fedfcf18a1eeff48165f1d7 from qemu
2019-04-30 10:41:04 -04:00
Peter Maydell c7f5633cfe
target/arm: Implement v7m_update_fpccr()
Implement the code which updates the FPCCR register on an
exception entry where we are going to use lazy FP stacking.
We have to defer to the NVIC to determine whether the
various exceptions are currently ready or not.

Backports commit b593c2b81287040ab6f452afec6281e2f7ee487b from qemu
2019-04-30 10:32:12 -04:00
Peter Maydell a4f332f3e9
target/arm: Implement dummy versions of M-profile FP-related registers
The M-profile floating point support has three associated config
registers: FPCAR, FPCCR and FPDSCR. It also makes the registers
CPACR and NSACR have behaviour other than reads-as-zero.
Add support for all of these as simple reads-as-written registers.
We will hook up actual functionality later.

The main complexity here is handling the FPCCR register, which
has a mix of banked and unbanked bits.
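
One way to picture the banked/unbanked split (a standalone sketch with
invented bit assignments, not the real FPCCR layout): banked bits live
in a two-entry array indexed by security state, unbanked bits in shared
storage, and a read stitches the two together.

    /* Standalone sketch of a register with banked and unbanked bits;
     * the masks are invented for illustration. */
    #include <stdint.h>
    #include <stdio.h>

    #define BANKED_MASK   0x0000ffffu   /* per-security-state bits */
    #define UNBANKED_MASK 0xffff0000u   /* shared between states */

    typedef struct {
        uint32_t banked[2];             /* [0] Non-secure, [1] Secure */
        uint32_t unbanked;
    } DemoFPCCR;

    static uint32_t fpccr_read(const DemoFPCCR *r, int secure)
    {
        return (r->banked[secure] & BANKED_MASK)
             | (r->unbanked & UNBANKED_MASK);
    }

    static void fpccr_write(DemoFPCCR *r, int secure, uint32_t val)
    {
        r->banked[secure] = val & BANKED_MASK;
        r->unbanked = val & UNBANKED_MASK;
    }

    int main(void)
    {
        DemoFPCCR r = { { 0, 0 }, 0 };
        fpccr_write(&r, 1, 0xdeadbeef);
        printf("secure 0x%08x, non-secure 0x%08x\n",
               fpccr_read(&r, 1), fpccr_read(&r, 0));
        return 0;
    }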

Note that we don't share storage with the A-profile
cpu->cp15.nsacr and cpu->cp15.cpacr_el1, though the behaviour
is quite similar, for two reasons:
* the M profile CPACR is banked between security states
* it preserves the invariant that M profile uses no state
inside the cp15 substruct

Backports commit d33abe82c7c9847284a23e575e1078cccab540b5 from qemu
2019-04-30 10:13:41 -04:00
Lioncash 3521e72580
target/arm: Synchronize with qemu
Synchronizes with bits and pieces that were missed due to merging
incorrectly (sorry :<)
2019-04-18 04:49:11 -04:00
Andrew Jones 8719b3edb3
target/arm: make pmccntr_op_start/finish static
These functions are not used outside helper.c

Backports commit f2b2f53f6429b5abd7cd86bd65747f5f13e195eb from qemu
2019-03-26 20:35:34 -04:00
Richard Henderson f116560d2c
target/arm: Implement ARMv8.5-FRINT
Backports commit 6bea25631af92531027d3bf3ef972a4d51d62e7c from qemu
2019-03-05 23:17:33 -05:00
Richard Henderson 94b5aab8f8
target/arm: Implement ARMv8.5-CondM
Backports commit 5ef84f111483e3f7b57efc690e22081ca8f99544 from qemu
2019-03-05 23:04:06 -05:00
Richard Henderson 1dfa15a683
target/arm: Implement ARMv8.4-CondM
Backports commit b89d9c988a988d5547c73e2bc43f59b0c07420a5 from qemu
2019-03-05 22:59:51 -05:00
Richard Henderson 5d42ca6a65
target/arm: Implement ARMv8.0-PredInv
Backports commit cb570bd318beb2ecce83cabf8016dacceb824dce from qemu
2019-03-05 22:37:57 -05:00
Richard Henderson 1721e429c2
target/arm: Implement ARMv8.0-SB
Backports commit 9888bd1e20425dfe4dcca5dcd1ca2fac8e90ad19 from qemu
2019-03-05 22:35:16 -05:00
Richard Henderson a552a7b2e0
target/arm: Split out arm_sctlr
Minimize the number of places that will need updating when
the virtual host extensions are added.

Backports commit 64e40755cd41fbe8cd266cf387e42ddc57a449ef from qemu
2019-03-05 22:29:25 -05:00
Richard Henderson 4ae3ff8e61
target/arm: Implement VFMAL and VFMSL for aarch32
Backports commit 87732318c5d68a366fc2d6fc394d9c20412099fa from qemu
2019-02-28 15:44:59 -05:00
Richard Henderson 625d3f3cfb
target/arm: Implement FMLAL and FMLSL for aarch64
Backports commit 0caa5af802ff622c854ff4ee2e2b8cdd135b4d73 from qemu
2019-02-28 15:36:41 -05:00
Peter Maydell 82b8e97f76
target/arm: Gate "miscellaneous FP" insns by ID register field
There is a set of VFP instructions which we implement in
disas_vfp_v8_insn() and gate on the ARM_FEATURE_V8 bit.
These were all first introduced in v8 for A-profile, but in
M-profile they appeared in v7M. Gate them on the MVFR2
FPMisc field instead, and rename the function appropriately.

Backports commit c0c760afe800b60b48c80ddf3509fec413594778 from qemu
2019-02-28 15:26:27 -05:00
Peter Maydell 118a2bde5c
target/arm: Use MVFR1 feature bits to gate A32/T32 FP16 instructions
Instead of gating the A32/T32 FP16 conversion instructions on
the ARM_FEATURE_VFP_FP16 flag, switch to our new approach of
looking at ID register bits. In this case MVFR1 fields FPHP
and SIMDHP indicate the presence of these insns.

This change doesn't alter behaviour for any of our CPUs.

Backports commit 602f6e42cfbfe9278be34e9b91d2ceb695837e02 from qemu
2019-02-28 15:23:51 -05:00
Richard Henderson c9ad233678
target/arm: Implement ARMv8.3-JSConv
Backports commit 6c1f6f2733a7692793135ea5ce72b829add99a50 from qemu
2019-02-22 19:08:57 -05:00
Richard Henderson 10d468f601
target/arm: Split out FPSCR.QC to a vector field
Change the representation of this field such that it is easy
to set from vector code.
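
The idea, as a standalone sketch (not QEMU's actual data layout): keep
the sticky saturation state in a separate word that vector helpers can
simply OR into, and fold it into the architected QC flag (FPSCR bit 27)
only when FPSCR is read.

    /* Standalone sketch: keep the sticky QC state outside FPSCR so
     * vector code can set it without a read-modify-write of FPSCR. */
    #include <stdint.h>
    #include <stdio.h>

    #define FPSCR_QC (1u << 27)

    typedef struct {
        uint32_t fpscr;     /* FPSCR value without the QC bit */
        uint32_t qc;        /* nonzero means saturation occurred */
    } DemoFPStatus;

    static void vector_helper_saturated(DemoFPStatus *s)
    {
        s->qc |= 1;
    }

    static uint32_t fpscr_read(const DemoFPStatus *s)
    {
        return s->fpscr | (s->qc ? FPSCR_QC : 0);
    }

    int main(void)
    {
        DemoFPStatus s = { 0, 0 };
        vector_helper_saturated(&s);
        printf("FPSCR = 0x%08x\n", fpscr_read(&s));
        return 0;
    }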

Backports commit a4d5846245c5e029e5aa3945a9bda1de1c3fedbf from qemu
2019-02-15 18:04:13 -05:00
Alex Bennée bf9c8499ca
target/arm: expose remaining CPUID registers as RAZ
There are a whole bunch more registers in the CPUID space which are
currently not used but are exposed as RAZ. To avoid too much
duplication we expand ARMCPRegUserSpaceInfo to understand glob
patterns so we only need one entry to tweak whole ranges of registers.
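
The glob matching itself can be sketched with POSIX fnmatch(3); the
pattern and most of the names below are invented stand-ins, not the
entries the commit adds:

    /* Standalone sketch: use a glob pattern to mark a whole range of
     * registers as RAZ instead of listing them one by one. */
    #include <fnmatch.h>
    #include <stdio.h>

    int main(void)
    {
        const char *raz_glob = "ID_*_RESERVED";   /* illustrative pattern */
        const char *regs[] = { "ID_AA64PFR0_EL1", "ID_X5_RESERVED" };

        for (int i = 0; i < 2; i++) {
            int raz = fnmatch(raz_glob, regs[i], 0) == 0;
            printf("%-16s -> %s\n", regs[i], raz ? "RAZ" : "exported");
        }
        return 0;
    }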

Backports commit d040242effe47850060d2ef1c461ff637d88a84d from qemu
2019-02-15 17:48:37 -05:00
Alex Bennée babf31dfa0
target/arm: expose CPUID registers to userspace
A number of CPUID registers are exposed to userspace by modern Linux
kernels thanks to the "ARM64 CPU Feature Registers" ABI. For QEMU's
user-mode emulation we don't need to emulate the kernel's trap but just
return the value the trap would have done. To avoid too much #ifdef
hackery we process ARMCPRegInfo with a new helper (modify_arm_cp_regs)
before defining the registers. The modify routine is driven by a
simple data structure which describes which bits are exported and
which are fixed.

Backports commit 6c5c0fec29bbfe36c64eca1edfd8455be46b77c6 from qemu
2019-02-15 17:27:30 -05:00
Alex Bennée 0a51e5055f
target/arm: relax permission checks for HWCAP_CPUID registers
Although technically not visible to userspace, the kernel does make
them visible via a trap-and-emulate ABI. We provide a new permission
mask (PL0U_R) which maps to PL0_R for CONFIG_USER builds and adjust
the minimum permission check accordingly.

Backports commit b5bd7440422bb66deaceb812bb9287a6a3cdf10c from qemu
2019-02-15 17:18:06 -05:00
Peter Maydell 04676ed074
target/arm: Make FPSCR/FPCR trapped-exception bits RAZ/WI
The {IOE, DZE, OFE, UFE, IXE, IDE} bits in the FPSCR/FPCR are for
enabling trapped IEEE floating point exceptions (where IEEE exception
conditions cause a CPU exception rather than updating the FPSR status
bits). QEMU doesn't implement this (and nor does the hardware we're
modelling), but for implementations which don't implement trapped
exception handling these control bits are supposed to be RAZ/WI.
This allows guest code to test for whether the feature is present
by trying to write to the bit and checking whether it sticks.

QEMU is incorrectly making these bits read as written. Make them
RAZ/WI as the architecture requires.

In particular this was causing problems for the NetBSD automatic
test suite.
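
A standalone sketch of the required RAZ/WI behaviour, assuming the
usual FPSCR layout (IOE..IXE in bits [12:8], IDE in bit 15); with it,
the guest's probe-by-writing test correctly reports that trapped
exceptions are absent:

    /* Standalone sketch: make the FPSCR trap-enable bits RAZ/WI.
     * Assumes IOE/DZE/OFE/UFE/IXE in bits [12:8] and IDE in bit 15. */
    #include <stdint.h>
    #include <stdio.h>

    #define FPSCR_TRAP_ENABLES ((0x1fu << 8) | (1u << 15))

    static uint32_t fpscr;

    static void fpscr_write(uint32_t val)
    {
        fpscr = val & ~FPSCR_TRAP_ENABLES;   /* writes are ignored */
    }

    static uint32_t fpscr_read(void)
    {
        return fpscr;                        /* so they read as zero */
    }

    int main(void)
    {
        fpscr_write(fpscr_read() | (1u << 8));       /* try to set IOE */
        printf("trapped exceptions present: %s\n",
               (fpscr_read() & (1u << 8)) ? "yes" : "no");
        return 0;
    }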

Backports commit a15945d98d3a3390c3da344d1b47218e91e49d8b from qemu
2019-02-05 17:45:22 -05:00
Richard Henderson 5c6ffde710
target/arm: Add TBFLAG_A64_TBID, split out gen_top_byte_ignore
Split out gen_top_byte_ignore in preparation for handling these
data accesses; the new tbflags field is not yet honored.
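
When TBI applies to a data access the top byte is ignored for address
translation; a simple way to model the cleaning step (a standalone,
simplified sketch that ignores the per-half TBI0/TBI1 handling) is to
sign-extend the address from bit 55:

    /* Standalone sketch: "clean" a tagged 64-bit virtual address by
     * dropping the top byte and sign-extending from bit 55, so both
     * halves of the address space stay canonical. */
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    static uint64_t clean_addr_tbi(uint64_t addr)
    {
        int64_t shifted = (int64_t)(addr << 8);   /* discard the top byte */
        return (uint64_t)(shifted >> 8);          /* sign-extend bit 55 */
    }

    int main(void)
    {
        uint64_t tagged = 0x5a00ffff12345678ULL;  /* tag 0x5a in bits 63-56 */
        printf("cleaned: 0x%016" PRIx64 "\n", clean_addr_tbi(tagged));
        return 0;
    }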

Backports commit 4a9ee99db38ba513bf1e8f43665b79c60accd017 from qemu
2019-02-05 17:20:11 -05:00
Richard Henderson cf3ac035bc
target/arm: Add BT and BTYPE to tb->flags
Backports commit 08f1434a71ddf2bdfdb034dcd24b24464d1efd72 from qemu
2019-02-05 16:59:53 -05:00
Richard Henderson a99119ce39
target/arm: Add PSTATE.BTYPE
Place this in its own field within ENV, as that will
make it easier to reset from within TCG generated code.

With the change to pstate_read/write, exception entry
and return are automatically handled.

Backports commit f6e52eaac13b6947f4406c127e3090c898e439c9 from qemu
2019-02-05 16:57:51 -05:00
Richard Henderson 6b4f7a28b5
target/arm: Introduce isar_feature_aa64_bti
Also create field definitions for id_aa64pfr1 from ARMv8.5.

Backports commit be53b6f4d7ace2e6a018e45af825069ccb7bab66 from qemu
2019-02-05 16:56:01 -05:00
Remi Denis-Courmont a20bb60f06
target/arm: fix AArch64 virtual address space size
Since QEMU does not support the ARMv8.2-LVA, Large Virtual Address,
extension (yet), the VA address space is 48 bits plus a sign bit. User
mode can only handle the positive half of the address space, so that
makes a limit of 48 bits.

(With LVA, it would be 53 and 52 bits respectively.)

The incorrectly large address space conflicts with PAuth instructions,
which use bits 48-54 and 56-63 for the pointer authentication code. This
also conflicts with (as yet unsupported by QEMU) data tagging and with
the ARMv8.5-MTE extension.
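
In concrete terms (a standalone sketch using the bit ranges given
above): with a 48-bit VA, bit 55 selects the half of the address space
and the PAC occupies bits [54:48] and [63:56].

    /* Standalone sketch: PAC bit positions for a 48-bit VA, per the
     * description above. */
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        const unsigned va_bits = 48;
        uint64_t bit55    = 1ULL << 55;
        uint64_t pac_mask = (~0ULL << va_bits) & ~bit55;

        printf("VA mask  = 0x%016" PRIx64 "\n", (1ULL << va_bits) - 1);
        printf("PAC mask = 0x%016" PRIx64 "\n", pac_mask);
        return 0;
    }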

Backports commit f6768aa1b4c6a80448eabd22bb9b4123c709caea from qemu
2019-02-03 17:55:30 -05:00
Aaron Lindsay OS 8d7bb2cab3
target/arm: Don't clear supported PMU events when initializing PMCEID1
A bug was introduced during a respin of:

commit 57a4a11b2b281bb548b419ca81bfafb214e4c77a
target/arm: Add array for supported PMU events, generate PMCEID[01]_EL0

This patch introduced two calls to get_pmceid() during CPU
initialization - one each for PMCEID0 and PMCEID1. In addition to
building the register values, get_pmceid() clears an internal array
mapping event numbers to their implementations (supported_event_map)
before rebuilding it. This is an optimization since much of the logic is
shared. However, since it was called twice, the contents of
supported_event_map reflect only the events in PMCEID1 (the second call
to get_pmceid()).

Fix this bug by moving the initialization of PMCEID0 and PMCEID1 back
into a single function call, and name it more appropriately since it is
doing more than simply generating the contents of the PMCEID[01]
registers.

Backports commit bf8d09694ccc07487cd73d7562081fdaec3370c8 from qemu
2019-01-29 17:12:23 -05:00