Commit graph

988 commits

Richard Henderson 1b21ced6a1
target/arm: Convert Data Processing (reg-shifted-reg)
Convert the register shifted by register form of the data
processing insns. For A32, we cannot yet remove any code
because the legacy decoder intertwines the immediate form.

Backports commit 5be2c12337f4cbdbda4efe6ab485350f730faaad from qemu
2019-11-28 02:39:16 -05:00
Richard Henderson e151696a65
target/arm: Convert Data Processing (register)
Convert the register shifted by immediate form of the data
processing insns. For A32, we cannot yet remove any code
because the legacy decoder intertwines the reg-shifted-reg
and immediate forms.

Backports commit 25ae32c558182c07fc6ad01b936e9151cbf00c44 from qemu
2019-11-28 02:38:58 -05:00
Richard Henderson 9fc793b566
target/arm: Add stubs for aa32 decodetree
Add the infrastructure that will become the new decoder.
No instructions adjusted so far.

Backports commit 51409b9e8cfe997b1ac3365df7400e0c6e844437 from qemu
2019-11-28 02:38:49 -05:00
Richard Henderson 6ec6c71d50
target/arm: Use store_reg_from_load in thumb2 code
This function already includes the test for an interworking write
to PC from a load. Change the T32 LDM implementation to match the
A32 LDM implementation.

For LDM, the reordering of the tests does not change valid
behaviour, because the only case that differs has rn == 15,
which is UNPREDICTABLE.

Backports commit 69be3e13764111737e1a7a13bb0c231e4d5be756 from qemu
2019-11-28 02:38:42 -05:00
Richard Henderson 46a8dfff59
target/arm: Fix SMMLS argument order
The previous simplification got the order of operands to the
subtraction wrong. Since the 64-bit product is the subtrahend,
we must use a 64-bit subtract to properly compute the borrow
from the low-part of the product.

Fixes: 5f8cd06ebcf5 ("target/arm: Simplify SMMLA, SMMLAR, SMMLS, SMMLSR")

Backports commit e0a0c8322b8ebcdad674f443a3e86db8708d6738 from qemu
2019-11-20 17:24:44 -05:00
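
For reference, the arithmetic behind the fix above, as a plain-C sketch rather than the actual TCG sequence (the function name and test values are illustrative, and sign extension of Ra is omitted for brevity):

    #include <stdint.h>

    /* SMMLS subtracts the full 64-bit product from Ra:0; a borrow generated
     * by the low 32 bits of the product must propagate into the high word. */
    static uint32_t smmls_ref(uint32_t ra, int32_t rn, int32_t rm)
    {
        uint64_t acc = (uint64_t)ra << 32;        /* Ra in the high word */
        int64_t product = (int64_t)rn * rm;       /* full 64-bit product */
        return (uint32_t)((acc - product) >> 32); /* borrow reaches the high word */
    }

    /* smmls_ref(1, 3, 0x40000001) is 0, not 1: the low half of the product
     * borrows from the high word, which a 32-bit subtract would miss. */
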
Peter Maydell 9fb54a7f72
target/arm: Take exceptions on ATS instructions when needed
The translation table walk for an ATS instruction can result in
various faults. In general these are just reported back via the
PAR_EL1 fault status fields, but in some cases the architecture
requires that the fault is turned into an exception:
* synchronous stage 2 faults of any kind during AT S1E0* and
AT S1E1* instructions executed from NS EL1 fault to EL2 or EL3
* synchronous external aborts are taken as Data Abort exceptions

(This is documented in the v8A Arm ARM DDI0487E.a D5.2.11 and
G5.13.4.)

Backports commit 0710b2fa84a4aeb925422e1e88edac49ed407c79 from qemu
2019-11-20 17:24:44 -05:00
Peter Maydell 56b54f361e
target/arm: Allow ARMCPRegInfo read/write functions to throw exceptions
Currently the only part of an ARMCPRegInfo which is allowed to cause
a CPU exception is the access function, which returns a value indicating
that some flavour of UNDEF should be generated.

For the ATS system instructions, we would like to conditionally
generate exceptions as part of the writefn, because some faults
during the page table walk (like external aborts) should cause
an exception to be raised rather than returning a value.

There are several ways we could do this:
* plumb the GETPC() value from the top level set_cp_reg/get_cp_reg
helper functions through into the readfn and writefn hooks
* add extra readfn_with_ra/writefn_with_ra hooks that take the GETPC()
value
* require the ATS instructions to provide a dummy accessfn,
which serves no purpose except to cause the code generation
to emit TCG ops to sync the CPU state
* add an ARM_CP_ flag to mark the ARMCPRegInfo as possibly
throwing an exception in its read/write hooks, and make the
codegen sync the CPU state before calling the hooks if the
flag is set

This patch opts for the last of these, as it is fairly simple
to implement and doesn't require invasive changes like updating
the readfn/writefn hook function prototype signature.

Backports commit 37ff584c15bc3e1dd2c26b1998f00ff87189538c from qemu
2019-11-20 17:24:37 -05:00
Richard Henderson 87c06b7fae
target/arm: Factor out unallocated_encoding for aarch32
Make this a static function private to translate.c.
Thus we can use the same idiom between aarch64 and aarch32
without actually sharing function implementations.

Backports commit 1ce21ba1eaf08b22da5925f3e37fc0b4322da858 from qemu
2019-11-18 23:51:45 -05:00
Richard Henderson 1f59a43544
Revert "target/arm: Use unallocated_encoding for aarch32"
Despite the fact that the text for the call to gen_exception_insn
is identical for aarch64 and aarch32, the implementation inside
gen_exception_insn is totally different.

This fixes exceptions raised from aarch64.

This reverts commit fb2d3c9a9a.
2019-11-18 23:49:47 -05:00
Richard Henderson 9d2a3064af
target/arm: Use tcg_gen_extrh_i64_i32 to extract the high word
Separate shift + extract low will result in one extra insn
for hosts like RISC-V, MIPS, and Sparc.

Backports commit 664b7e3b97d6376f3329986c465b3782458b0f8b from qemu
2019-11-18 20:36:19 -05:00
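
In plain C terms, the value being produced is just the upper half of a 64-bit quantity; tcg_gen_extrh_i64_i32 lets the backend emit that as a single operation instead of a shift followed by a truncation. A sketch of the semantics only, not the translator code:

    #include <stdint.h>

    /* What "extract the high word" means, independent of TCG. */
    static uint32_t high_word(uint64_t x)
    {
        return (uint32_t)(x >> 32);
    }
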
Richard Henderson 93c016a3e7
target/arm: Simplify SMMLA, SMMLAR, SMMLS, SMMLSR
All of the inputs to these instructions are 32-bits. Rather than
extend each input to 64-bits and then extract the high 32-bits of
the output, use tcg_gen_muls2_i32 and other 32-bit generator functions.

Backports commit 5f8cd06ebcf57420be8fea4574de2e074de46709 from qemu
2019-11-18 20:31:12 -05:00
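
The semantics relied on above, as a hedged plain-C sketch (not the translator code itself): a 32x32->64 signed multiply whose two halves are produced directly, which is what tcg_gen_muls2_i32 exposes without any 64-bit temporaries.

    #include <stdint.h>

    static void muls2_ref(int32_t a, int32_t b, uint32_t *lo, int32_t *hi)
    {
        int64_t p = (int64_t)a * b;   /* full signed product */
        *lo = (uint32_t)p;            /* low 32 bits */
        *hi = (int32_t)(p >> 32);     /* high 32 bits, used by SMMLA and friends */
    }
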
Richard Henderson 4a1cc16eef
target/arm: Use tcg_gen_rotri_i32 for gen_swap_half
Rotate is the more compact and obvious way to swap 16-bit
elements of a 32-bit word.

Backports commit adefba76e8bf10dfb342094d2f5debfeedb1a74d from qemu
2019-11-18 20:27:12 -05:00
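
The identity being exploited, in plain C (illustrative only): a rotate by 16 is exactly a swap of the two halfwords, so a single tcg_gen_rotri_i32(dst, src, 16) replaces the shift/mask/or sequence.

    #include <stdint.h>

    static uint32_t swap_half_ref(uint32_t x)
    {
        return (x >> 16) | (x << 16);   /* 0xAAAABBBB -> 0xBBBBAAAA */
    }
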
Richard Henderson 751ab7b24b
target/arm: Use ror32 instead of open-coding the operation
The helper function is more documentary, and also already
handles the case of rotate by zero.

Backports commit dd861b3f29be97a9e3cdb9769dcbc0c7d7825185 from qemu
2019-11-18 20:25:51 -05:00
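
A minimal sketch of the idiom, assuming a helper along the lines of QEMU's ror32() (the exact definition may differ); the point is that the helper names the operation and stays well defined for a rotate count of zero:

    #include <stdint.h>

    static uint32_t ror32_ref(uint32_t x, unsigned shift)
    {
        shift &= 31;
        return shift ? (x >> shift) | (x << (32 - shift)) : x;
    }
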
Richard Henderson df4c773ed2
target/arm: Remove redundant shift tests
The immediate shift generator functions already test for,
and eliminate, the case of a shift by zero.

Backports commit 464eaa9571fae5867d9aea7d7209c091c8a50223 from qemu
2019-11-18 20:24:39 -05:00
Richard Henderson 4dd30ebfbd
target/arm: Use tcg_gen_deposit_i32 for PKHBT, PKHTB
Use deposit as the composite operation to merge the
bits from the two inputs.

Backports commit d1f8755fc93911f5b27246b1da794542d222fa1b from qemu
2019-11-18 20:22:00 -05:00
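
As an illustration of why deposit fits PKHBT/PKHTB (plain C, not the actual translator code): the instructions build a result from the low halfword of one operand and the high halfword of the other, i.e. they insert a 16-bit field at a fixed offset, which is precisely what a deposit operation expresses.

    #include <stdint.h>

    /* PKHBT-style merge: low half from rn, high half from the shifted rm. */
    static uint32_t pkhbt_ref(uint32_t rn, uint32_t shifted_rm)
    {
        return (rn & 0x0000ffffu) | (shifted_rm & 0xffff0000u);
    }
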
Richard Henderson 25ccd28e78
target/arm: Use tcg_gen_extract_i32 for shifter_out_im
Extract is a compact combination of shift + and.

Backports commit 191f4bfe8d6cf0c7d5cd7f84cd7076e32e3745dd from qemu
2019-11-18 20:19:40 -05:00
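
In plain C, the carry-out computation reduces to pulling a single bit out of the shifted value, which tcg_gen_extract_i32 does in one op (sketch only, function name illustrative):

    #include <stdint.h>

    static uint32_t bit_at(uint32_t x, unsigned ofs)
    {
        return (x >> ofs) & 1u;   /* shift + and, i.e. extract(x, ofs, 1) */
    }
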
Andrew Jones ad63ee7509
target/arm/cpu: Use div-round-up to determine predicate register array size
Unless we're guaranteed to always increase ARM_MAX_VQ by a multiple of
four, we should use DIV_ROUND_UP to ensure we get an appropriate
array size.

Backports commit 46417784d21c89446763f2047228977bdc267895 from qemu
2019-11-18 20:16:46 -05:00
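
A hedged sketch of the idiom (QEMU's osdep.h provides DIV_ROUND_UP; the array declaration shown here is illustrative, not the exact one in cpu.h):

    #include <stdint.h>

    #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
    #define ARM_MAX_VQ 16

    /* Rounds up rather than truncating, so the array stays big enough
     * even if ARM_MAX_VQ is not a multiple of four. */
    uint64_t pred[DIV_ROUND_UP(2 * ARM_MAX_VQ, 8)];
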
Andrew Jones bb8b3bc42b
target/arm/helper: zcr: Add build bug next to value range assumption
The current implementation of ZCR_ELx matches the architecture, only
implementing the lower four bits, with the rest RAZ/WI. This puts
a strict limit on ARM_MAX_VQ of 16. Make sure we don't let ARM_MAX_VQ
grow without a corresponding update here.

Backports commit 7b351d98709d3f77d6bb18562e1bf228862b0d57 from qemu
2019-11-18 20:14:42 -05:00
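
A hedged sketch of the kind of compile-time guard this describes (QEMU spells it QEMU_BUILD_BUG_ON; plain C11 static_assert shown here, with a limit matching the four ZCR_ELx.LEN bits):

    #include <assert.h>

    #define ARM_MAX_VQ 16
    static_assert(ARM_MAX_VQ <= 16,
                  "ZCR_ELx only implements 4 LEN bits; update the ZCR code first");
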
Richard Henderson 3d3d56056b
target/arm: Remove helper_double_saturate
Replace x = double_saturate(y) with x = add_saturate(y, y).
There is no need for a separate more specialized helper.

Backports commit 640581a06d14e2d0d3c3ba79b916de6bc43578b0 from qemu
2019-11-18 20:13:21 -05:00
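
For reference, the equivalence the patch relies on, as a plain-C sketch of signed saturating addition (illustrative, not the helper's implementation):

    #include <stdint.h>

    static int32_t add_saturate_ref(int32_t a, int32_t b)
    {
        int64_t r = (int64_t)a + b;
        if (r > INT32_MAX) r = INT32_MAX;
        if (r < INT32_MIN) r = INT32_MIN;
        return (int32_t)r;
    }

    /* double_saturate(x) is just add_saturate_ref(x, x). */
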
Richard Henderson fb2d3c9a9a
target/arm: Use unallocated_encoding for aarch32
Promote this function from aarch64 to fully general use.
Use it to unify the code sequences for generating illegal
opcode exceptions.

Backports commit 3cb36637157088892e9e33ddb1034bffd1251d3b from qemu
2019-11-18 20:10:50 -05:00
Richard Henderson d562bea784
target/arm: Remove offset argument to gen_exception_bkpt_insn
Unlike the other more generic gen_exception{,_internal}_insn
interfaces, breakpoints always refer to the current instruction.

Backports commit 06bcbda3f64d464b6ecac789bce4bd69f199cd68 from qemu
2019-11-18 20:05:45 -05:00
Richard Henderson f19b4df20d
target/arm: Replace offset with pc in gen_exception_internal_insn
The offset is variable depending on the instruction set.
Passing in the actual value is clearer in intent.

Backports commit aee828e7541a5895669ade3a4b6978382b6b094a from qemu
2019-11-18 20:05:23 -05:00
Richard Henderson 00fbadf637
target/arm: Replace s->pc with s->base.pc_next
We must update s->base.pc_next when we return from the translate_insn
hook to the main translator loop. By incrementing s->base.pc_next
immediately after reading the insn word, "pc_next" contains the address
of the next instruction throughout translation.

All remaining uses of s->pc are referencing the address of the next insn,
so this is now a simple global replacement. Remove the "s->pc" field.

Backports commit a04159166b880b505ccadc16f2fe84169806883d from qemu
2019-11-18 17:32:53 -05:00
Richard Henderson 7d1fcef722
target/arm: Remove redundant s->pc & ~1
The thumb bit has already been removed from s->pc, and is always even.

Backports commit 4818c3743b0e0095fdcecd24457da9b3443730ab from qemu
2019-11-18 17:32:53 -05:00
Richard Henderson a2e60445de
target/arm: Introduce add_reg_for_lit
Provide a common routine for the places that require ALIGN(PC, 4)
as the base address as opposed to plain PC. The two are always
the same for A32, but the difference is meaningful for thumb mode.

Backports commit 16e0d8234ef9291747332d2c431e46808a060472 from qemu
2019-11-18 17:32:49 -05:00
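
The base-address computation in question, as plain C (function name is illustrative): A32 PC values are always word aligned, but a Thumb PC can be halfword aligned, so literal accesses must align it down.

    #include <stdint.h>

    static uint32_t lit_base(uint32_t pc)
    {
        return pc & ~3u;   /* Align(PC, 4) from the Arm ARM pseudocode */
    }
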
Richard Henderson 1c0914e58c
target/arm: Introduce read_pc
We currently have 3 different ways of computing the architectural
value of "PC" as seen in the ARM ARM.

The value of s->pc has been incremented past the current insn,
but that is all. Thus for a32, PC = s->pc + 4; for t32, PC = s->pc;
for t16, PC = s->pc + 2. These differing computations make it
impossible at present to unify the various code paths.

With the newly introduced s->pc_curr, we can compute the correct
value for all cases, using the formula given in the ARM ARM.

This changes the behaviour for load_reg() and load_reg_var()
when called with reg==15 from a 32-bit Thumb instruction:
previously they would have returned the incorrect value
of pc_curr + 6, and now they will return the architecturally
correct value of PC, which is pc_curr + 4. This will not
affect well-behaved guest software, because all of the places
we call these functions from T32 code are instructions where
using r15 is UNPREDICTABLE. Using the architectural PC value
here is more consistent with the T16 and A32 behaviour.

Backports commit fdbcf6329d0c2984c55d7019419a72bf8e583c36 from qemu
2019-11-18 17:04:50 -05:00
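
The formula referred to above, as a plain-C sketch (names are illustrative): the architectural PC is the address of the current instruction plus 8 in A32 state and plus 4 in Thumb state.

    #include <stdbool.h>
    #include <stdint.h>

    static uint32_t read_pc_ref(uint32_t pc_curr, bool thumb)
    {
        return pc_curr + (thumb ? 4 : 8);
    }
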
Richard Henderson 0048f3e887
target/arm: Introduce pc_curr
Add a new field to retain the address of the instruction currently
being translated. The 32-bit uses are all within subroutines used
by a32 and t32. This will become less obvious when t16 support is
merged with a32+t32, and having a clear definition will help.

Convert aarch64 as well for consistency. Note that there is one
instance of a pre-assert fprintf that used the wrong value for the
address of the current instruction.

Backports commit 43722a6d4f0c92f7e7e1e291580039b0f9789df1 from qemu
2019-11-18 16:58:40 -05:00
Richard Henderson 1aa3c685a8
target/arm: Pass in pc to thumb_insn_is_16bit
This function is used in two different contexts, and it will be
clearer if the function is given the address to which it applies.

Backports commit 331b1ca616cb708db30dab68e3262d286e687f24 from qemu
2019-11-18 16:52:35 -05:00
Peter Maydell c61e22627d
target/arm: Fix routing of singlestep exceptions
When generating an architectural single-step exception we were
routing it to the "default exception level", which is to say
the same exception level we execute at except that EL0 exceptions
go to EL1. This is incorrect because the debug exception level
can be configured by the guest for situations such as single
stepping of EL0 and EL1 code by EL2.

We have to track the target debug exception level in the TB
flags, because it is dependent on CPU state like HCR_EL2.TGE
and MDCR_EL2.TDE. (That we were previously calling the
arm_debug_target_el() function to determine dc->ss_same_el
is itself a bug, though one that would only have manifested
as incorrect syndrome information.) Since we are out of TB
flag bits unless we want to expand into the cs_base field,
we share some bits with the M-profile only HANDLER and
STACKCHECK bits, since only A-profile has this singlestep.

Fixes: https://bugs.launchpad.net/qemu/+bug/1838913

Backports commit 8bd587c1066f4456ddfe611b571d9439a947d74c from qemu
2019-11-18 16:50:15 -05:00
Peter Maydell 3f531fac61
target/arm: Factor out 'generate singlestep exception' function
Factor out code to 'generate a singlestep exception', which is
currently repeated in four places.

To do this we need to also pull the identical copies of the
gen_exception() function out of translate-a64.c and translate.c
into translate.h.

(There is a bug in the code: we're taking the exception to the wrong
target EL. This will be simpler to fix if there's only one place to
do it.)

Backports commit c1d5f50f094ab204accfacc2ee6aafc9601dd5c4 from qemu
2019-11-18 16:47:08 -05:00
Alex Bennée 0d6ed39333
target/arm: generate a custom MIDR for -cpu max
While most features are now detected by probing the ID_* registers,
kernels can (and do) use MIDR_EL1 for working out if they have to
apply errata. This can trip up warnings in the kernel as it tries to
work out if it should apply workarounds to features that don't
actually exist in the reported CPU type.

Avoid this problem by synthesising our own MIDR value.

Backports commit 2bd5f41c00686a1f847a60824d0375f3df2c26bf from qemu
2019-11-18 16:42:51 -05:00
Christophe Lyon 8264cb84fe
target/arm: Allow reading flags from FPSCR for M-profile
rt==15 is a special case when reading the flags: it means the
destination is APSR. This patch avoids rejecting vmrs apsr_nzcv, fpscr
as an illegal instruction.

Backports commit cdc6896659b85f7ed8f7552850312e55170de0c5 from qemu
2019-11-18 16:32:06 -05:00
Peter Maydell 3fc86e1901
target/arm: Don't abort on M-profile exception return in linux-user mode
An attempt to do an exception-return (branch to one of the magic
addresses) in linux-user mode for M-profile should behave like
a normal branch, because linux-user mode is always going to be
in 'handler' mode. This used to work, but we broke it when we added
support for the M-profile security extension in commit d02a8698d7ae2bfed.

In that commit we allowed even handler-mode calls to magic return
values to be checked for and dealt with by causing an
EXCP_EXCEPTION_EXIT exception to be taken, because this is
needed for the FNC_RETURN return-from-non-secure-function-call
handling. For system mode we added a check in do_v7m_exception_exit()
to make any spurious calls from Handler mode behave correctly, but
forgot that linux-user mode would also be affected.

How an attempted return-from-non-secure-function-call in linux-user
mode should be handled is not clear -- on real hardware it would
result in return to secure code (not to the Linux kernel) which
could then handle the error in any way it chose. For QEMU we take
the simple approach of treating this erroneous return the same way
it would be handled on a CPU without the security extensions --
treat it as a normal branch.

The upshot of all this is that for linux-user mode we should never
do any of the bx_excret magic, so the code change is simple.

This ought to be a weird corner case that only affects broken guest
code (because Linux user processes should never be attempting to do
exception returns or NS function returns), except that the code that
assigns addresses in RAM for the process and stack in our linux-user
code does not attempt to avoid this magic address range, so
legitimate code attempting to return to a trampoline routine on the
stack can fall into this case. This change fixes those programs,
but we should also look at restricting the range of memory we
use for M-profile linux-user guests to the area that would be
real RAM in hardware.

Backports commit 9027d3fba605d8f6093342ebe4a1da450d374630 from qemu
2019-11-18 16:30:43 -05:00
Peter Maydell 8f7f19ce43
target/arm: Free TCG temps in trans_VMOV_64_sp()
The function neon_store_reg32() doesn't free the TCG temp that it
is passed, so the caller must do that. We got this right in most
places but forgot to free the TCG temps in trans_VMOV_64_sp().

Backports commit 38fb634853ac6547326d9f88b9a068d9fc6b4ad4 from qemu
2019-11-18 16:27:21 -05:00
Peter Maydell c6041bf94b
target/arm: Avoid bogus NSACR traps on M-profile without Security Extension
In Arm v8.0 M-profile CPUs without the Security Extension and also in
v7M CPUs, there is no NSACR register. However, the code we have to handle
the FPU does not always check whether the ARM_FEATURE_M_SECURITY bit
is set before testing whether env->v7m.nsacr permits access to the
FPU. This means that for a CPU with an FPU but without the Security
Extension we would always take a bogus fault when trying to stack
the FPU registers on an exception entry.

We could fix this by adding extra feature bit checks for all uses,
but it is simpler to just make the internal value of nsacr 0xcff
("all non-secure accesses allowed"), since this is not guest
visible when the Security Extension is not present. This allows
us to continue to follow the Arm ARM pseudocode which takes a
similar approach. (In particular, in the v8.1 Arm ARM the register
is documented as reading as 0xcff in this configuration.)

Fixes: https://bugs.launchpad.net/qemu/+bug/1838475

Backports commit 02ac2f7f613b47f6a5b397b20ab0e6b2e7fb89fa from qemu
2019-08-08 19:56:56 -04:00
Lioncash 59d808cf21
target/arm: Supply uc_struct instance to tcg_enabled()
2019-08-08 19:55:12 -04:00
Peter Maydell ecd3f0a5df
target/arm: Deliver BKPT/BRK exceptions to correct exception level
Most Arm architectural debug exceptions (eg watchpoints) are ignored
if the configured "debug exception level" is below the current
exception level (so for example EL1 can't arrange to get debug exceptions
for EL2 execution). Exceptions generated by the BRK or BKPT instructions
are a special case -- they must always cause an exception, so if
we're executing above the debug exception level then we
must take them to the current exception level.

This fixes a bug where executing BRK at EL2 could result in an
exception being taken at EL1 (which is strictly forbidden by the
architecture).

Fixes: https://bugs.launchpad.net/qemu/+bug/1838277

Backports commit 987a23224218fa3bb3aa0024ad236dcf29ebde9e from qemu
2019-08-08 19:53:30 -04:00
Peter Maydell fbbd582fb9
target/arm: Limit ID register assertions to TCG
In arm_cpu_realizefn() we make several assertions about the values of
guest ID registers:
* if the CPU provides AArch32 v7VE or better it must advertise the
ARM_DIV feature
* if the CPU provides AArch32 A-profile v6 or better it must
advertise the Jazelle feature

These are essentially consistency checks that our ID register
specifications in cpu.c didn't accidentally miss out a feature,
because increasingly the TCG emulation gates features on the values
in ID registers rather than using old-style checks of ARM_FEATURE_FOO
bits.

Unfortunately, these asserts can cause problems if we're running KVM,
because in that case we don't control the values of the ID registers
-- we read them from the host kernel. In particular, if the host
kernel is older than 4.15 then it doesn't expose the ID registers via
the KVM_GET_ONE_REG ioctl, and we set up dummy values for some
registers and leave the rest at zero. (See the comment in
target/arm/kvm64.c kvm_arm_get_host_cpu_features().) This set of
dummy values is not sufficient to pass our assertions, and so on
those kernels running an AArch32 guest on AArch64 will assert.

We could provide a more sophisticated set of dummy ID registers in
this case, but that still leaves the possibility of a host CPU which
reports bogus ID register values that would cause us to assert. It's
more robust to only do these ID register checks if we're using TCG,
as that is the only case where this is truly a QEMU code bug.

Backports commit 8f4821d77e465bc2ef77302d47640d5a43d92b30 from qemu
2019-08-08 19:44:16 -04:00
Philippe Mathieu-Daudé 9bd010263a
target/arm: Add missing break statement for Hypervisor Trap Exception
Reported by GCC9 when building with -Wimplicit-fallthrough=2:

target/arm/helper.c: In function ‘arm_cpu_do_interrupt_aarch32_hyp’:
target/arm/helper.c:7958:14: error: this statement may fall through [-Werror=implicit-fallthrough=]
7958 | addr = 0x14;
| ~~~~~^~~~~~
target/arm/helper.c:7959:5: note: here
7959 | default:
| ^~~~~~~
cc1: all warnings being treated as errors

Backports commit 9bbb4ef991fa93323f87769a6e217c2b9273a128 from qemu
2019-08-08 19:43:01 -04:00
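
The shape of the fix, as a self-contained sketch (the function and case-label value are hypothetical stand-ins, not the helper.c code): the added break keeps the Hypervisor Trap assignment from falling into the default case.

    #include <stdint.h>

    static uint32_t vector_offset(int excp)
    {
        uint32_t addr = 0;
        switch (excp) {
        case 5:            /* stand-in for EXCP_HYP_TRAP */
            addr = 0x14;
            break;         /* the missing statement GCC9 warned about */
        default:
            break;
        }
        return addr;
    }
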
Peter Maydell cdb9422f3a
target/arm: NS BusFault on vector table fetch escalates to NS HardFault
In the M-profile architecture, when we do a vector table fetch and it
fails, we need to report a HardFault. Whether this is a Secure HF or
a NonSecure HF depends on several things. If AIRCR.BFHFNMINS is 0
then HF is always Secure, because there is no NonSecure HardFault.
Otherwise, the answer depends on whether the 'underlying exception'
(MemManage, BusFault, SecureFault) targets Secure or NonSecure. (In
the pseudocode, this is handled in the Vector() function: the final
exc.isSecure is calculated by looking at the exc.isSecure from the
exception returned from the memory access, not the isSecure input
argument.)

We weren't doing this correctly, because we were looking at
the target security domain of the exception we were trying to
load the vector table entry for. This produces errors of two kinds:
* a load from the NS vector table which hits the "NS access
to S memory" SecureFault should end up as a Secure HardFault,
but we were raising an NS HardFault
* a load from the S vector table which causes a BusFault
should raise an NS HardFault if BFHFNMINS == 1 (because
in that case all BusFaults are NonSecure), but we were raising
a Secure HardFault

Correct the logic.

We also fix a comment error where we claimed that we might
be escalating MemManage to HardFault, and forgot about SecureFault.
(Vector loads can never hit MPU access faults, because they're
always aligned and always use the default address map.)

Backports commit 51c9122e92b776a3f16af0b9282f1dc5012e2a19 from qemu
2019-08-08 19:32:53 -04:00
Peter Maydell 8ec683b874
target/arm: Set VFP-related MVFR0 fields for arm926 and arm1026
The ARMv5 architecture didn't specify detailed per-feature ID
registers. Now that we're using the MVFR0 register fields to
gate the existence of VFP instructions, we need to set up
the correct values in the cpu->isar structure so that we still
provide an FPU to the guest.

This fixes a regression in the arm926 and arm1026 CPUs, which
are the only ones that both have VFP and are ARMv5 or earlier.
This regression was introduced by the VFP refactoring, and more
specifically by commits 1120827fa182f0e76 and 266bd25c485597c,
which accidentally disabled VFP short-vector support and
double-precision support on these CPUs.

Backports commit cb7cef8b32033f6284a47d797edd5c19c5491698 from qemu
2019-08-08 19:29:56 -04:00
Alex Bennée f893ff0b89
target/arm: report ARMv8-A FP support for AArch32 -cpu max
When we converted to using feature bits in 602f6e42cfbf we missed
the fact that (dp && arm_dc_feature(s, ARM_FEATURE_V8)) was supported for
-cpu max configurations. This caused a regression in the GCC test
suite. Fix this by setting the appropriate bits in mvfr1.FPHP to
report ARMv8-A with FP support (but not ARMv8.2-FP16).

Fixes: https://bugs.launchpad.net/qemu/+bug/1836078

Backports commit 45b1a243b81a7c9ae56235937280711dd9914ca7 from qemu
2019-08-08 19:28:39 -04:00
Philippe Mathieu-Daudé 6f8c8046d8
target/arm/vfp_helper: Call set_fpscr_to_host before updating to FPSCR
In commit e9d652824b0 we extracted the vfp_set_fpscr_to_host()
function but failed to call it in the correct place: we call
it after xregs[ARM_VFP_FPSCR] is modified.

Fix by calling this function before we update FPSCR.

Backports commit 85795187f416326f87177cabc39fae1911f04c50 from qemu
2019-08-08 19:21:28 -04:00
Richard Henderson c687259bf6
target/arm: Fix sve_zcr_len_for_el
Off-by-one error in the EL2 and EL3 tests. Remove the test
against EL3 entirely, since it must always be true.

Backports commit 6a02a73211c5bc634fccd652777230954b83ccba from qemu
2019-08-08 19:20:35 -04:00
Peter Maydell 1f4c3d6bcc
target/arm: Correct VMOV_imm_dp handling of short vectors
Coverity points out (CID 1402195) that the loop in trans_VMOV_imm_dp()
that iterates over the destination registers in a short-vector VMOV
accidentally throws away the returned updated register number
from vfp_advance_dreg(). Add the missing assignment. (We got this
correct in trans_VMOV_imm_sp().)

Backports commit 89a11ff756410aecb87d2c774df6e45dbf4105c1 from qemu
2019-08-08 18:08:55 -04:00
Peter Maydell 0d89bce217
target/arm: Execute Thumb instructions when their condbits are 0xf
Thumb instructions in an IT block are set up to be conditionally
executed depending on a set of condition bits encoded into the IT
bits of the CPSR/XPSR. The architecture specifies that if the
condition bits are 0b1111 this means "always execute" (like 0b1110),
not "never execute"; we were treating it as "never execute". (See
the ConditionHolds() pseudocode in both the A-profile and M-profile
Arm ARM.)

This is a bit of an obscure corner case, because the only legal
way to get to an 0b1111 set of condbits is to do an exception
return which sets the XPSR/CPSR up that way. An IT instruction
which encodes a condition sequence that would include an 0b1111 is
UNPREDICTABLE, and for v8A the CONSTRAINED UNPREDICTABLE choices
for such an IT insn are to NOP, UNDEF, or treat 0b1111 like 0b1110.
Add a comment noting that we take the latter option.

Backports commit 5529de1e5512c05276825fa8b922147663fd6eac from qemu
2019-08-08 18:07:57 -04:00
Peter Maydell 9d01d50db8
target/arm: Use _ra versions of cpu_stl_data() in v7M helpers
In the various helper functions for v7M/v8M instructions, use
the _ra versions of cpu_stl_data() and friends. Otherwise we
may get wrong behaviour or an assert() due to not being able
to locate the TB if there is an exception on the memory access
or if it performs an IO operation when in icount mode

Backports commit 2884fbb60412049ec92389039ae716b32057382e from qemu
2019-08-08 18:06:23 -04:00
Philippe Mathieu-Daudé bde186433d
target/arm/helper: Move M profile routines to m_helper.c
In preparation for supporting TCG disablement on ARM, we move most
of the TCG-related v7m/v8m helpers and APIs into their own file.

Note: It is easier to review this commit using the 'histogram'
diff algorithm:

$ git diff --diff-algorithm=histogram ...
or
$ git diff --histogram ...

Backports commit 7aab5a8c8bb525ea390b4ebc17ab82c0835cfdb6 from qemu
2019-08-08 18:04:08 -04:00
Philippe Mathieu-Daudé 199e2f8a7d
target/arm: Restrict semi-hosting to TCG
Semihosting hooks either SVC or HLT instructions, and inside KVM
both of those go to EL1, i.e. to the guest, and can't be trapped to
KVM.

Let check_for_semihosting() return false when not running on TCG.

Backports commit 91f78c58da9ba78c8ed00f5d822b701765be8499 from qemu
2019-08-08 17:48:34 -04:00
Philippe Mathieu-Daudé 6295fd7156
target/arm: Move debug routines to debug_helper.c
These routines are TCG specific.

Backports commit 9dd5cca42448770a940fa2145f1ff18cdc7b01a9 from qemu
2019-08-08 17:46:56 -04:00