Commit graph

3091 commits

Author SHA1 Message Date
Marc-André Lureau cf80a410a9
build-sys: silence make by default or V=0
Move generic make flags into MAKEFLAGS (SUBDIR_MAKEFLAGS is more QEMU-specific).

Use --quiet to silence make's 'is up to date' message.

Backports commit 42a77f1ce4934b243df003f95bda88530631387a from qemu
2018-03-06 08:58:03 -05:00
Peter Maydell b658f1f36c
target/arm: Handle page table walk load failures correctly
Instead of ignoring the response from address_space_ld*()
(indicating an attempt to read a page table descriptor from
an invalid physical address), use it to report the failure
correctly.

Since this is another couple of locations where we need to
decide the value of the ARMMMUFaultInfo ea bit based on a
MemTxResult, we factor out that operation into a helper
function.

Backports commit 3b39d734141a71296d08af3d4c32f872fafd782e from qemu
2018-03-06 08:55:08 -05:00
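As a rough illustration of the helper described above (the function name and the MemTxResult values are assumptions mirroring QEMU's conventions, not taken from the patch itself), the ea bit can be derived from the kind of bus error reported:

#include <stdbool.h>

/* Stand-in for QEMU's MemTxResult codes (values assumed for this sketch). */
typedef enum {
    MEMTX_OK = 0,
    MEMTX_ERROR = 1,        /* device returned an error (slave error) */
    MEMTX_DECODE_ERROR = 2, /* nothing at that address (decode error) */
} MemTxResult;

/* Sketch: classify a failed page table walk load for the ARMMMUFaultInfo
 * ea bit -- 0 for a decode error, 1 for a slave error. */
static bool extabort_ea_bit(MemTxResult result)
{
    return result != MEMTX_DECODE_ERROR;
}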
Peter Maydell fe9271d5bd
get_phys_addr_pmsav7: Support AP=0b111 for v7M
For PMSAv7, the v7A/R Arm ARM defines that setting AP to 0b111
is an UNPREDICTABLE reserved combination. However, for v7M
this value is documented as having the same behaviour as 0b110:
read-only for both privileged and unprivileged. Accept this
value on an M profile core rather than treating it as a guest
error and a no-access page.

Backports commit 8638f1ad7403b63db880dadce38e6690b5d82b64 from qemu
2018-03-06 08:51:55 -05:00
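A minimal sketch of the v7M special case; the mapping below is a simplified reading of the PMSAv7 AP encodings (privileged and unprivileged permissions collapsed together) and is not the actual QEMU code:

/* Sketch: PMSAv7 access-permission field on an M-profile core, ignoring
 * the privileged/unprivileged split for brevity. AP=0b111 is accepted and
 * treated like 0b110 (read-only) instead of as a guest error. */
typedef enum { MPU_NO_ACCESS, MPU_READ_ONLY, MPU_READ_WRITE } MpuPerm;

static MpuPerm pmsav7_ap_to_perm_v7m(int ap)
{
    switch (ap) {
    case 0:                     /* no access */
        return MPU_NO_ACCESS;
    case 1: case 2: case 3:     /* at least privileged read/write */
        return MPU_READ_WRITE;
    case 5: case 6:             /* read-only */
        return MPU_READ_ONLY;
    case 7:                     /* reserved on v7A/R; same as 0b110 on v7M */
        return MPU_READ_ONLY;
    default:                    /* 0b100 is reserved */
        return MPU_NO_ACCESS;
    }
}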
Peter Maydell bfd6d3e59b
target/arm: Make disas_thumb2_insn() generate its own UNDEF exceptions
Refactor disas_thumb2_insn() so that it generates the code for raising
an UNDEF exception for invalid insns, rather than returning a flag
which the caller must check to see if it needs to generate the UNDEF
code. This brings the function into line with the behaviour of
disas_thumb_insn() and disas_arm_insn().

Backports commit 2eea841c11096e8dcc457b80e21f3fbdc32d2590 from qemu
2018-03-06 08:50:18 -05:00
Michael Weiser 5fabebabee
target/arm: Fix stlxp for aarch64_be
ldxp loads two consecutive doublewords from memory regardless of CPU
endianness. On store, stlxp currently assumes it is working with a 128-bit
value and consequently switches the order in big-endian mode. With this
change it packs the doublewords in reverse order, in anticipation of the
128-bit big-endian store operation interposing them so they end up in
memory in the right order. This makes it work for both MTTCG and !MTTCG.
It effectively implements the ARM ARM STLXP operation pseudo-code:

data = if BigEndian() then el1:el2 else el2:el1;

With this change an aarch64_be Linux 4.14.4 kernel succeeds to boot up
in system emulation mode.

Backports commit 0785557f8811133bd69be02aeccf018d47a26373 from qemu
2018-03-06 08:48:12 -05:00
Jean-Christophe Dubois 0a43963f3b
target/sparc: remove MemoryRegionSection check code from sparc_cpu_get_phys_page_debug()
This code is preventing the MMU debug code from displaying virtual
mappings of IO devices (anything that is not located in the RAM).

Before this patch, Qemu would output 0xffffffffffffffff (-1) as the
physical address corresponding to an IO device virtual address.

With this patch the intended physical address is displayed.

Backports commit 7e450a8f50ac12fc8f69b6ce555254c84efcf407 from qemu
2018-03-06 08:42:13 -05:00
Laurent Vivier 13e1357dbf
target/m68k: add the Interrupt Stack Pointer
Add the third stack pointer, the Interrupt Stack Pointer (ISP)
(680x0 only). This stack will be needed in softmmu mode.

Update movec to set/get the value of the three stacks.

Backports commit 6e22b28e22aa6ed1b8db6f24da2633868019d4c9 from qemu
2018-03-06 08:41:07 -05:00
Laurent Vivier ec1c2e9576
target/m68k: add andi/ori/eori to SR/CCR
Backports commit b5ae1edc294f78865ede38377c0a9b92da4370e0 from qemu
2018-03-06 08:36:01 -05:00
Laurent Vivier 9527a5c994
target/m68k: add 680x0 "move to SR" instruction
Some cleanup, and allows SR to be moved from any addressing mode.
The previous code was wrong for ColdFire: ColdFire also allows using an
addressing mode to set SR/CCR. It only supports a data register
when reading SR/CCR (move from).

Backports commit b6a21d8d8f69ac04fd6180e752a65d582c07e948 from qemu
2018-03-06 08:33:49 -05:00
Laurent Vivier 6559be21ad
target/m68k: move CCR/SR functions
The following patches will be clearer if we move
functions before adding new ones.

Backports commit 01490ea8f575656a9431fc0170a82bc6064fa2ef from qemu
2018-03-06 08:29:52 -05:00
Laurent Vivier f9ee2d24cc
target/m68k: implement fsave/frestore
Backports commit fff3b4b0e16c76669e56173acb9d3cc6aac85e85 from qemu
2018-03-06 08:28:36 -05:00
Laurent Vivier b82fe8b95c
target/m68k: add reset
The instruction traps if the CPU is not in
Supervisor state, but the helper is empty because
there is no easy way to reset all the peripherals
without resetting the CPU itself.

Backports commit 0bdb2b3bf5660f892ddbfa09baea56cdca57ad1d from qemu
2018-03-06 08:26:01 -05:00
Laurent Vivier c5643956e3
target/m68k: add cpush/cinv
Add cache line invalidate and cache line push
as no-op operations, as we don't emulate a cache.

These instructions are 68040 only.

Backports commit f58ed1c50add3e76331afdc92387c0da9dd9e443 from qemu
2018-03-06 08:24:40 -05:00
Laurent Vivier 6487da53eb
target/m68k: softmmu cleanup
Don't compile supervisor-only instructions in linux-user mode.

Backports commit 6ad257641d60f8c4a47972af9027b1c9bb5af787 from qemu
2018-03-06 08:22:39 -05:00
Laurent Vivier 03096cb560
target/m68k: add move16
move16 moves the source line to the destination line. Lines are aligned
to 16-byte boundaries and are 16 bytes long.

Backports commit 9d4f0429f3dc1dc6c67de3eaa3106e6c1cfa1524 from qemu
2018-03-06 08:15:33 -05:00
Laurent Vivier c652028eb5
target/m68k: add chk and chk2
chk and chk2 compare a value to boundaries, and
trigger a CHK exception if the value is out of bounds.

Backports commit 8bf6cbaf396a8b54b138bb8a7c3377f2868ed16e from qemu
2018-03-06 08:08:53 -05:00
Laurent Vivier 172f3709e3
target/m68k: manage 680x0 stack frames
680x0 manages several stack frame formats:
- format 0: four-word stack frame
- format 1: four-word throwaway stack frame
- format 2: six-word stack frame
- format 3: Floating-Point post-instruction stack frame
- format 4: eight-word stack frame
- format 7: access-error stack frame

Backports commit d2f8fb8e7f8e7d082103d705e178c9f72e0bea77 from qemu
2018-03-06 08:04:08 -05:00
Laurent Vivier 134916d653
target/m68k: add CPU_LOG_INT trace
Display interrupt/exception information
in the QEMU logs (-d int).

Backports commit 5beb144e04f44772804ac8405b6a54a17fe78909 from qemu
2018-03-06 07:58:38 -05:00
Laurent Vivier 6d46cb09dc
target/m68k: use insn_pc to generate instruction fault address
Backports commit 16a14cdf575a2eda4698930d22b75072537754dd from qemu
2018-03-06 07:51:28 -05:00
Laurent Vivier 3a12e69ad6
target/m68k: fix gen_get_ccr()
As gen_helper_get_ccr() is able to compute CCR from cc_op and
flags, we don't need to flush the flags before calling it.
flush_flags() and get_ccr() both use COMPUTE_CCR() to compute the
flags: get_ccr() computes the CCR value, whereas flush_flags()
updates the live cc_op and flags.

Backports commit 4131c242cc850aaf76e59d4c787d220f07850cf5 from qemu
2018-03-06 07:47:25 -05:00
Laurent Vivier 57d48199a8
target-m68k: sync CC_OP before gen_jmp_tb()
And remove update_cc_op() from gen_exception() because there is
one in gen_jmp_im().

Backports commit 7cd7b5ca9be805e8a4ced4c07014c24e34812f27 from qemu
2018-03-06 07:46:46 -05:00
Richard Henderson bbd87f9d73
tcg: Add tcg_signed_cond
Complementing the existing tcg_unsigned_cond.

Backports commit 923ed1750186591b04d7d61399f6d68b4e0608f2 from qemu
2018-03-05 16:55:17 -05:00
Richard Henderson 140058221d
tcg: Generalize TCGOp parameters
We had two fields specific to INDEX_op_call. Rename these and
add some macros so that the fields may be reused for other opcodes.

Backports commit cd9090aa9dbba30db8aec9a2fc103aaf1ab0f5a7 from qemu
2018-03-05 16:53:50 -05:00
Richard Henderson 7fe5f620df
tcg: Dynamically allocate TCGOps
With no fixed array allocation, we can't overflow a buffer.
This will be important as optimizations related to host vectors
may expand the number of ops used.

Use QTAILQ to link the ops together.

Backports commit 15fa08f8451babc88d733bd411d4c94976f9d0f8 from qemu
2018-03-05 16:34:40 -05:00
Richard Henderson 5f074f09ab
tcg: Remove TCGV_UNUSED* and TCGV_IS_UNUSED*
These are now trivial sets and tests against NULL. Unwrap.

Backports commit f764718d0cb30af9f1f8e1d6a33622cc05ca4155 from qemu
2018-03-05 15:58:15 -05:00
Lioncash 640b104ac0
translate-all: Zero out the TCGContext instance
Makes debugging wonky state a little nicer
2018-03-05 15:40:51 -05:00
Alex Bennée 8e973e762d
target/*helper: don't check retaddr before calling cpu_restore_state
cpu_restore_state officially supports being passed an address it can't
resolve the state for. As a result the checks in the helpers are
superfluous and can be removed. This makes the code consistent with
other users of cpu_restore_state.

Of course this does nothing to address what to do if cpu_restore_state
can't resolve the state but so far it seems this is handled elsewhere.

The change was made with the included Coccinelle script.

Backports commit 65255e8efdd5fca602bcc4ff61a879939ff75f4f from qemu
2018-03-05 14:47:41 -05:00
Laurent Vivier 4db1e153ae
target/m68k: fix set_cc_op()
The first call of set_cc_op() in a new translation sequence
is done with old_op set to CC_OP_DYNAMIC (-1).

This will do an out-of-bounds access to the cc_op_live[] array.

We fix that by adding an entry in cc_op_live[] for CC_OP_DYNAMIC.

Backports commit 7deddf96e94f3e1eb3677db0ea7b53e61751b544 from qemu
2018-03-05 14:44:38 -05:00
Peter Xu cd93d4eb52
cpu: suffix cpu address spaces with cpu index
Rename the CPU address spaces so that their names won't be the same
when there is more than one.

Backports commit 87a621d857be1b2b3dd1d0847ca311a863dbcb53 from qemu
2018-03-05 14:41:25 -05:00
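A trivial sketch of the idea; the exact name format is an assumption rather than something taken from the patch:

#include <stdio.h>

/* Sketch: build a per-CPU address space name by suffixing the CPU index,
 * so that no two CPUs end up with identical address space names. */
static void make_cpu_as_name(char *buf, size_t len, int cpu_index)
{
    snprintf(buf, len, "cpu-memory-%d", cpu_index);
}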
Peter Xu 1bb34aadf9
cpu: refactor cpu_address_space_init()
Normally we create an address space for that CPU and pass that address
space into the function. Let's just do it inside to unify address space
creation. It'll simplify my next patch, which renames those address spaces.

Backports commit 80ceb07a83375e3a0091591f96bd47bce2f640ce from qemu
2018-03-05 14:39:25 -05:00
Stefan Weil 55b19c099e
target/i386: Fix compiler warnings
These gcc warnings are fixed:

target/i386/translate.c:4461:12: warning:
variable 'prefixes' might be clobbered by 'longjmp' or 'vfork' [-Wclobbered]
target/i386/translate.c:4466:9: warning:
variable 'rex_w' might be clobbered by 'longjmp' or 'vfork' [-Wclobbered]
target/i386/translate.c:4466:16: warning:
variable 'rex_r' might be clobbered by 'longjmp' or 'vfork' [-Wclobbered]

Tested with x86_64-w64-mingw32-gcc from Debian stretch.

Backports commit a4926d99129a1d8072fc4681cd4efdb214f65ed4 from qemu
2018-03-05 14:20:36 -05:00
Yang Zhong 258b885b17
x86/cpu: Enable new SSE/AVX/AVX512 cpu features
The Intel IceLake CPU adds new CPU features: AVX512_VBMI2/GFNI/
VAES/VPCLMULQDQ/AVX512_VNNI/AVX512_BITALG. These new CPU features
need to be exposed to guest VMs.

The bit definition:
CPUID.(EAX=7,ECX=0):ECX[bit 06] AVX512_VBMI2
CPUID.(EAX=7,ECX=0):ECX[bit 08] GFNI
CPUID.(EAX=7,ECX=0):ECX[bit 09] VAES
CPUID.(EAX=7,ECX=0):ECX[bit 10] VPCLMULQDQ
CPUID.(EAX=7,ECX=0):ECX[bit 11] AVX512_VNNI
CPUID.(EAX=7,ECX=0):ECX[bit 12] AVX512_BITALG

The release document is available at the link below:
https://software.intel.com/sites/default/files/managed/c5/15/\
architecture-instruction-set-extensions-programming-reference.pdf

Backports commit aff9e6e46a343e1404498be4edd03db1112f0950 from qemu
2018-03-05 14:19:37 -05:00
Philippe Mathieu-Daudé 3e5d4f4ee8
Makefile: use $(MAKE) variable
For some systems (e.g. FreeBSD) the default 'make' is not compatible with the
GNU extensions used by the QEMU makefiles.

Calling GNU make (gmake) works; however, the help displayed refers to the
host 'make', and copy/pasting it leads to a lot of non-obvious errors:

$ gmake check-help
[...]
make check Run all tests

$ make check
make: "Makefile" line 28: Missing dependency operator
make: "Makefile" line 37: Need an operator
make: "Makefile" line 41: warning: duplicate script for target "git-submodule-update" ignored
make: "rules.mak" line 70: warning: duplicate script for target "%.o" ignored
make: Unknown modifier ' '
make: Unclosed substitution for eval modules (= missing)
make: "tests/Makefile.include" line 24: Variable/Value missing from "export"
make: "tests/" line 1: warning: Zero byte read from file, skipping rest of line.
make: "tests/" line 1: Need an operator
make: "Makefile" line 660: warning: duplicate script for target "ifneq" ignored
make: "Makefile" line 78: warning: using previous script for "ifneq" defined here
make: Fatal errors encountered -- cannot continue

Using the $(MAKE) variable, the help displayed is consistent with the 'make'
program used.

Backports commit b98a3bae2596fd9bf60f140d042c8e993daba930 from qemu
2018-03-05 14:16:02 -05:00
Marc-André Lureau ffa45adb57
memory: remove unused memory_region_set_global_locking()
This was never used since its introduction in commit
196ea13104f8 ("memory: Add global-locking property to memory
regions").

Backports commit e2fbe20851ceec5ccd7b539a89db0420393fb85d from qemu
2018-03-05 14:14:43 -05:00
Emilio G. Cota fdca966ec8
translate-all: fix 'consisits' typo in comment
Backports commit 55bbc8610c310c689eb7bd889f1924a6c128e1cf from qemu
2018-03-05 14:11:18 -05:00
Peter Maydell b54cfecf1e
sparc: Make sure we mmap at SHMLBA alignment
SPARC Linux has an oddity that it insists that mmap()
of MAP_FIXED memory must be at an alignment defined by
SHMLBA, which is more aligned than the page size
(typically, SHMLBA alignment is to 16K, and pages are 8K).
This is a relic of ancient hardware that had cache
aliasing constraints, but even on modern hardware the
kernel still insists on the alignment.

To ensure that we get mmap() alignment sufficient to
make the kernel happy, change QEMU_VMALLOC_ALIGN,
qemu_fd_getpagesize() and qemu_mempath_getpagesize()
to use the maximum of getpagesize() and SHMLBA.

In particular, this allows 'make check' to pass on Sparc:
we were previously failing the ivshmem tests.

Backports commit 57d1f6d7ce23e79a8ebe4a57bd2363b269b4664b from qemu
2018-03-05 14:09:58 -05:00
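A sketch of the approach under the stated assumptions (the function name is illustrative; SHMLBA comes from <sys/shm.h> on Linux):

#include <unistd.h>
#include <sys/shm.h>   /* SHMLBA */

/* Sketch: pick an allocation alignment that is the larger of the host page
 * size and SHMLBA, so mmap(MAP_FIXED) placement also satisfies SPARC Linux. */
static long host_mmap_alignment(void)
{
    long pagesize = sysconf(_SC_PAGESIZE);
    return pagesize > SHMLBA ? pagesize : SHMLBA;
}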
Edgar E. Iglesias 60136b485c
target/arm: Extend PAR format determination
Now that do_ats_write() is entirely in control of whether to
generate a 32-bit PAR or a 64-bit PAR, we can make it use the
correct (complicated) condition for doing so.

Backports commit 1313e2d7e2cd8b21741e0cf542eb09dfc4188f79 from qemu
2018-03-05 14:08:50 -05:00
Peter Maydell 0dfb84ea50
target/arm: Remove fsr argument from get_phys_addr() and arm_tlb_fill()
All of the callers of get_phys_addr() and arm_tlb_fill() now ignore
the FSR values they return, so we can just remove the argument
entirely.

Backports commit bc52bfeb3be2052942b7dac8ba284f342ac9605b from qemu
2018-03-05 14:05:08 -05:00
Peter Maydell ec686af668
target/arm: Ignore fsr from get_phys_addr() in do_ats_write()
In do_ats_write(), rather than using the FSR value from get_phys_addr(),
construct the PAR values using the information in the ARMMMUFaultInfo
struct. This allows us to create a PAR of the correct format regardless
of what the translation table format is.

For the moment we leave the condition for "when should this be a
64 bit PAR" as it was previously; this will need to be fixed to
properly support AArch32 Hyp mode.

Backports commit 5efe9ed45dec775ebe91ce72bd805ee780d16064 from qemu
2018-03-05 14:01:26 -05:00
Peter Maydell 3a5701e030
target/arm: Use ARMMMUFaultInfo in deliver_fault()
Now that ARMMMUFaultInfo is guaranteed to have enough information
to construct a fault status code, we can pass it in to the
deliver_fault() function and let it generate the correct type
of FSR for the destination, rather than relying on the value
provided by get_phys_addr().

I don't think there are any cases the old code was getting
wrong, but this is more obviously correct.

Backports commit 681f9a89d201d7891e2c60dff5e5415d8f618518 from qemu
2018-03-05 13:59:50 -05:00
Peter Maydell 4d666dbb99
target/arm: Convert get_phys_addr_pmsav8() to not return FSC values
Make get_phys_addr_pmsav8() return a fault type in the ARMMMUFaultInfo
structure, which we convert to the FSC at the callsite.

Backports commit 3f551b5b7380ff131fe22944aa6f5b166aa13caf from qemu
2018-03-05 13:57:33 -05:00
Peter Maydell 3eb4a2ea84
target/arm: Convert get_phys_addr_pmsav7() to not return FSC values
Make get_phys_addr_pmsav7() return a fault type in the ARMMMUFaultInfo
structure, which we convert to the FSC at the callsite.

Backports commit 9375ad15338b24e06109071ac3a85df48a2cc2e6 from qemu
2018-03-05 13:55:20 -05:00
Peter Maydell 79b2c4b1e7
target/arm: Convert get_phys_addr_pmsav5() to not return FSC values
Make get_phys_addr_pmsav5() return a fault type in the ARMMMUFaultInfo
structure, which we convert to the FSC at the callsite.

Note that PMSAv5 does not define any guest-visible fault status
register, so the different "fsr" values we were previously
returning are entirely arbitrary. So we can just switch to using
the most appropriate fi->type values without worrying that we
need to special-case FaultInfo->FSC conversion for PMSAv5.

Backports commit 53a4e5c5b07b2f50c538511b74b0d3d4964695ea from qemu
2018-03-05 13:54:05 -05:00
Peter Maydell 013e7873ee
target/arm: Convert get_phys_addr_lpae() to not return FSC values
Make get_phys_addr_lpae() return a fault type in the ARMMMUFaultInfo
structure, which we convert to the FSC at the callsite.

Backports commit da909b2c23a68e57bbcb6be98229e40df606f0c8 from qemu
2018-03-05 13:52:18 -05:00
Peter Maydell c6496ec00a
target/arm: Convert get_phys_addr_v6() to not return FSC values
Make get_phys_addr_v6() return a fault type in the ARMMMUFaultInfo
structure, which we convert to the FSC at the callsite.

Backports commit f06cf243945ccb24cb9578304306ae7fcb4cf3fd from qemu
2018-03-05 13:48:32 -05:00
Peter Maydell 4ccf3c3927
target/arm: Convert get_phys_addr_v5() to not return FSC values
Make get_phys_addr_v5() return a fault type in the ARMMMUFaultInfo
structure, which we convert to the FSC at the callsite.

Backports commit f989983e8dc9be6bc3468c6dbe46fcb1501a740c from qemu
2018-03-05 13:48:32 -05:00
Peter Maydell 457e4f35ac
target/arm: Remove fsr argument from arm_ld*_ptw()
All the callers of arm_ldq_ptw() and arm_ldl_ptw() ignore the value
that those functions store in the fsr argument on failure: if they
return failure to their callers they will always overwrite the fsr
value with something else.

Remove the argument from these functions and S1_ptw_translate().
This will simplify removing fsr from the calling functions.

Backports commit 3795a6de9f7ec4a7e3dcb8bf02a88a014147b0b0 from qemu
2018-03-05 13:48:32 -05:00
Peter Maydell 211cbde764
target/arm: Provide fault type enum and FSR conversion functions
Currently get_phys_addr() and its various subfunctions return
a hard-coded fault status register value for translation
failures. This is awkward because FSR values these days may
be either long-descriptor format or short-descriptor format.
Worse, the right FSR type to use doesn't depend only on the
translation table being walked -- some cases, like fault
info reported to AArch32 EL2 for some kinds of ATS operation,
must be in long-descriptor format even if the translation
table being walked was short format. We can't get those cases
right with our current approach.

Provide fields in the ARMMMUFaultInfo struct which allow
get_phys_addr() to provide sufficient information for a caller to
construct an FSR value themselves, and utility functions which do
this for both long and short format FSR values, as a first step in
switching get_phys_addr() and its children to only returning the
failure cause in the ARMMMUFaultInfo struct.

Backports commit 1fa498fe0de979030bd1f481046e9f1c5574a584 from qemu
2018-03-05 13:48:32 -05:00
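As an illustration of the conversion this series performs at the callsites, here is a small sketch of deriving a short-descriptor fault status code from a fault type plus walk level; the enum, struct and values shown cover just two cases and are not QEMU's actual definitions:

typedef enum {
    ARMFault_Translation,
    ARMFault_Permission,
} ARMFaultTypeSketch;

typedef struct {
    ARMFaultTypeSketch type;
    int level;          /* translation table level the fault occurred at */
} MMUFaultInfoSketch;

/* Short-descriptor FSR status codes: 0x5/0x7 are section/page translation
 * faults, 0xd/0xf are section/page permission faults. */
static unsigned int fi_to_short_fsc(const MMUFaultInfoSketch *fi)
{
    switch (fi->type) {
    case ARMFault_Translation:
        return fi->level == 1 ? 0x5 : 0x7;
    case ARMFault_Permission:
        return fi->level == 1 ? 0xd : 0xf;
    default:
        return 0;
    }
}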
Peter Maydell 8fe6b6c308
target/arm: Implement TT instruction
Implement the TT instruction which queries the security
state and access permissions of a memory location.

Backports commit 5158de241b0fb344a6c948dfcbc4e611ab5fafbe from qemu
2018-03-05 13:48:31 -05:00
Peter Maydell 4e5ec9c0dc
target/arm: Factor MPU lookup code out of get_phys_addr_pmsav8()
For the TT instruction we're going to need to do an MPU lookup that
also tells us which MPU region the access hit. This requires us
to do the MPU lookup without first doing the SAU security access
check, so pull the MPU lookup parts of get_phys_addr_pmsav8()
out into their own function.

The TT instruction also needs to know the MPU region number which
the lookup hit, so provide this information to the caller of the
MPU lookup code, even though get_phys_addr_pmsav8() doesn't
need to know it.

Backports commit 54317c0ff3a3c0f6b2c3a1d3c8b5d93686a86d24 from qemu
2018-03-05 13:48:31 -05:00
Peter Maydell c441b19d76
target/arm: Create new arm_v7m_mmu_idx_for_secstate_and_priv()
The TT instruction is going to need to look up the MMU index
for a specified security and privilege state. Refactor the
existing arm_v7m_mmu_idx_for_secstate() into a version that
lets you specify the privilege state and one that uses the
current state of the CPU.

Backports commit ec8e3340286a87d3924c223d60ba5c994549f796 from qemu
2018-03-05 13:48:31 -05:00
Peter Maydell 89acdeb9af
target/arm: Split M profile MNegPri mmu index into user and priv
For M profile, we currently have an mmu index MNegPri for
"requested execution priority negative". This fails to
distinguish "requested execution priority negative, privileged"
from "requested execution priority negative, usermode", but
the two can return different results for MPU lookups. Fix this
by splitting MNegPri into MNegPriPriv and MNegPriUser, and
similarly for the Secure equivalent MSNegPri.

This takes us from 6 M profile MMU modes to 8, which means
we need to bump NB_MMU_MODES; this is OK since the point
where we are forced to reduce TLB sizes is 9 MMU modes.

(It would in theory be possible to stick with 6 MMU indexes:
{mpu-disabled,user,privileged} x {secure,nonsecure} since
in the MPU-disabled case the result of an MPU lookup is
always the same for both user and privileged code. However
we would then need to rework the TB flags handling to put
user/priv into the TB flags separately from the mmuidx.
Adding an extra couple of mmu indexes is simpler.)

Backports commit 62593718d77c06ad2b5e942727cead40775d2395 from qemu
2018-03-05 13:48:31 -05:00
Peter Maydell d877985eea
target/arm: Add missing M profile case to regime_is_user()
When we added the ARMMMUIdx_MSUser MMU index we forgot to
add it to the case statement in regime_is_user(), so we
weren't treating it as unprivileged when doing MPU lookups.
Correct the omission.

Backports commit 871bec7c44a453d9cab972ce1b5d12e1af0545ab from qemu
2018-03-05 13:48:31 -05:00
Peter Maydell 999080382f
target/arm: Allow explicit writes to CONTROL.SPSEL in Handler mode
In ARMv7M the CPU ignores explicit writes to CONTROL.SPSEL
in Handler mode. In v8M the behaviour is slightly different:
writes to the bit are permitted but will have no effect.

We've already done the hard work to handle the value in
CONTROL.SPSEL being out of sync with what stack pointer is
actually in use, so all we need to do to fix this last loose
end is to update the condition we use to guard whether we
call write_v7m_control_spsel() on the register write.

Backports commit 83d7f86d3d27473c0aac79c1baaa5c2ab01b02d9 from qemu
2018-03-05 13:48:30 -05:00
Peter Maydell 6713884243
target/arm: Handle SPSEL and current stack being out of sync in MSP/PSP reads
For v8M it is possible for the CONTROL.SPSEL bit value and the
current stack to be out of sync. This means we need to update
the checks used in reads and writes of the PSP and MSP special
registers to use v7m_using_psp() rather than directly checking
the SPSEL bit in the control register.

Backports commit 1169d3aa5b19adca9384d954d80e1f48da388284 from qemu
2018-03-05 13:48:30 -05:00
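A sketch of the check the commit switches to, reduced to its inputs (the real helper reads these bits out of the CPU state; names here are illustrative):

#include <stdbool.h>

/* Sketch: in Handler mode the CPU always uses the main stack, so the
 * process stack is in use only in Thread mode with CONTROL.SPSEL set.
 * Checking this, rather than SPSEL alone, keeps MSP/PSP reads correct
 * when SPSEL and the active stack are temporarily out of sync. */
static bool using_psp(bool handler_mode, bool control_spsel)
{
    return !handler_mode && control_spsel;
}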
Eduardo Habkost 64a535ea8c
i386: Add EPYC-IBPB CPU model
EPYC-IBPB is a copy of the EPYC CPU model with
just CPUID_8000_0008_EBX_IBPB added.

Backports commit 8ebfafa796ca0cb2b035a7f06f836a675d8b48be from qemu
2018-03-05 13:48:30 -05:00
Eduardo Habkost 676409d54e
i386: Add new -IBRS versions of Intel CPU models
The new IA32_SPEC_CTRL MSR was introduced by a recent Intel
microcode update and can be used by OSes to mitigate
CVE-2017-5715. Unfortunately we can't change the existing CPU
models without breaking existing setups, so users need to
explicitly update their VM configuration to use the new *-IBRS
CPU model if they want to expose IBRS to guests.

The new CPU models are simple copies of the existing CPU models,
with just CPUID_7_0_EDX_SPEC_CTRL added and model_id updated.

Backports commit 61efbbf869293f1deb9ee39d44bd4e635de59fa7 from qemu
2018-03-05 13:48:30 -05:00
Eduardo Habkost 6fbabb4bce
i386: Add FEAT_8000_0008_EBX CPUID feature word
Add the new feature word and the "ibpb" feature flag.

Based on a patch by Paolo Bonzini.

Backports commit 1ade973f5202404e772aae7b1acd331270d246dc from qemu
2018-03-05 13:48:30 -05:00
Eduardo Habkost 7bee95ad28
i386: Add spec-ctrl CPUID bit
Add the feature name and a CPUID_7_0_EDX_SPEC_CTRL macro.

Backports commit 803d42fa65a371f7bb13180a5953299dc3a160e0 from qemu
2018-03-05 13:48:30 -05:00
Paolo Bonzini 7cf98b1a6e
i386: Add support for SPEC_CTRL MSR
Backports commit cb2637a5ae1f4e61af423395300548f14e8a2e2a from qemu
2018-03-05 13:48:29 -05:00
Eduardo Habkost 181524d695
i386: Change X86CPUDefinition::model_id to const char*
It is valid to have a 48-character model ID on CPUID, however the
definition of X86CPUDefinition::model_id is char[48], which can
make the compiler drop the null terminator from the string.

If a CPU model happens to have 48 bytes on model_id, "-cpu help"
will print garbage and the object_property_set_str() call at
x86_cpu_load_def() will read data outside the model_id array.

We could increase the array size to 49, but this would mean the
compiler would not issue a warning if a 49-char string is used by
mistake for model_id.

To make things simpler, simply change model_id to be const char*,
and validate the string length using an assert() on
x86_register_cpudef_type().

Backports commit 4b220d88ba76fb2623ce4b8ba1f1eea66b82144e from qemu
2018-03-05 13:48:29 -05:00
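A sketch of the kind of validation described (the helper name is assumed; the 48-byte limit comes from the CPUID brand string size mentioned above):

#include <assert.h>
#include <string.h>

/* Sketch: with model_id now a const char *, an explicit length check at
 * registration time replaces the silent truncation risk of char[48]. */
static void validate_model_id(const char *model_id)
{
    assert(model_id != NULL);
    assert(strlen(model_id) <= 48);
}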
Peter Maydell d89704eb0f
target/i386: Fix handling of VEX prefixes
In commit e3af7c788b73a6495eb9d94992ef11f6ad6f3c56 we
replaced direct calls to cpu_ld*_code() with calls
to the x86_ld*_code() wrappers which incorporate an
advance of s->pc. Unfortunately we didn't notice that
in one place the old code was deliberately not incrementing
s->pc:

@@ -4501,7 +4528,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
static const int pp_prefix[4] = {
0, PREFIX_DATA, PREFIX_REPZ, PREFIX_REPNZ
};
- int vex3, vex2 = cpu_ldub_code(env, s->pc);
+ int vex3, vex2 = x86_ldub_code(env, s);

if (!CODE64(s) && (vex2 & 0xc0) != 0xc0) {
/* 4.1.4.6: In 32-bit mode, bits [7:6] must be 11b,

This meant we were mishandling this set of instructions.
Remove the manual advance of s->pc for the "is VEX" case
(which is now done by x86_ldub_code()) and instead rewind
PC in the case where we decide that this isn't really VEX.

Backports commit 817a9fcba8043faa467929e7b0193df6bdc92211 from qemu
2018-03-05 13:48:29 -05:00
Peter Maydell 352a7b2501
target/arm: Generate UNDEF for 32-bit Thumb2 insns
The refactoring of commit 296e5a0a6c3935 has a nasty bug:
it accidentally dropped the generation of code to raise
the UNDEF exception when disas_thumb2_insn() returns nonzero.
This means that 32-bit Thumb2 instruction patterns that
ought to UNDEF just act like nops instead. This is likely
to break any number of things, including the kernel's "disable
the FPU and use the UNDEF exception to identify when to turn
it back on again" trick.

Backports commit 7472e2efb049ea65a6a5e7261b78ebf5c561bc2f from qemu
2018-03-05 13:48:29 -05:00
Peter Maydell 6285ed170e
osdep.h: Make TIME_MAX handle different time_t types
In our various supported host OSes, the time_t type may be either 32
or 64 bit, and could in theory also be either signed or unsigned.
Notably, in OpenBSD time_t is a 64 bit type even if 'long' is 32
bits, so using LONG_MAX for TIME_MAX is incorrect.

Use an approach suggested by Paolo Bonzini which calculates
the maximum value of the type rather than hardcoding it;
to do this we use the TYPE_MAXIMUM macro from Gnulib.

Backports commit e7b47c22e2df14d55e3e4426688c929bf8e3f7fb from qemu
2018-03-05 13:48:29 -05:00
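The Gnulib-style macro referenced above computes the maximum representable value of an integer type from its size and signedness; this sketch is written from memory of the idiom, not copied from the patch:

#include <limits.h>   /* CHAR_BIT */
#include <time.h>     /* time_t */

/* Sketch of the TYPE_MAXIMUM idiom: for unsigned types, (t)-1 is the max;
 * for signed types, build ((2^(width-2) - 1) * 2 + 1) without overflow. */
#define TYPE_SIGNED(t)  (!((t)0 < (t)-1))
#define TYPE_WIDTH(t)   (sizeof(t) * CHAR_BIT)
#define TYPE_MAXIMUM(t)                                             \
    ((t)(!TYPE_SIGNED(t)                                            \
         ? (t)-1                                                    \
         : ((((t)1 << (TYPE_WIDTH(t) - 2)) - 1) * 2 + 1)))

#define TIME_MAX_SKETCH TYPE_MAXIMUM(time_t)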
Peter Maydell c01b9a3cfe
arm: check regime, not current state, for ATS write PAR format
In do_ats_write(), rather than using extended_addresses_enabled() to
decide whether the value we get back from get_phys_addr() is a 64-bit
format PAR or a 32-bit one, use arm_s1_regime_using_lpae_format().

This is not really the correct answer, because the PAR format
depends on the AT instruction being used, not just on the
translation regime. However getting this correct requires a
significant refactoring, so that get_phys_addr() returns raw
information about the fault which the caller can then assemble
into a suitable FSR/PAR/syndrome for its purposes, rather than
get_phys_addr() returning a pre-formatted FSR.

However this change at least improves the situation by making
the PAR work correctly for address translation operations done
at AArch64 EL2 on the EL2 translation regime. In particular,
this is necessary for Xen to be able to run in our emulation,
so this seems like a safer interim fix given that we are in freeze.

Backports commit 50cd71b0d347c74517dcb7da447fe657fca57d9c from qemu
2018-03-05 13:48:28 -05:00
Peter Maydell 175b632c91
target/arm: Report GICv3 sysregs present in ID registers if needed
The CPU ID registers ID_AA64PFR0_EL1, ID_PFR1_EL1 and ID_PFR1
have a field for reporting presence of GICv3 system registers.
We need to report this field correctly in order for Xen to
work as a guest inside QEMU emulation. We mustn't incorrectly
claim the sysregs exist when they don't, though, or Linux will
crash.

Unfortunately the way we've designed the GICv3 emulation in QEMU
puts the system registers as part of the GICv3 device, which
may be created after the CPU proper has been realized. This
means that we don't know at the point when we define the ID
registers what the correct value is. Handle this by switching
them to calling a function at runtime to read the value, where
we can fill in the GIC field appropriately.

Backports commit 96a8b92ed8f02d5e86ad380d3299d9f41f99b072 from qemu
2018-03-05 13:48:28 -05:00
Wanpeng Li 72430e4d75
target-i386: adds PV_TLB_FLUSH CPUID feature bit
Adds PV_TLB_FLUSH CPUID feature bit.

Backports commit 6976af663d3a19d1f86a56b5504f1b4559d0f1ae from qemu
2018-03-05 13:48:28 -05:00
Richard Henderson a58eb310eb
target/arm: Use helper_retaddr in stxp helpers
We use raw memory primitives along the !parallel_cpus paths in order to
simplify the endianness handling. Because of that, we did not benefit
from the generic changes to cpu_ldst_user_only_template.h.

The simplest fix is to manipulate helper_retaddr here.

Backports commit 3bdb5fcc9a08a9a47ce30c4e0c2d64c95190b49d from qemu
2018-03-05 13:48:28 -05:00
Richard Henderson f76eb22a46
tcg: Record code_gen_buffer address for user-only memory helpers
When we handle a signal from a fault within a user-only memory helper,
we cannot cpu_restore_state with the PC found within the signal frame.
Use a TLS variable, helper_retaddr, to record the unwind start point
to find the faulting guest insn.

Backports commit ec603b5584fa71213ef8f324fe89e4b27cc9d2bc from qemu
2018-03-05 13:48:28 -05:00
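A sketch of the mechanism, under the assumption that the variable and setter look roughly like this (simplified; the real code also has to clear the value and coordinate with the signal handler):

#include <stdint.h>

/* Sketch: a thread-local record of the host return address at the point a
 * user-only memory helper was entered, so a SIGSEGV taken inside the helper
 * can unwind guest state from the helper's caller rather than from the
 * signal frame PC. */
static __thread uintptr_t helper_retaddr_sketch;

static inline void set_helper_retaddr_sketch(uintptr_t ra)
{
    helper_retaddr_sketch = ra;   /* reset to 0 when the helper returns */
}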
Lioncash 7ec1f12429
compiler: Add defines for abstracting thread-local storage 2018-03-05 13:48:27 -05:00
Richard Henderson 9f0393479e
tcg: Record code_gen_buffer address for user-only memory helpers
When we handle a signal from a fault within a user-only memory helper,
we cannot cpu_restore_state with the PC found within the signal frame.
Use a TLS variable, helper_retaddr, to record the unwind start point
to find the faulting guest insn.

Backports commit ec603b5584fa71213ef8f324fe89e4b27cc9d2bc from qemu
2018-03-05 13:48:27 -05:00
Emilio G. Cota 208014df9e
arm/translate-a64: mark path as unreachable to eliminate warning
Fixes the following warning when compiling with gcc 5.4.0 with -O1
optimizations and --enable-debug:

target/arm/translate-a64.c: In function ‘aarch64_tr_translate_insn’:
target/arm/translate-a64.c:2361:8: error: ‘post_index’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
if (!post_index) {
^
target/arm/translate-a64.c:2307:10: note: ‘post_index’ was declared here
bool post_index;
^
target/arm/translate-a64.c:2386:8: error: ‘writeback’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
if (writeback) {
^
target/arm/translate-a64.c:2308:10: note: ‘writeback’ was declared here
bool writeback;
^

Note that idx comes from selecting 2 bits, and therefore its value
can be at most 3.

Backports commit 5ca66278c859bb1ded243755aeead2be6992ce73 from qemu
2018-03-05 11:40:11 -05:00
Peter Maydell 33d42df60c
translate.c: Fix usermode big-endian AArch32 LDREXD and STREXD
For AArch32 LDREXD and STREXD, architecturally the 32-bit word at the
lowest address is always Rt and the one at addr+4 is Rt2, even if the
CPU is big-endian. Our implementation does these with a single
64-bit store, so if we're big-endian then we need to put the two
32-bit halves together in the opposite order to little-endian,
so that they end up in the right places. We were trying to do
this with the gen_aa32_frob64() function, but that is not correct
for the usermode emulator, because there is a distinction
between "load a 64 bit value" (which does a BE 64-bit access
and doesn't need swapping) and "load two 32 bit values as one
64 bit access" (where we still need to do the swapping, like
system mode BE32).

Backports commit 3448d47b3172015006b79197eb5a69826c6a7b6d from qemu
2018-03-05 11:39:29 -05:00
Andrew Baumann 5250db33b5
arm: implement cache/shareability attribute bits for PAR registers
On a successful address translation instruction, PAR is supposed to
contain cacheability and shareability attributes determined by the
translation. We previously returned 0 for these bits (in line with the
general strategy of ignoring caches and memory attributes), but some
guest OSes may depend on them.

This patch collects the attribute bits in the page-table walk, and
updates PAR with the correct attributes for all LPAE translations.
Short descriptor formats still return 0 for these bits, as in the
prior implementation.

Backports commit 5b2d261d60caf9d988d91ca1e02392d6fc8ea104 from qemu
2018-03-05 11:35:28 -05:00
Paolo Bonzini 5216dcd157
build: disable -Wmissing-braces on older compilers
GCC 4.9 and newer stopped warning for missing braces around the
"universal" C zero initializer {0}. One such initializer sneaked
into scsi/qemu-pr-helper.c and is breaking the build with such
older GCC versions.

Detect the lack of support for the idiom, and disable the warning
in that case.

Backports commit 20bc94a2b8449b7700b6bfa25a87ce2320a1c649 from qemu
2018-03-05 11:29:54 -05:00
Richard Henderson 5ef155a68f
tcg/s390x: Use constant pool for prologue
Rather than have separate code only used for guest_base,
rely on a recent change to handle constant pool entries.

Backports commit ba2c747992f8c315c2fbddba196ce9137430d61d from qemu
2018-03-05 11:28:39 -05:00
Richard Henderson ef3f552229
tcg: Allow constant pool entries in the prologue
Both ARMv6 and AArch64 currently may drop complex guest_base values
into the constant pool. But generic code wasn't expecting that, and
the pool is not emitted. Correct that.

Backports commit 5b38ee31616d1532c3c3a6dc644a9160d608ed2f from qemu
2018-03-05 11:25:56 -05:00
Stefano Stabellini 1212c9b73c
fix WFI/WFE length in syndrome register
WFI/E are often, but not always, 4 bytes long. When they are, we need to
set ARM_EL_IL_SHIFT in the syndrome register.

Pass the instruction length to HELPER(wfi), use it to decrement pc
appropriately and to pass an is_16bit flag to syn_wfx, which sets
ARM_EL_IL_SHIFT if needed.

Set dc->insn in both arm_tr_translate_insn and thumb_tr_translate_insn.

Backports commit 58803318e5a546b2eb0efd7a053ed36b6c29ae6f from qemu
2018-03-05 11:21:51 -05:00
Richard Henderson ab9df6244c
tcg: Use offsets not indices for TCGv_*
Using the offset of a temporary, relative to TCGContext, rather than
its index means that we don't use 0. That leaves offset 0 free for
a NULL representation without having to leave index 0 unused.

Backports commit e89b28a63501c0ad6d2501fe851d0c5202055e70 from qemu
2018-03-05 10:12:08 -05:00
Richard Henderson 28061c2e59
qom: Introduce CPUClass.tcg_initialize
Move target cpu tcg initialization to common code,
called from cpu_exec_realizefn.

Backports commit 55c3ceef61fcf06fc98ddc752b7cce788ce7680b from qemu
2018-03-05 09:49:26 -05:00
Richard Henderson 4d9c8583fa
tcg: Remove TCGV_EQUAL*
When we used structures for TCGv_*, we needed a macro in order to
perform a comparison. Now that we use pointers, this is just clutter

Backports commit 11f4e8f8bfaa2caaab24bef6bbbb8a0205015119 from qemu
2018-03-05 09:16:07 -05:00
Richard Henderson d450156414
tcg: Remove GET_TCGV_* and MAKE_TCGV_*
The GET and MAKE functions weren't really specific enough.
We now have a full complement of functions that convert exactly
between temporaries, arguments, tcgv pointers, and indices.

The target/sparc change is also a bug fix, which would have affected
a host that defines TCG_TARGET_HAS_extr[lh]_i64_i32, i.e. MIPS64.

Backports commit dc41aa7d34989b552efe712ffe184236216f960b from qemu
2018-03-05 09:12:26 -05:00
Richard Henderson 960eb3f4f9
tcg: Introduce temp_tcgv_{i32,i64,ptr}
Backports commit 085272b35e0644fea373c33b5265c1818b7a978c from qemu
2018-03-05 08:55:52 -05:00
Richard Henderson 2bb5011b18
tcg: Introduce tcgv_{i32,i64,ptr}_{arg,temp}
Transform TCGv_* to an "argument" or a temporary.
For now, an argument is simply the temporary index.

Backports commit ae8b75dc6ec808378487064922f25f1e7ea7a9be from qemu
2018-03-05 08:46:12 -05:00
Richard Henderson 9f8c6a456b
tcg: Use per-temp state data in optimize
While we're touching many of the lines anyway, adjust the naming
of the functions to better distinguish when "TCGArg" vs "TCGTemp"
should be used.

Backports commit 6349039d0b06eda59820629b934944246b14a1c1 from qemu
2018-03-05 08:24:06 -05:00
Richard Henderson 387060ccf5
tcg: Remove unused TCG_CALL_DUMMY_TCGV
Backports commit 54534d7cfd3bdff1aa1f6c9472d94243d2303656 from qemu
2018-03-05 07:52:35 -05:00
Richard Henderson d104b792a6
tcg: Change temp_allocate_frame arg to TCGTemp
Backports commit 2272e4a791b7e1a01ffac143616ba4ece9a5762d from qemu
2018-03-05 07:51:40 -05:00
Richard Henderson 35a7a9c9a4
tcg: Avoid loops against variable bounds
Copy s->nb_globals or s->nb_temps to a local variable for the purposes
of iteration. This should allow the compiler to use low-overhead
looping constructs on some hosts.

Backports commit ac3b88911ebc6fc841f28898ee8aed40839debe2 from qemu
2018-03-05 07:50:06 -05:00
Richard Henderson 1f4ac863bf
tcg: Use per-temp state data in liveness
This avoids having to allocate external memory for each temporary.

Backports commit b83eabeac06e38706738bd5e92b1ba117a1b554d from qemu
2018-03-05 07:47:51 -05:00
Richard Henderson 87f2067aac
tcg: Introduce temp_arg, export temp_idx
At the same time, drop the TCGContext argument and use tcg_ctx instead.

Backports commit 1807f4c40098070008eb84b2032e25b7ac42569e from qemu
2018-03-05 07:24:17 -05:00
Richard Henderson a659a03ff5
tcg: Return NULL temp for TCG_CALL_DUMMY_ARG
Backports commit c6c7d84df8889b9d6298466999b88a8a42e5f976 from qemu
2018-03-05 07:22:38 -05:00
Richard Henderson 010ded3088
tcg: Add temp_global bit to TCGTemp
This avoids needing to test the index of a temp against nb_globals.

Backports commit fa477d25470187030614288d35bc734edffa41ee from qemu
2018-03-05 07:21:10 -05:00
Richard Henderson a9c46ad7a0
tcg: Introduce arg_temp
Backports commit 434391390ba99996af1591b427a73b3f5c05065e from qemu
2018-03-05 07:17:44 -05:00
Richard Henderson c8f0f6901e
tcg: Propagate TCGOp down to allocators
Backports commit dd186292017641d5b31fc13225a420677e1d20d3 from qemu
2018-03-05 07:12:48 -05:00
Richard Henderson f1e2ea6847
tcg: Propagate args to op->args in tcg.c
Backports commit efee3746fa471852daba7674b0d34f8c88be7559 from qemu
2018-03-05 07:06:50 -05:00
Richard Henderson 845cfc2ae9
tcg: Propagate args to op->args in optimizer
Backports commit acd937019bdaf933fcf1a7b57679ba07119c89b7 from qemu
2018-03-05 06:56:06 -05:00
Richard Henderson eb488f5bd6
tcg: Merge opcode arguments into TCGOp
Rather than have a separate buffer of 10*max_ops entries,
give each opcode 10 entries. The result is actually a bit
smaller and should have slightly more cache locality.

Backports commit 75e8b9b7aa0b95a761b9add7e2f09248b101a392 from qemu
2018-03-05 04:45:20 -05:00
Paolo Bonzini 7dd4afd8d9
target/i386: trap on instructions longer than >15 bytes
Besides being more correct, arbitrarily long instructions allow the
generation of a translation block that spans three pages. This
confuses the generator and even allows ring 3 code to poison the
translation block cache and inject code into other processes that are
in guest ring 3.

This is an improved (and more invasive) fix for commit 30663fd ("tcg/i386:
Check the size of instruction being translated", 2017-03-24). In addition
to being more precise (and generating the right exception, which is #GP
rather than #UD), it distinguishes better between page faults and too long
instructions, as shown by this test case:

int main()
{
    char *x = mmap(NULL, 8192, PROT_READ|PROT_WRITE|PROT_EXEC,
                   MAP_PRIVATE|MAP_ANON, -1, 0);
    memset(x, 0x66, 4096);
    x[4096] = 0x90;
    x[4097] = 0xc3;
    char *i = x + 4096 - 15;
    mprotect(x + 4096, 4096, PROT_READ|PROT_WRITE);
    ((void(*)(void)) i) ();
}

... which produces a #GP without the mprotect, and a #PF with it.

Backports commit b066c5375737ad0d630196dab2a2b329515a1d00 from qemu
2018-03-05 04:12:28 -05:00
Paolo Bonzini 44f87a8fbf
target/i386: introduce x86_ld*_code
These take care of advancing s->pc, and will provide a unified point
where to check for the 15-byte instruction length limit.

Backports commit e3af7c788b73a6495eb9d94992ef11f6ad6f3c56 from qemu
2018-03-05 04:11:02 -05:00
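A self-contained sketch of the rule the two commits above combine to enforce (names and structure are illustrative, not QEMU's):

#include <stdbool.h>
#include <stdint.h>

#define X86_MAX_INSN_LENGTH 15

typedef struct {
    const uint8_t *code;   /* bytes of the instruction being decoded */
    uint64_t pc_start;     /* guest PC where the instruction began */
    uint64_t pc;           /* current fetch position */
} DecodeCtxSketch;

/* Sketch: every byte fetch goes through one place that advances pc and
 * refuses to read past the 15-byte limit, at which point the caller
 * should raise #GP rather than #UD or a spurious page fault. */
static bool fetch_code_byte(DecodeCtxSketch *s, uint8_t *out)
{
    if (s->pc - s->pc_start >= X86_MAX_INSN_LENGTH) {
        return false;
    }
    *out = s->code[s->pc - s->pc_start];
    s->pc++;
    return true;
}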
Richard Henderson c897ba3e2c
tcg: Fix off-by-one in assert in page_set_flags
Most of the users of page_set_flags pass (page, page + len) as
the end points. One might consider this an error, since the other
users do supply an endpoint as the last byte of the region.

However, the first thing that page_set_flags does is round end UP
to the start of the next page. Which means computing page + len - 1
is in the end pointless. Therefore, accept this usage and do not
assert when given the exact size of the vm as the endpoint.

Backports commit de258eb07db6cf893ef1bfad8c0cedc0b983db55 from qemu
2018-03-05 03:53:14 -05:00
Igor Mammedov 6f265062ef
qom: add helper macro DEFINE_TYPES()
DEFINE_TYPES() will help to simplify following routine patterns:

static void foo_register_types(void)
{
    type_register_static(&foo1_type_info);
    type_register_static(&foo2_type_info);
    ...
}

type_init(foo_register_types)

or

static void foo_register_types(void)
{
    int i;

    for (i = 0; i < ARRAY_SIZE(type_infos); i++) {
        type_register_static(&type_infos[i]);
    }
}

type_init(foo_register_types)

with a single line

DEFINE_TYPES(type_infos)

where the types have static definitions that can be consolidated in
a single array of TypeInfo structures.
It saves us ~6-10 LOC per use case and helps to replace the
imperative foo_register_types() there with a declarative style of
type registration.

Backports commit 38b5d79b2e8cf6085324066d84e8bb3b3bbe8548 from qemu
2018-03-05 03:51:54 -05:00
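A sketch of how DEFINE_TYPES() might be built on top of type_register_static_array() (introduced by the next commit below); the exact definition in QEMU may differ, and it relies on QEMU's type_init() and ARRAY_SIZE():

/* Sketch only: register a whole static TypeInfo array with one macro use. */
#define DEFINE_TYPES(type_array)                                         \
    static void do_register_ ## type_array(void)                         \
    {                                                                    \
        type_register_static_array(type_array, ARRAY_SIZE(type_array));  \
    }                                                                    \
    type_init(do_register_ ## type_array)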
Igor Mammedov c97583fc42
qom: introduce type_register_static_array()
It will help to remove the code duplication of registering
static types in places that have an open-coded loop to
perform batch type registration.

Backports commit aa04c9d20704fa5b9ab239d5111adbcce5f49808 from qemu
2018-03-05 03:49:50 -05:00
Peter Maydell 0c06666800
target/arm: Implement SG instruction corner cases
The common situation of the SG instruction is that it is
executed from S&NSC memory by a CPU in NS state. That case
is handled by v7m_handle_execute_nsc(). However the instruction
also has defined behaviour in a couple of other cases:
* SG instruction in NS memory (behaves as a NOP)
* SG in S memory but CPU already secure (clears IT bits and
does nothing else)
* SG instruction in v8M without Security Extension (NOP)

These can be implemented in translate.c.

Backports commit 76eff04d166b8fe747adbe82de8b7e060e668ff9 from qemu
2018-03-05 03:47:20 -05:00
Peter Maydell 272427b4a0
target/arm: Support some Thumb insns being always unconditional
A few Thumb instructions are always unconditional even inside an
IT block (as opposed to being UNPREDICTABLE if used inside an
IT block): BKPT, the v8M SG instruction, and the A profile
HLT (debug halt) instruction.

This means we need to suppress the jump-over-instruction-on-condfail
code generation (though the IT state still advances as usual and
subsequent insns in the IT block may be conditional).

Backports commit dcf14dfb704519846f396a376339ebdb93eaf049 from qemu
2018-03-05 03:46:10 -05:00
Peter Maydell 7a293cd7cc
target-arm: Simplify insn_crosses_page()
Recent changes have left insn_crosses_page() more complicated
than it needed to be:
* it's only called from thumb_tr_translate_insn() so we know
for certain that we're looking at a Thumb insn
* the caller's check for dc->pc >= dc->next_page_start - 3
means that dc->pc can't possibly be 4 aligned, so there's
no need to check that (the check was partly there to ensure
that we didn't treat an ARM insn as Thumb, I think)
* we now have thumb_insn_is_16bit() which lets us do a precise
check of the length of the next insn, rather than opencoding
an inaccurate check

Simplify it down to just loading the first half of the insn
and calling thumb_insn_is_16bit() on it.

Backports commit 5b8d7289e9e92a0d7bcecb93cd189e245fef10cd from qemu
2018-03-05 03:44:54 -05:00
Peter Maydell 96f86f472a
target/arm: Pull Thumb insn word loads up to top level
Refactor the Thumb decode to do the loads of the instruction words at
the top level rather than only loading the second half of a 32-bit
Thumb insn in the middle of the decode.

This is simple apart from the awkward case of Thumb1, where the
BL/BLX prefix and suffix instructions live in what in Thumb2 is the
32-bit insn space. To handle these we decode enough to identify
whether we're looking at a prefix/suffix that we handle as a 16 bit
insn, or a prefix that we're going to merge with the following suffix
to consider as a 32 bit insn. The translation of the 16 bit cases
then moves from disas_thumb2_insn() to disas_thumb_insn().

The refactoring has the benefit that we don't need to pass the
CPUARMState* down into the decoder code any more, but the major
reason for doing this is that some Thumb instructions must be always
unconditional regardless of the IT state bits, so we need to know the
whole insn before we emit the "skip this insn if the IT bits and cond
state tell us to" code. (The always unconditional insns are BKPT,
HLT and SG; the last of these is 32 bits.)

Backports commit 296e5a0a6c393553079a641c50521ae33ff89324 from qemu
2018-03-05 03:43:38 -05:00
Peter Maydell b85d617bda
target-arm: Don't check for "Thumb2 or M profile" for not-Thumb1
The code which implements the Thumb1 split BL/BLX instructions
is guarded by a check on "not M or THUMB2". All we really need
to check here is "not THUMB2" (and we assume that elsewhere too,
eg in the ARCH(6T2) test that UNDEFs the Thumb2 insns).

This doesn't change behaviour because all M profile cores
have Thumb2 and so ARM_FEATURE_M implies ARM_FEATURE_THUMB2.
(v6M implements a very restricted subset of Thumb2, but we
can cross that bridge when we get to it with appropriate
feature bits.)

Backports commit 6b8acf256df09c8a8dd7dcaa79b06eaff4ad63f7 from qemu
2018-03-05 03:34:48 -05:00
Peter Maydell ee9b8a20c9
target/arm: Implement secure function return
Secure function return happens when a non-secure function has been
called using BLXNS and so has a particular magic LR value (either
0xfefffffe or 0xfeffffff). The function return via BX behaves
specially when the new PC value is this magic value, in the same
way that exception returns are handled.

Adjust our BX excret guards so that they recognize the function
return magic number as well, and perform the function-return
unstacking in do_v7m_exception_exit().

Backports commit d02a8698d7ae2bfed3b11fe5b064cb0aa406863b from qemu
2018-03-05 03:33:42 -05:00
Peter Maydell e312993f1f
target/arm: Implement BLXNS
Implement the BLXNS instruction, which allows secure code to
call non-secure code.

Backports commit 3e3fa230e3b8ffe119f14ba57a6bc677a411be57 from qemu
2018-03-05 03:31:59 -05:00
Peter Maydell 2c4578f46e
target/arm: Implement SG instruction
Implement the SG instruction, which we emulate 'by hand' in the
exception handling code path.

Backports commit 333e10c51ef5876ced26f77b61b69ce0f83161a9 from qemu
2018-03-05 03:28:28 -05:00
Peter Maydell 19ecd4f732
target/arm: Add M profile secure MMU index values to get_a32_user_mem_index()
Add the M profile secure MMU index values to the switch in
get_a32_user_mem_index() so that LDRT/STRT work correctly
rather than asserting at translate time.

Backports commit b9f587d62cebed427206539750ebf59bde4df422 from qemu
2018-03-05 03:25:54 -05:00
Jiang Biao 60ef6d016d
tcg/mips: delete commented out extern keyword
Backports commit 8df8d529ed958de4e23dcbf38bd34eff1a4716f2 from qemu
2018-03-05 03:24:25 -05:00
Emilio G. Cota 239e9771df
tcg: define TCG_HIGHWATER
Will come in handy very soon.

Backports commit a505785cd221994dd3713bde860861869a059940 from qemu
2018-03-05 03:22:27 -05:00
Emilio G. Cota 8552d95c52
exec-all: extract tb->tc_* into a separate struct tc_tb
In preparation for adding tc.size to be able to keep track of
TB's using the binary search tree implementation from glib.

Backports commit e7e168f41364c6e83d0f75fc1b3ce7f9c41ccf76 from qemu
2018-03-05 02:57:22 -05:00
Emilio G. Cota 7bd35ab27d
translate-all: define and use DEBUG_TB_CHECK_GATE
This prevents bit rot by ensuring the debug code is compiled when
building a user-mode target.

Unfortunately the helpers are user-mode-only so we cannot fully
get rid of the ifdef checks. Add a comment to explain this.

Backports commit 6eb062abd66611333056633899d3f09c2e795f4c from qemu
2018-03-05 02:51:46 -05:00
Emilio G. Cota f778c02bbc
translate-all: define and use DEBUG_TB_INVALIDATE_GATE
This gets rid of an ifdef check while ensuring that the debug code
is compiled, which prevents bit rot.

Backports commit dae9e03aed8e652f5dce2e5cab05dff83aa193b8 from qemu
2018-03-05 02:50:24 -05:00
Emilio G. Cota 5fc83f3eb2
exec-all: introduce TB_PAGE_ADDR_FMT
And fix the following warning when DEBUG_TB_INVALIDATE is enabled
in translate-all.c:

CC mipsn32-linux-user/accel/tcg/translate-all.o
/data/src/qemu/accel/tcg/translate-all.c: In function ‘tb_alloc_page’:
/data/src/qemu/accel/tcg/translate-all.c:1201:16: error: format ‘%lx’ expects argument of type ‘long unsigned int’, but argument 2 has type ‘tb_page_addr_t {aka unsigned int}’ [-Werror=format=]
printf("protecting code page: 0x" TARGET_FMT_lx "\n",
^
cc1: all warnings being treated as errors
/data/src/qemu/rules.mak:66: recipe for target 'accel/tcg/translate-all.o' failed
make[1]: *** [accel/tcg/translate-all.o] Error 1
Makefile:328: recipe for target 'subdir-mipsn32-linux-user' failed
make: *** [subdir-mipsn32-linux-user] Error 2
cota@flamenco:/data/src/qemu/build ((18f3fe1...) *$)$

Backports commit 67a5b5d2f6eb6d3b980570223ba5c478487ddb6f from qemu
2018-03-05 02:49:44 -05:00
Emilio G. Cota 79f53cf97b
translate-all: define and use DEBUG_TB_FLUSH_GATE
This gets rid of some ifdef checks while ensuring that the debug code
is compiled, which prevents bit rot.

Backports commit 424079c13b692cfcd08866bc9ffec77b887fed4e from qemu
2018-03-05 02:48:16 -05:00
Emilio G. Cota b4a7d8b773
exec-all: bring tb->invalid into tb->cflags
This gets rid of a hole in struct TranslationBlock.

Backports commit 84f1c148da2b35fbb5a436597872765257e8914e from qemu
2018-03-05 02:46:21 -05:00
Emilio G. Cota 210d13ec49
tcg: consolidate TB lookups in tb_lookup__cpu_state
This avoids duplicating code. cpu_exec_step will also use the
new common function once we integrate parallel_cpus into tb->cflags.

Note that in this commit we also fix a race, described by Richard Henderson
during review. Think of this scenario with threads A and B:

(A) Lookup succeeds for TB in hash without tb_lock
(B) Sets the TB's tb->invalid flag
(B) Removes the TB from tb_htable
(B) Clears all CPU's tb_jmp_cache
(A) Store TB into local tb_jmp_cache

Given that order of events, (A) will keep executing that invalid TB until
another flush of its tb_jmp_cache happens, which in theory might never happen.
We can fix this by checking the tb->invalid flag every time we look up a TB
from tb_jmp_cache, so that in the above scenario, next time we try to find
that TB in tb_jmp_cache, we won't, and will therefore be forced to look it
up in tb_htable.

Performance-wise, I measured a small improvement when booting debian-arm.
Note that inlining pays off:

Performance counter stats for 'taskset -c 0 qemu-system-arm \
-machine type=virt -nographic -smp 1 -m 4096 \
-netdev user,id=unet,hostfwd=tcp::2222-:22 \
-device virtio-net-device,netdev=unet \
-drive file=jessie.qcow2,id=myblock,index=0,if=none \
-device virtio-blk-device,drive=myblock \
-kernel kernel.img -append console=ttyAMA0 root=/dev/vda1 \
-name arm,debug-threads=on -smp 1' (10 runs):

Before:
18714.917392 task-clock # 0.952 CPUs utilized ( +- 0.95% )
23,142 context-switches # 0.001 M/sec ( +- 0.50% )
1 CPU-migrations # 0.000 M/sec
10,558 page-faults # 0.001 M/sec ( +- 0.95% )
53,957,727,252 cycles # 2.883 GHz ( +- 0.91% ) [83.33%]
24,440,599,852 stalled-cycles-frontend # 45.30% frontend cycles idle ( +- 1.20% ) [83.33%]
16,495,714,424 stalled-cycles-backend # 30.57% backend cycles idle ( +- 0.95% ) [66.66%]
76,267,572,582 instructions # 1.41 insns per cycle
12,692,186,323 branches # 678.186 M/sec ( +- 0.92% ) [83.35%]
263,486,879 branch-misses # 2.08% of all branches ( +- 0.73% ) [83.34%]

19.648474449 seconds time elapsed ( +- 0.82% )

After, w/ inline (this patch):
18471.376627 task-clock # 0.955 CPUs utilized ( +- 0.96% )
23,048 context-switches # 0.001 M/sec ( +- 0.48% )
1 CPU-migrations # 0.000 M/sec
10,708 page-faults # 0.001 M/sec ( +- 0.81% )
53,208,990,796 cycles # 2.881 GHz ( +- 0.98% ) [83.34%]
23,941,071,673 stalled-cycles-frontend # 44.99% frontend cycles idle ( +- 0.95% ) [83.34%]
16,161,773,848 stalled-cycles-backend # 30.37% backend cycles idle ( +- 0.76% ) [66.67%]
75,786,269,766 instructions # 1.42 insns per cycle
12,573,617,143 branches # 680.708 M/sec ( +- 1.34% ) [83.33%]
260,235,550 branch-misses # 2.07% of all branches ( +- 0.66% ) [83.33%]

19.340502161 seconds time elapsed ( +- 0.56% )

After, w/o inline:
18791.253967 task-clock # 0.954 CPUs utilized ( +- 0.78% )
23,230 context-switches # 0.001 M/sec ( +- 0.42% )
1 CPU-migrations # 0.000 M/sec
10,563 page-faults # 0.001 M/sec ( +- 1.27% )
54,168,674,622 cycles # 2.883 GHz ( +- 0.80% ) [83.34%]
24,244,712,629 stalled-cycles-frontend # 44.76% frontend cycles idle ( +- 1.37% ) [83.33%]
16,288,648,572 stalled-cycles-backend # 30.07% backend cycles idle ( +- 0.95% ) [66.66%]
77,659,755,503 instructions # 1.43 insns per cycle
12,922,780,045 branches # 687.702 M/sec ( +- 1.06% ) [83.34%]
261,962,386 branch-misses # 2.03% of all branches ( +- 0.71% ) [83.35%]

19.700174670 seconds time elapsed ( +- 0.56% )

Backports commit f6bb84d53110398f4899c19dab4e0fe9908ec060 from qemu
2018-03-05 02:42:46 -05:00
Emilio G. Cota 5fae6dd433
tcg: remove addr argument from lookup_tb_ptr
It is unlikely that we will ever want to call this helper passing
an argument other than the current PC. So just remove the argument,
and use the pc we already get from cpu_get_tb_cpu_state.

This change paves the way to having a common "tb_lookup" function.

Backports commit 7f11636dbee89b0e4d03e9e2b96e14649a7db778 from qemu
2018-03-05 02:16:34 -05:00
Emilio G. Cota 68ddc0cb08
exec-all: fix typos in TranslationBlock's documentation
Backports commit eb5e2b9e3b141de0c435eedc31c26cbbdefbee1b from qemu
2018-03-05 02:10:28 -05:00
Emilio G. Cota 921c71a9ee
cpu-exec: rename have_tb_lock to acquired_tb_lock in tb_find
Reusing the have_tb_lock name, which is also defined in translate-all.c,
makes code reviewing unnecessarily harder.

Avoid potential confusion by renaming the local have_tb_lock variable
to something else.

Backports commit 841710c78e022cbc1f798cced035789725702dac from qemu
2018-03-05 02:09:42 -05:00
Emilio G. Cota f1d6630893
tcg/mips: constify tcg_target_callee_save_regs
Backports commit d453ec78251d03cbd4ffc28dbf6070931c8ae469 from qemu
2018-03-05 02:08:36 -05:00
Emilio G. Cota 3cf23eb256
tcg/i386: constify tcg_target_callee_save_regs
Backports commit e268f4c036d2b47a4f8bf293c1371b328e03ca04 from qemu
2018-03-05 02:08:02 -05:00
Todd Eisenberger 75bdfd85a7
x86: Correct translation of some rdgsbase and wrgsbase encodings
It looks like there was a transcription error when writing this code
initially. The code previously only decoded src or dst of rax. This
resolves
https://bugs.launchpad.net/qemu/+bug/1719984.

Backports commit e0dd5fd41a1a38766009f442967fab700d2d0550 from qemu
2018-03-05 02:05:26 -05:00
Philippe Mathieu-Daudé 0cb01a52bd
qom/cpu: move cpu_model null check to cpu_class_by_name()
and clean every implementation.

Backports commit 8301ea444abb49f7b7fb939b09c1e23b22977f30 from qemu
2018-03-05 02:02:29 -05:00
Peter Maydell 059f238f11
target/arm: Factor out "get mmuidx for specified security state"
For the SG instruction and secure function return we are going
to want to do memory accesses using the MMU index of the CPU
in secure state, even though the CPU is currently in non-secure
state. Write arm_v7m_mmu_idx_for_secstate() to do this job,
and use it in cpu_mmu_index().

Backports commit b81ac0eb6315e602b18439961e0538538e4aed4f from qemu
2018-03-05 02:00:23 -05:00
Peter Maydell 6958a4763d
target/arm: Fix calculation of secure mm_idx values
In cpu_mmu_index() we try to do this:
    if (env->v7m.secure) {
        mmu_idx += ARMMMUIdx_MSUser;
    }
but it will give the wrong answer, because ARMMMUIdx_MSUser
includes the 0x40 ARM_MMU_IDX_M field, and so does the
mmu_idx we're adding to, and we'll end up with 0x8n rather
than 0x4n. This error is then nullified by the call to
arm_to_core_mmu_idx() which masks out the high part, but
we're about to factor out the code that calculates the
ARMMMUIdx values so it can be used without passing it through
arm_to_core_mmu_idx(), so fix this bug first.

Backports commit fe768788d29597ee56fc11ba2279d502c2617457 from qemu
2018-03-05 01:58:42 -05:00
Peter Maydell 7988aec017
target/arm: Implement security attribute lookups for memory accesses
Implement the security attribute lookups for memory accesses
in the get_phys_addr() functions, causing these to generate
various kinds of SecureFault for bad accesses.

The major subtlety in this code relates to handling of the
case when the security attributes the SAU assigns to the
address don't match the current security state of the CPU.

In the ARM ARM pseudocode for validating instruction
accesses, the security attributes of the address determine
whether the Secure or NonSecure MPU state is used. At face
value, handling this would require us to encode the relevant
bits of state into mmu_idx for both S and NS at once, which
would result in our needing 16 mmu indexes. Fortunately we
don't actually need to do this because a mismatch between
address attributes and CPU state means either:
* some kind of fault (usually a SecureFault, but in theory
perhaps a UserFault for unaligned access to Device memory)
* execution of the SG instruction in NS state from a
Secure & NonSecure code region

The purpose of SG is simply to flip the CPU into Secure
state, so we can handle it by emulating execution of that
instruction directly in arm_v7m_cpu_do_interrupt(), which
means we can treat all the mismatch cases as "throw an
exception" and we don't need to encode the state of the
other MPU bank into our mmu_idx values.

This commit doesn't include the actual emulation of SG;
it also doesn't include implementation of the IDAU, which
is a per-board way to specify hard-coded memory attributes
for addresses, which override the CPU-internal SAU if they
specify a more secure setting than the SAU is programmed to.

Backports commit 35337cc391245f251bfb9134f181c33e6375d6c1 from qemu
2018-03-05 01:57:07 -05:00
Peter Maydell f9b4381ce0
nvic: Implement Security Attribution Unit registers
Implement the register interface for the SAU: SAU_CTRL,
SAU_TYPE, SAU_RNR, SAU_RBAR and SAU_RLAR. None of the
actual behaviour is implemented here; registers just
read back as written.

When the CPU definition for Cortex-M33 is eventually
added, its initfn will set cpu->sau_sregion, in the same
way that we currently set cpu->pmsav7_dregion for the
M3 and M4.

Number of SAU regions is typically a configurable
CPU parameter, but this patch doesn't provide a
QEMU CPU property for it. We can easily add one when
we have a board that requires it.

Backports commit 9901c576f6c02d43206e5faaf6e362ab7ea83246 from qemu
2018-03-05 01:55:11 -05:00
Peter Maydell 3da3a3fb41
target/arm: Add v8M support to exception entry code
Add support for v8M and in particular the security extension
to the exception entry code. This requires changes to:
* calculation of the exception-return magic LR value
* push the callee-saves registers in certain cases
* clear registers when taking non-secure exceptions to avoid
leaking information from the interrupted secure code
* switch to the correct security state on entry
* use the vector table for the security state we're targeting

Backports commit d3392718e1fcf0859fb7c0774a8e946bacb8419c from qemu
2018-03-05 01:51:22 -05:00
Peter Maydell 39466771d6
target/arm: Add support for restoring v8M additional state context
For v8M, exceptions from Secure to Non-Secure state will save
callee-saved registers to the exception frame as well as the
caller-saved registers. Add support for unstacking these
registers in exception exit when necessary.

Backports commit 907bedb3f3ce134c149599bd9cb61856d811b8ca from qemu
2018-03-05 01:47:25 -05:00
Peter Maydell 2feecbac0d
target/arm: Update excret sanity checks for v8M
In v8M, more bits are defined in the exception-return magic
values; update the code that checks these so we accept
the v8M values when the CPU permits them.

Backports commit bfb2eb52788b9605ef2fc9bc72683d4299117fde from qemu
2018-03-05 01:44:33 -05:00
Peter Maydell 33d2358c91
target/arm: Add new-in-v8M SFSR and SFAR
Add the new M profile Secure Fault Status Register
and Secure Fault Address Register.

Backports commit bed079da04dd9e0e249b9bc22bca8dce58b67f40 from qemu
2018-03-05 01:42:52 -05:00
Peter Maydell 7af730ed3e
target/arm: Don't warn about exception return with PC low bit set for v8M
In the v8M architecture, return from an exception to a PC which
has bit 0 set is not UNPREDICTABLE; it is defined that bit 0
is discarded [R_HRJH]. Restrict our complaint about this to v7M.

Backports commit 4e4259d3c574a8e89c3af27bcb84bc19a442efb1 from qemu
2018-03-05 01:41:51 -05:00
Peter Maydell 2aea283c4f
target/arm: Warn about restoring to unaligned stack
Attempting to do an exception return with an exception frame that
is not 8-aligned is UNPREDICTABLE in v8M; warn about this.
(It is not UNPREDICTABLE in v7M, and our implementation can
handle the merely-4-aligned case fine, so we don't need to
do anything except warn.)

Backports commit cb484f9a6e790205e69d9a444c3e353a3a1cfd84 from qemu
2018-03-05 01:40:40 -05:00
Peter Maydell 5063ca11ab
target/arm: Check for xPSR mismatch usage faults earlier for v8M
ARM v8M specifies that the INVPC usage fault for mismatched
xPSR exception field and handler mode bit should be checked
before updating the PSR and SP, so that the fault is taken
with the existing stack frame rather than by pushing a new one.
Perform this check in the right place for v8M.

Since v7M specifies in its pseudocode that this usage fault
check should happen later, we have to retain the original
code for that check rather than being able to merge the two.
(The distinction is architecturally visible but only in
very obscure corner cases like attempting an invalid exception
return with an exception frame in read only memory.)

Backports commit 224e0c300a0098fb577a03bd29d774d0769f632a from qemu
2018-03-05 01:39:39 -05:00
Peter Maydell 6f08acdcfe
target/arm: Restore SPSEL to correct CONTROL register on exception return
On exception return for v8M, the SPSEL bit in the EXC_RETURN magic
value should be restored to the SPSEL bit in the CONTROL register
banked specified by the EXC_RETURN.ES bit.

Add write_v7m_control_spsel_for_secstate() which behaves like
write_v7m_control_spsel() but allows the caller to specify which
CONTROL bank to use, reimplement write_v7m_control_spsel() in
terms of it, and use it in exception return.

Backports commit 3f0cddeee1f266d43c956581f3050058360a810d from qemu
2018-03-05 01:35:17 -05:00
Peter Maydell 0bb50b9a7e
target/arm: Restore security state on exception return
Now that we can handle the CONTROL.SPSEL bit not necessarily being
in sync with the current stack pointer, we can restore the correct
security state on exception return. This happens before we start
to read registers off the stack frame, but after we have taken
possible usage faults for bad exception return magic values and
updated CONTROL.SPSEL.

Backports commit 3919e60b6efd9a86a0e6ba637aa584222855ac3a from qemu
2018-03-05 01:31:58 -05:00
Peter Maydell c7b5fccfb8
target/arm: Prepare for CONTROL.SPSEL being nonzero in Handler mode
In the v7M architecture, there is an invariant that if the CPU is
in Handler mode then the CONTROL.SPSEL bit cannot be nonzero.
This in turn means that the current stack pointer is always
indicated by CONTROL.SPSEL, even though Handler mode always uses
the Main stack pointer.

In v8M, this invariant is removed, and CONTROL.SPSEL may now
be nonzero in Handler mode (though Handler mode still always
uses the Main stack pointer). In preparation for this change,
change how we handle this bit: rename switch_v7m_sp() to
the now more accurate write_v7m_control_spsel(), and make it
check both the handler mode state and the SPSEL bit.

Note that this implicitly changes the point at which we switch
active SP on exception exit from before we pop the exception
frame to after it.

Backports commit de2db7ec894f11931932ca78cd14a8d2b1389d5b from qemu
2018-03-05 01:29:54 -05:00
Peter Maydell 8036c5b3de
target/arm: Don't switch to target stack early in v7M exception return
Currently our M profile exception return code switches to the
target stack pointer relatively early in the process, before
it tries to pop the exception frame off the stack. This is
awkward for v8M for two reasons:
* in v8M the process vs main stack pointer is not selected
purely by the value of CONTROL.SPSEL, so updating SPSEL
and relying on that to switch to the right stack pointer
won't work
* the stack we should be reading the stack frame from and
the stack we will eventually switch to might not be the
same if the guest is doing strange things

Change our exception return code to use a 'frame pointer'
to read the exception frame rather than assuming that we
can switch the live stack pointer this early.

Backports commit 5b5223997c04b769bb362767cecb5f7ec382c5f0 from qemu
2018-03-05 01:26:05 -05:00
Jan Kiszka ae16a26c20
arm: Fix SMC reporting to EL2 when QEMU provides PSCI
This properly forwards SMC events to EL2 when PSCI is provided by QEMU
itself and, thus, ARM_FEATURE_EL3 is off.

Found and tested with the Jailhouse hypervisor. Solution based on
suggestions by Peter Maydell.

Backports commit 77077a83006c3c9bdca496727f1735a3c5c5355d from qemu
2018-03-05 01:19:22 -05:00
Peter Xu 0741c3880a
qom: provide root container for internal objs
We have object_get_objects_root() to keep user created objects, however
no place for objects that will be used internally. Create such a
container for internal objects.

Backports commit 7c47c4ead75d0b733ee8f2f51fd1de0644cc1308 from qemu
2018-03-05 01:16:50 -05:00
KONRAD Frederic 61ecdd1032
memory: avoid a name clash with access macro
This avoids a name clash with the access macro on windows 64:

make
CHK version_gen.h
CC aarch64-softmmu/memory.o
/home/konrad/qemu/memory.c: In function 'access_with_adjusted_size':
/home/konrad/qemu/memory.c:591:73: error: macro "access" passed 7 arguments, \
but takes just 2
(size - access_size - i) * 8, access_mask, attrs);
^

Backports commit 05e015f73c3b5c50c237d3d8e555e25cfa543a5c from qemu
2018-03-05 01:13:01 -05:00
Peter Xu 4956effd11
bitmap: provide to_le/from_le helpers
Provide helpers to convert bitmaps to little endian format. It can be
used when we want to send one bitmap via network to some other hosts.

One thing to mention is that these helpers only solve the problem of
endianness; they do not solve the problem of different word sizes across
machines (bitmaps managing the same number of bits may be malloced with
different sizes). So we need to take care of the size alignment issue
in the callers for now.

Backports commit d7788151a0807d5d2d410e3f8944d8c8a651f8d2 from qemu
2018-03-05 01:11:13 -05:00
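
To make the endianness issue concrete, here is a minimal standalone sketch of
the conversion idea (illustrative only; the function name and signature below
are not the QEMU helpers):

    #include <stdint.h>
    #include <stddef.h>

    /* Serialize a bitmap stored in host unsigned longs into a little-endian
     * byte stream, so hosts of different endianness agree on which byte
     * carries which bits. */
    static void bitmap_serialize_le(uint8_t *out, const unsigned long *bm,
                                    size_t nwords)
    {
        for (size_t i = 0; i < nwords; i++) {
            unsigned long w = bm[i];
            for (size_t j = 0; j < sizeof(unsigned long); j++) {
                out[i * sizeof(unsigned long) + j] = (uint8_t)(w >> (8 * j));
            }
        }
    }
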
Peter Xu 3d5fa79305
bitmap: introduce bitmap_count_one()
Count how many bits are set in the bitmap.

Backports commit fc7deeea26af3d08f45bad85b8bd3fc3d790a090 from qemu
2018-03-05 01:08:29 -05:00
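
A minimal standalone sketch of the operation (illustrative only, not the QEMU
implementation), using the GCC/Clang popcount builtin and masking the partial
last word:

    #include <limits.h>

    #define WORD_BITS (sizeof(unsigned long) * CHAR_BIT)

    static long count_one_bits(const unsigned long *bm, long nbits)
    {
        long count = 0;
        for (long i = 0; i * (long)WORD_BITS < nbits; i++) {
            unsigned long w = bm[i];
            long valid = nbits - i * (long)WORD_BITS;
            if (valid < (long)WORD_BITS) {
                w &= (1UL << valid) - 1;   /* keep only the bits in range */
            }
            count += __builtin_popcountl(w);
        }
        return count;
    }
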
Peter Xu 296c30aaca
bitmap: remove BITOP_WORD()
We have BIT_WORD(). It's the same.

Backports commit ab089e058eb443a9a11ade4d3fc57ced1947bfa3 from qemu
2018-03-05 01:07:02 -05:00
Peter Maydell f0569ba11a
target/arm: Remove out of date ARM ARM section references in A64 decoder
In the A64 decoder, we have a lot of references to section numbers
from version A.a of the v8A ARM ARM (DDI0487). This version of the
document is now long obsolete (we are currently on revision B.a),
and various intervening versions renumbered all the sections.

The most recent B.a version of the document doesn't assign
section numbers at all to the individual instruction classes
in the way that the various A.x versions did. The simplest thing
to do is just to delete all the out of date C.x.x references.

Backports commit 4ce31af4aeb8471f6a913de7c59d3bde1fc4f03d from qemu
2018-03-05 01:05:53 -05:00
Peter Maydell 72dadc6518
target/arm: Handle banking in negative-execution-priority check in cpu_mmu_index()
Now that we have a banked FAULTMASK register and banked exceptions,
we can implement the correct check in cpu_mmu_index() for whether
the MPU_CTRL.HFNMIENA bit's effect should apply. This bit causes
handlers which have requested a negative execution priority to run
with the MPU disabled. In v8M the test has to check this for the
current security state and so takes account of banking.

Backports relevant part of commit 5d4791991d4de12e83d44738417c9e964167b6e8 from qemu
2018-03-05 00:54:28 -05:00
Peter Maydell 4b8bdda695
target/arm: Implement MSR/MRS access to NS banked registers
In v8M the MSR and MRS instructions have extra register value
encodings to allow secure code to access the non-secure banked
version of various special registers.

(We don't implement the MSPLIM_NS or PSPLIM_NS aliases, because
we don't currently implement the stack limit registers at all.)

Backports commit 50f11062d4c896408731d6a286bcd116d1e08465 from qemu
2018-03-05 00:53:13 -05:00
Eric Blake f31c3b32fb
mips: Improve macro parenthesization
Although none of the existing macro call-sites were broken,
it's always better to write macros that properly parenthesize
arguments that can be complex expressions, so that the intended
order of operations is not broken.

Backports commit 2a2be359c4335607c7f746cf27c412c08ab89aff from qemu
2018-03-05 00:51:51 -05:00
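
A tiny standalone example of the hazard being guarded against (not taken from
the patched MIPS headers):

    #include <assert.h>

    #define BAD_DOUBLE(x)  (2 * x)     /* argument not parenthesized */
    #define GOOD_DOUBLE(x) (2 * (x))   /* argument parenthesized     */

    int main(void)
    {
        assert(BAD_DOUBLE(1 + 2) == 4);    /* expands to (2 * 1 + 2)   */
        assert(GOOD_DOUBLE(1 + 2) == 6);   /* expands to (2 * (1 + 2)) */
        return 0;
    }
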
Igor Mammedov 00d52414c1
mips: replace cpu_mips_init() with cpu_generic_init()
Now cpu_mips_init() reimplements a subset of cpu_generic_init()'s
tasks, so just drop it and use cpu_generic_init() directly.

Backports commit c4c8146cfd0fc3f95418fbc82a2eded594675022 from qemu
2018-03-05 00:49:10 -05:00
Igor Mammedov 97b525a794
mips: MIPSCPU model subclasses
Register a separate QOM type for each MIPS CPU model, so that
generic CPU creation routines can be reused.

Backports commit 41da212c9ce9482fcfd490170c2611470254f8dc from qemu
2018-03-05 00:42:29 -05:00
Philippe Mathieu-Daudé 4729b633f1
mips: call cpu_mips_realize_env() from mips_cpu_realizefn()
This changes the order between cpu_mips_realize_env() and
cpu_exec_initfn(), but cpu_exec_initfn() doesn't have anything that
depends on cpu_mips_realize_env() being called first.

Backports commit df4dc10284e1d871db8adb512816a561473ffe3e from qemu
2018-03-05 00:29:54 -05:00
Philippe Mathieu-Daudé 3257a8f8c3
mips: split cpu_mips_realize_env() out of cpu_mips_init()
so it can be used in mips_cpu_realizefn() in the next commit

Backports commit 27e38392ca07f97edfb2257b6a1394a04d84e8d5 from qemu
2018-03-05 00:28:17 -05:00
Philippe Mathieu-Daudé c4f351394f
mips: introduce internal.h and cleanup cpu.h
no logical change, only code movement (and fix a comment typo).

Backports commit 26aa3d9aecbb6fe9bce808a1d127191bdf3cc3d2 from qemu
Also backports commit 5502b66fc7d0bebd08b9b7017cb7e8b5261c3a2d
2018-03-05 00:25:56 -05:00
Igor Mammedov 607bc396c3
arm: drop intermediate cpu_model -> cpu type parsing and use cpu type directly
Backports defines from commit ba1ba5cca3962a9cc400c713c736b4fb8db1f38e from qemu
2018-03-05 00:10:21 -05:00
Cornelia Huck 6f9b7a9363
cpu: drop old comments describing members
These comments are obviously stale.

Backports commit 6fda014e1a65474c4877b36cc42e8a0f377817a4 from qemu
2018-03-05 00:03:31 -05:00
Eric Blake 3017797f7d
osdep.h: Prohibit disabling assert() in supported builds
We already have several files that knowingly require assert()
to work, sometimes because refactoring the code for proper
error handling has not been tackled yet; there are probably
other files that have a similar situation but with no comments
documenting the same. In fact, we have places in migration
that handle untrusted input with assertions, where disabling
the assertions risks a worse security hole than the current
behavior of losing the guest to SIGABRT when migration fails
because of the assertion. Promote our current per-file
safety-valve to instead be project-wide, and expand it to also
cover glib's g_assert().

Note that we do NOT want to encourage 'assert(side-effects);'
(that is a bad practice that prevents copy-and-paste of code to
other projects that CAN disable assertions; plus it costs
unnecessary reviewer mental cycles to remember whether a project
special-cases the crippling of asserts); and we would LIKE to
fix migration to not rely on asserts (but that takes a big code
audit). But in the meantime, we DO want to send a message
that anyone that disables assertions has to tweak code in order
to compile, making it obvious that they are taking on additional
risk that we are not going to support. At the same time, leave
comments mentioning NDEBUG in files that we know still need to
be scrubbed, so there is at least something to grep for.

It would be possible to come up with some other mechanism for
doing runtime checking by default, but which does not abort
the program on failure, while leaving side effects in place
(unlike how crippling assert() avoids even the side effects),
perhaps under the name q_verify(); but it was not deemed worth
the effort (developers should not have to learn a replacement
when the standard C macro works just fine, and it would be a lot
of churn for little gain). The patch specifically uses #error
rather than #warn so that a user is forced to tweak the header
to acknowledge the issue, even when not using a -Werror
compilation.

Backports commit 262a69f4282e44426c7a132138581d400053e0a1 from qemu
2018-03-05 00:01:57 -05:00
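
A minimal sketch of what such a project-wide guard looks like (the exact
wording and placement in osdep.h are not reproduced here):

    /* Refuse to build if assertions have been compiled out. */
    #ifdef NDEBUG
    #error building with NDEBUG is not supported
    #endif
    #ifdef G_DISABLE_ASSERT
    #error building with G_DISABLE_ASSERT is not supported
    #endif
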
Gonglei ce71be5c05
i386/cpu/hyperv: support over 64 vcpus for windows guests
Starting with Windows Server 2012 and Windows 8, if
CPUID.40000005.EAX contains a value of -1, Windows assumes no specific
limit to the number of VPs. In this case, Windows Server 2012
guest VMs may use more than 64 VPs, up to the maximum supported
number of processors applicable to the specific Windows
version being used.

https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/reference/tlfs

For compatibility, let's introduce a new property for X86CPU,
named "x-hv-max-vps", as Eduardo suggested, and set it
to 0x40 before machine 2.10.

(The "x-" prefix indicates that the property is not supposed to
be a stable user interface.)

Backports relevant parts of commit 6c69dfb67e84747cf071958594d939e845dfcc0c from qemu
2018-03-05 00:00:53 -05:00
Joseph Myers a237d9dbca
target/i386: fix phminposuw in-place operation
The SSE4.1 phminposuw instruction finds the minimum 16-bit element in
the source vector, putting the value of that element in the low 16
bits of the destination vector, the index of that element in the next
three bits and zeroing the rest of the destination. The helper for
this operation fills the destination from high to low, meaning that
when the source and destination are the same register, the minimum
source element can be overwritten before it is copied to the
destination. This patch fixes it to fill the destination from low to
high instead, so the minimum source element is always copied first.
This fixes one gcc test failure in my GCC 6-based testing (and so
concludes the present sequence of patches, as I don't have any further
gcc test failures left in that testing that I attribute to QEMU bugs).

Backports commit aa406feadfc5b095ca147ec56d6187c64be015a7 from qemu
2018-03-04 23:59:26 -05:00
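
To illustrate the in-place hazard and the fix, a standalone sketch of the
corrected fill order (illustrative only; it mirrors the helper's strategy,
not its code):

    #include <stdint.h>

    /* Find the minimum unsigned 16-bit element of src, then fill dst from
     * low to high: value, index, zeros.  Reading src completely before the
     * first write makes dst == src (in-place operation) safe. */
    static void phminposuw_like(uint16_t dst[8], const uint16_t src[8])
    {
        uint16_t minval = src[0];
        uint16_t minidx = 0;

        for (uint16_t i = 1; i < 8; i++) {
            if (src[i] < minval) {
                minval = src[i];
                minidx = i;
            }
        }
        dst[0] = minval;
        dst[1] = minidx;
        for (int i = 2; i < 8; i++) {
            dst[i] = 0;
        }
    }
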
Joseph Myers 85b647e486
target/i386: fix pcmpxstrx substring search
One of the cases of the SSE4.2 pcmpestri / pcmpestrm / pcmpistri /
pcmpistrm instructions does a substring search. The implementation of
this case in the pcmpxstrx helper is incorrect. The operation in this
case is a search for a string (argument d to the helper) in another
string (argument s to the helper); if a copy of d at a particular
position would run off the end of s, the resulting output bit should
be 0 whether or not the strings match in the region where they
overlap, but the QEMU implementation was wrongly comparing only up to
the point where s ends and counting it as a match if an initial
segment of d matched a terminal segment of s. Here, "run off the end
of s" means that some byte of d would overlap some byte outside of s;
thus, if d has zero length, it is considered to match everywhere,
including after the end of s. This patch fixes the implementation to
correspond with the proper instruction semantics. This fixes four gcc
test failures in my GCC 6-based testing.

Backports commit ae35eea7e4a9f21dd147406dfbcd0c4c6aaf2a60 from qemu
2018-03-04 23:58:45 -05:00
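
A standalone sketch of the corrected matching rule described above
(illustrative only; the real helper also handles the immediate-selected modes
and validity masks):

    #include <stddef.h>
    #include <string.h>

    /* Bit i of the result is set only if the whole of d, placed at offset i
     * of s, lies within s and matches byte for byte.  A zero-length d is
     * treated as matching at every offset. */
    static unsigned substr_match_mask(const char *s, size_t slen,
                                      const char *d, size_t dlen)
    {
        unsigned mask = 0;

        for (size_t i = 0; i < 16; i++) {   /* 16-byte vector positions */
            if (dlen == 0 ||
                (i + dlen <= slen && memcmp(s + i, d, dlen) == 0)) {
                mask |= 1u << i;
            }
        }
        return mask;
    }
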
Joseph Myers 59df0bae12
target/i386: fix packusdw in-place operation
The SSE4.1 packusdw instruction combines source and destination
vectors of signed 32-bit integers into a single vector of unsigned
16-bit integers, with unsigned saturation. When the source and
destination are the same register, this means each 32-bit element of
that register is used twice as an input, to produce two of the 16-bit
output elements, and so if the operation is carried out
element-by-element in-place, no matter what the order in which it is
applied to the elements, the first element's operation will overwrite
some future input. The helper for packssdw avoids this issue by
computing the result in a local temporary and copying it to the
destination at the end; this patch fixes the packusdw helper to do
likewise. This fixes three gcc test failures in my GCC 6-based
testing.

Backports commit 80e19606215d4df370dfe8fe21c558a129f00f0b from qemu
2018-03-04 23:57:54 -05:00
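
A standalone sketch of the temp-then-copy pattern adopted by the fix
(illustrative only; lane layout simplified):

    #include <stdint.h>
    #include <string.h>

    static uint16_t sat_u16(int32_t v)
    {
        return v < 0 ? 0 : v > 0xffff ? 0xffff : (uint16_t)v;
    }

    /* Pack two vectors of signed 32-bit lanes into one vector of unsigned
     * 16-bit lanes with saturation.  The result is built in a local
     * temporary and copied out at the end, so dst may alias either input. */
    static void packusdw_like(uint16_t dst[8],
                              const int32_t a[4], const int32_t b[4])
    {
        uint16_t tmp[8];

        for (int i = 0; i < 4; i++) {
            tmp[i]     = sat_u16(a[i]);
            tmp[i + 4] = sat_u16(b[i]);
        }
        memcpy(dst, tmp, sizeof(tmp));
    }
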
Joseph Myers 84b3c54b18
target/i386: set rip_offset for further SSE instructions
It turns out that my recent fix to set rip_offset when emulating some
SSE4.1 instructions needs generalizing to cover a wider class of
instructions. Specifically, every instruction in the sse_op_table7
table, coming from various instruction set extensions, has an 8-bit
immediate operand that comes after any memory operand, and so needs
rip_offset set for correctness if there is a memory operand that is
rip-relative, and my patch only set it for a subset of those
instructions. This patch moves the rip_offset setting to cover the
wider class of instructions, so fixing 9 further gcc testsuite
failures in my GCC 6-based testing. (I do not know whether there
might be still further classes of instructions missing this setting.)

Backports commit c6a8242915328cda0df0fbc0803da3448137e614 from qemu
2018-03-04 23:57:12 -05:00
Joseph Myers e883a15231
target/i386: fix pmovsx/pmovzx in-place operations
The SSE4.1 pmovsx* and pmovzx* instructions take packed 1-byte, 2-byte
or 4-byte inputs and sign-extend or zero-extend them to a wider vector
output. The associated helpers for these instructions do the
extension on each element in turn, starting with the lowest. If the
input and output are the same register, this means that all the input
elements after the first have been overwritten before they are read.
This patch makes the helpers extend starting with the highest element,
not the lowest, to avoid such overwriting. This fixes many GCC test
failures (161 in the gcc testsuite in my GCC 6-based testing) when
testing with a default CPU setting enabling those instructions.

Backports commit c6a56c8e990b213a1638af2d34352771d5fa4d9c from qemu
2018-03-04 23:56:01 -05:00
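
A standalone sketch of the reversed fill order for one of these forms,
pmovzxbw-style byte-to-word zero extension (illustrative only):

    #include <stdint.h>

    /* Zero-extend the low 8 bytes of in[] into 8 16-bit lanes of out[].
     * Filling from the highest lane downwards mirrors the fixed helpers:
     * when source and destination are the same vector register, each
     * source byte is read before the write that would clobber it. */
    static void pmovzxbw_like(uint16_t out[8], const uint8_t in[16])
    {
        for (int i = 7; i >= 0; i--) {
            out[i] = in[i];
        }
    }
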
Richard Henderson 7168f72d4d
tcg/mips: Fully convert tcg_target_op_def
Backports commit 89b2e37e6506d92b00ac478e7953be6ddd7a86a9 from qemu
2018-03-04 23:54:26 -05:00
Richard Henderson 24c5be0472
tcg/sparc: Fully convert tcg_target_op_def
Backports commit 9be44a16c258287aab5a3accda153d3a5144359f from qemu
2018-03-04 23:52:18 -05:00
Richard Henderson d3b1c8d5a4
tcg/ppc: Fully convert tcg_target_op_def
Backports commit 6cb3658a04149b2c1fb92e2ea9d2e2f6cecc0014 from qemu
2018-03-04 23:50:58 -05:00
Richard Henderson 3094e7927e
tcg/arm: Fully convert tcg_target_op_def
Backports commit 7536b82d28876d1ffe0359667b28c93d49386fa0 from qemu
2018-03-04 23:48:55 -05:00
Richard Henderson 47ed20fdd4
tcg/aarch64: Fully convert tcg_target_op_def
Backports commit 1897cc2eb8be2d8be23380b45a2d3c1a2808723f from qemu
2018-03-04 23:46:38 -05:00
Richard Henderson fe632c4df8
tcg: Fix types in tcg_regset_{set,reset}_reg
There was a potential problem here with an ILP32 host
with 64 host registers.

Backports commit 80a8b9a910e14d4a1937f70dce944891990f3441 from qemu
2018-03-04 23:44:13 -05:00
Richard Henderson fc8b4316a9
tcg: Remove tcg_regset_set32
It's not even clear what the interface REG and VAL32 were supposed to mean.
All uses had REG = 0 and VAL32 was the bitset assigned to the destination.

Backports commit f46934df662182097dce07d57ec00f37e4d2abf1 from qemu
2018-03-04 23:42:59 -05:00
Richard Henderson 9a9c2ede4a
tcg: Remove tcg_regset_{or,and,andnot,not}
Backports commit 07ddf036fa66bca279590c09fe1c46bcdcc5bcff from qemu
2018-03-04 23:34:16 -05:00
Richard Henderson 7ba6f6f5e6
tcg: Remove tcg_regset_set
Backports commit d21369f5fb41299d5e7b032ec6da12da7f95f72f from qemu
2018-03-04 23:31:35 -05:00
Richard Henderson 49d09d6888
tcg: Remove tcg_regset_clear
Backports commit ccb1bb66ea2a42e773bfa04178d8b383ff86d4d8 from qemu
2018-03-04 23:24:45 -05:00
Richard Henderson 7b68a8f0ca
tcg: Add tcg_op_supported
Backports commit be0f34b5840312bbe9627c2b9f68a25f32903dae from qemu
2018-03-04 23:20:28 -05:00
Richard Henderson c5e952978c
target/arm: Avoid an extra temporary for store_exclusive
Instead of copying addr to a local temp, reuse the value (which we
have just compared as equal) already saved in cpu_exclusive_addr.

Backports commit 37e29a64254bf82a1901784fcca17c25f8164c2f from qemu
2018-03-04 23:17:50 -05:00
Jaroslaw Pelczar 7fded6c15c
AArch64: Fix single stepping of ERET instruction
Previously, single-stepping through an ERET instruction via GDB would
result in the debugger entering the "next" PC after the ERET instruction.
When debugging in kernel mode, this also causes unintended behavior,
because the debugger will try to access memory from the EL0 point of view.

Backports commit dddbba9943ef6a81c8702e4a50cb0a8b1a4201fe from qemu
2018-03-04 23:15:30 -05:00
Peter Maydell 6a951f17ed
target/arm: Rename 'type' to 'excret' in do_v7m_exception_exit()
In the v7M and v8M ARM ARM, the magic exception return values are
referred to as EXC_RETURN values, and in QEMU we use V7M_EXCRET_*
constants to define bits within them. Rename the 'type' variable
which holds the exception return value in do_v7m_exception_exit()
to excret, making it clearer that it does hold an EXC_RETURN value.

Backports commit 351e527a613147aa2a2e6910f92923deef27ee48 from qemu
2018-03-04 23:14:22 -05:00
Peter Maydell 1301cb1771
target/arm: Add and use defines for EXCRET constants
The exception-return magic values get some new bits in v8M, which
makes some bit definitions for them worthwhile.

We don't use the bit definitions for the switch on the low bits
which checks the return type for v7M, because this is defined
in the v7M ARM ARM as a set of valid values rather than via
per-bit checks.

Backports commit 4d1e7a4745c050f7ccac49a1c01437526b5130b5 from qemu
2018-03-04 23:12:37 -05:00
Peter Maydell aa71933721
target/arm: Remove unnecessary '| 0xf0000000' from do_v7m_exception_exit()
In do_v7m_exception_exit(), there's no need to force the high 4
bits of 'type' to 1 when calling v7m_exception_taken(), because
we know that they're always 1 or we could not have got to this
"handle return to magic exception return address" code. Remove
the unnecessary ORs.

Backports commit 7115cdf5782922611bcc44c89eec5990db7f6466 from qemu
2018-03-04 23:11:13 -05:00
Peter Maydell 2718aa8233
target/arm: Get PRECISERR and IBUSERR the right way round
For a bus fault, the M profile BFSR bit PRECISERR means a bus
fault on a data access, and IBUSERR means a bus fault on an
instruction access. We had these the wrong way around; fix this.

Backports commit c6158878650c01b2c753b2ea7d0967c8fe5ca59e from qemu
2018-03-04 23:10:33 -05:00
Peter Maydell ceccd92940
target/arm: Clear exclusive monitor on v7M reset, exception entry/exit
For M profile we must clear the exclusive monitor on reset, exception
entry and exception exit. We weren't doing any of these things; fix
this bug.

Backports commit dc3c4c14f0f12854dbd967be3486f4db4e66d25b from qemu
2018-03-04 23:09:41 -05:00
Peter Maydell 2a9b62c12b
target/arm: Clear exclusive monitor on v7M reset, exception entry/exit
For M profile we must clear the exclusive monitor on reset, exception
entry and exception exit. We weren't doing any of these things; fix
this bug.

Backports commit dc3c4c14f0f12854dbd967be3486f4db4e66d25b from qemu
2018-03-04 23:08:31 -05:00
Peter Maydell 09ca9356a3
target/arm: Use M_REG_NUM_BANKS rather than hardcoding 2
Use a symbolic constant M_REG_NUM_BANKS for the array size for
registers which are banked by M profile security state, rather
than hardcoding lots of 2s.

Backports commit 4a16724f06ead684a5962477a557c26c677c2729 from qemu
2018-03-04 23:07:30 -05:00
Dr. David Alan Gilbert 9730b2ccf6
sparc: Fix typedef clash
Older compilers (rhel6) don't like redefinition of typedefs

Fixes: 12a6c15ef31c98ecefa63e91ac36955383038384

Backports commit 9d81b2d2000f41be55a0624a26873f993fb6e928 from qemu
2018-03-04 23:05:50 -05:00
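
For reference, the kind of construct those older compilers reject (the type
name here is only illustrative, not the one from the sparc code):

    typedef struct SomeState SomeState;
    typedef struct SomeState SomeState;  /* old GCC: "redefinition of typedef 'SomeState'" */
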
Kamil Rytarowski b10922fb33
target/m68k: Switch fpu_rom from make_floatx80() to make_floatx80_init()
GCC 4.7.2 on SunOS reports that the values assigned to array members are not
real constants:

target/m68k/fpu_helper.c:32:5: error: initializer element is not constant
target/m68k/fpu_helper.c:32:5: error: (near initialization for 'fpu_rom[0]')
rules.mak:66: recipe for target 'target/m68k/fpu_helper.o' failed

Convert the array to make_floatx80_init() to fix it.
Replace the floatx80_pi-like constants with make_floatx80_init(), as they
are currently defined via make_floatx80().

This fixes build on SmartOS (Joyent).

Backports commit 6fa9ba09dbf4eb8b52bcb47d6820957f1b77ee0b from qemu
2018-03-04 23:05:01 -05:00
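
A sketch of why the two macro flavours differ (the type layout and macro
bodies here are assumptions for illustration, not copied from softfloat.h):

    #include <stdint.h>

    typedef struct { uint64_t low; uint16_t high; } fx80;

    /* Compound literal: usable in expressions, but not a constant
     * expression, so strict compilers reject it in a static initializer
     * ("initializer element is not constant"). */
    #define make_fx80(exp, mant)      ((fx80){ (mant), (exp) })
    /* Plain brace initializer: valid for static/const array elements. */
    #define make_fx80_init(exp, mant) { (mant), (exp) }

    static const fx80 rom[] = {
        make_fx80_init(0x4000, 0xc90fdaa22168c235ULL),  /* ~pi */
    };
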
Lioncash 3c5f8b2800
tcg/ppc: Update to commit 53c89efd02cef626040165cc8f06b5cf2c15355d 2018-03-04 23:00:03 -05:00
Lioncash c786137691
tcg/arm: Update to commit afe74dbd6a58031741b68e99843c1f1d390996b2 2018-03-04 22:58:36 -05:00
Richard Henderson 504bdad70d
tcg/arm: Tighten tlb indexing offset test
We are not going to use ldrd for loading the comparator
for 32-bit guests, so there is no need to limit cmp_off to 8 bits in that case.
This eliminates one insn in the tlb load for some guests.

Backports commit 95ede84f4de18747d03d79c148013cff99acd60b from qemu
2018-03-04 22:57:04 -05:00
Richard Henderson e4d05c2567
tcg/arm: Improve tlb load for armv7
Use UBFX to avoid limitation on CPU_TLB_BITS. Since we're dropping
the initial shift, we need to replace the page masking. We can use
MOVW+BIC to do this without shifting. The result is the same size
as the armv6 path with one less conditional instruction.

Backports commit 647ab96aaf5defeb138e48d610f7f633c587b40d from qemu
2018-03-04 22:56:27 -05:00
Richard Henderson b3fd6a8c8c
tcg/sparc: Use constant pool for movi
Backports commit e9823b4c3347370414b63010ec4a2a4754e4abb5 from qemu
2018-03-04 22:53:59 -05:00
Richard Henderson b786e2d27e
tcg/sparc: Introduce TCG_REG_TB
Backports commit ab20bdc11624837bd0c8aea83c603b66f0406e8b from qemu
2018-03-04 22:51:38 -05:00
Richard Henderson 0c3781e7eb
tcg/aarch64: Use constant pool for movi
Backports commit 55129955e92ec164ee2d778f20070dc214109bc6 from qemu
2018-03-04 22:46:50 -05:00
Richard Henderson 5150970625
tcg/s390: Use constant pool for cmpi
Also use CHI/CGHI for 16-bit signed constants.

Backports commit a534bb15f30ff7e420434b3e5746bcad595c5429 from qemu
2018-03-04 22:44:26 -05:00
Richard Henderson c08620b984
tcg/s390: Use constant pool for xori
Backports commit 5bf67a9217a31512f35b036924e1db1baf2f9ebf from qemu
2018-03-04 22:39:14 -05:00
Lioncash 35d3118469
tcg/s390: Use constant pool for ori 2018-03-04 22:35:27 -05:00
Richard Henderson bdadfa7520
tcg/s390: Use constant pool for andi
Backports commit bdcd5d1926a7ae42c060efdcaa15074930a92ebb from qemu
2018-03-04 22:33:08 -05:00
Richard Henderson bc23bab79d
tcg/s390: Use constant pool for movi
Split out maybe_out_small_movi for use with other operations
that want to add to the constant pool.

Backports commit 28eef8aaece5e83df4568d9842ab9611ec130b2c from qemu
2018-03-04 22:32:04 -05:00
Richard Henderson ba1563eb2f
tcg/s390: Fix sign of patch_reloc addend
We were passing in -2 instead of +2, but then ignoring
the actual contents of addend in the calculation.

Backports commit e692a3492d04500355bcf23575eed7cf137b38d5 from qemu
2018-03-04 22:28:24 -05:00
Richard Henderson 2fff7d54cb
tcg/s390: Introduce TCG_REG_TB
Backports commit 829e1376d94009a7ccacc0535bffcc679f7bb507 from qemu
2018-03-04 22:26:52 -05:00
Richard Henderson b96f53e8a3
tcg/i386: Store out-of-range call targets in constant pool
This already saves 2 bytes per call, and the constant pool
entry may well be shared across multiple calls.

Backports commit 4e45f23943c0bb91588627de3801826546155ad8 from qemu
2018-03-04 22:22:49 -05:00
Richard Henderson e9d8cef430
tcg: Infrastructure for managing constant pools
A new shared header tcg-pool.inc.c adds new_pool_label,
for registering a tcg_target_ulong to be emitted after
the generated code, plus relocation data to install a
pointer to the data.

A new pointer is added to the TCGContext, so that we
dump the constant pool as data, not code.

Backports commit 57a269469dbf70013dab3a176e1735636010a772 from qemu
2018-03-04 22:17:33 -05:00
Richard Henderson f96514a99c
tcg: Rearrange ldst label tracking
Dispense with TCGBackendData, as it has never been used for more than
holding a single pointer. Use a define in the cpu/tcg-target.h to
signal requirement for TCGLabelQemuLdst, so that we can drop the no-op
tcg-be-null.h stubs. Rename tcg-be-ldst.h to tcg-ldst.inc.c.

Backports commit 659ef5cbb893872d25e9d95191cc23b16546c8a1 from qemu
2018-03-04 22:13:13 -05:00
Richard Henderson 3c8cdb237a
tcg: Use tcg_malloc to allocate TCGLabelQemuLdst
Pre-allocating 640 of them per TB is a waste.

Backports commit 686461c96254f34bcce67a949c72867ab6ec3fcf from qemu
2018-03-04 22:00:24 -05:00
Richard Henderson 31b8b67cd3
tcg: Move USE_DIRECT_JUMP discriminator to tcg/cpu/tcg-target.h
Replace the USE_DIRECT_JUMP ifdef with a TCG_TARGET_HAS_direct_jump
boolean test. Replace the tb_set_jmp_target1 ifdef with an unconditional
function tb_target_set_jmp_target.

While we're touching all backends, add a parameter for tb->tc_ptr;
we're going to need it shortly for some backends.

Move tb_set_jmp_target and tb_add_jump from exec-all.h to cpu-exec.c.

Backports commit a85833933628384d74ec412024d55cf012640287 from qemu
2018-03-04 21:52:35 -05:00
Peter Maydell 433bdcff34
configure: Drop AIX host support
Nobody has mentioned AIX host support on the mailing list for years,
and we have no test systems for it so it is most likely broken.
We've advertised in configure for two releases now that we plan
to drop support for this host OS, and have had no complaints.
Drop the AIX host support code.

We can also drop the now-unused AIX version of sys_cache_info().

Note that the _CALL_AIX define used in the PPC tcg backend is
also used for Linux PPC64, and so that code should not be removed.

Backports commit 7872375219c03682bda3f6191fa5f6a58238ed36 from qemu
2018-03-04 21:32:40 -05:00
Peter Maydell 8d02ee3b51
target/arm: Implement new do_transaction_failed hook
Implement the new do_transaction_failed hook for ARM, which should
cause the CPU to take a prefetch abort or data abort.

Backports commit c79c0a314c43b78f6326d5f137bdbafdbf8e9766 from qemu
2018-03-04 21:29:05 -05:00
Peter Maydell 2070ef1c37
boards.h: Define new flag ignore_memory_transaction_failures
Define a new MachineClass field ignore_memory_transaction_failures.
If this flag is true then the CPU will ignore memory transaction
failures which should cause the CPU to take an exception due to an
access to an unassigned physical address; the transaction will
instead return zero (for a read) or be ignored (for a write). This
should be set only by legacy board models which rely on the old
RAZ/WI behaviour for handling devices that QEMU does not yet model.
New board models should instead use "unimplemented-device" for all
memory ranges where the guest will attempt to probe for a device that
QEMU doesn't implement and a stub device is required.

We need this for ARM boards, where we're about to implement support for
generating external aborts on memory transaction failures. Too many
of our legacy board models rely on the RAZ/WI behaviour and we
would break currently working guests when their "probe for device"
code provoked an external abort rather than a RAZ.

Backports commit ed860129acd3fcd0b1e47884e810212aaca4d21b from qemu
2018-03-04 21:27:15 -05:00
Peter Maydell 4b816fe0aa
target/arm: Implement BXNS, and banked stack pointers
Implement the BXNS v8M instruction, which is like BX but will do a
jump-and-switch-to-NonSecure if the branch target address has bit 0
clear.

This is the first piece of code which implements "switch to the
other security state", so the commit also includes the code to
switch the stack pointers around, which is the only complicated
part of switching security state.

BLXNS is more complicated than just "BXNS but set the link register",
so we leave it for a separate commit.

Backports commit fb602cb726b3ebdd01ef3b1732d74baf9fee7ec9 from qemu
2018-03-04 21:21:23 -05:00
Peter Maydell 221232fb35
target/arm: Move regime_is_secure() to target/arm/internals.h
Move the regime_is_secure() utility function to internals.h;
we are going to want to call it from translate.c.

Backports commit 61fcd69b0db268e7612b07fadc436b93def91768 from qemu
2018-03-04 21:14:05 -05:00
Peter Maydell 07b9144ef2
target/arm: Make CFSR register banked for v8M
Make the CFSR register banked if v8M security extensions are enabled.

Not all the bits in this register are banked: the BFSR
bits [15:8] are shared between S and NS, and we store them
in the NS copy of the register.

Backports commit 334e8dad7a109d15cb20b090131374ae98682a50 from qemu
2018-03-04 21:12:55 -05:00
Peter Maydell 74c66cc2a9
target/arm: Make MMFAR banked for v8M
Make the MMFAR register banked if v8M security extensions are
enabled.

Backports commit c51a5cfc9fae82099028eb12cb1d064ee07f348e from qemu
2018-03-04 21:10:47 -05:00
Peter Maydell 4b24f6d87b
target/arm: Make CCR register banked for v8M
Make the CCR register banked if v8M security extensions are enabled.

This is slightly more complicated than the other "add banking"
patches because there is one bit in the register which is not
banked. We keep the live data in the NS copy of the register,
and adjust it on register reads and writes. (Since we don't
currently implement the behaviour that the bit controls, there
is nowhere else that needs to care.)

This patch includes the enforcement of the bits which are newly
RES1 in ARMv8M.

Backports commit 9d40cd8a68cfc7606f4548cc9e812bab15c6dc28 from qemu
2018-03-04 21:09:34 -05:00
Peter Maydell f88f4b5e31
target/arm: Make MPU_CTRL register banked for v8M
Make the MPU_CTRL register banked if v8M security extensions are
enabled.

Backports commit ecf5e8eae8b0b5fa41f00b53d67747b42fd1b8b9 from qemu
2018-03-04 21:08:16 -05:00
Peter Maydell 683830d5ac
target/arm: Make MPU_RNR register banked for v8M
Make the MPU_RNR register banked if v8M security extensions are
enabled.

Backports commit 1bc04a8880374407c4b12d82ceb8752e12ff5336 from qemu
2018-03-04 21:06:01 -05:00
Peter Maydell 5e14b33c65
target/arm: Make MPU_RBAR, MPU_RLAR banked for v8M
Make the MPU registers MPU_RBAR and MPU_RLAR banked if v8M security
extensions are enabled.

We can freely add more items to vmstate_m_security without
breaking migration compatibility, because no CPU currently
has the ARM_FEATURE_M_SECURITY bit enabled and so this
subsection is not yet used by anything.

Backports commit 62c58ee0b24eafb44c06402fe059fbd7972eb409 from qemu
2018-03-04 21:04:41 -05:00
Peter Maydell 5b6e1e2150
target/arm: Make MPU_MAIR0, MPU_MAIR1 registers banked for v8M
Make the MPU registers MPU_MAIR0 and MPU_MAIR1 banked if v8M security
extensions are enabled.

Backports commit 4125e6feb71c810ca38f0d8e66e748b472a9cc54 from qemu
2018-03-04 21:02:51 -05:00
Peter Maydell 3e35eee327
target/arm: Make VTOR register banked for v8M
Make the VTOR register banked if v8M security extensions are enabled.

Backports commit 45db7ba681ede57113a67499840e69ee586bcdf2 from qemu
2018-03-04 21:01:51 -05:00
Peter Maydell 59c6845ada
target/arm: Make CONTROL register banked for v8M
Make the CONTROL register banked if v8M security extensions are enabled.

Backports commit 8bfc26ea302ec03585d7258a7cf8938f76512730 from qemu
2018-03-04 21:00:58 -05:00
Peter Maydell 14cb6925f3
target/arm: Make FAULTMASK register banked for v8M
Make the FAULTMASK register banked if v8M security extensions are enabled.

Note that we do not yet implement the functionality of the new
AIRCR.PRIS bit (which allows the effect of the NS copy of FAULTMASK to
be restricted).

This patch includes the code to determine for v8M which copy
of FAULTMASK should be updated on exception exit; further
changes will be required to the exception exit code in general
to support v8M, so this is just a small piece of that.

The v8M ARM ARM introduces a notation where individual paragraphs
are labelled with R (for rule) or I (for information) followed
by a random group of subscript letters. In comments where we want
to refer to a particular part of the manual we use this convention,
which should be more stable across document revisions than using
section or page numbers.

Backports commit 42a6686b2f6199d086a58edd7731faeb2dbe7c14 from qemu
2018-03-04 20:58:38 -05:00
Peter Maydell ff3f7811ce
target/arm: Make PRIMASK register banked for v8M
Make the PRIMASK register banked if v8M security extensions are enabled.

Note that we do not yet implement the functionality of the new
AIRCR.PRIS bit (which allows the effect of the NS copy of PRIMASK to
be restricted).

Backports commit 6d8048341995b31a77dc2e0dcaaf4e3df0e3121a from qemu
2018-03-04 20:55:49 -05:00
Peter Maydell c9a7aad4dc
target/arm: Make BASEPRI register banked for v8M
Make the BASEPRI register banked if v8M security extensions are enabled.

Note that we do not yet implement the functionality of the new
AIRCR.PRIS bit (which allows the effect of the NS copy of BASEPRI to
be restricted).

Backports commit acf949411ffb675edbfb707e235800b02e6a36f8 from qemu
2018-03-04 20:54:44 -05:00
Peter Maydell f4d155ad3a
target/arm: Add MMU indexes for secure v8M
Now that MPU lookups can return different results for v8M
when the CPU is in secure vs non-secure state, we need to
have separate MMU indexes; add the secure counterparts
to the existing three M profile MMU indexes.

Backports commit 66787c7868d05d29974e09201611b718c976f955 from qemu
2018-03-04 20:53:04 -05:00
Peter Maydell 13bad2c234
target/arm: Register second AddressSpace for secure v8M CPUs
If a v8M CPU supports the security extension then we need to
give it two AddressSpaces, the same way we do already for
an A profile core with EL3.

Backports commit 1d2091bc75ab7f9e2c43082f361a528a63c79527 from qemu
2018-03-04 20:51:00 -05:00
Peter Maydell 8ce42ad30c
target/arm: Add state field, feature bit and migration for v8M secure state
As the first step in implementing ARM v8M's security extension:
* add a new feature bit ARM_FEATURE_M_SECURITY
* add the CPU state field that indicates whether the CPU is
currently in the secure state
* add a migration subsection for this new state
(we will add the Secure copies of banked register state
to this subsection in later patches)
* add a #define for the one new-in-v8M exception type
* make the CPU debug log print S/NS status

Backports commit 1e577cc7cffd3de14dbd321de5c3ef191c6ab07f from qemu
2018-03-04 20:50:04 -05:00
Peter Maydell 829a34ec55
target/arm: Implement new PMSAv8 behaviour
Implement the behavioural side of the new PMSAv8 specification.

Backports commit 504e3cc36b68b34c176f3f4116b1d5677471ec20 from qemu
2018-03-04 20:47:54 -05:00
Peter Maydell 1acd9efdc2
target/arm: Implement ARMv8M's PMSAv8 registers
As part of ARMv8M, we need to add support for the PMSAv8 MPU
architecture.

PMSAv8 differs from PMSAv7 both in register/data layout (for instance
using base and limit registers rather than base and size) and also in
behaviour (for example it does not have subregions); rather than
trying to wedge it into the existing PMSAv7 code and data structures,
we define separate ones.

This commit adds the data structures which hold the state for a
PMSAv8 MPU and the register interface to it. The implementation of
the MPU behaviour will be added in a subsequent commit.

Backports commit 0e1a46bbd2d6c39614b87f4e88ea305acce8a35f from qemu
2018-03-04 20:45:49 -05:00
Richard Henderson 6d2bcf6ed8
target/arm: Perform per-insn cross-page check only for Thumb
ARM is a fixed-length ISA and we can compute the page crossing
condition exactly once during init_disas_context.

Backports commit d0264d86b026e9d948de577b05ff86d708658576 from qemu
2018-03-04 20:42:22 -05:00
Richard Henderson ab21785d3f
target/arm: Split out thumb_tr_translate_insn
We need not check for ARM vs Thumb state in order to dispatch
disassembly of every instruction.

Backports commit 722ef0a562a8cd810297b00516e36380e2f33353 from qemu
2018-03-04 20:41:07 -05:00
Richard Henderson 23d769c856
target/arm: Move ss check to init_disas_context
We can check for single-step just once.

Backports commit f7708456aac23a8bb8864b12bcf1f20c6e4b7045 from qemu
2018-03-04 20:34:33 -05:00
Richard Henderson dd36ec2bbf
target/arm: [a64] Move page and ss checks to init_disas_context
Since AArch64 uses a fixed-width ISA, we can pre-compute the number of
insns remaining on the page. Also, we can check for single-step once.

Backports commit dcc3a21209a8eeae0fe43966012f8e08d3566f98 from qemu
2018-03-04 20:32:45 -05:00
Lioncash 6586c88706
target/i386: Remove unnecessary unicorn hooking code in i386_tr_init_disas_context
This is all centralized in translator_loop now
2018-03-04 20:31:07 -05:00
Lluís Vilanova 74d437827b
target/arm: [tcg] Port to generic translation framework
Backports commit 2316922420da6fd0d1ffb5557d0cdcc5958bcf44 from qemu
2018-03-04 20:28:06 -05:00
Lluís Vilanova cc00feb2df
target/arm: [tcg,a64] Port to disas_log
Incrementally paves the way towards using the generic instruction translation
loop.

Backports commit 58350fa4b2852fede96cfebad0b26bf79bca419c from qemu
2018-03-04 20:09:39 -05:00
Lluís Vilanova 5d3ff533a1
target/arm: [tcg] Port to disas_log
Incrementally paves the way towards using the generic instruction translation
loop.

Backports commit 4013f7fc811e90b89da3a516dc71b01ca0e7e54e from qemu
2018-03-04 20:05:16 -05:00
Lluís Vilanova 7a02cb360c
target/arm: [tcg,a64] Port to tb_stop
Incrementally paves the way towards using the generic instruction translation
loop.

Backports commit be4079641f1bc755fc5d3ff194cf505c506227d8 from qemu
2018-03-04 20:02:45 -05:00
Lluís Vilanova d8def0cdb5
target/arm: [tcg] Port to tb_stop
Incrementally paves the way towards using the generic instruction translation
loop.

Backports commit 70d3c035ae36a2c5c0f991ba958526127c92bb67 from qemu
2018-03-04 20:02:32 -05:00
Lluís Vilanova 665192d96f
target/arm: [tcg,a64] Port to translate_insn
Incrementally paves the way towards using the generic instruction translation
loop.

Backports commit 24299c892cbfe29120f051b6b7d0bcf3e0cc8e85 from qemu
2018-03-04 19:47:54 -05:00
Lluís Vilanova 0c4909738d
target/arm: [tcg] Port to translate_insn
Incrementally paves the way towards using the generic instruction translation
loop.

Backports commit 13189a9080b35b13af23f2be4806fa0cdbb31af3 from qemu
2018-03-04 19:44:01 -05:00
Lluís Vilanova 7b89c4c813
target/arm: [tcg,a64] Port to breakpoint_check
Incrementally paves the way towards using the generic instruction translation
loop.

Backports commit 0cb56b373da70047979b61b042f59aaff4012e1b from qemu
2018-03-04 19:34:06 -05:00
Lluís Vilanova 67e0d99080
target/arm: [tcg,a64] Port to insn_start
Incrementally paves the way towards using the generic instruction translation
loop.

Backports commit a68956ad7f8510bdc0b54793c65c62c6a94570a4 from qemu
2018-03-04 19:31:22 -05:00
Lluís Vilanova b9df4e0ca0
target/arm: [tcg] Port to insn_start
Incrementally paves the way towards using the generic instruction translation
loop.

Backports commit f62bd897e64c6fb1f93e8795e835980516fe53b5 from qemu
2018-03-04 19:25:29 -05:00
Lluís Vilanova b3878f117e
target/arm: [tcg] Port to tb_start
Incrementally paves the way towards using the generic instruction translation
loop.

Backports commit b14768544fd715a3f1742c10fc36ae81c703cbc1 from qemu
2018-03-04 19:22:20 -05:00
Lluís Vilanova 529c6c17f1
target/arm: [tcg,a64] Port to init_disas_context
Incrementally paves the way towards using the generic instruction translation
loop.

Backports commit 5c03990665aa9095e4d2734c8ca0f936a8e8f000 from qemu
2018-03-04 19:17:09 -05:00
Lluís Vilanova 5e5c722359
target/arm: [tcg] Port to init_disas_context
Incrementally paves the way towards using the generic instruction translation
loop.

Backports commit 1d8a5535238fc5976e0542a413f4ad88f5d4b233 from qemu
2018-03-04 19:10:55 -05:00
Lluís Vilanova 8581e6f6fe
target/arm: [tcg] Port to DisasContextBase
Incrementally paves the way towards using the generic
instruction translation loop.

Backports commit dcba3a8d443842f7a30a2c52d50a6b50b6982b35 from qemu
2018-03-04 19:00:06 -05:00
Lluís Vilanova c40f5eb73e
target/i386: [tcg] Port to generic translation framework
Backports commit d2e6eedf5078d0f2ac17fc1a0d24f6be79c071d7 from qemu
2018-03-04 17:42:42 -05:00
Lluís Vilanova 579a23cfa0
target/i386: [tcg] Port to disas_log
Incrementally paves the way towards using the generic instruction translation
loop.

Backports commit e0d110d943891b719de7ca075fc17fa8ea5749b8 from qemu
2018-03-04 17:31:25 -05:00
Lluís Vilanova 75ddf81d2c
target/i386: [tcg] Port to tb_stop
Incrementally paves the way towards using the generic instruction translation
loop.

Backports commit 47e981b42553f00110024c33897354f9014e83e9 from qemu
2018-03-04 17:27:45 -05:00
Lluís Vilanova bea36e432c
target/i386: [tcg] Port to translate_insn
Incrementally paves the way towards using the generic instruction translation
loop.

Backports commit 2c2f8cacd8cf4f67d6f1384b19d38f9a0a25878b from qemu
2018-03-04 17:24:32 -05:00
Lluís Vilanova 5f020bdf07
target/i386: [tcg] Port to breakpoint_check
Incrementally paves the way towards using the generic instruction translation
loop.

Backports commit e6b41ec37f0a9742374dfdb90e662745969cd7ea from qemu
2018-03-04 17:19:43 -05:00
Lluís Vilanova e3ea2c0393
target/i386: [tcg] Port to breakpoint_check
Incrementally paves the way towards using the generic instruction translation
loop.

Backports commit e6b41ec37f0a9742374dfdb90e662745969cd7ea from qemu
2018-03-04 17:16:55 -05:00
Lluís Vilanova 1f0f1fb302
target/i386: [tcg] Port to insn_start
Incrementally paves the way towards using the generic instruction translation
loop.

Backports commit 9d75f52b34053066b8e8fc37610d5f300d67538b from qemu
2018-03-04 17:15:37 -05:00
Lluís Vilanova 8896a2887e
target/i386: [tcg] Port to init_disas_context
Incrementally paves the way towards using the generic instruction translation
loop.

Backports commit 9761d39b09c4beb1340bf3074be3d3e0a5d453a4 from qemu
2018-03-04 17:14:16 -05:00
Lluís Vilanova 4babc3ff64
target/i386: [tcg] Port to DisasContextBase
Incrementally paves the way towards using the generic instruction translation
loop.

Backports commit 6cf147aa299e49f7794858609a1e8ef19f81c007 from qemu
2018-03-04 14:48:29 -05:00
Lluís Vilanova ed7225e685
tcg: Add generic translation framework
Backports commit bb2e0039dc07177f928f9fe24758967da02d60a2 from qemu
2018-03-04 14:31:16 -05:00
Paolo Bonzini 6997a5a090
gen-icount: check cflags instead of use_icount global
Backports commit cd42d5b23691ad73edfd6dbcfc935a960a9c5a65 from qemu
2018-03-04 14:26:26 -05:00
Richard Henderson cbb20881a2
target/arm: Delay check for magic kernel page
There's nothing magic about the exception that we generate in order
to execute the magic kernel page. We can and should allow gdb to
set a breakpoint at this location.

Backports commit 3805c2eba8999049bbbea29fdcdea4d47d943c88 from qemu
2018-03-04 14:09:09 -05:00
Lluís Vilanova 3a196c62ae
target: [tcg] Use a generic enum for DISAS_ values
Used later. An enum makes expected values explicit and
bounds the value space of switches.

Backports commit 77fc6f5e28667634916f114ae04c6029cd7b9c45 from qemu
2018-03-04 14:08:43 -05:00
Richard Henderson 4a5b1aec34
target/arm: Use DISAS_NORETURN
Fold DISAS_EXC and DISAS_TB_JUMP into DISAS_NORETURN.

In both cases all following code is dead. In the first
case because we have exited the TB via exception; in the
second case because we have exited the TB via goto_tb
and its associated machinery.

Backports commit a0c231e651b249960906f250b8e5eef5ed9888c4 from qemu
2018-03-04 13:57:18 -05:00
Richard Henderson b7ba55a5b5
target/i386: Use generic DISAS_* enumerators
This target is not sophisticated in its use of cleanups at the
end of the translation loop. For the most part, any condition
that exits the TB is dealt with by emitting the exiting opcode
right then and there. Therefore the only is_jmp indicator that
is needed is DISAS_NORETURN.

For two stack segment modifying cases, we have not yet exited
the TB (therefore DISAS_NORETURN feels wrong), but intend to exit.
The caller of gen_movl_seg_T0 currently checks for any non-zero
value, therefore DISAS_TOO_MANY seems acceptable for that usage.

Backports commit 1e39d97af086d525cd0408eaa5d19783ea165906 from qemu
2018-03-04 13:52:03 -05:00
Richard Henderson b8a16f841a
tcg: Add generic DISAS_NORETURN
This will allow some amount of cleanup to happen before
switching the backends over to enum DisasJumpType.

Backports commit 5dc66895b0113034cd37fd5e65911d7959fc26a9 from qemu
2018-03-04 13:49:18 -05:00
Richard Henderson 1642f7d404
tcg/s390: Use slbgr for setcond le and leu
Backports commit 4609190b5f7f68a5e2a8738029594f45a062d4c9 from qemu
2018-03-04 13:48:42 -05:00
Richard Henderson 83e703d2bd
tcg/s390: Use load-on-condition-2 facility
This allows LOAD HALFWORD IMMEDIATE ON CONDITION,
eliminating one insn in some common cases.

Backports commit 7af525af01b9615c4f4df5da2e8a50f2fe00b023 from qemu
2018-03-04 13:46:06 -05:00
Richard Henderson d87e7126c3
tcg/s390: Use distinct-operands facility
This allows using a 3-operand insn form for some arithmetic,
logicals and shifts.

Backports commit c2097136ad6e3f476fd177fc3d2e48fa6bffacfd from qemu
2018-03-04 13:42:56 -05:00
Richard Henderson 3df9d84459
tcg/s390: Merge ori+xori facilities check to tcg_target_op_def
Backports commit e42349cbd6afd1f6838e719184e3d07190c02de7 from qemu
2018-03-04 13:36:20 -05:00
Richard Henderson becadbe755
tcg/s390: Merge add2i facilities check to tcg_target_op_def
Backports commit ba18b07dc689a21caa31feee922c165e90b4c28b from qemu
2018-03-04 13:34:16 -05:00
Richard Henderson a1b4fa71cf
tcg/s390: Merge muli facilities check to tcg_target_op_def
Backports commit a8f0269e9edde143d831b4a016b1e86c1f175123 from qemu
2018-03-04 13:32:29 -05:00
Richard Henderson 168ebcce61
tcg/s390: Merge cmpi facilities check to tcg_target_op_def
Backports commit 07952d9570add4c78594b46605825408d956b2ad from qemu
2018-03-04 13:30:57 -05:00
Richard Henderson 9a29afcb50
tcg/s390: Fully convert tcg_target_op_def
Use a switch instead of searching a table.

Backports commit 9b5500b697b61460f433f0e3a30619ace2c32ca6 from qemu
2018-03-04 13:28:01 -05:00
Pranith Kumar 902886cc45
tcg: Implement implicit ordering semantics
Currently, we cannot use mttcg for running strong memory model guests
on weak memory model hosts due to missing ordering semantics.

We implicitly generate fence instructions for stronger guests if an
ordering mismatch is detected. We generate fences only for the orderings
for which fence instructions are actually necessary; for example, a fence
is not needed between a store and a subsequent load on x86, since the
absence of a fence in the guest binary indicates that ordering need not
be ensured. Also note that if we find multiple consecutive fence
instructions in the generated IR, we combine them in the TCG
optimization pass.

This patch allows us to boot an x86 guest on ARM64 hosts using mttcg.

Backports commit b32dc3370a666e237b2099c22166b15e58cb6df8 from qemu
2018-03-04 13:24:27 -05:00
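
A minimal sketch of the idea, not the committed patch: the helper name below is invented, while TCG_GUEST_DEFAULT_MO, TCG_TARGET_DEFAULT_MO and tcg_gen_mb() are existing TCG names. Before a guest memory access, a barrier is emitted only for orderings the guest requires but the host does not already provide.

    static void maybe_gen_implicit_fence(TCGBar required)
    {
        required &= TCG_GUEST_DEFAULT_MO;    /* orderings the guest ISA guarantees */
        required &= ~TCG_TARGET_DEFAULT_MO;  /* minus what the host ISA gives for free */
        if (required) {
            tcg_gen_mb(required | TCG_BAR_SC);
        }
    }
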
Pranith Kumar 862bbef07d
tcg: Add tcg target default memory ordering
Backports commit 71650df7b0ee0600308810a267a123b971b3d533 from qemu
2018-03-04 13:22:41 -05:00
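
The per-backend default is a compile-time mask; plausible values, sketched as assumptions based on the respective ISAs' memory models, look like:

    /* tcg/i386/tcg-target.h: x86 orders everything except store->load */
    #define TCG_TARGET_DEFAULT_MO  (TCG_MO_ALL & ~TCG_MO_ST_LD)

    /* tcg/aarch64/tcg-target.h: a weakly ordered host guarantees nothing */
    #define TCG_TARGET_DEFAULT_MO  (0)
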
Peter Maydell b9f06be41d
target/arm: Allow deliver_fault() caller to specify EA bit
For external aborts, we will want to be able to specify the EA
(external abort type) bit in the syndrome field. Allow callers of
deliver_fault() to do that by adding a field to ARMMMUFaultInfo which
we use when constructing the syndrome values.

Backports commit c528af7aa64f159eb30b46e567b650c5440fc117 from qemu
2018-03-04 13:20:23 -05:00
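
Roughly, the change adds a flag to the fault-info struct that deliver_fault() then folds into the syndrome; a sketch, where the surrounding field names are assumptions about the pre-existing struct:

    typedef struct ARMMMUFaultInfo {
        target_ulong s2addr;
        bool stage2;
        bool s1ptw;
        bool ea;    /* external abort type: copied into the syndrome EA bit */
    } ARMMMUFaultInfo;
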
Peter Maydell 320655293a
target/arm: Factor out fault delivery code
We currently have some similar code in tlb_fill() and in
arm_cpu_do_unaligned_access() for delivering a data abort or prefetch
abort. We're also going to want to do the same thing to handle
external aborts. Factor out the common code into a new function
deliver_fault().

Backports commit aac43da1d772a50778ab1252c13c08c2eb31fb39 from qemu
2018-03-04 13:18:31 -05:00
Peter Maydell b1bff4d5c3
cputlb: Support generating CPU exceptions on memory transaction failures
Call the new cpu_transaction_failed() hook at the places where
CPU-generated code interacts with the memory system:
io_readx()
io_writex()
get_page_addr_code()

Any access from C code (eg via cpu_physical_memory_rw(),
address_space_rw(), ld/st_*_phys()) will *not* trigger CPU exceptions
via cpu_transaction_failed(). Handling for transactions failures for
this kind of call should be done by using a function which returns a
MemTxResult and treating the failure case appropriately in the
calling code.

In an ideal world we would not generate CPU exceptions for
instruction fetch failures in get_page_addr_code() but instead wait
until the code translation process tried a load and it failed;
however that change would require too great a restructuring and
redesign to attempt at this point.

Backports commit 04e3aabde397e7abc78ba1ce6cbd144d5fbb1722 from qemu
2018-03-04 13:14:50 -05:00
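
The shape of the hook-up in io_readx(), sketched from the description above; the variable names and exact signatures are assumptions about the surrounding cputlb code:

    MemTxResult r = memory_region_dispatch_read(mr, physaddr, &val, size,
                                                iotlbentry->attrs);
    if (r != MEMTX_OK) {
        cpu_transaction_failed(cpu, physaddr, addr, size, MMU_DATA_LOAD,
                               mmu_idx, iotlbentry->attrs, r, retaddr);
    }
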
Peter Maydell c44d323359
cpu: Define new cpu_transaction_failed() hook
Currently we have a rather half-baked setup for allowing CPUs to
generate exceptions on accesses to invalid memory: the CPU has a
cpu_unassigned_access() hook which the memory system calls in
unassigned_mem_write() and unassigned_mem_read() if the current_cpu
pointer is non-NULL. This was originally designed before we
implemented the MemTxResult type that allows memory operations to
report a success or failure code, which is why the hook is called
right at the bottom of the memory system. The major problem with
this is that it means that the hook can be called even when the
access was not actually done by the CPU: for instance if the CPU
writes to a DMA engine register which causes the DMA engine to begin
a transaction which has been set up by the guest to operate on
invalid memory, then this will cause the CPU to take an exception
incorrectly. Another minor problem is that currently if a device
returns a transaction error then this won't turn into a CPU exception
at all.

The right way to do this is to allow the CPU to respond
to memory system transaction failures at the point where the
CPU-specific code calls into the memory system.

Define a new QOM CPU method and utility function
cpu_transaction_failed() which is called in these cases.
The functionality here overlaps with the existing
cpu_unassigned_access() because individual target CPUs will
need some work to convert them to the new system. When this
transition is complete we can remove the old cpu_unassigned_access()
code.

Backports commit 0dff0939f6fc6a7abd966d4295f06a06d7a01df9 from qemu
2018-03-04 13:11:50 -05:00
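
A sketch of what the utility function looks like, assuming the hook lives in CPUClass as do_transaction_failed:

    static inline void cpu_transaction_failed(CPUState *cpu, hwaddr physaddr,
                                              vaddr addr, unsigned size,
                                              MMUAccessType access_type,
                                              int mmu_idx, MemTxAttrs attrs,
                                              MemTxResult response,
                                              uintptr_t retaddr)
    {
        CPUClass *cc = CPU_GET_CLASS(cpu);

        if (cc->do_transaction_failed) {
            cc->do_transaction_failed(cpu, physaddr, addr, size, access_type,
                                      mmu_idx, attrs, response, retaddr);
        }
    }
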
Peter Maydell 26c8f31d9e
memory.h: Move MemTxResult type to memattrs.h
Move the MemTxResult type to memattrs.h. We're going to want to
use it in cpu/qom.h, which doesn't want to include all of
memory.h. In practice MemTxResult and MemTxAttrs are pretty
closely linked since both are used for the new-style
read_with_attrs and write_with_attrs callbacks, so memattrs.h
is a reasonable home for this rather than creating a whole
new header file for it.

Backports commit 3114d092b1740f9db9aa559aeb48ee387011e1da from qemu
2018-03-04 13:10:47 -05:00
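
What moves is essentially the result type and its bit values; sketched from memory, so verify against the header:

    /* memattrs.h */
    #define MEMTX_OK            0
    #define MEMTX_ERROR         (1U << 0)   /* device returned an error */
    #define MEMTX_DECODE_ERROR  (1U << 1)   /* nothing at that address */
    typedef uint32_t MemTxResult;
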
Peter Maydell 06619904c6
target/arm: Create and use new function arm_v7m_is_handler_mode()
Add a utility function for testing whether the CPU is in Handler
mode; this is just a check of whether v7m.exception is non-zero, but
we do it in several places, and the code is a bit easier to read when
we don't have to mentally work out what the test is checking.

Backports commit 15b3f556bab4f961bf92141eb8521c8da3df5eb2 from qemu
2018-03-04 13:06:45 -05:00
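
Given the description, the utility function is essentially a one-liner; a sketch:

    static inline bool arm_v7m_is_handler_mode(CPUARMState *env)
    {
        /* Handler mode is indicated by a non-zero active exception number */
        return env->v7m.exception != 0;
    }
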
Peter Maydell a897ee919b
target-arm: v7M: ignore writes to CONTROL.SPSEL from Thread mode
For v7M, writes to the CONTROL register are only permitted for
privileged code. However even if the code is privileged, the
write must not affect the SPSEL bit in the CONTROL register
if the CPU is in Thread mode (as documented in the pseudocode
for the MSR instruction). Implement this, instead of permitting
SPSEL to be written in all cases.

This was causing mbed applications not to run, because the
RTX RTOS they use relies on this behaviour.

Backports commit 792dac309c8660306557ba058b8b5a6a75ab3c1f from qemu
2018-03-04 13:04:20 -05:00
Peter Maydell 4ae080e27f
target/arm: Don't calculate lr in arm_v7m_cpu_do_interrupt() until needed
Move the code in arm_v7m_cpu_do_interrupt() that calculates the
magic LR value down to when we're actually going to use it.
Having the calculation and use so far apart makes the code
a little harder to understand than it needs to be.

Backports commit bd70b29ba92e4446f9e4eb8b9acc19ef6ff4a4d5 from qemu
2018-03-04 12:59:38 -05:00
Peter Maydell 75f8224d13
target/arm: Make arm_cpu_dump_state() handle the M-profile XPSR
Make the arm_cpu_dump_state() debug logging handle the M-profile XPSR
rather than assuming it's an A-profile CPSR. On M profile the PSR
line of a register dump will now look like this:

XPSR=41000000 -Z-- T priv-thread

Backports commit 5b906f3589443a3c69d8feeaac37263843ecfb8d from qemu
2018-03-04 12:58:56 -05:00
Peter Maydell 9056a93c9a
target/arm: Don't store M profile PRIMASK and FAULTMASK in daif
We currently store the M profile CPU register state PRIMASK and
FAULTMASK in the daif field of the CPU state in its I and F
bits. This is a legacy from the original implementation, which
tried to share the cpu_exec_interrupt code between A profile
and M profile. We've since separated out the two cases because
they are significantly different, so now there is no common
code between M and A profile which looks at env->daif: all the
uses are either in A-only or M-only code paths. Sharing the state
fields now is just confusing, and will make things awkward
when we implement v8M, where the PRIMASK and FAULTMASK
registers are banked between security states.

Switch M profile over to using v7m.faultmask and v7m.primask
fields for these registers.

Backports commit e6ae5981ea4b0f6feb223009a5108582e7644f8f from qemu
2018-03-04 12:56:29 -05:00
Peter Maydell 5d6b031550
target/arm: Define and use XPSR bit masks
The M profile XPSR is almost the same format as the A profile CPSR,
but not quite. Define some XPSR_* macros and use them where we are
definitely dealing with an XPSR rather than reusing the CPSR ones.

Backports commit 987ab45e108953c1c98126c338c2119c243c372b from qemu
2018-03-04 12:54:41 -05:00
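
A few representative XPSR_* masks, sketched from the M-profile XPSR layout (treat the exact list as an assumption):

    #define XPSR_EXCP   0x1ffU          /* active exception number */
    #define XPSR_T      (1U << 24)      /* Thumb state (always 1 on M profile) */
    #define XPSR_Q      (1U << 27)
    #define XPSR_V      (1U << 28)
    #define XPSR_C      (1U << 29)
    #define XPSR_Z      (1U << 30)
    #define XPSR_N      (1U << 31)
    #define XPSR_NZCV   (XPSR_N | XPSR_Z | XPSR_C | XPSR_V)
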
Peter Maydell 64c6727e4a
target/arm: Fix outdated comment about exception exit
When we switched our handling of exception exit to detect
the magic addresses at translate time rather than via
a do_unassigned_access hook, we forgot to update a
comment; correct the omission.

Backports commit 9d17da4b68a05fc78daa47f0f3d914eea5d802ea from qemu
2018-03-04 12:52:34 -05:00
Peter Maydell 219b3e8a08
target/arm: Remove incorrect comment about MPU_CTRL
Remove the comment that claims that some MPU_CTRL bits are stored
in sctlr_el[1]. This has never been true since MPU_CTRL was added
in commit 29c483a50607 -- the comment is a leftover from
Michael Davidsaver's original implementation, which I modified
not to use sctlr_el[1]; I forgot to delete the comment then.

Backports commit 59e4972c3fc63d981e8b613ebb3bb01a05848075 from qemu
2018-03-04 12:52:02 -05:00
Peter Maydell 108cff5e61
target/arm: Tighten up Thumb decode where new v8M insns will be
Tighten up the T32 decoder in the places where new v8M instructions
will be:
* TT/TTT/TTA/TTAT are in what was nominally LDREX/STREX r15, ...,
  which is UNPREDICTABLE:
  make the UNPREDICTABLE behaviour be to UNDEF
* BXNS/BLXNS are distinguished from BX/BLX via the low 3 bits,
  which in previous architectural versions are SBZ:
  enforce the SBZ via UNDEF rather than ignoring it, and move
  the "ARCH(5)" UNDEF case up so we don't leak a TCG temporary
* SG is in the encoding which would be LDRD/STRD with rn = r15;
  this is UNPREDICTABLE and we currently UNDEF:
  move this check further up the code so that we don't leak
  TCG temporaries in the UNDEF case and have a better place
  to put the SG decode.

This means that if a v8M binary is accidentally run on v7M,
or if a test case hits something that we haven't implemented
yet, the behaviour will be obvious (UNDEF) rather than obscure
(plough on treating it as a different instruction).

In the process, add some comments about the instruction patterns
at these points in the decode. Our Thumb and ARM decoders are
very difficult to understand currently, but gradually adding
comments like this should help to clarify what exactly has
been decoded when.

Backports commit ebfe27c593e5b222aa2a1fc545b447be3d995faa from qemu
2018-03-04 12:51:08 -05:00
Peter Maydell 6f4afe1a13
target/arm: Consolidate PMSA handling in get_phys_addr()
Currently get_phys_addr() has PMSAv7 handling before the
"is translation disabled?" check, and then PMSAv5 after it.
Tidy this up by making the PMSAv5 code handle the "MPU disabled"
case itself, so that we have all the PMSA code in one place.
This will make adding the PMSAv8 code slightly cleaner, and
also means that pre-v7 PMSA cores benefit from the MPU lookup
logging that the PMSAv7 codepath had.

Backports commit 3279adb95e34dd3d67c66d729458f7784747cf8d from qemu
2018-03-04 12:48:22 -05:00
Peter Maydell f85f301316
target/arm: Don't trap WFI/WFE for M profile
M profile cores can never trap on WFI or WFE instructions. Check for
M profile in check_wfx_trap() to ensure this.

The existing code will do the right thing for v7M cores because
the hcr_el2 and scr_el3 registers will be all-zeroes and so we
won't attempt to trap, but when we start setting ARM_FEATURE_V8
for v8M cores the v8A handling of SCTLR.nTWE and .nTWI will not
give the right results.

Backports commit 0e2845689ebdb4ea7174f96f6797e2d8942bd114 from qemu
2018-03-04 12:46:37 -05:00
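
The check itself is a simple early-out at the top of check_wfx_trap(); a sketch of the added lines:

    /* M profile cores can never trap WFI/WFE. */
    if (arm_feature(env, ARM_FEATURE_M)) {
        return 0;
    }
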
Peter Maydell 2c9a196efe
target/arm: Use MMUAccessType enum rather than int
In the ARM get_phys_addr() code, switch to using the MMUAccessType
enum and its MMU_* values rather than int and literal 0/1/2.

Backports commit 03ae85f858fc46495258a5dd4551fff2c34bd495 from qemu
2018-03-04 12:45:56 -05:00
Brijesh Singh b9c18f22cd
target-i386/cpu: Add new EPYC CPU model
Add a new base CPU model called 'EPYC' to model processors from AMD EPYC
family (which includes EPYC 76xx, 75xx, 74xx, 73xx and 72xx).

The following feature bits have been added/removed compared to Opteron_G5:

Added: monitor, movbe, rdrand, mmxext, ffxsr, rdtscp, cr8legacy, osvw,
fsgsbase, bmi1, avx2, smep, bmi2, rdseed, adx, smap, clflushopt, sha,
xsaveopt, xsavec, xgetbv1, arat

Removed: xop, fma4, tbm

Backports commit 2e2efc7dbe2b0adc1200b5aa286cdbed729f6751 from qemu
2018-03-04 12:22:27 -05:00
Eduardo Habkost 382022929e
cpu: cpu_by_arch_id() helper
The helper can be used for CPU object lookup using the CPU's
arch-specific ID (the one returned by CPUClass::get_arch_id()).

Backports commit 5ce46cb34eecec0bc94a4b1394763f9a1bbe20c3 from qemu
2018-03-04 12:16:39 -05:00
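
A sketch of what such a helper looks like, assuming the usual CPU_FOREACH iteration; the backported implementation may differ in detail:

    CPUState *cpu_by_arch_id(int64_t id)
    {
        CPUState *cpu;

        CPU_FOREACH(cpu) {
            CPUClass *cc = CPU_GET_CLASS(cpu);

            if (cc->get_arch_id(cpu) == id) {
                return cpu;
            }
        }
        return NULL;
    }
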
Alexey Kardashevskiy 75afdffa45
memory: Move FlatView allocation to a helper
This moves FlatView allocation and initialization to a helper.
While we are here, replace g_new with g_new0 so we do not need to worry
if we add new fields in the future.

This should cause no behavioural change.

Backports commit de7e6815b84c797cbda56dc96fcacaf5f37d3a20 from qemu
2018-03-04 02:08:37 -05:00
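
The helper is small; a sketch under the assumption that flatview_init() already exists:

    static FlatView *flatview_new(void)
    {
        FlatView *view = g_new0(FlatView, 1);
        flatview_init(view);
        return view;
    }
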
Alexey Kardashevskiy e723b8dd49
memory: Open code FlatView rendering
We are going to share FlatViews between AddressSpaces, and per-AS
memory listeners won't suit the purpose anymore, so open code
the dispatch tree rendering.

Since there is a good chance that dispatch_listener was the only
listener, this avoids address_space_update_topology_pass() if there are
no registered listeners; this should improve startup time.

This should cause no behavioural change.

Backports commit 1b04a1580917d9e41fd37ca62cbff9b4bf061e96 from qemu
2018-03-04 02:06:48 -05:00
Alexey Kardashevskiy f74fcb194f
exec: Explicitly export target AS from address_space_translate_internal
This adds an AS** parameter to address_space_do_translate()
to make it easier for the next patch to share FlatViews.

This should cause no behavioural change.

Backports commit 6424975ce912061ac9e4a375237b0c89d83d93e3 from qemu
2018-03-04 01:56:13 -05:00
Eric Blake be742759b0
osdep: Fix ROUND_UP(64-bit, 32-bit)
When using bit-wise operations that exploit the power-of-two
nature of the second argument of ROUND_UP(), we still need to
ensure that the mask is as wide as the first argument (done
by using a ternary to force proper arithmetic promotion).
Unpatched, ROUND_UP(2ULL*1024*1024*1024*1024, 512U) produces 0,
instead of the intended 2TiB, because negation of an unsigned
32-bit quantity followed by widening to 64-bits does not
sign-extend the mask.

Broken since its introduction in commit 292c8e50 (v1.5.0).
Callers that passed the same width type to both macro parameters,
or that had other code to ensure the first parameter's maximum
runtime value did not exceed the second parameter's width, are
unaffected, but I did not audit to see which (if any) existing
clients of the macro could trigger incorrect behavior (I found
the bug while adding a new use of the macro).

While preparing the patch, checkpatch complained about poor
spacing, so I also fixed that here and in the nearby DIV_ROUND_UP.

Backports commit 33a599667a9e70588483a31286dfff8cfc27d513 from qemu
2018-03-04 01:54:09 -05:00
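
The fix hinges on using a ternary so that negation happens in the common type of both arguments; a sketch of the corrected macro and the case from the message:

    /* The "0 ? (n) : (d)" expression has the value of (d) but the promoted
     * type of (n) and (d) combined, so -(...) produces a mask as wide as (n). */
    #define ROUND_UP(n, d)  (((n) + (d) - 1) & -(0 ? (n) : (d)))

    /* ROUND_UP(2ULL * 1024 * 1024 * 1024 * 1024, 512U) now yields 2TiB
     * instead of 0. */
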
Alistair Francis 5d742aad0b
target/arm: Require alignment for load exclusive
According to the ARM ARM, exclusive loads require the same alignment as
exclusive stores. Let's update the memops used for the load to match
those of the store. This adds the alignment requirement to the memops.

Backports commit 4a2fdb78e794c1ad93aa9e160235d6a61a2125de from qemu
2018-03-04 01:53:04 -05:00
Richard Henderson 4a8f556c29
target/arm: Correct load exclusive pair atomicity
We are not providing the required single-copy atomic semantics for
the 64-bit operation that is the 32-bit paired load.

At the same time, leave the entire 64-bit value in cpu_exclusive_val
and stop writing to cpu_exclusive_high. This means that we do not
have to re-assemble the 64-bit quantity when it comes time to store.

At the same time, drop a redundant temporary and perform all loads
directly into the cpu_exclusive_* globals.

Backports commit 19514cde3b92938df750acaecf2caaa85e1d36a6 from qemu
2018-03-04 01:49:35 -05:00
Alistair Francis 009a52dd13
target/arm: Correct exclusive store cmpxchg memop mask
When we perform the atomic_cmpxchg operation we want to operate on a
pair of 32-bit registers. Previously we were just passing in the
register size, which was set to MO_32. This would result in the high
register being ignored. To fix this issue we hardcode the size to
64 bits when operating on 32-bit pairs.

Backports commit 955fd0ad5d610f62ba2f4ce46a872bf50434dcf8 from qemu
2018-03-04 01:43:55 -05:00