Commit graph

4386 commits

Richard Henderson cbb20881a2
target/arm: Delay check for magic kernel page
There's nothing magic about the exception that we generate in order
to execute the magic kernel page. We can and should allow gdb to
set a breakpoint at this location.

Backports commit 3805c2eba8999049bbbea29fdcdea4d47d943c88 from qemu
2018-03-04 14:09:09 -05:00
Lluís Vilanova 3a196c62ae
target: [tcg] Use a generic enum for DISAS_ values
Used later. An enum makes expected values explicit and
bounds the value space of switches.
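
As a rough illustration of the point, here is a self-contained sketch (not QEMU's real definitions, though the names follow the DISAS_* convention) of a translation loop whose stop conditions are bounded by such an enum:

    #include <stdio.h>

    /* Stand-in for the generic DISAS_* enum: zero means "keep going",
     * anything non-zero names a reason translation stopped. */
    typedef enum DisasJumpType {
        DISAS_NEXT,      /* continue with the next instruction */
        DISAS_TOO_MANY,  /* stop: PC is up to date, emit a normal exit */
        DISAS_NORETURN,  /* stop: an exit has already been emitted */
    } DisasJumpType;

    static void translate_block(const int *insns, int n)
    {
        DisasJumpType is_jmp = DISAS_NEXT;
        for (int i = 0; i < n && is_jmp == DISAS_NEXT; i++) {
            printf("translate insn %d\n", insns[i]);
            if (insns[i] < 0) {          /* pretend: an exiting opcode */
                is_jmp = DISAS_NORETURN;
            }
        }
        /* A switch over the enum is explicit and bounded, unlike a bag
         * of per-target integer constants. */
        switch (is_jmp) {
        case DISAS_NORETURN:
            break;                       /* exit already emitted */
        default:
            printf("emit exit-TB epilogue\n");
            break;
        }
    }

    int main(void)
    {
        const int tb[] = { 1, 2, -3, 4 };  /* the final insn is dead code */
        translate_block(tb, 4);
        return 0;
    }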

Backports commit 77fc6f5e28667634916f114ae04c6029cd7b9c45 from qemu
2018-03-04 14:08:43 -05:00
Richard Henderson 4a5b1aec34
target/arm: Use DISAS_NORETURN
Fold DISAS_EXC and DISAS_TB_JUMP into DISAS_NORETURN.

In both cases all following code is dead. In the first
case because we have exited the TB via exception; in the
second case because we have exited the TB via goto_tb
and its associated machinery.

Backports commit a0c231e651b249960906f250b8e5eef5ed9888c4 from qemu
2018-03-04 13:57:18 -05:00
Richard Henderson b7ba55a5b5
target/i386: Use generic DISAS_* enumerators
This target is not sophisticated in its use of cleanups at the
end of the translation loop. For the most part, any condition
that exits the TB is dealt with by emitting the exiting opcode
right then and there. Therefore the only is_jmp indicator that
is needed is DISAS_NORETURN.

For two stack segment modifying cases, we have not yet exited
the TB (therefore DISAS_NORETURN feels wrong), but intend to exit.
The caller of gen_movl_seg_T0 currently checks for any non-zero
value, therefore DISAS_TOO_MANY seems acceptable for that usage.

Backports commit 1e39d97af086d525cd0408eaa5d19783ea165906 from qemu
2018-03-04 13:52:03 -05:00
Richard Henderson b8a16f841a
tcg: Add generic DISAS_NORETURN
This will allow some amount of cleanup to happen before
switching the backends over to enum DisasJumpType.

Backports commit 5dc66895b0113034cd37fd5e65911d7959fc26a9 from qemu
2018-03-04 13:49:18 -05:00
Richard Henderson 1642f7d404
tcg/s390: Use slbgr for setcond le and leu
Backports commit 4609190b5f7f68a5e2a8738029594f45a062d4c9 from qemu
2018-03-04 13:48:42 -05:00
Richard Henderson 83e703d2bd
tcg/s390: Use load-on-condition-2 facility
This allows LOAD HALFWORD IMMEDIATE ON CONDITION,
eliminating one insn in some common cases.

Backports commit 7af525af01b9615c4f4df5da2e8a50f2fe00b023 from qemu
2018-03-04 13:46:06 -05:00
Richard Henderson d87e7126c3
tcg/s390: Use distinct-operands facility
This allows using a 3-operand insn form for some arithmetic,
logicals and shifts.

Backports commit c2097136ad6e3f476fd177fc3d2e48fa6bffacfd from qemu
2018-03-04 13:42:56 -05:00
Richard Henderson 3df9d84459
tcg/s390: Merge ori+xori facilities check to tcg_target_op_def
Backports commit e42349cbd6afd1f6838e719184e3d07190c02de7 from qemu
2018-03-04 13:36:20 -05:00
Richard Henderson becadbe755
tcg/s390: Merge add2i facilities check to tcg_target_op_def
Backports commit ba18b07dc689a21caa31feee922c165e90b4c28b from qemu
2018-03-04 13:34:16 -05:00
Richard Henderson a1b4fa71cf
tcg/s390: Merge muli facilities check to tcg_target_op_def
Backports commit a8f0269e9edde143d831b4a016b1e86c1f175123 from qemu
2018-03-04 13:32:29 -05:00
Richard Henderson 168ebcce61
tcg/s390: Merge cmpi facilities check to tcg_target_op_def
Backports commit 07952d9570add4c78594b46605825408d956b2ad from qemu
2018-03-04 13:30:57 -05:00
Richard Henderson 9a29afcb50
tcg/s390: Fully convert tcg_target_op_def
Use a switch instead of searching a table.

Backports commit 9b5500b697b61460f433f0e3a30619ace2c32ca6 from qemu
2018-03-04 13:28:01 -05:00
Pranith Kumar 902886cc45
tcg: Implement implicit ordering semantics
Currently, we cannot use mttcg for running strong memory model guests
on weak memory model hosts due to missing ordering semantics.

We implicitly generate fence instructions for stronger guests if an
ordering mismatch is detected. We generate fences only for the orders
for which fence instructions are necessary, for example a fence is not
necessary between a store and a subsequent load on x86 since its
absence in the guest binary tells us that ordering need not be
ensured. Also note that if we find multiple subsequent fence
instructions in the generated IR, we combine them in the TCG
optimization pass.

This patch allows us to boot an x86 guest on ARM64 hosts using mttcg.
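
The decision rule reduces to a few lines. A self-contained model (flag names invented for illustration; QEMU expresses this with its TCG_MO_* barrier flags): emit a fence only for the orderings the guest's memory model promises but the host's does not.

    #include <stdio.h>

    /* Ordering requirement bits, in the spirit of QEMU's TCG_MO_* flags
     * (names assumed here for illustration). */
    enum {
        MO_LD_LD = 1,  /* earlier load ordered before later load */
        MO_LD_ST = 2,  /* earlier load ordered before later store */
        MO_ST_LD = 4,  /* earlier store ordered before later load */
        MO_ST_ST = 8,  /* earlier store ordered before later store */
    };

    /* Emit a fence only for orderings the guest's memory model promises
     * but the host's hardware model does not already provide. */
    static void maybe_gen_fence(unsigned guest_mo, unsigned host_mo)
    {
        unsigned missing = guest_mo & ~host_mo;
        if (missing) {
            printf("emit fence covering bits 0x%x\n", missing);
        } else {
            printf("no fence needed\n");
        }
    }

    int main(void)
    {
        /* x86 (TSO) promises everything except store->load ordering;
         * ARM64 hosts promise none of these implicitly. */
        unsigned x86 = MO_LD_LD | MO_LD_ST | MO_ST_ST;
        maybe_gen_fence(x86, 0);    /* x86 guest on ARM64 host: fence */
        maybe_gen_fence(x86, x86);  /* x86 guest on x86 host: nothing */
        return 0;
    }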

Backports commit b32dc3370a666e237b2099c22166b15e58cb6df8 from qemu
2018-03-04 13:24:27 -05:00
Pranith Kumar 862bbef07d
tcg: Add tcg target default memory ordering
Backports commit 71650df7b0ee0600308810a267a123b971b3d533 from qemu
2018-03-04 13:22:41 -05:00
Peter Maydell b9f06be41d
target/arm: Allow deliver_fault() caller to specify EA bit
For external aborts, we will want to be able to specify the EA
(external abort type) bit in the syndrome field. Allow callers of
deliver_fault() to do that by adding a field to ARMMMUFaultInfo which
we use when constructing the syndrome values.
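
A minimal sketch of the shape this implies, with only the relevant field modelled (struct and function names follow the commit text; EA is bit 9 of the abort ISS in the ARM ARM's syndrome encoding):

    #include <stdbool.h>
    #include <stdint.h>

    /* Slice of ARMMMUFaultInfo relevant here (other fields omitted). */
    typedef struct ARMMMUFaultInfo {
        bool ea;   /* external abort type, copied into the syndrome */
    } ARMMMUFaultInfo;

    /* EA sits at bit 9 of the ISS for data/prefetch aborts;
     * fsc is the fault status code in the low bits. */
    static uint32_t syn_with_ea(uint32_t fsc, const ARMMMUFaultInfo *fi)
    {
        return ((uint32_t)fi->ea << 9) | (fsc & 0x3f);
    }

    int main(void)
    {
        ARMMMUFaultInfo fi = { .ea = true };
        return syn_with_ea(0x10, &fi) == ((1u << 9) | 0x10) ? 0 : 1;
    }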

Backports commit c528af7aa64f159eb30b46e567b650c5440fc117 from qemu
2018-03-04 13:20:23 -05:00
Peter Maydell 320655293a
target/arm: Factor out fault delivery code
We currently have some similar code in tlb_fill() and in
arm_cpu_do_unaligned_access() for delivering a data abort or prefetch
abort. We're also going to want to do the same thing to handle
external aborts. Factor out the common code into a new function
deliver_fault().

Backports commit aac43da1d772a50778ab1252c13c08c2eb31fb39 from qemu
2018-03-04 13:18:31 -05:00
Peter Maydell b1bff4d5c3
cputlb: Support generating CPU exceptions on memory transaction failures
Call the new cpu_transaction_failed() hook at the places where
CPU generated code interacts with the memory system:
 io_readx()
 io_writex()
 get_page_addr_code()

Any access from C code (eg via cpu_physical_memory_rw(),
address_space_rw(), ld/st_*_phys()) will *not* trigger CPU exceptions
via cpu_transaction_failed(). Handling for transactions failures for
this kind of call should be done by using a function which returns a
MemTxResult and treating the failure case appropriately in the
calling code.

In an ideal world we would not generate CPU exceptions for
instruction fetch failures in get_page_addr_code() but instead wait
until the code translation process tried a load and it failed;
however that change would require too great a restructuring and
redesign to attempt at this point.

Backports commit 04e3aabde397e7abc78ba1ce6cbd144d5fbb1722 from qemu
2018-03-04 13:14:50 -05:00
Peter Maydell c44d323359
cpu: Define new cpu_transaction_failed() hook
Currently we have a rather half-baked setup for allowing CPUs to
generate exceptions on accesses to invalid memory: the CPU has a
cpu_unassigned_access() hook which the memory system calls in
unassigned_mem_write() and unassigned_mem_read() if the current_cpu
pointer is non-NULL. This was originally designed before we
implemented the MemTxResult type that allows memory operations to
report a success or failure code, which is why the hook is called
right at the bottom of the memory system. The major problem with
this is that it means that the hook can be called even when the
access was not actually done by the CPU: for instance if the CPU
writes to a DMA engine register which causes the DMA engine to begin
a transaction which has been set up by the guest to operate on
invalid memory then this will cause the CPU to take an exception
incorrectly. Another minor problem is that currently if a device
returns a transaction error then this won't turn into a CPU exception
at all.

The right way to do this is to allow the CPU to respond
to memory system transaction failures at the point where the
CPU specific code calls into the memory system.

Define a new QOM CPU method and utility function
cpu_transaction_failed() which is called in these cases.
The functionality here overlaps with the existing
cpu_unassigned_access() because individual target CPUs will
need some work to convert them to the new system. When this
transition is complete we can remove the old cpu_unassigned_access()
code.
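
A compilable model of the hook arrangement described above (types and signatures are heavily simplified; the real method lives on CPUClass and takes the transaction's full context):

    #include <stdint.h>
    #include <stdio.h>

    typedef enum { MEMTX_OK = 0, MEMTX_ERROR, MEMTX_DECODE_ERROR } MemTxResult;

    typedef struct CPUState CPUState;
    struct CPUState {
        /* Per-CPU-class method; NULL for targets not yet converted. */
        void (*do_transaction_failed)(CPUState *cpu, uint64_t addr,
                                      MemTxResult response);
    };

    /* Utility wrapper called where CPU-originated accesses re-enter
     * the memory system; unconverted targets simply see no change. */
    static void cpu_transaction_failed(CPUState *cpu, uint64_t addr,
                                       MemTxResult response)
    {
        if (cpu->do_transaction_failed) {
            cpu->do_transaction_failed(cpu, addr, response);
        }
    }

    static void arm_transaction_failed(CPUState *cpu, uint64_t addr,
                                       MemTxResult response)
    {
        printf("raising external abort for %#llx (result %d)\n",
               (unsigned long long)addr, (int)response);
    }

    int main(void)
    {
        CPUState cpu = { .do_transaction_failed = arm_transaction_failed };
        cpu_transaction_failed(&cpu, 0xdead0000, MEMTX_DECODE_ERROR);
        return 0;
    }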

Backports commit 0dff0939f6fc6a7abd966d4295f06a06d7a01df9 from qemu
2018-03-04 13:11:50 -05:00
Peter Maydell 26c8f31d9e
memory.h: Move MemTxResult type to memattrs.h
Move the MemTxResult type to memattrs.h. We're going to want to
use it in cpu/qom.h, which doesn't want to include all of
memory.h. In practice MemTxResult and MemTxAttrs are pretty
closely linked since both are used for the new-style
read_with_attrs and write_with_attrs callbacks, so memattrs.h
is a reasonable home for this rather than creating a whole
new header file for it.

Backports commit 3114d092b1740f9db9aa559aeb48ee387011e1da from qemu
2018-03-04 13:10:47 -05:00
Peter Maydell 06619904c6
target/arm: Create and use new function arm_v7m_is_handler_mode()
Add a utility function for testing whether the CPU is in Handler
mode; this is just a check that v7m.exception is non-zero, but
we do it in several places, and naming the check makes the code
a bit easier to read than leaving readers to puzzle out the raw test.
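
The helper itself is tiny. A sketch matching that description, modelling only the field it reads:

    #include <stdbool.h>
    #include <stdint.h>

    /* Just the slice of CPUARMState this helper touches. */
    typedef struct {
        struct { uint32_t exception; } v7m;
    } CPUARMState;

    /* Handler mode iff an exception is currently being handled,
     * i.e. the active exception number is non-zero. */
    static inline bool arm_v7m_is_handler_mode(const CPUARMState *env)
    {
        return env->v7m.exception != 0;
    }

    int main(void)
    {
        CPUARMState env = { .v7m.exception = 3 };  /* e.g. HardFault */
        return arm_v7m_is_handler_mode(&env) ? 0 : 1;
    }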

Backports commit 15b3f556bab4f961bf92141eb8521c8da3df5eb2 from qemu
2018-03-04 13:06:45 -05:00
Peter Maydell a897ee919b
target-arm: v7M: ignore writes to CONTROL.SPSEL from Handler mode
For v7M, writes to the CONTROL register are only permitted for
privileged code. However even if the code is privileged, the
write must not affect the SPSEL bit in the CONTROL register
if the CPU is in Handler mode (as documented in the pseudocode
for the MSR instruction). Implement this, instead of permitting
SPSEL to be written in all cases.

This was causing mbed applications not to run, because the
RTX RTOS they use relies on this behaviour.
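
A compilable sketch of the rule, assuming the v7M CONTROL layout (bit 0 nPRIV, bit 1 SPSEL) and a simplified state structure:

    #include <stdint.h>

    typedef struct {
        uint32_t control;    /* bit 0: nPRIV, bit 1: SPSEL */
        uint32_t exception;  /* non-zero => Handler mode */
    } V7MState;

    /* MSR to CONTROL: only privileged code gets this far; SPSEL may
     * only change in Thread mode, nPRIV changes in either mode. */
    static void v7m_msr_control(V7MState *s, uint32_t val)
    {
        uint32_t mask = 1u;            /* nPRIV always writable */
        if (s->exception == 0) {       /* Thread mode */
            mask |= 2u;                /* ...so SPSEL is writable too */
        }
        s->control = (s->control & ~mask) | (val & mask);
    }

    int main(void)
    {
        V7MState s = { .control = 0, .exception = 11 };  /* in a handler */
        v7m_msr_control(&s, 3);        /* the SPSEL write is ignored */
        return s.control == 1 ? 0 : 1;
    }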

Backports commit 792dac309c8660306557ba058b8b5a6a75ab3c1f from qemu
2018-03-04 13:04:20 -05:00
Peter Maydell 4ae080e27f
target/arm: Don't calculate lr in arm_v7m_cpu_do_interrupt() until needed
Move the code in arm_v7m_cpu_do_interrupt() that calculates the
magic LR value down to when we're actually going to use it.
Having the calculation and use so far apart makes the code
a little harder to understand than it needs to be.

Backports commit bd70b29ba92e4446f9e4eb8b9acc19ef6ff4a4d5 from qemu
2018-03-04 12:59:38 -05:00
Peter Maydell 75f8224d13
target/arm: Make arm_cpu_dump_state() handle the M-profile XPSR
Make the arm_cpu_dump_state() debug logging handle the M-profile XPSR
rather than assuming it's an A-profile CPSR. On M profile the PSR
line of a register dump will now look like this:

XPSR=41000000 -Z-- T priv-thread

Backports commit 5b906f3589443a3c69d8feeaac37263843ecfb8d from qemu
2018-03-04 12:58:56 -05:00
Peter Maydell 9056a93c9a
target/arm: Don't store M profile PRIMASK and FAULTMASK in daif
We currently store the M profile CPU register state PRIMASK and
FAULTMASK in the daif field of the CPU state in its I and F
bits. This is a legacy from the original implementation, which
tried to share the cpu_exec_interrupt code between A profile
and M profile. We've since separated out the two cases because
they are significantly different, so now there is no common
code between M and A profile which looks at env->daif: all the
uses are either in A-only or M-only code paths. Sharing the state
fields now is just confusing, and will make things awkward
when we implement v8M, where the PRIMASK and FAULTMASK
registers are banked between security states.

Switch M profile over to using v7m.faultmask and v7m.primask
fields for these registers.

Backports commit e6ae5981ea4b0f6feb223009a5108582e7644f8f from qemu
2018-03-04 12:56:29 -05:00
Peter Maydell 5d6b031550
target/arm: Define and use XPSR bit masks
The M profile XPSR is almost the same format as the A profile CPSR,
but not quite. Define some XPSR_* macros and use them where we
are definitely dealing with an XPSR rather than reusing the CPSR ones.
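
The masks in question look roughly like this (a sketch based on the ARMv7-M XPSR layout; the exact macro set in the commit may differ slightly):

    #include <stdint.h>

    /* XPSR field masks (layout per the ARMv7-M XPSR definition). */
    #define XPSR_EXCP   0x1ffU         /* active exception number */
    #define XPSR_T      (1U << 24)     /* Thumb state bit */
    #define XPSR_IT_2_7 0xfc00U        /* IT[7:2] */
    #define XPSR_IT_0_1 (3U << 25)     /* IT[1:0] */
    #define XPSR_Q      (1U << 27)
    #define XPSR_V      (1U << 28)
    #define XPSR_C      (1U << 29)
    #define XPSR_Z      (1U << 30)
    #define XPSR_N      (1U << 31)
    #define XPSR_NZCV   (XPSR_N | XPSR_Z | XPSR_C | XPSR_V)

    int main(void)
    {
        /* 0x41000000 is Z plus the Thumb bit -- the value in the
         * register-dump example elsewhere in this log. */
        uint32_t xpsr = 0x41000000;
        return ((xpsr & XPSR_Z) && (xpsr & XPSR_T)) ? 0 : 1;
    }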

Backports commit 987ab45e108953c1c98126c338c2119c243c372b from qemu
2018-03-04 12:54:41 -05:00
Peter Maydell 64c6727e4a
target/arm: Fix outdated comment about exception exit
When we switched our handling of exception exit to detect
the magic addresses at translate time rather than via
a do_unassigned_access hook, we forgot to update a
comment; correct the omission.

Backports commit 9d17da4b68a05fc78daa47f0f3d914eea5d802ea from qemu
2018-03-04 12:52:34 -05:00
Peter Maydell 219b3e8a08
target/arm: Remove incorrect comment about MPU_CTRL
Remove the comment that claims that some MPU_CTRL bits are stored
in sctlr_el[1]. This has never been true since MPU_CTRL was added
in commit 29c483a50607 -- the comment is a leftover from
Michael Davidsaver's original implementation, which I modified
not to use sctlr_el[1]; I forgot to delete the comment then.

Backports commit 59e4972c3fc63d981e8b613ebb3bb01a05848075 from qemu
2018-03-04 12:52:02 -05:00
Peter Maydell 108cff5e61
target/arm: Tighten up Thumb decode where new v8M insns will be
Tighten up the T32 decoder in the places where new v8M instructions
will be:
* TT/TTT/TTA/TTAT are in what was nominally LDREX/STREX r15, ...
  which is UNPREDICTABLE:
  make the UNPREDICTABLE behaviour be to UNDEF
* BXNS/BLXNS are distinguished from BX/BLX via the low 3 bits,
  which in previous architectural versions are SBZ:
  enforce the SBZ via UNDEF rather than ignoring it, and move
  the "ARCH(5)" UNDEF case up so we don't leak a TCG temporary
* SG is in the encoding which would be LDRD/STRD with rn = r15;
  this is UNPREDICTABLE and we currently UNDEF:
  move this check further up the code so that we don't leak
  TCG temporaries in the UNDEF case and have a better place
  to put the SG decode.

This means that if a v8M binary is accidentally run on v7M
or if a test case hits something that we haven't implemented
yet the behaviour will be obvious (UNDEF) rather than obscure
(plough on treating it as a different instruction).

In the process, add some comments about the instruction patterns
at these points in the decode. Our Thumb and ARM decoders are
very difficult to understand currently, but gradually adding
comments like this should help to clarify what exactly has
been decoded when.

Backports commit ebfe27c593e5b222aa2a1fc545b447be3d995faa from qemu
2018-03-04 12:51:08 -05:00
Peter Maydell 6f4afe1a13
target/arm: Consolidate PMSA handling in get_phys_addr()
Currently get_phys_addr() has PMSAv7 handling before the
"is translation disabled?" check, and then PMSAv5 after it.
Tidy this up by making the PMSAv5 code handle the "MPU disabled"
case itself, so that we have all the PMSA code in one place.
This will make adding the PMSAv8 code slightly cleaner, and
also means that pre-v7 PMSA cores benefit from the MPU lookup
logging that the PMSAv7 codepath had.

Backports commit 3279adb95e34dd3d67c66d729458f7784747cf8d from qemu
2018-03-04 12:48:22 -05:00
Peter Maydell f85f301316
target/arm: Don't trap WFI/WFE for M profile
M profile cores can never trap on WFI or WFE instructions. Check for
M profile in check_wfx_trap() to ensure this.

The existing code will do the right thing for v7M cores because
the hcr_el2 and scr_el3 registers will be all-zeroes and so we
won't attempt to trap, but when we start setting ARM_FEATURE_V8
for v8M cores the v8A handling of SCTLR.nTWE and .nTWI will not
give the right results.

Backports commit 0e2845689ebdb4ea7174f96f6797e2d8942bd114 from qemu
2018-03-04 12:46:37 -05:00
Peter Maydell 2c9a196efe
target/arm: Use MMUAccessType enum rather than int
In the ARM get_phys_addr() code, switch to using the MMUAccessType
enum and its MMU_* values rather than int and literal 0/1/2.

Backports commit 03ae85f858fc46495258a5dd4551fff2c34bd495 from qemu
2018-03-04 12:45:56 -05:00
Brijesh Singh b9c18f22cd
target-i386/cpu: Add new EPYC CPU model
Add a new base CPU model called 'EPYC' to model processors from the AMD
EPYC family (which includes the EPYC 76xx, 75xx, 74xx, 73xx and 72xx).

The following feature bits have been added/removed compared to Opteron_G5:

Added: monitor, movbe, rdrand, mmxext, ffxsr, rdtscp, cr8legacy, osvw,
fsgsbase, bmi1, avx2, smep, bmi2, rdseed, adx, smap, clflushopt, sha,
xsaveopt, xsavec, xgetbv1, arat

Removed: xop, fma4, tbm

Backports commit 2e2efc7dbe2b0adc1200b5aa286cdbed729f6751 from qemu
2018-03-04 12:22:27 -05:00
Eduardo Habkost 382022929e
cpu: cpu_by_arch_id() helper
The helper can be used for CPU object lookup using the CPU's
arch-specific ID (the one returned by CPUClass::get_arch_id()).
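
A self-contained model of what the lookup does (QEMU walks its global CPU list; a plain array stands in here, and the signature is simplified):

    #include <stddef.h>
    #include <stdint.h>

    typedef struct CPUState {
        int64_t arch_id;   /* stand-in for CPUClass::get_arch_id() */
    } CPUState;

    /* Return the CPU whose arch-specific ID matches, or NULL. */
    static CPUState *cpu_by_arch_id(CPUState *cpus, size_t n, int64_t id)
    {
        for (size_t i = 0; i < n; i++) {
            if (cpus[i].arch_id == id) {
                return &cpus[i];
            }
        }
        return NULL;
    }

    int main(void)
    {
        CPUState cpus[] = { { 0 }, { 2 }, { 4 } };  /* e.g. x86 APIC IDs */
        return cpu_by_arch_id(cpus, 3, 4) != NULL ? 0 : 1;
    }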

Backports commit 5ce46cb34eecec0bc94a4b1394763f9a1bbe20c3 from qemu
2018-03-04 12:16:39 -05:00
Alexey Kardashevskiy 75afdffa45
memory: Move FlatView allocation to a helper
This moves a FlatView allocation and initialization to a helper.
While we are here, replace g_new with g_new0 so we do not need to worry
if we add new fields in the future.

This should cause no behavioural change.

Backports commit de7e6815b84c797cbda56dc96fcacaf5f37d3a20 from qemu
2018-03-04 02:08:37 -05:00
Alexey Kardashevskiy e723b8dd49
memory: Open code FlatView rendering
We are going to share FlatView's between AddressSpace's and per-AS
memory listeners won't suit the purpose anymore so open code
the dispatch tree rendering.

Since there is a good chance that dispatch_listener was the only
listener, this avoids address_space_update_topology_pass() if there are
no registered listeners; this should improve startup time.

This should cause no behavioural change.

Backports commit 1b04a1580917d9e41fd37ca62cbff9b4bf061e96 from qemu
2018-03-04 02:06:48 -05:00
Alexey Kardashevskiy f74fcb194f
exec: Explicitly export target AS from address_space_translate_internal
This adds an AS** parameter to address_space_do_translate()
to make it easier for the next patch to share FlatViews.

This should cause no behavioural change.

Backports commit 6424975ce912061ac9e4a375237b0c89d83d93e3 from qemu
2018-03-04 01:56:13 -05:00
Eric Blake be742759b0
osdep: Fix ROUND_UP(64-bit, 32-bit)
When using bit-wise operations that exploit the power-of-two
nature of the second argument of ROUND_UP(), we still need to
ensure that the mask is as wide as the first argument (done
by using a ternary to force proper arithmetic promotion).
Unpatched, ROUND_UP(2ULL*1024*1024*1024*1024, 512U) produces 0,
instead of the intended 2TiB, because negation of an unsigned
32-bit quantity followed by widening to 64-bits does not
sign-extend the mask.

Broken since its introduction in commit 292c8e50 (v1.5.0).
Callers that passed the same width type to both macro parameters,
or that had other code to ensure the first parameter's maximum
runtime value did not exceed the second parameter's width, are
unaffected, but I did not audit to see which (if any) existing
clients of the macro could trigger incorrect behavior (I found
the bug while adding a new use of the macro).

While preparing the patch, checkpatch complained about poor
spacing, so I also fixed that here and in the nearby DIV_ROUND_UP.
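
The failure mode is easy to reproduce in isolation. A sketch contrasting the pre-patch macro shape with the ternary-promoted fix described above (the exact upstream macro text may differ slightly):

    #include <inttypes.h>
    #include <stdio.h>

    /* Pre-patch shape: -(d) is negated at d's 32-bit width and only
     * then widened, so the high 32 bits of the mask are zero. */
    #define ROUND_UP_BROKEN(n, d) (((n) + (d) - 1) & -(d))

    /* Patched shape: the ternary never evaluates (n), but it forces
     * the usual arithmetic conversions, widening -(d) to match (n). */
    #define ROUND_UP_FIXED(n, d) (((n) + (d) - 1) & -(0 ? (n) : (d)))

    int main(void)
    {
        uint64_t n = 2ULL * 1024 * 1024 * 1024 * 1024;  /* 2 TiB */
        uint32_t d = 512U;
        printf("broken: 0x%" PRIx64 "\n", ROUND_UP_BROKEN(n, d)); /* 0 */
        printf("fixed:  0x%" PRIx64 "\n", ROUND_UP_FIXED(n, d));  /* 2 TiB */
        return 0;
    }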

Backports commit 33a599667a9e70588483a31286dfff8cfc27d513 from qemu
2018-03-04 01:54:09 -05:00
Alistair Francis 5d742aad0b
target/arm: Require alignment for load exclusive
According to the ARM ARM, exclusive loads require the same alignment as
exclusive stores. Let's update the memops used for the load to match
that of the store. This adds the alignment requirement to the memops.
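
In TCG terms the change is just using the store's memop, alignment bit included, for the load. A standalone model of the architectural rule that enforces:

    #include <stdbool.h>
    #include <stdint.h>

    /* True iff an exclusive access of (1 << size_log2) bytes is
     * correctly aligned; unaligned exclusives must fault. */
    static bool exclusive_aligned(uint64_t addr, unsigned size_log2)
    {
        return (addr & ((1ULL << size_log2) - 1)) == 0;
    }

    int main(void)
    {
        return (exclusive_aligned(0x1000, 3)       /* 8-byte LDXR: ok   */
                && !exclusive_aligned(0x1004, 3))  /* misaligned: fault */
               ? 0 : 1;
    }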

Backports commit 4a2fdb78e794c1ad93aa9e160235d6a61a2125de from qemu
2018-03-04 01:53:04 -05:00
Richard Henderson 4a8f556c29
target/arm: Correct load exclusive pair atomicity
We are not providing the required single-copy atomic semantics for
the 64-bit operation that is the 32-bit paired load.

At the same time, leave the entire 64-bit value in cpu_exclusive_val
and stop writing to cpu_exclusive_high. This means that we do not
have to re-assemble the 64-bit quantity when it comes time to store.

At the same time, drop a redundant temporary and perform all loads
directly into the cpu_exclusive_* globals.

Backports commit 19514cde3b92938df750acaecf2caaa85e1d36a6 from qemu
2018-03-04 01:49:35 -05:00
Alistair Francis 009a52dd13
target/arm: Correct exclusive store cmpxchg memop mask
When we perform the atomic_cmpxchg operation we want to perform the
operation on a pair of 32-bit registers. Previously we were just passing
in the register size, which was set to MO_32. This would result in the
high register being ignored. To fix this issue we hardcode the size to
be 64-bits long when operating on 32-bit pairs.
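
A compilable illustration of why the width matters (the pair packing is modelled by hand; in TCG it would be done with concat-style ops): a 32-bit comparison never sees the high register.

    #include <stdint.h>

    /* Pack a 32-bit register pair into the single 64-bit value the
     * cmpxchg must operate on (little-endian layout assumed). */
    static uint64_t pack_pair(uint32_t lo, uint32_t hi)
    {
        return ((uint64_t)hi << 32) | lo;
    }

    int main(void)
    {
        uint64_t expected = pack_pair(0x1111, 0x2222);
        uint64_t actual   = pack_pair(0x1111, 0xffff);  /* high differs */

        /* 64-bit compare (the fix): the mismatch is detected. */
        int ok64 = (expected != actual);
        /* 32-bit compare (the bug): high halves never checked. */
        int ok32 = ((uint32_t)expected != (uint32_t)actual);

        return (ok64 && !ok32) ? 0 : 1;
    }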

Backports commit 955fd0ad5d610f62ba2f4ce46a872bf50434dcf8 from qemu
2018-03-04 01:43:55 -05:00
Michael S. Tsirkin fd472c53c6
Revert "cpu: add APIs to allocate/free CPU environment"
This reverts commit e2a7f28693aea7e194ec1435697ec4feb24f8a6f.

This was not supposed to go upstream yet. Reverting.

Backports commit cde0a63ad721dbb538419a00f9405587680be436 from qemu
2018-03-04 01:42:49 -05:00
Joseph Myers e5b84c6d59
target/i386: set rip_offset for some SSE4.1 instructions
When emulating various SSE4.1 instructions such as pinsrd, the address
of a memory operand is computed without allowing for the 8-bit
immediate operand located after the memory operand, meaning that the
memory operand uses the wrong address in the case where it is
rip-relative. This patch adds the required rip_offset setting for
those instructions, so fixing some GCC test failures (13 in the gcc
testsuite in my GCC 6-based testing) when testing with a default CPU
setting enabling those instructions.
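
The underlying arithmetic, reduced to a standalone sketch (the rip_offset name mirrors the decoder field in the commit; the addresses are invented): RIP-relative displacements are relative to the end of the whole instruction, so immediate bytes after the memory operand must be counted too.

    #include <stdint.h>
    #include <stdio.h>

    /* pc_after_disp is the address just past the ModRM displacement;
     * rip_offset counts immediate bytes still to come after the
     * memory operand (1 for pinsrd's imm8). */
    static uint64_t rip_relative(uint64_t pc_after_disp, int rip_offset,
                                 int32_t disp)
    {
        return pc_after_disp + rip_offset + disp;
    }

    int main(void)
    {
        uint64_t pc_after_disp = 0x401007;  /* invented address */
        printf("without imm8: %#llx\n",     /* off by one: the old bug */
               (unsigned long long)rip_relative(pc_after_disp, 0, 0x100));
        printf("with imm8:    %#llx\n",
               (unsigned long long)rip_relative(pc_after_disp, 1, 0x100));
        return 0;
    }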

Backports commit ab6ab3e9972a49a359f59895a88bed311472ca97 from qemu
2018-03-04 01:41:43 -05:00
Michael S. Tsirkin 71bf994214
cpu: add APIs to allocate/free CPU environment
These will be implemented and then used by follow-up patches.

Backports commit e2a7f28693aea7e194ec1435697ec4feb24f8a6f from qemu
2018-03-04 01:39:09 -05:00
Richard Henderson b33f2b40e8
tcg: Increase minimum alignment from tcg_malloc to 8
For a 64-bit ILP32 host, aligning to sizeof(long) is not enough.
Guess the minimum for any host is 8, as that covers uint64_t.
Qemu doesn't use a host long double or host vectors, except in
extremely limited circumstances.

Fixes a bus error for a sparc v8plus host.
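
The fix amounts to rounding every pool allocation up to an 8-byte boundary. A sketch of that invariant (constant name assumed):

    #include <stddef.h>

    /* Round allocation sizes up to 8 bytes so that a uint64_t placed
     * in the pool is naturally aligned even where sizeof(long) == 4. */
    enum { TCG_POOL_ALIGN = 8 };

    static size_t pool_align(size_t size)
    {
        return (size + TCG_POOL_ALIGN - 1) & ~(size_t)(TCG_POOL_ALIGN - 1);
    }

    int main(void)
    {
        return (pool_align(12) == 16 && pool_align(8) == 8) ? 0 : 1;
    }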

Backports commit 13aaef678ed377b12b76dc7fb9e615b2f2f9047b from qemu
2018-03-04 01:36:59 -05:00
Richard Henderson 29ea0681d0
tcg/arm: Fix runtime overalignment test
Patch 85aa80813dd changed the IF emitting the TST instruction,
but failed to change the ?: converting CMP to CMPEQ, so the
result of the TST is ignored.

Backports commit ca671de8af96798e0f493378240034620a3a04ee from qemu
2018-03-04 01:36:20 -05:00
James Hogan 4cc63bac09
target/mips: Fix RDHWR CC with icount
RDHWR CC reads the CPU timer like MFC0 CP0_Count, so with icount enabled
it must set can_do_io while it calls the helper to avoid the "Bad icount
read" error. It should also break out of the translation loop to ensure
that timer interrupts are immediately handled.

Backports commit d673a68db6963e86536b125af464bb6ed03eba33 from qemu
2018-03-04 01:35:25 -05:00
James Hogan cb20fdce64
target/mips: Drop redundant gen_io_start/stop()
DMTC0 CP0_Cause does a redundant gen_io_start() and gen_io_end() pair,
even though this is done for all DMTC0 operations outside of the switch
statement. Remove these redundant calls.

Backports commit 51ca717b079dccae5b6cc9f45153f5044abd34f0 from qemu
2018-03-04 01:33:54 -05:00
James Hogan 0afa0c8ddc
target/mips: Use BS_EXCP where interrupts are expected
Commit e350d8ca3ac7 ("target/mips: optimize indirect branches") made
indirect branches able to directly find the next TB and jump straight to
it without breaking out of translated code and going around the main
execution loop. This breaks the assumption in target/mips/translate.c
that BS_STOP is sufficient to cause pending interrupts to be handled,
since interrupts are only checked in the main loop.

Fix a few of these assumptions by using gen_save_pc to update the saved
PC and using BS_EXCP instead of BS_STOP:

- [D]MFC0 CP0_Count may trigger a timer interrupt which should be
  immediately handled.

- [D]MTC0 CP0_Cause may trigger an interrupt (but in fact translation
  was only ever being stopped in the DMTC0 case).

- [D]MTC0 CP0_<any> when icount is used is assumed to potentially
  cause interrupts.

- EI may trigger an interrupt which was pending. I specifically hit
  this case when running KVM nested in mipsel-softmmu. A timer
  interrupt while the 2nd guest was executing is caught by KVM which
  switches back to the normal Linux exception base and re-enables
  interrupts with EI. Since the above commit QEMU doesn't leave
  translated code until the nested KVM has already restored the KVM
  exception base and returned to the 2nd guest, at which point it is
  too late to check for pending interrupts and it gets stuck in an
  infinite loop of unhandled interrupts.

Something similar was needed for ARM in commit b29fd33db578
("target/arm: use DISAS_EXIT for eret handling").

Backports commit b74cddcbf6063f684725e3f8bca49a68e30cba71 from qemu
2018-03-04 01:32:24 -05:00
Leon Alrae 4a1ec3bb80
target-mips: apply CP0.PageMask before writing into TLB entry
PFN0 and PFN1 have to be masked out with PageMask_Mask.

Backports commit 2d1847ec1ca47fe82f1d8122409cedffdd3925d5 from qemu
2018-03-04 01:27:51 -05:00