Commit graph

4018 commits

Peter Maydell c44d323359
cpu: Define new cpu_transaction_failed() hook
Currently we have a rather half-baked setup for allowing CPUs to
generate exceptions on accesses to invalid memory: the CPU has a
cpu_unassigned_access() hook which the memory system calls in
unassigned_mem_write() and unassigned_mem_read() if the current_cpu
pointer is non-NULL. This was originally designed before we
implemented the MemTxResult type that allows memory operations to
report a success or failure code, which is why the hook is called
right at the bottom of the memory system. The major problem with
this is that it means that the hook can be called even when the
access was not actually done by the CPU: for instance if the CPU
writes to a DMA engine register which causes the DMA engine to begin
a transaction which has been set up by the guest to operate on
invalid memory, then this will cause the CPU to take an exception
incorrectly. Another minor problem is that currently if a device
returns a transaction error then this won't turn into a CPU exception
at all.

The right way to do this is to allow the CPU to respond
to memory system transaction failures at the point where the
CPU-specific code calls into the memory system.

Define a new QOM CPU method and utility function
cpu_transaction_failed() which is called in these cases.
The functionality here overlaps with the existing
cpu_unassigned_access() because individual target CPUs will
need some work to convert them to the new system. When this
transition is complete we can remove the old cpu_unassigned_access()
code.
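
A rough sketch of the new hook and its wrapper (the names and argument
list here are approximations for illustration, not the literal backported
code):

    /* Sketch: new hook on CPUClass... */
    void (*do_transaction_failed)(CPUState *cpu, hwaddr physaddr, vaddr addr,
                                  unsigned size, MMUAccessType access_type,
                                  int mmu_idx, MemTxAttrs attrs,
                                  MemTxResult response, uintptr_t retaddr);

    /* ...and the utility wrapper called by target code when the memory
     * system reports a failed transaction: */
    static inline void cpu_transaction_failed(CPUState *cpu, hwaddr physaddr,
                                              vaddr addr, unsigned size,
                                              MMUAccessType access_type,
                                              int mmu_idx, MemTxAttrs attrs,
                                              MemTxResult response,
                                              uintptr_t retaddr)
    {
        CPUClass *cc = CPU_GET_CLASS(cpu);

        if (cc->do_transaction_failed) {
            cc->do_transaction_failed(cpu, physaddr, addr, size, access_type,
                                      mmu_idx, attrs, response, retaddr);
        }
    }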

Backports commit 0dff0939f6fc6a7abd966d4295f06a06d7a01df9 from qemu
2018-03-04 13:11:50 -05:00
Peter Maydell 26c8f31d9e
memory.h: Move MemTxResult type to memattrs.h
Move the MemTxResult type to memattrs.h. We're going to want to
use it in qom/cpu.h, which doesn't want to include all of
memory.h. In practice MemTxResult and MemTxAttrs are pretty
closely linked since both are used for the new-style
read_with_attrs and write_with_attrs callbacks, so memattrs.h
is a reasonable home for this rather than creating a whole
new header file for it.
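
For reference, a sketch of the type being moved, following upstream QEMU's
definitions (shown for illustration):

    /* Sketch of the MemTxResult definition. */
    typedef uint32_t MemTxResult;

    #define MEMTX_OK            0
    #define MEMTX_ERROR         (1U << 0)   /* device returned an error */
    #define MEMTX_DECODE_ERROR  (1U << 1)   /* nothing at that address */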

Backports commit 3114d092b1740f9db9aa559aeb48ee387011e1da from qemu
2018-03-04 13:10:47 -05:00
Peter Maydell 06619904c6
target/arm: Create and use new function arm_v7m_is_handler_mode()
Add a utility function for testing whether the CPU is in Handler
mode; this is just a check of whether v7m.exception is non-zero, but
we do it in several places, and the code is a bit easier to read
when we don't have to mentally work out what the test is testing.
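
Since the check is described exactly, the helper is essentially a
one-liner along these lines (sketch):

    /* Sketch: Handler mode is indicated by a non-zero exception number. */
    static inline bool arm_v7m_is_handler_mode(CPUARMState *env)
    {
        return env->v7m.exception != 0;
    }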

Backports commit 15b3f556bab4f961bf92141eb8521c8da3df5eb2 from qemu
2018-03-04 13:06:45 -05:00
Peter Maydell a897ee919b
target-arm: v7M: ignore writes to CONTROL.SPSEL from Thread mode
For v7M, writes to the CONTROL register are only permitted for
privileged code. However even if the code is privileged, the
write must not affect the SPSEL bit in the CONTROL register
if the CPU is in Thread mode (as documented in the pseudocode
for the MSR instruction). Implement this, instead of permitting
SPSEL to be written in all cases.

This was causing mbed applications not to run, because the
RTX RTOS they use relies on this behaviour.

Backports commit 792dac309c8660306557ba058b8b5a6a75ab3c1f from qemu
2018-03-04 13:04:20 -05:00
Peter Maydell 4ae080e27f
target/arm: Don't calculate lr in arm_v7m_cpu_do_interrupt() until needed
Move the code in arm_v7m_cpu_do_interrupt() that calculates the
magic LR value down to when we're actually going to use it.
Having the calculation and use so far apart makes the code
a little harder to understand than it needs to be.

Backports commit bd70b29ba92e4446f9e4eb8b9acc19ef6ff4a4d5 from qemu
2018-03-04 12:59:38 -05:00
Peter Maydell 75f8224d13
target/arm: Make arm_cpu_dump_state() handle the M-profile XPSR
Make the arm_cpu_dump_state() debug logging handle the M-profile XPSR
rather than assuming it's an A-profile CPSR. On M profile the PSR
line of a register dump will now look like this:

XPSR=41000000 -Z-- T priv-thread

Backports commit 5b906f3589443a3c69d8feeaac37263843ecfb8d from qemu
2018-03-04 12:58:56 -05:00
Peter Maydell 9056a93c9a
target/arm: Don't store M profile PRIMASK and FAULTMASK in daif
We currently store the M profile CPU register state PRIMASK and
FAULTMASK in the daif field of the CPU state in its I and F
bits. This is a legacy from the original implementation, which
tried to share the cpu_exec_interrupt code between A profile
and M profile. We've since separated out the two cases because
they are significantly different, so now there is no common
code between M and A profile which looks at env->daif: all the
uses are either in A-only or M-only code paths. Sharing the state
fields now is just confusing, and will make things awkward
when we implement v8M, where the PRIMASK and FAULTMASK
registers are banked between security states.

Switch M profile over to using v7m.faultmask and v7m.primask
fields for these registers.

Backports commit e6ae5981ea4b0f6feb223009a5108582e7644f8f from qemu
2018-03-04 12:56:29 -05:00
Peter Maydell 5d6b031550
target/arm: Define and use XPSR bit masks
The M profile XPSR is almost the same format as the A profile CPSR,
but not quite. Define some XPSR_* macros and use them where we
are definitely dealing with an XPSR rather than reusing the CPSR ones.
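
For illustration, a few of the masks this refers to, sketched from the
M-profile XPSR layout (the exact macro set in the backport may differ):

    /* Sketch of some M-profile XPSR field masks. */
    #define XPSR_EXCP   0x1ffU        /* bits [8:0]: current exception number */
    #define XPSR_T      (1U << 24)    /* Thumb state bit */
    #define XPSR_Q      (1U << 27)    /* sticky saturation flag */
    #define XPSR_NZCV   0xf0000000U   /* bits [31:28]: condition flags */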

Backports commit 987ab45e108953c1c98126c338c2119c243c372b from qemu
2018-03-04 12:54:41 -05:00
Peter Maydell 64c6727e4a
target/arm: Fix outdated comment about exception exit
When we switched our handling of exception exit to detect
the magic addresses at translate time rather than via
a do_unassigned_access hook, we forgot to update a
comment; correct the omission.

Backports commit 9d17da4b68a05fc78daa47f0f3d914eea5d802ea from qemu
2018-03-04 12:52:34 -05:00
Peter Maydell 219b3e8a08
target/arm: Remove incorrect comment about MPU_CTRL
Remove the comment that claims that some MPU_CTRL bits are stored
in sctlr_el[1]. This has never been true since MPU_CTRL was added
in commit 29c483a50607 -- the comment is a leftover from
Michael Davidsaver's original implementation, which I modified
not to use sctlr_el[1]; I forgot to delete the comment then.

Backports commit 59e4972c3fc63d981e8b613ebb3bb01a05848075 from qemu
2018-03-04 12:52:02 -05:00
Peter Maydell 108cff5e61
target/arm: Tighten up Thumb decode where new v8M insns will be
Tighten up the T32 decoder in the places where new v8M instructions
will be:
* TT/TTT/TTA/TTAT are in what was nominally LDREX/STREX r15, ...
  which is UNPREDICTABLE:
  make the UNPREDICTABLE behaviour be to UNDEF
* BXNS/BLXNS are distinguished from BX/BLX via the low 3 bits,
  which in previous architectural versions are SBZ:
  enforce the SBZ via UNDEF rather than ignoring it, and move
  the "ARCH(5)" UNDEF case up so we don't leak a TCG temporary
* SG is in the encoding which would be LDRD/STRD with rn = r15;
  this is UNPREDICTABLE and we currently UNDEF:
  move this check further up the code so that we don't leak
  TCG temporaries in the UNDEF case and have a better place
  to put the SG decode.

This means that if a v8M binary is accidentally run on v7M
or if a test case hits something that we haven't implemented
yet, the behaviour will be obvious (UNDEF) rather than obscure
(plough on treating it as a different instruction).

In the process, add some comments about the instruction patterns
at these points in the decode. Our Thumb and ARM decoders are
very difficult to understand currently, but gradually adding
comments like this should help to clarify what exactly has
been decoded when.

Backports commit ebfe27c593e5b222aa2a1fc545b447be3d995faa from qemu
2018-03-04 12:51:08 -05:00
Peter Maydell 6f4afe1a13
target/arm: Consolidate PMSA handling in get_phys_addr()
Currently get_phys_addr() has PMSAv7 handling before the
"is translation disabled?" check, and then PMSAv5 after it.
Tidy this up by making the PMSAv5 code handle the "MPU disabled"
case itself, so that we have all the PMSA code in one place.
This will make adding the PMSAv8 code slightly cleaner, and
also means that pre-v7 PMSA cores benefit from the MPU lookup
logging that the PMSAv7 codepath had.

Backports commit 3279adb95e34dd3d67c66d729458f7784747cf8d from qemu
2018-03-04 12:48:22 -05:00
Peter Maydell f85f301316
target/arm: Don't trap WFI/WFE for M profile
M profile cores can never trap on WFI or WFE instructions. Check for
M profile in check_wfx_trap() to ensure this.

The existing code will do the right thing for v7M cores because
the hcr_el2 and scr_el3 registers will be all-zeroes and so we
won't attempt to trap, but when we start setting ARM_FEATURE_V8
for v8M cores the v8A handling of SCTLR.nTWE and .nTWI will not
give the right results.
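
The check amounts to an early return, roughly (sketch; exact placement in
check_wfx_trap() may differ):

    /* Sketch: near the top of check_wfx_trap() */
    if (arm_feature(env, ARM_FEATURE_M)) {
        return 0;   /* M profile cores never trap WFI/WFE */
    }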

Backports commit 0e2845689ebdb4ea7174f96f6797e2d8942bd114 from qemu
2018-03-04 12:46:37 -05:00
Peter Maydell 2c9a196efe
target/arm: Use MMUAccessType enum rather than int
In the ARM get_phys_addr() code, switch to using the MMUAccessType
enum and its MMU_* values rather than int and literal 0/1/2.
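
The enum in question, as defined in upstream QEMU:

    typedef enum MMUAccessType {
        MMU_DATA_LOAD  = 0,
        MMU_DATA_STORE = 1,
        MMU_INST_FETCH = 2
    } MMUAccessType;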

Backports commit 03ae85f858fc46495258a5dd4551fff2c34bd495 from qemu
2018-03-04 12:45:56 -05:00
Brijesh Singh b9c18f22cd
target-i386/cpu: Add new EPYC CPU model
Add a new base CPU model called 'EPYC' to model processors from the AMD
EPYC family (which includes EPYC 76xx, 75xx, 74xx, 73xx and 72xx).

The following feature bits have been added/removed compared to Opteron_G5:

Added: monitor, movbe, rdrand, mmxext, ffxsr, rdtscp, cr8legacy, osvw,
fsgsbase, bmi1, avx2, smep, bmi2, rdseed, adx, smap, clflushopt, sha,
xsaveopt, xsavec, xgetbv1, arat

Removed: xop, fma4, tbm

Backports commit 2e2efc7dbe2b0adc1200b5aa286cdbed729f6751 from qemu
2018-03-04 12:22:27 -05:00
Eduardo Habkost 382022929e
cpu: cpu_by_arch_id() helper
The helper can be used for CPU object lookup using the CPU's
arch-specific ID (the one returned by CPUClass::get_arch_id()).
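
A sketch of how such a lookup helper typically works (illustrative; the
backported implementation may differ in detail):

    /* Sketch: walk the CPU list and compare arch-specific IDs. */
    CPUState *cpu_by_arch_id(int64_t id)
    {
        CPUState *cpu;

        CPU_FOREACH(cpu) {
            CPUClass *cc = CPU_GET_CLASS(cpu);

            if (cc->get_arch_id(cpu) == id) {
                return cpu;
            }
        }
        return NULL;
    }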

Backports commit 5ce46cb34eecec0bc94a4b1394763f9a1bbe20c3 from qemu
2018-03-04 12:16:39 -05:00
Alexey Kardashevskiy 75afdffa45
memory: Move FlatView allocation to a helper
This moves a FlatView allocation and initialization to a helper.
While we are here, replace g_new with g_new0 so we don't have to bother
if we add new fields in the future.

This should cause no behavioural change.
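
A sketch of such a helper, using g_new0 as described (the helper and init
function names are assumptions):

    /* Sketch: allocate and initialise a FlatView in one place.
     * g_new0 zeroes the struct, so fields added later start out cleared. */
    static FlatView *flatview_new(void)
    {
        FlatView *view = g_new0(FlatView, 1);
        flatview_init(view);
        return view;
    }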

Backports commit de7e6815b84c797cbda56dc96fcacaf5f37d3a20 from qemu
2018-03-04 02:08:37 -05:00
Alexey Kardashevskiy e723b8dd49
memory: Open code FlatView rendering
We are going to share FlatViews between AddressSpaces, and per-AS
memory listeners won't suit the purpose anymore, so open-code
the dispatch tree rendering.

Since there is a good chance that dispatch_listener was the only
listener, this avoids address_space_update_topology_pass() if there are
no registered listeners; this should improve start-up time.

This should cause no behavioural change.

Backports commit 1b04a1580917d9e41fd37ca62cbff9b4bf061e96 from qemu
2018-03-04 02:06:48 -05:00
Alexey Kardashevskiy f74fcb194f
exec: Explicitly export target AS from address_space_translate_internal
This adds an AS** parameter to address_space_do_translate()
to make it easier for the next patch to share FlatViews.

This should cause no behavioural change.

Backports commit 6424975ce912061ac9e4a375237b0c89d83d93e3 from qemu
2018-03-04 01:56:13 -05:00
Eric Blake be742759b0
osdep: Fix ROUND_UP(64-bit, 32-bit)
When using bit-wise operations that exploit the power-of-two
nature of the second argument of ROUND_UP(), we still need to
ensure that the mask is as wide as the first argument (done
by using a ternary to force proper arithmetic promotion).
Unpatched, ROUND_UP(2ULL*1024*1024*1024*1024, 512U) produces 0,
instead of the intended 2TiB, because negation of an unsigned
32-bit quantity followed by widening to 64-bits does not
sign-extend the mask.
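
Concretely, the mask comes from negating the second argument; a
before/after sketch of the macro (approximate), with the failing example
worked through in the comments:

    /* Before (sketch): -(512U) is evaluated at 32 bits as 0xfffffe00 and is
     * zero-extended, not sign-extended, when ANDed with the 64-bit first
     * argument, so ROUND_UP(2ULL*1024*1024*1024*1024, 512U) comes out as 0. */
    #define ROUND_UP(n, d) (((n) + (d) - 1) & -(d))

    /* After (sketch): the ternary forces the usual arithmetic conversions,
     * so the negated value is as wide as the wider of (n) and (d) and the
     * mask keeps its high bits. */
    #define ROUND_UP(n, d) (((n) + (d) - 1) & -(0 ? (n) : (d)))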

Broken since its introduction in commit 292c8e50 (v1.5.0).
Callers that passed the same width type to both macro parameters,
or that had other code to ensure the first parameter's maximum
runtime value did not exceed the second parameter's width, are
unaffected, but I did not audit to see which (if any) existing
clients of the macro could trigger incorrect behavior (I found
the bug while adding a new use of the macro).

While preparing the patch, checkpatch complained about poor
spacing, so I also fixed that here and in the nearby DIV_ROUND_UP.

Backports commit 33a599667a9e70588483a31286dfff8cfc27d513 from qemu
2018-03-04 01:54:09 -05:00
Alistair Francis 5d742aad0b
target/arm: Require alignment for load exclusive
According to the ARM ARM, exclusive loads require the same alignment as
exclusive stores. Let's update the memops used for the load to match
that of the store. This adds the alignment requirement to the memops.
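
In TCG terms this means folding MO_ALIGN into the memop used for the
exclusive load, roughly (sketch of a fragment of gen_load_exclusive();
details differ):

    /* Sketch: request an alignment check on the exclusive load,
     * as the exclusive store already does. */
    TCGMemOp memop = size | MO_ALIGN | s->be_data;
    tcg_gen_qemu_ld_i64(cpu_exclusive_val, addr, get_mem_index(s), memop);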

Backports commit 4a2fdb78e794c1ad93aa9e160235d6a61a2125de from qemu
2018-03-04 01:53:04 -05:00
Richard Henderson 4a8f556c29
target/arm: Correct load exclusive pair atomicity
We are not providing the required single-copy atomic semantics for
the 64-bit operation that is the 32-bit paired load.

At the same time, leave the entire 64-bit value in cpu_exclusive_val
and stop writing to cpu_exclusive_high. This means that we do not
have to re-assemble the 64-bit quantity when it comes time to store.

At the same time, drop a redundant temporary and perform all loads
directly into the cpu_exclusive_* globals.

Backports commit 19514cde3b92938df750acaecf2caaa85e1d36a6 from qemu
2018-03-04 01:49:35 -05:00
Alistair Francis 009a52dd13
target/arm: Correct exclusive store cmpxchg memop mask
When we perform the atomic_cmpxchg operation we want to perform the
operation on a pair of 32-bit registers. Previously we were just passing
in the register size, which was set to MO_32. This would result in the
high register being ignored. To fix this issue we hardcode the size to
be 64 bits when operating on 32-bit pairs.

Backports commit 955fd0ad5d610f62ba2f4ce46a872bf50434dcf8 from qemu
2018-03-04 01:43:55 -05:00
Michael S. Tsirkin fd472c53c6
Revert "cpu: add APIs to allocate/free CPU environment"
This reverts commit e2a7f28693aea7e194ec1435697ec4feb24f8a6f.

This was not supposed to go upstream yet. Reverting.

Backports commit cde0a63ad721dbb538419a00f9405587680be436 from qemu
2018-03-04 01:42:49 -05:00
Joseph Myers e5b84c6d59
target/i386: set rip_offset for some SSE4.1 instructions
When emulating various SSE4.1 instructions such as pinsrd, the address
of a memory operand is computed without allowing for the 8-bit
immediate operand located after the memory operand, meaning that the
memory operand uses the wrong address in the case where it is
rip-relative. This patch adds the required rip_offset setting for
those instructions, so fixing some GCC test failures (13 in the gcc
testsuite in my GCC 6-based testing) when testing with a default CPU
setting enabling those instructions.
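
The fix boils down to telling the decoder that one immediate byte follows
the memory operand, roughly (sketch; exact placement in the SSE4.1 decode
path differs):

    /* Sketch: inside the decode of the affected SSE4.1 instructions */
    s->rip_offset = 1;   /* one imm8 byte follows the modrm memory operand */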

Backports commit ab6ab3e9972a49a359f59895a88bed311472ca97 from qemu
2018-03-04 01:41:43 -05:00
Michael S. Tsirkin 71bf994214
cpu: add APIs to allocate/free CPU environment
These will be implemented and then used by follow-up patches.

Backports commit e2a7f28693aea7e194ec1435697ec4feb24f8a6f from qemu
2018-03-04 01:39:09 -05:00
Richard Henderson b33f2b40e8
tcg: Increase minimum alignment from tcg_malloc to 8
For a 64-bit ILP32 host, aligning to sizeof(long) is not enough.
Guess the minimum for any host is 8, as that covers uint64_t.
Qemu doesn't use a host long double or host vectors, except in
extremely limited circumstances.

Fixes a bus error for a sparc v8plus host.
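
A minimal sketch of the idea in tcg_malloc()'s size rounding (the real
code differs):

    /* Sketch: round the requested size up to 8 bytes rather than
     * sizeof(long), which is only 4 on an ILP32 host. */
    size = (size + 7) & ~7;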

Backports commit 13aaef678ed377b12b76dc7fb9e615b2f2f9047b from qemu
2018-03-04 01:36:59 -05:00
Richard Henderson 29ea0681d0
tcg/arm: Fix runtime overalignment test
Patch 85aa80813dd changed the IF emitting the TST instruction,
but failed to change the ?: converting CMP to CMPEQ, so the
result of the TST is ignored.

Backports commit ca671de8af96798e0f493378240034620a3a04ee from qemu
2018-03-04 01:36:20 -05:00
James Hogan 4cc63bac09
target/mips: Fix RDHWR CC with icount
RDHWR CC reads the CPU timer like MFC0 CP0_Count, so with icount enabled
it must set can_do_io while it calls the helper to avoid the "Bad icount
read" error. It should also break out of the translation loop to ensure
that timer interrupts are immediately handled.

Backports commit d673a68db6963e86536b125af464bb6ed03eba33 from qemu
2018-03-04 01:35:25 -05:00
James Hogan cb20fdce64
target/mips: Drop redundant gen_io_start/stop()
DMTC0 CP0_Cause does a redundant gen_io_start() and gen_io_end() pair,
even though this is done for all DMTC0 operations outside of the switch
statement. Remove these redundant calls.

Backports commit 51ca717b079dccae5b6cc9f45153f5044abd34f0 from qemu
2018-03-04 01:33:54 -05:00
James Hogan 0afa0c8ddc
target/mips: Use BS_EXCP where interrupts are expected
Commit e350d8ca3ac7 ("target/mips: optimize indirect branches") made
indirect branches able to directly find the next TB and jump straight to
it without breaking out of translated code and going around the main
execution loop. This breaks the assumption in target/mips/translate.c
that BS_STOP is sufficient to cause pending interrupts to be handled,
since interrupts are only checked in the main loop.

Fix a few of these assumptions by using gen_save_pc to update the saved
PC and using BS_EXCP instead of BS_STOP:

- [D]MFC0 CP0_Count may trigger a timer interrupt which should be
immediately handled.

- [D]MTC0 CP0_Cause may trigger an interrupt (but in fact translation
was only ever being stopped in the DMTC0 case).

- [D]MTC0 CP0_<any> when icount is used, since these are assumed to
potentially cause interrupts.

- EI may trigger an interrupt which was pending. I specifically hit
this case when running KVM nested in mipsel-softmmu. A timer
interrupt while the 2nd guest was executing is caught by KVM which
switches back to the normal Linux exception base and re-enables
interrupts with EI. Since the above commit QEMU doesn't leave
translated code until the nested KVM has already restored the KVM
exception base and returned to the 2nd guest, at which point it is
too late to check for pending interrupts and it gets stuck in an
infinite loop of unhandled interrupts.

Something similar was needed for ARM in commit b29fd33db578
("target/arm: use DISAS_EXIT for eret handling").

Backports commit b74cddcbf6063f684725e3f8bca49a68e30cba71 from qemu
2018-03-04 01:32:24 -05:00
Leon Alrae 4a1ec3bb80
target-mips: apply CP0.PageMask before writing into TLB entry
PFN0 and PFN1 have to be masked out with PageMask_Mask.
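
A sketch of what this means in the TLB refill code (field names and
expressions approximate):

    /* Sketch: clear the PageMask-covered bits from the physical frame
     * numbers before writing the TLB entry (r4k_fill_tlb()-style code). */
    tlb->PFN[0] = ((env->CP0_EntryLo0 >> 6) << 12) & ~env->CP0_PageMask;
    tlb->PFN[1] = ((env->CP0_EntryLo1 >> 6) << 12) & ~env->CP0_PageMask;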

Backports commit 2d1847ec1ca47fe82f1d8122409cedffdd3925d5 from qemu
2018-03-04 01:27:51 -05:00
James Hogan 7cf1a4276e
mips: Improve segment defs for KVM T&E guests
Improve the segment definitions used by get_physical_address() to yield
target_ulong types, e.g. 0xffffffff80000000 instead of 0x80000000. This
is in preparation for enabling emulation of MIPS KVM T&E segments in TCG
MIPS targets, which unlike KVM could potentially have 64-bit
target_ulong. In such a case the offset guest KSEG0 address ends up at
e.g. 0x000000008xxxxxxx instead of 0xffffffff8xxxxxxx.

This also allows the casts to int32_t that force sign extension to be
removed, which removes any confusion due to relational comparison of
unsigned (target_ulong) and signed (int32_t) types.
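
For example, a segment base defined this way is properly sign-extended into
target_ulong on 64-bit targets while still reading as 0x80000000 on 32-bit
ones (sketch; the macro name is illustrative):

    /* Sketch: a segment base sign-extended into target_ulong. */
    #define KSEG0_BASE  ((target_ulong)(int32_t)0x80000000UL)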

Backports commit 6743334568933199927af4992a04bfb3c30610f5 from qemu
2018-03-04 01:26:42 -05:00
James Hogan 987401c4d4
target-mips: Don't stop on [d]mtc0 DESAVE/KScratch
Writing to the MIPS DESAVE register (and now the KScratch registers)
will stop translation, supposedly due to risk of execution mode
switches. However these registers are basically RW scratch registers
with no side effects so there is no risk of them triggering execution
mode changes.

Drop the bstate = BS_STOP for these registers for both mtc0 and dmtc0.

Backports commit cb539fd241900f51de7d21244f7a55422ad0d40a from qemu
2018-03-04 01:25:27 -05:00
Anthony PERARD 567bc68803
exec: Add lock parameter to qemu_ram_ptr_length
Commit 04bf2526ce87f21b32c9acba1c5518708c243ad0 (exec: use
qemu_ram_ptr_length to access guest ram) started using qemu_ram_ptr_length
instead of qemu_map_ram_ptr, but when used with Xen the behaviour of the
two functions is different. Both call xen_map_cache, but one with
"lock", meaning the mapping of guest memory is never released
implicitly, and the other without, which means the mapping can be
released later, when needed.

In the context of address_space_{read,write}_continue, the pointer to
those mappings should not be locked, because it is used immediately and
never used again.

The lock parameter makes it explicit in which context qemu_ram_ptr_length
is called.
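
A sketch of the resulting prototype (parameter list approximate):

    /* Sketch: 'lock' selects whether the (Xen) mapping is held until it is
     * explicitly released, or may be dropped once no longer needed. */
    void *qemu_ram_ptr_length(RAMBlock *block, ram_addr_t addr,
                              hwaddr *size, bool lock);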

Backports commit f5aa69bdc3418773f26747ca282c291519626ece from qemu
2018-03-04 01:23:14 -05:00
Peter Maydell d72175d671
target/arm: Move PMSAv7 reset into arm_cpu_reset() so M profile MPUs get reset
When the PMSAv7 implementation was originally added it was for R profile
CPUs only, and reset was handled using the cpreg .resetfn hooks.
Unfortunately for M profile cores this doesn't work, because they do
not register any cpregs. Move the reset handling into arm_cpu_reset(),
where it will work for both R profile and M profile cores.

Backports commit 69ceea64bf565559a2b865ffb2a097d2caab805b from qemu
2018-03-04 01:20:57 -05:00
Peter Maydell 6add2f0f65
target/arm: Rename cp15.c6_rgnr to pmsav7.rnr
Almost all of the PMSAv7 state is in the pmsav7 substruct of
the ARM CPU state structure. The exception is the region
number register, which is in cp15.c6_rgnr. This exception
is a bit odd for M profile, which otherwise generally does
not store state in the cp15 substruct.

Rename cp15.c6_rgnr to pmsav7.rnr accordingly.

Backports commit 8531eb4f614a60e6582d4832b15eee09f7d27874 from qemu
2018-03-04 01:18:53 -05:00
Peter Maydell 266885f50f
target/arm: Don't allow guest to make System space executable for M profile
For an M profile v7PMSA, the system space (0xe0000000 - 0xffffffff) can
never be executable, even if the guest tries to set the MPU registers
up that way. Enforce this restriction.
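
A sketch of the kind of check this adds to the M-profile PMSAv7 path
(fragment; variable names approximate):

    /* Sketch: regardless of the MPU region attributes, never grant
     * execute permission in the System space on M profile. */
    if (arm_feature(env, ARM_FEATURE_M) && address >= 0xe0000000u) {
        *prot &= ~PAGE_EXEC;
    }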

Backports commit bf446a11dfb17ae7d8ed2b61a2444804eb458075 from qemu
2018-03-04 01:17:01 -05:00
Peter Maydell 34b9740081
target/arm: Don't do MPU lookups for addresses in M profile PPB region
The M profile PMSAv7 specification says that if the address being looked
up is in the PPB region (0xe0000000 - 0xe00fffff) then we do not use
the MPU regions but always use the default memory map. Implement this
(we were previously behaving like an R profile PMSAv7, which does not
special case this).

Backports commit 38aaa60ca464b48e6feef346709e97335d01b289 from qemu
2018-03-04 01:14:22 -05:00
Peter Maydell 4dc69f4b26
target/arm: Correct MPU trace handling of write vs execute
Correct an off-by-one bug in the PMSAv7 MPU tracing where it would print
a write access as "reading", an insn fetch as "writing", and a read
access as "execute".

Since we have an MMUAccessType enum now, we can make the code clearer
in the process by using that rather than the raw 0/1/2 values.

Backports commit 709e4407add7acacc593cb6cdac026558c9a8fb6 from qemu
2018-03-04 01:13:19 -05:00
James Hogan b35fb57c84
target/mips: Enable CP0_EBase.WG on MIPS64 CPUs
Enable the CP0_EBase.WG (write gate) on the I6400 and MIPS64R2-generic
CPUs. This allows 64-bit guests to run KVM itself, which uses
CP0_EBase.WG to point CP0_EBase at XKPhys.

Backports commit bad63a8008a0aaefcd00542c89bee01623d7c9de from qemu
2018-03-04 01:09:47 -05:00
James Hogan 16d97568e2
target/mips: Add EVA support to P5600
Add the Enhanced Virtual Addressing (EVA) feature to the P5600 core
configuration, along with the related Segmentation Control (SC) feature
and writable CP0_EBase.WG bit.

This allows it to run Malta EVA kernels.

Backports commit 574da58e4678b3c09048f268821295422d8cde6d from qemu
2018-03-04 01:08:19 -05:00
James Hogan 1ef8c8bd48
target/mips: Implement segmentation control
Implement the optional segmentation control feature in the virtual to
physical address translation code.

The fixed legacy segment and xkphys handling is replaced with a dynamic
layout based on the segmentation control registers (which should be set
up even when the feature is not exposed to the guest).

Backports commit 480e79aedd322fcfac17052caff21626ea7c78e2 from qemu
2018-03-04 01:06:13 -05:00
James Hogan ddbea9422c
target/mips: Add segmentation control registers
The optional segmentation control registers CP0_SegCtl0, CP0_SegCtl1 &
CP0_SegCtl2 control the behaviour and required privilege of the legacy
virtual memory segments.

Add them to the CP0 interface so they can be read and written when
CP0_Config3.SC=1, and initialise them to describe the standard legacy
layout so they can be used in future patches regardless of whether they
are exposed to the guest.

Backports commit cec56a733dd2c3fa81dbedbecf03922258747f7d from qemu
2018-03-04 01:00:42 -05:00
James Hogan 7e9b84ca1a
target/mips: Add an MMU mode for ERL
The segmentation control feature allows a legacy memory segment to
become unmapped uncached at error level (according to CP0_Status.ERL),
and in fact the user segment is already treated in this way by QEMU.

Add a new MMU mode for this state so that QEMU's mappings don't persist
between ERL=0 and ERL=1.

Backports commit 42c86612d507c2a8789f2b8d920a244693c4ef7b from qemu
2018-03-04 00:47:19 -05:00
James Hogan f285157856
target/mips: Abstract mmu_idx from hflags
The MIPS mmu_idx is sometimes calculated from hflags without an env
pointer available as cpu_mmu_index() requires.

Create a common hflags_mmu_index() for the purpose of this calculation
which can operate on any hflags, not just with an env pointer, and
update cpu_mmu_index() itself and gen_intermediate_code() to use it.

Also update debug_post_eret() and helper_mtc0_status() to log the MMU
mode with the status change (SM, UM, or nothing for kernel mode) based
on cpu_mmu_index() rather than directly testing hflags.

This will also allow the logic to be more easily updated when a new MMU
mode is added.
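
A sketch of what the split looks like (the real helper gains more cases as
later patches land):

    /* Sketch: derive the MMU index from an arbitrary hflags value... */
    static inline int hflags_mmu_index(uint32_t hflags)
    {
        return hflags & MIPS_HFLAG_KSU;   /* 0 kernel, 1 supervisor, 2 user */
    }

    /* ...and keep cpu_mmu_index() as a thin wrapper over it. */
    static inline int cpu_mmu_index(CPUMIPSState *env, bool ifetch)
    {
        return hflags_mmu_index(env->hflags);
    }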

Backports commit b0fc6003224543d2bdb172eca752656a6223e4a1 from qemu
2018-03-04 00:45:00 -05:00
James Hogan 8595d11eb4
target/mips: Check memory permissions with mem_idx
When performing virtual to physical address translation, check the
required privilege level based on the mem_idx rather than the mode in
the hflags. This will allow EVA loads & stores to operate safely only on
user memory from kernel mode.

For the cases where the mmu_idx doesn't need to be overridden
(mips_cpu_get_phys_page_debug() and cpu_mips_translate_address()), we
calculate the required mmu_idx using cpu_mmu_index(). Note that this
only tests the MIPS_HFLAG_KSU bits rather than MIPS_HFLAG_MODE, so we
don't test the debug mode hflag MIPS_HFLAG_DM any longer. This should be
fine as get_physical_address() only compares against MIPS_HFLAG_UM and
MIPS_HFLAG_SM, neither of which should get set by compute_hflags() when
MIPS_HFLAG_DM is set.

Backports commit 9fbf4a58c90183b30bb2c8ad971ccce7e6716a16 from qemu
2018-03-04 00:40:22 -05:00
James Hogan 54b349aee5
target/mips: Decode microMIPS EVA load & store instructions
Implement decoding of microMIPS EVA load and store instruction groups in
the POOL31C pool. These use the same gen_ld(), gen_st(), gen_st_cond()
helpers as the MIPS32 decoding, passing the equivalent MIPS32 opcodes as
opc.

Backports commit 8fffc64696783b1ff1d17262d098976479895660 from qemu
2018-03-04 00:37:39 -05:00
Leon Alrae 8fadc55db3
target-mips: make ITC Configuration Tags accessible to the CPU
Add CP0.ErrCtl register with WST, SPR and ITC bits. In 34K and interAptiv
processors these bits are used to enable CACHE instruction access to
different arrays. When WST=0, SPR=0 and ITC=1 the CACHE instruction will
access ITC tag values.

Generally we do not model caches, and we have been treating the CACHE
instruction as a NOP. But since CACHE can operate on ITC tags, a new
MIPS_HFLAG_ITC_CACHE hflag is introduced to generate the helper only when
CACHE is in ITC Access mode.

Backports commit 0d74a222c27e26fc40f4f6120c61c3f9ceaa3776 from qemu
2018-03-04 00:34:30 -05:00
Leon Alrae a338e9c855
target-mips: enable CM GCR in MIPS64R6-generic CPU
2018-03-04 00:24:09 -05:00