Commit graph

2340 commits

Yongbok Kim 922d30c448
target-mips: Misaligned memory accesses for MSA
MIPS SIMD Architecture vector loads and stores require misalignment support.
An MSA memory access must behave as an atomic operation, so the validity of
all the addresses touched by a vector store has to be checked up front when
the access spans two pages.

Separate the helper functions for each data format, since the format is known
at translation time. Use mmu_idx from cpu_mmu_index() instead of calculating
it from hflags. Remove the save_cpu_state() call in translation, since
cpu_restore_state() can be used on a fault now that GETRA() is passed.
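
A minimal standalone sketch of the check described above; page_valid() is a
hypothetical stand-in for the real MMU probe, not the QEMU helper itself:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define PAGE_SIZE 4096u

    /* hypothetical stand-in for the MMU's "is this page writable?" probe */
    extern bool page_valid(uintptr_t page_addr);

    /* Return true only if every byte of the vector store is writable, so
     * the store either completes in full or faults before writing anything. */
    bool msa_store_ok(uintptr_t addr, size_t len)
    {
        uintptr_t first = addr & ~(uintptr_t)(PAGE_SIZE - 1);
        uintptr_t last = (addr + len - 1) & ~(uintptr_t)(PAGE_SIZE - 1);

        if (!page_valid(first)) {
            return false;                     /* fault on the first page */
        }
        if (last != first && !page_valid(last)) {
            return false;          /* the access spans into a second page */
        }
        return true;
    }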

Backports commit adc370a48fd26b92188fa4848dfb088578b1936c from qemu
2018-02-13 13:27:31 -05:00
Yongbok Kim 6d0766f246
target-mips: Misaligned memory accesses for R6
Release 6 requires misaligned memory access support for all ordinary memory
access instructions (for example, LW/SW, LWC1/SWC1).
However, misaligned support is not provided for certain special memory accesses
such as atomics (for example, LL/SC).

Backports commit be3a8c53b4f18bcc51a462d977cc61a0f46ebb1c from qemu
2018-02-13 13:11:39 -05:00
Leon Alrae c54458b638
target-mips: add Config5.FRE support allowing Status.FR=0 emulation
This relatively small architectural feature adds the following:

FIR.FREP: Read-only. If FREP=1, then Config5.FRE and Config5.UFE are
available.

Config5.FRE: When enabled, all single-precision FP arithmetic instructions,
LWC1/LWXC1/MTC1, and SWC1/SWXC1/MFC1 cause a Reserved Instruction
exception.

Config5.UFE: Allows the user to write/read Config5.FRE using CTC1/CFC1
instructions.

Enable the feature in the MIPS64R6-generic CPU.
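
A standalone sketch of how the three bits interact; the struct and field
names are illustrative, not the QEMU implementation:

    #include <stdbool.h>

    struct fre_state {
        bool frep; /* FIR.FREP: read-only, feature present      */
        bool fre;  /* Config5.FRE: trap single-precision FP ops */
        bool ufe;  /* Config5.UFE: user may toggle Config5.FRE  */
    };

    /* does a single-precision FP instruction raise Reserved Instruction? */
    bool fp32_insn_traps(const struct fre_state *s)
    {
        return s->frep && s->fre;
    }

    /* may user-mode CTC1/CFC1 access Config5.FRE? */
    bool user_fre_access_ok(const struct fre_state *s)
    {
        return s->frep && s->ufe;
    }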

Backports commit 7c979afd11b09a16634699dd6344e3ba10c9677e from qemu
2018-02-13 13:05:22 -05:00
Leon Alrae 428ffed744
target-mips: move group of functions above gen_load_fpr32()
Move the "Tests" group of functions so that gen_load_fpr32() and
gen_store_fpr32() can use generate_exception().

Backports commit eab9944c7801525737626fa45cddaf00932dd2c8 from qemu
2018-02-13 12:45:27 -05:00
Paolo Bonzini 91503663e2
target-i386: create a separate AddressSpace for each CPU
Different CPUs can be in or out of SMM at the same time, so they can see
different contents in the region where the chipset places SMRAM.

Backports commit 2001d0cd6d55e5efa9956fa8ff8b89034d6a4329 from qemu
2018-02-13 12:36:26 -05:00
Paolo Bonzini fa57438734
target-i386: wake up processors that receive an SMI
An SMI should definitely wake up a processor in halted state!
This lets OVMF boot with SMM on multiprocessor systems, although
it halts very soon after that with a "CpuIndex != BspIndex"
assertion failure.

Backports commit a9bad65d2c1f61af74ce2ff43238d4b20bf81c3a from qemu
2018-02-13 12:32:24 -05:00
Paolo Bonzini d4621935bb
target-i386: set G=1 in SMM big real mode selectors
Because bits 31:20 of the limit field are all 1, G should be 1.
VMX actually enforces this, let's do it for completeness
in QEMU as well.
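
Worked out: with G=1 the 20-bit limit is scaled by the 4 KiB page size, so an
all-ones field yields (0xfffff << 12) | 0xfff = 0xffffffff, the 4 GiB flat
limit big real mode expects; with G=0 the same field would only reach 1 MiB.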

Backports commit b4854f1384176d897747de236f426d020668fa3c from qemu
2018-02-13 12:31:18 -05:00
Paolo Bonzini 1fed54da89
target-i386: mask NMIs on entry to SMM
QEMU is not blocking NMIs on entry to SMM. Implementing this has to
cover a few corner cases, because:

- NMIs can then be enabled by an IRET instruction and there
is no mechanism to _set_ the "NMIs masked" flag on exit from SMM:
"A special case can occur if an SMI handler nests inside an NMI handler
and then another NMI occurs. [...] When the processor enters SMM while
executing an NMI handler, the processor saves the SMRAM state save map
but does not save the attribute to keep NMI interrupts disabled."

- However, there is some hidden state, because "If NMIs were blocked
before the SMI occurred [and no IRET is executed while in SMM], they
are blocked after execution of RSM." This is represented by the new
HF2_SMM_INSIDE_NMI_MASK bit. If it is zero, NMIs are _unblocked_
on exit from RSM.
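
A standalone sketch of the resulting state machine; the field names are
illustrative, modeled on the HF2_SMM_INSIDE_NMI_MASK bit described above:

    #include <stdbool.h>

    struct nmi_state {
        bool nmi_masked; /* are NMIs currently blocked?            */
        bool inside_nmi; /* hidden state: blocked when the SMI hit */
    };

    void on_smi(struct nmi_state *s)
    {
        s->inside_nmi = s->nmi_masked; /* remembered, not saved to SMRAM */
        s->nmi_masked = true;          /* NMIs blocked on entry to SMM   */
    }

    void on_iret(struct nmi_state *s)
    {
        s->nmi_masked = false; /* IRET inside SMM re-enables NMIs...      */
        s->inside_nmi = false; /* ...and there is no way to re-block them */
    }

    void on_rsm(struct nmi_state *s)
    {
        /* still blocked only if blocked before the SMI and no IRET ran */
        s->nmi_masked = s->inside_nmi;
    }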

Backports commit 9982f74bad70479939491b69522da047a3be5a0d from qemu
2018-02-13 12:29:31 -05:00
Paolo Bonzini e57e92feca
target-i386: Use correct memory attributes for ioport accesses
In order to do this, stop using the cpu_in*/out* helpers, and instead
access address_space_io directly.

cpu_in* and cpu_out* remain for usage in the monitor, in qtest, and
in Xen.
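
The pattern the change adopts looks roughly like this (a hedged sketch, not
the exact backported code):

    /* before: attribute-less helper */
    cpu_outl(port, data);

    /* after: go through address_space_io with the CPU's memory attributes */
    address_space_stl(&address_space_io, port, data,
                      cpu_get_mem_attrs(env), NULL);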

Backports commit 3f7d84648607cc0fcb3812bb4b88978e2a7aa24f from qemu
2018-02-13 12:27:43 -05:00
Lioncash 25a58231fc
target-i386: Use correct memory attributes for memory accesses
These include page table walks, SVM accesses and SMM state save accesses.

The bulk of the patch is obtained with

sed -i 's/\(\<[a-z_]*_phys\(_notdirty\)\?\>(cs\)->as,/x86_\1,/'
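
For example, the substitution rewrites a call such as

    ldl_phys(cs->as, addr)

into

    x86_ldl_phys(cs, addr)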

Backports commit b216aa6c0fcbaa8ff4128969c14594896a5485a4 from qemu
2018-02-13 11:54:12 -05:00
Paolo Bonzini dc80b0893f
target-i386: introduce cpu_get_mem_attrs
Backports commit f794aa4a2fd772a3ec413c4e478cc23857cfee98 from qemu
2018-02-13 11:33:39 -05:00
Paolo Bonzini 88d1506e6a
memory: use mr->ram_addr in "is this RAM?" assertions
mr->terminates alone doesn't guarantee that we are looking at a RAM region.
mr->ram_addr also has to be checked, in order to distinguish RAM and I/O
regions.

So, do the following:

1) add a new define RAM_ADDR_INVALID, and test it in the assertions
instead of mr->terminates

2) IOMMU regions were not setting mr->ram_addr to a bogus value; initialize
it in the instance_init function so that the new assertions fire
for IOMMU regions as well.

Backports commit ec05ec26f940564b1e07bf88857035ec27e21dd8 from qemu
2018-02-13 11:31:02 -05:00
Stefan Hajnoczi fc7b95d06a
memory: replace cpu_physical_memory_reset_dirty() with test-and-clear
The cpu_physical_memory_reset_dirty() function is sometimes used
together with cpu_physical_memory_get_dirty(). This is not atomic since
two separate accesses to the dirty memory bitmap are made.

Turn cpu_physical_memory_reset_dirty() and
cpu_physical_memory_clear_dirty_range_type() into the atomic
cpu_physical_memory_test_and_clear_dirty().
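
A standalone sketch of the difference on a single bitmap word, with C11
atomics standing in for QEMU's wrappers:

    #include <stdatomic.h>
    #include <stdbool.h>

    /* Racy: a bit set by another thread between the load and the store is
     * wiped without ever having been observed as dirty. */
    bool get_then_clear(atomic_ulong *word)
    {
        bool was_dirty = atomic_load(word) != 0;
        atomic_store(word, 0);
        return was_dirty;
    }

    /* Atomic: the exchange reads and clears in one indivisible step. */
    bool test_and_clear(atomic_ulong *word)
    {
        return atomic_exchange(word, 0) != 0;
    }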

Backports commit 03eebc9e3246b9b3f5925aa41f7dfd7c1e467875 from qemu
2018-02-13 11:25:45 -05:00
Stefan Hajnoczi 18ccd4b5be
memory: use atomic ops for setting dirty memory bits
Use set_bit_atomic() and bitmap_set_atomic() so that multiple threads
can dirty memory without race conditions.

Backports commit d114875b9a1c21162f69a12d72f69a22e7bab376 from qemu
2018-02-13 11:07:48 -05:00
Paolo Bonzini 6d509f7333
exec: only check relevant bitmaps for cleanliness
Most of the time, not all bitmaps have to be marked as dirty;
do not do anything if the interesting ones are already dirty.
Previously, any clean bitmap would have caused all the bitmaps to be
marked dirty.

In fact, unless TCG is running, most of the time bitmap operations need
not be done at all, because memory_region_is_logging returns zero.
In this case, skip the call to cpu_physical_memory_range_includes_clean
altogether as well.

With this patch, cpu_physical_memory_set_dirty_range is called
unconditionally, so a separate call to xen_modified_memory is no
longer needed.

Backports commit e87f7778b64d4a6a78e16c288c7fdc6c15317d5f from qemu
2018-02-13 11:03:26 -05:00
Paolo Bonzini 6bbfcf65e8
memory: do not touch code dirty bitmap unless TCG is enabled
cpu_physical_memory_set_dirty_lebitmap unconditionally syncs the
DIRTY_MEMORY_CODE bitmap. This, however, is unused unless TCG is
enabled.

Backports commit 9460dee4b2258e3990906fb34099481c8334c267 from qemu
2018-02-13 10:48:14 -05:00
Stefan Hajnoczi 6172e3dc29
bitmap: add atomic test and clear
The new bitmap_test_and_clear_atomic() function clears a range and
returns whether or not the bits were set.

Backports commit 36546e5b803f6e363906607307f27c489441fd15 from qemu
2018-02-13 10:02:12 -05:00
Stefan Hajnoczi 7ff5f05c82
bitmap: add atomic set functions
Use atomic_or() for atomic bitmaps where several threads may set bits at
the same time. This avoids the race condition between threads loading
an element, bitwise ORing, and then storing the element.

When setting all bits in a word we can avoid atomic ops and instead just
use an smp_mb() at the end.

Most bitmap users don't need atomicity so introduce new functions.
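
A standalone sketch of the two cases, again with C11 atomics standing in for
QEMU's wrappers:

    #include <stdatomic.h>

    #define BITS_PER_LONG (8 * sizeof(unsigned long))

    /* Partial word: other threads may own the remaining bits, so OR
     * atomically to avoid losing their concurrent updates. */
    void set_bit_atomic(atomic_ulong *map, unsigned long bit)
    {
        atomic_fetch_or(&map[bit / BITS_PER_LONG],
                        1UL << (bit % BITS_PER_LONG));
    }

    /* Whole words: every bit is being set, so relaxed stores suffice,
     * with one fence at the end (the smp_mb()) to publish them all. */
    void set_words(atomic_ulong *map, unsigned long word, unsigned long n)
    {
        for (unsigned long i = 0; i < n; i++) {
            atomic_store_explicit(&map[word + i], ~0UL,
                                  memory_order_relaxed);
        }
        atomic_thread_fence(memory_order_seq_cst);
    }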

Backports commit 9f02cfc84b85929947b32fe1674fbc6a429f332a from qemu
2018-02-13 09:59:30 -05:00
Paolo Bonzini 1b1f82cef7
exec: invert return value of cpu_physical_memory_get_clean, rename
While it is obvious that cpu_physical_memory_get_dirty returns true even if
a single page is dirty, the same is not true for cpu_physical_memory_get_clean;
one would expect that it returns true only if all the pages are clean, but
it actually looks for even one clean page. (By contrast, the caller of that
function, cpu_physical_memory_range_includes_clean, has a good name).

To clarify, rename the function to cpu_physical_memory_all_dirty and return
true if _all_ the pages are dirty. This is the opposite of the previous
meaning, because "all are 1" is the same as "not (any is 0)", so we have to
modify cpu_physical_memory_range_includes_clean as well.

Backports commit 72b47e79cef36ed6ffc718f10e21001d7ec2a66f from qemu
2018-02-13 09:54:12 -05:00
Paolo Bonzini c333585a4d
translate-all: make less of tb_invalidate_phys_page_range depend on is_cpu_write_access
is_cpu_write_access is only set if tb_invalidate_phys_page_range is called
from tb_invalidate_phys_page_fast, and hence from notdirty_mem_write.
However:

- the code bitmap can be built directly in tb_invalidate_phys_page_fast
(unconditionally, since is_cpu_write_access would always be passed as 1);

- the virtual address is not needed to mark the page as "not containing
code" (dirty code bitmap = 1), so we can also remove that use of
is_cpu_write_access. For calls of tb_invalidate_phys_page_range
that do not come from notdirty_mem_write, the next call to
notdirty_mem_write will notice that the page does not contain code
anymore, and will fix up the TLB entry.

The parameter needs to remain in order to guard accesses to cpu->mem_io_pc.

Backports commit fc377bcf617a48233a99a9fe0a26247c38b5cb76 from qemu
2018-02-13 09:18:49 -05:00
Lioncash 98e2ffba3a
unicorn_arm: m68k/translate: Build fixes 2018-02-13 09:15:46 -05:00
Paolo Bonzini f578c89e8b
cputlb: remove useless arguments to tlb_unprotect_code_phys, rename
These days modification of the TLB is done in notdirty_mem_write,
so the virtual address and env pointer are unnecessary.

The new name of the function, tlb_unprotect_code, is consistent with
tlb_protect_code.

Backports commit 9564f52da7eb061326956ed9a468935e3352512d from qemu
2018-02-13 09:07:41 -05:00
Paolo Bonzini b5c9645a0f
translate-all: remove unnecessary argument to tb_invalidate_phys_range
The is_cpu_write_access argument is always 0, remove it.

Backports commit 358653391b0c0beaa0e3f9e28304e1918cd223b3 from qemu
2018-02-13 09:04:51 -05:00
Lioncash 72c8e4d264
exec: move functions to translate-all.h
Remove them from the sundry exec-all.h header, since they are only used by
the TCG runtime in exec.c and user-exec.c.

Backports commit 1652b974766401743879d78f796f44b8929b0787 from qemu
2018-02-13 09:01:45 -05:00
Paolo Bonzini c82ea2b20b
memory: track DIRTY_MEMORY_CODE in mr->dirty_log_mask
DIRTY_MEMORY_CODE is only needed for TCG. By adding it directly to
mr->dirty_log_mask, we avoid testing for TCG everywhere a region is
checked for the enabled/disabled state of dirty logging.

Backports commit 677e7805cf95f3b2bca8baf0888d1ebed7f0c606 from qemu
2018-02-13 08:55:42 -05:00
Paolo Bonzini e3d1cef8fb
memory: prepare for multiple bits in the dirty log mask
When the dirty log mask also covers bits other than DIRTY_MEMORY_VGA,
some listeners may be interested in the overall zero/non-zero value of
the dirty log mask; others may be interested in the value of single bits.

For this reason, always call log_start/log_stop if bits have respectively
appeared or disappeared, and pass the old and new values of the dirty log
mask so that listeners can distinguish the kinds of change.

For example, KVM checks if dirty logging used to be completely disabled
(in log_start) or is now completely disabled (in log_stop). On the
other hand, Xen has to check manually if DIRTY_MEMORY_VGA changed,
since that is the only bit it cares about.

Backports commit b2dfd71c4843a762f2befe702adb249cf55baf66 from qemu
2018-02-13 08:52:23 -05:00
Paolo Bonzini 1551573acc
memory: differentiate memory_region_is_logging and memory_region_get_dirty_log_mask
For now memory regions only track DIRTY_MEMORY_VGA individually, but
this will change soon. To support this, split memory_region_is_logging
in two functions: one that returns a given bit from dirty_log_mask,
and one that returns the entire mask. memory_region_is_logging gets an
extra parameter so that the compiler flags misuse.

While VGA-specific users (including the Xen listener!) will want to keep
checking that bit, KVM and vhost check for "any bit except migration"
(because migration is handled via the global start/stop listener
callbacks).
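
The resulting pair looks roughly like this (a sketch consistent with the
description above, not necessarily the exact backported code):

    /* the whole mask, for listeners that care about several bits */
    uint8_t memory_region_get_dirty_log_mask(MemoryRegion *mr)
    {
        return mr->dirty_log_mask;
    }

    /* one client bit; the uint8_t parameter makes misuse a compile error */
    bool memory_region_is_logging(MemoryRegion *mr, uint8_t client)
    {
        return memory_region_get_dirty_log_mask(mr) & (1 << client);
    }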

Backports commit 2d1a35bef0ed96b3f23535e459c552414ccdbafd from qemu
2018-02-13 08:41:44 -05:00
Paolo Bonzini 9847ba46d6
exec: optimize phys_page_set_level
phys_page_set_level is writing zeroes to a struct that has just been
filled in by phys_map_node_alloc. Instead, tell phys_map_node_alloc
whether to fill in the page "as a leaf" or "as a non-leaf".

memcpy is faster than struct assignment, which copies each bitfield
individually because of a compiler bug (https://gcc.gnu.org/PR66391).
Small memcpys like this one are special-cased anyway, and optimized
to a register move, so just use the memcpy.

This cuts the cost of phys_page_set_level from 25% to 5% when
booting qboot.

Backports commit db94604b20278c1dc227a04e4c564d80230e6c3f from qemu
2018-02-13 08:38:24 -05:00
Paolo Bonzini 96e7e32972
softmmu: support up to 12 MMU modes
At 8k per TLB (for 64-bit host or target), 8 or more modes
make the TLBs bigger than 64k, and some RISC TCG backends do
not like that. On the affected hosts, cut the TLB size in
half---there is still a measurable speedup on PPC with the
next patch.
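
The arithmetic, assuming the usual 32-byte CPUTLBEntry for a 64-bit host and
target: 256 entries × 32 bytes = 8 KiB per MMU mode, so 8 modes already need
64 KiB; halving the per-mode entry count brings even 12 modes down to 48 KiB.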

Backports commit 1de29aef17a7d70dbc04a7fe51e18942e3ebe313 from qemu
2018-02-13 08:34:52 -05:00
Paolo Bonzini b34c233c2f
tcg: add TCG_TARGET_TLB_DISPLACEMENT_BITS
This will be used to size the TLB when more than 8 MMU modes are
used by the target. Limitations come from the limited size of
the immediate fields (which sometimes, as in the case of AArch64,
extend to instructions that shift the immediate).

Backports commit 006f8638c62bca2b0caf609485f47fa5e14d8a3c from qemu
2018-02-13 08:28:29 -05:00
Paolo Bonzini e37ed23e7f
translate-all: delete prototype for non-existent function
Missed in commit 3a808cc40

Backports commit b6b099541d6cf3c50b0fb5af916fff0db6508805 from qemu
2018-02-13 08:26:08 -05:00
Eduardo Habkost 79df79434d
target-i386: Fix signedness of MSR_IA32_APICBASE_BASE
Existing definition triggers the following when using clang
-fsanitize=undefined:

hw/intc/apic_common.c:314:55: runtime error: left shift of 1048575 by 12
places cannot be represented in type 'int'

Fix it so we won't try to shift a 1 to the sign bit of a signed integer.
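
A standalone illustration of the fix (the constant name aside):

    /* 0xfffff << 12 is evaluated in type int; the result would land in
     * the sign bit, which is undefined behaviour for signed left shifts. */
    #define APICBASE_BASE_BAD (0xfffff << 12)

    /* an unsigned 64-bit constant makes the same shift well defined */
    #define APICBASE_BASE_OK  (0xfffffULL << 12)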

Backports commit 458cf469f4a1cb520b07092f5537c5a6d2389d23 from qemu
2018-02-13 08:17:18 -05:00
Peter Maydell ed58b3afac
target-arm: Remove v8_ prefix from names of non-v8-specific cpreg arrays
The ARMCPRegInfo arrays v8_el3_no_el2_cp_reginfo and v8_el2_cp_reginfo
are actually used on non-v8 CPUs as well. Remove the incorrect v8_
prefix from their names.

Backports commit 4771cd01daaccb2a8929fa04c88c608e378cf814 from qemu
2018-02-13 08:16:23 -05:00
Edgar E. Iglesias 8a8e174981
target-arm: Add TLBI_VAE2{IS}
Backports commit 8742d49d6f2278d353a1623dfa8a5e237dbfd906 from qemu
2018-02-13 08:13:37 -05:00
Edgar E. Iglesias a2bab5d679
target-arm: Add TLBI_ALLE2
Backports commit 51da90140bba4333eeb9c1d8d8d8afc2ca790628 from qemu
2018-02-13 08:10:37 -05:00
Edgar E. Iglesias 4fdfb4e39b
target-arm: Add TLBI_ALLE1{IS}
Backports commit bdb9e2d66afbe0571dce48a9430c35ae4d6bbd32 from qemu
2018-02-13 08:07:46 -05:00
Edgar E. Iglesias 74daefe28b
target-arm: Add TTBR0_EL2
Backports commit a57633c08fa861807a0713505785bd4d441d7df8 from qemu
2018-02-13 08:04:35 -05:00
Edgar E. Iglesias 2ec3c2da5d
target-arm: Add TPIDR_EL2
Backports commit ff05f37babe7874f28dcead6e9e4f1904d35a13a from qemu
2018-02-13 08:00:13 -05:00
Edgar E. Iglesias 0374ab0421
target-arm: Add SCTLR_EL2
Backports commit b9cb5323bb671a0f2bfecc36168d3a3763e90261 from qemu
2018-02-13 07:57:45 -05:00
Edgar E. Iglesias ca6a626ad6
target-arm: Add TCR_EL2
Backports commit 06ec4c8c9f9e21b7671c79296f3a47ab63d50067 from qemu
2018-02-12 23:33:29 -05:00
Edgar E. Iglesias 8fc0629227
target-arm: Add MAIR_EL2
Backports commit 95f949ac3dc7d4a6ebee512a9d122db18210df64 from qemu
2018-02-12 23:27:49 -05:00
Edgar E. Iglesias 956655f449
target-arm: Break down TLB_LOCKDOWN
Break the overly broad wildcard definition of TLB_LOCKDOWN down to
the v7 level.

Backports commit a903c449b41f105aadd5f762a7aede531b4950f0 from qemu
2018-02-12 23:22:29 -05:00
Edgar E. Iglesias 49d2795fd9
target-arm: Correct check for non-EL3
This fixes a compile warning from clang 3.5 (the assertion
could never fire).

Backports commit 3fc827d591679f3e262b9d1f8b34528eabfca8c0 from qemu
2018-02-12 23:15:53 -05:00
Greg Bellows 2ffb7d0707
target-arm: Add WFx instruction trap support
Add support for trapping WFI and WFE instructions to the proper EL when
SCTLR/SCR/HCR settings apply.

Backports commit b1eced713d9913a5c58ba9daa795f10e4c856c49 from qemu
2018-02-12 23:13:23 -05:00
Peter Maydell 6d7370457f
target-arm: Don't halt on WFI unless we don't have any work
Just NOP the WFI instruction if we have work to do.
This doesn't make much difference currently (though it does avoid
jumping out to the top level loop and immediately restarting),
but the distinction between "halt" and "don't halt" will become
more important when the decision to halt requires us to trap
to a higher exception level instead.
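
A hedged sketch of the resulting helper logic, simplified from the real
helper_wfi:

    void helper_wfi(CPUARMState *env)
    {
        CPUState *cs = CPU(arm_env_get_cpu(env));

        if (cpu_has_work(cs)) {
            return;            /* pending work: WFI behaves as a NOP */
        }
        cs->halted = 1;        /* otherwise halt until an interrupt  */
        cs->exception_index = EXCP_HLT;
        cpu_loop_exit(cs);
    }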

Backports commit 84549b6dcf9147559ec08b066de673587be6b763 from qemu
2018-02-12 23:10:45 -05:00
Peter Maydell db727fad68
target-arm: Move TB flags down to fill gap
Deleting the now-unused ARM_TBFLAG_CPACR_FPEN left a gap in the
bit usage; move the following ARM_TBFLAG_XSCALE_CPAR and
ARM_TBFLAG_NS_SHIFT down 3 bits to fill the gap.

Backports commit 647f767ba3b37fb229275086187e96242248a4ac from qemu
2018-02-12 23:07:55 -05:00
Lioncash 63f04de43b
target-arm: Amend assert conditional in access_check_cp_reg
This was present in the original backported patch
2018-02-12 23:06:24 -05:00
Greg Bellows 3c87e50745
target-arm: Extend FP checks to use an EL
Extend the ARM disassembly context to take a target exception EL instead of a
boolean enable. This change reverses the polarity of the check, making a value
of 0 indicate floating point enabled (no exception).

Backports commit 9dbbc748d671c70599101836cd1c2719d92f3017 from qemu
2018-02-12 23:04:19 -05:00
Peter Maydell a41d967577
target-arm: Make singlestate TB flags common between AArch32/64
Currently we keep the TB flags PSTATE_SS and SS_ACTIVE in different
bit positions for AArch64 and AArch32. Replace these separate
definitions with a single common flag in the upper part of the
flags word.

Backports commit 3cf6a0fcedd429693d439556543400d5f0e31e1d from qemu
2018-02-12 22:57:53 -05:00
Greg Bellows 8c674b105c
target-arm: Add AArch64 CPTR registers
Adds CPTR_EL2/3 system register definitions and access functions.

Signed-off-by: Greg Bellows <greg.bellows@linaro.org>
[PMM: merge CPTR_EL2 and HCPTR definitions into a single
def using STATE_BOTH;
don't use readfn/writefn to implement RAZ/WI registers;
don't use accessfn for the no-EL2 CPTR_EL2;
fix cpacr_access logic to catch EL2 accesses to CPACR being
trapped to EL3;
use new CP_ACCESS_TRAP_EL[23] rather than setting
exception.target_el directly]

Backports commit c6f191642a4027909813b4e6e288411f8371e951 from qemu
2018-02-12 22:54:14 -05:00