Commit graph

2157 commits

Peter Crosthwaite 4c204e6f3f
arm: helper: Factor out CP regs common to [pv]msa
V6+ PMSA and VMSA share some common registers that are currently
in the VMSA definition block. Split them out into a new def that can
be shared with PMSA.

Backports commit 8e5d75c950a1241f6e1243c37f28cd58f68fedc9 from qemu
2018-02-17 15:22:31 -05:00
Peter Crosthwaite 62ddaba69f
arm: Don't add v7mp registers in MPU systems
These registers are VMSA-specific, so they should be conditional on
VMSA (i.e. !MPU).
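
A minimal sketch of the gating, assuming QEMU's usual feature-test and
cp-reg registration helpers (the exact condition in the patch may differ):

    /* register the v7MP cp regs only on VMSA (non-MPU) systems */
    if (arm_feature(env, ARM_FEATURE_V7MP) &&
        !arm_feature(env, ARM_FEATURE_MPU)) {
        define_arm_cp_regs(cpu, v7mp_cp_reginfo);
    }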

Backports commit 5e5cf9e35f25f9f932a6ce25107c11b67b426a43 from qemu
2018-02-13 14:37:22 -05:00
Peter Crosthwaite 70fae13253
arm: Do not define TLBTR in PMSA systems
If doing a PMSA (MPU) system, do not define the VMSA-specific TLBTR CP.
The def is done separately from VMSA registers group as it is affected
by both the OMAP/STRONGARM RW errata and the MIDR backgrounding.

Backports commit 8085ce63c5967d200f1241b6c0a189371993c5df from qemu
2018-02-13 14:35:38 -05:00
Pavel Fedin caed2f123d
target-arm: Use the kernel's idea of MPIDR if we're using KVM
When we're using KVM, the kernel's internal idea of the MPIDR
affinity fields must match the values we tell it for the guest
vcpu cluster configuration in the device tree. Since at the moment
the kernel doesn't support letting userspace tell it the correct
affinity fields to use, we must read the kernel's view and
reflect that back in the device tree.

Backports commit eb5e1d3c85dffe677da2550d211f9304a7d5ba3b from qemu
2018-02-13 14:32:46 -05:00
Sergey Fedorov 614ecb2bf2
target-arm: add AArch32 MIDR aliases in ARMv8
According to the ARMv8 ARM, there are additional aliases of the MIDR system
register in AArch32 state. So add them to the list.

Backports commit ac00c79ff6635ae9fd732ff357ada0d05e795500 from qemu
2018-02-13 14:28:11 -05:00
Sergey Fedorov 113cda90c3
target-arm: Fix REVIDR reset value
According to the ARM Cortex-A53/A57 TRM, the REVIDR reset value should be zero.
So let the REVIDR reset value be specified by the CPU model, and correct it for
Cortex-A53/A57.

Backports commit 13b72b2b9aa7ab7ee129e38e9587acd6a1b9a932 from qemu
2018-02-13 14:24:08 -05:00
Alex Bennée f6d104b33a
target-arm/cpu.h: remove pending_exception
This isn't used by any of the code. In fact it looks like it was never
used as it came in with ARMv7 support.

Backports commit a79e0218e0ae27c9cdd2648bd46e5a916c903cc2 from qemu
2018-02-13 14:24:08 -05:00
Sergey Fedorov 7f53358ec1
target-arm: use extended address bits from supersection short descriptor
Since ARMv7 with LPAE support, a supersection short translation table
descriptor has had extended base address fields which hold bits 39:32 of the
translated address. These fields are IMPDEF in ARMv6 and ARMv7 without
LPAE support.
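
A sketch of the resulting address assembly, with the extended-base fields
placed per the short-descriptor supersection format (extract32() is QEMU's
bitfield helper; not necessarily the exact hunk):

    /* 16MB supersection: PA[31:24] from the descriptor, plus the
     * extended fields holding PA[35:32] and PA[39:36] */
    phys_addr = (desc & 0xff000000) | (address & 0x00ffffff);
    phys_addr |= (hwaddr)extract32(desc, 20, 4) << 32;
    phys_addr |= (hwaddr)extract32(desc, 5, 4) << 36;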

Backports commit 4e42a6ca37e39e56725518851f4388e46bd91129 from qemu
2018-02-13 14:24:08 -05:00
Peter Maydell 84c75286f5
target-arm: Handle "extended small page" descriptors correctly
The old ARMv5-style page table format includes a kind of second level
descriptor named the "extended small page" format, whose primary purpose
is to allow specification of the TEX memory attribute bits on a 4K page.
This exists on ARMv6 and also (as an implementation extension) on XScale
CPUs; it's UNPREDICTABLE on v5.

We were mishandling this in two ways:
(1) we weren't implementing it for v6 (probably never noticed because
Linux will use the new-style v6 page table format there)
(2) we were not correctly setting the page_size, which is 4K, not 1K

The latter bug went unnoticed for years because the only thing which
the page_size affects is which TLB entries get flushed when the guest
does a TLB invalidate on an address in the page, and prior to commit
2f0d8631b7 we were doing a full TLB flush very frequently due to Linux's
habit of writing the SCTLR pointlessly a lot.

(We can assume that after commit 2f0d8631b7 the bug went unnoticed
for a year because nobody's actually using the Zaurus/XScale emulation...)

Report the correct page size for these descriptors, and permit them
on ARMv6 CPUs. This fixes a problem where a kernel image for Zaurus
can boot the kernel OK but gets random segfaults when it tries to
run userspace programs.
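
A simplified sketch of the corrected second-level decode (not the exact
QEMU hunk; the AP/TEX extraction is abridged):

    case 3: /* v6 (and XScale) "extended small page" descriptor */
        phys_addr = (desc & 0xfffff000) | (address & 0xfff);
        ap = (desc >> 4) & 3;
        *page_size = 0x1000;   /* 4K, not 1K as before */
        break;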

Backports commit fc1891c74ae122a9dc7854f38bae7db03cd911e6 from qemu
2018-02-13 14:19:53 -05:00
Leon Alrae 3d72ec65bd
target-mips: enable XPA and LPA features
Enable XPA in MIPS32R5-generic and LPA in MIPS64R6-generic.

Backports commit 6773f9b687e0a8ab4b638ef88d075fb233fb7669 from qemu
2018-02-13 14:14:59 -05:00
Leon Alrae aff700ae6d
target-mips: remove misleading comments in translate_init.c
PABITS are not hardcoded to 36 bits and we do not model 59 PABITS (which is
the architectural limit) in QEMU.

Backports commit 28b027d5b63c7550c7390041d6dd50948c8f55b8 from qemu
2018-02-13 14:05:25 -05:00
Leon Alrae 8743ec8b6d
target-mips: add MTHC0 and MFHC0 instructions
Implement the MTHC0 and MFHC0 instructions. In MIPS32 they are used to access
the upper word of CP0 registers that have been extended to 64 bits.

In MIPS64, when the specified CP0 destination register is EntryLo0 or
EntryLo1, bits 1:0 of the GPR appear at bits 31:30 of EntryLo0 or EntryLo1.
This compensates for RI and XI, which were shifted to bits 63:62 by MTC0 to
EntryLo0 or EntryLo1. Separate functions are therefore created for EntryLo0
and EntryLo1.
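
For the MIPS32 case, a minimal sketch of the access pattern, assuming a
64-bit backing field for the extended CP0 register (the real helpers also
handle the MIPS64 EntryLo quirk described above):

    static uint64_t mthc0(uint64_t cp0_reg, uint32_t gpr)
    {
        /* replace the upper word, keep the MTC0-written lower word */
        return ((uint64_t)gpr << 32) | (uint32_t)cp0_reg;
    }

    static uint32_t mfhc0(uint64_t cp0_reg)
    {
        return cp0_reg >> 32;   /* read back the upper word */
    }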

Backports commit 5204ea79ea739b557f47fc4db96c94edcb33a5d6 from qemu
2018-02-13 14:03:59 -05:00
Leon Alrae 59865351e0
target-mips: add CP0.PageGrain.ELPA support
CP0.PageGrain.ELPA enables support for large physical addresses. This field
is encoded as follows:
0: Large physical address support is disabled.
1: Large physical address support is enabled.

If this bit is a 1, the following changes occur to coprocessor 0 registers:
- The PFNX field of the EntryLo0 and EntryLo1 registers is writable and
concatenated with the PFN field to form the full page frame number.
- Access to the optional COP0 registers with PA extension (LLAddr, TagLo) is
defined.

P5600 can operate in 32-bit or 40-bit Physical Address Mode. Therefore if
XPA is disabled (CP0.PageGrain.ELPA = 0) then assume 32-bit Address Mode.
In MIPS64 assume 36 as default PABITS (when CP0.PageGrain.ELPA = 0).

env->PABITS value is constant and indicates maximum PABITS available on
a core, whereas env->PAMask is calculated from env->PABITS and is also
affected by CP0.PageGrain.ELPA.
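
A sketch of the resulting mask computation (field and constant names are
illustrative of the description above, not necessarily the patch):

    /* env->PABITS is the core's maximum; env->PAMask follows ELPA */
    if (env->CP0_PageGrain & (1 << CP0PG_ELPA)) {
        env->PAMask = (1ULL << env->PABITS) - 1;
    } else {
    #if defined(TARGET_MIPS64)
        env->PAMask = (1ULL << 36) - 1;   /* default 36 PABITS */
    #else
        env->PAMask = (1ULL << 32) - 1;   /* 32-bit address mode */
    #endif
    }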

Backports commit e117f52636d04502fab28bd3abe93347c29f39a5 from qemu
2018-02-13 13:55:53 -05:00
Leon Alrae a13d401fe8
target-mips: support Page Frame Number Extension field
Update tlb->PFN to contain PFN concatenated with PFNX. PFNX is 0 if large
physical addresses are not supported.

Backports commit cd0d45c40133ef8b409aede5ad8a99aeaf6a70fe from qemu
2018-02-13 13:49:55 -05:00
Leon Alrae 95ed79d9c2
target-mips: extend selected CP0 registers to 64-bits in MIPS32
Extend EntryLo0, EntryLo1, LLAddr and TagLo from 32 to 64 bits in MIPS32.

Introduce gen_move_low32() function which moves low 32 bits from 64-bit
temp to GPR; it sign extends 32-bit value on MIPS64 and truncates on
MIPS32.
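
A sketch of its shape, per the description (TCG helper names as commonly
used in trees of that era):

    static inline void gen_move_low32(TCGv ret, TCGv_i64 arg)
    {
    #if defined(TARGET_MIPS64)
        tcg_gen_ext32s_tl(ret, arg);      /* sign-extend on MIPS64 */
    #else
        tcg_gen_trunc_i64_tl(ret, arg);   /* truncate on MIPS32 */
    #endif
    }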

Backports commit 284b731a6ae47b9ebabb9613e753c4d83cf75dd3 from qemu
2018-02-13 13:45:29 -05:00
Leon Alrae 907bb26e5f
target-mips: correct MFC0 for CP0.EntryLo in MIPS64
CP0.EntryLo bits 31:30 have to be cleared.

Backports commit b435f3f3d174721382b55bbd0c785ec50c1796a9 from qemu
2018-02-13 13:36:01 -05:00
Leon Alrae 57f57a9de4
target-mips: add ERETNC instruction and Config5.LLB bit
ERETNC is identical to ERET except that an ERETNC will not clear the LLbit
that is set by execution of an LL instruction, and thus when placed between
an LL and SC sequence, will never cause the SC to fail.

Presence of ERETNC is denoted by the Config5.LLB bit.
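
A sketch of the helper pair, assuming the LL/SC reservation is tracked in
env->lladdr as in target-mips of that era:

    void helper_eret(CPUMIPSState *env)
    {
        exception_return(env);   /* common EPC/Status handling */
        env->lladdr = 1;         /* clear LLbit: a pending SC fails */
    }

    void helper_eretnc(CPUMIPSState *env)
    {
        exception_return(env);   /* same, but the LLbit survives */
    }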

Backports commit ce9782f40ac16660ea9437bfaa2c9c34d5ed8110 from qemu
2018-02-13 13:33:37 -05:00
Yongbok Kim 922d30c448
target-mips: Misaligned memory accesses for MSA
MIPS SIMD Architecture vector loads and stores require misalignment support.
An MSA memory access should behave as an atomic operation. Therefore, a vector
store access has to check the validity of all addresses if it spans two
pages.

Helper functions are separated per data format, since the format is known at
translation time. The mmu_idx now comes from cpu_mmu_index() instead of being
calculated from the hflags. The save_cpu_state() call in translation is
removed, because cpu_restore_state() can be used on a fault, as GETRA() is
passed.
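
A hedged sketch of the page-spanning check (the probe helper and vector
width constant are illustrative, not the backported code):

    /* touch both pages before storing any element, so a fault on
     * the second page cannot leave a half-written vector */
    target_ulong off = addr & ~TARGET_PAGE_MASK;
    if (off + MSA_BYTES > TARGET_PAGE_SIZE) {
        probe_page_writable(env, addr, mmu_idx);                  /* hypothetical */
        probe_page_writable(env, addr + MSA_BYTES - 1, mmu_idx);  /* hypothetical */
    }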

Backports commit adc370a48fd26b92188fa4848dfb088578b1936c from qemu
2018-02-13 13:27:31 -05:00
Yongbok Kim 6d0766f246
target-mips: Misaligned memory accesses for R6
Release 6 requires misaligned memory access support for all ordinary memory
access instructions (for example, LW/SW, LWC1/SWC1).
However, misaligned support is not provided for certain special memory accesses
such as atomics (for example, LL/SC).

Backports commit be3a8c53b4f18bcc51a462d977cc61a0f46ebb1c from qemu
2018-02-13 13:11:39 -05:00
Leon Alrae c54458b638
target-mips: add Config5.FRE support allowing Status.FR=0 emulation
This relatively small architectural feature adds the following:

FIR.FREP: Read-only. If FREP=1, then Config5.FRE and Config5.UFE are
available.

Config5.FRE: When enabled, all single-precision FP arithmetic instructions and
LWC1/LWXC1/MTC1, SWC1/SWXC1/MFC1 cause a Reserved Instruction
exception.

Config5.UFE: Allows user to write/read Config5.FRE using CTC1/CFC1
instructions.

Enable the feature in MIPS64R6-generic CPU.
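
A sketch of the translator-side trap (the ctx->fre flag name is
illustrative; the real patch may track this in hflags):

    /* single-precision FP insn while Config5.FRE=1: raise RI so
     * the kernel can emulate Status.FR=0 behaviour */
    if (ctx->fre) {
        generate_exception(ctx, EXCP_RI);
        return;
    }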

Backports commit 7c979afd11b09a16634699dd6344e3ba10c9677e from qemu
2018-02-13 13:05:22 -05:00
Leon Alrae 428ffed744
target-mips: move group of functions above gen_load_fpr32()
Move the "Tests" group of functions so that gen_load_fpr32() and
gen_store_fpr32() can use generate_exception().

Backports commit eab9944c7801525737626fa45cddaf00932dd2c8 from qemu
2018-02-13 12:45:27 -05:00
Paolo Bonzini 91503663e2
target-i386: create a separate AddressSpace for each CPU
Different CPUs can be in SMM or not at the same time, thus they
will see different things where the chipset places SMRAM.
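
A minimal sketch of the idea, not the exact QEMU calls (the per-CPU root
MemoryRegion setup and the SMRAM alias are elided; cpu_get_memory_root()
is a hypothetical helper):

    /* give each vCPU its own AddressSpace so SMRAM visibility can
     * differ per CPU depending on its SMM state */
    AddressSpace *as = g_new0(AddressSpace, 1);
    address_space_init(as, cpu_get_memory_root(cpu), "cpu-memory");
    CPU(cpu)->as = as;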

Backports commit 2001d0cd6d55e5efa9956fa8ff8b89034d6a4329 from qemu
2018-02-13 12:36:26 -05:00
Paolo Bonzini fa57438734
target-i386: wake up processors that receive an SMI
An SMI should definitely wake up a processor in halted state!
This lets OVMF boot with SMM on multiprocessor systems, although
it halts very soon after that with a "CpuIndex != BspIndex"
assertion failure.
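
A sketch of the shape of the fix in the has-work hook (abridged; the real
check also honours EFLAGS.IF for ordinary interrupts):

    static bool x86_cpu_has_work(CPUState *cs)
    {
        /* an SMI must end HLT even with interrupts disabled */
        return cs->interrupt_request &
               (CPU_INTERRUPT_HARD | CPU_INTERRUPT_NMI | CPU_INTERRUPT_SMI);
    }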

Backports commit a9bad65d2c1f61af74ce2ff43238d4b20bf81c3a from qemu
2018-02-13 12:32:24 -05:00
Paolo Bonzini d4621935bb
target-i386: set G=1 in SMM big real mode selectors
Because bits 31:20 of the limit field are 1, G should be 1.
VMX actually enforces this, let's do it for completeness
in QEMU as well.
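
A sketch of such a selector load with G set (flag set abridged):

    /* a 0xffffffff limit is expressed in 4K granules, so G = 1 */
    cpu_x86_load_seg_cache(env, R_DS, selector, selector << 4, 0xffffffff,
                           DESC_P_MASK | DESC_S_MASK | DESC_W_MASK |
                           DESC_A_MASK | DESC_G_MASK);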

Backports commit b4854f1384176d897747de236f426d020668fa3c from qemu
2018-02-13 12:31:18 -05:00
Paolo Bonzini 1fed54da89
target-i386: mask NMIs on entry to SMM
QEMU is not blocking NMIs on entry to SMM. Implementing this has to
cover a few corner cases, because:

- NMIs can then be enabled by an IRET instruction and there
is no mechanism to _set_ the "NMIs masked" flag on exit from SMM:
"A special case can occur if an SMI handler nests inside an NMI handler
and then another NMI occurs. [...] When the processor enters SMM while
executing an NMI handler, the processor saves the SMRAM state save map
but does not save the attribute to keep NMI interrupts disabled."

- However, there is some hidden state, because "If NMIs were blocked
before the SMI occurred [and no IRET is executed while in SMM], they
are blocked after execution of RSM." This is represented by the new
HF2_SMM_INSIDE_NMI_MASK bit. If it is zero, NMIs are _unblocked_
on exit from RSM.
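
A sketch of the bookkeeping, using the HF2_SMM_INSIDE_NMI_MASK bit named
above (surrounding SMM code elided):

    /* on SMM entry */
    if (env->hflags2 & HF2_NMI_MASK) {
        env->hflags2 |= HF2_SMM_INSIDE_NMI_MASK;  /* NMIs stay blocked */
    } else {
        env->hflags2 |= HF2_NMI_MASK;             /* mask NMIs in SMM */
    }

    /* on RSM */
    if (!(env->hflags2 & HF2_SMM_INSIDE_NMI_MASK)) {
        env->hflags2 &= ~HF2_NMI_MASK;            /* unblock NMIs */
    }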

Backports commit 9982f74bad70479939491b69522da047a3be5a0d from qemu
2018-02-13 12:29:31 -05:00
Paolo Bonzini e57e92feca
target-i386: Use correct memory attributes for ioport accesses
In order to do this, stop using the cpu_in*/out* helpers, and instead
access address_space_io directly.

cpu_in* and cpu_out* remain for usage in the monitor, in qtest, and
in Xen.
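
A sketch of the direct form, using the MemTxAttrs-aware accessors together
with cpu_get_mem_attrs() from the commit below:

    /* outb equivalent: store on address_space_io with the CPU's
     * current memory attributes instead of going via cpu_outb() */
    address_space_stb(&address_space_io, port, val,
                      cpu_get_mem_attrs(env), NULL);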

Backports commit 3f7d84648607cc0fcb3812bb4b88978e2a7aa24f from qemu
2018-02-13 12:27:43 -05:00
Lioncash 25a58231fc
target-i386: Use correct memory attributes for memory accesses
These include page table walks, SVM accesses and SMM state save accesses.

The bulk of the patch is obtained with

sed -i 's/\(\<[a-z_]*_phys\(_notdirty\)\?\>(cs\)->as,/x86_\1,/'
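
For instance (an illustrative match, not a hunk from the patch), the script
rewrites

    ldl_phys(cs->as, addr)

as

    x86_ldl_phys(cs, addr)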

Backports commit b216aa6c0fcbaa8ff4128969c14594896a5485a4 from qemu
2018-02-13 11:54:12 -05:00
Paolo Bonzini dc80b0893f
target-i386: introduce cpu_get_mem_attrs
Backports commit f794aa4a2fd772a3ec413c4e478cc23857cfee98 from qemu
2018-02-13 11:33:39 -05:00
Paolo Bonzini 88d1506e6a
memory: use mr->ram_addr in "is this RAM?" assertions
mr->terminates alone doesn't guarantee that we are looking at a RAM region.
mr->ram_addr also has to be checked, in order to distinguish RAM and I/O
regions.

So, do the following:

1) add a new define RAM_ADDR_INVALID, and test it in the assertions
instead of mr->terminates

2) IOMMU regions were not setting mr->ram_addr to a bogus value; initialize
it in the instance_init function so that the new assertions fire for IOMMU
regions as well.
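
A sketch of the two pieces (the define follows the commit's description;
surrounding code elided):

    #define RAM_ADDR_INVALID (~(ram_addr_t)0)

    /* in the "is this RAM?" paths */
    assert(mr->ram_addr != RAM_ADDR_INVALID);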

Backports commit ec05ec26f940564b1e07bf88857035ec27e21dd8 from qemu
2018-02-13 11:31:02 -05:00
Stefan Hajnoczi fc7b95d06a
memory: replace cpu_physical_memory_reset_dirty() with test-and-clear
The cpu_physical_memory_reset_dirty() function is sometimes used
together with cpu_physical_memory_get_dirty(). This is not atomic since
two separate accesses to the dirty memory bitmap are made.

Turn cpu_physical_memory_reset_dirty() and
cpu_physical_memory_clear_dirty_range_type() into the atomic
cpu_physical_memory_test_and_clear_dirty().
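
The racy pattern and its atomic replacement, sketched:

    /* before: two separate bitmap accesses, not atomic */
    if (cpu_physical_memory_get_dirty(start, length, client)) {
        cpu_physical_memory_reset_dirty(start, length, client);
        /* another thread may have re-dirtied pages in between */
    }

    /* after: a single atomic test-and-clear */
    if (cpu_physical_memory_test_and_clear_dirty(start, length, client)) {
        /* the pages were dirty and are now clean */
    }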

Backports commit 03eebc9e3246b9b3f5925aa41f7dfd7c1e467875 from qemu
2018-02-13 11:25:45 -05:00
Stefan Hajnoczi 18ccd4b5be
memory: use atomic ops for setting dirty memory bits
Use set_bit_atomic() and bitmap_set_atomic() so that multiple threads
can dirty memory without race conditions.

Backports commit d114875b9a1c21162f69a12d72f69a22e7bab376 from qemu
2018-02-13 11:07:48 -05:00
Paolo Bonzini 6d509f7333
exec: only check relevant bitmaps for cleanliness
Most of the time, not all bitmaps have to be marked as dirty;
do not do anything if the interesting ones are already dirty.
Previously, any clean bitmap would have caused all the bitmaps to be
marked dirty.

In fact, unless running TCG, most of the time bitmap operations need
not be done at all, because memory_region_is_logging returns zero.
In this case, skip the call to cpu_physical_memory_range_includes_clean
altogether as well.

With this patch, cpu_physical_memory_set_dirty_range is called
unconditionally, so there need not be anymore a separate call to
xen_modified_memory.

Backports commit e87f7778b64d4a6a78e16c288c7fdc6c15317d5f from qemu
2018-02-13 11:03:26 -05:00
Paolo Bonzini 6bbfcf65e8
memory: do not touch code dirty bitmap unless TCG is enabled
cpu_physical_memory_set_dirty_lebitmap unconditionally syncs the
DIRTY_MEMORY_CODE bitmap. This however is unused unless TCG is
enabled.

Backports commit 9460dee4b2258e3990906fb34099481c8334c267 from qemu
2018-02-13 10:48:14 -05:00
Stefan Hajnoczi 6172e3dc29
bitmap: add atomic test and clear
The new bitmap_test_and_clear_atomic() function clears a range and
returns whether or not the bits were set.
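
Its assumed shape, per the description:

    /* returns true if any bit in [start, start + nr) was set */
    bool bitmap_test_and_clear_atomic(unsigned long *map, long start, long nr);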

Backports commit 36546e5b803f6e363906607307f27c489441fd15 from qemu
2018-02-13 10:02:12 -05:00
Stefan Hajnoczi 7ff5f05c82
bitmap: add atomic set functions
Use atomic_or() for atomic bitmaps where several threads may set bits at
the same time. This avoids the race condition between threads loading
an element, bitwise ORing, and then storing the element.

When setting all bits in a word we can avoid atomic ops and instead just
use an smp_mb() at the end.

Most bitmap users don't need atomicity so introduce new functions.
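
A sketch of the single-bit case built on atomic_or(), as described:

    static inline void set_bit_atomic(long nr, unsigned long *addr)
    {
        unsigned long mask = BIT_MASK(nr);
        unsigned long *p = addr + BIT_WORD(nr);

        atomic_or(p, mask);   /* no load/OR/store race window */
    }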

Backports commit 9f02cfc84b85929947b32fe1674fbc6a429f332a from qemu
2018-02-13 09:59:30 -05:00
Paolo Bonzini 1b1f82cef7
exec: invert return value of cpu_physical_memory_get_clean, rename
While it is obvious that cpu_physical_memory_get_dirty returns true even if
a single page is dirty, the same is not true for cpu_physical_memory_get_clean;
one would expect that it returns true only if all the pages are clean, but
it actually looks for even one clean page. (By contrast, the caller of that
function, cpu_physical_memory_range_includes_clean, has a good name).

To clarify, rename the function to cpu_physical_memory_all_dirty and return
true if _all_ the pages are dirty. This is the opposite of the previous
meaning, because "all are 1" is the same as "not (any is 0)", so we have to
modify cpu_physical_memory_range_includes_clean as well.

Backports commit 72b47e79cef36ed6ffc718f10e21001d7ec2a66f from qemu
2018-02-13 09:54:12 -05:00
Paolo Bonzini c333585a4d
translate-all: make less of tb_invalidate_phys_page_range depend on is_cpu_write_access
is_cpu_write_access is only set if tb_invalidate_phys_page_range is called
from tb_invalidate_phys_page_fast, and hence from notdirty_mem_write.
However:

- the code bitmap can be built directly in tb_invalidate_phys_page_fast
(unconditionally, since is_cpu_write_access would always be passed as 1);

- the virtual address is not needed to mark the page as "not containing
code" (dirty code bitmap = 1), so we can also remove that use of
is_cpu_write_access. For calls of tb_invalidate_phys_page_range
that do not come from notdirty_mem_write, the next call to
notdirty_mem_write will notice that the page does not contain code
anymore, and will fix up the TLB entry.

The parameter needs to remain in order to guard accesses to cpu->mem_io_pc.

Backports commit fc377bcf617a48233a99a9fe0a26247c38b5cb76 from qemu
2018-02-13 09:18:49 -05:00
Lioncash 98e2ffba3a
unicorn_arm: m68k/translate: Build fixes
2018-02-13 09:15:46 -05:00
Paolo Bonzini f578c89e8b
cputlb: remove useless arguments to tlb_unprotect_code_phys, rename
These days, modification of the TLB is done in notdirty_mem_write,
so the virtual address and env pointer are unnecessary.

The new name of the function, tlb_unprotect_code, is consistent with
tlb_protect_code.

Backports commit 9564f52da7eb061326956ed9a468935e3352512d from qemu
2018-02-13 09:07:41 -05:00
Paolo Bonzini b5c9645a0f
translate-all: remove unnecessary argument to tb_invalidate_phys_range
The is_cpu_write_access argument is always 0, remove it.

Backports commit 358653391b0c0beaa0e3f9e28304e1918cd223b3 from qemu
2018-02-13 09:04:51 -05:00
Lioncash 72c8e4d264
exec: move functions to translate-all.h
Remove them from the sundry exec-all.h header, since they are only used by
the TCG runtime in exec.c and user-exec.c.

Backports commit 1652b974766401743879d78f796f44b8929b0787 from qemu
2018-02-13 09:01:45 -05:00
Paolo Bonzini c82ea2b20b
memory: track DIRTY_MEMORY_CODE in mr->dirty_log_mask
DIRTY_MEMORY_CODE is only needed for TCG. By adding it directly to
mr->dirty_log_mask, we avoid testing for TCG everywhere a region is
checked for the enabled/disabled state of dirty logging.

Backports commit 677e7805cf95f3b2bca8baf0888d1ebed7f0c606 from qemu
2018-02-13 08:55:42 -05:00
Paolo Bonzini e3d1cef8fb
memory: prepare for multiple bits in the dirty log mask
When the dirty log mask will also cover other bits than DIRTY_MEMORY_VGA,
some listeners may be interested in the overall zero/non-zero value of
the dirty log mask; others may be interested in the value of single bits.

For this reason, always call log_start/log_stop if bits have respectively
appeared or disappeared, and pass the old and new values of the dirty log
mask so that listeners can distinguish the kinds of change.

For example, KVM checks if dirty logging used to be completely disabled
(in log_start) or is now completely disabled (in log_stop). On the
other hand, Xen has to check manually if DIRTY_MEMORY_VGA changed,
since that is the only bit it cares about.
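
The assumed hook shape after this change, with the old and new dirty log
masks passed through:

    void (*log_start)(MemoryListener *listener, MemoryRegionSection *section,
                      int old, int new);
    void (*log_stop)(MemoryListener *listener, MemoryRegionSection *section,
                     int old, int new);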

Backports commit b2dfd71c4843a762f2befe702adb249cf55baf66 from qemu
2018-02-13 08:52:23 -05:00
Paolo Bonzini 1551573acc
memory: differentiate memory_region_is_logging and memory_region_get_dirty_log_mask
For now memory regions only track DIRTY_MEMORY_VGA individually, but
this will change soon. To support this, split memory_region_is_logging
into two functions: one that returns a given bit from dirty_log_mask,
and one that returns the entire mask. memory_region_is_logging gets an
extra parameter so that the compiler flags misuse.

While VGA-specific users (including the Xen listener!) will want to keep
checking that bit, KVM and vhost check for "any bit except migration"
(because migration is handled via the global start/stop listener
callbacks).
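
The resulting pair of queries, as described:

    /* the whole mask, for listeners that care about any bit */
    uint8_t memory_region_get_dirty_log_mask(MemoryRegion *mr);

    /* one client bit, e.g. DIRTY_MEMORY_VGA */
    bool memory_region_is_logging(MemoryRegion *mr, uint8_t client);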

Backports commit 2d1a35bef0ed96b3f23535e459c552414ccdbafd from qemu
2018-02-13 08:41:44 -05:00
Paolo Bonzini 9847ba46d6
exec: optimize phys_page_set_level
phys_page_set_level is writing zeroes to a struct that has just been
filled in by phys_map_node_alloc. Instead, tell phys_map_node_alloc
whether to fill in the page "as a leaf" or "as a non-leaf".

memcpy is faster than struct assignment, which copies each bitfield
individually. A compiler bug (https://gcc.gnu.org/PR66391) affects the struct
assignment, and small memcpys like this one are special-cased anyway and
optimized to a register move, so just use the memcpy.
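
A sketch of the fill-at-allocation pattern (entry field names follow
QEMU's phys map code):

    PhysPageEntry e = {
        .skip = leaf ? 0 : 1,
        .ptr  = leaf ? PHYS_SECTION_UNASSIGNED : PHYS_MAP_NODE_NIL,
    };
    for (i = 0; i < P_L2_SIZE; i++) {
        memcpy(&p[i], &e, sizeof(e));   /* optimized to a register move */
    }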

This cuts the cost of phys_page_set_level from 25% to 5% when
booting qboot.

Backports commit db94604b20278c1dc227a04e4c564d80230e6c3f from qemu
2018-02-13 08:38:24 -05:00
Paolo Bonzini 96e7e32972
softmmu: support up to 12 MMU modes
At 8k per TLB (for 64-bit host or target), 8 or more modes
make the TLBs bigger than 64k, and some RISC TCG backends do
not like that. On the affected hosts, cut the TLB size in
half---there is still a measurable speedup on PPC with the
next patch.

Backports commit 1de29aef17a7d70dbc04a7fe51e18942e3ebe313 from qemu
2018-02-13 08:34:52 -05:00
Paolo Bonzini b34c233c2f
tcg: add TCG_TARGET_TLB_DISPLACEMENT_BITS
This will be used to size the TLB when more than 8 MMU modes are
used by the target. Limitations come from the limited size of
the immediate fields (which sometimes, as in the case of AArch64,
extend to instructions that shift the immediate).

Backports commit 006f8638c62bca2b0caf609485f47fa5e14d8a3c from qemu
2018-02-13 08:28:29 -05:00
Paolo Bonzini e37ed23e7f
translate-all: delete prototype for non-existent function
Missed in commit 3a808cc40

Backports commit b6b099541d6cf3c50b0fb5af916fff0db6508805 from qemu
2018-02-13 08:26:08 -05:00
Eduardo Habkost 79df79434d
target-i386: Fix signedness of MSR_IA32_APICBASE_BASE
The existing definition triggers the following when using clang
-fsanitize=undefined:

hw/intc/apic_common.c:314:55: runtime error: left shift of 1048575 by 12
places cannot be represented in type 'int'

Fix it so we won't try to shift a 1 to the sign bit of a signed integer.
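
The shape of the fix, per the description: make the constant unsigned so
the shift never reaches the sign bit of an int.

    /* before: 0xfffff is a signed int, and << 12 overflows into bit 31 */
    #define MSR_IA32_APICBASE_BASE (0xfffff << 12)

    /* after */
    #define MSR_IA32_APICBASE_BASE (0xfffffU << 12)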

Backports commit 458cf469f4a1cb520b07092f5537c5a6d2389d23 from qemu
2018-02-13 08:17:18 -05:00
Peter Maydell ed58b3afac
target-arm: Remove v8_ prefix from names of non-v8-specific cpreg arrays
The ARMCPRegInfo arrays v8_el3_no_el2_cp_reginfo and v8_el2_cp_reginfo
are actually used on non-v8 CPUs as well. Remove the incorrect v8_
prefix from their names.

Backports commit 4771cd01daaccb2a8929fa04c88c608e378cf814 from qemu
2018-02-13 08:16:23 -05:00