Guest CPU TLB maintenance operations may be sufficiently
specialized to only need to flush TLB entries corresponding
to a particular MMU index. Implement cputlb functions for
this, to avoid the inefficiency of flushing TLB entries
which we don't need to.
Backports commit d7a74a9d4a68e27b3a8ceda17bb95cb0a23d8e4d from qemu
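A hedged sketch of the new cputlb entry points (signatures as I recall them
for this generation of the API, where the MMU index list is passed as
varargs terminated by a negative value; treat the details as assumptions):

    /* Flush the whole TLB, or one page's entries, but only for the
     * listed MMU indexes instead of for every mode. */
    void tlb_flush_by_mmuidx(CPUState *cpu, ...);
    void tlb_flush_page_by_mmuidx(CPUState *cpu, target_ulong addr, ...);

    /* usage: drop only the entries loaded via MMU indexes 0 and 1
     *   tlb_flush_page_by_mmuidx(cpu, addr, 0, 1, -1);
     */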
apic_internal.h relies on cpu.h having been included (for the
X86CPU type); include it directly rather than relying on it
being pulled in via one of the other includes like timer.h.
Backports commit 20fbcfdd58ea47607a5755979d43f8c48ac93f08 from qemu
Move the muldiv64() function from qemu-common.h to host-utils.h.
This puts it together with all the other arithmetic functions
where we provide a version with __int128_t and a fallback
without, and allows headers which need muldiv64() to avoid
including qemu-common.h.
We don't include host-utils from qemu-common.h, to avoid dragging
more things into qemu-common.h than it already has; in practice
everywhere that needs muldiv64() can get it via qemu/timer.h.
Backports commit 49caffe0cc95a9d0dc344e3328be8197f3536cf8 from qemu
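For reference, a sketch of the function being moved (the __int128_t arm;
the header also carries an equivalent fallback for hosts without
__int128_t): compute a * b / c on 64-bit values without intermediate
overflow.

    static inline uint64_t muldiv64(uint64_t a, uint32_t b, uint32_t c)
    {
        return (__int128_t)a * b / c;
    }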
Add a header comment to osdep.h, explaining what the header is for
and some rules to avoid circular-include difficulties.
Backports commit 03557b9abaee78e9d1ef5cd236d32a7b3e75e6f8 from qemu
qemu-common.h has some system header includes and fixups for
things that might be missing. This is really an OS dependency
and belongs in osdep.h, so move it across.
Backports commit bfe7e449f14313f646da621288ca2fd12223414f from qemu
qemu-common.h includes some fixups for things the Win32
headers don't define or define weirdly. These really
belong in os-win32.h, so move them there.
Backports commit 1aad8104f3b69206da1f868639e1f69c26f6d482 from qemu
Add documentation comments for various utility string functions
which we have implemented in util/cutils.c:
pstrcpy()
strpadcpy()
pstrcat()
strstart()
stristart()
qemu_strnlen()
qemu_strsep()
Backports commit ab6036630865eff8bb12dd51dfa6921b4607fc81 from qemu
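A small usage sketch of two of the documented helpers (the behaviour shown
matches my reading of util/cutils.c; the surrounding function is made up):

    static void example(void)
    {
        char buf[8];
        const char *rest;

        /* pstrcpy() always NUL-terminates and never overruns buf */
        pstrcpy(buf, sizeof(buf), "a-rather-long-name");  /* buf == "a-rathe" */

        /* strstart() checks a prefix and optionally returns the remainder */
        if (strstart("unix:/tmp/sock", "unix:", &rest)) {
            /* rest now points at "/tmp/sock" */
        }
    }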
Rather than rolling custom concatenate-strings macros for the
QEMU_BUILD_BUG_ON macro to use, use the glue() macro we already
have (since it's now available to us in this header).
Backports commit 24134c4e9126bf505b612e901c63a102fc471083 from qemu
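Roughly the shape of the macro after this change (an assumption based on
the description above, not a verbatim copy): the unique typedef name is
built with the existing glue() token-pasting helper.

    #define QEMU_BUILD_BUG_ON(x) \
        typedef char glue(qemu_build_bug_on__, __LINE__)[(x) ? -1 : 1];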
osdep.h has a few things which are really compiler specific;
move them to compiler.h, and include compiler.h from osdep.h.
Backports commit 4912086865083a008f4fb73173fd0ddf2206c4d9 from qemu
qemu_printf is an ancient remnant which has been a simple #define to
printf for over a decade, and is used in only a few places. Expand
it out in those places and remove the #define.
Backports commit 71baf787d8fa2a5d186f22d8154069fd212be37f from qemu
There was a complicated subtractive arithmetic for determining the
padding on the CPUTLBEntry structure. Simplify this with a union.
Backports commit b4a4b8d0e0767c85946fd8fc404643bf5766351a from qemu
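A sketch of the simplified layout (comments trimmed; field list as I recall
it): the power-of-two padding now falls out of a union with a fixed-size
byte array rather than a hand-computed subtraction.

    typedef struct CPUTLBEntry {
        union {
            struct {
                target_ulong addr_read;
                target_ulong addr_write;
                target_ulong addr_code;
                /* addend to virtual address to get host address */
                uintptr_t addend;
            };
            /* padding to get a power of two size */
            uint8_t dummy[1 << CPU_TLB_ENTRY_BITS];
        };
    } CPUTLBEntry;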
The callers of this function (most of them in target-foo/cpu.c) all
have the cpu pointer handy. Just pass it to avoid an ENV_GET_CPU() from
core code (in exec.c).
Backports commit 4bad9e392e788a218967167a38ce2ae7a32a6231 from qemu
All of the core-code usages of this API have the cpu pointer handy so
pass it in. There are only 3 architecture specific usages (2 of which
are commented out) which can just use ENV_GET_CPU() locally to get the
cpu pointer. This reduces core code usage of the CPU env, which brings
us closer to common-obj'ing these core files.
Backports commit bbd77c180d7ff1b04a7661bb878939b2e1d23798 from qemu
To prepare for a generic internal cipher API, move the
built-in AES implementation into the crypto/ directory
Backports commit 6f2945cde60545aae7f31ab9d5ef29531efbc94f from qemu
Introduce a new crypto/ directory that will (eventually) contain
all the cryptographic related code. This initially defines a
wrapper for initializing gnutls and for computing hashes with
gnutls. The former ensures that gnutls is guaranteed to be
initialized exactly once in QEMU regardless of CLI args. The
block quorum code currently fails to initialize gnutls, so it
only works by luck if VNC server TLS is not requested. The
hash API avoids the need to litter the rest of the code with
preprocessor checks and simplifies callers by allocating the
correct amount of memory for the requested hash.
Backports commit ddbb0d09661f5fce21b335ba9aea8202d189b98e from qemu
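A hedged usage sketch of the new hash helper (the qcrypto_hash_bytes()
signature and the QCRYPTO_HASH_ALG_SHA256 constant are written from memory,
so treat the exact spellings as assumptions):

    static bool example_hash(const char *buf, size_t len, Error **errp)
    {
        uint8_t *digest = NULL;
        size_t digestlen = 0;

        /* the helper allocates a correctly sized buffer for the digest */
        if (qcrypto_hash_bytes(QCRYPTO_HASH_ALG_SHA256, buf, len,
                               &digest, &digestlen, errp) < 0) {
            return false;
        }
        g_free(digest);
        return true;
    }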
Currently the "host" page size alignment API is really aligning to both
host and target page sizes. There is the qemu_real_page_size which can
be used for the actual host page size but it's missing a mask and ALIGN
macro as provided for qemu_page_size. Complete the API. This allows
system level code that cares about the host page size to use a
consistent alignment interface without needlessly aligning to
the target page size. This also reduces system level code dependency
on the CPU-specific TARGET_PAGE_SIZE.
Backports commit 4e51361d79289aee2985dfed472f8d87bd53a8df from qemu
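The completed API has roughly this shape (the identifiers below follow the
qemu_real_host_page_* spelling used in the QEMU tree; the exact types and
macro body are assumptions):

    extern uintptr_t qemu_real_host_page_size;
    extern intptr_t qemu_real_host_page_mask;

    #define REAL_HOST_PAGE_ALIGN(addr) \
        (((addr) + qemu_real_host_page_size - 1) & qemu_real_host_page_mask)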
Including qemu-common.h from other header files is generally a bad
idea, because it means it's very easy to end up with a circular
dependency. For instance, if we wanted to include memory.h from
qom/cpu.h we'd end up with this loop:
memory.h -> qemu-common.h -> cpu.h -> cpu-qom.h -> qom/cpu.h -> memory.h
Remove the include from memory.h. This requires us to fix up a few
other files which were inadvertently getting declarations indirectly
through memory.h.
The biggest change is splitting the fprintf_function typedef out
into its own header so other headers can get at it without having
to include qemu-common.h.
Backports commit fba0a593b2809ecdda68650952cf3d3332ac1990 from qemu
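The typedef that gets split out is, as I recall it, just:

    typedef int (*fprintf_function)(FILE *f, const char *fmt, ...)
        GCC_FMT_ATTR(2, 3);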
This introduces the memory region property "global_locking". It is true
by default. By setting it to false, a device model can request BQL-free
dispatching of region accesses to its r/w handlers. The actual BQL
break-up will be provided in a separate patch.
Backports commit 196ea13104f802c508e57180b2a0d2b3418989a3 from qemu
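A hedged sketch of how a device model would opt out, assuming the clearing
helper introduced alongside the property is named
memory_region_clear_global_locking():

    /* inside the device's init/realize code */
    memory_region_init_io(&s->iomem, OBJECT(s), &ops, s, "dev-mmio", 0x1000);
    memory_region_clear_global_locking(&s->iomem);   /* BQL-free dispatch */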
These are not architecture specific in any way, so move them out of
cpu-defs.h. tb-hash.h is an appropriate home, as it is their leading user
and they relate strongly to TB hashing and caching.
Backports commit 41da4bd6420afd1209c408974920f63ff9c658e1 from qemu
This is one of very few things in exec-all with a genuine CPU
architecture dependency. Move these hashing helpers to a new
header to trim exec-all.h down to a near architecture-agnostic
header.
The defs are only used by cpu-exec and translate-all, which are both
arch-objs, so the new tb-hash.h has no core code usage.
Backports commit e1b89321bafea9fb33d87852fc91fee579d17dfe from qemu
These exception indices are generic and don't have any reliance on the
per-arch cpu.h defs. Move them to cpu-all.h so they can be used by core
code that does not have access to cpu-defs.h.
Backports commit 9e0dc48c9f05505b53cb28f860456a0648e56ddf from qemu
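The indices in question (values quoted from memory, matching cpu-all.h):

    #define EXCP_INTERRUPT  0x10000 /* async interruption */
    #define EXCP_HLT        0x10001 /* hlt instruction reached */
    #define EXCP_DEBUG      0x10002 /* cpu stopped after a breakpoint or singlestep */
    #define EXCP_HALTED     0x10003 /* cpu is halted (waiting for external event) */
    #define EXCP_YIELD      0x10004 /* cpu wants to yield timeslice to another */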
Intel C Compiler version 15.0.3.187 Build 20150407 doesn't support
the '|' operator on non-floating-point SIMD operands.
Define a VEC_OR macro which uses _mm_or_si128, which is supported
by both icc and gcc on x86.
Backports commit 34664507c7f038842f20a2c787915680b1fabba2 from qemu
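One plausible shape of the workaround (the exact conditionals in the tree
may differ): prefer the intrinsic, which both icc and gcc accept, over the
'|' operator on __m128i values.

    #ifdef __SSE2__
    #define VEC_OR(v1, v2) (_mm_or_si128(v1, v2))
    #else
    #define VEC_OR(v1, v2) ((v1) | (v2))
    #endif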
The usages of this define are pure TCG and there is no architecture-specific
variation of the value. Localise it to the TCG engine to remove another
architecture-agnostic piece from cpu-defs.h.
This follows on from a28177820a868eafda8fab007561cc19f41941f4, where
temp_buf was moved out of CPU_COMMON, obsoleting the need for the
super-early definition.
Backports commit 6e0b07306d1793e8402dd218d2e38a7377b5fc27 from qemu
Remove it except for two things in qerror.h:
* Two #includes to be cleaned up separately to avoid cluttering this
patch.
* The QERR_ macros. Mark as obsolete.
Backports commit 4629ed1e98961bbe678db68ef5f4342ff174a6c3 from qemu
These macros expand into error class enumeration constant, comma,
string. Unclean. Has been that way since commit 13f59ae.
The error class is always ERROR_CLASS_GENERIC_ERROR since the previous
commit.
* Prepend every use of a QERR_ macro by ERROR_CLASS_GENERIC_ERROR, and
delete it from the QERR_ macro. No change after preprocessing.
* Rewrite error_set(ERROR_CLASS_GENERIC_ERROR, ...) into
error_setg(...). Again, no change after preprocessing.
Backports commit c6bd8c706a799eb0fece99f468aaa22b818036f3 from qemu
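An illustration of the two steps using one representative QERR_ macro (its
message text here is quoted from memory and is only an example):

    /* before: the macro expanded to "class, string"
     *   #define QERR_INVALID_PARAMETER \
     *       ERROR_CLASS_GENERIC_ERROR, "Invalid parameter '%s'"
     *   error_set(errp, QERR_INVALID_PARAMETER, name);
     */

    /* after: the class is gone from the macro, and callers are rewritten to
     *   error_setg(errp, QERR_INVALID_PARAMETER, name);
     */
    #define QERR_INVALID_PARAMETER "Invalid parameter '%s'"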
Error classes other than ERROR_CLASS_GENERIC_ERROR should not be used
in new code. Hiding them in QERR_ macros makes new uses hard to spot.
Fortunately, there's just one such macro left. Eliminate it with this
coccinelle semantic patch:
@@
expression EP, E;
@@
-error_set(EP, QERR_DEVICE_NOT_FOUND, E)
+error_set(EP, ERROR_CLASS_DEVICE_NOT_FOUND, "Device '%s' not found", E)
Backports commit 75158ebbe259f0bd8bf435e8f4827a43ec89c877 from qemu
Now that qbool is fixed, let's fix getting and setting a bool
value in a qdict member to also use C99 bool rather than int.
I audited all callers to ensure that the changed return type
will not cause any changed semantics.
Backports commit 34acbc95229f9f841bde83691a5af949c15e105b from qemu
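A hedged sketch of the getters after the change (the default-value form of
qdict_get_try_bool() is as I recall it and may differ in detail):

    bool enabled = qdict_get_bool(options, "enable");
    bool verbose = qdict_get_try_bool(options, "verbose", false);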
We require a C99 compiler, so let's use 'bool' instead of 'int'
when dealing with boolean values. There are few enough clients
to fix them all in one pass.
Backports commit fc48ffc39ed1060856475e4320d5896f26c945e8 from qemu
To avoid too many #ifdefs in target code, provide a tlb_vaddr_to_host for
both user and softmmu modes. In the former case the function always
succeeds and simply calls the g2h function.
Backports commit 2e83c496261c799b0fe6b8e18ac80cdc0a5c97ce from qemu
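The user-mode arm is essentially trivial; a sketch of its assumed shape:

    #if defined(CONFIG_USER_ONLY)
    static inline void *tlb_vaddr_to_host(CPUArchState *env, target_ulong addr,
                                          int access_type, int mmu_idx)
    {
        /* in user mode every guest address is directly mapped */
        return g2h(addr);
    }
    #endif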
The cpu_physical_memory_reset_dirty() function is sometimes used
together with cpu_physical_memory_get_dirty(). This is not atomic since
two separate accesses to the dirty memory bitmap are made.
Turn cpu_physical_memory_reset_dirty() and
cpu_physical_memory_clear_dirty_range_type() into the atomic
cpu_physical_memory_test_and_clear_dirty().
Backports commit 03eebc9e3246b9b3f5925aa41f7dfd7c1e467875 from qemu
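The resulting helper's declaration (as I recall it): test whether any page
in the range was dirty for the given client and clear those bits in one
atomic step.

    bool cpu_physical_memory_test_and_clear_dirty(ram_addr_t start,
                                                  ram_addr_t length,
                                                  unsigned client);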
Use set_bit_atomic() and bitmap_set_atomic() so that multiple threads
can dirty memory without race conditions.
Backports commit d114875b9a1c21162f69a12d72f69a22e7bab376 from qemu
Most of the time, not all bitmaps have to be marked as dirty;
do not do anything if the interesting ones are already dirty.
Previously, any clean bitmap would have caused all the bitmaps to be
marked dirty.
In fact, unless running TCG most of the time bitmap operations need
not be done at all, because memory_region_is_logging returns zero.
In this case, skip the call to cpu_physical_memory_range_includes_clean
altogether as well.
With this patch, cpu_physical_memory_set_dirty_range is called
unconditionally, so a separate call to xen_modified_memory is no
longer needed.
Backports commit e87f7778b64d4a6a78e16c288c7fdc6c15317d5f from qemu
cpu_physical_memory_set_dirty_lebitmap unconditionally syncs the
DIRTY_MEMORY_CODE bitmap. This however is unused unless TCG is
enabled.
Backports commit 9460dee4b2258e3990906fb34099481c8334c267 from qemu
The new bitmap_test_and_clear_atomic() function clears a range and
returns whether or not the bits were set.
Backports commit 36546e5b803f6e363906607307f27c489441fd15 from qemu
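Its declaration, as I recall it from qemu/bitmap.h:

    bool bitmap_test_and_clear_atomic(unsigned long *map, long start, long nr);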
Use atomic_or() for atomic bitmaps where several threads may set bits at
the same time. This avoids the race condition between threads loading
an element, bitwise ORing, and then storing the element.
When setting all bits in a word we can avoid atomic ops and instead just
use an smp_mb() at the end.
Most bitmap users don't need atomicity so introduce new functions.
Backports commit 9f02cfc84b85929947b32fe1674fbc6a429f332a from qemu
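A minimal sketch of the per-bit atomic setter, written from the description
above (the in-tree version expresses the same thing with the BIT_MASK()/
BIT_WORD() helpers):

    static inline void set_bit_atomic(long nr, unsigned long *addr)
    {
        unsigned long mask = 1UL << (nr % BITS_PER_LONG);
        unsigned long *p = addr + (nr / BITS_PER_LONG);

        atomic_or(p, mask);   /* atomic read-modify-write, no lost updates */
    }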
While it is obvious that cpu_physical_memory_get_dirty returns true even if
a single page is dirty, the same is not true for cpu_physical_memory_get_clean;
one would expect that it returns true only if all the pages are clean, but
it actually looks for even one clean page. (By contrast, the caller of that
function, cpu_physical_memory_range_includes_clean, has a good name).
To clarify, rename the function to cpu_physical_memory_all_dirty and return
true if _all_ the pages are dirty. This is the opposite of the previous
meaning, because "all are 1" is the same as "not (any is 0)", so we have to
modify cpu_physical_memory_range_includes_clean as well.
Backports commit 72b47e79cef36ed6ffc718f10e21001d7ec2a66f from qemu
These days modification of the TLB is done in notdirty_mem_write,
so the virtual address and env pointer are unnecessary.
The new name of the function, tlb_unprotect_code, is consistent with
tlb_protect_code.
Backports commit 9564f52da7eb061326956ed9a468935e3352512d from qemu
Remove them from the sundry exec-all.h header, since they are only used by
the TCG runtime in exec.c and user-exec.c.
Backports commit 1652b974766401743879d78f796f44b8929b0787 from qemu
DIRTY_MEMORY_CODE is only needed for TCG. By adding it directly to
mr->dirty_log_mask, we avoid testing for TCG everywhere a region is
checked for the enabled/disabled state of dirty logging.
Backports commit 677e7805cf95f3b2bca8baf0888d1ebed7f0c606 from qemu
When the dirty log mask will also cover other bits than DIRTY_MEMORY_VGA,
some listeners may be interested in the overall zero/non-zero value of
the dirty log mask; others may be interested in the value of single bits.
For this reason, always call log_start/log_stop if bits have respectively
appeared or disappeared, and pass the old and new values of the dirty log
mask so that listeners can distinguish the kinds of change.
For example, KVM checks if dirty logging used to be completely disabled
(in log_start) or is now completely disabled (in log_stop). On the
other hand, Xen has to check manually if DIRTY_MEMORY_VGA changed,
since that is the only bit it cares about.
Backports commit b2dfd71c4843a762f2befe702adb249cf55baf66 from qemu
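The listener callbacks end up with roughly this shape (signatures written
from memory, so treat them as assumptions):

    void (*log_start)(MemoryListener *listener, MemoryRegionSection *section,
                      int old, int new);
    void (*log_stop)(MemoryListener *listener, MemoryRegionSection *section,
                     int old, int new);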
For now memory regions only track DIRTY_MEMORY_VGA individually, but
this will change soon. To support this, split memory_region_is_logging
in two functions: one that returns a given bit from dirty_log_mask,
and one that returns the entire mask. memory_region_is_logging gets an
extra parameter so that the compiler flags misuse.
While VGA-specific users (including the Xen listener!) will want to keep
checking that bit, KVM and vhost check for "any bit except migration"
(because migration is handled via the global start/stop listener
callbacks).
Backports commit 2d1a35bef0ed96b3f23535e459c552414ccdbafd from qemu
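The split, as I recall the resulting declarations (treat the exact
spellings as assumptions):

    /* is this particular client's bit set in the region's dirty_log_mask? */
    bool memory_region_is_logging(MemoryRegion *mr, uint8_t client);
    /* the whole mask, for listeners that care about "any bit" */
    uint8_t memory_region_get_dirty_log_mask(MemoryRegion *mr);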
At 8k per TLB (for 64-bit host or target), 8 or more modes
make the TLBs bigger than 64k, and some RISC TCG backends do
not like that. On the affected hosts, cut the TLB size in
half; there is still a measurable speedup on PPC with the
next patch.
Backports commit 1de29aef17a7d70dbc04a7fe51e18942e3ebe313 from qemu
Add a transaction attribute indicating that a memory access is being
done from user-mode (unprivileged). This corresponds to an equivalent
signal in ARM AMBA buses.
Backports commit 0995bf8cd91b81ec9c1078e37b808794080dc5c0 from qemu