These macros expand into an error class enumeration constant, a comma,
and a string. Unclean. It has been that way since commit 13f59ae.
The error class is always ERROR_CLASS_GENERIC_ERROR since the previous
commit.
* Prepend every use of a QERR_ macro with ERROR_CLASS_GENERIC_ERROR, and
delete it from the QERR_ macro. No change after preprocessing.
* Rewrite error_set(ERROR_CLASS_GENERIC_ERROR, ...) into
error_setg(...). Again, no change after preprocessing.
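For illustration, the two steps look roughly like this (QERR_FOO is a
hypothetical macro used only as an example):

/* Before: the error class hides inside the macro. */
#define QERR_FOO ERROR_CLASS_GENERIC_ERROR, "Foo failed"
error_set(errp, QERR_FOO);

/* Step 1: make the class explicit at the call site and drop it from
 * the macro. Same tokens after preprocessing. */
#undef QERR_FOO
#define QERR_FOO "Foo failed"
error_set(errp, ERROR_CLASS_GENERIC_ERROR, QERR_FOO);

/* Step 2: generic-class error_set() calls become error_setg(). */
error_setg(errp, QERR_FOO);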
Backports commit c6bd8c706a799eb0fece99f468aaa22b818036f3 from qemu
Error classes other than ERROR_CLASS_GENERIC_ERROR should not be used
in new code. Hiding them in QERR_ macros makes new uses hard to spot.
Fortunately, there's just one such macro left. Eliminate it with this
Coccinelle semantic patch:
@@
expression EP, E;
@@
-error_set(EP, QERR_DEVICE_NOT_FOUND, E)
+error_set(EP, ERROR_CLASS_DEVICE_NOT_FOUND, "Device '%s' not found", E)
Backports commit 75158ebbe259f0bd8bf435e8f4827a43ec89c877 from qemu
Now that qbool is fixed, let's fix getting and setting a bool
value on a qdict member to also use C99 bool rather than int.
I audited all callers to ensure that the changed return type
will not cause any changed semantics.
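The visible change is in the accessor signatures, roughly (a sketch of
the direction, not the full diff):

/* Before: C89-style int standing in for a boolean. */
int qdict_get_bool(const QDict *qdict, const char *key);

/* After: C99 bool, with unchanged semantics for every audited caller. */
bool qdict_get_bool(const QDict *qdict, const char *key);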
Backports commit 34acbc95229f9f841bde83691a5af949c15e105b from qemu
We require a C99 compiler, so let's use 'bool' instead of 'int'
when dealing with boolean values. There are few enough clients
to fix them all in one pass.
Backports commit fc48ffc39ed1060856475e4320d5896f26c945e8 from qemu
To avoid too many #ifdefs in target code, provide tlb_vaddr_to_host for
both user and softmmu modes. In the user-only case the function always
succeeds and simply calls g2h().
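A minimal sketch of the user-mode case (parameter names approximate):

#if defined(CONFIG_USER_ONLY)
/* User mode: every guest virtual address is host-addressable, so the
 * lookup cannot fail and reduces to a guest-to-host translation. */
static inline void *tlb_vaddr_to_host(CPUArchState *env, target_ulong addr,
                                      int access_type, int mmu_idx)
{
    return g2h(addr);
}
#endif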
Backports commit 2e83c496261c799b0fe6b8e18ac80cdc0a5c97ce from qemu
The cpu_physical_memory_reset_dirty() function is sometimes used
together with cpu_physical_memory_get_dirty(). This is not atomic since
two separate accesses to the dirty memory bitmap are made.
Turn cpu_physical_memory_reset_dirty() and
cpu_physical_memory_clear_dirty_range_type() into the atomic
cpu_physical_memory_test_and_clear_dirty().
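A hedged sketch of the caller-visible difference, using the VGA client
as an example:

/* Racy: another thread can dirty pages between the two calls. */
if (cpu_physical_memory_get_dirty(start, length, DIRTY_MEMORY_VGA)) {
    cpu_physical_memory_reset_dirty(start, length, DIRTY_MEMORY_VGA);
}

/* Atomic: the test and the clear are one bitmap operation. */
if (cpu_physical_memory_test_and_clear_dirty(start, length,
                                             DIRTY_MEMORY_VGA)) {
    /* pages were dirty and are now clean */
}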
Backports commit 03eebc9e3246b9b3f5925aa41f7dfd7c1e467875 from qemu
Use set_bit_atomic() and bitmap_set_atomic() so that multiple threads
can dirty memory without race conditions.
Backports commit d114875b9a1c21162f69a12d72f69a22e7bab376 from qemu
Most of the time, not all bitmaps have to be marked as dirty;
do not do anything if the interesting ones are already dirty.
Previously, any clean bitmap would have caused all the bitmaps to be
marked dirty.
In fact, unless TCG is running, most of the time bitmap operations need
not be done at all, because memory_region_is_logging returns zero.
In this case, skip the call to cpu_physical_memory_range_includes_clean
altogether as well.
With this patch, cpu_physical_memory_set_dirty_range is called
unconditionally, so there need not be anymore a separate call to
xen_modified_memory.
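The resulting shape of the dirty-marking path, as described above (a
sketch under those assumptions, not the exact upstream code):

uint8_t mask = memory_region_get_dirty_log_mask(mr);

if (mask && !cpu_physical_memory_range_includes_clean(addr, length, mask)) {
    mask = 0;   /* the interesting bitmaps are already dirty */
}

/* Unconditional; with mask == 0 this is cheap, and it also covers what
 * the separate xen_modified_memory call used to do. */
cpu_physical_memory_set_dirty_range(addr, length, mask);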
Backports commit e87f7778b64d4a6a78e16c288c7fdc6c15317d5f from qemu
cpu_physical_memory_set_dirty_lebitmap unconditionally syncs the
DIRTY_MEMORY_CODE bitmap. This however is unused unless TCG is
enabled.
Backports commit 9460dee4b2258e3990906fb34099481c8334c267 from qemu
The new bitmap_test_and_clear_atomic() function clears a range and
returns whether or not the bits were set.
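The per-word core of the operation can be sketched with a compiler
builtin (the real bitmap implementation handles partial words and
arbitrary ranges):

/* Atomically clear one bitmap word and report whether any bit was set. */
static inline bool test_and_clear_word(unsigned long *word)
{
    unsigned long old = __atomic_exchange_n(word, 0UL, __ATOMIC_SEQ_CST);
    return old != 0;
}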
Backports commit 36546e5b803f6e363906607307f27c489441fd15 from qemu
Use atomic_or() for atomic bitmaps where several threads may set bits at
the same time. This avoids the race condition between threads loading
an element, bitwise ORing, and then storing the element.
When setting all bits in a word we can avoid atomic ops and instead just
use an smp_mb() at the end.
Most bitmap users don't need atomicity so introduce new functions.
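The race being closed is the classic lost update; a minimal sketch
using QEMU's bitops macros and a compiler builtin in place of
atomic_or():

/* Racy: two threads can each load the same old word, OR in their own
 * bit, and store it back; one of the bits is lost. */
map[BIT_WORD(nr)] |= BIT_MASK(nr);

/* Safe: the OR is a single atomic read-modify-write. */
__atomic_fetch_or(&map[BIT_WORD(nr)], BIT_MASK(nr), __ATOMIC_SEQ_CST);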
Backports commit 9f02cfc84b85929947b32fe1674fbc6a429f332a from qemu
While it is obvious that cpu_physical_memory_get_dirty returns true even if
a single page is dirty, the same is not true for cpu_physical_memory_get_clean;
one would expect that it returns true only if all the pages are clean, but
it actually looks for even one clean page. (By contrast, the caller of that
function, cpu_physical_memory_range_includes_clean, has a good name).
To clarify, rename the function to cpu_physical_memory_all_dirty and return
true if _all_ the pages are dirty. This is the opposite of the previous
meaning, because "all are 1" is the same as "not (any is 0)", so we have to
modify cpu_physical_memory_range_includes_clean as well.
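Schematically (the real helper may differ in detail):

static inline bool
cpu_physical_memory_range_includes_clean(ram_addr_t start, ram_addr_t length)
{
    /* "includes a clean page" is the negation of "all pages dirty". */
    return !cpu_physical_memory_all_dirty(start, length, DIRTY_MEMORY_VGA)
        || !cpu_physical_memory_all_dirty(start, length, DIRTY_MEMORY_CODE)
        || !cpu_physical_memory_all_dirty(start, length, DIRTY_MEMORY_MIGRATION);
}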
Backports commit 72b47e79cef36ed6ffc718f10e21001d7ec2a66f from qemu
These days modification of the TLB is done in notdirty_mem_write,
so the virtual address and env pointer are unnecessary.
The new name of the function, tlb_unprotect_code, is consistent with
tlb_protect_code.
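The implied signature change (the old name and exact parameters are
approximations from that era):

/* Before: caller had to supply the env pointer and a virtual address. */
void tlb_unprotect_code_phys(CPUState *cpu, ram_addr_t ram_addr,
                             target_ulong vaddr);

/* After: the RAM address alone identifies the page to unprotect. */
void tlb_unprotect_code(ram_addr_t ram_addr);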
Backports commit 9564f52da7eb061326956ed9a468935e3352512d from qemu
Remove them from the sundry exec-all.h header, since they are only used by
the TCG runtime in exec.c and user-exec.c.
Backports commit 1652b974766401743879d78f796f44b8929b0787 from qemu
DIRTY_MEMORY_CODE is only needed for TCG. By adding it directly to
mr->dirty_log_mask, we avoid testing for TCG everywhere a region is
checked for the enabled/disabled state of dirty logging.
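Concretely, RAM regions now start out with the code bit already set
when TCG is in use; a one-line sketch of the init path:

mr->dirty_log_mask = tcg_enabled() ? (1 << DIRTY_MEMORY_CODE) : 0;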
Backports commit 677e7805cf95f3b2bca8baf0888d1ebed7f0c606 from qemu
When the dirty log mask will also cover other bits than DIRTY_MEMORY_VGA,
some listeners may be interested in the overall zero/non-zero value of
the dirty log mask; others may be interested in the value of single bits.
For this reason, always call log_start/log_stop if bits have respectively
appeared or disappeared, and pass the old and new values of the dirty log
mask so that listeners can distinguish the kinds of change.
For example, KVM checks if dirty logging used to be completely disabled
(in log_start) or is now completely disabled (in log_stop). On the
other hand, Xen has to check manually if DIRTY_MEMORY_VGA changed,
since that is the only bit it cares about.
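The listener callbacks therefore carry both masks; roughly:

void (*log_start)(MemoryListener *listener, MemoryRegionSection *section,
                  int old, int new);
void (*log_stop)(MemoryListener *listener, MemoryRegionSection *section,
                 int old, int new);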
Backports commit b2dfd71c4843a762f2befe702adb249cf55baf66 from qemu
For now memory regions only track DIRTY_MEMORY_VGA individually, but
this will change soon. To support this, split memory_region_is_logging
in two functions: one that returns a given bit from dirty_log_mask,
and one that returns the entire mask. memory_region_is_logging gets an
extra parameter so that the compiler flags misuse.
While VGA-specific users (including the Xen listener!) will want to keep
checking that bit, KVM and vhost check for "any bit except migration"
(because migration is handled via the global start/stop listener
callbacks).
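The split API, roughly:

/* One client bit; the extra parameter lets the compiler flag misuse. */
bool memory_region_is_logging(MemoryRegion *mr, uint8_t client);

/* The whole mask, for "any bit except migration"-style checks. */
uint8_t memory_region_get_dirty_log_mask(MemoryRegion *mr);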
Backports commit 2d1a35bef0ed96b3f23535e459c552414ccdbafd from qemu
At 8k per TLB (for 64-bit host or target), 8 or more modes
make the TLBs bigger than 64k, and some RISC TCG backends do
not like that. On the affected hosts, cut the TLB size in
half; there is still a measurable speedup on PPC with the
next patch.
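The arithmetic: with 64-bit entries, 256 TLB entries of 32 bytes are 8k
per MMU mode, so 8 modes reach 64k. A simplified sketch of the cap (the
real macro also accounts for the TCG backend's displacement range):

/* Halve the per-mode TLB once there are too many MMU modes. */
#if NB_MMU_MODES >= 8
#define CPU_TLB_BITS 7   /* 128 entries per mode */
#else
#define CPU_TLB_BITS 8   /* 256 entries per mode */
#endif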
Backports commit 1de29aef17a7d70dbc04a7fe51e18942e3ebe313 from qemu
Add a transaction attribute indicating that a memory access is being
done from user-mode (unprivileged). This corresponds to an equivalent
signal in ARM AMBA buses.
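The attribute sits alongside the existing bits in MemTxAttrs; roughly:

typedef struct MemTxAttrs {
    unsigned int unspecified:1; /* no attributes provided (legacy callers) */
    unsigned int secure:1;      /* ARM TrustZone Secure access */
    unsigned int user:1;        /* unprivileged access (this commit) */
} MemTxAttrs;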
Backports commit 0995bf8cd91b81ec9c1078e37b808794080dc5c0 from qemu
Honour the NS bit in ARM page tables:
* when adding entries to the TLB, include the Secure/NonSecure
transaction attribute
* set the NS bit in the PAR when doing ATS operations
Note that we don't yet correctly use the NSTable bit to
cause the page table walk itself to use the right attributes.
Backports commit 8bf5b6a9c1911d2c8473385fc0cebfaaeef42dbc from qemu
Add new address_space_ld*/st* functions which allow transaction
attributes and error reporting for basic loads and stores. These
are named to be in line with the address_space_read/write/rw
buffer operations.
The existing ld/st*_phys functions are now wrappers around
the new functions.
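The new form and the wrapper pattern, roughly:

/* Attributes in, optional transaction result out. */
uint32_t address_space_ldl(AddressSpace *as, hwaddr addr,
                           MemTxAttrs attrs, MemTxResult *result);

/* The old entry points become thin wrappers: */
uint32_t ldl_phys(AddressSpace *as, hwaddr addr)
{
    return address_space_ldl(as, addr, MEMTXATTRS_UNSPECIFIED, NULL);
}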
Backports commit 500131154d677930fce35ec3a6f0b5a26bcd2973 from qemu
Make address_space_rw take transaction attributes, rather
than always using the 'unspecified' attributes.
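The resulting signature, roughly:

MemTxResult address_space_rw(AddressSpace *as, hwaddr addr, MemTxAttrs attrs,
                             uint8_t *buf, int len, bool is_write);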
Backports commit 5c9eb0286c819c1836220a32f2e1a7b5004ac79a from qemu
Add a MemTxAttrs field to the IOTLB, and allow target-specific
code to set it via a new tlb_set_page_with_attrs() function;
pass the attributes through to the device when making IO accesses.
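The new entry point, roughly:

void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
                             hwaddr paddr, MemTxAttrs attrs,
                             int prot, int mmu_idx, target_ulong size);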
Backports commit fadc1cbe85c6b032d5842ec0d19d209f50fcb375 from qemu
Make the CPU iotlb a structure rather than a plain hwaddr;
this will allow us to add transaction attributes to it.
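The structure starts out as a thin wrapper around the old value:

typedef struct CPUIOTLBEntry {
    hwaddr addr;   /* the MemTxAttrs field arrives with the commit above */
} CPUIOTLBEntry;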
Backports commit e469b22ffda40188954fafaf6e3308f58d50f8f8 from qemu
Rather than retaining io_mem_read/write as simple wrappers around
the memory_region_dispatch_read/write functions, make the latter
public and change all the callers to use them, since we need to
touch all the callsites anyway to add MemTxAttrs and MemTxResult
support. Delete io_mem_read and io_mem_write entirely.
(All the callers currently pass MEMTXATTRS_UNSPECIFIED
and convert the return value back to bool or ignore it.)
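The now-public dispatch functions, roughly:

MemTxResult memory_region_dispatch_read(MemoryRegion *mr, hwaddr addr,
                                        uint64_t *pval, unsigned size,
                                        MemTxAttrs attrs);
MemTxResult memory_region_dispatch_write(MemoryRegion *mr, hwaddr addr,
                                         uint64_t data, unsigned size,
                                         MemTxAttrs attrs);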
Backports commit 3b6434953934e6d4a776ed426d8c6d6badee176f from qemu
Define an API so that devices can register MemoryRegionOps whose read
and write callback functions are passed an arbitrary pointer to some
transaction attributes and can return a success-or-failure status code.
This will allow us to model devices which:
* behave differently for ARM Secure/NonSecure memory accesses
* behave differently for privileged/unprivileged accesses
* may return a transaction failure (causing a guest exception)
for erroneous accesses
This patch defines the new API and plumbs the attributes parameter through
to the memory.c public level functions io_mem_read() and io_mem_write(),
where it is currently dummied out.
The success/failure response indication is also propagated out to
io_mem_read() and io_mem_write(), which retain the old-style
boolean true-for-error return.
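The new MemoryRegionOps callbacks, roughly:

/* Alongside the existing read/write callbacks: */
MemTxResult (*read_with_attrs)(void *opaque, hwaddr addr, uint64_t *data,
                               unsigned size, MemTxAttrs attrs);
MemTxResult (*write_with_attrs)(void *opaque, hwaddr addr, uint64_t data,
                                unsigned size, MemTxAttrs attrs);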
Backports commit cc05c43ad942165ecc6ffd39e41991bee43af044 from qemu
Since the BSP bit is writable on real hardware, during reset all the CPUs which
were not chosen to be the BSP should have their BSP bit cleared. This fix is
required for KVM to work correctly when it changes the BSP bit.
An additional fix is required for QEMU TCG to allow software to change the BSP
bit.
Backports commit 9cb11fd7539b5b787d8fb3834004804a58dd16ae from qemu
The documentation for sextract64() claims that the return type is
an int64_t, but the code itself disagrees. Fix the return type to
conform to the documentation and to bring it into line with
sextract32(), which returns int32_t.
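The one-word fix, in sketch form:

/* Before: documented as int64_t, declared as uint64_t. */
static inline uint64_t sextract64(uint64_t value, int start, int length);

/* After: matches the documentation and sextract32()'s int32_t. */
static inline int64_t sextract64(uint64_t value, int start, int length);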
Backports commit 4f9950520a115acf9c0a209f0befa45758ad0215 from qemu
After the previous patch, TLBs will be flushed on every change to
the memory mapping. This patch augments that with synchronization
of the MemoryRegionSections referred to in the iotlb array.
With this change, it is guaranteed that iotlb_to_region will access
the correct memory map, even once the TLB is accessed outside
the BQL.
Backports commit 9d82b5a792236db31a75b9db5c93af69ac07c7c5 from qemu
For now this is a simple TLB flush. This can change later for two
reasons:
1) an AddressSpaceDispatch will be cached in the CPUState object
2) it will not be possible to do tlb_flush once the TCG-generated code
runs outside the BQL.
Backports commit 76e5c76f2e2e0d20bab2cd5c7a87452f711654fb from qemu
The code in the softfloat source files is under a mixture of
licenses: the original code and many changes from QEMU contributors
are under the base SoftFloat-2a license; changes from Stefan Weil
and RedHat employees are GPLv2-or-later; changes from Fabrice Bellard
are under the BSD license. Clarify this in the comments at the
top of each affected source file, including a statement about
the assumed licensing for future contributions, so we don't need
to remember to ask patch submitters explicitly to pick a license.
Backports commit 16017c48547960539fcadb1f91d252124f442482 from qemu
Revert the remaining portions of commits 75d62a5856 and 3430b0be36f
which are under a SoftFloat-2b license, i.e. the functions
uint64_to_float32() and uint64_to_float64(). (The float64_to_uint64()
and float64_to_uint64_round_to_zero() functions were completely
rewritten in commits fb3ea83aa and 0a87a3107d so can stay.)
Reimplement from scratch the uint64_to_float64() and uint64_to_float32()
conversion functions.
[This is a mechanical squashing together of two separate "revert"
and "reimplement" patches.]
Backports commit 6bb8e0f130bd4aecfe835a0caa94390fa2235fde from qemu
This commit applies the changes to master which correspond to
replacing commit 158142c2c2df with a set of changes made by:
* taking the SoftFloat-2a release
* mechanically transforming the block comment style
* reapplying Fabrice's original changes from 158142c2c2df
This commit was created by:
diff -u 158142c2c2df import-sf-2a
patch -p1 --fuzz 10 <../relicense-patch.txt
(where import-sf-2a is the branch resulting from the changes above).
Backports commit a7d1ac78e0f1101df2ff84502029a4b0da6024ae from qemu
Support guest CPUs which need 7 MMU index values.
Add a comment about what would be required to raise the limit
further (trivial for 8, TCG backend rework for 9 or more).
Backports commit 8f3ae2ae2d02727f6d56610c09d7535e43650dd4 from qemu
This is improved type checking for the translators -- it's no longer
possible to accidentally swap arguments to the branch functions.
Note that the code generating backends still manipulate labels as int.
With notable exceptions, the scope of the change is just a few lines
for each target, so it's not worth building extra machinery to do this
change in per-target increments.
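The shape of the change, using one branch op as an example (signatures
approximate):

/* Before: labels were bare ints, silently swappable with other args. */
void tcg_gen_brcond_i64(TCGCond cond, TCGv_i64 a, TCGv_i64 b, int label);

/* After: a distinct pointer type lets the compiler catch mix-ups. */
void tcg_gen_brcond_i64(TCGCond cond, TCGv_i64 a, TCGv_i64 b, TCGLabel *l);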
Backports commit 42a268c241183877192c376d03bd9b6d527407c7 from qemu
* unicorn: use waitable timer to implement usleep() on Windows (see the
sketch after this list)
Signed-off-by: vardyh <vardyh.dev@gmail.com>
* atomic: implement barrier() for msvc
Signed-off-by: vardyh <vardyh.dev@gmail.com>
* Changed some MSVC compatibility defines based on MSVC version.
* Added prebuild_script.bat to remove leftover configure-generated files before building.
Also added project files and MSVC copies of configure-generated files for all supported CPUs.
* Moved ./bindings/msvc_native into ./msvc
* Remove old project dir.
* isnan() fix for msvc2013 onwards
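A minimal sketch of the waitable-timer approach referenced in the first
item (not the exact unicorn code):

#include <windows.h>

/* Sleep for usec microseconds; the due time is in 100ns units and a
 * negative value means "relative to now". */
static void win32_usleep(int64_t usec)
{
    HANDLE timer = CreateWaitableTimer(NULL, TRUE, NULL);
    LARGE_INTEGER due;
    due.QuadPart = -(usec * 10);
    SetWaitableTimer(timer, &due, 0, NULL, NULL, 0);
    WaitForSingleObject(timer, INFINITE);
    CloseHandle(timer);
}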
This commit fixes the following issues (a sketch of the fix follows the
list):
- Any unmapped/freed memory regions (MemoryRegion instances) are not
removed from the object property linked list of their owner (which is
always qdev_get_machine(uc)). This makes adding new memory mappings
via mem_map() or mem_map_ptr() slower as more and more memory pages
are mapped and unmapped: even after the pages are unmapped, the stale
property links still slow down future mappings.
- FlatView is not reconstructed after a memory region is freed during
unmapping, which leads to a use-after-free the next time a new memory
region is mapped in address_space_update_topology().
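A hedged sketch of the shape of the fix (QEMU API names; the exact
unicorn call sites may differ):

/* On unmap: detach the region, drop the owner's property link, and let
 * the transaction commit rebuild the FlatView. */
memory_region_transaction_begin();
memory_region_del_subregion(uc->system_memory, mr);
object_unparent(OBJECT(mr));        /* unlinks it from qdev_get_machine(uc) */
memory_region_transaction_commit(); /* reconstructs the FlatView */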