Commit graph

20 commits

Author SHA1 Message Date
Lioncash a81439c7ca
exec: Drop unnecessary code for unicorn
The dirty memory code isn't strictly necessary
2018-03-12 10:11:46 -04:00
Paolo Bonzini f26f1f123c
memory: remove qemu_get_ram_fd, qemu_set_ram_fd, qemu_ram_block_host_ptr
Remove direct uses of ram_addr_t and optimize memory_region_{get,set}_fd
now that a MemoryRegion knows its RAMBlock directly.

Backports commit 4ff87573df3606856a92c14eef3393a63d736d11 from qemu
2018-02-24 03:34:44 -05:00
Gonglei feff56cc11
memory: drop find_ram_block()
On the one hand, qemu_get_ram_block() already serves a similar purpose.
On the other hand, we can use mr->ram_block directly, so searching for
the RAMBlock by ram_addr is just wasted work.

Backports commit fa53a0e53efdc7002497ea4a76aacf6cceb170ef from qemu
2018-02-24 02:52:20 -05:00
Paolo Bonzini 9479199c6b
memory: fix usage of find_next_bit and find_next_zero_bit
The last two arguments to these functions are the last and first bit to
check relative to the base. The code was incorrectly using the first
bit and the number of bits. Fix this in cpu_physical_memory_get_dirty
and cpu_physical_memory_all_dirty. This requires a few changes in the
iteration; change the code in cpu_physical_memory_set_dirty_range to
match.

Backports commit 88c73d16ad1b6c22a2ab082064d0d521f756296a from qemu
2018-02-22 19:51:43 -05:00
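As a reading aid for the commit above, here is a minimal sketch of the calling
convention being fixed. The prototype matches qemu/bitops.h; the helper and
variable names around it are hypothetical.

    /* qemu/bitops.h: 'size' bounds the search (only bits below this index
     * are examined) and 'offset' is the first bit to start from; it is
     * not a (start, length) pair. */
    unsigned long find_next_zero_bit(const unsigned long *addr,
                                     unsigned long size,
                                     unsigned long offset);

    static unsigned long first_clean_page(const unsigned long *bitmap,
                                          unsigned long page,
                                          unsigned long end)
    {
        /* Wrong: passes the start bit and a bit count.          */
        /* return find_next_zero_bit(bitmap, page, end - page);  */

        /* Right: passes the bound and the start bit, as the fixed
         * iteration in cpu_physical_memory_get_dirty now does.  */
        return find_next_zero_bit(bitmap, end, page);
    }
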
Stefan Hajnoczi e79e0881cd
memory: RCU ram_list.dirty_memory[] for safe RAM hotplug
Although accesses to ram_list.dirty_memory[] use atomics so multiple
threads can safely dirty the bitmap, the data structure is not fully
thread-safe yet.

This patch handles the RAM hotplug case where ram_list.dirty_memory[] is
grown.  ram_list.dirty_memory[] is changed from a regular bitmap to an
RCU array of pointers to fixed-size bitmap blocks.  Threads can continue
accessing bitmap blocks while the array is being extended.  See the
comments in the code for an in-depth explanation of struct
DirtyMemoryBlocks.

I have tested that live migration with virtio-blk dataplane works.

Backports commit 5b82b703b69acc67b78b98a5efc897a3912719eb from qemu
2018-02-22 15:38:03 -05:00
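A simplified sketch of the data-structure shape described above; the field and
constant names follow the upstream commit, but treat the details as
approximate (ram_addr_t and struct rcu_head come from QEMU's exec/cpu-common.h
and qemu/rcu.h).

    /* Each client's dirty bitmap becomes an RCU-managed array of pointers
     * to fixed-size bitmap blocks.  Growing it for RAM hotplug allocates a
     * larger pointer array, copies the existing block pointers, appends new
     * blocks, and publishes the array with rcu_assign_pointer(); readers
     * still holding the old array keep using the same blocks. */
    #define DIRTY_MEMORY_BLOCK_SIZE ((ram_addr_t)256 * 1024 * 8)

    typedef struct {
        struct rcu_head rcu;     /* old pointer array freed after a grace period */
        unsigned long *blocks[]; /* fixed-size bitmap blocks, never reallocated  */
    } DirtyMemoryBlocks;

    /* ram_list.dirty_memory[client] now points at a DirtyMemoryBlocks
     * rather than a single flat bitmap. */
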
Gonglei aa80edbef0
exec: Return RAMBlock pointer from allocating functions
Previously these functions returned RAMBlock.offset; now they return a
pointer to the whole structure.

ram_block_add now returns void; errors are passed entirely through errp.

Backports commit 528f46af6ecd1e300db18684969104d4067b867b from qemu
2018-02-21 08:52:57 -05:00
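In signature terms, the change above looks roughly like the following,
abbreviated to a single allocator; the backport touches the whole
qemu_ram_alloc* family.

    /* Old shape: callers got back an offset and had to look the block
     * up again when they needed it.
     *
     *     ram_addr_t qemu_ram_alloc(ram_addr_t size, MemoryRegion *mr,
     *                               Error **errp);
     *
     * New shape: the allocator hands back the RAMBlock directly, and
     * failure is reported only through errp. */
    RAMBlock *qemu_ram_alloc(ram_addr_t size, MemoryRegion *mr, Error **errp);
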
Lioncash c658126845
include: Move RAMList to ramlist.h
Moves the struct back into qemu's headers
2018-02-20 08:47:51 -05:00
Lioncash cdd4003ce9
Move RAMBlock to ram_addr.h
Moves it back into qemu's includes.
2018-02-20 08:35:44 -05:00
Paolo Bonzini cbc56b3ceb
memory: add early bail out from cpu_physical_memory_set_dirty_range
This condition is true in the common case, so we can cut out the body of
the function. In addition, this makes it easier for the compiler to do
at least partial inlining, even if it decides that fully inlining the
function is unreasonable.

Backports commit 8bafcb21643a39a5b29109f8bd5ee5a6f0f6850b from qemu
2018-02-20 08:32:10 -05:00
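A minimal sketch of the bail-out pattern being added; the exact condition and
the body that follows are abbreviated, so see the backported commit for the
real hunk.

    static inline void cpu_physical_memory_set_dirty_range(ram_addr_t start,
                                                           ram_addr_t length,
                                                           uint8_t mask)
    {
        /* Common case: no client bits left to set, so return before
         * touching any bitmap.  A cheap test up front also lets the
         * compiler inline just this part at call sites. */
        if (!mask) {
            return;
        }

        /* ... the existing per-client bitmap updates follow here ... */
    }
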
Lioncash a268815478
include: Add stubbed xen function
This will (ideally) let us avoid commenting out code for xen checks all the time
2018-02-20 08:29:58 -05:00
Paolo Bonzini 1650af8c8b
memory: try to inline constant-length reads
memcpy can take a large amount of time for small reads and writes.
Handle the common case of reading s/g descriptors from memory (there
is no corresponding "write" case that is as common, because writes
often use address_space_st* functions) by inlining the relevant
parts of address_space_read into the caller.

Backports commit 3cc8f884996584630734a90c9b3c535af81e3c92 from qemu
2018-02-17 20:44:39 -05:00
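A hedged sketch of the shape of this change: the fast-path helper and the
constant-size threshold are illustrative, while address_space_read_full is the
real out-of-line implementation introduced by the commit.

    /* Out-of-line general implementation (real function after this commit). */
    MemTxResult address_space_read_full(AddressSpace *as, hwaddr addr,
                                        MemTxAttrs attrs, uint8_t *buf, int len);
    /* Hypothetical stand-in for the inlined RAM fast path. */
    MemTxResult address_space_read_fast_inline(AddressSpace *as, hwaddr addr,
                                               MemTxAttrs attrs, uint8_t *buf,
                                               int len);

    static inline MemTxResult address_space_read(AddressSpace *as, hwaddr addr,
                                                 MemTxAttrs attrs,
                                                 uint8_t *buf, int len)
    {
        /* Small, compile-time-constant reads (e.g. s/g descriptors) skip
         * the general dispatch and copy straight from host RAM when the
         * region allows it. */
        if (__builtin_constant_p(len) && len <= 8) {
            return address_space_read_fast_inline(as, addr, attrs, buf, len);
        }
        return address_space_read_full(as, addr, attrs, buf, len);
    }
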
Eduardo Habkost 26791ea61b
exec: Eliminate qemu_ram_free_from_ptr()
Replace qemu_ram_free_from_ptr() with qemu_ram_free().

The only difference between qemu_ram_free_from_ptr() and
qemu_ram_free() is that g_free_rcu() is used instead of
call_rcu(reclaim_ramblock). We can safely replace it because:

* RAM blocks allocated by qemu_ram_alloc_from_ptr() always have
RAM_PREALLOC set;
* reclaim_ramblock(block) will do nothing except g_free(block)
if RAM_PREALLOC is set in block->flags.

Backports commit a29ac16632aec6065c72985b9f7eeb1ca6fbef4a from qemu
2018-02-17 19:37:45 -05:00
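A simplified sketch of the reclaim path the reasoning above refers to; the
non-prealloc branch stands in for the real unmap logic.

    static void reclaim_ramblock(RAMBlock *block)
    {
        if (block->flags & RAM_PREALLOC) {
            /* Memory was supplied by the caller; nothing to unmap here,
             * which is why g_free_rcu() was an acceptable substitute for
             * blocks allocated via qemu_ram_alloc_from_ptr(). */
        } else {
            /* ... release the guest RAM mapping (anonymous or fd-backed) ... */
        }
        g_free(block);
    }
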
Peter Maydell e1a4e4208f
pc: resizeable ROM blocks
This makes ROM blocks resizeable. This infrastructure is required for other
functionality we have queued.

Backports commit aaf03019175949eda5087329448b8a0033b89479 from qemu
2018-02-17 17:18:38 -05:00
Stefan Hajnoczi fc7b95d06a
memory: replace cpu_physical_memory_reset_dirty() with test-and-clear
The cpu_physical_memory_reset_dirty() function is sometimes used
together with cpu_physical_memory_get_dirty(). This is not atomic since
two separate accesses to the dirty memory bitmap are made.

Turn cpu_physical_memory_reset_dirty() and
cpu_physical_memory_clear_dirty_range_type() into the atomic
cpu_physical_memory_test_and_clear_dirty().

Backports commit 03eebc9e3246b9b3f5925aa41f7dfd7c1e467875 from qemu
2018-02-13 11:25:45 -05:00
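A small usage sketch of the replacement; the surrounding helper and the choice
of DIRTY_MEMORY_MIGRATION are illustrative, while the dirty-bitmap calls are
the ones named above.

    static void sync_dirty_pages(ram_addr_t start, ram_addr_t length)
    {
        /* Racy pattern being removed: another thread can dirty a page
         * between the check and the clear, and that update is lost.
         *
         *     if (cpu_physical_memory_get_dirty(start, length,
         *                                       DIRTY_MEMORY_MIGRATION)) {
         *         cpu_physical_memory_reset_dirty(start, length,
         *                                         DIRTY_MEMORY_MIGRATION);
         *         ...
         *     }
         */
        if (cpu_physical_memory_test_and_clear_dirty(start, length,
                                                     DIRTY_MEMORY_MIGRATION)) {
            /* pages in [start, start + length) were dirty; process them */
        }
    }
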
Stefan Hajnoczi 18ccd4b5be
memory: use atomic ops for setting dirty memory bits
Use set_bit_atomic() and bitmap_set_atomic() so that multiple threads
can dirty memory without race conditions.

Backports commit d114875b9a1c21162f69a12d72f69a22e7bab376 from qemu
2018-02-13 11:07:48 -05:00
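For illustration, the kind of substitution described above, as a sketch; the
wrapper function is hypothetical, and bitmap_set_atomic() is declared in
QEMU's qemu/bitmap.h after this commit.

    static void mark_pages_dirty(unsigned long *dirty_bitmap,
                                 long page, long npages)
    {
        /* Before: plain bitmap_set() was a read-modify-write and could
         * lose bits when several threads dirtied pages sharing a word.
         * After: the atomic variant is safe for concurrent writers. */
        bitmap_set_atomic(dirty_bitmap, page, npages);
    }
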
Paolo Bonzini 6d509f7333
exec: only check relevant bitmaps for cleanliness
Most of the time, not all bitmaps have to be marked as dirty;
do not do anything if the interesting ones are already dirty.
Previously, any clean bitmap would have caused all the bitmaps to be
marked dirty.

In fact, unless TCG is running, most of the time bitmap operations need
not be done at all, because memory_region_is_logging returns zero.
In this case, skip the call to cpu_physical_memory_range_includes_clean
altogether as well.

With this patch, cpu_physical_memory_set_dirty_range is called
unconditionally, so a separate call to xen_modified_memory is no
longer needed.

Backports commit e87f7778b64d4a6a78e16c288c7fdc6c15317d5f from qemu
2018-02-13 11:03:26 -05:00
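A condensed sketch of the control flow described above; argument handling is
abbreviated and in the real code the mask comes from
memory_region_get_dirty_log_mask().

    static void invalidate_and_set_dirty_sketch(ram_addr_t addr,
                                                ram_addr_t length,
                                                uint8_t mask)
    {
        /* Without TCG or active dirty logging the mask is usually zero,
         * so the scan for clean pages is skipped entirely. */
        if (mask) {
            mask = cpu_physical_memory_range_includes_clean(addr, length, mask);
        }

        /* Called unconditionally; the former separate xen_modified_memory()
         * call now lives inside this helper. */
        cpu_physical_memory_set_dirty_range(addr, length, mask);
    }
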
Paolo Bonzini 6bbfcf65e8
memory: do not touch code dirty bitmap unless TCG is enabled
cpu_physical_memory_set_dirty_lebitmap unconditionally syncs the
DIRTY_MEMORY_CODE bitmap. This however is unused unless TCG is
enabled.

Backports commit 9460dee4b2258e3990906fb34099481c8334c267 from qemu
2018-02-13 10:48:14 -05:00
Paolo Bonzini 1b1f82cef7
exec: invert return value of cpu_physical_memory_get_clean, rename
While it is obvious that cpu_physical_memory_get_dirty returns true if even
a single page is dirty, the same is not true for cpu_physical_memory_get_clean;
one would expect it to return true only if all the pages are clean, but it
actually looks for even one clean page. (By contrast, the caller of that
function, cpu_physical_memory_range_includes_clean, has a good name.)

To clarify, rename the function to cpu_physical_memory_all_dirty and return
true if _all_ the pages are dirty. This is the opposite of the previous
meaning, because "all are 1" is the same as "not (any is 0)", so we have to
modify cpu_physical_memory_range_includes_clean as well.

Backports commit 72b47e79cef36ed6ffc718f10e21001d7ec2a66f from qemu
2018-02-13 09:54:12 -05:00
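The inversion in boolean terms, as a small sketch; this is the single-client
form for clarity, whereas the real range_includes_clean helper works on a
client mask.

    /* "The range includes a clean page" is exactly the negation of
     * "every page in the range is dirty". */
    static bool range_includes_clean(ram_addr_t start, ram_addr_t length,
                                     unsigned client)
    {
        return !cpu_physical_memory_all_dirty(start, length, client);
    }
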
Nguyen Anh Quynh ac68745a9c we don't need to handle VGA & Migration memories 2017-01-20 17:03:39 +08:00
Nguyen Anh Quynh 344d016104 import 2015-08-21 15:04:50 +08:00