Commit graph

3644 commits

Author SHA1 Message Date
Alex Bennée d56a4b0be4
tcg: handle EXCP_ATOMIC exception for system emulation
This patch enables handling atomic code in the guest. This would preferably be
done in cpu_handle_exception(), but the current assumptions regarding when we
can execute atomic sections cause a deadlock.

The current mechanism discards the flags which were set in atomic
execution. We ensure they are properly saved by calling the
cc->cpu_exec_enter/leave() functions around the loop.

As we are running cpu_exec_step_atomic() from the outermost loop, we
need to avoid an abort() when single-stepping over atomic code, since the
debug exception longjmp will point to the sigsetjmp in cpu_exec(). We do
this by setting a new jmp_env so that it jumps back here on an exception.

Backports relevant parts of commit 08e73c48b053566bfe0c994f154f73991cd0ff0e from qemu
2018-03-02 09:56:43 -05:00
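
A minimal, self-contained sketch of the pattern described above: install a fresh jump buffer around the atomic step so a debug exception lands back in the caller instead of in cpu_exec()'s buffer, then restore the old buffer. The CPU struct, the exception value and the raise function are hypothetical stand-ins for this example, not the actual Unicorn/QEMU code.

    #include <setjmp.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical, minimal stand-in for the per-CPU state. */
    typedef struct CPUStateSketch {
        sigjmp_buf jmp_env;         /* where exceptions longjmp back to */
        int exception_index;
    } CPUStateSketch;

    /* Pretend something inside the atomic step raises a debug exception. */
    static void raise_debug_exception(CPUStateSketch *cpu)
    {
        cpu->exception_index = 1;   /* hypothetical EXCP_DEBUG value */
        siglongjmp(cpu->jmp_env, 1);
    }

    /* Sketch of the atomic step: install a fresh jmp_env so an exception
     * raised while single-stepping atomic code returns here instead of
     * unwinding to the outer cpu_exec()-style buffer, then restore it. */
    static void cpu_exec_step_atomic_sketch(CPUStateSketch *cpu)
    {
        sigjmp_buf saved;
        memcpy(&saved, &cpu->jmp_env, sizeof(saved));   /* save outer buffer */

        if (sigsetjmp(cpu->jmp_env, 0) == 0) {
            raise_debug_exception(cpu);                 /* "execute" the step */
        } else {
            printf("caught exception %d inside atomic step\n",
                   cpu->exception_index);
        }

        memcpy(&cpu->jmp_env, &saved, sizeof(saved));   /* restore it */
    }

    int main(void)
    {
        CPUStateSketch cpu = { .exception_index = -1 };

        if (sigsetjmp(cpu.jmp_env, 0) == 0) {   /* outer cpu_exec()-style buffer */
            cpu_exec_step_atomic_sketch(&cpu);
            printf("returned to the outer loop normally\n");
        } else {
            printf("outer buffer was hit (should not happen here)\n");
        }
        return 0;
    }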
Alex Bennée 6760605e1c
tcg: enable thread-per-vCPU
There are a couple of changes that occur at the same time here:

- introduce a single vCPU qemu_tcg_cpu_thread_fn

One of these is spawned per vCPU with its own Thread and Condition
variables. qemu_tcg_rr_cpu_thread_fn is the new name for the old
single threaded function.

- the TLS current_cpu variable is now live for the lifetime of MTTCG
vCPU threads. This is for future work where async jobs need to know
the vCPU context they are operating in.

The user can now switch on multi-thread behaviour and spawn a thread
per vCPU. For a simple kvm-unit-test like:

    ./arm/run ./arm/locking-test.flat -smp 4 -accel tcg,thread=multi

Will now use 4 vCPU threads and have an expected FAIL (instead of the
unexpected PASS) as the default mode of the test has no protection when
incrementing a shared variable.

We enable the parallel_cpus flag to ensure we generate correct barrier
and atomic code if supported by the front and backends. This doesn't
automatically enable MTTCG until default_mttcg_enabled() is updated to
check the configuration is supported.

Backports relevant parts of commit 372579427a5040a26dfee78464b50e2bdf27ef26
2018-03-02 09:43:14 -05:00
Alex Bennée 632b853761
tcg: remove global exit_request
There are now only two uses of the global exit_request left.

The first ensures we exit the run_loop when we first start to process
pending work and in the kick handler. This is just as easily done by
setting the first_cpu->exit_request flag.

The second use is in the round robin kick routine. The global
exit_request ensured every vCPU would set its local exit_request and
cause a full exit of the loop. Now that the iothread isn't being held while
running, we can just rely on the kick handler to push us out as intended.

We lightly re-factor the main vCPU thread to ensure cpu->exit_request
causes us to exit the main loop and process any IO requests that might
come along. As a cpu->exit_request may legitimately get squashed
while processing the EXCP_INTERRUPT exception, we also check
cpu->queued_work_first to ensure queued work is expedited as soon as
possible.

Backports commit e5143e30fb87fbf179029387f83f98a5a9b27f19 from qemu
2018-03-02 09:38:08 -05:00
Alex Bennée 4d90497d14
tcg: rename tcg_current_cpu to tcg_current_rr_cpu
..and make the definition local to cpus. In preparation for MTTCG the
concept of a global tcg_current_cpu will no longer make sense. However
we still need to keep track of it in the single-threaded case to be able
to exit quickly when required.

qemu_cpu_kick_no_halt() moves and becomes qemu_cpu_kick_rr_cpu() to
emphasise its use-case. qemu_cpu_kick now kicks the relevant cpu as
well as qemu_kick_rr_cpu() which will become a no-op in MTTCG.

For the time being the setting of the global exit_request remains.

Backports commit 791158d93b27f22a17c2ada06621831d54f09a2c from qemu

Also atomically sets the unicorn equivalents
2018-03-02 09:28:51 -05:00
Lioncash 18a229a69f
Resolve symbol errors with softfloat 2018-03-02 09:25:05 -05:00
KONRAD Frederic c5730ff194
tcg: add options for enabling MTTCG
We know there will be cases where MTTCG won't work until additional work
is done in the front/back ends to support it. It will, however, be useful
to be able to turn it on.

As a result MTTCG will default to off unless the combination is
supported. However the user can turn it on for the sake of testing.

Backports commit 8d4e9146b3568022ea5730d92841345d41275d66 from qemu
2018-03-02 09:25:01 -05:00
Alex Bennée 8c89344517
tcg: move TCG_MO/BAR types into own file
We'll be using the memory ordering definitions to define values for
both the host and guest. To avoid fighting with circular header
dependencies just move these types into their own minimal header.

Backports commit 20937143145b8f5a4194e5c407731ba38797864e from qemu
2018-03-02 09:08:44 -05:00
Pranith Kumar 616becc2dc
mttcg: translate-all: Enable locking debug in a debug build
Enable tcg lock debug asserts in a debug build by default instead of
relying on DEBUG_LOCKING. None of the other DEBUG_* macros have
asserts, so this patch removes DEBUG_LOCKING and enables these asserts
in a debug build.

Backports commit 6ac3d7e845549f08473f020c1c70f14b8911a67e from qemu
2018-03-02 09:00:58 -05:00
Markus Armbruster 89d8e58718
util/cutils: Change qemu_strtosz*() from int64_t to uint64_t
This will permit its use in parse_option_size().

Backports commit f46bfdbfc8f95cf65d7818ef68a801e063c40332 from qemu
2018-03-02 08:58:55 -05:00
Markus Armbruster 8650d0213c
util/cutils: Return qemu_strtosz*() error and value separately
This makes qemu_strtosz(), qemu_strtosz_mebi() and
qemu_strtosz_metric() similar to qemu_strtoi64(), except negative
values are rejected.

Backports commit f17fd4fdf0df3d2f3444399d04c38d22b9a3e1b7 from qemu
2018-03-02 08:57:16 -05:00
Markus Armbruster 6093e67947
util/cutils: Let qemu_strtosz*() optionally reject trailing crap
Change qemu_strtosz() & friends to return -EINVAL when @endptr is
null and the conversion doesn't consume the string completely.
Matches how qemu_strtol() & friends work.

Only test_qemu_strtosz_simple() passes a null @endptr. No functional
change there, because its conversion consumes the string.

Simplify callers that use @endptr only to fail when it doesn't point
to '\0' to pass a null @endptr instead.

Backports commit 4fcdf65ae2c00ae69f7625f26ed41f37d77b403c from qemu
2018-03-02 08:54:53 -05:00
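
A small, stand-alone illustration of the convention adopted here, written against plain strtoll() rather than the real qemu_strtosz() (whose exact signature in this tree may differ): with a null @endptr, any unconsumed trailing characters become -EINVAL, while a non-null @endptr just reports the tail back to the caller. The parse_i64() helper is invented for the example.

    #include <errno.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical helper following the qemu_strtol()-style convention:
     * returns 0 on success or a negative errno, and rejects trailing
     * characters when the caller does not ask for an end pointer. */
    static int parse_i64(const char *nptr, const char **endptr, int64_t *result)
    {
        char *ep;

        errno = 0;
        long long v = strtoll(nptr, &ep, 0);
        if (ep == nptr) {
            return -EINVAL;             /* no conversion performed */
        }
        if (errno == ERANGE) {
            return -ERANGE;             /* out of range */
        }
        if (endptr) {
            *endptr = ep;               /* caller deals with the tail */
        } else if (*ep != '\0') {
            return -EINVAL;             /* trailing characters: reject */
        }
        *result = v;
        return 0;
    }

    int main(void)
    {
        int64_t v;
        const char *tail;

        printf("%d\n", parse_i64("42", NULL, &v));     /* 0: fully consumed */
        printf("%d\n", parse_i64("42abc", NULL, &v));  /* -EINVAL: trailing */
        printf("%d\n", parse_i64("42abc", &tail, &v)); /* 0, tail = "abc"   */
        return 0;
    }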
Markus Armbruster f9c9eb7334
util/cutils: Drop QEMU_STRTOSZ_DEFSUFFIX_* macros
Writing QEMU_STRTOSZ_DEFSUFFIX_* instead of '*' gains nothing. Get
rid of these eyesores.

Backports commit 17f942560e54f8ee72996bc3276c697503606d7b from qemu
2018-03-02 08:53:15 -05:00
Markus Armbruster 858acd4142
util/cutils: New qemu_strtosz()
Most callers of qemu_strtosz_suffix() pass QEMU_STRTOSZ_DEFSUFFIX_B.
Capture the pattern in new qemu_strtosz().

Inline qemu_strtosz_suffix() into its only remaining caller.

Backports commit 466dea14e677555dd24465aca75d00a3537ad062 from qemu
2018-03-02 08:50:56 -05:00
Markus Armbruster a3358798d6
util/cutils: Rename qemu_strtosz() to qemu_strtosz_MiB()
With qemu_strtosz(), no suffix means mebibytes. It's used rarely.
I'm going to add a similar function where no suffix means bytes.
Rename qemu_strtosz() to qemu_strtosz_MiB() to make the name
qemu_strtosz() available for the new function.

Backports commit e591591b323772eea733de6027f5e8b50692d0ff from qemu
2018-03-02 08:49:26 -05:00
Markus Armbruster f656cd91ec
util/cutils: New qemu_strtosz_metric()
To parse numbers with metric suffixes, we use

    qemu_strtosz_suffix_unit(nptr, &eptr, QEMU_STRTOSZ_DEFSUFFIX_B, 1000)

Capture this in a new function for legibility:

    qemu_strtosz_metric(nptr, &eptr)

Replace test_qemu_strtosz_suffix_unit() by test_qemu_strtosz_metric().

Rename qemu_strtosz_suffix_unit() to do_strtosz() and give it internal
linkage.

Backports commit d2734d2629266006b0413433778474d5801c60be from qemu
2018-03-02 08:47:40 -05:00
Markus Armbruster fb962d2e74
util/cutils: Clean up control flow around qemu_strtol() a bit
Reorder check_strtox_error() to make it obvious that we always store
through a non-null @endptr.

Transform

    if (some error) {
        error case ...
        err = value for error case;
    } else {
        normal case ...
        err = value for normal case;
    }
    return err;

to

    if (some error) {
        error case ...
        return value for error case;
    }
    normal case ...
    return value for normal case;

Backports commit 4baef2679e029c76707be1e2ed54bf3dd21693fe from qemu
2018-03-02 08:45:18 -05:00
Markus Armbruster 9236950e61
util/cutils: Clean up variable names around qemu_strtol()
Name same things the same, different things differently.

* qemu_strtol()'s parameter @nptr is called @p in
check_strtox_error(). Rename the latter.

* qemu_strtol()'s parameter @endptr is called @next in
check_strtox_error(). Rename the latter.

* qemu_strtol()'s variable @p is called @endptr in
check_strtox_error(). Rename both to @ep.

* qemu_strtol()'s variable @err is *negative* errno,
check_strtox_error()'s parameter @err is *positive*. Rename the
latter to @libc_errno.

Same for qemu_strtoul(), qemu_strtoi64(), qemu_strtou64(), of course.

Backports commit 717adf960933da0650d995f050d457063d591914 from qemu
2018-03-02 08:41:47 -05:00
Markus Armbruster 41c2e1168f
util/cutils: Rename qemu_strtoll(), qemu_strtoull()
The name qemu_strtoll() suggests conversion to long long, but it
actually converts to int64_t. Rename to qemu_strtoi64().

The name qemu_strtoull() suggests conversion to unsigned long long,
but it actually converts to uint64_t. Rename to qemu_strtou64().

Backports commit b30d188677456b17c1cd68969e08ddc634cef644 from qemu
2018-03-02 08:39:45 -05:00
Markus Armbruster ac34d92d09
util/cutils: Rewrite documentation of qemu_strtol() & friends
Fixes the following documentation bugs:

* Fails to document that null @nptr is safe.

* Fails to document that we return -EINVAL when no conversion could be
performed (commit 47d4be1).

* Confuses long long with int64_t, and unsigned long long with
uint64_t.

* Claims the unsigned conversions can underflow. They can't.

While there, mark problematic assumptions that int64_t is long long,
and uint64_t is unsigned long long with FIXME comments.

Backports commit 4295f879becfbbb9f4330489311586b96915d920 from qemu
2018-03-02 08:37:57 -05:00
Markus Armbruster 9d1937f25d
qdict: Make qdict_get_qlist() safe like qdict_get_qdict()
Commit 89cad9f changed qdict_get_qdict() to return NULL instead of
crashing when the key doesn't exist or its value isn't a QDict.
Commit 2d6421a neglected to do the same for qdict_get_qlist().
Correct that, and update the function comments.

qdict_get_obj() is now unused; remove it.

Backports commit b25f23e7dbc6bc0dcda010222a4f178669d1aedc from qemu
2018-03-02 08:35:17 -05:00
Bharata B Rao 7fadaf0bc4
softfloat: Add float128_to_uint32_round_to_zero()
float128_to_uint32_round_to_zero() is needed by the xscvqpuwz instruction
of PowerPC ISA 3.0.

Backports commit fd425037d25cecaaffdb3831697e0adc10ca2ba3 from qemu
2018-03-02 08:33:09 -05:00
Bharata B Rao 64d32a2237
softfloat: Add float128_to_uint64_round_to_zero()
Implement float128_to_uint64() and use that to implement
float128_to_uint64_round_to_zero().

This is required by the xscvqpudz instruction of PowerPC ISA 3.0.

Backports commit 2e6d85683576c970c714c1cc071dca742835b9d4 from qemu
2018-03-02 08:32:02 -05:00
Bharata B Rao 80e522b499
softfloat: Add round-to-odd rounding mode
Power ISA 3.0 introduces a few quadruple precision floating point
instructions that support the round-to-odd rounding mode. The
round-to-odd mode is defined as follows:

Let Z be the intermediate arithmetic result or the operand of a convert
operation. If Z can be represented exactly in the target format, the
result is Z. Otherwise the result is either Z1 or Z2, whichever is odd.
Here Z1 and Z2 are the next larger and smaller numbers representable
in the target format, respectively.

Backports commit 9ee6f678f473007e252934d6acd09c24490d9d42 from qemu
2018-03-02 08:25:00 -05:00
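
Round-to-odd is commonly implemented by truncating and then "jamming" any discarded bits into the least-significant kept bit. The sketch below shows the rule on a plain 64-bit integer narrowed to a coarser granularity; it only illustrates the rounding mode, it is not the softfloat float128 implementation, and round_to_odd() is a name invented for the example.

    #include <inttypes.h>
    #include <stdio.h>

    /* Round a 64-bit value to a multiple of 2^shift using round-to-odd:
     * truncate, and if any bits were discarded, force the lowest kept
     * bit to 1, so the result is Z1 or Z2, whichever has an odd set of
     * kept bits. */
    static uint64_t round_to_odd(uint64_t z, unsigned shift)
    {
        uint64_t kept = z >> shift;
        uint64_t discarded = z & ((UINT64_C(1) << shift) - 1);

        if (discarded != 0) {
            kept |= 1;              /* "jam" the sticky information in */
        }
        return kept << shift;
    }

    int main(void)
    {
        /* 0x123 rounded to a multiple of 16 (shift = 4): the neighbours
         * are 0x120 (kept bits even) and 0x130 (kept bits odd), and
         * round-to-odd picks 0x130. An exactly representable value such
         * as 0x120 is returned unchanged. */
        printf("0x%" PRIx64 "\n", round_to_odd(0x123, 4));  /* 0x130 */
        printf("0x%" PRIx64 "\n", round_to_odd(0x120, 4));  /* 0x120 */
        return 0;
    }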
Paul Burton 411ddd16cf
target-mips: Provide function to test if a CPU supports an ISA
Provide a new cpu_supports_isa function which allows callers to
determine whether a CPU supports one of the ISA_ flags, by testing
whether the associated struct mips_def_t sets the ISA flags in its
insn_flags field.

An example use of this is to allow boards which generate bootloader code
to determine the properties of the CPU that will be used, for example
whether the CPU is 64 bit or which architecture revision it implements.

Backports commit bed9e5ceb158c886d548fe59675a6eba18baeaeb from qemu
2018-03-02 08:20:19 -05:00
Paolo Bonzini 37918ba5b0
exec: make address_space_cache_destroy idempotent
Clear cache->mr so that address_space_cache_destroy does nothing
the second time it is called.

Backports commit 91047df38dffa80222179f63fbb74c1dfefa25ed from qemu
2018-03-02 08:16:17 -05:00
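
The idempotence here is the usual "clear the handle after releasing it" pattern. A tiny sketch with a hypothetical cache struct, not the actual exec.c code:

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical cache holding an owned resource. */
    typedef struct CacheSketch {
        void *mr;                   /* non-NULL while the cache is live */
    } CacheSketch;

    static void cache_init(CacheSketch *cache)
    {
        cache->mr = malloc(64);
    }

    /* Safe to call any number of times: the first call releases the
     * resource and clears the pointer, later calls see NULL and return. */
    static void cache_destroy(CacheSketch *cache)
    {
        if (!cache->mr) {
            return;                 /* already destroyed, do nothing */
        }
        free(cache->mr);
        cache->mr = NULL;
    }

    int main(void)
    {
        CacheSketch cache;
        cache_init(&cache);
        cache_destroy(&cache);
        cache_destroy(&cache);      /* second call is now a no-op */
        printf("destroyed twice without a double free\n");
        return 0;
    }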
Paolo Bonzini e66da21a56
cpu-exec: remove outermost infinite loop
Reorganize the sigsetjmp so that the restart case falls through
to cpu_handle_exception and the execution loop.

Backports commit 4515e58d60dc3aac53dbd5e53e4c3bec126967d8 from qemu
2018-03-02 08:13:43 -05:00
Paolo Bonzini af524401ad
cpu-exec: avoid repeated sigsetjmp on interrupts
The sigsetjmp only needs to be prepared once for the whole execution
of cpu_exec. This patch takes care of the "== 0" side, using a
nested loop so that cpu_handle_interrupt goes straight back to
cpu_handle_exception without doing another sigsetjmp.

Backports commit a42cf3f3f266a97ceb13e8b99bc7b13f7bf4192a from qemu
2018-03-02 08:09:50 -05:00
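
A schematic of the loop shape the two cpu-exec patches above converge on, using hypothetical stand-in functions: sigsetjmp is prepared once, the longjmp path simply falls through into the same handling code, and the interrupt/TB loop is nested inside so normal iterations never re-run sigsetjmp.

    #include <setjmp.h>
    #include <stdbool.h>
    #include <stdio.h>

    static sigjmp_buf jmp_env;
    static int pending_exception = -1;
    static int iterations;

    /* Stand-in for cpu_handle_exception(): true when the run should stop. */
    static bool handle_exception(int *ret)
    {
        if (pending_exception >= 0) {
            *ret = pending_exception;
            return true;
        }
        return false;
    }

    /* Stand-in for TB execution: after a few iterations, pretend an
     * exception was raised via siglongjmp. */
    static void exec_one_tb(void)
    {
        if (++iterations == 3) {
            pending_exception = 0;
            siglongjmp(jmp_env, 1);
        }
    }

    int main(void)
    {
        int ret = 0;

        /* sigsetjmp is prepared exactly once for the whole run. The
         * longjmp ("!= 0") path does nothing special and simply falls
         * through into the same exception/interrupt loops below. */
        if (sigsetjmp(jmp_env, 0) != 0) {
            printf("longjmp taken after %d TBs, re-entering the loop\n",
                   iterations);
        }

        while (!handle_exception(&ret)) {   /* outer loop: exceptions */
            for (;;) {                      /* inner loop: interrupts + TBs */
                exec_one_tb();
            }
        }
        printf("done, ret = %d\n", ret);
        return ret;
    }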
Paolo Bonzini 28b615a8b7
cpu-exec: avoid cpu_loop_exit in cpu_handle_interrupt
The siglongjmp goes straight back to the beginning of cpu_exec's
outermost loop. We do not need a siglongjmp, we can simply
leave the inner TB execution loop.

Backports commit 209b71b60ef3341246038e1c926c3b704969cdd3 from qemu
2018-03-02 08:03:18 -05:00
Paolo Bonzini b39acfc3c6
cpu-exec: tighten barrier on TCG_EXIT_REQUESTED
This seems to have worked just fine so far on weakly-ordered
architectures, but I don't see anything that prevents the
reordering from:

store 1 to exit_request
store 1 to tcg_exit_req
load tcg_exit_req
store 0 to tcg_exit_req
load exit_request
store 0 to exit_request
store 1 to exit_request
store 1 to tcg_exit_req

to this:

store 1 to exit_request
store 1 to tcg_exit_req
load tcg_exit_req
load exit_request
store 1 to exit_request
store 1 to tcg_exit_req
store 0 to tcg_exit_req
store 0 to exit_request

therefore losing a request. It's possible that other memory barriers
(e.g. in rcu_read_unlock) are hiding it, but better safe than
sorry.

Backports commit a70fe14b7dddcb944fbd6c9f3739cd3a22089af5 from qemu
2018-03-02 08:01:08 -05:00
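
The hazard is a store being reordered after a later load to a different location, which weakly-ordered machines (and even x86 store buffering) permit. A hedged C11 sketch of the defensive pattern: a full barrier between clearing the per-TB flag and re-reading the global one, so a kick that set both flags cannot be lost. The flag names mirror the commit message; this is not the actual cpu-exec code, and the single-threaded main() is only a smoke test.

    #include <stdatomic.h>
    #include <stdbool.h>

    /* Both flags are set by a kicker thread and cleared by the vCPU thread. */
    static atomic_int exit_request;     /* global "please exit" request */
    static atomic_int tcg_exit_req;     /* per-TB-exec "please exit" request */

    /* Kicker side: request an exit. */
    static void kick(void)
    {
        atomic_store(&exit_request, 1);
        atomic_store(&tcg_exit_req, 1);
    }

    /* vCPU side: acknowledge the per-TB flag, then look at the global one.
     * Without the fence, the store clearing tcg_exit_req could be ordered
     * after the load of exit_request, and a kick landing in between could
     * be wiped out when both flags end up cleared. */
    static bool check_exit(void)
    {
        if (atomic_load(&tcg_exit_req)) {
            atomic_store(&tcg_exit_req, 0);
            atomic_thread_fence(memory_order_seq_cst);  /* store/load barrier */
            if (atomic_load(&exit_request)) {
                atomic_store(&exit_request, 0);
                return true;
            }
        }
        return false;
    }

    int main(void)
    {
        kick();
        return check_exit() ? 0 : 1;    /* single-threaded smoke test only */
    }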
Wei Huang c9bdf5e6c7
target-arm: Enable vPMU support under TCG mode
This patch contains several fixes to enable vPMU under TCG mode. It
first removes the check of kvm_enabled() when unsetting
ARM_FEATURE_PMU. With it, the .pmu option can be used to turn vPMU on/off
under TCG mode. Secondly, the PMU node of the DT table is now created under
TCG. The last fix is to disable the masking of the PMUver field of
ID_AA64DFR0_EL1.

Backports commit d6f02ce3b8a43ddd8f83553fe754a34b26fb273f from qemu
2018-03-02 07:58:48 -05:00
Wei Huang 5e3349a818
target-arm: Add support for PMU register PMINTENSET_EL1
This patch adds access support for PMINTENSET_EL1.

Backports commit e6ec54571e424bb1d6e50e32fe317c616cde3e05 from qemu
2018-03-02 07:57:40 -05:00
Wei Huang 3b34b7f0f9
target-arm: Add support for AArch64 PMU register PMXEVTYPER_EL0
In order to support Linux perf, which uses PMXEVTYPER register,
this patch adds read/write access support for PMXEVTYPER. The access
is CONSTRAINED UNPREDICTABLE when PMSELR is not 0x1f. Additionally
this patch adds support for PMXEVTYPER_EL0.

Backports commit fdb8665672ded05f650d18f8b62d5c8524b4385b from qemu
2018-03-02 07:53:05 -05:00
Wei Huang 1165020022
target-arm: Add support for PMU register PMSELR_EL0
This patch adds support for AArch64 register PMSELR_EL0. The existing
PMSELR definition is revised accordingly.

Backports commit 6b0407805d46bbeba70f4be426285d0a0e669750 from qemu
2018-03-02 07:39:43 -05:00
Peter Maydell bddeac4430
target/arm: A32, T32: Create Instruction Syndromes for Data Aborts
Add support for generating the ISS (Instruction Specific Syndrome)
for Data Abort exceptions taken from AArch32. These syndromes are
used by hypervisors for example to trap and emulate memory accesses.

This is the equivalent for AArch32 guests of the work done for AArch64
guests in commit aaa1f954d4cab243.

Backports commit 9bb6558a218bf7e466e5ac1100639517d8a30d33 from qemu
2018-03-02 00:37:06 -05:00
Peter Maydell 74d42aa939
target/arm: Abstract out pbit/wbit tests in ARM ldr/str decode
In the ARM ldr/str decode path, rather than directly testing
"insn & (1 << 21)" and "insn & (1 << 24)", abstract these
bits out into wbit and pbit local flags. (We will want to
do more tests against them to determine whether we need to
provide syndrome information.)

Backports commit 63f26fcfda8e19f94ce23336726d14805250a5b6 from qemu
2018-03-02 00:26:58 -05:00
Julian Brown cc217b0c90
arm: Correctly handle watchpoints for BE32 CPUs
In BE32 mode, sub-word size watchpoints can fail to trigger because the
address of the access is adjusted in the opcode helpers before being
compared with the watchpoint registers. This patch reverses the address
adjustment before performing the comparison with the help of a new CPUClass
hook.

This version of the patch augments and tidies up comments a little.

Backports commit 40612000599e52e792d23c998377a0fa429c4036 from qemu
2018-03-02 00:24:33 -05:00
Julian Brown 58059c3a35
Fix Thumb-1 BE32 execution and disassembly.
Thumb-1 code has some issues in BE32 mode (as currently implemented). In
short, since bytes are swapped within words at load time for BE32
executables, this also swaps pairs of adjacent Thumb-1 instructions.

This patch un-swaps those pairs of instructions again, both for execution,
and for disassembly. (The previous version of the patch always read four
bytes in arm_read_memory_func and then extracted the proper two bytes,
in a probably misguided attempt to match the behaviour of actual hardware
as described by e.g. the ARM9TDMI TRM, section 3.3 "Endian effects for
instruction fetches". It's less complicated to just read the correct
two bytes though.)

Backports commit f7478a92dd9ee2276bfaa5b7317140d3f9d6a53b from qemu
2018-03-02 00:20:11 -05:00
Julian Brown 1aedb26670
target/arm: Add cfgend parameter for ARM CPU selection.
Add a new "cfgend" property which selects whether the CPU resets into
big-endian mode or not. This setting affects whether we reset with
SCTLR_B (ARMv6 and earlier) or SCTLR_EE (ARMv7 and later) set.

Backports commit 3a062d5730266b2386eeda68b1a1c6e96451db31 from qemu
2018-03-02 00:18:18 -05:00
Bharata B Rao 4324d1e97e
softfloat: Fix the default qNAN for target-ppc
Currently float128_default_nan() returns 0xFFFF800000000000 in the
higher double word, but it should return 0x7FFF800000000000, which
is the correct higher double word for the default qNAN on PowerPC.

Backports commit 5d51eaea84899d88cb161fab3f089168e3812e9e from qemu
2018-03-02 00:15:36 -05:00
Michael S. Tsirkin ad6873ec57
arm: better stub version for MISMATCH_CHECK
The stub version of MISMATCH_CHECK is empty, so it's easy for people not
building KVM on ARM to misuse it. Use QEMU_BUILD_BUG_ON, similar to the
non-stub version, to make it easier to catch bugs.

Backports commit 705ae59fecae341a4b1a45ce48b46de4b1bb3cf4 from qemu
2018-03-02 00:13:45 -05:00
Michael S. Tsirkin 4d1139f83f
arm: add trailing ; after MISMATCH_CHECK
Macro calls without a trailing ; look weird in C; this only works as a side
effect of how QEMU_BUILD_BUG_ON is implemented. Fix this up.

Backports commit 1b28762a333bd238611103e9ed2348d7af93b0db from qemu
2018-03-02 00:12:04 -05:00
Michael S. Tsirkin 0455644974
ARRAY_SIZE: check that argument is an array
It's a familiar pattern: some code uses ARRAY_SIZE, then refactoring
changes the argument from an array to a pointer to a dynamically
allocated buffer. The code keeps compiling, but any ARRAY_SIZE calls now
return the size of the pointer divided by the element size.

Let's add build time checks to ARRAY_SIZE before we allow more
of these in the code-base.

Backports commit ed63ec0d22ccdce3b2222d9a514423b7fbba3a0d from qemu
2018-03-02 00:09:51 -05:00
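
The usual way to do this (as Linux's __must_be_array does, and presumably something similar is added here) is a build-time expression that evaluates to zero for a real array and fails to compile once the argument has decayed to a pointer. A self-contained sketch using GCC/Clang builtins; the macro names are local to the example rather than the exact ones in the tree.

    #include <stdio.h>

    /* Expression-form build-time assert: evaluates to 0 if e is false,
     * fails to compile (negative bit-field width) if e is true. */
    #define BUILD_BUG_ON_ZERO(e) (sizeof(struct { int: -!!(e); }) * 0)

    /* Compile error if x is a pointer rather than an array: an array type
     * is never compatible with a pointer to its own element. */
    #define MUST_BE_ARRAY(x) \
        BUILD_BUG_ON_ZERO(__builtin_types_compatible_p(__typeof__(x), \
                                                       __typeof__(&(x)[0])))

    #define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]) + MUST_BE_ARRAY(x))

    int main(void)
    {
        int a[7];
        int *p = a;

        printf("%zu\n", ARRAY_SIZE(a));        /* 7 */
        /* printf("%zu\n", ARRAY_SIZE(p)); */  /* would no longer compile */
        (void)p;
        return 0;
    }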
Michael S. Tsirkin ac013df0a2
compiler: expression version of QEMU_BUILD_BUG_ON
QEMU_BUILD_BUG_ON uses a typedef in order to be safe
to use outside functions, but sometimes it's useful
to have a version that can be used within an expression.
Following what Linux does, introduce QEMU_BUILD_BUG_ON_ZERO,
which returns zero after checking the condition at build time.

Backports commit d757573e69f2ef58a4a7b41f6c55d65fa1e1c5c2 from qemu
2018-03-02 00:07:33 -05:00
Michael S. Tsirkin 634a8094f1
compiler: rework BUG_ON using a struct
There are theoretical concerns that some compilers might not trigger
build failures on attempts to define an array of size (x ? -1 : 1) where
x is a variable, and instead make it a variable-length array. Let's rewrite
using a struct with a negative bit-field size instead, as there are no
dynamic bit-field sizes. This is similar to what Linux does.

Backports commit f291887e8eef5d37d31484638f6e62401b4b99a2 from qemu
2018-03-02 00:05:07 -05:00
Michael S. Tsirkin 7f9fb3395c
QEMU_BUILD_BUG_ON: use __COUNTER__
Some headers use QEMU_BUILD_BUG_ON. This causes a problem
if the C file including that header happens to have
QEMU_BUILD_BUG_ON at the same line number.

Fix using a widely available extension: __COUNTER__.
If unavailable, provide a stub.

Backports commit 60abf0a5e05134187e274ce5f32524ccf0cae1a6 from qemu
2018-03-02 00:03:44 -05:00
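
A sketch of how __COUNTER__ avoids the collisions: the counter value is pasted into the typedef name, so every expansion in a translation unit gets a distinct identifier even when two expansions share a line number. The macro names below are illustrative, not necessarily the tree's exact definitions.

    #include <stdio.h>

    /* Two levels of pasting so that __COUNTER__ expands before the ## . */
    #define GLUE_(a, b) a##b
    #define GLUE(a, b)  GLUE_(a, b)

    /* Typedef-based build-time assert; the negative array size breaks the
     * build when x is true, and __COUNTER__ gives every expansion a unique
     * typedef name. */
    #define MY_BUILD_BUG_ON(x) \
        typedef char GLUE(my_build_bug_on_, __COUNTER__)[(x) ? -1 : 1]

    /* These two expansions would collide if the name were derived from
     * __LINE__; with __COUNTER__ they are distinct identifiers. */
    MY_BUILD_BUG_ON(sizeof(int) < 2); MY_BUILD_BUG_ON(sizeof(long) < sizeof(int));

    int main(void)
    {
        printf("build-time checks passed\n");
        return 0;
    }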
Michael S. Tsirkin beca05eb5f
compiler: drop ; after BUILD_BUG_ON
All users include the trailing ; anyway, so let's require it;
that seems cleaner.

Backports commit f29831828441318c7916ae28e6e16e4a1c4a6795 from qemu
2018-03-02 00:01:44 -05:00
Ladi Prosek babf848b82
memory: don't sign-extend 32-bit writes
ldl_p has a signed return type so assigning it to uint64_t implicitly
sign-extends the value. This results in devices with min_access_size = 8
seeing unexpected values passed to their write handlers.

Example: guest performs a 32-bit write of 0x80000000 to an mmio region
and the handler receives 0xFFFFFFFF80000000 in its value argument.

Backports commit 6da67de6803e93cbb7e93ac3497865832f8c00ea from qemu
2018-03-02 00:00:22 -05:00
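
A self-contained demonstration of the bug class: a signed 32-bit load result assigned straight into a uint64_t is sign-extended, so 0x80000000 becomes 0xFFFFFFFF80000000, while going through uint32_t first gives the intended zero-extension. The ldl_p_sketch() helper below is a stand-in invented for the example.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Stand-in for ldl_p(): a 32-bit load with a *signed* return type. */
    static int ldl_p_sketch(const void *p)
    {
        int v;
        memcpy(&v, p, sizeof(v));
        return v;
    }

    int main(void)
    {
        uint32_t guest_write = 0x80000000u;     /* 32-bit MMIO write value */
        uint8_t buf[4];
        memcpy(buf, &guest_write, sizeof(buf));

        /* Buggy: the int result is sign-extended when widened to uint64_t. */
        uint64_t bad  = ldl_p_sketch(buf);
        /* Fixed: go through uint32_t so the value is zero-extended. */
        uint64_t good = (uint32_t)ldl_p_sketch(buf);

        printf("bad  = 0x%016" PRIx64 "\n", bad);   /* 0xffffffff80000000 */
        printf("good = 0x%016" PRIx64 "\n", good);  /* 0x0000000080000000 */
        return 0;
    }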
Peter Maydell 48825c1be2
target/arm: Drop IS_M() macro
We only use the IS_M() macro in two places, and it's a bit of a
namespace grab to put in cpu.h. Drop it in favour of just explicitly
calling arm_feature() in the places where it was used.

Backports commit 531c60a97ab51618b4b9ccef1c5fe00607079706 from qemu
2018-03-01 23:59:09 -05:00
Cao jin f2a5ddf5dc
util/mmap-alloc: refactor a little bit for readability
The 1st mmap returns *ptr*, which is aligned to the host page size:

    |<------------ size + align ------------>|
    ------------------------------------------
    ptr

The input param *align* could be 1M, 2M, or the host page size. After
QEMU_ALIGN_UP, offset will be >= 0.

The 2nd mmap uses the flag MAP_FIXED and returns ptr + offset, or else fails.
If it succeeds, we will have something like:

    | offset |             size              |
    ------------------------------------------
    ptr      ptr1

*ptr1* is what we really want to return; it equals ptr + offset.

Backports commit 6e4c890e15b23f078650499fbde11760b8eccf10 from qemu
2018-03-01 23:55:15 -05:00
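
A sketch of the reserve-then-MAP_FIXED pattern described above: over-allocate by the alignment, compute the aligned offset inside the reservation, map the real region at that fixed address, and trim the slack. Error handling is minimal and the helper is illustrative; the real qemu_ram_mmap() also handles fd-backed mappings and more.

    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define ALIGN_UP(x, a) (((x) + (a) - 1) & ~((uintptr_t)(a) - 1))

    /* Return size bytes of anonymous memory aligned to 'align' (a power of
     * two that is at least the host page size). */
    static void *mmap_aligned(size_t size, size_t align)
    {
        /* 1st mmap: reserve size + align so an aligned block must fit. */
        void *ptr = mmap(NULL, size + align, PROT_NONE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (ptr == MAP_FAILED) {
            return NULL;
        }

        /* Offset of the first 'align'-aligned address in the reservation. */
        size_t offset = ALIGN_UP((uintptr_t)ptr, align) - (uintptr_t)ptr;

        /* 2nd mmap: MAP_FIXED at ptr + offset gives us ptr1, the result. */
        void *ptr1 = mmap((char *)ptr + offset, size, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
        if (ptr1 == MAP_FAILED) {
            munmap(ptr, size + align);
            return NULL;
        }

        /* Trim the unused head and tail of the original reservation. */
        if (offset > 0) {
            munmap(ptr, offset);
        }
        if (align > offset) {
            munmap((char *)ptr1 + size, align - offset);
        }
        return ptr1;
    }

    int main(void)
    {
        size_t page = (size_t)sysconf(_SC_PAGESIZE);
        void *p = mmap_aligned(page, 2 * 1024 * 1024);  /* 2M-aligned block */

        printf("aligned pointer: %p\n", p);
        return p ? 0 : 1;
    }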
Cao jin 217c14ad3e
util/mmap-alloc: check parameter before using
Backports commit 4a3ecf201a1a49a804e8506df5906e446707c3b1 from qemu
2018-03-01 23:53:45 -05:00