Commit graph

653 commits

Emilio G. Cota 7d0440dec4
tb-hash: improve tb_jmp_cache hash function in user mode
Optimizations to cross-page chaining and indirect branches make
performance more sensitive to the hit rate of tb_jmp_cache.
The constraint of reserving some bits for the page number
lowers the achievable quality of the hashing function.

However, user mode does not have this requirement. Thus,
with this change user mode switches to a hashing function that
is both faster and of better quality than the previous one.

Measurements:

Note: baseline (i.e. speedup == 1x) is QEMU v2.9.0.

- SPECint06 (test set), x86_64-linux-user. Host: Intel i7-6700K @ 4.00GHz

[ASCII gnuplot chart: speedup (0.8x-2.2x) of "jr", "jr+multhash" and "jr+hash"
over the baseline for astar, bzip2, gcc, gobmk, h264ref, hmmer, libquantum,
mcf, omnetpp, perlbench, sjeng, xalancbmk and their harmonic mean (hmean).]
png: http://imgur.com/4UXTrEc

Here I also tried the hash function suggested by Paolo ("multhash"):

return ((uint64_t) (pc * 2654435761) >> 32) & (TB_JMP_CACHE_SIZE - 1);

As you can see it is just as good as the other new function ("hash"),
which is what I ended up going with.
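
For reference, here is that "multhash" line wrapped into a self-contained
helper. This only illustrates the multiplicative-hash idea quoted above, not
the function that was merged; the cache-size constants are assumptions for
the sketch, not taken from the commit.

#include <stdint.h>

#define TB_JMP_CACHE_BITS 12                       /* assumed index width, sketch only */
#define TB_JMP_CACHE_SIZE (1 << TB_JMP_CACHE_BITS)

/* Knuth multiplicative hash: multiply the PC by 2654435761 (2^32 divided by
 * the golden ratio), take the upper half of the truncated 64-bit product,
 * then mask down to the tb_jmp_cache index width. */
static inline unsigned int tb_jmp_cache_hash_multhash(uint64_t pc)
{
    return ((pc * 2654435761u) >> 32) & (TB_JMP_CACHE_SIZE - 1);
}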

- SPECint06 (train set), x86_64-linux-user. Host: Intel i7-6700K @ 4.00GHz

[ASCII gnuplot chart: speedup (1x-2.6x) of "jr" and "jr+hash" over the
baseline for astar, bzip2, gcc, gobmk, h264ref, hmmer, libquantum, mcf,
omnetpp, perlbench, sjeng, xalancbmk and their harmonic mean (hmean).]
png: http://imgur.com/ArCbHqo

- NBench, x86_64-linux-user. Host: Intel i7-6700K @ 4.00GHz

[ASCII gnuplot chart: speedup (0.96x-1.12x) of "jr" and "jr+hash" over the
baseline for the NBench tests ASSIGNMENT, BITFIELD, FOURIER, FP EMULATION,
HUFFMAN, LU DECOMPOSITION, NEURAL NET, NUMERIC SORT, STRING SORT and their
harmonic mean (hmean).]
png: http://imgur.com/ZXFX0hJ

- NBench, arm-linux-user. Host: Intel i7-4790K @ 4.00GHz

[ASCII gnuplot chart: speedup (0.9x-1.3x) of "jr" and "jr+hash" over the
baseline for the NBench tests ASSIGNMENT, BITFIELD, FOURIER, FP EMULATION,
HUFFMAN, LU DECOMPOSITION, NEURAL NET, NUMERIC SORT, STRING SORT and their
harmonic mean (hmean).]
png: http://imgur.com/FfD27ey

Backports commit 6f1653180f5701c6a8f1b35b89a80b1e3260928e from qemu
2018-03-03 14:11:29 -05:00
Emilio G. Cota 8f4f15e5f5
tcg: Introduce goto_ptr opcode and tcg_gen_lookup_and_goto_ptr
Instead of exporting goto_ptr directly to TCG frontends, export
tcg_gen_lookup_and_goto_ptr(), which calls goto_ptr with the pointer
returned by the lookup_tb_ptr() helper. This is the only use case
we have for goto_ptr and lookup_tb_ptr, so having this function is
very convenient. Furthermore, it trivially allows us to avoid calling
the lookup helper if goto_ptr is not implemented by the backend.
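
A rough sketch of what the lookup half conceptually does, to show why folding
it into tcg_gen_lookup_and_goto_ptr() is convenient. The field and symbol
names (tb_jmp_cache, tc_ptr, code_gen_epilogue) are recollections of that era
of QEMU and should be treated as assumptions, not the exact backported code.

/* Conceptual sketch only: resolve the current PC to a block of translated
 * code, or fall back to the exit epilogue when nothing usable is cached. */
static void *lookup_tb_ptr_sketch(CPUState *cpu, target_ulong pc)
{
    TranslationBlock *tb = cpu->tb_jmp_cache[tb_jmp_cache_hash_func(pc)];

    if (tb == NULL || tb->pc != pc) {
        return tcg_ctx.code_gen_epilogue;  /* miss: leave the translated-code loop */
    }
    return tb->tc_ptr;                     /* hit: goto_ptr jumps straight here */
}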

Backports commit cedbcb01529cb6cf9a2289cdbebbc63f6149fc18 from qemu
2018-03-02 21:05:18 -05:00
Richard Henderson 23d8f5fba2
qemu/atomic: Loosen restrictions for 64-bit ILP32 hosts
We need to coordinate with the TCG_OVERSIZED_GUEST test in cputlb.c,
and allow 64-bit atomics even though sizeof(void *) == 4.

Backports commit 374aae653499f4d405caf32b7fff0c8639113fe4 from qemu
2018-03-02 20:06:39 -05:00
Peter Xu fce1b469e5
memory: tune last param of iommu_ops.translate()
This patch converts the old "is_write" bool into IOMMUAccessFlags. The
difference is that "is_write" can only express read or write, but
sometimes what we really want is "none" here (neither read nor write).
Replay is a good example - during replay, we should not check any RW
permission bits since that's not an actual IO at all.
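
For context, the flag type being converted to looks roughly like the
following (as recalled from QEMU's memory.h of that period; treat the exact
spelling as an assumption):

typedef enum {
    IOMMU_NONE = 0,   /* no access: the case "is_write" could not express */
    IOMMU_RO   = 1,
    IOMMU_WO   = 2,
    IOMMU_RW   = 3,
} IOMMUAccessFlags;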

Backports commit bf55b7afce53718ef96f4e6616da62c0ccac37dd from qemu
2018-03-02 18:59:12 -05:00
Gerd Hoffmann 108354cc4a
bitmap: add bitmap_copy_and_clear_atomic
Backports commit d6eb1413920affb7be3df9982682dd183a805dd7 from qemu
2018-03-02 18:59:11 -05:00
Peter Maydell b8b70dfcd2
Drop QEMU_GNUC_PREREQ() checks for gcc older than 4.1
We already require gcc 4.1 or newer (for the atomic
support), so the fallback codepaths for older gcc
versions than that are now dead code and we can
just delete them.

NB: clang reports itself as gcc 4.2 (regardless of
clang version), so clang won't be using the fallbacks
either.

Backports commit fa54abb8c298f892639ffc4bc2f61448ac3be4a1 from qemu
2018-03-02 18:59:05 -05:00
Paolo Bonzini c27870520a
exec: revert MemoryRegionCache
MemoryRegionCache did not know about virtio support for IOMMUs (because the
two features were developed at the same time). Revert MemoryRegionCache
to "normal" address_space_* operations for 2.9, as it is simpler than
undoing the virtio patches.

Backports commit 90c4fe5fc517a045e7a7cf2f23472e114042ca29 from qemu
2018-03-02 14:30:41 -05:00
Michael Davidsaver 2769c6ada0
armv7m: Fix reads of CONTROL register bit 1
The v7m CONTROL register bit 1 is SPSEL, which indicates
the stack being used. We were storing this information
not in v7m.control but in the separate v7m.other_sp
structure field. Unfortunately, the code handling reads
of the CONTROL register didn't take account of this, and
so if SPSEL was updated by an exception entry or exit then
a subsequent guest read of CONTROL would get the wrong value.

Using a separate structure field doesn't really gain us
anything in efficiency, so drop this unnecessary complexity
in favour of simply storing all the bits in v7m.control.

This is a migration compatibility break for M profile
CPUs only.

Backports commit abc24d86cc0364f402e438fae3acb14289b40734 from qemu
2018-03-02 13:26:38 -05:00
Dr. David Alan Gilbert 55d79cf4c0
RAMBlocks: qemu_ram_is_shared
Provide a helper to say whether a RAMBlock was created as a
shared mapping.
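
A minimal sketch of such a helper, assuming a RAM_SHARED flag recorded at
creation time (the actual field and flag names may differ):

bool qemu_ram_is_shared(RAMBlock *rb)
{
    /* RAM_SHARED is set when the block was created as a shared mapping */
    return rb->flags & RAM_SHARED;
}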

Backports commit 463a4ac23bcf0f0b65c850fa66f5ae6e43edd243 from qemu
2018-03-02 13:05:35 -05:00
Dr. David Alan Gilbert 5dfbee8930
memory_region: Fix name comments
The 'name' parameter to memory_region_init_* had been marked as debug
only, however vmstate_region_ram uses it as a parameter to
qemu_ram_set_idstr to set RAMBlock names and these form part of the
migration stream.

Backports commit e8f5fe2de125a0bfbefbaa6a69af81f4817cb7a0 from qemu
2018-03-02 13:01:23 -05:00
Markus Armbruster 8a8dc93945
qapi: Improve qobject visitor documentation
Backports commit aa3a982e674b09ae32502940f93ba98b3a8ad50e from qemu
2018-03-02 12:24:21 -05:00
Markus Armbruster ac1a61af47
qapi: Make input visitors detect unvisited list tails
Fix the design flaw demonstrated in the previous commit: new method
check_list() lets input visitors report that unvisited input remains
for a list, exactly like check_struct() lets them report that
unvisited input remains for a struct or union.

Implement the method for the qobject input visitor (straightforward),
and the string input visitor (less so, due to the magic list syntax
there). The opts visitor's list magic is even more impenetrable, and
all I can do there today is a stub with a FIXME comment. No worse
than before.

Backports commit a4a1c70dc759e5b81627e96564f344ab43ea86eb from qemu
2018-03-02 12:21:04 -05:00
Markus Armbruster e0ee098c4a
qapi: Drop unused non-strict qobject input visitor
The split between tests/test-qobject-input-visitor.c and
tests/test-qobject-input-strict.c now makes less sense than ever. The
next commit will take care of that.

Backports commit 048abb7b20c9f822ad9d4b730bade73b3311a47a from qemu
2018-03-02 12:14:52 -05:00
Markus Armbruster 50e3cda49a
qapi: Drop string input visitor method optional()
visit_optional() is to be called only between visit_start_struct() and
visit_end_struct(). Visitors that don't support struct visits,
i.e. don't implement start_struct(), end_struct(), have no use for it.
Clarify documentation.

The string input visitor doesn't support struct visits. Its
parse_optional() is therefore useless. Drop it.

Backports commit a8aec6de2ac1a5e36989fdfba29067b361009b75 from qemu
2018-03-02 12:07:55 -05:00
Markus Armbruster 84e5261cdf
qapi: Improve qobject input visitor error reporting
Error messages refer to nodes of the QObject being visited by name.
Trouble is the names are sometimes less than helpful:

* The name of the root QObject is whatever @name argument got passed
to the visitor, except NULL gets mapped to "null". We commonly pass
NULL. Not good.

Avoiding errors "at the root" mitigates. For instance,
visit_start_struct() can only fail when the visited object is not a
dictionary, and we commonly ensure it is beforehand.

* The name of a QDict's member is the member key. Good enough only
when this happens to be unique.

* The name of a QList's member is "null". Not good.

Improve error messages by referring to nodes by path instead, as
follows:

* The path of the root QObject is whatever @name argument got passed
to the visitor, except NULL gets mapped to "<anonymous>".

* The path of a root QDict's member is the member key.

* The path of a root QList's member is "[%u]", where %u is the list
index, starting at zero.

* The path of a non-root QDict's member is the path of the QDict
concatenated with "." and the member key.

* The path of a non-root QList's member is the path of the QList
concatenated with "[%u]", where %u is the list index.

For example, the incorrect QMP command

{ "execute": "blockdev-add", "arguments": { "node-name": "foo", "driver": "raw", "file": {"driver": "file" } } }

now fails with

{"error": {"class": "GenericError", "desc": "Parameter 'file.filename' is missing"}}

instead of

{"error": {"class": "GenericError", "desc": "Parameter 'filename' is missing"}}

and

{ "execute": "input-send-event", "arguments": { "device": "bar", "events": [ [] ] } }

now fails with

{"error": {"class": "GenericError", "desc": "Invalid parameter type for 'events[0]', expected: object"}}

instead of

{"error": {"class": "GenericError", "desc": "Invalid parameter type for 'null', expected: QDict"}}

Aside: calling the thing "parameter" is suboptimal for QMP, because
the root object is "arguments" there.

The qobject output visitor doesn't have this problem because it should
not fail. Same for dealloc and clone visitors.

The string visitors don't have this problem because they visit just
one value, whose name needs to be passed to the visitor as @name. The
string output visitor shouldn't fail anyway.

The options visitor uses QemuOpts names. Their name space is flat, so
the use of QDict member keys as names is fine. NULL names used with
roots and lists could conceivably result in bad error messages. Left
for another day.

Backports commit a9fc37f6bc3f2ab90585cb16493da9f6dcfbfbcf from qemu
2018-03-02 12:05:53 -05:00
Markus Armbruster d07bcef231
qmp: Eliminate silly QERR_QMP_* macros
The QERR_ macros are leftovers from the days of "rich" error objects.

QERR_QMP_BAD_INPUT_OBJECT, QERR_QMP_BAD_INPUT_OBJECT_MEMBER,
QERR_QMP_EXTRA_MEMBER are used in just one place now, except for one
use that has crept into qobject-input-visitor.c.

Drop these macros, to make the (bad) error messages more visible.

Backports commit 99fb0c53c038105bae68b02a3d9f1cbf7951ba10 from qemu
2018-03-02 11:28:17 -05:00
Yongji Xie 23f5b17a08
memory: Introduce DEVICE_HOST_ENDIAN for ram device
At the moment a ram device's memory regions are DEVICE_NATIVE_ENDIAN, which is
incorrect. This memory region is backed by an MMIO area in the host, so the
uint64_t data that MemoryRegionOps read from/write to this area should be
host-endian rather than target-endian. Hence, the current code does not work
when target and host endianness differ, which is the most common case
on PPC64. To fix it, this introduces DEVICE_HOST_ENDIAN for the ram device.

This has been tested on PPC64 BE/LE host/guest in all possible combinations
including TCG.
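
A plausible way to express the new constant is to alias it to whichever fixed
endianness the build host has, e.g. (sketch; the guard macro name is an
assumption):

#if defined(HOST_WORDS_BIGENDIAN)
#define DEVICE_HOST_ENDIAN DEVICE_BIG_ENDIAN
#else
#define DEVICE_HOST_ENDIAN DEVICE_LITTLE_ENDIAN
#endif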

Backports commit c99a29e702528698c0ce2590f06ca7ff239f7c39 from qemu
2018-03-02 11:24:32 -05:00
Alex Bennée 454932263c
cputlb and arm/sparc targets: convert mmuidx flushes from varg to bitmap
While the varargs approach was flexible, the original MTTCG ended up
having to munge the bits into a bitmap so the data could be used in
deferred work helpers. Instead of hiding that in cputlb, we push the
change to the API to make it take a bitmap of MMU indexes instead.

For ARM some of the resulting flushes end up being quite long, so to aid
readability I've tended to move the index shifting to a new line so
all the bits being OR-ed together line up nicely, for example:

tlb_flush_page_by_mmuidx(other_cs, pageaddr,
                         (1 << ARMMMUIdx_S1SE1) |
                         (1 << ARMMMUIdx_S1SE0));

Backports commit 0336cbf8532935d8e23c2aabf3e2ce2c0697b6ac from qemu
2018-03-02 10:12:40 -05:00
KONRAD Frederic c5730ff194
tcg: add options for enabling MTTCG
We know there will be cases where MTTCG won't work until additional work
is done in the front/back ends to support it. It will however be useful to
be able to turn it on.

As a result MTTCG will default to off unless the combination is
supported. However the user can turn it on for the sake of testing.

Backports commit 8d4e9146b3568022ea5730d92841345d41275d66 from qemu
2018-03-02 09:25:01 -05:00
Markus Armbruster 89d8e58718
util/cutils: Change qemu_strtosz*() from int64_t to uint64_t
This will permit its use in parse_option_size().

Backports commit f46bfdbfc8f95cf65d7818ef68a801e063c40332 from qemu
2018-03-02 08:58:55 -05:00
Markus Armbruster 8650d0213c
util/cutils: Return qemu_strtosz*() error and value separately
This makes qemu_strtosz(), qemu_strtosz_mebi() and
qemu_strtosz_metric() similar to qemu_strtoi64(), except negative
values are rejected.
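
After this change the call shape is roughly the following (sketch of the
resulting API as understood here; the header in cutils is authoritative):

int qemu_strtosz(const char *nptr, const char **end, uint64_t *result);

/* usage sketch */
static bool parse_size(const char *str, uint64_t *out)
{
    /* errors come back through the return value, not a sentinel size */
    return qemu_strtosz(str, NULL, out) == 0;
}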

Backports commit f17fd4fdf0df3d2f3444399d04c38d22b9a3e1b7 from qemu
2018-03-02 08:57:16 -05:00
Markus Armbruster 858acd4142
util/cutils: New qemu_strtosz()
Most callers of qemu_strtosz_suffix() pass QEMU_STRTOSZ_DEFSUFFIX_B.
Capture the pattern in new qemu_strtosz().

Inline qemu_strtosz_suffix() into its only remaining caller.

Backports commit 466dea14e677555dd24465aca75d00a3537ad062 from qemu
2018-03-02 08:50:56 -05:00
Markus Armbruster a3358798d6
util/cutils: Rename qemu_strtosz() to qemu_strtosz_MiB()
With qemu_strtosz(), no suffix means mebibytes. It's used rarely.
I'm going to add a similar function where no suffix means bytes.
Rename qemu_strtosz() to qemu_strtosz_MiB() to make the name
qemu_strtosz() available for the new function.

Backports commit e591591b323772eea733de6027f5e8b50692d0ff from qemu
2018-03-02 08:49:26 -05:00
Markus Armbruster f656cd91ec
util/cutils: New qemu_strtosz_metric()
To parse numbers with metric suffixes, we use

qemu_strtosz_suffix_unit(nptr, &eptr, QEMU_STRTOSZ_DEFSUFFIX_B, 1000)

Capture this in a new function for legibility:

qemu_strtosz_metric(nptr, &eptr)

Replace test_qemu_strtosz_suffix_unit() by test_qemu_strtosz_metric().

Rename qemu_strtosz_suffix_unit() to do_strtosz() and give it internal
linkage.

Backports commit d2734d2629266006b0413433778474d5801c60be from qemu
2018-03-02 08:47:40 -05:00
Markus Armbruster 41c2e1168f
util/cutils: Rename qemu_strtoll(), qemu_strtoull()
The name qemu_strtoll() suggests conversion to long long, but it
actually converts to int64_t. Rename to qemu_strtoi64().

The name qemu_strtoull() suggests conversion to unsigned long long,
but it actually converts to uint64_t. Rename to qemu_strtou64().
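
The renamed helpers keep the strtol-style shape, roughly (sketch; see
util/cutils.c for the exact prototypes):

int qemu_strtoi64(const char *nptr, const char **endptr, int base,
                  int64_t *result);
int qemu_strtou64(const char *nptr, const char **endptr, int base,
                  uint64_t *result);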

Backports commit b30d188677456b17c1cd68969e08ddc634cef644 from qemu
2018-03-02 08:39:45 -05:00
Bharata B Rao 7fadaf0bc4
softfloat: Add float128_to_uint32_round_to_zero()
float128_to_uint32_round_to_zero() is needed by xscvqpuwz instruction
of PowerPC ISA 3.0.

Backports commit fd425037d25cecaaffdb3831697e0adc10ca2ba3 from qemu
2018-03-02 08:33:09 -05:00
Bharata B Rao 64d32a2237
softfloat: Add float128_to_uint64_round_to_zero()
Implement float128_to_uint64() and use that to implement
float128_to_uint64_round_to_zero()

This is required by xscvqpudz instruction of PowerPC ISA 3.0.

Backports commit 2e6d85683576c970c714c1cc071dca742835b9d4 from qemu
2018-03-02 08:32:02 -05:00
Bharata B Rao 80e522b499
softfloat: Add round-to-odd rounding mode
Power ISA 3.0 introduces a few quadruple precision floating point
instructions that support round-to-odd rounding mode. The
round-to-odd mode is explained as follows:

Let Z be the intermediate arithmetic result or the operand of a convert
operation. If Z can be represented exactly in the target format, the
result is Z. Otherwise the result is either Z1 or Z2, whichever is odd.
Here Z1 and Z2 are the next larger and smaller numbers representable
in the target format, respectively.
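
In other words, round-to-odd truncates and then "jams" the lost information
into the least-significant bit. A minimal illustration of the idea (not
QEMU's softfloat code):

/* Round-to-odd ("jamming"): after truncating the extra precision bits,
 * force the result's least-significant bit to 1 if anything was lost. */
static uint64_t round_to_odd(uint64_t truncated_frac, bool inexact)
{
    return inexact ? (truncated_frac | 1) : truncated_frac;
}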

Backports commit 9ee6f678f473007e252934d6acd09c24490d9d42 from qemu
2018-03-02 08:25:00 -05:00
Julian Brown cc217b0c90
arm: Correctly handle watchpoints for BE32 CPUs
In BE32 mode, sub-word size watchpoints can fail to trigger because the
address of the access is adjusted in the opcode helpers before being
compared with the watchpoint registers. This patch reverses the address
adjustment before performing the comparison with the help of a new CPUClass
hook.

This version of the patch augments and tidies up comments a little.

Backports commit 40612000599e52e792d23c998377a0fa429c4036 from qemu
2018-03-02 00:24:33 -05:00
Michael S. Tsirkin 0455644974
ARRAY_SIZE: check that argument is an array
It's a familiar pattern: some code uses ARRAY_SIZE, then refactoring
changes the argument from an array to a pointer to a dynamically
allocated buffer. Code keeps compiling but any ARRAY_SIZE calls now
return the size of the pointer divided by element size.

Let's add build time checks to ARRAY_SIZE before we allow more
of these in the code-base.
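
One way to get such a check, in the spirit of Linux's ARRAY_SIZE, is to add a
zero-valued build-assert term that fires when the argument's type is
compatible with a pointer to its first element (i.e. when it is a pointer,
not an array). A hedged sketch, leaning on the expression-form assert from
the compiler.h entries further down this page; not necessarily QEMU's exact
macro:

/* The added term is 0 for a true array, but breaks the build when x has
 * decayed to (or already is) a pointer type. */
#define ARRAY_SIZE(x) ((sizeof(x) / sizeof((x)[0])) +                        \
                       QEMU_BUILD_BUG_ON_ZERO(__builtin_types_compatible_p(  \
                           typeof(x), typeof(&(x)[0]))))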

Backports commit ed63ec0d22ccdce3b2222d9a514423b7fbba3a0d from qemu
2018-03-02 00:09:51 -05:00
Michael S. Tsirkin ac013df0a2
compiler: expression version of QEMU_BUILD_BUG_ON
QEMU_BUILD_BUG_ON uses a typedef in order to be safe
to use outside functions, but sometimes it's useful
to have a version that can be used within an expression.
Following what Linux does, introduce QEMU_BUILD_BUG_ON_ZERO,
which returns zero after checking the condition at build time.

Backports commit d757573e69f2ef58a4a7b41f6c55d65fa1e1c5c2 from qemu
2018-03-02 00:07:33 -05:00
Michael S. Tsirkin 634a8094f1
compiler: rework BUG_ON using a struct
There are theoretical concerns that some compilers might not trigger
build failures on attempts to define an array of size (x ? -1 : 1) where
x is a variable, and might make it a variable-sized array instead. Let's
rewrite using a struct with a negative bit-field size instead, as there
are no dynamic bit-field sizes. This is similar to what Linux does.
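
Combined with QEMU_BUILD_BUG_ON_ZERO from the entry above, a hedged sketch of
the struct-based approach (illustrative; the real macros in
include/qemu/compiler.h may differ in detail):

/* A negative bit-field width is a hard compile-time error, so this struct
 * type is only well-formed when the condition x is false. */
#define QEMU_BUILD_BUG_ON_STRUCT(x) struct { int : (x) ? -1 : 1; }

/* Expression form: evaluates to 0, but breaks the build if x is true. */
#define QEMU_BUILD_BUG_ON_ZERO(x) \
    (sizeof(QEMU_BUILD_BUG_ON_STRUCT(x)) - sizeof(QEMU_BUILD_BUG_ON_STRUCT(x)))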

Backports commit f291887e8eef5d37d31484638f6e62401b4b99a2 from qemu
2018-03-02 00:05:07 -05:00
Michael S. Tsirkin 7f9fb3395c
QEMU_BUILD_BUG_ON: use __COUNTER__
Some headers use QEMU_BUILD_BUG_ON. This causes a problem
if the C file including that header happens to have
QEMU_BUILD_BUG_ON at the same line number.

Fix using a widely available extension: __COUNTER__.
If unavailable, provide a stub.

Backports commit 60abf0a5e05134187e274ce5f32524ccf0cae1a6 from qemu
2018-03-02 00:03:44 -05:00
Michael S. Tsirkin beca05eb5f
compiler: drop ; after BUILD_BUG_ON
All users include the trailing ; anyway, let's require that -
it seems cleaner.

Backports commit f29831828441318c7916ae28e6e16e4a1c4a6795 from qemu
2018-03-02 00:01:44 -05:00
Sascha Silbe 11c66029b7
error: error_setg_errno(): errno gets preserved
C11 allows errno to be clobbered by pretty much any library function
call, so in general callers need to take care to save errno before
calling other functions.

However, for error reporting functions this is rather awkward and can
make the code on the caller side more complicated than
necessary. error_setg_errno() already takes care of preserving errno
and some functions rely on that, so just promise that we continue to
do so in the future.
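
A typical caller can therefore rely on errno surviving the report, e.g.
(usage sketch; error_setg_errno() is the real API, the surrounding function
is illustrative):

#include <errno.h>
#include <fcntl.h>
#include "qapi/error.h"

static int open_image(const char *path, Error **errp)
{
    int fd = open(path, O_RDONLY);

    if (fd < 0) {
        /* errno set by open() is still intact after error_setg_errno() */
        error_setg_errno(errp, errno, "failed to open %s", path);
        return -errno;
    }
    return fd;
}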

Backports commit 98cb89af4df7e1776ce418ed6167b6e214a64435 from qemu
2018-03-01 23:38:25 -05:00
Thomas Huth b2f1326437
Move target-* CPU file into a target/ folder
We've currently got 18 architectures in QEMU, and thus 18 target-xxx
folders in the root folder of the QEMU source tree. More architectures
(e.g. RISC-V, AVR) are likely to be included soon, too, so the main
folder of the QEMU sources slowly gets quite overcrowded with the
target-xxx folders.
To disburden the main folder a little bit, let's move the target-xxx
folders into a dedicated target/ folder, so that target-xxx/ simply
becomes target/xxx/ instead.

Backports commit fcf5ef2ab52c621a4617ebbef36bf43b4003f4c0 from qemu
2018-03-01 22:50:58 -05:00
Alex Bennée e3e57ca08e
cputlb: drop flush_global flag from tlb_flush
We have never had the concept of global TLB entries which would avoid
the flush, so we never actually use this flag. Drop it and make clear
that tlb_flush is the sledge-hammer it has always been.

Backports commit  d10eb08f5d8389c814b554d01aa2882ac58221bf from qemu
2018-03-01 19:36:04 -05:00
Richard Henderson 5ca8ac1aeb
qemu/host-utils.h: Reduce the operation count in the fallback ctpop
Backports commit 7bdcecb7b2d79c292d1256f7d6cf0f1da50d381f from qemu
2018-03-01 18:35:51 -05:00
Jason Wang 29932d0719
memory: handle alias in memory_region_is_iommu()
Backports commit 12d37882f0c0def5dee1c21be5d8fea9c21baada from qemu
2018-03-01 13:06:18 -05:00
Jason Wang fdca6292a1
exec: introduce address_space_get_iotlb_entry()
This patch introduces a helper to query the iotlb entry for a
possible iova. This will be used by the later device IOTLB API to enable
the capability for a dataplane (e.g. vhost) to query the IOTLB.

Backports commit 052c8fa9983f553fdfa0d61034774070dd639c2b from qemu
2018-03-01 13:05:08 -05:00
Paolo Bonzini 81ad780e5e
exec: introduce MemoryRegionCache
Device models often have to perform multiple accesses to a single
memory region that is known in advance, but would like to use "DMA-style"
functions instead of address_space_map/unmap. This can happen
for example when the data has to undergo endianness conversion.
Introduce a new data structure to cache the result of
address_space_translate without forcing usage of a host address
like address_space_map does.
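
A rough usage sketch of the new structure (the accessor names are
recollections of the API introduced around this commit and should be treated
as assumptions):

static uint32_t read_desc_word(AddressSpace *as, hwaddr addr)
{
    MemoryRegionCache cache;
    uint32_t val;

    /* Translate the region once up front... */
    if (address_space_cache_init(&cache, as, addr, 8, false) < 0) {
        return 0;
    }
    /* ...then issue cheap, DMA-style accesses against the cached result. */
    val = address_space_ldl_cached(&cache, 0, MEMTXATTRS_UNSPECIFIED, NULL);
    address_space_cache_destroy(&cache);
    return val;
}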

Backports commit 1f4e496e1fc2eb6c8bf377a0f9695930c380bfd3 from qemu
2018-03-01 10:50:30 -05:00
Paolo Bonzini 88ad0f4f6e
exec: introduce memory_ldst.inc.c
Templatize the address_space_* and *_phys functions, so that we can add
similar functions in the next patch that work with a lightweight,
cache-like version of address_space_map/unmap.

Backports commit 0ce265ffef87f19f4dd1ff0663e09a63d66ae408 from qemu
2018-03-01 09:59:34 -05:00
Paolo Bonzini 9404dbf74e
cpu-exec: fix icount out-of-bounds access
When icount is active, tb_add_jump is surprisingly called with an
out of bounds basic block index. I have no idea how that can work,
but it does not seem like a good idea. Clear *last_tb for all
TB_EXIT_ICOUNT_EXPIRED cases, even when all you have to do is
refill icount_extra.

Backports commit d8dea6fbcbed177ca5d23ab77b3834a9437f0e88 from qemu
2018-03-01 09:17:26 -05:00
Bobby Bingham d46e52d9d0
cpu_ldst.h: use correct guest address parameter
In the user emulation code path, tlb_vaddr_to_host erroneously passed
vaddr as the guest address to be translated, instead of addr, the parameter
which actually contained the guest address.

This resulted in incorrect addresses being used when emulating block copy
(mvc/mvpg) and block clear (xc) instructions for the s390x target.

Backports commit c2a85316902e67530da9d6548139fcce73c0cac6 from qemu
2018-03-01 08:56:37 -05:00
Paolo Bonzini 9d64a89acf
tcg: comment on which functions have to be called with tb_lock held
softmmu requires more functions to be thread-safe, because translation
blocks can be invalidated from e.g. notdirty callbacks. Probably the
same holds for user-mode emulation; it's just that no one has ever
tried to produce a coherent locking scheme there.

This patch will guide the introduction of more tb_lock and tb_unlock
calls for system emulation.

Note that after this patch some (most) of the mentioned functions are
still called outside tb_lock/tb_unlock. The next one will rectify this.

Backports commit 7d7500d99895f888f97397ef32bb536bb0df3b74 from qemu
2018-02-28 10:26:28 -05:00
Alex Bennée 7aab0bd9a6
translate-all: add DEBUG_LOCKING asserts
This adds asserts to check the locking on the various translation
engines structures. There are two sets of structures that are protected
by locks.

The first is the l1map and PageDesc structures used to track which
translation blocks are associated with which physical addresses. In
user-mode this is covered by the mmap_lock.

The second case is the TB context related structures, which are protected by
tb_lock, which is also user-mode only.

Currently the asserts do nothing in SoftMMU mode but this will change
for MTTCG.

Backports commit 301e40ed8005306c009978be295ed9a4b725178b from qemu
2018-02-28 08:56:15 -05:00
Richard Henderson da01e53757
tcg: Add atomic128 helpers
Force the use of cmpxchg16b on x86_64.

Wikipedia suggests that only very old AMD64 CPUs (circa 2004) did not have
this instruction. Further, it's required by Windows 8, so no new CPUs
will ever omit it.

If we truly care about these, then we could check this at startup time
and then avoid executing paths that use it.

Backports commit 7ebee43ee3e2fcd7b5063058b7ef74bc43216733 from qemu
2018-02-27 21:43:48 -05:00
Richard Henderson 5c0ce1b99c
tcg: Add atomic helpers
Add all of cmpxchg, op_fetch, fetch_op, and xchg.
Handle both endiannesses, and sizes up to 8 bytes.
Handle expanding non-atomically, when emulating in serial.

Backports commit c482cb117cc418115ca9c6d21a7a2315414c0a40 from qemu
2018-02-27 15:57:47 -05:00
Yongbok Kim 79e4c001a9
softmmu: Add probe_write()
Probe for whether the specified guest write access is permitted.
If it is not permitted then an exception will be taken in the same
way as if this were a real write access (and we will not return).
Otherwise the function will return, and there will be a valid
entry in the TLB for this access.
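
The helper's shape is roughly the following (sketch from memory; the
backported header is authoritative):

/* Raise the architectural exception now if the guest write would fault;
 * otherwise return with a valid TLB entry in place for the access. */
void probe_write(CPUArchState *env, target_ulong addr,
                 int mmu_idx, uintptr_t retaddr);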

Backports commit 3b4afc9e75ab1a95f33e41f462921093f8a109c4 from qemu
2018-02-27 12:20:50 -05:00
Richard Henderson e35aacd5ae
tcg: Add EXCP_ATOMIC
When we cannot emulate an atomic operation within a parallel
context, this exception allows us to stop the world and try
again in a serial context.

Backports commit fdbc2b5722f6092e47181a947c90fd4bdcc1c121 from qemu

Also backports parts of commit 02d57ea115b7669f588371c86484a2e8ebc369be
2018-02-27 11:57:58 -05:00
Richard Henderson d5510a546f
int128: Add int128_make128
Allows Int128 to be used more generally, rather than having to
begin with 64-bit inputs and accumulate.
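
The new constructor simply packs two 64-bit halves, roughly as sketched below
(the real version also has a native __int128 fast path per the entry below):

static inline Int128 int128_make128(uint64_t lo, uint64_t hi)
{
    /* struct form used when __int128 is unavailable */
    Int128 r = { .lo = lo, .hi = hi };
    return r;
}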

Backports commit 1edaeee0955fba7d834b7c8f4e372e7eae030745 from qemu
2018-02-27 11:06:33 -05:00
Richard Henderson 9084e5fe1b
int128: Use __int128 if available
Backports commit 0846beb36641e8f0c3ee55a5bb84d468b653c852 from qemu
2018-02-27 11:03:06 -05:00
Richard Henderson 4fdbe94eea
exec: Avoid direct references to Int128 parts
Backports commit 258dfaaad05a5fbe32a142b794e1df3e16501d0e from qemu
2018-02-27 11:01:43 -05:00
Richard Henderson ba1c63572e
atomics: Add __nocheck atomic operations
While the check against sizeof(void *) is appropriate for
normal usage within qemu, there are places in which we want
wider operations and have checked for their existence.

Backports commit 84bca3927b36fb1d9a2ca85cbbdf9023d2b84678 from qemu
2018-02-27 11:00:20 -05:00
Lioncash a59eef391e
atomic: MSVC compatible equivalents to some functions 2018-02-27 10:56:04 -05:00
Emilio G. Cota c837d76a86
atomics: add atomic_op_fetch variants
This paves the way for upcoming work.

Backports commit 83d0c719f837724d9e3963b078211b2242bdd2a5 from qemu
2018-02-27 10:28:27 -05:00
Emilio G. Cota 102a53aa50
atomics: add atomic_xor
This paves the way for upcoming work.

Backports commit 61696ddbdc74263ddb6869856772cfe355a5d3bd from qemu
2018-02-27 10:23:31 -05:00
Richard Henderson 3fe8d46a15
atomics: Add parameters to macros
Making these functional rather than object macros will
prevent later problems with complex macro expansion.

Backports commit d1a9f2d12fcfc942924956fbe321aedf4226ccb7 from qemu
2018-02-27 10:21:35 -05:00
Daniel P. Berrange 83a5bf2d25
qapi: rename QmpOutputVisitor to QObjectOutputVisitor
The QmpOutputVisitor has no direct dependency on QMP. It is
valid to use it anywhere that one wants a QObject. Rename it
to better reflect its functionality as a generic QAPI
to QObject converter.

The commit before previous renamed the files, this one renames C
identifiers.

Backports commit 7d5e199ade76c53ec316ab6779800581bb47c50a from qemu
2018-02-27 08:05:33 -05:00
Daniel P. Berrange 2949a90977
qapi: rename QmpInputVisitor to QObjectInputVisitor
The QmpInputVisitor has no direct dependency on QMP. It is
valid to use it anywhere that one has a QObject. Rename it
to better reflect its functionality as a generic QObject
to QAPI converter.

The previous commit renamed the files, this one renames C identifiers.

Backports commit 09e68369a88d7de0f988972bf28eec1b80cc47f9 from qemu
2018-02-26 15:54:15 -05:00
Daniel P. Berrange 228f122248
qapi: rename *qmp-*-visitor* to *qobject-*-visitor*
The QMP visitors have no direct dependency on QMP. It is
valid to use them anywhere that one has a QObject. Rename them
to better reflect their functionality as a generic QObject
to QAPI converter.

This is the first of three parts: rename the files. The next two
parts will rename C identifiers. The split is necessary to make git
rename detection work.

Backports commit b3db211f3c80bb996a704d665fe275619f728bd4 from qemu
2018-02-26 15:42:37 -05:00
Peter Maydell db8b0a82b1
cpu: Support a target CPU having a variable page size
Support target CPUs having a page size which isn't known
at compile time. To use this, the CPU implementation should:
* define TARGET_PAGE_BITS_VARY
* not define TARGET_PAGE_BITS
* define TARGET_PAGE_BITS_MIN to the smallest value it
  might possibly want for TARGET_PAGE_BITS
* call set_preferred_target_page_bits() in its realize
  function to indicate the actual preferred target page
  size for the CPU (and report any error from it)

In CONFIG_USER_ONLY, the CPU implementation should continue
to define TARGET_PAGE_BITS appropriately for the guest
OS page size.

Machines which want to take advantage of having the page
size something larger than TARGET_PAGE_BITS_MIN must
set the MachineClass minimum_page_bits field to a value
which they guarantee will be no greater than the preferred
page size for any CPU they create.

Note that changing the target page size by setting
minimum_page_bits is a migration compatibility break
for that machine.

For debugging purposes, attempts to use TARGET_PAGE_SIZE
before it has been finally confirmed will assert.
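
Under this scheme, a converted CPU's configuration would look roughly like
the following sketch (the concrete bit values are illustrative assumptions,
not taken from the commit):

/* In the CPU's cpu.h: */
#define TARGET_PAGE_BITS_VARY
#define TARGET_PAGE_BITS_MIN 10          /* smallest page size it might pick */

/* In the CPU's realize function: */
if (!set_preferred_target_page_bits(12)) {   /* e.g. prefer 4K pages */
    error_setg(errp, "conflicting page size already in use");
    return;
}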

Backports commit 20bccb82ff3ea09bcb7c4ee226d3160cab15f7da from qemu
2018-02-26 12:29:08 -05:00
Paolo Bonzini eb75004013
memory: add a per-AddressSpace list of listeners
This speeds up MEMORY_LISTENER_CALL noticeably. Right now,
with many PCI devices you have N regions added to M AddressSpaces
(M = # PCI devices with bus-master enabled) and each call looks
up the whole listener list, with at least M listeners in it.
Because most of the regions in N are BARs, which are also roughly
proportional to M, the whole thing is O(M^3). This changes it
to O(M^2), which is the best we can do without rewriting the
whole thing.

Backports commit 9a54635dcb51a3fcf7507af630168f514a8cd4e7 from qemu
2018-02-26 10:46:50 -05:00
Paolo Bonzini 4b06e8bbb7
memory: eliminate global MemoryListeners
There is none, so just drop the code.

Backports commit d45fa784cd0c111131696808d1168259d66b7519 from qemu
2018-02-26 10:19:28 -05:00
Paolo Bonzini 8b239bd48b
atomic: base mb_read/mb_set on load-acquire and store-release
This introduces load-acquire and store-release operations in QEMU.
For now, just use them as an implementation detail of atomic_mb_read
and atomic_mb_set.

Since docs/atomics.txt documents that atomic_mb_read only synchronizes
with an atomic_mb_set of the same variable, we can use the new implementation
everywhere instead of seq-cst loads and stores.
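
For reference, load-acquire and store-release map directly onto the __atomic
builtins, e.g. (generic illustration, not QEMU's exact macros):

#define atomic_load_acquire(ptr) \
    __atomic_load_n(ptr, __ATOMIC_ACQUIRE)

#define atomic_store_release(ptr, val) \
    __atomic_store_n(ptr, val, __ATOMIC_RELEASE)

/* atomic_mb_read/atomic_mb_set can then be layered on top of these,
 * as described in the commit message above. */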

Backports commit 803cf26a9e019b5d2256a8edeb22e3538c4f3261 from qemu
2018-02-26 10:02:46 -05:00
Paolo Bonzini fd7ef4c184
atomic: introduce smp_mb_acquire and smp_mb_release 2018-02-26 09:58:22 -05:00
Alex Bennée fbf6fb1e25
atomic.h: fix __SANITIZE_THREAD__ build
Only very modern GCC's actually set this define when building with the
ThreadSanitizer so this little typo slipped though.

Backports commit 23ea7f57949f2f5934f4d5bbc29fe321b3a7067b from qemu
2018-02-26 05:12:17 -05:00
Alex Bennée 4046235e92
atomic.h: comment on use of atomic_read/set
Add some notes on the use of the relaxed atomic access helpers and their
importance for defined behaviour in C11's multi-threaded memory model.

Backports commit e653bc6b0ff645c25b8a2eb607c18a5c98b59db6 from qemu
2018-02-26 05:03:59 -05:00
Alex Bennée 33589eb75f
cpus: pass CPUState to run_on_cpu helpers
CPUState is a fairly common pointer to pass to these helpers. This means
if you need other arguments for the async_run_on_cpu case you end up
having to do a g_malloc to stuff additional data into the routine. For
the current users this isn't a massive deal but for MTTCG this gets
cumbersome when the only other parameter is often an address.

This adds the typedef run_on_cpu_func for helper functions which has an
explicit CPUState * passed as the first parameter. All the users of
run_on_cpu and async_run_on_cpu have had their helpers updated to use
CPUState where available.
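
The resulting helper shape is roughly (sketch; the cpu header of that era is
authoritative):

typedef void (*run_on_cpu_func)(CPUState *cpu, void *data);

void run_on_cpu(CPUState *cpu, run_on_cpu_func func, void *data);
void async_run_on_cpu(CPUState *cpu, run_on_cpu_func func, void *data);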

Backports commit e0eeb4a21a3ca4b296220ce4449d8acef9de9049 from qemu
2018-02-26 04:54:55 -05:00
Felipe Franciosi 0ed8880525
compiler: Swap 'public domain' header for license
As discussed on the list [1], having a comment stating that this file
is "public domain" is arguably wrong and not legally binding. This patch
replaces that comment with a clear GPLv2+ license as proposed in [2].

[1] http://lists.nongnu.org/archive/html/qemu-devel/2016-09/msg06151.html
[2] http://lists.nongnu.org/archive/html/qemu-devel/2016-09/msg06217.html

Worth noting, compiler.h was originally created on 5c026320 by splitting
qemu-common.h. At the time, qemu-common.h was already GPLv2+.

Backports commit cc9d8a3b2c41c22fb09f90f3085e6036c199c3ca from qemu
2018-02-26 04:49:45 -05:00
Richard Henderson 66d79ac959
tcg: Merge GETPC and GETRA
The return address argument to the softmmu template helpers was
confused. In the legacy case, we wanted to indicate that there
is no return address, and so passed in NULL. However, we then
immediately subtracted GETPC_ADJ from NULL, resulting in a non-zero
value, indicating the presence of an (invalid) return address.

Push the GETPC_ADJ subtraction down to the only point it's required:
immediately before use within cpu_restore_state_from_tb, after all
NULL pointer checks have been completed.

This makes GETPC and GETRA identical. Remove GETRA as the lesser
used macro, replacing all uses with GETPC.

Backports commit 01ecaf438b1eb46abe23392c8ce5b7628b0c8cf5 from qemu
2018-02-26 02:54:44 -05:00
Andrew Dutcher 26b36e5ff8
fpu: add mechanism to check for invalid long double formats
All operations that take a floatx80 as an operand need to have their
inputs checked for malformed encodings. In all of these cases, use the
function floatx80_invalid_encoding to perform the check. If an invalid
operand is found, raise an invalid operation exception, and then return
either NaN (for fp-typed results) or the integer indefinite value (the
minimum representable signed integer value, for int-typed results).

For the non-quiet comparison operations, this touches adjacent code in
order to pass style checks.
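
The check itself boils down to spotting a non-zero exponent with the explicit
integer bit clear; a hedged sketch of such a predicate (field names per
softfloat's floatx80, details may differ from the backported code):

static inline bool floatx80_invalid_encoding_sketch(floatx80 a)
{
    /* exponent != 0 but the explicit integer bit (bit 63 of the fraction)
     * is clear: pseudo-NaN / pseudo-infinity / unnormal encodings */
    return (a.low & (1ULL << 63)) == 0 && (a.high & 0x7FFF) != 0;
}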

Backports cast correction portion of commit d1eb8f2acba579830cf3798c3c15ce51be852c56 from qemu
2018-02-26 02:27:40 -05:00
Pranith Kumar 9e6fec8741
atomics: Use __atomic_*_n() variant primitives
Use the __atomic_*_n() primitives which take the value as argument. It
is not necessary to store the value locally before calling the
primitive, hence saving us a stack store and load.

Backports commit 89943de17c4e276f2c47f05b4604e8816a6a636c from qemu
2018-02-26 02:16:48 -05:00
Paolo Bonzini 30845ae475
tcg: Prepare TB invalidation for lockless TB lookup
When invalidating a translation block, set an invalid flag into the
TranslationBlock structure first. It is also necessary to check whether
the target TB is still valid after acquiring 'tb_lock' but before calling
tb_add_jump() since TB lookup is to be performed out of 'tb_lock' in
future. Note that we don't have to check 'last_tb'; an already invalidated
TB will not be executed anyway and it is thus safe to patch it.

Backports commit 6d21e4208f382dd8ca1f7995a6dd9ea7ca281163 from qemu
2018-02-26 01:48:13 -05:00
Lioncash 2d87095858
glib_compat: Amend header guard 2018-02-25 23:12:20 -05:00
Alex Williamson fe66c2e088
memory: Don't use memcpy for ram_device regions
With a vfio assigned device we lay down a base MemoryRegion registered
as an IO region, giving us read & write accessors. If the region
supports mmap, we lay down a higher priority sub-region MemoryRegion
on top of the base layer initialized as a RAM device pointer to the
mmap. Finally, if we have any quirks for the device (ie. address
ranges that need additional virtualization support), we put another IO
sub-region on top of the mmap MemoryRegion. When this is flattened,
we now potentially have sub-page mmap MemoryRegions exposed which
cannot be directly mapped through KVM.

This is as expected, but a subtle detail of this is that we end up
with two different access mechanisms through QEMU. If we disable the
mmap MemoryRegion, we make use of the IO MemoryRegion and service
accesses using pread and pwrite to the vfio device file descriptor.
If the mmap MemoryRegion is enabled and results in one of these
sub-page gaps, QEMU handles the access as RAM, using memcpy to the
mmap. Using either pread/pwrite or the mmap directly should be
correct, but using memcpy causes us problems. I expect that not only
does memcpy not necessarily honor the original width and alignment in
performing a copy, but it potentially also uses processor instructions
not intended for MMIO spaces. It turns out that this has been a
problem for Realtek NIC assignment, which has such a quirk that
creates a sub-page mmap MemoryRegion access.

To resolve this, we disable memory_access_is_direct() for ram_device
regions since QEMU assumes that it can use memcpy for those regions.
Instead we access through MemoryRegionOps, which replaces the memcpy
with simple de-references of standard sizes to the host memory.

With this patch we attempt to provide unrestricted access to the RAM
device, allowing byte through qword access as well as unaligned
access. The assumption here is that accesses initiated by the VM are
driven by a device specific driver, which knows the device
capabilities. If unaligned accesses are not supported by the device,
we don't want them to work in a VM by performing multiple aligned
accesses to compose the unaligned access. A down-side of this
philosophy is that the xp command from the monitor attempts to use
the largest available access width, unaware of the underlying
device. Using memcpy had this same restriction, but at least now an
operator can dump individual registers, even if blocks of device
memory may result in access widths beyond the capabilities of a
given device (RTL NICs only support up to dword).

Backports commit 1b16ded6a512809f99c133a97f19026fe612b2de from qemu
2018-02-25 23:06:36 -05:00
Alex Williamson 5db45219c9
memory: Replace skip_dump flag with ram_device
Setting skip_dump on a MemoryRegion allows us to modify one specific
code path, but the restriction we're trying to address encompasses
more than that. If we have a RAM MemoryRegion backed by a physical
device, it not only restricts our ability to dump that region, but
also affects how we should manipulate it. Here we recognize that
MemoryRegions do not change to sometimes allow dumps and other times
not, so we replace setting the skip_dump flag with a new initializer
so that we know exactly the type of region to which we're applying
this behavior.

Backports commit ca83f87a66d19fdaabf23d4f5ebb49396fe232c1 from qemu
2018-02-25 23:00:45 -05:00
Pranith Kumar 1b19fe260a
softfloat: Fix warn about implicit conversion from int to int8_t
Change the flag type to 'uint8_t' to fix the implicit conversion error.

Backports commit dfd607671037ff46d5b16ade10e10efdf0d260be from qemu
2018-02-25 22:54:39 -05:00
Richard Henderson ede1cae3dc
tcg: Lower indirect registers in a separate pass
Rather than rely on recursion during the middle of register allocation,
lower indirect registers to loads and stores off the indirect base into
plain temps.

For an x86_64 host, with sufficient registers, this results in identical
code, modulo the actual register assignments.

For an i686 host, with insufficient registers, this means that temps can
be (temporarily) spilled to the stack in order to satisfy an allocation.
This is as opposed to the possibility of not being able to spill, to allocate
a register for the indirect base, in order to perform a spill.

Backports commit 5a18407f55ade924aa6397c9a043a9ffd59645fe from qemu
2018-02-25 22:32:28 -05:00
Richard Henderson 2aa46dd9a1
tcg: Include liveness info in the dumps
Backports commit bdfb460ef77500f7b186759b585f06ff2120929d from qemu
2018-02-25 22:13:08 -05:00
Richard Henderson 1547048a22
tcg: Reorg TCGOp chaining
Instead of using -1 as end of chain, use 0, and link through the 0
entry as a fully circular double-linked list.

Backports commit dcb8e75870e2de199db853697f8839cb603beefe from qemu
2018-02-25 21:44:50 -05:00
Eric Blake 30cbcafc05
osdep: Document differences in rounding macros
Make it obvious which macros are safe in which situations.

Useful since QEMU_ALIGN_UP and ROUND_UP both purport to do
the same thing, but differ on whether the alignment must be
a power of 2.
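
The distinction in a nutshell (hedged sketch of the two families; osdep.h has
the exact definitions):

/* Works for any alignment m, at the cost of a division. */
#define QEMU_ALIGN_DOWN(n, m) ((n) / (m) * (m))
#define QEMU_ALIGN_UP(n, m)   QEMU_ALIGN_DOWN((n) + (m) - 1, (m))

/* Cheaper mask-based form: only valid when d is a power of 2. */
#define ROUND_UP(n, d)        (((n) + (d) - 1) & ~((d) - 1))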
2018-02-25 21:05:21 -05:00
Igor Mammedov 62c89b9cd4
exec: Reduce CONFIG_USER_ONLY ifdeffenery
Backports commit 1bc7e522d9cf1b58f2de9c8f1737be0bb5129c35 from qemu
2018-02-25 20:57:48 -05:00
Igor Mammedov d30410dc9a
target-i386: Add x86_cpu_unrealizefn()
First remove VCPU from exec loop and only then remove lapic.

Backports commit c884776e9dc947105827bd6c22192863f97267d2 from qemu
2018-02-25 20:54:13 -05:00
Paolo Bonzini a47c68164d
compiler: never omit assertions if using a static analysis tool
Assertions help both Coverity and the clang static analyzer avoid
false positives, but on the other hand both are confused when
the condition is compiled as (void)(x != FOO). Always expand
assertion macros when using Coverity or clang, through a new
QEMU_STATIC_ANALYSIS preprocessor symbol.

This fixes a couple false positives in TCG.

Backports commit 8bff06a0bbf257a2083223534c1607bf87d913e6 from qemu
2018-02-25 19:19:28 -05:00
Markus Armbruster c2ffbc575d
Clean up decorations and whitespace around header guards
Cleaned up with scripts/clean-header-guards.pl.

Backports commit 175de52487ce0b0c78daa4cdf41a5a465a168a25 from qemu
2018-02-25 04:26:02 -05:00
Markus Armbruster 1275b9b459
Clean up ill-advised or unusual header guards
Cleaned up with scripts/clean-header-guards.pl.

Backports commit 2a6a4076e117113ebec97b1821071afccfdfbc96 from qemu
2018-02-25 04:22:46 -05:00
Markus Armbruster 9ae2fc4d9e
Clean up header guards that don't match their file name
Header guard symbols should match their file name to make guard
collisions less likely. Offenders found with
scripts/clean-header-guards.pl -vn.

Cleaned up with scripts/clean-header-guards.pl, followed by some
renaming of new guard symbols picked by the script to better ones.

Backports commit 121d07125bb6d7079c7ebafdd3efe8c3a01cc440 from qemu
2018-02-25 04:18:42 -05:00
Markus Armbruster 60e8836b74
Use #include "..." for our own headers, <...> for others
Tracked down with an ugly, brittle and probably buggy Perl script.

Also move includes converted to <...> up so they get included before
ours where that's obviously okay.

Backports commit a9c94277f07d19d3eb14f199c3e93491aa3eae0e from qemu
2018-02-25 04:10:33 -05:00
Peter Maydell f6f843b4d4
bswap.h: Document cpu_to_* and *_to_cpu conversion functions
Add a documentation comment describing the functions for
converting between the cpu and little or bigendian formats.

Backports commit 7d820b766a2049f33ca7e078aa51018f2335f8c5 from qemu
2018-02-25 04:06:28 -05:00
Peter Maydell 1d7f813942
bswap.h: Remove unused cpu_to_*w() and *_to_cpup()
Now that all uses of cpu_to_*w() and *_to_cpup() have been replaced
with either ld*_p()/st*_p() or by doing direct dereferences and
using the cpu_to_*()/*_to_cpu() byteswap functions, we can remove
the unused implementations.

Backports commit f76bde702916d0230bf359d478bcac8d7f3b30ae from qemu
2018-02-25 04:04:46 -05:00
Sergey Sorokin d1e4ac0451
Fix confusing argument names in some common functions
There are functions tlb_fill(), cpu_unaligned_access() and
do_unaligned_access() that are called with access type and mmu index
arguments. But these arguments are named 'is_write' and 'is_user' in their
declarations. The patches fix the arguments to avoid confusion.

Backports commit b35399bb4e9968296a12303b00f9f2066470e987 from qemu
2018-02-25 03:58:27 -05:00
Sergey Sorokin e4d123caa9
tcg: Improve the alignment check infrastructure
Some architectures (e.g. ARMv8) need addresses aligned to a size
larger than the size of the memory access itself.
The current zero-cost alignment check implementation in QEMU is
enough to support such a check, but we need a way to specify
an alignment size.

Backports commit 1f00b27f17518a1bcb4cedca49eaec96a4d560bd from qemu
2018-02-25 02:23:28 -05:00
Lioncash 532f840dc3
qapi: Add new clone visitor
We have a couple places in the code base that want to deep-clone
one QAPI object into another, and they were resorting to serializing
the struct out to QObject then reparsing it. A much more efficient
version can be done by adding a new clone visitor.

Since cloning is still relatively uncommon, expose the use of the
new visitor via a QAPI_CLONE() macro that takes care of type-punning
the underlying function pointer, rather than generating lots of
unused functions for types that won't be cloned. And yes, we're
relying on the compiler treating all pointers equally, even though
a strict C program cannot portably do so - but we're not the first
one in the qemu code base to expect it to work (hello, glib!).

The choice of adding a fourth visitor type deserves some explanation.
On the surface, the clone visitor is mostly an input visitor (it
takes arbitrary input - in this case, another QAPI object - and
creates a new QAPI object during the course of the visit). But
ever since commit da72ab0 consolidated enum visits based on the
visitor type, using VISITOR_INPUT would cause us to run
visit_type_str(), even though for cloning there is nothing to do
(we just copy the enum value across, without regards to its mapping
to strings). Also, since our input happens to be a QAPI object,
we can also satisfy the internal checks for VISITOR_OUTPUT. So in
the end, I settled with a new VISITOR_CLONE, and chose its value
such that many internal checks can use 'v->type & mask', sticking
to 'v->type == value' where the difference matters.

Note that we can only clone objects (including alternates) and lists,
not built-ins or enums. The visitor core hides integer width from
the actual visitor (since commit 04e070d), and as long as that's the
case, we can't clone top-level integers. Then again, those can
always be cloned by direct copy, since they are not objects with
deep pointers, so it's no real loss. And restricting cloning to
just objects and lists is cleaner than restricting it to non-integers.
As such, I documented that the clone visitor is for direct use only
by code internal to QAPI, and should not be used on incomplete objects
(other than a hack to work around the fact that we allow NULL in place
of "" in visit_type_str() in other output visitors). Note that as
written, the clone visitor will never fail on a complete object.

Scalars (including enums) not at the root of the clone copy just fine
with no additional effort while visiting the scalar, by virtue of a
g_memdup() each time we push another struct onto the stack. Cloning
a string requires deduplication of a pointer, which means it can also
provide the guarantee of an input visitor of never producing NULL
even when still accepting NULL in place of "" the way the QMP output
visitor does.

Cloning an 'any' type could be possible by incrementing the QObject
refcnt, but it's not obvious whether that is better than implementing
a QObject deep clone. So for now, we document it as unsupported,
and intentionally omit the .type_any() callback to let a developer
know their usage needs implementation.

Add testsuite coverage for several different clone situations, to
ensure that the code is working. I also tested that valgrind was
happy with the test.
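
Typical use then collapses to a single call, e.g. (usage sketch; the QAPI
type picked here is only an example):

/* Deep-copy one QAPI object into a brand-new one, no QObject round trip. */
SocketAddress *copy = QAPI_CLONE(SocketAddress, original);
/* ... use copy independently of the original ... */
qapi_free_SocketAddress(copy);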

Backports commit a15fcc3cf69ee3d408f60d6cc316488d2b0249b4 from qemu
2018-02-25 01:34:12 -05:00
Eric Blake 85af4b2030
qapi: Add new visit_complete() function
Making each output visitor provide its own output collection
function was the only remaining reason for exposing visitor
sub-types to the rest of the code base. Add a polymorphic
visit_complete() function which is a no-op for input visitors,
and which populates an opaque pointer for output visitors. For
maximum type-safety, also add a parameter to the output visitor
constructors with a type-correct version of the output pointer,
and assert that the two uses match.

This approach was considered superior to either passing the
output parameter only during construction (action at a distance
during visit_free() feels awkward) or only during visit_complete()
(defeating type safety makes it easier to use incorrectly).

Most callers were function-local, and therefore a mechanical
conversion; the testsuite was a bit trickier, but the previous
cleanup patch minimized the churn here.

The visit_complete() function may be called at most once; doing
so lets us use transfer semantics rather than duplication or
ref-count semantics to get the just-built output back to the
caller, even though it means our behavior is not idempotent.

Generated code is simplified as follows for events:

|@@ -26,7 +26,7 @@ void qapi_event_send_acpi_device_ost(ACP
| QDict *qmp;
| Error *err = NULL;
| QMPEventFuncEmit emit;
|- QmpOutputVisitor *qov;
|+ QObject *obj;
| Visitor *v;
| q_obj_ACPI_DEVICE_OST_arg param = {
| info
|@@ -39,8 +39,7 @@ void qapi_event_send_acpi_device_ost(ACP
|
| qmp = qmp_event_build_dict("ACPI_DEVICE_OST");
|
|- qov = qmp_output_visitor_new();
|- v = qmp_output_get_visitor(qov);
|+ v = qmp_output_visitor_new(&obj);
|
| visit_start_struct(v, "ACPI_DEVICE_OST", NULL, 0, &err);
| if (err) {
|@@ -55,7 +54,8 @@ void qapi_event_send_acpi_device_ost(ACP
| goto out;
| }
|
|- qdict_put_obj(qmp, "data", qmp_output_get_qobject(qov));
|+ visit_complete(v, &obj);
|+ qdict_put_obj(qmp, "data", obj);
| emit(QAPI_EVENT_ACPI_DEVICE_OST, qmp, &err);

and for commands:

| {
| Error *err = NULL;
|- QmpOutputVisitor *qov = qmp_output_visitor_new();
| Visitor *v;
|
|- v = qmp_output_get_visitor(qov);
|+ v = qmp_output_visitor_new(ret_out);
| visit_type_AddfdInfo(v, "unused", &ret_in, &err);
|- if (err) {
|- goto out;
|+ if (!err) {
|+ visit_complete(v, ret_out);
| }
|- *ret_out = qmp_output_get_qobject(qov);
|-
|-out:
| error_propagate(errp, err);

Backports commit 3b098d56979d2f7fd707c5be85555d114353a28d from qemu
2018-02-25 01:20:03 -05:00
Eric Blake ec53301cda
qmp-output-visitor: Favor new visit_free() function
Now that we have a polymorphic visit_free(), we no longer need
qmp_output_visitor_cleanup(); however, we still need to
expose the subtype for qmp_output_get_qobject().

Backports commit 1830f22a6777cedaccd67a08f675d30f7a85ebfd from qemu
2018-02-25 01:12:27 -05:00
Eric Blake f008d93ac0
qmp-input-visitor: Favor new visit_free() function
Now that we have a polymorphic visit_free(), we no longer need
qmp_input_visitor_cleanup(); which in turn means we no longer
need to return a subtype from qmp_input_visitor_new() nor a
public upcast function.

Generated code changes to qmp-marshal.c look like:

|@@ -52,11 +52,10 @@ void qmp_marshal_add_fd(QDict *args, QOb
| {
| Error *err = NULL;
| AddfdInfo *retval;
|- QmpInputVisitor *qiv = qmp_input_visitor_new(QOBJECT(args), true);
| Visitor *v;
| q_obj_add_fd_arg arg = {0};
|
|- v = qmp_input_get_visitor(qiv);
|+ v = qmp_input_visitor_new(QOBJECT(args), true);
| visit_start_struct(v, NULL, NULL, 0, &err);
| if (err) {
| goto out;

Backports commit b70ce1018a251c0c33498d9c927a07cade655a5e from qemu
2018-02-25 01:10:53 -05:00
Eric Blake e88a7e260b
string-input-visitor: Favor new visit_free() function
Now that we have a polymorphic visit_free(), we no longer need
string_input_visitor_cleanup(); which in turn means we no longer
need to return a subtype from string_input_visitor_new() nor a
public upcast function.

Backports commit 7a0525c7be6b38d32d586e3fd12e7377ded21faa from qemu
2018-02-25 01:08:04 -05:00
Eric Blake 7f741a6c9b
qapi: Add new visit_free() function
Making each visitor provide its own (awkwardly-named) FOO_cleanup()
is unusual, when we can instead have a polymorphic visit_free()
interface. Over the next few patches, we can use the polymorphic
functions to eliminate the need for a FOO_get_visitor() function
for accessing specific visitor functionality, once everything can
be accessed directly through the Visitor* interfaces.

The dealloc visitor is the first one converted to completely use
the new entry point, since qapi_dealloc_visitor_cleanup() was the
only reason that qapi_dealloc_get_visitor() existed, and only
generated and testsuite code was even using it. With the new
visit_free() entry point in place, we no longer need to expose
the QapiDeallocVisitor subtype through qapi_dealloc_visitor_new(),
and can get by with less generated code, with diffs that look like:

| void qapi_free_ACPIOSTInfo(ACPIOSTInfo *obj)
| {
|- QapiDeallocVisitor *qdv;
| Visitor *v;
|
| if (!obj) {
| return;
| }
|
|- qdv = qapi_dealloc_visitor_new();
|- v = qapi_dealloc_get_visitor(qdv);
|+ v = qapi_dealloc_visitor_new();
| visit_type_ACPIOSTInfo(v, NULL, &obj, NULL);
|- qapi_dealloc_visitor_cleanup(qdv);
|+ visit_free(v);
|}

Backports commit 2c0ef9f411ae6081efa9eca5b3eab2dbeee45a6c from qemu
2018-02-25 01:05:41 -05:00
Eric Blake 37ae4dfdfd
qapi: Add parameter to visit_end_*
Rather than making the dealloc visitor track a stack of pointers
remembered during visit_start_* in order to free them during
visit_end_*, it's a lot easier to just make all callers pass the
same pointer to visit_end_*. The generated code has access to the
same pointer, while all other users are doing virtual walks and
can pass NULL. The dealloc visitor is then greatly simplified.

All three visit_end_*() functions intentionally take a void**,
even though the visit_start_*() functions differ between void**,
GenericList**, and GenericAlternate**. This is done for several
reasons: when doing a virtual walk, passing NULL doesn't care
what the type is, but when doing a generated walk, we already
have to cast the caller's specific FOO* to call visit_start,
while using void** lets us use visit_end without a cast. Also,
an upcoming patch will add a clone visitor that wants to use
the same implementation for all three visit_end callbacks,
which is made easier if all three share the same signature.

For visitors that already track per-object state (the QMP visitors
via a stack, and the string visitors which do not allow nesting),
add an assertion that the caller is indeed passing the same
pointer to paired calls.

Backports commit 1158bb2a058fcdd0c8fc3e60dc77f7a57ddbb271 from qemu
2018-02-25 00:57:54 -05:00
Changlong Xie 2ca07642f1
qom: Fix comment typo
It's qom_unref, not qdef_unref.

Backports commit ada03a0e8423ef8950e30d216f56a9661a4070e2 from qemu
2018-02-25 00:46:15 -05:00
Markus Armbruster eeef227560
range: Replace internal representation of Range
Range represents a range as follows. Member @start is the inclusive
lower bound, member @end is the exclusive upper bound. Zero @end is
special: if @start is also zero, the range is empty, else @end is to
be interpreted as 2^64. No other empty ranges may occur.

The range [0,2^64-1] cannot be represented. If you try to create it
with range_set_bounds1(), you get the empty range instead. If you try
to create it with range_set_bounds() or range_extend(), assertions
fail. Before range_set_bounds() existed, the open-coded creation
usually got you the empty range instead. Open deathtrap.

Moreover, the code dealing with the janus-faced @end is too clever by
half.

Dumb this down to a more pedestrian representation: members @lob and
@upb are inclusive lower and upper bounds. The empty range is encoded
as @lob = 1, @upb = 0.
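
A hedged sketch of the new representation (field names taken from the text;
QEMU's header wraps this in accessor functions):

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct Range {
        uint64_t lob;   /* inclusive lower bound */
        uint64_t upb;   /* inclusive upper bound */
    } Range;

    /* Empty is encoded as lob = 1, upb = 0; [0, UINT64_MAX] is representable. */
    static inline bool range_is_empty(const Range *r)
    {
        return r->lob > r->upb;
    }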

Backports commit 6dd726a2bf1b800289d90a84d5fcb5ce7b78a8e1 from qemu
2018-02-25 00:44:36 -05:00
Markus Armbruster 8b2a0c4ece
range: Eliminate direct Range member access
Users of struct Range mess liberally with its members, which makes
refactoring hard. Create a set of methods, and convert all users to
call them instead of accessing members. The methods have carefully
worded contracts, and use assertions to check them.

Backports commit a0efbf16604770b9d805bcf210ec29942321134f from qemu
2018-02-25 00:39:43 -05:00
Alistair Francis fbb0645fb3
bitops: Add MAKE_64BIT_MASK macro
Add a macro that creates a 64-bit value containing 'length' ones
shifted left by 'shift' bits.
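
A hedged sketch of such a macro (assuming 0 < length and shift + length <= 64;
QEMU's exact definition may differ):

    #define MAKE_64BIT_MASK(shift, length) \
        (((~0ULL) >> (64 - (length))) << (shift))

    /* e.g. MAKE_64BIT_MASK(4, 8) == 0x0000000000000ff0ULL */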

Backports commit ae2923b5c20a21c6457680330506a9c13873485c from qemu
2018-02-25 00:30:39 -05:00
Peter Maydell efc6cc2b83
memory: Assert that memory_region_init_rom_device() ops aren't NULL
It doesn't make sense to pass a NULL ops argument to
memory_region_init_rom_device(), because the effect will
be that if the guest tries to write to the memory region
then QEMU will segfault. Catch the bug earlier by sanity
checking the arguments to this function, and remove the
misleading documentation that suggests that passing NULL
might be sensible.

Backports commit 39e0b03dec518254fabd2acff29548d3f1d2b754 from qemu
2018-02-25 00:29:52 -05:00
Peter Maydell 334e951ec1
memory: Provide memory_region_init_rom()
Provide a new helper function memory_region_init_rom() for memory
regions which are read-only (and unlike those created by
memory_region_init_rom_device() don't have special behaviour
for writes). This has the same behaviour as calling
memory_region_init_ram() and then memory_region_set_readonly()
(which is what we do today in boards with pure ROMs) but is a
more easily discoverable API for the purpose.

Backports commit a1777f7f6462c66e1ee6e98f0d5c431bfe988aa5 from qemu
2018-02-25 00:28:17 -05:00
Alexey Kardashevskiy 7187d77cfa
memory: Add MemoryRegionIOMMUOps.notify_started/stopped callbacks
The IOMMU driver may change behavior depending on whether a notifier
client is present. In the case of POWER, this represents a change in
the visibility of the IOTLB, for other drivers such as intel-iommu and
future AMD-Vi emulation, notifier support is not yet enabled and this
provides the opportunity to flag that incompatibility.

Backports commit d22d8956b185c002b50a4d0883aff61f857347ef from qemu
2018-02-25 00:23:00 -05:00
Eric Blake c14d8226ab
qapi: Fix memleak in string visitors on int lists
Commit 7f8f9ef1 introduced the ability to store a list of
integers as a sorted list of ranges, but when merging ranges,
it leaks one or more ranges. It was also using range_get_last()
incorrectly within range_compare() (a range is a start/end pair,
but range_get_last() is for start/len pairs), and will also
mishandle a range ending in UINT64_MAX (remember, we document
that no range covers 2**64 bytes, but that ranges that end on
UINT64_MAX have end < begin).

The whole merge algorithm was rather complex, and included
unnecessary passes over data within glib functions, and enough
indirection to make it hard to easily plug the data leaks.
Since we are already hard-coding things to a list of ranges,
just rewrite the thing to open-code the traversal and
comparisons, by making the range_compare() helper function give
us an answer that is easier to use, at which point we avoid the
need to pass any callbacks to g_list_*(). Then by reusing
range_extend() instead of duplicating effort with range_merge(),
we cover the corner cases correctly.

Drop the now-unused range_merge() and ranges_can_merge().

Doing this lets test-string-{input,output}-visitor pass under
valgrind without leaks.

Backports commit db486cc334aafd3dbdaf107388e37fc3d6d3e171 from qemu
2018-02-25 00:20:34 -05:00
Eric Blake ef357d06bc
qapi: Simplify use of range.h
Calling our function g_list_insert_sorted_merged is a misnomer,
since we are NOT writing a glib function. Furthermore, we are
making every caller pass the same comparator function of
range_merge(): any caller that would try otherwise would break
in weird ways since our internal call to ranges_can_merge() is
hard-coded to operate only on ranges, rather than paying
attention to the caller's comparator.

Better is to fix things so that callers don't have to care about
our internal comparator, by picking a function name and updating
the parameter type away from a gratuitous use of void*, to make
it obvious that we are operating specifically on a list of ranges
and not a generic list. Plus, refactoring the code here will
make it easier to plug a memory leak in the next patch.

range_compare() is now internal only, and moves to the .c file.

Backports commit 7c47959d0cb05db43014141a156ada0b6d53a750 from qemu
2018-02-25 00:02:42 -05:00
Eric Blake 5e22c7e180
range: Create range.c for code that should not be inline
g_list_insert_sorted_merged() is rather large to be an inline
function; move it to its own file. range_merge() and
ranges_can_merge() can likewise move, as they are only used
internally. Also, it becomes obvious that the condition within
range_merge() is already satisfied by its caller, and that the
return value is not used.

The diffstat is misleading, because of the copyright boilerplate.

Backports commit fec0fc0a13ac7f1a1130433a6740cd850c3db34a from qemu
2018-02-24 23:59:13 -05:00
Aleksandar Markovic 6eb4fa54f6
softfloat: Implement run-time-configurable meaning of signaling NaN bit
This patch modifies the SoftFloat library so that the meaning of the
signaling NaN bit can be configured at run time, while, at the same
time, strictly preserving its behavior on all existing platforms.

Background:

In floating-point calculations, there is a need for denoting undefined or
unrepresentable values. This is achieved by defining certain floating-point
numerical values to be NaNs (which stands for "not a number"). For additional
reasons, virtually all modern floating-point unit implementations use two
kinds of NaNs: quiet and signaling. The binary representations of these two
kinds of NaNs, as a rule, differ only in one bit (that bit is, traditionally,
the first bit of mantissa).

Up to 2008, standards for floating-point did not specify all details about
binary representation of NaNs. More specifically, the meaning of the bit
that is used for distinguishing between signaling and quiet NaNs was not
strictly prescribed. (IEEE 754-2008 was the first floating-point standard
that defined that meaning clearly, see [1], p. 35) As a result, different
platforms took different approaches, and that presented considerable
challenge for multi-platform emulators like QEMU.

Mips platform represents the most complex case among QEMU-supported
platforms regarding signaling NaN bit. Up to the Release 6 of Mips
architecture, "1" in signaling NaN bit denoted signaling NaN, which is
opposite to IEEE 754-2008 standard. From Release 6 on, Mips architecture
adopted IEEE standard prescription, and "0" denotes signaling NaN. On top of
that, Mips architecture for SIMD (also known as MSA, or vector instructions)
also specifies signaling bit in accordance to IEEE standard. MSA unit can be
implemented with both pre-Release 6 and Release 6 main processor units.

QEMU uses SoftFloat library to implement various floating-point-related
instructions on all platforms. The current QEMU implementation allows the
meaning of the signaling NaN bit to be defined at build time, via a
preprocessor macro called SNAN_BIT_IS_ONE.

On the other hand, the change in this patch enables the SoftFloat library
to be configured at run time. This configuration is meant to occur during CPU
initialization, at the moment when it is definitely known what desired
behavior for particular CPU (or any additional FPUs) is.

The change is implemented so that it is consistent with existing
implementation of similar cases. This means that structure float_status is
used for passing the information about desired signaling NaN bit on each
invocation of SoftFloat functions. The additional field in float_status is
called snan_bit_is_one, which supersedes macro SNAN_BIT_IS_ONE.

IMPORTANT:

This change is not meant to create any change in emulator behavior or
functionality on any platform. It just provides the means for SoftFloat
library to be used in a more flexible way - in other words, it will just
prepare SoftFloat library for usage related to Mips platform and its
specifics regarding signaling bit meaning, which is done in some of
subsequent patches from this series.

Further break down of changes:

1) Added field snan_bit_is_one to the structure float_status, and a
corresponding setter function set_snan_bit_is_one() (see the sketch
after this list).

2) Constants <float16|float32|float64|floatx80|float128>_default_nan
(used both internally and externally) converted to functions
<float16|float32|float64|floatx80|float128>_default_nan(float_status*).
This is necessary since they are dependent on signaling bit meaning.
At the same time, for the sake of code cleanup and simplicity, constants
<floatx80|float128>_default_nan_<low|high> (used only internally within
SoftFloat library) are removed, as not needed.

3) Added a float_status* argument to SoftFloat library functions
XXX_is_quiet_nan(XXX a_), XXX_is_signaling_nan(XXX a_),
XXX_maybe_silence_nan(XXX a_). This argument must be present in
order to enable correct invocation of new version of functions
XXX_default_nan(). (XXX is <float16|float32|float64|floatx80|float128>
here)

4) Updated code for all platforms to reflect changes in SoftFloat library.
This change is twofold: it updates the invocations of SoftFloat library
functions, and it adds an invocation of set_snan_bit_is_one() during CPU
initialization, with an argument appropriate for each particular
platform. It was established that all platforms zero their main CPU data
structures, so an explicit set_snan_bit_is_one(0) is not added, as it is
not needed.
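
A hedged sketch of item 1) above, with the other float_status members
elided (this is not the full QEMU structure):

    typedef struct float_status {
        /* ... rounding mode, exception flags, etc. ... */
        unsigned char snan_bit_is_one;  /* 1: a "1" bit marks a signaling NaN
                                              (pre-Release 6 MIPS style);
                                           0: a "0" bit marks a signaling NaN
                                              (IEEE 754-2008 style)           */
    } float_status;

    static inline void set_snan_bit_is_one(unsigned char val, float_status *status)
    {
        status->snan_bit_is_one = val;
    }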

[1] "IEEE Standard for Floating-Point Arithmetic",
IEEE Computer Society, August 29, 2008.

Backports commit af39bc8c49224771ec0d38f1b693ea78e221d7bc from qemu
2018-02-24 20:27:12 -05:00
Alexey Kardashevskiy 096ca207af
memory: Add reporting of supported page sizes
Every IOMMU has some granularity which MemoryRegionIOMMUOps::translate
uses when translating, however this information is not available outside
the translate context for various checks.

This adds a get_min_page_size callback to MemoryRegionIOMMUOps and
a wrapper for it so IOMMU users (such as VFIO) can know the minimum
actual page size supported by an IOMMU.

As IOMMU MR represents a guest IOMMU, this uses TARGET_PAGE_SIZE
as fallback.

This removes vfio_container_granularity() and uses new helper in
memory_region_iommu_replay() when replaying IOMMU mappings on added
IOMMU memory region.

Backports the relevant parts of commit f682e9c244af7166225f4a50cc18ff296bb9d43e from qemu
2018-02-24 19:23:28 -05:00
Peter Maydell f893dacef0
bitops.h: Implement half-shuffle and half-unshuffle ops
A half-shuffle operation takes a word with zeros in the high half:
0000 0000 0000 0000 ABCD EFGH IJKL MNOP
and spreads the bits out so they are in every other bit of the word:
0A0B 0C0D 0E0F 0G0H 0I0J 0K0L 0M0N 0O0P
A half-unshuffle performs the reverse operation.

Provide functions in bitops.h which implement these operations
for 32-bit and 64-bit inputs, and add tests for them.
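
A standalone sketch of the 32-bit pair, using the classic "Hacker's Delight"
section 7-2 bit tricks (QEMU's bitops.h versions, including the 64-bit
variants, may be written differently):

    #include <stdint.h>

    static inline uint32_t half_shuffle32(uint32_t x)
    {
        /* Assumes the high half of x is zero, as described above. */
        x = ((x & 0xFF00) << 8) | (x & 0x00FF);
        x = ((x << 4) | x) & 0x0F0F0F0F;
        x = ((x << 2) | x) & 0x33333333;
        x = ((x << 1) | x) & 0x55555555;
        return x;
    }

    static inline uint32_t half_unshuffle32(uint32_t x)
    {
        /* Gather every other bit back into the low half. */
        x &= 0x55555555;
        x = ((x >> 1) | x) & 0x33333333;
        x = ((x >> 2) | x) & 0x0F0F0F0F;
        x = ((x >> 4) | x) & 0x00FF00FF;
        x = ((x >> 8) | x) & 0x0000FFFF;
        return x;
    }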

Backports commit b355438de52d0782983bf4bdc47936189a0c988b from qemu
2018-02-24 19:02:36 -05:00
Bharata B Rao 851dec945d
qom: API to get instance_size of a type
Add an API object_type_get_size(const char *typename) that returns the
instance_size of the given typename.

Backports commit 3f97b53a682d2595747c926c00d78b9d406f1be0 from qemu
2018-02-24 19:00:16 -05:00
Emilio G. Cota ae3e22a689
tb hash: hash phys_pc, pc, and flags with xxhash
For some workloads such as arm bootup, tb_phys_hash is performance-critical.
This is due to the high frequency of accesses to the hash table, caused
by (frequent) TLB flushes that wipe out the cpu-private tb_jmp_cache's.
More info:
https://lists.nongnu.org/archive/html/qemu-devel/2016-03/msg05098.html

To dig further into this I modified an arm image booting debian jessie to
immediately shut down after boot. Analysis revealed that quite a bit of time
is unnecessarily spent in tb_phys_hash: the cause is poor hashing that
results in very uneven loading of chains in the hash table's buckets;
the longest observed chain had ~550 elements.

The appended patch addresses this with two changes:

1) Use xxhash as the hash table's hash function. xxhash is a fast,
high-quality hashing function.

2) Feed the hashing function with not just tb_phys, but also pc and flags.

This improves performance over using just tb_phys for hashing, since that
resulted in some hash buckets having many TBs while others got very few;
with these changes, the longest observed chain on a single hash bucket is
brought down from ~550 to ~40.
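
As a toy illustration of change 2) only (this is a splitmix64-style
stand-in with hypothetical names, not QEMU's xxhash-derived hash function):

    #include <stdint.h>

    static inline uint64_t mix64(uint64_t z)
    {
        z = (z ^ (z >> 30)) * 0xbf58476d1ce4e5b9ULL;
        z = (z ^ (z >> 27)) * 0x94d049bb133111ebULL;
        return z ^ (z >> 31);
    }

    /* Hash phys_pc, pc and flags together, then reduce to a bucket index. */
    static inline unsigned tb_hash_toy(uint64_t phys_pc, uint64_t pc,
                                       uint32_t flags, unsigned nb_buckets)
    {
        uint64_t h = mix64(phys_pc ^ mix64(pc ^ mix64(flags)));
        return (unsigned)(h % nb_buckets);
    }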

Tests show that the other element checked for in tb_find_physical,
cs_base, is always a match when tb_phys+pc+flags are a match,
so hashing cs_base is wasteful. It could be that this is an ARM-only
thing, though. UPDATE:
On Tue, Apr 05, 2016 at 08:41:43 -0700, Richard Henderson wrote:
> The cs_base field is only used by i386 (in 16-bit modes), and sparc (for a TB
> consisting of only a delay slot).
> It may well still turn out to be reasonable to ignore cs_base for hashing.

BTW, after this change the hash table should not be called "tb_hash_phys"
anymore; this is addressed later in this series.

This change gives consistent bootup time improvements. I tested two
host machines:
- Intel Xeon E5-2690: 11.6% less time
- Intel i7-4790K: 19.2% less time

Increasing the number of hash buckets yields further improvements. However,
using a larger, fixed number of buckets can degrade performance for other
workloads that do not translate as many blocks (600K+ for debian-jessie arm
bootup). This is dealt with later in this series.

Backports commit 42bd32287f3a18d823f2258b813824a39ed7c6d9 from qemu
2018-02-24 18:00:14 -05:00
Emilio G. Cota 9ef9de9cf8
exec: add tb_hash_func5, derived from xxhash
This will be used by upcoming changes for hashing the tb hash.

Add this into a separate file to include the copyright notice from
xxhash.

Backports commit dc8b295d05ec35a8c032f9abca421772347ba5d4 from qemu
2018-02-24 17:36:35 -05:00
Emilio G. Cota 8518f55df7
compiler.h: add QEMU_ALIGNED() to enforce struct alignment
Backports commit 911a4d2215b05267b16925503218f49d607c6b29 from qemu
2018-02-24 17:32:43 -05:00
Peter Maydell d7dccff836
cpu-exec: Rename cpu_resume_from_signal() to cpu_loop_exit_noexc()
The function cpu_resume_from_signal() is now always called with a
NULL puc argument, and is rather misnamed since it is never called
from a signal handler. It is essentially forcing an exit to the
top level cpu loop but without raising any exception, so rename
it to cpu_loop_exit_noexc() and drop the useless unused argument.

Backports commit 6886b98036a8f8f5bce8b10756ce080084cef11b from qemu
2018-02-24 17:25:28 -05:00
Peter Maydell 8d0faac1dc
qemu-common.h: Drop WORDS_ALIGNED define
The WORDS_ALIGNED #define is not used anywhere, and hasn't been since
2013 when commit 612d590ebc6cef rewrote the various ld<type>_<endian>_p
functions to not use it. Remove the #define and the comment describing it.
Also remove the line in the comment about TARGET_WORDS_ALIGNED, since
it has never actually existed.

Backports commit 0d5c21f2b3bf1e0b562a2c74e353d2e03f2f50ef from qemu
2018-02-24 17:01:55 -05:00
Paolo Bonzini 8df5ad80b1
exec: hide mr->ram_addr from qemu_get_ram_ptr users
Let users of qemu_get_ram_ptr and qemu_ram_ptr_length pass in an
address that is relative to the MemoryRegion. This basically means
what address_space_translate returns.

Because the semantics of the second parameter change, rename the
function to qemu_map_ram_ptr.

Backports commit 0878d0e11ba8013dd759c6921cbf05ba6a41bd71 from qemu
2018-02-24 16:17:49 -05:00
Paolo Bonzini b2e1b34bcc
memory: split memory_region_from_host from qemu_ram_addr_from_host
Move the old qemu_ram_addr_from_host to memory_region_from_host and
make it return an offset within the region. For qemu_ram_addr_from_host
return the ram_addr_t directly, similar to what it was before
commit 1b5ec23 ("memory: return MemoryRegion from qemu_ram_addr_from_host",
2013-07-04).

Backports commit 07bdaa4196b51bc7ffa7c3f74e9e4a9dc8a7966a from qemu
2018-02-24 16:06:49 -05:00
Paolo Bonzini 918c626847
exec: remove ram_addr argument from qemu_ram_block_from_host
Of the two callers, one does not use it, and the other can compute
it itself based on the other output argument (offset) and the RAMBlock.

Backports commit f615f39616c4fd1a3a3b078af8d75bb4be6390de from qemu
2018-02-24 03:37:40 -05:00
Paolo Bonzini f26f1f123c
memory: remove qemu_get_ram_fd, qemu_set_ram_fd, qemu_ram_block_host_ptr
Remove direct uses of ram_addr_t and optimize memory_region_{get,set}_fd
now that a MemoryRegion knows its RAMBlock directly.

Backports commit 4ff87573df3606856a92c14eef3393a63d736d11 from qemu
2018-02-24 03:34:44 -05:00
Emilio G. Cota ab569f5cde
atomics: do not emit consume barrier for atomic_rcu_read
Currently we emit a consume-load in atomic_rcu_read. Because of
limitations in current compilers, this is overkill for non-Alpha hosts
and it is only useful to make Thread Sanitizer work.

This patch leaves the consume-load in atomic_rcu_read when
compiling with Thread Sanitizer enabled, and resorts to a
relaxed load + smp_read_barrier_depends otherwise.
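
A simplified sketch of that pattern (GCC-style builtins; QEMU's actual
atomic_rcu_read() macro and its Thread Sanitizer detection differ in detail):

    #if defined(__SANITIZE_THREAD__)
    #define rcu_read_sketch(ptr) __atomic_load_n((ptr), __ATOMIC_CONSUME)
    #else
    /* On non-Alpha hosts smp_read_barrier_depends() is just a compiler barrier. */
    #define rcu_read_sketch(ptr) ({                                          \
        __typeof__(*(ptr)) val_ = __atomic_load_n((ptr), __ATOMIC_RELAXED);  \
        __asm__ __volatile__("" ::: "memory");                               \
        val_;                                                                \
    })
    #endif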

On an RMO host architecture, such as aarch64, the performance
improvement of this change is easily measurable. For instance,
qht-bench performs an atomic_rcu_read on every lookup. Performance
before and after applying this patch:

$ tests/qht-bench -d 5 -n 1
Before: 9.78 MT/s
After: 10.96 MT/s

Backports commit 15487aa132109891482f79d78a30d6cfd465a391 from qemu
2018-02-24 03:28:11 -05:00
Emilio G. Cota 87ef2a2c5f
atomics: emit an smp_read_barrier_depends() barrier only for Alpha and Thread Sanitizer
For correctness, smp_read_barrier_depends() is only required to
emit a barrier on Alpha hosts. However, we are currently emitting
a consume fence unconditionally, and most compilers currently treat
consume and acquire fences as equivalent.

Fix it by keeping the consume fence if we're compiling with Thread
Sanitizer, since this might help prevent false warnings. Otherwise,
only emit the barrier for Alpha hosts. Note that we still guarantee
that smp_read_barrier_depends() is a compiler barrier.

Backports commit c983895258a771f8a5e4a53950bfb7fd2216651c from qemu
2018-02-24 03:26:52 -05:00
Eduardo Habkost aa3d46ef83
osdep: Move default qemu_hw_version() value to a macro
The macro will be used by code that will stop calling
qemu_hw_version() at runtime and just need a constant value.

Backports commit d494352c2f7818aeba184a8ef757569083740bb2 from qemu
2018-02-24 03:16:34 -05:00
Fam Zheng fb8135cd0d
memory: Remove code for mr->may_overlap
The collision check does nothing and hasn't been used. Remove the
variable together with related code.

Backports commit b61359781958759317ee6fd1a45b59be0b7dbbe1 from qemu
2018-02-24 02:55:25 -05:00
Gonglei feff56cc11
memory: drop find_ram_block()
On the one hand, we have already qemu_get_ram_block() whose function
is similar. On the other hand, we can directly use mr->ram_block but
searching RAMblock by ram_addr which is a kind of waste.

Backports commit fa53a0e53efdc7002497ea4a76aacf6cceb170ef from qemu
2018-02-24 02:52:20 -05:00
Paolo Bonzini 9bb67a3f58
hw: clean up hw/hw.h includes
Include qom/object.h and exec/memory.h instead of exec/ioport.h;
exec/ioport.h was almost everywhere required only for those two
includes, not for the content of the header itself.

Remove block/aio.h, everybody is already including it through
another path.

With this change, include/hw/hw.h is freed from qemu-common.h.

Backports commit df43d49cb8708b9c88a20afe0d1a3089b550a5b8 from qemu
2018-02-24 02:46:41 -05:00
Paolo Bonzini d0d3712417
hw: remove pio_addr_t
pio_addr_t is almost unused, because these days I/O ports are simply
accessed through the address space. cpu_{in,out}[bwl] themselves are
almost unused; monitor.c and xen-hvm.c could use address_space_read/write
directly, since they have an integer size at hand. This leaves qtest as
the only user of those functions.

On the other hand even portio_* functions use this type; the only
interesting use of pio_addr_t thus is include/hw/sysbus.h. I guess I
could move it there, but I don't see much benefit in that either. Using
uint32_t is enough and avoids the need to include ioport.h everywhere.

Backports commit 89a80e7400f7225d9401b35ef32454b4ab29dc67 from qemu
2018-02-24 02:43:16 -05:00
Paolo Bonzini 9485b7c2e1
cpu: move exec-all.h inclusion out of cpu.h
exec-all.h contains TCG-specific definitions. It is not needed outside
TCG-specific files such as translate.c, exec.c or *helper.c.

One generic function had snuck into include/exec/exec-all.h; move it to
include/qom/cpu.h.

Backports commit 63c915526d6a54a95919ebece83fa9ca631b2508 from qemu
2018-02-24 02:39:08 -05:00
Paolo Bonzini 58693409ea
exec: extract exec/tb-context.h
TCG backends do not need most of exec-all.h; extract what they actually
need to a separate file or move it directly to tcg.h. The next patch
will stop including exec-all.h from everywhere.

Backports commit 00f6da6a1a5d1ce085334eccbb50ec899ceed513 from qemu
2018-02-24 02:09:58 -05:00
Paolo Bonzini f9b9d0ba0f
hw: explicitly include qemu/log.h
Move the inclusion out of hw/hw.h, most files do not need it.

Backports commit 03dd024ff57733a55cd2e455f361d053c81b1b29 from qemu
2018-02-24 02:00:45 -05:00
Paolo Bonzini 37f26922dd
qemu-common: push cpu.h inclusion out of qemu-common.h
Backports commit 33c11879fd422b759483ed25fef133ea900ea8d7 from qemu
2018-02-24 01:50:56 -05:00
Paolo Bonzini e84da64a2b
qemu-common: stop including qemu/bswap.h from qemu-common.h
Move it to the actual users. There are still a few includes of
qemu/bswap.h in headers; removing them is left for future work.

Backports commit 58369e22cf971448411bfbc8c894b2addebe2111 from qemu
2018-02-24 01:06:03 -05:00
Paolo Bonzini 78fd1aab94
cpu: move endian-dependent load/store functions to cpu-all.h
Disentangle cpu-common.h and memory.h from NEED_CPU_H. Prototypes are
not defined for !NEED_CPU_H, so remove them from poison.h too. Only
macros need poisoning.

Backports commit a7d6039cb35592683ecc56d2b37817da2d2f8b00 from qemu
2018-02-24 01:04:26 -05:00
Paolo Bonzini fee6dcb22a
include: move CPU-related definitions out of qemu-common.h
Backports commit 4b4629d9d26fd0e100d9be526367a96aa35b541d from qemu
2018-02-24 00:33:49 -05:00
Wei Jiangang 7cf135457a
accel: make configure_accelerator return void
Returning the negated value of accel_initialised is meaningless,
and the caller vl doesn't check it.

Backports commit bdc3f61dec2f9c227235bb5f677a0272e1184c82 from qemu
2018-02-24 00:31:28 -05:00
Sergey Fedorov 1a768018c2
tcg: Remove needless CPUState::current_tb
This field was used for telling cpu_interrupt() to unlink a chain of TBs
being executed when it worked that way. Now, cpu_interrupt() doesn't do
this anymore, so we don't need this field either.

Backports commit 3213525f8ab48742db09dab18cb9ae6f36a6c921 from qemu
2018-02-23 23:45:42 -05:00
Sergey Fedorov ba9a237586
tcg: Rework tb_invalidated_flag
'tb_invalidated_flag' was meant to catch two events:
* some TB has been invalidated by tb_phys_invalidate();
* the whole translation buffer has been flushed by tb_flush().

Then it was checked:
* in cpu_exec() to ensure that the last executed TB can be safely
linked to directly call the next one;
* in cpu_exec_nocache() to decide if the original TB should be provided
for further possible invalidation along with the temporarily
generated TB.

It is always safe to patch an invalidated TB since it is not going to be
used anyway. It is also safe to call tb_phys_invalidate() for an already
invalidated TB. Thus, setting this flag in tb_phys_invalidate() is
simply unnecessary. Moreover, it can prevent perfectly proper linking
of TBs whenever an arbitrary TB has been invalidated. So just don't touch it
in tb_phys_invalidate().

If this flag is only used to catch whether tb_flush() has been called
then rename it to 'tb_flushed'. Declare it as 'bool' and stick to using
only 'true' and 'false' to set its value. Also, instead of setting it in
tb_gen_code(), just after tb_flush() has been called, do it right inside
of tb_flush().

In cpu_exec(), this flag is used to track if tb_flush() has been called
and have made 'next_tb' (a reference to the last executed TB) invalid
for linking it to directly call the next TB. tb_flush() can be called
during the CPU execution loop from tb_gen_code(), during TB execution or
by another thread while 'tb_lock' is released. Catch for translation
buffer flush reliably by resetting this flag once before first TB lookup
and each time we find it set before trying to add a direct jump. Don't
touch it in tb_find_physical().

Each vCPU has its own execution loop in multithreaded mode and thus
should have its own copy of the flag to be able to reset it with its own
'next_tb' without affecting any other vCPU execution thread. So make this
flag per-vCPU and move it to CPUState.

In cpu_exec_nocache(), we only need to check if tb_flush() has been
called from tb_gen_code() called by cpu_exec_nocache() itself. To do
this reliably, preserve the old value of the flag, reset it before
calling tb_gen_code(), check afterwards, and combine the saved value
back to the flag.

This patch is based on the patch "tcg: move tb_invalidated_flag to
CPUState" from Paolo Bonzini <pbonzini@redhat.com>.

Backports commit 6f789be56d3f38e9214dafcfab3bf9be7191f370 from qemu
2018-02-23 23:34:51 -05:00
Sergey Fedorov d60af028c5
tcg: Clarify thread safety check in tb_add_jump()
The check is to make sure that another thread hasn't already done the
same while we were outside of tb_lock. Mention this in a comment.

Backports commit 9962c478b153a18fe88a6509fe58cd178aff8abc from qemu
2018-02-23 21:32:47 -05:00
Sergey Fedorov fbc0a1105f
tcg: Use uintptr_t type for jmp_list_{next|first} fields of TB
These fields do not contain pure pointers to a TranslationBlock
structure. So uintptr_t is the most appropriate type for them.
Also put some asserts to assure that the two least significant bits of
the pointer are always zero before assigning it to jmp_list_first.

Backports commit c37e6d7e3589ecb96914faa21025ad7ba6654aea from qemu
2018-02-23 21:28:19 -05:00
Sergey Fedorov e60c24cecf
tcg: Clean up direct block chaining data fields
Briefly describe in a comment how direct block chaining is done. It
should help in understanding of the following data fields.

Rename some fields in TranslationBlock and TCGContext structures to
better reflect their purpose (dropping excessive 'tb_' prefix in
TranslationBlock but keeping it in TCGContext):
tb_next_offset => jmp_reset_offset
tb_jmp_offset => jmp_insn_offset
tb_next => jmp_target_addr
jmp_next => jmp_list_next
jmp_first => jmp_list_first

Avoid using a magic constant as an invalid offset which is used to
indicate that there's no n-th jump generated.

Backports commit f309101c26b59641fc1aa8fb2a98a5441cdaea03 from qemu
2018-02-23 21:28:19 -05:00
Sergey Fedorov c5b234ed1f
tcg: Note requirement on atomic direct jump patching
Backports commit 10b4f4855537dd421e193a7d0416513116370558 from qemu
2018-02-23 21:28:18 -05:00
Sergey Fedorov 52e2972300
tcg/arm: Make direct jump patching thread-safe
Ensure direct jump patching in ARM is atomic by using
atomic_read()/atomic_set() for code patching.

Backports commit 7d14e0e2d661479985197203589c38840e1066df from qemu
2018-02-23 21:28:18 -05:00
Sergey Fedorov 57359fbe6c
tcg/s390: Make direct jump patching thread-safe
Ensure direct jump patching in s390 is atomic by:
* naturally aligning a location of direct jump address;
* using atomic_read()/atomic_set() for code patching.

Backports commit ed3d51ecd7fe248d3959e469d53890ac9ffe0cd2 from qemu
2018-02-23 21:28:18 -05:00
Sergey Fedorov 5eb2d6618f
tcg/i386: Make direct jump patching thread-safe
Ensure direct jump patching in i386 is atomic by:
* naturally aligning a location of direct jump address;
* using atomic_read()/atomic_set() for code patching.

Backports commit 0d07abf05e98903c7faf204a9a90f7d45b7554dc from qemu
2018-02-23 21:28:17 -05:00
Lioncash fffa27d269
osdep: MSVC-compatible alignment macros
2018-02-23 21:28:17 -05:00
Sergey Fedorov 3456f0879e
include/qemu/osdep.h: Add macros for pointer alignment
These macros provide a convenient way to n-byte align pointers up and
down and check if a pointer is n-byte aligned.
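
A hedged sketch of what such helpers look like (n must be a power of two;
the macro names here are illustrative and QEMU's osdep.h versions may be
expressed differently):

    #include <stdint.h>

    #define ALIGN_PTR_DOWN(p, n) \
        ((__typeof__(p))((uintptr_t)(p) & ~((uintptr_t)(n) - 1)))
    #define ALIGN_PTR_UP(p, n) \
        ((__typeof__(p))(((uintptr_t)(p) + (n) - 1) & ~((uintptr_t)(n) - 1)))
    #define PTR_IS_ALIGNED(p, n) \
        (((uintptr_t)(p) & ((uintptr_t)(n) - 1)) == 0)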

Backports commit 6b587d3cda48e7ba26de8d30bf0d8a7063970715 from qemu
2018-02-23 21:28:17 -05:00
Sergey Fedorov 47eac70cb9
include/qemu/osdep.h: Add a macro to check for alignment
Backports commit 18a60a76147569ca9e11b0607e50ce4012fe1aaa from qemu
2018-02-23 21:28:17 -05:00
Emilio G. Cota 170f6e0b3b
tb: consistently use uint32_t for tb->flags
We are inconsistent with the type of tb->flags: usage varies loosely
between int and uint64_t. Settle on uint32_t everywhere, which is
superior to both: at least one target (aarch64) uses the most significant
bit in the u32, and uint64_t is wasteful.

Compile-tested for all targets.

Backports commit 89fee74a0f066dfd73830a7b5fa137e87888c870 from qemu
2018-02-23 21:28:11 -05:00
Edgar E. Iglesias bfc74c4da2
gen-icount: Use tcg_set_insn_param
Use tcg_set_insn_param() instead of directly accessing internal
tcg data structures to update an insn param.

Backports commit 25caa94c4a26daaab1e65c6d887e2972aeb5749e from qemu
2018-02-23 20:01:17 -05:00
Eric Blake 2f42c2c195
qapi: Change visit_type_FOO() to no longer return partial objects
Returning a partial object on error is an invitation for a careless
caller to leak memory. We already fixed things in an earlier
patch to guarantee NULL if visit_start fails ("qapi: Guarantee
NULL obj on input visitor callback error"), but that does not
help the case where visit_start succeeds but some other failure
happens before visit_end, such that we leak a partially constructed
object outside visit_type_FOO(). As no one outside the testsuite
was actually relying on these semantics, it is cleaner to just
document and guarantee that ALL pointer-based visit_type_FOO()
functions always leave a safe value in *obj during an input visitor
(either the new object on success, or NULL if an error is
encountered), so callers can now unconditionally use
qapi_free_FOO() to clean up regardless of whether an error occurred.

The decision is done by adding visit_is_input(), then updating the
generated code to check if additional cleanup is needed based on
the type of visitor in use.

Note that we still leave *obj unchanged after a scalar-based
visit_type_FOO(); I did not feel like auditing all uses of
visit_type_Enum() to see if the callers would tolerate a specific
sentinel value (not to mention having to decide whether it would
be better to use 0 or ENUM__MAX as that sentinel).

Backports commit 68ab47e4b4ecc1c4649362b8cc1e49794d1a6537 from qemu
2018-02-23 19:53:17 -05:00
Eric Blake 0d52542da2
qapi: Simplify semantics of visit_next_list()
The semantics of the list visit are somewhat baroque, with the
following pseudocode when FooList is used:

    start()
    for (prev = head; cur = next(prev); prev = &cur) {
        visit(&cur->value)
    }

Note that these semantics (advance before visit) require that
the first call to next() return the list head, while all other
calls return the next element of the list; that is, every visitor
implementation is required to track extra state to decide whether
to return the input as-is, or to advance. It also requires an
argument of 'GenericList **' to next(), solely because the first
iteration might need to modify the caller's GenericList head, so
that all other calls have to do a layer of dereferencing.

Thankfully, we only have two uses of list visits in the entire
code base: one in spapr_drc (which completely avoids
visit_next_list(), feeding in integers from a different source
than uint8List), and one in qapi-visit.py. That is, all other
list visitors are generated in qapi-visit.c, and share the same
paradigm based on a qapi FooList type, so we can refactor how
lists are laid out with minimal churn among clients.

We can greatly simplify things by hoisting the special case
into the start() routine, and flipping the order in the loop
to visit before advance:

    start(head)
    for (tail = *head; tail; tail = next(tail)) {
        visit(&tail->value)
    }

With the simpler semantics, visitors have less state to track,
the argument to next() is reduced to 'GenericList *', and it
also becomes obvious whether an input visitor is allocating a
FooList during visit_start_list() (rather than the old way of
not knowing if an allocation happened until the first
visit_next_list()). As a minor drawback, we now allocate in
two functions instead of one, and have to pass the size to
both functions (unless we were to tweak the input visitors to
cache the size to start_list for reuse during next_list, but
that defeats the goal of less visitor state).

The signature of visit_start_list() is chosen to match
visit_start_struct(), with the new parameters after 'name'.

The spapr_drc case is a virtual visit, done by passing NULL for
list, similarly to how NULL is passed to visit_start_struct()
when a qapi type is not used in those visits. It was easy to
provide these semantics for qmp-output and dealloc visitors,
and a bit harder for qmp-input (several prerequisite patches
refactored things to make this patch straightforward). But it
turned out that the string and opts visitors munge enough other
state during visit_next_list() to make it easier to just
document and require a GenericList visit for now; an assertion
will remind us to adjust things if we need the semantics in the
future.

Several pre-requisite cleanup patches made the reshuffling of
the various visitors easier; particularly the qmp input visitor.

Backports commit d9f62dde1303286b24ac8ce88be27e2b9b9c5f46 from qemu
2018-02-23 19:50:26 -05:00
Eric Blake 6084be1882
qapi: Split visit_end_struct() into pieces
As mentioned in previous patches, we want to call visit_end_struct()
functions unconditionally, so that visitors can release resources
tied up since the matching visit_start_struct() without also having
to worry about error priority if more than one error occurs.

Even though error_propagate() can be safely used to ignore a second
error during cleanup caused by a first error, it is simpler if the
cleanup cannot set an error. So, split out the error checking
portion (basically, input visitors checking for unvisited keys) into
a new function visit_check_struct(), which can be safely skipped if
any earlier errors are encountered, and leave the cleanup portion
(which never fails, but must be called unconditionally if
visit_start_struct() succeeded) in visit_end_struct().

Generated code in qapi-visit.c has diffs resembling:

|@@ -59,10 +59,12 @@ void visit_type_ACPIOSTInfo(Visitor *v,
| goto out_obj;
| }
| visit_type_ACPIOSTInfo_members(v, obj, &err);
|- error_propagate(errp, err);
|- err = NULL;
|+ if (err) {
|+ goto out_obj;
|+ }
|+ visit_check_struct(v, &err);
| out_obj:
|- visit_end_struct(v, &err);
|+ visit_end_struct(v);
| out:

and in qapi-event.c:

|@@ -47,7 +47,10 @@ void qapi_event_send_acpi_device_ost(ACP
| goto out;
| }
| visit_type_q_obj_ACPI_DEVICE_OST_arg_members(v, &param, &err);
|- visit_end_struct(v, err ? NULL : &err);
|+ if (!err) {
|+ visit_check_struct(v, &err);
|+ }
|+ visit_end_struct(v);
| if (err) {
| goto out;

Backports commit 15c2f669e3fb2bc97f7b42d1871f595c0ac24af8 from qemu
2018-02-23 19:13:47 -05:00
Eric Blake ef6b7b50f6
qapi: Add visit_type_null() visitor
Right now, qmp-output-visitor happens to produce a QNull result
if nothing is actually visited between the creation of the visitor
and the request for the resulting QObject. A stronger protocol
would require that a QMP output visit MUST visit something. But
to still be able to produce a JSON 'null' output, we need a new
visitor function that states our intentions. Yes, we could say
that such a visit must go through visit_type_any(), but that
feels clunky.

So this patch introduces the new visit_type_null() interface and
its no-op interface in the dealloc visitor, and stubs in the
qmp visitors (the next patch will finish the implementation).
For the visitors that will not implement the callback, document
the situation. The code in qapi-visit-core unconditionally
dereferences the callback pointer, so that a segfault will inform
a developer if they need to implement the callback for their
choice of visitor.

Note that JSON has a primitive null type, with the single value
null; likewise with the QNull type for QObject; but for QAPI,
we just have the 'null' value without a null type. We may
eventually want to add more support in QAPI for null (most likely,
we'd use it via an alternate type that permits 'null' or an
object); but we'll create that usage when we need it.

Backports commit 3bc97fd5924561d92f32758c67eaffd2e4e25038 from qemu
2018-02-23 15:48:57 -05:00
Eric Blake fafb3e354b
qapi: Document visitor interfaces, add assertions
The visitor interface for mapping between QObject/QemuOpts/string
and QAPI is scandalously under-documented, making changes to visitor
core, individual visitors, and users of visitors difficult to
coordinate. Among other questions: when is it safe to pass NULL,
vs. when a string must be provided; which visitors implement which
callbacks; the difference between concrete and virtual visits.

Correct this by retrofitting proper contracts, and document where some
of the interface warts remain (for example, we may want to modify
visit_end_* to require the same 'obj' as the visit_start counterpart,
so the dealloc visitor can be simplified). Later patches in this
series will tackle some, but not all, of these warts.

Add assertions to (partially) enforce the contract. Some of these
were only made possible by recent cleanup commits.

Backports commit adfb264c9ed04bfc694921b72173be8e29e90024 from qemu
2018-02-23 15:45:31 -05:00
Eric Blake 9e999acc83
qapi: Change visit_start_implicit_struct to visit_start_alternate
After recent changes, the only remaining use of
visit_start_implicit_struct() is for allocating the space needed
when visiting an alternate. Since the term 'implicit struct' is
hard to explain, rename the function to its current usage. While
at it, we can merge the functionality of visit_get_next_type()
into the same function, making it more like visit_start_struct().

Generated code is now slightly smaller:

| {
| Error *err = NULL;
|
|- visit_start_implicit_struct(v, (void**) obj, sizeof(BlockdevRef), &err);
|+ visit_start_alternate(v, name, (GenericAlternate **)obj, sizeof(**obj),
|+ true, &err);
| if (err) {
| goto out;
| }
|- visit_get_next_type(v, name, &(*obj)->type, true, &err);
|- if (err) {
|- goto out_obj;
|- }
| switch ((*obj)->type) {
| case QTYPE_QDICT:
| visit_start_struct(v, name, NULL, 0, &err);
...
| }
|-out_obj:
|- visit_end_implicit_struct(v);
|+ visit_end_alternate(v);
| out:
| error_propagate(errp, err);
| }

Backports commit dbf11922622685934bfb41e7cf2be9bd4a0405c0 from qemu
2018-02-23 15:33:25 -05:00
Eric Blake 559304aed9
qapi: Consolidate QMP input visitor creation
Rather than having two separate ways to create a QMP input
visitor, where the safer approach has the more verbose name,
it is better to consolidate things into a single function
where the caller must explicitly choose whether to be strict
or to ignore excess input. This patch is the strictly
mechanical conversion; the next patch will then audit which
uses can be made stricter.

Backports commit fc471c18d5d2ec713d5a019f9530398675494bc8 from qemu
2018-02-23 15:09:57 -05:00
Eric Blake 3cf7b6dd3b
qapi: Adjust layout of FooList types
By sticking the next pointer first, we don't need a union with
64-bit padding for smaller types.  On 32-bit platforms, this
can reduce the size of uint8List from 16 bytes (or 12, depending
on whether 64-bit ints can tolerate 4-byte alignment) down to 8.
It has no effect on 64-bit platforms (where alignment still
dictates a 16-byte struct); but fewer anonymous unions is still
a win in my book.

It requires visit_next_list() to gain a size parameter, to know
what size element to allocate; comparable to the size parameter
of visit_start_struct().

I debated about going one step further, to allow for fewer casts,
by doing:
    typedef GenericList GenericList;
    struct GenericList {
        GenericList *next;
    };
    struct FooList {
        GenericList base;
        Foo *value;
    };
so that you convert to 'GenericList *' by '&foolist->base', and
back by 'container_of(generic, GenericList, base)' (as opposed to
the existing '(GenericList *)foolist' and '(FooList *)generic').
But doing that would require hoisting the declaration of
GenericList prior to inclusion of qapi-types.h, rather than its
current spot in visitor.h; it also makes iteration a bit more
verbose through 'foolist->base.next' instead of 'foolist->next'.

Note that for lists of objects, the 'value' payload is still
hidden behind a boxed pointer.  Someday, it would be nice to do:

struct FooList {
    FooList *next;
    Foo value;
};

for one less level of malloc for each list element.  This patch
is a step in that direction (now that 'next' is no longer at a
fixed non-zero offset within the struct, we can store more than
just a pointer's-worth of data as the value payload), but the
actual conversion would be a task for another series, as it will
touch a lot of code.

Backports commit e65d89bf1a4484e0db0f3dc820a8b209f2fb1e8b from qemu
2018-02-23 14:49:06 -05:00
Eric Blake eef0932471
qapi-visit: Add visitor.type classification
We have three classes of QAPI visitors: input, output, and dealloc.
Currently, all implementations of these visitors have one thing in
common based on their visitor type: the implementation used for the
visit_type_enum() callback. But since we plan to add more such
common behavior, in relation to documenting and further refining
the semantics, it makes more sense to have the visitor
implementations advertise which class they belong to, so the common
qapi-visit-core code can use that information in multiple places.

A later patch will better document the types of visitors directly
in visitor.h.

For this patch, knowing the class of a visitor implementation lets
us make input_type_enum() and output_type_enum() become static
functions, by replacing the callback function Visitor.type_enum()
with the simpler enum member Visitor.type. Share a common
assertion in qapi-visit-core as part of the refactoring.

Move comments in opts-visitor.c to match the refactored layout.

Backports commit 983f52d4b3f86fb9dc9f8b142132feb5a8723016 from qemu
2018-02-23 14:25:41 -05:00
Fam Zheng 5c739f14f5
util: Fix MIN_NON_ZERO
MIN_NON_ZERO(1, 0) evaluates to 0. Rewrite the macro to fix it.
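
A hedged sketch of the corrected shape (the fix treats a zero argument as
"absent" instead of letting it win the comparison):

    #define MIN_NON_ZERO(a, b) \
        ((a) == 0 ? (b) : ((b) == 0 ? (a) : ((a) < (b) ? (a) : (b))))

    /* MIN_NON_ZERO(1, 0) == 1, MIN_NON_ZERO(0, 7) == 7, MIN_NON_ZERO(3, 2) == 2 */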

Backports commit b6ece2c6f37926a994bc564a9e55ef3be6016d8f from qemu
2018-02-23 14:09:44 -05:00
Lioncash 87130fc884
exec-all: Remove externs
These are unused
2018-02-23 12:43:03 -05:00
Peter Crosthwaite 576f1752a6
include/exec: Move cputlb exec.c defs out
Move the architecture agnostic function prototypes for exec.c out of
cputlb.h to exec-all.h. This allows hiding of the arch specific
cputlb.h from exec.c which should be getting close to having no
architecture specifics. Prepares support for multi-arch, which will have
a minimal cpu.h that services exec.c but not cputlb.h.

Backports commit dfccc7602374c9fd3b083208b552d62daa244811 from qemu
2018-02-23 10:52:25 -05:00
Peter Crosthwaite 97c9423ee8
cputlb: move CPU_LOOP() for tlb_reset() to exec.c
To prepare for multi-arch, cputlb.c should only have awareness of one
single architecture. This means it should not have access to the full
CPU lists which may be heterogeneous. Instead, push the CPU_LOOP() up
to the one and only caller in exec.c.

Backports commit 9a13565d52bfd321934fb44ee004bbaf5f5913a8 from qemu
2018-02-23 10:46:31 -05:00
Paolo Bonzini 9479199c6b
memory: fix usage of find_next_bit and find_next_zero_bit
The last two arguments to these functions are the last and first bit to
check relative to the base. The code was incorrectly passing the first
bit and the number of bits. Fix this in cpu_physical_memory_get_dirty
and cpu_physical_memory_all_dirty. This requires a few changes in the
iteration; change the code in cpu_physical_memory_set_dirty_range to
match.

Backports commit 88c73d16ad1b6c22a2ab082064d0d521f756296a from qemu
2018-02-22 19:51:43 -05:00
Alex Bennée 171d267209
include/qemu/atomic.h: default to __atomic functions
The __atomic primitives have been available since GCC 4.7 and provide
a richer interface for describing memory ordering requirements. As a
bonus by using the primitives instead of hand-rolled functions we can
use tools such as the ThreadSanitizer which need the use of well
defined APIs for its analysis.

If we have __ATOMIC defines we exclusively use the __atomic primitives
for all our atomic access. Otherwise we fall back to the mixture of
__sync and hand-rolled barrier cases.

Backports commit a0aa44b488b3601415d55041e4619aef5f3a4ba8 from qemu
2018-02-22 16:12:59 -05:00
Paolo Bonzini 4e7259a49b
atomics: add explicit compiler fence in __atomic memory barriers
__atomic_thread_fence does not include a compiler barrier; in the
C++11 memory model, fences take effect in combination with other
atomic operations.  GCC implements this by making __atomic_load and
__atomic_store access memory as if the pointer was volatile, and
leaves no trace whatsoever of acquire and release fences in the
compiler's intermediate representation.

In QEMU, we want memory barriers to act on all memory, but at the same
time we would like to use __atomic_thread_fence for portability reasons.
Add compiler barriers manually around the __atomic_thread_fence.
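
A hedged sketch of the resulting shape of a full memory barrier (names are
illustrative, not QEMU's exact definitions):

    #define barrier()       __asm__ __volatile__("" ::: "memory")
    #define smp_mb_sketch() \
        do { barrier(); __atomic_thread_fence(__ATOMIC_SEQ_CST); barrier(); } while (0)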

Backports commit 3bbf572345c65813f86a8fc434ea1b23beb08e16 from qemu
2018-02-22 15:56:37 -05:00
Paolo Bonzini 02e3eeff40
atomic: fix position of volatile qualifier
What needs to be volatile is not the pointer, but the pointed-to
value!
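
For illustration, the distinction in plain C:

    int x;
    volatile int *p_ok = &x;  /* accesses through p_ok are volatile (intended) */
    int *volatile p_bad = &x; /* only the pointer variable itself is volatile  */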

Backports commit 2cbcfb281afa041a41f6e4c4da0f5c9314084604 from qemu
2018-02-22 15:52:48 -05:00
Stefan Hajnoczi e79e0881cd
memory: RCU ram_list.dirty_memory[] for safe RAM hotplug
Although accesses to ram_list.dirty_memory[] use atomics so multiple
threads can safely dirty the bitmap, the data structure is not fully
thread-safe yet.

This patch handles the RAM hotplug case where ram_list.dirty_memory[] is
grown.  ram_list.dirty_memory[] is change from a regular bitmap to an
RCU array of pointers to fixed-size bitmap blocks.  Threads can continue
accessing bitmap blocks while the array is being extended.  See the
comments in the code for an in-depth explanation of struct
DirtyMemoryBlocks.

I have tested that live migration with virtio-blk dataplane works.

Backports commit 5b82b703b69acc67b78b98a5efc897a3912719eb from qemu
2018-02-22 15:38:03 -05:00
Denis V. Lunev eb29ff04ca
log: move qemu_log_close/qemu_log_flush from header to log.c
There is no particular reason to keep these functions in the header.
Suggested by Paolo.

Backports commit 99affd1d5bd4e396ecda50e53dfbc5147fa1313d from qemu
2018-02-22 11:13:17 -05:00
Alex Bennée 3da7d9d9ae
qemu-log: dfilter-ise exec, out_asm, op and opt_op

This ensures the code generation debug code will honour -dfilter if set.
For the "exec" tracing I've added a new inline macro for efficiency's
sake.

Backports commit d977e1c2dbc9e63454b2000f91954d02543bf43b from qemu
2018-02-22 10:06:19 -05:00
Alex Bennée 2d401b6f23
qemu-log: new option -dfilter to limit output
When debugging big programs or system emulation, sometimes you want the
verbosity of cpu,exec and friends, but you don't want to generate lots of
logs for unneeded stuff. This patch adds a new option, -dfilter, which allows
you to specify interesting address ranges in the form:

-dfilter 0x8000..0x8fff,0xffffffc000080000+0x200,...

Then logging code can use the new qemu_log_in_addr_range() function to
decide if it will output logging information for the given range.
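
A standalone sketch of the filtering idea (hypothetical names; QEMU's
qemu_log_in_addr_range() checks the parsed -dfilter list along these lines):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    struct dfilter_range { uint64_t start; uint64_t len; };

    static bool in_addr_range(const struct dfilter_range *ranges, size_t n,
                              uint64_t addr)
    {
        for (size_t i = 0; i < n; i++) {
            if (addr - ranges[i].start < ranges[i].len) {
                return true;
            }
        }
        return false;
    }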

Backports commit 3514552e04388d8e7686bcf89efd022e892acb5b from qemu
2018-02-22 10:02:26 -05:00
Peter Maydell 3f5e36e15f
qemu-log: Improve the exec TB execution logging
Improve the TB execution logging so that it is easier to identify
what is happening from trace logs:
* move the "Trace" logging of executed TBs into cpu_tb_exec()
so that it is emitted if and only if we actually execute a TB,
and for consistency for the CPU state logging
* log when we link two TBs together via tb_add_jump()
* log when cpu_tb_exec() returns early from a chain of TBs

The new style logging looks like this:

Trace 0x7fb7cc822ca0 [ffffffc0000dce00]
Linking TBs 0x7fb7cc822ca0 [ffffffc0000dce00] index 0 -> 0x7fb7cc823110 [ffffffc0000dce10]
Trace 0x7fb7cc823110 [ffffffc0000dce10]
Trace 0x7fb7cc823420 [ffffffc000302688]
Trace 0x7fb7cc8234a0 [ffffffc000302698]
Trace 0x7fb7cc823520 [ffffffc0003026a4]
Trace 0x7fb7cc823560 [ffffffc0000dce44]
Linking TBs 0x7fb7cc823560 [ffffffc0000dce44] index 1 -> 0x7fb7cc8235d0 [ffffffc0000dce70]
Trace 0x7fb7cc8235d0 [ffffffc0000dce70]
Stopped execution of TB chain before 0x7fb7cc8235d0 [ffffffc0000dce70]
Trace 0x7fb7cc8235d0 [ffffffc0000dce70]
Trace 0x7fb7cc822fd0 [ffffffc0000dd52c]

Backports commit 1a830635229e14c403600167823ea6b3b79d3097 from qemu
2018-02-22 09:40:11 -05:00
Peter Maydell 66e1bacd64
qemu-log: Avoid function call for disabled qemu_log_mask logging
Make qemu_log_mask() a macro which only calls the function to
do the actual work if the logging is enabled. This avoids making
a function call in possible fast paths where logging is disabled.
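
A hedged sketch of the macro shape (illustrative names; QEMU's definition
differs in detail): the mask test happens inline, so the function call is
only paid when that log level is actually enabled.

    #include <stdio.h>

    static int loglevel_sketch;

    #define log_mask_sketch(MASK, FMT, ...)              \
        do {                                             \
            if (loglevel_sketch & (MASK)) {              \
                fprintf(stderr, FMT, ## __VA_ARGS__);    \
            }                                            \
        } while (0)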

Backports commit 7ee606230e6b7645d92365d9b39179368e83ac54 from qemu
2018-02-22 09:32:48 -05:00
Veronia Bahaa bafc81b1d3
util: move declarations out of qemu-common.h
Move declarations out of qemu-common.h for functions declared in
utils/ files: e.g. include/qemu/path.h for utils/path.c.
Move inline functions out of qemu-common.h and into new files (e.g.
include/qemu/bcd.h)

Backports commit f348b6d1a53e5271cf1c9f9acc4646b4b98c1771 from qemu
2018-02-22 09:25:48 -05:00
Marc-André Lureau fff79ed49b
utils: rename strtosz to use qemu prefix
Not only does it make sense, but it gets rid of the checkpatch warning:
WARNING: consider using qemu_strtosz in preference to strtosz

Also get rid of tabs to please checkpatch.

Backports commit 4677bb40f809394bef5fa07329dea855c0371697 from qemu
2018-02-22 00:17:52 -05:00
Rutuja Shah d9fdc180d7
Replaced get_tick_per_sec() by NANOSECONDS_PER_SECOND
This patch replaces get_ticks_per_sec() calls with the macro
NANOSECONDS_PER_SECOND. Also, as there are no callers, get_ticks_per_sec()
is then removed. This replacement improves the readability and
understandability of code.

For example,

timer_mod(fdctrl->result_timer,
qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + (get_ticks_per_sec() / 50));

NANOSECONDS_PER_SECOND makes it obvious that qemu_clock_get_ns
matches the unit of the expression on the right side of the plus.

Backports commit 73bcb24d932912f8e75e1d88da0fc0ac6d4bce78 from qemu
2018-02-21 23:21:36 -05:00
Markus Armbruster 6730bd3131
Move QEMU_ALIGN_*() from qemu-common.h to qemu/osdep.h
qemu-common.h should only be included by .c files. Its file comment
explains why: "No header file should depend on qemu-common.h, as this
would easily lead to circular header dependencies."

One of the reasons for headers to include it is QEMU_ALIGN_UP() and
QEMU_ALIGN_DOWN(). Move them next to ROUND_UP() in qemu/osdep.h, to
facilitate removing these ill-advised includes later on.
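
For reference, the two macros amount to plain integer arithmetic (a sketch
consistent with their intended behaviour):

/* Round n down to the nearest multiple of m. */
#define QEMU_ALIGN_DOWN(n, m)  ((n) / (m) * (m))

/* Round n up to the nearest multiple of m. */
#define QEMU_ALIGN_UP(n, m)    QEMU_ALIGN_DOWN((n) + (m) - 1, (m))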

Backports commit e07e540aaa08718c9ff8213067a3dcef31b3e313 from qemu
2018-02-21 23:12:24 -05:00
Markus Armbruster 6b1ebd16e6
Move HOST_LONG_BITS from qemu-common.h to qemu/osdep.h
qemu-common.h should only be included by .c files. Its file comment
explains why: "No header file should depend on qemu-common.h, as this
would easily lead to circular header dependencies."

One of the reasons for headers to include it is HOST_LONG_BITS. Move
that to its more natural home qemu/osdep.h, to facilitate removing
these ill-advised includes later on.

This also lets us use HOST_LONG_BITS in bswap.h instead of duplicating
its definition there to avoid cyclic inclusion.

Backports commit a8139632161d7546218b696cada0a4f64cc78fb7 from qemu
2018-02-21 23:10:43 -05:00
Markus Armbruster 06668850e3
include/qemu/osdep.h: Don't include qapi/error.h
Commit 57cb38b included qapi/error.h into qemu/osdep.h to get the
Error typedef. Since then, we've moved to include qemu/osdep.h
everywhere. Its file comment explains: "To avoid getting into
possible circular include dependencies, this file should not include
any other QEMU headers, with the exceptions of config-host.h,
compiler.h, os-posix.h and os-win32.h, all of which are doing a
similar job to this file and are under similar constraints."
qapi/error.h doesn't do a similar job, and it doesn't adhere to
similar constraints: it includes qapi-types.h. That's in excess of
100KiB of crap most .c files don't actually need.

Add the typedef to qemu/typedefs.h, and include that instead of
qapi/error.h. Include qapi/error.h in .c files that need it and don't
get it now. Include qapi-types.h in qom/object.h for uint16List.

Update scripts/clean-includes accordingly. Update it further to match
reality: replace config.h by config-target.h, add sysemu/os-posix.h,
sysemu/os-win32.h. Update the list of includes in the qemu/osdep.h
comment quoted above similarly.

This reduces the number of objects depending on qapi/error.h from "all
of them" to less than a third. Unfortunately, the number depending on
qapi-types.h shrinks only a little. More work is needed for that one.

Backports commit da34e65cb4025728566d6504a99916f6e7e1dd6a from qemu
2018-02-21 23:08:18 -05:00
Stefan Weil baa477d324
Remove unneeded include statements for setjmp.h
As soon as setjmp.h is included from qemu/osdep.h, those old include
statements are no longer needed.

Add also setjmp.h to the list in scripts/clean-includes.

Backports commit 8ff98f1ed2f50cd05c3c5027c7efdf69859ec664 from qemu
2018-02-21 22:57:32 -05:00
Stefan Weil 904b3c467e
Include setjmp.h in qemu/osdep.h (bug fix for w64)
setjmp must be declared before sysemu/os-win32.h
because it is redefined there for 64 bit Windows.

Backports commit e89fdafb58038038e3ccb860c5e1068ba063bac8 from qemu
2018-02-21 22:56:46 -05:00
Max Reitz 1cfdf802a9
qapi: Drop QERR_UNKNOWN_BLOCK_FORMAT_FEATURE
Just specifying a custom string is simpler in basically all places that
used it, and in addition, specifying the BB or node name is something we
generally do not do in other error messages when opening a BDS, so we
should not do it here.

This changes the output for iotest 036 (to the better, in my opinion),
so the reference output needs to be changed accordingly.

Backports commit a55448b3681a880b77eaefe8b2c42912000cb481 from qemu
2018-02-21 21:55:15 -05:00
Daniel P. Berrange eddfb13c2c
qom: Change object property iterator API contract
Currently the ObjectProperty iterator API works as follows:

ObjectPropertyIterator *iter;

iter = object_property_iter_init(obj);
while ((prop = object_property_iter_next(iter))) {
...
}
object_property_iter_free(iter);

This has the benefit that the ObjectPropertyIterator struct
can be opaque, but has the downside that callers need to
explicitly call a free function. It is also not in keeping
with the iterator style used elsewhere in QEMU/GLib2.

This patch changes the API to use stack allocation instead:

ObjectPropertyIterator iter;

object_property_iter_init(&iter, obj);
while ((prop = object_property_iter_next(&iter))) {
...
}

Backports commit 7746abd8e9ee9db20c0b0fdb19504f163ba3cbea from qemu
2018-02-21 21:03:58 -05:00
Daniel P. Berrange b97ab59f08
qom: Allow properties to be registered against classes
When there are many instances of a given class, registering
properties against the instance is wasteful of resources. The
majority of objects have a statically defined list of possible
properties, so most of the properties are easily registerable
against the class. Only those properties which are conditionally
registered at runtime need to be recorded against the object instance.

Registering properties against classes also makes it possible
to provide static introspection of QOM - currently introspection
is only possible after creating an instance of a class, which
severely limits its usefulness.

This implementation only supports simple scalar properties. It does not
attempt to allow child object / link object properties against
the class. There are ways to support those too, but it would
make this patch more complicated, so it is left as an exercise
for the future.

There is no equivalent to object_property_del() provided, since
classes must be immutable once they are defined.

Backports commit 16bf7f522a2ff68993f80631ed86254c71eaf5d4 from qemu
2018-02-21 21:00:56 -05:00
Pavel Fedin 825bc2fb04
qom: Replace object property list with GHashTable
ARM GICv3 systems with a large number of CPUs create lots of IRQ pins. Since
every pin is represented as a property, the number of these properties becomes
very large. Every property addition first makes sure there are no duplicates.
Traversing the list becomes very slow, so QEMU initialization takes
significant time (several seconds for e.g. 16 CPUs).

This patch replaces the list with a GHashTable, making lookups very fast. The only
drawback is that object_child_foreach() and object_child_foreach_recursive()
cannot add or remove properties during traversal, since GHashTableIter does
not have a modify-safe version. However, the code does not appear to modify
objects via these functions.

Backports commit b604a854e843505007c59d68112c654556102a20 from qemu
2018-02-21 13:35:10 -05:00
Lioncash 4b79ff71b4
glib_compat: backport hashtable iterator interfaces 2018-02-21 13:18:44 -05:00
Pavel Fedin 0201c71145
Merge memory_region_init_reservation() into memory_region_init_io()
Just specifying ops = NULL in some cases can be more convenient than having
two functions.
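
A hypothetical call site illustrating the change (mr, owner and size are
placeholders):

/* Before: memory_region_init_reservation(mr, owner, "reserved", size); */
/* After: an I/O region with no ops serves as the reservation.          */
memory_region_init_io(mr, owner, NULL /* ops */, NULL /* opaque */,
                      "reserved", size);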

Backports commit 6d6d2abf2c2e52c0f404d0a31a963e945b0cc7ad from qemu
2018-02-21 11:23:00 -05:00
Fam Zheng fa7d3e6cdb
memory: Drop MemoryRegion.ram_addr
All references to mr->ram_addr are replaced by
memory_region_get_ram_addr(mr) (except for a few assertions that are
replaced with mr->ram_block).

Backports commit 8e41fb63c5bf29ecabe0cee1239bf6230f19978a from qemu
2018-02-21 08:53:08 -05:00
Fam Zheng 2c1a72635d
memory: Implement memory_region_get_ram_addr with mr->ram_block
Backports commit 7ebb2745acbb8d910eab07dc5f0aa01a4457703c from qemu
2018-02-21 08:53:08 -05:00
Gonglei aa80edbef0
exec: Return RAMBlock pointer from allocating functions
Previously we returned RAMBlock.offset; now we return a pointer to the
whole structure.

ram_block_add now returns void; errors are passed entirely via errp.

Backports commit 528f46af6ecd1e300db18684969104d4067b867b from qemu
2018-02-21 08:52:57 -05:00
Lluís Vilanova d111e2df2d
typedefs: Add CPUState
Backports commit b23197f9cf2f221a6cc6272d36852f4f70cf9c1b from qemu
2018-02-21 01:55:22 -05:00
Gonglei 26951bf754
memory: Remove unreachable return statement
Backports commit d61524486c6e503e502241a2ea834f930f98a6a1 from qemu
2018-02-20 20:54:24 -05:00
Gonglei d25285bc78
memory: optimize qemu_get_ram_ptr and qemu_ram_ptr_length
These two functions spend too much CPU time looking up the RAMBlock
by ram address.

After this patch, we can pass the RAMBlock pointer to them so that most
of the time they no longer need to find the RAMBlock themselves. This
gives better performance in address translation processing.

Backports commit 3655cb9c7375a595a8051ec677c515b24d5c1fe6 from qemu
2018-02-20 20:53:31 -05:00
Gonglei 39e4d63e68
exec: store RAMBlock pointer into memory region
Each RAM memory region has a unique corresponding RAMBlock.
In the current implementation, the memory region only stores
the ram_addr, i.e. the offset into the RAM address space.
To get the RAMBlock we need to query the global ram.list by
ram_addr, which is very expensive.

Now we store the RAMBlock pointer in the memory region
structure, so if we know the mr we can easily get the
RAMBlock.
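
A minimal sketch of the idea; the struct is abbreviated and the accessor is
a hypothetical helper for illustration:

struct MemoryRegion {
    /* ... other fields elided ... */
    RAMBlock *ram_block;   /* non-NULL when the region is RAM-backed */
};

/* The expensive walk of the global RAM list by ram_addr becomes a
 * simple pointer dereference. */
static inline RAMBlock *mr_to_ram_block(MemoryRegion *mr)
{
    return mr->ram_block;
}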

Backports commit 58eaa2174e99d9a05172d03fd2799ab8fd9e6f60 from qemu
2018-02-20 20:43:32 -05:00
Peter Maydell 547fabd58e
osdep.h: Include config-target.h if NEED_CPU_H is defined
NEED_CPU_H is the define we use to distinguish per-target object
compilation from common object compilation. For the former, we must
also include config-target.h so that the .c files see the necessary
CONFIG_ constants.

Backports commit b1e34d1c3a9059e87719634bfc4db53174d63e14 from qemu
2018-02-20 19:11:07 -05:00
Peter Maydell c41bb9a772
osdep.h: Define macros for the benefit of C++ before C++11
For C++ before C++11, <stdint.h> requires definition of the macros
__STDC_CONSTANT_MACROS, __STDC_LIMIT_MACROS and __STDC_FORMAT_MACROS
in order to enable definition of various macros by the header file.
Define these in osdep.h, so that we get the right header file
definitions whether osdep.h is being used by plain C, C++11 or
older C++.

In particular libvixl's header files depend on this and won't
compile if osdep.h is included before them otherwise.
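
The definitions boil down to setting the macros before <stdint.h> and
<inttypes.h> are first included, roughly as follows (a sketch, not the
exact osdep.h text):

/* Must come before the first inclusion of <stdint.h> / <inttypes.h>. */
#ifndef __STDC_CONSTANT_MACROS
#define __STDC_CONSTANT_MACROS
#endif
#ifndef __STDC_LIMIT_MACROS
#define __STDC_LIMIT_MACROS
#endif
#ifndef __STDC_FORMAT_MACROS
#define __STDC_FORMAT_MACROS
#endif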

Backports commit 79f56d82f805b170fa2be8c04b682117be56483f from qemu
2018-02-20 19:09:58 -05:00
Lioncash c17fa2cad3
osdep.h: Remove int_fast*_t Solaris compatibility code
We now do not use the int_fast*_t types anywhere in QEMU, so we can
remove the compatibility definitions we were providing for the
benefit of ancient Solaris versions.

Backports commit 50fe4df8ee6aba63ae51457bad40ba26e3c9746f from qemu
2018-02-20 18:58:53 -05:00
Peter Maydell 68cbe1b2ce
fpu: Remove use of int_fast16_t in conversions to int16
Make the functions which convert floating point to 16 bit integer
return int16_t rather than int_fast16_t, and correspondingly use
int_fast16_t in their internal implementations where appropriate.

(These functions are used only by the ARM target.)

Backports commit 0bb721d7217ed4a1abb44f521c5c7ec185062d58 from qemu
2018-02-20 16:54:04 -05:00
Eric Blake e096e62127
qapi: Don't box branches of flat unions
There's no reason to do two malloc's for a flat union; let's just
inline the branch struct directly into the C union branch of the
flat union.

Surprisingly, fewer clients were actually using explicit references
to the branch types in comparison to the number of flat unions
thus modified.

This lets us reduce the hack in qapi-types:gen_variants() added in
the previous patch; we no longer need to distinguish between
alternates and flat unions.

The change to unboxed structs means that u.data (added in commit
cee2dedb) is now coincident with random fields of each branch of
the flat union, whereas beforehand it was only coincident with
pointers (since all branches of a flat union have to be objects).
Note that this was already the case for simple unions - but there
we got lucky. Remember, visit_start_union() blindly returns true
for all visitors except for the dealloc visitor, where it returns
the value !!obj->u.data, and that this result then controls
whether to proceed with the visit to the variant. Pre-patch,
this meant that flat unions were testing whether the boxed pointer
was still NULL, and thereby skipping visit_end_implicit_struct()
and avoiding a NULL dereference if the pointer had not been
allocated. The same was true for simple unions where the current
branch had pointer type, except there we bypassed visit_type_FOO().
But for simple unions where the current branch had scalar type, the
contents of that scalar meant that the decision to call
visit_type_FOO() was data-dependent - the reason we got lucky there
is that visit_type_FOO() for all scalar types in the dealloc visitor
is a no-op (only the pointer variants had anything to free), so it
did not matter whether the dealloc visit was skipped. But with this
patch, we would risk leaking memory if we could skip a call to
visit_type_FOO_fields() based solely on a data-dependent decision.

But notice: in the dealloc visitor, visit_type_FOO() already handles
a NULL obj - it was only the visit_type_implicit_FOO() that was
failing to check for NULL. And now that we have refactored things to
have the branch be part of the parent struct, we no longer have a
separate pointer that can be NULL in the first place. So we can just
delete the call to visit_start_union() altogether, and blindly visit
the branch type; there is no change in behavior except to the dealloc
visitor, where we now unconditionally visit the branch, but where that
visit is now always safe (for a flat union, we can no longer
dereference NULL, and for a simple union, visit_type_FOO() was already
safely handling NULL on pointer types).

Unfortunately, simple unions are not as easy to switch to unboxed
layout; because we are special-casing the hidden implicit type with
a single 'data' member, we really DO need to keep calling another
layer of visit_start_struct(), with a second malloc; although there
are some cleanups planned for simple unions in later patches.

visit_start_union() and gen_visit_implicit_struct() are now unused.
Drop them.

Note that after this patch, the only remaining use of
visit_start_implicit_struct() is for alternate types; the next patch
will do further cleanup based on that fact.

Backports commit 544a3731591f5d53e15f22de00ce5ac758d490b3 from qemu
2018-02-20 16:44:55 -05:00
Alistair Francis a4bf026460
qom: Correct object_property_get_int() description
The description of object_property_get_int() stated that on an error
it returns NULL. This is not the case and the function will return -1
if an error occurs. Update the commented documentation accordingly.

Backports commit b29b47e9b35017428904e0e934700877dfaabe73 from qemu
2018-02-20 11:52:16 -05:00
Sergey Fedorov 6a3038db7c
cpu: Add callback to check architectural watchpoint match
When a QEMU watchpoint matches, that is not necessarily an architectural
watchpoint match yet. If it is a stop-before-access watchpoint, it is
hardly possible to ignore it after a TCG exception has been thrown.

A special callback is introduced to check for architectural watchpoint
match before raising a TCG exception.

Backports commit 568496c0c0f1863a4bc18539962cd8d81baa4e30 from qemu
2018-02-20 11:43:56 -05:00
Lioncash c658126845
include: Move RAMList to ramlist.h
Moves the struct back into qemu's headers
2018-02-20 08:47:51 -05:00
Lioncash cdd4003ce9
Move RAMBlock to ram_addr.h
Moves it back into qemu's includes.
2018-02-20 08:35:44 -05:00
Paolo Bonzini cbc56b3ceb
memory: add early bail out from cpu_physical_memory_set_dirty_range
This condition is true in the common case, so we can cut out the body of
the function. In addition, this makes it easier for the compiler to do
at least partial inlining, even if it decides that fully inlining the
function is unreasonable.

Backports commit 8bafcb21643a39a5b29109f8bd5ee5a6f0f6850b from qemu
2018-02-20 08:32:10 -05:00
Lioncash a268815478
include: Add stubbed xen function
Will allow us to not comment out code all the time for xen checks (ideally)
2018-02-20 08:29:58 -05:00
Markus Armbruster 91b4e76fef
error: New error_fatal
Similar to error_abort, but doesn't report where the error was
created, and terminates the process with exit(1) rather than abort().
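
A hypothetical caller showing the intended split in use (the property being
set is illustrative only):

/* Programming error: abort() and report where the error object was created. */
object_property_set_bool(obj, true, "realized", &error_abort);

/* Expected runtime failure (e.g. bad configuration): report it and exit(1). */
object_property_set_bool(obj, true, "realized", &error_fatal);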

Backports commit a29a37b994ca3c5a1d39fa0e8934f7e0f2cf57ef from qemu
2018-02-20 08:22:27 -05:00
Markus Armbruster c4a06e2d12
error: Improve documentation some more
Don't claim error_report_err() always reports to stderr. It actually
reports to the current monitor when we have one.

Clarify intended use of error_abort and error_fatal.

Backports commit 10303f04b98efa76e638b9ae4632688f56f088fc from qemu
2018-02-20 08:13:39 -05:00
Eric Blake 1f54314cbb
qapi: Drop unused 'kind' for struct/enum visit
visit_start_struct() and visit_type_enum() had a 'kind' argument
that was usually set to either the stringized version of the
corresponding qapi type name, or to NULL (although some clients
didn't even get that right). But nothing ever used the argument.
It's even hard to argue that it would be useful in a debugger,
as a stack backtrace also tells which type is being visited.

Therefore, drop the 'kind' argument as dead.

Backports commit 337283dffbb5ad5860ed00408a5fd0665c21be07 from qemu
2018-02-19 23:43:54 -05:00
Eric Blake 844c136945
qapi: Swap 'name' in visit_* callbacks to match public API
As explained in the previous patches, matching argument order of
'name, &value' to JSON's "name":value makes sense. However,
while the last two patches were easy with Coccinelle, I ended up
doing this one all by hand. Now all the visitor callbacks match
the main interface.

The compiler is able to enforce that all clients match the changed
interface in visitor-impl.h, even where two pointers are being
swapped, because only one of the two pointers is const (if that
were not the case, then C's looseness on treating 'char *' like
'void *' would have made review a bit harder).

Backports commit 0b2a0d6bb2446060944061e53e87d0c7addede79 from qemu
2018-02-19 23:36:52 -05:00
Eric Blake 4100f3b78a
qapi: Consolidate visitor small integer callbacks
Commit 4e27e819 introduced optional visitor callbacks for all
sorts of int types, but no visitor has supplied any of the
callbacks for sizes less than 64 bits. In other words, the
generic implementation based on using type_[u]int64() followed
by bounds-checking works just fine. In the interest of
simplicity, it's easier to make the visitor callback interface
not have to worry about the other sizes.

Adding some helper functions minimizes the boilerplate required
to correct FIXMEs added earlier with regards to questionable
reuse of errp, particularly now that we can guarantee from a
single file audit that value is unchanged if an error is set.

Backports commit 04e070d217b4414f1f91aa8ad25fc0ae7ca0be93 from qemu
2018-02-19 23:21:56 -05:00
Eric Blake 9ec25b4673
qom: Swap 'name' next to visitor in ObjectPropertyAccessor
Similar to the previous patch, it's nice to have all functions
in the tree that involve a visitor and a name for conversion to
or from QAPI to consistently stick the 'name' parameter next
to the Visitor parameter.

Done by manually changing include/qom/object.h and qom/object.c,
then running this Coccinelle script and touching up the fallout
(Coccinelle insisted on adding some trailing whitespace).

@ rule1 @
identifier fn;
typedef Object, Visitor, Error;
identifier obj, v, opaque, name, errp;
@@
void fn
- (Object *obj, Visitor *v, void *opaque, const char *name,
+ (Object *obj, Visitor *v, const char *name, void *opaque,
Error **errp) { ... }

@@
identifier rule1.fn;
expression obj, v, opaque, name, errp;
@@
fn(obj, v,
- opaque, name,
+ name, opaque,
errp)

Backports commit d7bce9999df85c56c8cb1fcffd944d51bff8ff48 from qemu
2018-02-19 23:14:37 -05:00
Eric Blake 5dd5646a9a
qapi: Swap visit_* arguments for consistent 'name' placement
JSON uses "name":value, but many of our visitor interfaces were
called with visit_type_FOO(v, &value, name, errp). This can be
a bit confusing to have to mentally swap the parameter order to
match JSON order. It's particularly bad for visit_start_struct(),
where the 'name' parameter is smack in the middle of the
otherwise-related group of 'obj, kind, size' parameters! It's
time to do a global swap of the parameter ordering, so that the
'name' parameter is always immediately after the Visitor argument.

Additional reason in favor of the swap: the existing include/qjson.h
prefers listing 'name' first in json_prop_*(), and I have plans to
unify that file with the qapi visitors; listing 'name' first in
qapi will minimize churn to the (admittedly few) qjson.h clients.

Later patches will then fix docs, object.h, visitor-impl.h, and
those clients to match.

Done by first patching scripts/qapi*.py by hand to make generated
files do what I want, then by running the following Coccinelle
script to affect the rest of the code base:
$ spatch --sp-file script `git grep -l '\bvisit_' -- '**/*.[ch]'`
I then had to apply some touchups (Coccinelle insisted on TAB
indentation in visitor.h, and botched the signature of
visit_type_enum() by rewriting 'const char *const strings[]' to
the syntactically invalid 'const char*const[] strings'). The
movement of parameters is sufficient to provoke compiler errors
if any callers were missed.

// Part 1: Swap declaration order
@@
type TV, TErr, TObj, T1, T2;
identifier OBJ, ARG1, ARG2;
@@
void visit_start_struct
-(TV v, TObj OBJ, T1 ARG1, const char *name, T2 ARG2, TErr errp)
+(TV v, const char *name, TObj OBJ, T1 ARG1, T2 ARG2, TErr errp)
{ ... }

@@
type bool, TV, T1;
identifier ARG1;
@@
bool visit_optional
-(TV v, T1 ARG1, const char *name)
+(TV v, const char *name, T1 ARG1)
{ ... }

@@
type TV, TErr, TObj, T1;
identifier OBJ, ARG1;
@@
void visit_get_next_type
-(TV v, TObj OBJ, T1 ARG1, const char *name, TErr errp)
+(TV v, const char *name, TObj OBJ, T1 ARG1, TErr errp)
{ ... }

@@
type TV, TErr, TObj, T1, T2;
identifier OBJ, ARG1, ARG2;
@@
void visit_type_enum
-(TV v, TObj OBJ, T1 ARG1, T2 ARG2, const char *name, TErr errp)
+(TV v, const char *name, TObj OBJ, T1 ARG1, T2 ARG2, TErr errp)
{ ... }

@@
type TV, TErr, TObj;
identifier OBJ;
identifier VISIT_TYPE =~ "^visit_type_";
@@
void VISIT_TYPE
-(TV v, TObj OBJ, const char *name, TErr errp)
+(TV v, const char *name, TObj OBJ, TErr errp)
{ ... }

// Part 2: swap caller order
@@
expression V, NAME, OBJ, ARG1, ARG2, ERR;
identifier VISIT_TYPE =~ "^visit_type_";
@@
(
-visit_start_struct(V, OBJ, ARG1, NAME, ARG2, ERR)
+visit_start_struct(V, NAME, OBJ, ARG1, ARG2, ERR)
|
-visit_optional(V, ARG1, NAME)
+visit_optional(V, NAME, ARG1)
|
-visit_get_next_type(V, OBJ, ARG1, NAME, ERR)
+visit_get_next_type(V, NAME, OBJ, ARG1, ERR)
|
-visit_type_enum(V, OBJ, ARG1, ARG2, NAME, ERR)
+visit_type_enum(V, NAME, OBJ, ARG1, ARG2, ERR)
|
-VISIT_TYPE(V, OBJ, NAME, ERR)
+VISIT_TYPE(V, NAME, OBJ, ERR)
)

Backports commit 51e72bc1dd6ace6e91d675f41a1f09bd00ab8043 from qemu
2018-02-19 22:45:07 -05:00
Eric Blake 5b6f0cbdb7
qom: Use typedef for Visitor
No need to repeat 'struct Visitor' when we already have it in
typedefs.h. Omitting the redundant 'struct' also makes a later
patch easier to search for all object property callbacks that
are associated with a Visitor.

Backports commit 4fa45492c3387c0fa51e8e81160ac9a7814f44a2 from qemu
2018-02-19 22:26:47 -05:00
Eric Blake 7e83274012
qapi-visit: Kill unused visit_end_union()
The generated code can call visit_end_union() without having called
visit_start_union(). Example:

    if (!*obj) {
        goto out_obj;
    }
    visit_type_CpuInfoBase_fields(v, (CpuInfoBase **)obj, &err);
    if (err) {
        goto out_obj; // if we go from here...
    }
    if (!visit_start_union(v, !!(*obj)->u.data, &err) || err) {
        goto out_obj;
    }
    switch ((*obj)->arch) {
    [...]
    }
out_obj:
    // ... then *obj is true, and ...
    error_propagate(errp, err);
    err = NULL;
    if (*obj) {
        // we end up here
        visit_end_union(v, !!(*obj)->u.data, &err);
    }
    error_propagate(errp, err);

Harmless only because no visitor implements end_union(). Clean it up
anyway, by deleting the function as useless.

Messed up since we have visit_end_union (commit cee2ded).

Backports commit 7c91aabd8964cfdf637f302c579c95401f21ce92 from qemu
2018-02-19 22:22:24 -05:00
Eric Blake 994490d197
qapi: Shorter visits of optional fields
For less code, reflect the determined boolean value of an optional
visit back to the caller instead of making the caller read the
boolean after the fact.

The resulting generated code has the following diff:

|-    visit_optional(v, &has_fdset_id, "fdset-id");
|-    if (has_fdset_id) {
|+    if (visit_optional(v, &has_fdset_id, "fdset-id")) {
|         visit_type_int(v, &fdset_id, "fdset-id", &err);
|         if (err) {
|             goto out;
|         }
|     }

Backports commit 29637a6ee913df8fcdf371426ee48956b945b618 from qemu
2018-02-19 22:03:23 -05:00
Eric Blake f3d2380f6d
qapi: Simplify visits of optional fields
None of the visitor callbacks would set an error when testing
if an optional field was present; make this part of the interface
contract by eliminating the errp argument.

The resulting generated code has a nice diff:

|-    visit_optional(v, &has_fdset_id, "fdset-id", &err);
|-    if (err) {
|-        goto out;
|-    }
|+    visit_optional(v, &has_fdset_id, "fdset-id");
|     if (has_fdset_id) {
|         visit_type_int(v, &fdset_id, "fdset-id", &err);
|         if (err) {
|             goto out;
|         }
|     }

Backports commit 5cdc8831a795fb8452d7e34f644202fd724e122a from qemu
2018-02-19 22:01:27 -05:00
Eric Blake 65d58b543e
qapi: Fix alternates that accept 'number' but not 'int'
The QMP input visitor allows integral values to be assigned by
promotion to a QTYPE_QFLOAT. However, when parsing an alternate,
we did not take this into account, such that an alternate that
accepts 'number' and some other type, but not 'int', would reject
integral values.

With this patch, we now have the following desirable table:

alternate has          case selected for
'int'   'number'       QTYPE_QINT    QTYPE_QFLOAT
no      no             error         error
no      yes            'number'      'number'
yes     no             'int'         error
yes     yes            'int'         'number'

While it is unlikely that we will ever use 'number' in an
alternate other than in the testsuite, it never hurts to be
more precise in what we allow.

Backports commit d00341af384665d259af475b14c96bb8414df415 from qemu
2018-02-19 21:58:10 -05:00
Eric Blake 2ee6c960ee
qapi: Simplify visiting of alternate types
Previously, working with alternates required two lookup arrays
and some indirection: for type Foo, we created Foo_qtypes[]
which maps each qtype to a value of the generated FooKind enum,
then look up that value in FooKind_lookup[] like we do for other
union types.

This has a couple of subtle bugs. First, the generator was
creating a call with a parameter '(int *) &(*obj)->type' where
type is an enum type; this is unsafe if the compiler chooses
to store the enum type in a different size than int, where
assigning through the wrong size pointer can corrupt data or
cause a SIGBUS.

A related bug, not fixed in this patch: qapi-visit.py's
gen_visit_enum() generates a cast of its enum * argument to
int *. Marked FIXME.

Second, since the values of the FooKind enum start at zero, all
entries of the Foo_qtypes[] array that were not explicitly
initialized will map to the same branch of the union as the
first member of the alternate, rather than triggering a desired
failure in visit_get_next_type(). Fortunately, the bug seldom
bites; the very next thing the input visitor does is try to
parse the incoming JSON with the wrong parser, which normally
fails; the output visitor is not used with a C struct in that
state, and the dealloc visitor has nothing to clean up (so
there is no leak).

However, the second bug IS observable in one case: parsing an
integer causes unusual behavior in an alternate that contains
at least a 'number' member but no 'int' member, because the
'number' parser accepts QTYPE_QINT in addition to the expected
QTYPE_QFLOAT (that is, since 'int' is not a member, the type
QTYPE_QINT accidentally maps to FooKind 0; if this enum value
is the 'number' branch the integer parses successfully, but if
the 'number' branch is not first, some other branch tries to
parse the integer and rejects it). A later patch will worry
about fixing alternates to always parse all inputs that a
non-alternate 'number' would accept; for now this is still
marked FIXME in the updated test-qmp-input-visitor.c, to
merely point out that new undesired behavior of 'ans' matches
the existing undesired behavior of 'asn'.

This patch fixes the default-initialization bug by deleting the
indirection, and modifying get_next_type() to directly assign a
QTypeCode parameter. This in turn fixes the type-casting bug,
as we are no longer casting a pointer to enum to a questionable
size. There is no longer a need to generate an implicit FooKind
enum associated with the alternate type (since the QMP wire
format never uses the stringized counterparts of the C union
member names). Since the updated visit_get_next_type() does not
know which qtypes are expected, the generated visitor is
modified to generate an error statement if an unexpected type is
encountered.

Callers now have to know the QTYPE_* mapping when looking at the
discriminator; but so far, only the testsuite was even using the
C struct of an alternate type. I considered the possibility of
keeping the internal enum FooKind, but initialized differently
than most generated arrays, as in:
    typedef enum FooKind {
        FOO_KIND_A = QTYPE_QDICT,
        FOO_KIND_B = QTYPE_QINT,
    } FooKind;
to create nicer aliases for knowing when to use foo->a or foo->b
when inspecting foo->type; but it turned out to add too much
complexity, especially without a client.

There is a user-visible side effect to this change, but I
consider it to be an improvement. Previously,
the invalid QMP command:
{"execute":"blockdev-add", "arguments":{"options":
{"driver":"raw", "id":"a", "file":true}}}
failed with:
{"error": {"class": "GenericError",
"desc": "Invalid parameter type for 'file', expected: QDict"}}
(visit_get_next_type() succeeded, and the error comes from the
visit_type_BlockdevOptions() expecting {}; there is no mention of
the fact that a string would also work). Now it fails with:
{"error": {"class": "GenericError",
"desc": "Invalid parameter type for 'file', expected: BlockdevRef"}}
(the error when the next type doesn't match any expected types for
the overall alternate).

Backports commit 0426d53c6530606bf7641b83f2b755fe61c280ee from qemu
2018-02-19 21:52:39 -05:00
Eric Blake e9666e4455
qapi: Convert QType into QAPI built-in enum type
What's more meta than using qapi to define qapi? :)

Convert QType into a full-fledged[*] builtin qapi enum type, so
that a subsequent patch can then use it as the discriminator
type of qapi alternate types. Fortunately, the judicious use of
'prefix' in the qapi definition avoids churn to the spelling of
the enum constants.

To avoid circular definitions, we have to flip the order of
inclusion between "qobject.h" vs. "qapi-types.h". Back in commit
28770e0, we had the latter include the former, so that we could
use 'QObject *' for our implementation of 'any'. But that usage
also works with only a forward declaration, whereas the
definition of QObject requires QType to be a complete type.

[*] The type has to be builtin, rather than declared in
qapi/common.json, because we want to use it for alternates even
when common.json is not included. But since it is the first
builtin enum type, we have to add special cases to qapi-types
and qapi-visit to only emit definitions once, even when two
qapi files are being compiled into the same binary (the way we
already handled builtin list types like 'intList'). We may
need to revisit how multiple qapi files share common types,
but that's a project for another day.

Backports commit 7264f5c50cc1be0f1406e3ebb45aedcca02f603a from qemu
2018-02-19 21:47:05 -05:00
Eric Blake 805c803298
qobject: Rename qtype_code to QType
The name QType matches our CODING_STYLE conventions for type names
in CamelCase. It also matches the fact that we are already naming
all the enum members with a prefix of QTYPE, not QTYPE_CODE. And
doing the rename will also make it easier for the next patch to use
QAPI for providing the enum, which also wants CamelCase type names.

Backports commit 1310a3d3bd9301ff5a825287638cfab24c2c6689 from qemu
2018-02-19 21:41:52 -05:00
Eric Blake cc1d62568e
qobject: Simplify QObject
The QObject hierarchy is small enough, and unlikely to grow further
(since we only use it to map to JSON and already cover all JSON
types), that we can simplify things by not tracking a separate
vtable, but just inline the code element of the vtable QType
directly into QObject (renamed to type), and track a separate array
of destroy functions. We can drop qnull_destroy_obj() in the
process.

The remaining QObject subclasses must export their destructor.

This also has the nice benefit of moving the typename 'QType'
out of the way, so that the next patch can repurpose it for a
nicer name for 'qtype_code'.

The various objects are still the same size (so no change in cache
line pressure), but now have less indirection (although I didn't
bother benchmarking to see if there is a noticeable speedup, as
we don't have hard evidence that this was in a performance hotspot
in the first place).

A future patch could drop the refcnt size to 32 bits for a smaller
struct on 64-bit architectures, if desired (we have limits on the
largest JSON that we are willing to parse, and will probably never
need to take full advantage of a 64-bit refcnt).

Backports commit 55e1819c509b3d9c10a54678b9c585bbda13889e from qemu
2018-02-19 21:37:48 -05:00
Markus Armbruster 105a6be9b0
qobject: Add a special null QObject
I'm going to fix the JSON parser to recognize null. The obvious
representation of JSON null as (QObject *)NULL doesn't work, because
the parser already uses it as an error value. Perhaps we should
change it to free NULL for null, but that's more than I can do right
now. Create a special null QObject instead.

The existing QDict, QList, and QString all represent something that
is a pointer in C and could therefore be associated with NULL. But
right now, all three of these sub-types are always non-null once
created, so the new null sentinel object is intentionally unrelated
to them.
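
A sketch of the shape this takes, assuming a statically allocated singleton
and the usual QObject reference counting:

extern QObject qnull_;            /* the statically allocated null object */

static inline QObject *qnull(void)
{
    qobject_incref(&qnull_);      /* hand out a counted reference */
    return &qnull_;
}

Callers can then compare against the null sentinel (or check its type)
instead of overloading (QObject *)NULL.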

Backports commit 481b002cc81ed7fc7b06e32e9d4d495d81739d14 from qemu
2018-02-19 21:25:58 -05:00
Eric Blake 0ae71ac202
qapi: Change munging of CamelCase enum values
When munging enum values, the fact that we were passing the entire
prefix + value through camel_to_upper() meant that enum values
spelled with CamelCase could be turned into CAMEL_CASE. However,
this provides a potential collision (both OneTwo and One-Two would
munge into ONE_TWO) for enum types, when the same two names are
valid side-by-side as QAPI member names. By changing the generation
of enum constants to always be prefix + '_' + c_name(value,
False).upper(), and ensuring that there are no case collisions (in
the next patches), we no longer have to worry about names that
would be distinct as QAPI members but collide as variant tag names,
without having to think about what munging the heuristics in
camel_to_upper() will actually perform on an enum value.

Making the change will affect enums that did not follow coding
conventions, using 'CamelCase' rather than desired 'lower-case'.

Thankfully, there are only two culprits: InputButton and ErrorClass.
We already tweaked ErrorClass to make it an alias of QapiErrorClass,
where only the alias needs changing rather than the whole tree. So
the bulk of this change is modifying INPUT_BUTTON_WHEEL_UP to the
new INPUT_BUTTON_WHEELUP (and likewise for WHEELDOWN). That part
of this commit may later need reverting if we rename the enum
constants from 'WheelUp' to 'wheel-up' as part of moving
x-input-send-event to a stable interface; but at least we have
documentation bread crumbs in place to remind us (commit 513e7cd),
and it matches the fact that SDL constants are also spelled
SDL_BUTTON_WHEELUP.

Backports commit d20a580bc0eac9d489884f6d2ed28105880532b6 from qemu
2018-02-19 20:40:15 -05:00
Eric Blake 1bda2b186b
qapi: Add alias for ErrorClass
The qapi enum ErrorClass is unusual in that it uses 'CamelCase' names,
contrary to our documented convention of preferring 'lower-case'.
However, this enum is entrenched in the API; we cannot change
what strings QMP outputs. Meanwhile, we want to simplify how
c_enum_const() is used to generate enum constants, by moving away
from the heuristics of camel_to_upper() to a more straightforward
c_name(N).upper() - but doing so will rename all of the ErrorClass
constants and cause churn to all client files, where the new names
are aesthetically less pleasing (ERROR_CLASS_DEVICENOTFOUND looks
like we can't make up our minds on whether to break between words).

So as always in computer science, solve the problem by some more
indirection: rename the qapi type to QapiErrorClass, and add a
new enum ErrorClass in error.h whose members are aliases of the
qapi type, but with the spelling expected elsewhere in the tree.
Then, when c_enum_const() changes the munging, we only have to
adjust the one alias spot.

Backports commit f22a28b898322c01b0463a8b7ec551d72bc61a5b from qemu
2018-02-19 20:38:51 -05:00
Eric Blake a5cbe099d7
qapi: Remove dead visitor code
Commit cbc95538 removed unused start_handle() and end_handle(),
but forgot to remove their declarations.

Backports commit 7549457200ec3871ee827765f4d3bbc8d903b2ec from qemu
2018-02-19 20:32:00 -05:00
Markus Armbruster f93438ba43
qapi: Introduce a first class 'any' type
It's first class, because unlike '**', it actually works, i.e. doesn't
require 'gen': false.

'**' will go away next.

Backports commit 28770e057f265a4e70bcbdfc2447cce7b5f2dc19 from qemu
2018-02-19 17:46:58 -05:00
Daniel P. Berrange 767e900547
qom: Make enum string tables const-correct
The enum string table parameters in various QOM/QAPI methods
are declared 'const char *strings[]'. This results in const
warnings if passed a variable that was declared as

   static const char * const strings[] = { .... };

Add the extra const annotation to the parameters, since
neither the string elements, nor the array itself should
ever be modified.

Backports commit 2e4450ff432daef524cb3557fca68a3b7b5c7823 from qemu
2018-02-19 16:02:23 -05:00
Eric Blake 3aba81d5aa
qapi: Drop unused error argument for list and implicit struct
No backend was setting an error when ending the visit of a list or
implicit struct, or when moving to the next list node. Make the
callers a bit easier to follow by making this a part of the contract,
and removing the errp argument - callers can then unconditionally end
an object as part of cleanup without having to think about whether a
second error is dominated by a first, because there is no second
error.

A later patch will then tackle the larger task of splitting
visit_end_struct(), which can indeed set an error.

Backports commit 08f9541dec51700abef0c37994213164ca4e4fc9 from qemu
2018-02-19 12:59:54 -05:00
Eric Blake eeffd97458
qapi: Make all visitors supply uint64 callbacks
Our qapi visitor contract supports multiple integer visitors,
but left the type_uint64 visitor as optional (falling back on
type_int64); which in turn can lead to awkward behavior with
numbers larger than INT64_MAX (the user has to be aware of
twos complement, and deal with negatives).

This patch does not address the disparity in handling large
values as negatives. It merely moves the fallback from uint64
to int64 from the visitor core to the visitors, where the issue
can actually be fixed, by implementing the missing type_uint64()
callbacks on top of the respective type_int64() callbacks, and
with a FIXME comment explaining why that's wrong.

With that done, we now have a type_uint64() callback in every
driver, so we can make it mandatory from the core. And although
the type_int64() callback can cover the entire valid range of
type_uint{8,16,32} on valid user input, using type_uint64() to
avoid mixed signedness makes more sense.

Backports commit f755dea79dc81b0d6a8f6414e0672e165e28d8ba from qemu
2018-02-19 11:59:22 -05:00
Eric Blake 5b5299bdee
qapi: Prefer type_int64 over type_int in visitors
The qapi builtin type 'int' is basically shorthand for the type
'int64'. In fact, since no visitor was providing the optional
type_int64() callback, visit_type_int64() was just always falling
back to type_int(), cementing the equivalence between the types.

However, some visitors are providing a type_uint64() callback.
For purposes of code consistency, it is nicer if all visitors
use the paired type_int64/type_uint64 names rather than the
mismatched type_int/type_uint64. So this patch just renames
the signed int callbacks in place, dropping the type_int()
callback as redundant, and a later patch will focus on the
unsigned int callbacks.

Add some FIXMEs to questionable reuse of errp in code touched
by the rename, while at it (the reuse works as long as the
callbacks don't modify value when setting an error, but it's not
a good example to set) - a later patch will then fix those.

No change in functionality here, although further cleanups are
in the pipeline.

Backports commit 4c40314a35816de635e7170eaacdc0c35be83a8a from qemu
2018-02-19 11:53:21 -05:00
Peter Maydell c8220d5aaf
fpu: Replace uint8 typedef with uint8_t
Replace the uint8 softfloat-specific typedef with uint8_t.
This change was made with

find include hw fpu target-* -name '*.[ch]' | xargs sed -i -e 's/\buint8\b/uint8_t/g'

together with manual removal of the typedef definition and
manual fixing of more erroneous uses found via test compilation.

It turns out that the only code using this type is an accidental
use where uint8_t was intended anyway...

Backports commit d341d9f3062c74d74c94ebe6359f067bed8311ba from qemu
2018-02-19 00:34:41 -05:00
Peter Maydell 2eaf79bfd3
fpu: Replace int8 typedef with int8_t
Replace the int8 softfloat-specific typedef with int8_t.
This change was made with

find include hw fpu target-* -name '*.[ch]' | xargs sed -i -e 's/\bint8\b/int8_t/g'

together with manual removal of the typedef definition, and
manual undoing of various mis-hits.

Backports commit 8f506c709adb7d3bed4ebefefe9487c156192a64 from qemu
2018-02-19 00:33:01 -05:00
Peter Maydell 63ac282f1a
fpu: Replace uint32 typedef with uint32_t
Replace the uint32 softfloat-specific typedef with uint32_t.
This change was made with

find include hw fpu target-* -name '*.[ch]' | xargs sed -i -e 's/\buint32\b/uint32_t/g'

together with manual removal of the typedef definition,
manual undoing of various mis-hits, and another couple of
fixes found via test compilation.

All the uses in hw/ were using the wrong type by mistake.

Backports commit 3a87d00910ef64a2eece4aad25d96ea10683fc5c from qemu
2018-02-19 00:29:52 -05:00
Peter Maydell 9712d8a7ac
fpu: Replace int32 typedef with int32_t
Replace the int32 softfloat-specific typedef with int32_t.
This change was made with

find hw include fpu target-* -name '*.[ch]' | xargs sed -i -e 's/\bint32\b/int32_t/g'

together with manual removal of the typedef definition, and
manual undoing of some mis-hits where macro arguments were
being used for token pasting rather than as a type.

The uses in hw/ipmi/ should not have been using this type at all.

Backports commit f4014512cda682a9d0c75310d278d7ae96b0505c from qemu
2018-02-19 00:24:56 -05:00
Peter Maydell df84f0d513
fpu: Replace uint64 typedef with uint64_t
Replace the uint64 softfloat-specific typedef with uint64_t.
This change was made with

find include fpu target-* -name '*.[ch]' | xargs sed -i -e 's/\buint64\b/uint64_t/g'

together with manual removal of the typedef definition, and
manual undoing of some mis-hits where macro arguments were
being used for token pasting rather than as a type.

Note that the target-mips/kvm.c and target-s390x/kvm.c changes are fixing
code that should not have been using the uint64 type in the first place.

Backports commit 182f42fdc219e6481654fcfb73b17e4b4e63b6ff from qemu
2018-02-19 00:16:49 -05:00
Peter Maydell 940106cd69
fpu: Replace int64 typedef with int64_t
Replace the int64 softfloat-specific typedef with int64_t.
This change was made with

find include fpu target-* -name '*.[ch]' | xargs sed -i -e 's/\bint64\b/int64_t/g'

together with manual removal of the typedef definition, and
manual undoing of some mis-hits where macro arguments were
being used for token pasting rather than as a type.

Backports commit f42c222482b651400f0fa417eb174da1c9502c1c from qemu
2018-02-19 00:14:18 -05:00
Peter Crosthwaite ce997e1caf
qom/cpu: Add MemoryRegion property
Add a MemoryRegion property, which if set is used to construct
the CPU's initial (default) AddressSpace.

Backports commit 6731d864f80938e404dc3e5eb7f6b76b891e3e43 from qemu
2018-02-18 21:54:50 -05:00
Lioncash 6d5f465449
uc: Handle freeing of multiple address spaces 2018-02-18 21:36:50 -05:00
Dr. David Alan Gilbert 75701d03ee
qemu_ram_foreach_block: pass up error value, and down the ramblock name
Check the return value of the function it calls and error out if it's non-zero.
Fix up qemu_rdma_init_one_block, the only current caller,
and rdma_add_block, the only function it calls using it.

Pass the name of the ramblock to the function; this helps in debugging.

Backports commit e3807054e20fb3b94d18cb751c437ee2f43b6fac from qemu
2018-02-18 19:17:18 -05:00
Peter Crosthwaite b82e711a65
memory: Add address_space_init_shareable()
This will either create a new AS or return a pointer to an
already existing equivalent one, if we have already created
an AS for the specified root memory region.

The motivation is to reuse address spaces as much as possible.
It's going to be quite common that bus masters out in device land
have pointers to the same memory region for their mastering yet
each will need to create its own address space. Let the memory
API implement sharing for them.

Aside from the perf optimisations, this should reduce the amount
of redundant output on info mtree as well.

The returned value will be malloced, but the malloc will be
automatically freed when the AS runs out of refs.
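
A hypothetical pair of bus-master call sites, assuming a (root region, name)
signature as described; sysmem stands for some shared root MemoryRegion:

/* Both masters use the same root region, so they get the same AS back. */
AddressSpace *as_a = address_space_init_shareable(sysmem, "bus-master-a");
AddressSpace *as_b = address_space_init_shareable(sysmem, "bus-master-b");
/* as_a == as_b; the shared AS is reference counted and goes away with
 * its last user. */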

Backports commit f0c02d15b57da6f5463e3768aa0cfeedccf4b8f4 from qemu
2018-02-18 00:18:21 -05:00
Peter Maydell 1dfba71bef
exec.c: Add cpu_get_address_space()
Add a function to return the AddressSpace for a CPU based on
its numerical index. (Callers outside exec.c don't have access
to the CPUAddressSpace struct so can't just fish it out of the
CPUState struct directly.)

Backports commit 651a5bc03705102de519ebf079a40ecc1da991db from qemu
2018-02-17 23:22:23 -05:00
Peter Maydell 2fe995a0da
exec.c: Pass MemTxAttrs to iotlb_to_region so it uses the right AS
Pass the MemTxAttrs for the memory access to iotlb_to_region(); this
allows it to determine the correct AddressSpace to use for the lookup.

Backports commit a54c87b68a0410d0cf6f8b84e42074a5cf463732 from qemu
2018-02-17 23:19:00 -05:00
Peter Maydell 8edd6ffdfd
cputlb.c: Use correct address space when looking up MemoryRegionSection
When looking up the MemoryRegionSection for the new TLB entry in
tlb_set_page_with_attrs(), use cpu_asidx_from_attrs() to determine
the correct address space index for the lookup, and pass it into
address_space_translate_for_iotlb().

Backports commit d7898cda81b6efa6b2d7a749882695cdcf280eaa from qemu
2018-02-17 23:15:22 -05:00
Peter Maydell d23831f4dd
cpu: Add new asidx_from_attrs() method
Add a new method to CPUClass which the memory system core can
use to obtain the correct address space index to use for a memory
access with a given set of transaction attributes, together
with the wrapper function cpu_asidx_from_attrs() which implements
the default behaviour ("always use asidx 0") for CPU classes
which don't provide the method.
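
A sketch of the wrapper's default behaviour, assuming the hook lives in
CPUClass:

static inline int cpu_asidx_from_attrs(CPUState *cpu, MemTxAttrs attrs)
{
    CPUClass *cc = CPU_GET_CLASS(cpu);

    /* Default behaviour: address space index 0 unless the class overrides. */
    return cc->asidx_from_attrs ? cc->asidx_from_attrs(cpu, attrs) : 0;
}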

Backports commit d7f25a9e6a6b2c69a0be6033903b7d6087bcf47d from qemu
2018-02-17 22:45:32 -05:00
Lioncash 1cc4b92c67
cpu: Add new get_phys_page_attrs_debug() method
Add a new optional method get_phys_page_attrs_debug() to CPUClass.
This is like the existing get_phys_page_debug(), but also returns
the memory transaction attributes to use for the access.
This will be necessary for CPUs which have multiple address
spaces and use the attributes to select the correct address
space.

We provide a wrapper function cpu_get_phys_page_attrs_debug()
which falls back to the existing get_phys_page_debug(), so we
don't need to change every target CPU.
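
A sketch of the described fallback (the attribute value used on the legacy
path is an assumption):

static inline hwaddr cpu_get_phys_page_attrs_debug(CPUState *cpu, vaddr addr,
                                                   MemTxAttrs *attrs)
{
    CPUClass *cc = CPU_GET_CLASS(cpu);

    if (cc->get_phys_page_attrs_debug) {
        return cc->get_phys_page_attrs_debug(cpu, addr, attrs);
    }
    /* Fall back to the legacy hook and report "unspecified" attributes. */
    *attrs = MEMTXATTRS_UNSPECIFIED;
    return cc->get_phys_page_debug(cpu, addr);
}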

Backports commit 1dc6fb1f5cc5cea5ba01010a19c6acefd0ae4b73 from qemu
2018-02-17 22:43:42 -05:00
Peter Maydell 90c7c1bdb5
exec-all.h: Document tlb_set_page_with_attrs, tlb_set_page
Add documentation comments for tlb_set_page_with_attrs()
and tlb_set_page().

Backports commit 1787cc8ee55143b6071c87e59f08d56e7c22c1eb from qemu
2018-02-17 22:37:58 -05:00
Peter Maydell 51369b67cd
exec.c: Allow target CPUs to define multiple AddressSpaces
Allow multiple calls to cpu_address_space_init(); each
call adds an entry to the cpu->ases array at the specified
index. It is up to the target-specific CPU code to actually use
these extra address spaces.

Since this multiple AddressSpace support won't work with
KVM, add an assertion to avoid confusing failures.
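
A hypothetical target-CPU init sequence, assuming registration takes
(cpu, address space, index); the normal_as/secure_as fields are made up
for illustration:

/* Index 0 remains the default view; index 1 is an extra, e.g. secure, view. */
cpu_address_space_init(cpu, &cpu->normal_as, 0);
cpu_address_space_init(cpu, &cpu->secure_as, 1);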

Backports commit 12ebc9a76dd7702aef0a3618717a826c19c34ef4 from qemu
2018-02-17 22:35:13 -05:00
Peter Maydell f1b237236c
exec.c: Don't set cpu->as until cpu_address_space_init
Rather than setting cpu->as unconditionally in cpu_exec_init
(and then having target-i386 override this later), don't set
it until the first call to cpu_address_space_init.

This requires us to initialise the address space for
both TCG and KVM (KVM doesn't need the AS listener but
it does require cpu->as to be set).

For target CPUs which don't set up any address spaces (currently
everything except i386), add the default address_space_memory
in qemu_init_vcpu().

Backports commit 56943e8cc14b7eeeab67d1942fa5d8bcafe3e53f from qemu
2018-02-17 22:24:36 -05:00