Commit graph

1840 commits

Eric Blake 85af4b2030
qapi: Add new visit_complete() function
Making each output visitor provide its own output collection
function was the only remaining reason for exposing visitor
sub-types to the rest of the code base. Add a polymorphic
visit_complete() function which is a no-op for input visitors,
and which populates an opaque pointer for output visitors. For
maximum type-safety, also add a parameter to the output visitor
constructors with a type-correct version of the output pointer,
and assert that the two uses match.

This approach was considered superior to either passing the
output parameter only during construction (action at a distance
during visit_free() feels awkward) or only during visit_complete()
(defeating type safety makes it easier to use incorrectly).

Most callers were function-local, and therefore a mechanical
conversion; the testsuite was a bit trickier, but the previous
cleanup patch minimized the churn here.

The visit_complete() function may be called at most once; doing
so lets us use transfer semantics rather than duplication or
ref-count semantics to get the just-built output back to the
caller, even though it means our behavior is not idempotent.
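
As an illustration (not taken from the patch; SomeType and val are placeholders),
a typical caller now follows this pattern:

  Error *err = NULL;
  QObject *obj;
  Visitor *v = qmp_output_visitor_new(&obj);  /* type-correct output pointer */

  visit_type_SomeType(v, NULL, &val, &err);
  if (!err) {
      visit_complete(v, &obj);                /* at most once; transfers the result */
  }
  visit_free(v);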

Generated code is simplified as follows for events:

|@@ -26,7 +26,7 @@ void qapi_event_send_acpi_device_ost(ACP
| QDict *qmp;
| Error *err = NULL;
| QMPEventFuncEmit emit;
|- QmpOutputVisitor *qov;
|+ QObject *obj;
| Visitor *v;
| q_obj_ACPI_DEVICE_OST_arg param = {
| info
|@@ -39,8 +39,7 @@ void qapi_event_send_acpi_device_ost(ACP
|
| qmp = qmp_event_build_dict("ACPI_DEVICE_OST");
|
|- qov = qmp_output_visitor_new();
|- v = qmp_output_get_visitor(qov);
|+ v = qmp_output_visitor_new(&obj);
|
| visit_start_struct(v, "ACPI_DEVICE_OST", NULL, 0, &err);
| if (err) {
|@@ -55,7 +54,8 @@ void qapi_event_send_acpi_device_ost(ACP
| goto out;
| }
|
|- qdict_put_obj(qmp, "data", qmp_output_get_qobject(qov));
|+ visit_complete(v, &obj);
|+ qdict_put_obj(qmp, "data", obj);
| emit(QAPI_EVENT_ACPI_DEVICE_OST, qmp, &err);

and for commands:

| {
| Error *err = NULL;
|- QmpOutputVisitor *qov = qmp_output_visitor_new();
| Visitor *v;
|
|- v = qmp_output_get_visitor(qov);
|+ v = qmp_output_visitor_new(ret_out);
| visit_type_AddfdInfo(v, "unused", &ret_in, &err);
|- if (err) {
|- goto out;
|+ if (!err) {
|+ visit_complete(v, ret_out);
| }
|- *ret_out = qmp_output_get_qobject(qov);
|-
|-out:
| error_propagate(errp, err);

Backports commit 3b098d56979d2f7fd707c5be85555d114353a28d from qemu
2018-02-25 01:20:03 -05:00
Eric Blake ec53301cda
qmp-output-visitor: Favor new visit_free() function
Now that we have a polymorphic visit_free(), we no longer need
qmp_output_visitor_cleanup(); however, we still need to
expose the subtype for qmp_output_get_qobject().

Backports commit 1830f22a6777cedaccd67a08f675d30f7a85ebfd from qemu
2018-02-25 01:12:27 -05:00
Eric Blake f008d93ac0
qmp-input-visitor: Favor new visit_free() function
Now that we have a polymorphic visit_free(), we no longer need
qmp_input_visitor_cleanup(); which in turn means we no longer
need to return a subtype from qmp_input_visitor_new() nor a
public upcast function.

Generated code changes to qmp-marshal.c look like:

|@@ -52,11 +52,10 @@ void qmp_marshal_add_fd(QDict *args, QOb
| {
| Error *err = NULL;
| AddfdInfo *retval;
|- QmpInputVisitor *qiv = qmp_input_visitor_new(QOBJECT(args), true);
| Visitor *v;
| q_obj_add_fd_arg arg = {0};
|
|- v = qmp_input_get_visitor(qiv);
|+ v = qmp_input_visitor_new(QOBJECT(args), true);
| visit_start_struct(v, NULL, NULL, 0, &err);
| if (err) {
| goto out;

Backports commit b70ce1018a251c0c33498d9c927a07cade655a5e from qemu
2018-02-25 01:10:53 -05:00
Eric Blake e88a7e260b
string-input-visitor: Favor new visit_free() function
Now that we have a polymorphic visit_free(), we no longer need
string_input_visitor_cleanup(); which in turn means we no longer
need to return a subtype from string_input_visitor_new() nor a
public upcast function.

Backports commit 7a0525c7be6b38d32d586e3fd12e7377ded21faa from qemu
2018-02-25 01:08:04 -05:00
Eric Blake 7f741a6c9b
qapi: Add new visit_free() function
Making each visitor provide its own (awkwardly-named) FOO_cleanup()
is unusual, when we can instead have a polymorphic visit_free()
interface. Over the next few patches, we can use the polymorphic
functions to eliminate the need for a FOO_get_visitor() function
for accessing specific visitor functionality, once everything can
be accessed directly through the Visitor* interfaces.

The dealloc visitor is the first one converted to completely use
the new entry point, since qapi_dealloc_visitor_cleanup() was the
only reason that qapi_dealloc_get_visitor() existed, and only
generated and testsuite code was even using it. With the new
visit_free() entry point in place, we no longer need to expose
the QapiDeallocVisitor subtype through qapi_dealloc_visitor_new(),
and can get by with less generated code, with diffs that look like:

| void qapi_free_ACPIOSTInfo(ACPIOSTInfo *obj)
| {
|- QapiDeallocVisitor *qdv;
| Visitor *v;
|
| if (!obj) {
| return;
| }
|
|- qdv = qapi_dealloc_visitor_new();
|- v = qapi_dealloc_get_visitor(qdv);
|+ v = qapi_dealloc_visitor_new();
| visit_type_ACPIOSTInfo(v, NULL, &obj, NULL);
|- qapi_dealloc_visitor_cleanup(qdv);
|+ visit_free(v);
|}

Backports commit 2c0ef9f411ae6081efa9eca5b3eab2dbeee45a6c from qemu
2018-02-25 01:05:41 -05:00
Eric Blake 37ae4dfdfd
qapi: Add parameter to visit_end_*
Rather than making the dealloc visitor track a stack of pointers
remembered during visit_start_* in order to free them during
visit_end_*, it's a lot easier to just make all callers pass the
same pointer to visit_end_*. The generated code has access to the
same pointer, while all other users are doing virtual walks and
can pass NULL. The dealloc visitor is then greatly simplified.

All three visit_end_*() functions intentionally take a void**,
even though the visit_start_*() functions differ between void**,
GenericList**, and GenericAlternate**. This is done for several
reasons: when doing a virtual walk, passing NULL doesn't care
what the type is, but when doing a generated walk, we already
have to cast the caller's specific FOO* to call visit_start,
while using void** lets us use visit_end without a cast. Also,
an upcoming patch will add a clone visitor that wants to use
the same implementation for all three visit_end callbacks,
which is made easier if all three share the same signature.

For visitors that already track per-object state (the QMP visitors
via a stack, and the string visitors which do not allow nesting),
add an assertion that the caller is indeed passing the same
pointer to paired calls.
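
Illustrative sketch (Foo is a placeholder; this is not code from the patch):

  /* generated walk: the same object pointer goes to both calls */
  visit_start_struct(v, name, (void **)obj, sizeof(Foo), &err);
  /* ... visit the members ... */
  visit_end_struct(v, (void **)obj);

  /* virtual walk: no caller-side object, so NULL is passed to both */
  visit_start_struct(v, "data", NULL, 0, &err);
  /* ... */
  visit_end_struct(v, NULL);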

Backports commit 1158bb2a058fcdd0c8fc3e60dc77f7a57ddbb271 from qemu
2018-02-25 00:57:54 -05:00
Changlong Xie 2ca07642f1
qom: Fix comment typo
It's qom_unref, not qdef_unref.

Backports commit ada03a0e8423ef8950e30d216f56a9661a4070e2 from qemu
2018-02-25 00:46:15 -05:00
Markus Armbruster eeef227560
range: Replace internal representation of Range
Range represents a range as follows. Member @start is the inclusive
lower bound, member @end is the exclusive upper bound. Zero @end is
special: if @start is also zero, the range is empty, else @end is to
be interpreted as 2^64. No other empty ranges may occur.

The range [0,2^64-1] cannot be represented. If you try to create it
with range_set_bounds1(), you get the empty range instead. If you try
to create it with range_set_bounds() or range_extend(), assertions
fail. Before range_set_bounds() existed, the open-coded creation
usually got you the empty range instead. Open deathtrap.

Moreover, the code dealing with the janus-faced @end is too clever by
half.

Dumb this down to a more pedestrian representation: members @lob and
@upb are inclusive lower and upper bounds. The empty range is encoded
as @lob = 1, @upb = 0.
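
In other words, a sketch of the new layout (the actual declaration may differ
slightly):

  struct Range {
      uint64_t lob;            /* inclusive lower bound */
      uint64_t upb;            /* inclusive upper bound */
  };
  /* the empty range is encoded as lob = 1, upb = 0 */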

Backports commit 6dd726a2bf1b800289d90a84d5fcb5ce7b78a8e1 from qemu
2018-02-25 00:44:36 -05:00
Markus Armbruster 8b2a0c4ece
range: Eliminate direct Range member access
Users of struct Range mess liberally with its members, which makes
refactoring hard. Create a set of methods, and convert all users to
call them instead of accessing members. The methods have carefully
worded contracts, and use assertions to check them.

Backports commit a0efbf16604770b9d805bcf210ec29942321134f from qemu
2018-02-25 00:39:43 -05:00
Alistair Francis fbb0645fb3
bitops: Add MAKE_64BIT_MASK macro
Add a macro that creates a 64-bit value containing 'length' one bits,
shifted left by 'shift'.
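
A minimal sketch of such a macro (the backported definition may differ):

  /* 'length' ones, shifted left by 'shift'; e.g. MAKE_64BIT_MASK(4, 8) == 0xff0 */
  #define MAKE_64BIT_MASK(shift, length) \
      (((~0ULL) >> (64 - (length))) << (shift))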

Backports commit ae2923b5c20a21c6457680330506a9c13873485c from qemu
2018-02-25 00:30:39 -05:00
Peter Maydell efc6cc2b83
memory: Assert that memory_region_init_rom_device() ops aren't NULL
It doesn't make sense to pass a NULL ops argument to
memory_region_init_rom_device(), because the effect will
be that if the guest tries to write to the memory region
then QEMU will segfault. Catch the bug earlier by sanity
checking the arguments to this function, and remove the
misleading documentation that suggests that passing NULL
might be sensible.

Backports commit 39e0b03dec518254fabd2acff29548d3f1d2b754 from qemu
2018-02-25 00:29:52 -05:00
Peter Maydell 334e951ec1
memory: Provide memory_region_init_rom()
Provide a new helper function memory_region_init_rom() for memory
regions which are read-only (and unlike those created by
memory_region_init_rom_device() don't have special behaviour
for writes). This has the same behaviour as calling
memory_region_init_ram() and then memory_region_set_readonly()
(which is what we do today in boards with pure ROMs) but is a
more easily discoverable API for the purpose.
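
Illustrative usage (placeholder names, not code from the patch):

  /* new, more discoverable helper */
  memory_region_init_rom(rom, OBJECT(dev), "board.rom", rom_size, &error_fatal);

  /* equivalent to the pattern boards use today */
  memory_region_init_ram(rom, OBJECT(dev), "board.rom", rom_size, &error_fatal);
  memory_region_set_readonly(rom, true);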

Backports commit a1777f7f6462c66e1ee6e98f0d5c431bfe988aa5 from qemu
2018-02-25 00:28:17 -05:00
Alexey Kardashevskiy 7187d77cfa
memory: Add MemoryRegionIOMMUOps.notify_started/stopped callbacks
The IOMMU driver may change behavior depending on whether a notifier
client is present. In the case of POWER, this represents a change in
the visibility of the IOTLB; for other drivers, such as intel-iommu and
future AMD-Vi emulation, notifier support is not yet enabled, and this
provides the opportunity to flag that incompatibility.

Backports commit d22d8956b185c002b50a4d0883aff61f857347ef from qemu
2018-02-25 00:23:00 -05:00
Eric Blake c14d8226ab
qapi: Fix memleak in string visitors on int lists
Commit 7f8f9ef1 introduced the ability to store a list of
integers as a sorted list of ranges, but when merging ranges,
it leaks one or more ranges. It was also using range_get_last()
incorrectly within range_compare() (a range is a start/end pair,
but range_get_last() is for start/len pairs), and will also
mishandle a range ending in UINT64_MAX (remember, we document
that no range covers 2**64 bytes, but that ranges that end on
UINT64_MAX have end < begin).

The whole merge algorithm was rather complex, and included
unnecessary passes over data within glib functions, and enough
indirection to make it hard to easily plug the data leaks.
Since we are already hard-coding things to a list of ranges,
just rewrite the thing to open-code the traversal and
comparisons, by making the range_compare() helper function give
us an answer that is easier to use, at which point we avoid the
need to pass any callbacks to g_list_*(). Then by reusing
range_extend() instead of duplicating effort with range_merge(),
we cover the corner cases correctly.

Drop the now-unused range_merge() and ranges_can_merge().

Doing this lets test-string-{input,output}-visitor pass under
valgrind without leaks.

Backports commit db486cc334aafd3dbdaf107388e37fc3d6d3e171 from qemu
2018-02-25 00:20:34 -05:00
Eric Blake ef357d06bc
qapi: Simplify use of range.h
Calling our function g_list_insert_sorted_merged is a misnomer,
since we are NOT writing a glib function. Furthermore, we are
making every caller pass the same comparator function of
range_merge(): any caller that would try otherwise would break
in weird ways since our internal call to ranges_can_merge() is
hard-coded to operate only on ranges, rather than paying
attention to the caller's comparator.

Better is to fix things so that callers don't have to care about
our internal comparator, by picking a function name and updating
the parameter type away from a gratuitous use of void*, to make
it obvious that we are operating specifically on a list of ranges
and not a generic list. Plus, refactoring the code here will
make it easier to plug a memory leak in the next patch.

range_compare() is now internal only, and moves to the .c file.

Backports commit 7c47959d0cb05db43014141a156ada0b6d53a750 from qemu
2018-02-25 00:02:42 -05:00
Eric Blake 5e22c7e180
range: Create range.c for code that should not be inline
g_list_insert_sorted_merged() is rather large to be an inline
function; move it to its own file. range_merge() and
ranges_can_merge() can likewise move, as they are only used
internally. Also, it becomes obvious that the condition within
range_merge() is already satisfied by its caller, and that the
return value is not used.

The diffstat is misleading, because of the copyright boilerplate.

Backports commit fec0fc0a13ac7f1a1130433a6740cd850c3db34a from qemu
2018-02-24 23:59:13 -05:00
Eric Blake ebeb0e46f8
qapi: Fix crash on missing alternate member of QAPI struct
If a QAPI struct has a mandatory alternate member which is not
present on input, the input visitor reports an error for the
missing alternate without setting the discriminator, but the
cleanup code for the struct still tries to use the dealloc
visitor to clean up the alternate.

Commit dbf11922 changed visit_start_alternate to set *obj to NULL
when an error occurs, where it was previously left untouched.
Thus, before the patch, the dealloc visitor is blindly trying to
cleanup whatever branch corresponds to (*obj)->type == 0 (that is,
QTYPE_NONE, because *obj still pointed to zeroed memory), which
selects the default branch of the switch and sets an error, but
this second error is ignored by the way the dealloc visitor is
used; but after the patch, the attempt to switch dereferences NULL.

When cleaning up after a partial object parse, we specifically
check for !*obj after visit_start_struct() (see gen_visit_object());
doing the same for alternates fixes the crash. Enhance the testsuite
to give coverage for both missing struct and missing alternate
members.

Also add an abort - we expect visit_start_alternate() to either set an
error or to set (*obj)->type to a valid QType that corresponds to
actual user input, and QTYPE_NONE should never be reachable from valid
input. Had the abort() been in place earlier, we might have noticed
the dealloc visitor dereferencing bogus zeroed memory prior to when
commit dbf11922 forced our hand by setting *obj to NULL and causing a
fault.

Test case:

{'execute':'blockdev-add', 'arguments':{'options':{'driver':'raw'}}}

The choice of 'driver':'raw' selects a BlockdevOptionsGenericFormat
struct, which has a mandatory 'file':'BlockdevRef' in QAPI. Since
'file' is missing as a sibling of 'driver', this should report a
graceful error rather than fault. After this patch, we are back to:

{"error": {"class": "GenericError", "desc": "Parameter 'file' is missing"}}

Generated code in qapi-visit.c changes as:

|@@ -2444,6 +2444,9 @@ void visit_type_BlockdevRef(Visitor *v,
| if (err) {
| goto out;
| }
|+ if (!*obj) {
|+ goto out_obj;
|+ }
| switch ((*obj)->type) {
| case QTYPE_QDICT:
| visit_start_struct(v, name, NULL, 0, &err);
|@@ -2459,10 +2462,13 @@ void visit_type_BlockdevRef(Visitor *v,
| case QTYPE_QSTRING:
| visit_type_str(v, name, &(*obj)->u.reference, &err);
| break;
|+ case QTYPE_NONE:
|+ abort();
| default:
| error_setg(&err, QERR_INVALID_PARAMETER_TYPE, name ? name : "null",
| "BlockdevRef");
| }
|+out_obj:
| visit_end_alternate(v);

Backports commit 9b4e38fe6a35890bb1d995316d7be08de0b30ee5 from qemu
2018-02-24 23:53:29 -05:00
Aleksandar Markovic f95e0e9e98
target-mips: Add FCR31's FS bit definition
Add preprocessor definition of FCR31's FS bit, and update related
code for setting this bit.

Backports commit 77be419980114d75605811e1681115d0919cfa1a from qemu
2018-02-24 21:32:10 -05:00
Aleksandar Markovic 4a540f88de
target-mips: Implement FCR31's R/W bitmask and related functionalities
This patch implements read and write access rules for Mips floating
point control and status register (FCR31). The change can be divided
into following parts:

- Add fields that will keep FCR31's R/W bitmask in processor
definitions and processor float_status structure.

- Add appropriate value for FCR31's R/W bitmask for each supported
processor.

- Add function for setting snan_bit_is_one, and integrate it in
appropriate places.

- Modify handling of CTC1 (case 31) instruction to use FCR31's R/W
bitmask.

- Modify handling user mode executables for Mips, in relation to the
bit EF_MIPS_NAN2008 from ELF header, that is in turn related to
reading and writing to FCR31.

- Modify gdb behavior in relation to FCR31.

Backports commit 599bc5e89c46f95f86ccad0d747d041c89a28806 from qemu
2018-02-24 21:30:24 -05:00
Aleksandar Markovic 84b516d9db
target-mips: Add nan2008 flavor of <CEIL|CVT|FLOOR|ROUND|TRUNC>.<L|W>.<S|D>
New set of helpers for handling nan2008-style versions of instructions
<CEIL|CVT|FLOOR|ROUND|TRUNC>.<L|W>.<S|D>, for Mips R6.

All involved instructions have float operand and integer result. Their
core functionality is implemented via invocations of appropriate SoftFloat
functions. The problematic cases are when the operand is a NaN, and also
when the operand (float) is out of the range of the result.

Here one can distinguish three cases:

CASE MIPS-A: (FCR31.NAN2008 == 1)

1. Operand is a NaN, result should be 0;
2. Operand is larger than INT_MAX, result should be INT_MAX;
3. Operand is smaller than INT_MIN, result should be INT_MIN.

CASE MIPS-B: (FCR31.NAN2008 == 0)

1. Operand is a NaN, result should be INT_MAX;
2. Operand is larger than INT_MAX, result should be INT_MAX;
3. Operand is smaller than INT_MIN, result should be INT_MAX.

CASE SoftFloat:

1. Operand is a NaN, result is INT_MAX;
2. Operand is larger than INT_MAX, result is INT_MAX;
3. Operand is smaller than INT_MIN, result is INT_MIN.

Current implementation of <CEIL|CVT|FLOOR|ROUND|TRUNC>.<L|W>.<S|D>
implements case MIPS-B. This patch relates to case MIPS-A. For case
MIPS-A, only return value for NaN-operands should be corrected after
appropriate SoftFloat library function is called.

Related MSA instructions FTRUNC_S and FTINT_S already handle well
all cases, in the fashion similar to the code from this patch.

Backports commit 87552089b62fa229d2ff86906e4e779177fb5835 from qemu
2018-02-24 21:14:04 -05:00
Aleksandar Markovic a411a12170
target-mips: Add abs2008 flavor of <ABS|NEG>.<S|D>
Updated handling of instructions <ABS|NEG>.<S|D>. Note that legacy
(pre-abs2008) ABS and NEG instructions are arithmetic (and, therefore,
any NaN operand causes signaling invalid operation), while abs2008
ones are non-arithmetic, always and only changing the sign bit, even
for NaN-like operands. Details on these instructions are documented
in [1] p. 35 and 359.

Implementation-wise, abs2008 versions are implemented without helpers,
for simplicity and performance sake.

[1] "MIPS Architecture For Programmers Volume II-A:
The MIPS64 Instruction Set Reference Manual",
Imagination Technologies LTD, Revision 6.04, November 13, 2015

Backports commit 6be77480052b1a71557081896e7080363a8a2f95 from qemu
2018-02-24 20:45:06 -05:00
Aleksandar Markovic ef9f33a345
target-mips: Activate IEEE 754-2008 signaling NaN bit meaning for MSA
Function msa_reset() is updated so that flag snan_bit_is_one is
properly set to 0.

By applying this patch, a number of incorrect MSA behaviors that
require IEEE 754-2008 compliance will be fixed. Those are behaviors
that (up to the moment of applying this patch) did not get the desired
functionality from SoftFloat library with respect to distinguishing
between quiet and signaling NaN, getting default NaN values (both
quiet and signaling), establishing if a floating point number is NaN
or not, etc.

Two examples:

* FMAX, FMIN will now correctly detect and propagate NaNs.
* FCLASS.D and FCLASS.S will now correctly detect NaN flavors.

Backports commit 40bd6dd456e61a36e454fb9dd2cc739b67c224cf from qemu
2018-02-24 20:41:48 -05:00
Aleksandar Markovic 3e9325f1e9
softfloat: Handle snan_bit_is_one == 0 in MIPS pickNaNMulAdd()
Only for Mips platform, and only for cases when snan_bit_is_one is 0,
correct the order of argument comparisons in pickNaNMulAdd().

For more info, see [1], page 53, section "3.5.3 NaN Propagation".

[1] "MIPS Architecture for Programmers Volume IV-j:
The MIPS32 SIMD Architecture Module",
Imagination Technologies LTD, Revision 1.12, February 3, 2016

Backports commit c27644f0e9659471e1c9355da5b667960d311937 from qemu
2018-02-24 20:40:11 -05:00
Aleksandar Markovic 33833b6605
softfloat: For Mips only, correct default NaN values
Only for Mips platform, and only for cases when snan_bit_is_one is 0,
correct default NaN values (in their 16-, 32-, and 64-bit flavors).

For more info, see [1], page 84, Table 6.3 "Value Supplied When a New
Quiet NaN Is Created", and [2], page 52, Table 3.7 "Default NaN
Encodings".

[1] "MIPS Architecture For Programmers Volume II-A:
The MIPS64 Instruction Set Reference Manual",
Imagination Technologies LTD, Revision 6.04, November 13, 2015

[2] "MIPS Architecture for Programmers Volume IV-j:
The MIPS32 SIMD Architecture Module",
Imagination Technologies LTD, Revision 1.12, February 3, 2016

Backports commit a7c04d545a97126c9df9d96623747d8613aaf7db from qemu
2018-02-24 20:38:23 -05:00
Aleksandar Markovic 33ee9429b2
softfloat: Clean code format in fpu/softfloat-specialize.h
fpu/softfloat-specialize.h is the most critical file in the SoftFloat
library, since it handles numerous differences between platforms in
relation to floating-point arithmetic. This patch makes the code
in this file more consistent format-wise, and hopefully easier to
debug and maintain.

Backports commit a59eaea64686c8966b7653303660f8c26f285c77 from qemu
2018-02-24 20:35:05 -05:00
Aleksandar Markovic 6eb4fa54f6
softfloat: Implement run-time-configurable meaning of signaling NaN bit
This patch modifies SoftFloat library so that it can be configured in
run-time in relation to the meaning of signaling NaN bit, while, at the
same time, strictly preserving its behavior on all existing platforms.

Background:

In floating-point calculations, there is a need for denoting undefined or
unrepresentable values. This is achieved by defining certain floating-point
numerical values to be NaNs (which stands for "not a number"). For additional
reasons, virtually all modern floating-point unit implementations use two
kinds of NaNs: quiet and signaling. The binary representations of these two
kinds of NaNs, as a rule, differ only in one bit (that bit is, traditionally,
the first bit of mantissa).

Up to 2008, standards for floating-point did not specify all details about
binary representation of NaNs. More specifically, the meaning of the bit
that is used for distinguishing between signaling and quiet NaNs was not
strictly prescribed. (IEEE 754-2008 was the first floating-point standard
that defined that meaning clearly, see [1], p. 35) As a result, different
platforms took different approaches, and that presented considerable
challenge for multi-platform emulators like QEMU.

Mips platform represents the most complex case among QEMU-supported
platforms regarding the signaling NaN bit. Up to Release 6 of the Mips
architecture, "1" in the signaling NaN bit denoted a signaling NaN, which is
opposite to IEEE 754-2008 standard. From Release 6 on, Mips architecture
adopted IEEE standard prescription, and "0" denotes signaling NaN. On top of
that, Mips architecture for SIMD (also known as MSA, or vector instructions)
also specifies signaling bit in accordance to IEEE standard. MSA unit can be
implemented with both pre-Release 6 and Release 6 main processor units.

QEMU uses SoftFloat library to implement various floating-point-related
instructions on all platforms. The current QEMU implementation allows for
defining meaning of signaling NaN bit during build time, and is implemented
via preprocessor macro called SNAN_BIT_IS_ONE.

On the other hand, the change in this patch enables SoftFloat library to be
configured in run-time. This configuration is meant to occur during CPU
initialization, at the moment when it is definitely known what desired
behavior for particular CPU (or any additional FPUs) is.

The change is implemented so that it is consistent with existing
implementation of similar cases. This means that structure float_status is
used for passing the information about desired signaling NaN bit on each
invocation of SoftFloat functions. The additional field in float_status is
called snan_bit_is_one, which supersedes macro SNAN_BIT_IS_ONE.

IMPORTANT:

This change is not meant to create any change in emulator behavior or
functionality on any platform. It just provides the means for SoftFloat
library to be used in a more flexible way - in other words, it will just
prepare SoftFloat library for usage related to Mips platform and its
specifics regarding signaling bit meaning, which is done in some of
subsequent patches from this series.

Further break down of changes:

1) Added field snan_bit_is_one to the structure float_status, and
correspondent setter function set_snan_bit_is_one().

2) Constants <float16|float32|float64|floatx80|float128>_default_nan
(used both internally and externally) converted to functions
<float16|float32|float64|floatx80|float128>_default_nan(float_status*).
This is necessary since they are dependent on signaling bit meaning.
At the same time, for the sake of code cleanup and simplicity, constants
<floatx80|float128>_default_nan_<low|high> (used only internally within
SoftFloat library) are removed, as not needed.

3) Added a float_status* argument to SoftFloat library functions
XXX_is_quiet_nan(XXX a_), XXX_is_signaling_nan(XXX a_),
XXX_maybe_silence_nan(XXX a_). This argument must be present in
order to enable correct invocation of new version of functions
XXX_default_nan(). (XXX is <float16|float32|float64|floatx80|float128>
here)

4) Updated code for all platforms to reflect changes in SoftFloat library.
This change is twofold: it includes modifications of SoftFloat library
function invocations, and an addition of invocation of function
set_snan_bit_is_one() during CPU initialization, with arguments that
are appropriate for each particular platform. It was established that
all platforms zero their main CPU data structures, so snan_bit_is_one(0)
in appropriate places is not added, as it is not needed.

[1] "IEEE Standard for Floating-Point Arithmetic",
IEEE Computer Society, August 29, 2008.

Backports commit af39bc8c49224771ec0d38f1b693ea78e221d7bc from qemu
2018-02-24 20:27:12 -05:00
Alexey Kardashevskiy 096ca207af
memory: Add reporting of supported page sizes
Every IOMMU has some granularity which MemoryRegionIOMMUOps::translate
uses when translating; however, this information is not available outside
the translate context for various checks.

This adds a get_min_page_size callback to MemoryRegionIOMMUOps and
a wrapper for it so IOMMU users (such as VFIO) can know the minimum
actual page size supported by an IOMMU.

As IOMMU MR represents a guest IOMMU, this uses TARGET_PAGE_SIZE
as fallback.
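
A sketch of the wrapper described above (assumed shape, not the backported
code):

  uint64_t memory_region_iommu_get_min_page_size(MemoryRegion *mr)
  {
      if (mr->iommu_ops && mr->iommu_ops->get_min_page_size) {
          return mr->iommu_ops->get_min_page_size(mr);
      }
      return TARGET_PAGE_SIZE;   /* fallback for a guest IOMMU MR */
  }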

This removes vfio_container_granularity() and uses new helper in
memory_region_iommu_replay() when replaying IOMMU mappings on added
IOMMU memory region.

Backports the relevant parts of commit f682e9c244af7166225f4a50cc18ff296bb9d43e from qemu
2018-02-24 19:23:28 -05:00
Lluís Vilanova 2297527755
exec: [tcg] Track which vCPU is performing translation and execution
Information is tracked inside the TCGContext structure, and later used
by tracing events with the 'tcg' and 'vcpu' properties.

The 'cpu' field is used to check tracing of translation-time
events ("*_trans"). The 'tcg_env' field is used to pass it to
execution-time events ("*_exec").

Backports commit 7c2550432abe62f53e6df878ceba6ceaf71f0e7e from qemu
2018-02-24 19:21:39 -05:00
Eduardo Habkost 0f6513ef62
error: Remove unnecessary local_err variables
This patch simplifies code that uses a local_err variable just to
immediately use it for an error_propagate() call.
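
The pattern being removed, illustrated with a placeholder function:

  /* before: local_err exists only to be propagated immediately */
  Error *local_err = NULL;
  do_something(arg, &local_err);
  error_propagate(errp, local_err);

  /* after */
  do_something(arg, errp);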

Coccinelle patch used to perform the changes added to
scripts/coccinelle/remove_local_err.cocci.

Backports commit 6b62d961373e0327f2af8fb77d6d5d6308864180 from qemu
2018-02-24 19:12:25 -05:00
Peter Maydell 5ae787f895
target-arm: Provide hook to tell GICv3 about changes of security state
The GICv3 CPU interface needs to know when the CPU it is attached
to makes an exception level or mode transition that changes the
security state, because whether it is asserting IRQ or FIQ can change
depending on these things. Provide a mechanism for letting the GICv3
device register a hook to be called on such changes.

Backports commit bd7d00fc50c9960876dd194ebf0c88889b53e765 from qemu
2018-02-24 19:09:22 -05:00
Peter Maydell eec3a5f843
target-arm: Define new arm_is_el3_or_mon() function
The GICv3 system registers need to know if the CPU is AArch64
in EL3 or AArch32 in Monitor mode. This happens to be the first
part of the check for arm_is_secure(), so factor it out into a
new arm_is_el3_or_mon() function that the GIC can also use.

Backports commit 712058764da29b2908f6fbf56760ca4f15980709 from qemu
2018-02-24 19:04:27 -05:00
Peter Maydell f893dacef0
bitops.h: Implement half-shuffle and half-unshuffle ops
A half-shuffle operation takes a word with zeros in the high half:
0000 0000 0000 0000 ABCD EFGH IJKL MNOP
and spreads the bits out so they are in every other bit of the word:
0A0B 0C0D 0E0F 0G0H 0I0J 0K0L 0M0N 0O0P
A half-unshuffle performs the reverse operation.

Provide functions in bitops.h which implement these operations
for 32-bit and 64-bit inputs, and add tests for them.

Backports commit b355438de52d0782983bf4bdc47936189a0c988b from qemu
2018-02-24 19:02:36 -05:00
Bharata B Rao 851dec945d
qom: API to get instance_size of a type
Add an API object_type_get_size(const char *typename) that returns the
instance_size of the given typename.

Backports commit 3f97b53a682d2595747c926c00d78b9d406f1be0 from qemu
2018-02-24 19:00:16 -05:00
Thomas Huth aee5c93f58
configure: Enable -Werror for MinGW builds, too
MinGW seems to compile currently without warnings, so it should
be safe to enable -Werror now for this environment, too.

Backports commit e4650c81b3d15ba67236815defbb475c4bdf8690 from qemu
2018-02-24 18:56:05 -05:00
Eduardo Habkost b918dd95f3
target-i386: Consolidate calls of object_property_parse() in x86_cpu_parse_featurestr
Backports commit f6750e959a397dea988efd4e488e1ff813011065 from qemu
2018-02-24 18:53:55 -05:00
Igor Mammedov 800b28483b
target-i386: Move features logic that requires CPUState to realize time
Making x86_cpu_parse_featurestr() a pure converter of legacy feature
strings into global properties requires it to be called before a CPU
instance is created, so the parser shouldn't modify CPUState directly
or access it at all. Hence move the current hack that directly pokes
into CPUState, to set/unset +-feats, from the parser to the CPU's
realize method.

Backports commit dc15c0517b010a9444a2c05794dae980f2a2cbd9 from qemu
2018-02-24 18:47:46 -05:00
Eduardo Habkost b9ca5c4d33
target-i386: Remove xlevel & hv-spinlocks option fixups
The "fixup will be removed in future versions" warnings are
present since QEMU 1.7.0, at least, so users should have fixed
their scripts and configurations, already.

In the case of libvirt users, libvirt doesn't use the "xlevel"
option, and already rejects HyperV spinlock retry count < 0xFFF.

Backports commit c19b85216b5d47d922ac010931d4c7b2d79b2f68 from qemu
2018-02-24 18:33:32 -05:00
Radim Krčmář 610a52e9c7
target-i386: Implement CPUID[0xB] (Extended Topology Enumeration)
I looked at a dozen Intel CPUs that have this CPUID and all of them
always had Core offset as 1 (a wasted bit when hyperthreading is
disabled) and Package offset at least 4 (wasted bits at <= 4 cores).

QEMU uses more compact IDs and it doesn't make much sense to change it
now. I keep the SMT and Core sub-leaves even if there is just one
thread/core; it makes the code simpler and there should be no harm.

Backports commit 5232d00a041c8f3628b3532ef35d703a1f0dac19 from qemu
2018-02-24 18:31:14 -05:00
Eduardo Habkost 8991e8bf0b
target-i386: add Skylake-Client cpu model
Introduce the Skylake-Client CPU model, which inherits the features from
Broadwell and supports some additional features that are: MPX,
XSAVEC, and XGETBV1.

Backports commit f6f949e9295889fb272698aea763dcea77d616ce from qemu
2018-02-24 18:25:50 -05:00
Peter Maydell 9bdf310d49
target-arm: Don't permit ARMv8-only Neon insns on ARMv7
The Neon instructions VCVTA, VCVTM, VCVTN, VCVTP, VRINTA, VRINTM,
VRINTN, VRINTP, VRINTX, and VRINTZ were only introduced with ARMv8,
so they need a guard to make them UNDEF if the CPU only supports ARMv7.
(We got this right for all the other new-in-v8 insns, but forgot
it for these Neon 2-reg-misc ops.)

Backports commit fe8fcf3d642b4de1369841bf6acac13e0ec8770d from qemu
2018-02-24 18:20:00 -05:00
Peter Maydell a9fb399490
target-arm: Fix reset and migration of TTBCR(S)
Commit 6459b94c26dd666badb3 broke reset and migration of the AArch32
TTBCR(S) register if the guest used non-LPAE page tables. This is
because the AArch32 TTBCR register definition is marked as ARM_CP_ALIAS,
meaning that the AArch64 variant has to handle migration and reset.
Although AArch64 TCR_EL3 doesn't need to care about the mask and
base_mask fields, AArch32 may do so, and so we must use the special
TTBCR reset and raw write functions to ensure they are set correctly.

This doesn't affect TCR_EL2, because the AArch32 equivalent of that
is HTCR, which never uses the non-LPAE page table variant.

Backports commit 811595a2d4ab8c6354857a50ffd29fafce52a892 from qemu
2018-02-24 18:18:24 -05:00
Shannon Zhao 51c9e12605
target-arm: kvm64: set guest PMUv3 feature bit if supported
Check if kvm supports guest PMUv3. If so, set the corresponding feature
bit for vcpu.

Backports commit 5c0a3819f009639f67ce0453dff6ec7211bfee54 from qemu
2018-02-24 18:17:11 -05:00
Emilio G. Cota ae3e22a689
tb hash: hash phys_pc, pc, and flags with xxhash
For some workloads such as arm bootup, tb_phys_hash is performance-critical.
This is due to the high frequency of accesses to the hash table, caused
by (frequent) TLB flushes that wipe out the cpu-private tb_jmp_cache's.
More info:
https://lists.nongnu.org/archive/html/qemu-devel/2016-03/msg05098.html

To dig further into this I modified an arm image booting debian jessie to
immediately shut down after boot. Analysis revealed that quite a bit of time
is unnecessarily spent in tb_phys_hash: the cause is poor hashing that
results in very uneven loading of chains in the hash table's buckets;
the longest observed chain had ~550 elements.

The appended patch addresses this with two changes:

1) Use xxhash as the hash table's hash function. xxhash is a fast,
high-quality hashing function.

2) Feed the hashing function with not just tb_phys, but also pc and flags.

This improves performance over using just tb_phys for hashing, since that
resulted in some hash buckets having many TB's, while others getting very few;
with these changes, the longest observed chain on a single hash bucket is
brought down from ~550 to ~40.
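
Conceptually, the new hash boils down to something like the following sketch
(tb_hash_func5 comes from the companion patch below; the bucket-mask name is
an assumption):

  static inline unsigned tb_phys_hash_func(tb_page_addr_t phys_pc,
                                           target_ulong pc, uint32_t flags)
  {
      /* mix all three inputs through xxhash, then pick a bucket */
      return tb_hash_func5(phys_pc, pc, flags) & (CODE_GEN_PHYS_HASH_SIZE - 1);
  }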

Tests show that the other element checked for in tb_find_physical,
cs_base, is always a match when tb_phys+pc+flags are a match,
so hashing cs_base is wasteful. It could be that this is an ARM-only
thing, though. UPDATE:
On Tue, Apr 05, 2016 at 08:41:43 -0700, Richard Henderson wrote:
> The cs_base field is only used by i386 (in 16-bit modes), and sparc (for a TB
> consisting of only a delay slot).
> It may well still turn out to be reasonable to ignore cs_base for hashing.

BTW, after this change the hash table should not be called "tb_hash_phys"
anymore; this is addressed later in this series.

This change gives consistent bootup time improvements. I tested two
host machines:
- Intel Xeon E5-2690: 11.6% less time
- Intel i7-4790K: 19.2% less time

Increasing the number of hash buckets yields further improvements. However,
using a larger, fixed number of buckets can degrade performance for other
workloads that do not translate as many blocks (600K+ for debian-jessie arm
bootup). This is dealt with later in this series.

Backports commit 42bd32287f3a18d823f2258b813824a39ed7c6d9 from qemu
2018-02-24 18:00:14 -05:00
Emilio G. Cota 9ef9de9cf8
exec: add tb_hash_func5, derived from xxhash
This will be used by upcoming changes for hashing the tb hash.

Add this into a separate file to include the copyright notice from
xxhash.

Backports commit dc8b295d05ec35a8c032f9abca421772347ba5d4 from qemu
2018-02-24 17:36:35 -05:00
Emilio G. Cota 8518f55df7
compiler.h: add QEMU_ALIGNED() to enforce struct alignment
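A sketch of what such a wrapper typically expands to (assumed; the backported
definition may differ):

  /* GCC/Clang attribute wrapper */
  #define QEMU_ALIGNED(X) __attribute__((aligned(X)))

  /* usage: force 16-byte alignment of a struct */
  struct QEMU_ALIGNED(16) SomeStruct { int x; };
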
Backports commit 911a4d2215b05267b16925503218f49d607c6b29 from qemu
2018-02-24 17:32:43 -05:00
Peter Maydell 48539e54da
target-i386: Move user-mode exception actions out of user-exec.c
The exception_action() function in user-exec.c is just a call to
cpu_loop_exit() for every target CPU except i386. Since this
function is only called if the target's handle_mmu_fault() hook has
indicated an MMU fault, and that hook is only called from the
handle_cpu_signal() code path, we can simply move the x86-specific
setup into that hook, which allows us to remove the TARGET_I386
ifdef from user-exec.c.

Of the actions that were done by the call to raise_interrupt_err():
* cpu_svm_check_intercept_param() is a no-op in user mode
* check_exception() is a no-op since double faults are impossible
for user-mode
* assignments to cs->exception_index and env->error_code are no-ops
* assigning to env->exception_next_eip is unnecessary because it
is not used unless env->exception_is_int is true
* cpu_loop_exit_restore() is equivalent to cpu_loop_exit() since
pc is 0
which leaves just setting env->exception_is_int as the action that
needs to be added to x86_cpu_handle_mmu_fault().

Backports commit 0c33682d5f29b0a4ae53bdec4c8e52e4fae37b34 from qemu
2018-02-24 17:27:08 -05:00
Peter Maydell fa2679ba96
target-i386: Add comment about do_interrupt_user() next_eip argument
Add a comment to do_interrupt_user() along the same lines as the
existing one for do_interrupt_all() noting that the next_eip
argument is not used unless is_int is true or intno is EXCP_SYSCALL.

Backports commit 33271823323483b4ede1ae99de83d33b25875402 from qemu
2018-02-24 17:26:18 -05:00
Peter Maydell d7dccff836
cpu-exec: Rename cpu_resume_from_signal() to cpu_loop_exit_noexc()
The function cpu_resume_from_signal() is now always called with a
NULL puc argument, and is rather misnamed since it is never called
from a signal handler. It is essentially forcing an exit to the
top level cpu loop but without raising any exception, so rename
it to cpu_loop_exit_noexc() and drop the useless unused argument.

Backports commit 6886b98036a8f8f5bce8b10756ce080084cef11b from qemu
2018-02-24 17:25:28 -05:00
Peter Maydell b2013255aa
user-exec: Push resume-from-signal code out to handle_cpu_signal()
Since the only caller of page_unprotect() which might cause it to
need to call cpu_resume_from_signal() is handle_cpu_signal() in
the user-mode code, push the longjump handling out to that function.

Since this is the only caller of cpu_resume_from_signal() which
passes a non-NULL puc argument, split the non-NULL handling into
a new cpu_exit_tb_from_sighandler() function. This allows us
to merge the softmmu and usermode implementations of the
cpu_resume_from_signal() function, which are now identical.

Backports commit f213e72f2356b77768b9cb73814a3b26ad5a0099 from qemu
2018-02-24 17:21:06 -05:00
Peter Maydell 37b7538d85
translate-all.c: Don't pass puc, locked to tb_invalidate_phys_page()
The user-mode-only function tb_invalidate_phys_page() is only
called from two places:
* page_unprotect(), which passes in a non-zero pc, a puc pointer
and the value 'true' for the locked argument
* page_set_flags(), which passes in a zero pc, a NULL puc pointer
and a 'false' locked argument

If the pc is non-zero then we may call cpu_resume_from_signal(),
which does a longjmp out of the calling code (and out of the
signal handler); this is to cover the case of a target CPU with
"precise self-modifying code" (currently only x86) executing
a store instruction which modifies code in the same TB as the
store itself. Rather than doing the longjump directly here,
return a flag to the caller which indicates whether the current
TB was modified, and move the longjump to page_unprotect.

Backports commit 75809229bbf28b371afce14921ff5be98ddc5faa from qemu
2018-02-24 17:11:30 -05:00
Paolo Bonzini 1db22b5889
Makefile: add dependency on scripts/create_config
Make sure that config-host.h and config-target.h are rebuilt whenever
there is a change in the scripts that generates them; add the dependency
to the pattern rule as suggested by Peter.

Backports commit 553350156d80c18d0127c742f47b7adbd642f3ef from qemu
2018-02-24 17:05:03 -05:00
Fam Zheng c17a3070ea
Makefile: Add a FORCE target
Backports commit d41d4da3c5d702b505d74265900a13fae2c8d0e0 from qemu
2018-02-24 17:03:51 -05:00
Peter Maydell 8d0faac1dc
qemu-common.h: Drop WORDS_ALIGNED define
The WORDS_ALIGNED #define is not used anywhere, and hasn't been since
2013 when commit 612d590ebc6cef rewrote the various ld<type>_<endian>_p
functions to not use it. Remove the #define and the comment describing it.
Also remove the line in the comment about TARGET_WORDS_ALIGNED, since
it has never actually existed.

Backports commit 0d5c21f2b3bf1e0b562a2c74e353d2e03f2f50ef from qemu
2018-02-24 17:01:55 -05:00
Stefan Weil 4470900f3b
configure: Use $(...) instead of deprecated `...`
This fixes these warnings from shellcheck:

^-- SC2006: Use $(..) instead of deprecated `..`

Backports commit 89138857619b2a023c32200e9af780792ccaa8c3 from qemu
2018-02-24 16:59:40 -05:00
Sergey Sorokin c05902eddd
target-arm: Fix TTBR selecting logic on AArch32 Stage 2 translation
Address size is 40-bit for the AArch32 stage 2 translation,
and t0sz can be negative (from -8 to 7),
so we need to adjust it to use the existing TTBR selecting logic.

Backports commit 6e99f762612827afeff54add2e4fc2c3b2657fed from qemu
2018-02-24 16:54:32 -05:00
Peter Maydell 806d72035e
target-arm: Don't try to set ESR IL bit in arm_cpu_do_interrupt_aarch64()
Remove some incorrect code from arm_cpu_do_interrupt_aarch64()
which attempts to set the IL bit in the syndrome register based
on the value of env->thumb. This is wrong in several ways:
* IL doesn't indicate Thumb-vs-ARM, it indicates instruction
length (which may be 16 or 32 for Thumb and is always 32 for ARM)
* not every syndrome format uses IL like this -- for some IL is
always set, and for some it is always clear
* the code is changing esr_el[new_el] even for interrupt entry,
which is not supposed to modify ESR_ELx at all

Delete the code, and instead rely on the syndrome value in
env->exception.syndrome having already been set up with the
correct value of IL.

Backports commit 78f1edb19fe11fa0c5d0bf484db59a384f455d3c from qemu
2018-02-24 16:49:53 -05:00
Peter Maydell dc8bf22d88
target-arm: Set IL bit in syndromes for insn abort, watchpoint, swstep
For some exception syndrome types, the IL bit should always be set.
This includes the instruction abort, watchpoint and software step
syndrome types; add the missing ARM_EL_IL bit to the syndrome
values returned by syn_insn_abort(), syn_swstep() and syn_watchpoint().

Backports commit 04ce861ea545477425ad9e045eec3f61c8a27df9 from qemu
2018-02-24 16:48:59 -05:00
Edgar E. Iglesias 8aee797956
target-arm: A64: Create Instruction Syndromes for Data Aborts
Add support for generating the ISS (Instruction Specific Syndrome) for
Data Abort exceptions taken from AArch64.
These syndromes are used by hypervisors for example to trap and emulate
memory accesses.

We save the decoded data out-of-band with the TBs at translation time.
When exceptions hit, the extra data attached to the TB is used to
recreate the state needed to encode instruction syndromes.
This avoids the need to emit moves with every load/store.

Based on a suggestion from Peter Maydell.

Backports commit aaa1f954d4cab243e3d5337a72bc6d104e1c4808 from qemu
2018-02-24 16:46:44 -05:00
Alistair Francis 25daa5363e
target-arm: Add the HSTR_EL2 register
Add the Hypervisor System Trap Register for EL2.

This register is used early in the Linux boot and without it the kernel
aborts with a "Synchronous Abort" error.

Backports commit 2a5a9abd4bc45e2f4c62c77e07aebe53608c6915 from qemu
2018-02-24 16:24:57 -05:00
Fam Zheng 495c39300c
rules.mak: Add COMMA constant
Using "," literal in $(call quiet-command, ...) arguments is awkward.
Add this constant to make it at least doable.

Backports commit 2f4e4dc237261c76734d8ae1d8e09d2983d2f1ca from qemu
2018-02-24 16:20:31 -05:00
Paolo Bonzini 8df5ad80b1
exec: hide mr->ram_addr from qemu_get_ram_ptr users
Let users of qemu_get_ram_ptr and qemu_ram_ptr_length pass in an
address that is relative to the MemoryRegion. This basically means
what address_space_translate returns.

Because the semantics of the second parameter change, rename the
function to qemu_map_ram_ptr.

Backports commit 0878d0e11ba8013dd759c6921cbf05ba6a41bd71 from qemu
2018-02-24 16:17:49 -05:00
Paolo Bonzini b2e1b34bcc
memory: split memory_region_from_host from qemu_ram_addr_from_host
Move the old qemu_ram_addr_from_host to memory_region_from_host and
make it return an offset within the region. For qemu_ram_addr_from_host
return the ram_addr_t directly, similar to what it was before
commit 1b5ec23 ("memory: return MemoryRegion from qemu_ram_addr_from_host",
2013-07-04).

Backports commit 07bdaa4196b51bc7ffa7c3f74e9e4a9dc8a7966a from qemu
2018-02-24 16:06:49 -05:00
Paolo Bonzini 918c626847
exec: remove ram_addr argument from qemu_ram_block_from_host
Of the two callers, one does not use it, and the other can compute
it itself based on the other output argument (offset) and the RAMBlock.

Backports commit f615f39616c4fd1a3a3b078af8d75bb4be6390de from qemu
2018-02-24 03:37:40 -05:00
Paolo Bonzini f26f1f123c
memory: remove qemu_get_ram_fd, qemu_set_ram_fd, qemu_ram_block_host_ptr
Remove direct uses of ram_addr_t and optimize memory_region_{get,set}_fd
now that a MemoryRegion knows its RAMBlock directly.

Backports commit 4ff87573df3606856a92c14eef3393a63d736d11 from qemu
2018-02-24 03:34:44 -05:00
Emilio G. Cota ab569f5cde
atomics: do not emit consume barrier for atomic_rcu_read
Currently we emit a consume-load in atomic_rcu_read. Because of
limitations in current compilers, this is overkill for non-Alpha hosts
and it is only useful to make Thread Sanitizer work.

This patch leaves the consume-load in atomic_rcu_read when
compiling with Thread Sanitizer enabled, and resorts to a
relaxed load + smp_read_barrier_depends otherwise.
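
A simplified sketch of the resulting definition (assumed shape, not the exact
backported code):

  #ifdef __SANITIZE_THREAD__               /* ThreadSanitizer build */
  #define atomic_rcu_read(ptr)  __atomic_load_n(ptr, __ATOMIC_CONSUME)
  #else
  #define atomic_rcu_read(ptr)  ({                                    \
      typeof(*(ptr)) _val = __atomic_load_n(ptr, __ATOMIC_RELAXED);   \
      smp_read_barrier_depends();                                     \
      _val;                                                           \
  })
  #endif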

On an RMO host architecture, such as aarch64, the performance
improvement of this change is easily measurable. For instance,
qht-bench performs an atomic_rcu_read on every lookup. Performance
before and after applying this patch:

$ tests/qht-bench -d 5 -n 1
Before: 9.78 MT/s
After: 10.96 MT/s

Backports commit 15487aa132109891482f79d78a30d6cfd465a391 from qemu
2018-02-24 03:28:11 -05:00
Emilio G. Cota 87ef2a2c5f
atomics: emit an smp_read_barrier_depends() barrier only for Alpha and Thread Sanitizer
For correctness, smp_read_barrier_depends() is only required to
emit a barrier on Alpha hosts. However, we are currently emitting
a consume fence unconditionally, and most compilers currently treat
consume and acquire fences as equivalent.

Fix it by keeping the consume fence if we're compiling with Thread
Sanitizer, since this might help prevent false warnings. Otherwise,
only emit the barrier for Alpha hosts. Note that we still guarantee
that smp_read_barrier_depends() is a compiler barrier.

Backports commit c983895258a771f8a5e4a53950bfb7fd2216651c from qemu
2018-02-24 03:26:52 -05:00
Sergey Fedorov 3a9c5e7509
cpu-exec: Fix direct jump to TB spanning page
It is not safe to make a direct jump to a TB spanning two pages in
system emulation because the mapping for the second page can get changed
but we don't take care of direct jumps in this case.

However in user mode emulation, this is not the case because there's
only static address translation and TBs are always invalidated properly.

Backports commit c88c67e58b61618a904d2333ceebefc3c852d32e from qemu
2018-02-24 03:24:53 -05:00
Eduardo Habkost 9c04a28bd2
target-i386: Move TCG initialization to realize time
QOM instance_init functions are not supposed to have any side-effects,
as new objects may be created at any moment for querying property
information (see qmp_device_list_properties()).

Move TCG initialization to realize time so it won't be called when just
doing object_new() on a X86CPU subclass.

Backports commit 57f2453ab48a771b30aeced01b329ee85853bb7b from qemu
2018-02-24 03:23:09 -05:00
Eduardo Habkost fee2c27f2b
cpu: Eliminate cpudef_init(), cpudef_setup()
x86_cpudef_init() doesn't do anything anymore, so cpudef_init(),
cpudef_setup(), and x86_cpudef_init() can finally be removed.

Backports commit 3e2c0e062f0963a6b73b0cd1990fad79495463d9 from qemu
2018-02-24 03:20:46 -05:00
Eduardo Habkost 956e20ea6b
target-i386: Set constant model_id for qemu64/qemu32/athlon
Newer PC machines don't set hw_version, and older machines set
model-id on compat_props explicitly, so we don't need the
x86_cpudef_setup() code that sets model_id using
qemu_hw_version() anymore.

Backports commit 9cf2cc3d8237732946720d78bf9aec0064026ed8 from qemu
2018-02-24 03:18:11 -05:00
Eduardo Habkost aa3d46ef83
osdep: Move default qemu_hw_version() value to a macro
The macro will be used by code that will stop calling
qemu_hw_version() at runtime and just need a constant value.

Backports commit d494352c2f7818aeba184a8ef757569083740bb2 from qemu
2018-02-24 03:16:34 -05:00
Eduardo Habkost 923dcf1cb8
target-i386: Use xsave structs for ext_save_area
This doesn't introduce any change in the code, as the offsets and
struct sizes match what was present in the table. This can be
validated by the QEMU_BUILD_BUG_ON lines on target-i386/cpu.h,
which ensures the struct sizes and offsets match the existing
values in ext_save_area.

Backports commit ee1b09f695dcd8532f470e53297473bd3bc88718 from qemu
2018-02-24 03:13:16 -05:00
Lioncash 05963470a2
target-i386: Include log.h in smm_helper
Fixes a compilation error
2018-02-24 03:06:07 -05:00
Eduardo Habkost 128f7c078a
target-i386: Define structs for layout of xsave area
Add structs that define the layout of the xsave areas used by
Intel processors. Add some QEMU_BUILD_BUG_ON lines to ensure the
structs match the XSAVE_* macros in target-i386/kvm.c and the
offsets and sizes at target-i386/cpu.c:ext_save_areas.

Backports commit b503717d28e8f7eff39bf38624e6cf42687d951a from qemu
2018-02-24 03:04:31 -05:00
Paolo Bonzini 77305ce4ee
memory: remove unnecessary masking of MemoryRegion ram_addr
mr->ram_block->offset is already aligned to both host and target size
(see qemu_ram_alloc_internal). Remove further masking as it is
unnecessary.

Backports commit e4e697940dff612b789b0858270c20a8b680f78d from qemu
2018-02-24 03:01:34 -05:00
Fam Zheng 74962feee1
memory: Drop FlatRange.romd_mode
Its value is always set to mr->romd_mode, so the removed comparisons are
fully superseded by "a->mr == b->mr".

Backports commit 5b5660adf1fdb61db14ec681b10463b8cba633f1 from qemu
2018-02-24 02:57:29 -05:00
Fam Zheng fb8135cd0d
memory: Remove code for mr->may_overlap
The collision check does nothing and hasn't been used. Remove the
variable together with related code.

Backports commit b61359781958759317ee6fd1a45b59be0b7dbbe1 from qemu
2018-02-24 02:55:25 -05:00
Gonglei feff56cc11
memory: drop find_ram_block()
On the one hand, we already have qemu_get_ram_block(), whose function
is similar. On the other hand, we can use mr->ram_block directly, so
searching for the RAMBlock by ram_addr is a kind of waste.

Backports commit fa53a0e53efdc7002497ea4a76aacf6cceb170ef from qemu
2018-02-24 02:52:20 -05:00
Paolo Bonzini 9bb67a3f58
hw: clean up hw/hw.h includes
Include qom/object.h and exec/memory.h instead of exec/ioport.h;
exec/ioport.h was almost everywhere required only for those two
includes, not for the content of the header itself.

Remove block/aio.h, everybody is already including it through
another path.

With this change, include/hw/hw.h is freed from qemu-common.h.

Backports commit df43d49cb8708b9c88a20afe0d1a3089b550a5b8 from qemu
2018-02-24 02:46:41 -05:00
Paolo Bonzini d0d3712417
hw: remove pio_addr_t
pio_addr_t is almost unused, because these days I/O ports are simply
accessed through the address space. cpu_{in,out}[bwl] themselves are
almost unused; monitor.c and xen-hvm.c could use address_space_read/write
directly, since they have an integer size at hand. This leaves qtest as
the only user of those functions.

On the other hand even portio_* functions use this type; the only
interesting use of pio_addr_t thus is include/hw/sysbus.h. I guess I
could move it there, but I don't see much benefit in that either. Using
uint32_t is enough and avoids the need to include ioport.h everywhere.

Backports commit 89a80e7400f7225d9401b35ef32454b4ab29dc67 from qemu
2018-02-24 02:43:16 -05:00
Paolo Bonzini 9485b7c2e1
cpu: move exec-all.h inclusion out of cpu.h
exec-all.h contains TCG-specific definitions. It is not needed outside
TCG-specific files such as translate.c, exec.c or *helper.c.

One generic function had snuck into include/exec/exec-all.h; move it to
include/qom/cpu.h.

Backports commit 63c915526d6a54a95919ebece83fa9ca631b2508 from qemu
2018-02-24 02:39:08 -05:00
Paolo Bonzini 58693409ea
exec: extract exec/tb-context.h
TCG backends do not need most of exec-all.h; extract what they actually
need to a separate file or move it directly to tcg.h. The next patch
will stop including exec-all.h from everywhere.

Backports commit 00f6da6a1a5d1ce085334eccbb50ec899ceed513 from qemu
2018-02-24 02:09:58 -05:00
Paolo Bonzini f9b9d0ba0f
hw: explicitly include qemu/log.h
Move the inclusion out of hw/hw.h, most files do not need it.

Backports commit 03dd024ff57733a55cd2e455f361d053c81b1b29 from qemu
2018-02-24 02:00:45 -05:00
Paolo Bonzini adf97a4d59
mips: move CP0 functions out of cpu.h
These are here for historical reasons: they are needed from both gdbstub.c
and op_helper.c, and the latter was compiled with fixed AREG0. It is
not needed anymore, so uninline them.

Backports commit e6623d88f44aae9e9c78276c0cb7bd352283d50a from qemu
2018-02-24 01:57:30 -05:00
Paolo Bonzini 058624b9e4
arm: move arm_log_exception into .c file
Avoid need for qemu/log.h inclusion, and make the function static too.

Backports commit 27a7ea8a1f351578ce869b41ba1ba662c063fd62 from qemu
2018-02-24 01:52:55 -05:00
Paolo Bonzini 37f26922dd
qemu-common: push cpu.h inclusion out of qemu-common.h
Backports commit 33c11879fd422b759483ed25fef133ea900ea8d7 from qemu
2018-02-24 01:50:56 -05:00
Paolo Bonzini e84da64a2b
qemu-common: stop including qemu/bswap.h from qemu-common.h
Move it to the actual users. There are still a few includes of
qemu/bswap.h in headers; removing them is left for future work.

Backports commit 58369e22cf971448411bfbc8c894b2addebe2111 from qemu
2018-02-24 01:06:03 -05:00
Paolo Bonzini 78fd1aab94
cpu: move endian-dependent load/store functions to cpu-all.h
Disentangle cpu-common.h and memory.h from NEED_CPU_H. Prototypes are
not defined for !NEED_CPU_H, so remove them from poison.h too. Only
macros need poisoning.

Backports commit a7d6039cb35592683ecc56d2b37817da2d2f8b00 from qemu
2018-02-24 01:04:26 -05:00
Paolo Bonzini 9c0e31ed3a
target-sparc: make cpu-qom.h not target specific
Make SPARCCPU an opaque type within cpu-qom.h, and move all definitions
of private methods, as well as all type definitions that require knowledge
of the layout to cpu.h. This helps making files independent of NEED_CPU_H
if they only need to pass around CPU pointers.

Backports commit d61d1b20610e4655d7846e4cb43d22188e935f5f from qemu
2018-02-24 01:00:56 -05:00
Paolo Bonzini 01bd1c1a73
target-mips: make cpu-qom.h not target specific
Make MIPSCPU an opaque type within cpu-qom.h, and move all definitions of
private methods, as well as all type definitions that require knowledge
of the layout to cpu.h. This helps making files independent of NEED_CPU_H
if they only need to pass around CPU pointers.

Backports commit 416bf936864f16caad6993b9ebd452fb34f801bd from qemu
2018-02-24 00:59:03 -05:00
Paolo Bonzini 27ebc27beb
target-m68k: make cpu-qom.h not target specific
Make M68KCPU an opaque type within cpu-qom.h, and move all definitions of
private methods, as well as all type definitions that require knowledge
of the layout to cpu.h. This helps making files independent of NEED_CPU_H
if they only need to pass around CPU pointers.

Backports commit a836b8fa00fa1032ccd234a71b33943627d211ea from qemu
2018-02-24 00:56:58 -05:00
Paolo Bonzini 2f4ae94b5c
target-i386: make cpu-qom.h not target specific
Make X86CPU an opaque type within cpu-qom.h, and move all definitions of
private methods, as well as all type definitions that require knowledge
of the layout to cpu.h. This helps making files independent of NEED_CPU_H
if they only need to pass around CPU pointers.

Backports commit 4da6f8d954429c0cd1471d25cb9dbe909607374e from qemu
2018-02-24 00:55:22 -05:00
Lioncash 791413630e
target-arm: make cpu-qom.h not target specific
Make ARMCPU an opaque type within cpu-qom.h, and move all definitions of
private methods, as well as all type definitions that require knowledge
of the layout to cpu.h. This helps making files independent of NEED_CPU_H
if they only need to pass around CPU pointers.

Backports commit 74e755647c1598a6845df1ee4f8b96d01afd96e7 from qemu
2018-02-24 00:48:59 -05:00
Paolo Bonzini fee6dcb22a
include: move CPU-related definitions out of qemu-common.h
Backports commit 4b4629d9d26fd0e100d9be526367a96aa35b541d from qemu
2018-02-24 00:33:49 -05:00
Wei Jiangang 7cf135457a
accel: make configure_accelerator return void
Returning the negated value of accel_initialised is meaningless,
and the caller vl doesn't check it.

Backports commit bdc3f61dec2f9c227235bb5f677a0272e1184c82 from qemu
2018-02-24 00:31:28 -05:00
Sergey Fedorov eab60b7c77
cpu-exec: Clean up 'interrupt_request' reloading in cpu_handle_interrupt()
Backports commit 8b1fe3f439eaa2f0a6ee7737942bb6c405725867 from qemu
2018-02-24 00:27:05 -05:00
Sergey Fedorov b4b7b88f69
cpu-exec: Remove unused 'x86_cpu' and 'env' from cpu_exec()
Backports commit ba048a4ae15ba0f70c6dcb12ee05db120408de78 from qemu
2018-02-24 00:16:40 -05:00
Sergey Fedorov aefb8935a9
cpu-exec: Move TB execution stuff out of cpu_exec()
Simplify cpu_exec() by extracting TB execution code outside of
cpu_exec() into a new static inline function cpu_loop_exec_tb().

Backports commit 928de9ee14b0b63ee9f9275732ed3e1c8b5f4790 from qemu
2018-02-24 00:15:24 -05:00
Sergey Fedorov d4ef96abf2
cpu-exec: Move interrupt handling out of cpu_exec()
Simplify cpu_exec() by extracting interrupt handling code outside of
cpu_exec() into a new static inline function cpu_handle_interrupt().

Backports commit c385e6e49763c6dd5dbbd90fadde95d986f8bd38 from qemu
2018-02-24 00:09:06 -05:00
Sergey Fedorov c1b52a4387
cpu-exec: Move exception handling out of cpu_exec()
Simplify cpu_exec() by extracting exception handling code out of
cpu_exec() into a new static inline function cpu_handle_exception().
Also make cpu_handle_debug_exception() inline as it is used only once.

Backports commit ea284766ec6b9f1712369249566b4c372f3cec8b from qemu
2018-02-24 00:03:37 -05:00