Commit graph

36 commits

Author SHA1 Message Date
Richard Henderson 7fe5f620df
tcg: Dynamically allocate TCGOps
With no fixed array allocation, we can't overflow a buffer.
This will be important as optimizations related to host vectors
may expand the number of ops used.

Use QTAILQ to link the ops together.

Backports commit 15fa08f8451babc88d733bd411d4c94976f9d0f8 from qemu
2018-03-05 16:34:40 -05:00
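
For illustration, a minimal standalone sketch of the idea: ops are heap-allocated one at a time and chained through a tail queue instead of living in a fixed array. It uses the standard <sys/queue.h> TAILQ macros as a stand-in for QEMU's QTAILQ; the Op type and its fields are hypothetical.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/queue.h>

    typedef struct Op {
        int opcode;               /* hypothetical payload */
        TAILQ_ENTRY(Op) link;     /* embedded list linkage */
    } Op;

    TAILQ_HEAD(OpList, Op);

    int main(void)
    {
        struct OpList ops = TAILQ_HEAD_INITIALIZER(ops);
        Op *op, *next;

        /* Allocate ops on demand; there is no fixed-size array to overflow. */
        for (int i = 0; i < 5; i++) {
            op = malloc(sizeof(*op));
            op->opcode = i;
            TAILQ_INSERT_TAIL(&ops, op, link);
        }
        TAILQ_FOREACH(op, &ops, link) {
            printf("op %d\n", op->opcode);
        }
        /* Tear down the chain. */
        for (op = TAILQ_FIRST(&ops); op != NULL; op = next) {
            next = TAILQ_NEXT(op, link);
            free(op);
        }
        return 0;
    }
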
Richard Henderson ab9df6244c
tcg: Use offsets not indices for TCGv_*
Using the offset of a temporary, relative to TCGContext, rather than
its index means that we don't use 0. That leaves offset 0 free for
a NULL representation without having to leave index 0 unused.

Backports commit e89b28a63501c0ad6d2501fe851d0c5202055e70 from qemu
2018-03-05 10:12:08 -05:00
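
A minimal sketch of the encoding trick, with hypothetical Ctx/Temp types: a handle is the byte offset of the temporary from the start of the context, so 0 can never name a real temporary and can serve as NULL, while index 0 stays usable.

    #include <assert.h>
    #include <stddef.h>

    typedef struct Temp { int kind; } Temp;

    typedef struct Ctx {
        int nb_temps;            /* leading field: no temp ever sits at offset 0 */
        Temp temps[512];
    } Ctx;

    /* Encode a temp as its byte offset from the context. */
    static ptrdiff_t temp_handle(Ctx *ctx, Temp *t)
    {
        return (char *)t - (char *)ctx;
    }

    /* Decode; handle 0 is reserved as the NULL representation. */
    static Temp *handle_temp(Ctx *ctx, ptrdiff_t h)
    {
        return h ? (Temp *)((char *)ctx + h) : NULL;
    }

    int main(void)
    {
        Ctx ctx;
        Temp *t0 = &ctx.temps[0];              /* index 0 remains usable */
        ptrdiff_t h = temp_handle(&ctx, t0);
        assert(h != 0 && handle_temp(&ctx, h) == t0);
        assert(handle_temp(&ctx, 0) == NULL);
        return 0;
    }
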
Richard Henderson d450156414
tcg: Remove GET_TCGV_* and MAKE_TCGV_*
The GET and MAKE functions weren't really specific enough.
We now have a full complement of functions that convert exactly
between temporaries, arguments, tcgv pointers, and indices.

The target/sparc change is also a bug fix, which would have affected
a host that defines TCG_TARGET_HAS_extr[lh]_i64_i32, i.e. MIPS64.

Backports commit dc41aa7d34989b552efe712ffe184236216f960b from qemu
2018-03-05 09:12:26 -05:00
Richard Henderson 2bb5011b18
tcg: Introduce tcgv_{i32,i64,ptr}_{arg,temp}
Transform TCGv_* to an "argument" or a temporary.
For now, an argument is simply the temporary index.

Backports commit ae8b75dc6ec808378487064922f25f1e7ea7a9be from qemu
2018-03-05 08:46:12 -05:00
Richard Henderson eb488f5bd6
tcg: Merge opcode arguments into TCGOp
Rather than have a separate buffer of 10*max_ops entries,
give each opcode 10 entries. The result is actually a bit
smaller and should have slightly better cache locality.

Backports commit 75e8b9b7aa0b95a761b9add7e2f09248b101a392 from qemu
2018-03-05 04:45:20 -05:00
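
A rough sketch of the layout change, with invented field names: instead of a separate parallel buffer of arguments indexed by each op, every op carries a small fixed argument array inline.

    #include <stdio.h>

    /* Before: ops plus a separate, strictly parallel argument buffer. */
    struct OpOld { int opcode; unsigned nargs; unsigned first_arg; };
    /* ...together with a shared args_buffer[10 * max_ops] indexed by first_arg... */

    /* After: each op owns up to 10 argument slots directly. */
    struct OpNew {
        int opcode;
        unsigned nargs;
        unsigned long args[10];   /* inline; no shared buffer, better locality */
    };

    int main(void)
    {
        printf("sizeof(struct OpNew) = %zu bytes\n", sizeof(struct OpNew));
        return 0;
    }
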
Emilio G. Cota 5fae6dd433
tcg: remove addr argument from lookup_tb_ptr
It is unlikely that we will ever want to call this helper passing
an argument other than the current PC. So just remove the argument,
and use the pc we already get from cpu_get_tb_cpu_state.

This change paves the way to having a common "tb_lookup" function.

Backports commit 7f11636dbee89b0e4d03e9e2b96e14649a7db778 from qemu
2018-03-05 02:16:34 -05:00
Pranith Kumar 902886cc45
tcg: Implement implicit ordering semantics
Currently, we cannot use mttcg for running strong memory model guests
on weak memory model hosts due to missing ordering semantics.

We implicitly generate fence instructions for stronger guests if an
ordering mismatch is detected. We generate fences only for the orders
for which fence instructions are necessary; for example, a fence is not
necessary between a store and a subsequent load on x86, since its
absence in the guest binary tells us that ordering need not be
ensured. Also note that if we find multiple subsequent fence
instructions in the generated IR, we combine them in the TCG
optimization pass.

This patch allows us to boot an x86 guest on ARM64 hosts using mttcg.

Backports commit b32dc3370a666e237b2099c22166b15e58cb6df8 from qemu
2018-03-04 13:24:27 -05:00
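
The core idea, as a hedged sketch with made-up constants: express guest-required and host-guaranteed orderings as bitmasks of load/store pairs and emit a fence only for the orderings the host does not already provide.

    #include <stdio.h>

    /* Illustrative ordering bits, one per before/after access pair. */
    enum {
        MO_LD_LD = 1 << 0,
        MO_LD_ST = 1 << 1,
        MO_ST_LD = 1 << 2,
        MO_ST_ST = 1 << 3,
        MO_ALL   = 0xf,
    };

    /* Emit a fence only for orderings the guest guarantees but the host lacks. */
    static void maybe_emit_fence(unsigned guest_mo, unsigned host_mo, unsigned need)
    {
        unsigned missing = (need & guest_mo) & ~host_mo;
        if (missing) {
            printf("emit fence for 0x%x\n", missing);
        }
    }

    int main(void)
    {
        /* e.g. an x86-like guest (orders everything but ST->LD) on a weak host */
        unsigned guest = MO_ALL & ~MO_ST_LD;
        unsigned host  = 0;
        maybe_emit_fence(guest, host, MO_LD_LD | MO_LD_ST);  /* before a load  */
        maybe_emit_fence(guest, host, MO_ST_LD | MO_ST_ST);  /* before a store */
        return 0;
    }
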
Emilio G. Cota 8f4f15e5f5
tcg: Introduce goto_ptr opcode and tcg_gen_lookup_and_goto_ptr
Instead of exporting goto_ptr directly to TCG frontends, export
tcg_gen_lookup_and_goto_ptr(), which calls goto_ptr with the pointer
returned by the lookup_tb_ptr() helper. This is the only use case
we have for goto_ptr and lookup_tb_ptr, so having this function is
very convenient. Furthermore, it trivially allows us to avoid calling
the lookup helper if goto_ptr is not implemented by the backend.

Backports commit cedbcb01529cb6cf9a2289cdbebbc63f6149fc18 from qemu
2018-03-02 21:05:18 -05:00
Richard Henderson 69116abafc
tcg: Initialize return value after exit_atomic
Users of tcg_gen_atomic_cmpxchg and do_atomic_op rightfully utilize
the output. Even though this code is dead, it gets translated, and
without the initialization we encounter a tcg_error.

Backports commit 79b1af906245558c30e0a5faf26cb52b63f83cce from qemu
2018-03-02 18:59:11 -05:00
Richard Henderson ff3512a045
tcg: Use ctpop to generate ctz if needed
Particularly when andc is also available, this is two insns
shorter than using clz to compute ctz.

Backports commit 14e99210f6c6cede461a54b2e0f9b4cd55175f00 from qemu
2018-03-01 18:39:20 -05:00
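
A standalone illustration of the expansion (not QEMU code): with andc available, ctz is the population count of the bits strictly below the lowest set bit, i.e. ctpop((x - 1) & ~x); conveniently this also yields 32 for x == 0.

    #include <assert.h>
    #include <stdint.h>

    static int ctpop32(uint32_t x) { return __builtin_popcount(x); }

    /* (x - 1) & ~x sets exactly the bits below the lowest set bit of x. */
    static int ctz32_via_ctpop(uint32_t x)
    {
        return ctpop32((x - 1) & ~x);   /* andc(x - 1, x) in TCG terms */
    }

    int main(void)
    {
        assert(ctz32_via_ctpop(1) == 0);
        assert(ctz32_via_ctpop(8) == 3);
        assert(ctz32_via_ctpop(0x80000000u) == 31);
        assert(ctz32_via_ctpop(0) == 32);   /* matches the ctz32(0) == 32 convention */
        return 0;
    }
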
Richard Henderson 5f6e7bbdbd
tcg: Add opcode for ctpop
The number of actual invocations of ctpop itself does not warrant
an opcode, but it is very helpful for POWER7 to use in generating
an expansion for ctz.

Backports commit a768e4e99247911f00c5c0267c12d4e207d5f6cc from qemu
2018-03-01 18:26:41 -05:00
Richard Henderson fff7ca4617
tcg: Add helpers for clrsb
The number of actual invocations does not warrant an opcode,
nor the burden of having every backend generate it. But at least
we can eliminate redundant helpers.

Backports commit 086920c2c8008f125fd38781072fa25c3ad158ea from qemu
2018-03-01 18:14:11 -05:00
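
For reference, a standalone sketch of what such a helper computes: the number of leading redundant sign bits (matching GCC's __builtin_clrsb), derived from clz of x ^ (x >> 1). It assumes arithmetic right shift of signed values, as compilers in practice provide.

    #include <assert.h>
    #include <stdint.h>

    static int clz32(uint32_t x) { return x ? __builtin_clz(x) : 32; }

    /* Leading redundant sign bits: clz of (x ^ (x >> 1)), minus the sign bit itself. */
    static int clrsb32(int32_t x)
    {
        return clz32((uint32_t)(x ^ (x >> 1))) - 1;
    }

    int main(void)
    {
        assert(clrsb32(0)  == 31);
        assert(clrsb32(-1) == 31);
        assert(clrsb32(1)  == 30);
        assert(clrsb32(INT32_MIN) == 0);
        return 0;
    }
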
Richard Henderson 2cf34e1b55
tcg: Add clz and ctz opcodes
Backports commit 0e28d0063bbd9e59a981ea2d20f82f30c5d956a8 from qemu
2018-03-01 16:04:11 -05:00
Richard Henderson 9f2fcaaf27
tcg: Add deposit_z expander
While we don't require a new opcode, it is handy to have an expander
that knows the first source is zero.

Backports commit 07cc68d52852bf47dea7c402b46ddd28248d4212 from qemu
2018-03-01 13:29:24 -05:00
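
As a plain-C illustration of what the expander computes (helper names invented here): deposit_z is deposit with a known-zero first source, so the result is just the field masked and shifted into place.

    #include <assert.h>
    #include <stdint.h>

    /* deposit: replace the len-bit field at bit ofs of 'base' with the low bits of 'val'. */
    static uint32_t deposit32(uint32_t base, unsigned ofs, unsigned len, uint32_t val)
    {
        uint32_t mask = (len == 32 ? ~0u : (1u << len) - 1) << ofs;
        return (base & ~mask) | ((val << ofs) & mask);
    }

    /* deposit_z: the same operation against a zero background. */
    static uint32_t deposit_z32(unsigned ofs, unsigned len, uint32_t val)
    {
        return deposit32(0, ofs, len, val);
    }

    int main(void)
    {
        assert(deposit32(0xffffffffu, 8, 8, 0xab) == 0xffffabffu);
        assert(deposit_z32(8, 8, 0xab) == 0x0000ab00u);
        return 0;
    }
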
Richard Henderson 8e0585dcb1
tcg: Add field extraction primitives
Adds tcg_gen_extract_* and tcg_gen_sextract_* for extraction of
fixed position bitfields, much like we already have for deposit.

Backports commit 7ec8bab3deae643b1ce579c2d65a244f30708330 from qemu
2018-03-01 13:21:30 -05:00
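
A standalone sketch of the two primitives in plain C, mirroring the usual extract/sextract semantics: extract pulls an unsigned bitfield, sextract sign-extends it (relying, as usual, on arithmetic right shift of signed values).

    #include <assert.h>
    #include <stdint.h>

    /* Unsigned field of 'len' bits starting at bit 'ofs'. */
    static uint32_t extract32(uint32_t x, unsigned ofs, unsigned len)
    {
        return (x >> ofs) & (len == 32 ? ~0u : (1u << len) - 1);
    }

    /* Same field, sign-extended to 32 bits: shift to the top, then back down. */
    static int32_t sextract32(uint32_t x, unsigned ofs, unsigned len)
    {
        return (int32_t)(x << (32 - len - ofs)) >> (32 - len);
    }

    int main(void)
    {
        assert(extract32(0x12345678u, 8, 8) == 0x56);
        assert(sextract32(0x0000ff00u, 8, 8) == -1);
        assert(sextract32(0x00007f00u, 8, 8) == 0x7f);
        return 0;
    }
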
Joseph Myers 7ff441826c
tcg: correct 32-bit tcg_gen_ld8s_i64 sign-extension
The version of tcg_gen_ld8s_i64 for 32-bit systems does a load into
the low part of the return value - then attempts a sign extension into
the high part, but wrongly sets the high part to a sign extension of
itself rather than of the low part. This results in TCG internal
errors from the use of the uninitialized high part (in some GCC tests
of AArch64 NEON shift intrinsics, in particular). This patch corrects
the sign-extension logic, making it match other functions such as
tcg_gen_ld16s_i64.

Backports commit 3ff91d7e85176f8b4b131163d7fd801757a2c949 from qemu
2018-03-01 08:41:23 -05:00
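
In plain-C terms (variable names invented), the 32-bit split works like this: the byte is sign-extended into the low word, and the high word must then replicate the low word's sign bit, not its own.

    #include <assert.h>
    #include <stdint.h>

    /* Sign-extending 8-bit load into a 64-bit value held as a lo/hi pair. */
    static void ld8s_as_pair(int8_t byte, uint32_t *lo, uint32_t *hi)
    {
        *lo = (uint32_t)(int32_t)byte;         /* sign-extend the byte into the low word */
        *hi = (uint32_t)((int32_t)*lo >> 31);  /* replicate the LOW word's sign bit */
        /* The bug was equivalent to *hi = (int32_t)*hi >> 31 on an uninitialized hi. */
    }

    int main(void)
    {
        uint32_t lo, hi;
        ld8s_as_pair(-2, &lo, &hi);
        assert(lo == 0xfffffffeu && hi == 0xffffffffu);
        ld8s_as_pair(5, &lo, &hi);
        assert(lo == 5 && hi == 0);
        return 0;
    }
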
Richard Henderson f5a35908da
tcg: Add tcg_gen_mulsu2_{i32,i64,tl}
This multiply has one signed input and one unsigned input,
producing the full double-width result.

Backports commit 5087abfb7dfd1d368ae6939420057036b4d8e509 from qemu
2018-03-01 08:39:37 -05:00
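
A standalone check of the identity behind such an op, shown at 32x32->64 width: the signed-by-unsigned product equals the unsigned-by-unsigned product with (b << width) subtracted whenever the signed operand is negative.

    #include <assert.h>
    #include <stdint.h>

    /* 32x32 -> 64 multiply with one signed and one unsigned operand. */
    static int64_t mulsu2_32(int32_t a, uint32_t b)
    {
        uint64_t u = (uint64_t)(uint32_t)a * b;   /* unsigned full product */
        if (a < 0) {
            u -= (uint64_t)b << 32;               /* correct the high half */
        }
        return (int64_t)u;
    }

    int main(void)
    {
        assert(mulsu2_32(-3, 7) == -21);
        assert(mulsu2_32(-1, 0xffffffffu) == -(int64_t)0xffffffffu);
        assert(mulsu2_32(2, 0x80000000u) == (int64_t)1 << 32);
        return 0;
    }
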
Richard Henderson b48508a6c1
tcg: Emit barriers with parallel_cpus
Backports commit 91682118aa330aff7e8ef0cc685c32d101f49940 from qemu
2018-02-27 22:28:33 -05:00
Richard Henderson 064543a415
tcg: Add CONFIG_ATOMIC64
Allow qemu to build on 32-bit hosts without 64-bit atomic ops.

Even if we only allow 32-bit hosts to multi-thread emulate 32-bit
guests, we still need some way to handle the 32-bit guest using a
64-bit atomic operation. Do so by dropping back to single-step.

Backports commit df79b996a7b21c6ea7847f7927a2e1a294b86c72 from qemu
2018-02-27 22:25:36 -05:00
Richard Henderson 5c0ce1b99c
tcg: Add atomic helpers
Add all of cmpxchg, op_fetch, fetch_op, and xchg.
Handle both endiannesses, and sizes up to 8 bytes.
Handle expanding non-atomically when emulating serially.

Backports commit c482cb117cc418115ca9c6d21a7a2315414c0a40 from qemu
2018-02-27 15:57:47 -05:00
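
A minimal sketch (not the QEMU helpers themselves) of the two execution modes for a 4-byte cmpxchg: a real atomic operation via the GCC/Clang __atomic builtin when running in parallel, and a plain read-modify-write when emulating serially.

    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* Compare-and-swap on a host pointer; returns the value observed before the op. */
    static uint32_t cmpxchg32(uint32_t *p, uint32_t cmp, uint32_t new, bool parallel)
    {
        if (parallel) {
            uint32_t expected = cmp;
            __atomic_compare_exchange_n(p, &expected, new, false,
                                        __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
            return expected;            /* holds the old value on failure as well */
        } else {
            uint32_t old = *p;          /* serial emulation: no atomicity needed */
            if (old == cmp) {
                *p = new;
            }
            return old;
        }
    }

    int main(void)
    {
        uint32_t x = 5;
        assert(cmpxchg32(&x, 5, 9, true) == 5 && x == 9);
        assert(cmpxchg32(&x, 5, 1, false) == 9 && x == 9);
        return 0;
    }
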
Pranith Kumar 5e44ce9be8
Introduce TCGOpcode for memory barrier
This commit introduces the TCGOpcode for memory barrier instruction.

This opcode takes an argument which is the type of memory barrier
which should be generated.

Backports commit f65e19bc2c9e8358e634d309606144ac2a3c2936 from qemu
2018-02-26 03:02:41 -05:00
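
The argument is a bitmask describing what the barrier must enforce. A hedged sketch of such an encoding, with constant names and values that are illustrative stand-ins rather than the commit's exact definitions:

    #include <stdio.h>

    /* Which before/after access pairs the barrier orders, plus its kind. */
    enum {
        BAR_LD_LD = 0x01,
        BAR_ST_LD = 0x02,
        BAR_LD_ST = 0x04,
        BAR_ST_ST = 0x08,
        BAR_ALL   = 0x0f,
        BAR_SC    = 0x30,    /* sequentially-consistent (full) barrier */
    };

    int main(void)
    {
        int arg = BAR_SC | BAR_ALL;    /* what a full-fence request might look like */
        printf("barrier arg = 0x%x (orders st->ld: %s)\n",
               arg, (arg & BAR_ST_LD) ? "yes" : "no");
        return 0;
    }
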
Richard Henderson 1547048a22
tcg: Reorg TCGOp chaining
Instead of using -1 as the end of the chain, use 0, and link through the 0
entry to form a fully circular doubly-linked list.

Backports commit dcb8e75870e2de199db853697f8839cb603beefe from qemu
2018-02-25 21:44:50 -05:00
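
A compact sketch of the structure (pool and field names invented): links are indices into an op pool, entry 0 is a permanent sentinel, and the list is circular, so "no next" never needs a -1 marker.

    #include <assert.h>

    typedef struct Op {
        int opcode;
        int prev, next;      /* indices into the pool; 0 is the sentinel */
    } Op;

    static Op pool[64] = { [0] = { .prev = 0, .next = 0 } };   /* empty circular list */

    static void insert_tail(int i)
    {
        int last = pool[0].prev;
        pool[i].prev = last;
        pool[i].next = 0;
        pool[last].next = i;
        pool[0].prev = i;
    }

    int main(void)
    {
        insert_tail(1);
        insert_tail(2);
        /* Walk forward from the sentinel until we wrap back to it. */
        int count = 0;
        for (int i = pool[0].next; i != 0; i = pool[i].next) {
            count++;
        }
        assert(count == 2 && pool[0].prev == 2 && pool[2].prev == 1);
        return 0;
    }
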
Sergey Sorokin e4d123caa9
tcg: Improve the alignment check infrastructure
Some architectures (e.g. ARMv8) need the address to be aligned
to a size greater than the size of the memory access.
The current costless alignment check implementation in QEMU is
enough to support such a check, but we need a way to specify
the alignment size.

Backports commit 1f00b27f17518a1bcb4cedca49eaec96a4d560bd from qemu
2018-02-25 02:23:28 -05:00
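
The gist in standalone form (names invented): the required alignment is carried as a power-of-two exponent alongside the access size, so it may legitimately exceed the size, and the check itself remains a single mask test.

    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* a_bits: required alignment as log2(bytes); may exceed log2(access size). */
    static bool is_misaligned(uint64_t addr, unsigned a_bits)
    {
        return (addr & ((UINT64_C(1) << a_bits) - 1)) != 0;
    }

    int main(void)
    {
        /* e.g. a 4-byte access that must be 16-byte aligned (a_bits = 4) */
        assert(!is_misaligned(0x1000, 4));
        assert(is_misaligned(0x1004, 4));   /* 4-byte aligned, but not 16-byte */
        return 0;
    }
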
Paolo Bonzini 9485b7c2e1
cpu: move exec-all.h inclusion out of cpu.h
exec-all.h contains TCG-specific definitions. It is not needed outside
TCG-specific files such as translate.c, exec.c or *helper.c.

One generic function had snuck into include/exec/exec-all.h; move it to
include/qom/cpu.h.

Backports commit 63c915526d6a54a95919ebece83fa9ca631b2508 from qemu
2018-02-24 02:39:08 -05:00
Paolo Bonzini 37f26922dd
qemu-common: push cpu.h inclusion out of qemu-common.h
Backports commit 33c11879fd422b759483ed25fef133ea900ea8d7 from qemu
2018-02-24 01:50:56 -05:00
Peter Maydell 4ca19f2cd6
tcg: Clean up includes
Clean up includes so that osdep.h is included first and headers
which it implies are not included manually.

This commit was created with scripts/clean-includes.

Backports commit 757e725b58c57d3ebb66a31fd2210df977a12154 from qemu
2018-02-19 01:04:30 -05:00
Richard Henderson 58e939b91f
tcg: Split trunc_shr_i32 opcode into extr[lh]_i64_i32
Rather than allow arbitrary shift+trunc, only concern ourselves
with low and high parts. This is all that was being used anyway.

Backports commit 609ad70562793937257c89d07bf7c1370b9fc9aa from qemu
2018-02-10 23:00:45 -05:00
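
In plain C the two ops simply select the low or high half, which is all the old shift+truncate form was ever used for:

    #include <assert.h>
    #include <stdint.h>

    static uint32_t extrl_i64_i32(uint64_t x) { return (uint32_t)x; }          /* low half  */
    static uint32_t extrh_i64_i32(uint64_t x) { return (uint32_t)(x >> 32); }  /* high half */

    int main(void)
    {
        assert(extrl_i64_i32(0x1122334455667788ull) == 0x55667788u);
        assert(extrh_i64_i32(0x1122334455667788ull) == 0x11223344u);
        return 0;
    }
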
Aurelien Jarno f279c93768
tcg: implement real ext_i32_i64 and extu_i32_i64 ops
Implement real ext_i32_i64 and extu_i32_i64 ops. They ensure that a
32-bit value is always converted to a 64-bit value and not propagated
through the register allocator or the optimizer.

Backports commit 4f2331e5b67af8172419eb1c8db510b497b30a7b from qemu
2018-02-10 22:45:13 -05:00
Aurelien Jarno 80223e7ad5
tcg: rename trunc_shr_i32 into trunc_shr_i64_i32
The op is sometimes named trunc_shr_i32 and sometimes trunc_shr_i64_i32,
and the name in the README doesn't match the name offered to the
frontends.

Always use the long name to make it clear it is a size changing op.

Backports commit 0632e555fc4d281d69cb08d98d500d96185b041f from qemu
2018-02-10 22:29:30 -05:00
Richard Henderson 6234d07489
tcg: Merge memop and mmu_idx parameters to qemu_ld/st
At the tcg opcode level, not at the tcg-op.h generator level.
This requires minor changes through all of the tcg backends,
but none of the cpu translators.

Backports commit 59227d5d45bb3c31dc2118011691c35b3c00879c from qemu
2018-02-10 19:01:49 -05:00
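
A hedged sketch of the packing (the field width here is an assumption for illustration, not taken from the commit): the memory-op flags and the MMU index travel as one opcode operand and are unpacked again by the backend.

    #include <assert.h>

    /* Pack a memop descriptor and an MMU index into a single operand. */
    static unsigned make_memop_idx(unsigned memop, unsigned mmu_idx)
    {
        assert(mmu_idx <= 15);            /* assumed 4-bit index field */
        return (memop << 4) | mmu_idx;
    }

    static unsigned get_memop(unsigned oi)  { return oi >> 4; }
    static unsigned get_mmuidx(unsigned oi) { return oi & 15; }

    int main(void)
    {
        unsigned oi = make_memop_idx(0x2b, 3);
        assert(get_memop(oi) == 0x2b && get_mmuidx(oi) == 3);
        return 0;
    }
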
Richard Henderson e0d99a1a06
tcg: Complete handling of ALWAYS and NEVER
This handling was missing from movcond and brcondi_i32 (but not brcondi_i64).

Backports commit 37ed3bf1ee07bb1a26adca0df8718f601f231c0b from qemu
2018-02-09 14:52:21 -05:00
Richard Henderson 232632e76c
tcg: Change translator-side labels to a pointer
This is improved type checking for the translators -- it's no longer
possible to accidentally swap arguments to the branch functions.

Note that the code generating backends still manipulate labels as int.

With notable exceptions, the scope of the change is just a few lines
for each target, so it's not worth building extra machinery to do this
change in per-target increments.

Backports commit 42a268c241183877192c376d03bd9b6d527407c7 from qemu
2018-02-09 14:17:56 -05:00
Richard Henderson 70f28c8bd5
tcg: Implement insert_op_before
Rather than reserving space in the op stream for optimization,
let the optimizer add ops as necessary.

Backports commit a4ce099a7a4b4734c372f6bf28f3362e370f23c1 from qemu
2018-02-09 13:11:50 -05:00
Lioncash 0273e6ae18
tcg: Put opcodes in a linked list
The previous setup required ops and args to be completely sequential,
and was error prone when it came to both iteration and optimization.
2018-02-09 12:54:05 -05:00
Richard Henderson 4d46959c3b
tcg: Reduce ifdefs in tcg-op.c
Almost completely eliminates the ifdefs in this file, improving
confidence in the lesser-used 32-bit builds.

Backports commit 3a13c3f34ce2058e0c2decc3b0f9f56be24c9400 from qemu
2018-02-09 08:35:52 -05:00
Richard Henderson 500c546444
tcg: Move some opcode generation functions out of line
Some of these functions are really quite large.  We have a number of
things that ought to be circularly dependent, but we duplicated code
to break that chain for the inlines.

This saved 25% of the code size of one of the translators I examined.
2018-02-09 08:10:00 -05:00