Commit graph

398 commits

Author SHA1 Message Date
Richard Henderson cd538f0b7e
tcg: Initialize cpu_env generically
This is identical for each target. So, move the initialization to
common code. Move the variable itself out of tcg_ctx and name it
cpu_env to minimize changes within targets.

This also means we can remove tcg_global_reg_new_{ptr,i32,i64},
since there are no longer global-register temps created by targets.
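
A minimal sketch of the generic form, modeled on the upstream commit
(the helper names tcg_global_reg_new_internal and temp_tcgv_ptr are
TCG internals; treat the exact signatures as illustrative):

    /* in tcg_context_init(), run once, identically for every target */
    TCGTemp *ts = tcg_global_reg_new_internal(s, TCG_TYPE_PTR,
                                              TCG_AREG0, "env");
    cpu_env = temp_tcgv_ptr(ts);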

Backports commit 1c2adb958fc07e5b3e81ed21b801c04a15f41f4f from qemu
2018-03-15 15:49:19 -04:00
Emilio G. Cota 23a55a277f
tcg: enable multiple TCG contexts in softmmu
This enables parallel TCG code generation. However, we do not take
advantage of it yet since tb_lock is still held during tb_gen_code.

In user-mode we use a single TCG context; see the documentation
added to tcg_region_init for the rationale.

Note that targets do not need any conversion: targets initialize a
TCGContext (e.g. defining TCG globals), and after this initialization
has finished, the context is cloned by the vCPU threads, each of
them keeping a separate copy.

TCG threads claim one entry in tcg_ctxs[] by atomically increasing
n_tcg_ctxs. Do not be too annoyed by the subsequent atomic_read's
of that variable and tcg_ctxs; they are there just to play nice with
analysis tools such as thread sanitizer.
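
A simplified sketch of that claiming scheme (the atomic helpers are
QEMU's; the surrounding details are assumptions):

    /* called once per TCG thread after the initial context is built */
    void tcg_register_thread(void)
    {
        TCGContext *s = g_malloc(sizeof(*s));
        unsigned int n;

        *s = tcg_init_ctx;                  /* clone the initial context */
        n = atomic_fetch_inc(&n_tcg_ctxs);  /* claim entry n */
        atomic_set(&tcg_ctxs[n], s);        /* publish it */
        tcg_ctx = s;                        /* thread-local pointer */
    }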

Note that we do not allocate an array of contexts (we allocate
an array of pointers instead) because when tcg_context_init
is called, we do not know yet how many contexts we'll use since
the bool behind qemu_tcg_mttcg_enabled() isn't set yet.

Previous patches folded some TCG globals into TCGContext. The non-const
globals remaining are only set at init time, i.e. before the TCG
threads are spawned. Here is a list of these set-at-init-time globals
under tcg/:

Only written by tcg_context_init:
- indirect_reg_alloc_order
- tcg_op_defs
Only written by tcg_target_init (called from tcg_context_init):
- tcg_target_available_regs
- tcg_target_call_clobber_regs
- arm: arm_arch, use_idiv_instructions
- i386: have_cmov, have_bmi1, have_bmi2, have_lzcnt,
have_movbe, have_popcnt
- mips: use_movnz_instructions, use_mips32_instructions,
use_mips32r2_instructions, got_sigill (tcg_target_detect_isa)
- ppc: have_isa_2_06, have_isa_3_00, tb_ret_addr
- s390: tb_ret_addr, s390_facilities
- sparc: qemu_ld_trampoline, qemu_st_trampoline (build_trampolines),
use_vis3_instructions

Only written by tcg_prologue_init:
- 'struct jit_code_entry one_entry'
- aarch64: tb_ret_addr
- arm: tb_ret_addr
- i386: tb_ret_addr, guest_base_flags
- ia64: tb_ret_addr
- mips: tb_ret_addr, bswap32_addr, bswap32u_addr, bswap64_addr

Backports commit 3468b59e18b179bc63c7ce934de912dfa9596122 from qemu
2018-03-14 14:32:34 -04:00
Emilio G. Cota f772fd986d
tcg: introduce regions to split code_gen_buffer
This is groundwork for supporting multiple TCG contexts.

The naive solution here is to split code_gen_buffer statically
among the TCG threads; this however results in poor utilization
if translation needs are different across TCG threads.

What we do here is to add an extra layer of indirection, assigning
regions that act just like pages do in virtual memory allocation.
(BTW if you are wondering about the chosen naming, I did not want
to use blocks or pages because those are already heavily used in QEMU).

We use a global lock to serialize allocations as well as statistics
reporting (we now export the size of the used code_gen_buffer with
tcg_code_size()). Note that for the allocator we could just use
a counter and atomic_inc; however, that would complicate the gathering
of tcg_code_size()-like stats. So given that the region operations are
not a fast path, a lock seems the most reasonable choice.
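
A sketch of the lock-serialized allocator described above (field and
function names are illustrative):

    /* hand out the next region; fail so the caller can flush all TBs */
    static bool tcg_region_alloc(TCGContext *s)
    {
        bool err = false;

        qemu_mutex_lock(&region.lock);
        if (region.current == region.n) {
            err = true;                      /* buffer exhausted */
        } else {
            tcg_region_assign(s, region.current++);
        }
        qemu_mutex_unlock(&region.lock);
        return err;
    }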

The effectiveness of this approach is clear after seeing some numbers.
I used the bootup+shutdown of debian-arm with '-tb-size 80' as a benchmark.
Note that I'm evaluating this after enabling per-thread TCG (which
is done by a subsequent commit).

* -smp 1, 1 region (entire buffer):
qemu: flush code_size=83885014 nb_tbs=154739 avg_tb_size=357
qemu: flush code_size=83884902 nb_tbs=153136 avg_tb_size=363
qemu: flush code_size=83885014 nb_tbs=152777 avg_tb_size=364
qemu: flush code_size=83884950 nb_tbs=150057 avg_tb_size=373
qemu: flush code_size=83884998 nb_tbs=150234 avg_tb_size=373
qemu: flush code_size=83885014 nb_tbs=154009 avg_tb_size=360
qemu: flush code_size=83885014 nb_tbs=151007 avg_tb_size=370
qemu: flush code_size=83885014 nb_tbs=151816 avg_tb_size=367

That is, 8 flushes.

* -smp 8, 32 regions (80/32 MB per region) [i.e. this patch]:

qemu: flush code_size=76328008 nb_tbs=141040 avg_tb_size=356
qemu: flush code_size=75366534 nb_tbs=138000 avg_tb_size=361
qemu: flush code_size=76864546 nb_tbs=140653 avg_tb_size=361
qemu: flush code_size=76309084 nb_tbs=135945 avg_tb_size=375
qemu: flush code_size=74581856 nb_tbs=132909 avg_tb_size=375
qemu: flush code_size=73927256 nb_tbs=135616 avg_tb_size=360
qemu: flush code_size=78629426 nb_tbs=142896 avg_tb_size=365
qemu: flush code_size=76667052 nb_tbs=138508 avg_tb_size=368

Again, 8 flushes. Note how buffer utilization is not 100%, but it
is close. Smaller region sizes would yield higher utilization,
but we want region allocation to be rare (it acquires a lock), so
we do not want to go too small.

* -smp 8, static partitioning of 8 regions (10 MB per region):
qemu: flush code_size=21936504 nb_tbs=40570 avg_tb_size=354
qemu: flush code_size=11472174 nb_tbs=20633 avg_tb_size=370
qemu: flush code_size=11603976 nb_tbs=21059 avg_tb_size=365
qemu: flush code_size=23254872 nb_tbs=41243 avg_tb_size=377
qemu: flush code_size=28289496 nb_tbs=52057 avg_tb_size=358
qemu: flush code_size=43605160 nb_tbs=78896 avg_tb_size=367
qemu: flush code_size=45166552 nb_tbs=82158 avg_tb_size=364
qemu: flush code_size=63289640 nb_tbs=116494 avg_tb_size=358
qemu: flush code_size=51389960 nb_tbs=93937 avg_tb_size=362
qemu: flush code_size=59665928 nb_tbs=107063 avg_tb_size=372
qemu: flush code_size=38380824 nb_tbs=68597 avg_tb_size=374
qemu: flush code_size=44884568 nb_tbs=79901 avg_tb_size=376
qemu: flush code_size=50782632 nb_tbs=90681 avg_tb_size=374
qemu: flush code_size=39848888 nb_tbs=71433 avg_tb_size=372
qemu: flush code_size=64708840 nb_tbs=119052 avg_tb_size=359
qemu: flush code_size=49830008 nb_tbs=90992 avg_tb_size=362
qemu: flush code_size=68372408 nb_tbs=123442 avg_tb_size=368
qemu: flush code_size=33555560 nb_tbs=59514 avg_tb_size=378
qemu: flush code_size=44748344 nb_tbs=80974 avg_tb_size=367
qemu: flush code_size=37104248 nb_tbs=67609 avg_tb_size=364

That is, 20 flushes. Note how a static partitioning approach uses
the code buffer poorly, leading to many unnecessary flushes.

Backports commit e8feb96fcc6c16eab8923332e86ff4ef0e2ac276 from qemu
2018-03-14 12:10:29 -04:00
Emilio G. Cota 5ad6116f20
tcg: allocate optimizer temps with tcg_malloc
Groundwork for supporting multiple TCG contexts.

While at it, also allocate temps_used directly as a bitmap of the
required size, instead of using a bitmap of TCG_MAX_TEMPS via
TCGTempSet.
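
In sketch form (tcg_malloc, bitmap_zero and BITS_TO_LONGS are QEMU
utilities; the variable names are illustrative):

    /* per-run scratch, sized to what this translation actually needs */
    int nb_temps = s->nb_temps;
    unsigned long *temps_used =
        tcg_malloc(BITS_TO_LONGS(nb_temps) * sizeof(unsigned long));

    bitmap_zero(temps_used, nb_temps);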

Performance-wise we lose about 1.12% in a translation-heavy workload
such as booting+shutting down debian-arm:

Performance counter stats for 'taskset -c 0 arm-softmmu/qemu-system-arm \
-machine type=virt -nographic -smp 1 -m 4096 \
-netdev user,id=unet,hostfwd=tcp::2222-:22 \
-device virtio-net-device,netdev=unet \
-drive file=die-on-boot.qcow2,id=myblock,index=0,if=none \
-device virtio-blk-device,drive=myblock \
-kernel kernel.img -append console=ttyAMA0 root=/dev/vda1 \
-name arm,debug-threads=on -smp 1' (10 runs):

exec time (s) Relative slowdown wrt original (%)
---------------------------------------------------------------
original 20.213321616 0.
tcg_malloc 20.441130078 1.1270214
TCGContext 20.477846517 1.3086662
g_malloc 20.780527895 2.8061013

The other two alternatives shown in the table are:
- TCGContext: embed temps[TCG_MAX_TEMPS] and TCGTempSet used_temps
in TCGContext. This is simple enough but it isn't faster than using
tcg_malloc; moreover, it wastes memory.
- g_malloc: allocate/deallocate both temps and used_temps every time
tcg_optimize is executed.

Backports commit 34184b071817b4f9edbfd1aa2225c196f05a0947 from qemu
2018-03-14 12:10:28 -04:00
Emilio G. Cota 1be7b55bb4
tcg: introduce **tcg_ctxs to keep track of all TCGContext's
Groundwork for supporting multiple TCG contexts.

Note that having n_tcg_ctxs is unnecessary. However, it is
convenient to have it, since it will simplify iterating over the
array: we'll have just a for loop instead of having to iterate
over a NULL-terminated array (which would require n+1 elems)
or having to check with ifdef's for usermode/softmmu.
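
That is, iteration becomes a plain indexed loop, sketched below:

    unsigned int n = atomic_read(&n_tcg_ctxs);

    for (unsigned int i = 0; i < n; i++) {
        TCGContext *s = atomic_read(&tcg_ctxs[i]);
        /* ... per-context work ... */
    }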

Backports commit df2cce2968069526553d82331ce9817eaca6b03a from qemu
2018-03-14 12:10:25 -04:00
Emilio G. Cota 078c9e7e3b
tcg: take tb_ctx out of TCGContext
Groundwork for supporting multiple TCG contexts.

Backports commit 44ded3d04821bec57407cc26a8b4db620da2be04 from qemu
2018-03-14 09:18:12 -04:00
Emilio G. Cota f593db445a
tcg: check CF_PARALLEL instead of parallel_cpus
Thereby decoupling the resulting translated code from the current state
of the system.

The tb->cflags field is not passed to tcg generation functions. So
we add a field to TCGContext, storing there a copy of tb->cflags.

Most architectures have <= 32 registers, which results in a 4-byte hole
in TCGContext. Use this hole for the new field.
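
In sketch form (the neighboring field is an assumption; only the
placement idea is from the commit):

    struct TCGContext {
        /* ... */
        TCGRegSet reserved_regs;  /* 4 bytes when the host has <= 32 regs */
        uint32_t tb_cflags;       /* copy of tb->cflags; fills the hole */
        /* ... 8-byte-aligned members follow ... */
    };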

Backports commit e82d5a2460b0e176128027651ff9b104e4bdf5cc from qemu
2018-03-13 15:17:59 -04:00
Lioncash 035f1afa7d
tcg: move tcg backend files into accel/tcg/
Move tcg-runtime.c, translate-all.(ch) and translate-common.c into the
accel/tcg/ subdirectory and update the related trace-events file.

Backports commit 244f144134d0dd182f1af8654e7f9a79fe770368 and applies
relevant changes made in db432672dc50ed86dda17ac821b7eb07411a90af and
d9bb58e51068dfc48746c6af0179926c8dc05bce from qemu
2018-03-13 11:48:15 -04:00
Lioncash 99dbbf1571
tcg/optimize: Perform comparison pass with qemu
Keeps formatting and code synced
2018-03-12 18:06:29 -04:00
Lioncash 21b0afe218
tcg: Perform comparison pass with qemu
Makes formatting and code consistent with qemu
2018-03-12 18:03:06 -04:00
Lioncash b28c64ed34
tcg/i386: Amend bad merge
2018-03-12 10:11:03 -04:00
Richard Henderson a16ee979fc
tcg/i386: Always use TZCNT when available
I think this is cleaner than sometimes using BSF.

Backports commit 39f099ec9d6d420b6fe6f7f4f8ed80ae29c65ff2 from qemu
2018-03-12 05:11:42 -04:00
Richard Henderson 7e327aaf84
util: Introduce include/qemu/cpuid.h
Clang 3.9 passes the CONFIG_AVX2_OPT configure test. However, the
supplied <cpuid.h> does not contain the bit_AVX2 define that we use
when detecting whether the routine can be enabled.

Introduce a qemu-specific header that uses the compiler's definition
of __cpuid et al, but supplies any missing bit_* definitions needed.
This avoids introducing any extra ifdefs to util/bufferiszero.c, and
allows quite a few to be removed from tcg/i386/tcg-target.inc.c.
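
The pattern, sketched (bit positions per the Intel SDM; which defines
need supplying here is illustrative):

    /* include/qemu/cpuid.h */
    #include <cpuid.h>            /* the compiler's __cpuid, __get_cpuid */

    #ifndef bit_AVX2
    #define bit_AVX2 (1 << 5)     /* CPUID.(EAX=7,ECX=0):EBX bit 5 */
    #endif
    #ifndef bit_BMI2
    #define bit_BMI2 (1 << 8)     /* CPUID.(EAX=7,ECX=0):EBX bit 8 */
    #endif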

Backports commit 5dd8990841a9e331d9d4838a116291698208cbb6 from qemu
2018-03-09 12:12:00 -05:00
Richard Henderson d1da0b8f6d
tcg/aarch64: Add vector operations
Backports commit 14e4c1e2355473ccb2939afc69ac8f25de103b92 from qemu
2018-03-07 08:07:58 -05:00
Richard Henderson b3e89e9996
tcg/i386: Add vector operations
The x86 vector instruction set is extremely irregular. With newer
editions, Intel has filled in some of the blanks. However, we don't
get many 64-bit operations until SSE4.2, introduced in 2009.

The subsequent edition was for AVX1, introduced in 2011, which added
three-operand addressing and adjusted how all instructions should be
encoded.

Given the relatively narrow two-year window between possible to support
and desirable to support, and to vastly simplify code maintenance,
I am only planning to support AVX1 and later cpus.

Backports commit 770c2fc7bb70804ae9869995fd02dadd6d7656ac from qemu
2018-03-07 08:07:40 -05:00
Richard Henderson 7f55d6ed69
tcg/optimize: Handle vector opcodes during optimize
Trivial move and constant propagation. Some identity and constant
function folding, but nothing that requires knowledge of the size
of the vector element.

Backports commit 170ba88f45bd7b1c5593021ed8e174f663b0bd1a from qemu
2018-03-06 16:10:09 -05:00
Richard Henderson ac4d051b05
tcg: Add generic vector helpers with a scalar operand
Use dup to convert a non-constant scalar to a third vector.

Add addition, multiplication, and logical operations with an immediate.
Add addition, subtraction, multiplication, and logical operations with
a non-constant scalar. Allow for the front-end to build operations in
which the scalar operand comes first.
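
Illustrative front-end usage of the new immediate and scalar forms
(offsets and sizes are placeholders):

    /* d = a + 1, 32-bit elements */
    tcg_gen_gvec_addi(MO_32, dofs, aofs, 1, oprsz, maxsz);
    /* d = a + dup(scalar), scalar held in a TCGv_i64 */
    tcg_gen_gvec_adds(MO_32, dofs, aofs, scalar, oprsz, maxsz);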

Backports commit 22fc3527034678489ec554e82fd52f8a7f05418e from qemu
2018-03-06 16:10:09 -05:00
Richard Henderson 57bdf0faa2
tcg: Add generic helpers for saturating arithmetic
No vector ops as yet. SSE only has direct support for 8- and 16-bit
saturation; handling 32- and 64-bit saturation is much more expensive.
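
A usage sketch of one of the new helpers' expanders (offsets are
placeholders):

    /* d = ssadd(a, b): signed saturating add, 8-bit elements */
    tcg_gen_gvec_ssadd(MO_8, dofs, aofs, bofs, oprsz, maxsz);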

Backports commit f49b12c6e6a75a5bd109bcbbda072b24e5fb8dfd from qemu
2018-03-06 16:10:09 -05:00
Richard Henderson ab8579123e
tcg: Add generic vector ops for multiplication
Backports commit 3774030a3e523689df24a7ed22854ce7a06b0116 from qemu
2018-03-06 16:10:08 -05:00
Richard Henderson f9c4930ecd
tcg: Add generic vector ops for comparisons
Backports commit 212be173f01e85e6589fd76676827953a84a732b from qemu
2018-03-06 16:09:38 -05:00
Richard Henderson 577ee114c3
tcg: Add generic vector ops for constant shifts
Opcodes are added for scalar and vector shifts, but, considering their
varied semantics, they are not exposed to the front ends. They are
provided anyway, in case they are needed for backend expansion.
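
A usage sketch of the constant-shift expanders that are exposed
(offsets are placeholders):

    /* d = a << 3, 16-bit elements */
    tcg_gen_gvec_shli(MO_16, dofs, aofs, 3, oprsz, maxsz);
    /* d = (signed)a >> 1, 32-bit elements */
    tcg_gen_gvec_sari(MO_32, dofs, aofs, 1, oprsz, maxsz);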

Backports commit d0ec97967f940bbc11dced83422b39c224127f1e from qemu
2018-03-06 14:03:30 -05:00
Richard Henderson 64365612bf
tcg: Add generic vector expanders
Backports commit db432672dc50ed86dda17ac821b7eb07411a90af from qemu
2018-03-06 13:42:52 -05:00
Richard Henderson 12fb906688
tcg: Standardize integral arguments to expanders
Some functions use intN_t arguments, some use uintN_t, and some just
use "unsigned". To aid putting function pointers in tables, we
need consistency.

Backports commit 474b2e8f0f765515515b495e6872b5e18a660baf from qemu
2018-03-06 12:18:28 -05:00
Richard Henderson b9cd924fa5
tcg: Add types and basic operations for host vectors
Nothing uses or enables them yet.

Backports commit d2fd745fe8b9ac574d28b7ac63c39f6529749bd2 from qemu
2018-03-06 12:13:32 -05:00
Richard Henderson 9ef32fc039
tcg: Allow multiple word entries into the constant pool
This will be required for storing vector constants.

Backports commit da73a4abca6acefc4bb55d30bd0242bdaddb6045 from qemu
2018-03-06 11:43:21 -05:00
Lioncash 02eee6d5f7
tcg/ppc: Update to commit 030ffe39dd4128eb90483af82a5b23b23054a466
2018-03-06 09:16:37 -05:00
Richard Henderson 6212981120
tcg/ppc: Support tlb offsets larger than 64k
AArch64 with SVE has an offset of 80k to the 8th TLB.

Backports commit 4a64e0fd6876e45b34cd87b700ee30ef5c10c87a from qemu
2018-03-06 09:14:05 -05:00
Richard Henderson c4f6a7d06d
tcg/arm: Support tlb offsets larger than 64k
AArch64 with SVE has an offset of 80k to the 8th TLB.

Backports commit 71f9cee9d0a36dc4c00dfeeeca1301f265268f62 from qemu
2018-03-06 09:13:17 -05:00
Richard Henderson 9cd6985799
tcg/arm: Fix double-word comparisons
The code sequence we were generating was only good for unsigned
comparisons. For signed comparisons, use the sequence from gcc.

Fixes booting of ppc64 firmware, with a patch changing the code
sequence for ppc comparisons.

Backports commit 7170ac33135e6ecf89752d3949bcecf9b9766d1c from qemu
2018-03-06 09:12:14 -05:00
Richard Henderson bbd87f9d73
tcg: Add tcg_signed_cond
Complementing the existing tcg_unsigned_cond.

Backports commit 923ed1750186591b04d7d61399f6d68b4e0608f2 from qemu
2018-03-05 16:55:17 -05:00
Richard Henderson 140058221d
tcg: Generalize TCGOp parameters
We had two fields specific to INDEX_op_call. Rename these and
add some macros so that the fields may be reused for other opcodes.

Backports commit cd9090aa9dbba30db8aec9a2fc103aaf1ab0f5a7 from qemu
2018-03-05 16:53:50 -05:00
Richard Henderson 7fe5f620df
tcg: Dynamically allocate TCGOps
With no fixed array allocation, we can't overflow a buffer.
This will be important as optimizations related to host vectors
may expand the number of ops used.

Use QTAILQ to link the ops together.
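
A sketch of the linkage (QTAILQ macros from qemu/queue.h; field names
are illustrative):

    QTAILQ_ENTRY(TCGOp) link;   /* in TCGOp */
    QTAILQ_HEAD(, TCGOp) ops;   /* in TCGContext */

    /* allocate and append; no fixed-size array to overflow */
    TCGOp *op = tcg_malloc(sizeof(TCGOp));
    memset(op, 0, sizeof(TCGOp));
    QTAILQ_INSERT_TAIL(&tcg_ctx->ops, op, link);

    /* walk ops in emission order */
    QTAILQ_FOREACH(op, &tcg_ctx->ops, link) {
        /* ... */
    }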

Backports commit 15fa08f8451babc88d733bd411d4c94976f9d0f8 from qemu
2018-03-05 16:34:40 -05:00
Richard Henderson 5f074f09ab
tcg: Remove TCGV_UNUSED* and TCGV_IS_UNUSED*
These are now trivial sets and tests against NULL. Unwrap.

Backports commit f764718d0cb30af9f1f8e1d6a33622cc05ca4155 from qemu
2018-03-05 15:58:15 -05:00
Richard Henderson 5ef155a68f
tcg/s390x: Use constant pool for prologue
Rather than have separate code only used for guest_base,
rely on a recent change to handle constant pool entries.

Backports commit ba2c747992f8c315c2fbddba196ce9137430d61d from qemu
2018-03-05 11:28:39 -05:00
Richard Henderson ef3f552229
tcg: Allow constant pool entries in the prologue
Both ARMv6 and AArch64 currently may drop complex guest_base values
into the constant pool. But generic code wasn't expecting that, and
the pool is not emitted. Correct that.

Backports commit 5b38ee31616d1532c3c3a6dc644a9160d608ed2f from qemu
2018-03-05 11:25:56 -05:00
Richard Henderson ab9df6244c
tcg: Use offsets not indices for TCGv_*
Using the offset of a temporary, relative to TCGContext, rather than
its index means that we don't use 0. That leaves offset 0 free for
a NULL representation without having to leave index 0 unused.
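
In sketch form, mirroring the upstream helpers (details illustrative):

    /* a TCGv_* now encodes the temp's byte offset within TCGContext */
    static inline TCGTemp *tcgv_i32_temp(TCGv_i32 v)
    {
        uintptr_t o = (uintptr_t)v;   /* offset 0 is reserved for NULL */
        return (TCGTemp *)((char *)tcg_ctx + o);
    }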

Backports commit e89b28a63501c0ad6d2501fe851d0c5202055e70 from qemu
2018-03-05 10:12:08 -05:00
Richard Henderson 4d9c8583fa
tcg: Remove TCGV_EQUAL*
When we used structures for TCGv_*, we needed a macro in order to
perform a comparison. Now that we use pointers, this is just clutter.

Backports commit 11f4e8f8bfaa2caaab24bef6bbbb8a0205015119 from qemu
2018-03-05 09:16:07 -05:00
Richard Henderson d450156414
tcg: Remove GET_TCGV_* and MAKE_TCGV_*
The GET and MAKE functions weren't really specific enough.
We now have a full complement of functions that convert exactly
between temporaries, arguments, tcgv pointers, and indices.

The target/sparc change is also a bug fix, which would have affected
a host that defines TCG_TARGET_HAS_extr[lh]_i64_i32, i.e. MIPS64.

Backports commit dc41aa7d34989b552efe712ffe184236216f960b from qemu
2018-03-05 09:12:26 -05:00
Richard Henderson 960eb3f4f9
tcg: Introduce temp_tcgv_{i32,i64,ptr}
Backports commit 085272b35e0644fea373c33b5265c1818b7a978c from qemu
2018-03-05 08:55:52 -05:00
Richard Henderson 2bb5011b18
tcg: Introduce tcgv_{i32,i64,ptr}_{arg,temp}
Transform TCGv_* to an "argument" or a temporary.
For now, an argument is simply the temporary index.

Backports commit ae8b75dc6ec808378487064922f25f1e7ea7a9be from qemu
2018-03-05 08:46:12 -05:00
Richard Henderson 9f8c6a456b
tcg: Use per-temp state data in optimize
While we're touching many of the lines anyway, adjust the naming
of the functions to better distinguish when "TCGArg" vs "TCGTemp"
should be used.

Backports commit 6349039d0b06eda59820629b934944246b14a1c1 from qemu
2018-03-05 08:24:06 -05:00
Richard Henderson 387060ccf5
tcg: Remove unused TCG_CALL_DUMMY_TCGV
Backports commit 54534d7cfd3bdff1aa1f6c9472d94243d2303656 from qemu
2018-03-05 07:52:35 -05:00
Richard Henderson d104b792a6
tcg: Change temp_allocate_frame arg to TCGTemp
Backports commit 2272e4a791b7e1a01ffac143616ba4ece9a5762d from qemu
2018-03-05 07:51:40 -05:00
Richard Henderson 35a7a9c9a4
tcg: Avoid loops against variable bounds
Copy s->nb_globals or s->nb_temps to a local variable for the purposes
of iteration. This should allow the compiler to use low-overhead
looping constructs on some hosts.
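
The pattern, as a sketch:

    int nb_temps = s->nb_temps;   /* local copy: a loop-invariant bound */

    for (int i = 0; i < nb_temps; i++) {
        /* the compiler need not reload s->nb_temps each iteration */
    }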

Backports commit ac3b88911ebc6fc841f28898ee8aed40839debe2 from qemu
2018-03-05 07:50:06 -05:00
Richard Henderson 1f4ac863bf
tcg: Use per-temp state data in liveness
This avoids having to allocate external memory for each temporary.

Backports commit b83eabeac06e38706738bd5e92b1ba117a1b554d from qemu
2018-03-05 07:47:51 -05:00
Richard Henderson 87f2067aac
tcg: Introduce temp_arg, export temp_idx
At the same time, drop the TCGContext argument and use tcg_ctx instead.

Backports commit 1807f4c40098070008eb84b2032e25b7ac42569e from qemu
2018-03-05 07:24:17 -05:00
Richard Henderson a659a03ff5
tcg: Return NULL temp for TCG_CALL_DUMMY_ARG
Backports commit c6c7d84df8889b9d6298466999b88a8a42e5f976 from qemu
2018-03-05 07:22:38 -05:00
Richard Henderson 010ded3088
tcg: Add temp_global bit to TCGTemp
This avoids needing to test the index of a temp against nb_globals.
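
In sketch form (helper names illustrative):

    /* before: if (temp_idx(ts) < tcg_ctx->nb_globals) ... */
    if (ts->temp_global) {
        /* ... global-specific handling ... */
    }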

Backports commit fa477d25470187030614288d35bc734edffa41ee from qemu
2018-03-05 07:21:10 -05:00
Richard Henderson a9c46ad7a0
tcg: Introduce arg_temp
Backports commit 434391390ba99996af1591b427a73b3f5c05065e from qemu
2018-03-05 07:17:44 -05:00
Richard Henderson c8f0f6901e
tcg: Propagate TCGOp down to allocators
Backports commit dd186292017641d5b31fc13225a420677e1d20d3 from qemu
2018-03-05 07:12:48 -05:00