Split arm_gen_test_cc into 3 functions, so that it can be reused
for non-branch TCG comparisons.
Backports commit 6c2c63d3a02c79e9035ca0370cc549d0f938a4dd from qemu
In ffc6372851d8631a9f9fa56ec613b3244dc635b9, we swapped the guest
base to the address base register from the address index register.
Except that 31 in the base slot is SP not XZR, so we need to be
more intelligent about which reg gets placed in which slot.
Backports commit 352bcb0a2b816ff9ab9d75d0f2384650d9e9ab19 from qemu
Rather than allow arbitrary shift+trunc, only concern ourselves
with low and high parts. This is all that was being used anyway.
Backports commit 609ad70562793937257c89d07bf7c1370b9fc9aa from qemu
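
A minimal sketch of the two remaining cases, in plain C (the helper names
here are illustrative, not the TCG op names):

#include <inttypes.h>
#include <stdio.h>

/* Sketch only: the two size-changing cases that remain are "take the low
 * 32 bits" and "take the high 32 bits" of a 64-bit value; arbitrary
 * shift-then-truncate combinations are no longer expressible. */
static uint32_t extract_low(uint64_t x)  { return (uint32_t)x; }
static uint32_t extract_high(uint64_t x) { return (uint32_t)(x >> 32); }

int main(void)
{
    uint64_t v = 0x1122334455667788ULL;
    printf("low  = %08" PRIx32 "\n", extract_low(v));   /* 55667788 */
    printf("high = %08" PRIx32 "\n", extract_high(v));  /* 11223344 */
    return 0;
}
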
They behave the same as ext32s_i64 and ext32u_i64 from the constant
folding and zero propagation point of view, except that they can't
be replaced by a mov, so we don't compute the affected value.
Backports commit 8bcb5c8f34f9215d4f88f388c7ff14c9bd5cecd3 from qemu
Implement real ext_i32_i64 and extu_i32_i64 ops. They ensure that a
32-bit value is always converted to a 64-bit value and not propagated
through the register allocator or the optimizer.
Backports commit 4f2331e5b67af8172419eb1c8db510b497b30a7b from qemu
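
For reference, a sketch of the two widening behaviours in plain C (only the
op names come from the commit; everything else is illustrative):

#include <inttypes.h>
#include <stdio.h>

/* Sketch only: sign extension versus zero extension from 32 to 64 bits. */
int main(void)
{
    int32_t v = -1;                          /* 0xffffffff */
    uint64_t sext = (uint64_t)(int64_t)v;    /* ext_i32_i64:  ffffffffffffffff */
    uint64_t zext = (uint64_t)(uint32_t)v;   /* extu_i32_i64: 00000000ffffffff */
    printf("signed   -> %016" PRIx64 "\n", sext);
    printf("unsigned -> %016" PRIx64 "\n", zext);
    return 0;
}
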
The op is sometimes named trunc_shr_i32 and sometimes trunc_shr_i64_i32,
and the name in the README doesn't match the name offered to the
frontends.
Always use the long name to make it clear it is a size-changing op.
Backports commit 0632e555fc4d281d69cb08d98d500d96185b041f from qemu
Instead of using an enum which could be either a copy or a const, track
them separately. This will be used in the next patch.
Constants are tracked through a bool. Copies are tracked by initializing
a temp's next_copy and prev_copy to itself, which allows the code to be
simplified a bit.
Backports commit b41059dd9deec367a4ccd296659f0bc5de2dc705 from qemu
Add two accessor functions temp_is_const and temp_is_copy, to make the
code more readable and make code change easier.
Backports commit d9c769c60948815ee03b2684b1c1c68ee4375149 from qemu
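
A minimal sketch of the layout these two patches describe; the real
tcg/optimize.c of this era links copies by temp index rather than by
pointer, and the struct is trimmed down to the fields the commits mention:

#include <stdbool.h>
#include <stdint.h>

struct temp_info {
    bool is_const;                  /* constants are tracked through a bool */
    uint64_t val;
    struct temp_info *next_copy;    /* circular copy list ...               */
    struct temp_info *prev_copy;    /* ... initialized to the temp itself   */
};

static void reset_temp(struct temp_info *t)
{
    t->is_const = false;
    t->next_copy = t;
    t->prev_copy = t;
}

/* The accessors added by the second patch: a temp is a copy only when its
 * copy list contains more than just itself. */
static bool temp_is_const(const struct temp_info *t)
{
    return t->is_const;
}

static bool temp_is_copy(const struct temp_info *t)
{
    return t->next_copy != t;
}
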
The tcg_temp_info structure uses 24 bytes per temp. Now that we emulate
vector registers on most guests, it's not uncommon to have more than 100
used temps. This means we have to initialize more than 2kB at least twice
per TB, often more when there are a few goto_tb.
Instead, use a TCGTempSet bit array to track which temps are in use in
the current basic block. This means there are only around 16 bytes to
initialize.
This improves the boot time of a MIPS guest on an x86-64 host by around
7% and moves tcg_optimize out of the top of the profiler list.
Backports commit 1208d7dd5fddc1fbd98de800d17429b4e5578848 from qemu
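
A sketch of the bitmap idea (TCGTempSet is such a bit array; the sizes and
helper names below are illustrative, not the actual tcg/optimize.c code):

#include <limits.h>
#include <stdbool.h>
#include <string.h>

#define MAX_TEMPS 512
#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

/* Sketch only: instead of clearing a large per-temp info array at the start
 * of every basic block, clear a small bit array and initialize a temp's
 * info lazily the first time its bit is set. */
static unsigned long temps_used[MAX_TEMPS / BITS_PER_LONG + 1];

static void reset_all_temps(void)
{
    memset(temps_used, 0, sizeof(temps_used));   /* a few dozen bytes */
}

static bool temp_used(int idx)
{
    return (temps_used[idx / BITS_PER_LONG] >> (idx % BITS_PER_LONG)) & 1;
}

static void init_temp_info(int idx)
{
    if (!temp_used(idx)) {
        temps_used[idx / BITS_PER_LONG] |= 1UL << (idx % BITS_PER_LONG);
        /* ... initialize the per-temp info entry here ... */
    }
}
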
By convention, on a 64-bit host TCG internally stores 32-bit constants
as sign-extended. This is not the case in the optimizer when a 32-bit
constant is folded.
This doesn't seem to have consequences beyond suboptimal code generation.
For instance, the x86 backend assumes sign-extended constants, and in some
rare cases uses a 32-bit unsigned immediate 0xffffffff instead of an 8-bit
signed immediate 0xff for the constant -1. This is with a ppc guest:
before
------
---- 0x9f29cc
movi_i32 tmp1,$0xffffffff
movi_i32 tmp2,$0x0
add2_i32 tmp0,CA,CA,tmp2,r6,tmp2
add2_i32 tmp0,CA,tmp0,CA,tmp1,tmp2
mov_i32 r10,tmp0
0x7fd8c7dfe90c: xor %ebp,%ebp
0x7fd8c7dfe90e: mov %ebp,%r11d
0x7fd8c7dfe911: mov 0x18(%r14),%r9d
0x7fd8c7dfe915: add %r9d,%r10d
0x7fd8c7dfe918: adc %ebp,%r11d
0x7fd8c7dfe91b: add $0xffffffff,%r10d
0x7fd8c7dfe922: adc %ebp,%r11d
0x7fd8c7dfe925: mov %r11d,0x134(%r14)
0x7fd8c7dfe92c: mov %r10d,0x28(%r14)
after
-----
---- 0x9f29cc
movi_i32 tmp1,$0xffffffffffffffff
movi_i32 tmp2,$0x0
add2_i32 tmp0,CA,CA,tmp2,r6,tmp2
add2_i32 tmp0,CA,tmp0,CA,tmp1,tmp2
mov_i32 r10,tmp0
0x7f37010d490c: xor %ebp,%ebp
0x7f37010d490e: mov %ebp,%r11d
0x7f37010d4911: mov 0x18(%r14),%r9d
0x7f37010d4915: add %r9d,%r10d
0x7f37010d4918: adc %ebp,%r11d
0x7f37010d491b: add $0xffffffffffffffff,%r10d
0x7f37010d491f: adc %ebp,%r11d
0x7f37010d4922: mov %r11d,0x134(%r14)
0x7f37010d4929: mov %r10d,0x28(%r14)
Backports commit 29f3ff8d6cbc28f79933aeaa25805408d0984a8f from qemu
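
A minimal sketch of the convention being enforced (illustrative, not the
actual optimizer code): after folding a 32-bit op on a 64-bit host, the
result must be stored sign-extended, which is what turns the 32-bit
immediate above into the 8-bit form.

#include <inttypes.h>
#include <stdio.h>

/* Sketch only: fold a 32-bit subtraction and store the result the way a
 * 64-bit host's TCG expects 32-bit constants, i.e. sign-extended. */
static uint64_t fold_sub_i32(uint64_t x, uint64_t y)
{
    uint32_t res = (uint32_t)x - (uint32_t)y;
    return (uint64_t)(int64_t)(int32_t)res;    /* sign-extend to 64 bits */
}

int main(void)
{
    /* 0 - 1 folds to -1; stored as 0xffffffffffffffff, not 0xffffffff */
    printf("%016" PRIx64 "\n", fold_sub_i32(0, 1));
    return 0;
}
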
Due to a copy&paste, the new op value is tested against mov_i32 instead
of movi_i32. The test is therefore always false. Fix that.
Backports commit 961521261a3d600b0695b2e6d2b0f490076f7e90 from qemu
The tcg_constant_folding function ends up doing all the optimizations
(which is a good thing, since it avoids looping over all ops multiple
times), so make that clear and just rename it tcg_optimize.
Backports commit 36e60ef6ac5d8a262d0fbeedfdb2b588514cb1ea from qemu
Most of the calls to tcg_opt_gen_mov are preceded by a test to check if
the source temp is a constant. Fold that into the tcg_opt_gen_mov
function.
Backports commit 97a79eb70dd35a24fda87d86196afba5e6f21c5d from qemu
Each call to tcg_opt_gen_mov is preceded by a test to check if the
source and destination temps are copies. Fold that into the
tcg_opt_gen_mov function.
Backports commit 5365718a9afeeabde3784d82a542f8ad909b18cf from qemu
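
A rough sketch of the shape these two patches give the helper; the struct
and generator functions below are simplified stand-ins for the real
tcg/optimize.c types, not the actual code:

#include <stdbool.h>
#include <stdint.h>

struct temp {
    bool is_const;
    uint64_t val;
    struct temp *copy_of;          /* crude stand-in for the copy list */
};

static void gen_movi(struct temp *dst, uint64_t val)
{
    dst->is_const = true;
    dst->val = val;
}

static void gen_mov(struct temp *dst, struct temp *src)
{
    dst->is_const = false;
    dst->copy_of = src;
}

/* After the two patches, callers only call this helper; it decides whether
 * to emit a movi (constant source), drop the op entirely (source and
 * destination are already copies), or emit a plain mov. */
static void opt_gen_mov(struct temp *dst, struct temp *src)
{
    if (dst == src || dst->copy_of == src || src->copy_of == dst) {
        return;                    /* already the same value, no op needed */
    }
    if (src->is_const) {
        gen_movi(dst, src->val);
    } else {
        gen_mov(dst, src);
    }
}
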
We can get the opcode using the TCGOp pointer. It needs to be
dereferenced, but that is done anyway a few lines below to write
the new value.
Backports commit 8d6a91602ea824ef4435ea38fd475387eecc098c from qemu
We can get the opcode using the TCGOp pointer. It needs to be
dereferenced, but that is done anyway a few lines below to write
the new value.
Backports commit ebd27391b00cdafc81e0541a940686137b3b48df from qemu
The check in dins is required to avoid triggering an assertion
in tcg_gen_deposit_tl. The check in dext is just for completeness.
Fold the other D cases in via fallthru.
Backports commit b7f26e523914b982a1c1bfa8295f77ff9787c33c from qemu
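
For context, a sketch of the field-must-fit precondition that
tcg_gen_deposit_tl enforces with assertions and that the dins check has to
guarantee up front (QEMU's bitops.h has a deposit64 helper with similar
semantics; this standalone version is illustrative):

#include <assert.h>
#include <stdint.h>

/* Deposit 'len' bits of 'src' into 'dst' at bit offset 'ofs'.  The
 * preconditions mirror the kind of assertion the generator triggers when a
 * guest instruction encodes an out-of-range field. */
static uint64_t deposit64(uint64_t dst, unsigned ofs, unsigned len, uint64_t src)
{
    uint64_t mask;

    assert(len > 0);
    assert(ofs + len <= 64);
    mask = (len == 64 ? ~0ULL : (1ULL << len) - 1) << ofs;
    return (dst & ~mask) | ((src << ofs) & mask);
}
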
Similar to the same fix for user-mode, except this instance
occurs on the softmmu path. Again, the tlb addend must be
the base register, while the guest address is the index.
Backports commit 80adb8fcad4778376a11d394a9e01516819e2327 from qemu
Thanks to the previous patch, it is now easy for tcg_out_qemu_ld and
tcg_out_qemu_st to use a 32-bit zero extended offset. However, the
guest base register x28 must be the base and addr_reg must be the
index.
Backports commit ffc6372851d8631a9f9fa56ec613b3244dc635b9 from qemu
The new argument lets you pick uxtw or uxtx mode for the offset
register. For now, all callers pass TCG_TYPE_I64 so that uxtx
is generated. The bits for uxtx are removed from I3312_TO_I3310.
Backports commit 6c0f0c0f124718650a8d682ba275044fc02f6fe2 from qemu
The addition of MO_AMASK means that places that used inverted masks
need to be changed to use positive masks, and places that failed to
mask the intended bits need updating.
Backports commit 2b7ec66f025263a5331f37d5ad78a625496fd7bd from qemu
These modifiers control, on a per-memory-op basis, whether
unaligned memory accesses are allowed. The default setting
reflects the target's definition of ALIGNED_ONLY.
Backports commit dfb36305626636e2e07e0c5acd3a002a5419399e from qemu
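
A rough sketch of how such a per-op flag is tested; the bit values here are
made up for illustration and do not match the real TCGMemOp encoding in
tcg.h:

#include <stdbool.h>

/* Illustrative flag layout only. */
enum {
    MO_SIZE_BITS = 0x03,   /* access size */
    MO_SIGN_BIT  = 0x04,
    MO_BSWAP_BIT = 0x08,
    MO_ALIGN_BIT = 0x10,   /* this access must be aligned */
};

/* With an explicit alignment mask, backends and helpers test the bit
 * positively instead of carrying inverted masks around. */
static bool memop_requires_alignment(int memop)
{
    return (memop & MO_ALIGN_BIT) != 0;
}
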
The extra information is not yet used but it is now available.
This requires minor changes through all of the tcg backends.
Backports commit 3972ef6f830d65e9bacbd31257abedc055fd6dc8 from qemu
At the tcg opcode level, not at the tcg-op.h generator level.
This requires minor changes through all of the tcg backends,
but none of the cpu translators.
Backports commit 59227d5d45bb3c31dc2118011691c35b3c00879c from qemu
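
A sketch of what merging the parameters amounts to: the memop and the mmu
index are packed into one value at the call site and unpacked in the
helper. The shift and mask below are assumptions, not necessarily QEMU's
layout:

#include <stdio.h>

typedef unsigned TCGMemOpIdx;

/* Pack a memory op and an MMU index into a single parameter. */
static TCGMemOpIdx make_memop_idx(unsigned memop, unsigned mmu_idx)
{
    return (memop << 4) | mmu_idx;
}

static unsigned get_memop(TCGMemOpIdx oi)  { return oi >> 4; }
static unsigned get_mmuidx(TCGMemOpIdx oi) { return oi & 15; }

int main(void)
{
    TCGMemOpIdx oi = make_memop_idx(/* memop */ 0x2a, /* mmu_idx */ 1);
    printf("memop=%#x mmu_idx=%u\n", get_memop(oi), get_mmuidx(oi));
    return 0;
}
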
This is less about improved type checking than enabling a
subsequent change to the representation of labels.
Backports commit bec1631100323fac0900aea71043d5c4e22fc2fa from qemu
This is improved type checking for the translators -- it's no longer
possible to accidentally swap arguments to the branch functions.
Note that the code generating backends still manipulate labels as int.
With notable exceptions, the scope of the change is just a few lines
for each target, so it's not worth building extra machinery to do this
change in per-target increments.
Backports commit 42a268c241183877192c376d03bd9b6d527407c7 from qemu
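
A sketch of the type-checking benefit; the struct layout and signatures
below are illustrative, not the real tcg-op.h prototypes:

#include <stdlib.h>

/* Labels become a distinct pointer type, so passing a label where an
 * ordinary integer argument is expected (or vice versa) is now a compile
 * error rather than a silently accepted int. */
typedef struct TCGLabel {
    int id;
    int has_value;
    unsigned long value;
} TCGLabel;

static TCGLabel *new_label(void)
{
    static int next_id;
    TCGLabel *l = calloc(1, sizeof(*l));
    l->id = next_id++;
    return l;
}

/* old style: void gen_brcondi(int cond, long a, long b, int label_index); */
/* new style: void gen_brcondi(int cond, long a, long b, TCGLabel *label); */
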
We no longer need INDEX_op_end to terminate the list, nor do we
need 5 forms of nop, since we just remove the TCGOp instead.
Backports commit 15fc7daa770764cc795158cbb525569f156f3659 from qemu
Rather than reserving space in the op stream for optimization,
let the optimizer add ops as necessary.
Backports commit a4ce099a7a4b4734c372f6bf28f3362e370f23c1 from qemu
With the linked-list scheme we need not leave nops in the stream
only to process them later.
Backports commit 0c627cdca20155753a536c51385abb73941a59a0 from qemu
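
A sketch of what the linked-list scheme buys; the real TCGOp at this point
links ops by index inside a fixed array, but pointers keep the sketch
short:

#include <stddef.h>

struct op {
    int opc;
    struct op *prev, *next;
};

/* Deleting an op is a plain unlink instead of overwriting it with one of
 * several nop forms that later passes would have to skip. */
static void op_remove(struct op *op)
{
    if (op->prev) {
        op->prev->next = op->next;
    }
    if (op->next) {
        op->next->prev = op->prev;
    }
    op->prev = op->next = NULL;
}

/* The optimizer can likewise insert a new op where it needs one instead of
 * consuming space reserved up front. */
static void op_insert_before(struct op *old_op, struct op *new_op)
{
    new_op->prev = old_op->prev;
    new_op->next = old_op;
    if (old_op->prev) {
        old_op->prev->next = new_op;
    }
    old_op->prev = new_op;
}
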
The method by which we count the number of ops emitted
is going to change. Abstract that away into some inlines.
Backports commit fe700adb3db5b028b504423b946d4ee5200a8f2f from qemu.
Almost completely eliminates the ifdefs in this file, improving
confidence in the lesser used 32-bit builds.
Backports commit 3a13c3f34ce2058e0c2decc3b0f9f56be24c9400 from qemu
Some of these functions are really quite large. We have a number of
things that ought to be circularly dependent, but we duplicated code
to break that chain for the inlines.
This saved 25% of the code size of one of the translators I examined.
Chain the temporaries together via pointers instead of indices.
The mem_reg value is now mem_base->reg. This will be important later.
This does require that the frame pointer have a global temporary
allocated for it. This is simple bar the existing reserved_regs check.
Backports commit b3a62939561e07bc34493444fa926b6137cba4e8 from qemu
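
A sketch of the representational change; field names other than mem_base
and reg are illustrative:

/* Before: a temp recorded the register number of its backing frame/env
 * pointer.  After: it points at that temp directly, so the old mem_reg
 * value is reachable as mem_base->reg. */
struct temp_sketch {
    int reg;                        /* host register when allocated */
    struct temp_sketch *mem_base;   /* was: int mem_reg             */
    long mem_offset;
};

static int temp_mem_reg(const struct temp_sketch *t)
{
    return t->mem_base->reg;        /* mem_reg is now mem_base->reg */
}
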
Thus, use cpu_env as the parameter, not TCG_AREG0 directly.
Update all uses in the translators.
Backports commit e1ccc05444676b92c63708096e36582be27fbee1 from qemu