Commit graph

2042 commits

Author SHA1 Message Date
Maciej W. Rozycki 4d9107be8a
target-mips: Correct MIPS16/microMIPS branch size calculation
Correct MIPS16/microMIPS branch size calculation in PC adjustment
needed:

- to set the value of CP0.ErrorEPC at the entry to the reset exception,

- for the purpose of branch reexecution in the context of device I/O.

Follow the approach taken in `exception_resume_pc' for ordinary, Debug
and NMI exceptions.

MIPS16 and microMIPS branches can be 2 or 4 bytes in size and that has
to be reflected in the calculation. Original MIPS ISA branches, which is
where this code originates from, are always 4 bytes long, just as all
original MIPS ISA instructions.
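
A minimal sketch of the adjustment described, modelled on the
`exception_resume_pc' pattern (the hflag names here are as I recall them
from QEMU and may differ in detail):

    if (env->hflags & MIPS_HFLAG_BMASK) {
        /* Exception taken in a branch delay slot: step back over the
           branch, which is 2 bytes for a 16-bit MIPS16/microMIPS branch
           and 4 bytes otherwise. */
        bad_pc -= (env->hflags & MIPS_HFLAG_B16 ? 2 : 4);
    }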

Backports commit c3577479815f5bcf9d38993967bca2115af245d8 from qemu
2018-02-11 16:09:33 -05:00
Lioncash 283cbd0317
target-mips: Restore the order of helpers
Restore the order of helpers that used to be: unary operations (generic,
then MIPS-specific), binary operations (generic, then MIPS-specific),
compare operations. At one point FMA operations were inserted at a
random place in the file, disregarding the preexisting order, and later
on even more operations were sprinkled across the file. Revert the mess by
moving FMA operations to a new ternary class inserted after the binary
class and move the misplaced unary and binary operations to where they
belong.

Backports commit 8fc605b8aa257feb3e69d44794a765bd492b573b from qemu
2018-02-11 16:07:02 -05:00
Maciej W. Rozycki 802b5d9a3d
target-mips: Remove unused 'FLOAT_OP' macro
Remove the `FLOAT_OP' macro, unused since commit
b6d96beda3a6cbf20a2d04a609eff78adebd8859 [Use temporary registers for
the MIPS FPU emulation.].

Backports commit 51fdea945ae7adae8d7e4a1624e35bb7f714b58f from qemu
2018-02-11 16:03:56 -05:00
Lioncash f62664948e
target-mips: Make 'helper_float_cvtw_s' consistent with the remaining helpers
Move the call to `update_fcr31' in `helper_float_cvtw_s' after the
exception flag check, for consistency with the remaining helpers that do
it last too.
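
Roughly what the helper looks like after the change (a reconstructed
sketch; the overflow constant name is from memory):

    uint32_t helper_float_cvtw_s(CPUMIPSState *env, uint32_t fst0)
    {
        uint32_t wt2;

        wt2 = float32_to_int32(fst0, &env->active_fpu.fp_status);
        if (get_float_exception_flags(&env->active_fpu.fp_status)
            & (float_flag_invalid | float_flag_overflow)) {
            wt2 = FP_TO_INT32_OVERFLOW;
        }
        update_fcr31(env, GETPC());   /* now last, as in the other helpers */
        return wt2;
    }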

Backports commit 2b09f94cdbf5c54e2278d7f3aed2eceff3494790 from qemu
2018-02-11 16:02:38 -05:00
Maciej W. Rozycki 0f82a7f89f
target-mips: assorted formatting fixes
Backports commits d75de74967f631a7d0b538d4b88f96f9c426bfe2, 6225a4a0e39cb24e7b9e1d4d2c1a3e6eaee18e85, and d2bfa6e6222baa0218bd0658499d38bac56ac34c from qemu
2018-02-11 16:01:23 -05:00
Maciej W. Rozycki ca496991ea
target-mips: Enable vectored interrupt support for the 74Kf CPU
Enable vectored interrupt support for the 74Kf CPU, reflecting hardware.

Backports commit 4386f08767240080334539ac0b07a8bfe30bffe9 from qemu
2018-02-11 15:57:17 -05:00
Maciej W. Rozycki 338e34290d
target-mips: Add M14K and M14Kc MIPS32r2 microMIPS processors
Add the M14K and M14Kc processors from MIPS Technologies that are the
original implementation of the microMIPS ISA. They are dual instruction
set processors, implementing both the microMIPS and the standard MIPS32
ISA.

These processors correspond to the M4K and 4KEc CPUs respectively,
except with support for the microMIPS instruction set added, support for
the MCU ASE added and two extra interrupt lines, making a total of 8
hardware interrupts plus 2 software interrupts. The remaining parts of
the microarchitecture, in particular the pipeline, stayed unchanged.

The presence of the microMIPS ASE is reflected in the configuration
added. We currently have no support for the MCU ASE, including in
particular the ACLR, ASET and IRET instructions in either encoding, and
we have no support for the extra interrupt lines, including bits in
CP0.Status and CP0.Cause registers, so these features are not marked,
making our support diverge from real hardware.

Backports commit 11f5ea105c06bec72e9bc9a700fa65d60afb5ec3 from qemu
2018-02-11 15:56:28 -05:00
Lioncash 833b0ff964
target-mips: Make CP0.Config4 and CP0.Config5 registers signed
Make the data type used for the CP0.Config4 and CP0.Config5 registers
and their mask signed, for consistency with the remaining 32-bit CP0
registers, like CP0.Config0, etc.

Backports commit 8280b12c0e4b515d707509dde4ddde05d9bda4ef from qemu
2018-02-11 15:48:10 -05:00
Maciej W. Rozycki 5eea73c534
target-mips: Add 5KEc and 5KEf MIPS64r2 processors
Add the 5KEc and 5KEf processors from MIPS Technologies that are the
original implementation of the MIPS64r2 ISA.

Silicon for these processors has never been taped out and no soft cores
were ever released. They do exist though: a CP0.PRId value has been
assigned and experimental RTLs were produced at the time the MIPS64r2 ISA
was finalized. The settings introduced here faithfully reproduce that
hardware.

As far as the implementation goes these processors are the same as the
5Kc and the 5Kf CPUs respectively, except implementing the MIPS64r2
rather than the original MIPS64 instruction set. There must have been
some updates to the CP0 architecture as mandated by the ISA, such as the
addition of the EBase register, although I am not sure about the exact
details, as no documentation has ever been produced for these processors.
The remaining parts of the microarchitecture, in particular the
pipeline, stayed unchanged. Or to put it another way, the difference
between a 5K and a 5KE CPU corresponds to one between a 4K and a 4KE
CPU, except for the 64-bit rather than 32-bit ISA.

Backports commit 36b86e0dc2be93fc538fe7e11e0fda1a198f0135 from qemu
2018-02-11 15:47:13 -05:00
Richard Henderson 2c091e5fb8
target-arm: Add condexec state to insn_start
Backports commit 52e971d9ff67e340ac2a86bd67e14bd31c7991e0 from qemu
2018-02-11 15:13:40 -05:00
Richard Henderson 3f9502dc8b
tcg: Allow extra data to be attached to insn_start
With an eye toward having this data replace the gen_opc_* arrays
that each target collects in order to enable restore_state_from_tb.

Backports commit 9aef40ed1f6e2bd794bbb3ba8c8b773e506334c9 from qemu
2018-02-11 13:03:51 -05:00
Richard Henderson dd1ec408e5
target-*: Increment num_insns immediately after tcg_gen_insn_start
This does tidy the icount test common to all targets.
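
The common per-instruction shape in each target's translate loop becomes
roughly the following (a sketch of the upstream pattern; any extra
context arguments used in this tree are omitted):

    tcg_gen_insn_start(dc->pc);
    num_insns++;

    if (num_insns == max_insns && (tb->cflags & CF_LAST_IO)) {
        gen_io_start();
    }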

Backports commit 959082fc4a93a016a6b697e1e0c2b373d8a3a373 from qemu
2018-02-11 12:46:30 -05:00
Richard Henderson a64d0ff657
target-*: Unconditionally emit tcg_gen_insn_start
While we're at it, emit the opcode adjacent to where we currently
record data for search_pc. This puts gen_io_start et al on the
"correct" side of the marker.

Backports commit 667b8e29c5b1d8c5b4e6ad5f780ca60914eb6e96 from qemu
2018-02-11 12:41:20 -05:00
Lioncash b3f9ff667b
tcg: Rename debug_insn_start to insn_start
With an eye toward making it mandatory.

Backports commit 765b842adec4c5a359e69ca08785553599f71496 from qemu
2018-02-11 12:34:01 -05:00
Richard Henderson 77b03e0973
target-i386: Make check_hw_breakpoints static
The function is now only used from within a single file.

Backports commit dd941cdcfec536aad6a310a153778142ed9f3e92 from qemu
2018-02-11 12:28:08 -05:00
Richard Henderson 10e0920fa0
target-i386: Move breakpoint related functions to new file
Backports commit ba4b5c65a98ea91dc3b13e42dd9404808c999dda from qemu
2018-02-11 12:25:24 -05:00
Lioncash fb2fe4580f
optimize: Add missing extrh/extrl case 2018-02-11 02:57:55 -05:00
Lioncash 3791fc69fd
target-arm: Use new revbit functions
Backports commit 42fedbca8f5b54324ed89be3484d4a3dc9946387 from qemu
2018-02-11 02:57:55 -05:00
Richard Henderson a5d6a31d69
host-utils: Add revbit functions
Backports commit 652a4b7e736f432a6809d1d2b52d169ab0b9aa3b from qemu.
2018-02-11 02:57:55 -05:00
Richard Henderson eb5ed2a844
target-arm: Use tcg_gen_extrh_i64_i32
This usually eliminates an operation in the translator by combining
a shift with an extract.

In the case of gen_set_NZ64, we don't need a boolean value for cpu_ZF,
merely a non-zero value. Given that we can extract both halves of a
64-bit input in one call, this simplifies the code.
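
The resulting helper is roughly the following (reconstructed from the
description; cpu_NF/cpu_ZF are target-arm's flag temporaries, with N
taken from bit 31 of NF and Z set iff ZF == 0):

    static void gen_set_NZ64(TCGv_i64 result)
    {
        /* One extract yields both 32-bit halves of the 64-bit result. */
        tcg_gen_extr_i64_i32(cpu_ZF, cpu_NF, result);
        /* ZF only needs to be non-zero iff the result is non-zero. */
        tcg_gen_or_i32(cpu_ZF, cpu_ZF, cpu_NF);
    }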

Backports commit 7cb36e18b2f1c1f971ebdc2121de22a8c2e94fd6 from qemu
2018-02-11 02:57:54 -05:00
Richard Henderson b94da3fc13
target-arm: Recognize ROR
Backports commit 8fb0ad8e16ab3d03433244a1a03e1df757342ad8 from qemu
2018-02-11 02:57:33 -05:00
Richard Henderson 3173269986
target-arm: Eliminate unnecessary zero-extend in disas_bitfield
For !SF, this initial ext32u can't be optimized away by the
current TCG code generator. (It would require backward bit
liveness propagation.)

Backports commit d3a77b42decd0cbfa62a5526e67d1d6d380c83a9 from qemu
2018-02-11 01:35:58 -05:00
Richard Henderson c637a97270
target-arm: Recognize UXTB, UXTH, LSR, LSL
These are all special case aliases of UBFM.
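
For reference, an illustrative (non-QEMU) mapping of how these aliases
fall out of UBFM's immr/imms fields in the 32-bit case:

    static const char *ubfm_alias_32(unsigned immr, unsigned imms)
    {
        if (immr == 0 && imms == 7)  return "UXTB";             /* zero-extend byte */
        if (immr == 0 && imms == 15) return "UXTH";             /* zero-extend halfword */
        if (imms == 31)              return "LSR #immr";        /* right shift by immr */
        if (imms + 1 == immr)        return "LSL #(31 - imms)"; /* left shift */
        return "UBFX/UBFIZ";                                    /* general bitfield move */
    }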

Backports commit 9924e85829fe21b5f38a5d267c9aea44c5d478ac from qemu
2018-02-11 01:34:11 -05:00
Richard Henderson d9e4e70636
target-arm: Recognize SXTB, SXTH, SXTW, ASR
These are all special case aliases of SBFM.
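
Likewise, an illustrative mapping for the SBFM side (sf selects the
64-bit form):

    static const char *sbfm_alias(unsigned immr, unsigned imms, unsigned sf)
    {
        if (immr == 0 && imms == 7)        return "SXTB";
        if (immr == 0 && imms == 15)       return "SXTH";
        if (sf && immr == 0 && imms == 31) return "SXTW";
        if (imms == (sf ? 63u : 31u))      return "ASR #immr";
        return "SBFX/SBFIZ";
    }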

Backports commit ef60151bee9a95e3a5cc98b345a19ed7eb435ddb from qemu
2018-02-11 01:31:54 -05:00
Richard Henderson 5ee72ff9f5
target-arm: Implement fcsel with movcond
Backports commit 6e061029d74455d83f6fa070ac33de7a356cf60d from qemu
2018-02-11 01:29:14 -05:00
Richard Henderson 53bd2b1d5c
target-arm: Implement ccmp branchless
This can allow much of a ccmp to be elided when particular
flags are subsequently dead.
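
The flag selection itself can be expressed without a branch; a minimal C
model of the idea (illustrative only, not the TCG sequence the
translator emits):

    /* cond_holds is 0 or 1: pick the comparison's NZCV when the condition
       held, otherwise the immediate NZCV, with no branch. */
    static uint32_t ccmp_nzcv(uint32_t cond_holds,
                              uint32_t nzcv_from_cmp, uint32_t nzcv_imm)
    {
        uint32_t mask = -cond_holds;   /* 0x00000000 or 0xffffffff */
        return (nzcv_from_cmp & mask) | (nzcv_imm & ~mask);
    }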

Backports commit 7dd03d773e0dafae9271318fc8d6b2b14de74403 from qemu
2018-02-11 01:25:51 -05:00
Richard Henderson 2c71ddefb1
target-arm: Use setcond and movcond for csel
Backports commit 259cb68491ab36427e7e5d820fe543d53b006ec6 from qemu
2018-02-10 23:57:11 -05:00
Richard Henderson 70dd48b855
target-arm: Handle always condition codes within arm_test_cc
Handling this with TCG_COND_ALWAYS lets these unlikely cases pass
through without special cases in the rest of the translator. The TCG
optimizer ought to be able to reduce
these ALWAYS conditions completely.
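
A hedged sketch of what this buys callers, using the DisasCompare
helpers from the commit below (skip_label and the exact emission
sequence are assumptions, not the translator's actual code):

    DisasCompare c;
    arm_test_cc(&c, cc);   /* cc 0xe/0xf now comes back as TCG_COND_ALWAYS */
    /* Branch over the conditional body when the condition fails; an
       inverted ALWAYS is NEVER, for which brcondi emits nothing at all. */
    tcg_gen_brcondi_i32(tcg_invert_cond(c.cond), c.value, 0, skip_label);
    arm_free_cc(&c);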

Backports commit 9305eac09e61d857c9cc11e20db754dfc25a82db from qemu
2018-02-10 23:48:10 -05:00
Lioncash 94f1227f7a
target-arm: Introduce DisasCompare
Split arm_gen_test_cc into 3 functions, so that it can be reused
for non-branch TCG comparisons.
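
The shape of the split is roughly as follows (a reconstruction; the
struct carries a condition plus the value it is tested against zero
with):

    typedef struct DisasCompare {
        TCGCond cond;       /* how to compare ... */
        TCGv_i32 value;     /* ... this value against zero */
        bool value_global;  /* don't free 'value' if it is a global */
    } DisasCompare;

    /* The old branch-only helper becomes a thin wrapper over the pieces. */
    void arm_gen_test_cc(int cc, TCGLabel *label)
    {
        DisasCompare cmp;
        arm_test_cc(&cmp, cc);     /* 1: materialise the condition */
        arm_jump_cc(&cmp, label);  /* 2: branch on it */
        arm_free_cc(&cmp);         /* 3: release any temporaries */
    }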

Backports commit 6c2c63d3a02c79e9035ca0370cc549d0f938a4dd from qemu
2018-02-10 23:45:47 -05:00
Richard Henderson 352f93a119
tcg/aarch64: Fix tcg_out_qemu_{ld, st} for guest_base == 0
In ffc6372851d8631a9f9fa56ec613b3244dc635b9, we swapped the guest
base to the address base register from the address index register.
Except that 31 in the base slot is SP not XZR, so we need to be
more intelligent about which reg gets placed in which slot.

Backports commit 352bcb0a2b816ff9ab9d75d0f2384650d9e9ab19 from qemu
2018-02-10 23:33:24 -05:00
Richard Henderson 7d57c2e4ce
tcg/aarch64: Use softmmu fast path for unaligned accesses
Backports commit 9ee14902bf107e37fb2c8119fa7bca424396237c from qemu
2018-02-10 23:25:34 -05:00
Lioncash f8388a6c03
header_gen: Fix mips platform 2018-02-10 23:21:41 -05:00
Richard Henderson aaf89ed84d
tcg/s390: Use softmmu fast path for unaligned accesses
Backports commit a5e39810b9088b5d20fac8e0293f281e1c8b608f from qemu
2018-02-10 23:14:14 -05:00
Richard Henderson a3aaf5a864
tcg: Remove tcg_gen_trunc_i64_i32
Replacing it with tcg_gen_extrl_i64_i32.

Backports commit ecc7b3aa71f5fdcf9ee87e74ca811d988282641d from qemu
2018-02-10 23:11:02 -05:00
Richard Henderson 58e939b91f
tcg: Split trunc_shr_i32 opcode into extr[lh]_i64_i32
Rather than allow arbitrary shift+trunc, only concern ourselves
with low and high parts. This is all that was being used anyway.
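
Frontend usage is then simply (upstream-style signatures; val64 is an
assumed 64-bit temporary):

    TCGv_i32 lo = tcg_temp_new_i32();
    TCGv_i32 hi = tcg_temp_new_i32();

    tcg_gen_extrl_i64_i32(lo, val64);   /* low 32 bits of val64 */
    tcg_gen_extrh_i64_i32(hi, val64);   /* high 32 bits, i.e. val64 >> 32 */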

Backports commit 609ad70562793937257c89d07bf7c1370b9fc9aa from qemu
2018-02-10 23:00:45 -05:00
Aurelien Jarno a05256b206
tcg: update README about size changing ops
Backports commit 870ad1547ac53bc79c21d86cf453b3b20cc660a2 from qemu
2018-02-10 22:49:36 -05:00
Aurelien Jarno 4bd3d5005e
tcg/optimize: add optimizations for ext_i32_i64 and extu_i32_i64 ops
They behave the same as ext32s_i64 and ext32u_i64 from the constant
folding and zero propagation point of view, except that they can't
be replaced by a mov, so we don't compute the affected value.

Backports commit 8bcb5c8f34f9215d4f88f388c7ff14c9bd5cecd3 from qemu
2018-02-10 22:47:26 -05:00
Aurelien Jarno f279c93768
tcg: implement real ext_i32_i64 and extu_i32_i64 ops
Implement real ext_i32_i64 and extu_i32_i64 ops. They ensure that a
32-bit value is always converted to a 64-bit value and not propagated
through the register allocator or the optimizer.
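
For the frontend these are the explicit widening counterparts
(upstream-style signatures; dst64 and val32 are assumed temporaries):

    tcg_gen_ext_i32_i64(dst64, val32);    /* sign-extend i32 -> i64 */
    tcg_gen_extu_i32_i64(dst64, val32);   /* zero-extend i32 -> i64 */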

Backports commit 4f2331e5b67af8172419eb1c8db510b497b30a7b from qemu
2018-02-10 22:45:13 -05:00
Aurelien Jarno 80223e7ad5
tcg: rename trunc_shr_i32 into trunc_shr_i64_i32
The op is sometimes named trunc_shr_i32 and sometimes trunc_shr_i64_i32,
and the name in the README doesn't match the name offered to the
frontends.

Always use the long name to make it clear it is a size changing op.

Backports commit 0632e555fc4d281d69cb08d98d500d96185b041f from qemu
2018-02-10 22:29:30 -05:00
Aurelien Jarno 5f0920ad0f
tcg/optimize: allow constant to have copies
Now that copies and constants are tracked separately, we can allow
constants to have copies, deferring the choice between using a register
or a constant to the register allocation pass. This prevents this kind
of regular constant reloading:

-OUT: [size=338]
+OUT: [size=298]
   mov    -0x4(%r14),%ebp
   test   %ebp,%ebp
   jne    0x7ffbe9cb0ed6
   mov    $0x40002219f8,%rbp
   mov    %rbp,(%r14)
-  mov    $0x40002219f8,%rbp
   mov    $0x4000221a20,%rbx
   mov    %rbp,(%rbx)
   mov    $0x4000000000,%rbp
   mov    %rbp,(%r14)
-  mov    $0x4000000000,%rbp
   mov    $0x4000221d38,%rbx
   mov    %rbp,(%rbx)
   mov    $0x40002221a8,%rbp
   mov    %rbp,(%r14)
-  mov    $0x40002221a8,%rbp
   mov    $0x4000221d40,%rbx
   mov    %rbp,(%rbx)
   mov    $0x4000019170,%rbp
   mov    %rbp,(%r14)
-  mov    $0x4000019170,%rbp
   mov    $0x4000221d48,%rbx
   mov    %rbp,(%rbx)
   mov    $0x40000049ee,%rbp
   mov    %rbp,0x80(%r14)
   mov    %r14,%rdi
   callq  0x7ffbe99924d0
   mov    $0x4000001680,%rbp
   mov    %rbp,0x30(%r14)
   mov    0x10(%r14),%rbp
   mov    $0x4000001680,%rbp
   mov    %rbp,0x30(%r14)
   mov    0x10(%r14),%rbp
   shl    $0x20,%rbp
   mov    (%r14),%rbx
   mov    %ebx,%ebx
   mov    %rbx,(%r14)
   or     %rbx,%rbp
   mov    %rbp,0x10(%r14)
   mov    %rbp,0x90(%r14)
   mov    0x60(%r14),%rbx
   mov    %rbx,0x38(%r14)
   mov    0x28(%r14),%rbx
   mov    $0x4000220e60,%r12
   mov    %rbx,(%r12)
   mov    $0x40002219c8,%rbx
   mov    %rbp,(%rbx)
   mov    0x20(%r14),%rbp
   sub    $0x8,%rbp
   mov    $0x4000004a16,%rbx
   mov    %rbx,0x0(%rbp)
   mov    %rbp,0x20(%r14)
   mov    $0x19,%ebp
   mov    %ebp,0xa8(%r14)
   mov    $0x4000015110,%rbp
   mov    %rbp,0x80(%r14)
   xor    %eax,%eax
   jmpq   0x7ffbebcae426
   lea    -0x5f6d72a(%rip),%rax        # 0x7ffbe3d437b3
   jmpq   0x7ffbebcae426

Backports commit 299f80130401153af1a6ddb3cc011781bcd47600 from qemu
2018-02-10 22:18:03 -05:00
Aurelien Jarno 59909fe549
tcg/optimize: track const/copy status separately
Instead of using an enum which could be either a copy or a const, track
them separately. This will be used in the next patch.

Constants are tracked through a bool. Copies are tracked by initializing
a temp's next_copy and prev_copy to itself, which simplifies the code a
bit.
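
The resulting per-temp state looks roughly like this (field types are
approximate):

    struct tcg_temp_info {
        bool is_const;          /* temp currently holds a known constant */
        uint16_t prev_copy;     /* circular list of temps known to be copies; */
        uint16_t next_copy;     /* a temp pointing only at itself has none */
        tcg_target_ulong val;   /* the constant value, when is_const */
        tcg_target_ulong mask;  /* a 0 bit means that value bit is known 0 */
    };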

Backports commit b41059dd9deec367a4ccd296659f0bc5de2dc705 from qemu
2018-02-10 22:15:43 -05:00
Aurelien Jarno 134a7dfe82
tcg/optimize: add temp_is_const and temp_is_copy functions
Add two accessor functions temp_is_const and temp_is_copy, to make the
code more readable and make code change easier.
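
Roughly (as in the upstream optimizer, where temps[] holds the per-temp
info):

    static inline bool temp_is_const(TCGArg arg)
    {
        return temps[arg].is_const;
    }

    static inline bool temp_is_copy(TCGArg arg)
    {
        /* A temp with no copies points only at itself. */
        return temps[arg].next_copy != arg;
    }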

Backports commit d9c769c60948815ee03b2684b1c1c68ee4375149 from qemu
2018-02-10 22:07:02 -05:00
Aurelien Jarno b450b79622
tcg/optimize: optimize temps tracking
The tcg_temp_info structure uses 24 bytes per temp. Now that we emulate
vector registers on most guests, it's not uncommon to have more than 100
used temps. This means we have to initialize more than 2kB at least
twice per TB, often more when there are a few goto_tb.

Instead, use a TCGTempSet bit array to track which temps are in use in
the current basic block. This means there are only around 16 bytes to
initialize.
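
The mechanism is roughly the following (a reconstructed sketch;
bitmap_zero/test_bit/set_bit are qemu's bitmap helpers):

    static TCGTempSet temps_used;   /* one bit per temp: info initialized yet? */

    static inline void reset_all_temps(int nb_temps)
    {
        bitmap_zero(temps_used.l, nb_temps);   /* ~16 bytes, not >2kB */
    }

    static inline void init_temp_info(TCGArg temp)
    {
        if (!test_bit(temp, temps_used.l)) {
            temps[temp].next_copy = temp;   /* no copies yet */
            temps[temp].prev_copy = temp;
            temps[temp].is_const = false;
            temps[temp].mask = -1;          /* nothing known about the bits */
            set_bit(temp, temps_used.l);
        }
    }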

This improves the boot time of a MIPS guest on an x86-64 host by around
7% and moves tcg_optimize off the top of the profiler list.

Backports commit 1208d7dd5fddc1fbd98de800d17429b4e5578848 from qemu
2018-02-10 21:51:46 -05:00
Aurelien Jarno 5f67ab74e7
tcg/optimize: fix constant signedness
By convention, on a 64-bit host TCG internally stores 32-bit constants
as sign-extended. This is not the case in the optimizer when a 32-bit
constant is folded.

This doesn't seem to have more consequences than suboptimal code
generation. For instance the x86 backend assumes sign-extended constants,
and in some rare cases uses a 32-bit unsigned immediate 0xffffffff
instead of an 8-bit signed immediate 0xff for the constant -1. This is
with a ppc guest:

before
------

 ---- 0x9f29cc
 movi_i32 tmp1,$0xffffffff
 movi_i32 tmp2,$0x0
 add2_i32 tmp0,CA,CA,tmp2,r6,tmp2
 add2_i32 tmp0,CA,tmp0,CA,tmp1,tmp2
 mov_i32 r10,tmp0

0x7fd8c7dfe90c:  xor    %ebp,%ebp
0x7fd8c7dfe90e:  mov    %ebp,%r11d
0x7fd8c7dfe911:  mov    0x18(%r14),%r9d
0x7fd8c7dfe915:  add    %r9d,%r10d
0x7fd8c7dfe918:  adc    %ebp,%r11d
0x7fd8c7dfe91b:  add    $0xffffffff,%r10d
0x7fd8c7dfe922:  adc    %ebp,%r11d
0x7fd8c7dfe925:  mov    %r11d,0x134(%r14)
0x7fd8c7dfe92c:  mov    %r10d,0x28(%r14)

after
-----

 ---- 0x9f29cc
 movi_i32 tmp1,$0xffffffffffffffff
 movi_i32 tmp2,$0x0
 add2_i32 tmp0,CA,CA,tmp2,r6,tmp2
 add2_i32 tmp0,CA,tmp0,CA,tmp1,tmp2
 mov_i32 r10,tmp0

0x7f37010d490c:  xor    %ebp,%ebp
0x7f37010d490e:  mov    %ebp,%r11d
0x7f37010d4911:  mov    0x18(%r14),%r9d
0x7f37010d4915:  add    %r9d,%r10d
0x7f37010d4918:  adc    %ebp,%r11d
0x7f37010d491b:  add    $0xffffffffffffffff,%r10d
0x7f37010d491f:  adc    %ebp,%r11d
0x7f37010d4922:  mov    %r11d,0x134(%r14)
0x7f37010d4929:  mov    %r10d,0x28(%r14)

Backports commit 29f3ff8d6cbc28f79933aeaa25805408d0984a8f from qemu
2018-02-10 21:40:20 -05:00
Aurelien Jarno e273acf87a
tcg/optimize: fix tcg_opt_gen_movi
Due to a copy&paste error, the new op value is tested against mov_i32
instead of movi_i32. The test is therefore always false. Fix that.

Backports commit 961521261a3d600b0695b2e6d2b0f490076f7e90 from qemu
2018-02-10 21:38:09 -05:00
Aurelien Jarno 42dd2addbe
tcg/optimize: rename tcg_constant_folding
The tcg_constant_folding function ends up doing all the optimizations
(which is a good thing, as it avoids looping over all ops multiple
times), so make that clear and just rename it tcg_optimize.

Backports commit 36e60ef6ac5d8a262d0fbeedfdb2b588514cb1ea from qemu
2018-02-10 21:36:34 -05:00
Aurelien Jarno 7b0055d742
tcg/optimize: fold constant test in tcg_opt_gen_mov
Most of the calls to tcg_opt_gen_mov are preceded by a test to check if
the source temp is a constant. Fold that into the tcg_opt_gen_mov
function.
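
The fold gives the helper roughly this shape (a reconstruction from the
description, not the exact upstream code):

    static void tcg_opt_gen_mov(TCGContext *s, TCGOp *op, TCGArg *args,
                                TCGArg dst, TCGArg src)
    {
        if (temp_is_const(src)) {
            /* Source is a known constant: emit a movi instead. */
            tcg_opt_gen_movi(s, op, args, dst, temps[src].val);
            return;
        }
        /* ... usual mov/copy-propagation handling ... */
    }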

Backports commit 97a79eb70dd35a24fda87d86196afba5e6f21c5d from qemu
2018-02-10 21:34:00 -05:00
Aurelien Jarno 517fac57c3
tcg/optimize: fold temp copies test in tcg_opt_gen_mov
Each call to tcg_opt_gen_mov is preceded by a test to check if the
source and destination temps are copies. Fold that into the
tcg_opt_gen_mov function.

Backports commit 5365718a9afeeabde3784d82a542f8ad909b18cf from qemu
2018-02-10 21:27:06 -05:00
Aurelien Jarno d21f474c39
tcg/optimize: remove opc argument from tcg_opt_gen_mov
We can get the opcode using the TCGOp pointer. It needs to be
dereferenced, but that is done anyway a few lines below to write
the new value.
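
In other words (a sketch; op_to_mov is the optimizer's opcode-mapping
helper as I recall it):

    TCGOpcode old_opc = op->opc;     /* no separate 'opc' parameter needed */
    op->opc = op_to_mov(old_opc);    /* rewrite the op in place to a mov */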

Backports commit 8d6a91602ea824ef4435ea38fd475387eecc098c from qemu
2018-02-10 21:23:34 -05:00
Aurelien Jarno 0fd0afad13
tcg/optimize: remove opc argument from tcg_opt_gen_movi
We can get the opcode using the TCGOp pointer. It needs to be
dereferenced, but that is done anyway a few lines below to write
the new value.

Backports commit ebd27391b00cdafc81e0541a940686137b3b48df from qemu
2018-02-10 21:21:13 -05:00