Before the last patch, we had an efficient loop that disabled
local breakpoints on task switch. Re-add that, but in a more
general way that handles changes to the global enable bits too.
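A minimal sketch of the more general walk, assuming hypothetical
insert/remove hooks; the DR7 bit layout (Ln at bit 2n, Gn at bit 2n+1)
is architectural:

  /* Sketch only: honor both the local (Ln) and global (Gn) enable
   * bits in DR7; the hook names are illustrative, not the actual
   * QEMU functions. */
  static void sync_hw_breakpoints(uint64_t dr7)
  {
      for (int n = 0; n < 4; n++) {
          bool local  = dr7 & (1ULL << (n * 2));     /* Ln: cleared on task switch */
          bool global = dr7 & (1ULL << (n * 2 + 1)); /* Gn: survives task switch   */
          if (local || global) {
              insert_hw_breakpoint(n);   /* hypothetical */
          } else {
              remove_hw_breakpoint(n);   /* hypothetical */
          }
      }
  }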
Backports commit 36eb6e096729f9aade3a6af7dbe4d0a990335d7e from qemu
This moves the last of the iteration over breakpoints into
the bpt_helper.c file. This also allows us to make several
breakpoint functions static.
Backports commit 93d00d0fbe4711061834730fb70525d167b6f908 from qemu
Processors up to the Pentium (according to Bochs; I do not have old
enough manuals) require a 32KiB alignment for the SMBASE, but newer
processors do not need that, and TianoCore will use non-aligned SMBASE
values.
Backports commit dd75d4fcb4a82c34d4f466e7fc166162b71ff740 from qemu
Running barebox on qemu-system-mips* with '-d unimp' floods stderr
with a huge number of mips_cpu_handle_mmu_fault() messages:
  mips_cpu_handle_mmu_fault address=b80003fd ret 0 physical 00000000180003fd prot 3
  mips_cpu_handle_mmu_fault address=a0800884 ret 0 physical 0000000000800884 prot 3
  mips_cpu_handle_mmu_fault pc a080cd80 ad b80003fd rw 0 mmu_idx 0
This makes it very difficult to spot LOG_UNIMP messages. Worse, the
mips_cpu_handle_mmu_fault() messages appear as soon as ANY logging is
enabled, which is not very handy.
Adding a separate log category for the *_cpu_handle_mmu_fault()
logging fixes the problem.
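With the new category the trace is gated behind its own mask bit,
enabled with '-d mmu'. A sketch of what the call site then looks like
(format string abridged):

  #include "qemu/log.h"

  /* emitted only when the MMU log category is enabled, instead of
   * whenever any logging is active */
  qemu_log_mask(CPU_LOG_MMU,
                "mips_cpu_handle_mmu_fault address=%08x ret %d\n",
                address, ret);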
Backports commit 339aaf5b7f26d1e638641c59a44883b7654bd8ea from qemu
Extend MIPS movcond implementation to support the SELNEZ/SELEQZ
instructions introduced in MIPS r6 (where MOVN/MOVZ have been removed).
Whereas the "MOVN/MOVZ rd, rs, rt" instructions have the following
semantics:
rd = [!]rt ? rs : rd
The "SELNEZ/SELEQZ rd, rs, rt" instructions are slightly different:
rd = [!]rt ? rs : 0
First we ensure that if one of the movcond input values is zero that it
comes last (we can swap the input arguments if we invert the condition).
This is so that it can exactly match one of the SELNEZ/SELEQZ
instructions and avoid the need to emit the other one.
Otherwise we emit the opposite instruction first into a temporary
register, and OR that into the result:
SELNEZ/SELEQZ TMP1, v2, c1
SELEQZ/SELNEZ ret, v1, c1
OR ret, ret, TMP1
Which does the following:
ret = cond ? v1 : v2
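Roughly, the selection logic reads like this (a sketch; the emit_*
names are illustrative, not the actual backend helpers):

  /* ret = c1 ? v1 : v2, using only SELNEZ/SELEQZ */
  if (v2 == ZERO) {                  /* zero already last */
      emit_selnez(ret, v1, c1);      /* ret = c1 ? v1 : 0 */
  } else if (v1 == ZERO) {           /* swapped operands, inverted cond */
      emit_seleqz(ret, v2, c1);      /* ret = c1 ? 0 : v2 */
  } else {
      emit_seleqz(TMP1, v2, c1);     /* TMP1 = c1 ? 0 : v2 */
      emit_selnez(ret,  v1, c1);     /* ret  = c1 ? v1 : 0 */
      emit_or(ret, ret, TMP1);       /* ret  = c1 ? v1 : v2 */
  }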
Backports commit 137d63902faf4960081856db9242cbaf234a23af from qemu
MIPSr6 adds several new integer multiply, divide, and modulo
instructions, and removes several pre-r6 encodings, along with the HI/LO
registers which were the implicit operands of some of those
instructions. Update TCG to use the new instructions when built for r6.
The new instructions actually map much more directly to the TCG ops, as
they only provide a single 32-bit half of the result and in a normal
general purpose register instead of HI or LO.
The mulu2_i32 and muls2_i32 operations are no longer appropriate for r6,
so they are removed from the TCG opcode table. This is because they
would need to emit two separate host instructions anyway (for the high
and low half of the result), which TCG can arrange automatically for us
in the absence of mulu2_i32/muls2_i32 by splitting it into mul_i32 and
mul*h_i32 TCG ops.
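For illustration, the expansion TCG can perform when mulu2_i32 is
absent but mul and muluh ops are present looks roughly like this (a
sketch, not the exact tcg-op code):

  /* rl:rh = arg1 * arg2, as two single-result ops that map
   * directly onto the r6 MUL/MUHU instructions */
  TCGv_i32 t = tcg_temp_new_i32();
  tcg_gen_op3_i32(INDEX_op_mul_i32, t, arg1, arg2);
  tcg_gen_op3_i32(INDEX_op_muluh_i32, rh, arg1, arg2);
  tcg_gen_mov_i32(rl, t);   /* written last in case rl aliases an input */
  tcg_temp_free_i32(t);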
Backports commit bc6d0c22b09a72897d9db4482076f89e7de97400 from qemu
MIPSr6 encodes JR as JALR with zero as the link register, and the pre-r6
JR encoding is removed. Update TCG to use the new encoding when built
for r6.
We still use the old encoding for pre-r6, so as not to confuse return
prediction stack hardware which may detect only particular encodings of
the return instruction.
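The equivalence rests on the ISA field layout: with rd = $zero the
link value is discarded, so "JALR $zero, rs" behaves exactly like
"JR rs". A self-contained encoding sketch:

  #include <stdint.h>

  /* SPECIAL-class R-type: opcode 0 | rs << 21 | rd << 11 | funct */
  static uint32_t encode_jump_register(uint32_t rs, int r6)
  {
      return r6 ? (rs << 21) | (0u << 11) | 0x09  /* JALR $zero, rs */
                : (rs << 21) | 0x08;              /* JR rs (pre-r6) */
  }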
Backports commit 6e0d096989be52c2b945fc83a9bd15d887bbdb47 from qemu
Add definition use_mips32r6_instructions to the MIPS TCG backend which
is constant 1 when built for MIPS release 6. This will be used to decide
between pre-R6 and R6 instruction encodings.
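A plausible shape for the definition, keyed off the compiler's
predefined __mips_isa_rev macro (a sketch of the idea rather than a
quote of the committed code):

  #if defined(__mips_isa_rev) && (__mips_isa_rev >= 6)
  #define use_mips32r6_instructions  1
  #else
  #define use_mips32r6_instructions  0
  #endif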
Backports commit ce14bd4d469f3a14f6cbfceb6360aee066a60d72 from qemu
We already have a TLADDR_ARGS definition, so rearrange the order
slightly and use it in the definition of insn_start, instead of
having an #ifdef.
Backports commit c0e40dbdcc291c85faa289a53be60b7b1b7c7598 from qemu
Restrict the size of code_gen_buffer to 2GB on ppc64, which
lets us assert that everything is reachable with addis+addi
from tb_ret_addr. This lets us use a max of 4 insns for goto_tb
instead of 7.
Emit the indirect branch portion of goto_tb up front, which
means we only have to update two insns to update any link.
With a 64-bit store, we can update the link atomically, which
may be required in future.
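The 2GB cap works because any 32-bit displacement can be synthesized
with an addis/addi pair; a standard sketch of the immediate split,
where the +0x8000 compensates for addi sign-extending its 16-bit
operand:

  #include <stdint.h>

  static void split_disp32(int32_t disp, int16_t *hi, int16_t *lo)
  {
      *hi = (disp + 0x8000) >> 16;  /* addis immediate (high part) */
      *lo = (int16_t)disp;          /* addi immediate (sign-extended) */
      /* ((int32_t)*hi << 16) + *lo == disp */
  }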
Backports commit 5bfd75a35c11dd3aa61c73d0d2cd88137c31519c from qemu
Changing the prologue to the beginning of the code_gen_buffer
changes the direction of the "return" branch. Need to change
the logic to match.
Backports commit 70f897bdc4ce4101ec008317d43090f532bfb07d from qemu
Anonymous and file-backed RAM allocation are now almost exactly the same.
Reduce code duplication by moving RAM mmap code out of oslib-posix.c and
exec.c.
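The shared path boils down to reserving an oversized PROT_NONE region
and then mapping the real memory, anonymous or file-backed, at an
aligned address inside it. A simplified sketch (names and error
handling abridged, not the actual qemu code):

  #include <stdint.h>
  #include <sys/mman.h>

  static void *ram_mmap_sketch(int fd, size_t size, size_t align)
  {
      /* reserve address space, with room to align the result */
      char *guard = mmap(NULL, size + align, PROT_NONE,
                         MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
      if (guard == MAP_FAILED) {
          return MAP_FAILED;
      }
      char *aligned = (char *)(((uintptr_t)guard + align - 1)
                               & ~(uintptr_t)(align - 1));
      int flags = MAP_FIXED | (fd < 0 ? MAP_PRIVATE | MAP_ANONYMOUS
                                      : MAP_SHARED);
      /* map the usable RAM over part of the reservation */
      return mmap(aligned, size, PROT_READ | PROT_WRITE, flags,
                  fd < 0 ? -1 : fd, 0);
  }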
Backports commit 794e8f301a17953efa78ab7538019ec43c59e82a from qemu
A QEMU breakpoint match is not necessarily an architectural breakpoint
match. If an exception is generated unconditionally during translation,
it is hardly possible to ignore it in the debug exception handler.
Generate a call to a helper to check CPU breakpoints and raise an
exception only if any breakpoint matches architecturally.
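The helper side is conceptually this simple (a sketch; the predicate
name is hypothetical and stands in for the architectural checks):

  /* raise a debug exception only if the breakpoint also matches
   * architecturally; otherwise return and keep executing */
  void HELPER(check_breakpoints)(CPUARMState *env)
  {
      ARMCPU *cpu = arm_env_get_cpu(env);

      if (cpu_breakpoint_matches_arch(cpu)) {  /* hypothetical */
          HELPER(exception_internal)(env, EXCP_DEBUG);
      }
  }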
Backports commit 5d98bf8f38c17a348ab6e8af196088cd4953acd0 from qemu
GDB breakpoints have higher priority so they have to be checked first.
Should GDB breakpoint match, just return from the debug exception
handler.
Backports commit e63a2d4d9ed73e33a0b7483085808048be8bbcb1 from qemu
Implement debug exception routing according to ARM ARM D2.3.1 Pseudocode
description of routing debug exceptions.
Backports commit 81669b8b81eb450d7b89ee5fdd57bdb73d87022d from qemu
Add the MDCR_EL2 register. We don't implement any of
the debug-related traps this register controls yet, so
currently it simply reads back as written.
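In QEMU terms, a reads-back-as-written register is an ARMCPRegInfo
entry backed by plain state, with no read/write hooks; a sketch of the
entry (fields abridged, encoding per the ARM ARM):

  { .name = "MDCR_EL2", .state = ARM_CP_STATE_BOTH,
    .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 1,
    .access = PL2_RW, .resetvalue = 0,
    .fieldoffset = offsetof(CPUARMState, cp15.mdcr_el2) },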
Backports commit 14cc7b54372995a6ba72c7719372e4f710fc9b5a from qemu
Added an oslar_write function for the OSLAR_EL1 sysreg, using a status
variable in the CPUARMState.cp15 struct (oslsr_el1). This variable is
also linked to the newly added read-only OSLSR_EL1 register.
Linux reads from this register during its suspend/resume procedure.
Backports commit 1424ca8d4320427c3e93722b65e19077969808a2 from qemu
It is incorrect to call arm_el_is_aa64() function for unimplemented EL.
This patch fixes several attempts to do so.
Backports commit 2cde031f5a34996bab32571a26b1a6bcf3e5b5d9 from qemu
If any store instruction writes code inside the same TB after the
store insn itself, execution of the TB must be stopped to execute the
new code correctly.
As described in the ARMv8 manual, D3.4.6, self-modifying code must
perform an IC invalidation to be valid, followed by an ISB. So it is
enough to end the TB after the ISB instruction during code translation.
Also this TB break is necessary to take any pending interrupts immediately
after an ISB (as required by ARMv8 ARM D1.14.4).
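In the translator this amounts to marking the TB as finished when an
ISB is decoded, roughly (a sketch of the relevant decode arm; the case
label is illustrative):

  case 6: /* ISB */
      /* End the TB so freshly written code is re-translated and any
       * pending interrupts are taken before the next instruction. */
      s->is_jmp = DISAS_UPDATE;
      break;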
Backports commit 6df99dec9e81838423d723996e96236693fa31fe from qemu
At present, the "average" guestimate of TB size is way too small, leading
to many unused entries in the pre-allocated TB array. For a guest with 1GB
ram, we're currently allocating 256MB for the array.
Survey arm, alpha, aarch64, ppc, sparc, i686, x86_64 guests running on
x86_64 and ppc64 hosts and select a new average. The size of the array
drops to 81MB with no more flushing than before.
Backports commit 126d89e8cdfa3be15d51f76906eaccbcd0023f98 from qemu
We currently pre-compute a worst-case code size for any TB, which
works out to be 122kB. Since the average TB size is near 1kB, this
wastes quite a lot of storage.
Instead, check for overflow in between generating code for each opcode.
The overhead of the check isn't measurable and wastage is minimized.
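The check itself is a single comparison against a high-water mark set
a worst-case op size below the end of the buffer, along these lines:

  /* after emitting each op: if the output pointer has crossed the
   * high-water mark, abandon this TB; the caller flushes and retries */
  if (unlikely((void *)s->code_ptr > s->code_gen_highwater)) {
      return -1;
  }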
Backports commit b125f9dc7bd68cd4c57189db4da83b0620b28a72 from qemu
This will catch any overflow of the buffer.
Add a native win32 alternative for alloc_code_gen_buffer;
remove the malloc alternative.
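On win32 the native path is a single VirtualAlloc of an executable
region, roughly:

  /* sketch: reserve and commit RWX memory for generated code */
  buf = VirtualAlloc(NULL, size, MEM_RESERVE | MEM_COMMIT,
                     PAGE_EXECUTE_READWRITE);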
Backports commit f293709c6af7a65a9bcec09cdba7a60183657a3e from qemu
By putting the prologue at the end, we risk overwriting the prologue
should our estimate of the maximum TB size be too low. Given the
two different placements of the call to tcg_prologue_init,
move the high water mark computation into tcg_prologue_init.
Backports commit 8163b74938d8b7d12e70597c4553dd0dc49443d5 from qemu
It is no longer used, so tidy up everything reached by it.
This includes the gen_opc_* arrays, the search_pc parameter
and the inline gen_intermediate_code_internal functions.
Backports commit 4e5e1215156662b2b153255c49d4640d82c5568b from qemu
In this case, QEMU might longjmp out of cpu-exec.c and miss the final
cleanup in cpu_exec_nocache. Do this manually through a new compile
flag.
Backports commit d8a499f17ee5f05407874f29f69f0e3e3198a853 from qemu
The gen_opc_* arrays are already redundant with the data stored in
the insn_start arguments. Transition restore_state_to_opc to use
data from the latter.
Backports commit bad729e272387de7dbfa3ec4319036552fc6c107 from qemu
Since jump_pc[1] is always npc + 4, we can infer, after incrementing,
that jump_pc[1] == pc + 4. Because of that, we can encode the branch
destination into a single word, and store that in npc.
Backports commit 6c42444f9a53b6af39d46008cb9f650b11e96cb9 from qemu
Unify three copies of this code from different
branch types. Fix the case when npc == DYNAMIC_PC,
i.e. a branch within a delay slot.
Backports commit 2bf2e019ed0a6349220620240c0ba807846793b9 from qemu
We always pass pc2 == dc->npc and r_cond == cpu_cond,
and always set is_br afterward. Infer all of that.
Backports commit bfa31b765798139804ce9e5e35c7e142d233df31 from qemu
ABM is only implemented as a single instruction set by AMD; all AMD
processors support both instructions or neither. Intel considers POPCNT
part of SSE4.2 and LZCNT part of BMI1, but Intel also uses AMD's ABM
flag to indicate support for both POPCNT and LZCNT. The flag has to be
added to Haswell and Broadwell because Haswell, by adding LZCNT,
completes ABM support.
Tested with "qemu-kvm -cpu Haswell-noTSX,enforce" (and also with older
machine types) on a Haswell-EP machine.
Backports commit becb66673ec30cb604926d247ab9449a60ad8b11 from qemu
When doing a re-initialization of a CPU core, the default state is to _not_
have 64-bit long mode enabled. This means the LME (long mode enable) and LMA
(long mode active) bits in the EFER model-specific register should be cleared.
However, the EFER state is part of the CPU environment which is
preserved by do_cpu_init(), so if EFER.LME and EFER.LMA were set at the
time an INIT IPI was received, they will remain set after the init completes.
This is contrary to what the Intel architecture manual describes and what
happens on real hardware, and it leaves the CPU in a weird state that the
guest can't clear.
To fix this, the 'efer' member of the CPUX86State structure has been moved
to an area outside the region preserved by do_cpu_init(), so that it can
be properly re-initialized by x86_cpu_reset().
Backports commit 2188cc52cb363433751f72b991d8fb05fc60e39d from qemu
Rename ELF_MACHINE to be I386 specific. This is used as-is by the
multiboot loader.
Linux-user previously used this definition but will not anymore,
falling back to the default behaviour of using ELF_ARCH as ELF_MACHINE.
This removes another architecture specific definition from the global
namespace.
Backports commit a5e8788f89312f19f54dba0454ee5bf7209b4cd7 from qemu
The only generic code relying on this is linux-user, but linux-user's
default behaviour of defaulting ELF_MACHINE to ELF_ARCH will handle
this.
The bootloaders can just pass EM_MIPS directly, as that is
architecture specific code.
This removes another architecture specific definition from the global
namespace.
Backports commit 04ce380e9e3fad1dbf4e86ebdf9315573a06b30e from qemu