Incrementally paves the way towards using the generic instruction translation
loop.
Backports commit 1d8a5535238fc5976e0542a413f4ad88f5d4b233 from qemu
Incrementally paves the way towards using the generic
instruction translation loop.
Backports commit dcba3a8d443842f7a30a2c52d50a6b50b6982b35 from qemu
Incrementally paves the way towards using the generic instruction translation
loop.
Backports commit e0d110d943891b719de7ca075fc17fa8ea5749b8 from qemu
Incrementally paves the way towards using the generic instruction translation
loop.
Backports commit 47e981b42553f00110024c33897354f9014e83e9 from qemu
Incrementally paves the way towards using the generic instruction translation
loop.
Backports commit 2c2f8cacd8cf4f67d6f1384b19d38f9a0a25878b from qemu
Incrementally paves the way towards using the generic instruction translation
loop.
Backports commit e6b41ec37f0a9742374dfdb90e662745969cd7ea from qemu
Incrementally paves the way towards using the generic instruction translation
loop.
Backports commit 9d75f52b34053066b8e8fc37610d5f300d67538b from qemu
Incrementally paves the way towards using the generic instruction translation
loop.
Backports commit 9761d39b09c4beb1340bf3074be3d3e0a5d453a4 from qemu
Incrementally paves the way towards using the generic instruction translation
loop.
Backports commit 6cf147aa299e49f7794858609a1e8ef19f81c007 from qemu
There's nothing magic about the exception that we generate in order
to execute the magic kernel page. We can and should allow gdb to
set a breakpoint at this location.
Backports commit 3805c2eba8999049bbbea29fdcdea4d47d943c88 from qemu
Used later. An enum makes expected values explicit and
bounds the value space of switches.
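As a sketch, the shape of the enum this commit introduces (the comments and the exact number of DISAS_TARGET_* values are assumptions):

    typedef enum DisasJumpType {
        DISAS_NEXT,     /* next instruction can be analyzed */
        DISAS_TOO_MANY, /* too many instructions translated in this TB */
        DISAS_NORETURN, /* following code is dead */
        DISAS_TARGET_0, /* first of several target-specific values */
        /* ... DISAS_TARGET_1 through DISAS_TARGET_11 ... */
    } DisasJumpType;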
Backports commit 77fc6f5e28667634916f114ae04c6029cd7b9c45 from qemu
Fold DISAS_EXC and DISAS_TB_JUMP into DISAS_NORETURN.
In both cases all following code is dead. In the first
case because we have exited the TB via exception; in the
second case because we have exited the TB via goto_tb
and its associated machinery.
Backports commit a0c231e651b249960906f250b8e5eef5ed9888c4 from qemu
This target is not sophisticated in its use of cleanups at the
end of the translation loop. For the most part, any condition
that exits the TB is dealt with by emitting the exiting opcode
right then and there. Therefore the only is_jmp indicator that
is needed is DISAS_NORETURN.
For two stack segment modifying cases, we have not yet exited
the TB (therefore DISAS_NORETURN feels wrong), but intend to exit.
The caller of gen_movl_seg_T0 currently checks for any non-zero
value, therefore DISAS_TOO_MANY seems acceptable for that usage.
Backports commit 1e39d97af086d525cd0408eaa5d19783ea165906 from qemu
This will allow some amount of cleanup to happen before
switching the backends over to enum DisasJumpType.
Backports commit 5dc66895b0113034cd37fd5e65911d7959fc26a9 from qemu
This allows LOAD HALFWORD IMMEDIATE ON CONDITION,
eliminating one insn in some common cases.
Backports commit 7af525af01b9615c4f4df5da2e8a50f2fe00b023 from qemu
Currently, we cannot use mttcg for running strong memory model guests
on weak memory model hosts due to missing ordering semantics.
We implicitly generate fence instructions for stronger guests if an
ordering mismatch is detected. We generate fences only for the orders
for which fence instructions are necessary, for example a fence is not
necessary between a store and a subsequent load on x86 since its
absence in the guest binary indicates that ordering need not be
ensured. Also note that if we find multiple subsequent fence
instructions in the generated IR, we combine them in the TCG
optimization pass.
This patch allows us to boot an x86 guest on ARM64 hosts using mttcg.
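A minimal sketch of the mechanism, assuming the helper and flag names used by the QEMU commit:

    static void tcg_gen_req_mo(TCGBar type)
    {
    #ifdef TCG_GUEST_DEFAULT_MO
        /* Only orderings that the guest memory model requires... */
        type &= TCG_GUEST_DEFAULT_MO;
    #endif
        /* ...and that the host does not already guarantee need a fence. */
        type &= ~TCG_TARGET_DEFAULT_MO;
        if (type) {
            tcg_gen_mb(type | TCG_BAR_SC);
        }
    }

    /* e.g. before emitting a guest load: */
    tcg_gen_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);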
Backports commit b32dc3370a666e237b2099c22166b15e58cb6df8 from qemu
For external aborts, we will want to be able to specify the EA
(external abort type) bit in the syndrome field. Allow callers of
deliver_fault() to do that by adding a field to ARMMMUFaultInfo which
we use when constructing the syndrome values.
Backports commit c528af7aa64f159eb30b46e567b650c5440fc117 from qemu
We currently have some similar code in tlb_fill() and in
arm_cpu_do_unaligned_access() for delivering a data abort or prefetch
abort. We're also going to want to do the same thing to handle
external aborts. Factor out the common code into a new function
deliver_fault().
Backports commit aac43da1d772a50778ab1252c13c08c2eb31fb39 from qemu
Call the new cpu_transaction_failed() hook at the places where
CPU generated code interacts with the memory system:
io_readx()
io_writex()
get_page_addr_code()
Any access from C code (eg via cpu_physical_memory_rw(),
address_space_rw(), ld/st_*_phys()) will *not* trigger CPU exceptions
via cpu_transaction_failed(). Handling for transactions failures for
this kind of call should be done by using a function which returns a
MemTxResult and treating the failure case appropriately in the
calling code.
In an ideal world we would not generate CPU exceptions for
instruction fetch failures in get_page_addr_code() but instead wait
until the code translation process tried a load and it failed;
however that change would require too great a restructuring and
redesign to attempt at this point.
Backports commit 04e3aabde397e7abc78ba1ce6cbd144d5fbb1722 from qemu
Currently we have a rather half-baked setup for allowing CPUs to
generate exceptions on accesses to invalid memory: the CPU has a
cpu_unassigned_access() hook which the memory system calls in
unassigned_mem_write() and unassigned_mem_read() if the current_cpu
pointer is non-NULL. This was originally designed before we
implemented the MemTxResult type that allows memory operations to
report a success or failure code, which is why the hook is called
right at the bottom of the memory system. The major problem with
this is that it means that the hook can be called even when the
access was not actually done by the CPU: for instance if the CPU
writes to a DMA engine register which causes the DMA engine to begin
a transaction which has been set up by the guest to operate on
invalid memory, then this will cause the CPU to take an exception
incorrectly. Another minor problem is that currently if a device
returns a transaction error then this won't turn into a CPU exception
at all.
The right way to do this is to allow the CPU to respond
to memory system transaction failures at the point where the
CPU specific code calls into the memory system.
Define a new QOM CPU method and utility function
cpu_transaction_failed() which is called in these cases.
The functionality here overlaps with the existing
cpu_unassigned_access() because individual target CPUs will
need some work to convert them to the new system. When this
transition is complete we can remove the old cpu_unassigned_access()
code.
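A sketch of the new hook in the CPU class (parameter layout is an assumption based on the QEMU commit):

    void (*do_transaction_failed)(CPUState *cpu, hwaddr physaddr,
                                  vaddr addr, unsigned size,
                                  MMUAccessType access_type,
                                  int mmu_idx, MemTxAttrs attrs,
                                  MemTxResult response, uintptr_t retaddr);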
Backports commit 0dff0939f6fc6a7abd966d4295f06a06d7a01df9 from qemu
Move the MemTxResult type to memattrs.h. We're going to want to
use it in qom/cpu.h, which doesn't want to include all of
memory.h. In practice MemTxResult and MemTxAttrs are pretty
closely linked since both are used for the new-style
read_with_attrs and write_with_attrs callbacks, so memattrs.h
is a reasonable home for this rather than creating a whole
new header file for it.
Backports commit 3114d092b1740f9db9aa559aeb48ee387011e1da from qemu
Add a utility function for testing whether the CPU is in Handler
mode; this is just a check whether v7m.exception is non-zero, but
we do it in several places and it makes the code a bit easier
to read to not have to mentally figure out what the test is testing.
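The helper itself is a one-liner; a sketch, with the name from the QEMU commit:

    static inline bool arm_v7m_is_handler_mode(CPUARMState *env)
    {
        /* Handler mode is active whenever an exception is being handled */
        return env->v7m.exception != 0;
    }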
Backports commit 15b3f556bab4f961bf92141eb8521c8da3df5eb2 from qemu
For v7M, writes to the CONTROL register are only permitted for
privileged code. However even if the code is privileged, the
write must not affect the SPSEL bit in the CONTROL register
if the CPU is in Handler mode (as documented in the pseudocode
for the MSR instruction). Implement this, instead of permitting
SPSEL to be written in all cases.
This was causing mbed applications not to run, because the
RTX RTOS they use relies on this behaviour.
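A sketch of the fixed MSR handling (the helper and mask names are assumptions based on QEMU code of that era):

    case 20: /* CONTROL */
        /* SPSEL writes only take effect in Thread mode (exception == 0);
         * nPRIV can be updated by any privileged write.
         */
        if (env->v7m.exception == 0) {
            switch_v7m_sp(env, (val & R_V7M_CONTROL_SPSEL_MASK) != 0);
        }
        env->v7m.control &= ~R_V7M_CONTROL_NPRIV_MASK;
        env->v7m.control |= val & R_V7M_CONTROL_NPRIV_MASK;
        break;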
Backports commit 792dac309c8660306557ba058b8b5a6a75ab3c1f from qemu
Move the code in arm_v7m_cpu_do_interrupt() that calculates the
magic LR value down to when we're actually going to use it.
Having the calculation and use so far apart makes the code
a little harder to understand than it needs to be.
Backports commit bd70b29ba92e4446f9e4eb8b9acc19ef6ff4a4d5 from qemu
Make the arm_cpu_dump_state() debug logging handle the M-profile XPSR
rather than assuming it's an A-profile CPSR. On M profile the PSR
line of a register dump will now look like this:
XPSR=41000000 -Z-- T priv-thread
Backports commit 5b906f3589443a3c69d8feeaac37263843ecfb8d from qemu
We currently store the M profile CPU register state PRIMASK and
FAULTMASK in the daif field of the CPU state in its I and F
bits. This is a legacy from the original implementation, which
tried to share the cpu_exec_interrupt code between A profile
and M profile. We've since separated out the two cases because
they are significantly different, so now there is no common
code between M and A profile which looks at env->daif: all the
uses are either in A-only or M-only code paths. Sharing the state
fields now is just confusing, and will make things awkward
when we implement v8M, where the PRIMASK and FAULTMASK
registers are banked between security states.
Switch M profile over to using v7m.faultmask and v7m.primask
fields for these registers.
Backports commit e6ae5981ea4b0f6feb223009a5108582e7644f8f from qemu
The M profile XPSR is almost the same format as the A profile CPSR,
but not quite. Define some XPSR_* macros and use them where we are
definitely dealing with an XPSR rather than reusing the CPSR ones.
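A few representative macros (the values follow the architectural XPSR layout; the exact set added by the commit is an assumption):

    #define XPSR_EXCP 0x1ffU       /* exception number field */
    #define XPSR_T    (1U << 24)   /* Thumb bit */
    #define XPSR_Q    (1U << 27)
    #define XPSR_V    (1U << 28)
    #define XPSR_C    (1U << 29)
    #define XPSR_Z    (1U << 30)
    #define XPSR_N    (1U << 31)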
Backports commit 987ab45e108953c1c98126c338c2119c243c372b from qemu
When we switched our handling of exception exit to detect
the magic addresses at translate time rather than via
a do_unassigned_access hook, we forgot to update a
comment; correct the omission.
Backports commit 9d17da4b68a05fc78daa47f0f3d914eea5d802ea from qemu
Remove the comment that claims that some MPU_CTRL bits are stored
in sctlr_el[1]. This has never been true since MPU_CTRL was added
in commit 29c483a50607 -- the comment is a leftover from
Michael Davidsaver's original implementation, which I modified
not to use sctlr_el[1]; I forgot to delete the comment then.
Backports commit 59e4972c3fc63d981e8b613ebb3bb01a05848075 from qemu
Tighten up the T32 decoder in the places where new v8M instructions
will be:
* TT/TTT/TTA/TTAT are in what was nominally LDREX/STREX r15, ...
which is UNPREDICTABLE:
make the UNPREDICTABLE behaviour be to UNDEF
* BXNS/BLXNS are distinguished from BX/BLX via the low 3 bits,
which in previous architectural versions are SBZ:
enforce the SBZ via UNDEF rather than ignoring it, and move
the "ARCH(5)" UNDEF case up so we don't leak a TCG temporary
* SG is in the encoding which would be LDRD/STRD with rn = r15;
this is UNPREDICTABLE and we currently UNDEF:
move this check further up the code so that we don't leak
TCG temporaries in the UNDEF case and have a better place
to put the SG decode.
This means that if a v8M binary is accidentally run on v7M
or if a test case hits something that we haven't implemented
yet, the behaviour will be obvious (UNDEF) rather than obscure
(plough on treating it as a different instruction).
In the process, add some comments about the instruction patterns
at these points in the decode. Our Thumb and ARM decoders are
very difficult to understand currently, but gradually adding
comments like this should help to clarify what exactly has
been decoded when.
Backports commit ebfe27c593e5b222aa2a1fc545b447be3d995faa from qemu
Currently get_phys_addr() has PMSAv7 handling before the
"is translation disabled?" check, and then PMSAv5 after it.
Tidy this up by making the PMSAv5 code handle the "MPU disabled"
case itself, so that we have all the PMSA code in one place.
This will make adding the PMSAv8 code slightly cleaner, and
also means that pre-v7 PMSA cores benefit from the MPU lookup
logging that the PMSAv7 codepath had.
Backports commit 3279adb95e34dd3d67c66d729458f7784747cf8d from qemu
M profile cores can never trap on WFI or WFE instructions. Check for
M profile in check_wfx_trap() to ensure this.
The existing code will do the right thing for v7M cores because
the hcr_el2 and scr_el3 registers will be all-zeroes and so we
won't attempt to trap, but when we start setting ARM_FEATURE_V8
for v8M cores the v8A handling of SCTLR.nTWE and .nTWI will not
give the right results.
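The check is a simple early-out at the top of check_wfx_trap(); a sketch:

    /* M profile cores can never trap WFI/WFE. */
    if (arm_feature(env, ARM_FEATURE_M)) {
        return 0;
    }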
Backports commit 0e2845689ebdb4ea7174f96f6797e2d8942bd114 from qemu
In the ARM get_phys_addr() code, switch to using the MMUAccessType
enum and its MMU_* values rather than int and literal 0/1/2.
Backports commit 03ae85f858fc46495258a5dd4551fff2c34bd495 from qemu
Add a new base CPU model called 'EPYC' to model processors from AMD EPYC
family (which includes EPYC 76xx, 75xx, 74xx, 73xx and 72xx).
The following feature bits have been added/removed compared to Opteron_G5:
Added: monitor, movbe, rdrand, mmxext, ffxsr, rdtscp, cr8legacy, osvw,
fsgsbase, bmi1, avx2, smep, bmi2, rdseed, adx, smap, clflushopt, sha,
xsaveopt, xsavec, xgetbv1, arat
Removed: xop, fma4, tbm
Backports commit 2e2efc7dbe2b0adc1200b5aa286cdbed729f6751 from qemu
The helper can be used for CPU object lookup using the CPU's
arch-specific ID (the one returned by CPUClass::get_arch_id()).
Backports commit 5ce46cb34eecec0bc94a4b1394763f9a1bbe20c3 from qemu
This moves a FlatView allocation and initialization to a helper.
While we are here, replace g_new with g_new0 so we need not worry if we add
new fields in the future.
This should cause no behavioural change.
Backports commit de7e6815b84c797cbda56dc96fcacaf5f37d3a20 from qemu
We are going to share FlatViews between AddressSpaces, and per-AS
memory listeners won't suit the purpose anymore, so open-code
the dispatch tree rendering.
Since there is a good chance that dispatch_listener was the only
listener, this avoids address_space_update_topology_pass() if there are
no registered listeners; this should improve start-up time.
This should cause no behavioural change.
Backports commit 1b04a1580917d9e41fd37ca62cbff9b4bf061e96 from qemu
This adds an AS** parameter to address_space_do_translate()
to make it easier for the next patch to share FlatViews.
This should cause no behavioural change.
Backports commit 6424975ce912061ac9e4a375237b0c89d83d93e3 from qemu
When using bit-wise operations that exploit the power-of-two
nature of the second argument of ROUND_UP(), we still need to
ensure that the mask is as wide as the first argument (done
by using a ternary to force proper arithmetic promotion).
Unpatched, ROUND_UP(2ULL*1024*1024*1024*1024, 512U) produces 0,
instead of the intended 2TiB, because negation of an unsigned
32-bit quantity followed by widening to 64-bits does not
sign-extend the mask.
Broken since its introduction in commit 292c8e50 (v1.5.0).
Callers that passed the same width type to both macro parameters,
or that had other code to ensure the first parameter's maximum
runtime value did not exceed the second parameter's width, are
unaffected, but I did not audit to see which (if any) existing
clients of the macro could trigger incorrect behavior (I found
the bug while adding a new use of the macro).
While preparing the patch, checkpatch complained about poor
spacing, so I also fixed that here and in the nearby DIV_ROUND_UP.
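The fix uses a never-taken ternary purely for its arithmetic promotion; roughly:

    /* before: -(d) is only as wide as d, so high mask bits are lost */
    #define ROUND_UP(n, d) (((n) + (d) - 1) & -(d))

    /* after: "0 ? (n) : (d)" has the promoted type of (n) and (d),
     * so the negated mask is as wide as the wider argument */
    #define ROUND_UP(n, d) (((n) + (d) - 1) & -(0 ? (n) : (d)))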
Backports commit 33a599667a9e70588483a31286dfff8cfc27d513 from qemu
According to the ARM ARM, exclusive loads require the same alignment as
exclusive stores. Let's update the memops used for the load to match
that of the store. This adds the alignment requirement to the memops.
Backports commit 4a2fdb78e794c1ad93aa9e160235d6a61a2125de from qemu
We are not providing the required single-copy atomic semantics for
the 64-bit operation that is the 32-bit paired load.
At the same time, leave the entire 64-bit value in cpu_exclusive_val
and stop writing to cpu_exclusive_high. This means that we do not
have to re-assemble the 64-bit quantity when it comes time to store.
At the same time, drop a redundant temporary and perform all loads
directly into the cpu_exclusive_* globals.
Backports commit 19514cde3b92938df750acaecf2caaa85e1d36a6 from qemu
When we perform the atomic_cmpxchg operation we want to perform the
operation on a pair of 32-bit registers. Previously we were just passing
in the register size, which was set to MO_32. This would result in the
high register being ignored. To fix this issue we hardcode the size to
be 64-bits long when operating on 32-bit pairs.
Backports commit 955fd0ad5d610f62ba2f4ce46a872bf50434dcf8 from qemu
This reverts commit e2a7f28693aea7e194ec1435697ec4feb24f8a6f.
This was not supposed to go upstream yet. Reverting.
Backports commit cde0a63ad721dbb538419a00f9405587680be436 from qemu
When emulating various SSE4.1 instructions such as pinsrd, the address
of a memory operand is computed without allowing for the 8-bit
immediate operand located after the memory operand, meaning that the
memory operand uses the wrong address in the case where it is
rip-relative. This patch adds the required rip_offset setting for
those instructions, thereby fixing some GCC test failures (13 in the gcc
testsuite in my GCC 6-based testing) when testing with a default CPU
setting enabling those instructions.
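The fix amounts to telling the decoder about the trailing imm8 before the modrm byte is processed; a sketch (the surrounding decode context is omitted):

    /* SSE4.1 insns with a memory operand followed by an 8-bit immediate:
     * account for the imm8 when computing a rip-relative address. */
    s->rip_offset = 1;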
Backports commit ab6ab3e9972a49a359f59895a88bed311472ca97 from qemu
For a 64-bit ILP32 host, aligning to sizeof(long) is not enough.
Guess the minimum for any host is 8, as that covers uint64_t.
Qemu doesn't use a host long double or host vectors, except in
extremely limited circumstances.
Fixes a bus error for a sparc v8plus host.
Backports commit 13aaef678ed377b12b76dc7fb9e615b2f2f9047b from qemu
Patch 85aa80813dd changed the IF emitting the TST instruction,
but failed to change the ?: converting CMP to CMPEQ, so the
result of the TST is ignored.
Backports commit ca671de8af96798e0f493378240034620a3a04ee from qemu
RDHWR CC reads the CPU timer like MFC0 CP0_Count, so with icount enabled
it must set can_do_io while it calls the helper to avoid the "Bad icount
read" error. It should also break out of the translation loop to ensure
that timer interrupts are immediately handled.
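A sketch of the shape of the fix, using the MIPS translator idioms of the time (exact names are assumptions):

    /* RDHWR CC: reading the timer is an I/O operation under icount */
    if (ctx->tb->cflags & CF_USE_ICOUNT) {
        gen_io_start();
    }
    gen_helper_rdhwr_cc(t0, cpu_env);
    if (ctx->tb->cflags & CF_USE_ICOUNT) {
        gen_io_end();
    }
    /* Break out of translated code so a pending timer interrupt is taken */
    gen_save_pc(ctx->pc + 4);
    ctx->bstate = BS_EXCP;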
Backports commit d673a68db6963e86536b125af464bb6ed03eba33 from qemu
DMTC0 CP0_Cause does a redundant gen_io_start() and gen_io_end() pair,
even though this is done for all DMTC0 operations outside of the switch
statement. Remove these redundant calls.
Backports commit 51ca717b079dccae5b6cc9f45153f5044abd34f0 from qemu
Commit e350d8ca3ac7 ("target/mips: optimize indirect branches") made
indirect branches able to directly find the next TB and jump straight to
it without breaking out of translated code and going around the main
execution loop. This breaks the assumption in target/mips/translate.c
that BS_STOP is sufficient to cause pending interrupts to be handled,
since interrupts are only checked in the main loop.
Fix a few of these assumptions by using gen_save_pc to update the saved
PC and using BS_EXCP instead of BS_STOP:
- [D]MFC0 CP0_Count may trigger a timer interrupt which should be
immediately handled.
- [D]MTC0 CP0_Cause may trigger an interrupt (but in fact translation
was only ever being stopped in the DMTC0 case).
- [D]MTC0 CP0_<any> when icount is used, since it is assumed it could
potentially cause interrupts.
- EI may trigger an interrupt which was pending. I specifically hit
this case when running KVM nested in mipsel-softmmu. A timer
interrupt while the 2nd guest was executing is caught by KVM which
switches back to the normal Linux exception base and re-enables
interrupts with EI. Since the above commit QEMU doesn't leave
translated code until the nested KVM has already restored the KVM
exception base and returned to the 2nd guest, at which point it is
too late to check for pending interrupts and it gets stuck in an
infinite loop of unhandled interrupts.
Something similar was needed for ARM in commit b29fd33db578
("target/arm: use DISAS_EXIT for eret handling").
Backports commit b74cddcbf6063f684725e3f8bca49a68e30cba71 from qemu
Improve the segment definitions used by get_physical_address() to yield
target_ulong types, e.g. 0xffffffff80000000 instead of 0x80000000. This
is in preparation for enabling emulation of MIPS KVM T&E segments in TCG
MIPS targets, which unlike KVM could potentially have 64-bit
target_ulong. In such a case the offset guest KSEG0 address ends up at
e.g. 0x000000008xxxxxxx instead of 0xffffffff8xxxxxxx.
This also allows the casts to int32_t that force sign extension to be
removed, which removes any confusion due to relational comparison of
unsigned (target_ulong) and signed (int32_t) types.
Backports commit 6743334568933199927af4992a04bfb3c30610f5 from qemu
Writing to the MIPS DESAVE register (and now the KScratch registers)
will stop translation, supposedly due to risk of execution mode
switches. However these registers are basically RW scratch registers
with no side effects so there is no risk of them triggering execution
mode changes.
Drop the bstate = BS_STOP for these registers for both mtc0 and dmtc0.
Backports commit cb539fd241900f51de7d21244f7a55422ad0d40a from qemu
Commit 04bf2526ce87f21b32c9acba1c5518708c243ad0 (exec: use
qemu_ram_ptr_length to access guest ram) started using qemu_ram_ptr_length
instead of qemu_map_ram_ptr, but when used with Xen, the behavior of the
two functions differs. They both call xen_map_cache, but one with "lock",
meaning the mapping of guest memory is never released implicitly, and the
second one without, meaning the mapping can be released later, when needed.
In the context of address_space_{read,write}_continue, the pointer to
those mappings should not be locked, because it is used immediately and
never used again.
The lock parameter makes it explicit in which context qemu_ram_ptr_length
is called.
Backports commit f5aa69bdc3418773f26747ca282c291519626ece from qemu
When the PMSAv7 implementation was originally added it was for R profile
CPUs only, and reset was handled using the cpreg .resetfn hooks.
Unfortunately for M profile cores this doesn't work, because they do
not register any cpregs. Move the reset handling into arm_cpu_reset(),
where it will work for both R profile and M profile cores.
Backports commit 69ceea64bf565559a2b865ffb2a097d2caab805b from qemu
Almost all of the PMSAv7 state is in the pmsav7 substruct of
the ARM CPU state structure. The exception is the region
number register, which is in cp15.c6_rgnr. This exception
is a bit odd for M profile, which otherwise generally does
not store state in the cp15 substruct.
Rename cp15.c6_rgnr to pmsav7.rnr accordingly.
Backports commit 8531eb4f614a60e6582d4832b15eee09f7d27874 from qemu
For an M profile v7PMSA, the system space (0xe0000000 - 0xffffffff) can
never be executable, even if the guest tries to set the MPU registers
up that way. Enforce this restriction.
Backports commit bf446a11dfb17ae7d8ed2b61a2444804eb458075 from qemu
The M profile PMSAv7 specification says that if the address being looked
up is in the PPB region (0xe0000000 - 0xe00fffff) then we do not use
the MPU regions but always use the default memory map. Implement this
(we were previously behaving like an R profile PMSAv7, which does not
special case this).
Backports commit 38aaa60ca464b48e6feef346709e97335d01b289 from qemu
Correct an off-by-one bug in the PMSAv7 MPU tracing where it would print
a write access as "reading", an insn fetch as "writing", and a read
access as "execute".
Since we have an MMUAccessType enum now, we can make the code clearer
in the process by using that rather than the raw 0/1/2 values.
Backports commit 709e4407add7acacc593cb6cdac026558c9a8fb6 from qemu
Enable the CP0_EBase.WG (write gate) on the I6400 and MIPS64R2-generic
CPUs. This allows 64-bit guests to run KVM itself, which uses
CP0_EBase.WG to point CP0_EBase at XKPhys.
Backports commit bad63a8008a0aaefcd00542c89bee01623d7c9de from qemu
Add the Enhanced Virtual Addressing (EVA) feature to the P5600 core
configuration, along with the related Segmentation Control (SC) feature
and writable CP0_EBase.WG bit.
This allows it to run Malta EVA kernels.
Backports commit 574da58e4678b3c09048f268821295422d8cde6d from qemu
Implement the optional segmentation control feature in the virtual to
physical address translation code.
The fixed legacy segment and xkphys handling is replaced with a dynamic
layout based on the segmentation control registers (which should be set
up even when the feature is not exposed to the guest).
Backports commit 480e79aedd322fcfac17052caff21626ea7c78e2 from qemu
The optional segmentation control registers CP0_SegCtl0, CP0_SegCtl1 &
CP0_SegCtl2 control the behaviour and required privilege of the legacy
virtual memory segments.
Add them to the CP0 interface so they can be read and written when
CP0_Config3.SC=1, and initialise them to describe the standard legacy
layout so they can be used in future patches regardless of whether they
are exposed to the guest.
Backports commit cec56a733dd2c3fa81dbedbecf03922258747f7d from qemu
The segmentation control feature allows a legacy memory segment to
become unmapped uncached at error level (according to CP0_Status.ERL),
and in fact the user segment is already treated in this way by QEMU.
Add a new MMU mode for this state so that QEMU's mappings don't persist
between ERL=0 and ERL=1.
Backports commit 42c86612d507c2a8789f2b8d920a244693c4ef7b from qemu
The MIPS mmu_idx is sometimes calculated from hflags without an env
pointer available as cpu_mmu_index() requires.
Create a common hflags_mmu_index() for the purpose of this calculation
which can operate on any hflags, not just with an env pointer, and
update cpu_mmu_index() itself and gen_intermediate_code() to use it.
Also update debug_post_eret() and helper_mtc0_status() to log the MMU
mode with the status change (SM, UM, or nothing for kernel mode) based
on cpu_mmu_index() rather than directly testing hflags.
This will also allow the logic to be more easily updated when a new MMU
mode is added.
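A sketch of the resulting helpers (the ERL case matches the ERL MMU mode added elsewhere in this series):

    static inline int hflags_mmu_index(uint32_t hflags)
    {
        if (hflags & MIPS_HFLAG_ERL) {
            return 3; /* ERL MMU mode */
        } else {
            return hflags & MIPS_HFLAG_KSU; /* kernel/supervisor/user */
        }
    }

    static inline int cpu_mmu_index(CPUMIPSState *env, bool ifetch)
    {
        return hflags_mmu_index(env->hflags);
    }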
Backports commit b0fc6003224543d2bdb172eca752656a6223e4a1 from qemu
When performing virtual to physical address translation, check the
required privilege level based on the mem_idx rather than the mode in
the hflags. This will allow EVA loads & stores to operate safely only on
user memory from kernel mode.
For the cases where the mmu_idx doesn't need to be overridden
(mips_cpu_get_phys_page_debug() and cpu_mips_translate_address()), we
calculate the required mmu_idx using cpu_mmu_index(). Note that this
only tests the MIPS_HFLAG_KSU bits rather than MIPS_HFLAG_MODE, so we
don't test the debug mode hflag MIPS_HFLAG_DM any longer. This should be
fine as get_physical_address() only compares against MIPS_HFLAG_UM and
MIPS_HFLAG_SM, neither of which should get set by compute_hflags() when
MIPS_HFLAG_DM is set.
Backports commit 9fbf4a58c90183b30bb2c8ad971ccce7e6716a16 from qemu
Implement decoding of microMIPS EVA load and store instruction groups in
the POOL31C pool. These use the same gen_ld(), gen_st(), gen_st_cond()
helpers as the MIPS32 decoding, passing the equivalent MIPS32 opcodes as
opc.
Backports commit 8fffc64696783b1ff1d17262d098976479895660 from qemu
Add CP0.ErrCtl register with WST, SPR and ITC bits. In 34K and interAptiv
processors these bits are used to enable CACHE instruction access to
different arrays. When WST=0, SPR=0 and ITC=1 the CACHE instruction will
access ITC tag values.
Generally we do not model caches, and we have been treating the CACHE
instruction as a NOP. But since CACHE can operate on ITC tags, a new
MIPS_HFLAG_ITC_CACHE hflag is introduced to generate the helper only when
CACHE is in the ITC Access mode.
Backports commit 0d74a222c27e26fc40f4f6120c61c3f9ceaa3776 from qemu
Implement decoding of MIPS32 EVA loads and stores. These access the user
address space from kernel mode when implemented, so for each instruction
we need to check that EVA is available from Config5.EVA & check for
sufficient COP0 privilege (with the new check_eva()), and then override
the mem_idx used for the operation.
Unfortunately some Loongson 2E instructions use overlapping encodings,
so we must be careful not to prevent those from being decoded when EVA
is absent.
Backports commit 7696414729b2d0f870c80ad1dd637d854bc78847 from qemu
EVA load and store instructions access the user mode address map, so
they need to use mem_idx of MIPS_HFLAG_UM. Update the various utility
functions to allow mem_idx to be more easily overridden from the
decoding logic.
Specifically we add a mem_idx argument to the op_ld/st_* helpers used
for atomics, and a mem_idx local variable to gen_ld(), gen_st(), and
gen_st_cond().
Backports commit dd4096cd2ccc19384770f336c930259da7a54980 from qemu
Add support for the CP0_EBase.WG bit, which allows upper bits to be
written (bits 31:30 on MIPS32, or bits 63:30 on MIPS64), along with the
CP0_Config5.CV bit to control whether the exception vector for Cache
Error exceptions is forced into KSeg1.
This is necessary on MIPS32 to support Segmentation Control and Enhanced
Virtual Addressing (EVA) extensions (where KSeg1 addresses may not
represent an unmapped uncached segment).
It is also useful on MIPS64 to allow the exception base to reside in
XKPhys, and possibly out of range of KSEG0 and KSEG1.
Backports commit 74dbf824a1313b6064bbebb981a7440951d70896 from qemu
There is no need to invalidate any shadow TLB entries when the ASID
changes or when access to one of the 64-bit segments has been disabled,
since doing so doesn't reveal to software whether any TLB entries have
been evicted into the shadow half of the TLB.
Therefore weaken the tlb flushes in these cases to only flush the QEMU
TLB.
Backports commit 9658e4c342e6ae0d775101f8f6bb6efb16789af1 from qemu
Writing specific TLB entries with TLBWI flushes shadow TLB entries
unless an existing entry is having its access permissions upgraded. This
is necessary as software would from then on expect the previous mapping
in that entry to no longer be in effect (even if QEMU has quietly
evicted it to the shadow TLB on a TLBWR).
However it won't do this if only EHINV, XI, or RI bits have been set,
even if that results in a reduction of permissions, so add the necessary
checks to invoke the flush when these bits are set.
Backports commit eff6ff9431aa9776062a5f4a08d1f6503ca9995a from qemu
Using MFC0 to read CP0_UserLocal uses tcg_gen_ld32s_tl, however
CP0_UserLocal is a target_ulong. On a big endian host with a MIPS64
target this reads and sign extends the more significant half of the
64-bit register.
Fix this by using ld_tl to load the whole target_ulong and ext32s_tl to
sign extend it, as done for various other target_ulong COP0 registers.
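The change, roughly (the state field path is an assumption):

    /* before: loads and sign-extends one 32-bit half, which is the
     * wrong half on a big-endian host with a MIPS64 target */
    tcg_gen_ld32s_tl(arg, cpu_env,
                     offsetof(CPUMIPSState, active_tc.CP0_UserLocal));

    /* after: load the whole target_ulong, then sign-extend bits 31..0 */
    tcg_gen_ld_tl(arg, cpu_env,
                  offsetof(CPUMIPSState, active_tc.CP0_UserLocal));
    tcg_gen_ext32s_tl(arg, arg);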
Backports commit e40df9a80bb7cdb0a4ca650985fa9fe572097fa7 from qemu
This include was forgotten when splitting cacheinfo.c out of
tcg/ppc/tcg-target.inc.c (see commit b255b2c8).
For a Centos7 host, the include path
<signal.h>
<bits/sigcontext.h>
<asm/sigcontext.h>
<asm/elf.h>
<asm/auxvec.h>
implicitly pulls in the desired AT_* defines.
Not so for Debian Jessie.
Backports commit 810d5cad4087236236e00fd3046a16adf26e9060 from qemu
Reserve a register for the guest_base using ppc code for reference.
By doing so, we do not have to recompute it for every memory load.
Backports commit 4df9cac57f5220c17d856292e90fce455f708421 from qemu
Introduce the Skylake-Server CPU model, which inherits the features from
Skylake-Client and supports some additional features: AVX512,
CLWB and PDPE1GB.
Backports commit 53f9a6f45fb214540cb40af45efc11ac40ac454c from qemu
Currently when running KVM, we expose "KVMKVMKVM\0\0\0" in
the 0x40000000 CPUID leaf. Other hypervisors (VMWare,
HyperV, Xen, BHyve) all do the same thing, which leaves
TCG as the odd one out.
The CPUID signature is used by software to detect which
virtual environment they are running in and (potentially)
change behaviour in certain ways. For example, systemd
supports a ConditionVirtualization= setting in unit files.
The virt-what command can also report the virt type it is
running on.
Currently both these apps have to resort to custom hacks
like looking for a 'fw-cfg' entry under /proc/device-tree
to identify TCG.
This change thus proposes a signature "TCGTCGTCGTCG" to be
reported when running under TCG.
To hide this, the -cpu option tcg-cpuid=off can be used.
Backports commits 4ed3d478c63dc65a02eba774c35116618ea5ff10 and 1ce36bfe6424243082d3d7c2330e1a0a4ff72a43 from qemu
object_resolve_path*() ambiguous path detection breaks when
ambiguous==NULL and the object tree has 3 objects of the same type and
only 2 of them are under the same parent. e.g.:
/container/obj1 (TYPE_FOO)
/container/obj2 (TYPE_FOO)
/obj2 (TYPE_FOO)
With the above tree, object_resolve_path_type("", TYPE_FOO, NULL) will
incorrectly return /obj2, because the search inside "/container" will
return NULL, and the match at "/obj2" won't be detected as ambiguous.
Fix that by always calling object_resolve_partial_path() with a non-NULL
ambiguous parameter.
Backports commit ebcc479eee740937e70a94a468effcf2126a572b from qemu
This patch fixes setting the DExcCode field of the CP0 Debug register
when the SDBBP instruction is executed. According to the EJTAG specification,
this field must be set to the value 9 (Bp).
Backports commit c6c2c0fc32362ba234ae3bdad1a55c2d6aefaa12 from qemu
Previously DISAS_JUMP did ensure this but with the optimisation of
8a6b28c7 (optimize indirect branches) we might not leave the loop.
This means if any pending interrupts are cleared by changing IRQ flags
we might never get around to servicing them. You usually notice this
by seeing the lookup_tb_ptr() helper gainfully chaining TBs together
while cpu->interrupt_request remains high and the exit_request has not
been set.
This breaks amongst other things the OPTEE test suite which executes
an eret from the secure world after a non-secure world IRQ has gone
pending which then never gets serviced.
Instead of using the previously implied semantics of DISAS_JUMP we use
DISAS_EXIT which will always exit the run-loop.
Backports commit b29fd33db578decacd14f34933b29aece3e7c25e from qemu
While an ISB will ensure any raised IRQs happen on the next
instruction it doesn't cause any to get raised by itself. We can
therefore use a simple tb exit for ISB instructions and rely on the
exit_request check at the top of each TB to deal with exiting if
needed.
Backports commit 0b609cc128ba5ef16cc841bcade898d1898f1dc3 from qemu
As the gen_goto_tb function can do both static and dynamic jumps it
should also set the is_jmp field. This matches the behaviour of the
a64 code.
Backports commit 4cae8f56fbab2798586576a56cc669f0127d04fb from qemu
We already have an exit condition, DISAS_UPDATE, which will exit the
run-loop. Expand on the difference from DISAS_EXIT in the comments.
Backports commit abd1fb0ee2c58b99f4b2d15718f1825fe4984e12 from qemu
DISAS_UPDATE should be used when the wider CPU state other than just
the PC has been updated and we should therefore exit the TCG runtime
and return to the main execution loop rather than assuming DISAS_JUMP would
do that.
Backports commit e8d5230221851e8933811f1579fd13371f576955 from qemu
As a precursor to later patches attempt to come up with a more
concrete wording for what each of the common exit cases would be.
Backports commit df0311e634828fdc99ca59352aef68503d631aad from qemu
The Cortex-M3 and M4 CPUs always have 8 PMSA MPU regions (this isn't
a configurable option for the hardware). Make the default value of
the pmsav7-dregion property be set per-cpu, so we don't need to have
every user of these CPUs set it manually. (The existing default of
16 is correct for the other PMSAv7 core, the Cortex-R5.)
This fixes a bug where we were creating the M3 and M4 with
too many regions; most guest software would not notice or
care, though, since it would just not use the registers
associated with the unexpected extra regions.
Backports commit 8d92e26b452f8961ec90df3f93cf5f3b7a9d158f from qemu
Rename memory_region_init_rom() to memory_region_init_rom_nomigrate()
and memory_region_init_rom_device() to
memory_region_init_rom_device_nomigrate().
Backports commit b59821a95bd1d7cb4697fd7748725c910582e0e7 from qemu
Rename memory_region_init_ram() to memory_region_init_ram_nomigrate().
This leaves the way clear for us to provide a memory_region_init_ram()
which does handle migration.
Backports commit 1cfe48c1ce219b60a9096312f7a61806fae64ab3 from qemu
The various functions for initializing RAM MemoryRegions do not do
anything to cause the data in the MemoryRegion to be migrated.
Note in their documentation comments that this is the responsibility
of the caller.
(We will shortly add a new function that *does* do this for you.)
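For instance, a board that wants its RAM migrated would pair the init with an explicit registration, along these lines (a sketch, using the _nomigrate name from the neighbouring commits and the usual vmstate helper):

    memory_region_init_ram_nomigrate(mr, owner, "board.ram", size,
                                     &error_fatal);
    vmstate_register_ram(mr, dev); /* migration is the caller's job */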
Backports commit a5c0234bb2754f5248e67929a34c843dbe039da5 from qemu
Add a documentation comment for memory_region_allocate_system_memory().
In particular, the reason for this function's existence and the
requirement on board code to call it exactly once are non-obvious.
Backports commit 09ad643823dcda0a86eddce1291c28d0ccb09a3b from qemu
A link's check callback is supposed to verify/permit setting it;
however, currently nothing restricts the callback from misusing it
and modifying the target object from within.
Make sure that read-only semantics are checked by the compiler
to prevent the callback's misuse.
Backports commit 8f5d58ef2c92d7b82d9a6eeefd7c8854a183ba4a from qemu
Now that we have proper locking after MTTCG patches have landed, we
can revert the commit. This reverts commit
a9353fe897ca2687e5b3385ed39e3db3927a90e0.
Backports commit 406bc339b0505fcfc2ffcbca1f05a3756e338a65 from qemu
This warning is included in -Wall by clang, but not by GCC (which only
enables it for -Wextra). Include it in the list of warnings we enable
to minimize the differences between the compilers.
Backports commit b98fcfd8840f290c406c32301340e96f00238a93 from qemu
The gen_ prefix is awkward. Generated C should go through cgen()
exactly once (see commit 1f9a7a1). The common way to get this wrong is
passing a foo=gen_foo() keyword argument to mcgen(). I'd like us to
adopt a naming convention where gen_ means "something that's been piped
through cgen(), and thus must not be passed to cgen() or mcgen()".
Requires renaming gen_params(), gen_marshal_proto() and
gen_event_send_proto().
Backports commit 086ee7a6200fa5ad795b12110b5b3d5a93dcac3e from qemu
This patch fixes the msa copy_[s|u]_df instruction emulation when
the destination register rd is zero. Without this patch the zero
register would get clobbered, which should never happen because it
is supposed to be hardwired to 0.
Fix this corner case by explicitly checking for rd == 0 and effectively
making the emulation of these instructions a no-op in that case.
Backports commit cab4888136a92250fdd401402622824994f7ce0b from qemu
When running a helloworld program with qemu-i386 in linux-user
mode on a Loongson 3A3000, it crashes. This patch fixes the bug.
Backports commit 8b8d768f19037a825a0bc81654492caa7c8fab8b from qemu
Clang generates the following warning on aarch64 host:
  CC util/cacheinfo.o
/home/pranith/qemu/util/cacheinfo.c:121:48: warning: value size does not match register size specified by the constraint and modifier [-Wasm-operand-widths]
        asm volatile("mrs\t%0, ctr_el0" : "=r"(ctr));
                                               ^
/home/pranith/qemu/util/cacheinfo.c:121:28: note: use constraint modifier "w"
        asm volatile("mrs\t%0, ctr_el0" : "=r"(ctr));
                           ^~
                           %w0
Constraint modifier 'w' is not (yet?) accepted by gcc. Fix this by increasing the ctr size.
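The fix is simply to widen the variable so that "=r" matches the 64-bit general register; a sketch:

    uint64_t ctr; /* previously a 32-bit type */
    asm volatile("mrs\t%0, ctr_el0" : "=r"(ctr));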
Backports commit 2ae96c157ab3155baf6595c08cf5d3fe3c023a60 from qemu
This patch enables the indirect jump path using an LDR (literal)
instruction. It will be interesting to test and see which performs
better among the two paths.
Backports commit 2acee8b2b5e6bba2935bb6ce5be92d0f0f9799cb from qemu
We use ADRP+ADD to compute the target address for goto_tb. This patch
introduces the NOP instruction which is used to align the above
instruction pair so that we can use one atomic instruction to patch
the destination offsets.
Backports commit b68686bd4bfeb70040b4099df993dfa0b4f37b03 from qemu
We can use a branch to register instruction for exit_tb for offsets
greater than 128MB.
Backports commit 23b7aa1d2af04ba57cc94f74d9f0ab25dce72fa0 from qemu
Move cpu_get_fp80()/cpu_set_fp80() from fpu_helper.c to
machine.c because fpu_helper.c will be disabled if tcg is
disabled in the build.
Backports commit db573d2cf7ae6b5a4fc324be6f55e078fc218464 from qemu
In unicorn's case, they can be moved into unicorn.c.
Move cpu_sync_bndcs_hflags() function from mpx_helper.c
to helper.c because mpx_helper.c needs to be disabled when
tcg is disabled.
Backports commit ab0a19d4f08d924e052eb369420d264240872f8a from qemu
Add CONFIG_TCG around TLB-related functions and structure declarations.
Some of these functions are defined in ./accel/tcg/cputlb.c, which will
not be linked in if TCG is disabled, and have no stubs; therefore, their
callers will also be compiled out for --disable-tcg.
Backports commit b11ec7f2e44b285a3967d629b55d1a6970b06787 from qemu
This lets you build without TCG (hardware acceleration or qtest only). When
this flag is passed to configure, it will automatically filter out the target
list to only those that support KVM or Xen or HAX.
Backports commit b3f6ea7e55e8228d6f84d5cee7cb11cae917ba95 from qemu
translate-all.c will be disabled if tcg is disabled in the build,
so the page_size_init() function and related variables are moved
to exec.c.
Backports commit a0be0c585f5dcc4d50a37f6a20d3d625c5ef3a2c from qemu
Commit 1f5c00cfdb8114c ("qom/cpu: move tlb_flush to cpu_common_reset")
moved the call to tlb_flush() from the target-specific reset handlers
into the common code qom/cpu.c file, and protected the call with
"#ifdef CONFIG_SOFTMMU" to avoid that it is called for linux-user
only targets. But since qom/cpu.c is common code, CONFIG_SOFTMMU is
*never* defined here, so the tlb_flush() was simply never executed
anymore. Fix it by introducing a wrapper for tlb_flush() in a file
that is re-compiled for each target, i.e. in translate-all.c.
Backports commit 2cd53943115be5118b5b2d4b80ee0a39c94c4f73 from qemu
Move the handling of conforming code segments before the handling
of stack switch.
Because dpl == cpl after the new "if", it's now unnecessary to check
the C bit when testing dpl < cpl. Furthermore, dpl > cpl is checked
slightly above the modified code, so the final "else" is unreachable
and we can remove it.
Backports commit 1110bfe6f5600017258fa6578f9c17ec25b32277 from qemu
In do_interrupt64(), when the interrupt stack table (IST) is enabled
and the target code segment is conforming (e2 & DESC_C_MASK), the
old implementation always set the new CPL to 0, and SS.RPL to 0.
This is incorrect: when CPL3 code accesses a CPL0 conforming code
segment, the CPL should remain unchanged; otherwise higher-privileged
code can be compromised.
The patch fixes this by always setting dpl = cpl when the target code
segment is conforming, and by modifying the last parameter `flags`,
which contains the correct new CPL, in cpu_x86_load_seg_cache().
Backports commit e95e9b88ba5f4a6c17f4d0c3a3a6bf3f648bb328 from qemu
Some code paths can lead to atomic accesses racing with memset()
on cpu->tb_jmp_cache, which can result in torn reads/writes
and is undefined behaviour in C11.
These torn accesses are unlikely to show up as bugs, but from code
inspection they seem possible. For example, tb_phys_invalidate does:
    /* remove the TB from the hash list */
    h = tb_jmp_cache_hash_func(tb->pc);
    CPU_FOREACH(cpu) {
        if (atomic_read(&cpu->tb_jmp_cache[h]) == tb) {
            atomic_set(&cpu->tb_jmp_cache[h], NULL);
        }
    }
Here atomic_set might race with a concurrent memset (such as the
ones scheduled via "unsafe" async work, e.g. tlb_flush_page) and
therefore we might end up with a torn pointer (or who knows what,
because we are under undefined behaviour).
This patch converts parallel accesses to cpu->tb_jmp_cache to use
atomic primitives, thereby bringing these accesses back to defined
behaviour. The price to pay is to potentially execute more instructions
when clearing cpu->tb_jmp_cache, but given how infrequently they happen
and the small size of the cache, the performance impact I have measured
is within noise range when booting debian-arm.
Note that under "safe async" work (e.g. do_tb_flush) we could use memset
because no other vcpus are running. However I'm keeping these accesses
atomic as well to keep things simple and to avoid confusing analysis
tools such as ThreadSanitizer.
Backports commit f3ced3c59287dabc253f83f0c70aa4934470c15e from qemu
We are relying on cpu_env being defined as a global, yet most
targets (i.e. all but arm/a64) have it defined as a local variable.
Luckily all of them use the same "cpu_env" name, but really
compilation shouldn't break if the name of that local variable
changed.
Fix it by using tcg_ctx.tcg_env, which all targets set in their
translate_init function. This change also helps paving the way
for the upcoming "translation loop common to all targets" work.
Backports commit 53f6672bcf57d82b794a2cc3a3469be7d35c8653 from qemu
Add fsabs, fdabs, fsneg, fdneg, fsmove and fdmove.
The value is converted using the new floatx80_round() function.
Backports commit 77bdb2292492fafc4bc0fbb4d8c44fdd0ef1fa8e from qemu
Add a function to round a floatx80 to the defined precision
(floatx80_rounding_precision)
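For reference, the new function's signature (a sketch):

    floatx80 floatx80_round(floatx80 a, float_status *status);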
Backports commit 0f72129281765ed64d26353284059f2bdcde7a23 from qemu
fmovecr moves a floating point constant from the
FPU ROM to a floating point register.
Backports commit 9d403660d91229922c2786e81c23cc9dd8e644f1 from qemu
This may be used for deprecated object properties that are kept for
backwards compatibility.
Backports commit a733371214b68881d84725a3c71f60e2faf3b8e2 from qemu
This replaces the env1 and page_index variables with env and index
so we can use the VICTIM_TLB_HIT macro later.
Backports commit 3416343255cbe01fbe12e5e36cd4bb5042425b27 from qemu
Coldfire uses float64, but 680x0 use floatx80.
This patch introduces the use of floatx80 internally
and enables the 680x0 80-bit FPU.
Backports commit f83311e4764f1f25a8abdec2b32c64483be1759b from qemu
Switch to use QNum/uint where appropriate to remove i64 limitation.
The input visitor will cast i64 input to u64 for compatibility
reasons (existing json QMP clients already use negative i64 for large
u64, and expect an implicit cast in qemu).
Note: before the patch, uint64_t values above INT64_MAX are sent over
json QMP as negative values, e.g. UINT64_MAX is sent as -1. After the
patch, they are sent unmodified. Clearly a bug fix, but we have to
consider compatibility issues anyway. libvirt should cope fine,
because its parsing of unsigned integers accepts negative values
modulo 2^64. There's hope that other clients will, too.
Backports commit 5923f85fb82df7c8c60a89458a5ae856045e5ab1 from qemu
In order to store integer values between INT64_MAX and UINT64_MAX, add
a uint64_t internal representation.
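The internal representation becomes a small tagged union, roughly:

    typedef enum { QNUM_I64, QNUM_U64, QNUM_DOUBLE } QNumKind;

    struct QNum {
        QObject base;
        QNumKind kind;
        union {
            int64_t i64;
            uint64_t u64;
            double dbl;
        } u;
    };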
Backports commit 61a8f418b26a2d974e38e4ae55020aca8d402d88 from qemu
Before the previous commit, parameter promote_int = true made
visit_start_alternate() with an input visitor avoid QTYPE_QINT
variants and create QTYPE_QFLOAT variants instead. This was used
where QTYPE_QINT variants were invalid.
The previous commit fused QTYPE_QINT with QTYPE_QFLOAT, rendering
promote_int useless and unused.
Backports commit 60390d2dc85ffade8981ca41e02335cb07353a6d from qemu
We would like to use the same QObject type to represent numbers, whether
they are int, uint, or floats. Getters will allow some compatibility
between the various types if the number fits other representations.
Add a few more tests while at it.
Backports commit 01b2ffcedd94ad7b42bc870e4c6936c87ad03429 from qemu
QAPI_CLONE() returns a newly allocated QAPI object. Inconvenient when
we want to clone into an existing object. QAPI_CLONE_MEMBERS() does
exactly that.
Backports commit 4626a19c86c30d96cedbac2bd44ef8103303cb37 from qemu
Rather than making lots of callers wrap a scalar in a QInt, QString,
or QBool, provide helper macros that do the wrapping automatically.
Update the Coccinelle script to make mass conversions easy, although
the conversion itself will be done as separate patches to ease
review and backport efforts.
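For example (the helper names match QEMU's qdict API; attributing them to this exact commit is inferred from the message):

    /* before: wrap the scalar by hand */
    qdict_put(dict, "answer", qint_from_int(42));
    /* after: the helper macro does the wrapping */
    qdict_put_int(dict, "answer", 42);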
Backports commit a92c21591b5bb9543996538f14854ca6b528318b from qemu
Visiting a list when input is the empty string should result in an
empty list, not an error. Noticed when commit 3d089ce belatedly added
tests, but simply accepted as weird then. It's actually a regression:
broken in commit 74f24cb, v2.7.0. Fix it, and throw in another test
case for empty string.
Backports commit d2788227c6185c72d88ef3127e9fed41686f8e39 from qemu
We can call tb_htable_lookup even when the tb_jmp_cache is completely
empty. Therefore, un-nest most of the code dependent on tb != NULL
from the read from the cache.
This improves the hit rate of lookup_tb_ptr; for instance, when booting
and immediately shutting down debian-arm, the hit rate improves from
93.2% to 99.4%.
Backports commit b97a879de980e99452063851597edb98e7e8039c from qemu
The new placement of the TB means that we can use one insn
to load the goto_tb destination directly from the TB.
Backports commit 308714e6bc945389c64faf1b9213e2c0d3f03391 from qemu
Since we're no longer using a direct branch, we have no
limit on the branch distance.
Backports commit acb0b292b6d0f49972dc98f742e79ed53973e438 from qemu
The new placement of the TB means that we can use one insn
to load the return value for exit_tb returning the TB pointer.
Backports commit cc74d332ff9a78684374847375ef63fc4bd10436 from qemu
We are partially initializing tb in tb_alloc. Instead, fully
initialize it in tb_gen_code, which is tb_alloc's only caller.
This saves an unnecessary write to tb->cflags.
Backports commit 2b48e10f888059a98043b4816769fa2a326a1d2c from qemu
Allocating an arbitrarily-sized array of tbs results in either
(a) a lot of memory wasted or (b) unnecessary flushes of the code
cache when we run out of TB structs in the array.
An obvious solution would be to just malloc a TB struct when needed,
and keep the TB array as an array of pointers (recall that tb_find_pc()
needs the TB array to run in O(log n)).
Perhaps a better solution, which is implemented in this patch, is to
allocate TB's right before the translated code they describe. This
results in some memory waste due to padding to have code and TBs in
separate cache lines--for instance, I measured 4.7% of padding in the
used portion of code_gen_buffer when booting aarch64 Linux on a
host with 64-byte cache lines. However, it can allow for optimizations
in some host architectures, since TCG backends could safely assume that
the TB and the corresponding translated code are very close to each
other in memory. See this message by rth for a detailed explanation:
https://lists.gnu.org/archive/html/qemu-devel/2017-03/msg05172.html
Subject: Re: GSoC 2017 Proposal: TCG performance enhancements
Backports commit 6e3b2bfd6af488a896f7936e99ef160f8f37e6f2 from qemu
Add helpers to gather cache info from the host at init-time.
For now, only export the host's I/D cache line sizes, which we
will use to improve cache locality to avoid false sharing.
Backports commit b255b2c8a5484742606e8760870ba3e14d0c9605 from qemu
V flag for subtraction is:
v = (res ^ src1) & (src1 ^ src2)
(see COMPUTE_CCR() in target/m68k/helper.c)
But gen_flush_flags() uses:
v = (res ^ src2) & (src1 ^ src2)
The problem has been found with the following program:
    .global _start
_start:
    move.l  #-2147483648,%d0
    subq.l  #1,%d0
    jvc     1f
    move.l  #1,%d1
    move.l  #1,%d0
    trap    #0
1:
    move.l  #0,%d1
    move.l  #1,%d0
    trap    #0
It works fine (exit(1)) on real hardware, and with "-singlestep".
"-singlestep" uses gen_helper_flush_flags(), whereas
without "-singlestep", V flag is computed directly in
gen_flush_flags().
This patch updates gen_flush_flags() to have the same result
as with gen_helper_flush_flags().
Backports commit 043b936ef6fe53396b3c6b8f5562ea3e238a071d from qemu
Running Windows with icount causes a crash in an instruction that
writes CR. This patch fixes it.
Reading and writing CR cause an icount read because the
cpu_get_apic_tpr and cpu_set_apic_tpr functions are called, so
gen_io_start()/gen_io_end() calls are needed.
Backports commit 5b003a40bb1ab14d0398e91f03393d3c6b9577cd from qemu
This speeds up SMM switches. Later on it may remove the need to take
the BQL, and it may also allow to reuse code between TCG and KVM.
Backports commit f8c45c6550b9ff1e1f0b92709ff3213a79870879 from qemu
It really only plays with the dispatchers, so the parameter list does
not need that complexity. This helps readability, at least.
Backports commit 003a0cf2cd1828a1141a874428571267b117f765 from qemu
In theory this would re-enable usage of QEMU on an armv4 host.
Whether this is worthwhile is debatable -- we've been unconditionally
issuing the armv5t BX instruction in the prologue since 2011 without
complaint. Possibly we should simply require an armv6 host.
Backports commit 702a947484eb3e615183dafc93de590ab0679f60 from qemu
Instead of unconditionally exiting to the exec loop, use the
gen_jr helper to jump to the target if it is valid.
Perf impact: see next commit's log.
Backports commit fe62089563ffc6a42f16ff28a6b6be34d2697766 from qemu
Instead of unconditionally exiting to the exec loop, use the
lookup_and_goto_ptr helper to jump to the target if it is valid.
Perf impact: see next commit's log.
Backports commit 7ad55b4ffd982c80f26f7f3658138d94cdc678e8 from qemu