This patch implements the "virtio_is_big_endian" function pointer
from the "CPUClass" structure for arm/arm64.
The function arm_cpu_is_big_endian() is added to determine and
return the guest CPU endianness to virtio.
This is required for running cross endian guests with virtio on ARM/ARM64.
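A minimal, standalone sketch of the check such a hook has to make (bit
positions per the ARM ARM; the struct and field names here are illustrative,
not QEMU's, which reads CPUARMState instead):

#include <stdbool.h>
#include <stdint.h>

struct guest_cpu_view {
    bool aarch64;        /* is the guest executing in AArch64 state? */
    uint32_t cpsr;       /* AArch32 program status register */
    uint64_t sctlr_el1;  /* AArch64 system control register */
};

static bool guest_is_big_endian(const struct guest_cpu_view *cpu)
{
    if (!cpu->aarch64) {
        return (cpu->cpsr >> 9) & 1;    /* CPSR.E: AArch32 data endianness */
    }
    return (cpu->sctlr_el1 >> 25) & 1;  /* SCTLR_EL1.EE for EL1 accesses */
}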
Backports commit 84f2bed3cf505f90b7918e2de32e11da27160563 from qemu
A few of the oldest parts of the page-table-walk code have broken indent
(either hardcoded tabs or two-spaces). Reindent these sections.
For ease of review, this patch does not touch the brace style and
so is a whitespace-only change.
Backports commit 554b0b09aec4579c8164f363b18a263150e91a2c from qemu
Now we have the mmu_idx in get_phys_addr(), use it correctly to
determine the behaviour of virtual to physical address translations,
rather than using just an is_user flag and the current CPU state.
Some TODO comments have been added to indicate where changes will
need to be made to add EL2 and 64-bit EL3 support.
Backports commit 0480f69abf849ca0d48928cc6c669c1c7264239b from qemu
Make all the callers of get_phys_addr() pass it the correct
mmu_idx rather than just a simple "is_user" flag. This includes
properly decoding the AT/ATS system instructions; we include the
logic for handling all the opc1/opc2 cases because we'll need
them later for supporting EL2/EL3, even if we don't have the
regdef stanzas yet.
Backports commit d364970287c0ba68979711928c15e5d37414f87f from qemu
Instead of simply reusing ats_write() as the handler for both AArch32
and AArch64 address translation operations, use a different function
for each with the common code in a third function. This is necessary
because the semantics for selecting the right translation regime are
different; we are only getting away with sharing currently because
we don't support EL2 and only support EL3 in AArch32.
Backports commit 060e8a48cb84d41d4ac36e4bb29d9c14ed7168b6 from qemu
target-arm doesn't use any of the MMU-mode specific cpu ldst
accessor functions. Suppress their generation by not defining
any of the MMU_MODE*_SUFFIX macros. ("user" and "kernel" are
too simplistic as descriptions of indexes 0 and 1 anyway.)
Backports commit 0dfef7b58f0c24b463e36630f08a45e93012b33a from qemu
The MMU index to use for unprivileged loads and stores is more
complicated than we currently implement:
* for A64, it should be "if at EL1, access as if EL0; otherwise
access at current EL"
* for A32/T32, it should be "if EL2, UNPREDICTABLE; otherwise
access as if at EL0".
In both cases, if we want to make the access for Secure EL0
this is not the same mmu_idx as for Non-Secure EL0.
Backports commit 579d21cce63f3dd2f6ee49c0b02a14e92cb4a836 from qemu
We currently claim that for ARM the mmu_idx should simply be the current
exception level. However this isn't actually correct -- secure EL0 and EL1
should have separate indexes from non-secure EL0 and EL1 since their
VA->PA mappings may differ. We also will want an index for stage 2
translations when we properly support EL2.
Define and document all seven mmu index values that we require, and
pass the mmu index in the TB flags rather than exception level or
priv/user bit.
This change doesn't update the get_phys_addr() code, so our page
table walking still assumes a simplistic "user or priv?" model for
the moment.
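A sketch of the resulting assignment, with one index per translation regime
(values follow the seven regimes described above; the names are modeled on
QEMU's ARMMMUIdx_* and should be treated as illustrative):

typedef enum {
    MMU_IDX_S12NSE0 = 0,  /* Non-secure EL0, stage 1+2 */
    MMU_IDX_S12NSE1 = 1,  /* Non-secure EL1, stage 1+2 */
    MMU_IDX_S1E2    = 2,  /* EL2 */
    MMU_IDX_S1E3    = 3,  /* EL3 */
    MMU_IDX_S1SE0   = 4,  /* Secure EL0 */
    MMU_IDX_S1SE1   = 5,  /* Secure EL1 */
    MMU_IDX_S2NS    = 6,  /* Non-secure stage 2 */
} arm_mmu_idx_sketch;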
Backports commit c1e3781090b9d36c60e1a254ba297cb34011d3d4 from qemu
Support guest CPUs which need 7 MMU index values.
Add a comment about what would be required to raise the limit
further (trivial for 8, TCG backend rework for 9 or more).
Backports commit 8f3ae2ae2d02727f6d56610c09d7535e43650dd4 from qemu
Although M profile doesn't have the same concept of exception level
as A profile, it does have a notion of privileged versus not, which
we currently track in the privmode TB flag. Support returning this
information if arm_current_el() is called on an M profile core, so
that we can identify the correct MMU index to use (and put the MMU
index in the TB flags) without having to special-case M profile.
Backports commit 6d54ed3c93f1e05a483201b087142998381c9be8 from qemu
The documentation states that if LSB > MSB, the behaviour of the BFI
instruction is unpredictable. Currently QEMU crashes because of an
assertion failure in this case:
tcg/tcg-op.h:2061: tcg_gen_deposit_i32: Assertion `len <= 32' failed.
While an assertion failure may meet the "unpredictable" definition, this
behaviour is undesirable because it allows an unprivileged guest program
to crash the emulator along with the OS and other programs.
This patch addresses the issue by throwing illegal instruction exception
if LSB > MSB. Only ARM decoder is affected because Thumb decoder already
has this check in place.
To reproduce the issue, run the following program:
int main(void) {
    asm volatile (".long 0x07c00c12" :: );
    return 0;
}
compiled with
gcc -marm -static badop_arm.c -o badop_arm
Backports commit 45140a57675ecb4b0daee71bf145c24dbdf9429c from qemu
The helper functions for FRECPS and FRSQRTS have special case
handling that includes checks for zero inputs, so squash input
denormals if necessary before those checks. This fixes incorrect
output when the FPCR DZ bit is set to enable squashing of input
denormals.
Backports commit a8eb6e19991d1a7a6a7b04ac447548d30d75eb4a from qemu
Add assertion checking when cpreg structures are registered that they
either forbid raw-access attempts or at least make an attempt at
handling them. Also add an assert in the raw-accessor-of-last-resort,
to avoid silently doing a read or write from offset zero, which is
actually AArch32 CPU register r0.
Backports commit 375421ccaeebae8212eb8f9a36835ad4d9dc60a8 from qemu
We currently mark ARM coprocessor/system register definitions with
the flag ARM_CP_NO_MIGRATE for two different reasons:
1) register is an alias on to state that's also visible via
some other register, and that other register is the one
responsible for migrating the state
2) register is not actually state at all (for instance the TLB
or cache maintenance operation "registers") and it makes no
sense to attempt to migrate it or otherwise access the raw state
This works fine for identifying which registers should be ignored
when performing migration, but we also use the same functions for
synchronizing system register state between QEMU and the kernel
when using KVM. In this case we don't want to try to sync state
into registers in category 2, but we do want to sync into registers
in category 1, because the kernel might have picked a different
one of the aliases as its choice for which one to expose for
migration. (In particular, on 32 bit hosts the kernel will
expose the state in the AArch32 version of the register, but
TCG's convention is to mark the AArch64 version as the version
to migrate, even if the CPU being emulated happens to be 32 bit,
so almost all system registers will hit this issue now that we've
added AArch64 system emulation.)
Fix this by splitting the NO_MIGRATE flag in two (ALIAS and NO_RAW)
corresponding to the two different reasons we might not want to
migrate a register. When setting up the TCG list of registers to
migrate we honour both flags; when populating the list from KVM,
only ignore registers which are NO_RAW.
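A small standalone sketch of that filtering rule (flag and function names
here are illustrative, not QEMU's exact code):

#include <stdbool.h>

enum { CP_FLAG_ALIAS = 1 << 0, CP_FLAG_NO_RAW = 1 << 1 };

/* Decide whether a register belongs in a raw-access list. */
static bool include_reg(int flags, bool for_tcg_migration)
{
    if (flags & CP_FLAG_NO_RAW) {
        return false;   /* not real state: never raw-access it */
    }
    if (for_tcg_migration && (flags & CP_FLAG_ALIAS)) {
        return false;   /* some other definition migrates this state */
    }
    return true;        /* KVM sync keeps aliases, drops only NO_RAW */
}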
Backports commit 7a0e58fa648736a75f2a6943afd2ab08ea15b8e0 from qemu
Update arm_cpu_reset() to reset into the highest available exception level
based on the set ARM features.
Backports commit 5097227c15aa89baec1123aac25dd9500a62684d from qemu
Added RVBAR_EL2 and RVBAR_EL3 CP register support. All RVBAR_EL# registers
point to the same location and only the highest EL version exists at any one
time.
Backports commit be8e8128595b41b9f609c1507e67d121e65e7173 from qemu
The crypto emulation code in target-arm/crypto_helper.c never worked
correctly on big endian hosts, due to the fact that it uses a union
of array types to convert between the native VFP register size (64
bits) and the types used in the algorithms (bytes and 32 bit words).
We cannot just swab between LE and BE when reading and writing the
registers, as the SHA code performs word additions, so instead, add
array accessors for the CRYPTO_STATE type whose LE and BE specific
implementations ensure that the correct array elements are referenced.
Backports commit b449ca3c1874418d948878d5417a32fc0dbf9fea from qemu
Added a "has_el3" state property to the ARMCPU descriptor. This property
indicates whether the ARMCPU has security extensions enabled (EL3) or not.
By default it is disabled at this time.
Backports commit 51942aee3c51ca23b0dd78f95534a57e8dc1e582 from qemu
Add an unset_feature() function to complement the set_feature() function. This
will be used to disable features after they have been enabled during
initialization.
Backports commit 08828484a5c1ec55a6cbb4b4d377bfcf41199b5c from qemu
Merge of the v8_el2_cp_reginfo and el3_cp_reginfo ARMCPRegInfo lists.
Previously, some EL3 registers were restricted to the ARMv8 list under the
impression that they were not needed on ARMv7. However, this is not the case
as the ARMv7/32-bit variants rely on the ARMv8/64-bit variants to handle
migration and reset. For this reason they must always exist.
Backports commit 60fb1a87b47b14e4ea67043aa56f353e77fbd70a from qemu
When EL3 is running in AArch32 (or ARMv7 with Security Extensions)
FCSEIDR, CONTEXTIDR, TPIDRURW, TPIDRURO and TPIDRPRW have a secure
and a non-secure instance.
Backports commit 54bf36ed351c526cde0c853079f9ff1ab7e2ff89 from qemu
When EL3 is running in AArch32 (or ARMv7 with Security Extensions)
VBAR has a secure and a non-secure instance, which are mapped to
VBAR_EL1 and VBAR_EL3.
Backports commit fb6c91ba2bb0b1c1b8662ceeeeb9474a025f9a6b from qemu
When EL3 is running in AArch32 (or ARMv7 with Security Extensions)
PAR has a secure and a non-secure instance.
Backports commit 01c097f7960b330c4bf038d34bae17ad6c1ba499 from qemu
When EL3 is running in AArch32 (or ARMv7 with Security Extensions)
IFAR and DFAR have a secure and a non-secure instance.
Backports commit b848ce2b9cbd38da3f2530fd93dba76dba0621c0 from qemu
When EL3 is running in AArch32 (or ARMv7 with Security Extensions)
DFSR has a secure and a non-secure instance.
Backports commit 4a7e2d7315bd2ce28e49ccd0bde73eabdfd7437b from qemu
When EL3 is running in AArch32 (or ARMv7 with Security Extensions)
IFSR has a secure and a non-secure instance. Adds IFSR32_EL2 definition and
storage.
Backports commit 88ca1c2d70523486a952065f3ed7b8fc823b5863 from qemu
When EL3 is running in AArch32 (or ARMv7 with Security Extensions)
DACR has a secure and a non-secure instance. Adds definition for DACR32_EL2.
Backports commit 0c17d68c1d3d6c35f37f5692042d2edb65c8bcc0 from qemu
Adds secure and non-secure banked register support for TTBCR.
Added new struct to compartmentalize the TCR data and masks. Removed old
tcr/ttbcr data and added a 4 element array of the new structs in cp15. This
allows for one entry per EL. Added a CP register definition for TCR_EL3.
Backports commit 11f136ee25232a00f433cefe98ee33cd614ecccc from qemu
Adds secure and non-secure banked register support for TTBR0 and TTBR1.
Changes include adding secure and non-secure instances of ttbr0 and ttbr1 as
well as a CP register definition for TTBR0_EL3. Added a union containing
both EL based array fields and secure and non-secure fields mapped to them.
Updated accesses to use A32_BANKED_CURRENT_REG_GET macro.
Backports commit 7dd8c9af0d9d18fb3e54a4843b3bb1398bd330bc from qemu
Add checks of SCR AW/FW bits when performing writes of CPSR. These SCR bits
are used to control whether the CPSR masking bits can be adjusted from
non-secure state.
Backports commit 6e8801f9dea9e10449f4fd7d85dbe8cab708a686 from qemu
Use MVBAR register as exception vector base address for
exceptions taken to CPU monitor mode.
Backports commit e89e51a17ea0d8aef9bf9b766c98f963e835fbf2 from qemu
Added CP register definitions for SDER and SDER32_EL3 as well as cp15.sder for
register storage.
Backports commit 144634ae6c1618dcee6aced9c0d4427844154091 from qemu
Implements NSACR register with corresponding read/write functions
for ARMv7 and ARMv8.
Backports commit 770225764f831031d2e1453f69c365eb1b647d87 from qemu
The SCR.{IRQ/FIQ} bits allow routing of IRQ/FIQ exceptions to monitor CPU
mode. When an IRQ exception is taken to monitor mode, the FIQ exception is
additionally masked.
Backports commit de38d23b542efca54108ef28bcc0efe96f378d2e from qemu
Define a new ARM CP register info list for the ARMv7 Security Extension
feature. Register that list only for ARM cores with Security Extension/EL3
support. Move AArch32 SCR into the Security Extension register group.
Backports commit 0f1a3b2470d798ad5335eb9d6236f02ff64e31a8 from qemu
Prepare for cp register banking by inserting every cp register twice,
once for secure world and once for non-secure world.
Backports commit 3f3c82a57d128aa3ec823aa8032867c3a6e2e795 from qemu
Added an additional NS-bit to the CPREG hash encoding. Updated hash lookup
locations to specify the hash bit, currently set to non-secure.
Backports commit 51a79b039728277e35fd19f7a7b4bc6cb323697f from qemu
Prepare ARMCPRegInfo to support specifying two fieldoffsets per
register definition. This will allow us to keep one register
definition for banked registers (different offsets for secure/
non-secure world).
Also added a secure state tracking field and flags. This allows for
identification of the register info's secure state.
Backports commit c3e302606253a17568dc3ef30238f102468f7ee1 from qemu
This patch is based on idea found in patch at
git://github.com/jowinter/qemu-trustzone.git
f3d955c6c0ed8c46bc0eb10b634201032a651dd2 by
Johannes Winter <johannes.winter@iaik.tugraz.at>.
The TBFLAG captures the SCR NS secure state at the time when a TB is created so
the correct bank is accessed on system register accesses.
Backports commit 3f342b9e0e64ad681cd39840bfa75ef12d2807c1 from qemu
If EL3 is in AArch32 state certain cp registers are banked (secure and
non-secure instance). When reading or writing to coprocessor registers
the following macros can be used.
- A32_BANKED macros are used for choosing the banked register based on provided
input security argument. This macro is used to choose the bank during
translation of MRC/MCR instructions that are dependent on something other
than the current secure state.
- A32_BANKED_CURRENT macros are used for choosing the banked register based on
current secure state. This is NOT to be used for choosing the bank used
during translation as it breaks monitor mode.
If EL3 is operating in AArch64 state coprocessor registers are not
banked anymore. The macros use the non-secure instance (_ns) in this
case, which is architecturally mapped to the AArch64 EL register.
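A simplified sketch of the two accessor shapes (modeled on the behaviour
described above; the field suffixes and helper names are assumptions, not
necessarily QEMU's exact definitions):

/* Pick a bank from an explicit secure argument, e.g. while translating
 * MRC/MCR when the bank does not follow the current security state. */
#define A32_BANKED_REG_GET(env, regname, secure) \
    ((secure) ? (env)->cp15.regname##_s : (env)->cp15.regname##_ns)

/* Pick the bank from the current security state; runtime use only, never
 * at translation time (see the note above about monitor mode). */
#define A32_BANKED_CURRENT_REG_GET(env, regname) \
    A32_BANKED_REG_GET((env), regname, \
                       arm_is_secure(env) && !arm_el_is_aa64(env, 3))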
Backports commit ea30a4b824ecc3c829b70eb9999ac5457dc5790f from qemu
Adds a dedicated function and a lookup table for determining the target
exception level of IRQ and FIQ exceptions. The lookup table is taken from the
ARMv7 and ARMv8 specification exception routing tables.
Backports commit 0eeb17d618361a0f4faddc160e33598b23da6dd5 from qemu
This patch extends arm_excp_unmasked() to use lookup tables for determining
whether IRQ and FIQ exceptions are masked. The lookup tables are based on the
ARMv8 and ARMv7 specification physical interrupt masking tables.
If EL3 is using AArch64, IRQ/FIQ masking is ignored in all exception levels
other than EL3 if SCR.{FIQ|IRQ} is set to 1 (routed to EL3).
Backports commit 57e3a0c7cb0ac2f0288890482e0a463adce2080a from qemu
Using rs = -1 in gen_logic_imm() for microMIPS LUI instruction is dangerous
and may bite us when implementing microMIPS R6 because in R6 AUI and LUI
are distinguished by rs value. Therefore use 0 for safety.
Backports commit 5e88759a52934a32502298f2c78c6dfaa144364b from qemu
The test is supposed to terminate the TB if the end of the page is reached.
However, with the current implementation it may never succeed for microMIPS or
mips16.
Backports commit fe2372910a09034591fd2cfc2d70cca43fccaa95 from qemu
Commit fecd264 added a number of fall-throughs, but neglected to
properly document them as intentional. Commit d922445 cleaned that up
for many, but not all cases. Take care of the remaining ones.
Backports commit b6f3b233eabb4df5d65ae9fbfb3d3c8befea0de7 from qemu
Reduce line wrapping throughout MSA helper macros by using a local float
status pointer rather than referring to the float status through the
environment each time. No functional change.
Backports commit 1a4d570017bf35d99340781ecb59dd3772464031 from qemu
Add missing calls to synchronise the SoftFloat status with the CP1.FCSR:
+ for the rounding and flush-to-zero modes upon processor reset,
+ for the flush-to-zero mode on FCSR updates through the GDB stub.
Refactor code accordingly and remove the redundant RESTORE_ROUNDING_MODE
macro.
Backports commit bb962386b82c1b0e9e12fdb6b9bb62106bf1f822 from qemu
Make CP0.Status writes made with the MTTC0 instruction respect this
register's mask just like all the other places. Also preserve the
current values of masked out bits.
Backports commit 1d725ae952a14b30c84b7bc81b218b8ba77dd311 from qemu
Make sure the address space is unconditionally wrapped on 32-bit
processors, that is ones that do not implement at least the MIPS III
ISA.
Also make MIPS16 SAVE and RESTORE instructions use address calculation
rather than plain arithmetic operations for stack pointer manipulation
so that their semantics for stack accesses follows the architecture
specification. That in particular applies to user software run on
64-bit processors with the CP0.Status.UX bit clear where the address
space is wrapped to 32 bits.
Backports commit c48245f0c62405f27266fcf08722d8c290520418 from qemu
Tighten ISA level checks down to MIPS II that many of our instructions
are missing. Also make sure any 64-bit instruction enables are only
applied to 64-bit processors, that is ones that implement at least the
MIPS III ISA.
Backports commit d9224450208e0de62323b64ace91f98bc31d6e2c from qemu
Fix CP0.Config3.ISAOnExc write accesses on microMIPS processors. This
bit is mandatory for any processor that implements the microMIPS
instruction set. This bit is r/w for processors that implement both the
standard MIPS and the microMIPS instruction set. This bit is r/o and
hardwired to 1 if only the microMIPS instruction set is implemented.
There is no other bit ever writable in CP0.Config3, so defining a
corresponding `CP0_Config3_rw_bitmask' member in `CPUMIPSState' would,
I think, be overkill. Therefore make the ability to write the bit rely on
the presence of ASE_MICROMIPS set in the instruction flags.
The read-only case, where only the microMIPS instruction set is
implemented, can be added when we add support for such a configuration. We do
not currently have such support, we have no instruction flag that would
control the presence of the standard MIPS instruction set nor any
associated code in instruction decoding.
This change is needed to boot a microMIPS Linux kernel successfully,
otherwise it hangs early on as interrupts are enabled and then the
exception handler invoked loops as its first instruction is interpreted
in the wrong execution mode and triggers another exception right away.
And then over and over again.
We already check the current setting of the CP0.Config3.ISAOnExc in
`set_hflags_for_handler' to set the ISA bit correctly on the exception
handler entry, so it is only the ability to set it that is missing.
Backports commit 90f12d735d66ac1196d9a2bced039a432eefc03d from qemu
Fix microMIPS MOVE16 and MOVEP instructions on 64-bit processors by
using register addition operations.
This copies the approach taken with MIPS16 MOVE instructions (I8_MOV32R
and I8_MOVR32 opcodes) and follows the observation that OPC_ADDU expands
to tcg_gen_mov_tl whenever `rt' is 0 and `rs' is not, therefore copying
`rs' to `rd' verbatim. This is not the case with OPC_ADDIU where a
sign-extension from bit #31 is made, unless in the uninteresting case of
`rs' being 0, losing the upper 32 bits of the value copied for any
proper 64-bit values.
This also serves as an optimization as one op is produced in generated
code rather than two (again, unless `rs' is 0, where it doesn't change
anything).
Backports commit 7215d7e7aea85699bf516c3e8d84f6a22584da35 from qemu
Make writes to CP0.Status and CP0.Cause have the same effect as
executing corresponding MTC0 instructions would in Kernel Mode. Also
ignore writes in the user emulation mode.
Currently for requests from the GDB stub we write all the bits across
both registers, ignoring any read-only locations, and do not synchronise
the environment to evaluate side effects. We also write these registers
in the user emulation mode even though a real kernel presents them as
read only.
Backports commit 81a423e6c6d3ccaa79de4e58024369c660c1eeb4 from qemu
Correct these issues with the handling of CP0.Status for MIPSr6:
* only ignore the bit pattern of 0b11 on writes to CP0.Status.KSU, that
is for processors that do implement Supervisor Mode, let the bit
pattern be written to CP0.Status.UM:R0 freely (of course the value
written to read-only CP0.Status.R0 will be discarded anyway); this is
in accordance to the relevant architecture specification[1],
* check the newly written pattern rather than the current contents of
CP0.Status for the KSU bits being 0b11,
* use meaningful macro names to refer to CP0.Status bits rather than
magic numbers.
References:
[1] "MIPS Architecture For Programmers, Volume III: MIPS64 / microMIPS64
Privileged Resource Architecture", MIPS Technologies, Inc., Document
Number: MD00091, Revision 6.00, March 31, 2014, Table 9.45 "Status
Register Field Descriptions", pp. 210-211.
Backports commit f88f79ec9df06d26d84e1d2e0c02d2634b4d8583 from qemu
Correct MIPS16/microMIPS branch size calculation in PC adjustment
needed:
- to set the value of CP0.ErrorEPC at the entry to the reset exception,
- for the purpose of branch reexecution in the context of device I/O.
Follow the approach taken in `exception_resume_pc' for ordinary, Debug
and NMI exceptions.
MIPS16 and microMIPS branches can be 2 or 4 bytes in size and that has
to be reflected in calculation. Original MIPS ISA branches, which is
where this code originates from, are always 4 bytes long, just as all
original MIPS ISA instructions.
Backports commit c3577479815f5bcf9d38993967bca2115af245d8 from qemu
Restore the order of helpers that used to be: unary operations (generic,
then MIPS-specific), binary operations (generic, then MIPS-specific),
compare operations. At one point FMA operations were inserted at a
random place in the file, disregarding the preexisting order, and later
on even more operations sprinkled across the file. Revert the mess by
moving FMA operations to a new ternary class inserted after the binary
class and move the misplaced unary and binary operations to where they
belong.
Backports commit 8fc605b8aa257feb3e69d44794a765bd492b573b from qemu
Remove the `FLOAT_OP' macro, unused since commit
b6d96beda3a6cbf20a2d04a609eff78adebd8859 [Use temporary registers for
the MIPS FPU emulation.].
Backports commit 51fdea945ae7adae8d7e4a1624e35bb7f714b58f from qemu
Move the call to `update_fcr31' in `helper_float_cvtw_s' after the
exception flag check, for consistency with the remaining helpers that do
it last too.
Backports commit 2b09f94cdbf5c54e2278d7f3aed2eceff3494790 from qemu
Backports commits d75de74967f631a7d0b538d4b88f96f9c426bfe2, 6225a4a0e39cb24e7b9e1d4d2c1a3e6eaee18e85, and d2bfa6e6222baa0218bd0658499d38bac56ac34c from qemu
Add the M14K and M14Kc processors from MIPS Technologies that are the
original implementation of the microMIPS ISA. They are dual instruction
set processors, implementing both the microMIPS and the standard MIPS32
ISAs.
These processors correspond to the M4K and 4KEc CPUs respectively,
except with support for the microMIPS instruction set added, support for
the MCU ASE added and two extra interrupt lines, making a total of 8
hardware interrupts plus 2 software interrupts. The remaining parts of
the microarchitecture, in particular the pipeline, stayed unchanged.
The presence of the microMIPS ASE is reflected in the configuration
added. We currently have no support for the MCU ASE, including in
particular the ACLR, ASET and IRET instructions in either encoding, and
we have no support for the extra interrupt lines, including bits in
CP0.Status and CP0.Cause registers, so these features are not marked,
making our support diverge from real hardware.
Backports commit 11f5ea105c06bec72e9bc9a700fa65d60afb5ec3 from qemu
Make the data type used for the CP0.Config4 and CP0.Config5 registers
and their mask signed, for consistency with the remaining 32-bit CP0
registers, like CP0.Config0, etc.
Backports commit 8280b12c0e4b515d707509dde4ddde05d9bda4ef from qemu
Add the 5KEc and 5KEf processors from MIPS Technologies that are the
original implementation of the MIPS64r2 ISA.
Silicon for these processors has never been taped out and no soft cores
were released even. They do exist though, a CP0.PRId value has been
assigned and experimental RTLs produced at the time the MIPS64r2 ISA has
been finalized. The settings introduced here faithfully reproduce that
hardware.
As far as the implementation goes, these processors are the same as the 5Kc
and the 5Kf CPUs respectively, except implementing the MIPS64r2 rather
than the original MIPS64 instruction set. There must have been some
updates to the CP0 architecture as mandated by the ISA, such as the
addition of the EBase register, although I am not sure about the exact
details; no documentation has ever been produced for these processors.
The remaining parts of the microarchitecture, in particular the
pipeline, stayed unchanged. Or to put it another way, the difference
between a 5K and a 5KE CPU corresponds to one between a 4K and a 4KE
CPU, except for the 64-bit rather than 32-bit ISA.
Backports commit 36b86e0dc2be93fc538fe7e11e0fda1a198f0135 from qemu
With an eye toward having this data replace the gen_opc_* arrays
that each target collects in order to enable restore_state_from_tb.
Backports commit 9aef40ed1f6e2bd794bbb3ba8c8b773e506334c9 from qemu
While we're at it, emit the opcode adjacent to where we currently
record data for search_pc. This puts gen_io_start et al on the
"correct" side of the marker.
Backports commit 667b8e29c5b1d8c5b4e6ad5f780ca60914eb6e96 from qemu
Usually, eliminate an operation from the translator by combining
a shift with an extract.
In the case of gen_set_NZ64, we don't need a boolean value for cpu_ZF,
merely a non-zero value. Given that we can extract both halves of a
64-bit input in one call, this simplifies the code.
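For reference, the shape of the resulting helper (a fragment that only makes
sense inside QEMU's translate-a64.c; reconstructed from the commit message,
so treat it as a sketch):

static void gen_set_NZ64(TCGv_i64 result)
{
    /* One extract splits the 64-bit result into its 32-bit halves: the high
     * half is already the desired NF, and ORing the halves yields a value
     * that is zero exactly when the whole result is zero. */
    tcg_gen_extr_i64_i32(cpu_ZF, cpu_NF, result);
    tcg_gen_or_i32(cpu_ZF, cpu_ZF, cpu_NF);
}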
Backports commit 7cb36e18b2f1c1f971ebdc2121de22a8c2e94fd6 from qemu
For !SF, this initial ext32u can't be optimized away by the
current TCG code generator. (It would require backward bit
liveness propagation.)
Backports commit d3a77b42decd0cbfa62a5526e67d1d6d380c83a9 from qemu
This can allow much of a ccmp to be elided when particular
flags are subsequently dead.
Backports commit 7dd03d773e0dafae9271318fc8d6b2b14de74403 from qemu
Handling this with TCG_COND_ALWAYS will allow these unlikely
cases to be handled without special cases in the rest of the
translator. The TCG optimizer ought to be able to reduce
these ALWAYS conditions completely.
Backports commit 9305eac09e61d857c9cc11e20db754dfc25a82db from qemu
Split arm_gen_test_cc into 3 functions, so that it can be reused
for non-branch TCG comparisons.
Backports commit 6c2c63d3a02c79e9035ca0370cc549d0f938a4dd from qemu
In ffc6372851d8631a9f9fa56ec613b3244dc635b9, we swapped the guest
base to the address base register from the address index register.
Except that 31 in the base slot is SP not XZR, so we need to be
more intelligent about which reg gets placed in which slot.
Backports commit 352bcb0a2b816ff9ab9d75d0f2384650d9e9ab19 from qemu
Rather than allow arbitrary shift+trunc, only concern ourselves
with low and high parts. This is all that was being used anyway.
Backports commit 609ad70562793937257c89d07bf7c1370b9fc9aa from qemu
They behave the same as ext32s_i64 and ext32u_i64 from the constant
folding and zero propagation point of view, except that they can't
be replaced by a mov, so we don't compute the affected value.
Backports commit 8bcb5c8f34f9215d4f88f388c7ff14c9bd5cecd3 from qemu
Implement real ext_i32_i64 and extu_i32_i64 ops. They ensure that a
32-bit value is always converted to a 64-bit value and not propagated
through the register allocator or the optimizer.
Backports commit 4f2331e5b67af8172419eb1c8db510b497b30a7b from qemu
The op is sometimes named trunc_shr_i32 and sometimes trunc_shr_i64_i32,
and the name in the README doesn't match the name offered to the
frontends.
Always use the long name to make it clear it is a size changing op.
Backports commit 0632e555fc4d281d69cb08d98d500d96185b041f from qemu
Instead of using an enum which could be either a copy or a const, track
them separately. This will be used in the next patch.
Constants are tracked through a bool. Copies are tracked by initializing
temp's next_copy and prev_copy to itself, allowing to simplify the code
a bit.
Backports commit b41059dd9deec367a4ccd296659f0bc5de2dc705 from qemu
Add two accessor functions temp_is_const and temp_is_copy, to make the
code more readable and make code change easier.
Backports commit d9c769c60948815ee03b2684b1c1c68ee4375149 from qemu
The tcg_temp_info structure uses 24 bytes per temp. Now that we emulate
vector registers on most guests, it's not uncommon to have more than 100
used temps. This means we have to initialize more than 2kB at least twice
per TB, often more when there are a few goto_tb.
Instead, use a TCGTempSet bit array to track which temps are in use in
the current basic block. This means there are only around 16 bytes to
initialize.
This improves the boot time of a MIPS guest on an x86-64 host by around
7% and moves tcg_optimize out of the top of the profiler list.
Backports commit 1208d7dd5fddc1fbd98de800d17429b4e5578848 from qemu
By convention, on a 64-bit host TCG internally stores 32-bit constants
as sign-extended. This is not the case in the optimizer when a 32-bit
constant is folded.
This doesn't seem to have more consequences than suboptimal code
generation. For instance the x86 backend assumes sign-extended constants,
and in some rare cases uses a 32-bit unsigned immediate 0xffffffff
instead of an 8-bit signed immediate 0xff for the constant -1. This is
with a ppc guest:
before
------
---- 0x9f29cc
movi_i32 tmp1,$0xffffffff
movi_i32 tmp2,$0x0
add2_i32 tmp0,CA,CA,tmp2,r6,tmp2
add2_i32 tmp0,CA,tmp0,CA,tmp1,tmp2
mov_i32 r10,tmp0
0x7fd8c7dfe90c: xor %ebp,%ebp
0x7fd8c7dfe90e: mov %ebp,%r11d
0x7fd8c7dfe911: mov 0x18(%r14),%r9d
0x7fd8c7dfe915: add %r9d,%r10d
0x7fd8c7dfe918: adc %ebp,%r11d
0x7fd8c7dfe91b: add $0xffffffff,%r10d
0x7fd8c7dfe922: adc %ebp,%r11d
0x7fd8c7dfe925: mov %r11d,0x134(%r14)
0x7fd8c7dfe92c: mov %r10d,0x28(%r14)
after
-----
---- 0x9f29cc
movi_i32 tmp1,$0xffffffffffffffff
movi_i32 tmp2,$0x0
add2_i32 tmp0,CA,CA,tmp2,r6,tmp2
add2_i32 tmp0,CA,tmp0,CA,tmp1,tmp2
mov_i32 r10,tmp0
0x7f37010d490c: xor %ebp,%ebp
0x7f37010d490e: mov %ebp,%r11d
0x7f37010d4911: mov 0x18(%r14),%r9d
0x7f37010d4915: add %r9d,%r10d
0x7f37010d4918: adc %ebp,%r11d
0x7f37010d491b: add $0xffffffffffffffff,%r10d
0x7f37010d491f: adc %ebp,%r11d
0x7f37010d4922: mov %r11d,0x134(%r14)
0x7f37010d4929: mov %r10d,0x28(%r14)
Backports commit 29f3ff8d6cbc28f79933aeaa25805408d0984a8f from qemu
Due to a copy&paste, the new op value is tested against mov_i32 instead
of movi_i32. The test is therefore always false. Fix that.
Backports commit 961521261a3d600b0695b2e6d2b0f490076f7e90 from qemu
The tcg_constant_folding function ends up doing all the optimizations
(which is a good thing, to avoid looping over all ops multiple times), so
make that clear and just rename it tcg_optimize.
Backports commit 36e60ef6ac5d8a262d0fbeedfdb2b588514cb1ea from qemu
Most of the calls to tcg_opt_gen_mov are preceded by a test to check if
the source temp is a constant. Fold that into the tcg_opt_gen_mov
function.
Backports commit 97a79eb70dd35a24fda87d86196afba5e6f21c5d from qemu
Each call to tcg_opt_gen_mov is preceded by a test to check if the
source and destination temps are copies. Fold that into the
tcg_opt_gen_mov function.
Backports commit 5365718a9afeeabde3784d82a542f8ad909b18cf from qemu
We can get the opcode using the TCGOp pointer. It needs to be
dereferenced, but it's anyway done a few lines below to write
the new value.
Backports commit 8d6a91602ea824ef4435ea38fd475387eecc098c from qemu
We can get the opcode using the TCGOp pointer. It needs to be
dereferenced, but it's anyway done a few lines below to write
the new value.
Backports commit ebd27391b00cdafc81e0541a940686137b3b48df from qemu
The check in dins is required to avoid triggering an assertion
in tcg_gen_deposit_tl. The check in dext is just for completeness.
Fold the other D cases in via fallthru.
Backports commit b7f26e523914b982a1c1bfa8295f77ff9787c33c from qemu
Similar to the same fix for user-mode, except this instance
occurs on the softmmu path. Again, the tlb addend must be
the base register, while the guest address is the index.
Backports commit 80adb8fcad4778376a11d394a9e01516819e2327 from qemu
Thanks to the previous patch, it is now easy for tcg_out_qemu_ld and
tcg_out_qemu_st to use a 32-bit zero extended offset. However, the
guest base register x28 must be the base and addr_reg must be the
index.
Backports commit ffc6372851d8631a9f9fa56ec613b3244dc635b9 from qemu
The new argument lets you pick uxtw or uxtx mode for the offset
register. For now, all callers pass TCG_TYPE_I64 so that uxtx
is generated. The bits for uxtx are removed from I3312_TO_I3310.
Backports commit 6c0f0c0f124718650a8d682ba275044fc02f6fe2 from qemu
The addition of MO_AMASK means that places that used inverted masks
need to be changed to use positive masks, and places that failed to
mask the intended bits need updating.
Backports commit 2b7ec66f025263a5331f37d5ad78a625496fd7bd from qemu
These modifiers control, on a per-memory-op basis, whether
unaligned memory accesses are allowed. The default setting
reflects the target's definition of ALIGNED_ONLY.
Backports commit dfb36305626636e2e07e0c5acd3a002a5419399e from qemu
The extra information is not yet used but it is now available.
This requires minor changes through all of the tcg backends.
Backports commit 3972ef6f830d65e9bacbd31257abedc055fd6dc8 from qemu
At the tcg opcode level, not at the tcg-op.h generator level.
This requires minor changes through all of the tcg backends,
but none of the cpu translators.
Backports commit 59227d5d45bb3c31dc2118011691c35b3c00879c from qemu
This is less about improved type checking than enabling a
subsequent change to the representation of labels.
Backports commit bec1631100323fac0900aea71043d5c4e22fc2fa from qemu
This is improved type checking for the translators -- it's no longer
possible to accidentally swap arguments to the branch functions.
Note that the code generating backends still manipulate labels as int.
With notable exceptions, the scope of the change is just a few lines
for each target, so it's not worth building extra machinery to do this
change in per-target increments.
Backports commit 42a268c241183877192c376d03bd9b6d527407c7 from qemu
We no longer need INDEX_op_end to terminate the list, nor do we
need 5 forms of nop, since we just remove the TCGOp instead.
Backports commit 15fc7daa770764cc795158cbb525569f156f3659 from qemu
Rather than reserving space in the op stream for optimization,
let the optimizer add ops as necessary.
Backports commit a4ce099a7a4b4734c372f6bf28f3362e370f23c1 from qemu
With the linked list scheme we need not leave nops in the stream
that we need to process later.
Backports commit 0c627cdca20155753a536c51385abb73941a59a0 from qemu
The method by which we count the number of ops emitted
is going to change. Abstract that away into some inlines.
Backports commit fe700adb3db5b028b504423b946d4ee5200a8f2f from qemu.
Almost completely eliminates the ifdefs in this file, improving
confidence in the lesser used 32-bit builds.
Backports commit 3a13c3f34ce2058e0c2decc3b0f9f56be24c9400 from qemu
Some of these functions are really quite large. We have a number of
things that ought to be circularly dependent, but we duplicated code
to break that chain for the inlines.
This saved 25% of the code size of one of the translators I examined.
Chain the temporaries together via pointers instead of indices.
The mem_reg value is now mem_base->reg. This will be important later.
This does require that the frame pointer have a global temporary
allocated for it. This is simple bar the existing reserved_regs check.
Backports commit b3a62939561e07bc34493444fa926b6137cba4e8 from qemu
Thus, use cpu_env as the parameter, not TCG_AREG0 directly.
Update all uses in the translators.
Backports commit e1ccc05444676b92c63708096e36582be27fbee1 from qemu
* arm64eb: arm64 big endian also uses little-endian instruction encoding.
* arm64: use another example that depends on endianness.
example:
1. store a word: 0x12345678
2. load a byte:
* little endian : 0x78
* big endian : 0x12
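A host-side C analogue of the effect the example exercises (purely
illustrative; the real regress test drives the emulated guest through
Unicorn):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    uint32_t word = 0x12345678;
    uint8_t first_byte;

    memcpy(&first_byte, &word, 1);  /* read the lowest-addressed byte */
    /* little-endian layout yields 0x78, big-endian yields 0x12 */
    printf("0x%02x\n", first_byte);
    return 0;
}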
* uc_reg_read & uc_reg_write now support ARM64 Neon registers
* Do not reuse uc_x86_xmm for uc_arm64_neon128. TODO: refactor both classes to use the same parent.
Writing to / reading from model-specific registers should be as easy as
calling a function; it's a bit stupid to write shellcode and run it
just to write/read an MSR, and even worse, you need more than just
shellcode to read...
So, add a special register ID called UC_X86_REG_MSR, which should be
passed to uc_reg_write()/uc_reg_read() as the register ID, together with a
uc_x86_msr data structure (12 bytes) as the value (always), where:
Byte    Value      Size
0       MSR ID     4
4       MSR val    8
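A usage sketch under the layout above (the uc_x86_msr field names are
assumed from the byte table; check unicorn/x86.h for the exact definition):

#include <unicorn/unicorn.h>

/* Write one MSR of the emulated CPU through the pseudo-register. */
static uc_err write_msr(uc_engine *uc, uint32_t msr_id, uint64_t value)
{
    uc_x86_msr msr;
    msr.rid = msr_id;   /* bytes 0..3:  MSR ID    */
    msr.value = value;  /* bytes 4..11: MSR value */
    return uc_reg_write(uc, UC_X86_REG_MSR, &msr);
}

Reading works the same way: fill in the MSR ID, call uc_reg_read() with
UC_X86_REG_MSR, and the value field comes back populated.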
* Remove glib from samples makefile
* changes to 16 bit segment registers need to update the segment base as well as the segment selector
* change how x86 segment registers are set in 16-bit mode
* more appropriate solution to initial state of x86 segment registers in 16-bit mode
* remove commented lines
* Remove glib from samples makefile
* changes to 16 bit segment registers need to update the segment base as well as the segment selector
* change how x86 segment registers are set in 16-bit mode
* unicorn: use waitable timer to implement usleep() on Windows
Signed-off-by: vardyh <vardyh.dev@gmail.com>
* atomic: implement barrier() for msvc
Signed-off-by: vardyh <vardyh.dev@gmail.com>
* Changed some MSVC compatibility defines based on MSVC version.
* Added prebuild_script.bat to remove leftover configure generated files before building.
Also added project files and MSVC copies of configure generated files for all supported CPUs.
* Moved ./bindings/msvc_native into ./msvc
* Remove old project dir.
* isnan() fix for msvc2013 onwards
* reg_read and reg_write now work with registers W0 through W30 in AArch64 emulation
* Added a regress test for the ARM64 reg_read and reg_write on 32-bit registers (W0-W30)
Added a new macro in uc_priv.h (WRITE_DWORD_TO_QWORD), in order to write to the lower 32 bits of a 64 bit value without overwriting the whole value when using reg_write
* Fixed WRITE_DWORD macro
reg_write would zero out the high order bits when writing to 32 bit registers
e.g. uc.reg_write(UC_X86_REG_EAX, 0) would also set register RAX to zero
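A sketch of the intended semantics (the real macros live in uc_priv.h; this
illustrative version shows the fixed behaviour, i.e. the upper 32 bits are
preserved):

#include <stdint.h>

/* Store a 32-bit value into the low half of a 64-bit register image
 * without clobbering the upper half. */
#define WRITE_DWORD_TO_QWORD(qword, dword) \
    do { \
        (qword) = ((qword) & ~0xffffffffULL) | ((uint64_t)(uint32_t)(dword)); \
    } while (0)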
Support for Cortex-M ARM CPU already exists in Qemu. This patch just
exposes a "cortex-m3" CPU.
"uc_open(UC_ARCH_ARM, UC_MODE_THUMB | UC_MODE_MCLASS, &uc);"
Instantiates a CPU with this feature on.
Signed-off-by: Lucian Cojocar <lucian@cojocar.com>
This commit fixes the following issues:
- Any unmapped/free'd memory regions (MemoryRegion instances) are not
removed from the object property linked list of its owner (which is
always qdev_get_machine(uc)). This issue makes adding new memory
mapping by calling mem_map() or mem_map_ptr() slower as more and more
memory pages are mapped and unmapped - yes, even if those memory pages
are unmapped, they still impact the speed of future memory page
mappings due to this issue.
- FlatView is not reconstructed after a memory region is freed during
unmapping, which leads to a use-after-free the next time a new memory
region is mapped in address_space_update_topology().
ARM and probably the rest of the arches have significant memory leaks as
they have no release interface.
Additionally, DrMemory does not have 64-bit support and thus I can't
test the 64-bit version under Windows. Under Linux valgrind supports
both 32-bit and 64-bit but there are different macros and code for Linux
and Windows.
helper_sysenter in qemu/target-i386/seg_helper.c didn't properly check whether a call interrupt callback was registered.
It has been fixed by copying the helper_syscall behavior.
It appears the problem is that we are not calling the memory region
destructor. After modifying memory_unmap to include the destructor call
for the memory region, the memory is freed.
Furthermore in uc_close we must explicitly free any blocks that were not
unmapped by the user to prevent leaks.
This should fix issue 305.
- Allow to register handler separately for invalid memory access
- Add new memory events for hooking:
- UC_MEM_READ_INVALID, UC_MEM_WRITE_INVALID, UC_MEM_FETCH_INVALID
- UC_HOOK_MEM_READ_PROT, UC_HOOK_MEM_WRITE_PROT, UC_HOOK_MEM_FETCH_PROT
- Rename UC_ERR_EXEC_PROT to UC_ERR_FETCH_PROT
- Change API uc_hook_add() so the event type @type can be a combination of hooking types (see the usage sketch below)
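A usage sketch of the reworked API (the hook-type names are the ones listed
above; the uc_hook_add() argument list follows the current public header and
may differ slightly from the signature at the time of this change):

#include <unicorn/unicorn.h>

/* Invalid-access handlers return a bool: true to let emulation continue
 * (e.g. after mapping the missing page), false to abort with an error. */
static bool on_prot_violation(uc_engine *uc, uc_mem_type type, uint64_t address,
                              int size, int64_t value, void *user_data)
{
    return false;
}

static uc_err install_hooks(uc_engine *uc)
{
    uc_hook hh;
    /* One call, several event types OR'd together; begin > end hooks all addresses. */
    return uc_hook_add(uc, &hh,
                       UC_HOOK_MEM_READ_PROT | UC_HOOK_MEM_WRITE_PROT | UC_HOOK_MEM_FETCH_PROT,
                       (void *)on_prot_violation, NULL, 1, 0);
}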
As pointed out by aquynh the return types are actually different. A
uc_cb_eventmem_t callback returns a bool, while uc_cb_hookmem_t has a
void return type.
This reverts commit cb2b97f26c.