Use pow2floor() to round down to the nearest power of 2,
rather than an inline calculation.
Backports commit 6554f5c03793bb8a3d5dedcebf758a1694fa186c from qemu
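For illustration, the helper rounds a value down to the nearest power of two;
a minimal stand-in sketch (not the QEMU implementation, name is hypothetical):

    #include <stdint.h>

    /* Round v down to the nearest power of two (e.g. 0x1234 -> 0x1000),
     * replacing open-coded shift arithmetic at the call sites. */
    static inline uint64_t pow2floor_sketch(uint64_t v)
    {
        uint64_t p = 1;

        while ((p << 1) != 0 && (p << 1) <= v) {
            p <<= 1;
        }
        return v ? p : 0;
    }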
Introduces reusable definitions for CPU affinity masks/shifts and gets rid
of hardcoded magic numbers.
Backports commit 0f4a9e45ec35811ee250ac232d84d3c6d4fcd7fc from qemu
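The definitions are of this general shape (a sketch; the exact macro names are
an assumption here, though the MPIDR affinity field positions are architectural):

    /* MPIDR affinity fields: Aff0 [7:0], Aff1 [15:8], Aff2 [23:16], Aff3 [39:32] */
    #define ARM_AFF0_SHIFT 0
    #define ARM_AFF0_MASK  (0xFFULL << ARM_AFF0_SHIFT)
    #define ARM_AFF1_SHIFT 8
    #define ARM_AFF1_MASK  (0xFFULL << ARM_AFF1_SHIFT)
    #define ARM_AFF2_SHIFT 16
    #define ARM_AFF2_MASK  (0xFFULL << ARM_AFF2_SHIFT)
    #define ARM_AFF3_SHIFT 32
    #define ARM_AFF3_MASK  (0xFFULL << ARM_AFF3_SHIFT)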
There is an error in the arm_excp_unmasked() function:
the bitwise operator & is used with integer and bool operands,
causing an incorrectly zeroed result.
The patch fixes it.
Backports commit 771842585f3119f69641ed90a97d56eb9ed6f5ae from qemu
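A standalone reproduction of this bug class (illustrative only, not the
original arm_excp_unmasked() code):

    #include <stdbool.h>
    #include <stdio.h>

    /* '&' with a bool operand first converts the bool to 0 or 1, so masking
     * a multi-bit value against it silently drops the interesting bits. */
    int main(void)
    {
        unsigned int hcr_imo = 1u << 4;        /* pretend bit 4 is the mask bit */
        bool cond = true;

        unsigned int wrong = hcr_imo & cond;   /* 0x10 & 0x1 == 0 */
        unsigned int right = cond ? hcr_imo : 0;

        printf("wrong=%#x right=%#x\n", wrong, right);
        return 0;
    }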
There is an error in the functions aarch64_sync_32_to_64() and
aarch64_sync_64_to_32() in the mapping of registers between AArch32 and
AArch64. This commit fixes the mapping to match the v8 ARM ARM
section D1.20.1 (table D1-77).
Backports commit 3a9148d0bdcee990fbe86759b9b1f5723c1d7fbc from qemu
All of these hw_errors are fatal and indicate something wrong with
the QEMU implementation.
Convert them to g_assert_not_reached().
Backports commit 8f6fd322f6e25995629a1a07b56bc5b91fb947ca from qemu
For the A64 instruction set, the semihosting call instruction
is 'HLT 0xf000'. Wire this up to call do_arm_semihosting()
if semihosting is enabled.
Backports commit 8012c84ff92a36d05dfe61af9b24dd01a7ea25e4 from qemu
The 64-bit A64 semihosting API has some pervasive changes from
the 32-bit version:
* all parameter blocks are arrays of 64-bit values, not 32-bit
* the semihosting call number is passed in W0
* the return value is a 64-bit value in X0
Implement the necessary handling for this widening.
Backports relevant parts of commit faacc041619581c566c21ed87aa1933420731282 from qemu
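A self-contained sketch of the widening (guest memory is modeled as a plain
little-endian byte buffer here; the real code reads guest memory and has to
propagate access failures):

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    /* Fetch field n of a semihosting parameter block: an array of 64-bit
     * words for A64 callers, 32-bit words for A32 callers. */
    static uint64_t semi_get_arg(const uint8_t *block, bool is_a64_caller, int n)
    {
        if (is_a64_caller) {
            uint64_t v;
            memcpy(&v, block + n * 8, sizeof(v));
            return v;
        } else {
            uint32_t v;
            memcpy(&v, block + n * 4, sizeof(v));
            return v;
        }
    }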
Print semihosting debugging information before the
do_arm_semihosting() call so that angel_SWIreason_ReportException,
which causes the function to not return, gets the same debug prints as
other semihosting calls. Also print out the semihosting call number.
Backports commit 205ace55ffff77964e50af08c99639ec47db53f6 from qemu
Softmmu unaligned loads/stores currently go through the slow
path for two reasons:
- to support unaligned accesses on hosts with strict alignment requirements
- to correctly handle accesses crossing pages
x86 is only concerned with the second reason. Unaligned accesses are
avoided by compilers, but are not uncommon. We would therefore like
to see them go through the fast path if they don't cross pages.
For that we can use the fact that two adjacent pages can never occupy
the same TLB entry. Therefore accessing the TLB entry corresponding to the
first byte, but comparing its content to the page address of the last byte,
ensures that we don't cross pages. We can do this check without adding
more instructions to the TLB code (though its length increases by one
byte) by using the LEA instruction to combine the existing move with the
size addition.
On an x86-64 host, this gives a 3% boot time improvement for a powerpc
guest and 4% for an x86-64 guest.
Backports commit 8cc580f6a0d8c0e2f590c1472cf5cd8e51761760 from qemu
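At the C level the comparison amounts to the following (a self-contained
sketch of the idea; the actual change is in the TCG i386 backend, where an
LEA folds the '+ size - 1' into the existing move):

    #include <stdbool.h>
    #include <stdint.h>

    #define TARGET_PAGE_BITS 12
    #define TARGET_PAGE_MASK (~(((uint64_t)1 << TARGET_PAGE_BITS) - 1))

    /* The TLB slot is selected from the FIRST byte of the access, but the
     * value compared against its stored page address is derived from the
     * LAST byte: a page-crossing access can never match and falls back to
     * the slow path, while an unaligned access within one page still hits. */
    static bool fast_path_hit(uint64_t tlb_page_addr, uint64_t addr, unsigned size)
    {
        uint64_t last_byte_page = (addr + size - 1) & TARGET_PAGE_MASK;

        return tlb_page_addr == last_byte_page;
    }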
Implement the AArch64 TLBI operations which take an intermediate
physical address and invalidate stage 2 translations.
Backports commit cea66e91212164e02ad1d245c2371f7e8eb59e7f from qemu
Now we have the ability to flush the TLB only for specific MMU indexes,
update the AArch64 TLB maintenance instruction implementations to only
flush the parts of the TLB they need to, rather than doing full flushes.
We take the opportunity to remove some duplicate functions (the per-ASID
TLB ops work like the non-per-ASID ones because we don't support
flushing a TLB only by ASID) and to bring the function names in line
with the architectural TLBI operation names.
Backports commit fd3ed969227f54f08f87d9eb6de2d4e48e99279b from qemu
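For illustration, a TLBI VMALLE1-style handler might now look roughly like
this (simplified to the Non-secure case; the -1-terminated varargs form of
tlb_flush_by_mmuidx() and the ARMMMUIdx_* names reflect my reading of this
era of the code, so treat them as assumptions):

    /* Sketch only: flush just the NS EL1&0 translation regime instead of
     * the whole TLB (Secure-state handling omitted for brevity). */
    static void tlbi_vmalle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
                                   uint64_t value)
    {
        CPUState *cs = CPU(arm_env_get_cpu(env));

        tlb_flush_by_mmuidx(cs, ARMMMUIdx_S12NSE1, ARMMMUIdx_S12NSE0, -1);
    }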
Move the two regdefs for TLBI ALLE1 and TLBI ALLE1IS down so that the
whole set of AArch64 TLBI regdefs is arranged in numeric order.
Backports commit 83ddf975777cc23337b7ef92e83b1b9c949396f3 from qemu
Guest CPU TLB maintenance operations may be sufficiently
specialized to only need to flush TLB entries corresponding
to a particular MMU index. Implement cputlb functions for
this, to avoid the inefficiency of flushing TLB entries
which we don't need to.
Backports commit d7a74a9d4a68e27b3a8ceda17bb95cb0a23d8e4d from qemu
Apply the correct conditions in the ats_access() function for
the ATS12NSO* address translation operations:
* succeed at EL2 or EL3
* normal UNDEF trap from NS EL1
* trap to EL3 from S EL1 (only possible if EL3 is AArch64)
(This change means they're now available in our EL3-supporting
CPUs when they would previously always UNDEF.)
Backports commit 87562e4f4a2bdd028eef3549ce9cb4e7c83cb0bf from qemu
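Roughly, the check ends up shaped like this (a sketch restricted to the
ATS12NSO* case, with other ATS ops elided; the trap-to-EL3 result value is
the 'uncategorized' one described in the next entry):

    /* ATS12NSO* rules: OK from EL2/EL3, plain UNDEF from NS EL1,
     * trap to EL3 from S EL1. */
    static CPAccessResult ats_access(CPUARMState *env, const ARMCPRegInfo *ri)
    {
        if (ri->opc2 & 4) {                     /* ATS12NSO* encodings */
            if (arm_current_el(env) == 1) {
                if (arm_is_secure_below_el3(env)) {
                    return CP_ACCESS_TRAP_UNCATEGORIZED_EL3;
                }
                return CP_ACCESS_TRAP_UNCATEGORIZED;
            }
        }
        return CP_ACCESS_OK;
    }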
Some coprocessor register access functions need to be able
to report "trap to EL3 with an 'uncategorized' syndrome";
add the necessary CPAccessResult enum and handling for it.
I don't currently know of any registers that need to trap
to EL2 with the 'uncategorized' syndrome, but adding the
_EL2 enum as well is trivial and fills in what would
otherwise be an odd gap in the handling.
Backports commit e76157264da20b85698b09fa5eb8e02e515e232c from qemu
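The result type then looks roughly like this (value names as I understand
them for this era; the two *_UNCATEGORIZED_EL{2,3} entries are the additions):

    typedef enum CPAccessResult {
        CP_ACCESS_OK,                      /* access permitted */
        CP_ACCESS_TRAP,                    /* trap with a syndrome naming the register */
        CP_ACCESS_TRAP_UNCATEGORIZED,      /* trap with the 'uncategorized' (UNDEF) syndrome */
        CP_ACCESS_TRAP_EL2,
        CP_ACCESS_TRAP_EL3,
        CP_ACCESS_TRAP_UNCATEGORIZED_EL2,  /* new: uncategorized trap to EL2 */
        CP_ACCESS_TRAP_UNCATEGORIZED_EL3,  /* new: uncategorized trap to EL3 */
    } CPAccessResult;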
Wire up the AArch64 EL2 and EL3 address translation operations
(AT S12E1*, AT S12E0*, AT S1E2*, AT S1E3*), and correct some
errors in the ats_write64() function in previously unused code
that would have done the wrong kind of lookup for accesses from
EL3 when SCR.NS==0.
Backports commit 2a47df953202e1f226aa045ea974427c4540a167 from qemu
For EL2 stage 1 translations, there is no TTBR1. We were already
handling this for 64-bit EL2; add the code to take the 'no TTBR1'
code path for 32-bit EL2 as well.
Backports commit d0a2cbceb2aa20d64d53e1c20c7d26a78ade8382 from qemu
We already implemented ACTLR_EL1; add the missing ACTLR_EL2 and
ACTLR_EL3, for consistency.
Since we don't currently have any CPUs that need the EL2/EL3
versions to reset to non-zero values, implement as RAZ/WI.
Backports commit 834a6c6920316d39aaf0e68ac936c0a3ad164815 from qemu
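A RAZ/WI definition of this kind is just a constant-zero register entry,
along these lines (field values for ACTLR_EL2 as I read the encoding; treat
the regdef as a sketch):

    { .name = "ACTLR_EL2", .state = ARM_CP_STATE_BOTH,
      .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 0, .opc2 = 1,
      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },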
The AFSR registers are implementation dependent auxiliary fault
status registers. We already implemented a RAZ/WI AFSR0_EL1 and
AFSR1_EL1; add the missing AFSR{0,1}_EL{2,3} for consistency.
Backports commit 37cd6c2478196623ca28526627ca8c69afe0d654 from qemu
The AMAIR registers are for providing auxiliary implementation
defined memory attributes. We already implemented a RAZ/WI
AMAIR_EL1; add the EL2 and EL3 versions for consistency.
Backports commit 2179ef958c81480b841ffa0aab5e265688ffd2b0 from qemu
Add the AArch64 registers MAIR_EL3 and TPIDR_EL3, the only two
registers for which we had implemented the 32-bit Secure equivalents
but not the 64-bit Secure versions.
Backports commit 4cfb8ad896a6f85953038bd913ce3d82d347013d from qemu
apic_internal.h relies on cpu.h having been included (for the
X86CPU type); include it directly rather than relying on it
being pulled in via one of the other includes like timer.h.
Backports commit 20fbcfdd58ea47607a5755979d43f8c48ac93f08 from qemu
Move the muldiv64() function from qemu-common.h to host-utils.h.
This puts it together with all the other arithmetic functions
where we provide a version with __int128_t and a fallback
without, and allows headers which need muldiv64() to avoid
including qemu-common.h.
We don't include host-utils from qemu-common.h, to avoid dragging
more things into qemu-common.h than it already has; in practice
everywhere that needs muldiv64() can get it via qemu/timer.h.
Backports commit 49caffe0cc95a9d0dc344e3328be8197f3536cf8 from qemu
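For reference, the function computes (a * b) / c with a full-width
intermediate product; with __int128 this collapses to (__uint128_t)a * b / c.
A self-contained sketch of the fallback idea (not necessarily the exact
QEMU source):

    #include <stdint.h>

    /* (a * b) / c without losing the high bits of the 96-bit intermediate
     * product, using only 64-bit arithmetic. The result is assumed to fit
     * in 64 bits. */
    static inline uint64_t muldiv64_sketch(uint64_t a, uint32_t b, uint32_t c)
    {
        uint64_t lo = (a & 0xffffffffULL) * b;   /* low half of a, times b */
        uint64_t hi = (a >> 32) * b;             /* high half of a, times b */
        uint64_t q_hi, q_lo;

        hi += lo >> 32;                          /* carry the overlap upward */
        q_hi = hi / c;
        q_lo = (((hi % c) << 32) | (lo & 0xffffffffULL)) / c;
        return (q_hi << 32) | q_lo;
    }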
Add a header comment to osdep.h, explaining what the header is for
and some rules to avoid circular-include difficulties.
Backports commit 03557b9abaee78e9d1ef5cd236d32a7b3e75e6f8 from qemu
qemu-common.h has some system header includes and fixups for
things that might be missing. This is really an OS dependency
and belongs in osdep.h, so move it across.
Backports commit bfe7e449f14313f646da621288ca2fd12223414f from qemu
qemu-common.h includes some fixups for things the Win32
headers don't define or define weirdly. These really
belong in os-win32.h, so move them there.
Backports commit 1aad8104f3b69206da1f868639e1f69c26f6d482 from qemu
Add documentation comments for various utility string functions
which we have implemented in util/cutils.c:
pstrcpy()
strpadcpy()
pstrcat()
strstart()
stristart()
qemu_strnlen()
qemu_strsep()
Backports commit ab6036630865eff8bb12dd51dfa6921b4607fc81 from qemu
Rather than rolling custom concatenate-strings macros for the
QEMU_BUILD_BUG_ON macro to use, use the glue() macro we already
have (since it's now available to us in this header).
Backports commit 24134c4e9126bf505b612e901c63a102fc471083 from qemu
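The idea, sketched (simplified; the real macro also marks the typedef as
unused):

    /* glue()/xglue() paste tokens after macro expansion, so the build-bug-on
     * macro can mint a unique typedef name per line instead of rolling its
     * own concatenation helpers. The array gets a negative size, and so
     * fails to compile, whenever the condition x is true. */
    #define xglue(x, y) x ## y
    #define glue(x, y) xglue(x, y)

    #define QEMU_BUILD_BUG_ON(x) \
        typedef char glue(qemu_build_bug_on__, __LINE__)[(x) ? -1 : 1];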
osdep.h has a few things which are really compiler specific;
move them to compiler.h, and include compiler.h from osdep.h.
Backports commit 4912086865083a008f4fb73173fd0ddf2206c4d9 from qemu
qemu_printf is an ancient remnant which has been a simple #define to
printf for over a decade, and is used in only a few places. Expand
it out in those places and remove the #define.
Backports commit 71baf787d8fa2a5d186f22d8154069fd212be37f from qemu
There was a complicated subtractive arithmetic for determining the
padding on the CPUTLBEntry structure. Simplify this with a union.
Backports commit b4a4b8d0e0767c85946fd8fc404643bf5766351a from qemu
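The union approach looks roughly like this (a sketch of the shape; the
endianness and exact padding details of the real header are omitted):

    typedef struct CPUTLBEntry {
        union {
            struct {
                target_ulong addr_read;   /* page address + flags for loads */
                target_ulong addr_write;  /* ... for stores */
                target_ulong addr_code;   /* ... for instruction fetches */
                uintptr_t addend;         /* guest->host offset for RAM pages */
            };
            /* The dummy array pads the whole entry to its power-of-two size,
             * replacing the old subtractive sizeof() arithmetic. */
            uint8_t dummy[1 << CPU_TLB_ENTRY_BITS];
        };
    } CPUTLBEntry;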
The LWL/LDL instructions mask the GPR with a mask depending on the
address alignment. It is currently computed by doing:
mask = 0x7fffffffffffffffull >> (t1 ^ 63)
It's simpler to generate it by doing:
mask = ~(-1 << t1)
It uses one TCG instruction less, and it avoids a 32/64-bit constant
loading which can take a few instructions on RISC hosts.
Backports commit eb02cc3f89013612cb05df23b5441741e902bbd2 from qemu
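A quick standalone check that the two formulas agree for every shift amount
(using unsigned arithmetic for the shifted -1):

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        for (unsigned t1 = 0; t1 < 64; t1++) {
            uint64_t old_mask = 0x7fffffffffffffffULL >> (t1 ^ 63);
            uint64_t new_mask = ~(~(uint64_t)0 << t1);   /* ~(-1 << t1) */
            assert(old_mask == new_mask);
        }
        return 0;
    }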
As the full specification of the P5600 is available, mips32r5-generic
should be renamed to P5600 and corrected to match its intended behaviour.
Correct the PRid and the details of the configuration.
Features which are not currently supported are marked as FIXME.
Fix the Config.MM bit location.
Backports commit aff2bc6dc6d839caf6df0900437cc2cc9e180605 from qemu
If EL3 is AArch32, then the secure physical timer is accessed via
banking of the registers used for the non-secure physical timer.
Implement this banking.
Note that the access controls for the AArch32 banked registers
remain the same as the physical-timer checks; they are not the
same as the controls on the AArch64 secure timer registers.
Backports commit 9ff9dd3c875956523bb4c19ca712e5d05aab3c65 from qemu
On CPUs with EL3, there are two physical timers, one for Secure and one
for Non-secure. Implement this extra timer and the AArch64 registers
which access it.
Backports commit b4d3978c2fdf944e428a46d2850dbd950b6fbe78 from qemu
It's easy to accidentally define two cpregs which both try
to reset the same underlying state field (for instance a
clash between an AArch64 EL3 definition and an AArch32
banked register definition). If the two definitions disagree
about the reset value, the result depends on which
one happens to be reached last in the hashtable enumeration.
Add a consistency check to detect and assert in these cases:
after reset, we run a second pass where we check that the
reset operation doesn't change the value of the register.
Backports commit 49a661910c1374858602a3002b67115893673c25 from qemu
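The second pass is essentially "reset again and compare", along these lines
(helper names follow my reading of the surrounding code; treat it as a sketch):

    /* Run after the normal reset pass over the cpreg hashtable. */
    static void cp_reg_check_reset(gpointer key, gpointer value, gpointer opaque)
    {
        ARMCPRegInfo *ri = value;
        ARMCPU *cpu = opaque;
        uint64_t oldvalue, newvalue;

        if (ri->type & ARM_CP_SPECIAL) {
            return;                       /* nothing to compare for special regs */
        }
        oldvalue = read_raw_cp_reg(&cpu->env, ri);
        cp_reg_reset(key, value, opaque); /* reset this register a second time */
        newvalue = read_raw_cp_reg(&cpu->env, ri);
        assert(oldvalue == newvalue);     /* conflicting reset definitions trip this */
    }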
Rename gt_cnt_reset to gt_timer_reset as the function really
resets the timers and not the counters. Move the registration
from counter regs to timer regs.
Backports commit d57b9ee84f6b2786f025712609edb259d0de086d from qemu