There is an error in the functions aarch64_sync_32_to_64() and
aarch64_sync_64_to_32() in the mapping of registers between AArch32 and
AArch64. This commit fixes the mapping to match the v8 ARM ARM
section D1.20.1 (table D1-77).
Backports commit 3a9148d0bdcee990fbe86759b9b1f5723c1d7fbc from qemu
All of these hw_errors are fatal and indicate something wrong with the
QEMU implementation.
Convert them to g_assert_not_reached().
Backports commit 8f6fd322f6e25995629a1a07b56bc5b91fb947ca from qemu
For the A64 instruction set, the semihosting call instruction
is 'HLT 0xf000'. Wire this up to call do_arm_semihosting()
if semihosting is enabled.
Backports commit 8012c84ff92a36d05dfe61af9b24dd01a7ea25e4 from qemu
The 64-bit A64 semihosting API has some pervasive changes from
the 32-bit version:
* all parameter blocks are arrays of 64-bit values, not 32-bit
* the semihosting call number is passed in W0
* the return value is a 64-bit value in X0
Implement the necessary handling for this widening.
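As a rough illustration of the widening (generic C, not QEMU's actual helpers; the function and buffer names here are made up for the example), the same logical "read argument n from the parameter block" operation changes from 32-bit to 64-bit slots in A64 state:
    #include <stdint.h>
    #include <string.h>
    #include <stdbool.h>

    /* Read argument n from a guest parameter block that has already been
     * copied into a host buffer. */
    static uint64_t read_semi_arg(const uint8_t *parm_block, int n, bool is_a64)
    {
        if (is_a64) {
            uint64_t v;                         /* 64-bit fields in A64 */
            memcpy(&v, parm_block + 8 * n, sizeof(v));
            return v;
        } else {
            uint32_t v;                         /* 32-bit fields otherwise */
            memcpy(&v, parm_block + 4 * n, sizeof(v));
            return v;
        }
    }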
Backports relevant parts of commit faacc041619581c566c21ed87aa1933420731282 from qemu
Print semihosting debugging information before the
do_arm_semihosting() call so that angel_SWIreason_ReportException,
which causes the function to not return, gets the same debug prints as
other semihosting calls. Also print out the semihosting call number.
Backports commit 205ace55ffff77964e50af08c99639ec47db53f6 from qemu
Softmmu unaligned loads/stores currently go through the slow
path for two reasons:
- to support unaligned accesses on hosts with strict alignment
- to correctly handle accesses crossing pages
x86 is only concerned by the second reason. Unaligned accesses are
avoided by compilers, but are not uncommon. We therefore would like
to see them going through the fast path, if they don't cross pages.
For that we can use the fact that two adjacent TLB entries can't contain
the same page. Therefore accessing the TLB entry corresponding to the
first byte, but comparing its content to the page address of the last byte
ensures that we don't cross pages. We can do this check without adding
more instructions in the TLB code (but increasing its length by one
byte) by using the LEA instruction to combine the existing move with the
size addition.
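A minimal sketch of the check in plain C rather than the generated x86 code (constants and names here are illustrative): the TLB entry is indexed by the address of the first byte, but its comparator is matched against the page of the last byte, so an access that crosses a page can never take the fast path. The LEA simply computes addr + size - 1 in the same instruction that previously just moved addr.
    #include <stdint.h>
    #include <stdbool.h>

    #define TARGET_PAGE_BITS 12
    #define TARGET_PAGE_MASK (~(((uint64_t)1 << TARGET_PAGE_BITS) - 1))

    /* tlb_comparator is the page address stored in the TLB entry that was
     * indexed by the FIRST byte of the access. */
    static bool fast_path_hit(uint64_t tlb_comparator, uint64_t addr, int size)
    {
        uint64_t last_byte = addr + size - 1;   /* what the LEA computes */
        return (last_byte & TARGET_PAGE_MASK) == tlb_comparator;
    }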
On an x86-64 host, this gives a 3% boot time improvement for a powerpc
guest and 4% for an x86-64 guest.
Backports commit 8cc580f6a0d8c0e2f590c1472cf5cd8e51761760 from qemu
Implement the AArch64 TLBI operations which take an intermediate
physical address and invalidate stage 2 translations.
Backports commit cea66e91212164e02ad1d245c2371f7e8eb59e7f from qemu
Now we have the ability to flush the TLB only for specific MMU indexes,
update the AArch64 TLB maintenance instruction implementations to only
flush the parts of the TLB they need to, rather than doing full flushes.
We take the opportunity to remove some duplicate functions (the per-asid
tlb ops work like the non-per-asid ones because we don't support
flushing a TLB only by ASID) and to bring the function names in line
with the architectural TLBI operation names.
Backports commit fd3ed969227f54f08f87d9eb6de2d4e48e99279b from qemu
Move the two regdefs for TLBI ALLE1 and TLBI ALLE1IS down so that the
whole set of AArch64 TLBI regdefs is arranged in numeric order.
Backports commit 83ddf975777cc23337b7ef92e83b1b9c949396f3 from qemu
Guest CPU TLB maintenance operations may be sufficiently
specialized to only need to flush TLB entries corresponding
to a particular MMU index. Implement cputlb functions for
this, to avoid the inefficiency of flushing TLB entries
which we don't need to.
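A self-contained sketch of the shape of such a flush (the structure layout and names below are simplified stand-ins, not the real cputlb definitions): only the table for the requested MMU index is invalidated, instead of every index.
    #include <stdint.h>
    #include <string.h>

    #define NB_MMU_MODES 7
    #define CPU_TLB_SIZE 256

    typedef struct {
        uint64_t addr_read, addr_write, addr_code;
        uintptr_t addend;
    } TLBEntryLike;

    typedef struct {
        TLBEntryLike tlb_table[NB_MMU_MODES][CPU_TLB_SIZE];
    } CPUStateLike;

    /* Setting every byte to 0xff leaves each address field at -1, which
     * can never match a real (page-aligned) guest address. */
    static void tlb_flush_one_mmuidx(CPUStateLike *env, int mmu_idx)
    {
        memset(env->tlb_table[mmu_idx], 0xff, sizeof(env->tlb_table[mmu_idx]));
    }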
Backports commit d7a74a9d4a68e27b3a8ceda17bb95cb0a23d8e4d from qemu
Apply the correct conditions in the ats_access() function for
the ATS12NSO* address translation operations:
* succeed at EL2 or EL3
* normal UNDEF trap from NS EL1
* trap to EL3 from S EL1 (only possible if EL3 is AArch64)
(This change means they're now available in our EL3-supporting
CPUs when they would previously always UNDEF.)
Backports commit 87562e4f4a2bdd028eef3549ce9cb4e7c83cb0bf from qemu
Some coprocessor register access functions need to be able
to report "trap to EL3 with an 'uncategorized' syndrome";
add the necessary CPAccessResult enum and handling for it.
I don't currently know of any registers that need to trap
to EL2 with the 'uncategorized' syndrome, but adding the
_EL2 enum as well is trivial and fills in what would
otherwise be an odd gap in the handling.
Backports commit e76157264da20b85698b09fa5eb8e02e515e232c from qemu
Wire up the AArch64 EL2 and EL3 address translation operations
(AT S12E1*, AT S12E0*, AT S1E2*, AT S1E3*), and correct some
errors in the ats_write64() function in previously unused code
that would have done the wrong kind of lookup for accesses from
EL3 when SCR.NS==0.
Backports commit 2a47df953202e1f226aa045ea974427c4540a167 from qemu
For EL2 stage 1 translations, there is no TTBR1. We were already
handling this for 64-bit EL2; add the code to take the 'no TTBR1'
code path for 32-bit EL2 as well.
Backports commit d0a2cbceb2aa20d64d53e1c20c7d26a78ade8382 from qemu
We already implemented ACTLR_EL1; add the missing ACTLR_EL2 and
ACTLR_EL3, for consistency.
Since we don't currently have any CPUs that need the EL2/EL3
versions to reset to non-zero values, implement as RAZ/WI.
Backports commit 834a6c6920316d39aaf0e68ac936c0a3ad164815 from qemu
The AFSR registers are implementation dependent auxiliary fault
status registers. We already implemented a RAZ/WI AFSR0_EL1 and
AFSR1_EL1; add the missing AFSR{0,1}_EL{2,3} for consistency.
Backports commit 37cd6c2478196623ca28526627ca8c69afe0d654 from qemu
The AMAIR registers are for providing auxiliary implementation
defined memory attributes. We already implemented a RAZ/WI
AMAIR_EL1; add the EL2 and EL3 versions for consistency.
Backports commit 2179ef958c81480b841ffa0aab5e265688ffd2b0 from qemu
Add the AArch64 registers MAIR_EL3 and TPIDR_EL3, which are the only
two which we had implemented the 32-bit Secure equivalents of but
not the 64-bit Secure versions.
Backports commit 4cfb8ad896a6f85953038bd913ce3d82d347013d from qemu
apic_internal.h relies on cpu.h having been included (for the
X86CPU type); include it directly rather than relying on it
being pulled in via one of the other includes like timer.h.
Backports commit 20fbcfdd58ea47607a5755979d43f8c48ac93f08 from qemu
Move the muldiv64() function from qemu-common.h to host-utils.h.
This puts it together with all the other arithmetic functions
where we provide a version with __int128_t and a fallback
without, and allows headers which need muldiv64() to avoid
including qemu-common.h.
We don't include host-utils from qemu-common.h, to avoid dragging
more things into qemu-common.h than it already has; in practice
everywhere that needs muldiv64() can get it via qemu/timer.h.
Backports commit 49caffe0cc95a9d0dc344e3328be8197f3536cf8 from qemu
Add a header comment to osdep.h, explaining what the header is for
and some rules to avoid circular-include difficulties.
Backports commit 03557b9abaee78e9d1ef5cd236d32a7b3e75e6f8 from qemu
qemu-common.h has some system header includes and fixups for
things that might be missing. This is really an OS dependency
and belongs in osdep.h, so move it across.
Backports commit bfe7e449f14313f646da621288ca2fd12223414f from qemu
qemu-common.h includes some fixups for things the Win32
headers don't define or define weirdly. These really
belong in os-win32.h, so move them there.
Backports commit 1aad8104f3b69206da1f868639e1f69c26f6d482 from qemu
Add documentation comments for various utility string functions
which we have implemented in util/cutils.c:
pstrcpy()
strpadcpy()
pstrcat()
strstart()
stristart()
qemu_strnlen()
qemu_strsep()
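For example, a doc comment in that style for pstrcpy() might read as follows (the wording here paraphrases the function's contract, it is not the exact comment added by the commit):
    /**
     * pstrcpy:
     * @buf: destination buffer
     * @buf_size: size of @buf in bytes
     * @str: source string
     *
     * Copy @str into @buf, truncating it if it does not fit, and always
     * NUL-terminating the result as long as @buf_size is non-zero.
     */
    void pstrcpy(char *buf, int buf_size, const char *str);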
Backports commit ab6036630865eff8bb12dd51dfa6921b4607fc81 from qemu
Rather than rolling custom concatenate-strings macros for the
QEMU_BUILD_BUG_ON macro to use, use the glue() macro we already
have (since it's now available to us in this header).
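A hedged sketch of the idea (the exact definitions in QEMU's headers may differ slightly): glue() pastes its arguments after expanding them, so QEMU_BUILD_BUG_ON can derive a unique identifier from __LINE__ without its own concatenation helpers.
    #define xglue(x, y) x ## y
    #define glue(x, y)  xglue(x, y)

    /* Fails to compile when x is true: the array gets a negative size. */
    #define QEMU_BUILD_BUG_ON(x) \
        typedef char glue(qemu_build_bug_on__, __LINE__)[(x) ? -1 : 1]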
Backports commit 24134c4e9126bf505b612e901c63a102fc471083 from qemu
osdep.h has a few things which are really compiler specific;
move them to compiler.h, and include compiler.h from osdep.h.
Backports commit 4912086865083a008f4fb73173fd0ddf2206c4d9 from qemu
qemu_printf is an ancient remnant which has been a simple #define to
printf for over a decade, and is used in only a few places. Expand
it out in those places and remove the #define.
Backports commit 71baf787d8fa2a5d186f22d8154069fd212be37f from qemu
There was complicated subtractive arithmetic for determining the
padding of the CPUTLBEntry structure. Simplify this with a union.
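A minimal sketch of the union approach (field names follow CPUTLBEntry, but the sizes and the entry-size constant are assumptions for this example): the padding falls out of overlaying the struct with a fixed-size byte array, with no hand-computed subtraction.
    #include <stdint.h>

    #define CPU_TLB_ENTRY_BITS 5          /* assumed: 32-byte entries */
    typedef uint32_t target_ulong;        /* assumed: 32-bit target */

    typedef union CPUTLBEntryLike {
        struct {                          /* C11 anonymous struct */
            target_ulong addr_read;
            target_ulong addr_write;
            target_ulong addr_code;
            uintptr_t addend;             /* host addr = guest addr + addend */
        };
        /* Forces sizeof(union) == 1 << CPU_TLB_ENTRY_BITS. */
        uint8_t dummy[1 << CPU_TLB_ENTRY_BITS];
    } CPUTLBEntryLike;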
Backports commit b4a4b8d0e0767c85946fd8fc404643bf5766351a from qemu
The LWL/LDL instructions mask the GPR with a mask depending on the
address alignment. It is currently computed by doing:
mask = 0x7fffffffffffffffull >> (t1 ^ 63)
It's simpler to generate it by doing:
mask = ~(-1 << t1)
It uses one TCG instruction fewer, and it avoids loading a 32/64-bit
constant, which can take a few instructions on RISC hosts.
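A quick standalone check of the equivalence (plain C; the real change is in TCG-generated code): for t1 in 0..63, 63 ^ t1 equals 63 - t1, so both expressions produce a mask of the low t1 bits.
    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        for (unsigned t1 = 0; t1 < 64; t1++) {
            uint64_t old_mask = 0x7fffffffffffffffull >> (t1 ^ 63);
            uint64_t new_mask = ~(~0ull << t1);   /* ~(-1 << t1), unsigned */
            assert(old_mask == new_mask);
        }
        return 0;
    }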
Backports commit eb02cc3f89013612cb05df23b5441741e902bbd2 from qemu
As the full specification of the P5600 is available, mips32r5-generic should
be renamed to P5600 and corrected to match its intended behaviour.
Correct the PRid and configuration details.
Features which are not currently supported are described as FIXME.
Fix the Config.MM bit location.
Backports commit aff2bc6dc6d839caf6df0900437cc2cc9e180605 from qemu
If EL3 is AArch32, then the secure physical timer is accessed via
banking of the registers used for the non-secure physical timer.
Implement this banking.
Note that the access controls for the AArch32 banked registers
remain the same as the physical-timer checks; they are not the
same as the controls on the AArch64 secure timer registers.
Backports commit 9ff9dd3c875956523bb4c19ca712e5d05aab3c65 from qemu
On CPUs with EL3, there are two physical timers, one for Secure and one
for Non-secure. Implement this extra timer and the AArch64 registers
which access it.
Backports commit b4d3978c2fdf944e428a46d2850dbd950b6fbe78 from qemu
It's easy to accidentally define two cpregs which both try
to reset the same underlying state field (for instance a
clash between an AArch64 EL3 definition and an AArch32
banked register definition). If the two definitions disagree
about the reset value, then the result depends on which
one happened to be reached last in the hashtable enumeration.
Add a consistency check to detect and assert in these cases:
after reset, we run a second pass where we check that the
reset operation doesn't change the value of the register.
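A self-contained analogue of the check (types simplified; the real code walks the cpregs hashtable): reset every register once, then reset each one again and assert that the value does not move, which catches two definitions fighting over the same state field with different reset values.
    #include <assert.h>
    #include <stdint.h>

    typedef struct {
        uint64_t *field;          /* underlying state this regdef resets */
        uint64_t resetvalue;
    } RegDefLike;

    static void reg_reset(const RegDefLike *r) { *r->field = r->resetvalue; }

    static void check_reset_stability(const RegDefLike *regs, int n)
    {
        for (int i = 0; i < n; i++) {
            reg_reset(&regs[i]);               /* first pass: normal reset */
        }
        for (int i = 0; i < n; i++) {
            uint64_t before = *regs[i].field;
            reg_reset(&regs[i]);               /* second pass must be a no-op */
            assert(*regs[i].field == before);
        }
    }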
Backports commit 49a661910c1374858602a3002b67115893673c25 from qemu
Rename gt_cnt_reset to gt_timer_reset as the function really
resets the timers and not the counters. Move the registration
from counter regs to timer regs.
Backports commit d57b9ee84f6b2786f025712609edb259d0de086d from qemu
This is legal; the MemoryRegion will simply unreference all the
existing subregions and possibly bring them down with it as well.
However, it requires a bit of care to avoid an infinite loop.
Finalizing a memory region cannot trigger an address space update,
but memory_region_del_subregion errs on the side of caution and
might trigger a spurious update: avoid that by resetting mr->enabled
first.
Backports commit 91232d98da2bfe042d4c5744076b488880de3040 from qemu
The MIPS TCG backend implements qemu_ld with 64-bit targets using the v0
register (base) as a temporary to load the upper half of the QEMU TLB
comparator (see line 5 below), however this happens before the input
address is used (line 8 to mask off the low bits for the TLB
comparison, and line 12 to add the host-guest offset). If the input
address (addrl) also happens to have been placed in v0 (as in the second
column below), it gets clobbered before it is used.
addrl in t2 addrl in v0
1 srl a0,t2,0x7 srl a0,v0,0x7
2 andi a0,a0,0x1fe0 andi a0,a0,0x1fe0
3 addu a0,a0,s0 addu a0,a0,s0
4 lw at,9136(a0) lw at,9136(a0) set TCG_TMP0 (at)
5 lw v0,9140(a0) lw v0,9140(a0) set base (v0)
6 li t9,-4093 li t9,-4093
7 lw a0,9160(a0) lw a0,9160(a0) set addend (a0)
8 and t9,t9,t2 and t9,t9,v0 use addrl
9 bne at,t9,0x836d8c8 bne at,t9,0x836d838 use TCG_TMP0
10 nop nop
11 bne v0,t8,0x836d8c8 bne v0,a1,0x836d838 use base
12 addu v0,a0,t2 addu v0,a0,v0 use addrl, addend
13 lw t0,0(v0) lw t0,0(v0)
Fix by using TCG_TMP0 (at) as the temporary instead of v0 (base),
pushing the load on line 5 forward into the delay slot of the low
comparison (line 10). The early load of the addend on line 7 also needs
pushing even further for 64-bit targets, or it will clobber a0 before
we're done with it. The output for 32-bit targets is unaffected.
srl a0,v0,0x7
andi a0,a0,0x1fe0
addu a0,a0,s0
lw at,9136(a0)
-lw v0,9140(a0) load high comparator
li t9,-4093
-lw a0,9160(a0) load addend
and t9,t9,v0
bne at,t9,0x836d838
- nop
+ lw at,9140(a0) load high comparator
+lw a0,9160(a0) load addend
-bne v0,a1,0x836d838
+bne at,a1,0x836d838
addu v0,a0,v0
lw t0,0(v0)
Backports commit 33fca8589cf2aa7bf91564e6a8f26b3ba0910541 from qemu
When a function returns a null pointer on error and only on error, you
can do
if (!foo(foos, errp)) {
... handle error ...
}
instead of the more cumbersome
Error *err = NULL;
if (!foo(foos, &err)) {
error_propagate(errp, err);
... handle error ...
}
A StringProperty's getter, however, may return null on success! We
then fail to call visit_type_str().
Screwed up in 6a146eb, v1.1.
Fails tests/qom-test in my current, heavily hacked QAPI branch. No
reproducer for master known (but I didn't look hard).
Backports commit a479b21c111a87a50203a7413c4e5ec419fc88dd from qemu
In semihosting mode the SDBBP 1 instruction should trigger a UHI syscall,
but in QEMU this does not happen for the recently added microMIPS R6.
Consequently, bare-metal microMIPS R6 programs supporting UHI will not run.
Backports commit 060ebfef1a09b58fb219b3769b72efb407515bf1 from qemu
The add2 code in the tcg_out_addsub2 function doesn't take into account
the case where rl == al == bl. In that case we can't compute the carry
after the addition. As it corresponds to a multiplication by 2, the
carry bit is the bit 31.
While this is a corner case, this prevents x86-64 guests from booting on a
MIPS host.
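An illustration in plain C (not the MIPS backend itself): when the destination register aliases both inputs, the usual "result < input" carry test is impossible because the inputs are gone, but since the two inputs are equal the sum is 2*al and the carry out is simply al's bit 31, read before the add.
    #include <stdint.h>

    /* rl, al and bl are all the same register here. */
    static uint32_t add_with_carry_aliased(uint32_t *rl)
    {
        uint32_t carry = *rl >> 31;   /* bit 31 is the carry of (x + x) */
        *rl = *rl + *rl;              /* the add clobbers both inputs */
        return carry;
    }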
Backports commit c99d69694af4ed15b33e3f7c2e3ef6972c14358d from qemu
Commit 2b7ec66f fixed TCGMemOp masking following the MO_AMASK addition,
but two cases were forgotten in the TCG S390 backend.
Backports commit 3c8691f568f49bf623dcb2850464d4156d95e61b from qemu
Commit 2b7ec66f fixed TCGMemOp masking following the MO_AMASK addition,
but two cases were forgotten in the TCG MIPS backend.
Backports commit 4214a8cb7c15ec43d4b2a43ebf248b273a0f4d45 from qemu
For 32-bit guest, we load a 32-bit address from the TLB, so there is no
need to compensate for the low or high part. This fixes 32-bit guests on
big-endian hosts.
Backports commit e72c4fb81db52be881c9356f1c60e0a7817d2d32 from qemu
Correct computation of vector offsets for EXCP_EXT_INTERRUPT.
For instance, if Cause.IV is 0 the vector offset should be 0x180.
Simplify the logic for finding the vector number for Vectored Interrupts.
Backports commit da52a4dfcc4864fd2260ec4eab331f75b1f0240b from qemu
When a LWL, LWR, LDL or LDR instruction triggers a page fault, QEMU
currently reports the aligned address in CP0 BadVAddr, while the Windows
NT kernel expects the unaligned address.
This patch adds a byte access with the unaligned address at the
beginning of the LWL/LWR/LDL/LDR instructions to possibly trigger a page
fault and fill the QEMU TLB.
Backports commit 908680c6441ac468f4871d513f42be396ea0d264 from qemu
Fix Debug Mode flag clearing, and when DERET is placed between LL and SC
do not make SC fail.
Backports commit fe87c2b36ae9c1c9a5279f3891f3bce1b573baa0 from qemu
When syncing the task ASID with EntryHi, correctly OR the value instead
of assigning it.
Backports commit 6a973e6b6584221bed89a01e755b88e58b496652 from qemu
MSACSR.Cause bits need to be cleared before a vector floating-point
instruction.
FEXDO.df, FEXUPL.df and FEXUPR.df were missed out.
Backports commit d4f4f0d5d9e74c19614479592c8bc865d92773d0 from qemu
Fix the core configuration for MIPS64R6-generic to make it as close as
possible to the I6400.
The I6400 core has 48 bits of virtual address available (SEGBITS).
MIPS SIMD Architecture is available.
Rearrange order of bits to match the specification.
Backports commit 4dc89b782095d7a0b919fafd7b1322b3cb1279f1 from qemu
Windows 10 Insider has a bug where it ignores the CPUID level and interprets
CPUID.(EAX=07H, ECX=0H) incorrectly, because CPUID in fact returned
CPUID.(EAX=04H, ECX=0H); this resulted in execution of unsupported
instructions.
While it's a Windows bug, there is no reason to emulate an incorrect level.
I used http://instlatx64.atw.hu/ as a source of CPUID and checked that
it matches Penryn Xeon X5472, Westmere Xeon W3520, SandyBridge i5-2540M,
and Haswell i5-4670T.
kvm64 and qemu64 were bumped to 0xD to allow all available features for
them (and to avoid the same Windows bug).
Backports commit 3046bb5debc8153a542acb1df93b2a1a85527a15 from qemu.
With the Intel microcode update that removed HLE and RTM, there will be
different kinds of Haswell and Broadwell CPUs out there: some that still
have the HLE and RTM features, and some that don't have the HLE and RTM
features. In both cases people may be willing to use the pc-*-2.3
machine-types.
So, to cover both cases, introduce Haswell-noTSX and Broadwell-noTSX CPU
models, for hosts that have Haswell and Broadwell CPUs without TSX support.
Backports commit a356850b80b3d13b2ef737dad2acb05e6da03753 from qemu
ARAT signals that the APIC timer does not stop in power saving states.
As our APICs are emulated, it's fine to expose this feature to guests,
at least when asking for KVM host features or with CPU types that
include the flag. The exact model number that introduced the feature is
not known, but reports can be found that it's at least available since
Sandy Bridge.
Backports commit 28b8e4d0bf93ba176b4b7be819d537383c5a9060 from qemu
This patch forbids crossing a page boundary in replay mode,
because it can cause an exception. Do it only when the boundary is
crossed by the first instruction in the block.
If the current instruction has already crossed the boundary, it's OK,
because an exception hasn't stopped this code.
Backports commit 5b9efc39aee90bbd343793e942bf8f582a0c9e4f from qemu
TCG generates optimized code for i386 repz instructions in single step mode.
It means that when ecx becomes 0, execution of the string instruction breaks
immediately without an additional iteration for ecx==0 (which will only check
ecx and set the flags). Omitting this iteration leads to different
instruction counts in singlestep mode and in normal execution.
This patch disables optimization of this last iteration for icount mode
which should be deterministic.
Backports commit c4d4525c38cd93cc5d1a743976eb25ac571d435f from qemu
This patch simplifies the AES code, by directly accessing the newly added
S-Box, InvS-Box and InvMixColumns tables instead of recreating them by
using the AES_Te and AES_Td tables.
Backports commit 9551ea6991cfb7c777f7943ad69b30d0a4fadac3 from qemu
These represent xsave-related capabilities of the processor, and KVM may
or may not support them.
Add feature bits so that they are considered by "-cpu ...,enforce", and use
the new feature words instead of calling kvm_arch_get_supported_cpuid.
Bit 3 (XSAVES) is not migratable because it requires saving MSR_IA32_XSS.
Neither KVM nor any commonly available hardware supports it anyway.
Backports commit 0bb0b2d2fe7f645ddaf1f0ff40ac669c9feb4aa1 from qemu
also backports 18cd2c17b5370369a886155c001da0a7f54bbcca
With this, object_property_add_alias() callers can safely free the
target property name, like what already happens with the 'name' argument
to all object_property_add*() functions.
Backports commit 1590d266d96b3f9b42443d6388dfc38f527ac2d8 from qemu
The SCTLR_EL3 cpreg definition was implicitly resetting the
register state to 0, which is both wrong and clashes with
the reset done via the SCTLR definition (since sctlr[3]
is unioned with sctlr_s). This went unnoticed until recently,
when an unrelated change (commit a903c449b41f105aa) happened to
perturb the order of enumeration through the cpregs hashtable for
reset such that the erroneous reset happened after the correct one
rather than before it. Fix this by marking SCTLR_EL3 as an alias,
so its reset is left up to the AArch32 view.
Backports commit e46e1a74ef482f1ef773e750df9654ef4442ca29 from qemu
Remove un-needed usages of ENV_GET_CPU() by converting the APIs to use
CPUState pointers and retrieving the env_ptr as minimally needed.
Scripted conversion for target-* change:
for I in target-*/cpu.h; do
  sed -i \
    's/\(^int cpu_[^_]*_exec(\)[^ ][^ ]* \*s);$/\1CPUState *cpu);/' \
    $I;
done
Backports commit ea3e9847408131abc840240bd61e892d28459452 from qemu
The callers of this function (most of them in target-foo/cpu.c) all
have the cpu pointer handy. Just pass it to avoid an ENV_GET_CPU() from
core code (in exec.c).
Backports commit 4bad9e392e788a218967167a38ce2ae7a32a6231 from qemu
The sole caller of this function navigates the cpu->env_ptr only for
this function to convert it straight back to the cpu pointer. Pass in the
cpu pointer instead and grab the env pointer locally in the function.
Removes a core code usage of ENV_GET_CPU().
Backports commit 3d57f7893c90d911d786cb2c622b0926fc808b57 from qemu
All of the core-code usages of this API have the cpu pointer handy so
pass it in. There are only 3 architecture specific usages (2 of which
are commented out) which can just use ENV_GET_CPU() locally to get the
cpu pointer. This reduces core code usage of the CPU env, which brings
us closer to common-obj'ing these core files.
Backports commit bbd77c180d7ff1b04a7661bb878939b2e1d23798 from qemu
QOM objects are already zero-filled when instantiated, there's no need
to explicitly set numa_node to 0.
Backports commit 199fc85acd0571902eeefef6ea861b8ba4c8201f from qemu
To prepare for a generic internal cipher API, move the
built-in AES implementation into the crypto/ directory
Backports commit 6f2945cde60545aae7f31ab9d5ef29531efbc94f from qemu
Introduce a new crypto/ directory that will (eventually) contain
all the cryptographic related code. This initially defines a
wrapper for initializing gnutls and for computing hashes with
gnutls. The former ensures that gnutls is guaranteed to be
initialized exactly once in QEMU regardless of CLI args. The
block quorum code currently fails to initialize gnutls so it
only works by luck, if VNC server TLS is not requested. The
hash APIs avoid the need to litter the rest of the code with
preprocessor checks and simplify callers by allocating the
correct amount of memory for the requested hash.
Backports commit ddbb0d09661f5fce21b335ba9aea8202d189b98e from qemu
Make sure to not modify the branch target. This ensures that the
branch target is not corrupted during partial retranslation.
Backports commit cd3b29b745b0ff393b2d37317837bc726b8dacc8 from qemu
The TSC frequency fits comfortably in an int when expressed in kHz,
but it may overflow when converted to Hz. In this case,
tsc-frequency returns a negative value because x86_cpuid_get_tsc_freq
does a 32-bit multiplication before assigning to int64_t.
For simplicity just make tsc_khz a 64-bit value.
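A small worked example of the overflow (assuming the usual 32-bit int): a 3 GHz TSC is 3,000,000 kHz, which fits in int, but multiplying by 1000 in 32-bit arithmetic exceeds INT_MAX (2,147,483,647) before the result is widened, which is how the negative value appears.
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int tsc_khz = 3000000;                    /* 3 GHz expressed in kHz */
        /* 32-bit signed multiply overflows (UB in theory, wraps negative
         * in practice), and only then widens to int64_t. */
        int64_t wrong = tsc_khz * 1000;
        int64_t right = (int64_t)tsc_khz * 1000;  /* widen first: 3000000000 */
        printf("%lld vs %lld\n", (long long)wrong, (long long)right);
        return 0;
    }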
Backports commit 06ef227e5158cca6710e6c268d6a7f65a5e2811b from qemu
Currently the "host" page size alignment API is really aligning to both
host and target page sizes. There is the qemu_real_page_size which can
be used for the actual host page size but it's missing a mask and ALIGN
macro as provided for qemu_page_size. Complete the API. This allows
system level code that cares about the host page size to use a
consistent alignment interface without having to needlessly align to
the target page size. This also reduces system level code dependency
on the cpu specific TARGET_PAGE_SIZE.
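A hedged sketch of what completing the API looks like (the macro and variable names here are assumptions modelled on the existing qemu_host_page_* helpers, not necessarily the exact ones added): give the real host page size the same MASK and ALIGN companions that the target-page-sized variant already has.
    #include <stdint.h>

    extern uintptr_t qemu_real_host_page_size;   /* assumed variable name */

    #define QEMU_REAL_HOST_PAGE_MASK   (~(qemu_real_host_page_size - 1))
    #define REAL_HOST_PAGE_ALIGN(addr) \
        (((addr) + qemu_real_host_page_size - 1) & QEMU_REAL_HOST_PAGE_MASK)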
Backports commit 4e51361d79289aee2985dfed472f8d87bd53a8df from qemu
Apart from the MSR, the smi field of struct kvm_vcpu_events has to be
translated into the corresponding CPUX86State fields. Also,
memory transaction flags depend on SMM state, so pull it from struct
kvm_run on every exit from KVM to userspace.
Backports relevant parts of commit fc12d72e10828ca6ff75f2ad432b741f07a10cef from qemu
Loading the BIOS in the mac99 machine is interesting, because there is a
PROM in the middle of the BIOS region (from 16K to 32K). Before memory
region accesses were clamped, when QEMU was asked to load a BIOS from
0xfff00000 to 0xffffffff it would put even those 16K from the BIOS file
into the region. This is weird because those 16K were not actually
visible between 0xfff04000 and 0xfff07fff. However, it worked.
After clamping was added, this also worked. In this case, the
cpu_physical_memory_write_rom_internal function split the write in
three parts: the first 16K were copied, the PROM area (second 16K) were
ignored, then the rest was copied.
Problems then started with commit 965eb2f (exec: do not clamp accesses
to MMIO regions, 2015-06-17). Clamping accesses is not done for MMIO
regions because they can overlap wildly, and MMIO registers can be
expected to perform full-width accesses based only on their address
(with no respect for adjacent registers that could decode to completely
different MemoryRegions). However, this lack of clamping also applied
to the PROM area! cpu_physical_memory_write_rom_internal thus failed
to copy the third range above, i.e. only copied the first 16K of the BIOS.
In effect, address_space_translate is expecting _something else_ to do
the clamping for MMIO regions if the incoming length is large. This
"something else" is memory_access_size in the case of address_space_rw,
so use the same logic in cpu_physical_memory_write_rom_internal.
Backports commit b242e0e0e2969c044a318e56f7988bbd84de1f63 from qemu
Including qemu-common.h from other header files is generally a bad
idea, because it means it's very easy to end up with a circular
dependency. For instance, if we wanted to include memory.h from
qom/cpu.h we'd end up with this loop:
memory.h -> qemu-common.h -> cpu.h -> cpu-qom.h -> qom/cpu.h -> memory.h
Remove the include from memory.h. This requires us to fix up a few
other files which were inadvertently getting declarations indirectly
through memory.h.
The biggest change is splitting the fprintf_function typedef out
into its own header so other headers can get at it without having
to include qemu-common.h.
Backports commit fba0a593b2809ecdda68650952cf3d3332ac1990 from qemu
This introduces the memory region property "global_locking". It is true
by default. By setting it to false, a device model can request BQL-free
dispatching of region accesses to its r/w handlers. The actual BQL
break-up will be provided in a separate patch.
Backports commit 196ea13104f802c508e57180b2a0d2b3418989a3 from qemu
This makes it more consistent with all other core code files, which
either just rely on qemu-common.h inclusion or precede cpu.h with
qemu-common.h.
cpu-all.h should not be included in addition to cpu.h. Remove it.
Backports commit 94beb661bd90bcb477eed6d3b07aced988c40163 from qemu
These are not architecture specific in any way, so move them out of
cpu-defs.h. tb-hash.h is an appropriate place, as a leading user, given
their strong relationship to TB hashing and caching.
Backports commit 41da4bd6420afd1209c408974920f63ff9c658e1 from qemu
This is one of very few things in exec-all with a genuine CPU
architecture dependency. Move these hashing helpers to a new
header to trim exec-all.h down to a near architecture-agnostic
header.
The defs are only used by cpu-exec and translate-all which are both
arch-obj's so the new tb-hash.h has no core code usage.
Backports commit e1b89321bafea9fb33d87852fc91fee579d17dfe from qemu
These exception indices are generic and don't have any reliance on the
per-arch cpu.h defs. Move them to cpu-all.h so they can be used by core
code that does not have access to cpu-defs.h.
Backports commit 9e0dc48c9f05505b53cb28f860456a0648e56ddf from qemu
Intel C Compiler version 15.0.3.187 Build 20150407 doesn't support
the '|' operator for non-floating-point SIMD operands.
Define a VEC_OR macro which uses _mm_or_si128, supported
by both icc and gcc on the x86 platform.
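A sketch of such a macro (SSE2 intrinsics; the exact definition in QEMU may differ): _mm_or_si128 is accepted by both icc and gcc, unlike the '|' operator on __m128i operands under this icc version.
    #include <emmintrin.h>

    #define VEC_OR(v1, v2) (_mm_or_si128(v1, v2))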
Backports commit 34664507c7f038842f20a2c787915680b1fabba2 from qemu
The usages of this define are pure TCG and there is no architecture
specific variation of the value. Localise it to the TCG engine to
remove another architecture agnostic piece from cpu-defs.h.
This follows on from a28177820a868eafda8fab007561cc19f41941f4 where
temp_buf was moved out of CPU_COMMON, obsoleting the need for
the super-early definition.
Backports commit 6e0b07306d1793e8402dd218d2e38a7377b5fc27 from qemu
Implement the YIELD instruction in the ARM and Thumb translators to
actually yield control back to the top level loop rather than being
a simple no-op. (We already do this for A64.)
Backports commit c87e5a61c2b3024116f52f7e68273f864ff7ab82 from qemu