CP0.PageGrain.ELPA enables support for large physical addresses. This field
is encoded as follows:
0: Large physical address support is disabled.
1: Large physical address support is enabled.
If this bit is a 1, the following changes occur to coprocessor 0 registers:
- The PFNX field of the EntryLo0 and EntryLo1 registers is writable and
concatenated with the PFN field to form the full page frame number.
- Access to the optional COP0 registers with PA extension, LLAddr and TagLo,
is defined.
The P5600 can operate in 32-bit or 40-bit Physical Address Mode; therefore,
if XPA is disabled (CP0.PageGrain.ELPA = 0), assume 32-bit Address Mode.
On MIPS64, assume a default of 36 PABITS when CP0.PageGrain.ELPA = 0.
env->PABITS value is constant and indicates maximum PABITS available on
a core, whereas env->PAMask is calculated from env->PABITS and is also
affected by CP0.PageGrain.ELPA.
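As a minimal sketch of that relationship (a hypothetical helper, not the
backported code itself):
    #include <stdint.h>
    /* Hypothetical helper: derive the effective physical-address mask
     * from the core's maximum PABITS and CP0.PageGrain.ELPA. With ELPA
     * clear, the P5600 falls back to 32 bits and MIPS64 to 36. */
    static uint64_t effective_pa_mask(unsigned max_pabits, int elpa,
                                      int is_mips64)
    {
        unsigned pabits = elpa ? max_pabits : (is_mips64 ? 36 : 32);
        return (1ULL << pabits) - 1;
    }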
Backports commit e117f52636d04502fab28bd3abe93347c29f39a5 from qemu
Update tlb->PFN to contain PFN concatenated with PFNX. PFNX is 0 if large
physical addresses are not supported.
Backports commit cd0d45c40133ef8b409aede5ad8a99aeaf6a70fe from qemu
Extend EntryLo0, EntryLo1, LLAddr and TagLo from 32 to 64 bits in MIPS32.
Introduce gen_move_low32() function which moves low 32 bits from 64-bit
temp to GPR; it sign extends 32-bit value on MIPS64 and truncates on
MIPS32.
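The helper has roughly this shape (a sketch against QEMU's TCG API):
    /* Move the low 32 bits of a 64-bit temp into a GPR: sign-extend on
     * MIPS64 (where TCGv is 64-bit), truncate on MIPS32. */
    static inline void gen_move_low32(TCGv ret, TCGv_i64 arg)
    {
    #if defined(TARGET_MIPS64)
        tcg_gen_ext32s_i64(ret, arg);   /* sign-extend bits 31..0 */
    #else
        tcg_gen_extrl_i64_i32(ret, arg); /* keep only the low half */
    #endif
    }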
Backports commit 284b731a6ae47b9ebabb9613e753c4d83cf75dd3 from qemu
ERETNC is identical to ERET except that an ERETNC will not clear the LLbit
that is set by execution of an LL instruction, and thus when placed between
an LL and SC sequence, will never cause the SC to fail.
Presence of ERETNC is denoted by the Config5.LLB bit.
Backports commit ce9782f40ac16660ea9437bfaa2c9c34d5ed8110 from qemu
MIPS SIMD Architecture vector loads and stores require misalignment support.
An MSA memory access must behave as an atomic operation; therefore, for a
vector store access that spans two pages, the validity of all its addresses
has to be checked before any data is written.
Separate helper functions are used for each data format, since the format is
known at translation time.
Use mmu_idx from cpu_mmu_index() instead of calculating it from hflags.
Remove the save_cpu_state() call in translation, since cpu_restore_state()
can be used on fault as GETRA() is passed.
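The two-page case can be detected up front; a standalone sketch (page-size
constants assumed):
    #include <stdbool.h>
    #include <stdint.h>
    #define TARGET_PAGE_BITS 12                       /* assumed 4K pages */
    #define TARGET_PAGE_MASK (~((uint64_t)((1 << TARGET_PAGE_BITS) - 1)))
    /* Does a vector access of 'bytes' starting at 'addr' cross a page
     * boundary? If so, every page it touches must be validated before
     * any element is stored, so a fault cannot leave a partial write. */
    static bool msa_access_spans_two_pages(uint64_t addr, unsigned bytes)
    {
        return (addr & TARGET_PAGE_MASK) !=
               ((addr + bytes - 1) & TARGET_PAGE_MASK);
    }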
Backports commit adc370a48fd26b92188fa4848dfb088578b1936c from qemu
Release 6 requires misaligned memory access support for all ordinary memory
access instructions (for example, LW/SW, LWC1/SWC1).
However, misaligned support is not provided for certain special memory accesses
such as atomics (for example, LL/SC).
Backports commit be3a8c53b4f18bcc51a462d977cc61a0f46ebb1c from qemu
This relatively small architectural feature adds the following:
FIR.FREP: Read-only. If FREP=1, then Config5.FRE and Config5.UFE are
available.
Config5.FRE: When enabled, all single-precision FP arithmetic instructions,
LWC1/LWXC1/MTC1, SWC1/SWXC1/MFC1 cause a Reserved Instruction
exception.
Config5.UFE: Allows user to write/read Config5.FRE using CTC1/CFC1
instructions.
Enable the feature in MIPS64R6-generic CPU.
Backports commit 7c979afd11b09a16634699dd6344e3ba10c9677e from qemu
Move the "Tests" group of functions so that gen_load_fpr32() and
gen_store_fpr32() can use generate_exception().
Backports commit eab9944c7801525737626fa45cddaf00932dd2c8 from qemu
Different CPUs can be in SMM or not at the same time, thus they
will see different things where the chipset places SMRAM.
Backports commit 2001d0cd6d55e5efa9956fa8ff8b89034d6a4329 from qemu
An SMI should definitely wake up a processor in halted state!
This lets OVMF boot with SMM on multiprocessor systems, although
it halts very soon after that with a "CpuIndex != BspIndex"
assertion failure.
Backports commit a9bad65d2c1f61af74ce2ff43238d4b20bf81c3a from qemu
Because bits 31:20 of the limit field are 1, G should be 1.
VMX actually enforces this, let's do it for completeness
in QEMU as well.
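A sketch of the corresponding fix-up (illustrative only; DESC_G_MASK follows
QEMU's x86 descriptor-flags layout):
    #include <stdint.h>
    #define DESC_G_MASK (1u << 23)  /* granularity bit in the flags word */
    /* If bits 31:20 of the loaded segment limit are all ones, the limit
     * can only have come from a granular descriptor, so force G = 1. */
    static uint32_t fixup_seg_flags(uint32_t limit, uint32_t flags)
    {
        if ((limit & 0xfff00000) == 0xfff00000) {
            flags |= DESC_G_MASK;
        }
        return flags;
    }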
Backports commit b4854f1384176d897747de236f426d020668fa3c from qemu
QEMU is not blocking NMIs on entry to SMM. Implementing this has to
cover a few corner cases, because:
- NMIs can then be enabled by an IRET instruction and there
is no mechanism to _set_ the "NMIs masked" flag on exit from SMM:
"A special case can occur if an SMI handler nests inside an NMI handler
and then another NMI occurs. [...] When the processor enters SMM while
executing an NMI handler, the processor saves the SMRAM state save map
but does not save the attribute to keep NMI interrupts disabled."
- However, there is some hidden state, because "If NMIs were blocked
before the SMI occurred [and no IRET is executed while in SMM], they
are blocked after execution of RSM." This is represented by the new
HF2_SMM_INSIDE_NMI_MASK bit. If it is zero, NMIs are _unblocked_
on exit from RSM.
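The resulting state handling can be sketched like this (standalone,
simplified; flag names from the description, bit positions assumed, and the
IRET-inside-SMM path omitted):
    /* Assumed flag bits (QEMU keeps these in env->hflags2). */
    #define HF2_NMI_MASK            (1 << 2) /* NMIs currently blocked */
    #define HF2_SMM_INSIDE_NMI_MASK (1 << 4) /* SMI arrived with NMIs blocked */
    typedef struct { unsigned hflags2; } X86StateSketch;
    static void smm_enter(X86StateSketch *env)
    {
        if (env->hflags2 & HF2_NMI_MASK) {
            /* Remember NMIs were already blocked before the SMI. */
            env->hflags2 |= HF2_SMM_INSIDE_NMI_MASK;
        } else {
            env->hflags2 |= HF2_NMI_MASK; /* block NMIs while in SMM */
        }
    }
    static void smm_rsm(X86StateSketch *env)
    {
        if (!(env->hflags2 & HF2_SMM_INSIDE_NMI_MASK)) {
            env->hflags2 &= ~HF2_NMI_MASK; /* NMIs unblocked on exit */
        }
        env->hflags2 &= ~HF2_SMM_INSIDE_NMI_MASK;
    }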
Backports commit 9982f74bad70479939491b69522da047a3be5a0d from qemu
In order to do this, stop using the cpu_in*/out* helpers, and instead
access address_space_io directly.
cpu_in* and cpu_out* remain for usage in the monitor, in qtest, and
in Xen.
Backports commit 3f7d84648607cc0fcb3812bb4b88978e2a7aa24f from qemu
These include page table walks, SVM accesses and SMM state save accesses.
The bulk of the patch is obtained with
sed -i 's/\(\<[a-z_]*_phys\(_notdirty\)\?\>(cs\)->as,/x86_\1,/'
Backports commit b216aa6c0fcbaa8ff4128969c14594896a5485a4 from qemu
mr->terminates alone doesn't guarantee that we are looking at a RAM region.
mr->ram_addr also has to be checked, in order to distinguish RAM and I/O
regions.
So, do the following:
1) add a new define RAM_ADDR_INVALID, and test it in the assertions
instead of mr->terminates
2) IOMMU regions were not setting mr->ram_addr to a bogus value; initialize
it in the instance_init function so that the new assertions would fire
for IOMMU regions as well.
Backports commit ec05ec26f940564b1e07bf88857035ec27e21dd8 from qemu
The cpu_physical_memory_reset_dirty() function is sometimes used
together with cpu_physical_memory_get_dirty(). This is not atomic since
two separate accesses to the dirty memory bitmap are made.
Turn cpu_physical_memory_reset_dirty() and
cpu_physical_memory_clear_dirty_range_type() into the atomic
cpu_physical_memory_test_and_clear_dirty().
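The essence of the atomic variant, as a standalone sketch:
    #include <stdatomic.h>
    #include <stdbool.h>
    /* Clear the bits in 'mask' and report whether any of them were set,
     * as one atomic step, rather than a racy get-then-reset pair. */
    static bool test_and_clear_dirty_word(_Atomic unsigned long *word,
                                          unsigned long mask)
    {
        unsigned long old = atomic_fetch_and(word, ~mask);
        return (old & mask) != 0;
    }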
Backports commit 03eebc9e3246b9b3f5925aa41f7dfd7c1e467875 from qemu
Use set_bit_atomic() and bitmap_set_atomic() so that multiple threads
can dirty memory without race conditions.
Backports commit d114875b9a1c21162f69a12d72f69a22e7bab376 from qemu
Most of the time, not all bitmaps have to be marked as dirty;
do not do anything if the interesting ones are already dirty.
Previously, any clean bitmap would have caused all the bitmaps to be
marked dirty.
In fact, unless running TCG, most of the time bitmap operations need
not be done at all, because memory_region_is_logging returns zero.
In this case, skip the call to cpu_physical_memory_range_includes_clean
altogether as well.
With this patch, cpu_physical_memory_set_dirty_range is called
unconditionally, so a separate call to xen_modified_memory is no
longer needed.
Backports commit e87f7778b64d4a6a78e16c288c7fdc6c15317d5f from qemu
cpu_physical_memory_set_dirty_lebitmap unconditionally syncs the
DIRTY_MEMORY_CODE bitmap. This however is unused unless TCG is
enabled.
Backports commit 9460dee4b2258e3990906fb34099481c8334c267 from qemu
The new bitmap_test_and_clear_atomic() function clears a range and
returns whether or not the bits were set.
Backports commit 36546e5b803f6e363906607307f27c489441fd15 from qemu
Use atomic_or() for atomic bitmaps where several threads may set bits at
the same time. This avoids the race condition between threads loading
an element, bitwise ORing, and then storing the element.
When setting all bits in a word we can avoid atomic ops and instead just
use an smp_mb() at the end.
Most bitmap users don't need atomicity so introduce new functions.
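A standalone sketch of that pattern (simplified from the description, not the
exact QEMU implementation):
    #include <stdatomic.h>
    #define BITS_PER_LONG ((long)(8 * sizeof(unsigned long)))
    /* Set bits [start, start + nr) in an atomic bitmap. Partial words
     * use an atomic OR so concurrent setters cannot lose each other's
     * bits; whole words are stored plainly, with one final fence (the
     * smp_mb() of the description) publishing the result. */
    static void bitmap_set_atomic_sketch(_Atomic unsigned long *map,
                                         long start, long nr)
    {
        while (nr > 0) {
            long idx = start / BITS_PER_LONG;
            long off = start % BITS_PER_LONG;
            long n = BITS_PER_LONG - off;
            if (n > nr) {
                n = nr;
            }
            if (off == 0 && n == BITS_PER_LONG) {
                atomic_store_explicit(&map[idx], ~0UL,
                                      memory_order_relaxed);
            } else {
                unsigned long mask = ((1UL << n) - 1) << off;
                atomic_fetch_or(&map[idx], mask);
            }
            start += n;
            nr -= n;
        }
        atomic_thread_fence(memory_order_seq_cst);
    }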
Backports commit 9f02cfc84b85929947b32fe1674fbc6a429f332a from qemu
While it is obvious that cpu_physical_memory_get_dirty returns true even if
a single page is dirty, the same is not true for cpu_physical_memory_get_clean;
one would expect that it returns true only if all the pages are clean, but
it actually looks for even one clean page. (By contrast, the caller of that
function, cpu_physical_memory_range_includes_clean, has a good name).
To clarify, rename the function to cpu_physical_memory_all_dirty and return
true if _all_ the pages are dirty. This is the opposite of the previous
meaning, because "all are 1" is the same as "not (any is 0)", so we have to
modify cpu_physical_memory_range_includes_clean as well.
Backports commit 72b47e79cef36ed6ffc718f10e21001d7ec2a66f from qemu
is_cpu_write_access is only set if tb_invalidate_phys_page_range is called
from tb_invalidate_phys_page_fast, and hence from notdirty_mem_write.
However:
- the code bitmap can be built directly in tb_invalidate_phys_page_fast
(unconditionally, since is_cpu_write_access would always be passed as 1);
- the virtual address is not needed to mark the page as "not containing
code" (dirty code bitmap = 1), so we can also remove that use of
is_cpu_write_access. For calls of tb_invalidate_phys_page_range
that do not come from notdirty_mem_write, the next call to
notdirty_mem_write will notice that the page does not contain code
anymore, and will fix up the TLB entry.
The parameter needs to remain in order to guard accesses to cpu->mem_io_pc.
Backports commit fc377bcf617a48233a99a9fe0a26247c38b5cb76 from qemu
These days modification of the TLB is done in notdirty_mem_write,
so the virtual address and env pointer are unnecessary.
The new name of the function, tlb_unprotect_code, is consistent with
tlb_protect_code.
Backports commit 9564f52da7eb061326956ed9a468935e3352512d from qemu
Remove them from the sundry exec-all.h header, since they are only used by
the TCG runtime in exec.c and user-exec.c.
Backports commit 1652b974766401743879d78f796f44b8929b0787 from qemu
DIRTY_MEMORY_CODE is only needed for TCG. By adding it directly to
mr->dirty_log_mask, we avoid testing for TCG everywhere a region is
checked for the enabled/disabled state of dirty logging.
Backports commit 677e7805cf95f3b2bca8baf0888d1ebed7f0c606 from qemu
When the dirty log mask will also cover other bits than DIRTY_MEMORY_VGA,
some listeners may be interested in the overall zero/non-zero value of
the dirty log mask; others may be interested in the value of single bits.
For this reason, always call log_start/log_stop if bits have respectively
appeared or disappeared, and pass the old and new values of the dirty log
mask so that listeners can distinguish the kinds of change.
For example, KVM checks if dirty logging used to be completely disabled
(in log_start) or is now completely disabled (in log_stop). On the
other hand, Xen has to check manually if DIRTY_MEMORY_VGA changed,
since that is the only bit it cares about.
Backports commit b2dfd71c4843a762f2befe702adb249cf55baf66 from qemu
For now memory regions only track DIRTY_MEMORY_VGA individually, but
this will change soon. To support this, split memory_region_is_logging
in two functions: one that returns a given bit from dirty_log_mask,
and one that returns the entire mask. memory_region_is_logging gets an
extra parameter so that the compiler flags misuse.
While VGA-specific users (including the Xen listener!) will want to keep
checking that bit, KVM and vhost check for "any bit except migration"
(because migration is handled via the global start/stop listener
callbacks).
Backports commit 2d1a35bef0ed96b3f23535e459c552414ccdbafd from qemu
phys_page_set_level is writing zeroes to a struct that has just been
filled in by phys_map_node_alloc. Instead, tell phys_map_node_alloc
whether to fill in the page "as a leaf" or "as a non-leaf".
memcpy is faster than struct assignment, which copies each bitfield
individually; that is a compiler bug (https://gcc.gnu.org/PR66391), and
small memcpys like this one are special-cased anyway, and optimized
to a register move, so just use the memcpy.
This cuts the cost of phys_page_set_level from 25% to 5% when
booting qboot.
Backports commit db94604b20278c1dc227a04e4c564d80230e6c3f from qemu
At 8k per TLB (for 64-bit host or target), 8 or more modes
make the TLBs bigger than 64k, and some RISC TCG backends do
not like that. On the affected hosts, cut the TLB size in
half; there is still a measurable speedup on PPC with the
next patch.
Backports commit 1de29aef17a7d70dbc04a7fe51e18942e3ebe313 from qemu
This will be used to size the TLB when more than 8 MMU modes are
used by the target. Limitations come from the limited size of
the immediate fields (which sometimes, as in the case of AArch64,
extend to instructions that shift the immediate).
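Together the two changes give a sizing rule along these lines (a sketch of
the described logic; the macro names follow QEMU's cpu-defs.h):
    /* Each extra MMU mode multiplies the total TLB area, so shrink the
     * per-mode size to stay inside the backend's displacement range.
     * CPU_TLB_ENTRY_BITS is log2(sizeof(CPUTLBEntry)). */
    #define CPU_TLB_BITS \
        MIN(8, \
            TCG_TARGET_TLB_DISPLACEMENT_BITS - CPU_TLB_ENTRY_BITS - \
            (NB_MMU_MODES <= 1 ? 0 : \
             NB_MMU_MODES <= 2 ? 1 : \
             NB_MMU_MODES <= 4 ? 2 : \
             NB_MMU_MODES <= 8 ? 3 : 4))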
Backports commit 006f8638c62bca2b0caf609485f47fa5e14d8a3c from qemu
The existing definition triggers the following when using clang
-fsanitize=undefined:
hw/intc/apic_common.c:314:55: runtime error: left shift of 1048575 by 12
places cannot be represented in type 'int'
Fix it so we won't try to shift a 1 to the sign bit of a signed integer.
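For example (illustrative, using the constant from the error message; the
macro name is assumed):
    #include <stdint.h>
    #define MSI_ADDR_DEST_ID_SHIFT 12
    /* Before (undefined: 0xfffff << 12 overflows a signed int):
     *     mask = 0xfffff << MSI_ADDR_DEST_ID_SHIFT;
     * After (well-defined: the shift is performed on an unsigned value): */
    static const uint32_t dest_id_mask = 0xfffffU << MSI_ADDR_DEST_ID_SHIFT;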
Backports commit 458cf469f4a1cb520b07092f5537c5a6d2389d23 from qemu
The ARMCPRegInfo arrays v8_el3_no_el2_cp_reginfo and v8_el2_cp_reginfo
are actually used on non-v8 CPUs as well. Remove the incorrect v8_
prefix from their names.
Backports commit 4771cd01daaccb2a8929fa04c88c608e378cf814 from qemu
Add support for trapping WFI and WFE instructions to the proper EL when
SCTLR/SCR/HCR settings apply.
Backports commit b1eced713d9913a5c58ba9daa795f10e4c856c49 from qemu
Just NOP the WFI instruction if we have work to do.
This doesn't make much difference currently (though it does avoid
jumping out to the top level loop and immediately restarting),
but the distinction between "halt" and "don't halt" will become
more important when the decision to halt requires us to trap
to a higher exception level instead.
Backports commit 84549b6dcf9147559ec08b066de673587be6b763 from qemu
Deleting the now-unused ARM_TBFLAG_CPACR_FPEN left a gap in the
bit usage; move the following ARM_TBFLAG_XSCALE_CPAR and
ARM_TBFLAG_NS_SHIFT down 3 bits to fill the gap.
Backports commit 647f767ba3b37fb229275086187e96242248a4ac from qemu
Extend the ARM disassemble context to take a target exception EL instead of a
boolean enable. This change reverses the polarity of the check making a value
of 0 indicate floating point enabled (no exception).
Backports commit 9dbbc748d671c70599101836cd1c2719d92f3017 from qemu
Currently we keep the TB flags PSTATE_SS and SS_ACTIVE in different
bit positions for AArch64 and AArch32. Replace these separate
definitions with a single common flag in the upper part of the
flags word.
Backports commit 3cf6a0fcedd429693d439556543400d5f0e31e1d from qemu
Adds CPTR_EL2/3 system registers definitions and access function.
Signed-off-by: Greg Bellows <greg.bellows@linaro.org>
[PMM: merge CPTR_EL2 and HCPTR definitions into a single
def using STATE_BOTH;
don't use readfn/writefn to implement RAZ/WI registers;
don't use accessfn for the no-EL2 CPTR_EL2;
fix cpacr_access logic to catch EL2 accesses to CPACR being
trapped to EL3;
use new CP_ACCESS_TRAP_EL[23] rather than setting
exception.target_el directly]
Backports commit c6f191642a4027909813b4e6e288411f8371e951 from qemu
Some coprocessor access functions will need to indicate that the
instruction should trap to EL2 or EL3 rather than the default
target exception level; add corresponding CPAccessResult enum
entries and handling code.
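The extended enum then looks roughly like this (entry names from the
description; the exact values are illustrative):
    typedef enum CPAccessResult {
        CP_ACCESS_OK = 0,                /* access permitted */
        CP_ACCESS_TRAP = 1,              /* trap to the default target EL */
        CP_ACCESS_TRAP_UNCATEGORIZED = 2,
        CP_ACCESS_TRAP_EL2 = 3,          /* force the trap to EL2 */
        CP_ACCESS_TRAP_EL3 = 4,          /* force the trap to EL3 */
    } CPAccessResult;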
Backports commit 38836a2cd47c20daaaa84873e3d6020f19e4bfca from qemu
Updated the interrupt handling to utilize and report through the target EL
exception field. This includes consolidating and cleaning up code where
needed. Target EL is now calculated once in arm_cpu_exec_interrupt() and
do_interrupt was updated to use the target_el exception field. The
necessary code from arm_excp_target_el() was merged in where needed and the
function removed.
Backports commit 012a906b19e99b126403ff4a257617dab9b34163 from qemu
Rather than making every caller of raise_exception set the
syndrome and target EL by hand, make these arguments to
raise_exception() and have that do the job.
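The consolidated helper has this shape (a sketch assuming QEMU's internal ARM
types of the time):
    static void raise_exception(CPUARMState *env, uint32_t excp,
                                uint32_t syndrome, uint32_t target_el)
    {
        CPUState *cs = CPU(arm_env_get_cpu(env));
        /* Record everything in one place instead of at every caller. */
        cs->exception_index = excp;
        env->exception.syndrome = syndrome;
        env->exception.target_el = target_el;
        cpu_loop_exit(cs);
    }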
Backports commit c63285991b371c031147ad620dd7671662a90303 from qemu
Move the code which sets exception information out of
arm_cpu_handle_mmu_fault and into tlb_fill. tlb_fill
is the only caller which wants to raise_exception()
so it makes more sense for it to handle the whole of
the exception setup.
As part of this cleanup, move the user-mode-only
implementation function for the handle_mmu_fault CPU
method into cpu.c so we don't need to make it globally
visible, and rename the softmmu-only utility function
arm_cpu_handle_mmu_fault to arm_tlb_fill so it's clear
that it's not the same thing.
Backports commit 8c6084bf10fe721929ca94cf16acd6687e61d3ec from qemu
If the SCTLR.UMA trap bit is set then attempts by EL0 to update
the PSTATE DAIF bits via "MSR DAIFSet, imm" and "MSR DAIFClr, imm"
instructions will raise an exception. We were failing to set
the syndrome information for this exception, which meant that
it would be reported as a repeat of whatever the previous
exception was. Set the correct syndrome information.
Backports commit f2932df777dace044719dc2f394f5a5a8aa1b1cd from qemu
Updated the various helper routines to set the target EL as needed using a
dedicated function.
Backports commit e3b1d480995f6e2e86ef062038e618c1234dbcf1 from qemu
Add a CPU state exception target EL field that will be used for communicating
the EL to which an exception should be routed.
Add a disassembly context field for tracking the EL3 architecture needed for
determining the target exception EL.
Add a target EL argument to the generic exception helper for callers to specify
the EL to which the exception should be routed. Extend the helper to set
the newly added CPU state exception target EL.
Add a function for setting the target exception EL and update calls to
helpers to use it.
Backports commit 737103619869600668cc7e8700e4f6eab3943896 from qemu
Updated get_phys_addr_lpae to check the appropriate TTBCR/TCR depending on the
current EL. Support includes using the different TCR format as well as checks to
ensure TTBR1 is not used when in EL2 or EL3.
Backports commit 88e8add8b6656c349a96b447b074688d02dc5415 from qemu
Add a utility function for choosing the correct TTBR system register based on
the specified MMU index. Add use of function on physical address lookup.
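A sketch of such a helper (assuming QEMU's banked cp15 layout; regime_el()
maps the MMU index to the EL the translation regime runs at):
    static uint64_t regime_ttbr(CPUARMState *env, ARMMMUIdx mmu_idx,
                                int ttbrn)
    {
        int el = regime_el(env, mmu_idx);
        return ttbrn == 0 ? env->cp15.ttbr0_el[el]
                          : env->cp15.ttbr1_el[el];
    }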
Backports commit aef878be4e7ab1bdb30b408007320400b0a29c83 from qemu
Add the ARM Cortex-A53 processor definition. Similar to A57, but with
different L1 I cache policy, phys addr size and different cache
geometries. The cache sizes are implementation-configurable, but use
these values (from Xilinx Zynq MPSoC) as a default until cache size
configurability is added.
Backports commit e35310260ec57d20301c65a5714ca55369e971cc from qemu
Rename some A57 CP register variables in preparation for support for
Cortex A53. Use "a57_a53" to describe the shareable features. Some of
the CP15 registers (such as ACTLR) are specific to implementation, but
we currently just RAZ them so continue with that as the policy for both
A57 and A53 processors under a shared definition.
Backports commit ee804264ddc4d3cd36a5183a09847e391da0fc66 from qemu
No code uses the cpu_pc_from_tb() function. Delete from tricore and
arm which each provide an unused implementation. Update the comment
in tcg.h to reflect that this is obsoleted by synchronize_from_tb.
Backports commit fee068e4f190a36ef3bda9aa7c802f90434ef8e5 from qemu
Due to an old SeaBIOS bug, QEMU re-enables LINT0 after reset. This bug is
long gone and therefore this hack is no longer needed. Since it violates
the specification, it is removed.
Backports commit b8eb5512fd8a115f164edbbe897cdf8884920ccb from qemu
address_space_translate_internal will clamp the *plen length argument
based on the size of the memory region being queried. The IOMMU walker
logic in address_space_translate was ignoring this by discarding the
value of *plen after the call. Fix by just always using *plen as the
length argument throughout the function, removing the len local variable.
This fixes a bootloader bug when a single elf section spans multiple
QEMU memory regions.
Backports commit 23820dbfc79d1c9dce090b4c555994f2bb6a69b3 from qemu
There could be a race condition when two processes call
address_space_map concurrently and both want to use the bounce buffer.
Add an in_use flag in BounceBuffer to sync it.
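The synchronization can be sketched with C11 atomics (field names follow the
description):
    #include <stdatomic.h>
    #include <stdbool.h>
    typedef struct BounceBuffer {
        void *buffer;
        atomic_bool in_use;   /* the new flag */
    } BounceBuffer;
    /* Only one caller at a time may claim the single bounce buffer. */
    static bool bounce_buffer_acquire(BounceBuffer *b)
    {
        bool expected = false;
        return atomic_compare_exchange_strong(&b->in_use, &expected, true);
    }
    static void bounce_buffer_release(BounceBuffer *b)
    {
        atomic_store(&b->in_use, false);
    }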
Backports commit c2cba0ffe495b60c4cc58080281e99c7a6580d4b from qemu
When CPU vendor is AMD, the AMD feature alias bits on
CPUID[0x80000001].EDX are already automatically copied from CPUID[1].EDX
on x86_cpu_realizefn(). When CPU vendor is Intel, those bits are
reserved and should be zero. In either case, those bits shouldn't be set
in the CPU model table.
Backports commit 726a8ff68677d8d5fba17eb0ffb85076bfb598dc from qemu
Static properties require only 1 line of code, much simpler than the
existing code that requires writing new getters/setters.
As a nice side-effect, this fixes an existing bug where the setters were
incorrectly allowing the properties to be changed after the CPU was
already realized.
Backports commit b9472b76d273c7796d877c49af50969c0a879c50 from qemu
Updated scr_write to always allow updates to the SCR.SMD bit on ARMv8
regardless of whether virtualization (EL2) is enabled or not.
Backports commit 4eb276408363aef5435a72a8e818f24220b5edd0 from qemu
Add a transaction attribute indicating that a memory access is being
done from user-mode (unprivileged). This corresponds to an equivalent
signal in ARM AMBA buses.
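A sketch of the attribute structure (QEMU packs these as one-bit fields; the
exact layout here is illustrative):
    typedef struct MemTxAttrs {
        unsigned int unspecified : 1; /* no attributes provided */
        unsigned int secure : 1;      /* ARM Secure transaction */
        unsigned int user : 1;        /* unprivileged (user-mode) access */
    } MemTxAttrs;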
Backports commit 0995bf8cd91b81ec9c1078e37b808794080dc5c0 from qemu
Factor out the page table walk memory accesses into their own function,
so that we can specify the correct S/NS memory attributes for them.
This will also provide a place to use the correct endianness and
handle the need for a stage-2 translation when virtualization is
supported.
Backports commit ebca90e4c3aaaae5ed1ee7c569dea00d5d6ed476 from qemu
Honour the NS bit in ARM page tables:
* when adding entries to the TLB, include the Secure/NonSecure
transaction attribute
* set the NS bit in the PAR when doing ATS operations
Note that we don't yet correctly use the NSTable bit to
cause the page table walk itself to use the right attributes.
Backports commit 8bf5b6a9c1911d2c8473385fc0cebfaaeef42dbc from qemu
This patch makes the following changes to the determination of
whether an address is executable, when translating addresses
using LPAE.
1. No longer assumes that PL0 can't execute when it can't read.
It can in AArch64, a difference from AArch32.
2. Use va_size == 64 to determine we're in AArch64, rather than
arm_feature(env, ARM_FEATURE_V8), which is insufficient.
3. Add additional XN determinants
- NS && is_secure && (SCR & SCR_SIF)
- WXN && (prot & PAGE_WRITE)
- AArch64: (prot_PL0 & PAGE_WRITE)
- AArch32: UWXN && (prot_PL0 & PAGE_WRITE)
- XN determination should also work in secure mode (untested)
- XN may even work in EL2 (currently impossible to test)
4. Cleans up the bloated PAGE_EXEC condition - by removing it.
The helper get_S1prot is introduced. It may even work in EL2,
when support for that comes, but, as the function name implies,
it only works for stage 1 translations.
Backports commit d8e052b387635639a6ba4a09a7874fd2f113b218 from qemu
Introduce simple_ap_to_rw_prot(), which has the same behavior as
ap_to_rw_prot(), but takes the 2-bit simple AP[2:1] instead of
the 3-bit AP[2:0]. Use this in get_phys_addr_v6 when SCTLR_AFE
is set, as that bit indicates we should be using the simple AP
format.
It's unlikely this path is getting used. I don't see CR_AFE
getting used by Linux, so possibly not. If it had been, then
the check would have been wrong for all but AP[2:1] = 0b11.
Anyway, this should fix it up, in case it ever does get used.
Backports commit d76951b65dfb1be4e41cfae6abebf8db7a1243a3 from qemu
Instead of mixing access permission checking with access permissions
to page protection flags translation, just do the translation, and
leave it to the caller to check the protection flags against the access
type. Also rename to ap_to_rw_prot to better describe the new behavior.
Backports commit 0fbf5238203041f734c51b49778223686f14366b from qemu
Switch all the uses of ld/st*_phys to address_space_ld/st*,
except for those cases where the address space is the CPU's
(ie cs->as). This was done with the following script which
generates a Coccinelle patch.
A few over-80-columns lines in the result were rewrapped by
hand where Coccinelle failed to do the wrapping automatically,
as well as one location where it didn't put a line-continuation
'\' when wrapping lines on a change made to a match inside
a macro definition.
===begin===
for FN in ub uw_le uw_be l_le l_be q_le q_be uw l q; do
cat <<EOF
@ cpu_matches_ld_${FN} @
expression E1,E2;
identifier as;
@@
ld${FN}_phys(E1->as,E2)
@ other_matches_ld_${FN} depends on !cpu_matches_ld_${FN} @
expression E1,E2;
@@
-ld${FN}_phys(E1,E2)
+address_space_ld${FN}(E1,E2, MEMTXATTRS_UNSPECIFIED, NULL)
EOF
done
for FN in b w_le w_be l_le l_be q_le q_be w l q; do
cat <<EOF
@ cpu_matches_st_${FN} @
expression E1,E2,E3;
identifier as;
@@
st${FN}_phys(E1->as,E2,E3)
@ other_matches_st_${FN} depends on !cpu_matches_st_${FN} @
expression E1,E2,E3;
@@
-st${FN}_phys(E1,E2,E3)
+address_space_st${FN}(E1,E2,E3, MEMTXATTRS_UNSPECIFIED, NULL)
EOF
done
===endit===
Backports commit 42874d3a8c6267ff7789a0396843c884b1d0933a from qemu
Add new address_space_ld*/st* functions which allow transaction
attributes and error reporting for basic load and stores. These
are named to be in line with the address_space_read/write/rw
buffer operations.
The existing ld/st*_phys functions are now wrappers around
the new functions.
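So, for example, the old entry point reduces to a thin wrapper (shape per the
description):
    uint32_t ldl_phys(AddressSpace *as, hwaddr addr)
    {
        /* Unspecified attributes; not interested in the MemTxResult. */
        return address_space_ldl(as, addr, MEMTXATTRS_UNSPECIFIED, NULL);
    }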
Backports commit 500131154d677930fce35ec3a6f0b5a26bcd2973 from qemu
Make address_space_rw take transaction attributes, rather
than always using the 'unspecified' attributes.
Backports commit 5c9eb0286c819c1836220a32f2e1a7b5004ac79a from qemu
Convert the subpage memory ops to _with_attrs; this will allow
us to pass the attributes through to the underlying access
functions. (Nothing uses the attributes yet.)
Backports commit f25a49e0057bbfcc2b1111f60785d919b6ddaeea from qemu
Add a MemTxAttrs field to the IOTLB, and allow target-specific
code to set it via a new tlb_set_page_with_attrs() function;
pass the attributes through to the device when making IO accesses.
Backports commit fadc1cbe85c6b032d5842ec0d19d209f50fcb375 from qemu
Make the CPU iotlb a structure rather than a plain hwaddr;
this will allow us to add transaction attributes to it.
Backports commit e469b22ffda40188954fafaf6e3308f58d50f8f8 from qemu
Rather than retaining io_mem_read/write as simple wrappers around
the memory_region_dispatch_read/write functions, make the latter
public and change all the callers to use them, since we need to
touch all the callsites anyway to add MemTxAttrs and MemTxResult
support. Delete io_mem_read and io_mem_write entirely.
(All the callers currently pass MEMTXATTRS_UNSPECIFIED
and convert the return value back to bool or ignore it.)
Backports commit 3b6434953934e6d4a776ed426d8c6d6badee176f from qemu
Define an API so that devices can register MemoryRegionOps whose read
and write callback functions are passed an arbitrary pointer to some
transaction attributes and can return a success-or-failure status code.
This will allow us to model devices which:
* behave differently for ARM Secure/NonSecure memory accesses
* behave differently for privileged/unprivileged accesses
* may return a transaction failure (causing a guest exception)
for erroneous accesses
This patch defines the new API and plumbs the attributes parameter through
to the memory.c public level functions io_mem_read() and io_mem_write(),
where it is currently dummied out.
The success/failure response indication is also propagated out to
io_mem_read() and io_mem_write(), which retain the old-style
boolean true-for-error return.
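A device using the new API might register callbacks like these (a sketch;
the mydev_* names are hypothetical):
    static MemTxResult mydev_read_with_attrs(void *opaque, hwaddr addr,
                                             uint64_t *data, unsigned size,
                                             MemTxAttrs attrs)
    {
        if (attrs.user) {
            return MEMTX_ERROR;  /* e.g. fault unprivileged accesses */
        }
        *data = 0;               /* device-specific value would go here */
        return MEMTX_OK;
    }
    static const MemoryRegionOps mydev_ops = {
        .read_with_attrs = mydev_read_with_attrs,
        .endianness = DEVICE_NATIVE_ENDIAN,
    };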
Backports commit cc05c43ad942165ecc6ffd39e41991bee43af044 from qemu
An LDRD or STRD where rd is not an even number is UNPREDICTABLE.
We were letting this fall through, which is OK unless rd is 15,
in which case we would attempt to do a load_reg or store_reg
to a nonexistent r16 for the second half of the double-word.
Catch the odd-numbered-rd cases and UNDEF them instead.
To do this we rearrange the structure of the code a little
so we can put the UNDEF catches at the top before we've
allocated TCG temporaries.
Backports commit a4bb522ee51087af61998f290d12ba2e14c7910e from qemu
Since the BSP bit is writable on real hardware, during reset all the CPUs which
were not chosen to be the BSP should have their BSP bit cleared. This fix is
required for KVM to work correctly when it changes the BSP bit.
An additional fix is required for QEMU TCG to allow software to change the BSP
bit.
Backports commit 9cb11fd7539b5b787d8fb3834004804a58dd16ae from qemu
The AArch64 SPSR_EL1 register is architecturally mandated to
be mapped to the AArch32 SPSR_svc register. This means its
state should live in QEMU's env->banked_spsr[1] field.
Correct the various places in the code that incorrectly
put it in banked_spsr[0].
Backports commit 7847f9ea9fce15a9ecfb62ab72c1e84ff516b0db from qemu