Make ARMCPU an opaque type within cpu-qom.h, and move all definitions of
private methods, as well as all type definitions that require knowledge
of the layout, to cpu.h. This helps make files independent of NEED_CPU_H
if they only need to pass around CPU pointers.
Backports commit 74e755647c1598a6845df1ee4f8b96d01afd96e7 from qemu
Returning the negated value of accel_initialised is meaningless,
and the caller vl doesn't check it.
Backports commit bdc3f61dec2f9c227235bb5f677a0272e1184c82 from qemu
Simplify cpu_exec() by extracting TB execution code outside of
cpu_exec() into a new static inline function cpu_loop_exec_tb().
Backports commit 928de9ee14b0b63ee9f9275732ed3e1c8b5f4790 from qemu
Simplify cpu_exec() by extracting interrupt handling code outside of
cpu_exec() into a new static inline function cpu_handle_interrupt().
Backports commit c385e6e49763c6dd5dbbd90fadde95d986f8bd38 from qemu
Simplify cpu_exec() by extracting exception handling code out of
cpu_exec() into a new static inline function cpu_handle_exception().
Also make cpu_handle_debug_exception() inline as it is used only once.
Backports commit ea284766ec6b9f1712369249566b4c372f3cec8b from qemu
Simplify cpu_exec() by extracting CPU halt state handling code out of
cpu_exec() into a new static inline function cpu_handle_halt().
Backports commit 8b2d34e997371c9729a0f41e3cc624d4300bbe78 from qemu
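Taken together, the four extractions above, cpu_handle_halt(),
cpu_handle_exception(), cpu_handle_interrupt() and cpu_loop_exec_tb(),
leave cpu_exec() with roughly the following shape (a hedged sketch;
helper signatures are abbreviated and the sigsetjmp() plumbing is only
hinted at in comments):

    int cpu_exec(CPUState *cpu)
    {
        int ret;

        /* A halted CPU never enters the execution loop at all. */
        if (cpu_handle_halt(cpu)) {
            return EXCP_HALTED;
        }

        for (;;) {
            /* The real code arms sigsetjmp() here so cpu_loop_exit()
             * can unwind back to this point from anywhere below. */
            TranslationBlock *last_tb = NULL;
            int tb_exit = 0;

            if (cpu_handle_exception(cpu, &ret)) {
                break;                    /* leave with ret set */
            }
            for (;;) {
                cpu_handle_interrupt(cpu, &last_tb);
                TranslationBlock *tb = tb_find_fast(cpu, &last_tb, tb_exit);
                cpu_loop_exec_tb(cpu, tb, &last_tb, &tb_exit);
            }
        }
        return ret;
    }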
This comment should have been deleted by commit 0ac087f1f3ae ("removed
unused code") but somehow it is still here. There's no point in keeping it.
Backports commit c6f0d9f84c43ae973270df1a77482466558ee487 from qemu
This field was used to tell cpu_interrupt() to unlink a chain of TBs
being executed when it worked that way. Now that cpu_interrupt() no
longer does this, the field is no longer needed.
Backports commit 3213525f8ab48742db09dab18cb9ae6f36a6c921 from qemu
Move tb_add_jump() call and surrounding code from cpu_exec() into
tb_find_fast(). That simplifies cpu_exec() a little by hiding the direct
chaining optimization details inside tb_find_fast(). It also allows moving
the tb_lock()/tb_unlock() pair into tb_find_fast(), putting it closer
to tb_find_slow(), which also manipulates the lock.
Backports commit a0522c7a55cc8ac76d82884cf8e52f76daa664cc from qemu
'tb_invalidated_flag' was meant to catch two events:
* some TB has been invalidated by tb_phys_invalidate();
* the whole translation buffer has been flushed by tb_flush().
Then it was checked:
* in cpu_exec() to ensure that the last executed TB can be safely
linked to directly call the next one;
* in cpu_exec_nocache() to decide if the original TB should be provided
for further possible invalidation along with the temporarily
generated TB.
It is always safe to patch an invalidated TB since it is not going to be
used anyway. It is also safe to call tb_phys_invalidate() for an already
invalidated TB. Thus, setting this flag in tb_phys_invalidate() is
simply unnecessary. Moreover, it can prevent perfectly proper linking
of TBs whenever an arbitrary TB has been invalidated. So just don't touch it
in tb_phys_invalidate().
Since this flag is now only used to catch whether tb_flush() has been
called, rename it to 'tb_flushed'. Declare it as 'bool' and stick to using
only 'true' and 'false' to set its value. Also, instead of setting it in
tb_gen_code() just after tb_flush() has been called, do it right inside
of tb_flush().
In cpu_exec(), this flag is used to track whether tb_flush() has been
called and has made 'next_tb' (a reference to the last executed TB)
invalid for linking it to directly call the next TB. tb_flush() can be
called during the CPU execution loop from tb_gen_code(), during TB
execution, or by another thread while 'tb_lock' is released. Catch
translation buffer flushes reliably by resetting this flag once before
the first TB lookup and each time we find it set before trying to add a
direct jump. Don't touch it in tb_find_physical().
Each vCPU has its own execution loop in multithreaded mode and thus
should have its own copy of the flag so that it can reset it along with
its own 'next_tb' without affecting any other vCPU's execution thread.
So make this flag per-vCPU and move it to CPUState.
In cpu_exec_nocache(), we only need to check if tb_flush() has been
called from tb_gen_code() called by cpu_exec_nocache() itself. To do
this reliably, preserve the old value of the flag, reset it before
calling tb_gen_code(), check it afterwards, and combine the saved value
back into the flag.
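A minimal sketch of that save/reset/check/combine pattern inside
cpu_exec_nocache(), where 'orig_tb' and 'max_cycles' are its parameters
and 'tb_flushed' is the per-vCPU flag described above:

    bool old_tb_flushed = cpu->tb_flushed;   /* preserve the old value  */
    cpu->tb_flushed = false;                 /* reset before generating */
    tb = tb_gen_code(cpu, orig_tb->pc, orig_tb->cs_base, orig_tb->flags,
                     max_cycles | CF_NOCACHE);
    /* Provide orig_tb for later invalidation only if tb_gen_code()
     * did not flush the translation buffer underneath us. */
    tb->orig_tb = cpu->tb_flushed ? NULL : orig_tb;
    cpu->tb_flushed |= old_tb_flushed;       /* combine the value back  */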
This patch is based on the patch "tcg: move tb_invalidated_flag to
CPUState" from Paolo Bonzini <pbonzini@redhat.com>.
Backports commit 6f789be56d3f38e9214dafcfab3bf9be7191f370 from qemu
The value returned from tcg_qemu_tb_exec() is the value passed to the
corresponding tcg_gen_exit_tb() at translation time of the last TB
attempted to execute. It is a little confusing to store it in a variable
named 'next_tb'. In fact, it is a combination of 4-byte aligned pointer
and additional information in its two least significant bits. Break it
down right away into two variables named 'last_tb' and 'tb_exit', which
are a pointer to the last TB attempted to execute and the TB exit
reason, respectively. This simplifies the code and improves its
readability.
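A minimal sketch of the breakdown, using TB_EXIT_MASK to cover the two
low bits:

    uintptr_t ret = tcg_qemu_tb_exec(env, tb->tc_ptr);
    /* The low two bits encode the exit reason; the rest is a pointer. */
    TranslationBlock *last_tb = (TranslationBlock *)(ret & ~TB_EXIT_MASK);
    int tb_exit = ret & TB_EXIT_MASK;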
Correct a misleading documentation comment for tcg_qemu_tb_exec() and
fix logging in cpu_tb_exec(). Also rename a misleading 'next_tb' in
another couple of places.
Backports commit 819af24b9c1e95e6576f1cefd32f4d6bf56dfa56 from qemu
In user mode, there's only a static address translation, TBs are always
invalidated properly, and direct jumps are reset when the mapping changes.
Thus the destination address is always valid for direct jumps and
there's no need to restrict it to the pages the TB resides in.
Backports commit 90aa39a1cc4837360889f0e033ca25cc82100308 from qemu
We don't take care of direct jumps when the address mapping changes.
Thus we must be sure to generate direct jumps so that they remain valid
even if the address mapping changes. Luckily, we only allow executing a
TB if it was generated from pages that match the current mapping.
Document tcg_gen_goto_tb() declaration and note the reason for
destination PC limitations.
Some targets with variable-length instructions allow a TB to straddle a
page boundary. However, we make sure that both of a TB's pages match the
current address mapping when looking up TBs, so it is safe to do direct
jumps into both pages. Correct the checks for some of those targets.
Given that, we can safely patch a TB which spans two pages. Remove the
unnecessary check in cpu_exec() and allow such TBs to be patched.
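A hedged sketch of the per-target check involved, in the shape of a
typical use_goto_tb() helper (exact DisasContext fields vary by target):

    static inline bool use_goto_tb(DisasContext *s, target_ulong dest)
    {
    #ifndef CONFIG_USER_ONLY
        /* Direct jumps must stay within the pages the TB was generated
         * from; a TB may straddle two pages, so either page is fine. */
        return (s->tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK) ||
               (s->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK);
    #else
        /* User mode: static mapping, so any destination is valid. */
        return true;
    #endif
    }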
Backports commit 5b053a4a28278bca606eeff7d1c0730df1b047e9 from qemu
Unify the code of this function with tb_jmp_remove_from_list(). Making
these functions similar improves their readability. Also this could be a
step towards making this function thread-safe.
Backports commit f9c5b66f487a04d3747dc6997b1503f9258df945 from qemu
Move the code for removing jumps to a TB out of tb_phys_invalidate() to
a separate static inline function tb_jmp_unlink(). This simplifies
tb_phys_invalidate() and improves code structure.
Backports commit 89bba496322d4cf996d42cdd4bb0912231656c3d from qemu
tb_jmp_remove() was only used to remove the TB from a list of all TBs
jumping to the same TB which is the n-th jump destination of the given TB.
Put a comment briefly describing the function behavior and rename it to
better reflect its purpose.
Backports commit 133626783aa5a1bf86332fa3e6f7b8efe005f924 from qemu
The check is to make sure that another thread hasn't already done the
same while we were outside of tb_lock. Mention this in a comment.
Backports commit 9962c478b153a18fe88a6509fe58cd178aff8abc from qemu
Initialize TB's direct jump list data fields and reset the jumps before
tb_link_page() puts it into the physical hash table and the physical
page list. This way the TB is completely initialized before it becomes visible.
This is pure rearrangement of code to a more suitable place, though it
could be a preparation for relaxing the locking scheme in future.
Backports commit 901bc3deb43bf37c85e43955905d003be7ae5fa5 from qemu
These fields do not contain pure pointers to a TranslationBlock
structure. So uintptr_t is the most appropriate type for them.
Also add some asserts to ensure that the two least significant bits of
the pointer are always zero before assigning it to jmp_list_first.
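A minimal sketch of the invariant from tb_add_jump(): the two low bits
of jmp_list_first carry a tag (the jump slot number, or 2 as the list
terminator), so the TB pointer itself must have them clear:

    assert(((uintptr_t)tb & 3) == 0);
    tb_next->jmp_list_first = (uintptr_t)tb | n;   /* n is 0 or 1 */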
Backports commit c37e6d7e3589ecb96914faa21025ad7ba6654aea from qemu
Briefly describe in a comment how direct block chaining is done. It
should help in understanding the following data fields.
Rename some fields in TranslationBlock and TCGContext structures to
better reflect their purpose (dropping excessive 'tb_' prefix in
TranslationBlock but keeping it in TCGContext):
tb_next_offset => jmp_reset_offset
tb_jmp_offset => jmp_insn_offset
tb_next => jmp_target_addr
jmp_next => jmp_list_next
jmp_first => jmp_list_first
Avoid using a magic constant for the invalid offset that is used to
indicate that no n-th jump has been generated.
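A sketch of the named constant along the lines of the upstream change
(the offsets are 16-bit, so an all-ones value is free to mean "not set"):

    #define TB_JMP_RESET_OFFSET_INVALID 0xffff /* no n-th jump generated */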
Backports commit f309101c26b59641fc1aa8fb2a98a5441cdaea03 from qemu
The setting of tcg_ctx.code_gen_buffer_size is done by the only caller of
size_code_gen_buffer(), which is code_gen_alloc():
$ git grep size_code_gen_buffer
translate-all.c:static inline size_t size_code_gen_buffer(size_t tb_size)
translate-all.c: tcg_ctx.code_gen_buffer_size = size_code_gen_buffer(tb_size);
Backports commit 835154b6e2200460f04719d0028716a37c178368 from qemu
Ensure direct jump patching in MIPS is atomic by using
atomic_read()/atomic_set() for code patching.
Backports commit c82460a560176ef69c2f0662bd280612e274db96 from qemu
Ensure direct jump patching in SPARC is atomic by using
atomic_read()/atomic_set() for code patching.
Backports commit 84f79fb7c6e857edc807e4a251338243ce0cbac3 from qemu
Ensure direct jump patching in AArch64 is atomic by using
atomic_read()/atomic_set() for code patching.
Backports commit 9e269112953be4d670cb0d25042bd6546fcf3e45 from qemu
Ensure direct jump patching in ARM is atomic by using
atomic_read()/atomic_set() for code patching.
Backports commit 7d14e0e2d661479985197203589c38840e1066df from qemu
Ensure direct jump patching in s390 is atomic by:
* naturally aligning a location of direct jump address;
* using atomic_read()/atomic_set() for code patching.
Backports commit ed3d51ecd7fe248d3959e469d53890ac9ffe0cd2 from qemu
Ensure direct jump patching in i386 is atomic by:
* naturally aligning a location of direct jump address;
* using atomic_read()/atomic_set() for code patching.
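For illustration, on i386 hosts the patching amounts to something like
this sketch of tb_set_jmp_target1() (the displacement is a 4-byte field
relative to the end of the jump instruction):

    static inline void tb_set_jmp_target1(uintptr_t jmp_addr, uintptr_t addr)
    {
        /* One atomic store patches the 32-bit relative displacement;
         * the location is kept aligned so readers never see a torn
         * value. No explicit icache flush is needed on x86. */
        atomic_set((int32_t *)jmp_addr, addr - (jmp_addr + 4));
    }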
Backports commit 0d07abf05e98903c7faf204a9a90f7d45b7554dc from qemu
These macros provide a convenient way to n-byte align pointers up and
down and check if a pointer is n-byte aligned.
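A hedged sketch of such helpers in QEMU's osdep.h style (treat the
exact names as assumptions):

    /* Round n down/up to a multiple of m. */
    #define QEMU_ALIGN_DOWN(n, m) ((n) / (m) * (m))
    #define QEMU_ALIGN_UP(n, m)   QEMU_ALIGN_DOWN((n) + (m) - 1, (m))

    /* Pointer variants: align a pointer down/up to an n-byte boundary
     * and test whether a pointer is n-byte aligned. */
    #define QEMU_ALIGN_PTR_DOWN(p, n) \
        ((typeof(p))QEMU_ALIGN_DOWN((uintptr_t)(p), (n)))
    #define QEMU_ALIGN_PTR_UP(p, n) \
        ((typeof(p))QEMU_ALIGN_UP((uintptr_t)(p), (n)))
    #define QEMU_PTR_IS_ALIGNED(p, n) \
        (((uintptr_t)(p) % (n)) == 0)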
Backports commit 6b587d3cda48e7ba26de8d30bf0d8a7063970715 from qemu
We are inconsistent with the type of tb->flags: usage varies loosely
between int and uint64_t. Settle to uint32_t everywhere, which is
superior to both: at least one target (aarch64) uses the most significant
bit in the u32, and uint64_t is wasteful.
Compile-tested for all targets.
Backports commit 89fee74a0f066dfd73830a7b5fa137e87888c870 from qemu
The TCR_EL2 and TCR_EL3 regdefs were incorrectly using the
vmsa_tcr_el1_write function for writes. Since these registers don't
have the A1 bit that TCR_EL1 does, we don't need to do a tlb_flush()
when they are written. Remove the unnecessary .writefn and also the
harmless but unneeded .raw_writefn and .resetfn definitions.
Backports commit 6459b94c26dd666badb3547fef1456992a08e60b from qemu
The various load/store variants under disas_ldst_reg can all reuse the
same decoding for opc, size, rt and is_vector.
This patch unifies the decoding in preparation for generating
instruction syndromes for data aborts.
This will allow us to reduce the number of places to hook in updates
to the load/store state needed to generate the insn syndromes.
No functional change.
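The shared decode amounts to pulling the common fields out once at the
top of disas_ldst_reg(), roughly as follows (a sketch; bit positions
follow the A64 load/store register encodings):

    int rt = extract32(insn, 0, 5);          /* Rt: transfer register   */
    int opc = extract32(insn, 22, 2);        /* opc: load/store variant */
    bool is_vector = extract32(insn, 26, 1); /* V: SIMD&FP access       */
    int size = extract32(insn, 30, 2);       /* size: log2(access size) */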
Backports commit cd694521ca061a5d0436d5df4ec8c17c8f4dfcdb from qemu
Use extract32 instead of open coding the bit masking when decoding
is_signed and is_extended. This streamlines the decoding with some
of the other ldst variants.
No functional change.
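For example, a sketch of the pattern (not the exact decoder lines):

    /* Open-coded:  is_signed = (opc & 2) != 0;
     * The helper makes field position and width explicit: */
    bool is_signed = extract32(opc, 1, 1);
    bool is_extended = extract32(opc, 0, 1);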
Backports commit 026a19c3128678d4fe301fc36e8ffacdc9ecccb8 from qemu
Split the data abort syndrome generator into two versions:
one with a valid Instruction Specific Syndrome (ISS) and another without.
The following new flags are supported by the syndrome generator
with ISS:
* isv - Instruction syndrome valid
* sas - Syndrome access size
* sse - Syndrome sign extend
* srt - Syndrome register transfer
* sf - Sixty-Four bit register width
* ar - Acquire/Release
These flags are not yet used, so this patch has no functional change
except that we will now correctly set the IL bit in data abort
syndromes without ISS information.
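The new flags pack into the data abort syndrome roughly as follows
(a sketch; bit positions per the ARM ARM ESR_ELx ISS encoding):

    uint32_t syn = 0;              /* syndrome word being assembled   */
    syn |= 1u << 24;               /* ISV: instruction syndrome valid */
    syn |= (sas & 0x3u) << 22;     /* SAS: syndrome access size       */
    syn |= (sse & 0x1u) << 21;     /* SSE: syndrome sign extend       */
    syn |= (srt & 0x1fu) << 16;    /* SRT: syndrome register transfer */
    syn |= (sf & 0x1u) << 15;      /* SF: sixty-four bit register     */
    syn |= (ar & 0x1u) << 14;      /* AR: acquire/release             */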
Backports commit 094d028a7968236cd2b7f7b96394f7a3b8ad97c8 from qemu
Use tcg_set_insn_param() instead of directly accessing internal
tcg data structures to update an insn param.
Backports commit 25caa94c4a26daaab1e65c6d887e2972aeb5749e from qemu
Add tcg_set_insn_param as a mechanism to modify an insn
parameter after emitting the insn. This is useful for icount
and also for embedding fault information for a specific insn.
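A hedged sketch of what such a helper looks like against the TCG op
buffers of that era (treat the tcg_ctx field names as assumptions):

    static inline void tcg_set_insn_param(int op_idx, int arg, TCGArg v)
    {
        /* Rewrite argument 'arg' of the op already emitted at op_idx. */
        tcg_ctx.gen_opparam_buf[tcg_ctx.gen_op_buf[op_idx].args + arg] = v;
    }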
Backports commit 1d41478fd428e01f057d3248292e4cdcdb048523 from qemu
There is a bug in the ARM address translation regime with the
long-descriptor format. When a descriptor is read, its address is formed
from an index which is a part of the input address, and on the first
iteration this index is incorrectly masked with the 'grainsize' mask,
although it can be wider according to the pseudocode.
On the other hand, on iterations other than the first, the descriptor
address is formed from the previous-level descriptor by masking with the
'descaddrmask' value. It always clears just the 12 lower bits, but it
must clear the 'grainsize' lower bits instead, according to the
pseudocode.
The patch fixes both cases.
Backports commit dddb5223413c5425ae6eaeb3b967627efc9675f7 from qemu
As described in AArch32.CheckS2Permission, an instruction fetch fails if
the XN bit is set or there is no read permission for the address.
Backports commit dfda68377e20943f474505e75238cb96bc6874bf from qemu
Returning a partial object on error is an invitation for a careless
caller to leak memory. We already fixed things in an earlier
patch to guarantee NULL if visit_start fails ("qapi: Guarantee
NULL obj on input visitor callback error"), but that does not
help the case where visit_start succeeds but some other failure
happens before visit_end, such that we leak a partially constructed
object outside visit_type_FOO(). As no one outside the testsuite
was actually relying on these semantics, it is cleaner to just
document and guarantee that ALL pointer-based visit_type_FOO()
functions always leave a safe value in *obj during an input visitor
(either the new object on success, or NULL if an error is
encountered), so callers can now unconditionally use
qapi_free_FOO() to clean up regardless of whether an error occurred.
The decision is done by adding visit_is_input(), then updating the
generated code to check if additional cleanup is needed based on
the type of visitor in use.
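In the generated code for a pointer type, the added cleanup looks
roughly like this sketch (FOO stands for any generated type; the real
generated function also ends the struct visit, elided here):

    void visit_type_FOO(Visitor *v, const char *name, FOO **obj,
                        Error **errp)
    {
        Error *err = NULL;

        visit_start_struct(v, name, (void **)obj, sizeof(FOO), &err);
        if (!err && *obj) {
            visit_type_FOO_members(v, *obj, &err);
        }
        /* New: an input visitor must not hand back a half-built object. */
        if (err && visit_is_input(v)) {
            qapi_free_FOO(*obj);
            *obj = NULL;
        }
        error_propagate(errp, err);
    }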
Note that we still leave *obj unchanged after a scalar-based
visit_type_FOO(); I did not feel like auditing all uses of
visit_type_Enum() to see if the callers would tolerate a specific
sentinel value (not to mention having to decide whether it would
be better to use 0 or ENUM__MAX as that sentinel).
Backports commit 68ab47e4b4ecc1c4649362b8cc1e49794d1a6537 from qemu