..
   Copyright (c) 2017 Linaro Limited
   Written by Peter Maydell

===================
Load and store APIs
===================

QEMU internally has multiple families of functions for performing
loads and stores. This document attempts to enumerate them all
and indicate when to use them. It does not provide detailed
documentation of each API -- for that you should look at the
documentation comments in the relevant header files.
``ld*_p and st*_p``
~~~~~~~~~~~~~~~~~~~

These functions operate on a host pointer, and should be used
when you already have a pointer into host memory (corresponding
to guest RAM or a local buffer). They deal with doing accesses
with the desired endianness and with correctly handling
potentially unaligned pointer values.

Function names follow the pattern:

load: ``ld{type}{sign}{size}_{endian}_p(ptr)``

store: ``st{type}{size}_{endian}_p(ptr, val)``
``type``
 - (empty) : integer access
 - ``f`` : float access

``sign``
 - (empty) : for 32 or 64 bit sizes (including floats and doubles)
 - ``u`` : unsigned
 - ``s`` : signed

``size``
 - ``b`` : 8 bits
 - ``w`` : 16 bits
 - ``l`` : 32 bits
 - ``q`` : 64 bits

``endian``
 - ``he`` : host endian
 - ``be`` : big endian
 - ``le`` : little endian

The ``_{endian}`` infix is omitted for target-endian accesses.

The target endian accessors are only available to source
files which are built per-target.
There are also functions which take the size as an argument:

load: ``ldn{endian}_p(ptr, sz)``

which performs an unsigned load of ``sz`` bytes from ``ptr``
as an ``{endian}`` order value and returns it in a uint64_t.

store: ``stn{endian}_p(ptr, sz, val)``

which stores ``val`` to ``ptr`` as an ``{endian}`` order value
of size ``sz`` bytes.

Regexes for git grep:
 - ``\<ldf\?[us]\?[bwlq]\(_[hbl]e\)\?_p\>``
 - ``\<stf\?[bwlq]\(_[hbl]e\)\?_p\>``
 - ``\<ldn_\([hbl]e\)\?_p\>``
 - ``\<stn_\([hbl]e\)\?_p\>``
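As an illustration (not taken from real QEMU code), a device model that
already holds a host pointer into a guest RAM buffer might use these
accessors to read and write fixed-endianness fields regardless of host
byte order or alignment. In the sketch below only the ``ld*_p``/``st*_p``
accessors are real APIs; the function and descriptor layout are invented:

.. code-block:: c

   /* Hypothetical descriptor parser: the layout is invented for
    * illustration, only the ld*_p/st*_p calls are real QEMU APIs.
    */
   static uint64_t widget_parse_desc(uint8_t *desc)
   {
       uint32_t flags = ldl_le_p(desc);      /* 32-bit little-endian load */
       uint64_t base = ldq_le_p(desc + 4);   /* 64-bit LE load, possibly unaligned */

       /* write back a 16-bit big-endian status word */
       stw_be_p(desc + 12, flags ? 1 : 0);
       return base;
   }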
``cpu_{ld,st}*_mmuidx_ra``
~~~~~~~~~~~~~~~~~~~~~~~~~~

These functions operate on a guest virtual address plus a context,
known as a "mmu index" or ``mmuidx``, which controls how that virtual
address is translated. The meaning of the indexes is target specific,
but specifying a particular index might be necessary if, for instance,
the helper requires an "always as non-privileged" access rather than
the default access for the current state of the guest CPU.

These functions may cause a guest CPU exception to be taken
(e.g. for an alignment fault or MMU fault) which will result in
guest CPU state being updated and control longjmp'ing out of the
function call. They should therefore only be used in code that is
implementing emulation of the guest CPU.
The ``retaddr`` parameter is used to control unwinding of the
guest CPU state in case of a guest CPU exception. This is passed
to ``cpu_restore_state()``. Therefore the value should either be 0,
to indicate that the guest CPU state is already synchronized, or
the result of ``GETPC()`` from the top level ``HELPER(foo)``
function, which is a return address into the generated code [#gpc]_.

.. [#gpc] Note that ``GETPC()`` should be used with great care: calling
          it in other functions that are *not* the top level
          ``HELPER(foo)`` will cause unexpected behavior. Instead, the
          value of ``GETPC()`` should be read from the helper and passed
          if needed to the functions that the helper calls.
Function names follow the pattern:

load: ``cpu_ld{sign}{size}{end}_mmuidx_ra(env, ptr, mmuidx, retaddr)``

store: ``cpu_st{size}{end}_mmuidx_ra(env, ptr, val, mmuidx, retaddr)``

``sign``
 - (empty) : for 32 or 64 bit sizes
 - ``u`` : unsigned
 - ``s`` : signed

``size``
 - ``b`` : 8 bits
 - ``w`` : 16 bits
 - ``l`` : 32 bits
 - ``q`` : 64 bits

``end``
 - (empty) : for target endian, or 8 bit sizes
 - ``_be`` : big endian
 - ``_le`` : little endian

Regexes for git grep:
 - ``\<cpu_ld[us]\?[bwlq]\(_[bl]e\)\?_mmuidx_ra\>``
 - ``\<cpu_st[bwlq]\(_[bl]e\)\?_mmuidx_ra\>``
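For example, a target might implement a "load as unprivileged"
instruction with a helper along these lines. This is a minimal sketch:
the helper name and the ``MMU_USER_IDX`` constant are hypothetical and
vary by target, while the ``cpu_ldl_mmuidx_ra()`` and ``GETPC()`` usage
follows the pattern described above:

.. code-block:: c

   /* Sketch of a per-target helper performing a 32-bit load with a
    * forced "unprivileged" mmu index (MMU_USER_IDX is hypothetical).
    */
   uint32_t HELPER(ldl_unpriv)(CPUArchState *env, target_ulong addr)
   {
       /* GETPC() must be called in the top level helper itself */
       uintptr_t ra = GETPC();

       return cpu_ldl_mmuidx_ra(env, addr, MMU_USER_IDX, ra);
   }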
``cpu_{ld,st}*_data_ra``
~~~~~~~~~~~~~~~~~~~~~~~~

These functions work like the ``cpu_{ld,st}_mmuidx_ra`` functions
except that the ``mmuidx`` parameter is taken from the current mode
of the guest CPU, as determined by ``cpu_mmu_index(env, false)``.

These are generally the preferred way to do accesses by guest
virtual address from helper functions, unless the access should
be performed with a context other than the default.

Function names follow the pattern:

load: ``cpu_ld{sign}{size}{end}_data_ra(env, ptr, ra)``

store: ``cpu_st{size}{end}_data_ra(env, ptr, val, ra)``

``sign``
 - (empty) : for 32 or 64 bit sizes
 - ``u`` : unsigned
 - ``s`` : signed

``size``
 - ``b`` : 8 bits
 - ``w`` : 16 bits
 - ``l`` : 32 bits
 - ``q`` : 64 bits

``end``
 - (empty) : for target endian, or 8 bit sizes
 - ``_be`` : big endian
 - ``_le`` : little endian

Regexes for git grep:
 - ``\<cpu_ld[us]\?[bwlq]\(_[bl]e\)\?_data_ra\>``
 - ``\<cpu_st[bwlq]\(_[bl]e\)\?_data_ra\>``
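A typical use is a helper that simply accesses guest memory in the CPU's
current mode, for instance (a sketch with a hypothetical helper name):

.. code-block:: c

   /* Store a 16-bit value at a guest virtual address using the CPU's
    * current mmu index, unwinding via GETPC() if the store faults.
    */
   void HELPER(store_status)(CPUArchState *env, target_ulong addr,
                             uint32_t val)
   {
       cpu_stw_data_ra(env, addr, val, GETPC());
   }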
``cpu_{ld,st}*_data``
~~~~~~~~~~~~~~~~~~~~~

These functions work like the ``cpu_{ld,st}_data_ra`` functions
except that the ``retaddr`` parameter is 0, and thus does not
unwind guest CPU state.

This means they must only be used from helper functions where the
translator has saved all necessary CPU state. These functions are
the right choice for calls made from hooks like the CPU ``do_interrupt``
hook or when you know for certain that the translator had to save all
the CPU state anyway.

Function names follow the pattern:

load: ``cpu_ld{sign}{size}{end}_data(env, ptr)``

store: ``cpu_st{size}{end}_data(env, ptr, val)``

``sign``
 - (empty) : for 32 or 64 bit sizes
 - ``u`` : unsigned
 - ``s`` : signed

``size``
 - ``b`` : 8 bits
 - ``w`` : 16 bits
 - ``l`` : 32 bits
 - ``q`` : 64 bits

``end``
 - (empty) : for target endian, or 8 bit sizes
 - ``_be`` : big endian
 - ``_le`` : little endian

Regexes for git grep:
 - ``\<cpu_ld[us]\?[bwlq]\(_[bl]e\)\?_data\>``
 - ``\<cpu_st[bwlq]\(_[bl]e\)\?_data\>``
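For instance, a ``do_interrupt`` hook might fetch an exception vector
from guest memory like this (a sketch: the CPU type, cast macro and
state fields are hypothetical):

.. code-block:: c

   static void mycpu_do_interrupt(CPUState *cs)
   {
       MyCPU *cpu = MYCPU(cs);          /* hypothetical target CPU type */
       CPUMyState *env = &cpu->env;

       /* retaddr is implicitly 0: no guest CPU state unwinding is done */
       env->pc = cpu_ldl_data(env,
                              env->vector_base + 4 * env->exception_index);
   }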
``cpu_ld*_code``
~~~~~~~~~~~~~~~~

These functions perform a read for instruction execution. The ``mmuidx``
parameter is taken from the current mode of the guest CPU, as determined
by ``cpu_mmu_index(env, true)``. The ``retaddr`` parameter is 0, and
thus does not unwind guest CPU state, because CPU state is always
synchronized while translating instructions. Any guest CPU exception
that is raised will indicate an instruction execution fault rather than
a data access fault.

In general these functions should not be used directly during translation.
There are wrapper functions (the ``translator_ld*`` functions described
below) which should be used instead; they also take care of any tracing
plugins.

Function names follow the pattern:

load: ``cpu_ld{sign}{size}_code(env, ptr)``

``sign``
 - (empty) : for 32 or 64 bit sizes
 - ``u`` : unsigned
 - ``s`` : signed

``size``
 - ``b`` : 8 bits
 - ``w`` : 16 bits
 - ``l`` : 32 bits
 - ``q`` : 64 bits

Regexes for git grep:
 - ``\<cpu_ld[us]\?[bwlq]_code\>``
``translator_ld*``
~~~~~~~~~~~~~~~~~~

These functions are a wrapper for ``cpu_ld*_code`` which also perform
any actions required by any tracing plugins. They are only to be
called during the translator callback ``translate_insn``.

There is a set of functions ending in ``_swap`` which, if the parameter
is true, returns the value in the endianness that is the reverse of
the guest native endianness, as determined by ``TARGET_WORDS_BIGENDIAN``.

Function names follow the pattern:

load: ``translator_ld{sign}{size}(env, ptr)``

swap: ``translator_ld{sign}{size}_swap(env, ptr, swap)``

``sign``
 - (empty) : for 32 or 64 bit sizes
 - ``u`` : unsigned
 - ``s`` : signed

``size``
 - ``b`` : 8 bits
 - ``w`` : 16 bits
 - ``l`` : 32 bits
 - ``q`` : 64 bits

Regexes for git grep:
 - ``\<translator_ld[us]\?[bwlq]\(_swap\)\?\>``
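A translator's ``translate_insn`` callback might fetch the next
instruction word like this (a sketch: the ``DisasContext`` layout and
the surrounding function are assumptions; only ``translator_lduw()`` is
the documented API):

.. code-block:: c

   static uint16_t fetch_insn16(CPUArchState *env, DisasContext *dc)
   {
       /* reads the instruction and also notifies any tracing plugins */
       uint16_t insn = translator_lduw(env, dc->base.pc_next);

       dc->base.pc_next += 2;
       return insn;
   }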
``helper_*_{ld,st}*_mmu``
~~~~~~~~~~~~~~~~~~~~~~~~~

These functions are intended primarily to be called by the code
generated by the TCG backend. They may also be called by target
CPU helper function code. Like the ``cpu_{ld,st}_mmuidx_ra`` functions
they perform accesses by guest virtual address, with a given ``mmuidx``.

These functions specify an ``opindex`` parameter which encodes
(among other things) the mmu index to use for the access. This parameter
should be created by calling ``make_memop_idx()``.

The ``retaddr`` parameter should be the result of ``GETPC()`` called directly
from the top level ``HELPER(foo)`` function (or 0 if no guest CPU state
unwinding is required).

**TODO** The names of these functions are a bit odd for historical
reasons because they were originally expected to be called only from
within generated code. We should rename them to bring them more in
line with the other memory access functions. The explicit endianness
is the only feature they have beyond ``*_mmuidx_ra``.

load: ``helper_{endian}_ld{sign}{size}_mmu(env, addr, opindex, retaddr)``

store: ``helper_{endian}_st{size}_mmu(env, addr, val, opindex, retaddr)``

``sign``
 - (empty) : for 32 or 64 bit sizes
 - ``u`` : unsigned
 - ``s`` : signed

``size``
 - ``b`` : 8 bits
 - ``w`` : 16 bits
 - ``l`` : 32 bits
 - ``q`` : 64 bits

``endian``
 - ``le`` : little endian
 - ``be`` : big endian
 - ``ret`` : target endianness

Regexes for git grep:
 - ``\<helper_\(le\|be\|ret\)_ld[us]\?[bwlq]_mmu\>``
 - ``\<helper_\(le\|be\|ret\)_st[bwlq]_mmu\>``
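For example, a helper that wants an explicitly little-endian 64-bit load
with a caller-chosen mmu index could do the following. This is a sketch:
the helper name and the ``mmu_idx`` value are hypothetical, and the
``MO_LEQ`` memop constant is assumed to be the right spelling for your
QEMU version:

.. code-block:: c

   uint64_t HELPER(ldq_le_explicit)(CPUArchState *env, target_ulong addr,
                                    uint32_t mmu_idx)
   {
       /* make_memop_idx() packs the memory op and mmu index into opindex */
       return helper_le_ldq_mmu(env, addr,
                                make_memop_idx(MO_LEQ, mmu_idx), GETPC());
   }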
``address_space_*``
~~~~~~~~~~~~~~~~~~~

These functions are the primary ones to use when emulating CPU
or device memory accesses. They take an AddressSpace, which is the
way QEMU defines the view of memory that a device or CPU has.
(They generally correspond to being the "master" end of a hardware
bus or bus fabric.)

Each CPU has an AddressSpace. Some kinds of CPU have more than
one AddressSpace (for instance Arm guest CPUs have an AddressSpace
for the Secure world and one for NonSecure if they implement TrustZone).
Devices which can do DMA-type operations should generally have an
AddressSpace. There is also a "system address space" which typically
has all the devices and memory that all CPUs can see. (Some older
device models use the "system address space" rather than properly
modelling that they have an AddressSpace of their own.)

Functions are provided for doing byte-buffer reads and writes,
and also for doing one-data-item loads and stores.

In all cases the caller provides a MemTxAttrs to specify bus
transaction attributes, and can check whether the memory transaction
succeeded using a MemTxResult return code.

``address_space_read(address_space, addr, attrs, buf, len)``

``address_space_write(address_space, addr, attrs, buf, len)``

``address_space_rw(address_space, addr, attrs, buf, len, is_write)``

``address_space_ld{sign}{size}_{endian}(address_space, addr, attrs, txresult)``

``address_space_st{size}_{endian}(address_space, addr, val, attrs, txresult)``

``sign``
 - (empty) : for 32 or 64 bit sizes
 - ``u`` : unsigned

(No signed load operations are provided.)

``size``
 - ``b`` : 8 bits
 - ``w`` : 16 bits
 - ``l`` : 32 bits
 - ``q`` : 64 bits

``endian``
 - ``le`` : little endian
 - ``be`` : big endian

The ``_{endian}`` suffix is omitted for byte accesses.

Regexes for git grep:
 - ``\<address_space_\(read\|write\|rw\)\>``
 - ``\<address_space_ldu\?[bwql]\(_[lb]e\)\?\>``
 - ``\<address_space_st[bwql]\(_[lb]e\)\?\>``
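For example, a DMA-capable device model might read a 32-bit little-endian
descriptor word and check for a bus error (a sketch: the device structure
and its ``dma_as`` AddressSpace member are hypothetical):

.. code-block:: c

   static bool widget_read_desc_word(MyWidgetState *s, hwaddr addr,
                                     uint32_t *val)
   {
       MemTxResult res;

       *val = address_space_ldl_le(&s->dma_as, addr,
                                   MEMTXATTRS_UNSPECIFIED, &res);
       return res == MEMTX_OK;
   }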
``address_space_write_rom``
~~~~~~~~~~~~~~~~~~~~~~~~~~~

This function performs a write by physical address like
``address_space_write``, except that if the write is to a ROM then
the ROM contents will be modified, even though a write by the guest
CPU to the ROM would be ignored. This is used for non-guest writes
like writes from the gdb debug stub or initial loading of ROM contents.

Note that portions of the write which attempt to write data to a
device will be silently ignored -- only real RAM and ROM will
be written to.

Regexes for git grep:
 - ``address_space_write_rom``
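A board-code or loader-style use might look like this (a sketch: the
address space, base address and blob are assumed to come from the caller):

.. code-block:: c

   static void write_firmware_to_rom(AddressSpace *as, hwaddr rom_base,
                                     const void *blob, hwaddr blob_len)
   {
       MemTxResult res = address_space_write_rom(as, rom_base,
                                                 MEMTXATTRS_UNSPECIFIED,
                                                 blob, blob_len);
       if (res != MEMTX_OK) {
           /* part of the write hit a device or an unmapped area */
       }
   }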
``{ld,st}*_phys``
~~~~~~~~~~~~~~~~~

These are functions which are identical to
``address_space_{ld,st}*``, except that they always pass
``MEMTXATTRS_UNSPECIFIED`` for the transaction attributes, and ignore
whether the transaction succeeded or failed.

The fact that they ignore whether the transaction succeeded means
they should not be used in new code, unless you know for certain
that your code will only be used in a context where the CPU or
device doing the access has no way to report such an error.

load: ``ld{sign}{size}_{endian}_phys``

store: ``st{size}_{endian}_phys``

``sign``
 - (empty) : for 32 or 64 bit sizes
 - ``u`` : unsigned

(No signed load operations are provided.)

``size``
 - ``b`` : 8 bits
 - ``w`` : 16 bits
 - ``l`` : 32 bits
 - ``q`` : 64 bits

``endian``
 - ``le`` : little endian
 - ``be`` : big endian

The ``_{endian}_`` infix is omitted for byte accesses.

Regexes for git grep:
 - ``\<ldu\?[bwlq]\(_[bl]e\)\?_phys\>``
 - ``\<st[bwlq]\(_[bl]e\)\?_phys\>``
``cpu_physical_memory_*``
~~~~~~~~~~~~~~~~~~~~~~~~~

These are convenience functions which are identical to
``address_space_*`` but operate specifically on the system address space,
always pass a ``MEMTXATTRS_UNSPECIFIED`` set of memory attributes and
ignore whether the memory transaction succeeded or failed.
For new code they are better avoided:

* there is likely to be behaviour you need to model correctly for a
  failed read or write operation
* a device should usually perform operations on its own AddressSpace
  rather than using the system address space

``cpu_physical_memory_read``

``cpu_physical_memory_write``

``cpu_physical_memory_rw``

Regexes for git grep:
 - ``\<cpu_physical_memory_\(read\|write\|rw\)\>``
``cpu_memory_rw_debug``
~~~~~~~~~~~~~~~~~~~~~~~

Access CPU memory by virtual address for debug purposes.

This function is intended for use by the GDB stub and similar code.
It takes a virtual address, converts it to a physical address via
an MMU lookup using the current settings of the specified CPU,
and then performs the access (using ``address_space_rw`` for
reads or ``address_space_write_rom`` for writes).
This means that if the access is a write to a ROM then this
function will modify the contents (whereas a normal guest CPU access
would ignore the write attempt).

``cpu_memory_rw_debug``
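A gdbstub-style caller might wrap it like this (a sketch; the exact
parameter types of ``cpu_memory_rw_debug`` have varied a little between
QEMU versions):

.. code-block:: c

   static bool debug_read_guest(CPUState *cs, vaddr addr,
                                uint8_t *buf, size_t len)
   {
       /* the final argument selects write (non-zero) versus read (0) */
       return cpu_memory_rw_debug(cs, addr, buf, len, 0) == 0;
   }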
``dma_memory_*``
~~~~~~~~~~~~~~~~

These behave like ``address_space_*``, except that they perform a DMA
barrier operation first.

**TODO**: We should provide guidance on when you need the DMA
barrier operation and when it's OK to use ``address_space_*``, and
make sure our existing code is doing things correctly.

``dma_memory_read``

``dma_memory_write``

``dma_memory_rw``

Regexes for git grep:
 - ``\<dma_memory_\(read\|write\|rw\)\>``
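For example, a DMA-capable device might fetch a descriptor through this
API (a sketch: the device state and descriptor struct are hypothetical,
and the exact ``dma_memory_read()`` argument list differs between QEMU
versions):

.. code-block:: c

   static bool widget_fetch_desc(MyWidgetState *s, dma_addr_t addr,
                                 WidgetDesc *desc)
   {
       /* dma_memory_read() issues the DMA barrier before the access */
       return dma_memory_read(&s->dma_as, addr, desc, sizeof(*desc)) == 0;
   }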
``pci_dma_*`` and ``{ld,st}*_pci_dma``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

These functions are specifically for PCI device models which need to
perform accesses where the PCI device is a bus master. You pass them a
``PCIDevice *`` and they will do ``dma_memory_*`` operations on the
correct address space for that device.

``pci_dma_read``

``pci_dma_write``

``pci_dma_rw``

load: ``ld{sign}{size}_{endian}_pci_dma``

store: ``st{size}_{endian}_pci_dma``

``sign``
 - (empty) : for 32 or 64 bit sizes
 - ``u`` : unsigned

(No signed load operations are provided.)

``size``
 - ``b`` : 8 bits
 - ``w`` : 16 bits
 - ``l`` : 32 bits
 - ``q`` : 64 bits

``endian``
 - ``le`` : little endian
 - ``be`` : big endian

The ``_{endian}_`` infix is omitted for byte accesses.

Regexes for git grep:
 - ``\<pci_dma_\(read\|write\|rw\)\>``
 - ``\<ldu\?[bwlq]\(_[bl]e\)\?_pci_dma\>``
 - ``\<st[bwlq]\(_[bl]e\)\?_pci_dma\>``
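Putting these together, a PCI device model acting as bus master might do
something like the following (a sketch: ``PCI_WIDGET()`` and the
register/ring fields are hypothetical, and some argument lists have grown
extra parameters in newer QEMU versions):

.. code-block:: c

   static void widget_dma_kick(PCIDevice *pdev)
   {
       WidgetState *s = PCI_WIDGET(pdev);
       uint8_t ring[64];

       /* bulk read through the device's own bus-master address space */
       pci_dma_read(pdev, s->ring_base, ring, sizeof(ring));

       /* single 32-bit little-endian load of the head pointer */
       s->head = ldl_le_pci_dma(pdev, s->ring_base + 0x40);
   }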