Copyright (c) 2017 Linaro Limited
Written by Peter Maydell

Load and Store APIs
===================
QEMU internally has multiple families of functions for performing
loads and stores. This document attempts to enumerate them all
and indicate when to use them. It does not provide detailed
documentation of each API -- for that you should look at the
documentation comments in the relevant header files.
``{ld,st}*_p``
~~~~~~~~~~~~~~

These functions operate on a host pointer, and should be used
when you already have a pointer into host memory (corresponding
to guest RAM or a local buffer). They deal with doing accesses
with the desired endianness and with correctly handling
potentially unaligned pointer values.

Function names follow the pattern:

load: ``ld{sign}{size}_{endian}_p(ptr)``

store: ``st{size}_{endian}_p(ptr, val)``
``sign``
 - (empty) : for 32 or 64 bit sizes
 - ``s`` : signed
 - ``u`` : unsigned

``size``
 - ``b`` : 8 bits
 - ``w`` : 16 bits
 - ``l`` : 32 bits
 - ``q`` : 64 bits

``endian``
 - ``he`` : host endian
 - ``be`` : big endian
 - ``le`` : little endian
The ``_{endian}`` infix is omitted for target-endian accesses.

The target endian accessors are only available to source
files which are built per-target.
There are also functions which take the size as an argument:

load: ``ldn{endian}_p(ptr, sz)``

which performs an unsigned load of ``sz`` bytes from ``ptr``
as an ``{endian}`` order value and returns it in a uint64_t.

store: ``stn{endian}_p(ptr, sz, val)``

which stores ``val`` to ``ptr`` as an ``{endian}`` order value
of size ``sz`` bytes.
Regexes for git grep:
 - ``\<ld[us]\?[bwlq]\(_[hbl]e\)\?_p\>``
 - ``\<st[bwlq]\(_[hbl]e\)\?_p\>``
 - ``\<ldn_\([hbl]e\)\?_p\>``
 - ``\<stn_\([hbl]e\)\?_p\>``
``cpu_{ld,st}*_mmuidx_ra``
~~~~~~~~~~~~~~~~~~~~~~~~~~
These functions operate on a guest virtual address plus a context,
known as a "mmu index" or ``mmuidx``, which controls how that virtual
address is translated. The meaning of the indexes is target specific,
but specifying a particular index might be necessary if, for instance,
the helper requires an "always as non-privileged" access rather than
the default access for the current state of the guest CPU.
These functions may cause a guest CPU exception to be taken
(e.g. for an alignment fault or MMU fault) which will result in
guest CPU state being updated and control longjmp'ing out of the
function call. They should therefore only be used in code that is
implementing emulation of the guest CPU.
The ``retaddr`` parameter is used to control unwinding of the
guest CPU state in case of a guest CPU exception. This is passed
to ``cpu_restore_state()``. Therefore the value should either be 0,
to indicate that the guest CPU state is already synchronized, or
the result of ``GETPC()`` from the top level ``HELPER(foo)``
function, which is a return address into the generated code [#gpc]_.
.. [#gpc] Note that ``GETPC()`` should be used with great care: calling
          it in other functions that are *not* the top level
          ``HELPER(foo)`` will cause unexpected behavior. Instead, the
          value of ``GETPC()`` should be read from the helper and passed
          if needed to the functions that the helper calls.
Function names follow the pattern:

load: ``cpu_ld{sign}{size}{end}_mmuidx_ra(env, ptr, mmuidx, retaddr)``

store: ``cpu_st{size}{end}_mmuidx_ra(env, ptr, val, mmuidx, retaddr)``
``sign``
 - (empty) : for 32 or 64 bit sizes
 - ``s`` : signed
 - ``u`` : unsigned

``size``
 - ``b`` : 8 bits
 - ``w`` : 16 bits
 - ``l`` : 32 bits
 - ``q`` : 64 bits

``end``
 - (empty) : for target endian, or 8 bit sizes
 - ``_be`` : big endian
 - ``_le`` : little endian
Regexes for git grep:
 - ``\<cpu_ld[us]\?[bwlq](_[bl]e)\?_mmuidx_ra\>``
 - ``\<cpu_st[bwlq](_[bl]e)\?_mmuidx_ra\>``
``cpu_{ld,st}*_data_ra``
~~~~~~~~~~~~~~~~~~~~~~~~
These functions work like the ``cpu_{ld,st}_mmuidx_ra`` functions
except that the ``mmuidx`` parameter is taken from the current mode
of the guest CPU, as determined by ``cpu_mmu_index(env, false)``.

These are generally the preferred way to do accesses by guest
virtual address from helper functions, unless the access should
be performed with a context other than the default.
Function names follow the pattern:

load: ``cpu_ld{sign}{size}{end}_data_ra(env, ptr, ra)``

store: ``cpu_st{size}{end}_data_ra(env, ptr, val, ra)``
``sign``
 - (empty) : for 32 or 64 bit sizes
 - ``s`` : signed
 - ``u`` : unsigned

``size``
 - ``b`` : 8 bits
 - ``w`` : 16 bits
 - ``l`` : 32 bits
 - ``q`` : 64 bits

``end``
 - (empty) : for target endian, or 8 bit sizes
 - ``_be`` : big endian
 - ``_le`` : little endian
Regexes for git grep:
 - ``\<cpu_ld[us]\?[bwlq](_[bl]e)\?_data_ra\>``
 - ``\<cpu_st[bwlq](_[bl]e)\?_data_ra\>``
``cpu_{ld,st}*_data``
~~~~~~~~~~~~~~~~~~~~~
These functions work like the ``cpu_{ld,st}_data_ra`` functions
except that the ``retaddr`` parameter is 0, and thus they do not
unwind guest CPU state.

This means they must only be used from helper functions where the
translator has saved all necessary CPU state. These functions are
the right choice for calls made from hooks like the CPU ``do_interrupt``
hook or when you know for certain that the translator had to save all
the CPU state anyway.
Function names follow the pattern:

load: ``cpu_ld{sign}{size}{end}_data(env, ptr)``

store: ``cpu_st{size}{end}_data(env, ptr, val)``
``sign``
 - (empty) : for 32 or 64 bit sizes
 - ``s`` : signed
 - ``u`` : unsigned

``size``
 - ``b`` : 8 bits
 - ``w`` : 16 bits
 - ``l`` : 32 bits
 - ``q`` : 64 bits

``end``
 - (empty) : for target endian, or 8 bit sizes
 - ``_be`` : big endian
 - ``_le`` : little endian
Regexes for git grep:
 - ``\<cpu_ld[us]\?[bwlq](_[bl]e)\?_data\>``
 - ``\<cpu_st[bwlq](_[bl]e)\?_data\>``
``cpu_ld*_code``
~~~~~~~~~~~~~~~~

These functions perform a read for instruction execution. The ``mmuidx``
parameter is taken from the current mode of the guest CPU, as determined
by ``cpu_mmu_index(env, true)``. The ``retaddr`` parameter is 0, and
thus does not unwind guest CPU state, because CPU state is always
synchronized while translating instructions. Any guest CPU exception
that is raised will indicate an instruction execution fault rather than
a data access fault.

In general these functions should not be used directly during translation.
There are wrapper functions that should be used instead, which also take
care of any actions required by tracing plugins.
Function names follow the pattern:

load: ``cpu_ld{sign}{size}_code(env, ptr)``

``sign``
 - (empty) : for 32 or 64 bit sizes
 - ``s`` : signed
 - ``u`` : unsigned

``size``
 - ``b`` : 8 bits
 - ``w`` : 16 bits
 - ``l`` : 32 bits
 - ``q`` : 64 bits
Regexes for git grep:
 - ``\<cpu_ld[us]\?[bwlq]_code\>``
``translator_ld*``
~~~~~~~~~~~~~~~~~~

These functions are a wrapper for ``cpu_ld*_code`` which also perform
any actions required by any tracing plugins. They are only to be
called during the translator callback ``translate_insn``.
There is a set of functions ending in ``_swap`` which, if the parameter
is true, returns the value in the endianness that is the reverse of
the guest native endianness, as determined by ``TARGET_WORDS_BIGENDIAN``.
Function names follow the pattern:

load: ``translator_ld{sign}{size}(env, ptr)``

swap: ``translator_ld{sign}{size}_swap(env, ptr, swap)``
``sign``
 - (empty) : for 32 or 64 bit sizes
 - ``s`` : signed
 - ``u`` : unsigned

``size``
 - ``b`` : 8 bits
 - ``w`` : 16 bits
 - ``l`` : 32 bits
 - ``q`` : 64 bits
Regexes for git grep:
 - ``\<translator_ld[us]\?[bwlq]\(_swap\)\?\>``
``helper_*_{ld,st}*_mmu``
~~~~~~~~~~~~~~~~~~~~~~~~~
These functions are intended primarily to be called by the code
generated by the TCG backend. They may also be called by target
CPU helper function code. Like the ``cpu_{ld,st}_mmuidx_ra`` functions
they perform accesses by guest virtual address, with a given ``mmuidx``.

These functions specify an ``opindex`` parameter which encodes
(among other things) the mmu index to use for the access. This parameter
should be created by calling ``make_memop_idx()``.

The ``retaddr`` parameter should be the result of ``GETPC()`` called
directly from the top level ``HELPER(foo)`` function (or 0 if no guest
CPU state unwinding is required).
**TODO** The names of these functions are a bit odd for historical
reasons because they were originally expected to be called only from
within generated code. We should rename them to bring them more in
line with the other memory access functions. The explicit endianness
is the only feature they have beyond ``*_mmuidx_ra``.

load: ``helper_{endian}_ld{sign}{size}_mmu(env, addr, opindex, retaddr)``

store: ``helper_{endian}_st{size}_mmu(env, addr, val, opindex, retaddr)``
``sign``
 - (empty) : for 32 or 64 bit sizes
 - ``s`` : signed
 - ``u`` : unsigned

``size``
 - ``b`` : 8 bits
 - ``w`` : 16 bits
 - ``l`` : 32 bits
 - ``q`` : 64 bits

``endian``
 - ``le`` : little endian
 - ``be`` : big endian
 - ``ret`` : target endianness
Regexes for git grep:
 - ``\<helper_\(le\|be\|ret\)_ld[us]\?[bwlq]_mmu\>``
 - ``\<helper_\(le\|be\|ret\)_st[bwlq]_mmu\>``
``address_space_*``
~~~~~~~~~~~~~~~~~~~

These functions are the primary ones to use when emulating CPU
or device memory accesses. They take an AddressSpace, which is the
way QEMU defines the view of memory that a device or CPU has.
(They generally correspond to being the "master" end of a hardware bus
or bus fabric.)

Each CPU has an AddressSpace. Some kinds of CPU have more than
one AddressSpace (for instance Arm guest CPUs have an AddressSpace
for the Secure world and one for NonSecure if they implement TrustZone).
Devices which can do DMA-type operations should generally have an
AddressSpace. There is also a "system address space" which typically
has all the devices and memory that all CPUs can see. (Some older
device models use the "system address space" rather than properly
modelling that they have an AddressSpace of their own.)
Functions are provided for doing byte-buffer reads and writes,
and also for doing one-data-item loads and stores.

In all cases the caller provides a MemTxAttrs to specify bus
transaction attributes, and can check whether the memory transaction
succeeded using a MemTxResult return code.
``address_space_read(address_space, addr, attrs, buf, len)``

``address_space_write(address_space, addr, attrs, buf, len)``

``address_space_rw(address_space, addr, attrs, buf, len, is_write)``

``address_space_ld{sign}{size}_{endian}(address_space, addr, attrs, txresult)``

``address_space_st{size}_{endian}(address_space, addr, val, attrs, txresult)``
``sign``
 - (empty) : for 32 or 64 bit sizes
 - ``u`` : unsigned

(No signed load operations are provided.)

``size``
 - ``b`` : 8 bits
 - ``w`` : 16 bits
 - ``l`` : 32 bits
 - ``q`` : 64 bits

``endian``
 - ``le`` : little endian
 - ``be`` : big endian

The ``_{endian}`` suffix is omitted for byte accesses.
Regexes for git grep:
 - ``\<address_space_\(read\|write\|rw\)\>``
 - ``\<address_space_ldu\?[bwql]\(_[lb]e\)\?\>``
 - ``\<address_space_st[bwql]\(_[lb]e\)\?\>``
``address_space_write_rom``
~~~~~~~~~~~~~~~~~~~~~~~~~~~
This function performs a write by physical address like
``address_space_write``, except that if the write is to a ROM then
the ROM contents will be modified, even though a write by the guest
CPU to the ROM would be ignored. This is used for non-guest writes
like writes from the gdb debug stub or initial loading of ROM contents.

Note that portions of the write which attempt to write data to a
device will be silently ignored -- only real RAM and ROM will
be written to.

Regexes for git grep:
 - ``address_space_write_rom``
``{ld,st}*_phys``
~~~~~~~~~~~~~~~~~

These are functions which are identical to
``address_space_{ld,st}*``, except that they always pass
``MEMTXATTRS_UNSPECIFIED`` for the transaction attributes, and ignore
whether the transaction succeeded or failed.

The fact that they ignore whether the transaction succeeded means
they should not be used in new code, unless you know for certain
that your code will only be used in a context where the CPU or
device doing the access has no way to report such an error.
load: ``ld{sign}{size}_{endian}_phys``

store: ``st{size}_{endian}_phys``
``sign``
 - (empty) : for 32 or 64 bit sizes
 - ``u`` : unsigned

(No signed load operations are provided.)

``size``
 - ``b`` : 8 bits
 - ``w`` : 16 bits
 - ``l`` : 32 bits
 - ``q`` : 64 bits

``endian``
 - ``le`` : little endian
 - ``be`` : big endian

The ``_{endian}_`` infix is omitted for byte accesses.
Regexes for git grep:
 - ``\<ldu\?[bwlq]\(_[bl]e\)\?_phys\>``
 - ``\<st[bwlq]\(_[bl]e\)\?_phys\>``
``cpu_physical_memory_*``
~~~~~~~~~~~~~~~~~~~~~~~~~
These are convenience functions which are identical to
``address_space_*`` but operate specifically on the system address space,
always pass a ``MEMTXATTRS_UNSPECIFIED`` set of memory attributes and
ignore whether the memory transaction succeeded or failed.
For new code they are better avoided:

* there is likely to be behaviour you need to model correctly for a
  failed read or write operation
* a device should usually perform operations on its own AddressSpace
  rather than using the system address space
``cpu_physical_memory_read``

``cpu_physical_memory_write``

``cpu_physical_memory_rw``

Regexes for git grep:
 - ``\<cpu_physical_memory_\(read\|write\|rw\)\>``
``cpu_memory_rw_debug``
~~~~~~~~~~~~~~~~~~~~~~~
Access CPU memory by virtual address for debug purposes.

This function is intended for use by the GDB stub and similar code.
It takes a virtual address, converts it to a physical address via
an MMU lookup using the current settings of the specified CPU,
and then performs the access (using ``address_space_rw`` for
reads or ``address_space_write_rom`` for writes).
This means that if the access is a write to a ROM then this
function will modify the contents (whereas a normal guest CPU access
would ignore the write attempt).

``cpu_memory_rw_debug``
``dma_memory_*``
~~~~~~~~~~~~~~~~

These behave like ``address_space_*``, except that they perform a DMA
barrier operation first.

**TODO**: We should provide guidance on when you need the DMA
barrier operation and when it's OK to use ``address_space_*``, and
make sure our existing code is doing things correctly.

``dma_memory_read``

``dma_memory_write``

``dma_memory_rw``

Regexes for git grep:
 - ``\<dma_memory_\(read\|write\|rw\)\>``
 - ``\<ldu\?[bwlq]\(_[bl]e\)\?_dma\>``
 - ``\<st[bwlq]\(_[bl]e\)\?_dma\>``
``pci_dma_*`` and ``{ld,st}*_pci_dma``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
These functions are specifically for PCI device models which need to
perform accesses where the PCI device is a bus master. You pass them a
``PCIDevice *`` and they will do ``dma_memory_*`` operations on the
correct address space for that device.

``pci_dma_read``

``pci_dma_write``

``pci_dma_rw``
load: ``ld{sign}{size}_{endian}_pci_dma``

store: ``st{size}_{endian}_pci_dma``
``sign``
 - (empty) : for 32 or 64 bit sizes
 - ``u`` : unsigned

(No signed load operations are provided.)

``size``
 - ``b`` : 8 bits
 - ``w`` : 16 bits
 - ``l`` : 32 bits
 - ``q`` : 64 bits

``endian``
 - ``le`` : little endian
 - ``be`` : big endian

The ``_{endian}_`` infix is omitted for byte accesses.
Regexes for git grep:
 - ``\<pci_dma_\(read\|write\|rw\)\>``
 - ``\<ldu\?[bwlq]\(_[bl]e\)\?_pci_dma\>``
 - ``\<st[bwlq]\(_[bl]e\)\?_pci_dma\>``