Copyright (c) 2017 Linaro Limited
Written by Peter Maydell
QEMU internally has multiple families of functions for performing
loads and stores. This document attempts to enumerate them all
and indicate when to use them. It does not provide detailed
documentation of each API -- for that you should look at the
documentation comments in the relevant header files.
``{ld,st}*_p``
~~~~~~~~~~~~~~

These functions operate on a host pointer, and should be used
when you already have a pointer into host memory (corresponding
to guest RAM or a local buffer). They deal with doing accesses
with the desired endianness and with correctly handling
potentially unaligned pointer values.
Function names follow the pattern:

load: ``ld{type}{sign}{size}_{endian}_p(ptr)``

store: ``st{type}{size}_{endian}_p(ptr, val)``
``type``
 - (empty) : integer access
 - ``f`` : float access

``sign``
 - (empty) : for 32 or 64 bit sizes (including floats and doubles)
 - ``u`` : unsigned
 - ``s`` : signed

``size``
 - ``b`` : 8 bits
 - ``w`` : 16 bits
 - ``l`` : 32 bits
 - ``q`` : 64 bits

``endian``
 - ``he`` : host endian
 - ``be`` : big endian
 - ``le`` : little endian
The ``_{endian}`` infix is omitted for target-endian accesses.

The target endian accessors are only available to source
files which are built per-target.
Regexes for git grep:
 - ``\<ldf\?[us]\?[bwlq]\(_[hbl]e\)\?_p\>``
 - ``\<stf\?[bwlq]\(_[hbl]e\)\?_p\>``
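To illustrate the semantics, here is a minimal sketch of what a little-endian 32-bit variant of these accessors has to do. The ``toy_`` names are invented for illustration, not QEMU's implementation; byte-wise access handles both host endianness and unaligned pointers:

```c
#include <stdint.h>

/* Toy equivalents of a little-endian 32-bit host-pointer load/store:
 * assemble the value byte by byte, so the access is correct regardless
 * of host endianness and needs no pointer alignment. */
static inline uint32_t toy_ldl_le_p(const void *ptr)
{
    const uint8_t *p = ptr;
    return (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
           ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}

static inline void toy_stl_le_p(void *ptr, uint32_t val)
{
    uint8_t *p = ptr;
    p[0] = val;       /* least significant byte first */
    p[1] = val >> 8;
    p[2] = val >> 16;
    p[3] = val >> 24;
}
```

Real implementations may use faster host-specific code, but the observable behaviour is the same.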
``cpu_{ld,st}_*``
~~~~~~~~~~~~~~~~~

These functions operate on a guest virtual address. Be aware
that these functions may cause a guest CPU exception to be
taken (e.g. for an alignment fault or MMU fault) which will
result in guest CPU state being updated and control longjumping
out of the function call. They should therefore only be used
in code that is implementing emulation of the target CPU.
These functions may throw an exception (longjmp() back out
to the top level TCG loop). This means they must only be used
from helper functions where the translator has saved all
necessary CPU state before generating the helper function call.
It's usually better to use the ``_ra`` variants described below
from helper functions, but these functions are the right choice
for calls made from hooks like the CPU do_interrupt hook or
when you know for certain that the translator had to save all
the CPU state that ``cpu_restore_state()`` would restore anyway.
Function names follow the pattern:

load: ``cpu_ld{sign}{size}_{mmusuffix}(env, ptr)``

store: ``cpu_st{size}_{mmusuffix}(env, ptr, val)``

``sign``
 - (empty) : for 32 or 64 bit sizes
 - ``u`` : unsigned
 - ``s`` : signed

``size``
 - ``b`` : 8 bits
 - ``w`` : 16 bits
 - ``l`` : 32 bits
 - ``q`` : 64 bits

``mmusuffix`` is one of the generic suffixes ``data`` or ``code``, or
(for softmmu configs) a target-specific MMU mode suffix as defined
in the target's ``cpu.h``.
Regexes for git grep:
 - ``\<cpu_ld[us]\?[bwlq]_[a-zA-Z0-9]\+\>``
 - ``\<cpu_st[bwlq]_[a-zA-Z0-9]\+\>``
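The longjmp-style exception flow described above can be sketched with plain setjmp()/longjmp(). All names here are invented; QEMU's real mechanism is its own exit path back to the TCG loop. The point the sketch makes is that any state the helper wants to survive a fault must be written back *before* the access, because the access may never return:

```c
#include <setjmp.h>
#include <stdbool.h>

static jmp_buf toy_exit_jmp;   /* toy stand-in for the TCG loop's jmp_buf */
static int toy_saved_pc;       /* toy stand-in for saved CPU state */

/* Toy guest load: a negative address simulates an MMU fault. */
static int toy_cpu_ld(int addr)
{
    if (addr < 0) {
        longjmp(toy_exit_jmp, 1);  /* control never returns to the caller */
    }
    return addr * 2;               /* fake loaded value */
}

/* Returns true if the access faulted.  The "CPU state" (toy_saved_pc)
 * is saved before the access, so it is valid even on the fault path. */
static bool toy_run_helper(int addr, int pc, int *val)
{
    toy_saved_pc = pc;
    if (setjmp(toy_exit_jmp)) {
        return true;               /* landed here via the longjmp */
    }
    *val = toy_cpu_ld(addr);
    return false;
}
```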
``cpu_{ld,st}_*_ra``
~~~~~~~~~~~~~~~~~~~~

These functions work like the ``cpu_{ld,st}_*`` functions except
that they also take a ``retaddr`` argument. This extra argument
allows for correct unwinding of any exception that is taken,
and should generally be the result of GETPC() called directly
from the top level HELPER(foo) function (i.e. the return address
in the generated code).

These are generally the preferred way to do accesses by guest
virtual address from helper functions; see the documentation
of the non-``_ra`` variants for when those would be better.

Calling these functions with a ``retaddr`` argument of 0 is
equivalent to calling the non-``_ra`` version of the function.
Function names follow the pattern:

load: ``cpu_ld{sign}{size}_{mmusuffix}_ra(env, ptr, retaddr)``

store: ``cpu_st{size}_{mmusuffix}_ra(env, ptr, val, retaddr)``
Regexes for git grep:
 - ``\<cpu_ld[us]\?[bwlq]_[a-zA-Z0-9]\+_ra\>``
 - ``\<cpu_st[bwlq]_[a-zA-Z0-9]\+_ra\>``
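The "return address in the generated code" idea can be sketched with the compiler builtin that GETPC() is based on. The helper name below is invented; the sketch only shows that a non-inlined function can report the call site it was invoked from, which is what lets the exception path identify the faulting generated-code instruction:

```c
#include <stdint.h>

/* Toy version of the GETPC() idea: capture the return address of the
 * current call, i.e. the address in the *caller* just past the call
 * instruction.  Must not be inlined, or there is no call to look at;
 * likewise GETPC() must be used from the top-level helper, not from a
 * deeper callee. */
__attribute__((noinline))
static uintptr_t toy_get_retaddr(void)
{
    return (uintptr_t)__builtin_return_address(0);
}
```

This relies on the GCC/Clang ``__builtin_return_address`` extension; it is not portable ISO C.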
``helper_*_{ld,st}*mmu``
~~~~~~~~~~~~~~~~~~~~~~~~
These functions are intended primarily to be called by the code
generated by the TCG backend. They may also be called by target
CPU helper function code. Like the ``cpu_{ld,st}_*_ra`` functions
they perform accesses by guest virtual address; the difference is
that these functions allow you to specify an ``opindex`` parameter
which encodes (among other things) the mmu index to use for the
access. This is necessary if your helper needs to make an access
via a specific mmu index (for instance, an "always as non-privileged"
access) rather than using the default mmu index for the current state
of the CPU.
The ``opindex`` parameter should be created by calling ``make_memop_idx()``.

The ``retaddr`` parameter should be the result of GETPC() called directly
from the top level HELPER(foo) function (or 0 if no guest CPU state
unwinding is required).
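The idea behind ``make_memop_idx()`` is simply to pack the memory-operation descriptor and the mmu index into a single integer parameter. The field widths below are invented for illustration and the ``toy_`` names are not QEMU's; the real encoding is an internal detail of QEMU's headers and may differ:

```c
#include <stdint.h>

typedef uint32_t toy_memop_idx;

/* Pack an operation descriptor and an mmu index into one parameter.
 * Illustrative encoding: low 4 bits carry the mmu index. */
static inline toy_memop_idx toy_make_memop_idx(unsigned op, unsigned mmu_idx)
{
    return (op << 4) | (mmu_idx & 0xf);
}

static inline unsigned toy_get_mmu_idx(toy_memop_idx oi)
{
    return oi & 0xf;
}

static inline unsigned toy_get_memop(toy_memop_idx oi)
{
    return oi >> 4;
}
```

Packing both values into one scalar keeps the generated-code calling convention simple: one extra register instead of two.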
**TODO** The names of these functions are a bit odd for historical
reasons because they were originally expected to be called only from
within generated code. We should rename them to bring them
more in line with the other memory access functions.
load: ``helper_{endian}_ld{sign}{size}_mmu(env, addr, opindex, retaddr)``

load (code): ``helper_{endian}_ld{sign}{size}_cmmu(env, addr, opindex, retaddr)``

store: ``helper_{endian}_st{size}_mmu(env, addr, val, opindex, retaddr)``
``sign``
 - (empty) : for 32 or 64 bit sizes
 - ``u`` : unsigned
 - ``s`` : signed

``size``
 - ``b`` : 8 bits
 - ``w`` : 16 bits
 - ``l`` : 32 bits
 - ``q`` : 64 bits

``endian``
 - ``le`` : little endian
 - ``be`` : big endian
 - ``ret`` : target endianness
Regexes for git grep:
 - ``\<helper_\(le\|be\|ret\)_ld[us]\?[bwlq]_c\?mmu\>``
 - ``\<helper_\(le\|be\|ret\)_st[bwlq]_mmu\>``
``address_space_*``
~~~~~~~~~~~~~~~~~~~

These functions are the primary ones to use when emulating CPU
or device memory accesses. They take an AddressSpace, which is the
way QEMU defines the view of memory that a device or CPU has.
(They generally correspond to being the "master" end of a hardware bus.)

Each CPU has an AddressSpace. Some kinds of CPU have more than
one AddressSpace (for instance ARM guest CPUs have an AddressSpace
for the Secure world and one for NonSecure if they implement TrustZone).
Devices which can do DMA-type operations should generally have an
AddressSpace. There is also a "system address space" which typically
has all the devices and memory that all CPUs can see. (Some older
device models use the "system address space" rather than properly
modelling that they have an AddressSpace of their own.)
Functions are provided for doing byte-buffer reads and writes,
and also for doing one-data-item loads and stores.

In all cases the caller provides a MemTxAttrs to specify bus
transaction attributes, and can check whether the memory transaction
succeeded using a MemTxResult return code.
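The attrs/result calling pattern can be sketched with a toy model. Everything here is invented for illustration (the types, the flat RAM array, the ``toy_`` names); QEMU's real API is declared in its memory headers. What the sketch shows is the shape of the contract: pass attributes in, check the result code:

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>
#include <string.h>

/* Toy stand-ins for MemTxResult and MemTxAttrs. */
typedef enum { TOY_MEMTX_OK = 0, TOY_MEMTX_DECODE_ERROR } ToyMemTxResult;
typedef struct { bool secure; } ToyMemTxAttrs;

static uint8_t toy_ram[0x100];   /* the whole "address space" */

/* Toy address_space_read(): copy out of backing RAM, or report that
 * nothing decodes the address. */
static ToyMemTxResult toy_as_read(uint64_t addr, ToyMemTxAttrs attrs,
                                  void *buf, size_t len)
{
    (void)attrs;   /* a real bus would honour these attributes */
    if (len > sizeof(toy_ram) || addr > sizeof(toy_ram) - len) {
        return TOY_MEMTX_DECODE_ERROR;
    }
    memcpy(buf, toy_ram + addr, len);
    return TOY_MEMTX_OK;
}
```

A device model using the real API would check the returned result and raise whatever bus-error behaviour its hardware defines.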
``address_space_read(address_space, addr, attrs, buf, len)``

``address_space_write(address_space, addr, attrs, buf, len)``

``address_space_rw(address_space, addr, attrs, buf, len, is_write)``

``address_space_ld{sign}{size}_{endian}(address_space, addr, attrs, txresult)``

``address_space_st{size}_{endian}(address_space, addr, val, attrs, txresult)``

``sign``
 - (empty) : for 32 or 64 bit sizes
 - ``u`` : unsigned

(No signed load operations are provided.)

``size``
 - ``b`` : 8 bits
 - ``w`` : 16 bits
 - ``l`` : 32 bits
 - ``q`` : 64 bits

``endian``
 - ``le`` : little endian
 - ``be`` : big endian

The ``_{endian}`` suffix is omitted for byte accesses.
Regexes for git grep:
 - ``\<address_space_\(read\|write\|rw\)\>``
 - ``\<address_space_ldu\?[bwql]\(_[lb]e\)\?\>``
 - ``\<address_space_st[bwql]\(_[lb]e\)\?\>``
``{ld,st}*_phys``
~~~~~~~~~~~~~~~~~

These are functions which are identical to
``address_space_{ld,st}*``, except that they always pass
``MEMTXATTRS_UNSPECIFIED`` for the transaction attributes, and ignore
whether the transaction succeeded or failed.

The fact that they ignore whether the transaction succeeded means
they should not be used in new code, unless you know for certain
that your code will only be used in a context where the CPU or
device doing the access has no way to report such an error.
load: ``ld{sign}{size}_{endian}_phys``

store: ``st{size}_{endian}_phys``

``sign``
 - (empty) : for 32 or 64 bit sizes
 - ``u`` : unsigned

(No signed load operations are provided.)

``size``
 - ``b`` : 8 bits
 - ``w`` : 16 bits
 - ``l`` : 32 bits
 - ``q`` : 64 bits

``endian``
 - ``le`` : little endian
 - ``be`` : big endian

The ``_{endian}_`` infix is omitted for byte accesses.
Regexes for git grep:
 - ``\<ldu\?[bwlq]\(_[bl]e\)\?_phys\>``
 - ``\<st[bwlq]\(_[bl]e\)\?_phys\>``
``cpu_physical_memory_*``
~~~~~~~~~~~~~~~~~~~~~~~~~
These are convenience functions which are identical to
``address_space_*`` but operate specifically on the system address space,
always pass a ``MEMTXATTRS_UNSPECIFIED`` set of memory attributes and
ignore whether the memory transaction succeeded or failed.
For new code they are better avoided:

* there is likely to be behaviour you need to model correctly for a
  failed read or write operation
* a device should usually perform operations on its own AddressSpace
  rather than using the system address space

``cpu_physical_memory_read``

``cpu_physical_memory_write``

``cpu_physical_memory_rw``

Regexes for git grep:
 - ``\<cpu_physical_memory_\(read\|write\|rw\)\>``
``cpu_physical_memory_write_rom``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This function performs a write by physical address like
``address_space_write``, except that if the write is to a ROM then
the ROM contents will be modified, even though a write by the guest
CPU to the ROM would be ignored.

Note that unlike ``cpu_physical_memory_write()`` this function takes
an AddressSpace argument, but unlike ``address_space_write()`` this
function does not take a ``MemTxAttrs`` or return a ``MemTxResult``.

**TODO**: we should probably clean up this inconsistency and
turn the function into ``address_space_write_rom`` with an API
matching ``address_space_write``.

``cpu_physical_memory_write_rom``
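The ROM special case described above can be sketched as a toy model (all names and the layout are invented for illustration): a guest-style store to ROM is silently dropped, while the write_rom-style path patches the contents, as a firmware loader would:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

static uint8_t toy_rom[16];   /* toy ROM contents */

/* Guest-style store to ROM: the write is simply ignored. */
static void toy_guest_write(uint64_t addr, const void *buf, size_t len)
{
    (void)addr; (void)buf; (void)len;
}

/* write_rom-style store: bypasses the read-only behaviour and patches
 * the ROM contents directly (the loader path). */
static void toy_write_rom(uint64_t addr, const void *buf, size_t len)
{
    if (len <= sizeof(toy_rom) && addr <= sizeof(toy_rom) - len) {
        memcpy(toy_rom + addr, buf, len);
    }
}
```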
``cpu_memory_rw_debug``
~~~~~~~~~~~~~~~~~~~~~~~
Access CPU memory by virtual address for debug purposes.

This function is intended for use by the GDB stub and similar code.
It takes a virtual address, converts it to a physical address via
an MMU lookup using the current settings of the specified CPU,
and then performs the access (using ``address_space_rw`` for
reads or ``cpu_physical_memory_write_rom`` for writes).
This means that if the access is a write to a ROM then this
function will modify the contents (whereas a normal guest CPU access
would ignore the write attempt).

``cpu_memory_rw_debug``
``dma_memory_*``
~~~~~~~~~~~~~~~~

These behave like ``address_space_*``, except that they perform a DMA
barrier operation first.

**TODO**: We should provide guidance on when you need the DMA
barrier operation and when it's OK to use ``address_space_*``, and
make sure our existing code is doing things correctly.

Regexes for git grep:
 - ``\<dma_memory_\(read\|write\|rw\)\>``
``pci_dma_*`` and ``{ld,st}*_pci_dma``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
These functions are specifically for PCI device models which need to
perform accesses where the PCI device is a bus master. You pass them a
``PCIDevice *`` and they will do ``dma_memory_*`` operations on the
correct address space for that device.

load: ``ld{sign}{size}_{endian}_pci_dma``

store: ``st{size}_{endian}_pci_dma``

``sign``
 - (empty) : for 32 or 64 bit sizes
 - ``u`` : unsigned

(No signed load operations are provided.)

``size``
 - ``b`` : 8 bits
 - ``w`` : 16 bits
 - ``l`` : 32 bits
 - ``q`` : 64 bits

``endian``
 - ``le`` : little endian
 - ``be`` : big endian

The ``_{endian}_`` infix is omitted for byte accesses.
Regexes for git grep:
 - ``\<pci_dma_\(read\|write\|rw\)\>``
 - ``\<ldu\?[bwlq]\(_[bl]e\)\?_pci_dma\>``
 - ``\<st[bwlq]\(_[bl]e\)\?_pci_dma\>``