David S. Miller <davem@redhat.com>

This document describes the cache/tlb flushing interfaces called
by the Linux VM subsystem.  It enumerates each interface,
describes its intended purpose, and states what side effects are
expected after the interface is invoked.

The side effects described below are stated for a uniprocessor
implementation, and what is to happen on that single processor.  The
SMP cases are a simple extension, in that you just extend the
definition such that the side effect for a particular interface occurs
on all processors in the system.  Don't let this scare you into
thinking SMP cache/tlb flushing must be inefficient; this is in
fact an area where many optimizations are possible.  For example,
if it can be proven that a user address space has never executed
on a cpu (see mm->cpu_vm_mask), one need not perform a flush
for this address space on that cpu.

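As an illustration only (not any particular port's code), here is a
minimal sketch of that optimization for a cross-call based SMP flush.
local_flush_tlb_mm() and cross_call_flush_mm() are hypothetical
helpers, and cpu_vm_mask is treated as a plain bitmask of cpus:

void smp_flush_tlb_mm(struct mm_struct *mm)
{
        unsigned long others;

        local_flush_tlb_mm(mm);         /* always flush our own TLB */

        /* Only interrupt cpus which have actually run 'mm'. */
        others = mm->cpu_vm_mask & ~(1UL << smp_processor_id());
        if (others)
                cross_call_flush_mm(mm, others);
}
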
First, the TLB flushing interfaces, since they are the simplest.  The
"TLB" is abstracted under Linux as something the cpu uses to cache
virtual-->physical address translations obtained from the software
page tables.  This means that if the software page tables change, it
is possible for stale translations to exist in this "TLB" cache.
Therefore when software page table changes occur, the kernel will
invoke one of the following flush methods _after_ the page table
change occurs:

1) void flush_tlb_all(void)

The most severe flush of all.  After this interface runs,
any previous page table modification whatsoever will be
visible to the cpu.

This is usually invoked when the kernel page tables are
changed, since such translations are "global" in nature.

2) void flush_tlb_mm(struct mm_struct *mm)

This interface flushes an entire user address space from
the TLB.  After running, this interface must make sure that
any previous page table modifications for the address space
'mm' will be visible to the cpu.  That is, after running,
there will be no entries in the TLB for 'mm'.

This interface is used to handle whole address space
page table operations such as what happens during
fork, exit, and exec.  One inexpensive way to implement
this flush is sketched below.

Platform developers note that generic code will always
invoke this interface without mm->page_table_lock held.

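As a hedged illustration (not any particular port's code), on a cpu
with address space identifiers the whole-mm flush can amount to handing
the mm a fresh context; get_new_mmu_context() and NO_CONTEXT are
hypothetical names, and mm->context is treated as a plain number:

void flush_tlb_mm(struct mm_struct *mm)
{
        if (mm == current->active_mm)
                get_new_mmu_context(mm);  /* old entries become unreachable */
        else
                mm->context = NO_CONTEXT; /* re-assigned at next mm switch */
}
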
3) void flush_tlb_range(struct vm_area_struct *vma,
                        unsigned long start, unsigned long end)

Here we are flushing a specific range of (user) virtual
address translations from the TLB.  After running, this
interface must make sure that any previous page table
modifications for the address space 'vma->vm_mm' in the range
'start' to 'end-1' will be visible to the cpu.  That is, after
running, there will be no entries in the TLB for 'vma->vm_mm' for
virtual addresses in the range 'start' to 'end-1'.

The "vma" is the backing store being used for the region.
Primarily, this is used for munmap() type operations.

The interface is provided in hopes that the port can find
a suitably efficient method for removing multiple page
sized translations from the TLB, instead of having the kernel
call flush_tlb_page (see below) for each entry which may be
modified.  A trivial fallback of this kind is sketched after
this entry.

Platform developers note that generic code will always
invoke this interface with mm->page_table_lock held.

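For a port with no ranged demap operation, a minimal (if unoptimized)
fallback is simply a loop over flush_tlb_page(); this sketch assumes
nothing beyond the interfaces described in this document:

void flush_tlb_range(struct vm_area_struct *vma,
                     unsigned long start, unsigned long end)
{
        unsigned long addr;

        /* One demap per page; a real port would try to batch these. */
        for (addr = start & PAGE_MASK; addr < end; addr += PAGE_SIZE)
                flush_tlb_page(vma, addr);
}
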
4) void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)

This time we need to remove the PAGE_SIZE sized translation
from the TLB.  The 'vma' is the backing structure used by
Linux to keep track of mmap'd regions for a process; the
address space is available via vma->vm_mm.  Also, one may
test (vma->vm_flags & VM_EXEC) to see if this region is
executable (and thus could be in the 'instruction TLB' in
split-tlb type setups).

After running, this interface must make sure that any previous
page table modification for address space 'vma->vm_mm' for
user virtual address 'addr' will be visible to the cpu.  That
is, after running, there will be no entries in the TLB for
'vma->vm_mm' for virtual address 'addr'.

This is used primarily during fault processing.  A sketch
using the VM_EXEC test follows this entry.

Platform developers note that generic code will always
invoke this interface with mm->page_table_lock held.

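As an illustration for a split-tlb cpu, a port might demap the one
translation from the D-TLB and touch the I-TLB only for executable
regions; dtlb_demap() and itlb_demap() are hypothetical primitives:

void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
{
        struct mm_struct *mm = vma->vm_mm;

        addr &= PAGE_MASK;
        dtlb_demap(mm, addr);           /* hypothetical D-TLB demap */
        if (vma->vm_flags & VM_EXEC)
                itlb_demap(mm, addr);   /* executable: I-TLB too */
}
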
5) void flush_tlb_pgtables(struct mm_struct *mm,
                           unsigned long start, unsigned long end)

The software page tables for address space 'mm' for virtual
addresses in the range 'start' to 'end-1' are being torn down.

Some platforms cache the lowest level of the software page tables
in a linear virtually mapped array, to make TLB miss processing
more efficient.  On such platforms, since the TLB is caching the
software page table structure, it needs to be flushed when parts
of the software page table tree are unlinked/freed.

Sparc64 is one example of a platform which does this.

Usually, when munmap()'ing an area of user virtual address
space, the kernel leaves the page table parts around and just
marks the individual pte's as invalid.  However, if very large
portions of the address space are unmapped, the kernel frees up
those portions of the software page tables to prevent potential
excessive kernel memory usage caused by erratic mmap/munmap
sequences.  It is at these times that flush_tlb_pgtables will
be invoked.

6) void update_mmu_cache(struct vm_area_struct *vma,
                         unsigned long address, pte_t pte)

At the end of every page fault, this routine is invoked to
tell the architecture specific code that a translation
described by "pte" now exists at virtual address "address"
for address space "vma->vm_mm", in the software page tables.

A port may use this information in any way it so chooses.
For example, it could use this event to pre-load TLB
translations for software managed TLB configurations.
The sparc64 port currently does this.

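For instance, a software-TLB port might pre-load the new translation
so the faulting access does not immediately miss again.  This is only
a sketch; tlb_preload() is a hypothetical arch primitive:

void update_mmu_cache(struct vm_area_struct *vma,
                      unsigned long address, pte_t pte)
{
        /* Stuff the entry into the software TLB so the retried
         * user access hits immediately. */
        if (pte_present(pte))
                tlb_preload(vma->vm_mm, address & PAGE_MASK, pte);
}
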
Next, we have the cache flushing interfaces.  In general, when Linux
is changing an existing virtual-->physical mapping to a new value,
the sequence will be in one of the following forms:

1) flush_cache_mm(mm);
   change_all_page_tables_of(mm);
   flush_tlb_mm(mm);

2) flush_cache_range(vma, start, end);
   change_range_of_page_tables(mm, start, end);
   flush_tlb_range(vma, start, end);

3) flush_cache_page(vma, addr);
   set_pte(pte_pointer, new_pte_val);
   flush_tlb_page(vma, addr);

The cache level flush will always be first, because this allows
us to properly handle systems whose caches are strict and require
a virtual-->physical translation to exist for a virtual address
when that virtual address is flushed from the cache.  The HyperSparc
cpu is one such cpu.

The cache flushing routines below need only deal with cache flushing
to the extent that it is necessary for a particular cpu.  Mostly,
these routines must be implemented for cpus which have virtually
indexed caches which must be flushed when virtual-->physical
translations are changed or removed.  So, for example, the physically
indexed, physically tagged caches of IA32 processors have no need to
implement these interfaces since the caches are fully synchronized
and have no dependency on translation information.

Here are the routines, one by one:

1) void flush_cache_mm(struct mm_struct *mm)

This interface flushes an entire user address space from
the caches.  That is, after running, there will be no cache
lines associated with 'mm'.

This interface is used to handle whole address space
page table operations such as what happens during
fork, exit, and exec.

2) void flush_cache_range(struct vm_area_struct *vma,
                          unsigned long start, unsigned long end)

Here we are flushing a specific range of (user) virtual
addresses from the cache.  After running, there will be no
entries in the cache for 'vma->vm_mm' for virtual addresses in
the range 'start' to 'end-1'.

The "vma" is the backing store being used for the region.
Primarily, this is used for munmap() type operations.

The interface is provided in hopes that the port can find
a suitably efficient method for removing multiple page
sized regions from the cache, instead of having the kernel
call flush_cache_page (see below) for each entry which may be
modified.

3) void flush_cache_page(struct vm_area_struct *vma, unsigned long addr)

This time we need to remove a PAGE_SIZE sized range
from the cache.  The 'vma' is the backing structure used by
Linux to keep track of mmap'd regions for a process; the
address space is available via vma->vm_mm.  Also, one may
test (vma->vm_flags & VM_EXEC) to see if this region is
executable (and thus could be in the 'instruction cache' in
"Harvard" type cache layouts).

After running, there will be no entries in the cache for
'vma->vm_mm' for virtual address 'addr'.

This is used primarily during fault processing.

4) void flush_cache_kmaps(void)

This routine need only be implemented if the platform utilizes
highmem.  It will be called right before all of the kmaps
are invalidated.

After running, there will be no entries in the cache for
the kernel virtual address range PKMAP_ADDR(0) to
PKMAP_ADDR(LAST_PKMAP).

This routine should be implemented in asm/highmem.h

5) void flush_cache_vmap(unsigned long start, unsigned long end)
   void flush_cache_vunmap(unsigned long start, unsigned long end)

Here in these two interfaces we are flushing a specific range
of (kernel) virtual addresses from the cache.  After running,
there will be no entries in the cache for the kernel address
space for virtual addresses in the range 'start' to 'end-1'.

The first of these two routines is invoked after map_vm_area()
has installed the page table entries.  The second is invoked
before unmap_vm_area() deletes the page table entries.

There exists another whole class of cpu cache issues which currently
require a whole different set of interfaces to handle properly.
The biggest problem is that of virtual aliasing in the data cache
of a processor.

Is your port susceptible to virtual aliasing in its D-cache?
Well, if your D-cache is virtually indexed, is larger in size than
PAGE_SIZE, and does not prevent multiple cache lines for the same
physical address from existing at once, you have this problem.

If your D-cache has this problem, first define asm/shmparam.h SHMLBA
properly; it should essentially be the size of your virtually
addressed D-cache (or if the size is variable, the largest possible
size).  This setting will force the SYSv IPC layer to only allow user
processes to mmap shared memory at addresses which are a multiple of
this value.

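To make the arithmetic concrete (illustrative numbers, not any
particular cpu): a direct-mapped, virtually indexed 16KB D-cache with
4KB pages can hold a given physical page at four distinct cache
"colors", so SHMLBA would be defined as the cache size:

/* Illustrative values only, in the spirit of asm/shmparam.h */
#define D_CACHE_SIZE    (16 * 1024)     /* hypothetical D-cache size */
#define SHMLBA          D_CACHE_SIZE    /* shared mappings align to this */
/* colors = D_CACHE_SIZE / PAGE_SIZE = 16K / 4K = 4 */
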
NOTE: This does not fix shared mmaps; check out the sparc64 port for
one way to solve this (in particular SPARC_FLAG_MMAPSHARED).

Next, you have to solve the D-cache aliasing issue for all
other cases.  Please keep in mind the fact that, for a given page
mapped into some user address space, there is always at least one more
mapping, that of the kernel in its linear mapping starting at
PAGE_OFFSET.  So immediately, once the first user maps a given
physical page into its address space, by implication the D-cache
aliasing problem has the potential to exist since the kernel already
maps this page at its virtual address.

void copy_user_page(void *to, void *from, unsigned long addr, struct page *page)
void clear_user_page(void *to, unsigned long addr, struct page *page)

These two routines store data in user anonymous or COW
pages.  They allow a port to efficiently avoid D-cache alias
issues between userspace and the kernel.

For example, a port may temporarily map 'from' and 'to' to
kernel virtual addresses during the copy.  The virtual address
for these two pages is chosen in such a way that the kernel
load/store instructions happen to virtual addresses which are
of the same "color" as the user mapping of the page.  Sparc64,
for example, uses this technique.

The 'addr' parameter tells the virtual address where the
user will ultimately have this page mapped, and the 'page'
parameter gives a pointer to the struct page of the target.

If D-cache aliasing is not an issue, these two routines may
simply call memcpy/memset directly and do nothing more.

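For the non-aliasing case, here is a hedged sketch of that trivial
implementation, using the usual whole-page helpers copy_page() and
clear_page():

static inline void copy_user_page(void *to, void *from,
                                  unsigned long addr, struct page *page)
{
        copy_page(to, from);    /* kernel alias is safe to use directly */
}

static inline void clear_user_page(void *to, unsigned long addr,
                                   struct page *page)
{
        clear_page(to);         /* no cache color to worry about */
}
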
void flush_dcache_page(struct page *page)

Any time the kernel writes to a page cache page, _OR_
the kernel is about to read from a page cache page and
user space shared/writable mappings of this page potentially
exist, this routine is called.

NOTE: This routine need only be called for page cache pages
      which can potentially ever be mapped into the address
      space of a user process.  So for example, VFS layer code
      handling vfs symlinks in the page cache need not call
      this interface at all.

The phrase "kernel writes to a page cache page" means,
specifically, that the kernel executes store instructions
that dirty data in that page at the page->virtual mapping
of that page.  It is important to flush here to handle
D-cache aliasing, to make sure these kernel stores are
visible to user space mappings of that page.

The corollary case is just as important: if there are users
which have shared+writable mappings of this file, we must make
sure that kernel reads of these pages will see the most recent
stores done by the user.

If D-cache aliasing is not an issue, this routine may
simply be defined as a nop on that architecture.

There is a bit set aside in page->flags (PG_arch_1) as
"architecture private".  The kernel guarantees that,
for pagecache pages, it will clear this bit when such
a page first enters the pagecache.

This allows these interfaces to be implemented much more
efficiently.  It allows one to "defer" (perhaps indefinitely)
the actual flush if there are currently no user processes
mapping this page.  See sparc64's flush_dcache_page and
update_mmu_cache implementations for an example of how to go
about doing this.

The idea is, first at flush_dcache_page() time, if
page->mapping->i_mmap{,_shared} are empty lists, just mark the
architecture private page flag bit.  Later, in
update_mmu_cache(), a check is made of this flag bit, and if
set the flush is done and the flag bit is cleared.  A sketch
of this scheme follows the IMPORTANT NOTE below.

IMPORTANT NOTE: It is often important, if you defer the flush,
                that the actual flush occurs on the same CPU
                as did the cpu stores into the page to make it
                dirty.  Again, see sparc64 for examples of how
                to deal with this.

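Here is a hedged sketch of that deferral scheme, assuming a
hypothetical per-port __flush_dcache_page() primitive and the
list-based i_mmap/i_mmap_shared fields described above:

void flush_dcache_page(struct page *page)
{
        struct address_space *mapping = page->mapping;

        if (mapping &&
            list_empty(&mapping->i_mmap) &&
            list_empty(&mapping->i_mmap_shared)) {
                /* No user mappings yet: defer the flush. */
                set_bit(PG_arch_1, &page->flags);
        } else {
                __flush_dcache_page(page);      /* hypothetical */
        }
}

void update_mmu_cache(struct vm_area_struct *vma,
                      unsigned long address, pte_t pte)
{
        struct page *page = pte_page(pte);      /* assumes a valid pte */

        /* Perform any deferred flush now that a user maps the page. */
        if (test_and_clear_bit(PG_arch_1, &page->flags))
                __flush_dcache_page(page);
}
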
void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
                       unsigned long user_vaddr,
                       void *dst, void *src, int len)
void copy_from_user_page(struct vm_area_struct *vma, struct page *page,
                         unsigned long user_vaddr,
                         void *dst, void *src, int len)

When the kernel needs to copy arbitrary data in and out
of arbitrary user pages (e.g. for ptrace()) it will use
these two routines.

The page has been kmap()'d, and flush_cache_page() has
just been called for the user mapping of this page (if
necessary).

Any necessary cache flushing or other coherency operations
that need to occur should happen here.  If the processor's
instruction cache does not snoop cpu stores, it is very
likely that you will need to flush the instruction cache
for copy_to_user_page().

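A hedged sketch for a cpu with a non-snooping instruction cache
follows; flush_icache_user_range() stands in for whatever per-port
primitive performs the D-cache writeback plus I-cache invalidate (the
name is an assumption here):

void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
                       unsigned long user_vaddr,
                       void *dst, void *src, int len)
{
        memcpy(dst, src, len);
        if (vma->vm_flags & VM_EXEC)
                /* Assumed helper: make the new bytes visible to
                 * instruction fetches at the user's mapping. */
                flush_icache_user_range(vma, page, user_vaddr, len);
}
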
void flush_icache_range(unsigned long start, unsigned long end)

When the kernel stores into addresses that it will execute
out of (e.g. when loading modules), this function is called.

If the icache does not snoop stores then this routine will need
to flush it.

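A minimal sketch for such a non-snooping cpu, assuming hypothetical
per-line primitives flush_dcache_line() and invalidate_icache_line(),
with L1_CACHE_BYTES as the line size:

void flush_icache_range(unsigned long start, unsigned long end)
{
        unsigned long addr;

        for (addr = start & ~(L1_CACHE_BYTES - 1UL); addr < end;
             addr += L1_CACHE_BYTES) {
                flush_dcache_line(addr);        /* push stores to memory */
                invalidate_icache_line(addr);   /* drop stale instructions */
        }
        /* A real port would also execute whatever synchronizing
         * instruction its cpu requires here. */
}
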
void flush_icache_page(struct vm_area_struct *vma, struct page *page)

All the functionality of flush_icache_page can be implemented in
flush_dcache_page and update_mmu_cache.  In 2.7 the hope is to
remove this interface completely.