= Transparent Hugepage Support =

== Objective ==

Performance critical computing applications dealing with large memory
working sets are already running on top of libhugetlbfs and in turn
hugetlbfs. Transparent Hugepage Support is an alternative means of
backing virtual memory with huge pages, one that supports the
automatic promotion and demotion of page sizes without the
shortcomings of hugetlbfs.

Currently it only works for anonymous memory mappings, but in the
future it can be extended to the pagecache layer, starting with tmpfs.
The reason applications run faster is because of two factors. The
first factor is almost completely irrelevant and not of significant
interest, because it also has the downside of requiring larger
clear-page and copy-page operations in page faults, which is a
potentially negative effect. The first factor consists of taking a
single page fault for each 2M virtual region touched by userland
(thus reducing the enter/exit kernel frequency by a factor of 512).
This only matters the first time the memory is accessed during the
lifetime of a memory mapping. The second, long-lasting and much more
important factor affects all subsequent accesses to the memory for
the whole runtime of the application. The second factor consists of
two components: 1) the TLB miss will run faster (especially with
virtualization using nested pagetables, but almost always also on
bare metal without virtualization) and 2) a single TLB entry will map
a much larger amount of virtual memory, in turn reducing the number
of TLB misses. With virtualization and nested pagetables the TLB can
map entries of larger size only if both KVM and the Linux guest are
using hugepages, but a significant speedup already happens if only
one of the two is using hugepages, simply because the TLB miss is
going to run faster.
== Design ==

- "graceful fallback": mm components which don't have transparent
  hugepage knowledge fall back to breaking a transparent hugepage and
  working on the regular pages and their respective regular pmd/pte
  mappings

- if a hugepage allocation fails because of memory fragmentation,
  regular pages should be gracefully allocated instead and mixed in
  the same vma without any failure or significant delay and without
  userland noticing

- if some task quits and more hugepages become available (either
  immediately in the buddy or through the VM), guest physical memory
  backed by regular pages should be relocated to hugepages
  automatically (with khugepaged)

- it doesn't require memory reservation and in turn it uses hugepages
  whenever possible (the only possible reservation here is kernelcore=
  to avoid unmovable pages fragmenting all the memory, but such a
  tweak is not specific to transparent hugepage support and it's a
  generic feature that applies to all dynamic high order allocations
  in the kernel)

- this initial support only offers the feature in the anonymous memory
  regions but it'd be ideal to move it to tmpfs and the pagecache
  later
Transparent Hugepage Support maximizes the usefulness of free memory
compared to the reservation approach of hugetlbfs by allowing all
unused memory to be used as cache or other movable (or even unmovable)
entities. It doesn't require reservation to prevent hugepage
allocation failures from being noticeable from userland. It allows
paging and all other advanced VM features to be available on the
hugepages. It requires no modifications for applications to take
advantage of it.
Applications, however, can be further optimized to take advantage of
this feature, as for example they've been optimized before to avoid a
flood of mmap system calls for every malloc(4k). Optimizing userland
is far from mandatory, and khugepaged can already take care of long
lived page allocations even for hugepage unaware applications that
deal with large amounts of memory.
In certain cases when hugepages are enabled system wide, applications
may end up allocating more memory resources. An application may mmap a
large region but only touch 1 byte of it; in that case a 2M page might
be allocated instead of a 4k page for no good reason. This is why it's
possible to disable hugepages system-wide and to only have them inside
MADV_HUGEPAGE madvise regions.
Embedded systems should enable hugepages only inside madvise regions
to eliminate any risk of wasting any precious byte of memory and to
only run faster.

Applications that get a lot of benefit from hugepages and that don't
risk losing memory by using hugepages should use
madvise(MADV_HUGEPAGE) on their critical mmapped regions.
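As a rough illustration, such an application might opt in on a large
anonymous mapping like the minimal sketch below. The 2M hugepage size
and the region size are assumptions for the example, and
madvise(MADV_HUGEPAGE) can fail (for instance with EINVAL) on kernels
built without transparent hugepage support.

#include <stdio.h>
#include <sys/mman.h>

#define HPAGE_SIZE	(2UL * 1024 * 1024)	/* assumed hugepage size */

int main(void)
{
	size_t len = 64 * HPAGE_SIZE;		/* hypothetical working set */
	void *buf;

	buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* ask the kernel to back this region with hugepages when possible */
	if (madvise(buf, len, MADV_HUGEPAGE))
		perror("madvise(MADV_HUGEPAGE)");

	return 0;
}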
== sysfs ==

Transparent Hugepage Support can be entirely disabled (mostly for
debugging purposes), enabled only inside MADV_HUGEPAGE regions (to
avoid the risk of consuming more memory resources), or enabled system
wide. This can be achieved with one of:

echo always >/sys/kernel/mm/transparent_hugepage/enabled
echo madvise >/sys/kernel/mm/transparent_hugepage/enabled
echo never >/sys/kernel/mm/transparent_hugepage/enabled
It's also possible to limit the VM's defrag efforts to generate
hugepages in case they're not immediately free, either restricting
them to madvise regions or never trying to defrag memory and simply
falling back to regular pages unless hugepages are immediately
available. Clearly if we spend CPU time to defrag memory, we would
expect to gain even more by using hugepages later instead of regular
pages. This isn't always guaranteed, but it may be more likely if the
allocation is for a MADV_HUGEPAGE region.

echo always >/sys/kernel/mm/transparent_hugepage/defrag
echo madvise >/sys/kernel/mm/transparent_hugepage/defrag
echo never >/sys/kernel/mm/transparent_hugepage/defrag
By default the kernel tries to use the huge zero page on read page
faults. It's possible to disable the huge zero page by writing 0, or
to enable it back by writing 1:

echo 0 >/sys/kernel/mm/transparent_hugepage/use_zero_page
echo 1 >/sys/kernel/mm/transparent_hugepage/use_zero_page
khugepaged will be automatically started when
transparent_hugepage/enabled is set to "always" or "madvise", and it
will be automatically shut down if it's set to "never".
khugepaged usually runs at low frequency, so while one may not want to
invoke defrag algorithms synchronously during page faults, it should
be worth invoking defrag at least in khugepaged. However it's also
possible to disable defrag in khugepaged by writing 0, or to enable
defrag in khugepaged by writing 1:

echo 0 >/sys/kernel/mm/transparent_hugepage/khugepaged/defrag
echo 1 >/sys/kernel/mm/transparent_hugepage/khugepaged/defrag
You can also control how many pages khugepaged should scan at each
pass:

/sys/kernel/mm/transparent_hugepage/khugepaged/pages_to_scan

and how many milliseconds to wait in khugepaged between each pass (you
can set this to 0 to run khugepaged at 100% utilization of one core):

/sys/kernel/mm/transparent_hugepage/khugepaged/scan_sleep_millisecs

and how many milliseconds to wait in khugepaged if there's a hugepage
allocation failure, to throttle the next allocation attempt:

/sys/kernel/mm/transparent_hugepage/khugepaged/alloc_sleep_millisecs
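These knobs are written the same way as the other sysfs controls
above; for example (the values here are purely illustrative, not
recommendations):

echo 4096 >/sys/kernel/mm/transparent_hugepage/khugepaged/pages_to_scan
echo 10000 >/sys/kernel/mm/transparent_hugepage/khugepaged/scan_sleep_millisecs
echo 60000 >/sys/kernel/mm/transparent_hugepage/khugepaged/alloc_sleep_millisecs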
The khugepaged progress can be seen in the number of pages collapsed:

/sys/kernel/mm/transparent_hugepage/khugepaged/pages_collapsed

and in the number of full scans performed:

/sys/kernel/mm/transparent_hugepage/khugepaged/full_scans

== Boot parameter ==

You can change the sysfs boot time defaults of Transparent Hugepage
Support by passing the parameter "transparent_hugepage=always",
"transparent_hugepage=madvise" or "transparent_hugepage=never"
(without the quotes) on the kernel command line.
== Need of application restart ==

The transparent_hugepage/enabled values only affect future
behavior. So to make them effective you need to restart any
application that could have been using hugepages. This also applies to
the regions registered in khugepaged.
== Monitoring usage ==

The number of transparent huge pages currently used by the system is
available by reading the AnonHugePages field in /proc/meminfo. To
identify which applications are using transparent huge pages, it is
necessary to read /proc/PID/smaps and count the AnonHugePages fields
for each mapping. Note that reading the smaps file is expensive and
reading it frequently will incur overhead.
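For instance, a quick way to check system-wide and per-process usage
(a sketch; replace PID with the process of interest):

grep AnonHugePages /proc/meminfo
awk '/AnonHugePages/ { sum += $2 } END { print sum " kB" }' /proc/PID/smaps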
There are a number of counters in /proc/vmstat that may be used to
monitor how successfully the system is providing huge pages for use.

thp_fault_alloc is incremented every time a huge page is successfully
	allocated to handle a page fault. This applies to both the
	first time a page is faulted and to COW faults.

thp_collapse_alloc is incremented by khugepaged when it has found
	a range of pages to collapse into one huge page and has
	successfully allocated a new huge page to store the data.

thp_fault_fallback is incremented if a page fault fails to allocate
	a huge page and instead falls back to using small pages.

thp_collapse_alloc_failed is incremented if khugepaged found a range
	of pages that should be collapsed into one huge page but failed
	the allocation.

thp_split is incremented every time a huge page is split into base
	pages. This can happen for a variety of reasons but a common
	reason is that a huge page is old and is being reclaimed.

thp_zero_page_alloc is incremented every time a huge zero page is
	successfully allocated. It includes allocations which were
	dropped due to a race with other allocations. Note, it doesn't
	count every map of the huge zero page, only its allocation.

thp_zero_page_alloc_failed is incremented if the kernel fails to
	allocate a huge zero page and falls back to using small pages.
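All of these counters can be read in one go, for example:

grep thp_ /proc/vmstat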
As the system ages, allocating huge pages may be expensive as the
system uses memory compaction to copy data around memory to free a
huge page for use. There are some counters in /proc/vmstat to help
monitor this overhead.

compact_stall is incremented every time a process stalls to run
	memory compaction so that a huge page is free for use.

compact_success is incremented if the system compacted memory and
	freed a huge page for use.

compact_fail is incremented if the system tries to compact memory
	but failed.

compact_pages_moved is incremented each time a page is moved. If
	this value is increasing rapidly, it implies that the system
	is copying a lot of data to satisfy the huge page allocation.
	It is possible that the cost of copying exceeds any savings
	from reduced TLB misses.

compact_pagemigrate_failed is incremented when the underlying mechanism
	for moving a page failed.

compact_blocks_moved is incremented each time memory compaction examines
	a huge page aligned range of pages.
It is possible to establish how long the stalls were using the function
tracer to record how long was spent in __alloc_pages_nodemask and
using the mm_page_alloc tracepoint to identify which allocations were
for huge pages.
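As a rough sketch (assuming debugfs is mounted at /sys/kernel/debug
and the kernel was built with the function graph tracer), something
along these lines records the time spent in __alloc_pages_nodemask
and logs each page allocation; huge page allocations show up in the
tracepoint output as order=9 allocations on x86:

cd /sys/kernel/debug/tracing
echo __alloc_pages_nodemask > set_graph_function
echo function_graph > current_tracer
echo 1 > events/kmem/mm_page_alloc/enable
cat trace_pipe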
== get_user_pages and follow_page ==

get_user_pages and follow_page, if run on a hugepage, will return the
head or tail pages as usual (exactly as they would do on
hugetlbfs). Most gup users will only care about the actual physical
address of the page and its temporary pinning to be released after the
I/O is complete, so they won't ever notice the fact the page is
huge. But if any driver is going to mangle the page structure of the
tail page (like checking page->mapping or other bits that are relevant
for the head page and not the tail page), it should be updated to
check the head page instead (while serializing properly against
split_huge_page() to avoid the head and tail pages disappearing from
under it; see the futex code for an example of that, hugetlbfs also
needed special handling in the futex code for similar reasons).

NOTE: these aren't new constraints to the GUP API, and they match the
same constraints that apply to hugetlbfs too, so any driver capable
of handling GUP on hugetlbfs will also work fine on transparent
hugepage backed mappings.
In case you can't handle compound pages if they're returned by
follow_page, the FOLL_SPLIT bit can be specified as a parameter to
follow_page, so that it will split the hugepages before returning
them. Migration for example passes FOLL_SPLIT as a parameter to
follow_page because it's not hugepage aware and in fact it can't work
at all on hugetlbfs (but it instead works fine on transparent
hugepages thanks to FOLL_SPLIT). Migration simply can't deal with
hugepages being returned (as it's not only checking the pfn of the
page and pinning it during the copy, but it pretends to migrate the
memory in regular page sizes and with regular pte/pmd mappings).
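A minimal sketch of such a caller, loosely modeled on the migration
code (the helper name and the simplified error handling are
assumptions; the caller must hold mm->mmap_sem for read):

#include <linux/mm.h>
#include <linux/err.h>

/*
 * Hypothetical helper: pin one regular page at @addr, asking
 * follow_page to split any transparent hugepage first so the caller
 * never sees a compound page.
 */
static struct page *get_small_page(struct vm_area_struct *vma,
				   unsigned long addr)
{
	struct page *page;

	page = follow_page(vma, addr, FOLL_GET | FOLL_SPLIT);
	if (IS_ERR(page) || !page)
		return NULL;
	/* page is a regular 4k page with an extra reference taken */
	return page;	/* the caller does put_page() when done */
}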
== Optimizing the applications ==

To be guaranteed that the kernel will map a 2M page immediately in any
memory region, the mmap region has to be hugepage naturally
aligned. posix_memalign() can provide that guarantee.
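A minimal sketch of such an allocation (the 2M hugepage size and the
region size are assumptions; the madvise call is only needed when
hugepages aren't enabled system wide):

#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

#define HPAGE_SIZE	(2UL * 1024 * 1024)	/* assumed hugepage size */

int main(void)
{
	size_t len = 16 * HPAGE_SIZE;		/* illustrative size */
	void *buf;

	/* hugepage-aligned allocation, so 2M mappings can be used at once */
	if (posix_memalign(&buf, HPAGE_SIZE, len))
		return 1;
	madvise(buf, len, MADV_HUGEPAGE);	/* opt in if not system wide */
	memset(buf, 0, len);			/* first touch faults in huge pages */
	free(buf);
	return 0;
}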
== Hugetlbfs ==

You can use hugetlbfs on a kernel that has transparent hugepage
support enabled just fine as always. No difference can be noted in
hugetlbfs other than there will be less overall fragmentation. All
usual features belonging to hugetlbfs are preserved and
unaffected. libhugetlbfs will also work fine as usual.
== Graceful fallback ==

Code walking pagetables but unaware of huge pmds can simply call
split_huge_page_pmd(vma, addr, pmd) where the pmd is the one returned
by pmd_offset. It's trivial to make the code transparent hugepage
aware by just grepping for "pmd_offset" and adding split_huge_page_pmd
where missing after pmd_offset returns the pmd. Thanks to the graceful
fallback design, with a one liner change, you can avoid writing
hundreds if not thousands of lines of complex code to make your code
hugepage aware.
If you're not walking pagetables but you run into a physical hugepage
that you can't handle natively in your code, you can split it by
calling split_huge_page(page). This is what the Linux VM does before
it tries to swapout the hugepage for example.
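A minimal sketch of that pattern (assuming an anonymous page the code
already holds a reference on; the helper name and the -EBUSY return
are illustrative):

#include <linux/mm.h>
#include <linux/huge_mm.h>

/* Hypothetical example: make sure @page is not a transparent hugepage. */
static int make_page_regular(struct page *page)
{
	if (PageTransHuge(page) && split_huge_page(page))
		return -EBUSY;	/* split failed; caller falls back or retries */
	/* from here on, page is a regular 4k page */
	return 0;
}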
Example to make mremap.c transparent hugepage aware with a one liner
change:

diff --git a/mm/mremap.c b/mm/mremap.c
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -41,6 +41,7 @@ static pmd_t *get_old_pmd(struct mm_stru
 		return NULL;
 
 	pmd = pmd_offset(pud, addr);
+	split_huge_page_pmd(vma, addr, pmd);
 	if (pmd_none_or_clear_bad(pmd))
 		return NULL;
== Locking in hugepage aware code ==

We want as much code as possible to be hugepage aware, as calling
split_huge_page() or split_huge_page_pmd() has a cost.

To make pagetable walks huge pmd aware, all you need to do is to call
pmd_trans_huge() on the pmd returned by pmd_offset. You must hold the
mmap_sem in read (or write) mode to be sure a huge pmd cannot be
created from under you by khugepaged (khugepaged collapse_huge_page
takes the mmap_sem in write mode in addition to the anon_vma lock). If
pmd_trans_huge returns false, you just fall back to the old code
paths. If instead pmd_trans_huge returns true, you have to take the
mm->page_table_lock and re-run pmd_trans_huge. Taking the
page_table_lock will prevent the huge pmd from being converted into a
regular pmd from under you (split_huge_page can run in parallel to the
pagetable walk). If the second pmd_trans_huge returns false, you
should just drop the page_table_lock and fall back to the old code as
before. Otherwise you should run pmd_trans_splitting on the pmd. In
case pmd_trans_splitting returns true, it means split_huge_page is
already in the middle of splitting the page. So if pmd_trans_splitting
returns true it's enough to drop the page_table_lock, call
wait_split_huge_page and then fall back to the old code paths. You are
guaranteed that by the time wait_split_huge_page returns, the pmd
isn't huge anymore. If pmd_trans_splitting returns false, you can
proceed to process the huge pmd and the hugepage natively. Once
finished you can drop the page_table_lock.
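Put together, a pagetable walk following the sequence above looks
roughly like this sketch (mm, vma, pud and addr are assumed to come
from the surrounding walk, and handle_huge_pmd is a placeholder for
the caller's own huge pmd handling):

	/* mmap_sem held in read (or write) mode */
	pmd = pmd_offset(pud, addr);
	if (pmd_trans_huge(*pmd)) {
		spin_lock(&mm->page_table_lock);
		if (pmd_trans_huge(*pmd)) {
			if (pmd_trans_splitting(*pmd)) {
				/* split_huge_page is already splitting it */
				spin_unlock(&mm->page_table_lock);
				wait_split_huge_page(vma->anon_vma, pmd);
				/* pmd is not huge anymore: fall through */
			} else {
				/* stable huge pmd: handle it natively */
				handle_huge_pmd(vma, addr, pmd);	/* placeholder */
				spin_unlock(&mm->page_table_lock);
				return;
			}
		} else {
			spin_unlock(&mm->page_table_lock);
		}
	}
	/* regular pmd/pte code path follows */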
== compound_lock, get_user_pages and put_page ==

split_huge_page internally has to distribute the refcounts in the head
page to the tail pages before clearing all PG_head/tail bits from the
page structures. It can do that easily for refcounts taken by huge pmd
mappings. But the GUP API as created by hugetlbfs (that returns head
and tail pages if running get_user_pages on an address backed by any
hugepage) requires the refcount to be accounted on the tail pages and
not only in the head pages, if we want to be able to run
split_huge_page while there are gup pins established on any tail
page. Failure to be able to run split_huge_page if there's any gup pin
on any tail page would mean having to split all hugepages upfront in
get_user_pages, which is unacceptable as too many gup users are
performance critical and they must work natively on hugepages like
they work natively on hugetlbfs already (hugetlbfs is simpler because
hugetlbfs pages cannot be split, so there is no requirement to account
the pins on the tail pages for hugetlbfs). If we didn't account the
gup refcounts on the tail pages during gup, we wouldn't know anymore
which tail page is pinned by gup and which is not while we run
split_huge_page. But we still have to add the gup pin to the head page
too, to know when we can free the compound page in case it's never
split during its lifetime. That requires changing not just get_page,
but put_page as well, so that when put_page runs on a tail page (and
only on a tail page) it will find its respective head page, and then
it will decrease the head page refcount in addition to the tail page
refcount. To obtain a head page reliably and to decrease its refcount
without race conditions, put_page has to serialize against
__split_huge_page_refcount using a special per-page lock called
compound_lock.