powerpc/mm: Cleanup initialization of hugepages on powerpc
commit d1837cba5d5d5458c09f0a2849db2d3c203cb8e9
author David Gibson <david@gibson.dropbear.id.au>
Mon, 26 Oct 2009 19:24:31 +0000 (26 19:24 +0000)
committer Benjamin Herrenschmidt <benh@kernel.crashing.org>
Fri, 30 Oct 2009 06:20:58 +0000 (30 17:20 +1100)
tree 144a4eb43ed6b9909133dc1ac0619d813e4cb131
parent a4fe3ce7699bfe1bd88f816b55d42d8fe1dac655
powerpc/mm: Cleanup initialization of hugepages on powerpc

This patch simplifies the logic used to initialize hugepages on
powerpc.  The somewhat oddly named set_huge_psize() is renamed to
add_huge_page_size(), and it now does all the necessary verification
of whether it has been given a valid hugepage size (instead of only
some of the checks) and instantiates the generic hstate structure
(but no more).
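As a rough illustration (a sketch of the intended shape, not the
literal patch hunk), add_huge_page_size() ends up looking roughly
like the following; shift_to_mmu_psize(), size_to_hstate() and
hugetlb_add_hstate() are existing helpers, and the exact set of
limit checks shown here is an assumption:

    static int __init add_huge_page_size(unsigned long long size)
    {
        int shift = __ffs(size);
        int mmu_psize;

        /* Reject anything that is not a power of two or that the
         * pagetable layout cannot represent. */
        if (!is_power_of_2(size) || shift <= PAGE_SHIFT)
            return -EINVAL;

        /* The size must match one of the MMU page sizes the platform
         * advertised in mmu_psize_defs[]. */
        mmu_psize = shift_to_mmu_psize(shift);
        if (mmu_psize < 0)
            return -EINVAL;

        /* Nothing to do if this size was already registered. */
        if (size_to_hstate(size))
            return 0;

        /* Instantiate the generic hstate for this size -- and no more. */
        hugetlb_add_hstate(shift - PAGE_SHIFT);
        return 0;
    }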

hugetlbpage_init() now steps through the available page sizes, checks
whether each is valid for hugepages by calling add_huge_page_size(),
and initializes the kmem_caches for the hugepage pagetables.  This
means we can now eliminate the mmu_huge_psizes array, since we no
longer need to pass the sizing information for the pagetable caches
from set_huge_psize() into hugetlbpage_init().
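A simplified sketch of that loop is below; the pdshift computation
and the choice of cache size are illustrative assumptions rather
than the exact patch:

    static int __init hugetlbpage_init(void)
    {
        int psize;

        for (psize = 0; psize < MMU_PAGE_COUNT; ++psize) {
            unsigned int shift, pdshift;

            if (!mmu_psize_defs[psize].shift)
                continue;    /* size not supported by this MMU */

            shift = mmu_psize_to_shift(psize);

            /* Let add_huge_page_size() validate the size and set up
             * the generic hstate; skip sizes it rejects. */
            if (add_huge_page_size(1ULL << shift) < 0)
                continue;

            /* Pick the pagetable level that will hold hugepage PTEs
             * of this size, and create a kmem_cache for the
             * corresponding hugepage pagetable fragments. */
            if (shift < PMD_SHIFT)
                pdshift = PMD_SHIFT;
            else if (shift < PUD_SHIFT)
                pdshift = PUD_SHIFT;
            else
                pdshift = PGDIR_SHIFT;

            pgtable_cache_add(pdshift - shift, NULL);
        }

        return 0;
    }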

Determination of the default huge page size is also moved from the
hash code into the general hugepage code.
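In the shared hugepage code, picking the default then becomes
something along these lines (16M preferred, falling back to 1M when
the MMU supports it; again a sketch, not the verbatim hunk):

    /* Pick the default large page size: 16M if the MMU supports it,
     * otherwise fall back to 1M. */
    if (mmu_psize_defs[MMU_PAGE_16M].shift)
        HPAGE_SHIFT = mmu_psize_defs[MMU_PAGE_16M].shift;
    else if (mmu_psize_defs[MMU_PAGE_1M].shift)
        HPAGE_SHIFT = mmu_psize_defs[MMU_PAGE_1M].shift;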

Signed-off-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
arch/powerpc/include/asm/page_64.h
arch/powerpc/mm/hash_utils_64.c
arch/powerpc/mm/hugetlbpage.c