sched: improve sched_clock() performance
commit 0d12cdd5f883f508d33b85c1bae98fa28987c8c7
author Ingo Molnar <mingo@elte.hu>
Sat, 8 Nov 2008 15:19:55 +0000 (16:19 +0100)
committer Ingo Molnar <mingo@elte.hu>
Sat, 8 Nov 2008 15:48:19 +0000 (16:48 +0100)
tree e07bb1f9ef49062fbd9817fe41cab66964bedf03
parent 52c642f33b14bfa1b00ef2b68296effb34a573f3
sched: improve sched_clock() performance

in scheduler-intense workloads, native_read_tsc() overhead accounts for
20% of the total system overhead:

 659567 system_call                              41222.9375
 686796 schedule                                 435.7843
 718382 __switch_to                              665.1685
 823875 switch_mm                                4526.7857
 1883122 native_read_tsc                          55385.9412
 9761990 total                                      2.8468

this is in large part due to the rdtsc_barrier() that is done before
and after reading the TSC.

But sched_clock() is not a precise clock in the GTOD sense, so using
such barriers there is pointless. Remove the barriers from
native_read_tsc() and only use them in vget_cycles().

This improves lat_ctx performance by about 5%.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
arch/x86/include/asm/msr.h
arch/x86/include/asm/tsc.h