perf: Reimplement frequency driven sampling
commit 21a6adcde06e129b055caa3256e65a97a2986770
author Peter Zijlstra <a.p.zijlstra@chello.nl>
Tue, 26 Jan 2010 17:50:16 +0000 (26 18:50 +0100)
committer Greg Kroah-Hartman <gregkh@suse.de>
Mon, 15 Mar 2010 16:06:17 +0000 (15 09:06 -0700)
tree 56663f2682b5114b92335c7c53ce26e1449ac8cf
parent 69cb5f7cdc28a5352a03c16bbaa0a92cdf31b9d4
perf: Reimplement frequency driven sampling

commit abd50713944c8ea9e0af5b7bffa0aacae21cc91a upstream.

There was a bug in the old period code that caused intel_pmu_enable_all()
or native_write_msr_safe() to show up quite high in the profiles.

Staring at that code made my head hurt, so I rewrote it in a
hopefully simpler fashion. It's now fully symmetric between tick and
overflow driven adjustments and uses less data to boot.
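
Both paths boil down to the same computation (restated here as plain
arithmetic, not quoted from the patch): given count events observed over
nsec nanoseconds and a target rate of sample_freq samples per second, the
wanted period is

	period = (count * 10^9) / (nsec * sample_freq)

so that an overflow every period events yields roughly sample_freq
overflows per second.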

The only complication is that it basically wants to do a u128 division.
The code approximates that in a rather simple truncate-until-it-fits
fashion, taking care to balance the terms while truncating.
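
A minimal userspace sketch of that balanced truncation follows; the names
(approx_period, fls64_bits) are made up for illustration and this is not
the kernel's actual implementation, only the idea: shift one term of the
dividend and one term of the divisor together until both products fit in
64 bits, so the lost precision roughly cancels in the quotient.

	#include <stdint.h>
	#include <stdio.h>

	#define NSEC_PER_SEC 1000000000ULL

	/* Bits needed to represent x (0 for x == 0). */
	static int fls64_bits(uint64_t x)
	{
		return x ? 64 - __builtin_clzll(x) : 0;
	}

	/*
	 * Approximate (count * NSEC_PER_SEC) / (nsec * freq) without
	 * 128-bit arithmetic: while either product would overflow 64 bits,
	 * halve count (shrinking the dividend) and nsec (shrinking the
	 * divisor) together, keeping the truncation balanced so the
	 * quotient barely moves.
	 */
	static uint64_t approx_period(uint64_t count, uint64_t nsec,
				      uint64_t freq)
	{
		while (fls64_bits(count) + fls64_bits(NSEC_PER_SEC) > 64 ||
		       fls64_bits(nsec) + fls64_bits(freq) > 64) {
			count >>= 1;
			nsec  >>= 1;
		}

		if (nsec == 0 || freq == 0)
			return 0;

		return (count * NSEC_PER_SEC) / (nsec * freq);
	}

	int main(void)
	{
		/* 2e9 events in one second, aiming for 1000 samples/sec. */
		printf("%llu\n", (unsigned long long)
		       approx_period(2000000000ULL, NSEC_PER_SEC, 1000));
		return 0;
	}

For the example above the exact answer is 2,000,000 events per sample, and
no truncation is needed since both products already fit in 64 bits.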

This version does not generate that sampling artefact.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
include/linux/perf_event.h
kernel/perf_event.c