# Architectures that offer a FUNCTION_TRACER implementation should
#  select HAVE_FUNCTION_TRACER:
config USER_STACKTRACE_SUPPORT

config HAVE_FTRACE_NMI_ENTER
	  See Documentation/trace/ftrace-design.txt

config HAVE_FUNCTION_TRACER
	  See Documentation/trace/ftrace-design.txt

config HAVE_FUNCTION_GRAPH_TRACER
	  See Documentation/trace/ftrace-design.txt

config HAVE_FUNCTION_GRAPH_FP_TEST
	  See Documentation/trace/ftrace-design.txt

config HAVE_FUNCTION_TRACE_MCOUNT_TEST
	  See Documentation/trace/ftrace-design.txt

config HAVE_DYNAMIC_FTRACE
	  See Documentation/trace/ftrace-design.txt

config HAVE_FTRACE_MCOUNT_RECORD
	  See Documentation/trace/ftrace-design.txt

config HAVE_SYSCALL_TRACEPOINTS
	  See Documentation/trace/ftrace-design.txt

config HAVE_C_RECORDMCOUNT
	  C version of recordmcount available?
config TRACER_MAX_TRACE

config FTRACE_NMI_ENTER
	depends on HAVE_FTRACE_NMI_ENTER

config EVENT_TRACING
	select CONTEXT_SWITCH_TRACER

config CONTEXT_SWITCH_TRACER

config RING_BUFFER_ALLOW_SWAP
	  Allow the use of ring_buffer_swap_cpu.
	  Adds a very slight overhead to tracing when enabled.
# All tracer options should select GENERIC_TRACER. Options that are enabled
# by all tracers (context switch and event tracer) select TRACING instead.
# This allows those options to appear when no other tracer is selected, but
# keeps them hidden when something else selects them. The two separate
# options GENERIC_TRACER and TRACING are needed to accomplish this hiding
# of the automatic options without creating circular dependencies.
	select STACKTRACE if STACKTRACE_SUPPORT
# Minimum requirements an architecture has to meet for us to
# be able to offer generic tracing facilities:
config TRACING_SUPPORT
	# PPC32 has no irqflags tracing support, but it can use most of the
	# tracers anyway, as they were tested to build and work. Note that new
	# exceptions to this list aren't welcome; it is better to implement
	# irqflags tracing for your architecture.
	depends on TRACE_IRQFLAGS_SUPPORT || PPC32
	depends on STACKTRACE_SUPPORT

menuconfig FTRACE
	bool "Tracers"
	default y if DEBUG_KERNEL
	help
	  Enable the kernel tracing infrastructure.

config FUNCTION_TRACER
	bool "Kernel Function Tracer"
	depends on HAVE_FUNCTION_TRACER
	select FRAME_POINTER if (!ARM_UNWIND)
	select GENERIC_TRACER
	select CONTEXT_SWITCH_TRACER
	help
	  Enable the kernel to trace every kernel function. This is done
	  by using a compiler feature to insert a small, 5-byte No-Operation
	  instruction at the beginning of every kernel function; this NOP
	  sequence is then dynamically patched into a tracer call when
	  tracing is enabled by the administrator. If it is disabled at
	  runtime (the bootup default), the overhead of the instructions is
	  very small and not measurable even in micro-benchmarks.
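
	  For example, assuming debugfs is mounted at /sys/kernel/debug, the
	  function tracer can typically be selected at runtime with:

	      echo function > /sys/kernel/debug/tracing/current_tracer
	      cat /sys/kernel/debug/tracing/trace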

config FUNCTION_GRAPH_TRACER
	bool "Kernel Function Graph Tracer"
	depends on HAVE_FUNCTION_GRAPH_TRACER
	depends on FUNCTION_TRACER
	depends on !X86_32 || !CC_OPTIMIZE_FOR_SIZE
	help
	  Enable the kernel to trace a function at both its return
	  and its entry.
	  Its first purpose is to trace the duration of functions and
	  draw a call graph for each thread with some information like
	  the return value. This is done by saving the function's return
	  address into a stack of calls on the current task structure.
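
	  For example, assuming debugfs is mounted at /sys/kernel/debug, the
	  graph tracer can typically be selected at runtime with:

	      echo function_graph > /sys/kernel/debug/tracing/current_tracer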

config IRQSOFF_TRACER
	bool "Interrupts-off Latency Tracer"
	depends on TRACE_IRQFLAGS_SUPPORT
	depends on !ARCH_USES_GETTIMEOFFSET
	select TRACE_IRQFLAGS
	select GENERIC_TRACER
	select TRACER_MAX_TRACE
	select RING_BUFFER_ALLOW_SWAP
	help
	  This option measures the time spent in irqs-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started
	  via:

	      echo 0 > /sys/kernel/debug/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the preempt-off timing option can be
	  used together or separately.)
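
	  For example, assuming debugfs is mounted at /sys/kernel/debug, the
	  tracer can typically be selected and its result read with:

	      echo irqsoff > /sys/kernel/debug/tracing/current_tracer
	      cat /sys/kernel/debug/tracing/tracing_max_latency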

config PREEMPT_TRACER
	bool "Preemption-off Latency Tracer"
	depends on !ARCH_USES_GETTIMEOFFSET
	select GENERIC_TRACER
	select TRACER_MAX_TRACE
	select RING_BUFFER_ALLOW_SWAP
	help
	  This option measures the time spent in preemption-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started
	  via:

	      echo 0 > /sys/kernel/debug/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the irqs-off timing option can be
	  used together or separately.)
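
	  For example, assuming debugfs is mounted at /sys/kernel/debug, the
	  tracer can typically be selected with:

	      echo preemptoff > /sys/kernel/debug/tracing/current_tracer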

config SCHED_TRACER
	bool "Scheduling Latency Tracer"
	select GENERIC_TRACER
	select CONTEXT_SWITCH_TRACER
	select TRACER_MAX_TRACE
	help
	  This tracer tracks the latency of the highest priority task
	  to be scheduled in, starting from the point it has woken up.
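
	  This is typically exposed as the "wakeup" tracer; assuming debugfs
	  is mounted at /sys/kernel/debug, it can be selected with:

	      echo wakeup > /sys/kernel/debug/tracing/current_tracer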

config ENABLE_DEFAULT_TRACERS
	bool "Trace process context switches and events"
	depends on !GENERIC_TRACER
	help
	  This tracer hooks into various trace points in the kernel,
	  allowing the user to pick and choose which trace points they
	  want to trace. It also includes the sched_switch tracer plugin.
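
	  For example, assuming debugfs is mounted at /sys/kernel/debug,
	  individual events can typically be enabled with something like:

	      echo 1 > /sys/kernel/debug/tracing/events/sched/sched_switch/enable
	      cat /sys/kernel/debug/tracing/trace_pipe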

config FTRACE_SYSCALLS
	bool "Trace syscalls"
	depends on HAVE_SYSCALL_TRACEPOINTS
	select GENERIC_TRACER
	help
	  Basic tracer to catch the syscall entry and exit events.
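
	  When enabled, syscall events typically appear under the events
	  directory; assuming debugfs is mounted at /sys/kernel/debug, they
	  can all be switched on with, e.g.:

	      echo 1 > /sys/kernel/debug/tracing/events/syscalls/enable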

config TRACE_BRANCH_PROFILING
	select GENERIC_TRACER

choice
	prompt "Branch Profiling"
	default BRANCH_PROFILE_NONE
	help
	  Branch profiling is a software profiler. It will add hooks
	  into the C conditionals to test which path a branch takes.

	  The likely/unlikely profiler only looks at the conditions that
	  are annotated with a likely or unlikely macro.

	  The "all branch" profiler will profile every if-statement in the
	  kernel. This profiler will also enable the likely/unlikely
	  profiler.

	  Either of the above profilers adds a bit of overhead to the system.
	  If unsure, choose "No branch profiling".

config BRANCH_PROFILE_NONE
	bool "No branch profiling"
	help
	  No branch profiling. Branch profiling adds a bit of overhead.
	  Only enable it if you want to analyse the branching behavior.
	  Otherwise keep it disabled.

config PROFILE_ANNOTATED_BRANCHES
	bool "Trace likely/unlikely profiler"
	select TRACE_BRANCH_PROFILING
	help
	  This tracer profiles all the likely and unlikely macros
	  in the kernel. It will display the results in:

	  /sys/kernel/debug/tracing/profile_annotated_branch

	  Note: this will add a significant overhead; only turn this
	  on if you need to profile the system's use of these macros.

config PROFILE_ALL_BRANCHES
	bool "Profile all if conditionals"
	select TRACE_BRANCH_PROFILING
	help
	  This tracer profiles all branch conditions. Every if ()
	  executed in the kernel is recorded, whether the branch was
	  taken or not. The results will be displayed in:

	  /sys/kernel/debug/tracing/profile_branch

	  This option also enables the likely/unlikely profiler.

	  This configuration, when enabled, will impose a great overhead
	  on the system. This should only be enabled when the system
	  is to be analyzed in much detail.

endchoice

config TRACING_BRANCHES
	help
	  Selected by tracers that will trace the likely and unlikely
	  conditions. This prevents the tracers themselves from being
	  profiled. Profiling the tracing infrastructure can only happen
	  when the likelys and unlikelys are not being traced.

config BRANCH_TRACER
	bool "Trace likely/unlikely instances"
	depends on TRACE_BRANCH_PROFILING
	select TRACING_BRANCHES
	help
	  This traces the events of likely and unlikely condition
	  calls in the kernel. The difference between this and the
	  "Trace likely/unlikely profiler" is that this is not a
	  histogram of the callers, but actually places the calling
	  events into a running trace buffer to see when and where the
	  events happened, as well as their results.
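
	  Assuming debugfs is mounted at /sys/kernel/debug, this tracer is
	  typically selected with:

	      echo branch > /sys/kernel/debug/tracing/current_tracer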

config STACK_TRACER
	bool "Trace max stack"
	depends on HAVE_FUNCTION_TRACER
	select FUNCTION_TRACER
	help
	  This special tracer records the maximum stack footprint of the
	  kernel and displays it in /sys/kernel/debug/tracing/stack_trace.

	  This tracer works by hooking into every function call that the
	  kernel executes, and keeping a maximum stack depth value and
	  stack-trace saved. If this is configured with DYNAMIC_FTRACE
	  then it will not have any overhead while the stack tracer
	  is disabled.

	  To enable the stack tracer on bootup, pass in 'stacktrace'
	  on the kernel command line.

	  The stack tracer can also be enabled or disabled via the
	  sysctl kernel.stack_tracer_enabled
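
	  For example, it can typically be toggled at runtime with:

	      echo 1 > /proc/sys/kernel/stack_tracer_enabled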

config BLK_DEV_IO_TRACE
	bool "Support for tracing block IO actions"
	select GENERIC_TRACER
	help
	  Say Y here if you want to be able to trace the block layer actions
	  on a given queue. Tracing allows you to see any traffic happening
	  on a block device queue. For more information (and the userspace
	  support tools needed), fetch the blktrace tools from:

	  git://git.kernel.dk/blktrace.git

	  Tracing is also possible using the ftrace interface, e.g.:

	      echo 1 > /sys/block/sda/sda1/trace/enable
	      echo blk > /sys/kernel/debug/tracing/current_tracer
	      cat /sys/kernel/debug/tracing/trace_pipe

config KPROBE_EVENT
	depends on HAVE_REGS_AND_STACK_ACCESS_API
	bool "Enable kprobes-based dynamic events"
	help
	  This allows the user to add tracing events (similar to tracepoints)
	  on the fly via the ftrace interface. See
	  Documentation/trace/kprobetrace.txt for more details.

	  Those events can be inserted wherever kprobes can probe, and record
	  various register and memory values.

	  This option is also required by the perf-probe subcommand of perf
	  tools. If you want to use perf tools, this option is strongly
	  recommended.
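
	  For example, a probe event can typically be added and enabled via
	  the ftrace interface with something like the following (see the
	  document above for the exact syntax; "myprobe" is just an example
	  name):

	      echo 'p:myprobe do_sys_open' > /sys/kernel/debug/tracing/kprobe_events
	      echo 1 > /sys/kernel/debug/tracing/events/kprobes/myprobe/enable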

config DYNAMIC_FTRACE
	bool "enable/disable ftrace tracepoints dynamically"
	depends on FUNCTION_TRACER
	depends on HAVE_DYNAMIC_FTRACE
	help
	  This option will modify all the calls to ftrace dynamically
	  (it will patch them out of the binary image and replace them
	  with a No-Op instruction) as they are called. A table is
	  created to dynamically enable them again.

	  This way a CONFIG_FUNCTION_TRACER kernel is slightly larger, but
	  otherwise has native performance as long as no tracing is active.

	  The changes to the code are done by a kernel thread that
	  wakes up once a second and checks to see if any ftrace calls
	  were made. If so, it runs stop_machine (stops all CPUs)
	  and modifies the code to jump over the call to ftrace.
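
	  Dynamic ftrace also typically allows limiting tracing to selected
	  functions via the filter files, for example:

	      echo schedule > /sys/kernel/debug/tracing/set_ftrace_filter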

config FUNCTION_PROFILER
	bool "Kernel function profiler"
	depends on FUNCTION_TRACER
	help
	  This option enables the kernel function profiler. A file is created
	  in debugfs called function_profile_enabled which defaults to zero.
	  When a 1 is echoed into this file, profiling begins, and when a
	  zero is entered, profiling stops. A "functions" file is created in
	  the trace_stats directory; this file shows the list of functions that
	  have been hit and their counters.
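
	  For example, assuming debugfs is mounted at /sys/kernel/debug,
	  profiling can typically be started with:

	      echo 1 > /sys/kernel/debug/tracing/function_profile_enabled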

config FTRACE_MCOUNT_RECORD
	depends on DYNAMIC_FTRACE
	depends on HAVE_FTRACE_MCOUNT_RECORD

config FTRACE_SELFTEST

config FTRACE_STARTUP_TEST
	bool "Perform a startup test on ftrace"
	depends on GENERIC_TRACER
	select FTRACE_SELFTEST
	help
	  This option performs a series of startup tests on ftrace. On bootup
	  a series of tests is run to verify that the tracer is
	  functioning properly. It will do tests on all the configured
	  tracers of ftrace.

config EVENT_TRACE_TEST_SYSCALLS
	bool "Run selftest on syscall events"
	depends on FTRACE_STARTUP_TEST
	help
	  This option will also enable testing every syscall event.
	  It only enables each event, runs various loads with the event
	  enabled, and then disables it. This adds a bit more time to kernel
	  boot up since it runs this on every system call defined.

	  TBD - enable a way to actually call the syscalls as we test their
	  events.

config MMIOTRACE
	bool "Memory mapped IO tracing"
	depends on HAVE_MMIOTRACE_SUPPORT && PCI
	select GENERIC_TRACER
	help
	  Mmiotrace traces Memory Mapped I/O access and is meant for
	  debugging and reverse engineering. It is called from the ioremap
	  implementation and works via page faults. Tracing is disabled by
	  default and can be enabled at run-time.

	  See Documentation/trace/mmiotrace.txt.
	  If you are not helping to develop drivers, say N.
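
	  Assuming debugfs is mounted at /sys/kernel/debug, tracing is
	  typically started with:

	      echo mmiotrace > /sys/kernel/debug/tracing/current_tracer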

config MMIOTRACE_TEST
	tristate "Test module for mmiotrace"
	depends on MMIOTRACE && m
	help
	  This is a dumb module for testing mmiotrace. It is very dangerous
	  as it will write garbage to IO memory starting at a given address.
	  However, it should be safe to use on an unused portion of VRAM,
	  for example.

	  Say N, unless you absolutely know what you are doing.

config RING_BUFFER_BENCHMARK
	tristate "Ring buffer benchmark stress tester"
	depends on RING_BUFFER
	help
	  This option creates a test to stress the ring buffer and benchmark it.
	  It creates its own ring buffer such that it will not interfere with
	  any other users of the ring buffer (such as ftrace). It then creates
	  a producer and consumer that will run for 10 seconds and sleep for
	  10 seconds. Each interval it will print out the number of events
	  it recorded and give a rough estimate of how long each iteration took.

	  It does not disable interrupts or raise its priority, so it may be
	  affected by processes that are running.
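
	  If built as a module, it can typically be started with something
	  like the following (the exact module name depends on the build):

	      modprobe ring_buffer_benchmark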

endif # TRACING_SUPPORT