tracing/wakeup: move access to wakeup_cpu into spinlock
commit 9be24414aad047dcf9d8d2a9a929321536c7ebec
author Steven Rostedt <srostedt@redhat.com>
	Thu, 26 Mar 2009 14:25:24 +0000 (Thu, 26 Mar 2009 10:25:24 -0400)
committer Steven Rostedt <rostedt@goodmis.org>
	Fri, 24 Apr 2009 03:01:36 +0000 (Thu, 23 Apr 2009 23:01:36 -0400)
tree c4299c263acf1859ff59a3cb03a26826e7d57660
parent 6a74aa40907757ec98d8710ff66cd4cfe064e7d8
tracing/wakeup: move access to wakeup_cpu into spinlock

The code had the following outside the lock:

        if (next != wakeup_task)
                return;

        pc = preempt_count();

        /* The task we are waiting for is waking up */
        data = wakeup_trace->data[wakeup_cpu];

On initialization, wakeup_task is NULL and wakeup_cpu is -1. This code
is not under a lock. If wakeup_task is set on another CPU while that
task is waking up, we can see wakeup_task before wakeup_cpu is set.
If we then read wakeup_cpu while it is still -1, data becomes an
invalid pointer.

This patch moves the reading of wakeup_cpu under the protection of the
spinlock that protects the writing of wakeup_cpu and wakeup_task.
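
For illustration, a simplified sketch of the corrected ordering. Only
wakeup_task, wakeup_cpu and wakeup_trace appear in the excerpt above;
the wakeup_lock spinlock, the tracer_enabled recheck and the rest of
the probe body are assumed context, abbreviated here rather than the
literal diff:

        if (next != wakeup_task)
                return;

        pc = preempt_count();

        /* disable the local CPU's data, not wakeup_cpu's */
        cpu = raw_smp_processor_id();
        disabled = atomic_inc_return(&wakeup_trace->data[cpu]->disabled);
        if (unlikely(disabled != 1))
                goto out;

        local_irq_save(flags);
        __raw_spin_lock(&wakeup_lock);

        /* we could have raced with a new wakeup while grabbing the lock */
        if (unlikely(!tracer_enabled || next != wakeup_task))
                goto out_unlock;

        /*
         * wakeup_cpu is now read under wakeup_lock, the same lock held
         * when it is written, so it is valid whenever wakeup_task
         * matched the check above.
         */
        data = wakeup_trace->data[wakeup_cpu];

        /* ... record the latency, then unlock and re-enable ... */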

[ Impact: remove possible race causing invalid pointer dereference ]

Reported-by: Maneesh Soni <maneesh@in.ibm.com>
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
kernel/trace/trace_sched_wakeup.c