/*
 * Only give sleepers 50% of their service deficit. This allows
 * them to run sooner, but does not allow tons of sleepers to
 * rip the spread apart.
 */
SCHED_FEAT(GENTLE_FAIR_SLEEPERS, 1)

/*
 * Place new tasks ahead so that they do not starve already running
 * tasks.
 */
SCHED_FEAT(START_DEBIT, 1)

/*
 * Based on load and program behaviour, see if it makes sense to place
 * a newly woken task on the same cpu as the task that woke it --
 * improve cache locality. Typically used with SYNC wakeups as
 * generated by pipes and the like, see also SYNC_WAKEUPS.
 */
SCHED_FEAT(AFFINE_WAKEUPS, 1)

/*
 * Prefer to schedule the task we woke last (assuming it failed
 * wakeup-preemption), since it is likely going to consume data we
 * touched; this increases cache locality.
 */
SCHED_FEAT(NEXT_BUDDY, 0)

/*
 * Prefer to schedule the task that ran last (when we did
 * wake-preempt) as that likely will touch the same data; this
 * increases cache locality.
 */
SCHED_FEAT(LAST_BUDDY, 1)

/*
 * Consider buddies to be cache hot; this decreases the likelihood of a
 * cache buddy being migrated away, which increases cache locality.
 */
SCHED_FEAT(CACHE_HOT_BUDDY, 1)

/*
 * Use arch-dependent cpu power functions.
 */
SCHED_FEAT(ARCH_POWER, 0)

SCHED_FEAT(DOUBLE_TICK, 0)
SCHED_FEAT(LB_BIAS, 1)

/*
 * Spin-wait on mutex acquisition when the mutex owner is running on
 * another cpu -- assumes that when the owner is running, it will soon
 * release the lock. Decreases scheduling overhead.
 */
SCHED_FEAT(OWNER_SPIN, 1)

/*
 * Decrement CPU power based on time not spent running tasks.
 */
SCHED_FEAT(NONTASK_POWER, 1)

/*
 * Queue remote wakeups on the target CPU and process them
 * using the scheduler IPI. Reduces rq->lock contention/bounces.
 */
SCHED_FEAT(TTWU_QUEUE, 1)

SCHED_FEAT(FORCE_SD_OVERLAP, 0)