x86-64, rwsem: Avoid store forwarding hazard in __downgrade_write
commit    fab9024d006d3768b9ea46861ea5302ce1fe6268
author    Avi Kivity <avi@redhat.com>
          Sat, 13 Feb 2010 08:33:12 +0000 (10:33 +0200)
committer Greg Kroah-Hartman <gregkh@suse.de>
          Mon, 26 Apr 2010 14:47:58 +0000 (07:47 -0700)
tree      c0270420bf27440d4b05feba4c8c00cb498f9b86
parent    d7a0f8f96011e65bbdd7544653897442507776de
x86-64, rwsem: Avoid store forwarding hazard in __downgrade_write

commit 0d1622d7f526311d87d7da2ee7dd14b73e45d3fc upstream.

The Intel Architecture Optimization Reference Manual states that a short
load that follows a long store to the same object will suffer a store
forwarding penalty, particularly if the two accesses use different addresses.
Trivially, a long load that follows a short store will also suffer a penalty.
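To illustrate the two cases (this fragment is not from the kernel sources;
the function names are made up for illustration), the hazardous patterns
look roughly like this in C:

#include <stdint.h>

/*
 * Illustrative sketch only: the access sizes of the store and the
 * following load differ, so the CPU may be unable to forward the data
 * from the store buffer and the load stalls until the store drains.
 */
uint32_t short_load_after_long_store(uint64_t *p, uint64_t v)
{
	*p = v;					/* long (64-bit) store  */
	return *(volatile uint32_t *)p;		/* short (32-bit) load  */
}

uint64_t long_load_after_short_store(uint64_t *p, uint32_t v)
{
	*(volatile uint32_t *)p = v;		/* short (32-bit) store */
	return *p;				/* long (64-bit) load   */
}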

__downgrade_write() in rwsem incurs both penalties:  its increment operation
cannot reuse a recently-loaded rwsem value, and its result cannot be reused
by any rwsem operation that follows shortly afterwards.

A comment in the code states that this is because 64-bit immediates are
special and expensive; but while they are slightly special (only a single
instruction, mov, accepts them), they aren't expensive: a test shows that two
loops, one loading a 32-bit immediate and one loading a 64-bit immediate,
both take 1.5 cycles per iteration.
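
A sketch of what such a test could look like (hypothetical reconstruction;
the immediate values are made up, and the actual measurement code is not
part of this commit):

/*
 * Time many iterations of each loop (e.g. with rdtsc) and divide by the
 * iteration count.  movl carries a 32-bit immediate; movabsq carries a
 * 64-bit immediate.  Only compiles on x86-64.
 */
unsigned long imm32_loop(unsigned long iters)
{
	unsigned long x = 0;

	while (iters--)
		asm volatile("movl $0x12345678, %k0" : "=r" (x));
	return x;
}

unsigned long imm64_loop(unsigned long iters)
{
	unsigned long x = 0;

	while (iters--)
		asm volatile("movabsq $0x1234567812345678, %0" : "=r" (x));
	return x;
}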

Fix this by changing __downgrade_write to use the same add instruction on
i386 and on x86_64, so that it uses the same operand size as all the other
rwsem functions.
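
After the change, the unified helper has roughly this shape on both i386 and
x86_64 (approximate sketch, not the verbatim diff; _ASM_ADD expands to addl
on i386 and addq on x86_64, so each architecture uses its native operand
size, matching the other rwsem helpers):

static inline void __downgrade_write(struct rw_semaphore *sem)
{
	asm volatile("# beginning __downgrade_write\n\t"
		     LOCK_PREFIX _ASM_ADD "%2,(%1)\n\t"
		     /* transitions 0xZZZZ0001 -> 0xYYYY0001 (i386)
		      * 0xZZZZZZZZ00000001 -> 0xYYYYYYYY00000001 (x86_64) */
		     "  jns       1f\n\t"
		     "  call call_rwsem_downgrade_wake\n"
		     "1:\n\t"
		     "# ending __downgrade_write\n"
		     : "+m" (sem->count)
		     : "a" (sem), "er" (-RWSEM_WAITING_BIAS)
		     : "memory", "cc");
}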

Signed-off-by: Avi Kivity <avi@redhat.com>
LKML-Reference: <1266049992-17419-1-git-send-email-avi@redhat.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
arch/x86/include/asm/rwsem.h