x86: Cleanup pthread_spin_{try}lock.S
commit 653c12c7d880340462bd963752619a7a61bcb4e3
author Noah Goldstein <goldstein.w.n@gmail.com>
        Sat, 1 Oct 2022 04:13:27 +0000 (Fri, 30 Sep 2022 21:13 -0700)
committer Noah Goldstein <goldstein.w.n@gmail.com>
        Mon, 3 Oct 2022 21:13:49 +0000 (Mon, 3 Oct 2022 14:13 -0700)
tree 67c128963342987f4023beba5f752f20c59c8863
parent 10c779f44ab3e9525f2d2a3c9a0aa9dedea5f1ec
x86: Cleanup pthread_spin_{try}lock.S

Save a jmp on the lock path coming from an initial failure in
pthread_spin_lock.S.  This costs 4 bytes of code, but since the
function still fits in the same number of 16-byte blocks (the default
function alignment), it does not affect the total binary size of
libc.so (unchanged after this commit).
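
For illustration, here is a minimal C sketch of the lock-path shape
described above, written with GCC __atomic builtins rather than the
actual assembly in pthread_spin_lock.S.  The name spin_lock_sketch and
the exact code are hypothetical; it assumes the lock convention used by
the x86 spin lock (1 == free, <= 0 == held):

#include <immintrin.h>  /* _mm_pause */

static int
spin_lock_sketch (volatile int *lock)
{
  /* Fast path: one atomic decrement; an old value of 1 means the lock
     was free and is now ours.  */
  if (__atomic_fetch_sub (lock, 1, __ATOMIC_ACQUIRE) == 1)
    return 0;

  for (;;)
    {
      /* Spin on a plain load with pause until the lock looks free.  */
      while (*lock <= 0)
        _mm_pause ();

      /* Retry the atomic decrement inline here, falling through from
         the spin loop instead of jumping back to the fast path at the
         top of the function.  This fall-through is the jmp being
         saved.  */
      if (__atomic_fetch_sub (lock, 1, __ATOMIC_ACQUIRE) == 1)
        return 0;
    }
}

Duplicating the acquire sequence after the spin loop is what costs the
extra bytes of code mentioned above.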

pthread_spin_trylock was using a CAS where a simple xchg suffices; the
CAS is often more expensive.
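
To illustrate the difference, here are two hypothetical C sketches
using GCC __atomic builtins (not the glibc source; the names
trylock_cas and trylock_xchg and the 1 == free / 0 == held lock
convention are assumptions).  The trylock can be written as a single
unconditional exchange instead of a compare-and-swap:

#include <errno.h>  /* EBUSY */

/* CAS version: typically compiles to lock cmpxchg, which needs the
   expected value in a register and a compare before the write.  */
static int
trylock_cas (volatile int *lock)
{
  int expected = 1;
  if (__atomic_compare_exchange_n (lock, &expected, 0, 0,
                                   __ATOMIC_ACQUIRE, __ATOMIC_RELAXED))
    return 0;
  return EBUSY;
}

/* xchg version: unconditionally swap in 0; if the old value was 1 the
   lock was free and is now ours.  On x86, xchg with a memory operand
   is implicitly locked, so no LOCK prefix is needed.  */
static int
trylock_xchg (volatile int *lock)
{
  return __atomic_exchange_n (lock, 0, __ATOMIC_ACQUIRE) == 1 ? 0 : EBUSY;
}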

Full check passes on x86-64.
sysdeps/x86_64/nptl/pthread_spin_lock.S
sysdeps/x86_64/nptl/pthread_spin_trylock.S