middle-end/110452 - bad code generation with AVX512 mask splat
The following adds an alternate way of expanding a uniform
mask vector constructor like
  _55 = _2 ? -1 : 0;
  vect_cst__56 = {_55, _55, _55, _55, _55, _55, _55, _55};
when the mask mode is a scalar int mode like for AVX512 or GCN.
Instead of building the result piecewise via shifts and ors
we can take advantage of the uniformity and signedness of the
component and simply sign-extend it to the result.
Instead of
cmpl $3, %edi
sete %cl
movl %ecx, %esi
leal (%rsi,%rsi), %eax
leal 0(,%rsi,4), %r9d
leal 0(,%rsi,8), %r8d
orl %esi, %eax
orl %r9d, %eax
movl %ecx, %r9d
orl %r8d, %eax
movl %ecx, %r8d
sall $4, %r9d
sall $5, %r8d
sall $6, %esi
orl %r9d, %eax
orl %r8d, %eax
movl %ecx, %r8d
orl %esi, %eax
sall $7, %r8d
orl %r8d, %eax
kmovb %eax, %k1
we then get
cmpl $3, %edi
sete %cl
negl %ecx
kmovb %ecx, %k1
Code generation for non-uniform masks remains bad, but I see
no easy way out for the most general case here.
	PR middle-end/110452
	* expr.cc (store_constructor): Handle uniform boolean
	vectors with integer mode specially.