Optimize load double into xmm with zero_extend
commit    3f6e13de560249f66d6c3373db116e5ba2ca5c2a
author    hjl <hjl@138bc75d-0d04-0410-961f-82ee72b054a4>
          Mon, 18 Apr 2016 19:40:30 +0000
committer hjl <hjl@138bc75d-0d04-0410-961f-82ee72b054a4>
          Mon, 18 Apr 2016 19:40:30 +0000
tree      12cb956dfcb4981785e0a7d11d784aac19142133
parent    e6e7a479fb9ff9b4af908c43877f2d17c4f30c37

"movq" should used to load double into xmm register with zero_extend:

(set (reg:V2DF 90)
     (vec_concat:V2DF (reg/v:DF 88 [ d ])
                      (const_double:DF 0.0 [0x0.0p+0])))

Unlike "movsd", which only works with load from memory, "movq" works
with both memory and xmm register.
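
For illustration, a minimal sketch of the kind of source that expands
to the vec_concat above (a hypothetical example; the committed
gcc.target/i386/pr70708.c test is not reproduced here):

#include <emmintrin.h>

/* Returns { d, 0.0 }: the low lane holds d and the high lane is
   zero, which the i386 backend represents as the vec_concat:V2DF
   shown above.  With this change, -O2 code generation should use a
   single "movq" (or "vmovq" with AVX) whether d comes from memory
   or already sits in an xmm register.  */
__m128d
load_sd (double d)
{
  return _mm_set_sd (d);
}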

gcc/

PR target/70708
	* config/i386/sse.md (sse2_loadlpd): Accept load from "xm" and
	replace "%vmovsd" with "%vmovq".
(vec_concatv2df): Likewise.

gcc/testsuite/

PR target/70708
* gcc.target/i386/pr70708.c: New test.

git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/trunk@235169 138bc75d-0d04-0410-961f-82ee72b054a4
gcc/ChangeLog
gcc/config/i386/sse.md
gcc/testsuite/ChangeLog
gcc/testsuite/gcc.target/i386/pr70708.c [new file with mode: 0644]