commit    38705ca9201541b6f49f42ac4d061a7d275c54ef
author    ktkachov <ktkachov@138bc75d-0d04-0410-961f-82ee72b054a4>
          Tue, 20 Mar 2018 17:13:16 +0000
committer ktkachov <ktkachov@138bc75d-0d04-0410-961f-82ee72b054a4>
          Tue, 20 Mar 2018 17:13:16 +0000
tree      16f58159f03d1ab59f47648999b34f8d718cbae5
parent    28b9418b84046e41e9a0b5c00b0f481b174824f5
This PR shows that we get the load/store_lanes logic wrong for arm big-endian.
It is tricky to get right. AArch64 does it by adding the appropriate lane-swapping
operations during expansion.

I'd like to do the same on arm eventually, but we'd need to port and validate the VTBL-generating
code and add it in all the right places, and I'm not comfortable doing that for GCC 8. I am, however,
keen to get the wrong-code bug fixed.
As I say in the PR, vectorisation on armeb is already severely restricted (we disable many patterns on BYTES_BIG_ENDIAN)
and the load/store_lanes patterns really were not working properly at all, so disabling them is not
a radical approach.
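
For illustration, this is the kind of code the issue is about (a hypothetical example I'm using here, not the actual pr82518.c test case): a grouped, stride-4 store that the vectoriser implements through the store_lanes patterns (a vst4 on arm NEON), which is where the lane ordering comes out wrong on armeb.

/* Hypothetical example (not the pr82518.c test): a stride-4 interleaved
   store group.  At -O3 with NEON enabled the vectoriser uses the
   store_lanes patterns (vst4) for the four interleaved stores, the case
   whose lane ordering went wrong on big-endian arm.  */
void
pack4 (unsigned char *restrict out, const unsigned char *restrict in, int n)
{
  for (int i = 0; i < n; i++)
    {
      out[4 * i + 0] = in[i];
      out[4 * i + 1] = in[i] + 1;
      out[4 * i + 2] = in[i] + 2;
      out[4 * i + 3] = in[i] + 3;
    }
}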

The way to do that is to return false from the arm implementation of TARGET_ARRAY_MODE_SUPPORTED_P
(arm_array_mode_supported_p) when BYTES_BIG_ENDIAN.
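
A sketch of what that change amounts to in config/arm/arm.c; the surrounding TARGET_NEON and NEON-mode checks are paraphrased rather than quoted from the patch, the new part is the !BYTES_BIG_ENDIAN condition:

/* Sketch of arm_array_mode_supported_p after the fix (existing checks
   paraphrased; the !BYTES_BIG_ENDIAN test is the new part).  */
static bool
arm_array_mode_supported_p (machine_mode mode,
			    unsigned HOST_WIDE_INT nelems)
{
  /* Don't advertise array modes (and hence load/store_lanes) on
     big-endian: the lane ordering the patterns produce is wrong there
     (PR target/82518).  */
  if (TARGET_NEON && !BYTES_BIG_ENDIAN
      && (VALID_NEON_DREG_MODE (mode) || VALID_NEON_QREG_MODE (mode))
      && (nelems >= 2 && nelems <= 4))
    return true;

  return false;
}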

Bootstrapped and tested on arm-none-linux-gnueabihf.
Also tested on armeb-none-eabi.

     PR target/82518
     * config/arm/arm.c (arm_array_mode_supported_p): Return false for
     BYTES_BIG_ENDIAN.

     * lib/target-supports.exp (check_effective_target_vect_load_lanes):
     Disable for armeb targets.
     * gcc.target/arm/pr82518.c: New test.

git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/trunk@258687 138bc75d-0d04-0410-961f-82ee72b054a4
gcc/config/arm/arm.c
gcc/testsuite/gcc.target/arm/pr82518.c [new file with mode: 0644]
gcc/testsuite/lib/target-supports.exp